Merge branch 'LCTT:master' into LibreOffice-is-Available-for-$8.99-on-Mac-App-Store

This commit is contained in:
cool-summer-021 2022-09-27 10:14:25 +08:00 committed by GitHub
commit 5b4fb6fc8e
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
46 changed files with 5030 additions and 1490 deletions


@ -0,0 +1,189 @@
[#]: subject: "20 Facts About Linus Torvalds, the Creator of Linux and Git"
[#]: via: "https://itsfoss.com/linus-torvalds-facts/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "gpchn"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15063-1.html"
关于 Linux 和 Git 的创造者 Linus Torvalds 的 20 件趣事
======
> 一些已知的、或鲜为人知的事情 —— 这里有 20 件关于 Linux 内核创造者 Linus Torvalds 的趣事。
![Linus TorvaldsLinux 和 Git 的创造者][1]
[Linus Benedict Torvalds][2](林纳斯·本纳第克特·托瓦兹),在 1991 年还是一名攻读硕士的芬兰学生时,他开发了一个类 Unix 操作系统。从那时起,它引发了一场革命:今天,它为大多数 Web 服务器、许多嵌入式设备和 [500 强超级计算机][3] 中的每一台提供支持。
我已经写过一些鲜为人知的 [关于 Linux 的事实][4]但这篇文章不是关于 Linux 的,而是关于它的创造者Linus Torvalds。
通过阅读他的传记《<ruby>[只是为了好玩][5]<rt>Just for Fun</rt></ruby>》,我了解了有关 Torvalds 的许多事情。如果你有兴趣,你也可以 [订购一本传记][6]。(这是一个 [受益推荐][7] 链接。)
### 关于 Linus Torvalds 的 20 个有趣事实
你可能已经知道一些关于 Linus 的事情,但是通过阅读这篇文章,你很有可能会了解一些关于他的新趣事。
#### 1、以诺贝尔奖获得者的名字命名
Linus Benedict Torvalds 于 1969 年 12 月 28 日出生于赫尔辛基。他来自一个记者家庭。他的父亲 [Nils Torvalds][11] 是芬兰政治家,可能是未来参加选举的总统候选人。
他的名字来自于诺贝尔化学与和平奖的双奖获得者 [Linus Pauling][12] 的名字。
#### 2、世界上所有的 Torvalds 都是亲戚
虽然你可能会找到几个名字为 Linus 的人,但你不会找到很多姓 Torvalds 的人 —— 因为“正确”的拼写实际上是 Torvald没有 s。他的祖父将名字从 Torvald 改为 Torvalds并在末尾添加了一个“s”。于是Torvalds 王朝(如果我可以这么称呼它的话)开始了。
由于这是一个不寻常的姓氏,所以世界上只有不到 30 个 Torvalds而且他们都是亲戚这是 Linus Torvalds 在他的传记中说的。
![年轻的 Linus Torvalds 和他的记者妹妹 Sara Torvalds][13]
#### 3、他的第一台电脑是 Commodore Vic 20
10 岁时Linus 开始在他外祖父的 Commodore Vic 20 上使用 BASIC 编写程序。这使他发现了自己对计算机和编程的热爱。
#### 4、Linus Torvalds 少尉
尽管他更喜欢花时间在电脑上而不是体育活动上,但他必须参加强制性的军事训练。他的军衔是少尉。
#### 5、因为他没有钱购买 UNIX他创造了 Linux
1991 年初,由于对 [MS-DOS][14] 和 [MINIX][15] 不满意Torvalds 想购买一套 UNIX 系统。对我们来说幸运的是,他没有足够的钱。因此,他决定从头开始制作自己的 UNIX 复制品。
#### 6、Linux 可以被称为 Freax
1991 年 9 月Linus 发布了 Linux代表 “Linus's MINIX”并鼓励他的同好们使用其源代码进行更广泛的分发。
Linus 认为 Linux 这个名字太自负了。他想把它改成 Freax基于 free、freak 和 MINIX但他的朋友 Lemmke 已经在 FTP 服务器上创建了一个名为 Linux 的目录。因此Linux 的名称才得以沿用下来。LCTT 译注:这个故事和我听到的不同。)
#### 7、Linux 是他在大学的主要项目
《Linux一种便携式操作系统》是他的硕士论文题目。
#### 8、他娶了他的学生
1993 年,他在赫尔辛基大学任教时,给学生们布置了一份写电子邮件的作业。是的,当时撰写电子邮件没那么简单。
一位名叫 Tove Monni 的女学生完成了这项任务,给他发送一封电子邮件,并邀请他出去约会。他接受了,三年后,他们三个女儿中的第一个出生了。
我应该说他开创了网恋的潮流吗?嗯……不!让我们就此打住 ;)
![Linus Torvalds 和他的妻子 Tove Monni Torvalds][16]
#### 9、Linus 有一颗以他的名字命名的小行星
他的名字获得了无数荣誉,包括一颗名为 [9793 Torvalds][17] 的小行星。
#### 10、Linus 不得不为 Linux 的商标而战
Linux 是 Linus Torvalds 的注册商标。Torvalds 最初并不关心这个商标,但在 1994 年 8 月William R. Della Croce, Jr. 注册了 Linux 商标,并开始向 Linux 开发人员索要版税。作为回应Torvalds 起诉了他,并于 1997 年解决了此案。
#### 11、史蒂夫·乔布斯希望他为苹果公司的 macOS 工作
2000 年,苹果公司的创始人 [史蒂夫·乔布斯邀请他为苹果公司的 macOS 工作][19]。Linus 拒绝了这个报酬丰厚的提议,并继续致力于开发 Linux 内核。
#### 12、Linus 还创建了 Git
大多数人都知道 Linus Torvalds 创建了 Linux 内核,但他还创建了 [Git][20],这是一个在全世界的软件开发中广泛使用的版本控制系统。
直到 2005 年,(当时)专有服务 [BitKeeper][21] 还用于 Linux 内核的开发。而当 BitKeeper 关闭其免费服务时Linus Torvalds 自己创建了 Git因为其他版本控制系统都不能满足他的需求。
#### 13、如今 Linus 几乎不编程
尽管 Linus 全职从事 Linux 内核工作但他几乎不再为它编写任何代码。事实上Linux 内核中的大部分代码都来自世界各地的贡献者。他在内核维护人员的帮助下,确保每个版本发布都能顺利进行。
#### 14、Torvalds 讨厌 C++
Linus Torvalds 极其 [不喜欢 C++ 编程语言][22],并对此直言不讳。他开玩笑说 Linux 内核的编译速度都比 C++ 程序快。
#### 15、即使是 Linus Torvalds 也发现 Linux 难以安装(现在你可以自我感觉良好了)
几年前Linus 说过 [他发现 Debian 难以安装][23]。众所周知,他 [在他的主要工作站上使用 Fedora][24]。
#### 16、他喜欢水肺潜水
Linus Torvalds 喜欢水肺潜水。他甚至创造了一种供水肺潜水员使用的潜水记录工具 [Subsurface][25]。你会惊讶地发现,有时他甚至会在论坛上回答一些普通问题。
![穿着潜水装备的 Linus Torvalds][26]
#### 17、满嘴脏话的 Torvalds 改善了他的行为
Torvalds 以在 Linux 内核邮件列表中使用 [轻度脏话][27] 而闻名,这遭到了一些业内人士的批评。但是,很难批评他对 “[F**k you, NVIDIA][28]” 的玩笑,因为它促使英伟达为 Linux 内核提供了更好的适配。
2018 年,[Torvalds 暂时离开了 Linux 内核开发,以改善他的行为][29]。这是在他签署有争议的 [Linux 内核开发人员行为准则][30] 之前完成的。
![Linus Torvalds 对英伟达的中指:去你的!英伟达][31]
#### 18、他太害羞了不敢在公共场合讲话
Linus 对公开演讲感到不舒服。他不怎么参加活动。而当他必须参加时,他更喜欢坐下来接受主持人的采访。这是他最喜欢的公开演讲方式。
#### 19、他不是社交媒体爱好者
[Google Plus][32] 是他使用过的唯一社交媒体平台。他甚至在空闲时花了一些时间 [点评了小组件][33]。Google Plus 现已停用了,因此他没有其他社交媒体帐户。
#### 20、Torvalds 定居在美国
Linus 于 1997 年移居美国,并与他的妻子 Tove 和他们的三个女儿在那里定居。他于 2010 年成为美国公民。目前,作为 [Linux 基金会][34] 的成员,他全职从事 Linux 内核工作。
很难说 Linus Torvalds 的净资产是多少,或者 Linus Torvalds 的收入是多少,因为这些信息从未公开过。
![Tove 和 Linus Torvalds 和他们的女儿 Patricia、Daniela 和 Celeste][35]
如果你有兴趣了解更多有关 Linus Torvalds 早期生活的信息,我建议你阅读他的传记,书名为 《<ruby>[只是为了好玩][5]<rt>Just for Fun</rt></ruby>》。
*免责声明:这里的一些图片来源于互联网,我没有图像的版权,我也不打算用这篇文章侵犯 Torvalds 家族的隐私。*
--------------------------------------------------------------------------------
via: https://itsfoss.com/linus-torvalds-facts/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[gpchn](https://github.com/gpchn)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2017/12/Linus-Torvalds-featured-800x450.png
[2]: https://en.wikipedia.org/wiki/Linus_Torvalds
[3]: https://itsfoss.com/linux-runs-top-supercomputers/
[4]: https://itsfoss.com/facts-linux-kernel/
[5]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[6]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[7]: https://itsfoss.com/affiliate-policy/
[8]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[9]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[10]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[11]: https://en.wikipedia.org/wiki/Nils_Torvalds
[12]: https://en.wikipedia.org/wiki/Linus_Pauling
[13]: https://itsfoss.com/wp-content/uploads/2017/12/Linus_and_sara_Torvalds.jpg
[14]: https://en.wikipedia.org/wiki/MS-DOS
[15]: https://www.minix3.org/
[16]: https://itsfoss.com/wp-content/uploads/2017/12/Linus_torvalds-wife-800x533.jpg
[17]: http://enacademic.com/dic.nsf/enwiki/1928421
[18]: https://youtu.be/eE-ovSOQK0Y
[19]: https://www.macrumors.com/2012/03/22/steve-jobs-tried-to-hire-linux-creator-linus-torvalds-to-work-on-os-x/
[20]: https://en.wikipedia.org/wiki/Git
[21]: https://www.bitkeeper.org/
[22]: https://lwn.net/Articles/249460/
[23]: https://www.youtube.com/watch?v=qHGTs1NSB1s
[24]: https://plus.google.com/+LinusTorvalds/posts/Wh3qTjMMbLC
[25]: https://subsurface-divelog.org/
[26]: https://itsfoss.com/wp-content/uploads/2017/12/Linus_Torvalds_in_SCUBA_gear.jpg
[27]: https://www.theregister.co.uk/2016/08/26/linus_torvalds_calls_own_lawyers_nasty_festering_disease/
[28]: https://www.youtube.com/watch?v=_36yNWw_07g
[29]: https://itsfoss.com/torvalds-takes-a-break-from-linux/
[30]: https://itsfoss.com/linux-code-of-conduct/
[31]: https://itsfoss.com/wp-content/uploads/2012/09/Linus-Torvalds-Fuck-You-Nvidia.jpg
[32]: https://plus.google.com/+LinusTorvalds
[33]: https://plus.google.com/collection/4lfbIE
[34]: https://www.linuxfoundation.org/
[35]: https://itsfoss.com/wp-content/uploads/2017/12/patriciatorvalds.jpg
[36]: https://opensource.com/life/15/8/patricia-torvalds-interview
[37]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[38]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[39]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[40]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID


@ -3,44 +3,43 @@
[#]: author: "Joël Krähemann https://opensource.com/users/joel2001k"
[#]: collector: "lkxed"
[#]: translator: "Donkey-Hao"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15065-1.html"
使用开源库 GObject 和 libsoup 提升 C 语言编程能力
======
![](https://img.linux.net.cn/data/attachment/album/202209/24/145218s1s1xk6s1mm2kg1x.jpg)
> 开源库 GObject 和 libsoup 做了很多工作,因此你可以专注于使用 C 语言开发神奇的应用。
<ruby>[GLib 对象系统][2]<rt>GLib Object System</rt></ruby>GObject是一个为 C 语言提供灵活且可扩展的面向对象框架的库。在这篇文章中,我将使用该库的 2.4 版本进行演示。
GObject 库继承了 ANSI C 标准,拥有一些常见的数据类型,例如:
* `gchar`字符型
* `guchar`无符号字符型
* `gunichar`32 位定宽 Unicode 字符型
* `gboolean`布尔型
* `gint8`、`gint16`、`gint32`、`gint64`:有符号 8、16、32 和 64 位整数
* `guint8`、`guint16`、`guint32`、`guint64`:无符号 8、16、32 和 64 位整数
* `gfloat`IEEE 754 标准单精度浮点数
* `gdouble`IEEE 754 标准双精度浮点数
* `gpointer`泛指针
### 函数指针
GObject 库还引入了类和接口的类型和对象体系。之所以可以,是因为 ANSI C 语言可以理解函数指针。
你可以这样做来声明函数指针:
```c
void (*my_callback)(gpointer data);
```
首先,你需要给变量 `my_callback` 赋值:
```c
void my_callback_func(gpointer data)
{
  //do something

my_callback = my_callback_func;
```
函数指针 `my_callback` 可以这样来调用:
```c
gpointer data;
data = g_malloc(512 * sizeof(gint16));
my_callback(data);
```
### 对象类
`GObject` 基类由 2 个结构(`GObject` 和 `GObjectClass`)组成,你可以继承它们以实现你自己的对象。
你需要在结构体中先嵌入 `GObject``GObjectClass`
你需要在结构体中先嵌入 `GObject``GObjectClass`
```c
struct _MyObject
{
  GObject gobject;
@ -79,11 +78,11 @@ struct _MyObjectClass
GType my_object_get_type(void);
```
对象的实现包含了公有成员。GObject 也提供了私有成员的方法。这实际上是 C 源文件中的一个结构,而不是头文件。该类通常只包含函数指针。
一个接口不能派生自另一个接口,比如:
```c
struct _MyInterface
{
  GInterface ginterface;
@ -93,7 +92,7 @@ struct _MyInterface
通过调用 `g_object_get()``g_object_set()` 函数来访问属性。若要获取属性,你必须提供特定类型的返回位置。建议先初始化返回位置:
```c
gchar *str;
str = NULL;
@ -105,7 +104,7 @@ g_object_get(gobject,
或者你想要设置属性:
```c
g_object_set(gobject,
  "my-name", "Anderson",
  NULL);
@ -113,9 +112,11 @@ g_object_set(gobject,
### libsoup HTTP 库
`libsoup` 项目为 GNOME 提供了 HTTP 客户端和服务端使用的库。它使用 GObject 和 GLib 主循环集成到 GNOME 应用中,并且还具有用于命令行的同步 API。
首先,创建一个特定身份验证回调的 `libsoup` 会话。你也可以使用 cookie。
```
SoupSession *soup_session;
SoupCookieJar *jar;
@ -133,7 +134,7 @@ g_signal_connect(soup_session, "authenticate",
然后你可以像这样创建一个 HTTP GET 请求:
```c
SoupMessage *msg;
SoupMessageHeaders *response_headers;
SoupMessageBody *response_body;
@ -180,8 +181,7 @@ if(status == 200){
这是一个函数签名:
```c
#define MY_AUTHENTICATE_LOGIN "my-username"
#define MY_AUTHENTICATE_PASSWORD "my-password"
@ -200,11 +200,11 @@ void my_authenticate_callback(SoupSession *session,
### 一个 libsoup 服务器
想要基础的 HTTP 身份认证能够运行,你需要指定回调函数和服务器上下文路径。然后再添加一个带有另一个回调的处理程序。
下面这个例子展示了在 8080 端口监听任何 IPv4 地址的消息:
```c
SoupServer *soup_server;
SoupAuthDomain *auth_domain;
GSocket *ip4_socket;
@ -248,9 +248,9 @@ soup_server_listen_socket(soup_server,
示例代码中,有两个回调函数。一个处理身份认证,另一个处理对它的请求。
假设你想要网页服务器允许用户名为 `my-username` 和口令为 `my-password` 的凭证登录,并且用一个随机且唯一的用户 ID 字符串设置会话 cookie。
```c
gboolean my_xmlrpc_server_auth_callback(SoupAuthDomain *domain,
  SoupMessage *msg,
  const char *username,
@ -285,9 +285,9 @@ gboolean my_xmlrpc_server_auth_callback(SoupAuthDomain *domain,
}
```
对上下文路径 `my-xmlrpc` 进行处理的函数:
```c
void my_xmlrpc_server_callback(SoupServer *soup_server,
  SoupMessage *msg,
  const char *path,
@ -303,7 +303,7 @@ void my_xmlrpc_server_callback(SoupServer *soup_server,
### 更加强大的 C 语言
希望我的示例展现了 GObject 和 libsoup 项目给 C 语言带来的真正提升。像这样在字面意义上扩展 C 语言,可以使它更易于使用。它们已经为你做了许多工作,这样你就可以专注于用 C 语言开发简单、直接的应用程序了。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/libsoup-gobject-c
作者:[Joël Krähemann][a]
选题:[lkxed][b]
译者:[Donkey-Hao](https://github.com/Donkey-Hao)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -3,21 +3,20 @@
[#]: author: "Mark Meyer https://opensource.com/users/ofosos"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15061-1.html"
将你的 Python 脚本转换为命令行程序
======
> 使用 Python 中的 `scaffold``click` 库,你可以将一个简单的实用程序升级为一个成熟的命令行界面工具。
![](https://img.linux.net.cn/data/attachment/album/202209/23/093712jyayyed8x7d8d8yd.jpg)
在我的职业生涯中,我写过、用过和看到过很多随意的脚本。一些人需要半自动化完成任务,于是它们诞生了。一段时间后,它们变得越来越大。它们在一生中可能转手很多次。我常常希望这些脚本提供更多的**命令行工具**的感觉。但是,从一次性脚本到合适的工具,真正提高质量水平有多难呢?事实证明这在 Python 中并不难。
### 搭建骨架脚本
在本文中,我将从一小段 Python 代码开始。我将把它应用到 `scaffold` 模块中,并使用 `click` 库扩展它以接受命令行参数。
@ -68,7 +67,7 @@ if __name__ == '__main__':
    rotoscope()
```
本文所有没有在这里插入显示的代码示例,你都可以在 [https://codeberg.org/ofosos/rotoscope][2] 中找到特定版本的代码。该仓库中的每个提交都描述了本文操作过程中一些有意义的步骤。
这个片段做了几件事:
@ -78,7 +77,7 @@ if __name__ == '__main__':
作为一个示例,它很简单,但它会让你理解这个过程。
### 使用 Pyscaffold 创建应用程序
首先,你需要安装 `pyscaffold`、`click` 和 `tox` [Python 库][3]。
@ -86,14 +85,14 @@ if __name__ == '__main__':
$ python3 -m pip install pyscaffold click tox
```
安装 `pyscaffold` 后,切换到示例 `rotoscope` 项目所在的目录,然后执行以下命令:
```
$ putup rotoscope -p rotoscope \
--force --no-skeleton -n rotoscope \
-d 'Move some files around.' -l GLWT \
-u http://codeberg.org/ofosos/rotoscope \
--save-config --pre-commit --markdown
```
Pyscaffold 会重写我的 `README.md`,所以从 Git 恢复它:
@ -102,17 +101,13 @@ Pyscaffold 会重写我的 `README.md`,所以从 Git 恢复它:
$ git checkout README.md
```
Pyscaffold 在文档中说明了如何设置一个完整的示例项目我不会在这里介绍你之后可以探索。除此之外Pyscaffold 还可以在项目中为你提供持续集成CI模板
* 打包: 你的项目现在启用了 PyPi所以你可以将其上传到一个仓库并从那里安装它。
* 文档: 你的项目现在有了一个完整的文档文件夹层次结构,它基于 Sphinx包括一个 readthedocs.org 构建器。
* 测试: 你的项目现在可以与 tox 一起使用,测试文件夹包含运行基于 pytest 的测试所需的所有样板文件。
* 依赖管理: 打包和测试基础结构都需要一种管理依赖关系的方法。`setup.cfg` 文件解决了这个问题,它包含所有依赖项。
* 预提交钩子: 包括 Python 源代码格式工具 black 和 Python 风格检查器 flake8。
查看测试文件夹并在项目目录中运行 `tox` 命令,它会立即输出一个错误:打包基础设施无法找到相关库。
@ -133,11 +128,11 @@ console_scripts =
就是这样,你可以从 Pyscaffold 免费获得所有打包、测试和文档设置。你还获得了一个预提交钩子来保证(大部分情况下)你按照设定规则提交。
### CLI 工具
现在,一些值会硬编码到脚本中,它们作为命令 [参数][4] 会更方便。例如,将 `INCOMING` 常量作为命令行参数会更好。
首先,导入 [click][5] 库,使用 Click 提供的命令装饰器对 `rotoscope()` 方法进行装饰,并添加一个 Click 传递给 `rotoscope` 函数的参数。Click 提供了一组验证器因此要向参数添加一个路径验证器。Click 还方便地使用函数的内嵌字符串作为命令行文档的一部分。所以你最终会得到以下方法签名:
```
@click.command()
@ -151,7 +146,7 @@ def rotoscope(incoming):
主函数会调用 `rotoscope()`,它现在是一个 Click 命令,不需要传递任何参数。
选项也可以使用 [环境变量][6] 自动填充。例如,将 `ARCHIVE` 常量改为一个选项:
```
@click.option('archive', '--archive', default='/Users/mark/archive', envvar='ROTO_ARCHIVE', type=click.Path())
@ -165,9 +160,9 @@ Click 可以做更多的事情,它有彩色的控制台输出、提示和子
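LCTT 译注:如果想对照着理解 Click 这种声明式的参数定义,又暂时不想安装第三方库,可以用标准库 `argparse` 写出等价的骨架。下面的代码是示意性的占位实现,并非原文仓库中的内容:

```python
# LCTT 译注示例:用标准库 argparse 模拟 Click 风格的 CLI 骨架
# rotoscope、incoming 等名称沿用原文,但函数体是假设的占位逻辑
import argparse
import os


def rotoscope(incoming):
    """占位实现:仅报告将要处理的目录。"""
    return f"processing {incoming}"


def main(argv=None):
    parser = argparse.ArgumentParser(description="Move some files around.")
    parser.add_argument("incoming", help="待处理的目录")
    # 与 @click.option(..., envvar='ROTO_ARCHIVE') 对应:
    # 默认值可以从环境变量读取
    parser.add_argument("--archive",
                        default=os.environ.get("ROTO_ARCHIVE", "/tmp/archive"),
                        help="归档目录(可用环境变量 ROTO_ARCHIVE 覆盖)")
    args = parser.parse_args(argv)
    return rotoscope(args.incoming)


if __name__ == "__main__":
    main()
```

其中 `--archive` 的默认值从环境变量读取,对应正文里 `envvar='ROTO_ARCHIVE'` 的效果。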
### 测试
Click 对使用 CLI 运行器 [运行端到端测试][7] 提供了一些建议。你可以用它来实现一个完整的测试(在 [示例项目][8] 中,测试在 `tests` 文件夹中。)
测试位于测试类的一个方法中。大多数约定与我在其他 Python 项目中使用的非常接近,但有一些细节,因为 rotoscope 使用 `click`。在 `test` 方法中,我创建了一个 `CliRunner`。测试使用它在一个隔离的文件系统中运行此命令。然后测试在隔离的文件系统中创建 `incoming``archive` 目录和一个虚拟的 `incoming/test.txt` 文件,然后它调用 CliRunner就像你调用命令行应用程序一样。运行完成后测试会检查隔离的文件系统并验证 `incoming` 为空,并且 `archive` 包含两个文件(最新链接和存档文件)。
```
from os import listdir, mkdir
@ -196,9 +191,9 @@ class TestRotoscope:
要在控制台上执行这些测试,在项目的根目录中运行 `tox`
在执行测试期间,我在代码中发现了一个错误。当我进行 Click 转换时,`rotoscope` 只是取消了最新文件的链接,无论它是否存在。测试从一个新的文件系统(不是我的主文件夹)开始,很快就失败了。我可以通过在一个很好的隔离和自动化测试环境中运行来防止这种错误。这将避免很多“它在我的机器上正常工作”的问题。
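LCTT 译注:这类“无条件取消链接”的缺陷,用标准库 `pathlib` 可以写得很稳健。下面是一个示意(`relink_latest` 这个函数名是假设的,并非原文仓库中的实现):

```python
# LCTT 译注示例:防御性地重建 latest 符号链接
# relink_latest 是假设的函数名,仅用于演示 missing_ok 的用法
from pathlib import Path


def relink_latest(latest: Path, target: Path) -> None:
    """让 latest 符号链接指向 targetlatest 不存在时也不会抛出异常。"""
    latest.unlink(missing_ok=True)  # Python 3.8+:文件不存在时静默跳过
    latest.symlink_to(target)
```

无论 `latest` 是否已经存在,重复调用都能成功 —— 这正是原文测试在干净的文件系统里暴露出的那类问题。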
### 搭建骨架脚本和模块
本文到此结束,我们可以使用 `scaffold``click` 完成一些高级操作。有很多方法可以升级一个普通的 Python 脚本,甚至可以将你的简单实用程序变成成熟的 CLI 工具。
@ -209,7 +204,7 @@ via: https://opensource.com/article/22/7/bootstrap-python-command-line-applicati
作者:[Mark Meyer][a]
选题:[lkxed][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,143 @@
[#]: subject: "Manage containers on Fedora Linux with Podman Desktop"
[#]: via: "https://fedoramagazine.org/manage-containers-on-fedora-linux-with-podman-desktop/"
[#]: author: "Mehdi Haghgoo https://fedoramagazine.org/author/powergame/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15058-1.html"
使用 Podman Desktop 在 Fedora Linux 上管理容器
======
![][1]
> Podman Desktop 是一个开源 GUI 应用,用于在 Linux、macOS 和 Windows 上管理容器。
从历史上看,开发人员一直使用 Docker Desktop 对容器进行图形化管理,这适用于那些安装了 Docker 守护进程和 Docker CLI 的人。然而,对于使用无守护进程的 Podman 工具的人来说,虽然有一些 Podman 前端,如 [Pods][2]、[Podman desktop companion][3] 和 [Cockpit][4],但一直没有官方应用。现在情况不同了:有了 Podman Desktop
本文将讨论由红帽和其他开源贡献者开发的 Podman Desktop 的特性、安装和使用。
### 安装
要在 Fedora Linux 上安装 Podman Desktop请访问 [podman-desktop.io][5],然后单击 “Download for Linux” 按钮。你将看到两个选项Flatpak 和 zip。在这个例子中我们使用的是 Flatpak。单击 “Flatpak” 链接后,通过双击文件在 GNOME 软件中打开它(如果你使用的是 GNOME。你也可以通过终端安装它
```
flatpak install podman-desktop-X.X.X.flatpak
```
在上面的命令中,将 X.X.X 替换为你下载的特定版本。如果你下载了 zip 文件,那么解压缩存档,然后启动 Podman Desktop 应用的二进制文件。你还可以通过进入 GitHub 上项目的 [发布][6] 页找到预发布版本。
### 特性
Podman Desktop 仍处于早期阶段。然而,它支持许多常见的容器操作,如创建容器镜像、运行容器等。此外,你可以在 “<ruby>首选项<rt>Preferences</rt></ruby>” 的 “<ruby>扩展<rt>Extensions</rt></ruby>” 部分下找到 Podman 扩展,你可以使用它来管理 macOS 和 Windows 上的 Podman 虚拟机。
此外Podman Desktop 支持 Docker Desktop 扩展。你可以在 “<ruby>首选项<rt>Preferences</rt></ruby>” 下的 “Docker Desktop Extensions” 安装此类扩展。应用窗口有两个窗格。左侧窄窗格显示应用的不同功能,右侧窗格是内容区域,它将根据左侧选择的内容显示相关信息。
![Podman Desktop 0.0.6 在 Fedora 36 上运行][7]
### 演示
为了全面了解 Podman Desktop 的功能,我们将从 Dockerfile 创建一个镜像并将其推送到注册中心,然后拉取并运行它,这一切都在 Podman Desktop 中完成。
#### 构建镜像
第一步是通过在命令行中输入以下行来创建一个简单的 Dockerfile
```
cat <<EOF>>Dockerfile
FROM docker.io/library/httpd:2.4
COPY . /var/www/html
WORKDIR /var/www/html
CMD ["httpd", "-D", "FOREGROUND"]
EOF
```
现在,点击 “<ruby>镜像<rt>Images</rt></ruby>” 并按下 “<ruby>构建镜像<rt>Build Image</rt></ruby>” 按钮。你将被带到一个新页面,以指定 Dockerfile、构建上下文和镜像名称。在 Containerfile 路径下,单击并浏览以选择你的 Dockerfile。在镜像名称下输入镜像的名称。如果要将镜像推送到容器注册中心那么可以以 `example.com/username/repo:tag` 形式指定完全限定的镜像名称FQIN。在此示例中我输入 `quay.io/codezombie/demo-httpd:latest`,因为我在 `quay.io` 上有一个名为 `demo-httpd` 的公共仓库。你可以按照类似的格式来指定容器注册中心Quay、Docker Hub、GitHub Container Registry 等)的 FQIN。现在按下 “<ruby>构建<rt>Build</rt></ruby>” 按钮并等待构建完成。
#### 推送镜像
构建完成后,就该推送镜像了。所以,我们需要在 Podman Desktop 中配置一个注册中心。进入 “<ruby>首选项<rt>Preferences</rt></ruby>” -> “<ruby>注册中心<rt>Registries</rt></ruby>” 并按下 “<ruby>添加注册中心<rt>Add registry</rt></ruby>” 按钮。
![添加注册中心对话框][8]
在 “<ruby>添加注册中心<rt>Add registry</rt></ruby>” 对话框中,输入你的注册中心服务器地址和用户凭据,然后单击 “<ruby>添加注册中心<rt>Add registry</rt></ruby>”。
现在,回到镜像列表,找到我的镜像,并按下上传图标将其推送到仓库。当你将鼠标悬停在以设置中添加的注册中心名称(此演示中的 `quay.io`)开头的镜像名称上时,镜像名称旁边会出现一个推送按钮。
![将鼠标悬停在镜像名称上时出现的按钮][9]
![镜像通过 Podman Desktop 推送到仓库][10]
镜像被推送后,任何有权访问镜像仓库的人都可以拉取它。由于我的镜像仓库是公开的,因此你可以轻松地将其拉入 Podman Desktop。
#### 拉取镜像
因此,为确保一切正常,请在本地删除此镜像并将其拉入 Podman Desktop。在列表中找到镜像并按删除图标将其删除。删除镜像后单击 “<ruby>拉取镜像<rt>Pull Image</rt></ruby>” 按钮。在 “<ruby>要拉取的镜像<rt>Image to Pull</rt></ruby>” 中输入完全限定名称,然后按 “<ruby>拉取镜像<rt>Pull Image</rt></ruby>”。
![我们的容器镜像已成功拉取][11]
#### 创建一个容器
作为 Podman Desktop 演示的最后一部分,让我们从镜像中启动一个容器并检查结果。转到 “<ruby>容器<rt>Containers</rt></ruby>” 并按 “<ruby>创建容器<rt>Create Container</rt></ruby>”。这将打开一个包含两个选项的对话框:“<ruby>从 Containerfile/Dockerfile<rt>From Containerfile/Dockerfile</rt></ruby>” 和 “<ruby>从已有镜像<rt>From existing image</rt></ruby>”。按下 “<ruby>从已有镜像<rt>From existing image</rt></ruby>”。这将进入镜像列表。在那里,选择我们要拉取的镜像。
![在 Podman Desktop 中创建容器][12]
现在,我们从列表中选择我们最近拉取的镜像,然后按它前面的 “<ruby>运行<rt>Play</rt></ruby>” 按钮。在出现的对话框中,我输入 `demo-web` 作为容器名,输入 `8000` 作为端口映射,然后按下 “<ruby>启动容器<rt>Start Container</rt></ruby>”。
![容器配置][13]
容器开始运行,我们可以通过运行以下命令检查 Apache 服务器的默认页面:
```
curl http://localhost:8000
```
![可以工作!][14]
你还应该能够在容器列表中看到正在运行的容器,其状态已更改为 “<ruby>运行中<rt>Running</rt></ruby>”。在那里,你会在容器前面找到可用的操作。例如,你可以单击终端图标打开 TTY 进入到容器中!
![][15]
### 接下来是什么
Podman Desktop 还很年轻,处于 [积极开发][16] 中。GitHub 上有一个项目 [路线图][17],其中列出了令人兴奋的按需功能,包括:
* Kubernetes 集成
* 支持 Pod
* 任务管理器
* 卷支持
* 支持 Docker Compose
* Kind 支持
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/manage-containers-on-fedora-linux-with-podman-desktop/
作者:[Mehdi Haghgoo][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/powergame/
[b]: https://github.com/lkxed
[1]: https://fedoramagazine.org/wp-content/uploads/2022/09/podman-desktop-on-fedora-816x345.jpg
[2]: https://github.com/marhkb/pods
[3]: https://github.com/iongion/podman-desktop-companion
[4]: https://github.com/cockpit-project/cockpit/
[5]: https://podman-desktop.io/
[6]: https://github.com/containers/podman-desktop/releases/
[7]: https://fedoramagazine.org/wp-content/uploads/2022/08/pd.png
[8]: https://fedoramagazine.org/wp-content/uploads/2022/08/registry.png
[9]: https://fedoramagazine.org/wp-content/uploads/2022/08/image.png
[10]: https://fedoramagazine.org/wp-content/uploads/2022/08/Screenshot-from-2022-08-27-23-51-38.png
[11]: https://fedoramagazine.org/wp-content/uploads/2022/08/image-2.png
[12]: https://fedoramagazine.org/wp-content/uploads/2022/08/image-3.png
[13]: https://fedoramagazine.org/wp-content/uploads/2022/08/image-5.png
[14]: https://fedoramagazine.org/wp-content/uploads/2022/08/image-6.png
[15]: https://fedoramagazine.org/wp-content/uploads/2022/09/image-2-1024x393.png
[16]: https://github.com/containers/podman-desktop
[17]: https://github.com/orgs/containers/projects/2


@ -3,21 +3,24 @@
[#]: author: "Amit Shingala https://www.opensourceforu.com/author/amit-shingala/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15060-1.html"
准备好迎接 AIOps 时代
======
![](https://img.linux.net.cn/data/attachment/album/202209/23/083440mxyb6e388ze2sbbs.jpg)
> 随着技术的进步,企业,无论大小,都必须将自己转变为数字公司。转型不再是“选择”的问题;相反,它是关于“如何”推进过渡。这就是 AIOps 的用武之地。
将组织转变为数字公司会遇到很多挑战。缺乏专门的 IT 技能、组织变革管理、不断变化的客户需求和混合环境只是其中的一小部分。企业需要增强其 IT 运营ITOps以应对这些挑战并满足客户期望。
### 数字化转型AIOps 之路
未来ITOps 将结合算法和人工智能,使 IT 系统的性能变得透明,并帮助它们提供无缝体验。
> “AIOps 对 IT 运营的长期影响将是变革性的。” —— Gartner
AIOps 对于成功的数字化转型至关重要,可以帮助系统以现代业务所需的速度运行。反过来,这将确定公司获得和保持市场领先地位的速度。
@ -31,7 +34,7 @@ AIOps 结合人工智能和机器学习来分析 IT 运营的数据。这是将
AIOps 平台使用大数据。它们从各种 IT 运营和设备收集数据以自动识别和实时响应问题同时仍提供传统的历史分析。然后AIOps 使用机器学习对组合的 IT 数据执行综合分析。
结果是自动化驱动的洞察力驱使持续改进和修复。AIOps 支持基本 IT 功能的持续集成和部署 CI/CD
### AIOps 的范围是什么?
@ -52,13 +55,13 @@ AIOps 平台使用大数据。他们从各种 IT 运营和设备收集数据,
随着应用和 IT 环境的扩展它们会产生大量数据。IT 运营团队因无法管理的数据而筋疲力尽。但是,人工智能可以处理大量数据。随着数据量的扩大,将人工智能纳入 IT 流程的机会要大得多。
异常检测、分类和预测都可以通过使用机器学习和深度学习模型来完成这些模型擅长分析海量数据并提供分析。AIOps 的许多功能可帮助公司通过交互式仪表盘提供良好的用户体验。
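LCTT 译注:为了直观感受“在指标数据上做异常检测”是什么意思,下面用 Python 标准库勾勒一个最朴素的 z 分数检测。这纯属教学示意,真实 AIOps 平台使用的模型要复杂得多,数据也是虚构的:

```python
# LCTT 译注示例:最朴素的 z 分数异常检测
# 偏离均值超过若干个标准差的点被视为异常
from statistics import mean, stdev


def zscore_anomalies(samples, threshold=2.0):
    """返回偏离均值超过 threshold 个标准差的样本下标列表。"""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]


# 假想的接口响应时间序列(毫秒),下标 5 处有一次明显的延迟尖刺
latency = [102, 98, 101, 99, 103, 950, 100, 97]
print(zscore_anomalies(latency))  # 预期输出:[5]
```

真实系统还要处理季节性、趋势和多维指标,但“把海量指标流交给模型去挑出异常点”这个核心思路是一致的。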
实施 AIOps 的企业报告了诸如无缝体验、更低的运营费用、更快的客户服务、更短的平均解决时间和更少的停机时间等好处。AIOps 通过基于预测分析做出坚定的决策来支持 IT 运营。
### 最后一点
AIOps 是 IT 运营分析ITOA的下一步。人工智能、认知技能和 RPA机器人流程自动化用于在基础设施或 IT 运营的小问题演变成大问题之前自动修复它们。自我修复系统是 AIOps 的最终目标。
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/get-ready-to-embrace-the-aiops-era/
作者:[Amit Shingala][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -3,25 +3,30 @@
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15067-1.html"
在 Arch Linux 和其他发行版中使用终端连接到 WiFi
======
![](https://img.linux.net.cn/data/attachment/album/202209/24/185145fcas1rje3f8pr8sa.jpg)
> 本快速指南介绍了在 Arch Linux 和其他发行版中使用终端设置和连接 WiFi 所需的步骤。
本指南非常适合没有 GUI 只有终端且没有其他有线互联网连接可用的情况。这些步骤可帮助你手动检测无线网卡和设备,并通过终端密码验证连接到 WiFi 热点。
本指南使用 [iwd][1]iNet Wireless Daemon通过终端连接到 WiFi。
### 在 Arch Linux 和其他发行版中使用终端连接到 WiFi
#### 1设置 iwd
`iwd` 包有三个主要模块:
- `iwctl`:无线客户端
- `iwd`:守护进程
- `iwmon`:监控工具
在终端中输入:
@ -31,35 +36,35 @@ iwctl
![iwctl 提示符][2]
如果找不到命令,那么需要从 [此处][3] 下载安装包。
从任何其他具有互联网连接的系统/笔记本电脑获取帮助,以通过安装 USB 下载和安装软件包。
或者,如果你有一个可连接互联网的 USB 网卡,那么将其插入你的系统。并通过以下命令安装。
USB 网卡应该可以在 Arch 和当今大多数 Linux 系统中开箱即用,连接到互联网。
**Arch**
```
pacman -S iwd
```
**Debian、Ubuntu 和其他类似发行版**
```
sudo apt-get install iwd
```
**Fedora**
```
sudo dnf install iwd
```
如果你看到了 `iwctl` 提示符(如下所示),那么继续下一步。
#### 2配置
运行以下命令以获取系统的**无线设备名称**。
@ -79,15 +84,15 @@ station wlan0 get-networks
该命令为你提供具有安全类型和信号强度的可用 WiFi 网络列表。
#### 3、连接
要**连接到 WiFi 网络**,请使用上述 `get-networks` 命令中的 WiFi 接入点名称运行以下命令。
```
station wlan0 connect <SSID>
```
出现提示时,输入你的 WiFi 密码。
![使用 iwctl 连接到 WiFi][6]
@ -95,27 +100,25 @@ station wlan0 connect
### 使用指南
如下所示,你可以使用简单的 `ping` 命令检查连接。`ping` 回复成功的数据包传输表示连接稳定。
```
ping -c 3 google.com
```
你还可以使用以下命令检查连接状态。
```
station wlan0 show
```
* iwd`/var/lib/iwd` 中保存 `.psk` 后缀的配置文件,其中带有你的接入点名称。
`iwd``/var/lib/iwd` 中保存 `.psk` 后缀的配置文件,其中带有你的接入点名称。此文件包含使用你的 WiFi 网络的密码和 SSID 生成的哈希文件。
`CTRL+D` 退出 `iwctl` 提示符。
### 总结
我希望本指南可以帮助你通过终端连接到互联网。当你没有其他方式连接到 WiFi 时,这会有所帮助。例如,如果你在独立系统(不是 VM中安装 Arch Linux那么需要连接到互联网以通过终端使用 `pacman` 下载软件包。
如果你遇到任何问题,请在下面的评论栏中指出错误消息。
via: https://www.debugpoint.com/connect-wifi-terminal-linux/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,175 @@
[#]: subject: "Install Linux Mint with Windows 11 Dual Boot [Complete Guide]"
[#]: via: "https://www.debugpoint.com/linux-mint-install-windows/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "gpchn"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15073-1.html"
使用 Windows 11 双引导安装 Linux Mint
======
![](https://img.linux.net.cn/data/attachment/album/202209/26/115222iqlvc0w020m37mc3.jpg)
> 将 Linux Mint 与 Windows 11或 Windows 10同时安装并制作双引导系统的完整指南。
如果你是新 Linux 用户,想在不删除 OEM 安装的 Windows 的情况下安装 Linux Mint请遵循本指南。完成下面描述的步骤后你应该会拥有一个双引导系统可以在 Linux 系统中学习和完成工作,而无需引导 Windows。
### 1、开始之前你需要什么
启动到你的 Windows 系统,并从官方网站下载 Linux Mint ISO 文件。ISO 文件是 Linux Mint 的安装镜像,我们将在本指南中使用它。
在官网(图 1下载 Cinnamon 桌面版的 ISO适合所有人
> **[下载链接][1]**
![图 1从官网下载 Linux Mint][2]
下载后,将 U 盘插入你的系统。然后使用 Rufus 或 [Etcher][3] 将上面下载的 .ISO 文件写入该 USB 驱动器。
### 2、准备一个分区来安装 Linux Mint
正常情况下Windows 笔记本电脑通常配备 C 盘和 D 盘。C 盘是安装 Windows 的地方,而新笔记本电脑的 D 盘(以及任何后续的盘,如 E 盘等)通常是空的。现在,你有两个选项可供选择:一是 **缩小 C 盘**,为额外的 Linux 安装腾出空间;二是 **使用其他驱动器/分区**,例如 D 盘或 E 盘。
选择你希望的方法。
如果你选择使用 D 盘或 E 盘用于 Linux 系统,请确保先禁用 BitLocker然后再禁用现代 OEM 安装的 Windows 笔记本电脑附带的所有其他功能。
* 从开始菜单打开 Windows PowerShell 并键入以下命令(图 2以禁用 BitLocker。根据你的目标驱动器更改驱动器号这里我使用了驱动器 E
```
manage-bde -off E
```
![图 2禁用 Windows 驱动器中的 BitLocker 以安装 Linux][4]
如果你选择缩小 C 盘(或任何其他驱动器),请从开始菜单打开“<ruby>磁盘管理<rt>Disk Management</rt></ruby>”,它将显示你的整个磁盘布局。
* 右键单击​​并在要缩小的驱动器上选择“<ruby>缩小卷<rt>Shrink Volume</rt></ruby>”(图 3以便为 Linux Mint 腾出位置。
![图 3磁盘分区中的压缩卷选项示例][5]
* 在下一个窗口中,在“<ruby>输入要缩小的空间量(以 MB 为单位)<rt>Enter the amount of space to shrink in MB</rt></ruby>”下以 MB 为单位提供你的分区大小(图 4。显然它应该小于或等于“<ruby>可用空间大小<rt>Size of available space</rt></ruby>”中提到的值。因此,对于 100 GB 的分区,给出 100*1024=102400 MB。
* 完成后,单击“<ruby>缩小<rt>Shrink</rt></ruby>”。
![图 4输入 Linux 分区的大小][6]
现在,你应该会看到一个“<ruby>未分配空间<rt>Unallocated Space</rt></ruby>”,如下所示(图 5。右键单击它并选择“<ruby>新建简单卷<rt>New Simple Volume</rt></ruby>”。
![图 5创建未分配空间][7]
* 此向导将使用文件系统准备和格式化分区。注意:你可以在 Windows 本身中或在 Linux Mint 安装期间执行此操作。Linux Mint 安装程序也为你提供了创建文件系统表和准备分区的选项,我建议你在这里做。
* 在接下来的一系列屏幕中(图 6、7 和 8以 MB 为单位给出分区大小,分配驱动器号(例如 D、E、F并将文件系统设为 fat32。
![图 6新建简单卷向导-page1][8]
![图 7新建简单卷向导-page2][9]
![图 8新建简单卷向导-page3][10]
* 最后,你应该会看到你的分区已准备好安装 Linux Mint。你应该在 Mint 安装期间按照以下步骤选择此选项。
![图 9安装 Linux 的最终分区][11]
* 作为预防措施,**记下分区大小**(你刚刚在图 9 中作为示例创建的分区)以便在安装程序中快速识别它。
### 3、在 BIOS 中禁用安全启动
插入 USB 驱动器并重新启动系统。
* 开机时,反复按相应的功能键进入 BIOS。你的笔记本电脑型号的按键可能不同。下面是主要笔记本电脑品牌的参考。
| 笔记本厂商 | 进入 BIOS 的功能键 |
| :- | :- |
| 宏碁 | `F2``DEL` |
| 华硕 | PC 使用 `F2`,主板是 `F2``DEL` |
| 戴尔 | `F2``F12` |
| 惠普 | `ESC``F10` |
| Lenovo | `F2``Fn + F2` |
| Lenovo台式机 | `F1` |
| LenovoThinkPad | `Enter + F1` |
| 微星 | `DEL` |
| 微软 Surface 平板 | 按住音量增加键 |
| ORIGIN PC | `F2` |
| 三星 | `F2` |
| 索尼 | `F1`、`F2` 或 `F3` |
| 东芝 | `F2` |
* 你应该禁用 BIOS 安全设置并确保将启动设备优先级设置为 U 盘。
* 然后按 `F10` 保存并退出。
### 4、安装 Linux Mint
如果一切顺利,你应该会看到一个安装 Linux Mint 的菜单。选择 “Start Linux Mint……” 选项。
![图 10Linux Mint GRUB 菜单启动安装][12]
片刻之后,你应该会看到 Linux Mint Live 桌面。在桌面上,你应该会看到一个安装 Linux Mint 的图标以启动安装。
在下一组屏幕中,选择你的语言、键盘布局、选择安装多媒体编解码器并点击继续按钮。
在安装类型窗口中,选择 “<ruby>其他<rt>Something Else</rt></ruby>” 选项。
在下一个窗口(图 11仔细选择以下内容
![图 11选择 Windows 11 安装 Linux Mint 的目标分区][13]
* 在“<ruby>设备<rt>Device</rt></ruby>”下,选择刚刚创建的分区;你可以通过我之前提到的要记下的分区大小来识别它。
* 然后点击“<ruby>更改<rt>Change</rt></ruby>”,在编辑分区窗口中,选择 Ext4 作为文件系统,选择格式化分区选项和挂载点为 `/`
* 单击“<ruby>确定<rt>OK</rt></ruby>”,然后选择用于安装<ruby>引导加载程序<rt>boot loader</rt></ruby>的设备;理想情况下,它应该是下拉列表中的第一个条目。
* 仔细检查更改。因为一旦你点击立即安装,你的磁盘将被格式化,并且无法恢复。当你认为一切准备就绪,请单击“<ruby>立即安装<rt>Install Now</rt></ruby>”。
在以下屏幕中,选择你的位置,输入你的姓名并创建用于登录系统的用户 ID 和密码。安装应该开始(图 12
![图 12安装中][14]
安装完成后(图 13取出 U 盘并重新启动系统。
![图 13安装完成][15]
如果一切顺利,在成功安装为双引导系统后,你应该会看到带有 Windows 11 和 Linux Mint 的 GRUB。
现在你可以使用 [Linux Mint][16] 并体验快速而出色的 Linux 发行版。
### 总结
在本教程中,我向你展示了如何在装有 OEM 的 Windows 的笔记本电脑或台式机中使用 Linux Mint 创建一个简单的双启动系统。这些步骤包括分区、创建可引导 USB、格式化和安装。
尽管上述说明适用于 Linux Mint 21 Vanessa但是它现在应该可以用于所有其他出色的 [Linux 发行版][17]。
如果你遵循本指南,请在下面的评论框中告诉我你的安装情况。
如果你成功了,欢迎来到自由世界!
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/linux-mint-install-windows/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[gpchn](https://github.com/gpchn)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.linuxmint.com/download.php
[2]: https://www.debugpoint.com/wp-content/uploads/2022/09/Download-Linux-Mint-from-the-official-website.jpg
[3]: https://www.debugpoint.com/etcher-bootable-usb-linux/
[4]: https://www.debugpoint.com/wp-content/uploads/2022/09/Disable-BitLocker-in-Windows-Drives-to-install-Linux.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/09/Example-of-Shrink-Volume-option-in-Disk-Partition-1024x453.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/09/Enter-the-size-of-your-Linux-Partition.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/09/Unallocated-space-is-created.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page1.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page2.jpg
[10]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page3.jpg
[11]: https://www.debugpoint.com/wp-content/uploads/2022/09/Final-partition-for-installing-Linux.jpg
[12]: https://www.debugpoint.com/wp-content/uploads/2022/09/Linux-Mint-GRUB-Menu-to-kick-off-installation.jpg
[13]: https://www.debugpoint.com/wp-content/uploads/2022/09/Choose-the-target-partition-to-install-Linux-Mint-with-Windows-11.jpg
[14]: https://www.debugpoint.com/wp-content/uploads/2022/09/Installation-is-in-progress.jpg
[15]: https://www.debugpoint.com/wp-content/uploads/2022/09/Installation-is-complete.jpg
[16]: https://www.debugpoint.com/linux-mint
[17]: https://www.debugpoint.com/category/distributions
[18]: https://www.debugpoint.com/install-java-17-ubuntu-mint/


@ -3,37 +3,40 @@
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "littlebirdnest"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15064-1.html"
AMD 的开源图形驱动程序 Vulkan 现在支持光线追踪
======
![](https://www.opensourceforu.com/wp-content/uploads/2022/09/AMD-Ryzen-Zen-CPUs_Next-Gen-1536x842.jpg)
> RDNA 2 GPU 的 Linux 用户可以使用 AMD 的 AMDVLK GPUOpen 开源 Vulkan 驱动程序。
用于 Radeon RX 6000 GPU 的 AMDVLK GPUOpen 图形驱动程序在过去一周改进了对 64 位光线追踪的支持。这涵盖了支持 RDNA 2 图形的 APU 以及桌面/移动 GPU。所有平台上的所有 AMD Vulkan 驱动程序现在都支持硬件光线追踪,包括 Mesa3D RADV、AMDVLK GPUOpen 和 AMDGPU-PRO。
GPU 光线追踪库GPURT的基础是一个 C++ 接口。根据其用法和依赖关系,公共接口被拆分为各种头文件。用户可以在官方的 GitHub 仓库上了解更多信息,它还包括了 RDNA 2 GPURT 的结构细分。最新的 AMDVLK GPUOpen v-2022.Q3.4 信息如下:
**更新和新功能**
- 扩展 Navi2x 的 64 位光线追踪功能。
- 将 Vulkan 标头升级到版本 1.3.225
- 游戏性能优化,包括《荣耀战魂》和《奇点灰烬》
**已解决的问题:**
- `dEQP-VK.api.copy_and_blit.*.resolve_image.whole_copy_before_resolving_transfer.*` 新版本 CTS 失败。
- dEQP-VK.pipeline.creation 缓存控件有一个 CTS 警告。
- Ubuntu 22.04 上的 Firefox 损坏
- VulkanInfo 崩溃,管道缓存已停用
- RX 6800 上的 RGP 测试套件故障
新的改进包括 GPU 光线追踪库GPURT它将包括使用 HLSL 之类的着色器在光线追踪中看到的边界体积层次BVH的构造和排序处理。这个库将提供一个标准库来改进图形渲染并引入更多的统一性。DirectX 12 DXR 也将与新库一起使用。
对 GPU 光线追踪GPURT库的描述为“一个静态库源代码交付为支持 DXRDirectX 12和 Vulkan® RT API 的 AMD 驱动程序提供与光线追踪相关的功能”。该库使用该公司的平台抽象库PAL构建。
用户可参考最新 AMDVLK GPUOpen v-2022.Q3.4 升级的安装说明。用户在更新任何软件、硬件或驱动程序之前应备份所有相关数据,以免丢失重要文件。
为了让最新的 Linux 驱动程序为 AMD、Intel 和 NVIDIA 技术做好准备,已经投入了大量工作,这些技术都是在今年第一季度推出的。
@ -43,8 +46,8 @@ via: https://www.opensourceforu.com/2022/09/amds-open-source-vulkan-graphics-dri
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[littlebirdnest](https://github.com/littlebirdnest)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,77 @@
[#]: subject: "Wow! Rust-based Redox OS Gets an Anonymous Donation of $390,000 in Cryptocurrency"
[#]: via: "https://news.itsfoss.com/redox-os-anonymous-donation/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "littlebirdnest"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15069-1.html"
哇!基于 Rust 的 Redox OS 获得 390,000 美元的加密货币匿名捐赠
======
> Redox OS 刚刚获得了大量匿名捐款。它将用于什么?令人兴奋的事情!
![Wow! Rust-based Redox OS Gets an Anonymous Donation of $390,000 in Cryptocurrency][1]
[Redox OS][2] 是一个用 Rust 编写的类 Unix 操作系统。
该项目由 Jeremy Soller 于 2015 年发起,他被公认为 [System76][3] 的首席工程师及 [Pop!_OS][4] 的维护者。
我们还介绍了它今年早些时候的最后一个版本:[基于 Rust 的 Redox OS 0.7.0 推出增强硬件支持](https://news.itsfoss.com/redox-os-0-7-0-release/)。
虽然这些更新带来的改进可以让它在更多硬件上启动,但对于大多数用户来说,它可能还无法取代日常使用的系统。
然而,这是一个令人兴奋的项目,值得关注。
**而在收到匿名捐款后,事情变得更加精彩**。
🤯 刚刚有人向 Redox OS 的捐赠地址发送了 **299 个以太币**,相当于近 **39 万美元**(加密货币市场涨跌不定)。
嗯,那是一大笔钱!
![A Video from YouTube][7]
根据 Jeremy 的最新推文,他还没有立即决定如何处理它。
> 一位匿名捐赠者刚刚向 [@redox_os][8] 捐赠地址发送了 299 个以太币(相当于 393,000 美元。这个地址和交易都是公开的。我不知道如何处理这种规模的捐赠但在进行一些研究后很快就会有更多细节。[https://t.co/f3yBDghWSh][9]
>
但是,对推文的回复给了我们一些很好的建议。
一些人建议将其捐赠给负责 Rust 语言的人,还有一些人建议用这笔钱来赞助学习 Rust 和 OS 开发。
他肯定可以使用它来扩展 Redox OS 或其他任何需要该资源的东西。
归根结底,对于想要更多基于 Rust 的东西的人来说,无论 Jeremy 选择做什么,这都可能间接成为一件好事。
或者,也许买一辆带有 Redox OS 标志的布加迪?好吧,一些推特用户对这一事件有过搞笑的回复!😂
> 这不是开源项目第一次收到大额的加密货币捐赠。Apache 软件基金会在 2018 年就 [收到过][13] 价值 100 万美元的比特币。
当 Jeremy 决定分享有关捐赠的更多细节以及他打算如何处理时,我将更新这篇文章。
💬 *你如何看待匿名捐赠给 Redox OS如果你得到那笔捐款你会怎么做在下面的评论框中让我们知道你的想法。*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/redox-os-anonymous-donation/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[littlebirdnest](https://github.com/littlebirdnest)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/donor-sends-ether-to-redux-os.png
[2]: https://www.redox-os.org/
[3]: https://system76.com/
[4]: https://pop.system76.com/
[5]: https://news.itsfoss.com/redox-os-0-7-0-release/
[7]: https://tenor.com/embed/17544086
[8]: https://twitter.com/redox_os?ref_src=twsrc%5Etfw
[9]: https://t.co/f3yBDghWSh
[13]: https://news.apache.org/foundation/entry/the-apache-software-foundation-receives


@ -0,0 +1,213 @@
[#]: subject: "GNOME 43: Top New Features and Release Wiki"
[#]: via: "https://www.debugpoint.com/gnome-43/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15071-1.html"
GNOME 43 发布,标志性的版本
======
> 对 GNOME 43 桌面环境的各种功能的介绍,以及它给你的日常需求和工作流程带来的变化和影响。
![通过 GNOME OS 运行的 GNOME 43][1]
这篇文章总结了所有关于 GNOME 43 的必要信息包括功能、发布时间表等等。GNOME 43 版本可能是自 GNOME 40 以来在功能和对你的工作流程影响最大的一个版本。
主要的变化包括更新的 Shell 和更快的性能、向 GTK4 和 libadwaita 的转换、翻新的文件应用,以及 Web 应用的奇妙变化。
所有这些必要的变化都是早该进行的,并将改变你在 GNOME 桌面上的传统工作流程,使你的工作效率更高。
### 时间表
GNOME 43 于 2022 年 9 月 21 日 [正式发布][2]。
* GNOME 43 测试版2022 年 8 月 31 日
* GNOME 43 候选版2022 年 9 月 4 日
* GNOME 43 最终版2022 年 9 月 21 日
### GNOME 43 的功能
#### 1、核心 Shell 的变化
* 终于,得益于 Wayland 最近的工作GNOME 有了对高分辨率的滚轮支持。所以,如果你有一个高分辨率的显示器,用一个高级的鼠标(比如罗技 MX Master 3来滚动应该成为了一种享受。
* 除了上述情况GNOME 43 中的<ruby>直接扫描输出<rt>direct scanout</rt></ruby> 支持将有助于多显示器环境。
* 服务器端的窗口装饰得到了基本的颜色支持。
* Shell 还实现了一个功能,当焦点改变时,通知会消失,并不等待超时。
* 和每个版本一样,你在整个桌面上会体验到更好的动画性能,改进了网格和概览导航以及关键的更新,这给你带来了顺滑的体验。
这些就是核心变化的关键总结。现在,让我们来谈谈快速设置。
#### 2、新的快速设置菜单
系统托盘中的快速设置完全改变了。快速设置项目和菜单现在采用药丸状的切换按钮,用鲜艳的颜色来显示系统中正在发生的事情。该菜单也是动态的,并支持层叠的菜单项目。此外,你可以在快速设置中选择音频设备。
这里有一个快速演示,更多的屏幕截图和文章,请阅读:[GNOME 43 快速设置][3]。
![GNOME 43 的快速设置演示][4]
#### 3、文件应用
GNOME <ruby>文件应用<rt>Files</rt></ruby>在 GNOME 43 版本中增加了很多功能。这个应用程序的改进清单非常巨大。文件管理器是任何桌面环境中使用最多的应用程序。因此,文件应用中的变化对整个用户群的影响最大。
这是 GTK4 版的文件应用第一次亮相(它在 GNOME 42 发布时还没有准备好),它将会彻底改变你的工作流程。
我将尝试用一个简短的列表来解释其中的大部分内容。否则,这将是一篇冗长的文章。我将单独推送另一篇关于文件应用的功能的文章。
##### 自适应侧边栏
可以让你访问导航、收藏夹、网络驱动器等的文件应用侧边栏是响应式的。当文件应用窗口的大小达到一定程度时,它会 [自动隐藏][5] 自己。如果你工作时有很多打开的窗口,而且显示器较小,那么这是一个熟悉而方便的功能。
另一个令人兴奋的功能是,当侧边栏完全隐藏时,在左上方会出现一个图标,点击可使其可见。
![自动隐藏侧边栏的文件应用 43][6]
##### 徽章
很久以前GNOME 中就有了徽章,后来它们消失了。因此,徽章在 GNOME 43 中以文件和目录旁边的小图标的形象卷土重来。这些图标代表着类型,如符号链接、只读等。此外,这些图标会根据你的主题改变它们的颜色,而且一个文件也可以有多个图标。
![GNOME 43 中的徽章][7]
##### 橡皮筋选择
接下来是期待已久的橡皮筋选择功能,它 [终于到来了][8]。现在你可以通过拖动选择机制来选择文件和文件夹。这是用户要求最多的功能之一。
![橡皮筋选择功能][9]
##### GtkColumnView 代替了 GtkTreeView
当你把鼠标悬停在列视图中的项目上时,你会看到一个焦点行,这是 GNOME 43 文件应用的另一个关键功能。但它 [在树形视图中尚不可用][10],可能计划在下一次迭代中实现。
![GtkColumnView 启用了焦点行][11]
##### 重新设计的属性窗口,具有交互式的权限和可执行文件检测功能
通过采用 GTK4属性窗口 [完全改变了][12]。该窗口现在更加简洁,设计合理,只在需要的时候显示必要的项目。
此外,属性对话框可以确定文件类型并提供合适的选项。例如,如果你查看一个 Shell 脚本或文本文件的属性,你会得到一个选项,使其可执行。相反,图像文件的属性不会给你一个可执行的选项。
![智能属性窗口][13]
##### 标签式视图的改进
文件的标签式视图得到了一些 [额外的更新][14]。最值得注意的是,当拖动文件到标签时,可以适当地聚焦,在当前聚焦的标签之后创建标签,等等。
##### 重新设计的右键菜单
对文件或文件夹的主要右键菜单进行了分组。首先,打开选项被归入一个子菜单中。其次,复制/粘贴/剪切选项被合并到一个组中。最后,垃圾箱、重命名和压缩选项被归为一组。
此外,“<ruby>在终端打开<rt>Open in terminal</rt></ruby>”的选项对所有文件和文件夹都可用。然而仍然缺失一个“创建新文件”的选项这是我在这个版本中所期望的LCTT 译者:预计 GNOME 44 文件应用将出现此功能)
![各种上下文菜单][15]
##### 其他变化
文件应用中其他醒目的变化是垃圾箱图标,以及其他位置(网络驱动器、磁盘)在右键菜单中有了属性菜单。
最后,文件应用的偏好窗口被重新设计,以显示更多的基本项目。重新设计后,普通用户可以很容易地找到适当的文件设置。
#### 4、Web 应用
让我们抽出一些时间来谈谈我们心爱的 Epiphany又称 GNOME Web是 GNOME 桌面上基于 WebKit 的原生网页浏览器。
这些更新早就应该开始了,并且终于从这个版本开始出现了。
首先GNOME Web 现在支持 WebExtension API。它可以让你在网络中下载和安装火狐和谷歌浏览器的扩展。以下是做法
* 从 Firefox 附加组件或谷歌 Chrome 扩展页面下载任何扩展文件xpi 或 crx 文件)。
* 点击汉堡菜单,选择<ruby>扩展程序<rt>Extensions</rt></ruby>
* 最后,点击<ruby>添加<rt>Add</rt></ruby>来安装它们。
WebExtension 的支持是使 Web 应用尽快可用的关键步骤。
其次,可以使用火狐浏览器同步选项,让你通过火狐浏览器账户登录 Web 应用,同步书签和其他浏览器项目。
![使用火狐账户登录 Web 应用][16]
Web 应用中其他值得注意的变化包括对 “查看源代码” 的支持、GTK4 的移植工作和一个更新的 PDF 库PDF.js 2.13.216)。
Web 应用中仍然缺少的一个关键组件是 [通过 GStreamer 支持 WebRTC][17]。一旦这个功能出现,它将是一个适合日常使用的浏览器。
希望有一天,我们都有一个体面的非火狐、非 Chromium 的替代浏览器。
#### 5、设置应用
<ruby>设置应用<rt>Settings</rt></ruby> 的窗口在这个版本中获得了大量改进和视觉微调。重要的变化包括:经过长时间的 [有趣的对话][18] 之后,警报中的“狗叫声”现在已经消失了。
此外,引入了一个新的设备安全面板,日期和时间面板中的时区地图也修改了。
设置窗口的侧边栏也是响应式的,并为你提供自动隐藏功能,如上图所示的文件应用一样。
#### 6、软件应用
GNOME <ruby>软件应用<rt>Software</rt></ruby> 有两个关键的变化。这些变化使你可以在一个页面上查看应用程序的更多信息。
首先,新增的“该作者的其他应用程序”部分为你提供一个由当前应用程序的作者编写的其他应用程序列表。这有助于发现新应用,也能告诉你该开发者有多受欢迎。
其次GNOME 43 软件应用现在在一个单独的窗口中为你提供了 Flatpak 应用程序所需的详细权限列表。因此,你可以在安装它们之前确认该应用程序所需权限。
另一个关键的视觉变化是在应用程序概览主页面上新增了 “适用于 Fedora/任何发行版”部分,这需要配置。
![软件应用中的开发者的其他应用程序部分][19]
#### 7、气候变化墙纸
我不确定这个功能是否有了。因为我找不到它,但我听说过它。所以,我想我应该在这里提到它。
这个功能是GNOME 43 带来了一张展示全球温度在几十年间不断上升的背景墙纸,其灵感来自 [气候条纹][20]。该墙纸包含垂直的彩色编码条,分别表示低温和高温。我认为这是一个很好的提醒,也是提高人们气候意识的努力。这是它在 GitLab 中的 [提交][21]。
此外,还有几张新的 [白天和黑夜][22] 的新鲜壁纸。
这就是我可以找到并总结的所有基本变化。除了这些GNOME 43 还有大量的错误修复、性能改进和代码清理。
Fedora 37 将在发布时采用 GNOME 43它的某些部分应该在 10 月发布的 Ubuntu 22.10 中出现。
### 总结
GNOME 43 是一个标志性的版本因为它改变了几个基本的设计影响了数百万用户的工作流程。快速设置的转变是非常棒的而且早该如此了。此外文件应用、Web 应用和设置应用的必要改变将提高你的工作效率。
此外,新功能的到来,同时保持了设计准则和美学的理念。一个好的用户界面需要一个深思熟虑的过程,而开发者在这个版本中做了完美的工作。
所以,差不多就是这样了。这就是 GNOME 43 的内容。如果你打算得到这个更新并想从 KDE 跳到 GNOME请告诉我
🗨️请在下面的评论区让我知道你最喜欢的功能。
举杯~
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/gnome-43/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/08/GNOME-43-Running-via-GNOME-OS.jpg
[2]: https://debugpointnews.com/gnome-43-release/
[3]: https://www.debugpoint.com/gnome-43-quick-settings/
[4]: https://www.debugpoint.com/?attachment_id=10682
[5]: https://gitlab.gnome.org/GNOME/nautilus/-/merge_requests/877
[6]: https://www.debugpoint.com/?attachment_id=10684
[7]: https://www.debugpoint.com/?attachment_id=10685
[8]: https://gitlab.gnome.org/GNOME/nautilus/-/merge_requests/817
[9]: https://www.debugpoint.com/wp-content/uploads/2022/08/Rubberband-Selection-Feature.gif
[10]: https://gitlab.gnome.org/GNOME/nautilus/-/merge_requests/817
[11]: https://www.debugpoint.com/wp-content/uploads/2022/08/GtkColumnView-enables-row-focus.gif
[12]: https://gitlab.gnome.org/GNOME/nautilus/-/merge_requests/745
[13]: https://www.debugpoint.com/wp-content/uploads/2022/08/Intelligent-properties-window.jpg
[14]: https://gitlab.gnome.org/GNOME/nautilus/-/merge_requests/595
[15]: https://www.debugpoint.com/?attachment_id=10689
[16]: https://www.debugpoint.com/wp-content/uploads/2022/08/Login-to-Web-using-Firefox-account.jpg
[17]: https://twitter.com/_philn_/status/1490391956970684422
[18]: https://discourse.gnome.org/t/dog-barking-error-message-sound/9529/2
[19]: https://www.debugpoint.com/wp-content/uploads/2022/08/Other-APPS-by-developer-section-in-Software.jpg
[20]: https://showyourstripes.info/s/globe/
[21]: https://gitlab.gnome.org/GNOME/gnome-backgrounds/-/commit/a142d5c88702112fae3b64a6d90d10488150d8c1
[22]: https://www.debugpoint.com/custom-light-dark-wallpaper-gnome/


@ -0,0 +1,39 @@
[#]: subject: "Lawmakers Proposes A New Bill To Protect Open Source Software"
[#]: via: "https://www.opensourceforu.com/2022/09/lawmakers-proposes-a-new-bill-to-protect-open-source-software/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "littlebirdnest"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15072-1.html"
美国立法者提出一项保护开源软件的新法案
======
![](https://www.opensourceforu.com/wp-content/uploads/2022/09/law-4-1536x1025.jpeg)
> 美国的《保护开源软件法案》将责成管理和预算办公室提供有关如何安全使用开源软件的说明。
美国立法者周四提出了一项措施要求美国网络安全和基础设施安全局CISA创建一个风险框架以提高开源软件的安全性。各机构将利用该框架来降低依赖开源代码的系统的风险CISA 还将决定关键基础设施的所有者和运营商是否也可以自愿使用该框架。
大多数系统依赖于免费提供、由社区维护的开源软件来构建网站和应用程序;最大的用户之一就是美国联邦政府。该立法由美国参议院国土安全委员会主席、密歇根州民主党参议员 Gary Peters 和该委员会的高级成员、俄亥俄州共和党参议员 Rob Portman 在一次听证会后提出,以回应在开源代码中发现的严重而广泛的 Log4j 漏洞,该漏洞影响了美国联邦系统和全球数百万其他系统。
“这一事件对联邦系统和关键基础设施公司——包括银行、医院和公用事业公司——构成了严重威胁,美国人每天都依赖这些公司提供基本服务,”彼得斯在公告中说。“这项明智的两党立法将有助于保护开源软件,并进一步加强我们的网络安全防御,防止网络犯罪分子和外国对手对全国网络发起的不断的攻击。”
这项《保护开源软件法》还要求美国管理和预算办公室为各机构发布关于保护开源软件的指南,在 CISA 网络安全咨询委员会中设立一个软件安全小组委员会,并要求 CISA 聘请开源软件专家协助处理网络事件。
在此之前Peters 和 Portman 的提议已获得美国参议院一致通过并签署成为法律,以加强州和地方政府的网络防御,并迫使关键基础设施的所有者和运营商向 CISA 报告重大网络攻击和勒索软件付款。
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/lawmakers-proposes-a-new-bill-to-protect-open-source-software/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[littlebirdnest](https://github.com/littlebirdnest)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed


@ -0,0 +1,114 @@
[#]: subject: "systemd is Now Available in WSL"
[#]: via: "https://news.itsfoss.com/systemd-wsl/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: "vvvbbbcz"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15068-1.html"
systemd 已可用于 WSL
======
> 微软的 WSL 现已支持 systemd为用户提供了更好的体验。你可阅读此文了解更多。
![systemd 已可用于 WSL][1]
WSL<ruby>Windows 的 Linux 子系统<rt>Windows Subsystem for Linux</rt></ruby>)终于拥有了对 systemd 的支持,这是在 systemd 的创建者加入微软的几个月后实现的。
> **[更多 Linux 开发者们加入微软systemd 的创建者也加入这一行列][2]**
而这已通过微软和 Canonical 的合作成为可能。
> **如果你好奇 systemd 是什么**
>
> systemd 是一套 Linux 系统的基本组成模块。它提供了一个系统和服务管理器,作为 PID 1 运行,并启动系统的其他部分。
>
> 来自systemd.io
它作为一个初始化系统,启动并维持用户空间其他服务的正常运行。
让我们看看它是如何被引入 WSL 的。
### systemd 增强 WSL 的体验
![WSL: 与 Canonical 合作以支持 systemd][4]
在 WSL 中引入 systemd主要是为改善 Windows 机器上的 Linux 工作流程。
像 Debian、Ubuntu、Fedora 等,都是默认运行 systemd 的。因此,这项整合将使这些发行版的用户更方便地在 WSL 上做更多工作。
很多关键的 Linux 程序也是靠 systemd 实现的。例如 snap、microk8s 和 LXD 都依赖它。
即使我们有 [不含 systemd 的发行版][5] 可用,它们也并不适合所有人。因此,在 WSL 上添加对 systemd 的支持是很有意义的。
systemd 的存在也使得在 Windows 中使用更多工具来测试和运行成为可能,从而带来更好的 WSL 体验。
### 它是如何实现的
WSL 背后的团队必须修改其架构:他们让 WSL 的初始化进程在 Linux 发行版中作为 systemd 的一个子进程启动。
正如其 [官方公告][7] 所述,这样做使得 WSL 初始化程序能够为 Windows 和 Linux 子系统之间的通讯提供必要的基础。
他们还做了额外的修改:防止 systemd 使 WSL 实例保持活动状态,以确保系统干净地关机。
你亦可访问他们的 [官方文档][8] 以了解更多。
### 在 WSL 上使用 systemd
> 现有的 WSL 用户必须在他们的系统上手动启用 systemd以防止由于 systemd 的引入而导致的启动问题。
首先,你必须确保你的系统运行的是 **0.67.6** 或更高版本的 WSL。
你可以通过以下命令检查你的 WSL 版本。
```
wsl --version
```
如果你正在运行旧版本,你可以通过 <ruby>微软应用商店<rt>Microsoft Store</rt></ruby> 或者以下命令更新它。
```
wsl --update
```
此外,如果你不是 <ruby>Windows 预览体验成员<rt>Windows Insider</rt></ruby>,你可以到 [WSL 发行页面][9] 下载它来体验。
为了让 systemd 在你的系统上运行,你需要修改 [wsl.conf][10] 文件。
在 `wsl.conf` 中添加以下几行,以使 WSL 在启动时运行 systemd
```
[boot]
systemd=true
```
最后,重启你的 WSL 实例以见证更改。
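作为一个简单的验证(假设你已按上述方式修改了 `/etc/wsl.conf` 并重启了实例),可以在 WSL 内查看 PID 1 的进程名systemd 启用成功时应显示 `systemd`,未启用时通常是 `init`

```shell
# 读取 PID 1 的进程名;启用 systemd 后应输出 systemd
init_name="$(cat /proc/1/comm)"
echo "PID 1: ${init_name}"
```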
随着对 systemd 的支持,微软在 WSL 的发展又前进了一大步,这将使得 WSL 吸引更多用户。
*💬 是否对 WSL 支持 systemd 感到兴奋?或是你更喜欢无 systemd 的发行版?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/systemd-wsl/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[vvvbbbcz](https://github.com/vvvbbbcz)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/systemd-now-available-on-wsl.png
[2]: https://news.itsfoss.com/systemd-creator-microsoft/
[3]: https://news.itsfoss.com/systemd-creator-microsoft/
[4]: https://youtu.be/Ja3qikzd-as
[5]: https://itsfoss.com/systemd-free-distros/
[6]: https://itsfoss.com/systemd-free-distros/
[7]: https://devblogs.microsoft.com/commandline/systemd-support-is-now-available-in-wsl/
[8]: https://learn.microsoft.com/en-in/windows/wsl/
[9]: https://github.com/microsoft/WSL/releases
[10]: https://learn.microsoft.com/en-in/windows/wsl/wsl-config#wslconf


@ -1,77 +0,0 @@
[#]: subject: "Wow! Rust-based Redox OS Gets an Anonymous Donation of $390,000 in Cryptocurrency"
[#]: via: "https://news.itsfoss.com/redox-os-anonymous-donation/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Wow! Rust-based Redox OS Gets an Anonymous Donation of $390,000 in Cryptocurrency
======
Redox OS just got a massive anonymous donation. What will it be used for? Something exciting!
![Wow! Rust-based Redox OS Gets an Anonymous Donation of $390,000 in Cryptocurrency][1]
[Redox OS][2] is a Unix-like operating system written in Rust.
The project was launched in 2015 by Jeremy Soller, popularly recognized as the Principal Engineer at [System76][3] and a maintainer for [Pop!_OS][4].
We also covered its last release earlier this year:
[Rust-based Redox OS 0.7.0 Arrives with Enhanced Hardware Support][5]
While the update involved improvements that allowed it to boot on more hardware, it may not be a replacement daily driver for most users.
Nevertheless, it is an exciting project to keep an eye on.
**And things got more exciting for it after it received an anonymous donation**.
🤯 Someone just sent **299 Ethereum** to Redox OS's donation address, equivalent to nearly **$390,000** (with ups and downs in the cryptocurrency market).
Well, that is a lot of money!
![A Video from YouTube][7]
As per Jeremy's last tweet, he hasn't decided immediately what to do with it.
> An anonymous donor just sent 299 Ether (equivalent to 393,000 USD) to the [@redox_os][8] donation address. Both this address and transaction are public. I have no idea what to do with a donation of this size but will have more details soon after some research.[https://t.co/f3yBDghWSh][9]
However, the replies to the tweet give us some good suggestions.
Some suggest donating it to the folks responsible for Rust language, and some suggest using the money to sponsor learning rust and OS development.
He can surely use it to scale up Redox OS or anything else that needs that resource.
At the end of the day, for people who want more Rust-based stuff, this could be indirectly a good thing with whatever Jeremy chooses to do.
Or, maybe buy a Bugatti with a Redox OS logo on it? Well, some Twitter users have had hilarious replies to this incident! 😂
> This is not the first time an open-source project received a significant cryptocurrency donation. The Apache Software Foundation [received bitcoins][13] valued at $1 M in 2018.
I shall update this article when Jeremy decides to share more details on the donation and what he plans to do with it.
💬 *What do you think about the anonymous donation to Redox OS? What would you do if you got that donation? Let us know your thoughts in the comments box below.*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/redox-os-anonymous-donation/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/donor-sends-ether-to-redux-os.png
[2]: https://www.redox-os.org/
[3]: https://system76.com/
[4]: https://pop.system76.com/
[5]: https://news.itsfoss.com/redox-os-0-7-0-release/
[7]: https://tenor.com/embed/17544086
[8]: https://twitter.com/redox_os?ref_src=twsrc%5Etfw
[9]: https://t.co/f3yBDghWSh
[13]: https://news.apache.org/foundation/entry/the-apache-software-foundation-receives


@ -0,0 +1,95 @@
[#]: subject: "A Modular Chromebook Launched by Framework and Google"
[#]: via: "https://news.itsfoss.com/chromebook-framework/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
A Modular Chromebook Launched by Framework and Google
======
The first modular Chromebook ever? Built by Framework in collaboration with Google. Check this out.
![A Modular Chromebook Launched by Framework and Google][1]
[Framework][2] has launched a Chromebook in collaboration with Google; the main focus of their products has always been to provide repairable and modular laptops.
With the launch of this laptop, they have implemented the same formula as their previous laptops, but with a touch of Google in the form of ChromeOS.
Let's see what they have on offer.
### Framework Laptop Chromebook Edition
![framework laptop chromebook edition][3]
The laptop runs ChromeOS and is a user-serviceable laptop with a very high degree of modularity.
It is based on the existing '[Framework Laptop][4],' with a milled aluminum chassis, that can comfortably house all the parts and keeps the weight in check at 1.3 kg.
Users can choose from a host of different options, such as switching out memory and storage or the ability to choose which ports they want and where they want those.
![framework laptop chromebook edition repairability][5]
You can choose from a variety of RAM options ranging from 8 GB to 64 GB. In terms of storage, users can install up to a 1 TB NVMe SSD and an additional SSD of either 250 GB or 1 TB capacity. The default configuration comes with 256 GB of storage.
Being a modular laptop, you can choose your ports, like USB-C, USB-A, HDMI, or Ethernet, and have them on any side you want.
![port selection framework][6]
By providing these user-friendly modularities and using post-consumer recycled materials, Framework offers a very sustainable product compared to its competitors.
![framework laptop chromebook edition pcr materials][7]
Regarding privacy, the Chromebook has two hardware privacy switches that cut power from the camera and the microphones to disable any access.
![framework laptop chromebook edition hardware privacy switches][8]
#### Specifications
The Framework Laptop Chromebook Edition is powered by a **12-core Intel i5-1240P**, which can turbo up to 4.4 GHz; it also features Intel's Iris Xe graphics for running graphical workloads.
Other notable highlights include:
* 13.5-inch display (2256x1504) with 100% sRGB support and a 3:2 aspect ratio.
* 55Wh battery.
* Set of 80dB Stereo 2W Speakers.
* Backlight keyboard with 1.5 mm key travel.
* Support for WiFi 6E.
* 1080p 60FPS webcam.
#### Availability & Pricing
The laptop is up for pre-order starting at **$999** on their [official website][9]; it is currently limited to buyers from the US and Canada only.
[Framework Chromebook][10]
They are expecting shipments to begin in early December and are offering a fully-refundable **$100 deposit to pre-book** the laptop.
Framework seems to have taken the right step in attracting users to its product by offering a ChromeOS-specific version of its existing laptop, especially now that Google has seemingly stopped further development of its own ChromeOS laptops, the Pixelbook series.
💬 *What do you think? Can this be a viable alternative compared to the other Chromebooks in the market?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/chromebook-framework/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/framework-chromebook-laptop.jpg
[2]: https://frame.work/
[3]: https://news.itsfoss.com/content/images/2022/09/framework_laptop_chromebook_edition.png
[4]: https://frame.work/products/laptop-12-gen-intel
[5]: https://news.itsfoss.com/content/images/2022/09/framework_laptop_chromebook_edition_repairability.png
[6]: https://news.itsfoss.com/content/images/2022/09/expansion-cards.jpg
[7]: https://news.itsfoss.com/content/images/2022/09/framework_laptop_chromebook_edition_pcr_materials.png
[8]: https://news.itsfoss.com/content/images/2022/09/framework_laptop_chromebook_edition_privacy.png
[9]: https://frame.work/products/laptop-chromebook-12-gen-intel
[10]: https://frame.work/products/laptop-chromebook-12-gen-intel


@ -0,0 +1,36 @@
[#]: subject: "Kubernetes To Soon Support Confidential Computing"
[#]: via: "https://www.opensourceforu.com/2022/09/kubernetes-to-soon-support-confidential-computing/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Kubernetes To Soon Support Confidential Computing
======
*Constellation is the first always-encrypted Kubernetes (K8s). This refers to a K8s in which all of your workloads and the control plane are completely shielded, and you can remotely confirm this using cryptographic certificates.*
Using confidential computing and confidential VMs, the Constellation Kubernetes engine isolates Kubernetes clusters from the rest of the cloud architecture. As a result, data is always encrypted, both at rest and in memory, and a confidential context is created. Since it adds security and secrecy to data and workloads operating on the public cloud, confidential computing, according to Edgeless Systems, the company that created Constellation, is the future of cloud computing.
Kubernetes nodes run inside confidential virtual machines using Constellation. According to Edgeless Systems, confidential VMs are an evolution of secure enclaves that extend the three principles of confidential computing (runtime encryption, isolation, and remote attestation) across the entire virtual system. Confidential VMs rely on the underlying hardware's support for confidential computing, such as AMD Secure Encrypted Virtualization (SEV), AMD SEV-Secure Nested Paging (SEV-SNP), and Intel Trust Domain Extensions (TDX). Additionally, ARM revealed its new v9 architecture, known as Realms, last year. This design includes confidential VM features.
Constellation attempts to offer attestation, or verification via cryptographic certificates, at the cluster level, in addition to "always-on" encryption. Confidential VMs in Constellation use Fedora CoreOS, which is built on an immutable file system and geared for containers. Constellation also makes use of Sigstore to protect the DevOps chain of trust.
Performance may be a worry when using confidential computing. Yes, encryption affects performance, but a joint benchmark by AMD and Microsoft found that it results in only a small performance hit of between 2% and 8%. Edgeless Systems claims that Constellation performs similarly for heavy workloads.
Given that Constellation is CNCF-certified and interoperable with all major clouds, including GCP and Azure, this should guarantee its interoperability with other Kubernetes workloads and tools.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/kubernetes-to-soon-support-confidential-computing/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed


@ -1,263 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (aREversez)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Real Novelty of the ARPANET)
[#]: via: (https://twobithistory.org/2021/02/07/arpanet.html)
[#]: author: (Two-Bit History https://twobithistory.org)
The Real Novelty of the ARPANET
======
If you run an image search for the word “ARPANET,” you will find lots of maps showing how the [government research network][1] expanded steadily across the country throughout the late 60s and early 70s. Im guessing that most people reading or hearing about the ARPANET for the first time encounter one of these maps.
Obviously, the maps are interesting—its hard to believe that there were once so few networked computers that their locations could all be conveyed with what is really pretty lo-fi cartography. (Were talking 1960s overhead projector diagrams here. You know the vibe.) But the problem with the maps, drawn as they are with bold lines stretching across the continent, is that they reinforce the idea that the ARPANETs paramount achievement was connecting computers across the vast distances of the United States for the first time.
Today, the internet is a lifeline that keeps us tethered to each other even as an airborne virus has us all locked up indoors. So its easy to imagine that, if the ARPANET was the first draft of the internet, then surely the world that existed before it was entirely disconnected, since thats where wed be without the internet today, right? The ARPANET must have been a big deal because it connected people via computers when that hadnt before been possible.
That view doesn't get the history quite right. It also undersells what made the ARPANET such a breakthrough.
### The Debut
The Washington Hilton stands near the top of a small rise about a mile and a half northeast of the National Mall. Its two white-painted modern facades sweep out in broad semicircles like the wings of a bird. The New York Times, reporting on the hotel's completion in 1965, remarked that the building looks “like a sea gull perched on a hilltop nest.”[1][2]
The hotel hides its most famous feature below ground. Underneath the driveway roundabout is an enormous ovoid event space known as the International Ballroom, which was for many years the largest pillar-less ballroom in DC. In 1967, the Doors played a concert there. In 1968, Jimi Hendrix also played a concert there. In 1972, a somewhat more sedate act took over the ballroom to put on the inaugural International Conference on Computer Communication, where a promising research project known as the ARPANET was demonstrated publicly for the first time.
The 1972 ICCC, which took place from October 24th to 26th, was attended by about 800 people.[2][3] It brought together all of the leading researchers in the nascent field of computer networking. According to internet pioneer Bob Kahn, “if somebody had dropped a bomb on the Washington Hilton, it would have destroyed almost all of the networking community in the US at that point.”[3][4]
Not all of the attendees were computer scientists, however. An advertisement for the conference claimed it would be “user-focused” and geared toward “lawyers, medical men, economists, and government men as well as engineers and communicators.”[4][5] Some of the conference's sessions were highly technical, such as the session titled “Data Network Design Problems I” and its sequel session, “Data Network Design Problems II.” But most of the sessions were, as promised, focused on the potential social and economic impacts of computer networking. One session, eerily prescient today, sought to foster a discussion about how the legal system could act proactively “to safeguard the right of privacy in the computer data bank.”[5][6]
The ARPANET demonstration was intended as a side attraction of sorts for the attendees. Between sessions, which were held either in the International Ballroom or elsewhere on the lower level of the hotel, attendees were free to wander into the Georgetown Ballroom (a smaller ballroom/conference room down the hall from the big one),[6][7] where there were 40 terminals from a variety of manufacturers set up to access the ARPANET.[7][8] These terminals were dumb terminals—they only handled input and output and could do no computation on their own. (In fact, in 1972, it's likely that all of these terminals were hardcopy terminals, i.e. teletype machines.) The terminals were all hooked up to a computer known as a Terminal Interface Message Processor or TIP, which sat on a raised platform in the middle of the room. The TIP was a kind of archaic router specially designed to connect dumb terminals to the ARPANET. Using the terminals and the TIP, the ICCC attendees could experiment with logging on and accessing some of the computers at the 29 host sites then comprising the ARPANET.[8][9]
To exhibit the network's capabilities, researchers at the host sites across the country had collaborated to prepare 19 simple “scenarios” for users to experiment with. These scenarios were compiled into [a booklet][10] that was handed to conference attendees as they tentatively approached the maze of wiring and terminals.[9][11] The scenarios were meant to prove that the new technology worked but also that it was useful, because so far the ARPANET was “a highway system without cars,” and its Pentagon funders hoped that a public demonstration would excite more interest in the network.[10][12]
The scenarios thus showed off a diverse selection of the software that could be accessed over the ARPANET: There were programming language interpreters, one for a Lisp-based language at MIT and another for a numerical computing environment called Speakeasy hosted at UCLA; there were games, including a chess program and an implementation of Conway's Game of Life; and—perhaps most popular among the conference attendees—there were several AI chat programs, including the famous ELIZA chat program developed at MIT by Joseph Weizenbaum.
The researchers who had prepared the scenarios were careful to list each command that users were expected to enter at their terminals. This was especially important because the sequence of commands used to connect to any given ARPANET host could vary depending on the host in question. To experiment with the AI chess program hosted on the MIT Artificial Intelligence Laboratory's PDP-10 minicomputer, for instance, conference attendees were instructed to enter the following:
_`[LF]`, `[SP]`, and `[CR]` below stand for the line feed, space, and carriage return keys respectively. I've explained each command after `//`, but this syntax was not used for the annotations in the original._
```
@r [LF] // Reset the TIP
@e [SP] r [LF] // "Echo remote" setting, host echoes characters rather than TIP
@L [SP] 134 [LF] // Connect to host number 134
:login [SP] iccXXX [CR] // Login to the MIT AI Lab's system, where "XXX" should be user's initials
:chess [CR] // Start chess program
```
If conference attendees were successfully able to enter those commands, their reward was the opportunity to play around with some of the most cutting-edge chess software available at the time, where the layout of the board was represented like this:
```
BR BN BB BQ BK BB BN BR
BP BP BP BP ** BP BP BP
-- ** -- ** -- ** -- **
** -- ** -- BP -- ** --
-- ** -- ** WP ** -- **
** -- ** -- ** -- ** --
WP WP WP WP -- WP WP WP
WR WN WB WQ WK WB WN WR
```
In contrast, to connect to UCLA's IBM System/360 and run the Speakeasy numerical computing environment, conference attendees had to enter the following:
```
@r [LF] // Reset the TIP
@t [SP] o [SP] L [LF] // "Transmit on line feed" setting
@i [SP] L [LF] // "Insert line feed" setting, i.e. send line feed with each carriage return
@L [SP] 65 [LF] // Connect to host number 65
tso // Connect to IBM Time-Sharing Option system
logon [SP] icX [CR] // Log in with username, where "X" should be a freely chosen digit
iccc [CR] // This is the password (so secure!)
speakez [CR] // Start Speakeasy
```
Successfully running that gauntlet gave attendees the power to multiply and transpose and do other operations on matrices as quickly as they could input them at their terminal:
```
:+! a=m*transpose(m);a [CR]
:+! eigenvals(a) [CR]
```
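For readers who want to try the same computation today, here is a rough Python/NumPy equivalent of that Speakeasy session. The matrix `m` is invented for illustration, and Speakeasy's `*` on matrices is assumed here to mean matrix multiplication:

```python
import numpy as np

# An arbitrary matrix standing in for whatever an attendee might have typed in.
m = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# a = m * transpose(m): the matrix multiplied by its own transpose,
# which is always symmetric.
a = m @ m.T

# eigenvals(a): the eigenvalues of the symmetric product matrix.
vals = np.linalg.eigvalsh(a)
print(vals)
```

Because `a` is a product of a matrix with its transpose, its eigenvalues are guaranteed to be real and non-negative, which is the sort of thing an attendee could verify interactively at a terminal.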
Many of the attendees were impressed by the demonstration, but not for the reasons that we, from our present-day vantage point, might assume. The key piece of context hard to keep in mind today is that, in 1972, being able to use a computer remotely, even from a different city, was not new. Teletype devices had been used to talk to distant computers for decades already. Almost a full five years before the ICCC, Bill Gates was in a Seattle high school using a teletype to run his first BASIC programs on a General Electric computer housed elsewhere in the city. Merely logging in to a host computer and running a few commands or playing a text-based game was routine. The software on display here was pretty neat, but the two scenarios I've told you about so far could ostensibly have been experienced without going over the ARPANET.
Of course, something new was happening under the hood. The lawyers, policy-makers, and economists at the ICCC might have been enamored with the clever chess program and the chat bots, but the networking experts would have been more interested in two other scenarios that did a better job of demonstrating what the ARPANET project had achieved.
The first of these scenarios involved a program called `NETWRK` running on MIT's ITS operating system. The `NETWRK` command was the entrypoint for several subcommands that could report various aspects of the ARPANET's operating status. The `SURVEY` subcommand reported which hosts on the network were functioning and available (they all fit on a single list), while the `SUMMARY.OF.SURVEY` subcommand aggregated the results of past `SURVEY` runs to report an “up percentage” for each host as well as how long, on average, it took for each host to respond to messages. The output of the `SUMMARY.OF.SURVEY` subcommand was a table that looked like this:
```
--HOST-- -#- -%-UP- -RESP-
UCLA-NMC 001 097% 00.80
SRI-ARC 002 068% 01.23
UCSB-75 003 059% 00.63
...
```
The host number field, as you can see, has room for no more than three digits (ha!). Other `NETWRK` subcommands allowed users to look at summary of survey results over a longer historical period or to examine the log of survey results for a single host.
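The bookkeeping behind `SUMMARY.OF.SURVEY` is simple enough to sketch in modern Python. To be clear, this is only an illustration of the aggregation described above: the invented `surveys` data and the function below bear no resemblance to the original ITS program.

```python
# Each survey run records, per host: whether it answered, and how fast.
# Tuples of (host_number, up, response_time_seconds); time is None when down.
surveys = [
    (1, True, 0.82), (1, True, 0.78), (1, False, None),
    (2, True, 1.20), (2, False, None), (2, True, 1.26),
]

def summary_of_survey(runs):
    """Aggregate survey runs into (up percentage, mean response time) per host."""
    hosts = {}
    for host, up, resp in runs:
        stats = hosts.setdefault(host, {"runs": 0, "up": 0, "resp": []})
        stats["runs"] += 1
        if up:
            stats["up"] += 1
            stats["resp"].append(resp)
    return {
        host: (100 * s["up"] // s["runs"],        # "-%-UP-" column
               sum(s["resp"]) / len(s["resp"]))   # "-RESP-" column (mean)
        for host, s in hosts.items()
    }

print(summary_of_survey(surveys))
```

The interesting point is not the arithmetic but who performs it: a program polling other machines on its own, with no human logging in anywhere.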
The second of these scenarios featured a piece of software called the SRI-ARC Online System being developed at Stanford. This was a fancy piece of software with lots of functionality (it was the software system that Douglas Engelbart demoed in the “Mother of All Demos”), but one of the many things it could do was make use of what was essentially a file hosting service run on the host at UC Santa Barbara. From a terminal at the Washington Hilton, conference attendees could copy a file created at Stanford onto the host at UCSB simply by running a `copy` command and answering a few of the computer's questions:
_`[ESC]`, `[SP]`, and `[CR]` below stand for the escape, space, and carriage return keys respectively. The words in parentheses are prompts printed by the computer. The escape key is used to autocomplete the filename on the third line. The file being copied here is called `<system>sample.txt;1`, where the trailing one indicates the file's version number and `<system>` indicates the directory. This was a convention for filenames used by the TENEX operating system._[11][13]
```
@copy
(TO/FROM UCSB) to
(FILE) <system>sample [ESC] .TXT;1 [CR]
(CREATE/REPLACE) create
```
These two scenarios might not look all that different from the first two, but they were remarkable. They were remarkable because they made it clear that, on the ARPANET, humans could talk to computers but computers could also talk to _each other._ The `SURVEY` results collected at MIT weren't collected by a human regularly logging in to each machine to check if it was up—they were collected by a program that knew how to talk to the other machines on the network. Likewise, the file transfer from Stanford to UCSB didn't involve any humans sitting at terminals at either Stanford or UCSB—the user at a terminal in Washington DC was able to get the two computers to talk to each other merely by invoking a piece of software. Even more, it didn't matter which of the 40 terminals in the Ballroom you were sitting at, because you could view the MIT network monitoring statistics or store files at UCSB using any of the terminals with almost the same sequence of commands.
This is what was totally new about the ARPANET. The ICCC demonstration didn't just involve a human communicating with a distant computer. It wasn't just a demonstration of remote I/O. It was a demonstration of software remotely communicating with other software, something nobody had seen before.
To really appreciate why it was this aspect of the ARPANET project that was important and not the wires-across-the-country, physical connection thing that the host maps suggest (the wires were leased phone lines anyhow and were already there!), consider that, before the ARPANET project began in 1966, the ARPA offices in the Pentagon had a terminal room. Inside it were three terminals. Each connected to a different computer; one computer was at MIT, one was at UC Berkeley, and another was in Santa Monica.[12][14] It was convenient for the ARPA staff that they could use these three computers even from Washington DC. But what was inconvenient for them was that they had to buy and maintain terminals from three different manufacturers, remember three different login procedures, and familiarize themselves with three different computing environments in order to use the computers. The terminals might have been right next to each other, but they were merely extensions of the host computing systems on the other end of the wire and operated as differently as the computers did. Communicating with a distant computer was possible before the ARPANET; the problem was that the heterogeneity of computing systems limited how sophisticated the communication could be.
### Come Together, Right Now
So what I'm trying to drive home here is that there is an important distinction between statement A, “the ARPANET connected people in different locations via computers for the first time,” and statement B, “the ARPANET connected computer systems to each other for the first time.” That might seem like splitting hairs, but statement A elides some illuminating history in a way that statement B does not.
To begin with, the historian Joy Lisi Rankin has shown that people were socializing in cyberspace well before the ARPANET came along. In _A People's History of Computing in the United States_, she describes several different digital communities that existed across the country on time-sharing networks prior to or apart from the ARPANET. These time-sharing networks were not, technically speaking, computer networks, since they consisted of a single mainframe computer running computations in a basement somewhere for many dumb terminals, like some portly chthonic creature with tentacles sprawling across the country. But they nevertheless enabled most of the social behavior now connoted by the word “network” in a post-Facebook world. For example, on the Kiewit Network, which was an extension of the Dartmouth Time-Sharing System to colleges and high schools across the Northeast, high school students collaboratively maintained a “gossip file” that allowed them to keep track of the exciting goings-on at other schools, “creating social connections from Connecticut to Maine.”[13][15] Meanwhile, women at Mount Holyoke College corresponded with men at Dartmouth over the network, perhaps to arrange dates or keep in touch with boyfriends.[14][16] This was all happening in the 1960s. Rankin argues that by ignoring these early time-sharing networks we impoverish our understanding of how American digital culture developed over the last 50 years, leaving room for a “Silicon Valley mythology” that credits everything to the individual genius of a select few founding fathers.
As for the ARPANET itself, if we recognize that the key challenge was connecting the computer _systems_ and not just the physical computers, then that might change what we choose to emphasize when we tell the story of the innovations that made the ARPANET possible. The ARPANET was the first ever packet-switched network, and lots of impressive engineering went into making that happen. I think it's a mistake, though, to say that the ARPANET was a breakthrough because it was the first packet-switched network and then leave it at that. The ARPANET was meant to make it easier for computer scientists across the country to collaborate; that project was as much about figuring out how different operating systems and programs written in different languages would interface with each other as it was about figuring out how to efficiently ferry data back and forth between Massachusetts and California. So the ARPANET was the first packet-switched network, but it was also an amazing standards success story—something I find especially interesting given [how][17] [many][18] [times][19] I've written about failed standards on this blog.
Inventing the protocols for the ARPANET was an afterthought even at the time, so naturally the job fell to a group made up largely of graduate students. This group, later known as the Network Working Group, met for the first time at UC Santa Barbara in August of 1968.[15][20] There were 12 people present at that first meeting, most of whom were representatives from the four universities that were to be the first host sites on the ARPANET when the equipment was ready.[16][21] Steve Crocker, then a graduate student at UCLA, attended; he told me over a Zoom call that it was all young guys at that first meeting, and that Elmer Shapiro, who chaired the meeting, was probably the oldest one there at around 38. ARPA had not put anyone in charge of figuring out how the computers would communicate once they were connected, but it was obvious that some coordination was necessary. As the group continued to meet, Crocker kept expecting some “legitimate adult” with more experience and authority to fly out from the East Coast to take over, but that never happened. The Network Working Group had ARPA's tacit approval—all those meetings involved lots of long road trips, and ARPA money covered the travel expenses—so they were it.[17][22]
The Network Working Group faced a huge challenge. Nobody had ever sat down to connect computer systems together in a general-purpose way; that flew against all of the assumptions that prevailed in computing in the late 1960s:
> The typical mainframe of the period behaved as if it were the only computer in the universe. There was no obvious or easy way to engage two diverse machines in even the minimal communication needed to move bits back and forth. You could connect machines, but once connected, what would they say to each other? In those days a computer interacted with devices that were attached to it, like a monarch communicating with his subjects. Everything connected to the main computer performed a specific task, and each peripheral device was presumed to be ready at all times for a fetch-my-slippers type command…. Computers were strictly designed for this kind of interaction; they send instructions to subordinate card readers, terminals, and tape units, and they initiate all dialogues. But if another device in effect tapped the computer on the shoulder with a signal and said, “Hi, Im a computer too,” the receiving machine would be stumped.[18][23]
As a result, the Network Working Group's progress was initially slow.[19][24] The group did not settle on an “official” specification for any protocol until June 1970, nearly two years after the group's first meeting.[20][25]
But by the time the ARPANET was to be shown off at the 1972 ICCC, all the key protocols were in place. A scenario like the chess scenario exercised many of them. When a user ran the command `@e r`, short for `@echo remote`, that instructed the TIP to make use of a facility in the new TELNET virtual teletype protocol to inform the remote host that it should echo the user's input. When a user then ran the command `@L 134`, short for `@login 134`, that caused the TIP to invoke the Initial Connection Protocol with host 134, which in turn would cause the remote host to allocate all the necessary resources for the connection and drop the user into a TELNET session. (The file transfer scenario I described may well have made use of the File Transfer Protocol, though that protocol was only ready shortly before the conference.[21][26]) All of these protocols were known as “level three” protocols, and below them were the host-to-host protocol at level two (which defined the basic format for the messages the hosts should expect from each other), and the host-to-IMP protocol at level one (which defined how hosts communicated with the routing equipment they were linked to). Incredibly, the protocols all worked.
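The three-level layering just described can be loosely pictured as nested wrapping, with each layer adding its own envelope around the one above. This sketch is only a mental model; the field names are invented, and the real ARPANET message formats were nothing like this:

```python
# A loose illustration of protocol layering, NOT the actual ARPANET formats.

def telnet_payload(text):              # level 3: what the user's session sends
    return {"protocol": "telnet", "data": text}

def host_to_host(dest_host, payload):  # level 2: basic inter-host message format
    return {"dest_host": dest_host, "payload": payload}

def host_to_imp(link, message):        # level 1: how a host hands data to its IMP
    return {"link": link, "message": message}

# Sending ":chess" to host 134 wraps the user's input in all three layers.
frame = host_to_imp(link=1, message=host_to_host(134, telnet_payload(":chess\r")))

# The receiving side peels the layers off in reverse order.
inner = frame["message"]["payload"]
print(inner["data"])
```

The value of the layering was that each level could be specified, implemented, and debugged independently, which is a large part of why a dozen very different host systems could interoperate at all.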
In my view, the Network Working Group was able to get everything together in time and just generally excel at its task because it adopted an open and informal approach to standardization, as exemplified by the famous Request for Comments (RFC) series of documents. These documents, originally circulated among the members of the Network Working Group by snail mail, were a way of keeping in touch between meetings and soliciting feedback to ideas. The “Request for Comments” framing was suggested by Steve Crocker, who authored the first RFC and supervised the RFC mailing list in the early years, in an attempt to emphasize the open-ended and collaborative nature of what the group was trying to do. That framing, and the availability of the documents themselves, made the protocol design process into a melting pot of contributions and riffs on other people's contributions where the best ideas could emerge without anyone losing face. The RFC process was a smashing success and is still used to specify internet standards today, half a century later.
It's this legacy of the Network Working Group that I think we should highlight when we talk about ARPANET's impact. Though today one of the most magical things about the internet is that it can connect us with people on the other side of the planet, it's only slightly facetious to say that that technology has been with us since the 19th century. Physical distance was conquered well before the ARPANET by the telegraph. The kind of distance conquered by the ARPANET was instead the logical distance between the operating systems, character codes, programming languages, and organizational policies employed at each host site. Implementing the first packet-switched network was of course a major feat of engineering that should also be mentioned, but the problem of agreeing on standards to connect computers that had never been designed to play nice with each other was the harder of the two big problems involved in building the ARPANET—and its solution was the most miraculous part of the ARPANET story.
In 1981, ARPA issued a “Completion Report” reviewing the first decade of the ARPANET's history. In a section with the belabored title, “Technical Aspects of the Effort Which Were Successful and Aspects of the Effort Which Did Not Materialize as Originally Envisaged,” the authors wrote:
> Possibly the most difficult task undertaken in the development of the ARPANET was the attempt—which proved successful—to make a number of independent host computer systems of varying manufacture, and varying operating systems within a single manufactured type, communicate with each other despite their diverse characteristics.[22][27]
There you have it from no less a source than the federal government of the United States.
_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][28] on Twitter or subscribe to the [RSS feed][29] to make sure you know when a new post is out._
_Previously on TwoBitHistory…_
> It's been too long, I know, but I finally got around to writing a new post. This one is about how REST APIs should really be known as FIOH APIs instead (Fuck It, Overload HTTP):<https://t.co/xjMZVZgsEz>
>
> — TwoBitHistory (@TwoBitHistory) [June 28, 2020][30]
1. “Hilton Hotel Opens in Capital Today.” _The New York Times_, 20 March 1965, <https://www.nytimes.com/1965/03/20/archives/hilton-hotel-opens-in-capital-today.html?searchResultPosition=1>. Accessed 7 Feb. 2021. [↩︎][31]
2. James Pelkey. _Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968-1988,_ Chapter 4, Section 12, 2007, <http://www.historyofcomputercommunications.info/Book/4/4.12-ICCC%20Demonstration71-72.html>. Accessed 7 Feb. 2021. [↩︎][32]
3. Katie Hafner and Matthew Lyon. _Where Wizards Stay Up Late: The Origins of the Internet_. New York, Simon & Schuster, 1996, p. 178. [↩︎][33]
4. “International Conference on Computer Communication.” _Computer_, vol. 5, no. 4, 1972, p. c2, <https://www.computer.org/csdl/magazine/co/1972/04/01641562/13rRUxNmPIA>. Accessed 7 Feb. 2021. [↩︎][34]
5. “Program for the International Conference on Computer Communication.” _The Papers of Clay T. Whitehead_, Box 42, <https://d3so5znv45ku4h.cloudfront.net/Box+042/013_Speech-International+Conference+on+Computer+Communications,+Washington,+DC,+October+24,+1972.pdf>. Accessed 7 Feb. 2021. [↩︎][35]
6. It's actually not clear to me which room was used for the ARPANET demonstration. Lots of sources talk about a “ballroom,” but the Washington Hilton seems to consider the room with the name “Georgetown” more of a meeting room. So perhaps the demonstration was in the International Ballroom instead. But RFC 372 alludes to a booking of the “Georgetown Ballroom” for the demonstration. A floorplan of the Washington Hilton can be found [here][36]. [↩︎][37]
7. Hafner, p. 179. [↩︎][38]
8. ibid., p. 178. [↩︎][39]
9. Bob Metcalfe. “Scenarios for Using the ARPANET.” _Collections-Computer History Museum_, <https://www.computerhistory.org/collections/catalog/102784024>. Accessed 7 Feb. 2021. [↩︎][40]
10. Hafner, p. 176. [↩︎][41]
11. Robert H. Thomas. “Planning for ACCAT Remote Site Operations.” BBN Report No. 3677, October 1977, <https://apps.dtic.mil/sti/pdfs/ADA046366.pdf>. Accessed 7 Feb. 2021. [↩︎][42]
12. Hafner, p. 12. [↩︎][43]
13. Joy Lisi Rankin. _A People's History of Computing in the United States_. Cambridge, MA, Harvard University Press, 2018, p. 84. [↩︎][44]
14. Rankin, p. 93. [↩︎][45]
15. Steve Crocker. Personal interview. 17 Dec. 2020. [↩︎][46]
16. Crocker sent me the minutes for this meeting. The document lists everyone who attended. [↩︎][47]
17. Steve Crocker. Personal interview. [↩︎][48]
18. Hafner, p. 146. [↩︎][49]
19. “Completion Report / A History of the ARPANET: The First Decade.” BBN Report No. 4799, April 1981, <https://walden-family.com/bbn/arpanet-completion-report.pdf>, p. II-13. [↩︎][50]
20. Im referring here to RFC 54, “Official Protocol Proffering.” [↩︎][51]
21. Hafner, p. 175. [↩︎][52]
22. “Completion Report / A History of the ARPANET: The First Decade,” p. II-29. [↩︎][53]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2021/02/07/arpanet.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[aREversez](https://github.com/aREversez)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/ARPANET
[2]: tmp.pnPpRrCI3S#fn:1
[3]: tmp.pnPpRrCI3S#fn:2
[4]: tmp.pnPpRrCI3S#fn:3
[5]: tmp.pnPpRrCI3S#fn:4
[6]: tmp.pnPpRrCI3S#fn:5
[7]: tmp.pnPpRrCI3S#fn:6
[8]: tmp.pnPpRrCI3S#fn:7
[9]: tmp.pnPpRrCI3S#fn:8
[10]: https://archive.computerhistory.org/resources/access/text/2019/07/102784024-05-001-acc.pdf
[11]: tmp.pnPpRrCI3S#fn:9
[12]: tmp.pnPpRrCI3S#fn:10
[13]: tmp.pnPpRrCI3S#fn:11
[14]: tmp.pnPpRrCI3S#fn:12
[15]: tmp.pnPpRrCI3S#fn:13
[16]: tmp.pnPpRrCI3S#fn:14
[17]: https://twobithistory.org/2018/05/27/semantic-web.html
[18]: https://twobithistory.org/2018/12/18/rss.html
[19]: https://twobithistory.org/2020/01/05/foaf.html
[20]: tmp.pnPpRrCI3S#fn:15
[21]: tmp.pnPpRrCI3S#fn:16
[22]: tmp.pnPpRrCI3S#fn:17
[23]: tmp.pnPpRrCI3S#fn:18
[24]: tmp.pnPpRrCI3S#fn:19
[25]: tmp.pnPpRrCI3S#fn:20
[26]: tmp.pnPpRrCI3S#fn:21
[27]: tmp.pnPpRrCI3S#fn:22
[28]: https://twitter.com/TwoBitHistory
[29]: https://twobithistory.org/feed.xml
[30]: https://twitter.com/TwoBitHistory/status/1277259930555363329?ref_src=twsrc%5Etfw
[31]: tmp.pnPpRrCI3S#fnref:1
[32]: tmp.pnPpRrCI3S#fnref:2
[33]: tmp.pnPpRrCI3S#fnref:3
[34]: tmp.pnPpRrCI3S#fnref:4
[35]: tmp.pnPpRrCI3S#fnref:5
[36]: https://www3.hilton.com/resources/media/hi/DCAWHHH/en_US/pdf/DCAWH.Floorplans.Apr25.pdf
[37]: tmp.pnPpRrCI3S#fnref:6
[38]: tmp.pnPpRrCI3S#fnref:7
[39]: tmp.pnPpRrCI3S#fnref:8
[40]: tmp.pnPpRrCI3S#fnref:9
[41]: tmp.pnPpRrCI3S#fnref:10
[42]: tmp.pnPpRrCI3S#fnref:11
[43]: tmp.pnPpRrCI3S#fnref:12
[44]: tmp.pnPpRrCI3S#fnref:13
[45]: tmp.pnPpRrCI3S#fnref:14
[46]: tmp.pnPpRrCI3S#fnref:15
[47]: tmp.pnPpRrCI3S#fnref:16
[48]: tmp.pnPpRrCI3S#fnref:17
[49]: tmp.pnPpRrCI3S#fnref:18
[50]: tmp.pnPpRrCI3S#fnref:19
[51]: tmp.pnPpRrCI3S#fnref:20
[52]: tmp.pnPpRrCI3S#fnref:21
[53]: tmp.pnPpRrCI3S#fnref:22


@@ -1,67 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Veryzzj)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How I prioritize tasks on my to-do list)
[#]: via: (https://opensource.com/article/21/1/prioritize-tasks)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
How I prioritize tasks on my to-do list
======
Use the Eisenhower Matrix to better organize and prioritize your to-do lists.
![Team checklist and to dos][1]
In prior years, this annual series covered individual apps. This year, we are looking at all-in-one solutions in addition to strategies to help in 2021. Welcome to day 4 of 21 Days of Productivity in 2021. In this article, I'll examine a strategy for prioritizing tasks on a to-do list. To find the open source tool that suits your routine, check out [this list][2]. 
It is easy to add things to a task or to-do list. Almost too easy, really. And once on the list, the challenge becomes figuring out what to do first. Do we do the thing at the top of the list? Is the top of the list the most important thing? How do we figure out what the most important thing is?
![To-do list][3]
To-do. Today? Tomorrow? Who knows? (Kevin Sonney, [CC BY-SA 4.0][4])
[Much like email][5], we can prioritize tasks based on a couple of things, and this lets us figure out what needs to be done first and what can wait until later.
I use a method commonly known as the _Eisenhower Matrix_ since it was taken from a quote by US President Dwight D. Eisenhower. Draw a box split horizontally and vertically. Label the columns "Urgent" and "Not Urgent," and the rows "Important" and "Not Important."
![Eisenhower matrix][6]
An Eisenhower Matrix. (Kevin Sonney, [CC BY-SA 4.0][4])
You can typically put all of the things on a to-do list in one of the boxes. But how do we know what to put where? Urgency and importance are often subjective. So the first step is to decide what is important to you. My family (including my pets), my work, and my hobbies are all important. If something on my to-do list isn't related to those three things, I can immediately put it into the "Not Important" row.
Urgency is a little more cut and dry. Is it something that needs to be done today or tomorrow? Then it is probably "Urgent." Does it have a deadline that is approaching, but there are days/weeks/months until that time, or perhaps it doesn't have a deadline at all? Certainly "Not Urgent."
Now we can translate the boxes into priorities. "Urgent/Important" is the highest priority (i.e., Priority 1) and needs to be done first. "Not Urgent/Important" comes next (Priority 2), then "Urgent/Not Important" (Priority 3), and finally "Not Urgent/Not Important" (Priority 4 or no priority at all).
Notice that "Urgent/Not Important" is third, not second. This is because people spend a lot of time on things that are important because they are urgent, not because they are _actually_ important. When I look at these, I ask myself some questions. Are these tasks that need to be done by me specifically? Are these tasks I can ask other people to do? Are they important and urgent for someone else? Does that change their importance for me? Maybe they need to be re-classified, or perhaps they are things I can ask someone else to do and remove them from my list.
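The quadrant-to-priority mapping described above is mechanical enough to write down as code. Here is a small Python sketch of it (the task names are made up for illustration):

```python
def eisenhower_priority(urgent, important):
    """Map an (urgent, important) pair to a priority, where 1 is done first."""
    if urgent and important:
        return 1
    if important:   # important but not urgent
        return 2
    if urgent:      # urgent but not important: delegate or re-classify?
        return 3
    return 4        # neither: a candidate for dropping from the list entirely

tasks = [
    ("Finish tax filing", True, True),
    ("Plan next quarter's project", False, True),
    ("Reply to newsletter survey", True, False),
    ("Reorganize bookmarks", False, False),
]

# Sort the to-do list so the highest-priority quadrant comes first.
for name, urgent, important in sorted(
        tasks, key=lambda t: eisenhower_priority(t[1], t[2])):
    print(eisenhower_priority(urgent, important), name)
```

The sort key is the whole trick: once each task carries its quadrant, "what do I work on now" is just the head of the sorted list.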
![After prioritizing][7]
After prioritizing. (Kevin Sonney, [CC BY-SA 4.0][4])
There is a single question to ask about the items in the "Not Urgent/Not Important" box, and that is "Do these need to even be on my list at all?" In all honesty, we often clog up our to-do lists with things that are not urgent or important, and we can remove them from our list altogether. I know it is hard to admit that "This is never going to get done," but after I accept that, it is a relief to take that item off my list and not worry about it anymore.
After all that, it is pretty easy to look at my list and say, "This is what I need to work on now," and get it done, which is what my to-do list is for: Providing guidance and focus to my day.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/prioritize-tasks
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
[2]: https://opensource.com/article/20/5/alternatives-list
[3]: https://opensource.com/sites/default/files/pictures/to-do-list.png (To-do list)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/article/21/1/email-rules
[6]: https://opensource.com/sites/default/files/pictures/eisenhower-matrix.png (Eisenhower matrix)
[7]: https://opensource.com/sites/default/files/pictures/after-prioritizing.png (After prioritizing)


@ -1,66 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (MareDevi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How open source builds distributed trust)
[#]: via: (https://opensource.com/article/21/1/open-source-distributed-trust)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
How open source builds distributed trust
======
Trust in open source is a positive feedback loop.
![Trust][1]
This is an edited excerpt from my forthcoming book on _Trust in Computing and the Cloud_ for [Wiley][2] and leads on from a previous article I wrote called [_Trust & choosing open source_][3].
In that article, I asked the question: What are we doing when we say, "I trust open source software"? In reply, I suggested that what we are doing is making a determination that enough of the people who have written and tested it have similar requirements to mine, and that their expertise, combined, is such that the risk to my using the software is acceptable. I also introduced the idea of _distributed trust_.
The concept of distributing trust across a community is an application of the _wisdom of the crowd_ theory posited by Aristotle, where the assumption is that the opinions of many typically show more wisdom than the opinion of one or a few. While demonstrably false in its simplest form in some situations—the most obvious being popular support for totalitarian regimes—this principle can provide a very effective mechanism for establishing certain information.
This distillation of collective experience allows what we refer to as _distributed trust_ and is collected through numerous mechanisms on the internet. Some, like TripAdvisor or Glassdoor, record information about organisations or the services they provide, while others, like UrbanSitter or LinkedIn, allow users to add information about specific people (see, for instance, LinkedIn's Recommendations and Skills & Endorsements sections in individuals' profiles). The benefits that can accrue from these examples are significantly increased by the network effect, as the number of possible connections between members increases exponentially as the number of members increases.
Other examples of distributed trust include platforms like Twitter, where the number of followers that an account receives can be seen as a measure of its reputation and even of its trustworthiness, a calculation which we should view with a strong degree of scepticism. Indeed, Twitter felt that it had to address the social power of accounts with large numbers of followers and instituted a "verified accounts" mechanism to let people know that "an account of public interest is authentic." Interestingly, the company had to suspend the service after problems related to users' expectations of exactly what "verified" meant or implied: a classic case of differing understanding of context between different groups.
Where is the relevance to open source, then? The community aspect of open source is actually a driver towards building distributed trust. This is because, once you become a part of the community around an open source project, you assume one or more of the roles that you start trusting once you say that you "trust" an open source project (see my previous article). Examples include architect, designer, developer, reviewer, technical writer, tester, deployer, bug reporter, or bug fixer. The more involvement you have in a project, the more you become part of the community, which can, in time, become a _community of practice_.
Jean Lave and Etienne Wenger introduced the concept of communities of practice in the book _[Situated Learning: Legitimate Peripheral Participation][4]_, where groups evolve into communities as their members share a passion and participate in shared activities, leading to improving their skills and knowledge together. The core concept here is that as participants learn _around_ a community of practice, they become members of it at the same time:
> "Legitimate peripheral participation refers both to the development of knowledgeably skilled identities in practice and to the reproduction and transformation of communities of practice."
Wenger further explored the concept of communities of practice, how they form, requirements for their health, and how they encourage learning in _[Communities of Practice: Learning, Meaning, and Identity][5]_. He identified _negotiability of meaning_ ("why are we working together, what are we trying to achieve?") as core to a community of practice and noted that without _engagement_, _imagination_, and _alignment_ by individuals, communities of practice will not be robust.
We can align this with our views of how distributed trust is established and built: when you realise that your impact on open source can be equal to that of others, the distributed trust relationships that you hold to members of a community become less transitive (second- or third-hand or even more remote) and more immediate. You understand that the impact you can have on the creation, maintenance, requirements, and quality of the software you are running can be the same as all of the other, previously anonymous contributors with whom you are now forming a community of practice or whose existing community of practice you are joining. Then you become part of a network of trust relationships that are distributed but less removed from what you experience when buying and operating proprietary software.
The process does not stop there, as a common property of open source projects is cross-pollination, where developers from one project also work on others. This increases as the network effect of multiple open source projects allows reuse and dependencies on other projects to rise, leading to greater take-up across the entire set of projects.
It is easy to see why many open source contributors become open source enthusiasts or evangelists, not just for a single project but for open source as a whole. In fact, work by Stanford sociologist [Mark Granovetter][6] suggests that too many strong ties within communities can lead to cliques and stagnation, but weak ties provide movement of ideas and trends around communities. This awareness of other projects and the communities that exist around them and the flexibility of ideas across projects leads to distributed trust being able to be extended (albeit with weaker assurances) beyond the direct or short-chain indirect relationships that contributors experience within projects where they have immediate experience and out towards other projects where external observation or peripheral involvement shows that similar relationships exist between contributors.
Put simply, the act of being involved in an open source project and building trust relationships through participation leads to stronger distributed trust towards similar open source projects or just to other projects that are similarly open source.
What does this mean for each of us? It means that the more we get involved in open source, the more trust we can have in open source, as there will be a corresponding growth in the involvement—and therefore trust—of other people in open source. Trust in open source isn't just a network effect: it's a positive feedback loop!
* * *
_This article was originally published on [Alice, Eve, and Bob][7] and is reprinted with the author's permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/open-source-distributed-trust
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_trust.png?itok=KMfi0Rdo (Trust)
[2]: https://wiley.com/
[3]: https://aliceevebob.com/2019/06/18/trust-choosing-open-source/
[4]: https://books.google.com/books/about/Situated_Learning.html?id=CAVIOrW3vYAC
[5]: https://books.google.com/books?id=Jb8mAAAAQBAJ&dq=Communities%20of%20Practice:%20Learning,%20meaning%20and%20identity&lr=
[6]: https://en.wikipedia.org/wiki/Mark_Granovetter
[7]: https://aliceevebob.com/2020/11/17/how-open-source-builds-distributed-trust/


@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/22/9/protect-home-network"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "PeterPan0106"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
@ -65,7 +65,7 @@ via: https://opensource.com/article/22/9/protect-home-network
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
译者:[PeterPan0106](https://github.com/PeterPan0106)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,180 +0,0 @@
[#]: subject: "3 ways to use the Linux inxi command"
[#]: via: "https://opensource.com/article/22/9/linux-inxi-command"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
3 ways to use the Linux inxi command
======
I use inxi on Linux to check my laptop battery, CPU information, and even the weather.
![Coding on a computer][1]
I was looking for information about the health of my laptop battery when I stumbled upon `inxi`. It's a command line system information tool that provides a wealth of information about your Linux computer, whether it's a laptop, desktop, or server.
The `inxi` command is [licensed][2] with the GPLv3, and many Linux distributions include it. According to its Git repository: "inxi strives to support the widest range of operating systems and hardware, from the most simple consumer desktops, to the most advanced professional hardware and servers."
Documentation is robust, and the project maintains a complete [man page][3] online. Once installed, you can access the man page on your system with the `man inxi` command.
### Install inxi on Linux
Generally, you can install `inxi` from your distribution's software repository or app center. For example, on Fedora, CentOS, Mageia, or similar:
```
$ sudo dnf install inxi
```
On Debian, Elementary, Linux Mint, or similar:
```
$ sudo apt install inxi
```
You can find more information about installation options for your Linux distribution [here][4].
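Because the package is simply named `inxi` across distributions, a small dispatch helper can print the right install command for whichever package manager is at hand. This is a hedged sketch: the helper name and the exact package-manager list are my own assumptions, not part of inxi's documentation:

```shell
# Print the inxi install command for a given package manager.
inxi_install_cmd() {
  case "$1" in
    dnf|yum) echo "sudo $1 install inxi" ;;
    apt)     echo "sudo apt install inxi" ;;
    pacman)  echo "sudo pacman -S inxi" ;;
    zypper)  echo "sudo zypper install inxi" ;;
    *)       echo "unsupported package manager: $1" >&2; return 1 ;;
  esac
}

inxi_install_cmd apt   # → sudo apt install inxi
```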
### 3 ways to use inxi on Linux
Once you install `inxi`, you can explore all its options. There are numerous options to help you learn more about your system. The most fundamental command provides a basic overview of your system:
```
$ inxi -b
System:
  Host: pop-os Kernel: 5.19.0-76051900-generic x86_64 bits: 64
        Desktop: GNOME 42.3.1 Distro: Pop!_OS 22.04 LTS
Machine:
  Type: Laptop System: HP product: Dev One Notebook PC v: N/A
        serial: <superuser required>
  Mobo: HP model: 8A78 v: KBC Version 01.03 serial: <superuser required>
        UEFI: Insyde v: F.05 date: 06/14/2022
Battery:
  ID-1: BATT charge: 50.6 Wh (96.9%) condition: 52.2/53.2 Wh (98.0%)
CPU:
  Info: 8-core AMD Ryzen 7 PRO 5850U with Radeon Graphics [MT MCP]
        speed (MHz): avg: 915 min/max: 400/4507
Graphics:
  Device-1: AMD Cezanne driver: amdgpu v: kernel
  Device-2: Quanta HP HD Camera type: USB driver: uvcvideo
  Display: x11 server: X.Org v: 1.21.1.3 driver: X: loaded: amdgpu,ati
        unloaded: fbdev,modesetting,radeon,vesa gpu: amdgpu
        resolution: 1920x1080~60Hz
  OpenGL:
        renderer: AMD RENOIR (LLVM 13.0.1 DRM 3.47 5.19.0-76051900-generic)
        v: 4.6 Mesa 22.0.5
Network:
  Device-1: Realtek RTL8822CE 802.11ac PCIe Wireless Network Adapter
        driver: rtw_8822ce
Drives:
  Local Storage: total: 953.87 GiB used: 75.44 GiB (7.9%)
Info:
  Processes: 347 Uptime: 15m Memory: 14.96 GiB used: 2.91 GiB (19.4%)
  Shell: Bash inxi: 3.3.13
```
### 1. Display battery status
You can check your battery health using the `-B` option. The result shows the system battery ID, charge condition, and other information:
```
$ inxi -B
Battery:
ID-1: BATT charge: 44.3 Wh (85.2%) condition: 52.0/53.2 Wh (97.7%)
```
### 2. Display CPU info
Find out more information about the CPU with the `-C` option:
```
$ inxi -C
CPU:
Info: 8-core model: AMD Ryzen 7 PRO 5850U with Radeon Graphics bits: 64
type: MT MCP cache: L2: 4 MiB
Speed (MHz): avg: 400 min/max: 400/4507 cores: 1: 400 2: 400 3: 400
4: 400 5: 400 6: 400 7: 400 8: 400 9: 400 10: 400 11: 400 12: 400 13: 400
14: 400 15: 400 16: 400
```
The output of `inxi` uses colored text by default. You can change that to improve readability, as needed, by using the "color switch."
The command option is `-c` followed by any number between 0 and 42 to suit your tastes.
```
$ inxi -c 42
```
Here is an example of a couple of different options using color 5 and then 7:
![inxi -c 5 command][5]
The software can show hardware temperature, fan speed, and other information about your system using the sensors in your Linux system. Enter `inxi -s` and read the result below:
![inxi -s][6]
### 3. Combine options
You can combine options for `inxi` to get complex output when supported. For example, `inxi -S` provides system information, and `-v` provides verbose output. Combining the two gives the following:
```
$ inxi -S
System:
  Host: pop-os Kernel: 5.19.0-76051900-generic x86_64 bits: 64
        Desktop: GNOME 42.3.1 Distro: Pop!_OS 22.04 LTS
$ inxi -Sv
CPU: 8-core AMD Ryzen 7 PRO 5850U with Radeon Graphics (-MT MCP-)
speed/min/max: 634/400/4507 MHz Kernel: 5.19.0-76051900-generic x86_64
Up: 20m Mem: 3084.2/15318.5 MiB (20.1%) Storage: 953.87 GiB (7.9% used)
Procs: 346 Shell: Bash inxi: 3.3.13
```
### Bonus: Check the weather
Your computer isn't all `inxi` can gather information about. With the `-w` option, you can also get weather information for your locale:
```
$ inxi -w
Weather:
  Report: temperature: 14 C (57 F) conditions: Clear sky
  Locale: Wellington, G2, NZL
        current time: Tue 30 Aug 2022 16:28:14 (Pacific/Auckland)
        Source: WeatherBit.io
```
You can get weather information for other areas of the world by specifying the city and country you want along with `-W`:
```
$ inxi -W rome,italy
Weather:
  Report: temperature: 20 C (68 F) conditions: Clear sky
  Locale: Rome, Italy current time: Tue 30 Aug 2022 06:29:52
        Source: WeatherBit.io
```
### Wrap up
There are many great tools to gather information about your computer. I use different ones depending on the machine, the desktop, or my mood. What are your favorite system information tools?
Image by: (Don Watkins, CC BY-SA 4.0)
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/linux-inxi-command
作者:[Don Watkins][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/code_computer_laptop_hack_work.png
[2]: https://github.com/smxi/inxi/blob/master/LICENSE.txt
[3]: https://smxi.org/docs/inxi-man.htm
[4]: https://smxi.org/docs/inxi-installation.htm#inxi-repo-install
[5]: https://opensource.com/sites/default/files/2022-09/inxi-c5.png
[6]: https://opensource.com/sites/default/files/2022-09/inxi-s.png


@ -1,106 +0,0 @@
[#]: subject: "Atoms is a GUI Tool to Let You Manage Linux Chroot Environments Easily"
[#]: via: "https://itsfoss.com/atoms-chroot-tool/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Atoms is a GUI Tool to Let You Manage Linux Chroot Environments Easily
======
A chroot environment provides you with isolation for testing in Linux. You do not need to take the hassle of creating a virtual machine. Instead, if you want to test an application or something else, create a chroot environment that allows you to select a different root directory.
So, with chroot, you get to test stuff without giving the application access to the rest of the system. Any application you install or anything you try gets confined to that directory and does not affect the functioning of your operating system.
Chroot has its perks, which is why it is a convenient way to test things for various users (especially system administrators).
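At its core the mechanism is simple: you prepare a directory tree and ask the kernel to treat it as the process's root. The sketch below only stages such a tree; the paths are illustrative, entering the chroot requires root privileges, and a real environment would also need the shell's shared libraries (or a statically linked shell) copied in:

```shell
# Stage a minimal root directory (illustrative layout).
mkdir -p /tmp/testroot/bin /tmp/testroot/lib
cp /bin/sh /tmp/testroot/bin/

# Entering it needs root; inside, /tmp/testroot appears as "/":
# sudo chroot /tmp/testroot /bin/sh
echo "new root staged at /tmp/testroot"
```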
Unfortunately, all of this works via the Linux terminal. What if you could have a graphical user interface to make things a little easier? That's where “**Atoms**” comes in.
### Atoms: A GUI to Manage Linux Chroot(s)
![atoms][1]
Atoms is a GUI tool that makes it convenient to create and manage Linux chroot environments.
It also supports integration with [Distrobox][2]. So, you can also manage containers using Atoms.
However, the developers mention that this tool does not offer seamless integration with Podman, explaining: *“its purpose is only to allow the user to open a shell in a new environment, be it chroot or container.”*
If you are looking for such a thing, you might want to check out [pods][3].
### Features of Atoms
![atoms options][4]
Atoms is a straightforward GUI program that lets you create chroot environments for several supported Linux distributions.
Let me highlight its key features, along with the supported distros:
* Browse files for the chroot(s) created.
* Ability to choose mount points to expose.
* Access to the console.
* Supported Linux distros include Ubuntu, Alpine Linux, Fedora, Rocky Linux, Gentoo, AlmaLinux, OpenSUSE, Debian, and CentOS.
It is incredibly easy to use. Creating an atom from within the app is a one-click process.
All you have to do is name the atom, and select the Linux distribution from the list of available options (Ubuntu as the selection in the screenshot above). It downloads the image and sets up the chroot environment for you in a few minutes as shown below.
![atom config][5]
Once it's done, you can access the options to launch the console to manage the chroot environment or customize/delete it.
![atoms option][6]
To access the console, head to the other tab menu. It is a pretty seamless experience and works well, at least for the Ubuntu environment that I tested.
![atoms console][7]
Additionally, you can detach the console to access it as a separate window.
![atoms detach console][8]
### Installing Atoms on Linux
You can install Atoms on any Linux distribution with the Flatpak package available on [Flathub][9]. Follow our [Flatpak guide][10] if you are new to Linux.
**Note:** The latest stable version **1.0.2** is only available via Flathub.
To explore its source code and other details, head to its [GitHub page][11].
### Conclusion
The Linux command line is powerful and you can do almost anything with the commands. But not everyone feels comfortable with it and thus tools like Atoms make it more convenient by providing a GUI.
And Atoms is not the only tool of this kind. There is [Grub Customizer][12], which makes it easier to change the [Grub][13] configuration that would otherwise be edited from the command line.
I believe there are many more such tools out there.
*What do you think about using a GUI program like Atoms to manage chroot environments? Share your thoughts in the comments down below.*
--------------------------------------------------------------------------------
via: https://itsfoss.com/atoms-chroot-tool/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/09/atoms.png
[2]: https://itsfoss.com/distrobox/
[3]: https://github.com/marhkb/pods
[4]: https://itsfoss.com/wp-content/uploads/2022/09/atoms-options.png
[5]: https://itsfoss.com/wp-content/uploads/2022/09/atom-config.png
[6]: https://itsfoss.com/wp-content/uploads/2022/09/atoms-option.png
[7]: https://itsfoss.com/wp-content/uploads/2022/09/atoms-console.png
[8]: https://itsfoss.com/wp-content/uploads/2022/09/atoms-detach-console.png
[9]: https://flathub.org/apps/details/pm.mirko.Atoms
[10]: https://itsfoss.com/flatpak-guide/
[11]: https://github.com/AtomsDevs/Atoms
[12]: https://itsfoss.com/grub-customizer-ubuntu/
[13]: https://itsfoss.com/what-is-grub/


@ -1,111 +0,0 @@
[#]: subject: "How to Access Android Devices Internal Storage and SD Card in Ubuntu, Linux Mint using Media Transfer Protocol (MTP)"
[#]: via: "https://www.debugpoint.com/how-to-access-android-devices-internal-storage-and-sd-card-in-ubuntu-linux-mint-using-media-transfer-protocol-mtp/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Access Android Devices Internal Storage and SD Card in Ubuntu, Linux Mint using Media Transfer Protocol (MTP)
======
This tutorial will show how to access android devices using MTP in Ubuntu and how to access SD card contents.
MTP, or [media transfer protocol][1], is an extension of the Picture Transfer Protocol and was implemented in the Android Marshmallow release. After the Marshmallow update, you can't use Android devices as typical mass storage devices that you simply plug in to see the internal storage and SD card contents in a file manager such as Thunar or GNOME Files. This is because the OS is unable to detect MTP devices on its own, and a list of supported devices is not yet implemented.
### Steps to access Android Devices in Ubuntu, Linux Mint
* Install [libmtp][2] and [mtpfs][3] (a FUSE file system for MTP-enabled devices) using the commands below:
```
sudo apt install go-mtpfs
sudo apt install libmtp
sudo apt install mtpfs mtp-tools
```
* Create a directory in `/media` using the commands below, change its permissions to allow writing, and mount the device:
```
sudo mkdir /media/MTPdevice
sudo chmod 775 /media/MTPdevice
sudo mtpfs -o allow_other /media/MTPdevice
```
```
* Plug in your Android device using a USB cable in Ubuntu.
* On your Android device, swipe down from the top of the home screen and tap the USB notification for more options.
* In the following menu, select the option “Transfer File (MTP)“.
![MTP Option1][4]
![MTP Option2][5]
* Run the command below in the terminal to find out the device ID. You can see the VID and PID for your device in the command output. Note down these two numbers (highlighted in the image below).
```
mtp-detect
```
![mtp-detect Command Output][6]
* Open the Android rules file in a text editor using the command below.
```
sudo gedit /etc/udev/rules.d/51-android.rules
```
* If you are using the latest Ubuntu, where gedit is not installed, use the command below.
```
sudo gnome-text-editor /etc/udev/rules.d/51-android.rules
```
* Add the line below to the `51-android.rules` file, substituting your device's VID and PID (which you noted down in the step above). Then save and close the file.
```
SUBSYSTEM=="usb", ATTR{idVendor}=="22b8", ATTR{idProduct}=="2e82", MODE="0666"
```
* Run the command below to restart the device manager via [systemd][7].
```
sudo service udev restart
```
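To avoid typos when writing the rule line, you can generate it from the VID:PID pair reported by `mtp-detect`. The helper name below is my own invention; the rule syntax itself is standard udev:

```shell
# Print a udev rule line for a "VID:PID" pair, e.g. 22b8:2e82.
make_android_rule() {
  vid=${1%%:*}
  pid=${1##*:}
  printf 'SUBSYSTEM=="usb", ATTR{idVendor}=="%s", ATTR{idProduct}=="%s", MODE="0666"\n' "$vid" "$pid"
}

make_android_rule 22b8:2e82
# → SUBSYSTEM=="usb", ATTR{idVendor}=="22b8", ATTR{idProduct}=="2e82", MODE="0666"
```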
### Next steps to access contents
* The next steps are mainly needed to access the contents of the external SD card memory of your android device.
* I had to do this because the file manager was NOT showing the contents of the SD card. This is not a proper solution but a workaround that works for most users, as per this [Google forum post][8]; it worked for my Motorola G 2nd Gen with a SanDisk SD card.
* Safely remove your connected device in Ubuntu.
* Turn off the device and remove the SD card from it.
* Turn on the device without the SD card.
* Turn off the device again.
* Put the SD card back in and turn on the device again.
* Reboot your Ubuntu machine and plug in your android device.
* Now you can see the contents of your android devices internal storage and the SD card contents.
![MTP Device Contents in Ubuntu][9]
### Conclusion
The above tutorial to access Android device contents in Ubuntu worked on both older and newer Ubuntu releases with Android devices from Samsung, OnePlus, and Motorola. If you are facing difficulties accessing your device's contents, try these steps. In my opinion, MTP is very slow compared to the good old plug-and-play option.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/how-to-access-android-devices-internal-storage-and-sd-card-in-ubuntu-linux-mint-using-media-transfer-protocol-mtp/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://en.wikipedia.org/wiki/Media_Transfer_Protocol
[2]: https://sourceforge.net/projects/libmtp/
[3]: https://launchpad.net/ubuntu/+source/mtpfs
[4]: https://www.debugpoint.com/wp-content/uploads/2016/03/MTP-Option1.png
[5]: https://www.debugpoint.com/wp-content/uploads/2016/03/MTP-Option2.png
[6]: https://www.debugpoint.com/wp-content/uploads/2016/03/mtp-detect.png
[7]: https://www.debugpoint.com/systemd-systemctl-service/
[8]: https://productforums.google.com/forum/#!topic/nexus/11d21gbWyQo;context-place=topicsearchin/nexus/category$3Aconnecting-to-networks-and-devices%7Csort:relevance%7Cspell:false
[9]: https://www.debugpoint.com/wp-content/uploads/2016/03/MTP-Device-Contents-in-Ubuntu.png


@ -0,0 +1,169 @@
[#]: subject: "5 Git configurations I make on Linux"
[#]: via: "https://opensource.com/article/22/9/git-configuration-linux"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
5 Git configurations I make on Linux
======
This is a simple guide to quickly get started working with Git and a few of its many configuration options.
![Linux keys on the keyboard for a desktop computer][1]
Setting up Git on Linux is simple, but here are the five things I do to get the perfect configuration:
1. [Create global configuration][2]
2. [Set default name][3]
3. [Set default email address][4]
4. [Set default branch name][5]
5. [Set default editor][6]
I manage my code, shell scripts, and documentation versioning using Git. This means that for each new project I start, the first step is to create a directory for its content and make it into a Git repository:
```
$ mkdir newproject
$ cd newproject
$ git init
```
There are certain general settings that I always want. Not many, but enough that I don't want to have to repeat the configuration each time. I like to take advantage of the *global* configuration capability of Git.
Git offers the `git config` command for manual configuration, but this is a lot of work and comes with certain caveats. For example, a common item to set is your email address. You can set it by running `git config user.email` followed by your email address. However, this will only take effect if you are in an existing Git directory:
```
$ git config user.email alan@opensource.com
fatal: not in a git directory
```
Plus, when this command is run within a Git repository, it only configures that specific one. The process must be repeated for new repositories. I can avoid this repetition by setting it globally. The *--global* option instructs Git to write the email address to the global configuration file, `~/.gitconfig`, creating it if necessary:
> Remember, the tilde (~) character represents your home directory. In my case that is /home/alan.
```
$ git config --global user.email alan@opensource.com
$ cat ~/.gitconfig
[user]
        email = alan@opensource.com
```
The downside here is if you have a large list of preferred settings, you will have a lot of commands to enter. This is time-consuming and prone to human error. Git provides an even more efficient and convenient way to directly edit your global configuration file—that is the first item on my list!
### 1. Create global configuration
If you have just started using Git, you may not have this file at all. Not to worry, let's skip the searching and get started. Just use the `--edit` option:
```
$ git config --global --edit
```
If no file is found, Git will generate one with the following content and open it in your shell environment's default editor:
```
# This is Git's per-user configuration file.
[user]
# Please adapt and uncomment the following lines:
#       name = Alan
#       email = alan@hopper
~
~
~
"~/.gitconfig" 5L, 155B                                     1,1           All
```
Now that we have opened the editor and Git has created the global configuration file behind the scenes, we can continue with the rest of the settings.
### 2. Set default name
Name is the first directive in the file, so let's start with that. The command line to set mine is `git config --global user.name "Alan Formy-Duval"`. Instead of running this command, just edit the *name* directive in the configuration file:
```
name = Alan Formy-Duval
```
### 3. Set default email address
The email address is the second directive, so let's update it. By default, Git uses your system-provided name and email address. If this is incorrect or you prefer something different, you can specify it in the configuration file. In fact, if you have not configured them, Git will let you know with a friendly message the first time you commit:
```
Committer: Alan <alan@hopper>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate....
```
The command line to set mine is `git config --global user.email "alan@opensource.com"`. Instead, edit the *email* directive in the configuration file and provide your preferred address:
```
email = alan@opensource.com
```
The last two settings that I like to set are the default branch name and the default editor. These directives will need to be added while you are still in the editor.
### 4. Set default branch name
There is currently a trend to move away from the usage of the word *master* as the default branch name. As a matter of fact, Git will highlight this trend with a friendly message upon initialization of a new repository:
```
$ git init
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint:   git config --global init.defaultBranch <name>
```
This directive, named *defaultBranch*, needs to be located in a new section named *init*. Many coders now use the word *main* for their default branch, and that is what I like to use. Add this section, followed by the directive, to the configuration:
```
[init]
            defaultBranch = main
```
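To confirm the directive works, you can initialize a throwaway repository and check which branch you land on. A sketch, assuming Git 2.28 or later (where `init.defaultBranch` was introduced) and a temporary HOME so your real configuration is untouched:

```
#!/bin/sh
export HOME="$(mktemp -d)"
git config --global init.defaultBranch main
repo="$(mktemp -d)"
git init -q "$repo"
# Ask Git which branch HEAD points at in the fresh repository.
git -C "$repo" symbolic-ref --short HEAD   # prints: main
```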
### 5. Set default editor
The fifth setting that I like to set is the default editor. This refers to the editor that Git will present for typing your commit message each time you commit changes to your repository. Everyone has a preference whether it is [nano][8], [emacs][9], [vi][10], or something else. I'm happy with vi. So, to set your editor, add a *core* section that includes the *editor* directive:
```
[core]
            editor = vi
```
That's the last one. Exit the editor. Git saves the global configuration file in your home directory. If you run the editing command again, you will see all of the content. Notice that the configuration file is a plain text file, so it can also be viewed using text tools such as the [cat command][11]. This is how mine appears:
```
$ cat ~/.gitconfig
[user]
        email = alan@opensource.com
        name = Alan Formy-Duval
[core]
        editor = vi
[init]
        defaultBranch = main
```
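With the file in place, you can also ask Git which editor it will actually launch. Note that the `GIT_EDITOR`, `VISUAL`, and `EDITOR` environment variables take precedence over `core.editor`, so clear them for a clean check (shown here against a throwaway HOME):

```
#!/bin/sh
unset GIT_EDITOR VISUAL EDITOR
export HOME="$(mktemp -d)"
git config --global core.editor vi
# "git var" reports the editor Git resolves after all sources are considered.
git var GIT_EDITOR   # prints: vi
```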
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/git-configuration-linux
作者:[Alan Formy-Duval][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/linux_keyboard_desktop.png
[2]: https://opensource.com/article/22/9/git-configuration-linux#create-global-configuration
[3]: https://opensource.com/article/22/9/git-configuration-linux#set-default-name
[4]: https://opensource.com/article/22/9/git-configuration-linux#set-default-email-address
[5]: https://opensource.com/article/22/9/git-configuration-linux#set-default-branch-name
[6]: https://opensource.com/article/22/9/git-configuration-linux#set-default-editor
[7]: https://opensource.com/mailto:alan@opensource.com
[8]: https://opensource.com/article/20/12/gnu-nano
[9]: https://opensource.com/resources/what-emacs
[10]: https://opensource.com/article/19/3/getting-started-vim
[11]: https://opensource.com/article/19/2/getting-started-cat-command

[#]: subject: "Ansible Register Variable"
[#]: via: "https://ostechnix.com/ansible-register/"
[#]: author: "Karthick https://ostechnix.com/author/karthick/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ansible Register Variable
======
This guide explains **what the Ansible register is** and how to **capture a task's output in Ansible using register** variables in Linux.
#### Contents
1. Introduction To Ansible Register Variable
2. Register The Output
3. Accessing Individual Attributes
4. Decision Making Using Control Statement
5. Writing Register Output To A File
6. Iterate And Register
7. Conclusion
### Introduction To Ansible Register Variable
**Register** is used to capture the output of any task and store it in a variable. This variable can later be used along with print, loops, conditionals etc.
When you run a task in **[Ansible][1]**, the entire output of the task is not printed to stdout (the terminal). If you wish to see the output of a task, you have to store it in a register variable and print it later.
In Ansible, the **[debug module][2]** is used to print output to the terminal. You can also create a decision-based task by combining register attributes with a conditional statement (when).
In the upcoming sections, you will learn how to register the output, understand the structure of the registered output, print it to stdout, write it to a file, and use register together with the when statement.
### Register The Output
Let's see how to use `register` through a use case. On my Ubuntu managed node, I need to check whether the virtualenv package is installed. If the package is not installed, I will install it.
Create a new **[playbook][3]** and add the following play definition. Modify the play definition parameters as per your requirements.
```
- name: Test playbook - ansible register
hosts: ubuntu.anslab.com
gather_facts: False
become: True
```
I have the following task, which uses the shell module to run the `which virtualenv` command. The output of this task is captured in a register variable named **"virtualenv_output"**. You can name this variable whatever you wish.
In the second task, the register variable `virtualenv_output` will be printed using the debug module.
```
tasks:
- name: First Task - Using shell module to check if virtualenv is present or not
ansible.builtin.shell:
cmd: which virtualenv
register: virtualenv_output
ignore_errors: True
- name: Second Task - Print the full output
ansible.builtin.debug:
var: virtualenv_output
```
**Heads Up:** Using the **apt** module, you can simplify this task. But for demonstration purposes, I am using the **shell** module.
Check if there is any syntax error before submitting the playbook by running the following command.
```
$ ansible-playbook <playbook-file> --syntax-check
```
Run the following command to submit the playbook.
```
$ ansible-playbook <playbook-file>
```
In the output below, you can see the result of the first task, in JSON format. The exact output varies depending on the module and the task.
![Register The Output][4]
A few important attributes in the output will help you to write further logic in the playbook.
1. `failed` - returns "true" if the task failed and "false" if it executed successfully.
2. `rc` - the return code of the command submitted through the shell module.
3. `stderr`, `stderr_lines` - error messages.
4. `stdout`, `stdout_lines` - standard output messages.
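Outside of Ansible, the `rc` attribute is simply the exit status of the command the shell module ran. The same check in plain shell (a sketch, not part of the playbook):

```
#!/bin/sh
# The exit status of "which" is what Ansible registers as "rc".
which virtualenv >/dev/null 2>&1
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "virtualenv missing (rc=$rc) - the play would install python3-virtualenv"
else
    echo "virtualenv present (rc=$rc)"
fi
```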
### Accessing Individual Attributes
Take a look at the below task. Instead of printing the entire output of the register variable, I am just trying to print the return code.
```
- name: Just checking the exit code
ansible.builtin.debug:
msg: "{{ virtualenv_output.rc }}"
```
In the below output, you can see only the return code is printed instead of the full output.
![Print Only Return Code][5]
As an alternative to **dot** notation, you can also access the attributes using **Python dictionary** notation.
```
- name: Just checking the exit code - Python dict way
ansible.builtin.debug:
msg: "{{ virtualenv_output['rc'] }}"
```
### Decision Making Using Control Statement
Let's build further logic based on the return code from the first task. If the return code is 1, the task is considered failed, meaning the virtualenv package is not present on my managed node.
Take a look at the below task, where I am using the apt module to install the **"python3-virtualenv"** package. This task will only run if the return code of the register variable is not equal to zero. Again, I am registering this task's output to the variable called **"virtualenv_install_output"**.
```
- name: Install the package based on the return code
ansible.builtin.apt:
pkg: python3-virtualenv
state: present
when: virtualenv_output.rc != 0
register: virtualenv_install_output
```
### Writing Register Output To A File
Instead of using the debug module to print the output to stdout (the terminal), you can write the output to a file. Many modules can do this, but here I am using the **copy** module to store the content of the register variable in the file `virtualenv_output`.
```
- name: Reroute the output to a file
ansible.builtin.copy:
content: "{{virtualenv_install_output}}"
dest: "/home/vagrant/virtualenv_output"
```
![Writing Register Output To A File][6]
### Iterate And Register
Take a look at the below task, where I am using the **shell** module to remove some dummy files I created manually. Using a **loop**, I iterate through the list of files, passing each one as the argument to the shell module.
The result of each iteration is appended to the register variable **removed_output**.
```
- name: Using loops
ansible.builtin.shell:
cmd: rm -f "{{item}}"
register: removed_output
loop:
- test_file.txt
- abc.txt
- name: Print the removed output
ansible.builtin.debug:
msg:
- Return code for {{removed_output.results.0.item}} is {{removed_output.results.0.rc}}
- Return code for {{removed_output.results.1.item}} is {{removed_output.results.1.rc}}
```
![Iterate And Register][7]
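The same idea in plain shell: each iteration produces its own return code, just as each iteration appends its own entry to the registered `results` list (a sketch outside the playbook; note that `rm -f` exits 0 even when a file does not exist):

```
#!/bin/sh
for f in test_file.txt abc.txt; do
    rm -f "$f"
    echo "Return code for $f is $?"
done
```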
### Conclusion
In this article, we have discussed what the Ansible register variable is and how to register a task's output. We also discussed how to use the attributes from the register variable along with a conditional statement to create condition-based tasks. Finally, we have seen how the output is appended to the register variable when you run a task iteratively.
--------------------------------------------------------------------------------
via: https://ostechnix.com/ansible-register/
作者:[Karthick][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/karthick/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/introduction-to-ansible-automation-platform/
[2]: https://ostechnix.com/ansible-debug-module/
[3]: https://ostechnix.com/ansible-playbooks/
[4]: https://ostechnix.com/wp-content/uploads/2022/09/Register-The-Output.png
[5]: https://ostechnix.com/wp-content/uploads/2022/09/Print-Only-Return-Code.png
[6]: https://ostechnix.com/wp-content/uploads/2022/09/Writing-Register-Output-To-A-File.png
[7]: https://ostechnix.com/wp-content/uploads/2022/09/Iterate-And-Register.png

[#]: subject: "EuroLinux Desktop is an RHEL-based Distro With Enterprise Perks"
[#]: via: "https://news.itsfoss.com/eurolinux-desktop/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
EuroLinux Desktop is an RHEL-based Distro With Enterprise Perks
======
EuroLinux Desktop is an interesting RHEL-based distribution that offers stability, security, and a nice user interface. Let's take a look.
![EuroLinux Desktop is an RHEL-based Distro With Enterprise Perks][1]
EuroLinux Desktop is a Linux distro that aims to provide a well-rounded package for Windows and macOS users.
You might be aware that EuroLinux is a Polish company that has been providing server-based distributions meant to cater to enterprises since 2013.
But it looks like they are introducing a **Red Hat Enterprise Linux 9-based** desktop-focused distribution. 🤯
As per their [announcement][2]:
> EuroLinux Desktop is a modern operating system that combines the aesthetic and functionality of Windows and macOS with the reliability and security of Enterprise Linux distributions.
Sounds exciting! So, what do we know about it?
### What's Special?
EuroLinux Desktop is based on RHEL 9, which enables it to serve you with server-level stability and security alongside a decent desktop experience.
It worked fine as a virtual machine setup. You can test it on a spare computer before trying it out as a daily driver.
The RHEL base also gives it seamless compatibility with other [RHEL-based server distros][3] such as CentOS, Rocky Linux, AlmaLinux, and more.
#### Targets Windows And macOS users
![eurolinux desktop screenshot][4]
EuroLinux Desktop aims to lure in Windows and macOS users with a familiar user interface layout, thanks to its translucent dock at the bottom of the screen. It also provides users with various customization options to play with.
Multitasking is very simple, thanks to the thoughtful implementation of the GNOME window manager.
![eurolinux desktop multitasking window][5]
All of this is made possible by using the [GNOME desktop environment][6] as a solid foundation to build upon and a few tweaks by their team.
#### Extensive Media Codec Support
![eurolinux desktop media player][7]
EuroLinux Desktop also supports a vast number of media file formats, enabling you to play the most common audio and video files, such as .mp3, .mp4, .avi, .flac, and more.
You can use the included Totem video player to play any media files.
#### LibreOffice Suite
![eurolinux desktop libreoffice][8]
The complete LibreOffice suite is included with the distro, giving users a useful set of tools to make their work a bit easier.
#### GNOME Software Center
![eurolinux_desktop_software_center][11]
Of course, it comes equipped with the GNOME Software Center with the Flathub repository enabled, which should let you install any application you want.
It also supports AppImages. So, you should be fine for the most part.
### Who Is It Aimed At?
Being an RHEL-based desktop OS, it may not be easy to recommend this distro to every user.
For example, users familiar with Ubuntu might struggle with executing terminal commands for troubleshooting on EuroLinux.
But on the other hand, it might appeal to users who are coming from a Windows or a macOS system and want to try Linux for the first time.
Sure, it may not be one of the best Linux distributions for beginners yet. But, it should be a refreshing option as more users get to try it.
[Best Linux Distributions That are Most Suitable for Beginners][12]
It also has a lot of things to offer for use in public administration and educational institutions, such as a **10-year software life cycle**, an update management system, technical support, and more.
EuroLinux Desktop also has something for gamers: it supports software like Steam, Lutris, Wine, and more.
### Get Started With EuroLinux Desktop
The overall package seems adequate as it can cater to both Linux users and Windows/macOS users.
You can download the ISO file for EuroLinux Desktop from the button below.
To know more about the release, visit the [official product webpage][14].
😲 *The ISO file size is massive (6.8 GB). You're in for an adventure!*
[Download EuroLinux Desktop][15]
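Given the size of the download, it is worth verifying the ISO before writing it to a USB stick. A general sketch — comparing against a published checksum is an assumption here, so take the actual checksum value from the EuroLinux download page:

```
# Hash the downloaded ISO, then compare the printed hash against the
# checksum published on the download page.
sha256sum ELD-9-x86_64-latest.iso
```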
💬 *Will you be trying out EuroLinux Desktop? Are you willing to switch from Windows or macOS to an RHEL-based desktop distro?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/eurolinux-desktop/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/eurolinux-desktop.png
[2]: https://en.euro-linux.com/blog/eurolinux-desktop/
[3]: https://itsfoss.com/rhel-based-server-distributions/
[4]: https://news.itsfoss.com/content/images/2022/09/eurolinuxdesktop_home.jpg
[5]: https://news.itsfoss.com/content/images/2022/09/eurolinux_desktop_multitasking.png
[6]: https://www.gnome.org/
[7]: https://news.itsfoss.com/content/images/2022/09/eurolinuxdesktop_media.jpg
[8]: https://news.itsfoss.com/content/images/2022/09/eurolinuxdesktop_libre.jpg
[11]: https://news.itsfoss.com/content/images/2022/09/eurolinux_desktop_software_center.png
[12]: https://itsfoss.com/best-linux-beginners/
[14]: https://en.euro-linux.com/eurolinux/desktop/
[15]: https://dn.euro-linux.com/ELD-9-x86_64-latest.iso

[#]: subject: "How to Install and Use GNOME Boxes to Create Virtual Machines"
[#]: via: "https://www.debugpoint.com/install-use-gnome-boxes/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install and Use GNOME Boxes to Create Virtual Machines
======
This quick tutorial explains the steps to install and use GNOME Boxes and create virtual machines, with some tips and troubleshooting.
Virtualization is the process of running a virtual instance (rather than an actual one) with an abstracted layer of hardware. In popular terms, it allows you to install and run multiple operating systems (Linux, Windows) simultaneously.
A [virtual machine][1] is a simulated operating system that runs on top of another operating system and uses the same hardware and storage space as the host machine. However, you can control how much memory or storage space is allocated to the virtual machines.
There are multiple software available to create virtual machines, e.g. [Virtual Box][2], KVM, Hyper-V, VM Ware player, and GNOME Boxes.
But honestly, most of them are complex to use and sometimes not stable enough. [GNOME Boxes][3] is another free and open-source application that is very easy to use, making it simple to create and manage virtual machines by abstracting away many of the options.
### Install GNOME Boxes
If you are running Fedora with the GNOME spin, you should already have it installed. For Ubuntu, Linux Mint, Kubuntu, and other distributions, you can simply run the command below to install it on your system.
```
sudo apt install gnome-boxes
```
#### Via Flatpak
It is also available as a Flatpak package, and I recommend using this version. First, set up your system to use Flatpak using [this guide][4], and then run the following command from the terminal to install it.
```
flatpak install flathub org.gnome.Boxes
```
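Before launching Boxes, it is worth confirming that hardware virtualization is enabled on the host. A quick Linux-only check (`vmx` = Intel VT-x, `svm` = AMD-V; a count of 0 usually means the feature is absent or disabled in the firmware):

```
# Count the CPU flags that indicate hardware virtualization support.
grep -Ec '(vmx|svm)' /proc/cpuinfo
```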
### Create Virtual Machine using GNOME Boxes
* Launch GNOME Boxes from the application menu.
* To create a virtual machine, you need an image (*.ISO) of the operating system you want to virtualize.
You can download the ISO image of any operating system from its official download page. For this guide, I am using Pop!_OS, which is an excellent Linux distribution.
* After you launch, click on the “+” icon at the top to start and select “Create a virtual machine”.
![Create Virtual Machine][5]
In the next window, you can choose already available downloads, or you can select your iso file as OS source. Click on the “Operating system image file” and choose your iso file.
Assign the memory and storage space for your virtual machine. Remember, your virtual machine takes its memory and storage from the host system, so try not to assign the maximum.
For example, in the below image I have assigned 2GB memory for the virtual machine (guest) from the total 8GB memory of the host system.
Similarly, choose a minimal amount of storage if you just want to test an OS. But if you are creating a virtual machine for servers or serious work, be sensible about how much space and memory you assign.
Another important thing to remember is that the disk space you allocate is blocked permanently unless you delete the virtual machine. So you won't get that space back as free space, even if your virtual machine doesn't use the entire allocation.
![Allocate resources for your virtual machine][6]
Continue with the installation.
In the partition window, you should see one hard disk and partition, which is the virtual machine's disk space. Usually, it is named `/dev/vda` or `/dev/sda`.
Don't worry; you can play around with this partition without impacting your physical disk partitions or any data on your actual host system. Create the usual root (/) partition while installing Linux, and continue.
![Virtual machine partition][7]
After you complete the installation, you should see your new operating system in the virtual machine. In GNOME Boxes, you should see an entry for the system; click it once to boot your virtual machine.
You can power off the virtual machine using the guest operating system's own shutdown option.
If you want, you can also delete the virtual machine by choosing the context menu option.
![Context menu in installed virtual machine][8]
You can also check how much memory and CPU your virtual machine uses from the Properties window.
Note that you can adjust the memory and other items of your existing virtual machines using properties.
![System properties][9]
### Troubleshooting
Here are some of the common errors or issues which you may face while using GNOME Boxes.
##### 1. Resolution problems in virtual machines
If your virtual machine has a low resolution that is incompatible with your host system, you have to install the packages below. Open a terminal in the guest system (not the host system) and run the following commands.
**For Ubuntu-based distributions**
```
sudo apt install spice-vdagent spice-webdavd
```
**For Fedora**
```
sudo dnf install spice-vdagent spice-webdavd
```
These two packages help with determining the proper resolution, copying/pasting between host and guest, sharing files via public folders, and so on.
Once they are installed, reboot the guest system, or log out and log back in. After that, you should see the proper resolution.
##### 2. GNOME Boxes doesn't start a virtual machine on Ubuntu 18.04-based distributions
If you are creating a virtual machine in Boxes 3.34, you should know that there was a bug that caused the virtual machine to fail to start. To fix it, you have to follow some additional steps. These are not required for the later Boxes 3.36.
Open a terminal window and run the command below to change the qemu config file:
```
sudo gedit /etc/modprobe.d/qemu-system-x86.conf
```
Add the following line to that file and save it.
```
group=kvm
```
Now, run the command below to add your username to the `kvm` group.
```
sudo usermod -a -G kvm <your account name>
```
### Wrapping Up
In this article, you have seen how to install and use GNOME Boxes to take advantage of virtualization. I hope it helps you.
🗨️ If you face any errors or have any questions with virtual machines with GNOME Boxes, let me know using the comment box below.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/install-use-gnome-boxes/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.redhat.com/en/topics/virtualization/what-is-a-virtual-machine
[2]: https://www.debugpoint.com/tag/virtualbox/
[3]: https://wiki.gnome.org/Apps/Boxes
[4]: https://www.debugpoint.com/how-to-install-flatpak-apps-ubuntu-linux/
[5]: https://www.debugpoint.com/wp-content/uploads/2020/05/Create-Virtual-Machine.png
[6]: https://www.debugpoint.com/wp-content/uploads/2020/05/Allocate-resources-for-your-virtual-machine.png
[7]: https://www.debugpoint.com/wp-content/uploads/2020/05/Virtual-machine-partition.png
[8]: https://www.debugpoint.com/wp-content/uploads/2020/05/Context-menu-in-installed-virtual-machine.png
[9]: https://www.debugpoint.com/wp-content/uploads/2020/05/System-properties.png

[#]: subject: "11 Gorgeous KDE Plasma Themes to Make Your Linux Desktop Even More Beautiful"
[#]: via: "https://itsfoss.com/best-kde-plasma-themes/"
[#]: author: "sreenath https://itsfoss.com/author/sreenath/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
11 Gorgeous KDE Plasma Themes to Make Your Linux Desktop Even More Beautiful
======
One of the most powerful features of the [KDE Plasma desktop is its fantastic potential for customization][1].
Speaking of customization, changing the theme is perhaps its most common and most visual aspect.
Not that the default Breeze theme is bad looking. It's just that you can give it an entirely different look with a new theme and icon set.
![Default KDE Plasma Theme Breeze][2]
Let me help you with that. I'll share some beautiful KDE Plasma themes you can choose from. I'll also show the steps for installing the chosen themes later in this article.
### Best KDE Plasma Themes
Please note that this is not a ranking list. The theme at number 3 should not be considered better than the one at number 7 or 8.
### 1. Sweet
Sweet is one of the most popular KDE themes out there. This theme, only available in dark mode, gives an aesthetic look to your system.
![Sweet Theme][3]
It can be **installed through KDE system settings**. It also has a dedicated icon pack, called Candy Icons, which gets installed automatically if you install the theme through System Settings.
[Sweet Theme][4]
### 2. Materia KDE
Materia is another popular theme liked by many desktop users. It has a polished and elegant look and is available in three variants: Materia, Materia Light, and Materia Dark.
![Materia Dark][5]
Materia Light is a pure white theme, and Materia Dark offers a complete dark experience, while the base Materia theme blends the two.
This theme also **can be installed through KDE system settings**.
[Materia KDE][6]
### 3. Nordic
The Nordic theme has a separate fan base among dark theme lovers. It is built around the Nord color palette, which is both easy on the eyes and elegant to look at.
![Nordic KDE][7]
Created by the same [developer of Sweet theme][8], it can be **installed from KDE System Settings**.
[Nordic][9]
### 4. WhiteSur
WhiteSur, developed by Vinceliuice, is a theme aimed at macOS theme lovers. It achieves a close resemblance to the macOS appearance, which can be enhanced further with the KDE panel, Latte Dock, etc.
![WhiteSur][10]
It provides an icon pack, which adds more aesthetics to the look and feel. This popular theme also provides both dark and light variants.
[WhiteSur][11]
### 5. Layan
The Layan theme is available in both light and dark variants. It is one of those themes that provide rounded corners and a neat, polished look.
![Layan][12]
Layan uses Tela Circle icons and is **available to install from system settings**.
[Layan][13]
### 6. Qogir
Available in both light and dark variants, Qogir is a minimal theme, which can make your system look neat and cool.
![Qogir][14]
It has a very similar look to what appears on Budgie desktop. You can simply install the Qogir theme and its associated icon pack **from system settings**.
[Qogir][15]
### 7. Fluent Round
This theme can recreate the look and feel of the latest Windows 11 if you are a fan of that OS. Similarity aside, Fluent is a great theme in its own right, available in both light and dark variants.
![Fluent KDE Theme][16]
It provides a polished look to your system, together with a dedicated dark/light icon package.
[Fluent Round][17]
### 8. Orchis
Orchis is quite popular in GNOME GTK theming, and it is available for KDE as well. Orchis has both light and dark variants. If you are **installing it through system settings**, the Tela icon pack will also be installed, which you can change anytime from system settings.
![Orchis KDE Theme][18]
As in GNOME, this Material-inspired theme adds polish to your desktop.
[Orchis][19]
### 9. Iridescent Round
If you are a fan of cyberpunk themes or futuristic themes, this can be a good option. The default wallpaper, which gets **installed through system settings**, looks artistic and gives a nerd vibe to your desktop.
![Iridescent Round][20]
It can create a visual experience if used with some cool plasma widgets and icon sets.
[Iridescent Round][21]
### 10. Nova Papilio
A rounded light theme centered around the color purple. The theme is visually pleasing if you like light themes and extremely rounded corners.
![Nova Papilio][22]
The theme can be **installed from the system settings**.
[Nova Papilio][23]
### 11. WinSur Dark
As the name suggests, it has certain visual elements from Windows and macOS themes.
![Winsur Dark][24]
It has light and dark versions, and you can **find it in system settings**. The theme has rounded corners and quite a polished look. But from my personal experience, it can feel a bit congested on small displays.
[WinSur Dark][25]
#### Honorable Mentions
Listing themes, particularly for a desktop environment like KDE Plasma, is a difficult task because a huge number of themes are available. The above list gives a starting point to those who don't want to spend time browsing for good-looking themes.
Apart from this list, there are certain themes, like [Ant-Dark][26], [Aritim Dark][27], [Dracula][28], etc. that can also provide some nice visual experience to users.
### How to use these themes
There are a couple of methods to theme your KDE Plasma desktop, described briefly below. It is a bit different from [theming the GNOME desktop environment][29].
#### Install Theme from Settings
This is the most common and easiest method. Head over to KDE Settings, select Appearance, and click on Global Themes. Now you can search for themes from the button, as shown in the screenshots below.
![Download new global themes from KDE Plasma system settings][30]
You will get a comprehensive list of themes. Here, you can view the results with sort options. Once you find a theme, click on it and press Install.
![List of available themes in KDE plasma system settings][31]
In most cases, this will apply a corresponding plasma theme and icons.
#### Apply Theme From Downloaded Theme Files
In some cases, you may find an interesting theme on a website that is unavailable in the KDE Store. In this case, you need to download and extract the file. Once done, place the global theme folder of the downloaded theme in `~/.local/share/plasma/look-and-feel/` and the Plasma theme folder in `~/.local/share/plasma/desktoptheme/`.
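The two target folders may not exist on a fresh install. Creating them and dropping a downloaded theme in place looks roughly like this (the archive name here is only an example):

```
#!/bin/sh
# Make sure the Plasma theme folders exist, then extract the theme in place.
mkdir -p ~/.local/share/plasma/look-and-feel/
mkdir -p ~/.local/share/plasma/desktoptheme/
# e.g. for a downloaded global theme archive (name is illustrative):
# tar -xf Sweet-theme.tar.xz -C ~/.local/share/plasma/look-and-feel/
```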
![KDE Plasma themes folder in file manager][32]
Now head over to the settings and you will find the theme listed under the Appearance section.
#### Install Theme through Package Managers
This is a limited option. Some themes have found their way into the official repositories of your distribution. You can search for these themes and install them with your package manager. For example, on Ubuntu, you can install the Materia KDE theme by running:
```
sudo apt install materia-kde
```
As said above, only a limited number of themes are available this way, and the selection varies with distributions. **Once installed, you can change the theme from System Settings > Appearance**.
### Wrapping Up
So, I listed some of my favorite KDE Plasma themes. I also demonstrated the steps for changing the themes.
Do you find some interesting theme here? Do you have some other favorite KDE theme that you would like to share with us in the comment section?
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-kde-plasma-themes/
作者:[sreenath][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sreenath/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/kde-customization/
[2]: https://itsfoss.com/wp-content/uploads/2022/09/breeze.webp
[3]: https://itsfoss.com/wp-content/uploads/2022/09/sweet-theme.webp
[4]: https://store.kde.org/s/KDE%20Store/p/1294729
[5]: https://itsfoss.com/wp-content/uploads/2022/09/materia-dark.png
[6]: https://www.pling.com/p/1462622
[7]: https://itsfoss.com/wp-content/uploads/2022/09/nordic-kde.png
[8]: https://github.com/EliverLara/Nordic
[9]: https://www.pling.com/p/1267246
[10]: https://itsfoss.com/wp-content/uploads/2022/09/whitesur.webp
[11]: https://www.pling.com/p/1400424
[12]: https://itsfoss.com/wp-content/uploads/2022/09/layan.webp
[13]: https://www.pling.com/p/1325241
[14]: https://itsfoss.com/wp-content/uploads/2022/09/qogir.png
[15]: https://www.pling.com/p/1675755
[16]: https://itsfoss.com/wp-content/uploads/2022/09/fluent-kde-theme.webp
[17]: https://www.pling.com/p/1631673
[18]: https://itsfoss.com/wp-content/uploads/2022/09/orchis-kde-theme.png
[19]: https://www.pling.com/p/1458927
[20]: https://itsfoss.com/wp-content/uploads/2022/09/iridescent-round.webp
[21]: https://www.pling.com/p/1640906
[22]: https://itsfoss.com/wp-content/uploads/2022/09/nova_papilio.webp
[23]: https://www.pling.com/p/1663528
[24]: https://itsfoss.com/wp-content/uploads/2022/09/winsur-dark.webp
[25]: https://www.pling.com/p/1373646
[26]: https://www.pling.com/p/1464332
[27]: https://www.pling.com/p/1281836
[28]: https://www.pling.com/p/1370871
[29]: https://itsfoss.com/install-switch-themes-gnome-shell/
[30]: https://itsfoss.com/wp-content/uploads/2022/09/download-new-global-themes-from-kde-plasma-system-settings.webp
[31]: https://itsfoss.com/wp-content/uploads/2022/09/list-of-available-themes-in-kde-plasma-system-settings.webp
[32]: https://itsfoss.com/wp-content/uploads/2022/09/kde-plasma-themes-folder-in-file-manager.png

[#]: subject: "5 Free and Open-Source Figma Alternatives"
[#]: via: "https://itsfoss.com/figma-alternatives/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
5 Free and Open-Source Figma Alternatives
======
Figma is a popular interface designing tool. You can get started free or opt for premium subscription plans for advanced use.
It is an impressive platform that many professionals rely on. However, in 2021, [Figma][1] changed its free plan by imposing certain restrictions. While this made some users look for alternatives, it was still manageable for many.
Unfortunately, in 2022, the **announcement of Adobe acquiring Figma for $20 billion** put off many users. So, everyone has started looking for alternatives that are free and potentially open-source.
To help you out, we decided to compile a list of free and open-source alternatives to Figma that you can try.
**Note:** The alternatives mentioned are not necessarily exact replacements for Figma. We recommend you try them to see how they fit your requirements.
### 1. Penpot
![A Video from YouTube][2]
**Key Highlights:**
* Self-hosting option.
* Uses SVG as the native format.
* Web-based.
* Cross-platform.
Penpot is quickly being recognized as a solid free and open-source Figma alternative.
Even though it is still in its beta phase, users seem to like what it offers at the time of writing. I'm not a design expert, but the user experience with the tool seems impressive.
The unique thing about Penpot is that it uses SVG as its native format, which is rare but also provides immense benefits to the designers.
![penpot screenshot][3]
You can expect the essential features from Figma as the developers mention the tools original inspiration is Figma, and they aim to provide a familiar user experience without adding hurdles to your design adventures.
Head to its official website or GitHub page to explore more.
[Penpot][4]
### 2. Quant UX
![A Video from YouTube][5]
**Key Highlights:**
* Prototyping and Testing.
* Limited access without signing up.
* New beta features are regularly added.
* Self-host option.
Quant UX is a prototyping tool where you can test your designs and get insights about them.
You can create a custom prototype or select any available screen sizes for an Android phone, iPhone, or desktop.
This is also something where you will find features constantly added, and some of them are in beta. It is focused more on testing things by letting you import your designs or create a simple mockup.
It allows you access to a few things without signing up, but to get all features working, you need to sign up for an account. Explore more on its [GitHub page][6].
[Quant UX][7]
### 3. Plasmic
![A Video from YouTube][8]
**Key Highlights:**
* Free and open source.
* Drag and drop functionality.
* It supports importing designs from Figma.
Plasmic is a remarkable design tool for building web pages. If you were using Figma for web design, this could be an alternative tool to check out.
It provides most of the features for free and unlocks things like more extended version history, analytics, and other special features for teams when you opt for a premium plan. It is not just limited to designing the web pages but also supports A/B testing to experiment and improve the user interaction of your website.
Whether you are using an [open-source CMS][9] or a Jamstack site, Plasmic supports integration almost everywhere. Head to its official site or [GitHub page][10] to learn more.
[Plasmic][11]
### 4. Wireflow
![wireflow userflow][12]
**Key Highlights:**
* Free to use.
* No paid options.
* It is not actively maintained.
Wireflow is an interesting offering as a user flow prototype tool, and it is entirely free to use with no paid option.
Also, you do not need to sign up for an account. Get started from its official website and collaborate with others to plan your project and brainstorm.
Unfortunately, it has not seen any development activity since 2021. But it is still usable and remains a free and open-source solution. You can check out its [GitHub page][13] for more information.
[Wireflow][14]
### 5. Akira UX
![akira ux 2020][15]
**Key Highlights:**
* Early development app.
* Focuses on being a native Linux UX app.
[Akira UX][16] is an exciting project aiming to bring a native Linux design utility that works as well as some web-based solutions.
Akira's project lead joined Mozilla Thunderbird as a product design manager, so as of now the project is not very actively developed. But, as a free and open-source project, anyone can pick it up and work toward the same vision.
It is currently an early development version that you can test. You can find it on Flathub's beta channel and install it as per its [GitHub page instructions][17].
[Akira UX][18]
### Wrapping Up
It is not easy to replace Figma with a free and open-source solution. However, if you do not need all of Figma's functionality, some of our recommendations should help you get the job done.
*Do you know of any other free and open-source replacements for Figma? Let me know your thoughts in the comments below.*
--------------------------------------------------------------------------------
via: https://itsfoss.com/figma-alternatives/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://www.figma.com/
[2]: https://youtu.be/JozESuPcVpg
[3]: https://itsfoss.com/wp-content/uploads/2022/09/penpot-screenshot.jpg
[4]: https://penpot.app/
[5]: https://youtu.be/eGDTAJlB-uI
[6]: https://github.com/KlausSchaefers/quant-ux
[7]: https://quant-ux.com/
[8]: https://youtu.be/sXXpe5jjnRs
[9]: https://itsfoss.com/open-source-cms/
[10]: https://github.com/plasmicapp/plasmic
[11]: https://www.plasmic.app/
[12]: https://itsfoss.com/wp-content/uploads/2022/09/wireflow-userflow-800x570.jpg
[13]: https://github.com/vanila-io/wireflow
[14]: https://wireflow.co/
[15]: https://itsfoss.com/wp-content/uploads/2022/09/akira-ux-2020.png
[16]: https://itsfoss.com/akira-design-tool/
[17]: https://github.com/akiraux/Akira
[18]: https://github.com/akiraux/Akira

[#]: subject: "Customize Xfce Desktop for Modern Look and Productivity"
[#]: via: "https://www.debugpoint.com/customize-xfce-modern-look-2020-edition/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Customize Xfce Desktop for Modern Look and Productivity
======
You can customize the super lightweight Xfce desktop for a modern look and improve productivity by tweaking various settings. In this guide, we explain the steps to configure a basic Xfce desktop installation (for example Xubuntu) entirely and make it look modern while being productive.
Xfce is a fast and lightweight desktop built on the GTK toolkit. It uses Xfwm (the Xfce window manager) and provides complete customization options via visual settings, although the actual configuration files are hidden from general users. Currently, Xfce is porting its GTK2 components to GTK3 across the current stable and upcoming releases.
This guide was prepared on Xfce 4.14 in the Xubuntu 20.10 release, but overall it applies to the Xfce desktops available for other Linux distributions such as Manjaro, Fedora, Arch, etc.
![Xfce Desktop - Before Customization][1]
### Customize Xfce Desktop Look and Productivity
##### Panels
Panels are the core component of the Xfce desktop. When you first boot up Xfce, the top panel is the go-to section for all your needs. It contains the application launcher menu, system tray with notifications, and a list of opened applications. Xfce (Xubuntu) comes with the great Whisker Menu, which already gives you a modern look.
So for this customization guide, first, I will add a new panel at the bottom and then eventually delete the top one. Now, if you wish, you can keep the top panel, or move it to the left or right from the panel properties. It is entirely up to your taste.
* Right-click on the top panel and open `Panel -> Panel Preferences`. Click on the green + icon to create a new Panel.
![Panel Preferences][2]
![Add New Panel option][3]
Ideally, the default Panel is Panel 0, and the new one should be Panel 1. You will see a blank Panel is created on the desktop.
![New Panel][4]
**Drag the panel** to the **bottom** of the screen via the handle.
On the **Display tab** of the Panel Preferences window:
* Set Auto-hide the panel to Always.
* Check the Lock Panel.
On the **Appearance** tab, you can choose the background style of the panel, as well as its opacity. For this guide, I will keep:
* Background as None.
* Icon Adjust size automatically = Yes
* Opacity (Enter and Leave) = 100
Let's start adding some **applets**. Go to the Items tab and click on the green + icon to start adding some Xfce applets.
You can add as many items as you want. For this guide, I have added the below items.
* Whisker Menu - application menu
* Window Menu - list of open applications
* Places - file manager
* Notes - quick sticky pad
* Screenshot
* Indicator Plugin
* Clock with date and time
* Action buttons - log out, shut down, etc.
* Show Desktop
Once done, press close.
To add some **additional applications**, you can open the application menu and right-click on any application. Then click **Add to Panel** and choose Panel 1 (for this example).
For example, I have added some additional applications to Panel 1 as below.
* Firefox
* Thunderbird
* LibreOffice Calc
* GIMP
Again, this is completely as per your taste. You can customize the new Panel as you wish.
When you are done, right-click on the top panel and open Panel Preferences. Select Panel 0 and click the “-” red icon to delete the top panel. Be cautious, as it will remove the default Xfce panel altogether. And before removing the Panel, make sure the newly created Panel 1 is visible.
##### Icons
You can change the default icon theme of Xfce. [Xfce-look.org][5] provides hundreds of GTK themes and icons. Download your favourite icon theme and extract it. Then copy the top-level icon theme folder to `/usr/share/icons`. Alternatively, you can create a `.icons` folder under your home directory and copy the icon folder there.
Then you can open the **Appearance** window and change the icon theme via the Icons tab.
For this example, I have used the Uniform+ icon set, which looks very nice and comes with many application-specific icons. It's a big download (around 300 MB); you can get it from the below link.
[Download Uniform+ Icon set for Xfce][6]
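The per-user install described above can be sketched as a small shell snippet. The archive extraction is shown commented out because the file and folder names are placeholders for whatever theme you actually downloaded:

```shell
# Install a downloaded icon theme for the current user only (no sudo needed).
# The archive name below is hypothetical; use the file you downloaded.
ICON_DIR="$HOME/.icons"
mkdir -p "$ICON_DIR"

# Extract the theme archive so its top-level folder lands in ~/.icons:
# tar -xf ~/Downloads/uniform-plus.tar.xz -C "$ICON_DIR"

echo "User icon themes go in: $ICON_DIR"
```

Themes placed here take effect for your user only; system-wide themes still go to `/usr/share/icons`.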
##### Wallpaper
Xfce-look.org has plenty of Xfce wallpapers featuring the Xfce mascot to choose from, like the orange one I have used for this tutorial. You can download it from [here][7].
##### Dock
There are some docks which you can install in Xfce instead of panels. I have not included a dock in this guide because you can customize the desktop just fine without installing additional applications and compromising performance.
However, if you really want a dock, you can install DockBarX for Xfce and add it to your desktop. To install DockBarX, use the below PPA and commands on Ubuntu-based systems.
```
sudo add-apt-repository ppa:xuzhen666/dockbarx
sudo apt-get update
sudo apt-get install dockbarx
```
```
sudo apt-get install xfce4-dockbarx-plugin
```
```
sudo apt-get install dockbarx-themes-extra
```
After applying all the above customization, your desktop should look like this if all goes well.
![Xfce Desktop after Customization][8]
### Settings Changes for Productivity
Now, let's look at some tweaks of Xfce which I prefer to make. These are small settings changes for your desktop that make your life easier and help you become more productive.
**Configure Whisker Menu to launch with Left Super Key**
It is handy when you can open the application menu with the left Super key. To configure the Whisker Menu to launch with the left Super key, open the Keyboard settings and go to Application Shortcuts. Add the `xfce4-popup-whiskermenu` command, press the Super key when prompted, and press OK.
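The same binding can also be written from the command line with `xfconf-query`. This is a hedged sketch: the property path mirrors what the Keyboard dialog writes on my setup, and the actual command is left commented out so you can review it before running it inside an Xfce session:

```shell
# Sketch: bind the left Super key to pop up the Whisker Menu.
CMD="xfce4-popup-whiskermenu"
KEY="/commands/custom/Super_L"

# Run this only inside a live Xfce session:
# xfconf-query -c xfce4-keyboard-shortcuts -p "$KEY" -n -t string -s "$CMD"

echo "Would bind $KEY -> $CMD"
```

If the binding misbehaves, delete the property with `xfconf-query -c xfce4-keyboard-shortcuts -p "$KEY" -r` and set it again from the GUI.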
**Switch categories by Hovering Mouse**
Open the Whisker Menu settings. Go to the Behaviour tab and check the "Switch categories by hovering" option.
**Add Battery and network data transfer indicator**
There are two additional panel applets available which you can add to your panel: a battery indicator and a network data transfer monitor. They give you a visual representation of battery status (with percentage) and upload/download data speeds for the network.
![Xfce Customization Dark Mode][9]
### Wrapping Up
Again, the above-outlined Xfce desktop changes are just one idea for configuring your desktop. There are hundreds of settings available in Xfce, which you can easily change via Settings as per your needs and workflow. This is what makes Xfce a great alternative GTK desktop to GNOME.
That's it. You can now enjoy a completely different [Xfce][10] desktop.
Let me know in the comment box below if you face any difficulties configuring the Xfce desktop and have any questions.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/customize-xfce-modern-look-2020-edition/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2020/11/Xfce-Desktop-Before-Customization-1024x576.jpg
[2]: https://www.debugpoint.com/wp-content/uploads/2020/11/Panel-Preferences.jpg
[3]: https://www.debugpoint.com/wp-content/uploads/2020/11/Add-New-Panel-option.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2020/11/New-Panel.jpg
[5]: https://www.xfce-look.org/browse/cat/
[6]: https://www.xfce-look.org/p/1012468/
[7]: https://www.xfce-look.org/p/1351819/
[8]: https://www.debugpoint.com/wp-content/uploads/2020/11/Xfce-Desktop-after-Customization.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2020/11/Xfce-Customization-Dark-Mode.jpg
[10]: https://www.debugpoint.com/tag/xfce

[#]: subject: "How To Enable RPM Fusion Repository In Fedora, RHEL, AlmaLinux, Rocky Linux"
[#]: via: "https://ostechnix.com/how-to-enable-rpm-fusion-repository-in-fedora-rhel/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How To Enable RPM Fusion Repository In Fedora, RHEL, AlmaLinux, Rocky Linux
======
Install RPM Fusion Repository In Fedora, RHEL, AlmaLinux, Rocky Linux
In this brief guide, we will see **what the RPM Fusion repository is**, why we should **install the RPM Fusion repository**, and finally how to **enable the RPM Fusion repository** in Fedora, RHEL, and its clones like CentOS, AlmaLinux, and Rocky Linux.
#### Contents
* What is RPM Fusion Repository?
* 1. Enable RPM Fusion Repository In Fedora Linux
* 1.1. List Repositories In Fedora
* 2. Enable RPM Fusion Repository In RHEL
* 2.1. List Installed Repositories In RHEL-based Systems
* Conclusion
### What is RPM Fusion Repository?
The Fedora project strictly adheres to the Fedora [licensing policies][1]. It excludes some packages from the official repositories for the following reasons:
* If a package is proprietary, it can't be included in Fedora;
* If a package is closed-source, it can't be included in Fedora;
* If a package is legally encumbered, it cannot be included in Fedora;
* If a package violates United States laws (specifically, federal or applicable state laws), it cannot be included in Fedora.
Any package that fails to meet the aforementioned policies will not be included in the official repositories of Fedora and RHEL. This is why some third party repositories, which have liberal licensing policies, are created. One such repository is **RPM Fusion**.
RPM Fusion is a community-maintained, third-party software repository that provides packages that the Fedora project and Red Hat can't ship due to legal and various other reasons as stated earlier.
RPM Fusion is a must-have for installing the necessary multimedia codecs, proprietary software, and drivers in Fedora, RHEL, and its clones like CentOS, AlmaLinux, Rocky Linux, etc.
RPM Fusion has two repositories, namely `free` and `nonfree`. The `free` repository contains packages that are open source as defined by the Fedora licensing guidelines. The `nonfree` repository contains redistributable packages that are not open source, as well as packages that are not free for commercial purposes.
You can add both repos and use them simultaneously on your personal system; there won't be any conflicts between the packages in the `free` and `nonfree` repos. If you're interested in running only free packages, just add the `free` repo now; you can always add the `nonfree` repo later.
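The install commands in the next sections build the release-package URLs from the `rpm -E %fedora` macro, which expands to the running Fedora version. A minimal sketch of that expansion, with a sample version number hard-coded for illustration:

```shell
# On a real Fedora system you would use: ver=$(rpm -E %fedora)
ver=36

free_url="https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-${ver}.noarch.rpm"
nonfree_url="https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-${ver}.noarch.rpm"

# These are the two URLs dnf will be asked to install:
printf '%s\n%s\n' "$free_url" "$nonfree_url"
```

Because the version is computed at run time, the same command line works unchanged across Fedora releases.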
### 1. Enable RPM Fusion Repository In Fedora Linux
To enable both the `free` and the `nonfree` RPM Fusion repositories on your Fedora system, run:
```
$ sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
**Sample output:**
```
Last metadata expiration check: 1:51:10 ago on Thursday 29 April 2021 02:10:14 PM.
rpmfusion-free-release-34.noarch.rpm 5.5 kB/s | 11 kB 00:02
rpmfusion-nonfree-release-34.noarch.rpm 6.6 kB/s | 11 kB 00:01
Dependencies resolved.
Package Architecture Version Repository Size
Installing:
rpmfusion-free-release noarch 34-1 @commandline 11 k
rpmfusion-nonfree-release noarch 34-1 @commandline 11 k
Transaction Summary
Install 2 Packages
Total size: 23 k
Installed size: 11 k
Is this ok [y/N]: y
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : rpmfusion-nonfree-release-34-1.noarch 1/2
Installing : rpmfusion-free-release-34-1.noarch 2/2
Verifying : rpmfusion-free-release-34-1.noarch 1/2
Verifying : rpmfusion-nonfree-release-34-1.noarch 2/2
Installed:
rpmfusion-free-release-34-1.noarch rpmfusion-nonfree-release-34-1.noarch
Complete!
```
![Enable RPM Fusion Repository In Fedora Linux][2]
Like I already mentioned, you can install only the `free` repo like below:
```
$ sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
```
To enable only the non-free RPM Fusion repository, do:
```
$ sudo dnf install https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
**On Fedora Silverblue:**
To add and enable `free` and `nonfree` RPM Fusion repositories on a Fedora Silverblue machine, run:
```
$ sudo rpm-ostree install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
Well, the RPM Fusion repository is now installed and enabled. Let us go ahead and verify that it is actually enabled.
#### 1.1. List Repositories In Fedora
To **[find the list of installed repositories][3]** in Fedora, run:
```
$ dnf repolist
```
**Sample output:**
```
repo id repo name
fedora Fedora 34 - x86_64
fedora-cisco-openh264 Fedora 34 openh264 (From Cisco) - x86_64
fedora-modular Fedora Modular 34 - x86_64
rpmfusion-free RPM Fusion for Fedora 34 - Free
rpmfusion-free-updates RPM Fusion for Fedora 34 - Free - Updates
rpmfusion-nonfree RPM Fusion for Fedora 34 - Nonfree
rpmfusion-nonfree-updates RPM Fusion for Fedora 34 - Nonfree - Updates
updates Fedora 34 - x86_64 - Updates
updates-modular Fedora Modular 34 - x86_64 - Updates
```
If you want to list only the enabled repositories, the command would be:
```
$ dnf repolist enabled
```
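In a script, you may want to check for the repo before deciding to add it. A sketch of that check, using a captured repo list so it runs anywhere; on a real system you would replace the hard-coded list with the output of `dnf repolist enabled`:

```shell
# Check whether the RPM Fusion free repo appears in the enabled repo list.
# Sample output stands in for: repolist=$(dnf repolist enabled)
repolist="fedora
rpmfusion-free
rpmfusion-free-updates
updates"

if printf '%s\n' "$repolist" | grep -qx 'rpmfusion-free'; then
  status="enabled"
else
  status="missing"
fi
echo "rpmfusion-free: $status"
```

`grep -qx` matches the whole line, so `rpmfusion-free-updates` alone would not count as a match for `rpmfusion-free`.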
The first time you attempt to install packages from the RPM Fusion repositories, the `dnf` utility prompts you to confirm the signature of the repositories. Type **y** and hit ENTER to confirm it.
```
[...]
warning: /var/cache/dnf/rpmfusion-free-27856ae4f82a6a42/packages/ffmpeg-4.4-2.fc34.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID d651ff2e: NOKEY
RPM Fusion for Fedora 34 - Free 1.6 MB/s | 1.7 kB 00:00
Importing GPG key 0xD651FF2E:
Userid : "RPM Fusion free repository for Fedora (2020) rpmfusion-buildsys@lists.rpmfusion.org"
Fingerprint: E9A4 91A3 DE24 7814 E7E0 67EA E06F 8ECD D651 FF2E
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-rpmfusion-free-fedora-34
Is this ok [y/N]: y
[...]
```
### 2. Enable RPM Fusion Repository In RHEL
In RHEL and its clones like CentOS, AlmaLinux, Rocky Linux, etc., you must enable the **[EPEL]** repository before enabling the RPM Fusion repository.
To install the EPEL repository on a Red Hat Enterprise Linux system, run:
```
$ sudo dnf install --nogpgcheck https://dl.fedoraproject.org/pub/epel/epel-release-latest-$(rpm -E %rhel).noarch.rpm
```
After enabling the EPEL repository, run the following command to enable RPM Fusion repository in RHEL and its compatible clones CentOS, AlmaLinux and Rocky Linux:
```
$ sudo dnf install --nogpgcheck https://mirrors.rpmfusion.org/free/el/rpmfusion-free-release-$(rpm -E %rhel).noarch.rpm https://mirrors.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-$(rpm -E %rhel).noarch.rpm
```
**Sample output:**
```
Last metadata expiration check: 0:09:07 ago on Friday 23 September 2022 11:41:49 AM UTC.
rpmfusion-free-release-8.noarch.rpm 861 B/s | 11 kB 00:12
rpmfusion-nonfree-release-8.noarch.rpm 877 B/s | 11 kB 00:12
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
rpmfusion-free-release noarch 8-0.1 @commandline 11 k
rpmfusion-nonfree-release noarch 8-0.1 @commandline 11 k
Transaction Summary
================================================================================
Install 2 Packages
Total size: 22 k
Installed size: 7.6 k
Is this ok [y/N]: y
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : rpmfusion-free-release-8-0.1.noarch 1/2
Installing : rpmfusion-nonfree-release-8-0.1.noarch 2/2
Verifying : rpmfusion-free-release-8-0.1.noarch 1/2
Verifying : rpmfusion-nonfree-release-8-0.1.noarch 2/2
Installed:
rpmfusion-free-release-8-0.1.noarch rpmfusion-nonfree-release-8-0.1.noarch
Complete!
```
![Enable RPM Fusion Repository In RHEL, CentOS, AlmaLinux, Rocky Linux][4]
If you are using CentOS Stream 8, you need to enable the **[PowerTools]** repository as well.
```
$ sudo dnf config-manager --enable powertools
```
CentOS 8 (the older version) used a case-sensitive name for the **PowerTools** repository:
```
$ sudo dnf config-manager --enable PowerTools
```
On RHEL 8, you should enable subscription:
```
$ sudo subscription-manager repos --enable "codeready-builder-for-rhel-8-$(uname -m)-rpms"
```
In RHEL 7 and its compatible clones like CentOS 7, run the following command to enable EPEL and RPM Fusion repositories:
```
$ sudo yum localinstall --nogpgcheck https://mirrors.rpmfusion.org/free/el/rpmfusion-free-release-7.noarch.rpm https://mirrors.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-7.noarch.rpm
```
#### 2.1. List Installed Repositories In RHEL-based Systems
You can view the [list of the installed repositories][5] at any time using the following commands:
```
$ dnf repolist
```
Or,
```
$ yum repolist
```
**Sample output:**
```
repo id repo name
appstream AlmaLinux 8 - AppStream
baseos AlmaLinux 8 - BaseOS
docker-ce-stable Docker CE Stable - x86_64
epel Extra Packages for Enterprise Linux 8 - x86_64
epel-modular Extra Packages for Enterprise Linux Modular 8 - x86_64
extras AlmaLinux 8 - Extras
rpmfusion-free-updates RPM Fusion for EL 8 - Free - Updates
rpmfusion-nonfree-updates RPM Fusion for EL 8 - Nonfree - Updates
```
![List Installed Repositories In RHEL, CentOS, AlmaLinux, Rocky Linux][6]
### Conclusion
That's it. You now know how to **enable the RPM Fusion repository on RPM-based systems** such as Fedora, RHEL, CentOS, AlmaLinux, and Rocky Linux. Enabling RPM Fusion on a newly installed system is practically essential, as it provides a lot of unofficial packages that are not included in the official repositories.
**Resource:**
* [RPM Fusion Configuration][7]
--------------------------------------------------------------------------------
via: https://ostechnix.com/how-to-enable-rpm-fusion-repository-in-fedora-rhel/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://docs.fedoraproject.org/en-US/packaging-guidelines/LicensingGuidelines/
[2]: https://ostechnix.com/wp-content/uploads/2021/04/Enable-RPM-Fusion-repository-in-Fedora-Linux.png
[3]: https://ostechnix.com/find-list-installed-repositories-commandline-linux/
[4]: https://ostechnix.com/wp-content/uploads/2022/09/Enable-RPM-Fusion-repository-in-RHEL-CentOS-AlmaLinux-Rocky-Linux.png
[5]: https://ostechnix.com/find-list-installed-repositories-commandline-linux/
[6]: https://ostechnix.com/wp-content/uploads/2022/09/List-Installed-Repositories-In-RHEL-CentOS-AlmaLinux-Rocky-Linux.png
[7]: https://rpmfusion.org/Configuration

[#]: subject: "How to build a dynamic distributed database with DistSQL"
[#]: via: "https://opensource.com/article/22/9/dynamic-distributed-database-distsql"
[#]: author: "Raigor Jiang https://opensource.com/users/raigor"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to build a dynamic distributed database with DistSQL
======
Take a look at a data sharding scenario in which DistSQL's flexibility allows you to create a distributed database.
Distributed databases are common for many reasons. They increase reliability, redundancy, and performance. Apache ShardingSphere is an open source framework that enables you to transform any database into a distributed database. Since the release of ShardingSphere 5.0.0, DistSQL (Distributed SQL) has provided dynamic management for the ShardingSphere ecosystem.
In this article, I demonstrate a data sharding scenario in which DistSQL's flexibility allows you to create a distributed database. At the same time, I show some syntax sugar to simplify operating procedures, allowing your potential users to choose their preferred syntax.
A series of DistSQL statements are run through practical cases to give you a complete set of practical DistSQL sharding management methods, which create and maintain distributed databases through dynamic management.
![Diagram of database sharding management options][2]
Image by:
(Jiang Longtao, CC BY-SA 4.0)
### What is sharding?
In database terminology, *sharding* is the process of partitioning a table into separate entities. While the table data is directly related, it often exists on different physical database nodes or, at the very least, within separate logical partitions.
### Practical case example
To follow along with this example, you must have these components in place, either in your lab or in your mind as you read this article:
* Two sharding tables: t_order and t_order_item.
* For both tables, database shards are carried out with the user_id field, and table shards with the order_id field.
* The number of shards is two databases times three tables.
![Apache ShardingSphere databases][3]
Image by: (Jiang Longtao, CC BY-SA 4.0)
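The INLINE expressions configured later (`ds_${user_id % 2}` and `t_order_${order_id % 3}`) boil down to simple modulo arithmetic. A quick sketch of how one logical row is routed to its physical database and table; the sample IDs are made up:

```shell
# Route one logical t_order row to its physical shard.
user_id=5
order_id=7

db="ds_$(( user_id % 2 ))"          # database shard: user_id mod 2
table="t_order_$(( order_id % 3 ))" # table shard: order_id mod 3

echo "row (user_id=$user_id, order_id=$order_id) -> ${db}.${table}"
```

With 2 databases and 3 tables per database, every (user_id, order_id) pair maps deterministically to one of the six physical tables.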
### Set up the environment
1. Prepare a database (MySQL, MariaDB, PostgreSQL, or openGauss) instance for access. Create two new databases: **demo_ds_0** and **demo_ds_1**.
2. Deploy [Apache ShardingSphere-Proxy 5.1.2][4] and [Apache ZooKeeper][5]. ZooKeeper acts as a governance center and stores ShardingSphere metadata information.
3. Configure `server.yaml` in the Proxy conf directory as follows:
```
mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: governance_ds
      server-lists: localhost:2181 #ZooKeeper address
      retryIntervalMilliseconds: 500
      timeToLiveSeconds: 60
      maxRetries: 3
      operationTimeoutMilliseconds: 500
  overwrite: false
rules:
 - !AUTHORITY
    users:
     - root@%:root
```
4. Start ShardingSphere-Proxy and connect to the Proxy using a client, for example:
```
$ mysql -h 127.0.0.1 -P 3307 -u root -p
```
5. Create a distributed database:
```
CREATE DATABASE sharding_db;
USE sharding_db;
```
#### Add storage resources
Next, add storage resources corresponding to the database:
```
ADD RESOURCE ds_0 (
    HOST=127.0.0.1,
    PORT=3306,
    DB=demo_ds_0,
    USER=root,
    PASSWORD=123456
), ds_1(
    HOST=127.0.0.1,
    PORT=3306,
    DB=demo_ds_1,
    USER=root,
    PASSWORD=123456
);
```
View the storage resources:
```
mysql> SHOW DATABASE RESOURCES\G;
******** 1. row ***************************
         name: ds_1
         type: MySQL
         host: 127.0.0.1
         port: 3306
           db: demo_ds_1
          -- Omit partial attributes
******** 2. row ***************************
         name: ds_0
         type: MySQL
         host: 127.0.0.1
         port: 3306
           db: demo_ds_0
          -- Omit partial attributes
```
Adding the optional `\G` switch to the query statement makes the output format easy to read.
### Create sharding rules
ShardingSphere's sharding rules support regular sharding and automatic sharding. Both sharding methods have the same effect. The difference is that the configuration of automatic sharding is more concise, while regular sharding is more flexible and independent.
Refer to the following links for more details on automatic sharding:
* [Intro to DistSQL-An Open Source and More Powerful SQL][6]
* [AutoTable: Your Butler-Like Sharding Configuration Tool][7]
Next, it's time to adopt regular sharding and use the **INLINE** expression algorithm to implement the sharding scenarios described in the requirements.
### Primary key generator
The primary key generator creates a secure and unique primary key for a data table in a distributed scenario. For details, refer to the document [Distributed Primary Key][8].
1. Create a primary key generator:
```
CREATE SHARDING KEY GENERATOR snowflake_key_generator (
TYPE(NAME=SNOWFLAKE)
);
```
2. Query the primary key generator:
```
mysql> SHOW SHARDING KEY GENERATORS;
+-------------------------+-----------+-------+
| name                    | type      | props |
+-------------------------+-----------+-------+
| snowflake_key_generator | snowflake | {}    |
+-------------------------+-----------+-------+
1 row in set (0.01 sec)
```
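A Snowflake ID packs a timestamp, a worker ID, and a per-millisecond sequence into one 64-bit integer. The classic layout is 41 + 10 + 12 bits; ShardingSphere's exact epoch and field widths may differ, so treat this decomposition as illustrative only:

```shell
# Build and decompose a Snowflake-style ID (41-bit timestamp, 10-bit worker,
# 12-bit sequence). The field values below are made up for illustration.
id=$(( (1 << 22) | (3 << 12) | 7 ))   # timestamp=1, worker=3, sequence=7

seqno=$((  id        & 4095 ))        # low 12 bits
worker=$(( (id >> 12) & 1023 ))       # next 10 bits
ts=$((     id >> 22 ))                # remaining high bits

echo "id=$id ts=$ts worker=$worker seq=$seqno"
```

Because the timestamp occupies the high bits, Snowflake IDs are roughly time-ordered, which keeps inserts append-friendly even across shards.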
### Sharding algorithm
1. Create a database sharding algorithm used by **t_order** and **t_order_item** in common:
```
-- Modulo 2 based on user_id in database sharding
CREATE SHARDING ALGORITHM database_inline (
TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="ds_${user_id % 2}"))
);
```
2. Create separate table sharding algorithms for **t_order** and **t_order_item**:
```
-- Modulo 3 based on order_id in table sharding
CREATE SHARDING ALGORITHM t_order_inline (
TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="t_order_${order_id % 3}"))
);
CREATE SHARDING ALGORITHM t_order_item_inline (
TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="t_order_item_${order_id % 3}"))
);
```
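The INLINE expressions above are plain modulo arithmetic over the sharding columns. Here is a small sketch (with hypothetical helper names, not part of DistSQL) of what the created algorithms compute:

```java
public class InlineRouting {
    // ds_${user_id % 2}: choose a database by user_id
    static String databaseFor(long userId) {
        return "ds_" + (userId % 2);
    }

    // t_order_${order_id % 3}: choose a table shard by order_id
    static String orderTableFor(long orderId) {
        return "t_order_" + (orderId % 3);
    }

    // t_order_item_${order_id % 3}: item rows follow the same order_id
    static String orderItemTableFor(long orderId) {
        return "t_order_item_" + (orderId % 3);
    }
}
```

For example, a row with user_id = 1 and order_id = 1 lands in ds_1.t_order_1, so order rows and their item rows with the same order_id always end up in the same database.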
3. Query the sharding algorithm:
```
mysql> SHOW SHARDING ALGORITHMS;
+---------------------+--------+---------------------------------------------------+
| name                | type   | props                                             |
+---------------------+--------+---------------------------------------------------+
| database_inline     | inline | algorithm-expression=ds_${user_id % 2}            |
| t_order_inline      | inline | algorithm-expression=t_order_${order_id % 3}      |
| t_order_item_inline | inline | algorithm-expression=t_order_item_${order_id % 3} |
+---------------------+--------+---------------------------------------------------+
3 rows in set (0.00 sec)
```
### Create a default sharding strategy
The [sharding strategy][9] consists of a sharding key and a sharding algorithm, which in this case are **databaseStrategy** and **tableStrategy**. Because **t_order** and **t_order_item** share the same database sharding field and sharding algorithm, create a default strategy to be used by all sharded tables that have no sharding strategy configured.
1. Create a default database sharding strategy:
```
CREATE DEFAULT SHARDING DATABASE STRATEGY (
TYPE=STANDARD,SHARDING_COLUMN=user_id,SHARDING_ALGORITHM=database_inline
);
```
2. Query default strategy:
```
mysql> SHOW DEFAULT SHARDING STRATEGY\G;
*************************** 1. row ***************************
                    name: TABLE
                    type: NONE
         sharding_column:
 sharding_algorithm_name:
 sharding_algorithm_type:
sharding_algorithm_props:
*************************** 2. row ***************************
                    name: DATABASE
                    type: STANDARD
         sharding_column: user_id
 sharding_algorithm_name: database_inline
 sharding_algorithm_type: inline
sharding_algorithm_props: {algorithm-expression=ds_${user_id % 2}}
2 rows in set (0.00 sec)
```
Because you have not configured a default table sharding strategy, the default **TABLE** strategy is **NONE**.
### Set sharding rules
The primary key generator and sharding algorithm are both ready. Now you can create sharding rules. The method I demonstrate below is a little complicated and involves multiple steps. In the next section, I'll show you how to create sharding rules in just one step, but for now, witness how it's typically done.
First, define **t_order**:
```
CREATE SHARDING TABLE RULE t_order (
DATANODES("ds_${0..1}.t_order_${0..2}"),
TABLE_STRATEGY(TYPE=STANDARD,SHARDING_COLUMN=order_id,SHARDING_ALGORITHM=t_order_inline),
KEY_GENERATE_STRATEGY(COLUMN=order_id,KEY_GENERATOR=snowflake_key_generator)
);
```
Here is an explanation of the values found above:
* DATANODES specifies the data nodes of shard tables.
* TABLE_STRATEGY specifies the table sharding strategy; its SHARDING_ALGORITHM uses the previously created sharding algorithm t_order_inline.
* KEY_GENERATE_STRATEGY specifies the primary key generation strategy of the table. Skip this configuration if primary key generation is not required.
Next, define **t_order_item**:
```
CREATE SHARDING TABLE RULE t_order_item (
DATANODES("ds_${0..1}.t_order_item_${0..2}"),
TABLE_STRATEGY(TYPE=STANDARD,SHARDING_COLUMN=order_id,SHARDING_ALGORITHM=t_order_item_inline),
KEY_GENERATE_STRATEGY(COLUMN=order_item_id,KEY_GENERATOR=snowflake_key_generator)
);
```
Query the sharding rules to verify what you've created:
```
mysql> SHOW SHARDING TABLE RULES\G;
************************** 1. row ***************************
                           table: t_order
               actual_data_nodes: ds_${0..1}.t_order_${0..2}
             actual_data_sources:
          database_strategy_type: STANDARD
        database_sharding_column: user_id
database_sharding_algorithm_type: inline
database_sharding_algorithm_props: algorithm-expression=ds_${user_id % 2}
              table_strategy_type: STANDARD
            table_sharding_column: order_id
    table_sharding_algorithm_type: inline
   table_sharding_algorithm_props: algorithm-expression=t_order_${order_id % 3}
              key_generate_column: order_id
               key_generator_type: snowflake
              key_generator_props:
*************************** 2. row ***************************
                            table: t_order_item
                actual_data_nodes: ds_${0..1}.t_order_item_${0..2}
              actual_data_sources:
           database_strategy_type: STANDARD
         database_sharding_column: user_id
 database_sharding_algorithm_type: inline
database_sharding_algorithm_props: algorithm-expression=ds_${user_id % 2}
              table_strategy_type: STANDARD
            table_sharding_column: order_id
    table_sharding_algorithm_type: inline
   table_sharding_algorithm_props: algorithm-expression=t_order_item_${order_id % 3}
              key_generate_column: order_item_id
               key_generator_type: snowflake
              key_generator_props:
2 rows in set (0.00 sec)
```
This looks right so far. You have now configured the sharding rules for **t_order** and **t_order_item**.
You can skip the steps for creating the primary key generator, sharding algorithm, and default strategy, and complete the sharding rules in one step. Here's how to make it easier.
### Sharding rule syntax
For instance, if you want to add a shard table called **t_order_detail**, you can create sharding rules as follows:
```
CREATE SHARDING TABLE RULE t_order_detail (
DATANODES("ds_${0..1}.t_order_detail_${0..1}"),
DATABASE_STRATEGY(TYPE=STANDARD,SHARDING_COLUMN=user_id,SHARDING_ALGORITHM(TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="ds_${user_id % 2}")))),
TABLE_STRATEGY(TYPE=STANDARD,SHARDING_COLUMN=order_id,SHARDING_ALGORITHM(TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="t_order_detail_${order_id % 3}")))),
KEY_GENERATE_STRATEGY(COLUMN=detail_id,TYPE(NAME=snowflake))
);
```
This statement specifies a database sharding strategy, table strategy, and primary key generation strategy, but it doesn't use existing algorithms. The DistSQL engine automatically uses the input expression to create an algorithm for the sharding rules of **t_order_detail**.
Now there's a primary key generator:
```
mysql> SHOW SHARDING KEY GENERATORS;
+--------------------------+-----------+-------+
| name                     | type      | props |
+--------------------------+-----------+-------+
| snowflake_key_generator  | snowflake | {}    |
| t_order_detail_snowflake | snowflake | {}    |
+--------------------------+-----------+-------+
2 rows in set (0.00 sec)
```
Display the sharding algorithm:
```
mysql> SHOW SHARDING ALGORITHMS;
+--------------------------------+--------+-----------------------------------------------------+
| name                           | type   | props                                               |
+--------------------------------+--------+-----------------------------------------------------+
| database_inline                | inline | algorithm-expression=ds_${user_id % 2}              |
| t_order_inline                 | inline | algorithm-expression=t_order_${order_id % 3}        |
| t_order_item_inline            | inline | algorithm-expression=t_order_item_${order_id % 3}   |
| t_order_detail_database_inline | inline | algorithm-expression=ds_${user_id % 2}              |
| t_order_detail_table_inline    | inline | algorithm-expression=t_order_detail_${order_id % 3} |
+--------------------------------+--------+-----------------------------------------------------+
5 rows in set (0.00 sec)
```
And finally, the sharding rules:
```
mysql> SHOW SHARDING TABLE RULES\G;
*************************** 1. row ***************************
                            table: t_order
                actual_data_nodes: ds_${0..1}.t_order_${0..2}
              actual_data_sources:
           database_strategy_type: STANDARD
         database_sharding_column: user_id
 database_sharding_algorithm_type: inline
database_sharding_algorithm_props: algorithm-expression=ds_${user_id % 2}
              table_strategy_type: STANDARD
            table_sharding_column: order_id
    table_sharding_algorithm_type: inline
   table_sharding_algorithm_props: algorithm-expression=t_order_${order_id % 3}
              key_generate_column: order_id
               key_generator_type: snowflake
              key_generator_props:
*************************** 2. row ***************************
                            table: t_order_item
                actual_data_nodes: ds_${0..1}.t_order_item_${0..2}
              actual_data_sources:
           database_strategy_type: STANDARD
         database_sharding_column: user_id
 database_sharding_algorithm_type: inline
database_sharding_algorithm_props: algorithm-expression=ds_${user_id % 2}
              table_strategy_type: STANDARD
            table_sharding_column: order_id
    table_sharding_algorithm_type: inline
   table_sharding_algorithm_props: algorithm-expression=t_order_item_${order_id % 3}
              key_generate_column: order_item_id
               key_generator_type: snowflake
              key_generator_props:
*************************** 3. row ***************************
                            table: t_order_detail
                actual_data_nodes: ds_${0..1}.t_order_detail_${0..1}
              actual_data_sources:
           database_strategy_type: STANDARD
         database_sharding_column: user_id
 database_sharding_algorithm_type: inline
database_sharding_algorithm_props: algorithm-expression=ds_${user_id % 2}
              table_strategy_type: STANDARD
            table_sharding_column: order_id
    table_sharding_algorithm_type: inline
   table_sharding_algorithm_props: algorithm-expression=t_order_detail_${order_id % 3}
              key_generate_column: detail_id
               key_generator_type: snowflake
              key_generator_props:
3 rows in set (0.01 sec)
```
In the `CREATE SHARDING TABLE RULE` statement, **DATABASE_STRATEGY**, **TABLE_STRATEGY**, and **KEY_GENERATE_STRATEGY** can reuse existing algorithms, or they can be defined inline within the statement itself. The difference is that inline definitions create additional algorithm objects.
### Configuration and verification
Once you have created the sharding rules, you can verify the configuration in the following ways.
1. Check node distribution:
DistSQL provides `SHOW SHARDING TABLE NODES` for checking node distribution, so you can quickly see how shard tables are laid out:
```
mysql> SHOW SHARDING TABLE NODES;
+----------------+------------------------------------------------------------------------------------------------------------------------------+
| name           | nodes                                                                                                                        |
+----------------+------------------------------------------------------------------------------------------------------------------------------+
| t_order        | ds_0.t_order_0, ds_0.t_order_1, ds_0.t_order_2, ds_1.t_order_0, ds_1.t_order_1, ds_1.t_order_2                               |
| t_order_item   | ds_0.t_order_item_0, ds_0.t_order_item_1, ds_0.t_order_item_2, ds_1.t_order_item_0, ds_1.t_order_item_1, ds_1.t_order_item_2 |
| t_order_detail | ds_0.t_order_detail_0, ds_0.t_order_detail_1, ds_1.t_order_detail_0, ds_1.t_order_detail_1                                   |
+----------------+------------------------------------------------------------------------------------------------------------------------------+
3 rows in set (0.01 sec)
mysql> SHOW SHARDING TABLE NODES t_order_item;
+--------------+------------------------------------------------------------------------------------------------------------------------------+
| name         | nodes                                                                                                                        |
+--------------+------------------------------------------------------------------------------------------------------------------------------+
| t_order_item | ds_0.t_order_item_0, ds_0.t_order_item_1, ds_0.t_order_item_2, ds_1.t_order_item_0, ds_1.t_order_item_1, ds_1.t_order_item_2 |
+--------------+------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
```
You can see that the node distribution of the shard tables is consistent with what was described in the requirements.
### SQL preview
Previewing SQL is also an easy way to verify configurations. Its syntax is `PREVIEW SQL`. First, make a query without the shard key, which routes to all shards:
```
mysql> PREVIEW SELECT * FROM t_order;
+------------------+---------------------------------------------------------------------------------------------+
| data_source_name | actual_sql                                                                                  |
+------------------+---------------------------------------------------------------------------------------------+
| ds_0             | SELECT * FROM t_order_0 UNION ALL SELECT * FROM t_order_1 UNION ALL SELECT * FROM t_order_2 |
| ds_1             | SELECT * FROM t_order_0 UNION ALL SELECT * FROM t_order_1 UNION ALL SELECT * FROM t_order_2 |
+------------------+---------------------------------------------------------------------------------------------+
2 rows in set (0.13 sec)
mysql> PREVIEW SELECT * FROM t_order_item;
+------------------+------------------------------------------------------------------------------------------------------------+
| data_source_name | actual_sql                                                                                                 |
+------------------+------------------------------------------------------------------------------------------------------------+
| ds_0             | SELECT * FROM t_order_item_0 UNION ALL SELECT * FROM t_order_item_1 UNION ALL SELECT * FROM t_order_item_2 |
| ds_1             | SELECT * FROM t_order_item_0 UNION ALL SELECT * FROM t_order_item_1 UNION ALL SELECT * FROM t_order_item_2 |
+------------------+------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
```
Now specify **user_id** in the query to get a single database route:
```
mysql> PREVIEW SELECT * FROM t_order WHERE user_id = 1;
+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
| data_source_name | actual_sql                                                                                                                                        |
+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
| ds_1             | SELECT * FROM t_order_0 WHERE user_id = 1 UNION ALL SELECT * FROM t_order_1 WHERE user_id = 1 UNION ALL SELECT * FROM t_order_2 WHERE user_id = 1 |
+------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.14 sec)
mysql> PREVIEW SELECT * FROM t_order_item WHERE user_id = 2;
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| data_source_name | actual_sql                                                                                                                                                       |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| ds_0             | SELECT * FROM t_order_item_0 WHERE user_id = 2 UNION ALL SELECT * FROM t_order_item_1 WHERE user_id = 2 UNION ALL SELECT * FROM t_order_item_2 WHERE user_id = 2 |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
```
Specify both **user_id** and **order_id** to get a single table route:
```
mysql> PREVIEW SELECT * FROM t_order WHERE user_id = 1 AND order_id = 1;
+------------------+------------------------------------------------------------+
| data_source_name | actual_sql                                                 |
+------------------+------------------------------------------------------------+
| ds_1             | SELECT * FROM t_order_1 WHERE user_id = 1 AND order_id = 1 |
+------------------+------------------------------------------------------------+
1 row in set (0.04 sec)
mysql> PREVIEW SELECT * FROM t_order_item WHERE user_id = 2 AND order_id = 5;
+------------------+-----------------------------------------------------------------+
| data_source_name | actual_sql                                                      |
+------------------+-----------------------------------------------------------------+
| ds_0             | SELECT * FROM t_order_item_2 WHERE user_id = 2 AND order_id = 5 |
+------------------+-----------------------------------------------------------------+
1 row in set (0.01 sec)
```
Single-table routes scan the fewest shard tables and offer the highest efficiency.
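The fan-out that PREVIEW reveals can be counted directly: with 2 databases and 3 table shards, a query with no shard key touches all 6 nodes, supplying user_id narrows it to 3, and supplying both keys narrows it to 1. A sketch of that enumeration (the names are illustrative, not a ShardingSphere API):

```java
import java.util.ArrayList;
import java.util.List;

public class RouteFanOut {
    // Enumerate the ds_x.t_order_y nodes a query may touch.
    // Pass null for a shard key that is absent from the WHERE clause.
    static List<String> routes(Long userId, Long orderId) {
        List<String> nodes = new ArrayList<>();
        for (int db = 0; db < 2; db++) {
            if (userId != null && userId % 2 != db) continue;
            for (int tbl = 0; tbl < 3; tbl++) {
                if (orderId != null && orderId % 3 != tbl) continue;
                nodes.add("ds_" + db + ".t_order_" + tbl);
            }
        }
        return nodes;
    }
}
```

This is why supplying the shard keys matters: every node skipped here is a real table the proxy does not have to scan.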
### Query unused resources
During system maintenance, you may need to release algorithms or storage resources that are no longer in use, or you may find that a resource you want to delete is still referenced and cannot be removed. DistSQL's `SHOW UNUSED RESOURCES` command can solve these problems:
```
mysql> ADD RESOURCE ds_2 (
    ->     HOST=127.0.0.1,
    ->     PORT=3306,
    ->     DB=demo_ds_2,
    ->     USER=root,
    ->     PASSWORD=123456
    -> );
Query OK, 0 rows affected (0.07 sec)
mysql> SHOW UNUSED RESOURCES\G;
*************************** 1. row ***************************
                           name: ds_2
                           type: MySQL
                           host: 127.0.0.1
                           port: 3306
                             db: demo_ds_2
connection_timeout_milliseconds: 30000
      idle_timeout_milliseconds: 60000
      max_lifetime_milliseconds: 2100000
                  max_pool_size: 50
                  min_pool_size: 1
                      read_only: false
               other_attributes: {"dataSourceProperties":{"cacheServerConfiguration":"true","elideSetAutoCommits":"true","useServerPrepStmts":"true","cachePrepStmts":"true","useSSL":"false","rewriteBatchedStatements":"true","cacheResultSetMetadata":"false","useLocalSessionState":"true","maintainTimeStats":"false","prepStmtCacheSize":"200000","tinyInt1isBit":"false","prepStmtCacheSqlLimit":"2048","serverTimezone":"UTC","netTimeoutForStreamingResults":"0","zeroDateTimeBehavior":"round"},"healthCheckProperties":{},"initializationFailTimeout":1,"validationTimeout":5000,"leakDetectionThreshold":0,"poolName":"HikariPool-8","registerMbeans":false,"allowPoolSuspension":false,"autoCommit":true,"isolateInternalQueries":false}
1 row in set (0.03 sec)
```
#### Query unused primary key generator
DistSQL can also display unused sharding key generators with the `SHOW UNUSED SHARDING KEY GENERATORS` command:
```
mysql> SHOW SHARDING KEY GENERATORS;
+--------------------------+-----------+-------+
| name                     | type      | props |
+--------------------------+-----------+-------+
| snowflake_key_generator  | snowflake | {}    |
| t_order_detail_snowflake | snowflake | {}    |
+--------------------------+-----------+-------+
2 rows in set (0.00 sec)
mysql> SHOW UNUSED SHARDING KEY GENERATORS;
Empty set (0.01 sec)
mysql> CREATE SHARDING KEY GENERATOR useless (
    -> TYPE(NAME=SNOWFLAKE)
    -> );
Query OK, 0 rows affected (0.04 sec)
mysql> SHOW UNUSED SHARDING KEY GENERATORS;
+---------+-----------+-------+
| name    | type      | props |
+---------+-----------+-------+
| useless | snowflake |       |
+---------+-----------+-------+
1 row in set (0.01 sec)
```
#### Query unused sharding algorithm
DistSQL can reveal unused sharding algorithms with (you guessed it) the `SHOW UNUSED SHARDING ALGORITHMS` command:
```
mysql> SHOW SHARDING ALGORITHMS;
+--------------------------------+--------+-----------------------------------------------------+
| name                           | type   | props                                               |
+--------------------------------+--------+-----------------------------------------------------+
| database_inline                | inline | algorithm-expression=ds_${user_id % 2}              |
| t_order_inline                 | inline | algorithm-expression=t_order_${order_id % 3}        |
| t_order_item_inline            | inline | algorithm-expression=t_order_item_${order_id % 3}   |
| t_order_detail_database_inline | inline | algorithm-expression=ds_${user_id % 2}              |
| t_order_detail_table_inline    | inline | algorithm-expression=t_order_detail_${order_id % 3} |
+--------------------------------+--------+-----------------------------------------------------+
5 rows in set (0.00 sec)
mysql> CREATE SHARDING ALGORITHM useless (
    -> TYPE(NAME=INLINE,PROPERTIES("algorithm-expression"="ds_${user_id % 2}"))
    -> );
Query OK, 0 rows affected (0.04 sec)
mysql> SHOW UNUSED SHARDING ALGORITHMS;
+---------+--------+----------------------------------------+
| name    | type   | props                                  |
+---------+--------+----------------------------------------+
| useless | inline | algorithm-expression=ds_${user_id % 2} |
+---------+--------+----------------------------------------+
1 row in set (0.00 sec)
```
#### Query rules that use the target storage resources
You can also see which rules use a given storage resource with `SHOW RULES USED RESOURCE`. All rules that reference the resource are returned, not just sharding rules.
```
mysql> DROP RESOURCE ds_0;
ERROR 1101 (C1101): Resource [ds_0] is still used by [ShardingRule].
mysql> SHOW RULES USED RESOURCE ds_0;
+----------+----------------+
| type     | name           |
+----------+----------------+
| sharding | t_order        |
| sharding | t_order_item   |
| sharding | t_order_detail |
+----------+----------------+
3 rows in set (0.00 sec)
```
#### Query sharding rules that use the target primary key generator
You can find sharding rules using a given key generator with `SHOW SHARDING TABLE RULES USED KEY GENERATOR`:
```
mysql> SHOW SHARDING KEY GENERATORS;
+--------------------------+-----------+-------+
| name                     | type      | props |
+--------------------------+-----------+-------+
| snowflake_key_generator  | snowflake | {}    |
| t_order_detail_snowflake | snowflake | {}    |
| useless                  | snowflake | {}    |
+--------------------------+-----------+-------+
3 rows in set (0.00 sec)
mysql> DROP SHARDING KEY GENERATOR snowflake_key_generator;
ERROR 1121 (C1121): Sharding key generator `[snowflake_key_generator]` in database `sharding_db` are still in used.
mysql> SHOW SHARDING TABLE RULES USED KEY GENERATOR snowflake_key_generator;
+-------+--------------+
| type  | name         |
+-------+--------------+
| table | t_order      |
| table | t_order_item |
+-------+--------------+
2 rows in set (0.00 sec)
```
#### Query sharding rules that use the target algorithm
Show sharding rules using a target algorithm with `SHOW SHARDING TABLE RULES USED ALGORITHM`:
```
mysql> SHOW SHARDING ALGORITHMS;
+--------------------------------+--------+-----------------------------------------------------+
| name                           | type   | props                                               |
+--------------------------------+--------+-----------------------------------------------------+
| database_inline                | inline | algorithm-expression=ds_${user_id % 2}              |
| t_order_inline                 | inline | algorithm-expression=t_order_${order_id % 3}        |
| t_order_item_inline            | inline | algorithm-expression=t_order_item_${order_id % 3}   |
| t_order_detail_database_inline | inline | algorithm-expression=ds_${user_id % 2}              |
| t_order_detail_table_inline    | inline | algorithm-expression=t_order_detail_${order_id % 3} |
| useless                        | inline | algorithm-expression=ds_${user_id % 2}              |
+--------------------------------+--------+-----------------------------------------------------+
6 rows in set (0.00 sec)
mysql> DROP SHARDING ALGORITHM t_order_detail_table_inline;
ERROR 1116 (C1116): Sharding algorithms `[t_order_detail_table_inline]` in database `sharding_db` are still in used.
mysql> SHOW SHARDING TABLE RULES USED ALGORITHM t_order_detail_table_inline;
+-------+----------------+
| type  | name           |
+-------+----------------+
| table | t_order_detail |
+-------+----------------+
1 row in set (0.00 sec)
```
### Make sharding better
DistSQL provides a flexible syntax to help simplify operations. In addition to the **INLINE** algorithm, DistSQL supports standard sharding, compound sharding, HINT sharding, and custom sharding algorithms.
If you have any questions or suggestions about [Apache ShardingSphere][10], please feel free to post them on [ShardingSphere GitHub][11].
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/dynamic-distributed-database-distsql
作者:[Raigor Jiang][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/raigor
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/BIZ_darwincloud_520x292_0311LL.png
[2]: https://opensource.com/sites/default/files/2022-09/1sharding.png
[3]: https://opensource.com/sites/default/files/2022-09/2shardingsphere.png
[4]: https://shardingsphere.apache.org/document/5.1.2/en/overview/
[5]: https://zookeeper.apache.org/
[6]: https://medium.com/nerd-for-tech/intro-to-distsql-an-open-source-more-powerful-sql-bada4099211?source=your_stories_page-------------------------------------
[7]: https://medium.com/geekculture/autotable-your-butler-like-sharding-configuration-tool-9a45dbb7e285
[8]: https://shardingsphere.apache.org/document/current/en/features/sharding/concept/#distributed-primary-key
[9]: https://shardingsphere.apache.org/document/5.1.2/en/features/sharding/concept/sharding/
[10]: https://shardingsphere.apache.org/
[11]: https://github.com/apache/shardingsphere

[#]: subject: "Install JDBC on Linux in 3 steps"
[#]: via: "https://opensource.com/article/22/9/install-jdbc-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Install JDBC on Linux in 3 steps
======
Install Java, install JDBC with Maven, and install the database. Then you're ready to interact with databases in your Java code.
![What is your favorite open source Java IDE?][1]
Image by: Pixabay. CC0.
When you write an application, it's common to require data storage. Sometimes you're storing assets your application needs to function, and other times you're storing user data, including preferences and save data. One way to store data is in a database, and in order to communicate between your code and a database, you need a database binding or connector for your language. For Java, a common database connector is JDBC (Java Database Connectivity).
### 1. Install Java
Of course, to develop with Java you must also have Java installed. I recommend [SDKman][2] for Linux, macOS, and WSL or Cygwin. For Windows, you can download OpenJDK from [developers.redhat.com][3].
### 2. Install JDBC with Maven
JDBC is an API, imported into your code with the statement `import java.sql.*`, but for it to be useful you must have a database driver and a database installed for it to interact with. The database driver you use and the database you want to communicate with must match: to interact with MySQL, you need a MySQL driver; to interact with SQLite3, you need the SQLite3 driver; and so on.
For this article, I use [PostgreSQL][4], but all the major databases, including [MariaDB][5] and [SQLite3][6], have JDBC drivers.
You can download JDBC for PostgreSQL from [jdbc.postgresql.org][7]. I use [Maven][8] to manage Java dependencies, so I include it in `pom.xml` (adjusting the version number for what's current on [Maven Central][9]):
```
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.5.0</version>
</dependency>
```
### 3. Install the database
You have to install the database you want to connect to through JDBC. There are several very good open source databases, but I had to choose one for this article, so I chose PostgreSQL.
To install PostgreSQL on Linux, use your software repository. On Fedora, CentOS, Mageia, and similar:
```
$ sudo dnf install postgresql postgresql-server
```
On Debian, Linux Mint, Elementary, and similar:
```
$ sudo apt install postgresql postgresql-contrib
```
### Database connectivity
If you're not using PostgreSQL, the same general process applies:
1. Install Java.
2. Find the JDBC driver for your database of choice and include it in your `pom.xml` file.
3. Install the database (server and client) on your development OS.
Three steps and you're ready to start writing code.
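With those three steps done, connecting from Java looks roughly like the following. This is a minimal sketch: the host, database name, and credentials are placeholders you must replace, and actually running it requires the PostgreSQL driver on the classpath plus a reachable server.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcDemo {
    // Standard PostgreSQL JDBC URL format: jdbc:postgresql://host:port/database
    static String url(String host, int port, String database) {
        return "jdbc:postgresql://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust for your environment.
        String jdbcUrl = url("localhost", 5432, "demo");
        try (Connection conn = DriverManager.getConnection(jdbcUrl, "demo_user", "demo_password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT version()")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

The try-with-resources block ensures the connection, statement, and result set are all closed even if the query throws, which is the idiomatic way to manage JDBC resources.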
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/install-jdbc-linux
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/java-coffee-mug.jpg
[2]: https://opensource.com/article/22/3/manage-java-versions-sdkman
[3]: https://developers.redhat.com/products/openjdk/download?intcmp=7013a000002qLH8AAM
[4]: http://LINK-TO-POSTGRESQL-INTRO-ARTICLE
[5]: https://www.redhat.com/sysadmin/mysql-mariadb-introduction
[6]: https://opensource.com/article/21/2/sqlite3-cheat-sheet
[7]: https://jdbc.postgresql.org/download.html
[8]: https://opensource.com/article/22/3/maven-manage-java-dependencies
[9]: https://mvnrepository.com/artifact/org.postgresql/postgresql

[#]: subject: "Drop your database for PostgreSQL"
[#]: via: "https://opensource.com/article/22/9/drop-your-database-for-postgresql"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Drop your database for PostgreSQL
======
Postgres is one of the most flexible databases available, and it's open source.
Databases are tools to store information in an organized but flexible way. A spreadsheet is essentially a database, but the constraints of a graphical application render most spreadsheet applications useless to programmers. With [Edge][3] and IoT devices becoming significant target platforms, developers need powerful but lightweight solutions for storing, processing, and querying large amounts of data. One of my favorite combinations is the PostgreSQL database and [Lua bindings][4], but the possibilities are endless. Whatever language you use, Postgres is a great choice for a database, but you need to know some basics before adopting it.
### Install Postgres
To install PostgreSQL on Linux, use your software repository. On Fedora, CentOS, Mageia, and similar:
```
$ sudo dnf install postgresql postgresql-server
```
On Debian, Linux Mint, Elementary, and similar:
```
$ sudo apt install postgresql postgresql-contrib
```
On macOS and Windows, download an installer from [postgresql.org][5].
### Setting up Postgres
Most distributions install the Postgres database without *starting* it, but provide you with a script or [systemd service][6] to help it start reliably. However, before you start PostgreSQL, you must create a database cluster.
#### Fedora
On Fedora, CentOS, or similar, there's a Postgres setup script provided in the Postgres package. Run this script for easy configuration:
```
$ sudo /usr/bin/postgresql-setup --initdb
[sudo] password:
 * Initializing database in '/var/lib/pgsql/data'
 * Initialized, logs are in /var/lib/pgsql/initdb_postgresql.log
```
#### Debian
On Debian-based distributions, setup is performed automatically by `apt` during installation.
#### Everything else
Finally, if you're running something else, then you can just use the toolchain provided by Postgres itself. The `initdb` command creates a database cluster, but you must run it as the `postgres` user, an identity you may temporarily assume using `sudo`:
```
$ sudo -u postgres initdb -D /var/lib/pgsql/data \
  --locale en_US.UTF-8 --auth md5 --pwprompt
```
### Start Postgres
Now that a cluster exists, start the Postgres server using either the command provided to you in the output of `initdb` or with systemd:
```
$ sudo systemctl start postgresql
```
### Creating a database user
To create a Postgres user, use the `createuser` command. The `postgres` user is the superuser of the Postgres install, so creating new users must be done as `postgres`:
```
$ sudo -u postgres createuser --interactive --password bogus
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) y
Shall the new role be allowed to create more new roles? (y/n) n
Password:
```
### Create a database
To create a new database, use the `createdb` command. In this example, I create the database `exampledb` and assign ownership of it to the user `bogus`:
```
$ createdb exampledb --owner bogus
```
### Interacting with PostgreSQL
You can interact with a PostgreSQL database using the `psql` command. This command provides an interactive shell so you can view and update your databases. To connect to a database, specify the user and database you want to use:
```
$ psql --user bogus exampledb
psql (XX.Y)
Type "help" for help.
exampledb=>
```
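Before going further, it's worth knowing a few of psql's backslash meta-commands, which psql handles itself rather than sending to the server (output omitted here, and it varies by version): `\conninfo` shows the current connection details, `\dt` lists the tables in the current database, `\?` lists all available meta-commands, and `\q` quits the shell:

```
exampledb=> \conninfo
exampledb=> \dt
exampledb=> \?
exampledb=> \q
```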
### Create a table
Databases contain tables, which can be visualized as a spreadsheet. There's a series of rows (called *records* in a database) and columns. The intersection of a row and a column is called a *field*.
The Structured Query Language (SQL) is named after what it provides: A method to inquire about the contents of a database in a predictable and consistent syntax to receive useful results.
Currently, your database is empty, devoid of any tables. You can create a table with the `CREATE` query. It's useful to combine this with the `IF NOT EXISTS` statement, which prevents PostgreSQL from clobbering an existing table.
Before you create a table, think about what kind of data (the "data type" in SQL terminology) you anticipate the table will contain. In this example, I create a table with one column for a unique identifier and one column for some arbitrary text up to nine characters.
```
exampledb=> CREATE TABLE IF NOT EXISTS my_sample_table(
exampledb(> id SERIAL,
exampledb(> wordlist VARCHAR(9) NOT NULL
);
```
The `SERIAL` keyword isn't actually a data type. It's [special notation in PostgreSQL][7] that creates an auto-incrementing integer field. The `VARCHAR` keyword is a data type indicating a variable number of characters within a limit. In this code, I've specified a maximum of 9 characters. There are lots of data types in PostgreSQL, so refer to the project documentation for a list of options.
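To illustrate a few more of those options, here's a hypothetical table (the table and column names are invented for this example) using some other common PostgreSQL types: `TEXT` for strings with no declared limit, `INTEGER` and `NUMERIC` for whole and exact decimal numbers, `BOOLEAN` for flags, and `TIMESTAMP` for points in time:

```
exampledb=> CREATE TABLE IF NOT EXISTS my_type_demo(
exampledb(> id SERIAL,
exampledb(> label TEXT NOT NULL,
exampledb(> quantity INTEGER,
exampledb(> price NUMERIC(10,2),
exampledb(> in_stock BOOLEAN,
exampledb(> added_at TIMESTAMP
exampledb(> );
```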
### Insert data
You can populate your new table with some sample data by using the `INSERT` SQL keyword:
```
exampledb=> INSERT INTO my_sample_table (wordlist) VALUES ('Alice');
INSERT 0 1
```
Your data entry fails, should you attempt to put more than 9 characters into the `wordlist` field:
```
exampledb=> INSERT INTO my_sample_table (WORDLIST) VALUES ('Alexandria');
ERROR:  VALUE too long FOR TYPE CHARACTER VARYING(9)
```
### Alter a table or column
When you need to change a field definition, you use the `ALTER` SQL keyword. For instance, should you decide that a nine-character limit for `wordlist` is too restrictive, you can increase its allowance by changing its data type:
```
exampledb=> ALTER TABLE my_sample_table
ALTER COLUMN wordlist SET DATA TYPE VARCHAR(10);
ALTER TABLE
exampledb=> INSERT INTO my_sample_table (WORDLIST) VALUES ('Alexandria');
INSERT 0 1
```
### View data in a table
SQL is a query language, so you view the contents of a database through queries. Queries can be simple, or they can involve joining complex relationships between several different tables. To see everything in a table, use the `SELECT` keyword on `*` (an asterisk is a wildcard):
```
exampledb=> SELECT * FROM my_sample_table;
 id |  wordlist
----+------------
  1 | Alice
  2 | Bob
  3 | Alexandria
(3 ROWS)
```
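You don't have to select everything. Building on the sample data above, a `WHERE` clause filters rows and `ORDER BY` sorts the result; `LIKE 'Al%'` matches values beginning with `Al`:

```
exampledb=> SELECT * FROM my_sample_table
exampledb-> WHERE wordlist LIKE 'Al%'
exampledb-> ORDER BY wordlist;
 id |  wordlist
----+------------
  3 | Alexandria
  1 | Alice
(2 rows)
```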
### More data
PostgreSQL can handle a lot of data, but as with any database, the key to success is how you design your database for storage and what you do with the data once you've got it stored. A relatively large public data set can be found on [OECD.org][8], and using this you can try some advanced database techniques.
First, download the data as comma-separated values (CSV) and save the file as `land-cover.csv` in your `Downloads` folder.
Browse the data in a text editor or spreadsheet application to get an idea of what columns there are, and what kind of data each column contains. Look at the data carefully and keep an eye out for exceptions to an apparent rule. For instance, the `COU` column, containing a country code such as `AUS` for Australia and `GRC` for Greece, tends to be 3 characters until the oddity `BRIICS`.
Once you understand the data you're working with, you can prepare a Postgres database:
```
$ createdb landcoverdb --owner bogus
$ psql --user bogus landcoverdb
landcoverdb=> create table land_cover(
country_code varchar(6),
country_name varchar(76),
small_subnational_region_code varchar(5),
small_subnational_region_name varchar(14),
large_subnational_region_code varchar(17),
large_subnational_region_name varchar(44),
measure_code varchar(13),
measure_name varchar(29),
land_cover_class_code varchar(17),
land_cover_class_name varchar(19),
year_code integer,
year_value integer,
unit_code varchar(3),
unit_name varchar(17),
power_code integer,
power_name varchar(9),
reference_period_code varchar(1),
reference_period_name varchar(1),
value float(8),
flag_codes varchar(1),
flag_names varchar(1));
```
### Importing data
Postgres can import CSV data directly using the special metacommand `\copy`:
```
landcoverdb=> \copy land_cover from '~/land-cover.csv' with csv header delimiter ','
COPY 22113
```
That's 22,113 records imported. Seems like a good start!
### Querying data
A broad `SELECT` statement to see all columns of all 22,113 records is possible, and Postgres very nicely pipes the output to a screen pager so you can scroll through the output at a leisurely pace. However, using advanced SQL you can get some useful views of what's otherwise some pretty raw data.
```
landcoverdb=> SELECT
    lcm.country_name,
    lcm.year_value,
    SUM(lcm.value) sum_value
FROM land_cover lcm
JOIN (
    SELECT
        country_name,
        large_subnational_region_name,
        small_subnational_region_name,
        MAX(year_value) max_year_value
    FROM land_cover
    GROUP BY country_name,
        large_subnational_region_name,
        small_subnational_region_name
) AS lcmyv
ON
    lcm.country_name = lcmyv.country_name AND
    lcm.large_subnational_region_name = lcmyv.large_subnational_region_name AND
    lcm.small_subnational_region_name = lcmyv.small_subnational_region_name AND
    lcm.year_value = lcmyv.max_year_value
GROUP BY lcm.country_name,
    lcm.large_subnational_region_name,
    lcm.small_subnational_region_name,
    lcm.year_value
ORDER BY country_name,
    year_value;
```
Here's some sample output:
```
---------------+------------+------------
 Afghanistan    |       2019 |  743.48425
 Albania        |       2019 |  128.82532
 Algeria        |       2019 |  2417.3281
 American Samoa |       2019 |   100.2007
 Andorra        |       2019 |  100.45613
 Angola         |       2019 |  1354.2192
 Anguilla       |       2019 | 100.078514
 Antarctica     |       2019 |  12561.907
[...]
```
SQL is a rich language, and so it's beyond the scope of this article. Read through the SQL code and see if you can modify it to provide a different set of data.
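As one starting point, here's a simpler variation (a sketch assuming the same `land_cover` table): instead of summing the latest year per region, it averages the raw `value` column per country and shows the ten largest averages. The `::numeric` cast is needed because `ROUND` with a precision argument expects a `numeric` input:

```
landcoverdb=> SELECT country_name,
landcoverdb-> ROUND(AVG(value)::numeric, 2) avg_value
landcoverdb-> FROM land_cover
landcoverdb-> GROUP BY country_name
landcoverdb-> ORDER BY avg_value DESC
landcoverdb-> LIMIT 10;
```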
### Open database
PostgreSQL is one of the great open source databases. With it, you can design repositories for structured data, and then use SQL to view it in different ways so you can gain fresh perspectives on that data. Postgres integrates with many languages, including Python, Lua, Groovy, Java, and more, so regardless of your toolset, you can probably make use of this excellent database.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/drop-your-database-for-postgresql
作者:[Seth Kenlon][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png
[2]: https://creativecommons.org/licenses/by/3.0/us/
[3]: https://www.redhat.com/en/topics/edge-computing/what-is-edge-computing?intcmp=7013a000002qLH8AAM
[4]: https://github.com/arcapos/luapgsql
[5]: https://www.postgresql.org/download/
[6]: https://opensource.com/article/21/4/sysadmins-love-systemd
[7]: https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-SERIAL
[8]: https://stats.oecd.org/Index.aspx?DataSetCode=LAND_COVER


@ -0,0 +1,105 @@
[#]: subject: "Wolfi is a Linux Un(distro) Built for Software Supply Chain Security"
[#]: via: "https://news.itsfoss.com/wolfi-linux-undistro/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Wolfi is a Linux Un(distro) Built for Software Supply Chain Security
======
Wolfi is a Linux undistro that focuses on resolving security issues with the software supply chain. Explore more here.
![Wolfi is a Linux Un(distro) Built for Software Supply Chain Security][1]
The software supply chain includes everything that goes into developing, building, storing, and running a piece of software, along with its dependencies.
As per the [State of the Software Supply Chain 2021 report][2], between 2020 and 2021 alone, attacks on the software supply chain increased by a shocking **650%**.
![][3]
That's a staggering percentage. 🤯
So, everyone in the industry, ranging from code platforms like [GitHub][4] to tech giants like Google, has been putting their best efforts into coming up with various initiatives to enhance the security of the software supply chain.
One example of such initiatives:
[Google Sponsors $1 Million to Fund Secure Open Source Program by The Linux Foundation][5]
📢 To join the efforts, [Chainguard][7], a security firm specializing in open-source software and cloud-native development, has introduced a **Linux distro designed to secure the software supply chain**.
💡 They call it an "**Undistro**" because it is not a full-fledged Linux distribution to run on bare metal.
Instead, it is a **container-focused Linux distribution**. So, let me tell you more about it.
### Wolfi: A Container-specific Linux Distribution
The world's smallest octopus is named Wolfi, which inspired Chainguard to use the same name to represent the minimalism and flexibility of this Linux distribution.
Wolfi aims to address issues with containers, which are mainly used to build and ship software.
Furthermore, Chainguard mentions that there are several issues with running containers; some include:
* Running vulnerable container images.
* Distributions used in container lag behind upstream versions.
* Container images include more software than needed, increasing the attack surface.
* Not designed to meet compliance requirements or standards like [SLSA][8].
So, Wolfi is a distro that aims to solve these problems by being a solution **designed for container/cloud-native environments** while **minimizing dependencies** as much as possible.
It provides a secure foundation that reduces the effort/time to review and mitigate security vulnerabilities while increasing productivity.
Chainguard explains this as follows:
> Building a new, container-specific distribution offers the chance to vastly simplify things by dropping support for traditional distribution features that are now irrelevant (like packaging Linux itself!), and other things like SBOMs become simpler when we can build them in from the start. We can also embrace the immutable nature of containers and avoid package updates altogether, instead preferring to rebuild from scratch with new versions.
### Key Features of Wolfi
To achieve its purpose, Wolfi has a few key highlights for you to encourage using it:
* Provides a high-quality, build-time SBOM as standard for all packages.
* Packages are designed to be granular and independent, to support minimal images.
* Uses the proven and reliable APK package format.
* Fully declarative and reproducible build system.
* Designed to support glibc and musl.
If you are not familiar with securing the software supply chain, this might go over your head.
![Securing your software supply chain][11]
So, I suggest looking at [Wikipedia][12] to understand the terms. The video above should also help you learn more.
### Try Wolfi
To try Chainguard images using the Wolfi undistro, you can head to its [GitHub page][13] to find all the technical instructions.
[Try Wolfi][14]
💬 *What do you think about Wolfi? Do you think it will solve the problem of securing the software supply chain? Let us know your thoughts in the comments.*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/wolfi-linux-undistro/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/wolfi-distro-software-supply-chain-security.png
[2]: https://www.sonatype.com/resources/state-of-the-software-supply-chain-2021
[3]: https://news.itsfoss.com/content/images/2022/09/software-chain-attacks.jpg
[4]: https://news.itsfoss.com/gitlab-open-source-tool-malicious-code/
[5]: https://news.itsfoss.com/google-sos-sponsor/
[7]: https://www.chainguard.dev/
[8]: https://slsa.dev/
[11]: https://youtu.be/Dg-hD4HHKT8
[12]: https://en.wikipedia.org/wiki/Software_supply_chain
[13]: https://github.com/chainguard-images/
[14]: https://github.com/chainguard-images/
[15]: https://www.humblebundle.com/books/linux-no-starch-press-books?partner=itsfoss


@ -0,0 +1,107 @@
[#]: subject: "Audacity 3.2 Released With VST3 Plugins and Apple Silicon Support"
[#]: via: "https://news.itsfoss.com/audacity-3-2-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: "littlebirdnest"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Audacity 3.2 发布,增加了 VST3 插件和 Apple Silicon 支持
======
Audacity 的新更新带来了一些重要功能,例如 VST3 插件支持和新效果按钮。
![Audacity 3.2 Released With VST3 Plugins and Apple Silicon Support][1]
Audacity 3.2 来了!这个最流行的自由开源音频编辑和录制工具之一迎来了更新。
此版本是在一年多前发布的上一个主要版本 [Audacity 3.0](https://news.itsfoss.com/audacity-3-0-release/) 之后推出的。
即使在 [去年引起争议](https://news.itsfoss.com/audacity-fiasco-fork/) 之后,它仍然是目前最好的 Linux 音频编辑器之一。
此版本有许多新增功能,例如对 **VST3 插件、Apple 芯片和 FFMPEG 5.0 的支持**等等。
让我们快速了解一下 Audacity 的新功能。
### **Audacity 3.2:有什么新功能?**
这是一个小版本更新,但带来了重大的变化和新增内容。
一些主要亮点包括:
- 支持 VST3 插件。
- 苹果芯片支持。
- FFMPEG 5.0。
- VST3、LV2、音频单元和 LADSPA 的实时功能。
- 用于实时效果的专用按钮。
- 删除 Linux 系统的 JACK 要求。
- 对用户界面的各种调整。
### **推荐阅读📖**
[Linux 上的最佳音频编辑器 - It's FOSS](https://itsfoss.com/best-audio-editors-linux/)
### **新效果按钮**
![audacity 3.2 effects button][6]
Audacity 的界面中添加了一个专用的实时效果按钮,使用户可以轻松地即时下载和应用效果。
您可以访问 [官方 wiki](https://support.audacityteam.org/audio-editing/using-realtime-effects) 了解有关此功能的更多信息。
### **VST3 插件支持**
Audacity 3.2 还引入了对 VST3 插件的支持,这使用户能够利用 VST3 的高级音频处理功能,包括高效利用 CPU、更好地处理 MIDI、支持 MIDI I/O 等等。
### **将音频上传到云端**
![audacity 3.2 share audio button][8]
添加到 Audacity 的另一个令人兴奋的功能是,可以直接从应用程序将音频分享到 Audacity 新的云音频平台 [audio.com](https://audio.com/)。
### **FFMPEG 5.0**
Audacity 现在还支持 FFMPEG 5.0;这确保了用户可以利用最新的开源音频/视频库套件。
### **苹果芯片支持**
此版本还为基于 arm64 架构的 Apple Silicon 带来了 macOS 支持。
### **下载 Audacity 3.2**
您可以通过 [官方网站](https://www.audacityteam.org/download/)、[GitHub 发布页面](https://github.com/audacity/audacity/releases)、[Flathub](https://flathub.org/apps/details/org.audacityteam.Audacity) 或 [Snap 商店](https://snapcraft.io/audacity) 下载最新的 Audacity 版本。
[下载 Audacity 3.2](https://www.audacityteam.org/download/)
💬 *你会尝试 Audacity 3.2 吗?我认为 VST3 插件支持会是一个变革性的功能。你怎么看?请在下面的评论区分享你的想法!*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/audacity-3-2-release/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[littlebirdnest](https://github.com/littlebirdnest)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/audacity-3-2-release.jpg
[2]: https://news.itsfoss.com/audacity-3-0-release/
[3]: https://news.itsfoss.com/audacity-fiasco-fork/
[4]: https://itsfoss.com/best-audio-editors-linux/
[6]: https://news.itsfoss.com/content/images/2022/09/Audacity_3.2_effects_button.gif
[7]: https://support.audacityteam.org/audio-editing/using-realtime-effects
[8]: https://news.itsfoss.com/content/images/2022/09/Audacity_3.2_share_audio_button.png
[9]: https://audio.com/
[10]: https://www.audacityteam.org/download/
[11]: https://github.com/audacity/audacity/releases
[12]: https://flathub.org/apps/details/org.audacityteam.Audacity
[13]: https://snapcraft.io/audacity
[14]: https://www.audacityteam.org/download/


@ -0,0 +1,102 @@
[#]: subject: "UbuntuDDE Remix 22.04 LTS Released!"
[#]: via: "https://news.itsfoss.com/ubuntudde-remix-22-04-released/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: "littlebirdnest"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
UbuntuDDE Remix 22.04 LTS 发布!
======
UbuntuDDE 22.04 LTS 与 Linux Kernel 5.15、DDE Store 和几个升级版本一起发布。
![UbuntuDDE Remix 22.04 LTS Released!][1]
UbuntuDDE Remix 是一个在 Ubuntu 之上集成深度桌面环境的发行版。不想尝试 Deepin 发行版但喜欢其用户界面的用户可以尝试一下。
以 [Ubuntu 22.04 LTS](https://news.itsfoss.com/ubuntu-22-04-release/) 为基础,这是一次重大升级。
让我们看看他们能提供什么。
他们通过这个发行版为 Ubuntu 添加了许多新东西,例如“**大搜索栏**”、基于 GTK 的应用程序的升级版本、**新壁纸**、**DDE 应用程序商店**等等。
让我们看看 UbuntuDDE Remix 22.04 带来的一些关键变化。
### **DDE 大搜索**
![ubuntudde remix 22.04 grand search][4]
他们称之为“DDE Grand Search”这是一个快速应用启动器。
这使用户能够快速搜索任何内容,无论是应用程序、文件、文件夹,甚至是简单的网络搜索。它由键盘快捷键(**Shift+空格键**)激活。
### **Linux 内核 5.15**
该发行版还具有 Linux Kernel 5.15,它为各种功能打开了大门,例如对 Intel Alder Lake CPU 的增强支持、对 NTFS3 驱动程序的改进、改进的 Apple M1 支持等等。
我们之前介绍了此 Linux 内核版本的亮点,您可以查看它以获取更多信息:
[Linux 内核 5.15 LTS 发布!为 Linux 带来改进的 NTFS 驱动程序](https://news.itsfoss.com/linux-kernel-5-15-release/)
### **新安装程序重新设计**
![ubuntudde remix 22.04 installer][7]
UbuntuDDE Remix 的安装程序似乎借鉴了 [Qt 安装程序框架](https://doc.qt.io/qtinstallerframework/ifw-overview.html) 的设计:它为 Calamares 安装程序提供了基于 Qt 的样式,并采用大家非常熟悉的布局和所有常用选项,让发行版的安装变得轻松。
### **新壁纸**
![ubuntudde remix 22.04 new wallpapers][9]
该版本还包括许多新壁纸供您使用。
### **🛠️其他变化**
您可以期待它附带的明显的 Deepin 应用程序和好东西。一些值得注意的提及包括:
- 预装 DDE 应用商店
- LibreOffice 7.3.6.2
- 通过 OTA 更新定期进行软件更新
- 包含升级的基于 DTK 的应用程序如深度音乐、深度终端、Boot Maker、系统监视器等
### **下载 UbuntuDDE Remix 22.04**
您可以前往 [官方下载页面](https://ubuntudde.com/download/) 下载 UbuntuDDE Remix 22.04 的 ISO 文件。
[下载 UbuntuDDE Remix 22.04](https://bit.ly/ubuntudde-22-04-fosshost)
如果您正在寻找一个 Live USB 创建工具来安装 UbuntuDDE Remix请阅读本指南以轻松创建一个
[Linux 上最好的 Live USB 创建工具](https://itsfoss.com/live-usb-creator-linux/)
请注意,它不是官方的 Ubuntu 风味版因此名为“Remix”但搭载深度桌面的它尝试起来着实令人期待。
*💬你怎么看?你想在 Ubuntu 上体验深度桌面吗?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ubuntudde-remix-22-04-released/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[littlebirdnest](https://github.com/littlebirdnest)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/ubunturemixdde-22-04-lts.png
[2]: https://news.itsfoss.com/ubuntu-22-04-release/
[3]: https://news.itsfoss.com/content/images/2022/09/UbuntuDDE_Remix_22.04_Desktop.png
[4]: https://news.itsfoss.com/content/images/2022/09/UbuntuDDE_Remix_22.04_Grand-Search.png
[5]: https://news.itsfoss.com/linux-kernel-5-15-release/
[7]: https://news.itsfoss.com/content/images/2022/09/UbuntuDDE_Remix_22.04_Installer.png
[8]: https://doc.qt.io/qtinstallerframework/ifw-overview.html
[9]: https://news.itsfoss.com/content/images/2022/09/UbuntuDDE_Remix_22.04_New_Wallpapers.png
[10]: https://ubuntudde.com/download/
[11]: https://bit.ly/ubuntudde-22-04-fosshost
[12]: https://itsfoss.com/live-usb-creator-linux/
[14]: https://www.humblebundle.com/books/linux-no-starch-press-books?partner=itsfoss


@ -1,193 +0,0 @@
[#]: subject: "20 Facts About Linus Torvalds, the Creator of Linux and Git"
[#]: via: "https://itsfoss.com/linus-torvalds-facts/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "gpchn"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
关于 Linux 和 Git 的创造者 Linus Torvalds 的 20 个事实
======
*简介:一些已知的,或鲜为人知的东西——这里有 20 个关于 Linux 内核创造者 Linus Torvalds 的事实。*
![Linus TorvaldsLinux 和 Git 的创造者][1]
[Linus Torvalds][2] 是一名芬兰学生,他在 1991 年攻读硕士学位时开发了一个类 Unix 操作系统。从那时起,它引发了一场革命:今天,它为大多数 Web 服务器、许多嵌入式设备和 [500 强超级计算机][3] 中的每一台提供支持。
我已经写过一些鲜为人知的 [关于 Linux 的事实][4]。但这篇文章不是关于 Linux 的而是它的创造者Linus Torvalds。
通过阅读他的传记 [Just for Fun][5],我了解了有关 Torvalds 的许多事情。如果你有兴趣,你可以[从亚马逊订购一份传记][6]。(这是一个 [附属][7] 链接。)
### 关于 Linus Torvalds 的 20 个有趣事实
你可能已经知道一些关于 Linus 的事实,但是通过阅读这篇文章,你很有可能会了解一些关于他的新事实。
#### 1. 以诺贝尔奖获得者的名字命名
Linus Benedict Torvalds 于 1969 年 12 月 28 日出生于赫尔辛基。他来自一个记者家庭。他的父亲 [Nils Torvalds][11] 是芬兰政治家,可能是未来选举的总统候选人。
他以 [Linus Pauling][12] 的名字命名,他是诺贝尔化学与和平奖的双奖获得者。
#### 2.世界上所有的Torvalds都是亲戚
虽然您可能会找到几个名字为 Linus 的人,但您不会找到很多姓 Torvalds 的人——因为“正确”的拼写实际上是 Torvald没有 s。他的祖父将姓氏从 Torvald 改为 Torvalds并在末尾添加了一个“s”。于是Torvalds 王朝(如果我可以这么称呼它的话)开始了。
由于这是一个不寻常的姓氏,所以世界上只有不到 30 个 Torvalds而且他们都是亲戚这是 Linus Torvalds 在他的传记中说的。
![Linus Torvalds 和姐姐 Sara Torvalds][13]
#### 3. Commodore Vic 20 是他的第一台电脑
10 岁时Linus 开始在他外祖父的 Commodore Vic 20 上使用 BASIC 编写程序。这是他发现自己对计算机和编程的热爱的时候。
#### 4. Linus Torwalds 少尉
尽管他更喜欢花时间在电脑上而不是体育活动上,但他必须参加强制性的军事训练。他担任少尉军衔。
#### 5. 他创建 Linux 是因为他没有钱购买 UNIX
1991 年初,由于对 [MS-DOS][14] 和 [MINIX][15] 不满Torvalds 想购买 UNIX 系统。对我们来说幸运的是,他没有足够的钱。因此,他决定从头开始制作自己的 UNIX 克隆系统。
#### 6. Linux 可以被称为 Freax
1991 年 9 月Linus 发布了 Linux意为“Linus 的 MINIX”并鼓励他的同事使用其源代码进行更广泛的分发。
Linus 认为 Linux 这个名字太自负了。他想把它改名为 Freax基于 free、freak 和 MINIX但他的朋友 Ari Lemmke 已经在他的 FTP 服务器上创建了一个名为 Linux 的目录。因此Linux 的名称才得以保留。
#### 7. Linux 是他在大学的主要项目
“Linux一种便携式操作系统”是他的硕士论文题目。
#### 8. 他娶了他的学生
1993年他在赫尔辛基大学任教时把写电子邮件的任务作为家庭作业交给了学生。是的当时撰写电子邮件是一件大事。
一位名叫 Tove Monni 的女学生通过向他发送一封电子邮件邀请他约会来完成了这项任务。他接受了,三年后,他们三个女儿中的第一个出生了。
我应该说他开始了网恋的潮流吗?嗯……还是不了!让我们把它留在那里 ;)
![Linus Torvalds 和他的妻子 Tove Monni Torvalds][16]
#### 9. Linus 有一颗以他的名字命名的小行星
他的名字获得了无数奖项,包括一颗名为 [9793 Torvalds][17] 的小行星。
#### 10. Linus 不得不为 Linux 的商标而战
Linux 是注册在 Linus Torvalds 名下的商标。Torvalds 最初并不在意这个商标,但在 1994 年 8 月William R. Della Croce, Jr. 注册了 Linux 商标,并开始向 Linux 开发人员索要版税。Torvalds 随后起诉了他,并于 1997 年就此案达成和解。
[Linus Torvalds 是谁2分钟了解他][18]
#### 11. Steve Jobs 希望他在 Apple 的 macOS 上工作
2000 年Apple 的创始人 [Steve Jobs 邀请他在 Apple 的 macOS 上工作][19]。Linus 拒绝了丰厚的报价并继续致力于开发 Linux 内核。
#### 12. Linus 还创建了 Git
大多数人都知道 Linus Torvalds 创建 Linux 内核。但他还创建了 [Git][20],这是一个广泛用于全球软件开发的版本控制系统。
直到 2005 年,(当时)专有服务 [BitKeeper][21] 被用于 Linux 内核开发。当 Bitkeeper 关闭其免费服务时Linus Torvalds 自己创建了 Git因为其他版本控制系统都不能满足他的需求。
#### 13. 这些天Linus 几乎不编程
尽管 Linus 全职从事 Linux 内核工作但他几乎不再为它编写任何代码。事实上Linux 内核中的大部分代码都来自世界各地的贡献者。在内核维护人员的帮助下,他确保每个版本都能顺利进行。
#### 14. Torvalds 讨厌 C++
Linus Torvalds 极其的[不喜欢 C++ 编程语言][22],他对此非常直言不讳。他开玩笑说 Linux 内核的编译速度比 C++ 程序快。
#### 15. 即使是 Linus Torvalds 也发现 Linux 难以安装(你现在可以自我感觉良好了)
几年前Linus 说过 [他发现 Debian 难以安装][23]。他[已知在他的主力工作设备上使用 Fedora][24]。
#### 16. 他喜欢水肺潜水
Linus Torvalds 喜欢水肺潜水。他甚至创造了 [Subsurface][25],一种供水肺潜水员使用的潜水记录工具。您会惊讶地发现,有时他甚至会在其论坛上回答一般性问题。
![穿着潜水装备的 Linus Torvalds][26]
#### 17. 满嘴脏话的 Torvalds 改善了他的行为
Torvalds 以在 Linux 内核邮件列表中使用 [轻度脏话][27] 而闻名,这遭到了一些业内人士的批评。但是,很难批评他对“[F**k you, NVIDIA][28]”的玩笑,因为它促使 NVIDIA 更好地适配 Linux 内核。
2018 年,[Torvalds 暂停了 Linux 内核开发以改善他的行为][29]。这是在他签署有争议的 [Linux 内核开发人员行为准则][30] 之前完成的。
![Linus Torvalds 对 Nvidia 的中指:去你的 Nvidia][31]
#### 18. 他太害羞了,不敢在公共场合讲话
Linus 对公开演讲感到不舒服。他不参加很多活动。而当他这样做时,他更喜欢坐下来接受主持人的采访。这是他最喜欢的公开演讲方式。
#### 19. 不是社交媒体爱好者
[Google Plus][32] 是他使用过的唯一社交媒体平台。他甚至在空闲时间在那里花了一些时间[审查小工具][33]。Google Plus 现已停产,因此他没有其他社交媒体帐户。
#### 20. Torvalds 定居美国
Linus 于 1997 年移居美国,并与他的妻子 Tove 和他们的三个女儿在那里定居。他于 2010 年成为美国公民。目前,作为 [Linux 基金会][34] 的一部分,他全职从事 Linux 内核工作。
很难说 Linus Torvalds 的净资产是多少,或者 Linus Torvalds 的收入是多少,因为这些信息从未公开过。
![Tove 和 Linus Torvalds 和他们的女儿 Patricia、Daniela 和 Celeste][35]
图片来源:[opensource.com][36]
如果您有兴趣了解更多有关 Linus Torvalds 早期生活的信息,我建议您阅读他的传记,书名为 [Just for Fun][37]。
*免责声明:这里的一些图片来源于互联网,我没有图像的版权,我也不打算用这篇文章侵犯 Torvalds 家族的隐私。*
--------------------------------------------------------------------------------
via: https://itsfoss.com/linus-torvalds-facts/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[gpchn](https://github.com/gpchn)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2017/12/Linus-Torvalds-featured-800x450.png
[2]: https://en.wikipedia.org/wiki/Linus_Torvalds
[3]: https://itsfoss.com/linux-runs-top-supercomputers/
[4]: https://itsfoss.com/facts-linux-kernel/
[5]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[6]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[7]: https://itsfoss.com/affiliate-policy/
[8]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[9]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[10]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[11]: https://en.wikipedia.org/wiki/Nils_Torvalds
[12]: https://en.wikipedia.org/wiki/Linus_Pauling
[13]: https://itsfoss.com/wp-content/uploads/2017/12/Linus_and_sara_Torvalds.jpg
[14]: https://en.wikipedia.org/wiki/MS-DOS
[15]: https://www.minix3.org/
[16]: https://itsfoss.com/wp-content/uploads/2017/12/Linus_torvalds-wife-800x533.jpg
[17]: http://enacademic.com/dic.nsf/enwiki/1928421
[18]: https://youtu.be/eE-ovSOQK0Y
[19]: https://www.macrumors.com/2012/03/22/steve-jobs-tried-to-hire-linux-creator-linus-torvalds-to-work-on-os-x/
[20]: https://en.wikipedia.org/wiki/Git
[21]: https://www.bitkeeper.org/
[22]: https://lwn.net/Articles/249460/
[23]: https://www.youtube.com/watch?v=qHGTs1NSB1s
[24]: https://plus.google.com/+LinusTorvalds/posts/Wh3qTjMMbLC
[25]: https://subsurface-divelog.org/
[26]: https://itsfoss.com/wp-content/uploads/2017/12/Linus_Torvalds_in_SCUBA_gear.jpg
[27]: https://www.theregister.co.uk/2016/08/26/linus_torvalds_calls_own_lawyers_nasty_festering_disease/
[28]: https://www.youtube.com/watch?v=_36yNWw_07g
[29]: https://itsfoss.com/torvalds-takes-a-break-from-linux/
[30]: https://itsfoss.com/linux-code-of-conduct/
[31]: https://itsfoss.com/wp-content/uploads/2012/09/Linus-Torvalds-Fuck-You-Nvidia.jpg
[32]: https://plus.google.com/+LinusTorvalds
[33]: https://plus.google.com/collection/4lfbIE
[34]: https://www.linuxfoundation.org/
[35]: https://itsfoss.com/wp-content/uploads/2017/12/patriciatorvalds.jpg
[36]: https://opensource.com/life/15/8/patricia-torvalds-interview
[37]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[38]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[39]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[40]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID

View File

@ -0,0 +1,255 @@
[#]: collector: (lujun9972)
[#]: translator: (aREversez)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Real Novelty of the ARPANET)
[#]: via: (https://twobithistory.org/2021/02/07/arpanet.html)
[#]: author: (Two-Bit History https://twobithistory.org)
ARPANET 的真正创新之处
======
如果你在搜索引擎中输入“ARPANET”搜索相关图片你会看到许多地图的图片上面是上世纪六十年代末七十年代初 [美国政府创建的研究网络][1],该网络不断延伸扩展,横跨了整个美国。我猜很多人第一次了解到 ARPANET 的时候都看过这种地图。
可以说,这些地图很有意思,毕竟我们很难想象过去连接网络的计算机是那么少,就连如此低保真的图片都可以表示出美国全部机器的所在位置(这里的低保真指的是高射投影仪成像技术,而不是大家熟知的 lo-fi 氛围音乐。不过这些地图是有问题的。地图上加粗的线条连接着大陆各地强化了人们的一种观念ARPANET 最大的贡献就是首次将横跨美国东西两地的电脑连接了起来。
今天,即便是在病毒肆虐、人们困居家中的情况下,网络也能把我们联系起来,可谓是我们的生命线。所以,如果认为 ARPANET 是最早的互联网那么在那之前世界必然相互隔绝毕竟那时还没有今天的互联网对吧ARPANET 首次通过计算机将人们连接起来,一定是一件惊天动地的大事。
但是,这一观点却与历史事实不符,而且它也没有进一步解释 ARPANET 的重要性。
### 初露锋芒
华盛顿希尔顿酒店坐落于国家广场东北方向约 2.4 千米处的一座小山丘山顶附近。酒店左右两侧白色的现代化立面分别向外延展出半个圆形活像一只飞鸟的双翼。1965 年,酒店竣工之后,《纽约时报》报道称这座建筑物就像“一只栖息在山顶巢穴上的海鸥”[1][2]。
不过这家酒店最有名的特点却深藏在地下。在车道交汇处下方有着一个巨大的蛋形活动场地这就是人们熟知的国际宴会厅多年来一直是华盛顿特区最大的无柱宴会厅。1967 年大门乐队在此举办了一场音乐会。1968 年,“吉他之神”吉米·亨德里克斯也在此举办了一场音乐会。到了 1972 年,国际宴会厅隐去了以往的喧嚣,举办了首届国际计算机通信会议。在这场大会上,研究项目 ARPANET 首次公开亮相。
1972 年的国际计算机通信会议举办时间为 10 月 24-26 日,与会人数约八百人[2][3]。在这场大会上,计算机网络这一新兴领域的领袖人物齐聚一堂。因特网的先驱鲍勃·卡恩称,“如果有人在华盛顿希尔顿酒店上方丢了一颗炸弹,那么美国的整个网络研究领域将会毁于一旦”[3][4]。
当然,不是所有的与会人员都是计算机科学家。根据当时的宣传广告,这场大会将会聚焦用户,照顾到“律师、医务人员、经济学家、政府工作者、工程师以及通信员等从业人员”[4][5]。虽然大会的部分议题非常专业,比如“数据网络设计问题(一)”与“数据网络设计问题(二)”,但是正如宣传广告所承诺的,大部分会议的主要关注点还是计算机网络给经济社会带来的潜在影响。其中甚至有一场会议以惊人的先见之明探讨了如何积极利用法律制度“保护计算机数据库中的隐私权益”[5][6]。
展示 ARPANET 的目的在于尽可能地吸引与会者的关注。各场会议在国际宴会厅或酒店更下一层的其他地方举行,会议休息期间,与会者可以自由进入乔治敦宴会厅(在国际宴会厅走廊尽头的一个较小的宴会厅,也可以说是会议室)[6][7],那里放置着 40 台由不同制造商生产的终端,用以访问 ARPANET [7][8]。这些终端属于哑终端,也就是说,只能用来输入命令、输出结果,本身无法进行计算。事实上,因为是 1972 年所以这些终端可能都是硬拷贝终端即电传打字机。哑终端与一台被称为“终端接口信息处理机”TIP的计算机相连接后者放置在宴会厅中间的一个高台上。TIP 是早期的一种路由器,哑终端可通过 TIP 连接到 ARPANET。有了终端和 TIPICCC 与会者可以尝试登录和访问组成 ARPANET 的 29 个主机站的计算机 [8][9]。
为了展示网络的性能,美国全国各主机站的研究员们通力合作,准备了 19 个简易的“情景”,供用户测试使用。他们还出了 [一份小册子][10],将这些情景收录其中。如果与会人员打算进入这个满是电线与哑终端的房间,就会得到这样一本小册子 [9][11]。通过这些情景,研究员不仅要证明网络这项新技术的可行性,还要证明其实用性,因为 ARPANET 那时还只是“一条没有汽车驶过的公路”。此外,来自国防部的投资者们也希望,公开展示 ARPANET 可以进一步激发人们对网络的兴趣 [10][12]。
因此,这些情景充分展示了在 ARPANET 网络上可以使用的软件的丰富性有程序语言解释器其中一个是麻省理工学院MIT Lisp 语言的解释器,另一个是加州大学洛杉矶分校的数值计算环境 Speakeasy 项目的解释器;还有一些游戏,包括国际象棋和 <ruby>康威生命游戏<rt>Conway's Game of Life</rt></ruby>;以及最受与会者欢迎的几个人工智能聊天程序,包括由 MIT 的计算机科学家约瑟夫·魏泽堡开发的著名聊天程序伊莉莎。
设置情景的研究人员小心翼翼地列出了他们想让用户在终端机上输入的每一条命令。这点很重要,因为用于连接 ARPANET 主机的命令序列可能会因为主机的不同而发生变化。比如,为了能在 MIT 人工智能实验室的 PDP-10 微型电脑上测试人工智能国际象棋程序,与会者需要按照指示输入以下命令:
_在下方代码块中:`[LF]`、`[SP]` 以及 `[CR]` 分别代表换行、空格以及回车键。我在每行的 `//` 符号后面都解释了当前一行命令的含义,不过当时的小册子本来是没有使用这一符号的。_
```
@r [LF] // 重置 TIP
@e [SP] r [LF] // “远程回显”设置:主机回显字符,TIP 不回显
@L [SP] 134 [LF] // 连接 134 号主机
:login [SP] iccXXX [CR] // 登录 MIT 人工智能实验室的系统“XXX”代表用户名首字母缩写
:chess [CR] // 启动国际象棋程序
```
如果与会者输入了上述命令,那么他就可以体验当时最先进的国际象棋程序,其棋盘布局如下:
```
BR BN BB BQ BK BB BN BR
BP BP BP BP ** BP BP BP
-- ** -- ** -- ** -- **
** -- ** -- BP -- ** --
-- ** -- ** WP ** -- **
** -- ** -- ** -- ** --
WP WP WP WP -- WP WP WP
WR WN WB WQ WK WB WN WR
```
与之不同的是,如果要连接加州大学洛杉矶分校的 IBM System/360 机器,运行 Speakeasy 数值计算环境,与会者需要输入以下命令:
```
@r [LF] // 重置 TIP
@t [SP] o [SP] L [LF] // “传递换行”设置
@i [SP] L [LF] // “插入换行”设置,即回车时发送换行符。
@L [SP] 65 [LF] // 连接 65 号主机
tso // 连接 IBM 分时可选软件系统
logon [SP] icX [CR] // 输入用户名进行登录“X”可为任意数字
iccc [CR] // 输入密码(够安全!)
speakez [CR] // 启动 Speakeasy
```
输入上述命令后,与会者可以在终端中对矩阵进行乘法、转置以及其他运算,如下所示:
```
:+! a=m*transpose(m);a [CR]
:+! eigenvals(a) [CR]
```
当时,这场演示给许多人都留下了深刻的印象,但原因并不是我们所想的那样,毕竟我们有的只是后见之明。今天的人们总是记不住,在 1972 年,即便身处两个不同的城市,远程登录使用计算机也已经不是一件新鲜事儿了。在那之前的数十年,电传打字机就已经用于与相隔很远的计算机传递信息了。在 ICCC 第一届大会之前,差不多整整五年,在西雅图的一所高中,比尔·盖茨使用电传打字机,在该市其他地方的通用电气计算机上运行了他的第一个 BASIC 程序。在当时,登录远程计算机,运行几行命令或者玩一些文字游戏,只不过是家常便饭。因此,虽说上文提到的软件的确很不错,但是即便没有 ARPANET我刚刚介绍的两个情景勉强也是可以实现的。
当然ARPANET 一定带来了新的东西。参加本次大会的律师、政治家与经济学家关注更多的可能是国际象棋游戏与聊天机器人,但是网络专家们可能对另外两个情景更感兴趣,因为它们将 ARPANET 的作用更好地展示了出来。
在其中一个情景下MIT <ruby>非兼容分时系统<rt>ITS</rt></ruby> 上运行了一个名为 `NETWRK` 的程序。`NETWRK` 命令下有若干个子命令,输入这些子命令就能得到 ARPANET 各方面的运行状态。`SURVEY` 子命令可以列出 ARPANET 上面运行的主机与空闲的主机,放入一个列表中;`SUMMARY.OF.SURVEY` 子命令对 `SURVEY` 子命令的运行结果进行统计,得出每台主机的“正常运行比率”,以及每台主机响应消息的平均时间。`SUMMARY.OF.SURVEY` 子命令以表格的形式输出结果,如下所示:
```
--HOST-- -#- -%-UP- -RESP-
UCLA-NMC 001 097% 00.80
SRI-ARC 002 068% 01.23
UCSB-75 003 059% 00.63
...
```
可以看到,主机编号的占位不超过三个数字。其他 `NETWRK` 子命令能够查看较长时间内查询结果的概要,或者检查单个主机查询结果的日志。
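顺带一提,`SUMMARY.OF.SURVEY` 所做的那类统计,放到今天用几行 shell 就能重现。下面是一个示意脚本(仅基于上表的三行示例数据求平均,纯属演示,与当年 ITS 上 `NETWRK` 的实现无关):

```shell
#!/bin/sh
# 示意:对上表三台主机的“正常运行比率”(第三列)与响应时间(第四列)求平均
avg=$(printf '%s\n' \
    'UCLA-NMC 001 097% 00.80' \
    'SRI-ARC  002 068% 01.23' \
    'UCSB-75  003 059% 00.63' |
    awk '{ gsub(/%/, "", $3); up += $3; resp += $4 }
         END { printf "%.1f %.2f", up / NR, resp / NR }')
echo "平均在线率与平均响应时间:$avg"
```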
第二个情景用到了斯坦福大学开发的一款软件——SRI-ARC 联机系统。这款软件功能齐全,非常优秀。美国发明家道格拉斯·恩格尔巴特在 <ruby>所有演示之母<rt>Mother of All Demos</rt></ruby> 上演示的正是 SRI-ARC 联机系统。这款软件可以在加州大学圣芭芭拉分校的主机上运行,本质上提供的是一项文件托管服务。使用华盛顿希尔顿酒店的终端,用户就可以将斯坦福大学主机上创建的文件复制到加州大学圣芭芭拉分校的主机上。操作也很简单,只需执行 `copy` 命令,然后回答计算机的下列问题:
_在下方的代码块中`[ESC]`、`[SP]` 与 `[CR]` 分别代表退出、空格与回车键;圆括号中的文字是计算机打印出的提示信息;第三行中的退出键用于自动补全文件名。此处复制的文件是 `<system>sample.txt;1`,其中文件名末尾的数字 1 代表文件的版本号,`<system>` 表示文件路径。这种文件名是 TENEX 操作系统上面的惯用写法。_[11][13]
```
@copy
(TO/FROM UCSB) to
(FILE) <system>sample [ESC] .TXT;1 [CR]
(CREATE/REPLACE) create
```
这两个情景看起来好像和最初提及的两个情景没有太大区别,但是此二者却意义非凡。因为它们证明了,在 ARPANET 上面,不仅人们可以与计算机进行交流,计算机与计算机也可以 _相互_ 交流。保存在 MIT 的 `SURVEY` 命令的结果并非由人类定期登录并检查每台机器的运行状态收集而来,而是由一款能在网络上与其他机器进行交流的软件收集得到的。同样的道理,在斯坦福大学与加州大学圣芭芭拉分校之间传输文件的情景下,也没有人守在两所大学的终端旁边,特区的终端用户仅仅使用了一款软件,就能让其他两地的计算机连接起来。此外,这一点无关乎你使用的是宴会厅里的哪一台电脑,因为只要输入同样的命令序列,就能在任意一台电脑上浏览 MIT 的网络监视数据,或者在加州大学圣芭芭拉分校的计算机上储存文件。
这才是 ARPANET 的全新之处。本次国际计算机通信会议演示的不仅仅是人与远程电脑之间的交互,也不仅仅是远程输入输出的操作,更是一个软件与其他软件之间的远程通讯,这一点才是史无前例的。
为什么这一点才是最重要的,而不是地图上画着的那些贯穿整个美国、实际连接起来的电线呢(这些线是租赁的电话线,而且它们以前就在那了!)?要知道,早在 1966 年 ARPANET 项目启动之前,美国国防部的高级研究计划署打造了一间终端室,里面有三台终端。三台终端分别连接着位于 MIT、加州大学伯克利分校以及圣塔莫尼卡三地的计算机 [12][14]。对于高级研究计划署的工作人员来说,即便他们身处华盛顿特区,使用这三台计算机也非常方便。不过,这其中也有不便之处:工作人员必须购买和维护来自三家不同制造商的终端,牢记三种不同的登录步骤,熟悉三种不同的计算环境。虽然这三台终端机可能就放在一起,但是它们只是电线另一端主机系统的延伸,而且操作也和那些计算机一样各不相同。所以说,在 ARPANET 项目诞生之前,远程连接计算机进行通讯就已经实现了,但问题是不同的计算系统阻碍了通讯朝着更加先进复杂的方向发展。
### 此刻,集合起来
因此我想说的是说法一ARPANET 首次通过计算机将不同地方的人们连接了起来与说法二ARPANET 首次将多个计算机系统彼此连接了起来)之间有着云泥之别。听起来似乎有些吹毛求疵,咬文嚼字,但是相较于说法二,说法一忽略了一些重要的历史发展阶段。
首先历史学家乔伊·利西·兰金Joy Lisi Rankin指出早在 ARPANET 诞生之前人们就已经在网络空间中进行交流了。在《美国人民的计算机历史》_A Peoples History of Computing in the United States_一书中兰金介绍了多个覆盖全国的数字社区这些社区运行在早于 ARPANET 的分时网络上面。从技术层面讲分时网络并不属于计算机网络因为它仅仅由一台大型主机构成。这种计算机放置在地下室中为多台哑终端提供计算颇像一只又黑又胖的奇怪生物触手向外伸展着遍及整个美国。不过在分时网络时代被后社交媒体时代称为“网络”的大部分社会行为应有尽有。例如Kiewit 网络是达特茅斯分时系统的延申应用,服务于美国东北部的各个大学和高中。在 Kiewit 网络上,高中生们共同维护着一个“<ruby>八卦文件<rt>gossip file</rt></ruby>”,用来记录其他学校发生的趣闻趣事,“在康涅狄格州和缅因州之间建立起了社交联系” [13][15]。同时,曼荷莲女子学院的女生通过网络与达特茅斯学院的男生进行交流,或者是安排约会,或者是与男朋友保持联系 [14][16]。这些事实都发生在上世纪六十年代。兰金认为,如果忽视了早期的分时网络,我们对美国过去 50 年数字文化发展的认识必然是贫瘠的:我们眼里可能只有所谓的“<ruby>硅谷神话<rt>Silicon Valley mythology</rt></ruby>”,认为计算机领域的所有发展都要归功于少数的几位天才,或者说互联网科技巨头的创始人。
回到 ARPANET,如果我们能意识到真正的困难是计算机 _系统_ 的联通,而非机器本身的连接,那么在探讨 ARPANET 的创新点时,我们就会更加倾向于第二种说法。ARPANET 是第一个包交换网络,涉及到许多重要的技术应用。但是,如果仅仅因为这项优势就说它是一项突破,我觉得这种说法本身就是错的。ARPANET 旨在促进全美计算机科学家之间的合作,目的是要弄明白不同的操作系统与不同语言编写的软件如何配合使用,而非如何在麻省和加州之间实现高效的数据传输。因此,ARPANET 不仅是第一个包交换网络,它还是一项非常成功且优秀的标准。在我看来,后者更有意思,毕竟我在博客上曾经写过许多颇有瑕疵的标准:[语义网][17]、[RSS][18] 与 [FOAF][19]。
ARPANET 项目初期没有考虑到网络协议,协议的制定是后来的事情了。因此,这项工作自然落到了主要由研究生组成的组织——<ruby>网络工作组<rt>Network Working Group</rt></ruby> 身上。该组织的首次会议于 1968 年在加州大学圣芭芭拉分校举办 [15][20]。当时只有 12 人参会,大部分都是来自上述四所大学的代表 [16][21]。来自加州大学洛杉矶分校的研究生史蒂夫·克罗克(Steve Crocker)参加了这场会议。他告诉我,工作组首次会议的参会者清一色都是年轻人,最年长的可能要数会议主席埃尔默·夏皮罗(Elmer Shapiro),他当年 38 岁左右。高级研究计划署没有安排负责研究计算机连接之后如何进行通信的人员,但是很明显,它需要提供一定的协助。随着工作组会议的陆续开展,克罗克一直期望着更有经验与威望的“法定成年人”从东海岸飞过来接手这项工作,但是期望终究还是落空了。在高级研究计划署的默许之下,工作组举办了多场会议,其中包括很多长途旅行,差旅费由计划署报销,这些就是计划署给予工作组的全部协助了 [17][22]。
当时,网络工作组面临着巨大的挑战。组内成员都没有使用通用方式连接计算机系统的经验,而且这本来就与上世纪六十年代末计算机领域盛行的全部观点相悖:
> 那个时候的主机就像是全世界唯一的一台计算机。即便是最简短的交流会话,两台主机也无法轻易做到。并不是说机器没办法相互连接,只是连接之后,两台计算机又能做些什么呢?当时,计算机和与其相连的其他设备之间的通讯,就像帝王与群臣之间的对话一般。连接到主机的设备各自执行着自己的任务,每台外围设备都保持着常备不懈的状态,等待着上司的命令。当时的计算机就是严格按照这类交流需求设计出来的;它们向读卡器、终端与磁带机等下属设备发号施令,发起所有会话。但是,如果一台计算机拍了拍另一台计算机的肩膀,说道,“你好,我也是一台计算机”,那么另一台计算机可就傻眼了,什么也回答不上来 [18][23]。
于是,工作组的最初进展比较缓慢 [19][24]。直到 1970 年 6 月,也就是首次会议将近两年之后,工作组才为网络协议选定了一套官方规范 [20][25]。
到了 1972 年,在国际计算机通信会议上展示 ARPANET 的时候,所有的协议已经准备到位了。会议期间,这些协议运用到了国际象棋等情景之中。用户运行 `@e r` 命令(`@echo remote` 命令的缩写形式),可以指示 TIP 使用新远程登录虚拟终端协议提供的服务,通知远程主机回显用户输入的内容。接着,用户运行 `@L 134` 命令(`@login 134` 命令的缩写形式),让 TIP 在 134 号主机上调用<ruby>初始连接协议<rt>Initial Connection Protocol</rt></ruby>,该协议指示远程主机分配出连接所需的全部必要资源,并将用户带入远程登录会话中。上述文件传输的情景也许用到了 <ruby>文件传输协议<rt>File Transfer Protocol</rt></ruby>,而该协议恰好是在大会举办前夕才刚刚完成的 [21][26]。所有这些协议都是“三层”协议,其下的第二层是主机到主机层协议,定义了主机之间可以相互发送和接收的信息的基本格式;第一层是主机到接口通信处理机协议,定义了主机如何与连接的远程设备进行通信。令人感到不可思议的是,这些协议都能正常运行。
在我看来,网络工作组之所以能够在大会举办之前做好万全的准备,顺利且出色地完成任务,在于他们采用了开放且非正式的标准化方法,其中一个典型的例子就是著名的 Request for CommentsRFC系列文档。RFC 文档最初通过传统邮件供工作组成员进行传阅让成员们在没有举办会议的时候也能保持联系同时收集成员反馈汇集各方智慧。RFC 框架是克罗克提出的,他写出了第一篇 RFC 文档,并在早期负责管理 RFC 的邮寄列表。他这样做是为了强调工作组开放协作的活动本质。有了这套框架以及触手可及的文档ARPANET 的协议设计过程成了一个大熔炉每个人都可以贡献出自己的力量步步推进精益求精让最棒的想法脱颖而出使得每一位贡献者都值得尊敬。总而言之RFC 获得了巨大成功,并且直至今天,长达半个世纪之后,它依旧是网络标准的“说明书”。
因此,说起 ARPANET 的影响力,我认为不得不强调的一点正是工作组留下的这一成果。今天,互联网可以把世界各地的人们连接起来,这也是它最神奇的属性之一。不过如果说这项技术到了上世纪才开始使用,那可就有些滑稽可笑了。要知道,在 ARPANET 出现之前,人们就已经通过电报打破了现实距离的限制。而 ARPANET 打破的应该是各个主机站点因使用不同的操作系统、字符编码、程序语言以及组织策略而在逻辑层面产生的差异限制。当然,不得不提的是,将第一个包交换网络投入使用在技术方面绝对是一大壮举。不过,在建立 ARPANET 网络过程中遇到的两大难题中,更为复杂的一项则是制定统一的标准并用以连接原本无法相互协作的计算机。而这一难题的解决方案,也成了 ARPANET 整个建立与发展历史中最为神奇的一个章节。
1981 年,高级研究计划署发表了一份“完工报告”,回顾了 ARPANET 项目的第一个十年。在《付出收获了回报的技术方面以及付出未能实现最初设想的技术方面》这一冗长的小标题下,作者们写道:
> 或许,在 ARPANET 的开发过程中,最艰难的一项任务就是,尽管主机制造商各不相同,或者同一制造商下操作系统各不相同,我们仍需在众多的独立主机系统之间实现通讯交流。好在这项任务后来取得了成功 [22][27]。
你可以从美国联邦政府获得相关信息。
_如果你喜欢这篇文章欢迎关注推特 [@TwoBitHistory][28],也可通过 [RSS feed][29] 订阅获取最新文章。_
1. “Hilton Hotel Opens in Capital Today.” _The New York Times_, 20 March 1965, <https://www.nytimes.com/1965/03/20/archives/hilton-hotel-opens-in-capital-today.html?searchResultPosition=1>. Accessed 7 Feb. 2021. [↩︎][31]
2. James Pelkey. _Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968-1988,_ Chapter 4, Section 12, 2007, <http://www.historyofcomputercommunications.info/Book/4/4.12-ICCC%20Demonstration71-72.html>. Accessed 7 Feb. 2021. [↩︎][32]
3. Katie Hafner and Matthew Lyon. _Where Wizards Stay Up Late: The Origins of the Internet_. New York, Simon &amp; Schuster, 1996, p. 178. [↩︎][33]
4. “International Conference on Computer Communication.” _Computer_, vol. 5, no. 4, 1972, p. c2, <https://www.computer.org/csdl/magazine/co/1972/04/01641562/13rRUxNmPIA>. Accessed 7 Feb. 2021. [↩︎][34]
5. “Program for the International Conference on Computer Communication.” _The Papers of Clay T. Whitehead_, Box 42, <https://d3so5znv45ku4h.cloudfront.net/Box+042/013_Speech-International+Conference+on+Computer+Communications,+Washington,+DC,+October+24,+1972.pdf>. Accessed 7 Feb. 2021. [↩︎][35]
6. 我其实并不清楚 ARPANET 是在哪个房间展示的。很多地方都提到了“宴会厅”,但是华盛顿希尔顿酒店更习惯于叫它“乔治敦”,而不是把它当成一间会议室。因此,或许这场展示是在国际宴会厅举办的。但是 RFC 372 号文件又提到了预定“乔治敦”作为展示场地一事。华盛顿希尔顿酒店的楼层平面图可以点击 [此处][36] 查看。 [↩︎][37]
7. Hafner, p. 179. [↩︎][38]
8. ibid., p. 178. [↩︎][39]
9. Bob Metcalfe. “Scenarios for Using the ARPANET.” _Collections-Computer History Museum_, <https://www.computerhistory.org/collections/catalog/102784024>. Accessed 7 Feb. 2021. [↩︎][40]
10. Hafner, p. 176. [↩︎][41]
11. Robert H. Thomas. “Planning for ACCAT Remote Site Operations.” BBN Report No. 3677, October 1977, <https://apps.dtic.mil/sti/pdfs/ADA046366.pdf>. Accessed 7 Feb. 2021. [↩︎][42]
12. Hafner, p. 12. [↩︎][43]
13. Joy Lisi Rankin. _A People's History of Computing in the United States_. Cambridge, MA, Harvard University Press, 2018, p. 84. [↩︎][44]
14. Rankin, p. 93. [↩︎][45]
15. Steve Crocker. Personal interview. 17 Dec. 2020. [↩︎][46]
16. 克罗克将会议记录文件发给了我,文件列出了所有的参会者。[↩︎][47]
17. Steve Crocker. Personal interview. [↩︎][48]
18. Hafner, p. 146. [↩︎][49]
19. “Completion Report / A History of the ARPANET: The First Decade.” BBN Report No. 4799, April 1981, <https://walden-family.com/bbn/arpanet-completion-report.pdf>, p. II-13. [↩︎][50]
20. 这里我指的是 RFC 54 号文件中的“官方协议”。[↩︎][51]
21. Hafner, p. 175. [↩︎][52]
22. “Completion Report / A History of the ARPANET: The First Decade,” p. II-29. [↩︎][53]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2021/02/07/arpanet.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[aREversez](https://github.com/aREversez)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/ARPANET
[2]: tmp.pnPpRrCI3S#fn:1
[3]: tmp.pnPpRrCI3S#fn:2
[4]: tmp.pnPpRrCI3S#fn:3
[5]: tmp.pnPpRrCI3S#fn:4
[6]: tmp.pnPpRrCI3S#fn:5
[7]: tmp.pnPpRrCI3S#fn:6
[8]: tmp.pnPpRrCI3S#fn:7
[9]: tmp.pnPpRrCI3S#fn:8
[10]: https://archive.computerhistory.org/resources/access/text/2019/07/102784024-05-001-acc.pdf
[11]: tmp.pnPpRrCI3S#fn:9
[12]: tmp.pnPpRrCI3S#fn:10
[13]: tmp.pnPpRrCI3S#fn:11
[14]: tmp.pnPpRrCI3S#fn:12
[15]: tmp.pnPpRrCI3S#fn:13
[16]: tmp.pnPpRrCI3S#fn:14
[17]: https://twobithistory.org/2018/05/27/semantic-web.html
[18]: https://twobithistory.org/2018/12/18/rss.html
[19]: https://twobithistory.org/2020/01/05/foaf.html
[20]: tmp.pnPpRrCI3S#fn:15
[21]: tmp.pnPpRrCI3S#fn:16
[22]: tmp.pnPpRrCI3S#fn:17
[23]: tmp.pnPpRrCI3S#fn:18
[24]: tmp.pnPpRrCI3S#fn:19
[25]: tmp.pnPpRrCI3S#fn:20
[26]: tmp.pnPpRrCI3S#fn:21
[27]: tmp.pnPpRrCI3S#fn:22
[28]: https://twitter.com/TwoBitHistory
[29]: https://twobithistory.org/feed.xml
[30]: https://twitter.com/TwoBitHistory/status/1277259930555363329?ref_src=twsrc%5Etfw
[31]: tmp.pnPpRrCI3S#fnref:1
[32]: tmp.pnPpRrCI3S#fnref:2
[33]: tmp.pnPpRrCI3S#fnref:3
[34]: tmp.pnPpRrCI3S#fnref:4
[35]: tmp.pnPpRrCI3S#fnref:5
[36]: https://www3.hilton.com/resources/media/hi/DCAWHHH/en_US/pdf/DCAWH.Floorplans.Apr25.pdf
[37]: tmp.pnPpRrCI3S#fnref:6
[38]: tmp.pnPpRrCI3S#fnref:7
[39]: tmp.pnPpRrCI3S#fnref:8
[40]: tmp.pnPpRrCI3S#fnref:9
[41]: tmp.pnPpRrCI3S#fnref:10
[42]: tmp.pnPpRrCI3S#fnref:11
[43]: tmp.pnPpRrCI3S#fnref:12
[44]: tmp.pnPpRrCI3S#fnref:13
[45]: tmp.pnPpRrCI3S#fnref:14
[46]: tmp.pnPpRrCI3S#fnref:15
[47]: tmp.pnPpRrCI3S#fnref:16
[48]: tmp.pnPpRrCI3S#fnref:17
[49]: tmp.pnPpRrCI3S#fnref:18
[50]: tmp.pnPpRrCI3S#fnref:19
[51]: tmp.pnPpRrCI3S#fnref:20
[52]: tmp.pnPpRrCI3S#fnref:21
[53]: tmp.pnPpRrCI3S#fnref:22


@ -0,0 +1,67 @@
[#]: collector: "lujun9972"
[#]: translator: "Veryzzj"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "How I prioritize tasks on my to-do list"
[#]: via: "https://opensource.com/article/21/1/prioritize-tasks"
[#]: author: "Kevin Sonney https://opensource.com/users/ksonney"
如何确定待办事项上任务的优先级
======
使用艾森豪威尔矩阵更好地安排你的待办事项的优先次序。
![Team checklist and to dos][1]
在前几年这个年度系列涵盖了单个应用程序。今年除了策略之外我们还将着眼于一体式解决方案以在2021提供帮助。欢迎来到2021年“21天生产力”活动的第4天。在本文中将研究一种在待办事项上确定任务优先级的策略。想要找到适合你日常工作的开源工具请查看[此列表][2]。
把事情添加到任务或待办事项中很容易。几乎太容易了。而一旦列入清单,挑战就变成了弄清楚先做什么。我们要做清单首位的事情吗?清单首位的事情是最重要的吗?如何弄清楚最重要的事是什么?
![To-do list][3]
要做的事。今天做?明天做?谁知道呢?(Kevin Sonney, [CC BY-SA 4.0][4]
[与电子邮件一样][5],我们可以根据一些事情来确定任务的优先级,这可以让我们弄清楚什么事情需要先做,什么可以等到以后再做。
我使用一种被称为 _艾森豪威尔矩阵_ 的方法,它取自美国总统 Dwight D. Eisenhower 的一句话。画一个水平和垂直分割的方框,在列上标明“紧急”和“不紧急”,在行上标明“重要”和“不重要”。
![Eisenhower matrix][6]
一个艾森豪威尔矩阵。(Kevin Sonney, [CC BY-SA 4.0][4]
你可以把待办事项上的任务放在其中一个框里。但如何知道一个任务应该放在哪里?紧迫性和重要性往往是主观的。因此,第一步就是决定什么对你来说是重要的。我的家庭(包括宠物)、工作和爱好都很重要。如果待办事项上的东西与这三件事无关,我可以立即把它放到 “不重要”行。
紧迫性是一个比较简单的问题。一件事需要在今天或明天完成吗?那么它可能是“紧急的”。一件事是否有一个即将到来的最后期限,但离那个时间还有几天/几周/几个月,或者它根本就没有最后期限?当然是“不急的”。
现在我们可以将这些框转化为优先级。“紧急/重要”是最高优先级(优先级 1),需要首先完成;接下来是“不紧急/重要”(优先级 2);然后是“紧急/不重要”(优先级 3);最后是“不紧急/不重要”(优先级 4,或者说根本没有优先级)。
请注意,“紧急/不重要”排在第三位,而不是第二位。这是因为,人们常常把大量时间花在那些仅仅因为紧急而显得重要、实际上并不重要的事情上。当我看到这类事项时,我会问自己几个问题:这些任务必须由我亲自完成吗?能不能请别人去做?它们对其他人来说是否重要和紧急?这是否会改变它们对我的重要性?也许它们需要重新分类,或者我可以请别人完成,并将它们从我的清单中删除。
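上文从象限到优先级的映射,其实就是一个简单的判断逻辑,可以用一个极简的 shell 函数来示意(以下脚本纯属演示,`priority` 等名称均为假设,并非文中提到的任何工具):

```shell
#!/bin/sh
# 示意脚本:根据“紧急”“紧要”两个回答(y/n)输出 1-4 的优先级
# 映射规则与上文一致:重要且紧急=1,重要不紧急=2,紧急不重要=3,其余=4
priority() {
    urgent=$1
    important=$2
    if [ "$important" = "y" ] && [ "$urgent" = "y" ]; then
        echo 1
    elif [ "$important" = "y" ]; then
        echo 2
    elif [ "$urgent" = "y" ]; then
        echo 3
    else
        echo 4
    fi
}

priority y y   # 紧急且重要 → 输出 1
priority n y   # 重要但不紧急 → 输出 2
priority y n   # 紧急但不重要 → 输出 3
priority n n   # 既不紧急也不重要 → 输出 4
```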
![After prioritizing][7]
确定优先级后。 Kevin Sonney, [CC BY-SA 4.0][4]
对于“不紧急/不重要”框中的事项,有一个问题要问,那就是“这些事情到底需不需要放在我的清单上?”说实话,我们经常用那些不紧急或不重要的事情填满待办事项清单,但其实完全可以将它们从清单上删除。我知道,承认“这事永远不会完成”是很难的,但在接受这个事实后,把它从清单上删除、不再为它担心,是一种解脱。
经过这一切,再看这份清单,就很容易说出:“这就是我现在需要做的事情。”然后去完成它。这正是待办事项清单的作用:为一天提供指导和重点。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/prioritize-tasks
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[Veryzzj](https://github.com/Veryzzj)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf "Team checklist and to dos"
[2]: https://opensource.com/article/20/5/alternatives-list
[3]: https://opensource.com/sites/default/files/pictures/to-do-list.png "To-do list"
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/article/21/1/email-rules
[6]: https://opensource.com/sites/default/files/pictures/eisenhower-matrix.png "Eisenhower matrix"
[7]: https://opensource.com/sites/default/files/pictures/after-prioritizing.png "After prioritizing"


@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: (MareDevi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How open source builds distributed trust)
[#]: via: (https://opensource.com/article/21/1/open-source-distributed-trust)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
开源如何构建分布式信任
======
对开源的信任是一个积极的反馈循环。
![信任][1]
这是我即将为[Wiley][2]出版的《计算和云计算中的信任》Trust in Computing and the Cloud一书中经过编辑的节选也是我之前写的一篇文章[_《信任与选择开源》_][3](_Trust &amp; choosing open source_)的延伸。
在那篇文章中,我提出了一个问题:当我们说“我相信开源软件”时,我们在做什么?作为回答,我认为,我们正在做的是确认:有足够多编写和测试该软件的人与我有类似的要求,而且他们的专业知识加在一起,使我使用该软件的风险可以接受。我同时也介绍了“分布式信任”的概念。
在社区内分配信任的概念是亚里士多德提出的“人群智慧理论”的应用,其中的假设是,许多人的意见通常比一个人或少数人的意见更明智。虽然在某些情况下,其最简单的形式显然是错误的——最明显的例子是民众对极权主义政权的支持——但这一原则可以提供一个非常有效的机制来确立某些信息。
我们称这种集体经验的提炼为“分布式信任”,它通过互联网上的许多机制收集。如 TripAdvisor 或 Glassdoor 记录了关于组织或其提供的服务的信息;还有像 UrbanSitter 或 LinkedIn 这样的平台,允许用户添加关于特定人的信息(例如,见 LinkedIn 个人档案中的推荐、技能与认可部分)。从这些例子中可以获得的收益因网络效应而大大增加,因为随着成员数量的增加,成员之间可能的联系数量也成倍增加。
分布式信任的例子还包括像 Twitter 这样的平台:一个账户的追随者数量可以被视为衡量其声誉、甚至其可信度的标准(我们应该以强烈的怀疑态度看待这种计算方式)。事实上,Twitter 认为它必须解决拥有大量追随者的账户所具有的社会影响力问题,于是建立了“验证账户”机制,让人们知道“某个具有公共利益的账户是真实的”。但有趣的是,该公司不得不暂停这项服务,因为用户对“验证”的确切含义及其隐含的期望产生了分歧:这正是不同群体对同一内容理解不同的典型案例。
那么,开源的相关性在哪里呢?开源的社区属性实际上正是建立分布式信任的一个驱动力。因为一旦你成为一个开源项目周围社区的一部分,你就会承担一个或多个角色(见我之前的文章),例如:架构师、设计师、开发人员、审查员、技术作家、测试员、部署员、错误报告者或错误修复者;而一旦你说你“信任”一个开源项目,你就开始信任这些角色。你对一个项目的参与越多,你就越是社区的一部分;久而久之,这就可以成为一个“实践社区”(community of practice)。
Jean Lave 和 Etienne Wenger 在 [_《情境学习:正当的外围参与》_][4](_Situated Learning: Legitimate Peripheral Participation_)一书中提出了实践社区的概念:团体在成员热情分享和参与共同活动的过程中演变成社区,成员的技能和知识也随之共同提高。这里的核心概念是:当参与者在实践社区的外围学习时,他们同时也在成为社区的成员。
> “正当的外围参与,既指实践中知识、技能、身份的发展,也指实践社区的再生产和转化。”
Wenger在[_《实践社区学习、意义和身份》_][5]_Communities of Practice: Learning, Meaning, and Identity_中进一步探讨了实践社区的概念:它们如何形成、对其健康的要求,以及它们如何鼓励学习。他认为,意义的可协商性("我们为什么要一起工作,我们要实现什么?")是实践社区的核心,并指出,如果没有个人的参与、想象力和一致性,实践社区将不会有活力。
我们可以把这一点与我们对分布式信任如何建立和构建的看法结合起来:当你意识到你对开源的影响可以与其他人的影响相同时,你对社区成员的分布式信任关系就变得不那么具有传递性(第二或第三手甚至更遥远),而是更加直接。你明白,你对你所运行的软件的创建、维护、需求和质量所能产生的影响,可以与所有其他以前匿名的贡献者一样,你现在正在与他们形成一个实践社区,或者你正在加入他们的现有实践社区。然后,你就会成为一个信任关系网络的一部分,这个网络是分布式的,但与你购买和操作专利软件时的经历相差不大。
这个过程并不会就此停止:开源项目的一个共同属性是“交叉授粉”,即一个项目的开发者往往也参与其他项目。多个开源项目之间的网络效应,使项目间的重用和依赖不断上升,进而推动了开源整体采用率的提高。
这就很容易理解为什么许多开源贡献者会成为开源爱好者或传道者,不仅仅是为某个单一项目,而是为整个开源事业。事实上,斯坦福大学社会学家 [Mark Granovetter][6] 的工作表明,社区内太多的强关系会导致小团体和停滞不前,但弱关系会使思想和趋势在社区内流动。这种对其他项目和围绕它们存在的社区的认识,以及想法在项目间的灵活流动,使得分布式信任能够被扩展(尽管保证比较弱),超越贡献者在他们有直接经验的项目中所经历的直接或短链间接关系,并向其他项目扩展,因为外部观察或外围参与显示贡献者之间存在类似关系。
简单地说,参与开源项目并在参与中建立信任关系的行为,会让人对其他类似的开源项目也产生更强的分布式信任。
这对我们每个人来说意味着什么?它意味着我们越是参与开源,我们对开源的信任度就越高,而其他人对开源的参与度也会相应提高,从而对开源的信任度也会提高。对开源的信任不仅仅是一个网络效应:它是一个积极的反馈循环!
* * *
_本文最初发表于[Alice, Eve, and Bob][7]经作者许可转载。_
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/open-source-distributed-trust
作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[MareDevi](https://github.com/MareDevi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_trust.png?itok=KMfi0Rdo (Trust)
[2]: https://wiley.com/
[3]: https://aliceevebob.com/2019/06/18/trust-choosing-open-source/
[4]: https://books.google.com/books/about/Situated_Learning.html?id=CAVIOrW3vYAC
[5]: https://books.google.com/books?id=Jb8mAAAAQBAJ&dq=Communities%20of%20Practice:%20Learning,%20meaning%20and%20identity&lr=
[6]: https://en.wikipedia.org/wiki/Mark_Granovetter
[7]: https://aliceevebob.com/2020/11/17/how-open-source-builds-distributed-trust/


@ -1,142 +0,0 @@
[#]: subject: "Manage containers on Fedora Linux with Podman Desktop"
[#]: via: "https://fedoramagazine.org/manage-containers-on-fedora-linux-with-podman-desktop/"
[#]: author: "Mehdi Haghgoo https://fedoramagazine.org/author/powergame/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
使用 Podman Desktop 在 Fedora Linux 上管理容器
======
![][1]
Podman Desktop 是一个开源 GUI 应用,用于在 Linux、macOS 和 Windows 上管理容器。
从历史上看,开发人员一直使用 Docker Desktop 对容器进行图形化管理,这适用于安装了 Docker 守护进程和 Docker CLI 的人。然而,对于使用无守护进程的 Podman 工具的人来说,虽然有一些 Podman 前端,如 [Pods][2]、[Podman desktop companion][3] 和 [Cockpit][4],但一直没有官方应用。现在情况不同了:Podman Desktop 来了!
本文将讨论由 Red Hat 和其他开源贡献者开发的 Podman Desktop 的特性、安装和使用。
### 安装
要在 Fedora Linux 上安装 Podman Desktop请访问 [podman-desktop.io][5],然后单击 *Download for Linux* 按钮。你将看到两个选项Flatpak 和 zip。在这个例子中我们使用的是 Flatpak。单击 *Flatpak* 后,通过双击文件在 GNOME 软件中打开它(如果你使用的是 GNOME。你也可以通过终端安装它
```
flatpak install podman-desktop-X.X.X.flatpak
```
在上面的命令中,将 X.X.X 替换为你下载的特定版本。如果你下载的是 zip 文件,那么请解压缩归档文件,然后启动 *Podman Desktop* 应用的二进制文件。你还可以进入 GitHub 上项目的[发布][6]页面找到预发布版本。
### 特性
Podman Desktop 仍处于早期阶段。然而,它支持许多常见的容器操作,如创建容器镜像、运行容器等。此外,你可以在“首选项”的“扩展目录”下找到 Podman 扩展,你可以使用它来管理 macOS 和 Windows 上的 Podman 虚拟机。此外Podman Desktop 支持 Docker Desktop 扩展。
你可以在“首选项”下的 “Docker Desktop Extensions” 安装此类扩展。应用窗口有两个窗格。左侧窄窗格显示应用的不同功能,右侧窗格是内容区域,它将根据左侧选择的内容显示相关信息。
![Podman Desktop 0.0.6 在 Fedora 36 上运行][7]
### 演示
为了全面了解 Podman Desktop 的功能,我们将从 Dockerfile 创建一个镜像并将其推送到注册表,然后拉取并运行它,这一切都在 Podman Desktop 中完成。
#### 构建镜像
第一步是通过在命令行中输入以下行来创建一个简单的 Dockerfile
```
cat <<EOF>>Dockerfile
FROM docker.io/library/httpd:2.4
COPY . /var/www/html
WORKDIR /var/www/html
CMD ["httpd", "-D", "FOREGROUND"]
EOF
```
现在,点击“镜像”并按下“构建镜像”按钮。你将被带到一个新页面以指定 Dockerfile、构建上下文和镜像名称。在 Containerfile 路径下,单击并浏览以选择你的 Dockerfile。在镜像名称下输入镜像的名称。如果要将镜像推送到容器注册表那么可以以 example.com/username/repo:tag 形式指定完全限定的镜像名称 (FQIN)。在此示例中,我输入 quay.io/codezombie/demo-httpd:latest因为我在 quay.io 上有一个名为 demo-httpd 的公共仓库。你可以按照类似的格式来指定容器注册表Quay、Docker Hub、GitHub Container Registry 等)的 FQIN。现在按*构建*并等待构建完成。
#### 推送镜像
构建完成后,就该推送镜像了。所以,我们需要在 Podman Desktop 中配置一个注册表。进入 Preferences->Registries 并按 *Add registry*
![Add Registry 对话框][8]
在 “Add Registry” 对话框中,输入你的注册表服务器地址和用户凭据,然后单击 “ADD REGISTRY”。
现在,我回到镜像列表,找到这个镜像,并按下上传图标将其推送到仓库。当你将鼠标悬停在以设置中所添加注册表名称(本演示中为 quay.io)开头的镜像名称上时,镜像名称旁边会出现一个推送按钮。
![将鼠标悬停在镜像名称上时出现的按钮][9]
![镜像通过 Podman Desktop 推送到仓库][10]
镜像被推送后,任何有权访问镜像仓库的人都可以拉取它。由于我的镜像仓库是公开的,因此你可以轻松地将其拉取到 Podman Desktop 中。
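顺带一提,上述 GUI 中的构建与推送操作,在命令行下大致对应下面两条 podman 命令。示意脚本只是把命令存入变量并打印出来,并不真正执行(实际运行需要本地安装 podman 并已登录 quay.io):

```shell
#!/bin/sh
# 示意:与上文 GUI 构建/推送等价的 podman 命令行(此处只打印,不执行)
cmds='podman build -t quay.io/codezombie/demo-httpd:latest .
podman push quay.io/codezombie/demo-httpd:latest'
printf '%s\n' "$cmds"
```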
#### 拉取镜像
因此,为确保一切正常,请在本地删除此镜像,再将其拉取到 Podman Desktop 中。在列表中找到该镜像并按*删除*图标将其删除。删除镜像后,单击 *Pull Image* 按钮,在 *Image to Pull* 中输入完全限定名称,然后按 *Pull image*。
![容器镜像已成功拉取][11]
#### 创建一个容器
作为 Podman Desktop 演示的最后一部分,让我们从镜像启动一个容器并检查结果。我转到 *Containers* 并按 *Create Container*。这将打开一个包含两个选项的对话框:*From Containerfile/Dockerfile* 和 *From existing image*。按下 *From existing image*,这将进入镜像列表。在那里,选择我们刚刚拉取的镜像。
![在 Podman Desktop 中创建容器][12]
现在,我们从列表中选择最近拉取的镜像,然后按它前面的 *Play* 按钮。在出现的对话框中,我输入 demo-web 作为*容器名*,输入 8000 作为*端口映射*,然后按下 *Start Container*。
![容器配置][13]
容器开始运行,我们可以通过运行以下命令检查 Apache 服务器的默认页面:
```
curl http://localhost:8000
```
![可以工作!][14]
你还应该能够在容器列表中看到正在运行的容器,其状态已更改为 *Running*。在那里,你会在容器前面找到可用的操作。例如,你可以单击终端图标打开 TTY 进入到容器中!
![][15]
### 接下来是什么
Podman Desktop 还很年轻,处于[积极开发][16]中。GitHub 上有一个项目[路线图][17],其中列出了一些用户期待的功能,包括:
* Kubernetes 集成
* 支持 Pod
* 任务管理器
* 卷支持
* 支持 Docker Compose
* Kind 支持
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/manage-containers-on-fedora-linux-with-podman-desktop/
作者:[Mehdi Haghgoo][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/powergame/
[b]: https://github.com/lkxed
[1]: https://fedoramagazine.org/wp-content/uploads/2022/09/podman-desktop-on-fedora-816x345.jpg
[2]: https://github.com/marhkb/pods
[3]: https://github.com/iongion/podman-desktop-companion
[4]: https://github.com/cockpit-project/cockpit/
[5]: https://podman-desktop.io/
[6]: https://github.com/containers/podman-desktop/releases/
[7]: https://fedoramagazine.org/wp-content/uploads/2022/08/pd.png
[8]: https://fedoramagazine.org/wp-content/uploads/2022/08/registry.png
[9]: https://fedoramagazine.org/wp-content/uploads/2022/08/image.png
[10]: https://fedoramagazine.org/wp-content/uploads/2022/08/Screenshot-from-2022-08-27-23-51-38.png
[11]: https://fedoramagazine.org/wp-content/uploads/2022/08/image-2.png
[12]: https://fedoramagazine.org/wp-content/uploads/2022/08/image-3.png
[13]: https://fedoramagazine.org/wp-content/uploads/2022/08/image-5.png
[14]: https://fedoramagazine.org/wp-content/uploads/2022/08/image-6.png
[15]: https://fedoramagazine.org/wp-content/uploads/2022/09/image-2-1024x393.png
[16]: https://github.com/containers/podman-desktop
[17]: https://github.com/orgs/containers/projects/2


@ -1,161 +0,0 @@
[#]: subject: "Install Linux Mint with Windows 11 Dual Boot [Complete Guide]"
[#]: via: "https://www.debugpoint.com/linux-mint-install-windows/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "gpchn"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
使用 Windows 11 双引导安装 Linux Mint [完整指南]
======
将 Linux Mint 与 Windows 11或 Windows 10同时安装并制作双引导系统的完整指南。
如果您是 Linux 新用户,想在不删除 OEM 预装的 Windows 的情况下安装 Linux Mint,请遵循本指南。完成下面描述的步骤后,您将拥有一个双引导系统,可以在 Linux 系统中学习和完成工作,而无需引导 Windows。
### 1. 开始之前您需要什么?
启动到您的 Windows 系统并从官方网站下载 Linux Mint ISO 文件。 ISO 文件是 Linux Mint 的安装映像,我们将在本指南中使用它。
* 在官网图1下载 Cinnamon 桌面版的 ISO适合大家
* [下载链接][1]
![图1从官网下载Linux Mint][2]
* 下载后,将 U 盘插入您的系统。然后使用 Rufus 或 [Etcher][3] 将上面下载的 .ISO 文件写入该 USB 驱动器。
### 2. 准备一个分区来安装 Linux Mint
理想情况下Windows 笔记本电脑通常配备 C 盘和 D 盘。 C 盘是安装 Windows 的地方。对于新的笔记本电脑D 驱动器通常是空的(任何后续驱动器,如 E 等)。现在,您有两个选项可供选择:一是**缩小 C 盘** 为额外的 Linux 安装腾出空间。第二个是**使用其他驱动器/分区**,例如 D 盘或 E盘。
选择您希望的方法。
* 如果您选择使用 D 盘或 E 盘进行 Linux 安装,请确保先禁用 BitLocker(现代 OEM 预装的 Windows 笔记本电脑默认附带并启用该功能)。
* 从开始菜单打开 Windows PowerShell 并键入以下命令(图 2)以禁用 BitLocker。请根据您的目标驱动器更改驱动器号(这里我使用了驱动器 E):
```
manage-bde -off E
```
![图2禁用 Windows 驱动器中的 BitLocker 以安装 Linux][4]
* 现在,如果您选择缩小 C 盘(或任何其他驱动器),请从开始菜单打开“磁盘管理”,它将显示您的整个磁盘布局。
* 右键单击并在要缩小的驱动器上选择“Shrink Volume”图 3以便为 Linux Mint 腾出位置。
* 在下一个窗口中,在“输入要缩小的空间量(以 MB 为单位)”下以 MB 为单位提供您的分区大小(图 4。显然它应该小于或等于“可用空间大小”中提到的值。因此对于 100 GB 的分区,给出 100*1024=102400 MB。
* 完成后,单击收缩。
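上面 GB 到 MB 的换算(乘以 1024),也可以随手在任意 shell 里算一下(示意):

```shell
#!/bin/sh
# 示意:把打算分给 Linux Mint 的空间(GB)换算成磁盘管理要求填写的 MB
size_gb=100
size_mb=$(( size_gb * 1024 ))
echo "${size_gb} GB = ${size_mb} MB"
```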
![磁盘分区中的压缩卷选项示例][5]
![图4输入 Linux 分区的大小][6]
* 现在,您应该会看到一个“未分配空间”,如下所示(图 5。右键单击它并选择“新建简单卷”。
* 此向导将使用文件系统准备和格式化分区。注意:您可以在 Windows 本身中或在 Linux Mint 安装期间执行此操作。 Linux Mint 安装程序还为您提供了创建文件系统表和准备分区的选项。我建议您在这里做。
* 在接下来的一系列屏幕中(图 6,7 和 8以 MB 为单位给出分区大小,分配驱动器号(例如 D、E、F和文件系统为 fat32。
* 最后,您应该会看到您的分区已准备好安装 Linux Mint。您应该在 Mint 安装期间按照以下步骤选择此选项。
* 作为预防措施,记下分区大小(您刚刚在图 9 中作为示例创建)以便在安装程序中快速识别它。
![图5创建未分配空间][7]
![图6新建简单卷向导-page1][8]
![图7新建简单卷向导-page2][9]
![图8新建简单卷向导-page3][10]
![图9安装Linux的最终分区][11]
### 3. 在 BIOS 中禁用安全启动
* 插入 USB 驱动器并重新启动系统。
* 开机时反复按相应的功能键进入 BIOS。不同品牌、型号的笔记本电脑所用按键可能不同,下表列出了主流笔记本电脑品牌的参考按键。
* 您应该禁用“安全启动”,并确保将启动设备优先级设置为 U 盘。
* 然后按 F10 保存并退出。
| 笔记本电脑品牌 | 进入 BIOS 的功能键 |
| :- | :- |
| Acer | F2 或 DEL |
| ASUS | 个人电脑为 F2,主板为 F2 或 DEL |
| Dell | F2 或 F12 |
| HP | ESC 或 F10 |
| Lenovo | F2 或 Fn + F2 |
| Lenovo(台式机) | F1 |
| Lenovo(ThinkPad) | Enter + F1 |
| MSI | 主板和个人电脑均为 DEL |
| Microsoft Surface 平板 | 长按音量增大键 |
| Origin PC | F2 |
| Samsung | F2 |
| Sony | F1、F2 或 F3 |
| Toshiba | F2 |
### 4. 安装 Linux Mint
如果一切顺利,您应该会看到一个安装 Linux Mint 的菜单。选择“启动 Linux Mint……”选项。
![图 10Linux Mint GRUB 菜单启动安装][12]
* 片刻之后,您应该会看到 Linux Mint Live 桌面。在桌面上,您应该会看到一个安装 Linux Mint 的图标以启动安装。
* 在下一组屏幕中,选择您的语言、键盘布局、选择安装多媒体编解码器并点击继续按钮。
* 在安装类型窗口中,选择其他选项。
* 在下一个窗口(图 11仔细选择以下内容
* 在设备下,选择刚刚创建的分区;您可以通过之前记下的分区大小来识别它。
* 然后点击“更改”,在“编辑分区”窗口中选择 Ext4 作为文件系统,勾选“格式化分区”选项,并将挂载点设为 “/”。
* 单击确定,然后为您的系统选择引导加载程序;理想情况下,它应该是下拉列表中的第一个条目。
* 仔细检查更改。因为一旦您点击立即安装,您的磁盘将被格式化,并且无法恢复。当您认为一切准备就绪,请单击立即安装。
![图11选择Windows 11安装Linux Mint的目标分区][13]
在以下屏幕中,选择您的位置,输入您的姓名并创建用于登录系统的用户 ID 和密码。安装应该开始(图 12
安装完成后(图 13取出 U 盘并重新启动系统。
![图12安装中][14]
![图13安装完成][15]
如果一切顺利,在成功安装为双引导系统后,您应该会看到带有 Windows 11 和 Linux Mint 的 GRUB。
现在您可以继续使用 [Linux Mint][16] 并体验快速而出色的 Linux 发行版。
### 总结
在本教程中,我向您展示了如何在预装 OEM Windows 的笔记本电脑或台式机上创建一个简单的 Linux Mint 双启动系统。这些步骤包括分区、创建可引导 U 盘、格式化和安装。
尽管上述说明针对的是 Linux Mint 21 Vanessa,但它们同样适用于其他出色的 [Linux 发行版][17]。
如果您遵循本指南,请在下面的评论框中告诉我您的安装情况。
如果您成功了,欢迎来到自由!
[下一篇:如何在 Ubuntu 22.04、22.10、Linux Mint 21 中安装 Java 17][18]
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/linux-mint-install-windows/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[gpchn](https://github.com/gpchn)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.linuxmint.com/download.php
[2]: https://www.debugpoint.com/wp-content/uploads/2022/09/Download-Linux-Mint-from-the-official-website.jpg
[3]: https://www.debugpoint.com/etcher-bootable-usb-linux/
[4]: https://www.debugpoint.com/wp-content/uploads/2022/09/Disable-BitLocker-in-Windows-Drives-to-install-Linux.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/09/Example-of-Shrink-Volume-option-in-Disk-Partition-1024x453.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/09/Enter-the-size-of-your-Linux-Partition.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/09/Unallocated-space-is-created.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page1.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page2.jpg
[10]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page3.jpg
[11]: https://www.debugpoint.com/wp-content/uploads/2022/09/Final-partition-for-installing-Linux.jpg
[12]: https://www.debugpoint.com/wp-content/uploads/2022/09/Linux-Mint-GRUB-Menu-to-kick-off-installation.jpg
[13]: https://www.debugpoint.com/wp-content/uploads/2022/09/Choose-the-target-partition-to-install-Linux-Mint-with-Windows-11.jpg
[14]: https://www.debugpoint.com/wp-content/uploads/2022/09/Installation-is-in-progress.jpg
[15]: https://www.debugpoint.com/wp-content/uploads/2022/09/Installation-is-complete.jpg
[16]: https://www.debugpoint.com/linux-mint
[17]: https://www.debugpoint.com/category/distributions
[18]: https://www.debugpoint.com/install-java-17-ubuntu-mint/


@ -0,0 +1,180 @@
[#]: subject: "3 ways to use the Linux inxi command"
[#]: via: "https://opensource.com/article/22/9/linux-inxi-command"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux inxi 命令的 3 种使用方法
======
我在 Linux 上使用 inxi 来检查我的笔记本电脑电池、CPU 信息,甚至天气。
![Coding on a computer][1]
当我在查询有关笔记本电脑电池健康状况的信息时,我偶然发现了 `inxi`。它是一个命令行系统信息工具,可提供有关你的 Linux 计算机(无论是笔记本电脑、台式机还是服务器)的大量信息。
`inxi` 命令使用 GPLv3 [许可][2],许多 Linux 发行版都包含它。根据它的 Git 存储库“inxi 努力支持最广泛的操作系统和硬件,从最简单的消费台式机到最先进的专业硬件和服务器。”
文档很完善,并且该项目在线维护了完整的[手册页][3]。安装后,你可以使用 `man inxi` 命令访问系统上的手册页。
### 在 Linux 上安装 inxi
通常,你可以从发行版的软件仓库或应用中心安装 `inxi`。例如,在 Fedora、CentOS、Mageia 或类似发行版上:
```
$ sudo dnf install inxi
```
在 Debian、Elementary、Linux Mint 或类似发行版上:
```
$ sudo apt install inxi
```
你可以在[此处][4]找到有关 Linux 发行版安装选项的更多信息。
### 在 Linux 上使用 inxi 的 3 种方法
当你安装了 `inxi`,你可以探索它的所有选项。有许多选项可帮助你了解有关系统的更多信息。最基本的命令提供了系统的基本概览:
```
$ inxi -b
System:
Host: pop-os Kernel: 5.19.0-76051900-generic x86_64 bits: 64
Desktop: GNOME 42.3.1 Distro: Pop!_OS 22.04 LTS
Machine:
Type: Laptop System: HP product: Dev One Notebook PC v: N/A
serial: <superuser required>
Mobo: HP model: 8A78 v: KBC Version 01.03 serial: <superuser required>
UEFI: Insyde v: F.05 date: 06/14/2022
Battery:
ID-1: BATT charge: 50.6 Wh (96.9%) condition: 52.2/53.2 Wh (98.0%)
CPU:
Info: 8-core AMD Ryzen 7 PRO 5850U with Radeon Graphics [MT MCP]
speed (MHz): avg: 915 min/max: 400/4507
Graphics:
Device-1: AMD Cezanne driver: amdgpu v: kernel
Device-2: Quanta HP HD Camera type: USB driver: uvcvideo
Display: x11 server: X.Org v: 1.21.1.3 driver: X: loaded: amdgpu,ati
unloaded: fbdev,modesetting,radeon,vesa gpu: amdgpu
resolution: 1920x1080~60Hz
OpenGL:
renderer: AMD RENOIR (LLVM 13.0.1 DRM 3.47 5.19.0-76051900-generic)
v: 4.6 Mesa 22.0.5
Network:
Device-1: Realtek RTL8822CE 802.11ac PCIe Wireless Network Adapter
driver: rtw_8822ce
Drives:
Local Storage: total: 953.87 GiB used: 75.44 GiB (7.9%)
Info:
Processes: 347 Uptime: 15m Memory: 14.96 GiB used: 2.91 GiB (19.4%)
Shell: Bash inxi: 3.3.13
```
### 1. 显示电池状态
你可以使用 `-B` 选项检查电池健康状况。结果会显示系统电池 ID、充电情况和其他信息:
```
$ inxi -B
Battery:
ID-1: BATT charge: 44.3 Wh (85.2%) condition: 52.0/53.2 Wh (97.7%)
```
### 2. 显示 CPU 信息
使用 `-C` 选项了解有关 CPU 的更多信息:
```
$ inxi -C
CPU:
Info: 8-core model: AMD Ryzen 7 PRO 5850U with Radeon Graphics bits: 64
type: MT MCP cache: L2: 4 MiB
Speed (MHz): avg: 400 min/max: 400/4507 cores: 1: 400 2: 400 3: 400
4: 400 5: 400 6: 400 7: 400 8: 400 9: 400 10: 400 11: 400 12: 400 13: 400
14: 400 15: 400 16: 400
```
`inxi` 的输出默认使用彩色文本。你可以根据需要使用“颜色开关”进行调整,以提高可读性。
命令选项是 `-c`,后跟 0 到 42 之间的任意一个数字,选择合适你的配色。
```
$ inxi -c 42
```
以下是使用颜色 5 和 7 的几个不同选项的示例:
![inxi -c 5 command][5]
该软件可以使用 Linux 系统中的传感器显示硬件温度、风扇速度和有关系统的其他信息。输入 `inxi -s` 并读取以下结果:
![inxi -s][6]
### 3. 组合选项
如果支持,你可以组合 `inxi` 的选项以获得复杂的输出。例如,`inxi -S` 提供系统信息,`-v` 提供详细输出。将两者结合起来可以得到以下结果:
```
$ inxi -S
System:
Host: pop-os Kernel: 5.19.0-76051900-generic x86_64 bits: 64
Desktop: GNOME 42.3.1 Distro: Pop!_OS 22.04 LTS
$ inxi -Sv
CPU: 8-core AMD Ryzen 7 PRO 5850U with Radeon Graphics (-MT MCP-)
speed/min/max: 634/400/4507 MHz Kernel: 5.19.0-76051900-generic x86_64
Up: 20m Mem: 3084.2/15318.5 MiB (20.1%) Storage: 953.87 GiB (7.9% used)
Procs: 346 Shell: Bash inxi: 3.3.13
```
### 额外功能:查看天气
`inxi` 可以收集到的信息并不只有你的电脑。使用 `-w` 选项,你还可以获取你所在地区的天气信息:
```
$ inxi -w
Weather:
Report: temperature: 14 C (57 F) conditions: Clear sky
Locale: Wellington, G2, NZL
current time: Tue 30 Aug 2022 16:28:14 (Pacific/Auckland)
Source: WeatherBit.io
```
你可以通过指定你想要的城市和国家以及 `-W` 来获取世界其他地区的天气信息:
```
$ inxi -W rome,italy
Weather:
Report: temperature: 20 C (68 F) conditions: Clear sky
Locale: Rome, Italy current time: Tue 30 Aug 2022 06:29:52
Source: WeatherBit.io
```
```
### 总结
有许多很棒的工具可以收集你的计算机的相关信息。我会根据不同的机器、不同的桌面,或者当时的心情,选用不同的工具。你最喜欢的系统信息工具是什么?
图片来源Don WatkinsCC BY-SA 4.0
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/linux-inxi-command
作者:[Don Watkins][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/code_computer_laptop_hack_work.png
[2]: https://github.com/smxi/inxi/blob/master/LICENSE.txt
[3]: https://smxi.org/docs/inxi-man.htm
[4]: https://smxi.org/docs/inxi-installation.htm#inxi-repo-install
[5]: https://opensource.com/sites/default/files/2022-09/inxi-c5.png
[6]: https://opensource.com/sites/default/files/2022-09/inxi-s.png

[#]: subject: "Atoms is a GUI Tool to Let You Manage Linux Chroot Environments Easily"
[#]: via: "https://itsfoss.com/atoms-chroot-tool/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Atoms 是一个可以让你轻松管理 Linux Chroot 环境的 GUI 工具
======
chroot 环境为你在 Linux 中进行测试提供了隔离。你无需费心创建虚拟机。相反,如果你想测试应用或其他东西,请创建一个允许你选择不同根目录的 chroot 环境。
因此,使用 chroot你可以在不让应用访问系统其余部分的情况下进行测试。你安装的任何应用或你尝试的任何东西都会被限制在该目录中并且不会影响操作系统的功能。
chroot 有它的好处,因此对各类用户(尤其是系统管理员)来说,它是一种测试事物的便捷方式。
不幸的是,所有这些都通过 Linux 终端运行。如果你可以有一个图形用户界面来让事情变得简单一些呢?这就是“**Atoms**”的用武之地。
### Atoms管理 Linux Chroot 的 GUI
![atoms][1]
Atoms 是一个 GUI 工具,它可以方便地创建和管理 Linux chroot 环境。
它还支持与 [Distrobox][2] 的集成。因此,你还可以使用 Atoms 管理容器。
但是,开发人员提到,该工具不提供与 Podman 的无缝集成,并解释了它的用途:“*它的目的只是允许用户在新环境中打开 shell无论是 chroot 还是容器。*”
如果你正在寻找这样的东西,你可能需要试试 [pods][3]。
### Atoms 的特性
![atoms options][4]
Atoms 是一个简单的 GUI 程序,可让你为多个受支持的 Linux 发行版创建 chroot 环境。
让我重点介绍支持的发行版及其提供的功能:
* 浏览已创建的 chroot 的文件。
* 能够选择要暴露的挂载点。
* 访问控制台。
* 支持的 Linux 发行版包括 Ubuntu、Alpine Linux、Fedora、Rocky Linux、Gentoo、AlmaLinux、OpenSUSE、Debian 和 CentOS。
它非常易于使用。从应用中创建 atom 只需一键。
你所要做的就是为 atom 命名,然后从可用选项列表中选择 Linux 发行版Ubuntu 作为上面截图中的选择)。它会在几分钟内下载镜像并为你设置 chroot 环境,如下所示。
![atom config][5]
完成后,你可以通过相应的选项启动控制台来管理 chroot 环境,或者自定义/删除它。
![atoms option][6]
要访问控制台,请转到另一个选项卡菜单。整个体验非常流畅,运行良好,至少对于我测试过的 Ubuntu 而言是这样。
![atoms console][7]
此外,你可以分离控制台以将其作为单独的窗口进行访问。
![atoms detach console][8]
### 在 Linux 上安装 Atoms
你可以使用 [Flathub][9] 上提供的 Flatpak 包在任何 Linux 发行版上安装 Atoms。如果你是 Linux 新手,请遵循我们的 [Flatpak 指南][10]。
**注意:** 最新的稳定版本 **1.0.2** 只能通过 Flathub 获得。
要探索其源代码和其他详细信息,请访问其 [GitHub 页面][11]。
### 总结
Linux 命令行功能强大,你几乎可以使用这些命令执行任何操作。但并不是每个人都对它感到满意,因此像 Atoms 这样的工具通过提供 GUI 使其更加方便。
Atoms 并不是唯一的一种。还有 [Grub Customizer][12] 可以更轻松地更改 [Grub][13] 配置,这可以通过命令行完成。
我相信还有更多这样的工具。
*你如何看待使用 Atoms 这样的 GUI 程序来管理 chroot 环境?在下面的评论中分享你的想法。*
--------------------------------------------------------------------------------
via: https://itsfoss.com/atoms-chroot-tool/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/09/atoms.png
[2]: https://itsfoss.com/distrobox/
[3]: https://github.com/marhkb/pods
[4]: https://itsfoss.com/wp-content/uploads/2022/09/atoms-options.png
[5]: https://itsfoss.com/wp-content/uploads/2022/09/atom-config.png
[6]: https://itsfoss.com/wp-content/uploads/2022/09/atoms-option.png
[7]: https://itsfoss.com/wp-content/uploads/2022/09/atoms-console.png
[8]: https://itsfoss.com/wp-content/uploads/2022/09/atoms-detach-console.png
[9]: https://flathub.org/apps/details/pm.mirko.Atoms
[10]: https://itsfoss.com/flatpak-guide/
[11]: https://github.com/AtomsDevs/Atoms
[12]: https://itsfoss.com/grub-customizer-ubuntu/
[13]: https://itsfoss.com/what-is-grub/

[#]: subject: "How to Access Android Devices Internal Storage and SD Card in Ubuntu, Linux Mint using Media Transfer Protocol (MTP)"
[#]: via: "https://www.debugpoint.com/how-to-access-android-devices-internal-storage-and-sd-card-in-ubuntu-linux-mint-using-media-transfer-protocol-mtp/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
如何使用媒体传输协议 (MTP) 在 Ubuntu、Linux Mint 中访问 Android 设备的内部存储和 SD 卡
======
**本教程将展示如何在 Ubuntu 中使用 MTP 访问 android 设备以及如何访问 SD 卡内容。**
MTP即 [媒体传输协议][1])是图片传输协议PTP的扩展,从 Android Marshmallow 版本开始实施。Marshmallow 更新之后,你无法再把 Android 设备当作典型的大容量存储设备使用,也就是直接插入后就能在文件管理器(例如 Thunar 或 GNOME Files中查看内部存储和 SD 卡的内容。这是因为操作系统无法识别 MTP 设备,也还没有实现受支持设备的列表。
### 在 Ubuntu、Linux Mint 中访问 Android 设备的步骤
* 使用以下命令安装 [libmtp][2],以及用于 MTP 设备的 FUSE 文件系统 [mtpfs][3]。
```
sudo apt install go-mtpfs
sudo apt install libmtp
sudo apt install mtpfs mtp-tools
```
* 使用以下命令在 `/media` 中创建一个目录、更改写入权限,并挂载 MTP 设备。
```
sudo mkdir /media/MTPdevice
sudo chmod 775 /media/MTPdevice
sudo mtpfs -o allow_other /media/MTPdevice
```
* 在 Ubuntu 中使用 USB 线缆插入你的 Android 设备。
* 在你的 Android 设备上,从主屏幕顶部下拉,然后点击 “Touch for more options”。
* 在下面的菜单中选择“Transfer File (MTP)”选项。
![MTP Option1][4]
![MTP Option2][5]
* 在终端中运行以下命令,查找设备 ID 等信息。你可以在命令输出中看到设备的 VID 和 PID。记下这两个数字在下图中高亮显示
```
mtp-detect
```
![mtp-detect Command Output][6]
* 使用以下命令使用文本编辑器打开 android 规则文件。
```
sudo gedit /etc/udev/rules.d/51-android.rules
```
* 如果你使用的是未安装 gedit 的最新 Ubuntu请使用以下命令。
```
sudo gnome-text-editor /etc/udev/rules.d/51-android.rules
```
* 在 `51-android.rules` 文件中使用你设备的 VID 和 PID 输入以下行(你在上面的步骤中记下)。
* 保存并关闭文件。
```
SUBSYSTEM=="usb", ATTR{idVendor}=="22b8", ATTR{idProduct}=="2e82", MODE="0666"
```
* 运行以下命令通过 [systemd][7] 重启设备管理器。
```
sudo service udev restart
```
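顺带一提,上面 `51-android.rules` 中的那行规则也可以用一个小脚本根据 VID 和 PID 生成,以避免手动拼写引号时出错。下面是一个示意,其中 `22b8` 和 `2e82` 只是文中截图里的示例值,请换成 `mtp-detect` 输出中你自己设备的数字:

```shell
# 示意:根据设备的 VID 和 PID 生成 51-android.rules 所需的规则行
# VID/PID 为示例值,请换成 mtp-detect 输出中你自己设备的值
VID="22b8"
PID="2e82"
rule="SUBSYSTEM==\"usb\", ATTR{idVendor}==\"$VID\", ATTR{idProduct}==\"$PID\", MODE=\"0666\""
echo "$rule"
# 确认输出无误后,可以将其追加到规则文件中(需要 root 权限):
# echo "$rule" | sudo tee -a /etc/udev/rules.d/51-android.rules
```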
### 访问内容的后续步骤
* 接下来的步骤主要需要访问你的 Android 设备的外部 SD 卡的内容。
* 我必须这样做,因为文件管理器没有显示 SD 卡的内容。这并不是一个正规的解决方案,而是一种临时的变通方法。根据这个 [Google 论坛帖子][8],它对大多数用户有效,也适用于我那台配有闪迪 SD 卡的摩托罗拉 G 2nd Gen。
* 在 Ubuntu 中安全删除你连接的设备。
* 关闭设备。从设备中取出 SD 卡。
* 在没有 SD 卡的情况下打开设备。
* 再次关闭设备。
* 将 SD 卡重新插入并再次打开设备。
* 重启你的 Ubuntu 机器并插入你的安卓设备。
* 现在你可以看到你的安卓设备的内部存储和 SD 卡的内容。
![MTP Device Contents in Ubuntu][9]
### 总结
上述在 Ubuntu 中访问安卓设备内容的教程,在新旧各版本的 Ubuntu 上,对三星、一加和摩托罗拉的安卓设备都可以使用。如果你在访问内容时遇到困难,可以试试这些步骤,它可能会起作用。在我看来,与老式的即插即用方案相比MTP 非常慢。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/how-to-access-android-devices-internal-storage-and-sd-card-in-ubuntu-linux-mint-using-media-transfer-protocol-mtp/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://en.wikipedia.org/wiki/Media_Transfer_Protocol
[2]: https://sourceforge.net/projects/libmtp/
[3]: https://launchpad.net/ubuntu/+source/mtpfs
[4]: https://www.debugpoint.com/wp-content/uploads/2016/03/MTP-Option1.png
[5]: https://www.debugpoint.com/wp-content/uploads/2016/03/MTP-Option2.png
[6]: https://www.debugpoint.com/wp-content/uploads/2016/03/mtp-detect.png
[7]: https://www.debugpoint.com/systemd-systemctl-service/
[8]: https://productforums.google.com/forum/#!topic/nexus/11d21gbWyQo;context-place=topicsearchin/nexus/category$3Aconnecting-to-networks-and-devices%7Csort:relevance%7Cspell:false
[9]: https://www.debugpoint.com/wp-content/uploads/2016/03/MTP-Device-Contents-in-Ubuntu.png

[#]: subject: "4 Simple Steps to Clean Your Ubuntu System"
[#]: via: "https://www.debugpoint.com/4-simple-steps-clean-ubuntu-system-linux/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "Donkey-Hao"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
清理 Ubuntu 系统的 4 个简单步骤
======
现在,试试看这 4 个简单的步骤,来清理你的 Ubuntu 系统吧。
这份精简指南将告诉你如何清理 Ubuntu 系统以及如何释放一些磁盘空间。
如果你的 Ubuntu 系统已经运行了至少一年,尽管系统是最新的,你仍然可能会觉得你的 Ubuntu 系统运行缓慢且滞后。
过去,你可能因为想试用某个应用程序,或是看到了它的好评推荐,而安装了许多应用程序,却从未删除它们。下面这些方法可以帮助你找出一些可以释放的隐藏磁盘空间。
### 清理 Ubuntu 系统的步骤
#### 1. 清理 Apt 缓存
Apt 缓存apt cache是 Ubuntu 系统保存所有已下载软件包文件的地方,以备日后再次使用。大多数用户不会去清理 Apt 缓存,而它可能会占用数百兆字节的空间。
打开终端并运行以下命令,查看你的 Apt 缓存占用了多少空间:
```
du -sh /var/cache/apt/archives
```
![][1]
如果你的 Ubuntu 系统已经安装了很久的话,你将看到这个数字非常大。运行以下命令来清理 Apt 缓存。
```
sudo apt-get clean
```
#### 2. 删除无用的内核
如果你已经运行 Ubuntu 系统超过了一年,那么你安装多个内核的可能性很高。如果你的硬件是最新的,并且与 Linux 兼容而没有太多配置,你可以删除旧的内核,保留最新的内核。
在终端运行以下命令来删除旧的内核:
```
sudo apt-get autoremove --purge
```
![Autoremove Purge][2]
#### 3. 删除旧的应用程序和软件包
如果你是一个喜欢尝试 Linux 应用程序的人,那么你的系统中肯定有一些不再需要的没用的应用程序。
现在,你可能已经忘记了自己安装过哪些应用程序。你可以在终端运行以下命令,得到通过 `apt` 命令安装的应用程序和软件包的列表:
```
history | grep "apt-get install"
```
![List of apt installed app History][3]
运行以下命令,你还可以从 dpkg 日志中得到最近安装的软件包列表:
```
grep " install " /var/log/dpkg.log.1
```
```
zgrep " install " /var/log/dpkg.log.2.gz
```
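dpkg 日志的每一行都有固定的格式(日期、时间、动作、“包名:架构”、旧版本、新版本),因此可以进一步用 `awk` 只取出包名。下面是一个示意,其中 heredoc 里是两行虚构的示例日志;实际使用时可以换成 `grep " install " /var/log/dpkg.log`

```shell
# 示意:从 dpkg.log 风格的日志行中只提取包名(第 4 个字段,去掉架构后缀)
# heredoc 中是虚构的示例日志行;实际可替换为grep " install " /var/log/dpkg.log
packages=$(cat <<'EOF' | awk '$3 == "install" { split($4, a, ":"); print a[1] }'
2022-09-20 10:11:12 install bleachbit:amd64 <none> 4.4.2-0
2022-09-21 09:00:00 install inxi:all <none> 3.3.13-1
EOF
)
echo "$packages"
```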
你可以运行以下命令来删除应用程序和软件包:
```
sudo apt remove app1 package1
```
#### 4. 使用系统清理应用
有大量免费的原生系统 [清理应用][4] 可以使用。不过,我认为 [BleachBit][5] 是其中清理系统最好的一个,因为它经受住了时间的考验。
使用以下命令安装 BleachBit 或通过应用商店安装。
```
sudo apt install bleachbit
```
安装后,打开 BleachBit 并运行扫描。它会向你显示浏览器的缓存文件、临时文件、垃圾文件等占用的所有空间,你只需单击一个按钮即可清理。
![][6]
### 额外提示
#### 清理 Flatpak 软件包
Flatpak 应用程序及其运行时runtime会占用大量磁盘空间。这是因为在设计上Flatpak 的可执行文件捆绑了运行时。尽管运行时可以在相关应用程序之间共享,但许多未使用的残留运行时仍可能占用你的磁盘空间。
删除未使用的 Flatpak 软件包,最直接的方法是在终端中运行下面的命令:
```
flatpak uninstall --unused
```
可以参考 [这篇文章][7] 了解有关 Flatpak 包的更多信息。
#### 清理未使用的 Snap 项目
如果你使用 Ubuntu 系统,那么你很有可能使用的是 Snap 软件包。随着时间的推移Snap 会积累不相关的运行时和文件。你可以使用以下脚本来清理一些没用的 snap 运行时。
将下面的脚本复制到一个新文件中,并将其命名为 `clean_snap.sh`。然后使用 `chmod +x clean_snap.sh` 命令赋予它可执行权限,再通过 `./clean_snap.sh` 运行:
```
#!/bin/bash
#Removes old revisions of snaps
#CLOSE ALL SNAPS BEFORE RUNNING THIS
set -eu
LANG=en_US.UTF-8
snap list --all | awk '/disabled/{print $1, $3}' |
while read snapname revision; do
snap remove "$snapname" --revision="$revision"
done
```
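如果不放心直接删除,可以先做一次“演练”,只打印将要执行的删除命令而不真正执行。下面的示意用一个变量模拟了 `snap list --all` 的输出(内容是虚构的示例);实际使用时把 `echo "$sample"` 换成 `snap list --all` 即可:

```shell
#!/bin/bash
# 示意snap 清理脚本的“演练”版本,只打印将要执行的命令,不做任何删除
set -eu
# 模拟 snap list --all 的输出(虚构示例);实际使用时换成snap list --all
sample='Name     Version   Rev   Tracking       Publisher   Notes
core20   20220805  1611  latest/stable  canonical*  base,disabled
firefox  104.0     1810  latest/stable  mozilla*    disabled
firefox  105.0     1883  latest/stable  mozilla*    -'
plan=$(echo "$sample" | awk '/disabled/{print $1, $3}' |
while read -r snapname revision; do
    echo "snap remove $snapname --revision=$revision"
done)
echo "$plan"
```

确认打印出的命令符合预期后,再运行上面真正执行删除的脚本。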
可以参考 [这篇文章][8] 了解有关清理 Snap 包的更多信息。
#### 手动查找大文件
你还可以使用以下命令来手动搜索大文件:
```
find /home -type f -exec du -h {} + | sort -hr | head -20
```
运行上面的 `find` 命令,你会得到 `/home` 目录中最大的 20 个文件。你可以查看这些大文件,并使用文件管理器手动删除它们。请注意,删除文件时要非常小心,尽量不要碰 `/home` 目录以外的任何内容。
![Find Large files in Linux][9]
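这条管道由几个部分组成:`find` 列出文件、`du -h` 计算大小、`sort -hr` 按人类可读的大小倒序排列、`head` 截取前几名。在扫描真实的家目录之前,可以先在一个临时目录里安全地试验它的效果:

```shell
#!/bin/sh
# 示意:在临时目录中演示“查找最大文件”的管道,避免直接扫描真实目录
tmpdir=$(mktemp -d)
# 创建三个大小不同的测试文件1KB、10KB、100KB
dd if=/dev/zero of="$tmpdir/small.bin"  bs=1024 count=1   2>/dev/null
dd if=/dev/zero of="$tmpdir/medium.bin" bs=1024 count=10  2>/dev/null
dd if=/dev/zero of="$tmpdir/large.bin"  bs=1024 count=100 2>/dev/null
# 与文中相同的管道,只是把 /home 换成了临时目录,并只取最大的一个
largest=$(find "$tmpdir" -type f -exec du -h {} + | sort -hr | head -n 1 | awk '{print $2}')
echo "最大的文件是:$largest"
rm -rf "$tmpdir"
```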
### 总结
这样就完成了。如果你按照上述步骤操作,一定能够在 Ubuntu 系统中释放出一些空间。另外,不要忘记更新软件包,让你的系统保持最新。
🗨️ 如果上述技巧帮你释放了一些磁盘空间、让你的 Ubuntu 变快了,请在下方评论区留言。你通常使用什么命令来清理你的 Ubuntu 系统?
快留言告诉我吧。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/4-simple-steps-clean-ubuntu-system-linux/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[Donkey-Hao](https://github.com/Donkey-Hao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2018/07/apt-cache.png
[2]: https://www.debugpoint.com/wp-content/uploads/2018/07/Autoremove-Purge-1024x218.png
[3]: https://www.debugpoint.com/wp-content/uploads/2018/07/List-of-apt-installed-app-History.png
[4]: https://www.debugpoint.com/2017/02/stacer-is-a-system-monitoring-and-clean-up-utility-for-ubuntu/
[5]: https://www.bleachbit.org
[6]: https://www.debugpoint.com/wp-content/uploads/2018/07/BleachBit-Clean-your-system.png
[7]: https://www.debugpoint.com/clean-up-flatpak/
[8]: https://www.debugpoint.com/clean-up-snap/
[9]: https://www.debugpoint.com/wp-content/uploads/2018/07/Find-Large-files-in-Linux-1024x612.png