Mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-01-28 23:20:10 +08:00, commit fd1f35fc6e)

[#]: collector: (lujun9972)
[#]: translator: (ShuyRoy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13109-1.html)
[#]: subject: (Intel formally launches Optane for data center memory caching)
[#]: via: (https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Intel Optane: Memory Caching for the Data Center
======

![](https://img.linux.net.cn/data/attachment/album/202102/12/111720yq1rvxcncjdsjb0g.jpg)

> Intel has launched its Optane persistent memory product line, built on 3D XPoint memory technology. The solution sits between DRAM and NAND to boost performance.

![Intel][1]

Intel formally launched its Optane persistent memory product line at its April 2019 [data center event][2]. Optane has been around for a while, but the Xeon server processors of the time could not take full advantage of it. The new Xeon 8200 and 9200 series can fully exploit the benefits of Optane persistent memory.

Because Optane is an Intel product (developed jointly with Micron), server processors from AMD and Arm cannot support it.

As [I noted previously][3], Optane DC persistent memory uses 3D XPoint memory technology, which Intel developed with Micron. 3D XPoint is a non-volatile memory that is considerably faster than SSDs, approaching DRAM speed, while offering the persistence of NAND flash.

The first 3D XPoint products were SSDs known as Intel's "rulers", because they were designed in a long, thin shape much like a ruler, to fit into 1U server racks. As part of the announcement, Intel introduced the new Intel SSD D5-P4325 ["ruler"][7] SSD, which uses quad-level cell (QLC) 3D NAND memory and can pack 1PB of storage into a 1U rack.

Optane DC persistent memory is initially available in capacities of up to 512GB using 128GB DIMMs. "Optane DC persistent memory reaches capacities 2 to 4 times larger than DRAM," said Navin Shenoy, executive vice president and general manager of Intel's Data Center Group.

"We expect server system capacity to scale to 4.5TB per socket, or 36TB in an 8-socket system. That's 3 times our first-generation Xeon Scalable chips," he said.

### Intel Optane memory usage and speed

Optane runs in two distinct modes: Memory Mode and App Direct Mode. In Memory Mode, DRAM sits on top of the Optane memory and acts as a cache for it. In App Direct Mode, DRAM and Optane DC persistent memory are pooled together to maximize total capacity. Not every workload suits this configuration, so it should be used with applications that are not latency-sensitive. As Intel promotes it, Memory Mode is the primary use case for Optane.

A few years back, when 3D XPoint was first announced, Intel claimed that Optane was 1,000 times faster than NAND, with 1,000 times the endurance of NAND and 10 times the density potential of DRAM. That was a bit of an exaggeration, but the numbers are still compelling.

Using Optane memory across four consecutive 256B cache lines delivers read speeds of 8.3GB/s and write speeds of 3.0GB/s. Compared with the roughly 500MB/s read/write speed of a SATA SSD, that is a significant performance gain. Keep in mind that Optane acts as memory, so it caches frequently accessed content from the SSD.

This is the key to understanding Optane DC. It can hold very large data sets very close to memory, so a low-latency CPU can minimize the latency of reaching out to the slower storage subsystem, whether that storage is SSD or HDD. It now offers the possibility of keeping multiple terabytes of data very close to the CPU for much faster access.

### A challenge for Optane memory

The only real challenge is that Optane plugs into the DIMM slots where memory goes. Some motherboards now come with as many as 16 DIMM slots per CPU, but that is still board space that customers and equipment makers must balance between Optane and memory. Some Optane drives attach over a PCIe interface instead, which relieves the crowding of memory slots on the motherboard.

Thanks to the way it writes data, 3D XPoint also offers greater endurance than traditional NAND flash. Intel promises a 5-year warranty on Optane, while many SSDs offer only 3 years.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all

Author: [Andy Patrizio][a]
Topic selected by: [lujun9972][b]
Translated by: [RiaXu](https://github.com/ShuyRoy)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/intel-optane-persistent-memory-100760427-large.jpg
[2]: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html
[3]: https://www.networkworld.com/article/3279271/intel-launches-optane-the-go-between-for-memory-and-storage.html
[4]: https://www.networkworld.com/article/3290421/why-nvme-users-weigh-benefits-of-nvme-accelerated-flash-storage.html
[5]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb
[6]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[7]: https://www.theregister.co.uk/2018/02/02/ruler_and_miniruler_ssd_formats_look_to_banish_diskstyle_drives/
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: (chensanle)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13107-1.html)
[#]: subject: (Tmux Command Examples To Manage Multiple Terminal Sessions)
[#]: via: (https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

Tmux Command Examples to Manage Multiple Terminal Sessions
======

![](https://img.linux.net.cn/data/attachment/album/202102/11/101058ffso6wzzw94wm2ng.jpg)

We have already seen how to manage multiple sessions with [GNU Screen][2]. Today, we will explore another well-known command-line utility for session management, **Tmux**. Like GNU Screen, Tmux is a terminal multiplexer that lets us create multiple sessions within a single terminal window and run several applications or processes at the same time. Tmux is free, open source and cross-platform, supporting Linux, OpenBSD, FreeBSD, NetBSD and Mac OS X. This article covers the most common uses of Tmux on Linux.

### Installing Tmux on Linux

Tmux is available in the official repositories of most Linux distributions.

On Arch Linux and its derivatives, install it with:

```
$ sudo pacman -S tmux
```

On Debian, Ubuntu or Linux Mint:

```
$ sudo apt-get install tmux
```

On Fedora:

```
$ sudo dnf install tmux
```

On RHEL and CentOS:

```
$ sudo yum install tmux
```

On SUSE/openSUSE:

```
$ sudo zypper install tmux
```

That's it, Tmux is installed. Let's move on to some examples.

### Tmux command examples: managing multiple sessions

By default, the prefix for all Tmux commands is `Ctrl+b`. Keep this shortcut in mind before you start.

> **Note**: The prefix for all **Screen** commands is `Ctrl+a`.
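
For readers used to Screen's `Ctrl+a`, the Tmux prefix can be remapped. A minimal sketch of the relevant `~/.tmux.conf` lines (a common convention; this configuration file is not covered by the article itself):

```
# ~/.tmux.conf: remap the Tmux prefix from Ctrl+b to Ctrl+a, Screen-style
unbind C-b
set -g prefix C-a
bind C-a send-prefix
```

Reload the configuration in a running server with `tmux source-file ~/.tmux.conf`.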
#### Creating a Tmux session

Run the following command in the terminal to create a Tmux session and attach to it:

```
tmux
```

Or,

```
tmux new
```

Once you are inside a Tmux session, you will see a **green bar at the bottom**, as shown in the screenshot below.

![][3]

*Creating a Tmux session*

This green bar makes it easy to tell whether you are currently inside a Tmux session.

#### Detaching from a Tmux session

To detach from the current Tmux session, press `Ctrl+b` followed by `d`. You don't press both shortcuts at once: press `Ctrl+b` first, then `d`.

After detaching, you will see output like this:

```
[detached (from session 0)]
```

#### Creating a named session

If you use multiple sessions, you may easily lose track of which applications run in which session. In that case, you can create sessions with names. For example, for web-related services, create a Tmux session named "webserver" (or any other name you like):

```
tmux new -s webserver
```

Here is the new named Tmux session:

![][4]

*A Tmux session with a custom name*

As you can see in the screenshot above, the session is labeled "webserver". This way, you can easily tell which applications run in which session.

To detach, just press `Ctrl+b` and `d`.

#### Listing Tmux sessions

To list the Tmux sessions, run:

```
tmux ls
```

Sample output:

![][5]

*Listing Tmux sessions*

As you can see, we have two open Tmux sessions.

#### Creating a detached session

Sometimes you may want to simply create a session without attaching to it automatically.

To create a detached session named "ostechnix", run:

```
tmux new -s ostechnix -d
```

The above command creates a session called "ostechnix" without attaching to it.

You can verify this with the `tmux ls` command:

![][6]

*Creating a detached session*

#### Attaching to a Tmux session

To attach to the last created session, run:

```
tmux attach
```

Or,

```
tmux a
```

To attach to a specific named session, say "ostechnix", run:

```
tmux attach -t ostechnix
```

Or, for short:

```
tmux a -t ostechnix
```

#### Killing Tmux sessions

When you are done with a Tmux session and no longer need it, kill it with:

```
tmux kill-session -t ostechnix
```

From inside the session, press `Ctrl+b` and `x`, then press `y` to kill the session.

You can verify with the `tmux ls` command.

To kill the whole Tmux server along with all sessions, run:

```
tmux kill-server
```

Be careful! This terminates all Tmux sessions without any warning, even if there are running jobs inside them.

When there are no active Tmux sessions, you will see output like this:

```
$ tmux ls
no server running on /tmp/tmux-1000/default
```

#### Splitting Tmux windows

A window can be split into multiple smaller windows; in Tmux these are called "panes". Each pane can run a different program at the same time, and you can interact with all of the panes simultaneously. Each pane can be resized, moved and closed without affecting the other panes. You can split the screen horizontally, vertically, or both.

##### Splitting panes horizontally

To split a pane horizontally, press `Ctrl+b` and `"` (a double quote).

![][7]

*Splitting Tmux panes horizontally*

Use the same key combination to split the panes further.

##### Splitting panes vertically

To split a pane vertically, press `Ctrl+b` and `%`.

![][8]

*Splitting Tmux panes vertically*

##### Splitting panes horizontally and vertically

We can also split panes horizontally and vertically at the same time. Take a look at the following screenshot:

![][9]

*Splitting Tmux panes*

First, I split horizontally with `Ctrl+b` `"`, then split the lower pane vertically with `Ctrl+b` `%`.

As you can see, I am running a different program in each pane.

##### Switching between panes

To switch between panes, press `Ctrl+b` and an arrow key (Left, Right, Up, Down).

##### Sending commands to all panes

In the previous example, we ran three different commands, one in each pane. You can also send the same command to all panes at once.

To do so, press `Ctrl+b`, type the following command, and hit ENTER:

```
:setw synchronize-panes
```

Now type any command in any pane. You will see that the same command affects all panes.
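
If you toggle synchronization often, the command can be bound to a key. A hedged sketch for `~/.tmux.conf` (the `S` key is my own arbitrary choice, not something the article prescribes):

```
# ~/.tmux.conf: toggle pane synchronization with prefix + S
# (run without an argument, "setw synchronize-panes" toggles the option)
bind S setw synchronize-panes
```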
##### Swapping panes

To swap panes, press `Ctrl+b` and `o`.

##### Showing pane numbers

To show pane numbers, press `Ctrl+b` and `q`.

##### Killing panes

To close a pane, simply type `exit` and hit ENTER. Alternatively, press `Ctrl+b` and `x`. You will see a confirmation message; press `y` to close the pane.

![][10]

*Killing panes*

##### Zooming Tmux panes in and out

We can zoom a Tmux pane to the full size of the current terminal window, for better text visibility and to see more of its content. This is useful when you need more space or want to focus on a particular task. When that task is done, you can zoom the pane back out (unzoom) to its normal position. More details at the following link:

- [How To Zoom Tmux Panes For Better Text Visibility?](https://ostechnix.com/how-to-zoom-tmux-panes-for-better-text-visibility/)

#### Autostarting Tmux sessions

When working with remote systems over SSH, it is always a good practice to run long-running processes inside a Tmux session. This prevents you from losing control of the running process when the network connection suddenly drops. One way to avoid this problem is to autostart Tmux sessions. For more details, refer to the following link:

- [Autostart Tmux Session On Remote System When Logging In Via SSH](https://ostechnix.com/autostart-tmux-session-on-remote-system-when-logging-in-via-ssh/)
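
As a minimal sketch of that approach (the session name `ssh_tmux` and the use of `~/.bashrc` are assumptions here; the linked article covers the full setup), the following shell-rc fragment attaches to, or creates, a session on SSH logins only:

```
# ~/.bashrc fragment: on SSH logins, attach to a Tmux session named
# "ssh_tmux", creating it first if it does not exist yet.
# Guarded so it does nothing when already inside a Tmux session.
if [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ]; then
    tmux attach -t ssh_tmux 2>/dev/null || tmux new -s ssh_tmux
fi
```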
### Conclusion

At this point, you have the basic Tmux skills needed to manage multiple sessions. For more details, refer to the man page:

```
$ man tmux
```

Both GNU Screen and Tmux are very useful for managing remote servers over SSH. Learn the Screen and Tmux commands thoroughly to manage your remote servers like a pro.

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/

Author: [sk][a]
Topic selected by: [lujun9972][b]
Translated by: [chensanle](https://github.com/chensanle)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-720x340.png
[2]: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/
[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-session.png
[4]: https://www.ostechnix.com/wp-content/uploads/2019/06/Named-Tmux-session.png
[5]: https://www.ostechnix.com/wp-content/uploads/2019/06/List-Tmux-sessions.png
[6]: https://www.ostechnix.com/wp-content/uploads/2019/06/Create-detached-sessions.png
[7]: https://www.ostechnix.com/wp-content/uploads/2019/06/Horizontal-split.png
[8]: https://www.ostechnix.com/wp-content/uploads/2019/06/Vertical-split.png
[9]: https://www.ostechnix.com/wp-content/uploads/2019/06/Split-Panes.png
[10]: https://www.ostechnix.com/wp-content/uploads/2019/06/Kill-panes.png
[#]: collector: (lujun9972)
[#]: translator: (scvoet)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13115-1.html)
[#]: subject: (Where are all the IoT experts going to come from?)
[#]: via: (https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

Where Are All the IoT Experts Going to Come From?
======

> The rapid growth of the Internet of Things (IoT) is creating a need for cross-functional experts who can combine traditional networking and infrastructure expertise with database and reporting skills.

![Kevin \(CC0\)][1]

If the Internet of Things (IoT) is going to fulfill its enormous promise, it will need legions of smart, skilled, **trained** workers to make it all happen. And right now, it is not at all clear where those people are going to come from.

That's why I was interested to exchange emails with Keith Flynn, senior director of product management, R&D at asset-optimization software company [AspenTech][2], who says that when dealing with the large number of new technologies that fall under the IoT umbrella, you need people who can understand how to configure the technology and interpret the data. Flynn sees a growing need for existing educational institutions to offer IoT-specific courses, as well as an opportunity for new, private schools focused on IoT and offering well-rounded curricula.

"In the future, IoT projects will differ tremendously from the data management and automation projects that are common today," Flynn told me. "The future requires a more holistic skill set and the ability to work across disciplines, so that we all speak the same language."

With the IoT growing 30% a year, Flynn adds, rather than relying on a few specific skills, "everything from traditional deployment skills like networking and infrastructure, to database and reporting skills, and frankly even basic data science, will need to be understood and used together."

### Calling all IoT consultants

"The first big opportunity for IoT-educated people will be in the consulting field," Flynn predicts. "As consulting firms adapt to industry trends or die off... having IoT-trained staff will help position them for IoT projects and make the claim to a new line of business: IoT consulting."

The problem is particularly acute for startups and smaller companies. "The larger the organization, the more likely they are to hire people in distinct technology categories," Flynn said. "But for smaller organizations and smaller IoT projects, you need someone who can do both."

Both? Or **all of it**? IoT "requires all of the skills and knowledge to be pulled together," Flynn said. "Not all of the skills are new; they have simply never been grouped together or taught together before."

### The IoT expert of the future

True IoT expertise starts with foundational instrumentation and electrical skills, Flynn says, which can help workers invent new wireless transmitters or improve technology to boost battery life and power consumption.

"IT skills, like networking, IP addressing, subnet masks, cellular and satellite are also key IoT needs," Flynn says. He also sees a need for database management skills, as well as cloud management and security expertise, "especially as things like advanced process control (APC) and sending sensor data directly to databases and data lakes become the norm."

### Where will IoT experts come from?

Flynn says standardized formal education courses would be the best way to ensure that graduates or certificate holders have the right set of skills. He even laid out a sample curriculum: "Start in chronological order with the basics, like [electrical instrumentation] and measurement. Then move into networking; database management and cloud courses should all come after that. This degree could even be worked into an existing engineering program, and it might take two years... to finish the IoT part of the studies."

While corporate training can also play a role, it is really a case of "easier said than done," Flynn warns. "That training would need to be a deliberate effort driven by the organization."

Of course, there is already a [huge collection of online IoT training courses and certificate programs][5] available. But ultimately, that leaves the burden of IoT education on the workers themselves.

"Upskilling is critical in a world where technology keeps transforming industries," Flynn said. "If that push to upskill doesn't come from your employer, online courses and certifications would be a good way to do it yourself. We just need those courses to be created... I can even see organizations partnering with the higher-education institutions that offer these courses to give their employees a better start. The challenge with an IoT course, of course, is that it will need to evolve constantly to keep up with the technology."

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html

Author: [Fredric Paul][a]
Topic selected by: [lujun9972][b]
Translated by: [Percy (@scvoet)](https://github.com/scvoet)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/programmer_certification-skills_code_devops_glasses_student_by-kevin-unsplash-100764315-large.jpg
[2]: https://www.aspentech.com/
[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career
[5]: https://www.google.com/search?client=firefox-b-1-d&q=iot+training
[6]: https://www.networkworld.com/article/3254185/internet-of-things/tips-for-securing-iot-on-your-network.html#nww-fsb
[7]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
[8]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb
[9]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: (Chao-zhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13096-1.html)
[#]: subject: (EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World)
[#]: via: (https://itsfoss.com/endeavouros/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

EndeavourOS: Filling the Void Left by Antergos in the Arch Linux World
======

![](https://img.linux.net.cn/data/attachment/album/202102/07/225558rdb85bmm6uumro71.jpg)

I'm sure most of our readers are aware of [the demise of the Antergos project][2]. After the announcement, members of the Antergos community created several distributions to carry on its legacy. Today, we will look at one of the "spiritual successors" of Antergos: [EndeavourOS][3].

### EndeavourOS is not a fork of Antergos

Before we get started, I would like to point out very clearly that EndeavourOS is not a fork of Antergos. The developers used Antergos as their inspiration to create a lightweight, Arch-based distribution.

![Endeavouros First Boot][4]

According to [the project's site][5], EndeavourOS came into existence because people in the Antergos community wanted to keep the spirit of Antergos alive. Their goal was simply "to have Arch with an easy-to-use installer and a friendly, helpful community to fall back on while mastering the system."

Unlike many Arch-based distributions, EndeavourOS is intended to be used like [vanilla Arch][5], "so there are no one-click solutions to install your favorite apps, nor a bunch of preinstalled apps you will eventually not need." For most people, especially those new to Linux and Arch, there will be a learning curve, but EndeavourOS aims to build a large, friendly community that encourages people to ask questions and learn about their system.

![Endeavouros Installing][6]

### A work in progress

EndeavourOS was [first announced on May 23rd, 2019][8] and [released its first version on July 15th][7]. Unfortunately, that means the developers were unable to include all the features they had planned. (LCTT translator's note: this article was originally published in 2019; EndeavourOS remains actively developed today.)

For example, they wanted an online install similar to the one Antergos had, but ran into [issues with the current options][9]: "Cnchi running outside the Antergos ecosystem causes serious issues and needs a total rewrite to be usable, and RebornOS' Fenix installer isn't fully shaped yet and needs more time to function properly." So for now, EndeavourOS ships with the [Calamares installer][10].

EndeavourOS offers [less than Antergos did][9]: its repository is smaller than the Antergos one, though it does ship with some AUR packages. Their goal is a system that is close to Arch without being vanilla Arch.

![Endeavouros Updating With Kalu][12]

The developers [stated further][13]:

> "Linux, and Arch in particular, is all about freedom of choice; we provide a basic install that lets you explore those choices at a convenient, fine-grained level. We will never force decisions on you, like installing GUI applications such as Pamac for you, or adopting sandboxed solutions like Flatpak or Snaps. It is entirely up to you how you shape your installation, and that's the main difference between us and Antergos or Manjaro. But like Antergos, if you run into a problem with a package you've installed, we will do our best to help you."

### Experiencing EndeavourOS

I installed EndeavourOS in [VirtualBox][14] and took a look around. When I first booted, I was greeted by a window with links to the EndeavourOS site about installation, along with an install button and a manual partitioning tool. The installation went very smoothly with the Calamares installer.

After rebooting into my fresh install of EndeavourOS, I was greeted by a colorfully themed XFCE desktop, along with a bunch of notification messages. Most Arch-based distros I have used ship with a GUI package manager, such as [pamac][15] or [octopi][16], for system updates. EndeavourOS comes with [kalu][17] (short for "Keeping Arch Linux Up-to-date"). It can update packages, show Arch Linux news, update AUR packages, and more. Once it detects an available update, it displays a notification message.

I browsed the menus to see what was installed by default. The answer is: not much; there isn't even an office suite. They want EndeavourOS to be a blank canvas on which anyone can create the system they want. They are headed in the right direction.

![Endeavouros Desktop][18]

### Final thoughts

EndeavourOS is still very young; its first stable release has not been out for long. It is missing a few things, most notably an online installer. That said, it is hard to estimate how far it will go. (LCTT translator's note: this article was published in 2019.)

While it is not an exact recreation of Antergos, EndeavourOS seeks to replicate the most important part of Antergos: a warm, welcoming community. All too often, the Linux community can seem unwelcoming, or even outright hostile, to beginners. I see more and more people trying to combat that negativity and bring more people into Linux. With the EndeavourOS team putting its focus on community building, I believe a great distro will emerge.

If you are currently on Antergos, there is a way to [switch to EndeavourOS without reinstalling your system][20].

If you want an exact recreation of Antergos, I recommend checking out [RebornOS][21]. They are currently developing a replacement for the Cnchi installer named Fenix.

Have you tried EndeavourOS? What are your thoughts on it?

--------------------------------------------------------------------------------

via: https://itsfoss.com/endeavouros/

Author: [John Paul][a]
Topic selected by: [lujun9972][b]
Translated by: [Chao-zhi](https://github.com/Chao-zhi)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-logo.png?ssl=1
[2]: https://itsfoss.com/antergos-linux-discontinued/
[3]: https://endeavouros.com/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-first-boot.png?resize=800%2C600&ssl=1
[5]: https://endeavouros.com/info-2/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-installing.png?resize=800%2C600&ssl=1
[7]: https://endeavouros.com/endeavouros-first-stable-release-has-arrived/
[8]: https://forum.antergos.com/topic/11780/endeavour-antergos-community-s-next-stage
[9]: https://endeavouros.com/what-to-expect-on-the-first-release/
[10]: https://calamares.io/
[11]: https://itsfoss.com/veltos-linux/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-updating-with-kalu.png?resize=800%2C600&ssl=1
[13]: https://endeavouros.com/second-week-after-the-stable-release/
[14]: https://itsfoss.com/install-virtualbox-ubuntu/
[15]: https://aur.archlinux.org/packages/pamac-aur/
[16]: https://octopiproject.wordpress.com/
[17]: https://github.com/jjk-jacky/kalu
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-desktop.png?resize=800%2C600&ssl=1
[19]: https://itsfoss.com/clear-linux/
[20]: https://forum.endeavouros.com/t/how-to-switch-from-antergos-to-endevouros/105/2
[21]: https://rebornos.org/
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13103-1.html)
[#]: subject: (The Zen of Python: Why timing is everything)
[#]: via: (https://opensource.com/article/19/12/zen-python-timeliness)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

The Zen of Python: Why Timing Is Everything
======

> Part of a special series on the Zen of Python, focusing on the fifteenth and sixteenth principles: now versus never.

![](https://img.linux.net.cn/data/attachment/album/202102/09/231557dkuzz22ame4ja2jj.jpg)

Python is constantly evolving. The Python community has an unending appetite for feature requests, and an unending dissatisfaction with the status quo. As Python becomes ever more popular, changes to the language affect more people.
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13116-1.html)
[#]: subject: (How to tell if implementing your Python code is a good idea)
[#]: via: (https://opensource.com/article/19/12/zen-python-implementation)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

How to Tell Whether Implementing Your Python Code Is a Good Idea
======

> Part of a special series on the Zen of Python, focusing on the seventeenth and eighteenth principles: hard and easy.

![](https://img.linux.net.cn/data/attachment/album/202102/14/120518rjkwvjs76p9d1911.jpg)

A language does not exist in the abstract. Every single language feature has to be implemented in code. It is easy to promise some features, but the implementation can get hairy. A hairy implementation means more potential for bugs and, even worse, a maintenance burden for the ages.

The [Zen of Python][2] has answers for this conundrum.

### If the implementation is hard to explain, it's a bad idea

The most important thing about programming languages is predictability. Sometimes we explain the semantics of a certain construct in terms of an abstract programming model, which does not correspond exactly to the implementation. However, the best of all explanations just *explains the implementation*.

If the implementation is hard to explain, it means the avenue is impossible.

### If the implementation is easy to explain, it may be a good idea

Just because something is easy does not mean it is worthwhile. However, once it is explained, it is much easier to judge whether it is a good idea.

This is also why the second half of this principle intentionally equivocates: nothing is certain to be a good idea, but it always allows for a discussion.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/zen-python-implementation

Author: [Moshe Zadka][a]
Topic selected by: [lujun9972][b]
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops_confusion_wall_questions.png?itok=zLS7K2JG (Brick wall between two people, a developer and an operations manager)
[2]: https://www.python.org/dev/peps/pep-0020/
[#]: collector: (lujun9972)
[#]: translator: (Chao-zhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13099-1.html)
[#]: subject: (Getting Started With Pacman Commands in Arch-based Linux Distributions)
[#]: via: (https://itsfoss.com/pacman-command/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)

Getting Started with pacman Commands in Arch-based Linux Distributions
======

> This beginner's guide shows you what you can do with pacman commands in Linux, how to use them to find new packages, install and upgrade packages, and clean your system.

The [pacman][1] package manager is one of the main differences between [Arch Linux][2] and other major distributions such as Red Hat and Ubuntu/Debian. It combines a simple binary package format with an easy-to-use [build system][3]. The aim of pacman is to make it easy to manage packages, whether they come from the [official repositories][4] or the user's own builds.

If you have ever used Ubuntu or a Debian-based distribution, you may have used the `apt-get` or `apt` commands. pacman is the equivalent in Arch Linux. If you [have just installed Arch Linux][5], one of the first [things to do after installing Arch Linux][6] is to learn to use pacman commands.

In this beginner's guide, I'll explain some of the essential usages of the `pacman` command that you should know to manage your Arch-based system.

### Essential pacman commands Arch Linux users should know

![](https://img.linux.net.cn/data/attachment/album/202102/09/111411uqadijqdd8afgk56.jpg)

Like other package managers, pacman can synchronize package lists with the software repositories, and it resolves all required dependencies automatically, so that users can download and install software with a single command.

#### Install packages with pacman

You can install a single package, or multiple packages, like this:

```
pacman -S package_name_1 package_name_2 ...
```

![Installing a package][8]

The `-S` flag stands for synchronization: pacman synchronizes with the repositories before installing.

The pacman database distinguishes installed packages into two groups, according to the reason they were installed:

* **Explicitly installed**: packages installed directly by a `pacman -S` or `-U` command
* **Dependencies**: packages installed automatically because they are [required][9] by another, explicitly installed package

#### Remove an installed package

To remove a single package, leaving all of its dependencies installed:

```
pacman -R package_name
```

![Removing a package][10]

To remove a package along with the dependencies that are not required by any other installed package:

```
pacman -Rs package_name
```

If a package that required the dependencies has already been removed, this command removes all the dependencies that are no longer needed:

```
pacman -Qdtq | pacman -Rs -
```

#### Upgrade packages

pacman provides an easy way to [update Arch Linux][11]. You can update all installed packages with a single command. This may take a while, depending on how up-to-date your system is.

The following command synchronizes the repository databases *and* updates all of the system's packages, excluding "local" packages that are not in the configured repositories:

```
pacman -Syu
```

* `S` stands for sync
* `y` stands for refreshing the local repository databases
* `u` stands for system update

In other words: sync to the central repositories (the master package databases), refresh the local copy of those databases, and then perform a system update (by updating every package that has a newer version available).

![System update][12]

> Attention!
>
> Arch Linux users are advised to check the [Arch Linux home page][2] for the latest news before every system upgrade, in case of unusual updates. If an update requires manual intervention, a news item will be posted there. Alternatively, you can subscribe to the [RSS feed][13] or the [Arch announce mailing list][14].
>
> Before upgrading fundamental software (such as the kernel, xorg, systemd, or glibc), look over the appropriate [forum][15] for any reported problems.
>
> **Partial upgrades are unsupported** in rolling-release distributions such as Arch and Manjaro. That means that when new library versions are pushed to the repositories, all the packages in the repositories need to be upgraded against them. For example, if two packages depend on the same library, upgrading only one of them might break the other one, which still depends on the older version of the library.

#### Search for packages with pacman

pacman queries the local package database with the `-Q` flag, the sync databases with the `-S` flag, and the files database with the `-F` flag.

pacman can search the databases for packages, matching both package names and descriptions:

```
pacman -Ss string1 string2 ...
```

![Searching for a package][16]

To search for packages that are already installed:

```
pacman -Qs string1 string2 ...
```

To search for the package that owns a given file name in the remote packages:

```
pacman -F string1 string2 ...
```

To view the dependency tree of a package:

```
pactree package_name
```

#### Clean the package cache

pacman stores its downloaded packages in `/var/cache/pacman/pkg/` and does not remove the old or uninstalled versions automatically. This has some advantages:

1. It allows [downgrading][17] a package without the need to retrieve the previous version from another source.
2. A package that has been uninstalled can easily be reinstalled directly from the cache folder.

However, it is necessary to clean the cache periodically to prevent the folder from growing in size.

The [paccache(8)][18] script, provided by the [pacman-contrib][19] package, deletes all cached versions of installed and uninstalled packages, except for the most recent 3 versions, by default:

```
paccache -r
```

![Cleaning the cache][20]

To remove all the cached packages that are not currently installed, as well as the unused sync databases, run:

```
pacman -Sc
```

To remove all files from the cache, use the clean switch twice. This is the most aggressive approach and leaves nothing in the cache folder:

```
pacman -Scc
```

#### Install local or third-party packages

To install a "local" package that is not from a remote repository:

```
pacman -U /path/to/package/package_name.pkg.tar.xz
```

To install a "remote" package, not contained in an official repository:

```
pacman -U http://www.example.com/repo/example.pkg.tar.xz
```

### Bonus: troubleshooting common errors with pacman

Here are some common errors you may encounter while managing packages with pacman.

#### Failed to commit transaction (conflicting files)

If you see the following error:

```
error: could not prepare transaction
error: failed to commit transaction (conflicting files)
package: /path/to/file exists in filesystem
Errors occurred, no packages were upgraded.
```

This happens because pacman has detected a file conflict and, by design, it will not overwrite files for you.

A safe way to solve this is to first check if another package owns the file (`pacman -Qo /path/to/file`). If the file is owned by another package, file a bug report. If it is not owned by another package, rename the file that "exists in filesystem" and re-issue the update command. If all goes well, the file may then be removed.

Instead of manually renaming, and later removing, all the files that belong to the package in question, you may explicitly run `pacman -S --overwrite glob` to force pacman to overwrite files that match the given glob pattern.

#### Failed to commit transaction (invalid or corrupted package)

Look for `.part` files (partially downloaded packages) in `/var/cache/pacman/pkg/` and remove them. This is often caused by a custom `XferCommand` in the `pacman.conf` file.

#### Failed to init transaction (unable to lock database)

When pacman is about to alter the package database, for example when installing a package, it creates a lock file at `/var/lib/pacman/db.lck`. This prevents another instance of pacman from trying to alter the package database at the same time.

If pacman is interrupted while changing the database, this stale lock file can remain. If you are certain that no instances of pacman are running, delete the lock file.

Check whether a process is holding the lock file:

```
lsof /var/lib/pacman/db.lck
```

If the above command returns nothing, you can remove the lock file:

```
rm /var/lib/pacman/db.lck
```

If the `lsof` command outputs the PID of the process holding the lock file, kill that process first, then remove the lock file.

I hope you enjoyed my introduction to the basic pacman commands.

--------------------------------------------------------------------------------

via: https://itsfoss.com/pacman-command/

Author: [Dimitrios Savvopoulos][a]
Topic selected by: [lujun9972][b]
Translated by: [Chao-zhi](https://github.com/Chao-zhi)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://www.archlinux.org/pacman/
[2]: https://www.archlinux.org/
[3]: https://wiki.archlinux.org/index.php/Arch_Build_System
[4]: https://wiki.archlinux.org/index.php/Official_repositories
[5]: https://itsfoss.com/install-arch-linux/
[6]: https://itsfoss.com/things-to-do-after-installing-arch-linux/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/essential-pacman-commands.jpg?ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-S.png?ssl=1
[9]: https://wiki.archlinux.org/index.php/Dependency
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-R.png?ssl=1
[11]: https://itsfoss.com/update-arch-linux/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Syu.png?ssl=1
[13]: https://www.archlinux.org/feeds/news/
[14]: https://mailman.archlinux.org/mailman/listinfo/arch-announce/
[15]: https://bbs.archlinux.org/
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Ss.png?ssl=1
[17]: https://wiki.archlinux.org/index.php/Downgrade
[18]: https://jlk.fjfi.cvut.cz/arch/manpages/man/paccache.8
[19]: https://www.archlinux.org/packages/?name=pacman-contrib
[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-paccache-r.png?ssl=1
@ -0,0 +1,201 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Chao-zhi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13104-1.html)
|
||||
[#]: subject: (Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself)
|
||||
[#]: via: (https://itsfoss.com/arch-based-linux-distros/)
|
||||
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
|
||||
|
||||
9 个易用的基于 Arch 的用户友好型 Linux 发行版
|
||||
======
|
||||
|
||||
在 Linux 社区中,[Arch Linux][1] 有一群狂热的追随者。这个轻量级的发行版以 DIY 的态度提供了最前沿的更新。
|
||||
|
||||
但是,Arch 的目标用户是那些更有经验的用户。因此,它通常被认为是那些技术不够(或耐心不够)的人所无法触及的。
|
||||
|
||||
事实上,只是最开始的步骤,[安装 Arch Linux 就足以把很多人吓跑][2]。与大多数其他发行版不同,Arch Linux 没有一个易于使用的图形安装程序。安装过程中涉及到的磁盘分区,连接到互联网,挂载驱动器和创建文件系统等只用命令行工具来操作。
|
||||
|
||||
对于那些不想经历复杂的安装和设置的人来说,有许多用户友好的基于 Arch 的发行版。
|
||||
|
||||
在本文中,我将向你展示一些 Arch 替代发行版。这些发行版附带了图形安装程序、图形包管理器和其他工具,比它们的命令行版本更容易使用。
|
||||
|
||||
### 更容易设置和使用的基于 Arch 的 Linux 发行版
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202102/10/112812sc42txp4eexco44x.jpg)
|
||||
|
||||
请注意,这不是一个排名列表。这些数字只是为了计数的目的。排第二的发行版不应该被认为比排第七的发行版好。
|
||||
|
||||
#### 1、Manjaro Linux
|
||||
|
||||
![][4]
|
||||
|
||||
[Manjaro][5] 不需要任何介绍。多年来它一直是最流行的 Linux 发行版之一,而这份人气是它应得的。
|
||||
|
||||
Manjaro 提供了 Arch Linux 的所有优点,同时注重用户友好性和可访问性。Manjaro 既适合新手,也适合有经验的 Linux 用户。
|
||||
|
||||
**对于新手**,它提供了一个用户友好的安装程序,系统本身也设计成可以在你[最喜爱的桌面环境 ][6](DE)或窗口管理器中直接“开箱即用”。
|
||||
|
||||
**对于更有经验的用户**,Manjaro 还提供多种功能,以满足每个个人的口味和喜好。[Manjaro Architect][7] 提供了安装各种 Manjaro 风格的选项,并为那些想要完全自由地塑造系统的人提供了各种桌面环境、文件系统([最近推出的 ZFS][8]) 和引导程序的选择。
|
||||
|
||||
Manjaro 也是一个滚动发布的前沿发行版。然而,与 Arch 不同的是,Manjaro 首先测试更新,然后将其提供给用户。稳定在这里也很重要。
|
||||
|
||||
#### 2、ArcoLinux
|
||||
|
||||
![][9]
|
||||
|
||||
[ArcoLinux][10](以前称为 ArchMerge)是一个基于 Arch Linux 的发行版。开发团队提供了三种变体:ArcoLinux、ArcoLinuxD 和 ArcoLinuxB。
|
||||
|
||||
ArcoLinux 是一个功能齐全的发行版,附带有 [Xfce 桌面][11]、[Openbox][12] 和 [i3 窗口管理器][13]。
|
||||
|
||||
**ArcoLinuxD** 是一个精简的发行版,它包含了一些脚本,可以让高级用户安装任何桌面和应用程序。
|
||||
|
||||
**ArcoLinuxB** 是一个让用户能够构建自定义发行版的项目,同时还开发了几个带有预配置桌面的社区版本,如 Awesome、bspwm、Budgie、Cinnamon、Deepin、GNOME、MATE 和 KDE Plasma。
|
||||
|
||||
ArcoLinux 还提供了各种视频教程,因为它非常注重学习和获取 Linux 技能。
|
||||
|
||||
#### 3、Archlabs Linux
|
||||
|
||||
![][14]
|
||||
|
||||
[ArchLabs Linux][15] 是一个轻量级的滚动版 Linux 发行版,基于最精简的 Arch Linux,带有 [Openbox][16] 窗口管理器。[ArchLabs][17] 在观感设计中受到 [BunsenLabs][18] 的影响和启发,主要考虑到中级到高级用户的需求。
|
||||
|
||||
#### 4、Archman Linux
|
||||
|
||||
![][19]
|
||||
|
||||
[Archman][20] 是一个独立的项目。对于没有多少 Linux 经验的用户来说,Arch Linux 发行版通常不是理想的操作系统;要想少走弯路地把系统用起来,必须具备相当的背景知识。Archman Linux 的开发人员正试图改变这种状况。
|
||||
|
||||
Archman 的开发把用户反馈和使用体验当作开发工作的组成部分。团队根据过去的经验,将用户的反馈和要求融合在一起,以此确定路线图并完成构建工作。
|
||||
|
||||
#### 5、EndeavourOS
|
||||
|
||||
![][21]
|
||||
|
||||
当流行的基于 Arch 的发行版 [Antergos 在 2019 年停止开发][22] 时,它留下了一个友好且非常有用的社区。Antergos 项目结束,是因为这个系统对于开发人员来说维护起来太难了。
|
||||
|
||||
在宣布结束后的几天内,一些有经验的用户通过创建一个新的发行版来填补 Antergos 留下的空白,从而维护了以前的社区。这就是 [EndeavourOS][23] 的诞生。
|
||||
|
||||
[EndeavourOS][24] 是轻量级的,并且附带了最少数量的预装应用程序。一块近乎空白的画布,随时可以个性化。
|
||||
|
||||
#### 6、RebornOS
|
||||
|
||||
![][25]
|
||||
|
||||
[RebornOS][26] 开发人员的目标是将 Linux 的真正威力带给每个人,一个 ISO 提供了 15 个桌面环境可供选择,并提供无限的定制机会。
|
||||
|
||||
RebornOS 还声称支持 [Anbox][27],它可以在桌面 Linux 上运行 Android 应用程序。它还提供了一个简单的内核管理器 GUI 工具。
|
||||
|
||||
再加上 [Pacman][28]、[AUR][29],以及定制版本的 Cnchi 图形安装程序,Arch Linux 终于可以让最没有经验的用户也能够使用了。
|
||||
|
||||
#### 7、Chakra Linux
|
||||
|
||||
![][30]
|
||||
|
||||
[Chakra Linux][31] 是一个社区开发的 GNU/Linux 发行版,主打 KDE 和 Qt 技术。它不按特定日期安排发布,而是使用“半滚动发布”系统。
|
||||
|
||||
这意味着 Chakra Linux 的核心包被冻结,只在修复安全问题时才会更新。这些软件包是在最新版本经过彻底测试后更新的,然后再转移到永久软件库(大约每六个月更新一次)。
|
||||
|
||||
除官方软件库外,用户还可以安装 Chakra 社区软件库 (CCR) 的软件包,该库为官方存储库中未包含的软件提供用户制作的 PKGINFOs 和 [PKGBUILD][32] 脚本,其灵感来自于 Arch 用户软件库(AUR)。
|
||||
|
||||
#### 8、Artix Linux
|
||||
|
||||
![Artix Mate Edition][33]
|
||||
|
||||
[Artix Linux][34] 也是一个基于 Arch Linux 的滚动发行版,它使用 [OpenRC][35]、[runit][36] 或 [s6][37] 作为初始化工具而不是 [systemd][38]。
|
||||
|
||||
Artix Linux 有自己的软件库,但作为一个基于 `pacman` 的发行版,它可以使用 Arch Linux 软件库或任何其他衍生发行版的软件包,甚至可以使用明确依赖于 systemd 的软件包。也可以使用 [Arch 用户软件库][29](AUR)。
|
||||
|
||||
#### 9、BlackArch Linux
|
||||
|
||||
![][39]
|
||||
|
||||
BlackArch 是一个基于 Arch Linux 的 [渗透测试发行版][40],它提供了大量的网络安全工具。它是专门为渗透测试人员和安全研究人员创建的。该软件库包含 2400 多个[黑客和渗透测试工具 ][41],可以单独安装,也可以分组安装。BlackArch Linux 兼容现有的 Arch Linux 包。
|
||||
|
||||
### 想要真正的原版 Arch Linux 吗?可以使用图形化 Arch 安装程序简化安装
|
||||
|
||||
如果你想使用原版的 Arch Linux,但又被它困难的安装过程吓倒,那么幸运的是,你可以下载一个带有图形安装程序的 Arch Linux ISO。
|
||||
|
||||
这类 Arch 安装程序本质上是 Arch Linux ISO 附带的一个相对容易使用的、基于文本的安装程序,比纯手动的 Arch 安装过程容易得多。
|
||||
|
||||
#### Anarchy Installer
|
||||
|
||||
![][42]
|
||||
|
||||
[Anarchy installer][43] 打算为新手和有经验的 Linux 用户提供一种简单而无痛苦的方式来安装 ArchLinux。在需要的时候安装,在需要的地方安装,并且以你想要的方式安装。这就是 Anarchy 的哲学。
|
||||
|
||||
启动安装程序后,将显示一个简单的 [TUI 菜单][44],列出所有可用的安装程序选项。
|
||||
|
||||
#### Zen Installer
|
||||
|
||||
![][45]
|
||||
|
||||
[Zen Installer][46] 为安装 Arch Linux 提供了一个完整的图形(点击式)环境。它支持安装多个桌面环境 、AUR 以及 Arch Linux 的所有功能和灵活性,并且易于图形化安装。
|
||||
|
||||
ISO 将引导一个临场环境,然后在你连接到互联网后下载最新稳定版本的安装程序。因此,你将始终获得最新的安装程序和更新的功能。
|
||||
|
||||
### 总结
|
||||
|
||||
对于许多用户来说,基于 Arch 的发行版会是一个很好的无忧选择,而像 Anarchy 这样的图形化安装程序至少离原版的 Arch Linux 更近了一步。
|
||||
|
||||
在我看来,[Arch Linux 的真正魅力在于它的安装过程][2],对于 Linux 爱好者来说,这是一个学习的机会,而不是麻烦。Arch Linux 及其衍生产品有很多东西需要你去折腾,但是在折腾的过程中你就会进入到开源软件的世界,这里是神奇的新世界。下次再见!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/arch-based-linux-distros/
|
||||
|
||||
作者:[Dimitrios Savvopoulos][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Chao-zhi](https://github.com/Chao-zhi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://www.archlinux.org/
[2]: https://itsfoss.com/install-arch-linux/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/arch-based-linux-distributions.png?ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/manjaro-20.jpg?ssl=1
[5]: https://manjaro.org/
[6]: https://itsfoss.com/best-linux-desktop-environments/
[7]: https://itsfoss.com/manjaro-architect-review/
[8]: https://itsfoss.com/manjaro-20-release/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/arcolinux.png?ssl=1
[10]: https://arcolinux.com/
[11]: https://www.xfce.org/
[12]: http://openbox.org/wiki/Main_Page
[13]: https://i3wm.org/
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Archlabs.jpg?ssl=1
[15]: https://itsfoss.com/archlabs-review/
[16]: https://en.wikipedia.org/wiki/Openbox
[17]: https://archlabslinux.com/
[18]: https://www.bunsenlabs.org/
[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Archman.png?ssl=1
[20]: https://archman.org/en/
[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/04_endeavouros_slide.jpg?ssl=1
[22]: https://itsfoss.com/antergos-linux-discontinued/
[23]: https://itsfoss.com/endeavouros/
[24]: https://endeavouros.com/
[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/RebornOS.png?ssl=1
[26]: https://rebornos.org/
[27]: https://anbox.io/
[28]: https://itsfoss.com/pacman-command/
[29]: https://itsfoss.com/aur-arch-linux/
[30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/Chakra_Goedel_Screenshot.png?ssl=1
[31]: https://www.chakralinux.org/
[32]: https://wiki.archlinux.org/index.php/PKGBUILD
[33]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Artix_MATE_edition.png?ssl=1
[34]: https://artixlinux.org/
[35]: https://en.wikipedia.org/wiki/OpenRC
[36]: https://en.wikipedia.org/wiki/Runit
[37]: https://en.wikipedia.org/wiki/S6_(software)
[38]: https://en.wikipedia.org/wiki/Systemd
[39]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/BlackArch.png?ssl=1
[40]: https://itsfoss.com/linux-hacking-penetration-testing/
[41]: https://itsfoss.com/best-kali-linux-tools/
[42]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/anarchy.jpg?ssl=1
[43]: https://anarchyinstaller.org/
[44]: https://en.wikipedia.org/wiki/Text-based_user_interface
[45]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/zen.jpg?ssl=1
[46]: https://sourceforge.net/projects/revenge-installer/
published/20200615 LaTeX Typesetting - Part 1 (Lists).md
[#]: collector: "lujun9972"
|
||||
[#]: translator: "rakino"
|
||||
[#]: reviewer: "wxy"
|
||||
[#]: publisher: "wxy"
|
||||
[#]: url: "https://linux.cn/article-13112-1.html"
|
||||
[#]: subject: "LaTeX Typesetting – Part 1 (Lists)"
|
||||
[#]: via: "https://fedoramagazine.org/latex-typesetting-part-1/"
|
||||
[#]: author: "Earl Ramirez https://fedoramagazine.org/author/earlramirez/"
|
||||
|
||||
LaTeX 排版(1):列表
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
本系列基于前文《[在 Fedora 上用 LaTex 和 TeXstudio 排版你的文档][2]》和《[LaTeX 基础][3]》,本文即系列的第一部分,是关于 LaTeX 列表的。
|
||||
|
||||
### 列表类型
|
||||
|
||||
LaTeX 中的列表是封闭的环境,列表中的每个项目可以取一行文字到一个完整的段落。在 LaTeX 中有三种列表类型:
|
||||
|
||||
* `itemize`:<ruby>无序列表<rt>unordered list</rt></ruby>/<ruby>项目符号列表<rt>bullet list</rt></ruby>
|
||||
* `enumerate`:<ruby>有序列表<rt>ordered list</rt></ruby>
|
||||
* `description`:<ruby>描述列表<rt>descriptive list</rt></ruby>
|
||||
|
||||
### 创建列表
|
||||
|
||||
要创建一个列表,需要在每个项目前加上控制序列 `\item`,并在项目清单前后分别加上控制序列 `\begin{<类型>}` 和 `\end{<类型>}`(将其中的 `<类型>` 替换为将要使用的列表类型),如下例:
|
||||
|
||||
#### itemize(无序列表)
|
||||
|
||||
```
\begin{itemize}
\item Fedora
\item Fedora Spin
\item Fedora Silverblue
\end{itemize}
```
|
||||
|
||||
![][4]
|
||||
|
||||
#### enumerate(有序列表)
|
||||
|
||||
```
\begin{enumerate}
\item Fedora CoreOS
\item Fedora Silverblue
\item Fedora Spin
\end{enumerate}
```
|
||||
|
||||
![][5]
|
||||
|
||||
#### description(描述列表)
|
||||
|
||||
```
\begin{description}
\item[Fedora 6] Code name Zod
\item[Fedora 8] Code name Werewolf
\end{description}
```
|
||||
|
||||
![][6]
|
||||
|
||||
### 列表项目间距
|
||||
|
||||
可以通过在导言区加入 `\usepackage{enumitem}` 来自定义默认的间距,宏包 `enumitem` 启用了选项 `noitemsep` 和控制序列 `\itemsep`,可以在列表中使用它们,如下例所示:
|
||||
|
||||
#### 使用选项 noitemsep
|
||||
|
||||
将选项 `noitemsep` 封闭在方括号内,并如下例所示放在控制序列 `\begin` 之后,该选项将移除默认的间距。
|
||||
|
||||
```
\begin{itemize}[noitemsep]
\item Fedora
\item Fedora Spin
\item Fedora Silverblue
\end{itemize}
```
|
||||
|
||||
![][7]
|
||||
|
||||
#### 使用控制序列 \itemsep
|
||||
|
||||
控制序列 `\itemsep` 必须以一个数字作为后缀,用以表示列表项目之间应该有多少空间。
|
||||
|
||||
```
\begin{itemize} \itemsep0.75pt
\item Fedora Silverblue
\item Fedora CoreOS
\end{itemize}
```
|
||||
|
||||
![][8]
|
||||
|
||||
### 嵌套列表
|
||||
|
||||
LaTeX 最多支持四层嵌套列表,如下例:
|
||||
|
||||
#### 嵌套无序列表
|
||||
|
||||
```
\begin{itemize}[noitemsep]
  \item Fedora Versions
  \begin{itemize}
    \item Fedora 8
    \item Fedora 9
    \begin{itemize}
      \item Werewolf
      \item Sulphur
      \begin{itemize}
        \item 2007-05-31
        \item 2008-05-13
      \end{itemize}
    \end{itemize}
  \end{itemize}
  \item Fedora Spin
  \item Fedora Silverblue
\end{itemize}
```
|
||||
|
||||
![][9]
|
||||
|
||||
#### 嵌套有序列表
|
||||
|
||||
```
\begin{enumerate}[noitemsep]
  \item Fedora Versions
  \begin{enumerate}
    \item Fedora 8
    \item Fedora 9
    \begin{enumerate}
      \item Werewolf
      \item Sulphur
      \begin{enumerate}
        \item 2007-05-31
        \item 2008-05-13
      \end{enumerate}
    \end{enumerate}
  \end{enumerate}
  \item Fedora Spin
  \item Fedora Silverblue
\end{enumerate}
```
|
||||
|
||||
![][10]
|
||||
|
||||
### 每种列表类型的列表样式名称
|
||||
|
||||
**enumerate(有序列表)** | **itemize(无序列表)**
---|---
`\alph*` (小写字母) | `$\bullet$` (●)
`\Alph*` (大写字母) | `$\cdot$` (•)
`\arabic*` (阿拉伯数字) | `$\diamond$` (◇)
`\roman*` (小写罗马数字) | `$\ast$` (✲)
`\Roman*` (大写罗马数字) | `$\circ$` (○)
 | `$-$` (-)
|
||||
### 按嵌套深度划分的默认样式
|
||||
|
||||
**嵌套深度** | **enumerate(有序列表)** | **itemize(无序列表)**
---|---|---
1 | 阿拉伯数字 | (●)
2 | 小写字母 | (-)
3 | 小写罗马数字 | (✲)
4 | 大写字母 | (•)
|
||||
|
||||
### 设置列表样式
|
||||
|
||||
下面的例子列举了无序列表的不同样式。
|
||||
|
||||
```
% 无序列表样式
\begin{itemize}
\item[$\ast$] Asterisk
\item[$\diamond$] Diamond
\item[$\circ$] Circle
\item[$\cdot$] Period
\item[$\bullet$] Bullet (default)
\item[--] Dash
\item[$-$] Another dash
\end{itemize}
```
|
||||
|
||||
![][11]
|
||||
|
||||
有三种设置列表样式的方式,下面将按照优先级从高到低的顺序分别举例。
|
||||
|
||||
#### 方式一:为各项目单独设置
|
||||
|
||||
将需要的样式名称封闭在方括号内,并放在控制序列 `\item` 之后,如下例:
|
||||
|
||||
```
% 方式一
\begin{itemize}
\item[$\ast$] Asterisk
\item[$\diamond$] Diamond
\item[$\circ$] Circle
\item[$\cdot$] period
\item[$\bullet$] Bullet (default)
\item[--] Dash
\item[$-$] Another dash
\end{itemize}
```
|
||||
|
||||
#### 方式二:为整个列表设置
|
||||
|
||||
将需要的样式名称以 `label=` 前缀并封闭在方括号内,放在控制序列 `\begin` 之后,如下例:
|
||||
|
||||
```
% 方式二
\begin{enumerate}[label=\Alph*.]
\item Fedora 32
\item Fedora 31
\item Fedora 30
\end{enumerate}
```
|
||||
|
||||
#### 方式三:为整个文档设置
|
||||
|
||||
该方式将改变整个文档的默认样式。使用 `\renewcommand` 来设置项目标签的值,下例分别为四个嵌套深度的项目标签设置了不同的样式。
|
||||
|
||||
```
% 方式三
\renewcommand{\labelitemi}{$\ast$}
\renewcommand{\labelitemii}{$\diamond$}
\renewcommand{\labelitemiii}{$\bullet$}
\renewcommand{\labelitemiv}{$-$}
```
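把前面几节用到的片段拼在一起,就是一个可以独立编译的最小文档,大致如下(文档类的选择只是示例性的假设):

```
\documentclass{article}
\usepackage{enumitem} % 提供 noitemsep、label= 等选项

% 方式三:把整个文档的一级无序列表标签改为星号
\renewcommand{\labelitemi}{$\ast$}

\begin{document}

% 方式二:为这一个列表设置样式
\begin{enumerate}[label=\Alph*., noitemsep]
  \item Fedora 32
  \item Fedora 31
\end{enumerate}

% 默认样式已被上面的 \renewcommand 改为星号
\begin{itemize}[noitemsep]
  \item Fedora Spin
  \item Fedora Silverblue
\end{itemize}

\end{document}
```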
|
||||
|
||||
### 总结
|
||||
|
||||
LaTeX 支持三种列表,而每种列表的风格和间距都是可以自定义的。在以后的文章中,我们将解释更多的 LaTeX 元素。
|
||||
|
||||
关于 LaTeX 列表的延伸阅读可以在这里找到:[LaTeX List Structures][12]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/latex-typesetting-part-1/
|
||||
|
||||
作者:[Earl Ramirez][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[rakino](https://github.com/rakino)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/earlramirez/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/latex-series-816x345.png
[2]: https://fedoramagazine.org/typeset-latex-texstudio-fedora
[3]: https://fedoramagazine.org/fedora-classroom-latex-101-beginners
[4]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-1.png
[5]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-2.png
[6]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-3.png
[7]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-4.png
[8]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-5.png
[9]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-7.png
[10]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-8.png
[11]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-9.png
[12]: https://en.wikibooks.org/wiki/LaTeX/List_Structures
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Chao-zhi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13110-1.html)
|
||||
[#]: subject: (Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension)
|
||||
[#]: via: (https://itsfoss.com/material-shell/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
使用 Material Shell 扩展将你的 GNOME 桌面打造成平铺式风格
|
||||
======
|
||||
|
||||
平铺式窗口的特性吸引了很多人的追捧。也许是因为它很好看,也许是因为它能提高 [Linux 快捷键][1] 玩家的效率。又或者是因为使用不同寻常的平铺式窗口是一种新奇的挑战。
|
||||
|
||||
![Tiling Windows in Linux | Image Source][2]
|
||||
|
||||
从 i3 到 [Sway][3],Linux 桌面上有各种各样的平铺式窗口管理器,但配置它们往往要经历陡峭的学习曲线。
|
||||
|
||||
这就是为什么像 [Regolith 桌面][4] 这样的项目会存在,给你预先配置好的平铺桌面,让你可以更轻松地开始使用平铺窗口。
|
||||
|
||||
让我给你介绍一个类似的项目 —— Material Shell。它可以让你用上平铺式桌面,甚至比 [Regolith][5] 还简单。
|
||||
|
||||
### Material Shell 扩展:将 GNOME 桌面转变成平铺式窗口管理器
|
||||
|
||||
[Material Shell][6] 是一个 GNOME 扩展,这就是它最好的地方。这意味着你不需要注销并登录其他桌面环境,只需要启用或关闭这个扩展,就可以自如地切换你的工作环境。
|
||||
|
||||
我会列出 Material Shell 的各种特性,但是也许视频更容易让你理解:
|
||||
|
||||
- [video](https://youtu.be/Wc5mbuKrGDE)
|
||||
|
||||
这个项目叫做 Material Shell 是因为它遵循 [Material Design][8] 原则。因此这个应用拥有一个美观的界面。这就是它最重要的一个特性。
|
||||
|
||||
#### 直观的界面
|
||||
|
||||
Material Shell 添加了一个左侧面板,以便快速访问。在此面板上,你可以在底部找到系统托盘,在顶部找到搜索和工作区。
|
||||
|
||||
所有新打开的应用都会添加到当前工作区中。你也可以创建新的工作区并切换到该工作区,以将正在运行的应用分类。其实这就是工作区最初的意义。
|
||||
|
||||
在 Material Shell 中,每个工作区都可以显示为具有多个应用程序的行列,而不是包含多个应用程序的程序框。
|
||||
|
||||
#### 平铺式窗口
|
||||
|
||||
在工作区中,你可以一直在顶部看到所有打开的应用程序。默认情况下,应用程序会像在 GNOME 桌面中那样铺满整个屏幕。你可以使用右上角的布局改变器来改变布局,将其分成两半、多列或多个应用网格。
|
||||
|
||||
这段视频一目了然的显示了以上所有功能:
|
||||
|
||||
- [video](https://player.vimeo.com/video/460050750?dnt=1&app_id=122963)
|
||||
|
||||
#### 固定布局和工作区
|
||||
|
||||
Material Shell 会记住你打开的工作区和窗口,这样你就不必重新组织你的布局。这是一个很好的特性,因为如果你对应用程序的位置有要求的话,它可以节省时间。
|
||||
|
||||
#### 热键/快捷键
|
||||
|
||||
像任何平铺窗口管理器一样,你可以使用键盘快捷键在应用程序和工作区之间切换。
|
||||
|
||||
* `Super+W` 切换到上个工作区;
|
||||
* `Super+S` 切换到下个工作区;
|
||||
* `Super+A` 切换到左边的窗口;
|
||||
* `Super+D` 切换到右边的窗口;
|
||||
* `Super+1`、`Super+2` … `Super+0` 切换到某个指定的工作区;
|
||||
* `Super+Q` 关闭当前窗口;
|
||||
* `Super+[鼠标拖动]` 移动窗口;
|
||||
* `Super+Shift+A` 将当前窗口左移;
|
||||
* `Super+Shift+D` 将当前窗口右移;
|
||||
* `Super+Shift+W` 将当前窗口移到上个工作区;
|
||||
* `Super+Shift+S` 将当前窗口移到下个工作区。
|
||||
|
||||
### 安装 Material Shell
|
||||
|
||||
> 警告!
|
||||
>
|
||||
> 对于大多数用户来说,平铺式窗口可能会导致混乱。你最好先熟悉如何使用 GNOME 扩展。如果你是 Linux 新手或者你害怕你的系统发生翻天覆地的变化,你应当避免使用这个扩展。
|
||||
|
||||
Material Shell 是一个 GNOME 扩展。所以,请 [检查你的桌面环境][9],确保你运行的是 GNOME 3.34 或者更高的版本。
|
||||
|
||||
除此之外,我注意到在禁用 Material Shell 之后,它会导致 Firefox 的顶栏和 Ubuntu 的坞站消失。你可以在 GNOME 的“扩展”应用程序中禁用/启用 Ubuntu 的坞站扩展来使其变回原来的样子。我想这些问题也应该在系统重启后消失,虽然我没试过。
|
||||
|
||||
我希望你知道 [如何使用 GNOME 扩展][10]。最简单的办法就是 [在浏览器中打开这个链接][11],安装 GNOME 扩展浏览器插件,然后启用 Material Shell 扩展即可。
|
||||
|
||||
![][12]
|
||||
|
||||
如果你不喜欢这个扩展,你也可以在同样的链接中禁用它。或者在 GNOME 的“扩展”应用程序中禁用它。
|
||||
|
||||
![][13]
|
||||
|
||||
### 用不用平铺式?
|
||||
|
||||
我使用多个电脑屏幕,我发现 Material Shell 不适用于多个屏幕的情况。这是开发者将来可以改进的地方。
|
||||
|
||||
除了这个毛病以外,Material Shell 是个让你开始使用平铺式窗口的好东西。如果你尝试了 Material Shell 并且喜欢它,请 [在 GitHub 上给它一个星标或赞助它][14] 来鼓励这个项目。
|
||||
|
||||
由于某些原因,平铺窗户越来越受欢迎。最近发布的 [Pop OS 20.04][15] 也增加了平铺窗口的功能。有一个类似的项目叫 PaperWM,也是这样做的。
|
||||
|
||||
但正如我前面提到的,平铺布局并不适合所有人,它可能会让很多人感到困惑。
|
||||
|
||||
你呢?你是喜欢平铺窗口还是喜欢经典的桌面布局?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/material-shell/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Chao-zhi](https://github.com/Chao-zhi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-shortcuts/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-ricing-example-800x450.jpg?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/sway-window-manager/
[4]: https://itsfoss.com/regolith-linux-desktop/
[5]: https://regolith-linux.org/
[6]: https://material-shell.com
[7]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[8]: https://material.io/
[9]: https://itsfoss.com/find-desktop-environment/
[10]: https://itsfoss.com/gnome-shell-extensions/
[11]: https://extensions.gnome.org/extension/3357/material-shell/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/install-material-shell.png?resize=800%2C307&ssl=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/material-shell-gnome-extension.png?resize=799%2C497&ssl=1
[14]: https://github.com/material-shell/material-shell
[15]: https://itsfoss.com/pop-os-20-04-review/
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Chao-zhi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13090-1.html)
|
||||
[#]: subject: (7 Bash tutorials to enhance your command line skills in 2021)
|
||||
[#]: via: (https://opensource.com/article/21/1/bash)
|
||||
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
|
||||
|
||||
7 个 Bash 教程,提高你的命令行技能(2021 版)
|
||||
======
|
||||
|
||||
> Bash 是大多数 Linux 系统上的默认命令行 shell。所以你为什么不试着学习如何最大限度地利用它呢?
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202102/06/001837cujo0ql3utfobrrj.jpg)
|
||||
|
||||
Bash 是大多数 Linux 系统上的默认命令行 shell。所以你为什么不试着学习如何最大限度地利用它呢?今年,我们推荐了许多很棒的文章来帮助你充分利用 Bash shell 的强大功能。以下是一些关于 Bash 阅读次数最多的文章:
|
||||
|
||||
### 《通过重定向在 Linux 终端任意读写数据》
|
||||
|
||||
输入和输出重定向是任何编程或脚本语言的基础功能。从技术上讲,只要你与电脑互动,它就会自然而然地发生。输入从 stdin(标准输入,通常是你的键盘或鼠标)读取,输出到 stdout(标准输出,一般是文本或数据流),而错误被发送到 stderr(标准错误,一般和标准输出是一个位置)。了解这些数据流的存在,使你能够在使用 Bash 等 shell 时控制信息的去向。Seth Kenlon [分享][2]了这些很棒的技巧,可以让你在不需要大量鼠标移动和按键的情况下从一个地方获取数据。你可能不经常使用重定向,但学习使用它可以为你节省大量不必要的打开文件和复制粘贴数据的时间。
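用一个最小的例子可以直观看到这几个数据流的去向(其中 `out.txt`、`err.txt` 只是随意取的示例文件名):

```shell
# 标准输出(stdout)写入 out.txt
echo "hello" > out.txt

# 标准错误(stderr)写入 err.txt;访问一个不存在的目录必然会报错
ls /no/such/dir 2> err.txt || true

# 正常输出与错误信息被分到了两个文件里
cat out.txt
```

运行后 `out.txt` 里只有 `hello`,而 `ls` 的报错只出现在 `err.txt` 中,终端上不会混在一起。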
|
||||
|
||||
### 《系统管理员 Bash 脚本入门》
|
||||
|
||||
Bash 是自由开源软件,所以任何人都可以安装它,不管他们运行的是 Linux、BSD、OpenIndiana、Windows 还是 macOS。Seth Kenlon [帮助][3]你学习如何使用 Bash 的命令和特性,使其成为最强大的 shell 之一。
|
||||
|
||||
### 《针对大型文件系统可以试试此 Bash 脚本》
|
||||
|
||||
你是否曾经想列出一个目录中的所有文件,只显示其中的文件,不包括其他内容?或者只显示目录?如果你有,那么 Nick Clifton 的[文章][4]可能正是你正在寻找的。Nick 分享了一个漂亮的 Bash 脚本,它可以列出目录、文件、链接或可执行文件。该脚本使用 `find` 命令进行搜索,然后运行 `ls` 显示详细信息。对于管理大型 Linux 系统的人来说,这是一个漂亮的解决方案。
|
||||
|
||||
### 《用 Bash 工具对你的 Linux 系统配置进行快照》
|
||||
|
||||
你可能想与他人分享你的 Linux 配置,原因有很多。你可能需要帮助排除系统上的一个问题,或者你对自己创建的环境非常自豪,想向其他开源爱好者展示它。Don Watkins 向我们[展示][5]了 screenFetch 和 Neofetch 来捕获和分享你的系统配置。
|
||||
|
||||
### 《6 个方便的 Git 脚本》
|
||||
|
||||
Git 已经成为一个无处不在的代码管理系统。了解如何管理 Git 存储库可以简化你的开发体验。Bob Peterson [分享][6]了 6 个 Bash 脚本,它们将使你在使用 Git 存储库时更加轻松。`gitlog` 打印当前补丁的简略列表,并与主版本相对照。这个脚本的不同版本可以显示补丁的 SHA1 id 或在一组补丁中搜索字符串。
|
||||
|
||||
### 《改进你 Bash 脚本的 5 种方法》
|
||||
|
||||
系统管理员通常编写各种或长或短的 Bash 脚本,以完成各种任务。Alan Formy-Duval [解释][7]了如何使 Bash 脚本更简单、更健壮、更易于阅读和调试。我们可能会考虑到我们需要使用诸如 Python、C 或 Java 之类的语言来实现更高的功能,但其实也不一定需要。因为 Bash 脚本语言就已经非常强大。要最大限度地发挥它的效用,还有很多东西要学。
|
||||
|
||||
### 《我珍藏的 Bash 秘籍》
|
||||
|
||||
Katie McLaughlin 帮助你提高你的工作效率,用别名和其他快捷方式解决你经常忘记的事情。当你整天与计算机打交道时,找到可重复的命令并标记它们以方便以后使用是非常美妙的。Katie [总结][8]了一些有用的 Bash 特性和帮助命令,可以节省你的时间。
|
||||
|
||||
这些 Bash 小技巧将一个已经很强大的 shell 提升到一个全新的级别。也欢迎分享你自己的建议。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/1/bash
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Chao-zhi](https://github.com/Chao-zhi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
[2]: https://linux.cn/article-12385-1.html
[3]: https://opensource.com/article/20/4/bash-sysadmins-ebook
[4]: https://linux.cn/article-12025-1.html
[5]: https://opensource.com/article/20/1/screenfetch-neofetch
[6]: https://linux.cn/article-11797-1.html
[7]: https://opensource.com/article/20/1/improve-bash-scripts
[8]: https://linux.cn/article-11841-1.html
published/20210125 Why you need to drop ifconfig for ip.md
[#]: collector: "lujun9972"
|
||||
[#]: translator: "MjSeven"
|
||||
[#]: reviewer: "wxy"
|
||||
[#]: publisher: "wxy"
|
||||
[#]: url: "https://linux.cn/article-13089-1.html"
|
||||
[#]: subject: "Why you need to drop ifconfig for ip"
|
||||
[#]: via: "https://opensource.com/article/21/1/ifconfig-ip-linux"
|
||||
[#]: author: "Rajan Bhardwaj https://opensource.com/users/rajabhar"
|
||||
|
||||
放弃 ifconfig,拥抱 ip 命令
|
||||
======
|
||||
|
||||
> 开始使用现代方法配置 Linux 网络接口。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202102/05/233847lpg1lnz7kl2czgfj.jpg)
|
||||
|
||||
在很长一段时间内,`ifconfig` 命令是配置网络接口的默认方法。它为 Linux 用户提供了很好的服务,但是网络很复杂,所以配置网络的命令必须健壮。`ip` 命令是现代系统中新的默认网络命令,在本文中,我将向你展示如何使用它。
|
||||
|
||||
`ip` 命令工作在 [OSI 网络栈][2] 的两个层上:第二层(数据链路层)和第三层(网络层,即 IP 层)。它可以完成之前 `net-tools` 包的所有工作。
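可以用两条只读命令分别体会这两层(只查看信息,不需要 root 权限;`lo` 回环接口在任何 Linux 系统上都存在):

```shell
# 第二层(数据链路层):查看接口的链路状态和硬件地址
ip link show lo

# 第三层(网络层):查看接口上配置的 IP 地址
ip address show lo
```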
|
||||
|
||||
### 安装 ip
|
||||
|
||||
`ip` 命令包含在 `iproute2` 包中,它可能已经在你的 Linux 发行版中安装了。如果没有,你可以从发行版的仓库中进行安装。
|
||||
|
||||
### ifconfig 和 ip 使用对比
|
||||
|
||||
`ip` 和 `ifconfig` 命令都可以用来配置网络接口,但它们做事方法不同。接下来,作为对比,我将用它们来执行一些常见的任务。
|
||||
|
||||
#### 查看网口和 IP 地址
|
||||
|
||||
如果你想查看主机的 IP 地址或网络接口信息,`ifconfig` (不带任何参数)命令提供了一个很好的总结。
|
||||
|
||||
```
$ ifconfig

eth0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether bc:ee:7b:5e:7d:d8  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 41  bytes 5551 (5.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 41  bytes 5551 (5.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.1.6  netmask 255.255.255.224  broadcast 10.1.1.31
        inet6 fdb4:f58e:49f:4900:d46d:146b:b16:7212  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::8eb3:4bc0:7cbb:59e8  prefixlen 64  scopeid 0x20<link>
        ether 08:71:90:81:1e:b5  txqueuelen 1000  (Ethernet)
        RX packets 569459  bytes 779147444 (743.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 302882  bytes 38131213 (36.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
```
|
||||
|
||||
新的 `ip` 命令提供了类似的结果,但命令是 `ip address show`,或者简写为 `ip a`:
|
||||
|
||||
```
$ ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether bc:ee:7b:5e:7d:d8 brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 08:71:90:81:1e:b5 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.6/27 brd 10.1.1.31 scope global dynamic wlan0
       valid_lft 83490sec preferred_lft 83490sec
    inet6 fdb4:f58e:49f:4900:d46d:146b:b16:7212/64 scope global noprefixroute dynamic
       valid_lft 6909sec preferred_lft 3309sec
    inet6 fe80::8eb3:4bc0:7cbb:59e8/64 scope link
       valid_lft forever preferred_lft forever
```
|
||||
|
||||
#### 添加 IP 地址
|
||||
|
||||
使用 `ifconfig` 命令添加 IP 地址命令为:
|
||||
|
||||
```
$ ifconfig eth0 add 192.9.203.21
```
|
||||
|
||||
`ip` 类似:
|
||||
|
||||
```
$ ip address add 192.9.203.21 dev eth0
```
|
||||
|
||||
`ip` 中的子命令可以缩短,所以下面这个命令同样有效:
|
||||
|
||||
```
$ ip addr add 192.9.203.21 dev eth0
```
|
||||
|
||||
你甚至可以更短些:
|
||||
|
||||
```
$ ip a add 192.9.203.21 dev eth0
```
|
||||
|
||||
#### 移除一个 IP 地址
|
||||
|
||||
删除 IP 地址的操作与添加正好相反。
|
||||
|
||||
使用 `ifconfig`,命令是:
|
||||
|
||||
```
$ ifconfig eth0 del 192.9.203.21
```
|
||||
|
||||
`ip` 命令的语法是:
|
||||
|
||||
```
$ ip a del 192.9.203.21 dev eth0
```
|
||||
|
||||
#### 启用或禁用组播
|
||||
|
||||
使用 `ifconfig` 接口来启用或禁用 <ruby>[组播][3]<rt>multicast</rt></ruby>:
|
||||
|
||||
```
# ifconfig eth0 multicast
```
|
||||
|
||||
对于 `ip`,使用 `set` 子命令与设备(`dev`)以及一个布尔值和 `multicast` 选项:
|
||||
|
||||
```
# ip link set dev eth0 multicast on
```
|
||||
|
||||
#### 启用或禁用网络
|
||||
|
||||
每个系统管理员都熟悉“先关闭,再打开”这个排除故障的技巧。对于网络接口来说,就是先禁用再启用接口。
|
||||
|
||||
`ifconfig` 命令使用 `up` 或 `down` 关键字来实现:
|
||||
|
||||
```
# ifconfig eth0 up
```
|
||||
|
||||
或者你可以使用一个专用命令:
|
||||
|
||||
```
# ifup eth0
```
|
||||
|
||||
`ip` 命令使用 `set` 子命令将网络设置为 `up` 或 `down` 状态:
|
||||
|
||||
```
# ip link set eth0 up
```
|
||||
|
||||
#### 开启或关闭地址解析功能(ARP)
|
||||
|
||||
使用 `ifconfig`,你可以通过声明它来启用:
|
||||
|
||||
```
# ifconfig eth0 arp
```
|
||||
|
||||
使用 `ip`,你可以将 `arp` 属性设置为 `on` 或 `off`:
|
||||
|
||||
```
# ip link set dev eth0 arp on
```
|
||||
|
||||
### ip 和 ifconfig 的优缺点
|
||||
|
||||
`ip` 命令比 `ifconfig` 更通用,技术上也更有效,因为它使用的是 `Netlink` 套接字,而不是 `ioctl` 系统调用。
|
||||
|
||||
`ip` 命令可能看起来比 `ifconfig` 更详细、更复杂,但这是它拥有更多功能的一个原因。一旦你开始使用它,你会了解它的内部逻辑(例如,使用 `set` 而不是看起来随意混合的声明或设置)。
|
||||
|
||||
最后,`ifconfig` 已经过时了(例如,它缺乏对网络命名空间的支持),而 `ip` 是为现代网络而生的。尝试并学习它,使用它,你会由衷高兴的!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/1/ifconfig-ip-linux
|
||||
|
||||
作者:[Rajan Bhardwaj][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rajabhar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk "Tips and gears turning"
[2]: https://en.wikipedia.org/wiki/OSI_model
[3]: https://en.wikipedia.org/wiki/Multicast
[#]: collector: (lujun9972)
|
||||
[#]: translator: (amwps290)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13093-1.html)
|
||||
[#]: subject: (Write GIMP scripts to make image processing faster)
|
||||
[#]: via: (https://opensource.com/article/21/1/gimp-scripting)
|
||||
[#]: author: (Cristiano L. Fontana https://opensource.com/users/cristianofontana)
|
||||
|
||||
编写 GIMP 脚本使图像处理更快
|
||||
======
|
||||
|
||||
> 通过向一批图像添加效果来学习 GIMP 的脚本语言 Script-Fu。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202102/06/231011c0xhvxitxjv899qv.jpg)
|
||||
|
||||
前一段时间,我想给方程图片加一个黑板式的外观。我开始是使用 [GIMP][2] 来处理的,我对结果很满意。问题是我必须对图像执行几个操作,当我想再次使用此样式,不想对所有图像重复这些步骤。此外,我确信我会很快忘记这些步骤。
|
||||
|
||||
![Fourier transform equations][3]
|
||||
|
||||
*傅立叶变换方程式(Cristiano Fontana,[CC BY-SA 4.0] [4])*
|
||||
|
||||
GIMP 是一个很棒的开源图像编辑器。尽管我已经使用了多年,但从未研究过其批处理功能或 [Script-Fu][5] 菜单。这是探索它们的绝好机会。
|
||||
|
||||
### 什么是 Script-Fu?
|
||||
|
||||
[Script-Fu][6] 是 GIMP 内置的脚本语言。是一种基于 [Scheme][7] 的编程语言。如果你从未使用过 Scheme,请尝试一下,因为它可能非常有用。我认为 Script-Fu 是一个很好的入门方法,因为它对图像处理具有立竿见影的效果,所以你可以很快感觉到自己的工作效率的提高。你也可以使用 [Python][8] 编写脚本,但是 Script-Fu 是默认选项。
|
||||
|
||||
为了帮助你熟悉 Scheme,GIMP 的文档提供了深入的 [教程][9]。Scheme 是一种类似于 [Lisp][10] 的语言,因此它的主要特征是使用 [前缀][11] 表示法和 [许多括号][12]。函数和运算符通过前缀应用到操作数列表中:
|
||||
|
||||
```
(函数名 操作数 操作数 ...)

(+ 2 3)
↳ 返回 5

(list 1 2 3 5)
↳ 返回一个列表,包含 1、 2、 3 和 5
```
|
||||
|
||||
我花了一些时间才找到完整的 GIMP 函数列表文档,但实际上很简单。在 **Help** 菜单中,有一个 **Procedure Browser**,其中包含了所有可用函数的丰富而详尽的文档。
|
||||
|
||||
![GIMP Procedure Browser][13]
|
||||
|
||||
### 使用 GIMP 的批处理模式
|
||||
|
||||
你可以使用 `-b` 选项以批处理的方式启动 GIMP。`-b` 选项的参数可以是你想要运行的脚本,或者用一个 `-` 来让 GIMP 进入交互模式而不是命令行模式。正常情况下,当你启动 GIMP 的时候,它会启动图形界面,但是你可以使用 `-i` 选项来禁用它。
|
||||
|
||||
### 开始编写你的第一个脚本
|
||||
|
||||
创建一个名为 `chalk.scm` 的文件,并把它保存在 **Preferences** 窗口中 **Folders** 选项下的 **Script** 中指定的 `script` 文件夹下。就我而言,是在 `$HOME/.config/GIMP/2.10/scripts`。
|
||||
|
||||
在 `chalk.scm` 文件中,写入下面的内容:
|
||||
|
||||
```
|
||||
(define (chalk filename grow-pixels spread-amount percentage)
|
||||
(let* ((image (car (gimp-file-load RUN-NONINTERACTIVE filename filename)))
|
||||
(drawable (car (gimp-image-get-active-layer image)))
|
||||
(new-filename (string-append "modified_" filename)))
|
||||
(gimp-image-select-color image CHANNEL-OP-REPLACE drawable '(0 0 0))
|
||||
(gimp-selection-grow image grow-pixels)
|
||||
(gimp-context-set-foreground '(0 0 0))
|
||||
(gimp-edit-bucket-fill drawable BUCKET-FILL-FG LAYER-MODE-NORMAL 100 255 TRUE 0 0)
|
||||
(gimp-selection-none image)
|
||||
(plug-in-spread RUN-NONINTERACTIVE image drawable spread-amount spread-amount)
|
||||
(gimp-drawable-invert drawable TRUE)
|
||||
(plug-in-randomize-hurl RUN-NONINTERACTIVE image drawable percentage 1 TRUE 0)
|
||||
(gimp-file-save RUN-NONINTERACTIVE image drawable new-filename new-filename)
|
||||
(gimp-image-delete image)))
|
||||
```
|
||||
|
||||
### 定义脚本变量
|
||||
|
||||
在脚本中, `(define (chalk filename grow-pixels spread-amound percentage) ...)` 函数定义了一个名叫 `chalk` 的新函数。它的函数参数是 `filename`、`grow-pixels`、`spread-amound` 和 `percentage`。在 `define` 中的所有内容都是 `chalk` 函数的主体。你可能已经注意到,那些名字比较长的变量中间都有一个破折号来分割。这是类 Lisp 语言的惯用风格。
|
||||
|
||||
`(let* ...)` 函数是一个特殊<ruby>过程<rt>procedure</rt></ruby>,可以让你定义一些只有在这个函数体中才有效的临时变量。临时变量有 `image`、`drawable` 以及 `new-filename`。它使用 `gimp-file-load` 来载入图片,这会返回它所包含的图片的一个列表。并通过 `car` 函数来选取第一项。然后,它选择第一个活动层并将其引用存储在 `drawable` 变量中。最后,它定义了包含图像新文件名的字符串。
|
||||
|
||||
为了帮助你更好地了解该过程,我将对其进行分解。首先,启动带 GUI 的 GIMP,然后你可以通过依次点击 **Filters → Script-Fu → Console** 来打开 Script-Fu 控制台。 在这种情况下,不能使用 `let *`,因为变量必须是持久的。使用 `define` 函数定义 `image` 变量,并为其提供查找图像的正确路径:
|
||||
|
||||
```
|
||||
(define image (car (gimp-file-load RUN-NONINTERACTIVE "Fourier.png" "Fourier.png")))
|
||||
```
|
||||
|
||||
似乎在 GUI 中什么也没有发生,但是图像已加载。 你需要通过以下方式来让图像显示:
|
||||
|
||||
```
|
||||
(gimp-display-new image)
|
||||
```
|
||||
|
||||
![GUI with the displayed image][14]
|
||||
|
||||
现在,获取活动层并将其存储在 `drawable` 变量中:
|
||||
|
||||
```
|
||||
(define drawable (car (gimp-image-get-active-layer image)))
|
||||
```
|
||||
|
||||
最后,定义图像的新文件名:
|
||||
|
||||
```
|
||||
(define new-filename "modified_Fourier.png")
|
||||
```
|
||||
|
||||
运行命令后,你将在 Script-Fu 控制台中看到以下内容:
|
||||
|
||||
![Script-Fu console][15]
|
||||
|
||||
在对图像执行操作之前,需要定义将在脚本中作为函数参数的变量:
|
||||
|
||||
```
|
||||
(define grow-pixels 2)
|
||||
(define spread-amount 4)
|
||||
(define percentage 3)
|
||||
```
|
||||
|
||||
### 处理图片
|
||||
|
||||
现在,所有相关变量都已定义,你可以对图像进行操作了。 脚本的操作可以直接在控制台上执行。第一步是在活动层上选择黑色。颜色被写成一个由三个数字组成的列表,即 `(list 0 0 0)` 或者是 `'(0 0 0)`:
|
||||
|
||||
```
|
||||
(gimp-image-select-color image CHANNEL-OP-REPLACE drawable '(0 0 0))
|
||||
```
|
||||
|
||||
![Image with the selected color][16]
|
||||
|
||||
扩大选取两个像素:
|
||||
|
||||
```
|
||||
(gimp-selection-grow image grow-pixels)
|
||||
```
|
||||
|
||||
![Image with the selected color][17]
|
||||
|
||||
将前景色设置为黑色,并用它填充选区:
|
||||
|
||||
```
|
||||
(gimp-context-set-foreground '(0 0 0))
|
||||
(gimp-edit-bucket-fill drawable BUCKET-FILL-FG LAYER-MODE-NORMAL 100 255 TRUE 0 0)
|
||||
```
|
||||
|
||||
![Image with the selection filled with black][18]
|
||||
|
||||
删除选区:
|
||||
|
||||
```
|
||||
(gimp-selection-none image)
|
||||
```
|
||||
|
||||
![Image with no selection][19]
|
||||
|
||||
随机移动像素:
|
||||
|
||||
```
|
||||
(plug-in-spread RUN-NONINTERACTIVE image drawable spread-amount spread-amount)
|
||||
```
|
||||
|
||||
![Image with pixels moved around][20]
|
||||
|
||||
反转图像颜色:
|
||||
|
||||
```
|
||||
(gimp-drawable-invert drawable TRUE)
|
||||
```
|
||||
|
||||
![Image with pixels moved around][21]
|
||||
|
||||
随机化像素:
|
||||
|
||||
```
|
||||
(plug-in-randomize-hurl RUN-NONINTERACTIVE image drawable percentage 1 TRUE 0)
|
||||
```
|
||||
|
||||
![Image with pixels moved around][22]
|
||||
|
||||
将图像保存到新文件:
|
||||
|
||||
```
|
||||
(gimp-file-save RUN-NONINTERACTIVE image drawable new-filename new-filename)
|
||||
```
|
||||
|
||||
![Equations of the Fourier transform and its inverse][23]
|
||||
|
||||
*傅立叶变换方程 (Cristiano Fontana, [CC BY-SA 4.0][4])*
|
||||
|
||||
### 以批处理模式运行脚本
|
||||
|
||||
现在你知道了脚本的功能,可以在批处理模式下运行它:
|
||||
|
||||
```
|
||||
gimp -i -b '(chalk "Fourier.png" 2 4 3)' -b '(gimp-quit 0)'
|
||||
```
|
||||
|
||||
在运行 `chalk` 函数之后,它将使用 `-b` 选项调用第二个函数 `gimp-quit` 来告诉 GIMP 退出。
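
如果想对整个文件夹中的图片批量应用 `chalk` 效果,可以用一小段脚本来拼出这样的命令行。下面是一个示意性的 Python 草稿(非原文内容,函数名与参数均为示例),它只负责构造命令行参数,实际执行仍交给 GIMP:

```python
# 示意代码:为一批图片构造 GIMP 批处理命令行(假设 chalk.scm 已放入脚本目录)
from pathlib import Path

def build_chalk_command(filenames, grow_pixels=2, spread_amount=4, percentage=3):
    """为每张图片生成一个 -b '(chalk ...)' 参数,最后追加 gimp-quit。"""
    cmd = ["gimp", "-i"]
    for name in filenames:
        cmd += ["-b", f'(chalk "{name}" {grow_pixels} {spread_amount} {percentage})']
    cmd += ["-b", "(gimp-quit 0)"]
    return cmd

if __name__ == "__main__":
    pngs = sorted(str(p) for p in Path(".").glob("*.png"))
    print(build_chalk_command(pngs))
    # 确认命令无误后,可以用 subprocess 真正运行:
    # import subprocess; subprocess.run(build_chalk_command(pngs), check=True)
```

先打印出完整命令检查一遍,再取消注释交给 `subprocess` 执行,是比较稳妥的做法。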

### 了解更多

本教程向你展示了如何开始使用 GIMP 的内置脚本功能,并介绍了 GIMP 的 Scheme 实现:Script-Fu。如果你想继续前进,建议你查看官方文档及其[入门教程][9]。如果你不熟悉 Scheme 或 Lisp,那么一开始的语法可能有点吓人,但我还是建议你尝试一下。这可能是一个不错的惊喜。

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/1/gimp-scripting

作者:[Cristiano L. Fontana][a]
选题:[lujun9972][b]
译者:[amwps290](https://github.com/amwps290)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/cristianofontana
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
[2]: https://www.gimp.org/
[3]: https://opensource.com/sites/default/files/uploads/fourier.png (Fourier transform equations)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://docs.gimp.org/en/gimp-filters-script-fu.html
[6]: https://docs.gimp.org/en/gimp-concepts-script-fu.html
[7]: https://en.wikipedia.org/wiki/Scheme_(programming_language)
[8]: https://docs.gimp.org/en/gimp-filters-python-fu.html
[9]: https://docs.gimp.org/en/gimp-using-script-fu-tutorial.html
[10]: https://en.wikipedia.org/wiki/Lisp_%28programming_language%29
[11]: https://en.wikipedia.org/wiki/Polish_notation
[12]: https://xkcd.com/297/
[13]: https://opensource.com/sites/default/files/uploads/procedure_browser.png (GIMP Procedure Browser)
[14]: https://opensource.com/sites/default/files/uploads/gui01_image.png (GUI with the displayed image)
[15]: https://opensource.com/sites/default/files/uploads/console01_variables.png (Script-Fu console)
[16]: https://opensource.com/sites/default/files/uploads/gui02_selected.png (Image with the selected color)
[17]: https://opensource.com/sites/default/files/uploads/gui03_grow.png (Image with the selected color)
[18]: https://opensource.com/sites/default/files/uploads/gui04_fill.png (Image with the selection filled with black)
[19]: https://opensource.com/sites/default/files/uploads/gui05_no_selection.png (Image with no selection)
[20]: https://opensource.com/sites/default/files/uploads/gui06_spread.png (Image with pixels moved around)
[21]: https://opensource.com/sites/default/files/uploads/gui07_invert.png (Image with pixels moved around)
[22]: https://opensource.com/sites/default/files/uploads/gui08_hurl.png (Image with pixels moved around)
[23]: https://opensource.com/sites/default/files/uploads/modified_fourier.png (Equations of the Fourier transform and its inverse)
@ -0,0 +1,176 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13100-1.html)
[#]: subject: (Why I use the D programming language for scripting)
[#]: via: (https://opensource.com/article/21/1/d-scripting)
[#]: author: (Lawrence Aberba https://opensource.com/users/aberba)

我为什么要用 D 语言写脚本?
======

> D 语言以系统编程语言而闻名,但它也是编写脚本的一个很好的选择。

![](https://img.linux.net.cn/data/attachment/album/202102/09/134351j4m3hrhll0h38plp.jpg)

D 语言由于其静态类型和元编程能力,经常被宣传为系统编程语言。然而,它也是一种非常高效的脚本语言。

由于 Python 在自动化任务和快速实现原型想法方面的灵活性,它通常被选为脚本语言。这使得 Python 对系统管理员、[管理者][2]和一般的开发人员非常有吸引力,因为它可以自动完成他们可能需要手动完成的重复性任务。

我们自然也可以期待任何其他的脚本编写语言具有 Python 的这些特性和能力。以下是我认为 D 是一个不错的选择的两个原因。

### 1、D 很容易读和写

作为一种类似于 C 的语言,D 应该是大多数程序员所熟悉的。任何使用 JavaScript、Java、PHP 或 Python 的人对 D 语言都很容易上手。

如果你还没有安装 D,请[安装 D 编译器][3],这样你就可以[运行本文中的 D 代码][4]。你也可以使用[在线 D 编辑器][5]。

下面是一个 D 代码的例子,它从一个名为 `words.txt` 的文件中读取单词,并在命令行中打印出来。`words.txt` 的内容如下:

```
open
source
is
cool
```

用 D 语言写的脚本:

```
#!/usr/bin/env rdmd
// file print_words.d

// import the D standard library
import std;

void main(){
    // open the file
    File("./words.txt")

        // iterate by line
        .byLine

        // print each line
        .each!writeln;
}
```

这段代码以[释伴][6]开头,它将使用 [rdmd][7] 来运行这段代码,`rdmd` 是 D 编译器自带的编译和运行代码的工具。假设你运行的是 Unix 或 Linux,在运行这个脚本之前,你必须使用 `chmod` 命令使其可执行:

```
chmod u+x print_words.d
```

现在脚本是可执行的,你可以运行它:

```
./print_words.d
```

这将在你的命令行中打印以下内容:

```
open
source
is
cool
```

恭喜你,你写了第一个 D 语言脚本。你可以看到 D 是如何让你按顺序链式调用函数的,这让阅读代码的感觉很自然,类似于你在头脑中思考问题的方式。这个[功能让 D 成为我最喜欢的编程语言][8]。

试着再写一个脚本:一个非营利组织的管理员有一个捐款的文本文件,每笔金额都是单独的一行。管理员想把前 10 笔捐款相加,然后打印出金额:

```
#!/usr/bin/env rdmd
// file sum_donations.d

import std;

void main()
{
    double total = 0;

    // open the file
    File("monies.txt")

        // iterate by line
        .byLine

        // pick first 10 lines
        .take(10)

        // remove new line characters (\n)
        .map!(strip)

        // convert each to double
        .map!(to!double)

        // add element to total
        .tee!((x) { total += x; })

        // print each number
        .each!writeln;

    // print total
    writeln("total: ", total);
}
```
与 `each` 一起使用的 `!` 操作符是[模板参数][9]的语法。
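
作为对照,下面是一个与 `sum_donations.d` 逻辑等价的 Python 草稿(非原文内容,仅作示意)。熟悉 Python 的读者可以借它理解上面链式调用的每一步:

```python
# 示意代码:与 sum_donations.d 等价的 Python 实现
def sum_donations(path, count=10):
    """读取文件前 count 行,去掉换行、转成浮点数、逐个打印并求和。"""
    total = 0.0
    with open(path) as f:
        for i, line in enumerate(f):
            if i >= count:               # 对应 .take(10)
                break
            value = float(line.strip())  # 对应 .map!(strip) 与 .map!(to!double)
            print(value)                 # 对应 .each!writeln
            total += value
    print("total:", total)               # 对应 writeln("total: ", total)
    return total
```

两相对比可以看出,D 的范围链式写法把"取前 10 行、清洗、转换、累加"表达成了一条流水线,而不是显式的循环。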

### 2、D 是快速原型设计的好帮手

D 是灵活的,它可以让你快速地拼凑出代码,并使其发挥作用。它的标准库中包含了丰富的实用函数,用于执行常见的任务,如操作数据(JSON、CSV、文本等)。它还带有一套丰富的通用算法,用于迭代、搜索、比较和修改数据。这些巧妙的算法通过定义通用的[基于范围的接口][10],以序列的方式处理数据。

上面的脚本显示了 D 中的链式调用函数如何提供顺序处理和操作数据的要领。D 的另一个吸引人的地方是它不断增长的用于执行普通任务的第三方包的生态系统。一个例子是,使用 [Vibe.d][11] web 框架构建一个简单的 web 服务器很容易。下面是一个例子:

```
#!/usr/bin/env dub
/+ dub.sdl:
dependency "vibe-d" version="~>0.8.0"
+/
void main()
{
    import vibe.d;
    listenHTTP(":8080", (req, res) {
        res.writeBody("Hello, World: " ~ req.path);
    });
    runApplication();
}
```

它使用官方的 D 软件包管理器 [Dub][12],从 [D 软件包仓库][13]中获取 vibe.d Web 框架。Dub 负责下载 Vibe.d 包,然后在本地主机 8080 端口上编译并启动一个 web 服务器。

### 尝试一下 D 语言

这些只是你可能想用 D 来写脚本的几个原因。

D 是一种非常适合开发的语言。你可以很容易地[安装 D 编译器][3],然后看看示例,亲自体验一下 D 语言。

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/1/d-scripting

作者:[Lawrence Aberba][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/aberba
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://opensource.com/article/20/3/automating-community-management-python
[3]: https://tour.dlang.org/tour/en/welcome/install-d-locally
[4]: https://tour.dlang.org/tour/en/welcome/run-d-program-locally
[5]: https://run.dlang.io/
[6]: https://en.wikipedia.org/wiki/Shebang_(Unix)
[7]: https://dlang.org/rdmd.html
[8]: https://opensource.com/article/20/7/d-programming
[9]: http://ddili.org/ders/d.en/templates.html
[10]: http://ddili.org/ders/d.en/ranges.html
[11]: https://vibed.org
[12]: https://dub.pm/getting_started
[13]: https://code.dlang.org
@ -1,16 +1,18 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13094-1.html)
[#]: subject: (4 tips for preventing notification fatigue)
[#]: via: (https://opensource.com/article/21/1/alert-fatigue)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

防止通知疲劳的 4 个技巧
======

> 不要让提醒淹没自己:设置重要的提醒,让其它提醒消失。你会感觉更好,工作效率更高。

![](https://img.linux.net.cn/data/attachment/album/202102/06/234924mo3okotjlv7lo3yo.jpg)

在前几年,这个年度系列涵盖了单个的应用。今年,我们除了关注 2021 年的策略外,还将关注一体化解决方案。欢迎来到 2021 年 21 天生产力的第十八天。

@ -18,9 +20,9 @@

![Text box offering to send notifications][2]

*NOPE(Kevin Sonney, [CC BY-SA 4.0][3])*

如此多的应用、网站和服务想要提醒我们每一件小事,我们很容易就想把它们全部关掉。而且,如果我们不这样做,我们将开始遭受**提醒疲劳**的困扰 —— 这时我们处于紧绷的边缘状态,只是等待下一个提醒,并生活在恐惧之中。

提醒疲劳在那些因工作而被随叫随到的人中非常常见。它也发生在那些患有 **FOMO**(错失恐惧症)的人身上,他们会对社交媒体上每一个自己感兴趣的关键词、标签或提及都设置提醒。

@ -28,31 +30,28 @@

![Alert for a task][4]

*我可以忽略这个,对吧?(Kevin Sonney, [CC BY-SA 4.0][3])*

1. 弄清楚什么更适合你:视觉提醒或声音提醒。我使用视觉弹出和声音的组合,但这对我是有效的。有些人需要触觉提醒,比如手机或手表的震动。找到适合你的那一种。
2. 为重要的提醒指定独特的音调或视觉效果。我有一个朋友,他的工作页面的铃声最响亮、最讨厌。这旨在吸引他的注意力,让他看到提醒。我的显示器上有一盏灯,当我在待命时收到工作提醒时,它就会闪烁红灯,以及发送通知到我手机上。
3. 关掉那些实际上无关紧要的警报。社交网络、网站和应用都希望得到你的关注。它们不会在意你是否错过会议、约会迟到,或者熬夜到凌晨 4 点。关掉那些不重要的,让那些重要的可以被看到。
4. 每隔一段时间就改变一下。上个月有效的东西,下个月可能就不行了。我们会适应、习惯一些东西,然后就会忽略它们。如果有些东西不奏效,就换个东西试试吧!它不会伤害你,即使无法解决问题,也许你也会学到一些新知识。

![Blue alert indicators light][5]

*蓝色是没问题。红色是有问题。(Kevin Sonney, [CC BY-SA 4.0][3])*

### 开源和选择

一个好的应用可以为通知提供很多选择。我最喜欢的一个是 Android 的 Etar 日历应用。[Etar 可以从开源 F-droid 仓库中获得][6]。

Etar 和许多开源应用一样,为你提供了很多选项,尤其是通知设置。

![Etar][7]

*(Seth Kenlon, [CC BY-SA 4.0][3])*

通过 Etar,你可以激活或停用弹出式通知,设置打盹时间、打盹延迟、是否提醒你已拒绝的事件等。结合有计划的日程安排策略,你可以通过控制数字助手对你需要做的事情进行提示的频率来改变你一天的进程。

提醒和警报真的很有用,只要我们收到重要的提醒并予以注意即可。这可能需要做一些实验,但最终,少一些噪音是好事,而且更容易注意到真正需要我们注意的提醒。

--------------------------------------------------------------------------------

@ -61,7 +60,7 @@ via: https://opensource.com/article/21/1/alert-fatigue

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13106-1.html)
[#]: subject: (How to Run a Shell Script in Linux [Essentials Explained for Beginners])
[#]: via: (https://itsfoss.com/run-shell-script-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

基础:如何在 Linux 中运行一个 Shell 脚本
======

![](https://img.linux.net.cn/data/attachment/album/202102/10/235325tkv7h8dvlp4makkk.jpg)

在 Linux 中有两种运行 shell 脚本的方法。你可以使用:

```
bash script.sh
```

或者,你可以像这样执行 shell 脚本:

```
./script.sh
```

这可能很简单,但没太多解释。不要担心,我将使用示例来进行必要的解释,以便你能理解为什么在运行一个 shell 脚本时要使用给定的特定语法格式。

我将使用这个只有一行的 shell 脚本,让需要解释的事情变得尽可能简单:

```
abhishek@itsfoss:~/Scripts$ cat hello.sh

echo "Hello World!"
```

### 方法 1:通过将文件作为参数传递给 shell 以运行 shell 脚本

第一种方法涉及将脚本文件的名称作为参数传递给 shell。

考虑到 bash 是默认 shell,你可以像这样运行一个脚本:

```
bash hello.sh
```

你知道这种方法的优点吗?**你的脚本不需要执行权限**。对于简单的任务非常方便快速。

![在 Linux 中运行一个 Shell 脚本][1]

如果你还不熟悉,我建议你[阅读我的 Linux 文件权限详细指南][2]。

记住,作为参数传递的文件需要是一个 shell 脚本。一个 shell 脚本是由命令组成的。如果你使用一个普通的文本文件,它将会报告错误的命令。

![把一个文本文件当作脚本运行][3]

在这种方法中,**你要明确地指定你想使用 bash 作为脚本的解释器**。

shell 只是一个程序,而 bash 只是 shell 的一种实现。还有其它的 shell 程序,像 ksh、[zsh][4] 等等。如果你安装有其它的 shell,你也可以使用它们来代替 bash。

例如,我已安装了 zsh,并使用它来运行相同的脚本:

![使用 Zsh 来执行 Shell 脚本][5]

### 方法 2:通过指定 shell 脚本的路径来执行脚本

另外一种运行 shell 脚本的方法是通过提供它的路径。但是要这样做之前,你的文件必须是可执行的。否则,当你尝试执行脚本时,你将会得到"权限被拒绝"的错误。

因此,你首先需要确保你的脚本有可执行权限。你可以[使用 chmod 命令][8]给你自己的脚本赋予这种权限,像这样:

```
chmod u+x script.sh
```

使你的脚本可执行之后,你只需输入文件的名称及其绝对路径或相对路径。大多数情况下,你都在同一个目录中,因此你可以像这样使用它:

```
./script.sh
```

如果你与你的脚本不在同一个目录中,你可以指定脚本的绝对路径或相对路径:

![在其它的目录中运行 Shell 脚本][9]

在脚本前的这个 `./` 是非常重要的(当你与脚本在同一个目录中)。

![][10]

为什么当你在同一个目录下,却不能直接使用脚本名称?这是因为你的 Linux 系统会在 `PATH` 环境变量中指定的几个目录中查找可执行的文件来运行。

这里是我的系统的 `PATH` 环境变量的值:

```
abhishek@itsfoss:~$ echo $PATH
/home/abhishek/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
```

这意味着在下面目录中具有可执行权限的任意文件都可以在系统的任何位置运行:

* `/home/abhishek/.local/bin`
* `/usr/local/sbin`
* `/usr/local/bin`
* `/usr/sbin`
* `/usr/bin`
* `/sbin`
* `/bin`
* `/usr/games`
* `/usr/local/games`
* `/snap/bin`

Linux 命令(像 `ls`、`cat` 等)的二进制文件或可执行文件都位于这些目录中的其中一个。这就是为什么你可以在系统的任何位置通过使用命令的名称来运行这些命令的原因。看,`ls` 命令就位于 `/usr/bin` 目录中。

![][11]
当你使用脚本而不具体指定其绝对路径或相对路径时,系统将不能在 `PATH` 环境变量中找到提及的脚本。
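
可以用 Python 标准库的 `shutil.which()` 直观地验证这种查找逻辑(示意代码,非原文内容),它和 shell 一样只在 `PATH` 列出的目录中查找:

```python
# 示意代码:演示 PATH 查找——只有 PATH 目录里的可执行文件才能按名字找到
import shutil

print(shutil.which("ls"))         # 在多数 Linux 系统上会打印 /usr/bin/ls 之类的路径
print(shutil.which("script.sh"))  # 脚本不在 PATH 的目录中,因此打印 None
```

这也解释了 `./` 的作用:它显式给出了相对路径,绕过了 `PATH` 查找。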

> 为什么大多数 shell 脚本在其头部包含 `#! /bin/bash`?
>
> 记得我提过 shell 只是一个程序,并且有 shell 程序的不同实现。
>
> 当你使用 `#! /bin/bash` 时,你是指定 bash 作为解释器来运行脚本。如果你不这样做,并且以 `./script.sh` 的方式运行一个脚本,它通常会在你正在运行的 shell 中运行。
>
> 有问题吗?可能会有。看,大多数的 shell 语法在大多数种类的 shell 中是通用的,但是有一些语法可能会有所不同。
>
> 例如,在 bash 和 zsh 中数组的行为是不同的。在 zsh 中,数组索引是从 1 开始的,而不是从 0 开始。
>
> ![Bash Vs Zsh][12]
>
> 使用 `#! /bin/bash` 来标识该脚本是 bash 脚本,并且应该使用 bash 作为脚本的解释器来运行,而不受在系统上正在使用的 shell 的影响。如果你使用 zsh 的特殊语法,你可以通过在脚本的第一行添加 `#! /bin/zsh` 的方式来标识其是 zsh 脚本。
>
> 在 `#!` 和 `/bin/bash` 之间的空格是没有影响的。你也可以使用 `#!/bin/bash`。

### 它有帮助吗?

我希望这篇文章能够增加你的 Linux 知识。如果你还有问题或建议,请留下评论。

专家用户可能依然会挑出我遗漏的东西。但这种初级题材的问题是,要找到信息的平衡点,避免细节过多或过少,并不容易。

如果你对学习 bash 脚本感兴趣,在我们专注于系统管理的网站 [Linux Handbook][14] 上,我们有一个[完整的 Bash 初学者系列][13]。如果你想要,你也可以[购买带有附加练习的电子书][15],以支持 Linux Handbook。

--------------------------------------------------------------------------------

via: https://itsfoss.com/run-shell-script-linux/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/run-a-shell-script-linux.png?resize=741%2C329&ssl=1
[2]: https://linuxhandbook.com/linux-file-permissions/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/running-text-file-as-script.png?resize=741%2C329&ssl=1
[4]: https://www.zsh.org
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/execute-shell-script-with-zsh.png?resize=741%2C253&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/run-multiple-commands-in-linux.png?fit=800%2C450&ssl=1
[7]: https://itsfoss.com/run-multiple-commands-linux/
[8]: https://linuxhandbook.com/chmod-command/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/running-shell-script-in-other-directory.png?resize=795%2C272&ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/executing-shell-scripts-linux.png?resize=800%2C450&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/locating-command-linux.png?resize=795%2C272&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/bash-vs-zsh.png?resize=795%2C386&ssl=1
[13]: https://linuxhandbook.com/tag/bash-beginner/
[14]: https://linuxhandbook.com
[15]: https://www.buymeacoffee.com/linuxhandbook
@ -1,45 +1,46 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13113-1.html)
[#]: subject: (3 wishes for open source productivity in 2021)
[#]: via: (https://opensource.com/article/21/1/productivity-wishlist)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)

2021 年开源生产力的 3 个愿望
======

> 2021 年,开源世界可以拓展的有很多。这是我特别感兴趣的三个领域。

![Looking at a map for career journey][1]

在前几年,这个年度系列涵盖了单个的应用。今年,我们除了关注 2021 年的策略外,还将关注一体化解决方案。欢迎来到 2021 年 21 天生产力的最后一天。

我们已经到了又一个系列的结尾处。因此,让我们谈谈我希望在 2021 年看到的更多事情。

### 断网

![Large Lego set built by the author][2]

*我在假期期间制作的(Kevin Sonney, [CC BY-SA 4.0][3])*

对*许多、许多的*人来说,2020 年是非常困难的一年。疫情大流行、各种政治事件、24 小时的新闻轰炸等等,都对我们的精神健康造成了伤害。虽然我确实谈到了[抽出时间进行自我护理][4],但我只是简单提到了断网:也就是关闭提醒、手机、平板等,暂时无视这个世界。我公司的一位经理实际上告诉我们,如果放假或休息一天,就把所有与工作有关的东西都关掉(除非我们在值班)。我最喜欢的"断网"活动之一就是听音乐和搭建大而复杂的乐高。

### 可访问性

尽管我谈论的许多技术都是任何人都可以做的,但是软件方面的可访问性都有一定难度。相对于自由软件运动之初,Linux 和开源世界在辅助技术方面已经有了长足发展。但是,仍然有太多的应用和系统不会考虑有些用户没有与设计者相同的能力。我一直在关注这一领域的发展,因为每个人都应该能够访问事物。

### 更多的一体化选择

![JPilot all in one organizer software interface][5]

*JPilot(Kevin Sonney, [CC BY-SA 4.0][3])*

在 FOSS 世界中,一体化的个人信息管理解决方案远没有商业软件世界中那么多。总体趋势是使用单独的应用,它们必须通过配置来相互通信或通过中介服务(如 CalDAV 服务器)。移动市场在很大程度上推动了这一趋势,但我仍然向往像 [JPilot][6] 这样无需额外插件或服务就能完成几乎所有我需要的事情的日子。

非常感谢大家阅读这个年度系列。如果你认为我错过了什么,或者明年需要注意什么,请在下方评论。

就像我在[生产力炼金术][7]上说的那样,尽最大努力保持生产力!

--------------------------------------------------------------------------------

@ -48,7 +49,7 @@ via: https://opensource.com/article/21/1/productivity-wishlist

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,16 +1,18 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13097-1.html)
[#]: subject: (Generate QR codes with this open source tool)
[#]: via: (https://opensource.com/article/21/2/zint-barcode-generator)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

Zint:用这个开源工具生成二维码
======

> Zint 可以轻松生成 50 多种类型的自定义条码。

![](https://img.linux.net.cn/data/attachment/album/202102/07/231854y8ffstg0m6l2fcmz.jpg)

二维码是一种很好的可以向人们提供信息的方式,且没有打印的麻烦和费用。大多数人的智能手机都支持二维码扫描,无论其操作系统是什么。

@ -18,24 +20,22 @@

在互联网上搜索一个简单的、开源的创建二维码的方法时,我发现了 [Zint][2]。Zint 是一个优秀的开源(GPLv3.0)条码生成解决方案。根据该项目的 [GitHub 仓库][3]:"Zint 是一套可以方便地对任何一种公共领域条形码标准的数据进行编码的程序,并允许你将这种功能集成到你自己的程序中。"

Zint 支持 50 多种类型的条形码,包括二维码(ISO 18004),你可以轻松地创建这些条形码,然后复制和粘贴到 word 文档、博客、维基和其他数字媒体中。人们可以用智能手机扫描这些二维码,快速链接到信息。

### 安装 Zint

Zint 适用于 Linux、macOS 和 Windows。

你可以在基于 Ubuntu 的 Linux 发行版上使用 `apt` 安装 Zint 命令:

```
$ sudo apt install zint
```

我还想要一个图形用户界面(GUI),所以我安装了 Zint-QT:

```
$ sudo apt install zint-qt
```

请参考手册的[安装部分][4],了解 macOS 和 Windows 的说明。

@ -46,16 +46,12 @@

![Generating QR code with Zint][5]

*(Don Watkins, [CC BY-SA 4.0][6])*

Zint 的 50 多个其他条码选项包括许多国家的邮政编码、DotCode、EAN、EAN-14 和通用产品代码(UPC)。[项目文档][2]中包含了它可以渲染的所有代码的完整列表。

你可以将任何条形码复制为 BMP 或 SVG,或者将输出保存为你应用中所需要的任何尺寸的图像文件。这是我的 77x77 像素的二维码。

![QR code][7]

*(Don Watkins, [CC BY-SA 4.0][6])*

该项目维护了一份出色的用户手册,其中包含了在[命令行][8]和 [GUI][9] 中使用 Zint 的说明。你甚至可以[在线][10]试用 Zint。对于功能请求或错误报告,请[访问网站][11]或[发送电子邮件][12]。

--------------------------------------------------------------------------------

@ -65,7 +61,7 @@ via: https://opensource.com/article/21/2/zint-barcode-generator

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
255
published/20210204 A hands-on tutorial of SQLite3.md
Normal file
@ -0,0 +1,255 @@
[#]: collector: "lujun9972"
[#]: translator: "amwps290"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13117-1.html"
[#]: subject: "A hands-on tutorial of SQLite3"
[#]: via: "https://opensource.com/article/21/2/sqlite3-cheat-sheet"
[#]: author: "Klaatu https://opensource.com/users/klaatu"

SQLite3 实践教程
======

> 开始使用这个功能强大且通用的数据库吧。

![](https://img.linux.net.cn/data/attachment/album/202102/14/131146jsx2kvyobwxwswct.jpg)

应用程序经常需要保存数据。无论你的用户是创建简单的文本文档、复杂的图形布局、游戏进度还是错综复杂的客户和订单号列表,软件通常都意味着生成数据。有很多方法可以存储数据以供重复使用。你可以将文本转储为 INI、[YAML][2]、XML 或 JSON 等配置格式,可以输出原始的二进制数据,也可以将数据存储在结构化数据库中。SQLite 是一个自包含的、轻量级的数据库,可轻松创建、解析、查询、修改和传输数据。

- 下载 [SQLite3 备忘录][3]

SQLite 专用于[公共领域][4],[从技术上讲,这意味着它没有版权,因此不需要许可证][5]。如果你需要许可证,则可以[购买所有权担保][6]。SQLite 非常常见,大约有 1 万亿个 SQLite 数据库正在使用中:每个基于 Webkit 的 Web 浏览器、现代电视机、汽车多媒体系统以及无数其他软件应用程序中都包含 SQLite,而 Android 和 iOS 设备、macOS 和 Windows 10 计算机以及大多数 Linux 系统上都包含多个这样的数据库。

总而言之,它是用于存储和组织数据的一个可靠而简单的系统。

### 安装

你的系统上可能已经有 SQLite 库,但是你需要安装其命令行工具才能直接使用它。在 Linux 上,你可能已经安装了这些工具。该工具提供的命令是 `sqlite3`(而不仅仅是 `sqlite`)。

如果没有在你的 Linux 或 BSD 上安装 SQLite,你则可以从软件仓库中或 ports 树中安装 SQLite,也可以从源代码或已编译的二进制文件进行[下载并安装][7]。

在 macOS 或 Windows 上,你可以从 [sqlite.org][7] 下载并安装 SQLite 工具。

### 使用 SQLite

通过编程语言与数据库进行交互是很常见的。因此,像 Java、Python、Lua、PHP、Ruby、C++ 以及其他编程语言都提供了 SQLite 的接口(或"绑定")。但是,在使用这些库之前,了解数据库引擎的实际情况以及为什么你对数据库的选择很重要是有帮助的。本文向你介绍 SQLite 和 `sqlite3` 命令,以便你熟悉该数据库如何处理数据的基础知识。

### 与 SQLite 交互

你可以使用 `sqlite3` 命令与 SQLite 进行交互。该命令提供了一个交互式的 shell 程序,以便你可以查看和更新数据库。

```
$ sqlite3
SQLite version 3.34.0 2020-12-01 16:14:00
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite>
```

该命令将使你处于 SQLite 的子 shell 中,因此现在的提示符是 SQLite 的提示符。你以前使用的 Bash 命令在这里将不再适用。你必须使用 SQLite 命令。要查看 SQLite 命令列表,请输入 `.help`:

```
sqlite> .help
.archive ...             Manage SQL archives
.auth ON|OFF             SHOW authorizer callbacks
.backup ?DB? FILE        Backup DB (DEFAULT "main") TO FILE
.bail ON|off             Stop after hitting an error. DEFAULT OFF
.binary ON|off           Turn BINARY output ON OR off. DEFAULT OFF
.cd DIRECTORY            CHANGE the working directory TO DIRECTORY
[...]
```

这些命令中有一些是开关式的,而其他一些则需要特定的参数(如文件名、路径等)。这些是 SQLite shell 的管理命令,不是用于数据库查询。数据库以结构化查询语言(SQL)进行查询,许多 SQLite 查询与你从 [MySQL][8] 和 [MariaDB][9] 数据库中已经知道的查询相同。但是,数据类型和函数有所不同,因此,如果你熟悉另一个数据库,请特别注意细微的差异。

### 创建数据库

启动 SQLite 时,可以打开内存数据库,也可以选择要打开的数据库:

```
$ sqlite3 mydatabase.db
```

如果还没有数据库,则可以在 SQLite 提示符下创建一个数据库:

```
sqlite> .open mydatabase.db
```

现在,你的硬盘驱动器上有一个空文件,可以用作 SQLite 数据库。文件扩展名 `.db` 是任意的,你也可以使用 `.sqlite` 或任何你想要的后缀。

### 创建一个表

数据库包含一些<ruby>表<rt>table</rt></ruby>,可以将其可视化为电子表格。有许多的行(在数据库中称为<ruby>记录<rt>record</rt></ruby>)和列。行和列的交集称为<ruby>字段<rt>field</rt></ruby>。

结构化查询语言(SQL)以其提供的内容而命名:一种以可预测且一致的语法查询数据库内容以接收有用的结果的方法。SQL 读起来很像普通的英语句子,即使有点机械化。当前,你的数据库是一个没有任何表的空数据库。

你可以使用 `CREATE` 来创建一个新表,并可以和 `IF NOT EXISTS` 结合使用,以便不会破坏现在已有的同名的表。

你无法在 SQLite 中创建一个没有任何字段的空表,因此在尝试 `CREATE` 语句之前,必须考虑预期表将存储的数据类型。在此示例中,我将使用以下列创建一个名为 `member` 的表:

* 唯一标识符
* 人名
* 记录创建的时间和日期

#### 唯一标识符

最好用唯一的编号来引用记录。幸运的是,SQLite 认识到这一点,会自动创建一个名叫 `rowid` 的列来为你实现这一点。

无需 SQL 语句即可创建此字段。

#### 数据类型

在我的示例表中,我创建了一个 `name` 列来保存 `TEXT` 类型的数据。为了防止在没有指定字段数据的情况下创建记录,可以添加 `NOT NULL` 指令。

用 `name TEXT NOT NULL` 语句来创建它。

SQLite 中有五种数据类型(实际上是*存储类别*):

* `TEXT`:文本字符串
* `INTEGER`:一个数字
* `REAL`:一个浮点数(小数位数无限制)
* `BLOB`:二进制数据(例如,.jpeg 或 .webp 图像)
* `NULL`:空值
|
||||
|
||||
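Whichever type you declare, you can check which storage class SQLite actually assigns to a value with the built-in `typeof()` function. A quick non-interactive sketch (the `:memory:` argument opens a throwaway in-memory database):

```shell
# Ask SQLite for the storage class of one literal per type;
# x'00ff' is a BLOB literal.
sqlite3 :memory: "SELECT typeof('hello'), typeof(42), typeof(3.14), typeof(x'00ff'), typeof(NULL);"
# → text|integer|real|blob|null
```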
#### Date and timestamp

SQLite has a convenient date and timestamp feature. It is not a data type in itself but a SQLite function that generates either a string or an integer, depending on your desired format. In this example, I leave it at the default.

The SQL statement to create this field is: `datestamp DATETIME DEFAULT CURRENT_TIMESTAMP`.
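If you want something other than the default format, SQLite's date and time functions (such as `date()` and `strftime()`) can render the current time in other shapes. A small sketch; the exact output depends on when you run it:

```shell
# date('now') yields YYYY-MM-DD; strftime() takes a printf-like format string.
sqlite3 :memory: "SELECT date('now'), strftime('%Y-%m-%d %H:%M', 'now');"
```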
### The table creation statement

The full SQL to create this example table in SQLite:

```
sqlite> CREATE TABLE
...> IF NOT EXISTS
...> member (name TEXT NOT NULL,
...> datestamp DATETIME DEFAULT CURRENT_TIMESTAMP);
```

In this code example, I pressed Return after each clause of the statement to make it easier to read. SQLite will not run your SQL statement until it is terminated with a semicolon (`;`).
You can verify that the table has been created with the SQLite command `.tables`:

```
sqlite> .tables
member
```
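Besides `.tables`, the `.schema` command prints the `CREATE` statement that defines a table, which is a handy way to double-check your column definitions. Here it is run non-interactively against the example database file (assuming you created it as above):

```shell
# Print the SQL that defines the member table.
sqlite3 mydatabase.db ".schema member"
```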
### Viewing the columns in a table

You can verify which columns the table contains with a `PRAGMA` statement:

```
sqlite> PRAGMA table_info(member);
0|name|TEXT|1||0
1|datestamp|DATETIME|0|CURRENT_TIMESTAMP|0
```

Each line of the output lists a column's ID, name, type, `NOT NULL` flag, default value, and primary-key flag.
### Data entry

You can populate the table with some sample data using `INSERT` statements:

```
> INSERT INTO member (name) VALUES ('Alice');
> INSERT INTO member (name) VALUES ('Bob');
> INSERT INTO member (name) VALUES ('Carol');
> INSERT INTO member (name) VALUES ('David');
```
View the data in the table:

```
> SELECT * FROM member;
Alice|2020-12-15 22:39:00
Bob|2020-12-15 22:39:02
Carol|2020-12-15 22:39:05
David|2020-12-15 22:39:07
```
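A `SELECT` can also take a `WHERE` clause to filter which rows come back. This sketch rebuilds a minimal version of the sample data in an in-memory database (the `datestamp` column is omitted for brevity):

```shell
# Only names starting with 'A' match the LIKE pattern.
sqlite3 :memory: "CREATE TABLE member (name TEXT NOT NULL);
INSERT INTO member (name) VALUES ('Alice'), ('Bob'), ('Carol'), ('David');
SELECT name FROM member WHERE name LIKE 'A%';"
# → Alice
```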
#### Adding multiple rows

Now create a second table:

```
> CREATE TABLE IF NOT EXISTS linux (
...> distro TEXT NOT NULL);
```

Populate it with some sample data, this time using a little `VALUES` shortcut so you can add multiple rows in one command. The `VALUES` keyword expects a list in parentheses, with multiple lists separated by commas:

```
> INSERT INTO linux (distro)
...> VALUES ('Slackware'), ('RHEL'),
...> ('Fedora'),('Debian');
```
### Altering a table

You now have two tables, but as yet there is no relationship between them. Each contains independent data, but you might need to associate a member of the first table with a specific item listed in the second.

To do that, you can create a new column in the first table that corresponds to the second. Because both tables were designed with unique identifiers (created automatically by SQLite), the easiest way to connect them is to use the `rowid` field of one as a selector for the other.

Create a new column in the first table to store values from the second:

```
> ALTER TABLE member ADD os INT;
```
Use the unique identifiers from the `linux` table as the value of the `os` field for each record in the `member` table. Because the records already exist, you use an `UPDATE` statement rather than `INSERT` to change the data. Specifically, you must first select one particular row, and only then can you update one of its fields. Syntactically, this is expressed a little in reverse: the update comes first, and the selection match comes last:

```
> UPDATE member SET os=1 WHERE name='Alice';
```
Repeat the process for the other rows in the `member` table. Update the `os` field, and for data variety, assign three different distributions across the four records (doubling up on one).
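For example, Bob and Carol might both get Fedora (`rowid` 3 in the `linux` table) and David Debian (`rowid` 4), assignments chosen to match the join results in the next section. A self-contained sketch, rebuilt in a throwaway in-memory database (datestamps omitted) so the final state is visible:

```shell
# Rebuild the member table, apply the remaining updates,
# then show the final name/os pairs.
sqlite3 :memory: "CREATE TABLE member (name TEXT NOT NULL, os INT);
INSERT INTO member (name) VALUES ('Alice'), ('Bob'), ('Carol'), ('David');
UPDATE member SET os=1 WHERE name='Alice';
UPDATE member SET os=3 WHERE name='Bob';
UPDATE member SET os=3 WHERE name='Carol';
UPDATE member SET os=4 WHERE name='David';
SELECT * FROM member;"
```

This prints `Alice|1`, `Bob|3`, `Carol|3`, and `David|4`.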
### Joining tables

Now that the two tables relate to each other, you can use SQL to display the associated data. There are many kinds of _joins_ in databases, but once you know the basics, you can try them all. Here is a basic join to correlate the values in the `os` field of the `member` table with the `rowid` field of the `linux` table:

```
> SELECT * FROM member INNER JOIN linux ON member.os=linux.rowid;
Alice|2020-12-15 22:39:00|1|Slackware
Bob|2020-12-15 22:39:02|3|Fedora
Carol|2020-12-15 22:39:05|3|Fedora
David|2020-12-15 22:39:07|4|Debian
```

The `os` and `rowid` fields form the association.
In a graphical application, you can imagine the `os` field as a drop-down menu whose values are the data in the `distro` field of the `linux` table. Associating related sets of data through unique fields keeps the data consistent and valid, and thanks to SQL, you can associate them dynamically later.
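In practice, you usually name the columns you want instead of using `*`. A self-contained sketch of the same join, rebuilt in an in-memory database (datestamps omitted, and the `os` assignments matching the join output above), selecting only the name and distro:

```shell
# Build both tables, then join them with explicitly named columns.
sqlite3 :memory: "CREATE TABLE member (name TEXT NOT NULL, os INT);
CREATE TABLE linux (distro TEXT NOT NULL);
INSERT INTO linux (distro) VALUES ('Slackware'), ('RHEL'), ('Fedora'), ('Debian');
INSERT INTO member (name, os) VALUES ('Alice', 1), ('Bob', 3), ('Carol', 3), ('David', 4);
SELECT member.name, linux.distro FROM member INNER JOIN linux ON member.os = linux.rowid;"
```

This prints `Alice|Slackware`, `Bob|Fedora`, `Carol|Fedora`, and `David|Debian`.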
### Learn more

SQLite is a remarkably useful self-contained, portable, open source database. Learning to use it interactively is an important first step toward managing it for web applications or using it through programming language libraries.

If you like SQLite, you might also try [Fossil][10], by the same author, Dr. Richard Hipp.

There are some common commands that may be helpful as you learn and use SQLite, so download our [SQLite3 cheat sheet][3] today!
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/sqlite3-cheat-sheet

Author: [Klaatu][a]
Topic selector: [lujun9972][b]
Translator: [amwps290](https://github.com/amwps290)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/klaatu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_cheat_sheet.png?itok=lYkNKieP "Cheat Sheet cover image"
[2]: https://www.redhat.com/sysadmin/yaml-beginners
[3]: https://opensource.com/downloads/sqlite-cheat-sheet
[4]: https://sqlite.org/copyright.html
[5]: https://directory.fsf.org/wiki/License:PublicDomain
[6]: https://www.sqlite.org/purchase/license?
[7]: https://www.sqlite.org/download.html
[8]: https://www.mysql.com/
[9]: https://mariadb.org/
[10]: https://opensource.com/article/20/11/fossil
@ -1,78 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Where are all the IoT experts going to come from?)
[#]: via: (https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

Where are all the IoT experts going to come from?
======
The fast growth of the internet of things (IoT) is creating a need to train cross-functional experts who can combine traditional networking and infrastructure expertise with database and reporting skills.

![Kevin \(CC0\)][1]
If the internet of things (IoT) is going to fulfill its enormous promise, it’s going to need legions of smart, skilled, _trained_ workers to make everything happen. And right now, it’s not entirely clear where those people are going to come from.

That’s why I was interested in trading emails with Keith Flynn, senior director of product management, R&D at asset-optimization software company [AspenTech][2], who says that when dealing with the slew of new technologies that fall under the IoT umbrella, you need people who can understand how to configure the technology and interpret the data. Flynn sees a growing need for existing educational institutions to house IoT-specific programs, as well as an opportunity for new IoT-focused private colleges, offering a well-rounded curriculum.

“In the future,” Flynn told me, “IoT projects will differ tremendously from the general data management and automation projects of today. … The future requires a more holistic set of skills and cross-trading capabilities so that we’re all speaking the same language.”

**[ Also read: [20 hot jobs ambitious IT pros should shoot for][3] ]**

With the IoT growing 30% a year, Flynn added, rather than a few specific skills, “everything from traditional deployment skills, like networking and infrastructure, to database and reporting skills and, frankly, even basic data science, need to be understood together and used together.”
### Calling all IoT consultants

“The first big opportunity for IoT-educated people is in the consulting field,” Flynn predicted. “As consulting companies adapt or die to the industry trends … having IoT-trained people on staff will help position them for IoT projects and make a claim in the new line of business: IoT consulting.”

The problem is especially acute for startups and smaller companies. “The bigger the organization, the more likely they have a means to hire different people across different lines of skillsets,” Flynn said. “But for smaller organizations and smaller IoT projects, you need someone who can do both.”

Both? Or _everything?_ The IoT “requires a combination of all knowledge and skillsets,” Flynn said, noting that “many of the skills aren’t new, they’ve just never been grouped together or taught together before.”

**[ [Looking to upgrade your career in tech? This comprehensive online course teaches you how.][4] ]**

### The IoT expert of the future

True IoT expertise starts with foundational instrumentation and electrical skills, Flynn said, which can help workers implement new wireless transmitters and boost technology for better battery life and power consumption.

“IT skills, like networking, IP addressing, subnet masks, cellular and satellite are also pivotal IoT needs,” Flynn said. He also sees a need for database management skills and cloud management and security expertise, “especially as things like [advanced process control] APC and sending sensor data directly to databases and data lakes become the norm.”
### Where will IoT experts come from?

Flynn said standardized formal education courses would be the best way to make sure that graduates or certificate holders have the right set of skills. He even laid out a sample curriculum: “Start in chronological order with the basics like [Electrical & Instrumentation] E&I and measurement. Then teach networking, and then database administration and cloud courses should follow that. This degree could even be looped into an existing engineering course, and it would probably take two years … to complete the IoT component.”

While corporate training could also play a role, “that’s easier said than done,” Flynn warned. “Those trainings will need to be organization-specific efforts and pushes.”

Of course, there are already [plenty of online IoT training courses and certificate programs][5]. But, ultimately, the responsibility lies with the workers themselves.

“Upskilling is incredibly important in this world as tech continues to transform industries,” Flynn said. “If that upskilling push doesn’t come from your employer, then online courses and certifications would be an excellent way to do that for yourself. We just need those courses to be created. ... I could even see organizations partnering with higher-education institutions that offer these courses to give their employees better access to it. Of course, the challenge with an IoT program is that it will need to constantly evolve to keep up with new advancements in tech.”

**[ For more on IoT, see [tips for securing IoT on your network][6], our list of [the most powerful internet of things companies][7] and learn about the [industrial internet of things][8]. | Get regularly scheduled insights by [signing up for Network World newsletters][9]. ]**

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3404489/where-are-all-the-iot-experts-going-to-come-from.html

Author: [Fredric Paul][a]
Topic selector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/programmer_certification-skills_code_devops_glasses_student_by-kevin-unsplash-100764315-large.jpg
[2]: https://www.aspentech.com/
[3]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career
[5]: https://www.google.com/search?client=firefox-b-1-d&q=iot+training
[6]: https://www.networkworld.com/article/3254185/internet-of-things/tips-for-securing-iot-on-your-network.html#nww-fsb
[7]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
[8]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb
[9]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
263
sources/talk/20210207 The Real Novelty of the ARPANET.md
Normal file
@ -0,0 +1,263 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Real Novelty of the ARPANET)
[#]: via: (https://twobithistory.org/2021/02/07/arpanet.html)
[#]: author: (Two-Bit History https://twobithistory.org)

The Real Novelty of the ARPANET
======
If you run an image search for the word “ARPANET,” you will find lots of maps showing how the [government research network][1] expanded steadily across the country throughout the late ’60s and early ’70s. I’m guessing that most people reading or hearing about the ARPANET for the first time encounter one of these maps.

Obviously, the maps are interesting—it’s hard to believe that there were once so few networked computers that their locations could all be conveyed with what is really pretty lo-fi cartography. (We’re talking 1960s overhead projector diagrams here. You know the vibe.) But the problem with the maps, drawn as they are with bold lines stretching across the continent, is that they reinforce the idea that the ARPANET’s paramount achievement was connecting computers across the vast distances of the United States for the first time.

Today, the internet is a lifeline that keeps us tethered to each other even as an airborne virus has us all locked up indoors. So it’s easy to imagine that, if the ARPANET was the first draft of the internet, then surely the world that existed before it was entirely disconnected, since that’s where we’d be without the internet today, right? The ARPANET must have been a big deal because it connected people via computers when that hadn’t before been possible.

That view doesn’t get the history quite right. It also undersells what made the ARPANET such a breakthrough.
### The Debut

The Washington Hilton stands near the top of a small rise about a mile and a half northeast of the National Mall. Its two white-painted modern facades sweep out in broad semicircles like the wings of a bird. The New York Times, reporting on the hotel’s completion in 1965, remarked that the building looks “like a sea gull perched on a hilltop nest.”[1][2]

The hotel hides its most famous feature below ground. Underneath the driveway roundabout is an enormous ovoid event space known as the International Ballroom, which was for many years the largest pillar-less ballroom in DC. In 1967, the Doors played a concert there. In 1968, Jimi Hendrix also played a concert there. In 1972, a somewhat more sedate act took over the ballroom to put on the inaugural International Conference on Computer Communication, where a promising research project known as the ARPANET was demonstrated publicly for the first time.

The 1972 ICCC, which took place from October 24th to 26th, was attended by about 800 people.[2][3] It brought together all of the leading researchers in the nascent field of computer networking. According to internet pioneer Bob Kahn, “if somebody had dropped a bomb on the Washington Hilton, it would have destroyed almost all of the networking community in the US at that point.”[3][4]

Not all of the attendees were computer scientists, however. An advertisement for the conference claimed it would be “user-focused” and geared toward “lawyers, medical men, economists, and government men as well as engineers and communicators.”[4][5] Some of the conference’s sessions were highly technical, such as the session titled “Data Network Design Problems I” and its sequel session, “Data Network Design Problems II.” But most of the sessions were, as promised, focused on the potential social and economic impacts of computer networking. One session, eerily prescient today, sought to foster a discussion about how the legal system could act proactively “to safeguard the right of privacy in the computer data bank.”[5][6]

The ARPANET demonstration was intended as a side attraction of sorts for the attendees. Between sessions, which were held either in the International Ballroom or elsewhere on the lower level of the hotel, attendees were free to wander into the Georgetown Ballroom (a smaller ballroom/conference room down the hall from the big one),[6][7] where there were 40 terminals from a variety of manufacturers set up to access the ARPANET.[7][8] These terminals were dumb terminals—they only handled input and output and could do no computation on their own. (In fact, in 1972, it’s likely that all of these terminals were hardcopy terminals, i.e. teletype machines.) The terminals were all hooked up to a computer known as a Terminal Interface Message Processor or TIP, which sat on a raised platform in the middle of the room. The TIP was a kind of archaic router specially designed to connect dumb terminals to the ARPANET. Using the terminals and the TIP, the ICCC attendees could experiment with logging on and accessing some of the computers at the 29 host sites then comprising the ARPANET.[8][9]

To exhibit the network’s capabilities, researchers at the host sites across the country had collaborated to prepare 19 simple “scenarios” for users to experiment with. These scenarios were compiled into [a booklet][10] that was handed to conference attendees as they tentatively approached the maze of wiring and terminals.[9][11] The scenarios were meant to prove that the new technology worked but also that it was useful, because so far the ARPANET was “a highway system without cars,” and its Pentagon funders hoped that a public demonstration would excite more interest in the network.[10][12]

The scenarios thus showed off a diverse selection of the software that could be accessed over the ARPANET: There were programming language interpreters, one for a Lisp-based language at MIT and another for a numerical computing environment called Speakeasy hosted at UCLA; there were games, including a chess program and an implementation of Conway’s Game of Life; and—perhaps most popular among the conference attendees—there were several AI chat programs, including the famous ELIZA chat program developed at MIT by Joseph Weizenbaum.

The researchers who had prepared the scenarios were careful to list each command that users were expected to enter at their terminals. This was especially important because the sequence of commands used to connect to any given ARPANET host could vary depending on the host in question. To experiment with the AI chess program hosted on the MIT Artificial Intelligence Laboratory’s PDP-10 minicomputer, for instance, conference attendees were instructed to enter the following:
_`[LF]`, `[SP]`, and `[CR]` below stand for the line feed, space, and carriage return keys respectively. I’ve explained each command after `//`, but this syntax was not used for the annotations in the original._

```
@r [LF] // Reset the TIP
@e [SP] r [LF] // "Echo remote" setting, host echoes characters rather than TIP
@L [SP] 134 [LF] // Connect to host number 134
:login [SP] iccXXX [CR] // Login to the MIT AI Lab's system, where "XXX" should be user's initials
:chess [CR] // Start chess program
```

If conference attendees were successfully able to enter those commands, their reward was the opportunity to play around with some of the most cutting-edge chess software available at the time, where the layout of the board was represented like this:
```
BR BN BB BQ BK BB BN BR
BP BP BP BP ** BP BP BP
-- ** -- ** -- ** -- **
** -- ** -- BP -- ** --
-- ** -- ** WP ** -- **
** -- ** -- ** -- ** --
WP WP WP WP -- WP WP WP
WR WN WB WQ WK WB WN WR
```
In contrast, to connect to UCLA’s IBM System/360 and run the Speakeasy numerical computing environment, conference attendees had to enter the following:

```
@r [LF] // Reset the TIP
@t [SP] o [SP] L [LF] // "Transmit on line feed" setting
@i [SP] L [LF] // "Insert line feed" setting, i.e. send line feed with each carriage return
@L [SP] 65 [LF] // Connect to host number 65
tso // Connect to IBM Time-Sharing Option system
logon [SP] icX [CR] // Log in with username, where "X" should be a freely chosen digit
iccc [CR] // This is the password (so secure!)
speakez [CR] // Start Speakeasy
```

Successfully running that gauntlet gave attendees the power to multiply and transpose and do other operations on matrices as quickly as they could input them at their terminal:
```
:+! a=m*transpose(m);a [CR]
:+! eigenvals(a) [CR]
```
Many of the attendees were impressed by the demonstration, but not for the reasons that we, from our present-day vantage point, might assume. The key piece of context hard to keep in mind today is that, in 1972, being able to use a computer remotely, even from a different city, was not new. Teletype devices had been used to talk to distant computers for decades already. Almost a full five years before the ICCC, Bill Gates was in a Seattle high school using a teletype to run his first BASIC programs on a General Electric computer housed elsewhere in the city. Merely logging in to a host computer and running a few commands or playing a text-based game was routine. The software on display here was pretty neat, but the two scenarios I’ve told you about so far could ostensibly have been experienced without going over the ARPANET.

Of course, something new was happening under the hood. The lawyers, policy-makers, and economists at the ICCC might have been enamored with the clever chess program and the chat bots, but the networking experts would have been more interested in two other scenarios that did a better job of demonstrating what the ARPANET project had achieved.

The first of these scenarios involved a program called `NETWRK` running on MIT’s ITS operating system. The `NETWRK` command was the entrypoint for several subcommands that could report various aspects of the ARPANET’s operating status. The `SURVEY` subcommand reported which hosts on the network were functioning and available (they all fit on a single list), while the `SUMMARY.OF.SURVEY` subcommand aggregated the results of past `SURVEY` runs to report an “up percentage” for each host as well as how long, on average, it took for each host to respond to messages. The output of the `SUMMARY.OF.SURVEY` subcommand was a table that looked like this:
```
--HOST-- -#- -%-UP- -RESP-
UCLA-NMC 001 097% 00.80
SRI-ARC 002 068% 01.23
UCSB-75 003 059% 00.63
...
```

The host number field, as you can see, has room for no more than three digits (ha!). Other `NETWRK` subcommands allowed users to look at summary of survey results over a longer historical period or to examine the log of survey results for a single host.
The second of these scenarios featured a piece of software called the SRI-ARC Online System being developed at Stanford. This was a fancy piece of software with lots of functionality (it was the software system that Douglas Engelbart demoed in the “Mother of All Demos”), but one of the many things it could do was make use of what was essentially a file hosting service run on the host at UC Santa Barbara. From a terminal at the Washington Hilton, conference attendees could copy a file created at Stanford onto the host at UCSB simply by running a `copy` command and answering a few of the computer’s questions:

_`[ESC]`, `[SP]`, and `[CR]` below stand for the escape, space, and carriage return keys respectively. The words in parentheses are prompts printed by the computer. The escape key is used to autocomplete the filename on the third line. The file being copied here is called `<system>sample.txt;1`, where the trailing one indicates the file’s version number and `<system>` indicates the directory. This was a convention for filenames used by the TENEX operating system._[11][13]

```
@copy
(TO/FROM UCSB) to
(FILE) <system>sample [ESC] .TXT;1 [CR]
(CREATE/REPLACE) create
```
These two scenarios might not look all that different from the first two, but they were remarkable. They were remarkable because they made it clear that, on the ARPANET, humans could talk to computers but computers could also talk to _each other._ The `SURVEY` results collected at MIT weren’t collected by a human regularly logging in to each machine to check if it was up—they were collected by a program that knew how to talk to the other machines on the network. Likewise, the file transfer from Stanford to UCSB didn’t involve any humans sitting at terminals at either Stanford or UCSB—the user at a terminal in Washington DC was able to get the two computers to talk to each other merely by invoking a piece of software. Even more, it didn’t matter which of the 40 terminals in the Ballroom you were sitting at, because you could view the MIT network monitoring statistics or store files at UCSB using any of the terminals with almost the same sequence of commands.

This is what was totally new about the ARPANET. The ICCC demonstration didn’t just involve a human communicating with a distant computer. It wasn’t just a demonstration of remote I/O. It was a demonstration of software remotely communicating with other software, something nobody had seen before.

To really appreciate why it was this aspect of the ARPANET project that was important and not the wires-across-the-country, physical connection thing that the host maps suggest (the wires were leased phone lines anyhow and were already there!), consider that, before the ARPANET project began in 1966, the ARPA offices in the Pentagon had a terminal room. Inside it were three terminals. Each connected to a different computer; one computer was at MIT, one was at UC Berkeley, and another was in Santa Monica.[12][14] It was convenient for the ARPA staff that they could use these three computers even from Washington DC. But what was inconvenient for them was that they had to buy and maintain terminals from three different manufacturers, remember three different login procedures, and familiarize themselves with three different computing environments in order to use the computers. The terminals might have been right next to each other, but they were merely extensions of the host computing systems on the other end of the wire and operated as differently as the computers did. Communicating with a distant computer was possible before the ARPANET; the problem was that the heterogeneity of computing systems limited how sophisticated the communication could be.
### Come Together, Right Now

So what I’m trying to drive home here is that there is an important distinction between statement A, “the ARPANET connected people in different locations via computers for the first time,” and statement B, “the ARPANET connected computer systems to each other for the first time.” That might seem like splitting hairs, but statement A elides some illuminating history in a way that statement B does not.

To begin with, the historian Joy Lisi Rankin has shown that people were socializing in cyberspace well before the ARPANET came along. In _A People’s History of Computing in the United States_, she describes several different digital communities that existed across the country on time-sharing networks prior to or apart from the ARPANET. These time-sharing networks were not, technically speaking, computer networks, since they consisted of a single mainframe computer running computations in a basement somewhere for many dumb terminals, like some portly chthonic creature with tentacles sprawling across the country. But they nevertheless enabled most of the social behavior now connoted by the word “network” in a post-Facebook world. For example, on the Kiewit Network, which was an extension of the Dartmouth Time-Sharing System to colleges and high schools across the Northeast, high school students collaboratively maintained a “gossip file” that allowed them to keep track of the exciting goings-on at other schools, “creating social connections from Connecticut to Maine.”[13][15] Meanwhile, women at Mount Holyoke College corresponded with men at Dartmouth over the network, perhaps to arrange dates or keep in touch with boyfriends.[14][16] This was all happening in the 1960s. Rankin argues that by ignoring these early time-sharing networks we impoverish our understanding of how American digital culture developed over the last 50 years, leaving room for a “Silicon Valley mythology” that credits everything to the individual genius of a select few founding fathers.

As for the ARPANET itself, if we recognize that the key challenge was connecting the computer _systems_ and not just the physical computers, then that might change what we choose to emphasize when we tell the story of the innovations that made the ARPANET possible. The ARPANET was the first ever packet-switched network, and lots of impressive engineering went into making that happen. I think it’s a mistake, though, to say that the ARPANET was a breakthrough because it was the first packet-switched network and then leave it at that. The ARPANET was meant to make it easier for computer scientists across the country to collaborate; that project was as much about figuring out how different operating systems and programs written in different languages would interface with each other as it was about figuring out how to efficiently ferry data back and forth between Massachusetts and California. So the ARPANET was the first packet-switched network, but it was also an amazing standards success story—something I find especially interesting given [how][17] [many][18] [times][19] I’ve written about failed standards on this blog.
Inventing the protocols for the ARPANET was an afterthought even at the time, so naturally the job fell to a group made up largely of graduate students. This group, later known as the Network Working Group, met for the first time at UC Santa Barbara in August of 1968.[15][20] There were 12 people present at that first meeting, most of whom were representatives from the four universities that were to be the first host sites on the ARPANET when the equipment was ready.[16][21] Steve Crocker, then a graduate student at UCLA, attended; he told me over a Zoom call that it was all young guys at that first meeting, and that Elmer Shapiro, who chaired the meeting, was probably the oldest one there at around 38. ARPA had not put anyone in charge of figuring out how the computers would communicate once they were connected, but it was obvious that some coordination was necessary. As the group continued to meet, Crocker kept expecting some “legitimate adult” with more experience and authority to fly out from the East Coast to take over, but that never happened. The Network Working Group had ARPA’s tacit approval—all those meetings involved lots of long road trips, and ARPA money covered the travel expenses—so they were it.[17][22]
|
||||
|
||||
The Network Working Group faced a huge challenge. Nobody had ever sat down to connect computer systems together in a general-purpose way; that flew against all of the assumptions that prevailed in computing in the late 1960s:
|
||||
|
||||
> The typical mainframe of the period behaved as if it were the only computer in the universe. There was no obvious or easy way to engage two diverse machines in even the minimal communication needed to move bits back and forth. You could connect machines, but once connected, what would they say to each other? In those days a computer interacted with devices that were attached to it, like a monarch communicating with his subjects. Everything connected to the main computer performed a specific task, and each peripheral device was presumed to be ready at all times for a fetch-my-slippers type command…. Computers were strictly designed for this kind of interaction; they send instructions to subordinate card readers, terminals, and tape units, and they initiate all dialogues. But if another device in effect tapped the computer on the shoulder with a signal and said, “Hi, I’m a computer too,” the receiving machine would be stumped.[18][23]
|
||||
|
||||
As a result, the Network Working Group’s progress was initially slow.[19][24] The group did not settle on an “official” specification for any protocol until June, 1970, nearly two years after the group’s first meeting.[20][25]
|
||||
|
||||
But by the time the ARPANET was to be shown off at the 1972 ICCC, all the key protocols were in place. A scenario like the chess scenario exercised many of them. When a user ran the command `@e r`, short for `@echo remote`, that instructed the TIP to make use of a facility in the new TELNET virtual teletype protocol to inform the remote host that it should echo the user’s input. When a user then ran the command `@L 134`, short for `@login 134`, that caused the TIP to invoke the Initial Connection Protocol with host 134, which in turn would cause the remote host to allocate all the necessary resources for the connection and drop the user into a TELNET session. (The file transfer scenario I described may well have made use of the File Transfer Protocol, though that protocol was only ready shortly before the conference.[21][26]) All of these protocols were known as “level three” protocols, and below them were the host-to-host protocol at level two (which defined the basic format for the messages the hosts should expect from each other), and the host-to-IMP protocol at level one (which defined how hosts communicated with the routing equipment they were linked to). Incredibly, the protocols all worked.
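
Purely as a toy illustration of the command dispatch described above—the real TIP was not written in Python, and every function and field name here is invented—the mapping from abbreviated user commands to protocol actions might be sketched like this:

```python
# Toy model of how a TIP mapped abbreviated user commands to protocol
# actions. The command forms ("@e r", "@L 134") come from the text above;
# the function and dictionary structure are invented for illustration.

def handle_tip_command(line):
    """Return the protocol action a TIP command would trigger."""
    parts = line.split()
    cmd = parts[0]
    if cmd in ("@e", "@echo") and parts[1] in ("r", "remote"):
        # TELNET facility: ask the remote host to echo the user's input
        return {"protocol": "TELNET", "action": "remote-echo"}
    if cmd in ("@L", "@login"):
        # Initial Connection Protocol: connect to a numbered host
        return {"protocol": "ICP", "action": "connect", "host": int(parts[1])}
    raise ValueError("unknown command: " + line)

print(handle_tip_command("@e r"))
print(handle_tip_command("@L 134"))
```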

In my view, the Network Working Group was able to get everything together in time and just generally excel at its task because it adopted an open and informal approach to standardization, as exemplified by the famous Request for Comments (RFC) series of documents. These documents, originally circulated among the members of the Network Working Group by snail mail, were a way of keeping in touch between meetings and soliciting feedback on ideas. The “Request for Comments” framing was suggested by Steve Crocker, who authored the first RFC and supervised the RFC mailing list in the early years, in an attempt to emphasize the open-ended and collaborative nature of what the group was trying to do. That framing, and the availability of the documents themselves, made the protocol design process into a melting pot of contributions and riffs on other people’s contributions where the best ideas could emerge without anyone losing face. The RFC process was a smashing success and is still used to specify internet standards today, half a century later.

It’s this legacy of the Network Working Group that I think we should highlight when we talk about ARPANET’s impact. Though today one of the most magical things about the internet is that it can connect us with people on the other side of the planet, it’s only slightly facetious to say that that technology has been with us since the 19th century. Physical distance was conquered well before the ARPANET by the telegraph. The kind of distance conquered by the ARPANET was instead the logical distance between the operating systems, character codes, programming languages, and organizational policies employed at each host site. Implementing the first packet-switched network was of course a major feat of engineering that should also be mentioned, but the problem of agreeing on standards to connect computers that had never been designed to play nice with each other was the harder of the two big problems involved in building the ARPANET—and its solution was the most miraculous part of the ARPANET story.

In 1981, ARPA issued a “Completion Report” reviewing the first decade of the ARPANET’s history. In a section with the belabored title, “Technical Aspects of the Effort Which Were Successful and Aspects of the Effort Which Did Not Materialize as Originally Envisaged,” the authors wrote:

> Possibly the most difficult task undertaken in the development of the ARPANET was the attempt—which proved successful—to make a number of independent host computer systems of varying manufacture, and varying operating systems within a single manufactured type, communicate with each other despite their diverse characteristics.[22][27]

There you have it from no less a source than the federal government of the United States.

_If you enjoyed this post, more like it comes out every four weeks! Follow [@TwoBitHistory][28] on Twitter or subscribe to the [RSS feed][29] to make sure you know when a new post is out._

_Previously on TwoBitHistory…_

> It's been too long, I know, but I finally got around to writing a new post. This one is about how REST APIs should really be known as FIOH APIs instead (Fuck It, Overload HTTP):<https://t.co/xjMZVZgsEz>
>
> — TwoBitHistory (@TwoBitHistory) [June 28, 2020][30]

 1. “Hilton Hotel Opens in Capital Today.” _The New York Times_, 20 March 1965, <https://www.nytimes.com/1965/03/20/archives/hilton-hotel-opens-in-capital-today.html?searchResultPosition=1>. Accessed 7 Feb. 2021. [↩︎][31]

 2. James Pelkey. _Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968-1988,_ Chapter 4, Section 12, 2007, <http://www.historyofcomputercommunications.info/Book/4/4.12-ICCC%20Demonstration71-72.html>. Accessed 7 Feb. 2021. [↩︎][32]

 3. Katie Hafner and Matthew Lyon. _Where Wizards Stay Up Late: The Origins of the Internet_. New York, Simon & Schuster, 1996, p. 178. [↩︎][33]

 4. “International Conference on Computer Communication.” _Computer_, vol. 5, no. 4, 1972, p. c2, <https://www.computer.org/csdl/magazine/co/1972/04/01641562/13rRUxNmPIA>. Accessed 7 Feb. 2021. [↩︎][34]

 5. “Program for the International Conference on Computer Communication.” _The Papers of Clay T. Whitehead_, Box 42, <https://d3so5znv45ku4h.cloudfront.net/Box+042/013_Speech-International+Conference+on+Computer+Communications,+Washington,+DC,+October+24,+1972.pdf>. Accessed 7 Feb. 2021. [↩︎][35]

 6. It’s actually not clear to me which room was used for the ARPANET demonstration. Lots of sources talk about a “ballroom,” but the Washington Hilton seems to consider the room with the name “Georgetown” more of a meeting room. So perhaps the demonstration was in the International Ballroom instead. But RFC 372 alludes to a booking of the “Georgetown Ballroom” for the demonstration. A floorplan of the Washington Hilton can be found [here][36]. [↩︎][37]

 7. Hafner, p. 179. [↩︎][38]

 8. Ibid., p. 178. [↩︎][39]

 9. Bob Metcalfe. “Scenarios for Using the ARPANET.” _Collections-Computer History Museum_, <https://www.computerhistory.org/collections/catalog/102784024>. Accessed 7 Feb. 2021. [↩︎][40]

 10. Hafner, p. 176. [↩︎][41]

 11. Robert H. Thomas. “Planning for ACCAT Remote Site Operations.” BBN Report No. 3677, October 1977, <https://apps.dtic.mil/sti/pdfs/ADA046366.pdf>. Accessed 7 Feb. 2021. [↩︎][42]

 12. Hafner, p. 12. [↩︎][43]

 13. Joy Lisi Rankin. _A People’s History of Computing in the United States_. Cambridge, MA, Harvard University Press, 2018, p. 84. [↩︎][44]

 14. Rankin, p. 93. [↩︎][45]

 15. Steve Crocker. Personal interview. 17 Dec. 2020. [↩︎][46]

 16. Crocker sent me the minutes for this meeting. The document lists everyone who attended. [↩︎][47]

 17. Steve Crocker. Personal interview. [↩︎][48]

 18. Hafner, p. 146. [↩︎][49]

 19. “Completion Report / A History of the ARPANET: The First Decade.” BBN Report No. 4799, April 1981, <https://walden-family.com/bbn/arpanet-completion-report.pdf>, p. II-13. [↩︎][50]

 20. I’m referring here to RFC 54, “Official Protocol Proffering.” [↩︎][51]

 21. Hafner, p. 175. [↩︎][52]

 22. “Completion Report / A History of the ARPANET: The First Decade,” p. II-29. [↩︎][53]

--------------------------------------------------------------------------------

via: https://twobithistory.org/2021/02/07/arpanet.html

Author: [Two-Bit History][a]

Collector: [lujun9972][b]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/ARPANET
[2]: tmp.pnPpRrCI3S#fn:1
[3]: tmp.pnPpRrCI3S#fn:2
[4]: tmp.pnPpRrCI3S#fn:3
[5]: tmp.pnPpRrCI3S#fn:4
[6]: tmp.pnPpRrCI3S#fn:5
[7]: tmp.pnPpRrCI3S#fn:6
[8]: tmp.pnPpRrCI3S#fn:7
[9]: tmp.pnPpRrCI3S#fn:8
[10]: https://archive.computerhistory.org/resources/access/text/2019/07/102784024-05-001-acc.pdf
[11]: tmp.pnPpRrCI3S#fn:9
[12]: tmp.pnPpRrCI3S#fn:10
[13]: tmp.pnPpRrCI3S#fn:11
[14]: tmp.pnPpRrCI3S#fn:12
[15]: tmp.pnPpRrCI3S#fn:13
[16]: tmp.pnPpRrCI3S#fn:14
[17]: https://twobithistory.org/2018/05/27/semantic-web.html
[18]: https://twobithistory.org/2018/12/18/rss.html
[19]: https://twobithistory.org/2020/01/05/foaf.html
[20]: tmp.pnPpRrCI3S#fn:15
[21]: tmp.pnPpRrCI3S#fn:16
[22]: tmp.pnPpRrCI3S#fn:17
[23]: tmp.pnPpRrCI3S#fn:18
[24]: tmp.pnPpRrCI3S#fn:19
[25]: tmp.pnPpRrCI3S#fn:20
[26]: tmp.pnPpRrCI3S#fn:21
[27]: tmp.pnPpRrCI3S#fn:22
[28]: https://twitter.com/TwoBitHistory
[29]: https://twobithistory.org/feed.xml
[30]: https://twitter.com/TwoBitHistory/status/1277259930555363329?ref_src=twsrc%5Etfw
[31]: tmp.pnPpRrCI3S#fnref:1
[32]: tmp.pnPpRrCI3S#fnref:2
[33]: tmp.pnPpRrCI3S#fnref:3
[34]: tmp.pnPpRrCI3S#fnref:4
[35]: tmp.pnPpRrCI3S#fnref:5
[36]: https://www3.hilton.com/resources/media/hi/DCAWHHH/en_US/pdf/DCAWH.Floorplans.Apr25.pdf
[37]: tmp.pnPpRrCI3S#fnref:6
[38]: tmp.pnPpRrCI3S#fnref:7
[39]: tmp.pnPpRrCI3S#fnref:8
[40]: tmp.pnPpRrCI3S#fnref:9
[41]: tmp.pnPpRrCI3S#fnref:10
[42]: tmp.pnPpRrCI3S#fnref:11
[43]: tmp.pnPpRrCI3S#fnref:12
[44]: tmp.pnPpRrCI3S#fnref:13
[45]: tmp.pnPpRrCI3S#fnref:14
[46]: tmp.pnPpRrCI3S#fnref:15
[47]: tmp.pnPpRrCI3S#fnref:16
[48]: tmp.pnPpRrCI3S#fnref:17
[49]: tmp.pnPpRrCI3S#fnref:18
[50]: tmp.pnPpRrCI3S#fnref:19
[51]: tmp.pnPpRrCI3S#fnref:20
[52]: tmp.pnPpRrCI3S#fnref:21
[53]: tmp.pnPpRrCI3S#fnref:22

@ -0,0 +1,90 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Understanding Linus's Law for open source security)
[#]: via: (https://opensource.com/article/21/2/open-source-security)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Understanding Linus's Law for open source security
======

Linus's Law is that given enough eyeballs, all bugs are shallow. How does this apply to open source software security?

![Hand putting a Linux file folder into a drawer][1]

In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. This article discusses Linux's influence on the security of open source software.

An often-praised virtue of open source software is that its code can be reviewed (or "audited," as security professionals like to say) by anyone and everyone. However, if you ask open source users when they last reviewed code, you might get answers ranging from a blank stare to an embarrassed murmur. Besides, some open source applications are really big, so it can be difficult to review every single line of code effectively.

Extrapolating from these slightly uncomfortable truths, you have to wonder: When nobody looks at the code, does it really matter whether it's open or not?

### Should you trust open source?

We tend to make a trite assumption in hobbyist computing that open source is "more secure" than anything else. We don't often talk about what that means, what the basis of comparison is ("more" secure than what?), or how the conclusion has even been reached. It's a dangerous statement to make because it implies that as long as you call something _open source_, it automatically and magically inherits enhanced security. That's not what open source is about, and in fact, it's what open source security is very much against.

You should never assume an application is secure unless you have personally audited and understood its code. Once you have done this, you can assign _ultimate trust_ to that application. Ultimate trust isn't a thing you do on a computer; it's something you do in your own mind: You trust software because you choose to believe that it is secure, at least until someone finds a way to exploit that software.

You're the only person who can place ultimate trust in that code, so every user who wants that luxury must audit the code for themselves. Taking someone else's word for it doesn't count!

So until you have audited and understood a codebase for yourself, the maximum trust level you can give to an application falls somewhere on a spectrum from _not trustworthy at all_ to _pretty trustworthy_. There's no cheat sheet for this. It's a personal choice you must make for yourself. If you've heard from people you strongly trust that an application is secure, then you might trust that software more than you trust something for which you've gotten no trusted recommendations.

Because you cannot audit proprietary (non-open source) code, you can never assign it _ultimate trust_.

### Linus's Law

The reality is, not everyone is a programmer, and not everyone who is a programmer has the time to dedicate to reviewing hundreds and hundreds of lines of code. So if you're not going to audit code yourself, then you must choose to trust (to some degree) the people who _do_ audit code.

So exactly who does audit code, anyway?

Linus's Law asserts that _given enough eyeballs, all bugs are shallow_, but we don't really know how many eyeballs are "enough." However, don't underestimate the number. Software is very often reviewed by more people than you might imagine. The original developer or developers obviously know the code that they've written. However, open source is often a group effort, so the longer code stays open, the more software developers end up seeing it. A developer who wants to write new features for a project must first learn its codebase, which means reviewing major portions of the code.
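
A back-of-the-envelope model makes the "enough eyeballs" intuition concrete. This is my illustration, not a formal result: if each independent reviewer has some fixed probability of spotting a given bug, the chance that the bug survives every reviewer shrinks exponentially as eyeballs accumulate.

```python
# Toy model of Linus's Law: assume each of n independent reviewers has
# probability p of spotting a bug. The bug survives unnoticed only if
# every reviewer misses it, which happens with probability (1 - p) ** n.

def survival_probability(p, n):
    """Probability that a bug goes unnoticed by all n reviewers."""
    return (1 - p) ** n

# Even weak reviewers (5% catch rate each) make deep bugs shallow in
# aggregate: the survival odds collapse as reviewers are added.
for n in (1, 10, 50):
    print(n, round(survival_probability(0.05, n), 3))
```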

Open source packagers also get involved with many projects in order to make them available to a Linux distribution. Sometimes an application can be packaged with almost no familiarity with the code, but often a packager gets familiar with a project's code, both because they don't want to sign off on software they don't trust and because they may have to make modifications to get it to compile correctly. Bug reporters and triagers also sometimes get familiar with a codebase as they try to solve anomalies ranging from quirks to major crashes. Of course, some bug reporters inadvertently reveal code vulnerabilities not by reviewing the code themselves but by bringing attention to something that obviously doesn't work as intended. Sysadmins frequently become intimately familiar with the code of important software their users rely upon. Finally, there are security researchers who dig into code exclusively to uncover potential exploits.

### Trust and transparency

Some people assume that because major software is composed of hundreds of thousands of lines of code, it's basically impossible to audit. Don't be fooled by how much code it takes to make an application run. You don't actually have to read millions of lines. Code is highly structured, and exploitable flaws are rarely just a single line hidden among the millions; there are usually whole functions involved.
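
To make that concrete, here is a contrived sketch, invented for illustration and not taken from any real project, of the kind of function-level flaw a reviewer looks for. The vulnerability lives in the function's whole approach, not in one bad line:

```python
# A contrived shell-injection flaw: the bug is the function's design
# (building a shell command by string concatenation), so a reviewer
# flags the whole function, not a single line.

def archive_logs_unsafe(filename):
    # FLAWED: interpolating user input into a shell command string lets
    # an attacker smuggle in extra commands, e.g. "logs.txt; rm -rf /".
    return "tar czf backup.tar.gz " + filename

def archive_logs_safe(filename):
    # The reviewed fix: build an argument list so the filename is never
    # parsed by a shell, no matter what characters it contains.
    return ["tar", "czf", "backup.tar.gz", filename]

malicious = "logs.txt; rm -rf /"
print(archive_logs_unsafe(malicious))  # the injected ";" survives in the command string
print(archive_logs_safe(malicious))    # the whole string stays one argument
```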

There are exceptions, of course. Sometimes a serious vulnerability is enabled with just one system call or by linking to one flawed library. Luckily, those kinds of errors are relatively easy to notice, thanks to the active role of security researchers and vulnerability databases.

Some people point to bug trackers, such as the [Common Vulnerabilities and Exposures (CVE)][2] website, and deduce that it's actually as plain as day that open source isn't secure. After all, hundreds of security risks are filed against lots of open source projects, out in the open for everyone to see. Don't let that fool you, though. Just because you don't get to see the flaws in closed software doesn't mean those flaws don't exist. In fact, we know that they do because exploits are filed against them, too. The difference is that _all_ exploits against open source applications are available for developers (and users) to see so those flaws can be mitigated. That's part of the system that boosts trust in open source, and it's wholly missing from proprietary software.

There may never be "enough" eyeballs on any code, but the stronger and more diverse the community around the code, the better chance there is to uncover and fix weaknesses.

### Trust and people

In open source, the probability that many developers, each working on the same project, have noticed something _not secure_ but have all remained equally silent about that flaw is low, because humans rarely agree to conspire in this way. We've seen how disjointed human behavior can be recently with COVID-19 mitigation:

  * We've all identified a flaw (a virus).
  * We know how to prevent it from spreading (stay home).
  * Yet the virus continues to spread because one or more people deviate from the mitigation plan.

The same is true for bugs in software. If there's a flaw, someone who notices it will bring it to light (provided, of course, that someone sees it).

However, with proprietary software, many developers working on a project may notice something insecure yet remain silent, because the proprietary model relies on paychecks. If a developer speaks out about a flaw, that developer may at best hurt the software's reputation, thereby decreasing sales, or at worst be fired. Developers paid to work on software in secret do not tend to talk about its flaws. If you've ever worked as a developer, you've probably signed an NDA and been lectured on the importance of trade secrets. Proprietary software encourages, and more often enforces, silence even in the face of serious flaws.

### Trust and software

Don't trust software you haven't audited.

If you must trust software you haven't audited, then choose to trust code that's exposed to many developers, each of whom is independently likely to speak up about a vulnerability.

Open source isn't inherently more secure than proprietary software, but the systems in place to fix it are far better planned, implemented, and staffed.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/open-source-security

Author: [Seth Kenlon][a]

Collector: [lujun9972][b]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://cve.mitre.org

100
sources/talk/20210211 Understanding Open Governance Networks.md
Normal file

@ -0,0 +1,100 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Understanding Open Governance Networks)
[#]: via: (https://www.linux.com/news/understanding-open-governance-networks/)
[#]: author: (The Linux Foundation https://www.linuxfoundation.org/en/blog/understanding-open-governance-networks/)

Understanding Open Governance Networks
======

![][1]

Throughout the modern business era, industries and commercial operations have shifted substantially to digital processes. Whether you look at EDI as a means to exchange invoices or at today’s cloud-based billing and payment solutions, businesses have steadily been moving toward increasing digital operations. In the last few years, we’ve seen the promises of digital transformation come alive, particularly in [industries that have shifted to software-defined models][2]. The next step of this journey will involve enabling digital transactions through decentralized networks.

A fundamental adoption issue will be figuring out who controls and decides how a decentralized network is governed. It may seem oxymoronic at first, but decentralized networks still need governance. A future may hold autonomously self-governing decentralized networks, but that model is not accepted in industries today. The governance challenge with a decentralized network lies in who will establish and maintain its policies, and how. Network operations, on/offboarding of participants, fees, configurations, and software changes are among the issues that will have to be decided to achieve a successful network. No company wants to participate in, or take a dependency on, a network that is controlled or run by a competitor, a potential competitor, or any single stakeholder at all, for that matter.

Earlier this year, [we presented a solution for Open Governance Networks][3] that enable an industry or ecosystem to govern itself in an open, inclusive, neutral, and participatory model. You may be surprised to learn that it’s based on best practices in open governance we’ve developed over decades of facilitating the world’s most successful and competitive open source projects.

### The Challenge

For the last few years, a running technology joke has been “describe your problem, and someone will tell you blockchain is the solution.” Many other concerns have been raised and much confusion created as overnight headlines hyped cryptocurrency schemes. Despite all this, behind the scenes and all along, sophisticated companies understood that distributed ledger technology would be a powerful enabler for tackling complex challenges in an industry, or even a section of an industry.

At the Linux Foundation, we focused on enabling those organizations to collaborate on open source enterprise blockchain technologies within our Hyperledger community. That community has driven collaboration on every aspect of enterprise blockchain technology, including identity, security, and transparency. Like other Linux Foundation projects, these enterprise blockchain communities are open, collaborative efforts. We have had participants from many vertical industries engage, including retail, automotive, aerospace, and banking, bringing real industry challenges they needed to solve. And in this subset of cases, enterprise blockchain is the answer.

The technology is ready. Enterprise blockchain has been through many proof-of-concept implementations, and we’ve already seen that many organizations have shifted to production deployments. A few notable examples are:

  * [Trust Your Supplier Network][4]: 25 major corporate members, from Anheuser-Busch InBev to UPS. In production since September 2019.
  * [Foodtrust][5]: Launched August 2017 with ten members; now being used by all major retailers.
  * [Honeywell][6]: 50 vendors with storefronts in the new marketplace. In its first year, GoDirect Trade processed more than $5 million in online transactions.

However, just because we have the technology doesn’t mean we have the appropriate conditions to solve adoption challenges. A certain set of challenges about networks’ governance has become a “last mile” problem for industry adoption. While there are already many examples of successful production deployments and multi-stakeholder engagements for commercial enterprise blockchains, specific adoption scenarios have been halted over uncertainty, or mistrust, about who will govern a blockchain network and how.

To state the issue precisely: in many situations, company A does not want to depend on, or trust, company B to control a network. For specific solutions that require broad industry participation to succeed, you can name any industry, and there will be a company A and a company B.

We think the solution to this challenge will be Open Governance Networks.

### The Linux Foundation vision of the Open Governance Network

An Open Governance Network is a distributed ledger service, composed of nodes, operated under the policies and directions of an inclusive set of industry stakeholders.

Open Governance Networks will set the policies and rules for participation in a decentralized ledger network that acts as an industry utility for transactions and data sharing among participants that have permissions on the network. The Open Governance Network model allows any organization to participate. Those organizations that want to be active in sharing the operational costs will benefit from having a representative say in the policies and rules for the network itself. The software underlying the Open Governance Network will be open source, including the configurations and build tools, so that anyone can validate whether a network node complies with the appropriate policies.
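
That last clause, that anyone can validate whether a node complies with the published policies, is in essence a reproducible-build check. A minimal sketch of the idea follows; the function names and the approved-release table are invented here and are not part of any LF specification:

```python
# Sketch: verify that a node runs an artifact built from the published
# open source code by comparing its digest against a community-approved
# list of release digests. The policy format here is hypothetical.
import hashlib

APPROVED_RELEASES = {
    # digest of each build the community signed off on -> version label
    hashlib.sha256(b"node-software-v1.4.2").hexdigest(): "v1.4.2",
}

def node_complies(artifact_bytes):
    """Return the approved version label, or None if the build is unrecognized."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return APPROVED_RELEASES.get(digest)

print(node_complies(b"node-software-v1.4.2"))  # -> v1.4.2
print(node_complies(b"tampered-build"))        # -> None
```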

Many who have worked with the Linux Foundation will recognize the open, neutral, and participatory governance model under a nonprofit structure that has been thriving for decades in successful open source software communities. All we’re doing here is taking the same core principles of what makes open governance work for open source software, open standards, and open collaboration and applying those principles to managing a distributed ledger. This is a model that the Linux Foundation has used successfully in other communities, such as the [Let’s Encrypt][7] certificate authority.

Our ecosystem members trust the Linux Foundation to help solve this last mile problem using open governance under a neutral nonprofit entity. This is one solution to the concerns about neutrality and distributed control. In pan-industry use cases, it is generally not acceptable for one participant in the network to have power in any way that could be used as an advantage over someone else in the industry. The control of a ledger is a valuable asset, and competitive organizations generally have concerns about allowing one entity to control this asset. If not hosted in a neutral environment for the community’s benefit, network control can become a leverage point over network users.

We see this neutrality-of-control challenge as the primary reason why some privately held networks have struggled to gain widespread adoption. To encourage participation, industry leaders are looking for a neutral governance structure, and the Linux Foundation has proven that open governance models accomplish that exceptionally well.

This neutrality-of-control issue is very similar to the rationale for public utilities. Because the economic model mirrors a public utility, we debated calling these “industry utility networks.” In our conversations, we have learned that industry participants are open to sharing the cost burden to stand up and maintain a utility. Still, they want a low-cost, not profit-maximizing, model. That is why our nonprofit model makes the most sense.

It’s also not a public utility in that each network we foresee today would be restricted in participation to those who have a stake in the network, not any random person in the world. There’s a layer of human trust that our communities have been enabling on top of distributed networks, which started with the [Trust over IP Foundation][8].

Unlike public cryptocurrency networks, where anyone can view the ledger or submit proposed transactions, industries have a natural need to limit access to legitimate parties in their industry. With minor adjustments to address the need for policies for transactions on the network, we believe a similar governance model applied to distributed ledger ecosystems can resolve concerns about the neutrality of control.

### Understanding LF Open Governance Networks

Open Governance Networks can be reduced to the following building block components:

  * Business Governance: Networks need a decision-making body to establish core policies (e.g., network policies), make funding and budget decisions, contract with a network manager, and handle other business matters necessary for the network’s success. The Linux Foundation establishes a governing board to manage the business governance.
  * Technical Governance: Networks will require software. A technical open source community will openly maintain the software, specifications, or configuration decisions implemented by the network nodes. The Linux Foundation establishes a technical steering committee to oversee technical projects, configurations, working groups, etc.
  * Transaction Entity: Networks will require a transaction entity that will a) act as counterparty to agreements with parties transacting on the network, b) collect fees from participants, and c) execute contracts for operational support (e.g., hiring a network manager).

Of these building blocks, the Linux Foundation already offers its communities the business and technical governance needed for Open Governance Networks. The final component is new: LF Open Governance Networks.

LF Open Governance Networks will enable our communities to establish their own Open Governance Network and have an entity to process agreements and collect transaction fees. This new entity is a Delaware nonprofit, nonstock corporation that will maximize utility, not profit. Through agreements with the Linux Foundation, LF Open Governance Networks will be available to Open Governance Networks hosted at the Linux Foundation.

If you’re interested in learning more about hosting an Open Governance Network at the Linux Foundation, please contact us at **[governancenetworks@linuxfoundation.org][9]**.
|
||||
|
||||
The post [Understanding Open Governance Networks][10] appeared first on [Linux Foundation][11].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/news/understanding-open-governance-networks/
|
||||
|
||||
作者:[The Linux Foundation][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxfoundation.org/en/blog/understanding-open-governance-networks/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linux.com/wp-content/uploads/2021/02/understanding-opengovnetworks.png
|
||||
[2]: https://www.linuxfoundation.org/blog/2020/09/software-defined-vertical-industries-transformation-through-open-source/
|
||||
[3]: https://www.linuxfoundation.org/blog/2020/10/introducing-the-open-governance-network-model/
|
||||
[4]: https://www.hyperledger.org/learn/publications/chainyard-case-study
|
||||
[5]: https://www.hyperledger.org/learn/publications/walmart-case-study
|
||||
[6]: https://www.hyperledger.org/learn/publications/honeywell-case-study
|
||||
[7]: https://letsencrypt.org/
|
||||
[8]: https://trustoverip.org/
|
||||
[9]: mailto:governancenetworks@linuxfoundation.org
|
||||
[10]: https://www.linuxfoundation.org/en/blog/understanding-open-governance-networks/
|
||||
[11]: https://www.linuxfoundation.org/
|
@ -0,0 +1,124 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (18 ways to differentiate open source products from upstream suppliers)
[#]: via: (https://opensource.com/article/21/2/differentiating-products-upstream-suppliers)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)

18 ways to differentiate open source products from upstream suppliers
======
Open source products must create enough differentiated value that customers will voluntarily pay for them versus another (or free) product.
![Tips and gears turning][1]

In the first three parts of this series, I explored [open source as a supply chain][2], [what a product is][3], and [what product managers do][4]. In this fourth article, I'll look at a plethora of methods to differentiate open source software products from their upstream open source projects.

Since open source projects are essentially information products, these methods are likely to apply to many information-related products (think YouTube creators) that give away a component of their value for free. Product managers have to get creative when the information and material needed to build their product are freely available to users.

### Creating and capturing value

Product managers are responsible for creating solutions that attract and retain customers. To create a customer, they must provide value in exchange for money. Like a good salesperson, a product manager should never feel guilty about charging for their product (see [_How to sell an open source product_][5] by John Mark Walker).

Products built on open source are fundamentally no different than any other product or service. They must create value for customers. In fact, they must create enough value that customers will voluntarily pay a price sufficient to cover the development costs and return a profit. These products must also be differentiated from competing products and services, as well as from their upstream projects.

![Inputs for creating value][6]

(Scott McCarty, [CC BY-SA 4.0][7])

While products built on open source software are not fundamentally different from other products and services, there are some differences. First, some of the development costs are defrayed among all open source contributors. These costs can include code, testing, documentation, hardware, project hosting costs, etc. But even when development costs are defrayed in open source, the vendor that productizes the code still incurs costs. These can include employee costs for research, analysis, security, performance testing, certification processes (e.g., collaborating with hardware vendors, cloud providers, etc.), and of course, sales and marketing.

![Inputs for solving market problems][8]

(Scott McCarty, [CC BY-SA 4.0][7])

Successful open source products must be able to charge a price that is sufficient to pay for the defrayed upstream open source contributions (development costs) and the downstream productization costs (vendor costs). Stated another way, products can only charge a sufficient price if they create value that can only be captured by customers paying for them. That might sound harsh, but it's a reality for all products. There's a saying in product management: Pray to pay doesn't work. With that said, don't be too worried. There are ethical ways to capture value.

### Types of value

Fundamentally, there are two types of value: proprietary and non-proprietary. Proprietary is a bad word in open source software but an attractive word in manufacturing, finance, and other industries. Many financial companies highlight their proprietary algorithms, as do drug companies and manufacturers with their compounds and processes. In software, proprietary value is often thought to be completely incongruous with free and open source software. People often assume proprietary value is a binary decision, and it's difficult for them to imagine proprietary value in the context of open source software without it being artificially constrained by a license. However, as we'll attempt to demonstrate, it's not so clear cut.

From an open source product management perspective, you can define proprietary value as anything that would be very difficult or nearly impossible for the customer to recreate themselves—or something the potential customer doesn't believe they can recreate. Commodity value is the opposite of proprietary: value the customer believes they could construct (or reconstruct) given enough time and money.

Reconstructing commodity value instead of purchasing it makes sense only if it's cheaper or easier than buying a product. Stated another way, a good product should save a customer money compared to building the solution themselves. It's in this cost gap that open source products exist and thrive.

With products built, or mostly built, on an open source supply chain, customers retain the "build versus buy" decision and can stop paying at any time. This is often true with open core products as well. As long as the majority of the supply chain is open source, the customer likely could rebuild a few components to get what they need. The open source product manager's job is the same as for any other product or service: to create and retain customers by providing value worth paying for.

### Differentiators for open source product managers

Like any artist, a product manager traffics in value as their medium. They must price and package their product. They must constantly strive to differentiate their product from competitors in the marketplace and from the upstream projects that are suppliers to that product. However, the supply chain is but one tool a product manager can use to differentiate and create a customer.

This is a less-than-exhaustive list that should give you some ideas about how product managers can differentiate their products and create value. As you read through the list, think deeply about whether a customer could recreate the value of each item given enough time, money, and willpower.

* **Supply chain:** Selecting the upstream suppliers is important. The upstream community's health is a selling point over products based on competing upstream projects. A perfect example of this kind of differentiation is with products such as OpenShift, Docker EE, and Mesosphere, which respectively rely on Kubernetes, Swarm, and Apache Mesos as upstream suppliers. Similar to how electric cars are replacing gasoline engines, the choice of technology and its supplier provides differentiation.

* **Quality engineering:** Upstream continuous integration and continuous delivery (CI/CD) and user testing are wonderful bases for building a product. However, it's critical to ensure that the downstream product, often made up of multiple upstream projects, works well together with specific versions of all the components. Testing the entire solution together applies as much to differentiating from upstream suppliers as it does from competitive products. Customers want products that just work.

* **Industry certifications:** Specific classes of customers, such as government, financial services, transportation, manufacturing, and telecom, often have certification requirements. Typically, these are related to security or auditing and are often quite expensive. Certifications are great because they differentiate the product from competitors and upstream.

* **Hardware or cloud provider certifications:** The dirty secret of cheap hardware is that it changes all the time. Often this hardware has new capabilities with varying levels of maturity. Hardware certifications provide a level of confidence that the software will run well on a specific piece of hardware or cloud virtual machine. They also provide a level of assurance that the product company and the platform on which it is certified to run are committed to making it work well together. A potential customer could always vet hardware themselves, but they often don't have deep relationships with hardware vendors and cloud providers, making it difficult to demand fixes and patches.

* **Ecosystem:** This represents access to a plethora of add-on solutions from third-party vendors. Again, the ecosystem provides some assurance that all the entities work together to make things work well. Small companies would likely find it difficult or impossible to demand that individual software vendors certify their privately built platforms. Integrations like these are usually quite expensive for an individual user and are best defrayed across a product's customers.

* **Lifecycle:** Upstream projects are great because they move quickly and innovate. But many different versions of many different upstream projects can go into a single product's supply chain. Ensuring that all the versions of the different upstream projects work together over a given lifecycle is a lot of work. A longer lifecycle gives customers time to get a return on investment. Stated another way, users spend a lot of time and money planning and rolling out software. A product's lifecycle commitment ensures that customers can use the software and receive value from their investment for a reasonable amount of time.

* **Packaging and distribution:** Once a vendor commits to supporting a product for a given lifecycle (e.g., five years), they must also commit to providing packaging and distribution during that time. Both products and cloud services need to provide customers the ability to plan a roadmap, execute a rollout, and expand over the desired lifecycle, so packages or services need to remain available for customer use.

* **Documentation:** This is often overlooked by both open source projects and vendors. Aligning product documentation to the product lifecycle, versus the upstream supplier documentation, is extremely important. It's also important to document the entire solution working together, whether that's installation or use cases for end users. It's beneficial for customers to have documentation that applies to the specific combination of components they are using.

* **Security:** Closely related to the product lifecycle, vendors must commit to providing security during the time the product is supported. This includes analyzing code, scoring vulnerabilities, patching those vulnerabilities, and verifying that they are patched. This is a particularly opportune area for products to differentiate themselves from upstream suppliers. It really is value creation through data.

* **Performance:** Also closely related to product lifecycle, vendors must commit to providing performance testing, tuning recommendations, and sometimes even backporting performance improvements during the product's lifecycle. This is another opportune area for products.

* **Indemnification:** This is essentially insurance in case the company using the software is sued by a patent troll. Often, the corporate legal team just won't have the skill set needed to defend themselves. While potential customers could pay a third party for legal services, would they know the software as well?

* **Compute resources:** You simply can't get access to compute resources without paying for them. There are free trials, but sustained usage always requires paying, either through a cloud provider or by buying hardware. In fact, this is one of the main differentiated values provided by infrastructure as a service (IaaS) and software as a service (SaaS) cloud providers. This is quite differentiated from upstream suppliers because they will never have the budget to provide free compute, storage, and network.

* **Consulting:** Access to operational knowledge to set up and use the software may be a differentiator. Clearly, a company can hire the talent, given enough budget and willpower, but talent can be difficult to find. In fact, one might argue that software vendors have a much better chance of attracting the top talent, essentially creating a talent vacuum for users trying to reconstruct the value themselves.

* **Training:** Similar to consulting, the company that wrote, configured, released, and operated the software at scale often knows how to use it best. Again, a customer could hire the talent given enough budget and willpower.

* **Operational knowledge:** IaaS and SaaS solutions often provide this kind of value. Similarly, knowledge bases and connected experiences that analyze an installed environment's configuration to provide the user with insights (e.g., OpenShift, Red Hat Insights) can provide this value. Operational knowledge is similar to training and consulting.

* **Support:** This includes the ability to call for help or file support tickets and is similar to training, consulting, and operational knowledge. Support is often a crutch for open source product managers; again, customers can often recreate their own support organizations, depending on where they want to strategically invest budget and people, especially for level one and level two support. Level three support (e.g., kernel programmers) might be harder to hire.

* **Proprietary code:** This is code that's not released under an open source license. A customer could always build a software development team and augment the open core code with the missing features they need. For the vendor, proprietary code has the downside of creating an unnatural competition between the upstream open source supplier and the downstream product. Furthermore, this unnatural split between open source and proprietary code does not provide the customer more value. It always feels like value is being unnaturally held back. I would argue that this is a very suboptimal form of value capture.

* **Brand:** Brand equity is not easily measurable. It really comes down to a general level of trust. The customer needs to believe that the solution provider can and will help them when they need it. It's slow to build a brand and easy to lose it. Careful thought might reveal that the same is true with internal support organizations in a company. Users will quickly lose trust in internal support organizations, and it can take years to build it back up.

Reading through the list, do you think a potential customer could recreate the value of almost any of these items? The answer is almost assuredly yes. This is true with almost any product feature or capability, whether it's open source or even proprietary. Cloud providers build their own CPUs and hardware, and delivery companies (e.g., UPS, Amazon, etc.) sometimes build their own vehicles. Whether it makes sense to build or buy all depends on the business and its specific needs.

### Add value in the right places

The open source licensing model led to an explosion in the availability of components that can be assembled into a software product. Stated another way, it formed a huge supply chain of software. Product managers can create products from a completely open source supply chain (e.g., Red Hat, Chef, SUSE, etc.) or mix and match open source and proprietary technology (e.g., open core like Sourcefire or SugarCRM). Choosing a fully open source versus open core methodology should not be confused with solving a business problem. Customers only buy products that solve their problems.

Enterprise open source products are solutions to problems, much like a vehicle sold by an auto manufacturer. The product manager for an open source product determines the requirements, things like the lifecycle (number of years), security (important certifications), performance (important workloads), and ecosystem (partners). Some of these requirements can be met by upstream suppliers (open source projects); some cannot.

An open source product is a composition of value consumed through a supply chain of upstream suppliers and new value added by the company creating it. This new value combined with the consumed value is often worth more together and sold at a premium. It's the responsibility of product teams (including engineering, quality, performance, security, legal, etc.) to add new value in the right places, at the right times, to make their open source products worth the price customers pay versus building out the infrastructure necessary to assemble, maintain, and support the upstream components themselves.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/differentiating-products-upstream-suppliers

作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)
[2]: https://opensource.com/article/20/10/open-source-supply-chain
[3]: https://opensource.com/article/20/10/defining-product-open-source
[4]: https://opensource.com/article/20/11/open-source-product-teams
[5]: https://opensource.com/article/20/6/sell-open-source-software
[6]: https://opensource.com/sites/default/files/uploads/creatingvalue1.png (Inputs for creating value)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/sites/default/files/uploads/creatingvalue2.png (Inputs for solving market problems)
128
sources/talk/20210212 How to adopt DevSecOps successfully.md
Normal file
@ -0,0 +1,128 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to adopt DevSecOps successfully)
[#]: via: (https://opensource.com/article/21/2/devsecops)
[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)

How to adopt DevSecOps successfully
======
Integrating security throughout the software development lifecycle is important, but it's not always easy.
![Target practice][1]

Adopting [DevOps][2] can help an organization transform and speed how its software is delivered, tested, and deployed to production. This is the well-known "DevOps promise" that has led to such a large surge in adoption.

We've all heard about the many successful DevOps implementations that changed how an organization approaches software innovation, making it fast and secure through agile delivery to get [ahead of competitors][3]. This is where we see DevOps' promises achieved and delivered.

But on the flip side, some DevOps adoptions cause more issues than benefits. This is the DevOps dilemma, where DevOps fails to deliver on its promises.

There are many factors involved in an unsuccessful DevOps implementation, and a major one is security. A poor security culture usually develops when security is left to the end of the DevOps adoption process. Applying existing security processes to DevOps can delay projects, cause frustration within the team, and create financial impacts that can derail a project.

[DevSecOps][4] was designed to avoid this very situation. Its purpose "is to build on the mindset that 'everyone is responsible for security…'" It also makes security a consideration at all levels of DevOps adoption.

### The DevSecOps process

Before DevOps and DevSecOps, the app security process looked something like the image below. Security came late in the software delivery process, after the software was accepted for production.

![Old software development process with security at the end][5]

(Michael Calizo, [CC BY-SA 4.0][6])

Depending on the organization's security profile and risk appetite, the application might even bypass security reviews and processes during acceptance. At that point, the security review becomes an audit exercise to avoid unnecessary project delays.

![Security as audit in software development][7]

(Michael Calizo, [CC BY-SA 4.0][6])

The DevSecOps [manifesto][8] says that the reason to integrate security into dev and ops at all levels is to implement security with less friction, foster innovation, and make sure security and data privacy are not left behind.

Therefore, DevSecOps encourages security practitioners to adapt and change their old, existing security processes and procedures. This may sound easy, but changing processes, behavior, and culture is always difficult, especially in large environments.

The DevSecOps principle's basic requirement is to introduce a security culture and mindset across the entire application development and deployment process. This means old security practices must be replaced by more agile and flexible methods so that security can iterate and adapt to the fast-changing environment. According to the DevSecOps manifesto, security needs to "operate like developers to make security and compliance available to be consumed as services."

DevSecOps should look like the figure below, where security is embedded across the delivery cycle and can iterate every time there is a need for change or adjustment.

![DevSecOps considers security throughout development][9]

(Michael Calizo, [CC BY-SA 4.0][6])

### Common DevSecOps obstacles

Any time changes are introduced, people find faults or issues with the new process. This is natural human behavior. The fear and inconvenience associated with learning new things are always met with adverse reactions; after all, humans are creatures of habit.

Some common obstacles in DevSecOps adoption include:

* **Vendor-defined DevOps/DevSecOps:** This means principles and processes are focused on product offerings, and the organization won't be able to build the approach. Instead, it will be limited to what the vendor provides.
* **Nervous people managers:** The fear of losing control is a real problem when change happens. Often, anxiety affects people managers' decision-making.
* **If it ain't broke, don't fix it:** This is a common mindset, and you really can't blame people for thinking this way. But the idea that the old way will survive despite new ways of delivering software and solutions must be challenged. To adapt to the agile application lifecycle, you need to change the processes to support the speed and agility it requires.
* **The Netflix and Uber effect:** Everybody knows that Netflix and Uber have successfully implemented DevSecOps; therefore, many organizations want to emulate them. Because they have a different culture than your organization, simply emulating them won't work.
* **Lack of measurement:** DevOps and DevSecOps transformation must be measured against set goals. Metrics might include software delivery performance or overall organization performance over time.
* **Checklist-driven security:** By using a checklist, the security team follows the same old, static, and inflexible processes that are neither useful nor applicable to the modern technologies developers use to make software delivery lean and agile. The introduction of the "[as code][10]" approach requires security people to learn how to code.
* **Security as a special team:** This is especially true in organizations transitioning to DevOps from the old ways of delivering software, where security is a separate entity. Because of the separation, trust is questionable among devs, ops, and security. This causes the security team to spend unnecessary time reviewing and governing DevOps processes and building pipelines instead of working closely with developers and ops teams to improve the software delivery flow.

### How to adopt DevSecOps successfully

Adopting DevSecOps is not easy, but being aware of common obstacles and challenges is key to your success.

Clearly, the biggest and most important change an organization needs to make is its culture. Cultural change usually requires executive buy-in, as a top-down approach is necessary to convince people to make a successful turnaround. You might hope that executive buy-in makes cultural change follow naturally, but don't expect smooth sailing—executive buy-in alone is not enough.

To help accelerate cultural change, the organization needs leaders and enthusiasts who will become agents of change. Embed these people in the dev, ops, and security teams to serve as advocates and champions for culture change. This also establishes a cross-functional team that will share successes and learnings with other teams to encourage wider adoption.

Once that is underway, the organization needs a DevSecOps use case to start with, something small with a high potential for success. This enables the team to learn, fail, and succeed without affecting the organization's core business.

The next step is to identify and agree on the definition of success. The DevSecOps adoption needs to be measurable; to do that, you need a dashboard that shows metrics such as:

* Lead time for a change
* Deployment frequency
* Mean time to restore
* Change failure rate

These metrics are a critical requirement to be able to identify processes and other things that require improvement. It's also a tool to declare if an adoption is a win or a bust. This methodology is called [event-driven transformation][11].
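As a sketch of what feeds such a dashboard, two of these metrics can be derived from nothing more than a deployment log. The file name, column layout, and numbers below are invented for illustration and would need to be adapted to whatever your CI/CD tooling actually records:

```shell
# Hypothetical deploys.csv: commit_ts,deploy_ts,failed (Unix epochs, 0/1 flag)
cat > deploys.csv <<'EOF'
1612130000,1612216400,0
1612220000,1612306400,1
1612310000,1612396400,0
EOF

# Mean lead time for a change and change failure rate
awk -F, '{ lead += $2 - $1; fail += $3 }
     END { printf "deployments: %d\n", NR
           printf "mean lead time: %.1f hours\n", lead / NR / 3600
           printf "change failure rate: %.0f%%\n", fail / NR * 100 }' deploys.csv
# prints:
#   deployments: 3
#   mean lead time: 24.0 hours
#   change failure rate: 33%
```

The value of keeping the calculation this simple is that the same numbers can be recomputed from raw data at any time, which keeps the dashboard honest.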

### Conclusion

When implemented properly, DevOps enables an organization to deliver software to production quickly and gain advantages over competitors. It allows the organization to fail small and recover fast, with the flexibility and efficiency to go to market early.

In summary, DevOps and DevSecOps adoption needs:

* Cultural change
* Executive buy-in
* Leaders and enthusiasts to act as evangelists
* Cross-functional teams
* Measurable indicators

Ultimately, the solution to the DevSecOps dilemma relies on cultural change to make the organization better.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/devsecops

作者:[Mike Calizo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mcalizo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/target-security.png?itok=Ca5-F6GW (Target practice)
[2]: https://opensource.com/resources/devops
[3]: https://www.imd.org/research-knowledge/articles/the-battle-for-digital-disruption-startups-vs-incumbents/
[4]: http://www.devsecops.org/blog/2015/2/15/what-is-devsecops
[5]: https://opensource.com/sites/default/files/uploads/devsecops_old-process.png (Old software development process with security at the end)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/devsecops_security-as-audit.png (Security as audit in software development)
[8]: https://www.devsecops.org/
[9]: https://opensource.com/sites/default/files/uploads/devsecops_process.png (DevSecOps considers security throughout development)
[10]: https://www.oreilly.com/library/view/devopssec/9781491971413/ch04.html
[11]: https://www.openshift.com/blog/exploring-a-metrics-driven-approach-to-transformation
102
sources/talk/20210213 5 reasons why I love coding on Linux.md
Normal file
@ -0,0 +1,102 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 reasons why I love coding on Linux)
|
||||
[#]: via: (https://opensource.com/article/21/2/linux-programming)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
5 reasons why I love coding on Linux
|
||||
======
|
||||
Linux is a great platform for programming—it's logical, easy to see the
|
||||
source code, and very efficient.
|
||||
![Terminal command prompt on orange background][1]
|
||||
|
||||
In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different ways to use Linux. Here I'll explain why so many programmers choose Linux.
|
||||
|
||||
When I first started using Linux, it was for its excellent multimedia support because I worked in film production. We found that the typical proprietary video editing applications couldn't handle most of the footage we were pulling from pretty much any device that could record an image. At the time, I wasn't aware that Linux had a reputation as an operating system for servers and programmers. The more I did on Linux, the more I wanted to control every aspect of it. And in the end, I discovered that a computer was at its most powerful when its user could "speak" its language. Within a few years of switching to Linux, I was scripting unattended video edits, stringing together audio files, bulk editing photos, and anything else I could imagine and then invent a solution for. It didn't take long for me to understand why programmers loved Linux, but it was Linux that taught me to love programming.

It turns out that Linux is an excellent platform for programmers, both new and experienced. It's not that you _need_ Linux to program. There are successful developers on all different kinds of platforms. However, Linux has much to offer developers. Here are a few things I've found useful.

### Foundations of logic

Linux is built around [automation][2]. It's very intentional that staple applications on Linux can at the very least launch from a terminal with additional options, and often they can even be used entirely from the terminal. This idea is sometimes mistaken as a primitive computing model because people mistakenly believe that writing software that operates from a terminal is just doing the bare minimum to get a working application. This is an unfortunate misunderstanding of how code works, but it's something many of us are guilty of from time to time. We think _more_ is always better, so an application containing 1,000 lines of code must be 100 times better than one with ten lines of code, right? The truth is that all things being equal, the application with greater flexibility is preferable, regardless of how that translates to lines of code.

On Linux, a process that might take you an hour when done manually can literally be reduced to one minute with the right terminal command, and possibly less when parsed out to [GNU Parallel][3]. This phenomenon requires a shift in how you think about what a computer does. For instance, if your task is to add a cover image to 30 PDF files, you might think this is a sensible workflow:

  1. Open a PDF in a PDF editor
  2. Open the front cover
  3. Append a PDF to the end of the cover file
  4. Save the file as a new PDF
  5. Repeat this process for each old PDF (but do not duplicate this process for each new PDF)

It's mostly common sense stuff, and while it's painfully repetitive, it does get the job done. On Linux, however, you're able to work smarter than that. The thought process is similar: First, you devise the steps required for a successful result. After some research, you'd find out about the `pdftk-java` command, and you'd discover a simple solution:

```
$ pdftk A=cover.pdf B=document_1.pdf \
  cat A B \
  output doc+cover_1.pdf
```

Once you've proven to yourself that the command works on one document, you take time to learn about looping options, and you settle on a parallel operation:

```
$ find ~/docs/ -name "*.pdf" | \
  parallel pdftk A=cover.pdf B={} \
  cat A B \
  output {.}.cover.pdf
```

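The `parallel` pipeline can also be written as a plain sequential shell loop, which is often easier to reason about first. This sketch is a dry run: it only `echo`es each `pdftk` command it would execute, and `cover.pdf` and the `doc_*.pdf` names are hypothetical stand-ins.

```shell
# Dry run: print one pdftk command per input file instead of running it.
# Replace echo with pdftk once the generated commands look right.
for doc in doc_1.pdf doc_2.pdf doc_3.pdf; do
  echo pdftk A=cover.pdf B="$doc" cat A B output "${doc%.pdf}.cover.pdf"
done
```

Here `"${doc%.pdf}"` strips the `.pdf` suffix, mirroring what `{.}` does in the `parallel` version.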
It's a slightly different way of thinking because the "code" you write processes data differently than the enforced linearity you're used to. However, getting out of your old linear way of thinking is important for writing actual code later, and it has the side benefit of empowering you to compute smarter.

### Code connections

No matter what platform you're programming for, when you write code you're weaving an intricate latticework of invisible connections between many different files. In all but the rarest cases, code draws from headers and relies on external libraries to become a complete program. This happens on any platform, but Linux tends to encourage you to understand this for yourself, rather than blindly trusting the platform's development kit to take care of it for you.

Now, there's nothing wrong with trusting a development kit to resolve _library_ and _include_ paths. On the contrary, that kind of abstraction is a luxury you should enjoy. However, if you never understand what's happening, then it's a lot harder to override the process when you need to do something that a dev kit doesn't know about or to troubleshoot problems when they arise.

This translates across platforms, too. You can develop code on Linux that you intend to run on Linux as well as other operating systems, and your understanding of how code compiles helps you hit your targets.

Admittedly, you don't learn these lessons just by using Linux. It's entirely possible to blissfully code in a good IDE and never give a thought to what version of a library you have installed or where the development headers are located. However, Linux doesn't hide anything from you. It's easy to dig down into the system, locate the important parts, and read the code they contain.

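You can make that latticework visible on most Linux systems by asking the dynamic linker what an ordinary binary depends on. A quick sketch (any installed binary works; `/bin/ls` is just a convenient example):

```shell
# List the shared libraries the dynamic linker resolves for /bin/ls.
# Each output line maps a library name to the on-disk .so file providing it.
ldd /bin/ls
```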
### Existing code

Knowing where headers and libraries are located is useful, but having them to reference is yet another bonus of programming on Linux. On Linux, you get to see the source code of basically anything you want (excluding applications that aren't open source but that run on Linux). The benefit here cannot be overstated. As you learn more about programming in general or about programming something new to you, you can learn a lot by referring to existing code on your Linux system. Many programmers have learned how to code by reading other people's open source code.

On proprietary systems, you might find developer documentation with code samples in it. That's great because documentation is important, but it pales in comparison to locating the exact functionality you want to implement and then finding the source code demonstrating how it was done in an application you use every day.

### Direct access to peripherals

Something I often took for granted after developing code for media companies using Linux is access to peripherals. For instance, when connecting a video camera to Linux, you can pull incoming data from **/dev/video0** or similar. Everything's in **/dev**, and it's always the shortest path between two points to get there.

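Even without a camera attached, the same idea is easy to try, since every device node is read like a file. This sketch substitutes the always-present `/dev/urandom` for `/dev/video0`:

```shell
# Read 8 raw bytes from a device node and print them as hex.
# A capture program reads /dev/video0 the same way.
head -c 8 /dev/urandom | od -An -tx1
```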
That's not the case on other platforms. Connecting to systems outside of the operating system is often a maze of SDKs, restricted libraries, and sometimes NDAs. This, of course, varies depending on what you're programming for, but it's hard to beat Linux's simple and predictable interface.

### Abstraction layers

Conversely, Linux also provides a healthy set of abstraction layers for when direct access or manual coding ends up creating more work than you want to do. There are conveniences found in Qt and Java, and there are whole stacks like PulseAudio, PipeWire, and GStreamer. Linux _wants_ you to be able to code, and it shows.

### Add to this list

There are more reasons that make coding on Linux a pleasure. Some are broad concepts and others are overly specific details that have saved me hours of frustration. Linux is a great place to be, no matter what platform you're targeting. Whether you're just learning about programming or you're a coder looking for a new digital home, there's no better workspace for programming than Linux.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/linux-programming

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
[2]: https://opensource.com/article/20/11/orchestration-vs-automation
[3]: https://opensource.com/article/18/5/gnu-parallel
@ -0,0 +1,112 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How open source provides students with real-world experience)
[#]: via: (https://opensource.com/article/21/2/open-source-student)
[#]: author: (Laura Novich https://opensource.com/users/laura-novich)

How open source provides students with real-world experience
======
Contributing to open source gives students the real-world experience
required to land a good job.
![Woman sitting in front of her computer][1]

In the movie _The Secret of My Success_, Brantley Foster (played by Michael J. Fox) expresses the exact thought that goes through every new graduate's mind: "How can I get any experience until I get a job that _gives_ me experience?"

The hardest thing to do when starting a new career is to get experience. Often this creates a paradox. How do you get work with no experience, and how do you get experience with no work?

In the open source software world, this conundrum is a bit less daunting because your experience is what you make of it. By working with open source projects sponsored by open source software (OSS) companies, you gain experience working on projects you like for companies that make you feel important, and then you use that experience to help find employment.

Most companies would never allow newbies to touch their intellectual property and collateral without signing a non-disclosure agreement (NDA) or going through some kind of training or security check. However, when your source code is open and anyone in the world can contribute to it (in addition to copying and using it), this is no longer an issue. In fact, open source companies embrace their contributors and create communities where students can easily get their feet wet and find their way in coding, testing, and documentation. Most open source companies depend on the contributions of others to get work done. This means the contributors work for free simply because they want to. For students, it translates into an unpaid internship and a way to get some real-world experience.

At [Our Best Words][2], we decided to run a pilot project to see if our students could work on an open source documentation project and find the experience beneficial to jumpstarting their new careers in technical communication.

![GitLab screenshot][3]

(Laura Novich, [CC BY-SA 4.0][4])

I was the initiator and point of contact for the project, and I approached several companies. The company that gave us the most positive response was [GitLab][5]. GitLab is a company that creates software for Git repository management, issue tracking, and continuous integration/continuous delivery (CI/CD) pipeline management. Its software is used by hundreds of thousands of organizations worldwide, and in 2019, it announced that it had achieved $100 million in annual recurring revenue.

GitLab's [Mike Jang][6] connected me with [Marcin Sedlak-Jakubowski][7] and [Marcia Dias Ramos][8], who are located closer to OBW's offices in Israel. We met to hammer out the details, and everyone had their tasks to launch the pilot in mid-September. Mike, Marcia, and Marcin hand-picked 19 issues for the students to solve. Each issue was tagged _Tich-Tov-only_ for OBW students, and any contributor who was not an OBW student was not allowed to work on the issue.

To prepare the students, I held several demonstrations with GitLab. The students had never used the software before, and some were quite nervous. As the backbone of GitLab is Git, a software tool the students were already familiar with, it wasn't too hard to learn. Following the demonstrations, I sent the students a link to a Google Drive folder with tutorials, a FAQ, and other valuable resources.

The issues the students were assigned came from GitLab's documentation. The documentation is written in Markdown and is checked with a linter (a static code analysis tool) called Vale. The students' assignments were to fix issues that the Vale linter found. The changes included fixing spelling, grammar, usage, and voice. In some cases, entire pages had to be rewritten.

As I wanted this project to run smoothly and successfully, we decided to limit the pilot to seven of our 14 students. This allowed me to manage the project more closely and to make sure each student had only two to three issues to handle during the two-month time period of the project.

![GitLab repo][9]

(Laura Novich, [CC BY-SA 4.0][4])

The OBW students who were part of this project (with links to their GitLab profiles) were:

  * [Aaron Gorsuch][10]
  * [Anna Lester][11]
  * [Avi Chazen][12]
  * [Ela Greenberg][13]
  * [Rachel Gottesman][14]
  * [Stefanie Saffern][15]
  * [Talia Novich][16]

We worked mostly during September and October and wrapped up the project in November. Every issue the students had was put on a Kanban board. We had regular standup meetings, where we discussed what we were doing and any issues causing difficulty. There were many teachable moments where I would help with repository issues, troubleshooting merge requests, and helping the students understand technical writing theories in practice.

![Kanban board][17]

(Laura Novich, [CC BY-SA 4.0][4])

November came faster than we expected, and looking back, the project ended way too quickly. About midway in, I collected feedback from Marcin, Marcia, and Mike, and they told me their experience was positive. They told us that once we were done, we could take on more issues than the original allotment assigned to the group, if we wanted.

One student, Rachel Gottesman, did just that. She completed 33 merge requests and helped rewrite pages of GitLab's documentation. She was so instrumental to the 13.7 release that [GitLab announced][18] that Rachel is the MVP for the release! We at OBW couldn't be more thrilled! _Congratulations, Rachel!_

![Rachel Gottesman's GitLab MVP award][19]

(Laura Novich, [CC BY-SA 4.0][4])

Rachel's name appears [on GitLab's MVP page][20].

We are gearing up for our new year and a new course. We plan to run this project again as part of our [Software Documentation for Technical Communication Professionals][21] course.

* * *

_This article originally appeared on [Our Best Words Blog][22] and is republished with permission._

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/open-source-student

作者:[Laura Novich][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/laura-novich
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA (Woman sitting in front of her computer)
[2]: https://ourbestwords.com/
[3]: https://opensource.com/sites/default/files/uploads/gitlab.png (GitLab screenshot)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://about.gitlab.com/
[6]: https://www.linkedin.com/in/mijang/
[7]: https://www.linkedin.com/in/marcin-sedlak-jakubowski-91143a124/
[8]: https://www.linkedin.com/in/marciadiasramos/
[9]: https://opensource.com/sites/default/files/uploads/gitlab_obw_repo.png (GitLab repo)
[10]: https://gitlab.com/aarongor
[11]: https://gitlab.com/Anna-Lester
[12]: https://gitlab.com/avichazen
[13]: https://gitlab.com/elagreenberg
[14]: https://gitlab.com/gottesman.rachelg
[15]: https://gitlab.com/stefaniesaffern
[16]: https://gitlab.com/TaliaNovich
[17]: https://opensource.com/sites/default/files/uploads/kanban.png (Kanban board)
[18]: https://release-13-7.about.gitlab-review.app/releases/2020/12/22/gitlab-13-7-released/
[19]: https://opensource.com/sites/default/files/uploads/rachelgottesmanmvp.png (Rachel Gottesman's GitLab MVP award)
[20]: https://about.gitlab.com/community/mvp/
[21]: https://ourbestwords.com/training-courses/skills-courses/
[22]: https://ourbestwords.com/topic/blog/
@ -1,73 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (ShuyRoy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel formally launches Optane for data center memory caching)
[#]: via: (https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Intel formally launches Optane for data center memory caching
======
Intel formally launched the Optane persistent memory product line, which includes 3D Xpoint memory technology. The Intel-only solution is meant to sit between DRAM and NAND and to speed up performance.
![Intel][1]

As part of its [massive data center event][2] on Tuesday, Intel formally launched the Optane persistent memory product line. It had been out for a while, but the current generation of Xeon server processors could not fully utilize it. The new Xeon 8200 and 9200 lines take full advantage of it.

And since Optane is an Intel product (co-developed with Micron), that means AMD and Arm server processors are out of luck.

As I have [stated in the past][3], Optane DC Persistent Memory uses 3D Xpoint memory technology that Intel developed with Micron Technology. 3D Xpoint is a non-volatile memory type that is much faster than solid-state drives (SSD), almost at the speed of DRAM, but it has the persistence of NAND flash.

The first 3D Xpoint products were SSDs called Intel’s ["ruler,"][7] because they were designed in a long, thin format similar to the shape of a ruler. They were designed that way to fit in 1U server chassis. As part of Tuesday’s announcement, Intel introduced the new Intel SSD D5-P4326 'Ruler' SSD, using quad-level cell (QLC) 3D NAND memory, with up to 1PB of storage in a 1U design.

Optane DC Persistent Memory will initially be available in DIMM capacities from 128GB up to 512GB. That’s two to four times what you can get with DRAM, said Navin Shenoy, executive vice president and general manager of Intel’s Data Center Group, who keynoted the event.

“We expect system capacity in a server system to scale to 4.5 terabytes per socket or 36 TB in an 8-socket system. That’s three times larger than what we were able to do with the first generation of Xeon Scalable,” he said.

## Intel Optane memory uses and speed

Optane runs in two different modes: Memory Mode and App Direct Mode. Memory Mode is what I have been describing to you, where the DRAM sits in front of the larger Optane memory and acts as a cache for it. In App Direct Mode, the DRAM and Optane DC Persistent Memory are pooled together to maximize the total capacity. Not every workload is ideal for this kind of configuration, so it should be used in applications that are not latency-sensitive. The primary use case for Optane, as Intel is promoting it, is Memory Mode.

When 3D Xpoint was initially announced a few years back, Intel claimed it was 1,000 times faster than NAND, with 1,000 times the endurance, and 10 times the density potential of DRAM. Well, that was a little exaggerated, but it does have some intriguing elements.

Optane memory, when used in 256-byte contiguous four-cacheline accesses, can achieve read speeds of 8.3GB/sec and write speeds of 3.0GB/sec. Compare that with the read/write speed of 500 or so MB/sec for a SATA SSD, and you can see the performance gain. Optane, remember, is feeding memory, so it caches frequently accessed SSD content.

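A quick back-of-the-envelope check puts those figures in perspective, taking the SATA read speed as roughly 0.5GB/sec:

```shell
# Sequential-read ratio: Optane (8.3 GB/s) vs. a ~500 MB/s SATA SSD.
awk 'BEGIN { printf "%.1fx faster\n", 8.3 / 0.5 }'
```

This prints `16.6x faster`, which is why Intel positions Optane as a cache in front of the storage subsystem rather than a replacement for it.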
This is the key takeaway of Optane DC. It will keep very large data sets very close to memory, and hence the CPU, with low latency while at the same time minimizing the need to access the slower storage subsystem, whether it’s SSD or HDD. It now offers the possibility of putting multiple terabytes of data very close to the CPU for much faster access.

## One challenge with Optane memory

The only real challenge is that Optane goes into DIMM slots, which is where memory goes. Some motherboards now come with as many as 16 DIMM slots per CPU socket, but that’s still board real estate that the customer and OEM provider will need to balance out: Optane vs. memory. There are some Optane drives in PCI Express format, which alleviate the memory crowding on the motherboard.

3D Xpoint also offers higher endurance than traditional NAND flash memory due to the way it writes data. Intel promises a five-year warranty with its Optane, while a lot of SSDs offer only three years.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[RiaXu](https://github.com/ShuyRoy)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/intel-optane-persistent-memory-100760427-large.jpg
[2]: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html
[3]: https://www.networkworld.com/article/3279271/intel-launches-optane-the-go-between-for-memory-and-storage.html
[4]: https://www.networkworld.com/article/3290421/why-nvme-users-weigh-benefits-of-nvme-accelerated-flash-storage.html
[5]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb
[6]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[7]: https://www.theregister.co.uk/2018/02/02/ruler_and_miniruler_ssd_formats_look_to_banish_diskstyle_drives/
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
@ -1,296 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tmux Command Examples To Manage Multiple Terminal Sessions)
[#]: via: (https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)

Tmux Command Examples To Manage Multiple Terminal Sessions
======

![tmux command examples][1]

We’ve already learned to use [**GNU Screen**][2] to manage multiple Terminal sessions. Today, we will look at yet another well-known command-line utility named **“Tmux”** for managing Terminal sessions. Similar to GNU Screen, Tmux is also a Terminal multiplexer that allows us to create a number of terminal sessions and run more than one program or process at the same time inside a single Terminal window. Tmux is a free, open source, cross-platform program that supports Linux, OpenBSD, FreeBSD, NetBSD and Mac OS X. In this guide, we will discuss the most commonly used Tmux commands in Linux.

### Installing Tmux in Linux

Tmux is available in the official repositories of most Linux distributions.

On Arch Linux and its variants, run the following command to install it.

```
$ sudo pacman -S tmux
```

On Debian, Ubuntu, Linux Mint:

```
$ sudo apt-get install tmux
```

On Fedora:

```
$ sudo dnf install tmux
```

On RHEL and CentOS:

```
$ sudo yum install tmux
```

On SUSE/openSUSE:

```
$ sudo zypper install tmux
```

Well, we have just installed Tmux. Let us go ahead and see some examples to learn how to use Tmux.

### Tmux Command Examples To Manage Multiple Terminal Sessions

The default prefix shortcut for all commands in Tmux is **Ctrl+b**. Just remember this keyboard shortcut when using Tmux.

* * *

**Note:** The default prefix for all **Screen** commands is **Ctrl+a**.

* * *

##### Creating Tmux sessions

To create a new Tmux session and attach to it, run the following command from the Terminal:

```
tmux
```

Or,

```
tmux new
```

Once you are inside the Tmux session, you will see a **green bar at the bottom** as shown in the screenshot below.

![][3]

New Tmux session

This makes it easy to verify whether or not you’re inside a Tmux session.

##### Detaching from Tmux sessions

To detach from the current Tmux session, press **Ctrl+b** and then **d**. You don’t need to press both shortcuts at the same time: first press “Ctrl+b”, release, and then press “d”.

Once you’re detached from a session, you will see output like this:

```
[detached (from session 0)]
```

##### Creating named sessions

If you use multiple sessions, you might get confused about which programs are running in which sessions. In such cases, you can create named sessions. For example, if you want to perform some activities related to a web server in a session, create the Tmux session with a custom name, for example **“webserver”** (or any name of your choice).

```
tmux new -s webserver
```

Here is the new named Tmux session.

![][4]

Tmux session with a custom name

As you can see in the above screenshot, the name of the Tmux session is **webserver**. This way you can easily identify which program is running in which session.

To detach, simply press **Ctrl+b** and **d**.

##### List Tmux sessions

To view the list of open Tmux sessions, run:

```
tmux ls
```

Sample output:

![][5]

List Tmux sessions

As you can see, I have two open Tmux sessions.

##### Creating detached sessions

Sometimes, you might want to simply create a session without attaching to it automatically.

To create a new detached session named **“ostechnix”**, run:

```
tmux new -s ostechnix -d
```

The above command will create a new Tmux session called “ostechnix”, but won’t attach to it.

You can verify that the session was created using the “tmux ls” command:

![][6]

Create detached Tmux sessions

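Detached sessions are also what make Tmux scriptable. The sketch below (the session name `scratch` is arbitrary, and the script exits quietly if tmux is unavailable) creates a detached session, types a command into it with `send-keys`, reads the pane back with `capture-pane`, and cleans up:

```shell
# Exit quietly when tmux is missing or no server can start here.
command -v tmux >/dev/null 2>&1 || exit 0
tmux new -s scratch -d 2>/dev/null || exit 0

tmux send-keys -t scratch 'echo hello' Enter  # run a command in the session
sleep 1                                       # give the shell time to execute
tmux capture-pane -t scratch -p | grep hello  # read the pane's contents
tmux kill-session -t scratch                  # clean up
```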
##### Attaching to Tmux sessions

You can attach to the last created session by running this command:

```
tmux attach
```

Or,

```
tmux a
```

If you want to attach to a specific named session, for example “ostechnix”, run:

```
tmux attach -t ostechnix
```

Or, for short:

```
tmux a -t ostechnix
```

##### Kill Tmux sessions

When you’re done and no longer need a Tmux session, you can kill it at any time with the command:

```
tmux kill-session -t ostechnix
```

To kill a session while attached, press **Ctrl+b** and **x**. Hit “y” to kill the session.

You can verify that the session is closed with the “tmux ls” command.

To kill the Tmux server along with all Tmux sessions, run:

```
tmux kill-server
```

Be careful! This terminates all Tmux sessions without warning, even if jobs are still running inside them.

When there are no running Tmux sessions, you will see the following output:

```
$ tmux ls
no server running on /tmp/tmux-1000/default
```

##### Split Tmux Session Windows

Tmux has an option to split a single Tmux session window into multiple smaller windows called **Tmux panes**. This way we can run different programs in each pane and interact with all of them simultaneously. Each pane can be resized, moved and closed without affecting the other panes. We can split a Tmux window horizontally, vertically, or both at once.

**Split panes horizontally**

To split a pane horizontally, press **Ctrl+b** and **”** (double quotation mark).

![][7]

Split Tmux pane horizontally

Use the same key combination to split the panes further.

**Split panes vertically**

To split a pane vertically, press **Ctrl+b** and **%**.

![][8]

Split Tmux panes vertically

**Split panes horizontally and vertically**

We can also split a pane horizontally and vertically at the same time. Take a look at the following screenshot.

![][9]

Split Tmux panes

First, I did a horizontal split by pressing **Ctrl+b “** and then split the lower pane vertically by pressing **Ctrl+b %**.

As you can see in the above screenshot, I am running three different programs, one in each pane.

**Switch between panes**

To switch between panes, press **Ctrl+b** and the **Arrow keys (Left, Right, Up, Down)**.

**Send commands to all panes**

In the previous example, we ran three different commands, one in each pane. However, it is also possible to send the same command to all panes at once.

To do so, press **Ctrl+b**, type the following command, and hit ENTER:

```
:setw synchronize-panes
```

Now type any command in any pane. You will see that the same command is reflected in all panes.

**Swap panes**

To rotate the panes, press **Ctrl+b** and **Ctrl+o**; to jump to the next pane, press **Ctrl+b** and **o**.

**Show pane numbers**

Press **Ctrl+b** and **q** to show pane numbers.

**Kill panes**

To kill a pane, simply type **exit** and hit the ENTER key. Alternatively, press **Ctrl+b** and **x**. You will see a confirmation message. Just press **“y”** to close the pane.

![][10]

Kill Tmux panes

At this stage, you will get a basic idea of Tmux and how to use it to manage multiple Terminal sessions. For more details, refer man pages.
|
||||
|
||||
```
|
||||
$ man tmux
|
||||
```
|
||||
|
||||
Both GNU Screen and Tmux utilities can be very helpful when managing servers remotely via SSH. Learn Screen and Tmux commands thoroughly to manage your remote servers like a pro.
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/

Author: [sk][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-720x340.png
[2]: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/
[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-session.png
[4]: https://www.ostechnix.com/wp-content/uploads/2019/06/Named-Tmux-session.png
[5]: https://www.ostechnix.com/wp-content/uploads/2019/06/List-Tmux-sessions.png
[6]: https://www.ostechnix.com/wp-content/uploads/2019/06/Create-detached-sessions.png
[7]: https://www.ostechnix.com/wp-content/uploads/2019/06/Horizontal-split.png
[8]: https://www.ostechnix.com/wp-content/uploads/2019/06/Vertical-split.png
[9]: https://www.ostechnix.com/wp-content/uploads/2019/06/Split-Panes.png
[10]: https://www.ostechnix.com/wp-content/uploads/2019/06/Kill-panes.png
@ -1,106 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World)
[#]: via: (https://itsfoss.com/endeavouros/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

EndeavourOS Aims to Fill the Void Left by Antergos in Arch Linux World
======

![][1]

I’m sure that most of our readers are aware of the [end of the Antergos project][2]. In the wake of the announcement, the members of the Antergos community created several successors. Today, we will be looking at one of the ‘spiritual successors’ of Antergos: [EndeavourOS][3].

### EndeavourOS is not a fork of Antergos

Before we start, I would like to make it very clear that EndeavourOS is not a fork of Antergos. The developers used Antergos as their inspiration to create a light Arch-based distro.

![Endeavouros First Boot][4]

According to the [project’s site][5], EndeavourOS came into existence because people in the Antergos community wanted to keep the spirit of Antergos alive. Their goal was simply to “have Arch installed with an easy to use installer and a friendly, helpful community to fall back on during the journey to master the system”.

Unlike many Arch-based distros, EndeavourOS intends to work [like vanilla Arch][5], “so no one-click solutions to install your favorite app or a bunch of preinstalled apps you’ll eventually don’t need”. For most people, especially those new to Linux and Arch, there will be a learning curve, but EndeavourOS aims to have a large, friendly community where people are encouraged to ask questions and learn about their systems.

![Endeavouros Installing][6]

### A Work in Progress

EndeavourOS was [first released on July 15th][7] of this year, after the project was first announced on [May 23rd][8]. Unfortunately, this means that the developers were unable to incorporate all of the features they had planned.

For example, they want to have an online install similar to the one used by Antergos but ran into [issues with the current options][9]. “Cnchi has caused serious problems to be working outside the Antergos eco system and it needs a complete rewrite to work. The Fenix installer by RebornOS is getting more into shape, but needs more time to properly function.” For now, EndeavourOS will ship with the [Calamares installer][10].

EndeavourOS will also offer [less stuff than Antergos][9]: its repo is smaller than Antergos’, though they do ship some AUR packages. Their goal is to deliver a system that is close to Arch, not vanilla Arch.

![Endeavouros Updating With Kalu][12]

The developers [stated further][13]:

> “Linux and specifically Arch are all about freedom of choice, we provide a basic install that lets you explore those choices with a small layer of convenience. We will never judge you by installing GUI apps like Pamac or even work with sandbox solutions like Flatpak or Snaps. It’s up to you what you are installing to make EndeavourOS work in your circumstances, that’s the main difference we have with Antergos or Manjaro, but like Antergos we will try to help you if you run into a problem with one of your installed packages.”

### Experiencing EndeavourOS

I installed EndeavourOS in [VirtualBox][14] and took a look around. When I first booted from the image, I was greeted by a little box with links to the EndeavourOS site about installing. It also has a button to install and one to manually partition the drive. The Calamares installer worked very smoothly for me.

After I rebooted into a fresh install of EndeavourOS, I was greeted by a colorfully themed XFCE desktop. I was also treated to a bunch of notification balloons. Most Arch-based distros I’ve used come with a GUI tool like [pamac][15] or [octopi][16] to keep the system up-to-date. EndeavourOS comes with [kalu][17]. (kalu stands for “Keeping Arch Linux Up-to-date”.) It checks for updated packages, Arch Linux news, updated AUR packages and more. Once it sees an update in any of those areas, it creates a notification balloon.

I took a look through the menu to see what was installed by default. The answer is not much, not even an office suite. If they intend for EndeavourOS to be a blank canvas for anyone to create the system they want, they are headed in the right direction.

![Endeavouros Desktop][18]

### Final Thoughts

EndeavourOS is still very young. The first stable release was issued only 3 weeks ago. It is missing some stuff, most importantly an online installer. That being said, it is possible to gauge where EndeavourOS is heading.

While it is not an exact clone of Antergos, EndeavourOS wants to replicate the most important part of Antergos: the welcoming, friendly community. All too often, the Linux community can seem unwelcoming and downright hostile to beginners. I’ve seen more and more people trying to combat that negativity and bring more people into Linux. With the EndeavourOS team making that their main focus, a great distro can be the only result.

If you are currently using Antergos, there is a way for you to [switch to EndeavourOS without performing a clean install][20].

If you want an exact clone of Antergos, I would recommend checking out [RebornOS][21]. They are currently working on a replacement for the Cnchi installer named Fenix.

Have you tried EndeavourOS already? How’s your experience with it?
--------------------------------------------------------------------------------

via: https://itsfoss.com/endeavouros/

Author: [John Paul][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-logo.png?ssl=1
[2]: https://itsfoss.com/antergos-linux-discontinued/
[3]: https://endeavouros.com/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-first-boot.png?resize=800%2C600&ssl=1
[5]: https://endeavouros.com/info-2/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-installing.png?resize=800%2C600&ssl=1
[7]: https://endeavouros.com/endeavouros-first-stable-release-has-arrived/
[8]: https://forum.antergos.com/topic/11780/endeavour-antergos-community-s-next-stage
[9]: https://endeavouros.com/what-to-expect-on-the-first-release/
[10]: https://calamares.io/
[11]: https://itsfoss.com/veltos-linux/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-updating-with-kalu.png?resize=800%2C600&ssl=1
[13]: https://endeavouros.com/second-week-after-the-stable-release/
[14]: https://itsfoss.com/install-virtualbox-ubuntu/
[15]: https://aur.archlinux.org/packages/pamac-aur/
[16]: https://octopiproject.wordpress.com/
[17]: https://github.com/jjk-jacky/kalu
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/endeavouros-desktop.png?resize=800%2C600&ssl=1
[19]: https://itsfoss.com/clear-linux/
[20]: https://forum.endeavouros.com/t/how-to-switch-from-antergos-to-endevouros/105/2
[21]: https://rebornos.org/
@ -1,46 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to tell if implementing your Python code is a good idea)
[#]: via: (https://opensource.com/article/19/12/zen-python-implementation)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

How to tell if implementing your Python code is a good idea
======
This is part of a special series about the Zen of Python focusing on the 17th and 18th principles: hard vs. easy.
![Brick wall between two people, a developer and an operations manager][1]

A language does not exist in the abstract. Every single language feature has to be implemented in code. It is easy to promise some features, but the implementation can get hairy. A hairy implementation means more potential for bugs and, even worse, a maintenance burden for the ages.

The [Zen of Python][2] has answers for this conundrum.

### If the implementation is hard to explain, it's a bad idea.

The most important thing about programming languages is predictability. Sometimes we explain the semantics of a certain construct in terms of abstract programming models, which do not correspond exactly to the implementation. However, the best of all explanations just _explains the implementation_.
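
A concrete case where the best explanation really is the implementation is Python's `for` loop: it is specified as calling `iter()` once and then `next()` repeatedly until `StopIteration`. A minimal sketch (the `manual_for` helper is hypothetical, written here only to mirror what `for` does):

```python
# A for loop such as:
#     for item in [1, 2, 3]: collected.append(item)
# can be explained directly by its implementation in terms of the
# iterator protocol: obtain an iterator, call next() until StopIteration.

def manual_for(iterable, body):
    iterator = iter(iterable)          # what `for` does first
    while True:
        try:
            item = next(iterator)      # what `for` does on each step
        except StopIteration:          # how `for` knows to stop
            break
        body(item)

collected = []
manual_for([1, 2, 3], collected.append)
print(collected)  # [1, 2, 3]
```

Because the explanation and the implementation coincide, the construct is easy to predict.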

If the implementation is hard to explain, it means the avenue is impossible.

### If the implementation is easy to explain, it may be a good idea.

Just because something is easy does not mean it is worthwhile. However, once it is explained, it is much easier to judge whether it is a good idea.

This is why the second half of this principle intentionally equivocates: nothing is certain to be a good idea, but it always allows people to have that discussion.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/zen-python-implementation

Author: [Moshe Zadka][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops_confusion_wall_questions.png?itok=zLS7K2JG (Brick wall between two people, a developer and an operations manager)
[2]: https://www.python.org/dev/peps/pep-0020/
@ -1,55 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Namespaces are the shamash candle of the Zen of Python)
[#]: via: (https://opensource.com/article/19/12/zen-python-namespaces)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

Namespaces are the shamash candle of the Zen of Python
======
This is part of a special series about the Zen of Python focusing on one bonus principle: namespaces.
![Person programming on a laptop on a building][1]

Hanukkah famously has eight nights of celebration. The Hanukkah menorah, however, has nine candles: eight regular candles and a ninth that is always offset. It is called the **shamash** or **shamos**, which loosely translates to "servant" or "janitor."

The shamos is the candle that lights all the others: it is the only candle whose fire can be used, not just watched. As we wrap up our series on the Zen of Python, I see how namespaces provide a similar service.

### Namespaces in Python

Python uses namespaces for everything. Though simple, they are sparse data structures, which is often the best way to achieve a goal.

> A _namespace_ is a mapping from names to objects.
>
> — [Python.org][2]

Modules are namespaces. This means that correctly predicting module semantics often just requires familiarity with how Python namespaces work. Classes are namespaces. Objects are namespaces. Functions have access to their local namespace, their parent namespace, and the global namespace.

The simple model, where the **.** operator accesses an object, which in turn will usually (but not always) do some sort of dictionary lookup, makes Python hard to optimize but easy to explain.
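
This dictionary-backed model is easy to verify interactively. A minimal sketch (the module name `demo` and the `Box` class are throwaway examples, not anything from the standard library):

```python
import types

# Modules, classes, and instances are all namespaces: mappings from
# names to objects, usually visible through a __dict__ attribute.
mod = types.ModuleType("demo")        # a throwaway module object
mod.answer = 42
assert mod.__dict__["answer"] == 42   # attribute access is a dict lookup

class Box:
    kind = "container"                # lives in the class namespace

b = Box()
b.label = "tools"                     # lives in the instance namespace
assert b.__dict__ == {"label": "tools"}
# Lookup falls back from the instance namespace to the class namespace:
assert b.kind == "container"
print(mod.answer, b.kind)  # 42 container
```

The same lookup rules explain module, class, and instance behavior at once, which is exactly the predictability the previous article asked for.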

Indeed, some third-party modules take this guideline and run with it. For example, the **[variants][3]** package turns functions into namespaces of "related functionality." It is a good example of how the [Zen of Python][4] can inspire new abstractions.

### In conclusion

Thank you for joining me on this Hanukkah-inspired exploration of [my favorite language][5]. Go forth and meditate on the Zen until you reach enlightenment.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/zen-python-namespaces

Author: [Moshe Zadka][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Person programming on a laptop on a building)
[2]: https://docs.python.org/3/tutorial/classes.html
[3]: https://pypi.org/project/variants/
[4]: https://www.python.org/dev/peps/pep-0020/
[5]: https://opensource.com/article/19/10/why-love-python
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: ( chensanle )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,250 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting Started With Pacman Commands in Arch-based Linux Distributions)
[#]: via: (https://itsfoss.com/pacman-command/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)

Getting Started With Pacman Commands in Arch-based Linux Distributions
======

_**Brief: This beginner’s guide shows you what you can do with pacman commands in Linux, how to use them to find new packages, install and upgrade packages, and clean your system.**_

The [pacman][1] package manager is one of the main differences between [Arch Linux][2] and other major distributions like Red Hat and Ubuntu/Debian. It combines a simple binary package format with an easy-to-use [build system][3]. The aim of pacman is to make it easy to manage packages, whether from the [official repositories][4] or the user’s own builds.

If you have ever used Ubuntu or Debian-based distributions, you might have used the apt-get or apt commands. Pacman is the equivalent in Arch Linux. If you [just installed Arch Linux][5], one of the first [things to do after installing Arch Linux][6] is to learn to use pacman commands.

In this beginner’s guide, I’ll explain some of the essential uses of the pacman command that you should know for managing your Arch-based system.

### Essential pacman commands Arch Linux users should know

![][7]

Like other package managers, pacman can synchronize package lists with the software repositories, allowing the user to download and install packages with a simple command that resolves all required dependencies.

#### Install packages with pacman

You can install a single package or multiple packages using the pacman command in this fashion:

```
pacman -S package_name1 package_name2 ...
```

![Installing a package][8]

The -S stands for synchronization: pacman first synchronizes its local package lists with the repositories, then looks up the requested packages there before installing them.

The pacman database categorises the installed packages into two groups, according to the reason why they were installed:

  * **explicitly-installed**: packages that were installed by a generic pacman -S or -U command
  * **dependencies**: packages that were implicitly installed because they are [required][9] by another package that was explicitly installed.

#### Remove an installed package

To remove a single package, leaving all of its dependencies installed:

```
pacman -R package_name
```

![Removing a package][10]

To remove a package and its dependencies which are not required by any other installed package:

```
pacman -Rs package_name
```

To remove dependencies that are no longer needed (for example, because the package that needed them has been removed):

```
pacman -Qdtq | pacman -Rs -
```
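
The `-Rs` behaviour (remove a package plus any of its dependencies that nothing else still needs) is essentially a reference-counted graph walk. A toy sketch of that logic in Python — illustrative only, not pacman's actual implementation, which is written in C:

```python
# Toy model of `pacman -Rs`: remove a package and any of its
# dependencies that no remaining installed package still requires.

def remove_recursive(installed, deps, target):
    """installed: set of package names; deps: name -> set of dependency names."""
    removed = set()
    queue = [target]
    while queue:
        pkg = queue.pop()
        if pkg not in installed:
            continue
        installed.discard(pkg)
        removed.add(pkg)
        for dep in deps.get(pkg, set()):
            # keep the dependency if any remaining package still needs it
            still_needed = any(dep in deps.get(other, set()) for other in installed)
            if not still_needed:
                queue.append(dep)
    return removed

installed = {"vim", "python", "readline", "bash"}
deps = {"vim": {"python"}, "python": {"readline"}, "bash": {"readline"}}
print(sorted(remove_recursive(installed, deps, "vim")))  # ['python', 'vim']
```

Note how `readline` survives: even though `python` pulled it in, `bash` still depends on it, which is exactly why `-Rs` is safer than removing dependencies blindly.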

#### Upgrading packages

Pacman provides an easy way to [update Arch Linux][11]. You can update all installed packages with just one command. This could take a while depending on how up-to-date the system is.

The following command synchronizes the repository databases _and_ updates the system’s packages, excluding “local” packages that are not in the configured repositories:

```
pacman -Syu
```

  * S stands for sync
  * y is for refresh (it refreshes the local copy of the package database)
  * u is for system update

Basically, it says: sync with the central repository (the master package database), refresh the local copy of that database, and then perform the system update (by updating all packages that have a newer version available).

![System update][12]

Attention!

If you are an Arch Linux user, it is advised to visit the [Arch Linux home page][2] before upgrading to check the latest news for out-of-the-ordinary updates. If manual intervention is needed, an appropriate news post will be made. Alternatively, you can subscribe to the [RSS feed][13] or the [arch-announce mailing list][14].

Also be mindful to look over the appropriate [forum][15] before upgrading fundamental software (such as the kernel, xorg, systemd, or glibc), for any reported problems.

**Partial upgrades are unsupported** on a rolling release distribution such as Arch or Manjaro. That means when new library versions are pushed to the repositories, all the packages in the repositories need to be rebuilt against those libraries. For example, if two packages depend on the same library, upgrading only one of them might break the other, which still depends on the older version of the library.

#### Use pacman to search for packages

Pacman queries the local package database with the -Q flag, the sync database with the -S flag, and the files database with the -F flag.

Pacman can search for packages in the database, both in package names and descriptions:

```
pacman -Ss string1 string2 ...
```

![Searching for a package][16]

To search for already installed packages:

```
pacman -Qs string1 string2 ...
```

To search for package file names in remote packages:

```
pacman -F string1 string2 ...
```

To view the dependency tree of a package:

```
pactree package_name
```

#### Cleaning the package cache

Pacman stores its downloaded packages in /var/cache/pacman/pkg/ and does not remove old or uninstalled versions automatically. This has some advantages:

  1. It allows you to [downgrade][17] a package without the need to retrieve the previous version through other sources.
  2. A package that has been uninstalled can easily be reinstalled directly from the cache folder.

However, it is necessary to clean up the cache periodically to prevent the folder from growing in size.

The [paccache(8)][18] script, provided by the [pacman-contrib][19] package, deletes all cached versions of installed and uninstalled packages, except for the most recent 3 by default:

```
paccache -r
```
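
The keep-the-newest-three policy is simple to picture: group the cached files by package, sort each group by version, and drop everything but the last N. A hypothetical sketch of that logic (illustrative only; the real paccache compares versions with pacman's vercmp, not a plain sort):

```python
from collections import defaultdict

# Toy model of paccache's default policy: for each package, keep only
# the newest `keep` cached versions and report the rest for deletion.

def prune(cache_files, keep=3):
    groups = defaultdict(list)
    for name, version in cache_files:
        groups[name].append(version)
    to_delete = []
    for name, versions in groups.items():
        for version in sorted(versions)[:-keep]:   # everything but the newest `keep`
            to_delete.append((name, version))
    return to_delete

cache = [("vim", f"8.2.{n}") for n in range(5)] + [("bash", "5.0")]
print(prune(cache))  # [('vim', '8.2.0'), ('vim', '8.2.1')]
```

Packages with three or fewer cached versions, like `bash` above, are left untouched.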

![Clear cache][20]

To remove all the cached packages that are not currently installed, and the unused sync database, execute:

```
pacman -Sc
```

To remove all files from the cache, use the clean switch twice. This is the most aggressive approach and will leave nothing in the cache folder:

```
pacman -Scc
```

#### Installing local or third-party packages

Install a ‘local’ package that is not from a remote repository:

```
pacman -U /path/to/package/package_name-version.pkg.tar.xz
```

Install a ‘remote’ package, not contained in an official repository:

```
pacman -U http://www.example.com/repo/example.pkg.tar.xz
```

### Bonus: Troubleshooting common errors with pacman

Here are some common errors you may encounter while managing packages with pacman.

#### Failed to commit transaction (conflicting files)

If you see the following error:

```
error: could not prepare transaction
error: failed to commit transaction (conflicting files)
package: /path/to/file exists in filesystem
Errors occurred, no packages were upgraded.
```

This happens because pacman has detected a file conflict and will not overwrite files for you.

A safe way to solve this is to first check if another package owns the file (pacman -Qo /path/to/file). If the file is owned by another package, file a bug report. If the file is not owned by another package, rename the file which ‘exists in filesystem’ and re-issue the update command. If all goes well, the file may then be removed.

Instead of manually renaming and later removing all the files that belong to the package in question, you may explicitly run _**pacman -S --overwrite glob package**_ to force pacman to overwrite files that match _glob_.

#### Failed to commit transaction (invalid or corrupted package)

Look for .part files (partially downloaded packages) in /var/cache/pacman/pkg/ and remove them. This error is often caused by the use of a custom XferCommand in pacman.conf.

#### Failed to init transaction (unable to lock database)

When pacman is about to alter the package database, for example when installing a package, it creates a lock file at /var/lib/pacman/db.lck. This prevents another instance of pacman from trying to alter the package database at the same time.

If pacman is interrupted while changing the database, this stale lock file can remain. If you are certain that no instances of pacman are running, delete the lock file.

Check if a process is holding the lock file:

```
lsof /var/lib/pacman/db.lck
```

If the above command doesn’t return anything, you can remove the lock file:

```
rm /var/lib/pacman/db.lck
```

If the lsof output shows the PID of a process holding the lock file, kill that process first and then remove the lock file.
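
The locking scheme itself is just exclusive file creation. A minimal Python sketch of the same idea — a hypothetical demo on a temporary path, not pacman's code:

```python
import os
import tempfile

# Minimal sketch of a db.lck-style lock: atomically create the lock file,
# fail if it already exists, and remove it when done.

lock_path = os.path.join(tempfile.mkdtemp(), "db.lck")

def acquire(path):
    try:
        # O_CREAT | O_EXCL makes creation atomic: it fails if the file exists.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True          # we now hold the lock
    except FileExistsError:
        return False         # another "instance" already holds it

def release(path):
    os.unlink(path)

assert acquire(lock_path) is True    # first instance gets the lock
assert acquire(lock_path) is False   # a second instance is refused
release(lock_path)                   # like `rm /var/lib/pacman/db.lck`
assert acquire(lock_path) is True    # stale lock removed, lock is free again
```

This is also why a crash leaves a stale lock behind: nothing automatically runs the `release` step, so the next run is refused until the file is removed by hand.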

I hope you like my humble effort in explaining the basic pacman commands. Please leave your comments below, and don’t forget to subscribe to our social media. Stay safe!

--------------------------------------------------------------------------------

via: https://itsfoss.com/pacman-command/

Author: [Dimitrios Savvopoulos][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://www.archlinux.org/pacman/
[2]: https://www.archlinux.org/
[3]: https://wiki.archlinux.org/index.php/Arch_Build_System
[4]: https://wiki.archlinux.org/index.php/Official_repositories
[5]: https://itsfoss.com/install-arch-linux/
[6]: https://itsfoss.com/things-to-do-after-installing-arch-linux/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/essential-pacman-commands.jpg?ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-S.png?ssl=1
[9]: https://wiki.archlinux.org/index.php/Dependency
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-R.png?ssl=1
[11]: https://itsfoss.com/update-arch-linux/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Syu.png?ssl=1
[13]: https://www.archlinux.org/feeds/news/
[14]: https://mailman.archlinux.org/mailman/listinfo/arch-announce/
[15]: https://bbs.archlinux.org/
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Ss.png?ssl=1
[17]: https://wiki.archlinux.org/index.php/Downgrade
[18]: https://jlk.fjfi.cvut.cz/arch/manpages/man/paccache.8
[19]: https://www.archlinux.org/packages/?name=pacman-contrib
[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-paccache-r.png?ssl=1
@ -1,201 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself)
|
||||
[#]: via: (https://itsfoss.com/arch-based-linux-distros/)
|
||||
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
|
||||
|
||||
Top Arch-based User Friendly Linux Distributions That are Easier to Install and Use Than Arch Linux Itself
|
||||
======
|
||||
|
||||
In the Linux community, [Arch Linux][1] has a cult following. This lightweight distribution provides the bleeding edge updates with a DIY (do it yourself) attitude.
|
||||
|
||||
However, Arch is also aimed at more experienced users. As such, it is generally considered to be beyond the reach of those who lack the technical expertise (or persistence) required to use it.
|
||||
|
||||
In fact, the very first steps, [installing Arch Linux itself is enough to scare many people off][2]. Unlike most other distributions, Arch Linux doesn’t have an easy to use graphical installer. You have to do disk partitions, connect to internet, mount drives and create file system etc using command line tools only.
|
||||
|
||||
For those who want to experience Arch without the hassle of the complicated installation and set up, there exists a number of user-friendly Arch-based distributions.
|
||||
|
||||
In this article, I’ll show you some of these Arch alternative distributions. These distributions come with graphical installer, graphical package manager and other tools that are easier to use than their command line alternatives.
|
||||
|
||||
### Arch-based Linux distributions that are easier to set up and use
|
||||
|
||||
![][3]
|
||||
|
||||
Please note that this is not a ranking list. The numbers are just for counting purpose. Distribution at number two should not be considered better than distribution at number seven.
|
||||
|
||||
#### 1\. Manjaro Linux
|
||||
|
||||
![][4]
|
||||
|
||||
[Manjaro][5] doesn’t need any introduction. It is one of the most popular Linux distributions for several years and it deserves it.
|
||||
|
||||
Manjaro provides all the benefits of the Arch Linux combined with a focus on user-friendliness and accessibility. Manjaro is suitable for both newcomers and experienced Linux users alike.
|
||||
|
||||
**For newcomers**, a user-friendly installer is provided, and the system itself is designed to work fully ‘straight out of the box’ with your [favourite desktop environment][6] (DE) or window manager.
|
||||
|
||||
**For more experienced users,** Manjaro also offers versatility to suit every personal taste and preference. [Manjaro Architect][7] is giving the option to install any Manjaro flavour and offers unflavoured DE installation, filesystem ([recently introduced ZFS][8]) and bootloader choice for those who wants complete freedom to shape their system.
|
||||
|
||||
Manjaro is also a rolling release cutting-edge distribution. However, unlike Arch, Manjaro tests the updates first and then provides it to its users. Stability also gets importance here.
|
||||
|
||||
#### 2\. ArcoLinux

![][9]

[ArcoLinux][10] (previously known as ArchMerge) is a distribution based on Arch Linux. The development team offers three variations: ArcoLinux, ArcoLinuxD and ArcoLinuxB.

**ArcoLinux** is a full-featured distribution that ships with the [Xfce desktop][11], [Openbox][12] and [i3 window managers][13].

**ArcoLinuxD** is a minimal distribution that includes scripts that enable power users to install any desktop and application.

**ArcoLinuxB** is a project that gives users the power to build custom distributions, while also developing several community editions with pre-configured desktops, such as Awesome, bspwm, Budgie, Cinnamon, Deepin, GNOME, MATE and KDE Plasma.

ArcoLinux also provides various video tutorials, as it places a strong focus on learning and acquiring Linux skills.
|
||||
|
||||
#### 3\. ArchLabs Linux

![][14]

[ArchLabs Linux][15] is a lightweight rolling release Linux distribution based on a minimal Arch Linux base with the [Openbox][16] window manager. [ArchLabs][17] is influenced and inspired by the look and feel of [BunsenLabs][18], with the intermediate to advanced user in mind.
|
||||
|
||||
#### 4\. Archman Linux

![][19]

[Archman][20] is an independent project. Arch Linux distros in general are not ideal operating systems for users with little Linux experience, and considerable background reading is necessary for things to make sense with minimal frustration. The developers of Archman Linux are trying to change that reputation.

Archman’s development process is built around user feedback and experience: the team blends its own past experience with users’ feedback and requests to determine the road map and carry out the builds.
|
||||
|
||||
#### 5\. EndeavourOS

![][21]

When the popular Arch-based distribution [Antergos was discontinued in 2019][22], it left a friendly and extremely helpful community behind. The Antergos project ended because the system was too hard for the developers to maintain.

Within days of the announcement, a few experienced users planned to keep that community alive by creating a new distribution to fill the void left by Antergos. That’s how [EndeavourOS][23] was born.

[EndeavourOS][24] is lightweight and ships with a minimal amount of preinstalled apps: an almost blank canvas, ready to personalise.
|
||||
|
||||
#### 6\. RebornOS

![][25]

The [RebornOS][26] developers’ goal is to bring the true power of Linux to everyone, with one ISO for 15 desktop environments and nearly unlimited opportunities for customization.

RebornOS also claims to support [Anbox][27] for running Android applications on desktop Linux, and it offers a simple kernel manager GUI tool.

Coupled with [Pacman][28], the [AUR][29], and a customized version of the Cnchi graphical installer, Arch Linux is finally accessible to even the least experienced users.
|
||||
|
||||
#### 7\. Chakra Linux

![][30]

[Chakra Linux][31] is a community-developed GNU/Linux distribution with an emphasis on KDE and Qt technologies. It does not schedule releases for specific dates but uses a “half-rolling release” system.

This means that the core packages of Chakra Linux are frozen and only updated to fix security problems. These packages are updated after the latest versions have been thoroughly tested, before being moved to the permanent repository (about every six months).

In addition to the official repositories, users can install packages from the Chakra Community Repository (CCR), which provides user-made PKGINFOs and [PKGBUILD][32] scripts for software not included in the official repositories. The CCR is inspired by the Arch User Repository.
|
||||
|
||||
#### 8\. Artix Linux

![Artix Mate Edition][33]

[Artix Linux][34] is a rolling-release distribution based on Arch Linux that uses [OpenRC][35], [runit][36] or [s6][37] as init instead of [systemd][38].

Artix Linux has its own package repositories, but as a pacman-based distribution it can use packages from the Arch Linux repositories or any other derivative distribution, even packages that explicitly depend on systemd. The [Arch User Repository][29] (AUR) can also be used.
|
||||
|
||||
#### 9\. BlackArch Linux

![][39]

BlackArch is a [penetration testing distribution][40] based on Arch Linux that provides a large number of cyber security tools. It is created specifically for penetration testers and security researchers. Its repository contains more than 2400 [hacking and pen-testing tools][41] that can be installed individually or in groups. BlackArch Linux is compatible with existing Arch Linux packages.

### Want real Arch Linux? Simplify the installation with a graphical Arch installer

If you want to use the actual Arch Linux but are not comfortable with its difficult installation, fortunately you can download an Arch Linux ISO that comes with a graphical installer baked in.

An Arch installer is basically the Arch Linux ISO with a relatively easy-to-use text-based installer. It is much easier than a bare-bones Arch installation.
|
||||
|
||||
#### Anarchy Installer

![][42]

The [Anarchy installer][43] intends to provide both novice and experienced Linux users a simple and pain-free way to install Arch Linux. Install when you want it, where you want it, and however you want it. That is the Anarchy philosophy.

Once you boot up the installer, you’ll be shown a simple [TUI menu][44] listing all the available installer options.

#### Zen Installer

![][45]

The [Zen Installer][46] provides a full graphical (point-and-click) environment for installing Arch Linux. It supports installing multiple desktop environments and the AUR, and it delivers all of the power and flexibility of Arch Linux with the ease of a graphical installer.

The ISO boots a live environment and then downloads the most current stable version of the installer once you connect to the internet, so you always get the newest installer with updated features.

### Conclusion

An Arch-based distribution is always an excellent hassle-free choice for many users, while a graphical installer like Anarchy brings you at least a step closer to the genuine Arch Linux experience.

In my opinion, the [real beauty of Arch Linux is its installation process][2], and for a Linux enthusiast it is an opportunity to learn rather than a hassle. Arch Linux and its derivatives give you a lot to tinker with, and It’s FOSS will keep unravelling the mystery behind the scenes. See you in my next tutorial!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/arch-based-linux-distros/
|
||||
|
||||
作者:[Dimitrios Savvopoulos][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/dimitrios/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.archlinux.org/
|
||||
[2]: https://itsfoss.com/install-arch-linux/
|
||||
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/arch-based-linux-distributions.png?ssl=1
|
||||
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/manjaro-20.jpg?ssl=1
|
||||
[5]: https://manjaro.org/
|
||||
[6]: https://itsfoss.com/best-linux-desktop-environments/
|
||||
[7]: https://itsfoss.com/manjaro-architect-review/
|
||||
[8]: https://itsfoss.com/manjaro-20-release/
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/arcolinux.png?ssl=1
|
||||
[10]: https://arcolinux.com/
|
||||
[11]: https://www.xfce.org/
|
||||
[12]: http://openbox.org/wiki/Main_Page
|
||||
[13]: https://i3wm.org/
|
||||
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Archlabs.jpg?ssl=1
|
||||
[15]: https://itsfoss.com/archlabs-review/
|
||||
[16]: https://en.wikipedia.org/wiki/Openbox
|
||||
[17]: https://archlabslinux.com/
|
||||
[18]: https://www.bunsenlabs.org/
|
||||
[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Archman.png?ssl=1
|
||||
[20]: https://archman.org/en/
|
||||
[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/04_endeavouros_slide.jpg?ssl=1
|
||||
[22]: https://itsfoss.com/antergos-linux-discontinued/
|
||||
[23]: https://itsfoss.com/endeavouros/
|
||||
[24]: https://endeavouros.com/
|
||||
[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/RebornOS.png?ssl=1
|
||||
[26]: https://rebornos.org/
|
||||
[27]: https://anbox.io/
|
||||
[28]: https://itsfoss.com/pacman-command/
|
||||
[29]: https://itsfoss.com/aur-arch-linux/
|
||||
[30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/Chakra_Goedel_Screenshot.png?ssl=1
|
||||
[31]: https://www.chakralinux.org/
|
||||
[32]: https://wiki.archlinux.org/index.php/PKGBUILD
|
||||
[33]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Artix_MATE_edition.png?ssl=1
|
||||
[34]: https://artixlinux.org/
|
||||
[35]: https://en.wikipedia.org/wiki/OpenRC
|
||||
[36]: https://en.wikipedia.org/wiki/Runit
|
||||
[37]: https://en.wikipedia.org/wiki/S6_(software)
|
||||
[38]: https://en.wikipedia.org/wiki/Systemd
|
||||
[39]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/BlackArch.png?ssl=1
|
||||
[40]: https://itsfoss.com/linux-hacking-penetration-testing/
|
||||
[41]: https://itsfoss.com/best-kali-linux-tools/
|
||||
[42]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/anarchy.jpg?ssl=1
|
||||
[43]: https://anarchyinstaller.org/
|
||||
[44]: https://en.wikipedia.org/wiki/Text-based_user_interface
|
||||
[45]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/zen.jpg?ssl=1
|
||||
[46]: https://sourceforge.net/projects/revenge-installer/
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (LaTeX Typesetting – Part 1 (Lists))
|
||||
[#]: via: (https://fedoramagazine.org/latex-typesetting-part-1/)
|
||||
[#]: author: (Earl Ramirez https://fedoramagazine.org/author/earlramirez/)
|
||||
|
||||
LaTeX Typesetting – Part 1 (Lists)
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
This series builds on the previous articles: [Typeset your docs with LaTex and TeXstudio on Fedora][2] and [LaTeX 101 for beginners][3]. This first part of the series is about LaTeX lists.
|
||||
|
||||
### Types of lists
|
||||
|
||||
LaTeX lists are enclosed environments, and each item in the list can take a line of text to a full paragraph. There are three types of lists available in LaTeX. They are:
|
||||
|
||||
* **Itemized**: unordered or bullet
|
||||
* **Enumerated**: ordered
|
||||
* **Description**: descriptive
|
||||
|
||||
|
||||
|
||||
### Creating lists

To create a list, prefix each list item with the `\item` command. Precede and follow the list of items with the `\begin{<type>}` and `\end{<type>}` commands respectively, where `<type>` is substituted with the type of the list, as illustrated in the following examples.
|
||||
|
||||
#### Itemized list
|
||||
|
||||
```
\begin{itemize}
\item Fedora
\item Fedora Spin
\item Fedora Silverblue
\end{itemize}
```
|
||||
|
||||
![][4]
|
||||
|
||||
#### Enumerated list
|
||||
|
||||
```
\begin{enumerate}
\item Fedora CoreOS
\item Fedora Silverblue
\item Fedora Spin
\end{enumerate}
```
|
||||
|
||||
![][5]
|
||||
|
||||
#### Descriptive list
|
||||
|
||||
```
\begin{description}
\item[Fedora 6] Code name Zod
\item[Fedora 8] Code name Werewolf
\end{description}
```
|
||||
|
||||
![][6]
|
||||
|
||||
### Spacing list items

The default spacing can be customized by adding `\usepackage{enumitem}` to the preamble. The _enumitem_ package enables the _noitemsep_ option and the `\itemsep` command, which you can use on your lists as illustrated below.

#### Using the noitemsep option

Enclose the _noitemsep_ option in square brackets and place it on the `\begin` command as shown below. This option removes the default spacing.
|
||||
|
||||
```
\begin{itemize}[noitemsep]
\item Fedora
\item Fedora Spin
\item Fedora Silverblue
\end{itemize}
```
|
||||
|
||||
![][7]
|
||||
|
||||
#### Using the \itemsep command

The `\itemsep` command must be suffixed with a length (for example, _0.75pt_) to indicate how much space there should be between the list items.
|
||||
|
||||
```
\begin{itemize} \itemsep0.75pt
\item Fedora Silverblue
\item Fedora CoreOS
\end{itemize}
```
|
||||
|
||||
![][8]
|
||||
|
||||
### Nesting lists
|
||||
|
||||
LaTeX supports nested lists up to four levels deep as illustrated below.
|
||||
|
||||
#### Nested itemized lists
|
||||
|
||||
```
\begin{itemize}[noitemsep]
\item Fedora Versions
\begin{itemize}
\item Fedora 8
\item Fedora 9
\begin{itemize}
\item Werewolf
\item Sulphur
\begin{itemize}
\item 2007-05-31
\item 2008-05-13
\end{itemize}
\end{itemize}
\end{itemize}
\item Fedora Spin
\item Fedora Silverblue
\end{itemize}
```
|
||||
|
||||
![][9]
|
||||
|
||||
#### Nested enumerated lists
|
||||
|
||||
```
\begin{enumerate}[noitemsep]
\item Fedora Versions
\begin{enumerate}
\item Fedora 8
\item Fedora 9
\begin{enumerate}
\item Werewolf
\item Sulphur
\begin{enumerate}
\item 2007-05-31
\item 2008-05-13
\end{enumerate}
\end{enumerate}
\end{enumerate}
\item Fedora Spin
\item Fedora Silverblue
\end{enumerate}
```
|
||||
|
||||
![][10]
|
||||
|
||||
### List style names for each list type
|
||||
|
||||
**Enumerated** | **Itemized**
---|---
\alph* | $\bullet$
\Alph* | $\cdot$
\arabic* | $\diamond$
\roman* | $\ast$
\Roman* | $\circ$
 | $-$
|
||||
|
||||
### Default style by list depth
|
||||
|
||||
**Level** | **Enumerated** | **Itemized**
---|---|---
1 | Number | Bullet
2 | Lowercase alphabet | Dash
3 | Roman numerals | Asterisk
4 | Uppercase alphabet | Period
|
||||
|
||||
### Setting list styles

The example below illustrates each of the different itemized list styles.
|
||||
|
||||
```
% Itemize style
\begin{itemize}
\item[$\ast$] Asterisk
\item[$\diamond$] Diamond
\item[$\circ$] Circle
\item[$\cdot$] Period
\item[$\bullet$] Bullet (default)
\item[--] Dash
\item[$-$] Another dash
\end{itemize}
```
|
||||
|
||||
![][11]
|
||||
|
||||
There are three methods of setting list styles. They are illustrated below. These methods are listed by priority; highest priority first. A higher priority will override a lower priority if more than one is defined for a list item.
|
||||
|
||||
#### List styling method 1 – per item

Enclose the name of the desired style in square brackets and place it on the `\item` command as demonstrated below.
|
||||
|
||||
```
% First method
\begin{itemize}
\item[$\ast$] Asterisk
\item[$\diamond$] Diamond
\item[$\circ$] Circle
\item[$\cdot$] Period
\item[$\bullet$] Bullet (default)
\item[--] Dash
\item[$-$] Another dash
\end{itemize}
```
|
||||
|
||||
#### List styling method 2 – on the list

Prefix the name of the desired style with _label=_. Place the parameter, including the _label=_ prefix, in square brackets on the `\begin` command as demonstrated below.
|
||||
|
||||
```
% Second method
\begin{enumerate}[label=\Alph*.]
\item Fedora 32
\item Fedora 31
\item Fedora 30
\end{enumerate}
```
|
||||
|
||||
#### List styling method 3 – on the document

This method changes the default style for the entire document. Use `\renewcommand` to set the label for each list depth. There is a different label command for each of the four depths (`\labelitemi` through `\labelitemiv`), as demonstrated below.
|
||||
|
||||
```
% Third method
\renewcommand{\labelitemi}{$\ast$}
\renewcommand{\labelitemii}{$\diamond$}
\renewcommand{\labelitemiii}{$\bullet$}
\renewcommand{\labelitemiv}{$-$}
```
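
Putting the pieces together, a minimal compilable document combining the three methods might look like this. (The `article` class and the specific labels are illustrative choices for this sketch, not requirements.)

```
\documentclass{article}
\usepackage{enumitem}  % needed for the noitemsep and label= options

% Method 3: document-wide defaults for the first two itemize depths
\renewcommand{\labelitemi}{$\ast$}
\renewcommand{\labelitemii}{$\diamond$}

\begin{document}

\begin{itemize}[noitemsep]
  \item Fedora Workstation
  \item Fedora Server
  \item[--] Dash label for this item only  % Method 1: per-item override
\end{itemize}

\begin{enumerate}[label=\Alph*.]  % Method 2: style for this list only
  \item Fedora 32
  \item Fedora 31
\end{enumerate}

\end{document}
```

Because the per-item label (method 1) outranks the list option (method 2), which in turn outranks the document-wide `\renewcommand` (method 3), the dash item keeps its dash even though the list and document define other defaults.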
|
||||
|
||||
### Summary
|
||||
|
||||
LaTeX supports three types of lists. The style and spacing of each of the list types can be customized. More LaTeX elements will be explained in future posts.
|
||||
|
||||
Additional reading about LaTeX lists can be found here: [LaTeX List Structures][12]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/latex-typesetting-part-1/
|
||||
|
||||
作者:[Earl Ramirez][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/earlramirez/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/latex-series-816x345.png
|
||||
[2]: https://fedoramagazine.org/typeset-latex-texstudio-fedora
|
||||
[3]: https://fedoramagazine.org/fedora-classroom-latex-101-beginners
|
||||
[4]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-1.png
|
||||
[5]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-2.png
|
||||
[6]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-3.png
|
||||
[7]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-4.png
|
||||
[8]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-5.png
|
||||
[9]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-7.png
|
||||
[10]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-8.png
|
||||
[11]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-9.png
|
||||
[12]: https://en.wikibooks.org/wiki/LaTeX/List_Structures
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension)
|
||||
[#]: via: (https://itsfoss.com/material-shell/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension
|
||||
======
|
||||
|
||||
There is something about tiling windows that attracts many people. Perhaps it looks good, or perhaps it saves time if you are a fan of [keyboard shortcuts in Linux][1]. Or maybe it’s the challenge of mastering an uncommon tiling setup.
|
||||
|
||||
![Tiling Windows in Linux | Image Source][2]
|
||||
|
||||
From i3 to [Sway][3], there are many tiling window managers available for the Linux desktop. However, configuring a tiling window manager yourself involves a steep learning curve.
|
||||
|
||||
This is why projects like [Regolith desktop][4] exist to give you preconfigured tiling desktop so that you can get started with tiling windows with less effort.
|
||||
|
||||
Let me introduce you to a similar project named Material Shell that makes using the tiling feature even easier than [Regolith][5].
|
||||
|
||||
### Material Shell GNOME Extension: Convert GNOME desktop into a tiling window manager
|
||||
|
||||
[Material Shell][6] is a GNOME extension and that’s the best thing about it. This means that you don’t have to log out and log in to another desktop environment or window manager. You can enable or disable it from within your current session.
|
||||
|
||||
I’ll list the features of Material Shell but it will be easier to see it in action:
|
||||
|
||||
[Subscribe to our YouTube channel for more Linux videos][7]
|
||||
|
||||
The project is called Material Shell because it follows the [Material Design][8] guideline and thus gives the applications an aesthetically pleasing interface. Here are its main features:
|
||||
|
||||
#### Intuitive interface
|
||||
|
||||
Material Shell adds a left panel for quick access. On this panel, you can find the system tray at the bottom and the search and workspaces on the top.
|
||||
|
||||
All the new apps are added to the current workspace. You can create a new workspace and switch to it to organize your running apps into categories. This is the essential concept of workspaces anyway.
|
||||
|
||||
In Material Shell, every workspace can be visualized as a row with several apps rather than a box with several apps in it.
|
||||
|
||||
#### Tiling windows
|
||||
|
||||
In a workspace, you can see all your opened applications on the top all the time. By default, the applications are opened to take the entire screen like you do in GNOME desktop. You can change the layout to split it in half or multiple columns or a grid of apps using the layout changer in the top right corner.
|
||||
|
||||
This video shows all the above features at a glance:
|
||||
|
||||
#### Persistent layout and workspaces
|
||||
|
||||
That’s not it. Material Shell also remembers the workspaces and windows you open so that you don’t have to reorganize your layout again. This is a good feature to have as it saves time if you are particular about which application goes where.
|
||||
|
||||
#### Hotkeys/Keyboard shortcut
|
||||
|
||||
Like any tiling windows manager, you can use keyboard shortcuts to navigate between applications and workspaces.
|
||||
|
||||
* `Super+W` Navigate to the upper workspace.
|
||||
* `Super+S` Navigate to the lower workspace.
|
||||
* `Super+A` Focus the window at the left of the current window.
|
||||
* `Super+D` Focus the window at the right of the current window.
|
||||
* `Super+1`, `Super+2` … `Super+0` Navigate to specific workspace
|
||||
* `Super+Q` Kill the current window focused.
|
||||
* `Super+[MouseDrag]` Move window around.
|
||||
* `Super+Shift+A` Move the current window to the left.
|
||||
* `Super+Shift+D` Move the current window to the right.
|
||||
* `Super+Shift+W` Move the current window to the upper workspace.
|
||||
* `Super+Shift+S` Move the current window to the lower workspace.
|
||||
|
||||
|
||||
|
||||
### Installing Material Shell
|
||||
|
||||
Warning!
|
||||
|
||||
Tiling windows could be confusing for many users. You should be familiar with GNOME extensions before using it. Avoid it if you are absolutely new to Linux or if you panic easily when anything changes in your system.
|
||||
|
||||
Material Shell is a GNOME extension. So, please [check your desktop environment][9] to make sure you are running _**GNOME 3.34 or a higher version**_.
|
||||
|
||||
|
||||
|
||||
Apart from that, I noticed that disabling Material Shell removes the top bar from Firefox as well as the Ubuntu dock. You can get the dock back by disabling and re-enabling the Ubuntu dock extension from the Extensions app in GNOME. I haven’t tried it, but I guess these problems should also go away after a system reboot.
|
||||
|
||||
I hope you know [how to use GNOME extensions][10]. The easiest way is to just [open this link in the browser][11], install GNOME extension plugin and then enable the Material Shell extension.
|
||||
|
||||
![][12]
|
||||
|
||||
If you don’t like it, you can disable it from the same extension link you used earlier or use the GNOME Extensions app:
|
||||
|
||||
![][13]
|
||||
|
||||
**To tile or not?**
|
||||
|
||||
I use multiple screens and I found that Material Shell doesn’t work well with multiple monitors. This is something the developer(s) can improve in the future.
|
||||
|
||||
Apart from that, it’s really easy to get started with tiling windows using Material Shell. If you try Material Shell and like it, show your appreciation for the project by [giving it a star or sponsoring it on GitHub][14].
|
||||
|
||||
For some reason, tiling windows are becoming popular. The recently released [Pop OS 20.04][15] also added tiling window features.
|
||||
|
||||
But as I mentioned previously, tiling layouts are not for everyone and it could confuse many people.
|
||||
|
||||
How about you? Do you prefer tiling windows, or do you prefer the classic desktop layout?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/material-shell/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/ubuntu-shortcuts/
|
||||
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-ricing-example-800x450.jpg?resize=800%2C450&ssl=1
|
||||
[3]: https://itsfoss.com/sway-window-manager/
|
||||
[4]: https://itsfoss.com/regolith-linux-desktop/
|
||||
[5]: https://regolith-linux.org/
|
||||
[6]: https://material-shell.com
|
||||
[7]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
|
||||
[8]: https://material.io/
|
||||
[9]: https://itsfoss.com/find-desktop-environment/
|
||||
[10]: https://itsfoss.com/gnome-shell-extensions/
|
||||
[11]: https://extensions.gnome.org/extension/3357/material-shell/
|
||||
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/install-material-shell.png?resize=800%2C307&ssl=1
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/material-shell-gnome-extension.png?resize=799%2C497&ssl=1
|
||||
[14]: https://github.com/material-shell/material-shell
|
||||
[15]: https://itsfoss.com/pop-os-20-04-review/
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Create and Manage Archive Files in Linux)
|
||||
[#]: via: (https://www.linux.com/news/how-to-create-and-manage-archive-files-in-linux-2/)
|
||||
[#]: author: (LF Training https://training.linuxfoundation.org/announcements/how-to-create-and-manage-archive-files-in-linux/)
|
||||
|
||||
How to Create and Manage Archive Files in Linux
|
||||
======
|
||||
|
||||
_By Matt Zand and Kevin Downs_
|
||||
|
||||
In a nutshell, an archive is a single file that contains a collection of other files and/or directories. Archive files are typically used to transfer files (locally or over the internet) or to make a backup copy of a collection of files and directories, which lets you work with a single file instead of many (and, if compressed, one that is smaller than the sum of the files within it). Likewise, archives are used for software application packaging. This single file can be easily compressed for ease of transfer, while the files in the archive retain the structure and permissions of the original files.
|
||||
|
||||
We can use the tar tool to create, list, and extract files from archives. Archives made with tar are normally called “tar files,” “tar archives,” or—since all the archived files are rolled into one—“tarballs.”
|
||||
|
||||
This tutorial shows how to use tar to create an archive, list the contents of an archive, and extract the files from an archive. Two common options used with all three of these operations are ‘-f’ and ‘-v’: to specify the name of the archive file, use ‘-f’ followed by the file name; use the ‘-v’ (“verbose”) option to have tar output the names of files as they are processed. While the ‘-v’ option is not necessary, it lets you observe the progress of your tar operation.
|
||||
|
||||
For the remainder of this tutorial, we cover three topics: 1- creating an archive file, 2- listing the contents of an archive file, and 3- extracting contents from an archive file. We conclude the tutorial by surveying some practical questions related to archive file management. What you take away from this tutorial is essential for performing tasks related to [cybersecurity][1] and [cloud technology][2].
|
||||
|
||||
### 1- Creating an Archive File
|
||||
|
||||
To create an archive with tar, use the ‘-c’ (“create”) option, and specify the name of the archive file to create with the ‘-f’ option. It’s common practice to use a name with a ‘.tar’ extension, such as ‘my-backup.tar’. Note that unless specifically mentioned otherwise, all commands and command parameters used in the remainder of this article are used in lowercase. Keep in mind that while typing commands in this article on your terminal, you need not type the $ prompt sign that comes at the beginning of each command line.
|
||||
|
||||
Give as arguments the names of the files to be archived; to create an archive of a directory and all of the files and subdirectories it contains, give the directory’s name as an argument.
|
||||
|
||||
To create an archive called ‘project.tar’ from the contents of the ‘project’ directory, type:

```
$ tar -cvf project.tar project
```

This command creates an archive file called ‘project.tar’ containing the ‘project’ directory and all of its contents. The original ‘project’ directory remains unchanged.
|
||||
|
||||
Use the ‘-z’ option to compress the archive as it is being written. This yields the same output as creating an uncompressed archive and then using gzip to compress it, but it eliminates the extra step.
|
||||
|
||||
To create a compressed archive called ‘project.tar.gz’ from the contents of the ‘project’ directory, type:

```
$ tar -zcvf project.tar.gz project
```

This command creates a compressed archive file, ‘project.tar.gz’, containing the ‘project’ directory and all of its contents. The original ‘project’ directory remains unchanged.
|
||||
|
||||
**NOTE:** While using the ‘-z’ option, you should specify the archive name with a ‘.tar.gz’ extension and not a ‘.tar’ extension, so the file name shows that the archive is compressed. Although not required, it is a good practice to follow.
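
The ‘-z’ shortcut is equivalent to archiving and then compressing in two separate steps. A quick sketch illustrating the equivalence (the ‘project’ directory here is throwaway sample data created just for the demo):

```shell
#!/bin/sh
# Illustration: create an uncompressed archive, then compress it with gzip.
mkdir -p project
echo "demo" > project/file.txt

tar -cvf project.tar project   # step 1: uncompressed archive
gzip project.tar               # step 2: compress; replaces it with project.tar.gz

ls -l project.tar.gz
```

The ‘-z’ option simply folds both steps into one command and removes the intermediate ‘.tar’ file from the picture.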
|
||||
|
||||
Gzip is not the only form of compression; there are also bzip2 and xz. When we see a file with an .xz extension, we know it has been compressed using xz; when we see a file with a .bz2 extension, we can infer it was compressed using bzip2. We are going to steer away from bzip2, as it is becoming unmaintained, and focus on xz. Compressing with xz takes longer, but it is typically worth the wait: the compression is much more effective, so the resulting file will usually be smaller than with other compression methods. Even better, decompression speed does not differ much between the methods. Below is an example of how to use xz when compressing a file with tar:

```
$ tar -Jcvf project.tar.xz project
```

We simply switch the ‘-z’ for gzip to the uppercase ‘-J’ for xz. Here are some outputs that display the differences between the forms of compression:
|
||||
|
||||
![][3]
|
||||
|
||||
![][4]
|
||||
|
||||
As you can see xz does take the longest to compress. However it does the best job of reducing files size, so it’s worth the wait. The larger the file is the better the compression becomes too!
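
If you want to run a comparison like the one above yourself, here is a minimal sketch (the directory name and sample data are arbitrary choices for the demo, and the `xz` utility must be installed):

```shell
#!/bin/sh
# Create the same archive with gzip (-z) and xz (-J), then compare sizes.
mkdir -p project
head -c 1000000 /dev/zero > project/data.bin   # ~1 MB of sample data

tar -zcf project.tar.gz project   # gzip-compressed archive
tar -Jcf project.tar.xz project   # xz-compressed archive

ls -l project.tar.gz project.tar.xz   # compare the resulting file sizes
```

Highly compressible sample data like this exaggerates the ratios, but the relative ordering (xz smaller than gzip) usually holds for real files too.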
|
||||
|
||||
### 2- Listing Contents of an Archive File
|
||||
|
||||
To list the contents of a tar archive without extracting them, use tar with the ‘-t’ option.
|
||||
|
||||
To list the contents of an archive called ‘project.tar’, type:

```
$ tar -tvf project.tar
```

This command lists the contents of the ‘project.tar’ archive. Using the ‘-v’ option along with the ‘-t’ option causes tar to output the permissions and modification time of each file, along with its file name—the same format used by the ls command with the ‘-l’ option.

To list the contents of a compressed archive called ‘project.tar.gz’, type:

```
$ tar -tzvf project.tar.gz
```

### 3- Extracting Contents from an Archive File
|
||||
|
||||
To extract (or _unpack_) the contents of a tar archive, use tar with the ‘-x’ (“extract”) option.
|
||||
|
||||
To extract the contents of an archive called ‘project.tar’, type:

$ _tar -xvf project.tar_

This command extracts the contents of the ‘project.tar’ archive into the current directory.

If an archive is compressed, which usually means it will have a ‘.tar.gz’ or ‘.tgz’ extension, include the ‘-z’ option.

To extract the contents of a compressed archive called ‘project.tar.gz’, type:

$ _tar -zxvf project.tar.gz_

**NOTE:** If there are files or subdirectories in the current directory with the same name as any of those in the archive, those files will be overwritten when the archive is extracted. If you don’t know what files are included in an archive, consider listing the contents of the archive first.

Another reason to list the contents of an archive before extracting them is to determine whether the files in the archive are contained in a directory. If not, and the current directory contains many unrelated files, you might confuse them with the files extracted from the archive.

To extract the files into a directory of their own, make a new directory, move the archive to that directory, and change to that directory, where you can then extract the files from the archive.
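That workflow can be sketched as follows (all names are illustrative throwaways):

```shell
#!/bin/sh
# Make a small archive to work with
mkdir -p src
echo "hello" > src/notes.txt
tar -cvf project.tar -C src notes.txt

# Give the archive a directory of its own before unpacking
mkdir extracted
mv project.tar extracted/
(cd extracted && tar -tvf project.tar && tar -xvf project.tar)
```

Listing first (`-t`) shows exactly what will land in the new directory before anything is written.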
Now that we have learned how to create an archive file and list/extract its contents, we can move on to discuss the following 9 practical questions that are frequently asked by Linux professionals.
* Can we add content to an archive file without unpacking it?

With GNU tar, you can append files to an uncompressed archive using the ‘-r’ option (for example, _tar -rvf file.tar newfile_). Once an archive has been compressed, however, there is no way to add content to it directly. You would have to “unpack” it or extract the contents, edit or add content, and then compress the file again. If it’s a small file, this process will not take long. If it’s a larger file, then be prepared for it to take a while.
* Can we delete content from an archive file without unpacking it?

This depends on the version of tar being used. Newer versions of tar support a ‘--delete’ option (note that it works only on uncompressed archives).

For example, let’s say we have files file1 and file2. They can be removed from file.tar with the following:

_$ tar -vf file.tar --delete file1 file2_

To remove a directory dir1:

_$ tar -f file.tar --delete dir1/*_
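Here is a small, self-contained demonstration of ‘--delete’ (GNU tar only, uncompressed archives only; the file names are illustrative):

```shell
#!/bin/sh
# Create three files and archive them
echo one > file1
echo two > file2
echo three > file3
tar -cf file.tar file1 file2 file3

# Remove file1 and file2 from the archive in place
tar -vf file.tar --delete file1 file2

# Only file3 should remain in the listing
tar -tf file.tar
```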
* What are the differences between compressing a folder and archiving it?

The simplest way to see the difference between archiving and compressing is to look at the end result. When you archive files, you are combining multiple files into one; the data itself is not made any smaller. So if we archive ten 100 KB files, we will end up with a single file of roughly 1,000 KB (plus a little overhead for tar’s headers). On the other hand, if we compress those files, we could end up with a file that is only a few KB, or anywhere up to close to 100 KB, depending on how compressible the data is.
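To make the distinction concrete, here is a rough sketch (names are illustrative; exact sizes will vary with tar’s header overhead and how compressible the data is):

```shell
#!/bin/sh
# Ten files of ~100 KB each, filled with repetitive text
mkdir -p demo
for i in 1 2 3 4 5 6 7 8 9 10; do
    yes "repetitive line" | head -c 102400 > "demo/file$i"
done

# Archiving only: the tar file is roughly the sum of the inputs
tar -cf demo.tar demo

# Compressing the archive shrinks repetitive data dramatically
# (-k keeps the original .tar around for comparison)
gzip -k demo.tar

wc -c demo.tar demo.tar.gz
```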
* How to compress archive files?

As we saw above, you can create archive files using the tar command with the ‘-cvf’ options. To compress an archive file we have made, there are two options: run the archive file through a compression utility such as gzip, or use a compression flag when using the tar command. The most common compression flags are ‘-z’ for gzip, ‘-j’ for bzip2, and ‘-J’ for xz. We can see the first method below:

_$ gzip file.tar_

Or we can just use a compression flag when using the tar command; here we’ll see the gzip flag ‘-z’:

_$ tar -cvzf file.tar.gz /some/directory_
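For instance, the first method (archive first, compress afterwards) looks like this end to end (the directory name is illustrative):

```shell
#!/bin/sh
# Archive first...
mkdir -p somedir
echo "data" > somedir/a.txt
tar -cvf file.tar somedir

# ...then compress; by default gzip replaces file.tar with file.tar.gz
gzip file.tar
ls -l file.tar.gz
```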
* How to create archives of multiple directories and/or files at one time?

It is not uncommon to be in situations where we want to archive multiple files or directories at once. And it’s not as difficult as you might think to tar multiple files and directories at one time. You simply supply the files or directories you want to tar as arguments to the tar command:

_$ tar -cvzf file.tar.gz file1 file2 file3_

or

_$ tar -cvzf file.tar.gz /some/directory1 /some/directory2_
* How to skip directories and/or files when creating an archive?

You may run into a situation where you want to archive a directory or file but don’t need certain files to be archived. To avoid archiving those files, or to “exclude” them, use the ‘--exclude’ option with tar:

_$ tar --exclude '/some/directory' -cvf file.tar /home/user_

So in this example, /home/user would be archived, but /some/directory would be excluded if it was under /home/user. It’s important to put the ‘--exclude’ option before the source and destination, and to wrap the file or directory being excluded in single quotation marks.
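A quick sketch of ‘--exclude’ in action (the directory layout here is made up for illustration):

```shell
#!/bin/sh
# A home directory with something we want to skip
mkdir -p home/user/docs home/user/cache
echo keep > home/user/docs/notes.txt
echo skip > home/user/cache/tmp.dat

# Archive home/user, leaving out the cache directory
tar --exclude 'home/user/cache' -cvf file.tar home/user

# The listing shows docs/ made it in and cache/ did not
tar -tf file.tar
```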
### Summary

The tar command is useful for creating backups or compressing files you no longer need. It’s good practice to back up files before changing them; if something doesn’t work as intended after the change, you will always be able to revert to the old file. Compressing files that are no longer in use helps keep systems clean and lowers disk space usage. There are other utilities available, but tar has reigned supreme for its versatility, ease of use, and popularity.
### Resources

If you would like to learn more about Linux, reading the following articles and tutorials is highly recommended:

* [Comprehensive Review of Linux File System Architecture and Management][5]
* [Comprehensive Review of How Linux File and Directory System Works][6]
* [Comprehensive list of all Linux OS distributions][7]
* [Comprehensive list of all special purpose Linux distributions][8]
* [Linux System Admin Guide- Best Practices for Making and Managing Backup Operations][9]
* [Linux System Admin Guide- Overview of Linux Virtual Memory and Disk Buffer Cache][10]
* [Linux System Admin Guide- Best Practices for Monitoring Linux Systems][11]
* [Linux System Admin Guide- Best Practices for Performing Linux Boots and Shutdowns][12]
### About the Authors

**Matt Zand** is a serial entrepreneur and the founder of three tech startups: [DC Web Makers][13], [Coding Bootcamps][14] and [High School Technology Services][15]. He is the lead author of the book [Hands-on Smart Contract Development with Hyperledger Fabric][16] from O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum, and Corda R3 platforms. At DC Web Makers, he leads a team of blockchain experts for consulting and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, an angel investor, and a business advisor for a few startup companies. You can connect with him on LinkedIn: <https://www.linkedin.com/in/matt-zand-64047871>

**Kevin Downs** is a Red Hat Certified System Administrator (RHCSA). In his current job as a sysadmin at IBM, he is in charge of administering hundreds of servers running different Linux distributions. He is a Lead Linux Instructor at [Coding Bootcamps][17], where he has authored [5 self-paced courses][18].

The post [How to Create and Manage Archive Files in Linux][19] appeared first on [Linux Foundation – Training][20].
--------------------------------------------------------------------------------
via: https://www.linux.com/news/how-to-create-and-manage-archive-files-in-linux-2/

Author: [LF Training][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)

[a]: https://training.linuxfoundation.org/announcements/how-to-create-and-manage-archive-files-in-linux/
[b]: https://github.com/lujun9972
[1]: https://learn.coding-bootcamps.com/p/essential-practical-guide-to-cybersecurity-for-system-admin-and-developers
[2]: https://learn.coding-bootcamps.com/p/introduction-to-cloud-technology
[3]: https://training.linuxfoundation.org/wp-content/uploads/2020/12/Linux1-300x94.png
[4]: https://training.linuxfoundation.org/wp-content/uploads/2020/12/Linux2-300x110.png
[5]: https://blockchain.dcwebmakers.com/blog/linux-os-file-system-architecture-and-management.html
[6]: https://coding-bootcamps.com/linux/filesystem/index.html
[7]: https://myhsts.org/tutorial-list-of-all-linux-operating-system-distributions.php
[8]: https://coding-bootcamps.com/list-of-all-special-purpose-linux-distributions.html
[9]: https://myhsts.org/tutorial-system-admin-best-practices-for-managing-backup-operations.php
[10]: https://myhsts.org/tutorial-how-linux-virtual-memory-and-disk-buffer-cache-work.php
[11]: https://myhsts.org/tutorial-system-admin-best-practices-for-monitoring-linux-systems.php
[12]: https://myhsts.org/tutorial-best-practices-for-performing-linux-boots-and-shutdowns.php
[13]: https://blockchain.dcwebmakers.com/
[14]: http://coding-bootcamps.com/
[15]: https://myhsts.org/
[16]: https://www.oreilly.com/library/view/hands-on-smart-contract/9781492086116/
[17]: https://coding-bootcamps.com/
[18]: https://learn.coding-bootcamps.com/courses/author/758905
[19]: https://training.linuxfoundation.org/announcements/how-to-create-and-manage-archive-files-in-linux/
[20]: https://training.linuxfoundation.org/
@ -1,124 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (mengxinayan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (10 ways to get started with open source in 2021)
[#]: via: (https://opensource.com/article/21/1/getting-started-open-source)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)

10 ways to get started with open source in 2021
======
If you're new to open source, 2020's top 10 articles about getting started will help guide your path.
![Looking at a map for career journey][1]
Opensource.com exists to educate the world about everything open source, from new tools and frameworks to scaling communities. We aim to make open source more accessible to anyone who wants to use or contribute to it.

Getting started in open source can be hard, so we regularly share tips and advice on how you can get involved. If you want to learn Python, help fight COVID-19, or join the Kubernetes community, we've got you covered.

To help you begin, we curated the 10 most popular articles on getting started in open source that we published in 2020. We hope they'll inspire you to learn something new in 2021.

### A beginner's guide to web scraping with Python

Want to learn Python through doing, not reading? In this tutorial, Julia Piaskowski guides you through her first [web scraping project in Python][2]. She shows how to access webpage content with the Python requests library.

Julia walks through each step, from installing Python 3 to cleaning web scraping results with pandas. Aided by screenshots galore, she explains how to scrape with an end goal in mind.

The section on extracting relevant content is especially helpful; she doesn't mince words when saying this can be tough. But, like the rest of the article, she guides you through each step.
### A beginner's guide to SSH for remote connections on Linux

If you've never opened a secure shell (SSH) before, the first time can be confusing. In this tutorial, Seth Kenlon shows how to [configure two computers for SSH connections][3] and securely connect them without a password.

From four key phrases you should know to steps for activating SSH on each host, Seth explains every step of making SSH connections. He includes advice on finding your computer's IP address, creating an SSH key, and verifying your access to a remote machine.

### 5 steps to learn any programming language

If you know one programming language, you can [learn them all][4]. That's the premise of this article by Seth Kenlon, which argues that knowing some basic programming logic can scale across languages.

Seth shares five things programmers look for when considering a new language to learn. Syntax, built-ins, and parsers are among the five, and he accompanies each with steps to take action.

The key argument uniting them all? Once you know the theory of how code works, it scales across languages. Nothing is too hard for you to learn.
### Contribute to open source healthcare projects for COVID-19

Did you know that an Italian hospital saved COVID-19 patients' lives by 3D printing valves for reanimation devices? It's one of many ways open source contributors [built solutions for the pandemic][5] in 2020.

In this article, Joshua Pearce shares several ways to volunteer with open source projects addressing COVID-19. While Project Open Air is the largest, Joshua explains how you can also work on a wiki for an open source ventilator, write open source COVID-19 medical supply requirements, test an open source oxygen concentrator prototype, and more.

### Advice for getting started with GNOME

GNOME is one of the most popular Linux desktops, but is it right for you? This article shares [advice from GNOME][6] users interspersed with Opensource.com's take on this topic.

Want some inspiration for configuring your desktop? This article includes links to get started with GNOME extensions, installing Dash to Dock, using the GNOME Tweak tool, and more.

After all that, you might decide that GNOME still isn't for you—and that's cool. You'll find links to other Linux desktops and window managers at the end.

### 3 reasons to contribute to open source now

As of June 2020, GitHub hosted more than 180,000 public repositories. It's never been easier to join the open source community, but does that mean you should? In this article, Opensource.com Correspondent Jason Blais [shares three reasons][7] to take the plunge.

Contributing to open source can boost your confidence, resume, and professional network. Jason explains how to leverage your contributions in helpful detail, from adding open source contributions to your LinkedIn profile to turning these contributions into paid roles. There's even a list of great projects for first-time contributors at the end.
### 4 ways I contribute to open source as a Linux systems administrator

Sysadmins are the unsung heroes of open source. They do lots of work behind the code that's deeply valuable but often unseen.

In this article, Elizabeth K. Joseph explains how she [improves open source projects][8] as a Linux sysadmin. User support, hosting project resources, and finding new website environments are just a few ways she leaves communities better than she found them.

Perhaps the most crucial contribution of all? Documentation! Elizabeth got her start in open source by rewriting a quickstart guide for a project she used. Submitting bug and patch reports to projects you use often is an ideal way to get involved.

### 6 ways to contribute to an open source alternative to Slack

Mattermost is a popular platform for teams that want an open source messaging system. Its active, vibrant community is a key plus that keeps users loyal to the product, especially those with experience in Go, React, and DevOps.

If you'd like to [contribute to Mattermost][9], Jason Blais explains how. Consider this article your Getting Started documentation: Blais shares steps to take, organized by six types of contributions you can make.

Whether you'd like to build an integration or localize your language, this article shares how to get going.

### How to contribute to Kubernetes

I walked into Open Source Summit 2018 in Vancouver young and unaware of Kubernetes. After the keynotes, I walked out of the ballroom a changed-yet-confused woman. It's not hyperbole to say that Kubernetes has changed open source for good: It's tough to find a more popular, impactful project.

If you'd like to contribute, IBM Engineer Tara Gu explains [how she got started][10]. This recap of her lightning talk at All Things Open 2019 includes a video of the talk she gave in person. At a conference. Remember those…?
### How anyone can contribute to open source software in their job

Necessity is the mother of invention, especially in open source. Many folks build open source solutions to their own problems. But what happens when developers miss the mark by building products without gathering feedback from their target users?

Product and design teams often fill this gap in enterprises. What should developers do if such roles don't exist on their open source teams?

In this article, Catherine Robson explains how open source teams [can collect feedback][11] from their target users. It's written for folks who want to share their work experiences with developers, thus contributing their feedback to open source projects.

The steps Catherine outlines will help you share your insights with open source teams and play a key role in helping teams build better products.

### What do you want to learn?

What would you like to know about getting started in open source? Please share your suggestions for article topics in the comments. And if you have a story to share to help others get started in open source, please consider [writing an article][12] for Opensource.com.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/getting-started-open-source

Author: [Lauren Maffeo][a]
Selected by: [lujun9972][b]
Translated by: [萌新阿岩](https://github.com/mengxinayan)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)

[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey)
[2]: https://opensource.com/article/20/5/web-scraping-python
[3]: https://opensource.com/article/20/9/ssh
[4]: https://opensource.com/article/20/10/learn-any-programming-language
[5]: https://opensource.com/article/20/3/volunteer-covid19
[6]: https://opensource.com/article/20/6/gnome-users
[7]: https://opensource.com/article/20/6/why-contribute-open-source
[8]: https://opensource.com/article/20/7/open-source-sysadmin
[9]: https://opensource.com/article/20/7/mattermost
[10]: https://opensource.com/article/20/1/contributing-kubernetes-all-things-open-2019
[11]: https://opensource.com/article/20/10/open-your-job
[12]: https://opensource.com/how-submit-article
@ -1,184 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why I use the D programming language for scripting)
[#]: via: (https://opensource.com/article/21/1/d-scripting)
[#]: author: (Lawrence Aberba https://opensource.com/users/aberba)

Why I use the D programming language for scripting
======
The D programming language is best known as a system programming language, but it's also a great option for scripting.
![Business woman on laptop sitting in front of window][1]
The D programming language is often advertised as a system programming language due to its static typing and metaprogramming capabilities. However, it's also a very productive scripting language.

Python is commonly chosen for scripting due to its flexibility for automating tasks and quickly prototyping ideas. This makes Python very appealing to sysadmins, [managers][2], and developers in general for automating recurring tasks that they might otherwise have to do manually.

It is reasonable to expect any other script-writing language to have these Python traits and capabilities. Here are two reasons why I believe D is a good option.

### 1. D is easy to read and write

As a C-like language, D should be familiar to most programmers. Anyone who uses JavaScript, Java, PHP, or Python will know their way around D.

If you don't already have D installed, [install a D compiler][3] so that you can [run the D code][4] in this article. You may also use the [online D editor][5].

Here is an example of D code that reads words from a file named `words.txt` and prints them on the command line. First, create `words.txt` with the following contents:
```
open
source
is
cool
```
Write the script in D:

```
#!/usr/bin/env rdmd
// file print_words.d

// import the D standard library
import std;

void main(){
    // open the file
    File("./words.txt")

        // iterate by line
        .byLine

        // print each line
        .each!writeln;
}
```
This code is prefixed with a [shebang][6] that will run the code using [rdmd][7], a tool that comes with the D compiler to compile and run code. Assuming you are running Unix or Linux, before you can run this script, you must make it executable by using the `chmod` command:

```
chmod u+x print_words.d
```

Now that the script is executable, you can run it:

```
./print_words.d
```

This should print the following on your command line:
```
open
source
is
cool
```

Congratulations! You've written your first D script. You can see how D enables you to chain functions in sequence, making the code read naturally, similar to how you think about problems in your mind. This [feature makes D my favorite programming language][8].

Try writing another script: a nonprofit manager has a text file of donations, with each amount on a separate line. The manager wants to sum the first 10 donations and print the amounts:
```
#!/usr/bin/env rdmd
// file sum_donations.d

import std;

void main()
{
    double total = 0;

    // open the file
    File("monies.txt")

        // iterate by line
        .byLine

        // pick the first 10 lines
        .take(10)

        // remove newline characters (\n)
        .map!(strip)

        // convert each line to a double
        .map!(to!double)

        // add each element to the total
        .tee!((x) { total += x; })

        // print each number
        .each!writeln;

    // print the total
    writeln("total: ", total);
}
```
The `!` operator used with `each` is the syntax of a [template argument][9].

### 2. D is great for quick prototyping

D is flexible for hammering code together really quickly and making it work. Its standard library is rich with utility functions for performing common tasks, such as manipulating data (JSON, CSV, text, etc.). It also comes with a rich set of generic algorithms for iterating, searching, comparing, and mutating data. These cleverly crafted algorithms are oriented towards processing sequences by defining generic [range-based interfaces][10].

The script above shows how chaining functions in D gives a sense of sequential processing and manipulation of data. Another appeal of D is its growing ecosystem of third-party packages for performing common tasks. An example is how easy it is to build a simple web server using the [Vibe.d][11] web framework. Here's an example:
```
#!/usr/bin/env dub
/+ dub.sdl:
dependency "vibe-d" version="~>0.8.0"
+/
void main()
{
    import vibe.d;
    listenHTTP(":8080", (req, res) {
        res.writeBody("Hello, World: " ~ req.path);
    });
    runApplication();
}
```
This uses the official D package manager, [Dub][12], to fetch the vibe.d web framework from the [D package repository][13]. Dub takes care of downloading the Vibe.d package, then compiling it and spinning up a web server on localhost port 8080.

### Give D a try

These are only a couple of reasons why you might want to use D for writing scripts.

D is a great language for development. It's easy to install from the D download page, so download the compiler, take a look at the examples, and experience D for yourself.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/d-scripting

Author: [Lawrence Aberba][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)

[a]: https://opensource.com/users/aberba
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://opensource.com/article/20/3/automating-community-management-python
[3]: https://tour.dlang.org/tour/en/welcome/install-d-locally
[4]: https://tour.dlang.org/tour/en/welcome/run-d-program-locally
[5]: https://run.dlang.io/
[6]: https://en.wikipedia.org/wiki/Shebang_(Unix)
[7]: https://dlang.org/rdmd.html
[8]: https://opensource.com/article/20/7/d-programming
[9]: http://ddili.org/ders/d.en/templates.html
[10]: http://ddili.org/ders/d.en/ranges.html
[11]: https://vibed.org
[12]: https://dub.pm/getting_started
[13]: https://code.dlang.org
@ -1,180 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage containers with Podman Compose)
[#]: via: (https://fedoramagazine.org/manage-containers-with-podman-compose/)
[#]: author: (Mehdi Haghgoo https://fedoramagazine.org/author/powergame/)

Manage containers with Podman Compose
======

![][1]
Containers are awesome, allowing you to package your application along with its dependencies and run it anywhere. Starting with Docker in 2013, containers have been making the lives of software developers much easier.

One of the downsides of Docker is that it has a central daemon that runs as the root user, and this has security implications. But this is where Podman comes in handy. Podman is a [daemonless container engine][2] for developing, managing, and running OCI containers on your Linux system in root or rootless mode.

There are other articles on Fedora Magazine you can use to learn more about Podman. Two examples follow:

  * [Using Pods with Podman on Fedora][3]
  * [Podman with Capabilities on Fedora][4]

If you have worked with Docker, chances are you also know about Docker Compose, which is a tool for orchestrating several containers that might be interdependent. To learn more about Docker Compose, see its [documentation][5].
### What is Podman Compose?

[Podman Compose][6] is a project whose goal is to serve as an alternative to Docker Compose without requiring any changes to the docker-compose.yaml file. Since Podman Compose works using pods, a refresher on the definition of a pod is useful.

> A _Pod_ (as in a pod of whales or pea pod) is a group of one or more [containers][7], with shared storage/network resources, and a specification for how to run the containers.
>
> [Pods – Kubernetes Documentation][8]

The basic idea behind Podman Compose is that it picks the services defined inside the _docker-compose.yaml_ file and creates a container for each service. A major difference between Docker Compose and Podman Compose is that Podman Compose adds the containers to a single pod for the whole project, and all the containers share the same network. It even names the containers the same way Docker Compose does, using the _--add-host_ flag when creating the containers, as you will see in the example.
### Installation

Complete install instructions for Podman Compose are found on its [project page][6], and there are several ways to do it. To install the latest development version, use the following command:

```
pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
```

Make sure you also have [Podman installed][9], since you'll need it as well. On Fedora, to install Podman use the following command:

```
sudo dnf install podman
```
### Example: launching a WordPress site with Podman Compose

Imagine your _docker-compose.yaml_ file is in a folder called _wpsite_. A typical _docker-compose.yaml_ (or _docker-compose.yml_) for a WordPress site looks like this:
```
version: "3.8"
services:
  web:
    image: wordpress
    restart: always
    volumes:
      - wordpress:/var/www/html
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: magazine
      WORDPRESS_DB_NAME: magazine
      WORDPRESS_DB_PASSWORD: 1maGazine!
      WORDPRESS_TABLE_PREFIX: cz
      WORDPRESS_DEBUG: 0
    depends_on:
      - db
    networks:
      - wpnet
  db:
    image: mariadb:10.5
    restart: always
    ports:
      - 6603:3306

    volumes:
      - wpdbvol:/var/lib/mysql

    environment:
      MYSQL_DATABASE: magazine
      MYSQL_USER: magazine
      MYSQL_PASSWORD: 1maGazine!
      MYSQL_ROOT_PASSWORD: 1maGazine!
    networks:
      - wpnet
volumes:
  wordpress: {}
  wpdbvol: {}

networks:
  wpnet: {}
```
|
||||
|
||||
If you come from a Docker background, you know you can launch these services by running _docker-compose up_. Docker Compose will create two containers named _wpsite_web_1_ and _wpsite_db_1_, and attaches them to a network called _wpsite_wpnet_.
|
||||
|
||||
Now, see what happens when you run _podman-compose up_ in the project directory. First, a pod is created named after the directory in which the command was issued. Next, it looks for any named volumes defined in the YAML file and creates the volumes if they do not exist. Then, one container is created per every service listed in the _services_ section of the YAML file and added to the pod.

Containers are named similarly to Docker Compose. For example, for your web service, a container named _wpsite_web_1_ is created. Podman Compose also adds localhost aliases to each named container, so containers can still resolve each other by name even though they are not on a bridge network as in Docker. To do this, it uses the _--add-host_ option, for example _--add-host web:localhost_.
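
To make the naming scheme concrete, here is a short Python sketch; the helper function is purely illustrative (it is not part of Podman Compose), but it produces the same _project_service_index_ pattern described above:

```python
def compose_container_name(project: str, service: str, index: int = 1) -> str:
    """Build a Compose-style container name: <project>_<service>_<index>."""
    return f"{project}_{service}_{index}"

# For the wpsite project above, the two services get these names:
names = [compose_container_name("wpsite", svc) for svc in ("web", "db")]
print(names)  # ['wpsite_web_1', 'wpsite_db_1']
```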

Note that _docker-compose.yaml_ forwards host port 8080 to container port 80 for the web service. You should now be able to access your fresh WordPress instance from the browser using the address _<http://localhost:8080>_.

![WordPress Dashboard][10]

### Controlling the pod and containers

To see your running containers, use _podman ps_, which shows the web and database containers along with the infra container in your pod.

```
CONTAINER ID  IMAGE                               COMMAND               CREATED      STATUS          PORTS                                         NAMES
a364a8d7cec7  docker.io/library/wordpress:latest  apache2-foregroun...  2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_web_1
c447024aa104  docker.io/library/mariadb:10.5      mysqld                2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_db_1
12b1e3418e3e  k8s.gcr.io/pause:3.2
```

You can also verify that a pod has been created by Podman for this project, named after the folder in which you issued the command, by running _podman pod list_:

```
POD ID        NAME    STATUS    CREATED      INFRA ID      # OF CONTAINERS
8a08a3a7773e  wpsite  Degraded  2 hours ago  12b1e3418e3e  3
```

To stop the containers, enter the following command in another command window:

```
podman-compose down
```

You can also do that by stopping and removing the pod. This essentially stops and removes all the containers and then the containing pod. So, the same thing can be achieved with these commands:

```
podman pod stop podname
podman pod rm podname
```

Note that this does not remove the volumes you defined in _docker-compose.yaml_. So, the state of your WordPress site is saved, and you can get it back by running this command:

```
podman-compose up
```

In conclusion, if you’re a Podman fan and do your container jobs with Podman, you can use Podman Compose to manage your containers in development and production.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/manage-containers-with-podman-compose/

作者:[Mehdi Haghgoo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/powergame/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/01/podman-compose-1-816x345.jpg
[2]: https://podman.io
[3]: https://fedoramagazine.org/podman-pods-fedora-containers/
[4]: https://fedoramagazine.org/podman-with-capabilities-on-fedora/
[5]: https://docs.docker.com/compose/
[6]: https://github.com/containers/podman-compose
[7]: https://kubernetes.io/docs/concepts/containers/
[8]: https://kubernetes.io/docs/concepts/workloads/pods/
[9]: https://podman.io/getting-started/installation
[10]: https://fedoramagazine.org/wp-content/uploads/2021/01/Screenshot-from-2021-01-08-06-27-29-1024x767.png
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop)
[#]: via: (https://itsfoss.com/filmulator/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop
======

_**Brief: Filmulator is an open source raw photo editing application with library management focusing on simplicity, ease of use and streamlined workflow.**_

### Filmulator: Raw Image Editor for Linux (and Windows)

There are a [bunch of raw photo editors for Linux][1]. [Filmulator][2] is one of them. Filmulator aims to make raw image editing simple by providing only the essential elements. It also adds library handling, which is a plus if you are looking for a decent application for your camera images.

For those unaware, a [raw image file][3] is a minimally processed, uncompressed file. In other words, it is an untouched digital file straight from the camera, with no compression and minimal processing applied to it. Professional photographers prefer to capture photos in raw format and process them themselves. Most people take photos with their smartphones, which usually compress them to JPEG format or apply filters.

Let’s see what features you get in the Filmulator editor.

### Features of Filmulator

![Filmulator interface][4]

Filmulator claims that it is not the typical “film effect filter” that merely copies the outward characteristics of film. Instead, Filmulator gets to the root of what makes film so appealing: the development process.

It simulates the film development process: from the “exposure” of film, to the growth of “silver crystals” within each pixel, to the diffusion of “developer” both between neighboring pixels and with the bulk developer in the tank.

The Filmulator developer says that the simulation brings about the following benefits:

  * Large bright regions become darker, compressing the output dynamic range.
  * Small bright regions make their surroundings darker, enhancing local contrast.
  * In bright regions, saturation is enhanced, helping retain color in blue skies, brighter skin tones, and sunsets.
  * In extremely saturated regions, the brightness is attenuated, helping retain detail, e.g. in flowers.

Here’s a comparison of a raw image processed by Filmulator to enhance colors in a natural manner without inducing color clipping.

![][5]

### Installing Filmulator on Ubuntu/Linux

There is an AppImage available for Filmulator so that you can use it easily on Linux. [Using AppImage files][6] is really simple. You download the file, make it executable, and run it by double-clicking on it.
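
From a terminal, the same steps boil down to a _chmod_ and a launch. A minimal sketch, using a scratch file as a stand-in for the downloaded AppImage (the real filename depends on the release you download):

```shell
# A stand-in file is used here so the steps can be demonstrated anywhere;
# substitute the actual AppImage you downloaded (its name varies by release).
appimage=$(mktemp)
chmod +x "$appimage"                     # make it executable
test -x "$appimage" && echo "ready to run"
# ./"$appimage"                          # launch step; needs the real AppImage
```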

[Download Filmulator for Linux][7]

There is also a Windows version available for Windows users. Apart from that, you can always head over to [its GitHub repository][8] and peek into its source code.

There is a [little documentation][9] to help you get started with Filmulator.

### Conclusion

Filmulator’s design ideology is to have the best tool for any job, and only that one tool. This means compromising flexibility, but gaining a greatly simplified and streamlined user interface.

I am not even an amateur photographer, let alone a professional one. I do not own a DSLR or other high-end photography equipment. For this reason, I cannot test and share my experience on the usefulness of Filmulator.

If you have more experience dealing with raw images, try Filmulator and share your opinion on it. There is an AppImage available, so you can quickly test it and see if it fits your needs.

--------------------------------------------------------------------------------

via: https://itsfoss.com/filmulator/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/raw-image-tools-linux/
[2]: https://filmulator.org/
[3]: https://www.findingtheuniverse.com/what-is-raw-in-photography/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/Filmulate.jpg?resize=799%2C463&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/image-without-filmulator.jpeg?ssl=1
[6]: https://itsfoss.com/use-appimage-linux/
[7]: https://filmulator.org/download/
[8]: https://github.com/CarVac/filmulator-gui
[9]: https://github.com/CarVac/filmulator-gui/wiki
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Paru – A New AUR Helper and Pacman Wrapper Based on Yay)
[#]: via: (https://itsfoss.com/paru-aur-helper/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)

Paru – A New AUR Helper and Pacman Wrapper Based on Yay
======

One of the [main reasons that a user chooses Arch Linux][1] or an [Arch based Linux distribution][2] is the [Arch User Repository (AUR)][3].

Unfortunately, [pacman][4], the package manager of Arch, can’t access the AUR the way it accesses the official repositories. Packages in the AUR come in the form of [PKGBUILD][5] files and must be built manually.

An AUR helper can automate this process. Without any doubt, [yay][6] is one of the most popular and highly favoured AUR helpers.

Recently [Morganamilo][7], one of the two developers of yay, [announced][8] that he is stepping away from maintaining yay to start his own AUR helper called [paru][9]. Paru is written in [Rust][10], whereas yay is written in [Go][11], and its design is based on yay.

Please note that yay hasn’t reached end of life and is still actively maintained by [Jguer][12]. He also [commented][13] that paru may be suitable for users looking for a feature-rich AUR helper, so I would recommend giving it a try.

### Installing Paru AUR helper

To install paru, open your terminal and type the following commands one by one.

```
sudo pacman -S --needed base-devel
git clone https://aur.archlinux.org/paru.git
cd paru
makepkg -si
```

Now that you have it installed, let’s see how to use it.

### Essential commands to use Paru AUR helper

In my opinion, these are the most essential commands of paru. You can explore more on the official repository on [GitHub][9].

  * **paru <userinput>**: Search and install <userinput>.
  * **paru**: Alias for _paru -Syu_.
  * **paru -Sua**: Upgrade AUR packages only.
  * **paru -Qua**: Print available AUR updates.
  * **paru -Gc <userinput>**: Print the AUR comments of <userinput>.

### Using Paru AUR helper to its full extent

You can access the [changelog][14] of paru on GitHub for the full changelog history, or you can see the changes from yay in the [first release][15].

#### Enable colour in Paru

To enable colour in paru, you have to enable it first in pacman. All the [configuration files][16] are in the /etc directory. In this example, I [use the Nano text editor][17], but you may use any [terminal-based text editor][18] of your choice.

```
sudo nano /etc/pacman.conf
```

Once you open the pacman configuration file, uncomment the “Color” option to enable this feature.

![][19]
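
If you prefer not to edit the file by hand, the same change can be scripted with _sed_, assuming the stock commented-out _#Color_ line. A sketch on a scratch copy (apply it to /etc/pacman.conf with sudo only at your own discretion):

```shell
# Work on a scratch copy standing in for /etc/pacman.conf.
conf=$(mktemp)
printf '#UseSyslog\n#Color\n#CheckSpace\n' > "$conf"
sed -i 's/^#Color$/Color/' "$conf"   # uncomment the Color option
grep -x 'Color' "$conf"              # prints: Color
```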

#### **Flip search order**

The most relevant package according to your search term is normally displayed at the top of the search results. In paru, you can flip the search order to make your search easier.

Similar to the previous example, open the paru configuration file:

```
sudo nano /etc/paru.conf
```

Uncomment the “BottomUp” option and save the file.

![][20]

As you can see, the order is flipped and the first package appears at the bottom.

![][21]

#### **Edit PKGBUILDs** (for advanced users)

If you are an experienced Linux user, you can edit AUR packages through paru. To do so, you need to enable the feature from the paru configuration file and set the file manager of your choice.

In this example, I will use the default in the configuration file, i.e. the vifm file manager. If you haven’t used it, you may need to install it.

```
sudo pacman -S vifm
sudo nano /etc/paru.conf
```

Open the configuration file and uncomment as shown below.

![][22]

Let’s go back to the [Google Calendar][23] AUR package and try to install it. You will be prompted to review the package. Type yes and press enter.

![][24]

Choose the PKGBUILD from the file manager and hit enter to view the package.

![][25]

Any change that you make will be permanent, and the next time you upgrade the package, your changes will be merged with the upstream package.

![][26]

### Conclusion

Paru is another interesting addition to the [AUR helpers family][27] with a promising future. At this point I wouldn’t suggest replacing yay, as it is still maintained, but definitely give paru a try. You can have both of them installed on your system and come to your own conclusions.

To get the latest [Linux news][28], subscribe to our social media and be among the first to get the news while it’s fresh!

--------------------------------------------------------------------------------

via: https://itsfoss.com/paru-aur-helper/

作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/why-arch-linux/
[2]: https://itsfoss.com/arch-based-linux-distros/
[3]: https://itsfoss.com/aur-arch-linux/
[4]: https://itsfoss.com/pacman-command/
[5]: https://wiki.archlinux.org/index.php/PKGBUILD
[6]: https://github.com/Jguer/yay
[7]: https://github.com/Morganamilo
[8]: https://www.reddit.com/r/archlinux/comments/jjn1c1/paru_v100_and_stepping_away_from_yay/
[9]: https://github.com/Morganamilo/paru
[10]: https://www.rust-lang.org/
[11]: https://golang.org/
[12]: https://github.com/Jguer
[13]: https://aur.archlinux.org/packages/yay/#pinned-788241
[14]: https://github.com/Morganamilo/paru/releases
[15]: https://github.com/Morganamilo/paru/releases/tag/v1.0.0
[16]: https://linuxhandbook.com/linux-directory-structure/#-etc-configuration-files
[17]: https://itsfoss.com/nano-editor-guide/
[18]: https://itsfoss.com/command-line-text-editors-linux/
[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/pacman.conf-color.png?resize=800%2C480&ssl=1
[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru.conf-bottomup.png?resize=800%2C480&ssl=1
[21]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru.conf-bottomup-2.png?resize=800%2C480&ssl=1
[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru.conf-vifm.png?resize=732%2C439&ssl=1
[23]: https://aur.archlinux.org/packages/gcalcli/
[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru-proceed-for-review.png?resize=800%2C480&ssl=1
[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru-proceed-for-review-2.png?resize=800%2C480&ssl=1
[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru-proceed-for-review-3.png?resize=800%2C480&ssl=1
[27]: https://itsfoss.com/best-aur-helpers/
[28]: https://news.itsfoss.com/
[#]: collector: (lujun9972)
[#]: translator: (amwps290)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A hands-on tutorial of SQLite3)
[#]: via: (https://opensource.com/article/21/2/sqlite3-cheat-sheet)
[#]: author: (Klaatu https://opensource.com/users/klaatu)

A hands-on tutorial of SQLite3
======

Get started with this incredibly powerful and common database. Download the SQLite cheat sheet.

![Cheat Sheet cover image][1]

Applications very often save data. Whether your users create simple text documents, complex graphic layouts, game progress, or an intricate list of customers and order numbers, software usually implies that data is being generated. There are many ways to store data for repeated use. You can dump text to configuration formats such as INI, [YAML][2], XML, or JSON, you can write out raw binary data, or you can store data in a structured database. SQLite is a self-contained, lightweight database that makes it easy to create, parse, query, modify, and transport data.

**[Download our [SQLite3 cheat sheet][3]]**

SQLite has been dedicated to the [public domain][4], which [technically means it is not copyrighted and therefore requires no license][5]. Should you require a license, you can [purchase a Warranty of Title][6]. SQLite is immensely common, with an estimated 1 _trillion_ SQLite databases in active use. That's counting multiple databases on every Android and iOS device, every macOS and Windows 10 computer, most Linux systems, within every Webkit-based web browser, modern TV sets, automotive multimedia systems, and countless other software applications.

In summary, it's a reliable and simple system to use for storing and organizing data.

### Installing

You probably already have SQLite libraries on your system, but you need its command-line tools installed to use it directly. On Linux, you probably already have these tools installed. The command provided by the tools is **sqlite3** (not just **sqlite**).

If you don't have SQLite installed on Linux or BSD, you can install it from your software repository or ports tree, or [download and install it][7] from source code or as a compiled binary.

On macOS or Windows, you can download and install SQLite tools from [sqlite.org][7].

### Using SQLite

It's common to interact with a database through a programming language. For this reason, there are SQLite interfaces (or "bindings") for Java, Python, Lua, PHP, Ruby, C++, and many, many others. However, before using these libraries, it helps to understand what's actually happening with the database engine and why your choice of a database is significant. This article introduces you to SQLite and the **sqlite3** command so you can get familiar with the basics of how this database handles data.
### Interacting with SQLite

You can interact with SQLite using the **sqlite3** command. This command provides an interactive shell so you can view and update your databases.

```
$ sqlite3
SQLite version 3.34.0 2020-12-01 16:14:00
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite>
```

The command places you in an SQLite subshell, so your prompt is now an SQLite prompt. Your usual Bash commands don't work here; you must use SQLite commands. To see a list of SQLite commands, type **.help**:

```
sqlite> .help
.archive ...             Manage SQL archives
.auth ON|OFF             Show authorizer callbacks
.backup ?DB? FILE        Backup DB (default "main") to FILE
.bail on|off             Stop after hitting an error. Default OFF
.binary on|off           Turn binary output on or off. Default OFF
.cd DIRECTORY            Change the working directory to DIRECTORY
[...]
```

Some of these commands are binary, while others require unique arguments (like filenames, paths, etc.). These are administrative commands for your SQLite shell and are not database queries. Databases take queries in Structured Query Language (SQL), and many SQLite queries are the same as what you may already know from the [MySQL][8] and [MariaDB][9] databases. However, data types and functions differ, so pay close attention to minor differences if you're familiar with another database.

### Creating a database

When launching SQLite, you can either open a prompt in memory, or you can select a database to open:

```
$ sqlite3 mydatabase.db
```

If you have no database yet, you can create one at the SQLite prompt:

```
sqlite> .open mydatabase.db
```

You now have an empty file on your hard drive, ready to be used as an SQLite database. The file extension **.db** is arbitrary. You can also use **.sqlite**, or whatever you want.

### Creating a table

Databases contain _tables_, which can be visualized as a spreadsheet. There's a series of rows (called _records_ in a database) and columns. The intersection of a row and a column is called a _field_.

The Structured Query Language (SQL) is named after what it provides: a method to inquire about the contents of a database in a predictable and consistent syntax to receive useful results. SQL reads a lot like an ordinary English sentence, if a little robotic. Currently, your database is empty, devoid of any tables.

You can create a table with the **CREATE** query. It's useful to combine this with the **IF NOT EXISTS** statement, which prevents SQLite from clobbering an existing table.

You can't create an empty table in SQLite, so before trying a **CREATE** statement, you must think about what kind of data you anticipate the table will store. In this example, I'll create a table called _member_ with these columns:

  * A unique identifier
  * A person's name
  * The date and time of data entry

#### Unique ID

It's always good to refer to a record by a unique number, and luckily SQLite recognizes this and does it automatically for you in a column called **rowid**.

No SQL statement is required to create this field.

#### Data types

For my example table, I'm creating a _name_ column to hold **TEXT** data. To prevent a record from being created without data in a specified field, you can add the **NOT NULL** directive.

The SQL to create this field is: **name TEXT NOT NULL**

There are five data types (actually _storage classes_) in SQLite:

  * TEXT: a text string
  * INTEGER: a whole number
  * REAL: a floating point (unlimited decimal places) number
  * BLOB: binary data (for instance, a .jpeg or .webp image)
  * NULL: a null value

#### Date and time stamp

SQLite includes a convenient date and timestamp function. It is not a data type itself but a function in SQLite that generates either a string or integer, depending on your desired format. In this example, I left it as the default.

The SQL to create this field is: **datestamp DATETIME DEFAULT CURRENT_TIMESTAMP**

### Table creation SQL

The full SQL for creating this example table in SQLite:

```
sqlite> CREATE TABLE
   ...> IF NOT EXISTS
   ...> member (name TEXT NOT NULL,
   ...> datestamp DATETIME DEFAULT CURRENT_TIMESTAMP);
```

In this code sample, I pressed **Return** after the logical clauses of the statement to make it easier to read. SQLite won't run your command unless it terminates with a semi-colon (**;**).

You can verify that the table has been created with the SQLite command **.tables**:

```
sqlite> .tables
member
```
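
The same table can be created programmatically, for example through Python's built-in _sqlite3_ module, which wraps the same engine. A small sketch using an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for a persistent database
conn.execute("""CREATE TABLE IF NOT EXISTS
                member (name TEXT NOT NULL,
                        datestamp DATETIME DEFAULT CURRENT_TIMESTAMP)""")

# The programmatic equivalent of the .tables shell command:
tables = [row[0] for row in
          conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['member']
```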

### View all columns in a table

You can verify what columns and rows a table contains with the **PRAGMA** statement:

```
sqlite> PRAGMA table_info(member);
0|name|TEXT|1||0
1|datestamp|DATETIME|0|CURRENT_TIMESTAMP|0
```

### Data entry

You can populate your new table with some sample data by using the **INSERT** SQL keyword:

```
> INSERT INTO member (name) VALUES ('Alice');
> INSERT INTO member (name) VALUES ('Bob');
> INSERT INTO member (name) VALUES ('Carol');
> INSERT INTO member (name) VALUES ('David');
```

Verify the data in the table:

```
> SELECT * FROM member;
Alice|2020-12-15 22:39:00
Bob|2020-12-15 22:39:02
Carol|2020-12-15 22:39:05
David|2020-12-15 22:39:07
```

#### Adding multiple rows at once

Now create a second table:

```
> CREATE TABLE IF NOT EXISTS linux (
...> distro TEXT NOT NULL);
```

Populate it with some sample data, this time using a little **VALUES** shortcut so you can add multiple rows in just one command. The **VALUES** keyword expects a list in parentheses but can take multiple lists separated by commas:

```
> INSERT INTO linux (distro)
...> VALUES ('Slackware'), ('RHEL'),
...> ('Fedora'), ('Debian');
```
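
From a programming language, the same multi-row insert is usually done with a parameterized statement. In Python's built-in _sqlite3_ module, _executemany_ plays the role of the comma-separated **VALUES** lists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS linux (distro TEXT NOT NULL)")

# One parameterized statement, one tuple per row to insert
distros = [("Slackware",), ("RHEL",), ("Fedora",), ("Debian",)]
conn.executemany("INSERT INTO linux (distro) VALUES (?)", distros)

count = conn.execute("SELECT count(*) FROM linux").fetchone()[0]
print(count)  # 4
```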

### Altering a table

You now have two tables, but as yet, there's no relationship between the two. They each contain independent data, but possibly you might need to associate a member of the first table to a specific item listed in the second.

To do that, you can create a new column for the first table that corresponds to something in the second. Because both tables were designed with unique identifiers (automatically, thanks to SQLite), the easiest way to connect them is to use the **rowid** field of one as a selector for the other.

Create a new column in the first table to represent a value in the second table:

```
> ALTER TABLE member ADD os INT;
```

Using the unique IDs of the **linux** table, assign a distribution to each member. Because the records already exist, you use the **UPDATE** SQL keyword rather than **INSERT**. Specifically, you want to select one row and then update the value of one column. Syntactically, this is expressed a little in reverse, with the update happening first and the selection matching last:

```
> UPDATE member SET os=1 WHERE name='Alice';
```

Repeat this process for the other names in the **member** table, just to populate it with data. For variety, assign three different distributions across the four rows (doubling up on one).

### Joining tables

Now that these two tables relate to one another, you can use SQL to display the associated data. There are many kinds of _joins_ in databases, but you can try them all once you know the basics. Here's a basic join to correlate the values found in the **os** field of the **member** table to the **rowid** field of the **linux** table:

```
> SELECT * FROM member INNER JOIN linux ON member.os=linux.rowid;
Alice|2020-12-15 22:39:00|1|Slackware
Bob|2020-12-15 22:39:02|3|Fedora
Carol|2020-12-15 22:39:05|3|Fedora
David|2020-12-15 22:39:07|4|Debian
```

The **os** and **rowid** fields form the join.

In a graphical application, you can imagine that the **os** field might be set by a drop-down menu, the values for which are drawn from the contents of the **distro** field of the **linux** table. By using separate tables for unique but related sets of data, you ensure the consistency and validity of data, and thanks to SQL, you can associate them dynamically later.
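
The join runs unchanged through the Python bindings as well. A self-contained sketch that rebuilds both tables in memory and correlates them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE linux (distro TEXT NOT NULL);
    CREATE TABLE member (name TEXT NOT NULL, os INT);
    INSERT INTO linux (distro) VALUES ('Slackware'), ('RHEL'), ('Fedora'), ('Debian');
    INSERT INTO member (name, os) VALUES
        ('Alice', 1), ('Bob', 3), ('Carol', 3), ('David', 4);
""")

# os values refer to the implicit rowid of the linux table (1-based insert order)
rows = conn.execute("""SELECT member.name, linux.distro
                       FROM member INNER JOIN linux
                       ON member.os = linux.rowid""").fetchall()
print(rows)
```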

### Learning more

SQLite is an infinitely useful self-contained, portable, open source database. Learning to use it interactively is a great first step toward managing it for web applications or using it through programming language libraries.

If you enjoy SQLite, you might also try [Fossil][10] by the same author, Dr. Richard Hipp.

As you learn and use SQLite, it may help to have a list of common commands nearby, so download our **[SQLite3 cheat sheet][3]** today!

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/sqlite3-cheat-sheet

作者:[Klaatu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/klaatu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_cheat_sheet.png?itok=lYkNKieP (Cheat Sheet cover image)
[2]: https://www.redhat.com/sysadmin/yaml-beginners
[3]: https://opensource.com/downloads/sqlite-cheat-sheet
[4]: https://sqlite.org/copyright.html
[5]: https://directory.fsf.org/wiki/License:PublicDomain
[6]: https://www.sqlite.org/purchase/license?
[7]: https://www.sqlite.org/download.html
[8]: https://www.mysql.com/
[9]: https://mariadb.org/
[10]: https://opensource.com/article/20/11/fossil
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Astrophotography with Fedora Astronomy Lab: setting up)
[#]: via: (https://fedoramagazine.org/astrophotography-with-fedora-astronomy-lab-setting-up/)
[#]: author: (Geoffrey Marr https://fedoramagazine.org/author/coremodule/)

Astrophotography with Fedora Astronomy Lab: setting up
======

![][1]

Photo by Geoffrey Marr

You love astrophotography. You love Fedora Linux. What if you could do the former using the latter? Capturing stunning and awe-inspiring astrophotographs, processing them, and editing them for printing or sharing online using Fedora is absolutely possible! This tutorial guides you through the process of setting up a computer-guided telescope mount, guide cameras, imaging cameras, and other pieces of equipment. A future article will cover capturing and processing data into pleasing images. Please note that while this article is written with certain aspects of the astrophotography process included or omitted based on my own equipment, you can custom-tailor it to fit your own equipment and experience. Let’s capture some photons!

![][2]

### Installing Fedora Astronomy Lab

This tutorial focuses on [Fedora Astronomy Lab][3], so it only makes sense that the first thing we should do is get it installed. But first, a quick introduction: based on the KDE Plasma desktop, Fedora Astronomy Lab includes many pieces of open source software to aid astronomers in planning observations, capturing data, processing images, and controlling astronomical equipment.

Download Fedora Astronomy Lab from the [Fedora Labs website][4]. You will need a USB flash-drive with at least eight GB of storage. Once you have downloaded the ISO image, use [Fedora Media Writer][5] to [write the image to your USB flash-drive.][6] After this is done, [boot from the USB drive][7] you just flashed and [install Fedora Astronomy Lab to your hard drive.][8] While you can use Fedora Astronomy Lab in a live-environment right from the flash drive, you should install to the hard drive to prevent bottlenecks when processing large amounts of astronomical data.

### Configuring your installation

Before you can go capturing the heavens, you need to do some minor setup in Fedora Astronomy Lab.

First of all, you need to add your user to the _dialout_ group so that you can access certain pieces of astronomical equipment from within the guiding software. Do that by opening the terminal (Konsole) and running this command (replacing _user_ with your username):
|
||||
|
||||
```
sudo usermod -a -G dialout user
```
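Group changes only take effect in new login sessions. After logging out and back in, you can check whether the membership is active with a quick sketch like this (output will vary by system):

```shell
# List the groups of the current session, one per line, and look for dialout.
# If the group is not listed yet, log out and back in (or reboot) first.
id -nG | tr ' ' '\n' | grep -x dialout \
  || echo "dialout not active yet -- log out and back in"
```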
|
||||
|
||||
My personal setup includes a guide camera (QHY5 series, also known as Orion Starshoot) that does not have a driver in the mainline Fedora repositories. To enable it, you need to install the [qhyccd SDK][9]. (_Note that this package is not officially supported by Fedora. Use it at your own risk._) At the time of writing, the latest stable release was 20.08.26. Once you have downloaded the Linux 64-bit version of the SDK, extract it:
|
||||
```
tar zxvf sdk_linux64_20.08.26.tgz
```
|
||||
|
||||
Now change into the directory you just extracted, give yourself execute permission on _install.sh_, and run it:
|
||||
|
||||
```
|
||||
cd sdk_linux64_20.08.26
|
||||
chmod +x install.sh
|
||||
sudo ./install.sh
|
||||
```
|
||||
|
||||
Now it’s time to install the qhyccd INDI driver. INDI is an open source software library used to control astronomical equipment. Unfortunately, the driver is unavailable in the mainline Fedora repositories, but it is in a Copr repository. (_Note: Copr is not officially supported by Fedora infrastructure. Use packages at your own risk._) If you prefer to have the newest (and perhaps unstable!) pieces of astronomy software, you can also enable the “bleeding” repositories at this time by following [this guide][10]. For this tutorial, you only need to enable one repo:
|
||||
```
sudo dnf copr enable xsnrg/indi-3rdparty-bleeding
```
|
||||
|
||||
Install the driver by running the following command:
|
||||
```
sudo dnf install indi-qhy
```
|
||||
|
||||
Finally, update all of your system packages:
|
||||
|
||||
```
sudo dnf update -y
```
|
||||
|
||||
To recap what you accomplished in this section: you added your user to the _dialout_ group, downloaded and installed the qhyccd SDK, enabled the _indi-3rdparty-bleeding_ Copr, installed the qhyccd INDI driver with dnf, and updated your system.
|
||||
|
||||
### Connecting your equipment
|
||||
|
||||
This is the time to connect all your equipment to your computer. Most astronomical equipment will connect via USB, and it’s really as easy as plugging each device into your computer’s USB ports. If you have a lot of equipment (mount, imaging camera, guide camera, focuser, filter wheel, etc), you should use an external powered-USB hub to make sure that all connected devices have adequate power. Once you have everything plugged in, run the following command to ensure that the system recognizes your equipment:
|
||||
```
lsusb
```
|
||||
|
||||
You should see output similar (but not identical) to this:
|
||||
|
||||
![][11]
|
||||
|
||||
You see in the output that the system recognizes the telescope mount (a SkyWatcher EQM-35 Pro) as _Prolific Technology, Inc. PL2303 Serial Port_, the imaging camera (a Sony a6000) as _Sony Corp. ILCE-6000_, and the guide camera (an Orion Starshoot, aka QHY5) as _Van Ouijen Technische Informatica_. Now that you have made sure your system recognizes your equipment, it’s time to open your desktop planetarium and telescope controller, KStars!
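If you save a copy of the listing, the same identification can be scripted with grep. The sample lines below are illustrative stand-ins for the three devices named above (the bus, device, and ID numbers are made up for this sketch; on a live system you would pipe `lsusb` directly into grep):

```shell
# Write a sample lsusb listing, then filter for the vendor strings of the
# mount, imaging camera, and guide camera described in the article.
cat <<'EOF' > /tmp/lsusb-sample.txt
Bus 001 Device 004: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
Bus 001 Device 005: ID 054c:094e Sony Corp. ILCE-6000
Bus 001 Device 006: ID 1618:0920 Van Ouijen Technische Informatica
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
EOF
grep -iE 'prolific|sony corp|ouijen' /tmp/lsusb-sample.txt
```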
|
||||
|
||||
### Setting up KStars
|
||||
|
||||
It’s time to open [KStars][12], which is a desktop planetarium and also includes the Ekos telescope control software. The first time you open KStars, you will see the KStars Startup Wizard.
|
||||
|
||||
![][13]
|
||||
|
||||
Follow the prompts to choose your home location (where you will be imaging from) and _Download Extra Data…_
|
||||
|
||||
![Setting your location][14]
|
||||
|
||||
![“Download Extra Data”][15]
|
||||
|
||||
![Choosing which catalogs to download][16]
|
||||
|
||||
This will allow you to install additional star, nebula, and galaxy catalogs. You don’t need them, but they don’t take up too much space and add to the experience of using KStars. Once you’ve completed this, hit _Done_ in the bottom right corner to continue.
|
||||
|
||||
### Getting familiar with KStars
|
||||
|
||||
Now is a good time to play around with the KStars interface. You are greeted with a spherical image with a coordinate plane and stars in the sky.
|
||||
|
||||
![][17]
|
||||
|
||||
This is the desktop planetarium, which allows you to view the placement of objects in the night sky. Double-clicking an object selects it, and right-clicking an object gives you options like _Center & Track_, which will follow the object in the planetarium, compensating for [sidereal time][18]. _Show DSS Image_ shows a real [digitized sky survey][19] image of the selected object.
|
||||
|
||||
![][20]
|
||||
|
||||
Another essential feature is the _Set Time_ option in the toolbar. Clicking this will allow you to input a future (or past) time and then simulate the night sky as if that were the current date.
|
||||
|
||||
![The Set Time button][21]
|
||||
|
||||
### Configuring capture equipment with Ekos
|
||||
|
||||
You’re familiar with the KStars layout and some basic functions, so it’s time to move on to configuring your equipment using the [Ekos][22] observatory controller and automation tool. To open Ekos, click the observatory button in the toolbar or go to _Tools_ > _Ekos_.
|
||||
|
||||
![The Ekos button on the toolbar][23]
|
||||
|
||||
You will see another setup wizard: the _Ekos Profile Wizard_. Click _Next_ to start the wizard.
|
||||
|
||||
![][24]
|
||||
|
||||
In this tutorial, you have all of your equipment connected directly to your computer. A future article will cover using an INDI server installed on a remote computer to control your equipment, allowing you to connect over a network rather than having to be in the same physical space as your gear. For now, select _Equipment is attached to this device_.
|
||||
|
||||
![][25]
|
||||
|
||||
You are now asked to name your equipment profile. I usually name mine something like “Local Gear” to differentiate it from profiles for remote gear, but name your profile whatever you wish. Leave the _Internal Guide_ button checked and don’t select any additional services. Now click the _Create Profile & Select Devices_ button.
|
||||
|
||||
![][26]
|
||||
|
||||
This next screen is where you select the driver to use for each individual piece of equipment. This part is specific to your setup, depending on what gear you use. For this tutorial, I will select the drivers for my setup.
|
||||
|
||||
My mount, a [SkyWatcher EQM-35 Pro][27], uses the _EQMod Mount_ driver under _SkyWatcher_ in the menu (this driver is also compatible with all SkyWatcher equatorial mounts, including the [EQ6-R Pro][28] and the [EQ8-R Pro][29]). For my Sony a6000 imaging camera, I choose _Sony DSLR_ under _DSLRs_ in the CCD category. Under _Guider_, I choose _QHY CCD_ under _QHY_ for my Orion Starshoot (and any QHY5 series camera). The last driver to select is under the Aux 1 category: choose _Astrometry_ from the drop-down window. This enables the Astrometry plate solver within Ekos, which lets the telescope automatically figure out where in the night sky it is pointed, saving the time and hassle of a one-, two-, or three-star calibration after setting up the mount.
|
||||
|
||||
You have selected your drivers. Now it’s time to configure your telescope. Add new telescope profiles by clicking the + button in the lower right. This is essential for computing field-of-view measurements, so you can tell what your images will look like when you open the shutter. Once you click the + button, you are presented with a form where you can enter the specifications of your telescope and guide scope. For my imaging telescope, I enter Celestron into the _Vendor_ field and SS-80 into the _Model_ field, leave the _Driver_ field as None and the _Type_ field as Refractor, and set _Aperture_ to 80mm and _Focal Length_ to 400mm.
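These specifications are what field-of-view math is built on. As a quick sketch, the angular field of view along one sensor axis is 2·atan(sensor_size / (2·focal_length)); with the 400mm focal length above and a sensor width of roughly 23.5mm (an assumed value for an APS-C sensor like the a6000's, not from the article), an awk one-liner estimates it:

```shell
# FOV (degrees) = 2 * atan(sensor_width / (2 * focal_length)),
# with sensor_width and focal_length in the same units (mm here).
awk -v s=23.5 -v f=400 'BEGIN {
  pi  = atan2(0, -1)                    # pi
  fov = 2 * atan2(s / 2, f) * 180 / pi  # radians -> degrees
  printf "%.2f\n", fov                  # roughly 3.4 degrees on the long axis
}'
```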
|
||||
|
||||
![][30]
|
||||
|
||||
After you enter the data, hit the _Save_ button. You will see the data you just entered appear in the left window with an index number of 1 next to it. Now you can go about entering the specs for your guide scope following the steps above. Once you hit save here, the guide scope will also appear in the left window with an index number of 2. Once all of your scopes are entered, close this window. Now select your _Primary_ and _Guide_ telescopes from the drop-down window.
|
||||
|
||||
![][31]
|
||||
|
||||
After all that work, everything should be correctly configured! Click the _Close_ button and complete the final bit of setup.
|
||||
|
||||
### Starting your capture equipment
|
||||
|
||||
This last step before you can start taking images should be easy enough. Click the _Play_ button under Start & Stop Ekos to connect to your equipment.
|
||||
|
||||
![][32]
|
||||
|
||||
You will be greeted with a screen that looks similar to this:
|
||||
|
||||
![][33]
|
||||
|
||||
When you click on the tabs at the top of the screen, they should all show a green dot next to _Connection_, indicating that they are connected to your system. On my setup, the baud rate for my mount (the EQMod Mount tab) is set incorrectly, and so the mount is not connected.
|
||||
|
||||
![][34]
|
||||
|
||||
This is an easy fix; click on the _EQMod Mount_ tab, then the _Connection_ sub-tab, and then change the baud rate from 9600 to 115200. Now is a good time to ensure the serial port under _Ports_ is the correct serial port for your mount. You can check which port the system has mounted your device on by running the command:
|
||||
```
ls /dev | grep USB
```
|
||||
|
||||
You should see _ttyUSB0_. If more than one USB-serial device is plugged in at a time, you will see more than one ttyUSB port, each with an incrementing number. To figure out which port is correct, unplug your mount and run the command again.
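That unplug-and-compare step can also be done with `comm` on two saved listings. The sketch below simulates the before/after files; on a real system you would generate them with `ls /dev/ttyUSB* > /tmp/before.txt` (mount plugged in) and `ls /dev/ttyUSB* > /tmp/after.txt` (mount unplugged):

```shell
# Simulated listings: two serial devices before, one after unplugging.
printf '%s\n' /dev/ttyUSB0 /dev/ttyUSB1 > /tmp/before.txt
printf '%s\n' /dev/ttyUSB0 > /tmp/after.txt

# comm -23 prints lines only in the first file: the port that disappeared,
# which is the mount. (comm requires sorted input; ls output already is.)
comm -23 /tmp/before.txt /tmp/after.txt
```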
|
||||
|
||||
Now click on the _Main Control_ sub-tab, click _Connect_ again, and wait for the mount to connect. It might take a few seconds; be patient and it should connect.
|
||||
|
||||
The last thing to do is set the sensor and pixel size parameters for your camera. Under the _Sony DSLR Alpha-A6000 (Control)_ tab, select the _Image Info_ sub-tab. This is where you can enter your sensor specifications; if you don’t know them, a quick internet search will turn up your sensor’s maximum resolution as well as its pixel pitch. Enter this data into the right-side boxes, then press the _Set_ button to load it into the left boxes and save it into memory. Hit the _Close_ button when you are done.
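The pixel pitch also determines your image scale. A common rule of thumb (not from the article) is arcseconds per pixel = 206.265 × pixel_pitch(µm) / focal_length(mm); using an assumed ~3.92µm pitch for the a6000 (23.5mm width / 6000 pixels) and the 400mm focal length from earlier:

```shell
# Image scale in arcseconds per pixel; pitch in micrometers, focal length in mm.
# 3.92 um is an estimated a6000 pitch -- verify the value for your own sensor.
awk -v pitch=3.92 -v f=400 'BEGIN { printf "%.2f\n", 206.265 * pitch / f }'
```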
|
||||
|
||||
![][35]
|
||||
|
||||
### Conclusion
|
||||
|
||||
Your equipment is ready to use. In the next article, you will learn how to capture and process the images.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/astrophotography-with-fedora-astronomy-lab-setting-up/
|
||||
|
||||
作者:[Geoffrey Marr][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/coremodule/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/astrophotography-setup-2-816x345.jpg
|
||||
[2]: https://fedoramagazine.org/wp-content/uploads/2020/11/IMG_4151-768x1024.jpg
|
||||
[3]: https://labs.fedoraproject.org/en/astronomy/
|
||||
[4]: https://labs.fedoraproject.org/astronomy/download/index.html
|
||||
[5]: https://github.com/FedoraQt/MediaWriter
|
||||
[6]: https://docs.fedoraproject.org/en-US/fedora/f33/install-guide/install/Preparing_for_Installation/#_fedora_media_writer
|
||||
[7]: https://docs.fedoraproject.org/en-US/fedora/f33/install-guide/install/Booting_the_Installation/
|
||||
[8]: https://docs.fedoraproject.org/en-US/fedora/f33/install-guide/install/Installing_Using_Anaconda/#sect-installation-graphical-mode
|
||||
[9]: https://www.qhyccd.com/html/prepub/log_en.html#!log_en.md
|
||||
[10]: https://www.indilib.org/download/fedora/category/8-fedora.html
|
||||
[11]: https://fedoramagazine.org/wp-content/uploads/2020/11/lsusb_output_rpi.png
|
||||
[12]: https://edu.kde.org/kstars/
|
||||
[13]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_setuo_wizard-2.png
|
||||
[14]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_location_select-2.png
|
||||
[15]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars-download-extra-data-1.png
|
||||
[16]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_install_extra_Data-1.png
|
||||
[17]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_planetarium-1024x549.png
|
||||
[18]: https://en.wikipedia.org/wiki/Sidereal_time
|
||||
[19]: https://en.wikipedia.org/wiki/Digitized_Sky_Survey
|
||||
[20]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_right_click_object-1024x576.png
|
||||
[21]: https://fedoramagazine.org/wp-content/uploads/2020/11/kstars_planetarium_clock_icon.png
|
||||
[22]: https://www.indilib.org/about/ekos.html
|
||||
[23]: https://fedoramagazine.org/wp-content/uploads/2020/11/open_ekos_icon.png
|
||||
[24]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos-profile-wizard.png
|
||||
[25]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_equipment_attached_to_this_device.png
|
||||
[26]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_wizard_local_gear.png
|
||||
[27]: https://www.skywatcherusa.com/products/eqm-35-mount
|
||||
[28]: https://www.skywatcherusa.com/products/eq6-r-pro
|
||||
[29]: https://www.skywatcherusa.com/collections/eq8-r-series-mounts/products/eq8-r-mount-with-pier-tripod
|
||||
[30]: https://fedoramagazine.org/wp-content/uploads/2020/11/setup_telescope_profiles.png
|
||||
[31]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_setup_aux_1_astrometry-1024x616.png
|
||||
[32]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_start_equip_connect.png
|
||||
[33]: https://fedoramagazine.org/wp-content/uploads/2020/11/ekos_startup_equipment.png
|
||||
[34]: https://fedoramagazine.org/wp-content/uploads/2020/11/set_baud_rate_to_115200.png
|
||||
[35]: https://fedoramagazine.org/wp-content/uploads/2020/11/set_camera_sensor_settings.png
|
@ -0,0 +1,190 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Integrate devices and add-ons into your home automation setup)
|
||||
[#]: via: (https://opensource.com/article/21/2/home-automation-addons)
|
||||
[#]: author: (Steve Ovens https://opensource.com/users/stratusss)
|
||||
|
||||
Integrate devices and add-ons into your home automation setup
|
||||
======
|
||||
Learn how to set up initial integrations and install add-ons in Home
|
||||
Assistant in the fifth article in this series.
|
||||
![Looking at a map][1]
|
||||
|
||||
In the four previous articles in this series about home automation, I have discussed [what Home Assistant is][2], why you may want [local control][3], some of the [communication protocols][4] for smart home components, and how to [install Home Assistant][5] in a virtual machine (VM) using libvirt. In this fifth article, I will talk about configuring some initial integrations and installing some add-ons.
|
||||
|
||||
### Set up initial integrations
|
||||
|
||||
It's time to start getting into some of the fun stuff. The whole reason Home Assistant (HA) exists is to pull together various "smart" devices from different manufacturers. To do so, you have to make Home Assistant aware of which devices it should coordinate. I'll demonstrate by adding a [Sonoff Zigbee Bridge][6].
|
||||
|
||||
I followed [DigiBlur's Sonoff Guide][7] to replace the stock firmware with the open source firmware [Tasmota][8] to decouple my sensors from the cloud. My [second article][3] in this series explains why you might wish to replace the stock firmware. (I won't go into the device's setup with either the stock or custom firmware, as that is outside of the scope of this tutorial.)
|
||||
|
||||
First, navigate to the **Configuration** menu on the left side of the HA interface, and make sure **Integrations** is selected:
|
||||
|
||||
![Home Assistant integration configuration][9]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
From there, click the **Add Integration** button in the bottom-right corner and search for Zigbee:
|
||||
|
||||
![Add Zigbee integration in Home Assistant][11]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
Enter the device manually. If the Zigbee Bridge was physically connected to the Home Assistant interface, you could select the device path. For instance, I have a ZigBee CC2531 USB stick that I use for some Zigbee devices that do not communicate correctly with the Sonoff Bridge. It attaches directly to the Home Assistant host and shows up as a Serial Device. See my [third article][12] for details on wireless standards. However, in this tutorial, we will continue to configure and use the Sonoff Bridge.
|
||||
|
||||
![Enter device manually][13]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
The next step is to choose the radio type, using the information in the DigiBlur tutorial. In this case, the radio is an EZSP radio:
|
||||
|
||||
![Choose the radio type][14]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
Finally, you need to know the IP address of the Sonoff Bridge, the port it is listening on, and the speed of the connection. Once I found the Sonoff Bridge's MAC address, I used my DHCP server to ensure that the device always uses the same IP on my network. DigiBlur's guide provides the port and speed numbers.
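One quick way to look up a device's current IP by its MAC is the kernel's neighbor table. The sketch below filters sample `ip neigh`-style lines; the MAC prefix and addresses are placeholders, not values from the article (on a live system you would run `ip neigh | grep -i <mac-prefix>`):

```shell
# Sample neighbor-table lines in "ip neigh" format (illustrative values).
cat <<'EOF' > /tmp/neigh-sample.txt
192.168.1.23 dev eth0 lladdr d0:27:00:aa:bb:cc REACHABLE
192.168.1.50 dev eth0 lladdr 3c:22:fb:11:22:33 STALE
EOF
# Print the IP of the entry whose MAC matches the bridge's prefix.
awk '/d0:27:00/ {print $1}' /tmp/neigh-sample.txt
```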
|
||||
|
||||
![IP, port, and speed numbers][15]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
Once you've added the Bridge, you can begin pairing devices to it. Ensure that your devices are in pairing mode. The Bridge will eventually find your device(s).
|
||||
|
||||
![Device pairing][16]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
You can name the device(s) and assign an area (if you set them up).
|
||||
|
||||
![Name device][17]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
The areas displayed will vary based on whether or not you have any configured. Bedroom, Kitchen, and Living Room exist by default. As you add a device, HA will add a new Card to the **Integrations** tab. A Card is a user interface (UI) element that groups information related to a specific entity. The Zigbee card looks like this:
|
||||
|
||||
![Integration card][18]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
Later, I'll come back to using this integration. I'll also get into how to use this device in automation flows. But now, I will show you how to add functionality to Home Assistant to make your life easier.
|
||||
|
||||
### Add functionality with add-ons
|
||||
|
||||
Out of the box, HA has some pretty great features for home automation. If you are buying commercial off-the-shelf (COTS) products, there is a good chance you can accomplish everything you need without the help of add-ons. However, you may want to investigate some of the add-ons, especially if (like me) you want to make your own sensors.
|
||||
|
||||
There are all kinds of HA add-ons, ranging from Android Debug Bridge (ADB) tools to MQTT brokers to the Visual Studio Code editor. With each release, the number of add-ons grows. Some people make HA the center of their local system, encompassing DHCP, Plex, databases, and other useful programs. In fact, HA now ships with a built-in media browser for playing any media that you expose to it.
|
||||
|
||||
I won't go too crazy in this article; I'll show you some of the basics and let you decide how you want to proceed.
|
||||
|
||||
#### Install official add-ons
|
||||
|
||||
Some of the many HA add-ons are available for installation right from the web UI, and others can be installed from alternative sources, such as Git.
|
||||
|
||||
To see what's available, click on the **Supervisor** menu on the left panel. Near the top, you will see a tab called **Add-on store**.
|
||||
|
||||
![Home Assistant add-on store][19]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
Below are three of the more useful add-ons that I think should be standard for any HA deployment:
|
||||
|
||||
![Home Assistant official add-ons][20]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
The **File Editor** allows you to manage Home Assistant configuration files directly from your browser. I find this far more convenient for quick edits than obtaining a copy of the file, editing it, and pushing it back to HA. If you use add-ons like the Visual Studio Code editor, you can edit the same files.
|
||||
|
||||
The **Samba share** add-on is an excellent way to extract HA backups from the system or push configuration files or assets to the **web** directory. You should _never_ leave your backups sitting on the machine being backed up.
|
||||
|
||||
Finally, **Mosquitto broker** is my preferred method for managing an [MQTT][21] client. While you can install a broker that's external to the HA machine, I find low value in doing this. Since I am using MQTT just to communicate with my IoT devices, and HA is the primary method of coordinating that communication, there is a low risk in having these components vertically integrated. If HA is offline, the MQTT broker is almost useless in my arrangement.
|
||||
|
||||
#### Install community add-ons
|
||||
|
||||
Home Assistant has a prolific community and passionate developers. In fact, many of the "community" add-ons are developed and maintained by the HA developers themselves. For my needs, I install:
|
||||
|
||||
![Home Assistant community add-ons][22]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
**Grafana** (graphing program) and **InfluxDB** (a time-series database) are largely optional and relate to the ability to customize how you visualize the data HA collects. I like to have historical data handy and enjoy looking at the graphs from time to time. While not exactly HA-related, I have my pfSense firewall/router forward metrics to InfluxDB so that I can make some nice graphs over time.
|
||||
|
||||
![Home Assistant Grafana add-on][23]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
**ESPHome** is definitely an optional add-on that's warranted only if you plan on making your own sensors.
|
||||
|
||||
**NodeRED** is my preferred automation flow-handling solution. Although HA has some built-in automation, I find a visual flow editor is preferable for some of the logic I use in my system.
|
||||
|
||||
#### Configure add-ons
|
||||
|
||||
Some add-ons (such as File Editor) require no configuration to start them. However, most—such as Node-RED—require at least a small amount of configuration. Before you can start Node-RED, you will need to set a password:
|
||||
|
||||
![Home Assistant Node-RED add-on][24]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
**IMPORTANT:** Many people will abstract passwords through the `secrets.yaml` file. This does not provide any additional security other than not having passwords in the add-on configuration's YAML. See [the official documentation][25] for more information.
|
||||
|
||||
In addition to the password requirement, most of the add-ons that have a web UI default to having the `ssl: true` option set. A self-signed cert on my local LAN is not a requirement, so I usually set this to false. There is an add-on for Let's Encrypt, but dealing with certificates is outside the scope of this series.
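As a sketch of how these two settings come together, a LAN-only Node-RED configuration might look like the fragment below. The option names follow the add-on's documented schema, but treat them as illustrative rather than authoritative, and note that `nodered_secret` is a hypothetical key you would define yourself in `secrets.yaml`:

```yaml
# Illustrative Node-RED add-on options for a LAN-only setup.
# The password lives in secrets.yaml (key: nodered_secret) rather than inline;
# ssl is disabled because a self-signed cert adds nothing on a local network.
credential_secret: !secret nodered_secret
ssl: false
```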
|
||||
|
||||
After you have looked through the **Configuration** tab, save your changes, and enable Node-RED on the add-on's main screen.
|
||||
|
||||
![Home Assistant Node-RED add-on][26]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][10])
|
||||
|
||||
Don't forget to start the plugin.
|
||||
|
||||
Most add-ons follow a similar procedure, so you can use this approach to set up other add-ons.
|
||||
|
||||
### Wrapping up
|
||||
|
||||
Whew, that was a lot of screenshots! Fortunately, when you are doing the configuration, the UI makes these steps relatively painless.
|
||||
|
||||
At this point, your HA instance should be installed with some basic configurations and a few essential add-ons.
|
||||
|
||||
In the next article, I will discuss integrating custom Internet of Things (IoT) devices into Home Assistant. Don't worry; the fun is just beginning!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/home-automation-addons
|
||||
|
||||
作者:[Steve Ovens][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/stratusss
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
|
||||
[2]: https://opensource.com/article/20/11/home-assistant
|
||||
[3]: https://opensource.com/article/20/11/cloud-vs-local-home-automation
|
||||
[4]: https://opensource.com/article/20/11/home-automation-part-3
|
||||
[5]: https://opensource.com/article/20/12/home-assistant
|
||||
[6]: https://sonoff.tech/product/smart-home-security/zbbridge
|
||||
[7]: https://www.digiblur.com/2020/07/how-to-use-sonoff-zigbee-bridge-with.html
|
||||
[8]: https://tasmota.github.io/docs/
|
||||
[9]: https://opensource.com/sites/default/files/uploads/ha-setup20-configuration-integration.png (Home Assistant integration configuration)
|
||||
[10]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[11]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee.png (Add Zigbee integration in Home Assistant)
|
||||
[12]: https://opensource.com/article/20/11/wireless-protocol-home-automation
|
||||
[13]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-2.png (Enter device manually)
|
||||
[14]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-3.png (Choose the radio type)
|
||||
[15]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-4.png (IP, port, and speed numbers)
|
||||
[16]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-5.png (Device pairing)
|
||||
[17]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-6.png (Name device)
|
||||
[18]: https://opensource.com/sites/default/files/uploads/ha-setup21-int-zigbee-7_0.png (Integration card)
|
||||
[19]: https://opensource.com/sites/default/files/uploads/ha-setup7-addons.png (Home Assistant add-on store)
|
||||
[20]: https://opensource.com/sites/default/files/uploads/ha-setup8-official-addons.png (Home Assistant official add-ons)
|
||||
[21]: https://en.wikipedia.org/wiki/MQTT
|
||||
[22]: https://opensource.com/sites/default/files/uploads/ha-setup9-community-addons.png (Home Assistant community add-ons)
|
||||
[23]: https://opensource.com/sites/default/files/uploads/ha-setup9-community-grafana-pfsense.png (Home Assistant Grafana add-on)
|
||||
[24]: https://opensource.com/sites/default/files/uploads/ha-setup27-nodered2.png (Home Assistant Node-RED add-on)
|
||||
[25]: https://www.home-assistant.io/docs/configuration/secrets/
|
||||
[26]: https://opensource.com/sites/default/files/uploads/ha-setup26-nodered1.png (Home Assistant Node-RED add-on)
|
98
sources/tech/20210207 3 ways to play video games on Linux.md
Normal file
98
sources/tech/20210207 3 ways to play video games on Linux.md
Normal file
@ -0,0 +1,98 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (3 ways to play video games on Linux)
|
||||
[#]: via: (https://opensource.com/article/21/2/linux-gaming)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
3 ways to play video games on Linux
|
||||
======
|
||||
If you're ready to put down the popcorn and experience games from all
|
||||
angles, start gaming on Linux.
|
||||
![Gaming with penguin pawns][1]
|
||||
|
||||
In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today, I'll start with gaming.
|
||||
|
||||
I used to think a "gamer" was a very specific kind of creature, carefully cataloged and classified by scientists after years of study and testing. I never classified myself as a gamer because most of the games I played were either on a tabletop (board games and pen-and-paper roleplaying games), NetHack, or Tetris. Now that games are available on everything from mobile devices, consoles, computers, and televisions, it feels like it's a good time to acknowledge that "gamers" come in all different shapes and sizes. If you want to call yourself a gamer, you can! There's no qualification exam. You don't have to know the Konami Code by heart (or even what that reference means); you don't have to buy and play "triple-A" games. If you enjoy a game from time to time, you can rightfully call yourself a gamer. And if you want to be a gamer, there's never been a better time to use Linux.
|
||||
|
||||
### Welcome to the underground
|
||||
|
||||
Peel back the glossy billboard ads, and underneath, you're sure to find a thriving gaming underground. It's a movement that began with the nascent gaming market before anyone believed money could be made off software that wasn't either a spreadsheet or typing tutor. Indie games have carved out a place in pop culture (believe it or not, [Minecraft, while not open source][2], started out as an indie game) in several ways, proving that in the eyes of players, gameplay comes before production value.
|
||||
|
||||
There's a lot of cross-over in the indie and open source developer space. There's nothing quite like kicking back with your Linux laptop and browsing [itch.io][3] or your distribution's software repository for a little-known but precious gem of an open source game.
There are all kinds of open source games available, including plenty of [first person shooters][4], puzzle games like [Nodulus][5], systems management games like [OpenTTD][6], racing games like [Jethook][7], tense escape campaigns like [Sauerbraten][8], and too many more to mention (with more arriving each year, thanks to great initiatives like [Open Jam][9]).

![Jethook game screenshot][10]

Jethook

Overall, the experience of delving into the world of open source games is different than the immediate satisfaction of buying whatever a major game studio releases next. Games by the big studios provide plenty of visual and sonic stimuli, big-name actors, and upwards of 60 hours of gameplay. Independent and open source games aren't likely to match that, but then again, major studios can't match the sense of discovery and personal connection you get when you find a game that you just know nobody else _has ever heard of_. And they can't hope to match the sense of urgency you get when you realize that everybody in the world really, really needs to hear about the great game you've just played.
Take some time to identify the kinds of games you enjoy the most, and then have a browse through your distribution's software repository, [Flathub][11], and open game jams. See what you can uncover and, if you like the game enough, help to promote it!

### Proton and WINE

Gaming on Linux doesn't stop with open source, but it is enabled by it. When Valve Software famously brought Linux back into the gaming market a few years ago by releasing their Steam client for Linux, the hope was that it would compel game studios to write code native to Linux systems. Some did, but Valve failed to push Linux as the primary platform even on their own Valve-branded gaming computers, and it seems that most studios have reverted to their old ways of Windows-only games.
Interestingly, though, the end result has produced more open source code than probably intended. Valve's solution for Linux compatibility has been to create the [Proton][12] project, a compatibility layer to translate Windows games to Linux. At its core, Proton uses [WINE (Wine Is Not an Emulator)][13], the too-good-to-be-true reimplementation of major Windows libraries as open source.
The game market's spoils have turned out to be a treasure trove for the open source world, and today, most games from major studios can be run on Linux as if they were native.
Of course, if you're the type of gamer who has to have the latest title on the day of release, you can certainly expect unpleasant surprises. That's hardly unique to Linux, though, because few major games are released without bugs requiring large patches a week later. Those bugs can be even worse when a game runs on Proton and WINE, so Linux gamers often benefit from refraining from early adoption. The trade-off may be worth it, though. I've played a few games that ran perfectly on Proton, only to discover later from angry forum posts that they were apparently riddled with fatal errors when played on the latest version of Windows. In short, games from major studios aren't perfect, and you can expect similar-but-different problems when playing them on Linux as you would on Windows.

### Flatpak

One of the most exciting developments of recent Linux history is [Flatpak][14], a cross between local containers and packaging. It's got nothing to do with gaming (or does it?), but it enables Linux applications to be distributed universally to essentially any Linux distribution. This applies to gaming because games often rely on lots of fringe technologies, and it can be pretty demanding on distribution maintainers to keep up with all the latest versions required by any given game.
Flatpak abstracts that away from the distribution by establishing a common Flatpak-specific layer for application libraries. Distributors of flatpaks know that if a library isn't in a Flatpak SDK, then it must be included in the flatpak. It's simple and straightforward.
Thanks to Flatpak, the Steam client runs on something obvious like Fedora and on distributions not traditionally geared toward the gaming market, like [RHEL][15] and Slackware!
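If you want to try that yourself, it only takes two commands on any Flatpak-enabled distribution. This is a sketch: the app ID is Flathub's published one, but the commands below are built as strings and echoed as a dry run, so remove the echoes to actually run them:

```shell
# Sketch: install the Steam client from Flathub on any Flatpak-enabled distribution.
# Printed as a dry run; drop the echoes to execute for real.
add_flathub='flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo'
get_steam='flatpak install -y flathub com.valvesoftware.Steam'
echo "$add_flathub"
echo "$get_steam"
```

The first command is a no-op if Flathub is already configured, so the pair is safe to run repeatedly.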
### Lutris
If you're not eager to sign up on Steam, though, there's my preferred gaming client, [Lutris][16]. On the surface, Lutris is a simple game launcher for your system, a place you can go when you know you want to play a game but just can't decide what to launch yet. With Lutris, you can add [all the games you have on your system][17] to create your own gaming library, and then launch and play them right from the Lutris interface. Better still, Lutris contributors (like me!) regularly publish installer scripts to make it easy for you to install games you own. It's not always necessary, but it can be a nice shortcut to bypass some tedious configuration.
Lutris can also enlist the help of _runners_, or subsystems that run games that wouldn't normally launch straight from your application menu. For instance, if you want to play console games like the open source [Warcraft Tower Defense][18], you must run an emulator, and Lutris can handle that for you (provided you have the emulator installed). Additionally, should you have a GOG.com (Good Old Games) account, Lutris can access it and import games from your library.
There's no easier way to manage your games.
### Play games
Linux gaming is a fulfilling and empowering experience. I used to avoid computer gaming because I didn't feel I had much of a choice. It seemed that there were always expensive games being released, which inevitably got extreme reactions from happy and unhappy gamers alike, and then the focus shifted quickly to the next big thing. On the other hand, open source gaming has introduced me to the _people_ of the gaming world. I've met other players and developers, I've met artists and musicians, fans and promoters, and I've played an assortment of games that I never even realized existed. Some of them were barely long enough to distract me for just one afternoon, while others have provided me hours and hours of obsessive gameplay, modding, level design, and fun.
If you're ready to put down the popcorn and experience games from all angles, start gaming on Linux.
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/linux-gaming

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gaming_grid_penguin.png?itok=7Fv83mHR (Gaming with penguin pawns)
[2]: https://opensource.com/alternatives/minecraft
[3]: https://itch.io/jam/open-jam-2020
[4]: https://opensource.com/article/20/5/open-source-fps-games
[5]: https://hyperparticle.itch.io/nodulus
[6]: https://www.openttd.org/
[7]: https://rcorre.itch.io/jethook
[8]: http://sauerbraten.org/
[9]: https://opensource.com/article/18/9/open-jam-announcement
[10]: https://opensource.com/sites/default/files/game_0.png
[11]: http://flathub.org
[12]: https://github.com/ValveSoftware/Proton
[13]: http://winehq.org
[14]: https://opensource.com/business/16/8/flatpak
[15]: https://www.redhat.com/en/enterprise-linux-8
[16]: http://lutris.net
[17]: https://opensource.com/article/18/10/lutris-open-gaming-platform
[18]: https://ndswtd.wordpress.com/download
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 open source tools that make Linux the ideal workstation)
[#]: via: (https://opensource.com/article/21/2/linux-workday)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

3 open source tools that make Linux the ideal workstation
======

Linux has everything you think you need and more for you to have a productive workday.

![Person using a laptop][1]

In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today, I'll share with you why Linux is a great choice for your workday.
Everyone wants to be productive during the workday. If your workday generally involves working on documents, presentations, and spreadsheets, then you might be accustomed to a specific routine. The problem is that the _usual routine_ is usually dictated by one or two specific applications, whether it's a certain office suite or a desktop OS. Of course, just because something's a habit doesn't mean it's ideal, and yet it tends to persist unquestioned, even to the point of influencing the very structure of how a business is run.

### Working smarter

Many office applications these days run in the cloud, so you can work with the same constraints on Linux if you really want to. However, because many of the typical big-name office applications aren't cultural expectations on Linux, you might find yourself inspired to explore other options. As anyone eager to get out of their "comfort zone" knows, this kind of subtle disruption can be surprisingly useful. All too often, you don't know what you're doing inefficiently because you haven't actually tried doing things differently. Force yourself to explore other options, and you never know what you'll find. You don't even have to know exactly what you're looking for.

### LibreOffice

One of the most obvious open source office stalwarts on Linux (or any other platform) is [LibreOffice][2]. It features several components, including a word processor, presentation software, a spreadsheet, relational database interface, vector drawing, and more. It can import many document formats from other popular office applications, so transitioning to LibreOffice from another tool is usually easy.
There's more to LibreOffice than just being a great office suite, however. LibreOffice has macro support, so resourceful users can automate repetitive tasks. It also features terminal commands so you can perform many tasks without ever launching the LibreOffice interface.
Imagine, for instance, opening 21 documents, navigating to the **File** menu, to the **Export** or **Print** menu item, and exporting the file to PDF or EPUB. That's over 84 clicks, at the very least, and probably an hour of work. Compare that to opening a folder of documents and converting all of them to PDF or EPUB with just one swift command or menu action. The conversion would run in the background while you work on other things. You'd be finished in a quarter of the time, possibly less.

```
$ libreoffice --headless --convert-to epub *.docx
```

It's the little improvements that Linux encourages, not explicitly but implicitly, through its toolset and the ease with which you can customize your environment and workflow.

### Abiword and Gnumeric

Sometimes, a big office suite is exactly what you _don't_ need. If you prefer to keep your office work simple, you might do better with a lightweight and task-specific application. For instance, I mostly write articles in a text editor because I know all styles are discarded during conversion to HTML. But there are times when a word processor is useful, either to open a document someone has sent to me or because I want a quick and easy way to generate some nicely styled text.
[Abiword][3] is a simple word processor with basic support for popular document formats and all the essential features you'd expect from a word processor. It isn't meant as a full office suite, and that's its best feature. While there's no such thing as too many options, there definitely is such a thing as information overload, and that's exactly what a full office suite or word processor is sometimes guilty of. If you're looking to avoid that, then use something simple instead.
Similarly, the [Gnumeric][4] project provides a simple spreadsheet application. Gnumeric avoids any features that aren't strictly necessary for a spreadsheet, so you still get a robust formula syntax, plenty of functions, and all the options you need for styling and manipulating cells. I don't do much with spreadsheets, so I find myself quite happy with Gnumeric on the rare occasions I need to review or process data in a ledger.

### Pandoc

It's possible to get even more minimal with specialized commands and document processors. The `pandoc` command specializes in document conversion. It's like the `libreoffice --headless` command, except with ten times the number of document formats to work with. You can even generate presentations with it! If part of your work is taking source text from one document and formatting it for several modes of delivery, then Pandoc is a necessity, and so you should [download our cheat sheet][5].
Broadly, Pandoc is representative of a completely different way of working. It gets you away from the confines of office applications. It separates you from trying to get your thoughts down into typed words and deciding what font those words ought to use, all at the same time. Working in plain text and then converting to all of your delivery targets afterward lets you work with any application you want, whether it's a notepad on your mobile device, a simple text editor on whatever computer you happen to be sitting in front of, or a text editor in the cloud.
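As a sketch of that fan-out (the filename is hypothetical, and pandoc must be installed), one small loop can target every delivery format at once. The pandoc invocations are echoed here as a dry run, so remove the echo to convert for real:

```shell
# Convert one plain-text source into several delivery formats with pandoc.
# Shown as a dry run (each command is echoed, not executed).
src="notes.md"
for fmt in epub docx pdf; do
    # -s produces a standalone document; -o names the output file
    echo pandoc -s "$src" -o "${src%.md}.$fmt"
done
```

The same pattern works for any list of targets pandoc supports, including slide formats.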

### Look for the alternatives

There are lots of unexpected alternatives available for Linux. You can find them by taking a step back from what you're doing, analyzing your work process, assessing your required results, and investigating new applications that claim to do just the things you rely upon.
Changing the tools you use, your workflow, and your daily routine can be disorienting, especially when you don't know exactly where it is you're looking to go. But the advantage to Linux is that you're afforded the opportunity to re-evaluate the assumptions you've subconsciously developed over years of computer usage. If you look hard enough for an answer, you'll eventually realize what the question was in the first place. And oftentimes, you'll end up appreciating what you learn.
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/linux-workday

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: http://libreoffice.org
[3]: https://www.abisource.com
[4]: http://www.gnumeric.org
[5]: https://opensource.com/article/20/5/pandoc-cheat-sheet
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora Aarch64 on the SolidRun HoneyComb LX2K)
[#]: via: (https://fedoramagazine.org/fedora-aarch64-on-the-solidrun-honeycomb-lx2k/)
[#]: author: (John Boero https://fedoramagazine.org/author/boeroboy/)

Fedora Aarch64 on the SolidRun HoneyComb LX2K
======

![][1]

Photo by [Tim Mossholder][2] on [Unsplash][3]

Almost a year has passed since the [HoneyComb][4] development kit was released by SolidRun. I remember reading about this Mini-ITX Arm workstation board being released and thinking “what a great idea.” Then I saw the price and realized this isn’t just another Raspberry Pi killer. Currently that price is $750 USD plus shipping and duty. Niche devices like the HoneyComb aren’t mass produced like the simpler Pi is, and they pack in quite a bit of high end tech. Eventually COVID lockdown boredom got the best of me and I put a build together. After adding a case and RAM, the build ended up costing about $1100 shipped to London. This is an account of my experiences and the current state of using Fedora on this fun bit of hardware.
First and foremost, the tech packed into this board is impressive. It’s not about to kill a Xeon workstation in raw performance but it’s going to wallop it in performance/watt efficiency. Essentially this is a powerful server in the energy footprint of a small laptop. It’s also a hybrid of compute and network functionality, combining rich network features on the carrier board with a modular daughter card sporting a 16-core A72 and two ECC-capable DDR4 SO-DIMM slots. The carrier board comes in a few editions, giving flexibility to swap or upgrade your RAM + CPU options. I purchased the edition pictured below with 16 cores, 32GB (non-ECC), 512GB NVMe, and 4x10Gbe. For an extra $250 you can add the 100Gbe option if you’re building a 5G deployment or an ISP for a small country (bottom right of board). Imagine this jacked into a 100Gb uplink port acting as proxy, TLS inspector, router, or storage for a large 10gb TOR switch.

![][5]

When I ordered it I didn’t fully understand the network co-processor included from NXP. NXP is the company that makes the unique [LX2160A][6] CPU/SOC for this board, as well as the configurable ports and offload engine that enable handling up to 150Gb/s of network traffic without the CPU breaking a sweat. Here is a list of options from NXP’s Layerscape user manual.

![Configure ports in switch, LAG, MUX mode, or straight NICs.][7]

I have a 10gb network in my home attic via a Ubiquiti ES-16-XG, so I was eager to see how much this board could push. I also have a QNAP connected via 10gb which rarely manages to saturate the line, so could this also be a NAS replacement? It turned out I needed to sort out drivers and get a stable install first. Since the board has been out for a year, I had some catching up to do. SolidRun keeps an active Discord on [Developer-Ecosystem][8], which was immensely helpful, as the install wasn’t as straightforward as previous blogs have mentioned. I’ve always been cursed. If you’ve ever seen Pure Luck, I’m bound to hit every hardware glitch.

![][9]

For starters, you can add a GPU and install graphically, or install via the USB console. I started with a spare GPU (Radeon Pro WX2100) intending to build a headless box, which in the end over-complicated things. If you need to swap parts or re-flash a BIOS via the microSD card, you’ll need to swap display, keyboard + mouse. Chaos. Much simpler just to plug into the micro USB console port and access it via /dev/ttyUSB0 for that picture-in-picture experience. It’s really great to have the open-ended PCIe3-x8 slot but I’ll keep it open for now. Note that the board does not support PCIe Atomics so some devices may have compatibility issues.
Now comes the fun part. The BIOS is not built-in here. You’ll need to [build][10] it from source to match your RAM speed and install it via microSDHC. At first this seems annoying, but then you realize that with a removable BIOS installer it’s pretty hard to brick this thing. Not bad. The good news is the latest UEFI builds have worked well for me. Just remember that every time you re-flash your BIOS you’ll need to set everything up again. This was enough to boot Fedora aarch64 from USB. The board offers 64GB of eMMC flash which you can install to if you like. I immediately benched it to find it reads about 165MB/s and writes 55MB/s, which is practical speed for embedded usage, but I’ll definitely be installing to NVMe instead. I had an older Samsung 950 Pro in my spares from a previous Linux box but I encountered major issues with it even with the widely documented kernel param workaround:

```
nvme_core.default_ps_max_latency_us=0
```

In the end I upgraded my main workstation so I could repurpose its existing Samsung EVO 960 for the HoneyComb, which worked much better.
After some fidgeting I was able to install Fedora but it became apparent that the integrated network ports still don’t work with the mainline kernel. The NXP tech is great but requires a custom kernel build and tooling. Some earlier blogs got around this with a USB->RJ45 Ethernet adapter which works fine. Hopefully network support will be mainlined soon, but for now I snagged a kernel SRPM from the helpful engineers on Discord. With the custom kernel the 1Gbe NIC worked fine, but it turns out the SFP+ ports need more configuration. They won’t be recognized as interfaces until you use NXP’s _restool_ utility to map ports to their usage. In this case just a runtime mapping of _dmap -> dni_ was required. This is NXP’s way of mapping a MAC to a network interface via IOCTL commands. The restool binary isn’t provided either and must be built from source. It then layers on management scripts which use cheeky $arg0 references for redirection to call the restool binary with complex arguments.
Since I was starting to accumulate quite a few custom packages it was apparent that a COPR repo was needed to simplify this for Fedora. If you’re not familiar with COPR I think it’s one of Fedora’s finest resources. This repo contains the uefi build (currently failing build), 5.10.5 kernel built with network support, and the restool binary with supporting scripts. I also added a oneshot systemd unit to enable the SFP+ ports on boot:

```
systemctl enable --now dpmac@7.service
systemctl enable --now dpmac@8.service
systemctl enable --now dpmac@9.service
systemctl enable --now dpmac@10.service
```

Now each SFP+ port will boot configured as eth1-4, with eth0 being the 1Gbe port. NetworkManager will struggle unless these are consistent, and if you change the service start order the eth devices will re-order. I actually put a sleep $@ in each activation so they are consistent and don’t have locking issues. Unfortunately it adds 10 seconds to boot time. This has been fixed in the latest kernel and won’t be an issue once mainlined.

![][15]

I’d love to explore the built-in LAG features but this still needs to be coded into the _restool_ options. I’ll save it for later. In the meantime I managed a single 10gb link as primary, and a 3×10 LACP Team for kicks. Eventually I changed to 4×10 LACP via copper SFP+ cables mounted in the attic.

### Energy Efficiency

Now with a stable environment it’s time to raise some hell. It’s really nice to see PWM support was recently added for the CPU fan, which sounds like a mini jet engine without it. Now the sound level is perfectly manageable and thermal control is automatic. Time to test drive with a power meter. Total power usage is consistently between 20-40 watts (usually in the low 20s) which is really impressive. I tried a few _tuned_ profiles which didn’t seem to have much effect on energy. If you add a power-hungry GPU or device that can obviously increase but for a dev server it’s perfect and well below the Z600 workstations I have next to it which consume 160-250 watts each when fired up.

### Remote Access

I’m an old soul so I still prefer KDE with Xorg and NX via X2go server. I can access SSH or a full GUI at native performance without a GPU. This lets me get a feel for performance, thermal stats, and also helps to evaluate the device as a workstation or potential VDI. The version of KDE shipped with the aarch64 server spin doesn’t seem to recognize some sensors but that seems to be because of KDE’s latest widget changes which I’d have to dig into.

![X2go KDE session over SSH][16]

Cockpit support is also outstanding out of the box. If SSH and X2go remote access aren’t your thing, Cockpit provides a great remote management platform with a growing list of plugins. Everything works great in my experience.

![Cockpit behaves as expected.][17]

All I needed to do now was shift into high gear with jumbo frames. MTU 1500 yields me an iperf of about 2-4Gbps, bottlenecked at CPU0. Ain’t nobody got time for that. Set MTU 9000 and suddenly it gets the full 10Gbps both ways with time to spare on the CPU. Again, it would be nice to use the hardware-assisted LAG since the device is supposed to handle up to 150Gbps duplex no sweat (with the 100Gbe QSFP option), which is nice given the Ubiquiti ES-16-XG tops out at 160Gbps full duplex (10gb/16 ports).
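The arithmetic behind that jump is mostly per-packet overhead. Treating the fixed costs as roughly 40 bytes of IP+TCP headers per segment and 38 bytes of Ethernet framing on the wire (assumed, rounded figures), a bigger MTU buys slightly better payload efficiency and, more importantly, roughly six times fewer packets per second for CPU0 to chew through:

```shell
# Rough payload efficiency and packet rate at 10 Gbit/s for MTU 1500 vs 9000.
# Assumed overheads: 40 bytes IP+TCP per segment, 38 bytes Ethernet framing per frame.
awk 'BEGIN {
  split("1500 9000", mtus, " ")
  for (i = 1; i <= 2; i++) {
    mtu = mtus[i]
    payload = mtu - 40          # TCP payload bytes per frame
    wire = mtu + 38             # total bytes on the wire per frame
    printf "MTU %4d: efficiency %.1f%%, %.0f packets/s at line rate\n",
           mtu, 100 * payload / wire, 10e9 / 8 / wire
  }
}'
```

The per-packet cost (interrupts, checksums, protocol processing) is what pins a single core, so cutting the packet rate matters far more than the modest efficiency gain.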

### Storage

As a storage solution this hardware provides great value in a small thermal window and energy saving footprint. I could accomplish similar performance with an old x86 box for cheap but the energy usage alone would eclipse any savings in short order. By comparison I’ve seen some consumer NAS devices offer 10Gbe and NVMe cache sharing an inadequate number of PCIe2 lanes and bottlenecked at the bus. This is fully customizable and since the energy footprint is similar to a small laptop a small UPS backup should allow full writeback cache mode for maximum performance. This would make a great oVirt NFS or iSCSI storage pool if needed. I would pair it with a nice NAS case or rack mount case with bays. Some vendors such as [Bamboo][18] are actually building server options around this platform as we speak.
The board has 4 SATA3 ports but if I were truly going to build a NAS with this I would probably add a RAID card that makes best use of the PCIe8x slot, which thankfully is open ended. Why some hardware vendors choose to include close-ended PCIe 8x,4x slots is beyond me. Future models will ship with a physical x16 slot but only 8x electrically. Some users on the SolidRun Discord talk about bifurcation and splitting out the 8 PCIe lanes which is an option as well. Note that some of those lanes are also reserved for NVMe, SATA, and network. The CEX7 form factor and interchangeable carrier board presents interesting possibilities later as the NXP LX2160A docs claim to support up to 24 lanes. For a dev board it’s perfectly fine as-is.

### Network Perf

For now I’ve managed to rig up a 4×10 LACP Team with NetworkManager for full load balancing. This same setup can be done with a QSFP+ breakout cable. KDE nm Network widget still doesn’t support Teams but I can set them up via nm-connection-editor or Cockpit. Automation could be achieved with _nmcli_ and _teamdctl_. An iperf3 test shows the connection maxing out at about 13Gbps to/from the 2×10 LACP team on my workstation. I know that iperf isn’t a true indication of real-world usage but it’s fun for benchmarks and tuning nonetheless. This did in fact require a lot of tuning and at this point I feel like I could fill a book just with iperf stats.
```
$ iperf3 -c honeycomb -P 4 --cport 5000 -R
Connecting to host honeycomb, port 5201
Reverse mode, remote host honeycomb is sending
[  5] local 192.168.2.10 port 5000 connected to 192.168.2.4 port 5201
[  7] local 192.168.2.10 port 5001 connected to 192.168.2.4 port 5201
[  9] local 192.168.2.10 port 5002 connected to 192.168.2.4 port 5201
[ 11] local 192.168.2.10 port 5003 connected to 192.168.2.4 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   1.00-2.00   sec   383 MBytes  3.21 Gbits/sec
[  7]   1.00-2.00   sec   382 MBytes  3.21 Gbits/sec
[  9]   1.00-2.00   sec   383 MBytes  3.21 Gbits/sec
[ 11]   1.00-2.00   sec   383 MBytes  3.21 Gbits/sec
[SUM]   1.00-2.00   sec  1.49 GBytes  12.8 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
(TRUNCATED)
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   2.00-3.00   sec   380 MBytes  3.18 Gbits/sec
[  7]   2.00-3.00   sec   380 MBytes  3.19 Gbits/sec
[  9]   2.00-3.00   sec   380 MBytes  3.18 Gbits/sec
[ 11]   2.00-3.00   sec   380 MBytes  3.19 Gbits/sec
[SUM]   2.00-3.00   sec  1.48 GBytes  12.7 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  3.67 GBytes  3.16 Gbits/sec    1    sender
[  5]   0.00-10.00  sec  3.67 GBytes  3.15 Gbits/sec         receiver
[  7]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec    7    sender
[  7]   0.00-10.00  sec  3.67 GBytes  3.15 Gbits/sec         receiver
[  9]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec   36    sender
[  9]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec         receiver
[ 11]   0.00-10.00  sec  3.69 GBytes  3.17 Gbits/sec    1    sender
[ 11]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec         receiver
[SUM]   0.00-10.00  sec  14.7 GBytes  12.6 Gbits/sec   45    sender
[SUM]   0.00-10.00  sec  14.7 GBytes  12.6 Gbits/sec         receiver

iperf Done
```

### Notes on iperf3

I struggled with LACP Team configuration for hours, having done this before with an HP cluster on the same switch. I’d heard stories about bonds being old news, with team support adding better load balancing to single TCP flows. That claim seems bogus, as you still can’t load balance a single flow with a team in my experience. Also, LACP claims to be fully automated and easier to set up than traditional load balanced trunks, but I find the opposite to be true. For all it claims to automate, you still need to have hashing algorithms configured correctly at the switches and host. There were a few quirks along the way: I once accidentally left a team in broadcast mode (not LACP), which registered duplicate packets on the iperf server and made it look like a single connection was getting double bandwidth. That mistake caused confusion as I tried to reproduce it with LACP.
Then I finally found the LACP hash settings in Ubiquiti’s new firmware GUI. It’s hidden behind a tiny pencil icon on each LAG. I managed to set my LAGs to hash on Src+Dest IP+port when they were defaulting to MAC/port. Still I was only seeing traffic on one slave of my 2×10 team even with parallel clients. Eventually I tried parallel clients with -V and it all made sense. By default iperf3 client ports are ephemeral but they follow an even sequence: 42174, 42176, 42178, 42180, etc… If your lb hash across a pair of sequential MACs includes src+dst port but those ports are always even, you’ll never hit the other interface with an odd MAC. How crazy is that for iperf to do? I tried looking at the source for iperf3 and I don’t even see how that could be happening. Instead if you specify a client port as well as parallel clients, they use a straight sequence: 50000, 50001, 50002, 50003, etc.. With odd+even numbers in client ports, I’m finally able to LB across all interfaces in all LAG groups. This setup would scale out well with more clients on the network.
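You can see the parity problem with a toy model. Real switches use vendor-specific hash functions, but any hash that folds in the source port preserves its parity in a simple two-link modulo scheme like the one below (the port numbers are from the runs described above):

```shell
# Toy 2-link balancer: link = (src_port + dst_port) % 2, iperf3 server on 5201.
# Real switch hashes differ, but source-port parity behaves the same way.
echo "default ephemeral ports (even steps):"
for sport in 42174 42176 42178 42180; do
    echo "  src $sport -> link $(( (sport + 5201) % 2 ))"
done
echo "--cport sequence (odd and even):"
for sport in 50000 50001 50002 50003; do
    echo "  src $sport -> link $(( (sport + 5201) % 2 ))"
done
```

With even-only source ports, every flow lands on the same link; the straight `--cport` sequence alternates between both.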
|
||||
|
||||
![Proper LACP load balancing.][19]
|
||||
|
||||
Everything could probably be tuned a bit better, but for now performance is excellent, and it puts my QNAP to shame. I’ll continue experimenting with the network co-processor to see if I can enable the native LAG support for even better performance. Across the network, I’d expect a practical peak of about 40 Gbps raw, which is great.
|
||||
|
||||
![][20]
|
||||
|
||||
### Virtualization
|
||||
|
||||
What about virt? One of the best parts about having 16 A72 cores is support for aarch64 VMs at full speed using KVM, which you can’t do on x86. I can use this single box to spin up a dozen or so VMs at a time for CI automation and testing, or just to test our latest HashiCorp aarch64 builds on COPR. QEMU on x86 can emulate aarch64 without KVM, but it crawls by comparison. I haven’t yet tried to add it to an oVirt cluster, but it’s really snappy and proves more cost effective than spinning up Arm VMs in a cloud. One of the use cases for this environment is NFV, and I think it fits perfectly, so long as you pair it with ECC RAM, which I skipped since I’m not running anything critical. If anybody wants to test drive a VM, DM me and I’ll try to get you some temporary access.
|
||||
|
||||
![Virtual Machines in Cockpit][21]
|
||||
|
||||
### Benchmarks
|
||||
|
||||
[Phoronix][22] has already done quite a few benchmarks on [OpenBenchmarking.org][23], but I wanted to rerun them with the latest versions on my own Fedora 33 build for consistency. I also wanted to compare them to my Xeons, which is not really a fair comparison: both use DDR4 at similar clock speeds, around 2 GHz, but the different architectures and caches obviously yield different results. The Xeons are also dual socket, which is a huge cooling advantage for single-threaded workloads; you can watch one process bounce to whichever socket is cooler. The Honeycomb doesn’t have this luxury and has a smaller fan, but its clock speed plays it safe and slow at 2 GHz, so I would bet the SoC has room to run faster if cooling were adjusted. I also haven’t played with the PWM settings to adjust the fan speed up. Benchmarks were performed using the tuned profile network-throughput.
|
||||
|
||||
Strangely, some single-core operations actually perform better on the Honeycomb than on my Xeons. I tried single-threaded zstd compression at the default level 3 on a few files and found it consistently faster on the Honeycomb. However, using the actual pts/compress-zstd benchmark with the multithreaded option turns the tables. The 16 cores still manage an impressive **2073** MB/s:
|
||||
```
Zstd Compression 1.4.5:
pts/compress-zstd-1.2.1 [Compression Level: 3]
Test 1 of 1
Estimated Trial Run Count: 3
Estimated Time To Completion: 9 Minutes [22:41 UTC]
Started Run 1 @ 22:33:02
Started Run 2 @ 22:33:53
Started Run 3 @ 22:34:37
Compression Level: 3:
2079.3
2067.5
2073.9
Average: 2073.57 MB/s
```
|
||||
|
||||
For an apples-to-oranges comparison, my 2×10-core Xeon E5-2660 v3 box does **2790** MB/s, so 2073 seems perfectly respectable for a potential workstation. Paired with a midrange GPU, this device would also make a great video transcoder or media server. Some users have asked about mining, but I wouldn’t use one of these for mining cryptocurrency. The lack of PCIe atomics means certain OpenCL and CUDA features might not be supported, and with only 8 PCIe lanes exposed, you’re fairly limited. That said, it could make a great mobile ML, VR, IoT, or vision development platform. The possibilities are wide open, as the whole package is very well balanced and flexible.
|
||||
|
||||
### Conclusion
|
||||
|
||||
I wasn’t organized enough this year to arrange a FOSDEM visit, but this is something I would have loved to talk about. I’m definitely glad I tried it out. Special thanks to Jon Nettleton and the folks on SolidRun’s Discord for the help and troubleshooting. The kit is powerful and potentially replaces a lot of energy waste in my home lab. It provides a great Arm platform for development, and it’s great to see how solid Fedora’s alternative-architecture support is. I got my Linux start on Gentoo back in the day, but Fedora has really upped its arch game. I’m glad I didn’t have to sit waiting for compilation on a proprietary platform. I look forward to the remaining patches being mainlined into the Fedora kernel, and I hope to see a few more generations use this package, especially as Apple goes all in on Arm. It will also be interesting to see what features emerge if Nvidia’s Arm acquisition goes through.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/fedora-aarch64-on-the-solidrun-honeycomb-lx2k/
|
||||
|
||||
作者:[John Boero][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/boeroboy/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/honeycomb-fed-aarch64-816x346.jpg
|
||||
[2]: https://unsplash.com/@timmossholder?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[3]: https://unsplash.com/s/photos/honeycombs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[4]: http://solid-run.com/arm-servers-networking-platforms/honeycomb-workstation/#overview
|
||||
[5]: https://www.solid-run.com/wp-content/uploads/2020/11/HoneyComb-layout-front.png
|
||||
[6]: https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/layerscape-processors/layerscape-lx2160a-processor:LX2160A
|
||||
[7]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/image-894x1024.png
|
||||
[8]: https://discord.com/channels/620838168794497044
|
||||
[9]: https://i.imgflip.com/11c7o.gif
|
||||
[10]: https://github.com/SolidRun/lx2160a_uefi
|
||||
[11]: mailto:dpmac@7.service
|
||||
[12]: mailto:dpmac@8.service
|
||||
[13]: mailto:dpmac@9.service
|
||||
[14]: mailto:dpmac@10.service
|
||||
[15]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/image-2-1024x403.png
|
||||
[16]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/Screenshot_20210202_112051-1024x713.jpg
|
||||
[17]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-2-1024x722.png
|
||||
[18]: https://www.bamboosystems.io/b1000n/
|
||||
[19]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-4-1024x245.png
|
||||
[20]: http://systems.cs.columbia.edu/files/kvm-arm-logo.png
|
||||
[21]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-1024x717.png
|
||||
[22]: https://www.phoronix.com/scan.php?page=news_item&px=SolidRun-ClearFog-ARM-ITX
|
||||
[23]: https://openbenchmarking.org/result/1905313-JONA-190527343&obr_sor=y&obr_rro=y&obr_hgv=ClearFog-ITX
|
@ -0,0 +1,290 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to set up custom sensors in Home Assistant)
|
||||
[#]: via: (https://opensource.com/article/21/2/home-assistant-custom-sensors)
|
||||
[#]: author: (Steve Ovens https://opensource.com/users/stratusss)
|
||||
|
||||
How to set up custom sensors in Home Assistant
|
||||
======
|
||||
Dive into the YAML files to set up custom sensors in the sixth article
|
||||
in this home automation series.
|
||||
![Computer screen with files or windows open][1]
|
||||
|
||||
In the last article in this series about home automation, I started digging into Home Assistant. I [set up a Zigbee integration][2] with a Sonoff Zigbee Bridge and installed a few add-ons, including Node-RED, File Editor, Mosquitto broker, and Samba. I wrapped up by walking through Node-RED's configuration, which I will use heavily later on in this series. The four articles before that one discussed [what Home Assistant is][3], why you may want [local control][4], some of the [communication protocols][5] for smart home components, and how to [install Home Assistant][6] in a virtual machine (VM) using libvirt.
|
||||
|
||||
In this sixth article, I'll walk through the YAML configuration files. This is largely unnecessary if you are just using the integrations supported in the user interface (UI). However, there are times, particularly if you are pulling in custom sensor data, where you have to get your hands dirty with the configuration files.
|
||||
|
||||
Let's dive in.
|
||||
|
||||
### Examine the configuration files
|
||||
|
||||
There are several potential configuration files you will want to investigate. Although everything I am about to show you _can_ be done in the main configuration.yaml file, it can help to split your configuration into dedicated files, especially with large installations.
|
||||
|
||||
Below I will walk through how I configure my system. For my custom sensors, I use the ESP8266 chipset, which is very maker-friendly. I primarily use [Tasmota][7] for my custom firmware, but I also have some components running [ESPHome][8]. Configuring firmware is outside the scope of this article. For now, I will assume you set up your devices with some custom firmware (or you wrote your own with the [Arduino IDE][9]).
|
||||
|
||||
#### The /config/configuration.yaml file
|
||||
|
||||
Configuration.yaml is the main file Home Assistant reads. For the following, use the File Editor you installed in the previous article. If you do not see File Editor in the left sidebar, enable it by going back into the **Supervisor** settings and clicking on **File Editor**. You should see a screen like this:
|
||||
|
||||
![Install File Editor][10]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
Make sure **Show in sidebar** is toggled on. I also always toggle on the **Watchdog** setting for any add-ons I use frequently.
|
||||
|
||||
Once that is completed, launch File Editor. There is a folder icon in the top-left header bar. This is the navigation icon. The `/config` folder is where the configuration files you are concerned with are stored. If you click on the folder icon, you will see a few important files:
|
||||
|
||||
![Configuration split files][12]
|
||||
|
||||
The following is a default configuration.yaml:
|
||||
|
||||
![Default Home Assistant configuration.yaml][13]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
The notation `script: !include scripts.yaml` indicates that Home Assistant should reference the contents of scripts.yaml anytime it needs the definition of a script object. You'll notice that each of these entries correlates to one of the files you saw when you clicked the folder icon.
|
||||
|
||||
I added three lines to my configuration.yaml:
|
||||
|
||||
|
||||
```
input_boolean: !include input_boolean.yaml
binary_sensor: !include binary_sensor.yaml
sensor: !include sensor.yaml
```
|
||||
|
||||
As a quick aside, I configured my MQTT settings (see Home Assistant's [MQTT documentation][14] for more details) in the configuration.yaml file:
|
||||
|
||||
|
||||
```
mqtt:
  discovery: true
  discovery_prefix: homeassistant
  broker: 192.168.11.11
  username: mqtt
  password: superpassword
```
|
||||
|
||||
If you make an edit, don't forget to click on the Disk icon to save your work.
|
||||
|
||||
![Save icon in Home Assistant config][15]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
#### The /config/binary_sensor.yaml file
|
||||
|
||||
After you name your file in configuration.yaml, you'll have to create it. In the File Editor, click on the folder icon again. There is a small icon of a piece of paper with a **+** sign in its center. Click on it to bring up this dialog:
|
||||
|
||||
![Create config file][16]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
I have three main types of [binary sensors][17]: door, motion, and power. A binary sensor has only two states: on or off. All my binary sensors send their data to MQTT. See my article on [cloud vs. local control][4] for more information about MQTT.
|
||||
|
||||
My binary_sensor.yaml file looks like this:
|
||||
|
||||
|
||||
```
- platform: mqtt
  state_topic: "BRMotion/state/PIR1"
  name: "BRMotion"
  qos: 1
  payload_on: "ON"
  payload_off: "OFF"
  device_class: motion

- platform: mqtt
  state_topic: "IRBlaster/state/PROJECTOR"
  name: "ProjectorStatus"
  qos: 1
  payload_on: "ON"
  payload_off: "OFF"
  device_class: power

- platform: mqtt
  state_topic: "MainHallway/state/DOOR"
  name: "FrontDoor"
  qos: 1
  payload_on: "open"
  payload_off: "closed"
  device_class: door
```
|
||||
|
||||
Take a look at the definitions. Since `platform` is self-explanatory, start with `state_topic`.
|
||||
|
||||
* `state_topic`, as the name implies, is the topic where the device's state is published. Anyone subscribed to the topic is notified any time the state changes. The path is completely arbitrary, so you can name it anything you like. I tend to use the convention `location/state/object`, as this makes sense for me: I want to be able to reference all devices in a location, and this layout is the easiest for me to remember. Grouping by device type is also a valid organizational layout.

* `name` is the string used to reference the device inside Home Assistant. It is normally referenced by `type.name`, as seen in this card in the Home Assistant [Lovelace][18] interface:

![Binary sensor card][19]

(Steve Ovens, [CC BY-SA 4.0][11])

* `qos`, short for quality of service, controls the delivery guarantee an MQTT client negotiates with the broker when posting to a topic; a value of 1 means each message is delivered at least once.

* `payload_on` and `payload_off` are determined by the firmware. They tell Home Assistant what text the device will send to indicate its current state.

* `device_class`: There are multiple possibilities for a device class. Refer to the [Home Assistant documentation][17] for more information and a description of each type available.
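As a rough sketch of the matching behavior (illustrative Python, not Home Assistant's actual implementation), a binary sensor just compares the raw payload text against those two strings:

```python
from typing import Optional

def binary_state(payload: str,
                 payload_on: str = "ON",
                 payload_off: str = "OFF") -> Optional[bool]:
    # Map a raw MQTT payload to a binary sensor state; a payload
    # matching neither string leaves the state unknown (None).
    if payload == payload_on:
        return True
    if payload == payload_off:
        return False
    return None

# The door sensor uses lowercase payloads, and matching is case-sensitive:
print(binary_state("open", payload_on="open", payload_off="closed"))  # True
print(binary_state("ON", payload_on="open", payload_off="closed"))    # None
```

This is why `payload_on`/`payload_off` must match the firmware's output exactly, including case.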
|
||||
|
||||
|
||||
|
||||
|
||||
#### The /config/sensor.yaml file
|
||||
|
||||
This file differs from binary_sensor.yaml in one very important way: the sensors within this configuration file can have vastly different data inside their payloads. Take a look at one of the trickier bits of sensor data: temperature.
|
||||
|
||||
Here is the definition for my DHT temperature sensor:
|
||||
|
||||
|
||||
```
- platform: mqtt
  state_topic: "Steve_Desk_Sensor/tele/SENSOR"
  name: "Steve Desk Temperature"
  value_template: '{{ value_json.DHT11.Temperature }}'

- platform: mqtt
  state_topic: "Steve_Desk_Sensor/tele/SENSOR"
  name: "Steve Desk Humidity"
  value_template: '{{ value_json.DHT11.Humidity }}'
```
|
||||
|
||||
You'll notice two things right from the start. First, there are two definitions for the same `state_topic`. This is because this sensor publishes three different statistics.
|
||||
|
||||
Second, there is a new definition of `value_template`. Most sensors, whether custom or not, send their data inside a JSON payload. The template tells Home Assistant where the important information is in the JSON file. The following shows the raw JSON coming from my homemade sensor. (I used the program `jq` to make the JSON more readable.)
|
||||
|
||||
|
||||
```
{
  "Time": "2020-12-23T16:59:11",
  "DHT11": {
    "Temperature": 24.8,
    "Humidity": 32.4,
    "DewPoint": 7.1
  },
  "BH1750": {
    "Illuminance": 24
  },
  "TempUnit": "C"
}
```
|
||||
|
||||
There are a few things to note here. First, as the sensor data is stored in a time-based data store, every reading has a `Time` entry. Second, there are two different sensors attached to this output. This is because I have both a DHT11 temperature sensor and a BH1750 light sensor attached to the same ESP8266 chip. Finally, my temperature is reported in Celsius.
|
||||
|
||||
Hopefully, the Home Assistant definitions will make a little more sense now. `value_json` is just a standard name given to any JSON object ingested by Home Assistant. The format of the `value_template` is `value_json.<component>.<data point>`.
|
||||
|
||||
For example, to retrieve the dewpoint:
|
||||
|
||||
|
||||
```
value_template: '{{ value_json.DHT11.DewPoint }}'
```
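To see what such a template actually extracts, here's a quick Python sketch (plain `json`, not Home Assistant code) that pulls the same fields out of the sample payload shown earlier; the name `value_json` mirrors the variable Home Assistant exposes in templates:

```python
import json

# Sample Tasmota payload, as shown above
raw = """
{
  "Time": "2020-12-23T16:59:11",
  "DHT11": {"Temperature": 24.8, "Humidity": 32.4, "DewPoint": 7.1},
  "BH1750": {"Illuminance": 24},
  "TempUnit": "C"
}
"""

value_json = json.loads(raw)  # the object value_json refers to in a template

# Equivalent of value_template: '{{ value_json.DHT11.Temperature }}'
temperature = value_json["DHT11"]["Temperature"]
# Equivalent of value_template: '{{ value_json.DHT11.DewPoint }}'
dewpoint = value_json["DHT11"]["DewPoint"]

print(temperature, dewpoint)  # 24.8 7.1
```

Each dotted segment of the template is just a key lookup into the parsed JSON.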
|
||||
|
||||
While you can dump this information to a file from within Home Assistant, I use Tasmota's `Console` to see the data it is publishing. (If you want me to do an article on Tasmota, please let me know in the comments below.)
|
||||
|
||||
As a side note, I also keep tabs on my local Home Assistant resource usage. To do so, I put this in my sensor.yaml file:
|
||||
|
||||
|
||||
```
- platform: systemmonitor
  resources:
    - type: disk_use_percent
      arg: /
    - type: memory_free
    - type: memory_use
    - type: processor_use
```
|
||||
|
||||
While this is technically not a sensor, I put it here, as I think of it as a data sensor. For more information, see the Home Assistant's [system monitoring][20] documentation.
|
||||
|
||||
#### The /config/input_boolean file
|
||||
|
||||
This last section is pretty easy to set up, and I use it for a wide variety of applications. An input boolean is used to track the status of something. It's either on or off, home or away, etc. I use these quite extensively in my automations.
|
||||
|
||||
My definitions are:
|
||||
|
||||
|
||||
```
steve_home:
  name: steve
steve_in_bed:
  name: 'steve in bed'
guest_home:

kitchen_override:
  name: kitchen
kitchen_fan_override:
  name: kitchen_fan
laundryroom_override:
  name: laundryroom
bathroom_override:
  name: bathroom
hallway_override:
  name: hallway
livingroom_override:
  name: livingroom
ensuite_bathroom_override:
  name: ensuite_bathroom
steve_desk_light_override:
  name: steve_desk_light
projector_led_override:
  name: projector_led

project_power_status:
  name: 'Projector Power Status'
tv_power_status:
  name: 'TV Power Status'
bed_time:
  name: "It's Bedtime"
```
|
||||
|
||||
I use some of these directly in the Lovelace UI. I create little badges that I put at the top of each of the pages I have in the UI:
|
||||
|
||||
![Home Assistant options in Lovelace UI][21]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
These can be used to determine whether I am home, if a guest is in my house, and so on. Clicking on one of these badges allows me to toggle the boolean, and this object can be read by automations to make decisions about how the “smart devices” react to a person's presence (if at all). I'll revisit the booleans in a future article when I examine Node-RED in more detail.
|
||||
|
||||
### Wrapping up
|
||||
|
||||
In this article, I looked at the YAML configuration files and added a few custom sensors into the mix. You are well on the way to getting some functioning automation with Home Assistant and Node-RED. In the next article, I'll dive into some basic Node-RED flows and introduce some basic automations.
|
||||
|
||||
Stick around; I've got plenty more to cover, and as always, leave a comment below if you would like me to examine something specific. If I can, I'll be sure to incorporate the answers to your questions into future articles.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/home-assistant-custom-sensors
|
||||
|
||||
作者:[Steve Ovens][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/stratusss
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
|
||||
[2]: https://opensource.com/article/21/1/home-automation-5-homeassistant-addons
|
||||
[3]: https://opensource.com/article/20/11/home-assistant
|
||||
[4]: https://opensource.com/article/20/11/cloud-vs-local-home-automation
|
||||
[5]: https://opensource.com/article/20/11/home-automation-part-3
|
||||
[6]: https://opensource.com/article/20/12/home-assistant
|
||||
[7]: https://tasmota.github.io/docs/
|
||||
[8]: https://esphome.io/
|
||||
[9]: https://create.arduino.cc/projecthub/Niv_the_anonymous/esp8266-beginner-tutorial-project-6414c8
|
||||
[10]: https://opensource.com/sites/default/files/uploads/ha-setup22-file-editor-settings.png (Install File Editor)
|
||||
[11]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[12]: https://opensource.com/sites/default/files/uploads/ha-setup29-configuration-split-files1.png (Configuration split files)
|
||||
[13]: https://opensource.com/sites/default/files/uploads/ha-setup28-configuration-yaml.png (Default Home Assistant configuration.yaml)
|
||||
[14]: https://www.home-assistant.io/docs/mqtt/broker
|
||||
[15]: https://opensource.com/sites/default/files/uploads/ha-setup23-configuration-yaml2.png (Save icon in Home Assistant config)
|
||||
[16]: https://opensource.com/sites/default/files/uploads/ha-setup24-new-config-file.png (Create config file)
|
||||
[17]: https://www.home-assistant.io/integrations/binary_sensor/
|
||||
[18]: https://www.home-assistant.io/lovelace/
|
||||
[19]: https://opensource.com/sites/default/files/uploads/ha-setup25-bindary_sensor_card.png (Binary sensor card)
|
||||
[20]: https://www.home-assistant.io/integrations/systemmonitor
|
||||
[21]: https://opensource.com/sites/default/files/uploads/ha-setup25-input-booleans.png (Home Assistant options in Lovelace UI)
|
@ -0,0 +1,83 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Why choose Plausible for an open source alternative to Google Analytics)
|
||||
[#]: via: (https://opensource.com/article/21/2/plausible)
|
||||
[#]: author: (Ben Rometsch https://opensource.com/users/flagsmith)
|
||||
|
||||
Why choose Plausible for an open source alternative to Google Analytics
|
||||
======
|
||||
Plausible is gaining attention and users as a viable, effective
|
||||
alternative to Google Analytics.
|
||||
![Analytics: Charts and Graphs][1]
|
||||
|
||||
Taking on the might of Google Analytics may seem like a big challenge. In fact, you could say it doesn't sound plausible… But that's exactly what [Plausible.io][2] has done with great success, signing up thousands of new users since 2018.
|
||||
|
||||
Plausible's co-founders Uku Taht and Marko Saric recently appeared on [The Craft of Open Source][3] podcast to talk about the project and how they:
|
||||
|
||||
* Created a viable alternative to Google Analytics
|
||||
* Gained so much momentum in less than two years
|
||||
* Achieved their goals by open sourcing the project
|
||||
|
||||
|
||||
|
||||
Read on for a summary of their conversation with podcast host and Flagsmith founder Ben Rometsch.
|
||||
|
||||
### How Plausible got started
|
||||
|
||||
In winter 2018, Uku started coding a project that he thought was desperately needed—a viable, effective alternative to Google Analytics—after becoming disillusioned with the direction Google products were heading and the fact that all other data solutions seemed to use Google as a "data-handling middleman."
|
||||
|
||||
Uku's first instinct was to focus on the analytics side of things using existing database solutions. Right away, he faced some challenges. The first attempt, using PostgreSQL, was technically naïve, as it became overwhelmed and inefficient pretty quickly. Therefore, his goal morphed into making an analytics product that can handle large quantities of data points with no discernable decline in performance. To cut a long story short, Uku succeeded, and Plausible can now ingest more than 80 million records per month.
|
||||
|
||||
The first version of Plausible was released in summer 2019. In March 2020, Marko came on board to head up the project's communications and marketing side. Since then, its popularity has grown with considerable momentum.
|
||||
|
||||
### Why open source?
|
||||
|
||||
Uku was keen to follow the "indie hacker" route of software development: create a product, put it out there, and see how it grows. Open source makes sense in this respect because you can quickly grow a community and gain popularity.
|
||||
|
||||
But Plausible didn't start out as open source. Uku was initially concerned about the software's sensitive code, such as the billing code, but he soon realized that it was of no use to people without the API token.
|
||||
|
||||
Now, Plausible is fully open source under [AGPL][4], which they chose instead of the MIT License. Uku explains that under an MIT License, anyone can do anything to the code without restriction. Under AGPL, if someone changes the code, they must open source their changes and contribute the code back to the community. This means that large corporations cannot take the original code, build from it, then reap all the rewards. They must share it, making for a more level playing field. For instance, if a company wanted to plug in their billing or login system, they would be legally obliged to publish the code.
|
||||
|
||||
During the podcast, Uku asked me about Flagsmith's license which is currently under a BSD 3-Clause license, which is highly permissive, but I am about to move some features behind a more restrictive license. So far, the Flagsmith community has been understanding of the change as they realize this will lead to more and better features.
|
||||
|
||||
### Plausible vs. Google Analytics
|
||||
|
||||
Uku says, in his opinion, the spirit of open source is that the code should be open for commercial use by anyone and shared with the community, but you can keep back a closed-source API module as a proprietary add-on. In this way, Plausible and other companies can cater to different use-cases by creating and selling bespoke API add-on licenses.
|
||||
|
||||
Marko is a developer by trade, but from the marketing side of things, he worked to get the project covered on sites such as Hacker News and Lobsters and built a Twitter presence to help generate momentum. The buzz created by this publicity also meant the project took off on GitHub, going from 500 to 4,300 stars. As traffic grew, Plausible appeared on GitHub's trending list, which helped its popularity snowball.
|
||||
|
||||
Marko also focused heavily on publishing and promoting blog posts. This strategy paid off, as four or five posts went viral within the first six months, and he used those spikes to amplify the marketing message and accelerate growth.
|
||||
|
||||
The biggest challenge in Plausible's growth was getting people to switch from Google Analytics. The project's main goal was to create a web analytics product that is useful, efficient, and accurate. It also needed to be compliant with regulations and offer a high degree of privacy for both the business and website visitors.
|
||||
|
||||
Plausible is now running on more than 8,000 websites. From talking to customers, Uku estimates that around 90% of them would have run Google Analytics.
|
||||
|
||||
Plausible runs on a standard software-as-a-service (SaaS) subscription model. To make things fairer, it charges per page view on a monthly basis, rather than charging per website. This can prove tricky with seasonal websites, say e-commerce sites that spike at the holidays or US election sites that spike once every four years. These can cause pricing problems under the monthly subscription model, but it generally works well for most sites.
|
||||
|
||||
### Check out the podcast
|
||||
|
||||
To discover more about how Uku and Marko grew the open source Plausible project at a phenomenal rate and made it into a commercial success, [listen to the podcast][3] and check out [other episodes][5] to learn more about "the ins-and-outs of the open source software community."
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/plausible
|
||||
|
||||
作者:[Ben Rometsch][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/flagsmith
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/analytics-graphs-charts.png?itok=sersoqbV (Analytics: Charts and Graphs)
|
||||
[2]: https://plausible.io/
|
||||
[3]: https://www.flagsmith.com/podcast/02-plausible
|
||||
[4]: https://www.gnu.org/licenses/agpl-3.0.en.html
|
||||
[5]: https://www.flagsmith.com/podcast
|
@ -0,0 +1,199 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (My open source disaster recovery strategy for the home office)
|
||||
[#]: via: (https://opensource.com/article/21/2/high-availability-home-office)
|
||||
[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
|
||||
|
||||
My open source disaster recovery strategy for the home office
|
||||
======
|
||||
In the remote work era, it's more important than ever to have a disaster
|
||||
recovery plan for your household infrastructure.
|
||||
![Person using a laptop][1]
|
||||
|
||||
I've worked from home for years, and with the COVID-19 crisis, millions more have joined me. Teachers, accountants, librarians, stockbrokers… you name it, these workers now operate full or part time from their homes. Even after the coronavirus crisis ends, many will continue working at home, at least part time. But what happens when the home worker's computer fails? Whether the device is a smartphone, tablet, laptop, or desktop—and whether the problem is hardware or software—the result might be missed workdays and lots of frustration.
|
||||
|
||||
This article explores how to ensure high-availability home computing. Open source software is key. It offers device independence so that home workers can easily move between primary and backup devices. Most importantly, it gives users control of their environment, which is the surest route to high availability. This simple high-availability strategy, based on open source, is easy to modify for your needs.
|
||||
|
||||
### Different strategies for different situations
|
||||
|
||||
I need to emphasize one point upfront: different job functions require different solutions. Some at-home workers can use smartphones or tablets, while others rely on laptops, and still others require high-powered desktop workstations. Some can tolerate an outage of hours or even days, while others must be available without interruption. Some use company-supplied devices, and others must provide their own. Lastly, some home workers store their data in their company's cloud, while others self-manage their data.
|
||||
|
||||
Obviously, no single high-availability strategy fits everyone. My strategy probably isn't "the answer" for you, but I hope it prompts you to think about the challenges involved (if you haven't already) and presents some ideas to help you prepare before disaster strikes.
|
||||
|
||||
### Defining high availability
|
||||
|
||||
Whatever computing device a home worker uses, high availability (HA) involves five interoperable components:
|
||||
|
||||
* Device hardware
|
||||
* System software
|
||||
* Communications capability
|
||||
* Applications
|
||||
* Data
The HA plan must encompass all five components to succeed. Missing any component causes HA failure.
|
||||
|
||||
For example, last night, I worked on a cloud-based spreadsheet. If my communications link had failed and I couldn't access my cloud data, that would stop my work on the project… even if I had all the other HA components available in a backup computer.
|
||||
|
||||
Of course, there are exceptions. Say last night's spreadsheet was stored on my local computer. If that device failed, I could have kept working as long as I had a backup computer with my data on it, even if I lacked internet access.
|
||||
|
||||
To succeed as a high-availability home worker, you must first identify the components you require for your work. Once you've done that, develop a plan to continue working even if one or more components fails.
|
||||
|
||||
#### Duplicate replacement
|
||||
|
||||
One approach is to create a _duplicate replacement_. Having the exact same hardware, software, communications, apps, and data available on a backup device guarantees that you can work if your primary fails. This approach is simple, though it might cost more to keep a complete backup on hand.
|
||||
|
||||
To economize, you might share computers with your family or flatmates. A _shared backup_ is always more cost-effective than a _dedicated backup_, so long as you have top priority on the shared computer when you need it.
|
||||
|
||||
#### Functional replacement
|
||||
|
||||
The alternative to duplicate replacement is a _functional replacement_. You substitute a working equivalent for the failed component. Say I'm working from my home laptop and connecting through home WiFi. My internet connection fails. Perhaps I can tether my computer to my phone and use the cell network instead. I achieve HA by replacing one technology with an equivalent.
|
||||
|
||||
#### Know your requirements
|
||||
|
||||
Beyond the five HA components, be sure to identify any special requirements you have. For example, if mobility is important, you might need to replace a broken laptop with another laptop, not a desktop.
|
||||
|
||||
HA means identifying all the functions you need, then ensuring your HA plan covers them all.
|
||||
|
||||
### Timing, planning, and testing
|
||||
|
||||
You must also define your time frame for recovery. Must you be able to continue your work immediately after a failure? Or do you have the luxury of some downtime during which you can react?
|
||||
|
||||
The longer your allowable downtime, the more options you have. For example, if you could miss work for several days, you could simply trot a broken device into a repair shop. No need for a backup.
|
||||
|
||||
In this article, by "high availability," I mean getting back to work in very short order after a failure, perhaps less than one hour. This typically requires that you have access to a backup device that is immediately available and ready to go. While there might be occasions when you can recover your primary device in a matter of minutes—for example, by working around a failure or by quickly replacing a defective piece of hardware or software—a backup computer is normally part of the HA plan.
|
||||
|
||||
HA requires planning and preparation. "Winging it" doesn't suffice; ensure your backup plan works by testing it beforehand.
|
||||
|
||||
For example, say your data resides in the cloud. That data is accessible from anywhere, from any device. That sounds ideal. But what if you forget that there's a small but vital bit of data stored locally on your failed computer? If you can't access that essential data, your HA plan fails. A dry run surfaces problems like this.
|
||||
|
||||
### Smartphones as backup
|
||||
|
||||
Most of us in software engineering and support use laptops and desktops at home. Smartphones and tablets are useful adjuncts, but they aren't at the core of what we do.
|
||||
|
||||
The main reasons are screen size and keyboard. For software work, you can't achieve the same level of productivity with a small screen and touchscreen keypad as you can with a large monitor and physical keyboard.
|
||||
|
||||
If you normally use a laptop or desktop and opt for a smartphone or tablet as your backup, test it out beforehand to make sure it suffices. Here's an example of the kind of subtlety that might otherwise trip you up. Most videoconferencing platforms run on both smartphones and laptops or desktops, but their mobile apps can differ in small but important ways. And even when the platform does offer an equivalent experience (the way [Jitsi][2] does, for instance), it can be awkward to share charts, slide decks, and documents, to use a chat feature, and so on, just due to the difference in mobile form factors compared to a big computer screen and a variety of input options.
|
||||
|
||||
Smartphones make convenient backup devices because nearly everyone has one. But if you designate yours as your functional replacement, then try using it for work one day to verify that it meets your needs.
|
||||
|
||||
### Data accessibility
|
||||
|
||||
Data access is vital when your primary device fails. Even if you back up your work data, a device failure may also leave you without credentials for VPN or SSH access, specialized software, or forms of data that aren't stored alongside your day-to-day documents and directories. When you design a backup scheme for yourself, make sure you include all important data and store encryption keys and other access information securely.
|
||||
|
||||
The best way to keep your work data secure is to use your own service. Running [Nextcloud][3] or [Sparkleshare][4] is easy, and hosting is cheap. Both are automated: files you place in a specially designated directory are synchronized with your server. It's not exactly building your own cloud, but it's a great way to leverage the cloud for your own services. You can make the backup process seamless with tools like [Syncthing, Bacula][5], or [rdiff-backup][6].
|
||||
|
||||
Cloud storage enables you to access data from any device at any location, but cloud storage will work only if you have a live communications path to it after a failure event. And not all cloud storage meets the privacy and security specifications for all projects. If your workplace has a cloud backup solution, spend some time learning about the cloud vendor's services and find out what level of availability it promises. Check its track record in achieving it. And be sure to devise an alternate way to access your cloud if your primary communications link fails.
|
||||
|
||||
### Local backups
|
||||
|
||||
If you store your data on a local device, you'll be responsible for backing it up and recovering it. In that case, back up your data to an alternate device, and verify that you can restore it within your acceptable time frame. This is your _time-to-recovery_.
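That restore verification is worth scripting so you can rerun it after every change to your backup routine. Here is a minimal sketch using only standard tools (`tar` and `cmp`); every path below is a temporary stand-in invented for the demo, not a recommendation:

```shell
# Sketch: back up a work directory to a tar archive, then restore it
# to a scratch location and verify the restored files match the originals.
set -e

WORK=$(mktemp -d)    # stand-in for your real data directory
BACKUP=$(mktemp -d)  # stand-in for your backup drive
RESTORE=$(mktemp -d) # scratch area for the restore test

printf 'quarterly figures\n' > "$WORK/report.txt"

# Back up...
tar -C "$WORK" -cf "$BACKUP/work.tar" .

# ...then prove the backup is usable by restoring and comparing.
tar -C "$RESTORE" -xf "$BACKUP/work.tar"
if cmp -s "$WORK/report.txt" "$RESTORE/report.txt"; then
    echo "restore verified"
else
    echo "restore FAILED" >&2
    exit 1
fi
```

Timing this script against your real data volume tells you your actual time-to-recovery rather than an optimistic guess.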
|
||||
|
||||
You'll also need to secure that data and meet any privacy requirements your employer specifies.
|
||||
|
||||
#### Acceptable loss
|
||||
|
||||
Consider how much data you can afford to lose in the event of an outage. For example, if you back up your data nightly, you could lose up to one day's work (everything completed since the previous nightly backup). This is your _backup data timeliness_.
|
||||
|
||||
Open source offers many free applications for local data backup and recovery. Generally, the same applications used for remote backups can also apply to local backup plans, so take a look at the [Advanced Rsync][7] or the [Syncthing tutorial][8] articles here on Opensource.com.
|
||||
|
||||
Many prefer a data strategy that combines both cloud and local storage. Store your data locally, and then use the cloud as a backup (rather than working on the cloud). Or do it the other way around (although automating the cloud to push backups to you is more difficult than automating your local machine to push backups to the cloud). Storing your data in two separate locations gives your data _geographical redundancy_, which is useful should either site become unavailable.
|
||||
|
||||
With a little forethought, you can devise a simple plan to access your data regardless of any outage.
|
||||
|
||||
### My high-availability strategy
|
||||
|
||||
As a practical example, I'll describe my own HA approach. My goals are a time to recovery of an hour or less and backup data timeliness within a day.
|
||||
|
||||
![High Availability Strategy][9]
|
||||
|
||||
(Howard Fosdick, [CC BY-SA 4.0][10])
|
||||
|
||||
#### Hardware
|
||||
|
||||
I use an Android smartphone for phone calls and audioconferences. I can access a backup phone from another family member if my primary fails.
|
||||
|
||||
Unfortunately, my phone's small size and touch keyboard mean I can't use it as my backup computer. Instead, I rely on a few generic desktop computers that have standard, interchangeable parts. You can easily maintain such hardware with this simple [free how-to guide][11]. You don't need any hardware experience.
|
||||
|
||||
Open source software makes my multibox strategy affordable. It runs so efficiently that even [10-year-old computers work fine][12] as backups for typical office work. Mine are dual-core desktops with 4GB of RAM and any disk that cleanly verifies. These are so inexpensive that you can often get them for free from recycling centers. (In my [charity work][13], I find that many people give them away as unsuitable for running current proprietary software, but they're actually in perfect working order given a flexible operating system like Linux.)
|
||||
|
||||
Another way to economize is to designate another family member's computer for your shared backups.
|
||||
|
||||
#### Systems software and apps
|
||||
|
||||
Running open source software on top of this generic hardware enables me to achieve several benefits. First, the flexibility of open source software enables me to address any possible software failure. For example, with simple operating system commands, I can copy, move, back up, and recover the operating system, applications, and data across partitions, disks, or computers. I don't have to worry about software constraints, vendor lock-in, proprietary backup file formats, licensing or activation restrictions, or extra fees.
|
||||
|
||||
Another open source benefit is that you control your operating system. If you don't have control over your own system, you could be subject to forced restarts, unexpected and unwanted updates, and forced upgrades. My relative has run into such problems more than once. Without his knowledge or consent, his computer suddenly launched a forced upgrade from Windows 7 to Windows 10, which cost him three days of lost income (and untold frustration). The lesson: Your vendor's agenda may not coincide with your own.
|
||||
|
||||
All operating systems have bugs. The difference is that open source software doesn't force you to eat them.
|
||||
|
||||
#### Data classification
|
||||
|
||||
I use very simple techniques to make my data highly available.
|
||||
|
||||
I can't use cloud services for my data due to privacy requirements. Instead, my data "master copy" resides on a USB-connected disk. I plug it into any of several computers. After every session, I back up any altered data on the computer I used.
|
||||
|
||||
Of course, this approach is only feasible if your backups run quickly. For most home workers, that's easy. All you have to do is segregate your data by size and how frequently you update it.
|
||||
|
||||
Isolate big files like photos, audio, and video into separate folders or partitions. Make sure you back up only the files that are new or modified, not older items that have already been backed up.
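One way to honor the "only new or modified files" rule with nothing but standard shell tools is `cp -u`, which copies a file only when the source is newer than the existing backup copy (or the copy is missing). A minimal sketch; all paths here are invented for the demo:

```shell
# Sketch: back up only new or modified files, skipping items that have
# already been backed up.
set -e

SRC=$(mktemp -d)    # stand-in for a project folder
DEST=$(mktemp -d)   # stand-in for the backup location

printf 'draft 1\n' > "$SRC/article.txt"
printf 'holiday\n' > "$SRC/photo.raw"
cp -Ru "$SRC/." "$DEST/"            # first backup: copies both files

sleep 1                             # ensure a visibly newer timestamp
printf 'draft 2\n' > "$SRC/article.txt"
cp -Ru "$SRC/." "$DEST/"            # second run: copies only article.txt

cat "$DEST/article.txt"             # prints: draft 2
```

The rsync and Syncthing tools mentioned below do the same job with more features (deletion handling, remote targets), but the principle is identical.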
|
||||
|
||||
Much of my work involves office suites. These generate small files, so I isolate each project in its own folder. For example, I stored the two dozen files I used to write this article in a single subdirectory. Backing it up is as simple as copying that folder.
|
||||
|
||||
Giving a little thought to data segregation and backing up only modified files ensures quick, easy backups for most home workers. My approach is simple; it works best if you only work on a couple of projects in a session. And I can tolerate losing up to a day's work. You can easily automate a more refined backup scheme for yourself.
|
||||
|
||||
For software development, I take an entirely different approach. I use software versioning, which transparently handles all software backup issues for me and coordinates with other developers. My HA planning in this area focuses just on ensuring I can access the online tool.
|
||||
|
||||
#### Communications
|
||||
|
||||
Like many home users, I communicate through both a cellphone network and the internet. If my internet goes down, I can use the cell network instead by tethering my laptop to my Android smartphone.
|
||||
|
||||
### Learning from failure
|
||||
|
||||
Using my strategy for 15 years, how have I fared? What failures have I experienced, and how did they turn out?
|
||||
|
||||
1. **Motherboard burnout:** One day, my computer wouldn't turn on. I simply moved my USB "master data" external disk to another computer and used that. I lost no data. After some investigation, I determined it was a motherboard failure, so I scrapped the computer and used it for parts.
|
||||
2. **Drive failure:** An internal disk failed while I was working. I just moved my USB master disk to a backup computer. I lost 10 minutes of data updates. After work, I created a new boot disk by copying one from another computer—flexibility that only open source software offers. I used the affected computer the next day.
|
||||
3. **Fatal software update:** An update caused a failure in an important login service. I shifted to a backup computer where I hadn't yet applied the fatal update. I lost no data. After work, I searched for help with this problem and had it solved in an hour.
|
||||
4. **Monitor burnout:** My monitor fizzled out. I just swapped in a backup display and kept working. This took 10 minutes. After work, I determined that the problem was a burned-out capacitor, so I recycled the monitor.
|
||||
5. **Power outage:** Now, here's a situation I didn't plan for! A tornado took down the electrical power in our entire town for two days. I learned that one should think through _all_ possible contingencies—including alternate work sites.
### Make your plan
|
||||
|
||||
If you work from home, you need to consider what will happen when your home computer fails. If you don't, you could lose workdays to frustration while you scramble to fix the problem.
|
||||
|
||||
Open source software is the key. It runs so efficiently on older, cheaper computers that they become affordable backup machines. It offers device independence, and it ensures that you can design solutions that work best for you.
|
||||
|
||||
For most people, ensuring high availability is very simple. The trick is thinking about it in advance. Create a plan _and then test it_.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/high-availability-home-office
|
||||
|
||||
作者:[Howard Fosdick][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/howtech
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
|
||||
[2]: https://jitsi.org/downloads/
|
||||
[3]: https://opensource.com/article/20/7/nextcloud
|
||||
[4]: https://opensource.com/article/19/4/file-sharing-git
|
||||
[5]: https://opensource.com/article/19/3/backup-solutions
|
||||
[6]: https://opensource.com/life/16/3/turn-your-old-raspberry-pi-automatic-backup-server
|
||||
[7]: https://opensource.com/article/19/5/advanced-rsync
|
||||
[8]: https://opensource.com/article/18/9/take-control-your-data-syncthing
|
||||
[9]: https://opensource.com/sites/default/files/uploads/my_ha_strategy.png (High Availability Strategy)
|
||||
[10]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[11]: http://www.rexxinfo.org/Quick_Guide/Quick_Guide_To_Fixing_Computer_Hardware
|
||||
[12]: https://opensource.com/article/19/7/how-make-old-computer-useful-again
|
||||
[13]: https://www.freegeekchicago.org/
|
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Try Deno as an alternative to Node.js)
|
||||
[#]: via: (https://opensource.com/article/21/2/deno)
|
||||
[#]: author: (Bryant Son https://opensource.com/users/brson)
|
||||
|
||||
Try Deno as an alternative to Node.js
|
||||
======
|
||||
Deno is a secure runtime for JavaScript and TypeScript.
|
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Viper Browser: A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism)
|
||||
[#]: via: (https://itsfoss.com/viper-browser/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Viper Browser: A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism
|
||||
======
|
||||
|
||||
_**Brief: Viper Browser is a Qt-based browser that offers a simple user experience keeping privacy in mind.**_
|
||||
|
||||
While the majority of popular browsers run on top of Chromium, unique alternatives like [Firefox][1], [Beaker Browser][2], and some other [chrome alternatives][3] should not cease to exist.
|
||||
|
||||
Especially considering Google’s recently reported plan to strip [Google Chrome-specific features from Chromium][4], citing abuse as the reason.
|
||||
|
||||
While looking for more Chrome alternatives, I came across an interesting project, “[Viper Browser][5]”, thanks to a reader’s suggestion on [Mastodon][6].
|
||||
|
||||
### Viper Browser: An Open-Source Qt5-based Browser
|
||||
|
||||
_**Note**: Viper Browser is a fairly new project with a couple of contributors. It lacks certain features, which I’ll mention as you read on._
|
||||
|
||||
![][7]
|
||||
|
||||
Viper is an interesting web browser that focuses on being a powerful yet lightweight option while utilizing [QtWebEngine][8].
|
||||
|
||||
QtWebEngine borrows code from Chromium, but it does not include the binaries and services that connect to the Google platform.
|
||||
|
||||
I spent some time using it for daily browsing activities, and I must say I’m quite interested. Not just because it is simple to use (how complicated can a browser be?), but also because it focuses on enhancing your privacy by offering several ad-blocking options along with other useful settings.
|
||||
|
||||
![][9]
|
||||
|
||||
Even though I think it is not meant for everyone, it is still worth a look. Let me highlight its features briefly before you proceed to try it out.
|
||||
|
||||
### Features of Viper Browser
|
||||
|
||||
![][10]
|
||||
|
||||
I’ll list some of the key features that you can find useful:
|
||||
|
||||
* Ability to manage cookies
|
||||
* Multiple preset options to choose different Adblocker networks
|
||||
* Simple and easy to use
|
||||
* Privacy-friendly default search engine – [Startpage][11] (you can change this)
|
||||
* Ability to add user scripts
|
||||
* Ability to add new user agents
|
||||
* Option to disable JavaScript
|
||||
* Ability to prevent images from loading up
In addition to all these highlights, you can easily tweak the privacy settings to remove your history, clear cookies when exiting, and more.
|
||||
|
||||
![][12]
|
||||
|
||||
### Installing Viper Browser on Linux
|
||||
|
||||
It offers just an AppImage file in its [releases section][13] that you can use to test it on any Linux distribution.
|
||||
|
||||
In case you need help, you may refer to our guide on [using AppImage file on Linux][14] as well. If you’re curious, you can explore more about it on [GitHub][5].
|
||||
|
||||
[Viper Browser][5]
|
||||
|
||||
### My Thoughts on Using Viper Browser
|
||||
|
||||
I don’t think it could replace your current browser immediately, but if you are interested in testing new projects that try to offer Chrome alternatives, this is surely one of them.
|
||||
|
||||
When I tried logging in to my Google account, I was blocked with a message saying that it is a potentially insecure or unsupported browser. So, if you rely on your Google account, this is disappointing news.
|
||||
|
||||
However, other social media platforms work just fine, as does YouTube (without signing in). Netflix is not supported, but overall the browsing experience is quite fast and usable.
|
||||
|
||||
You can install user scripts, but Chrome extensions aren’t supported yet. Of course, that is either intentional or something to be addressed as development progresses, given that it is a privacy-friendly web browser.
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
Considering that this is a lesser-known yet interesting project for some, do you have any suggestions for what we should look at next? An open source project that deserves coverage?
|
||||
|
||||
Let me know in the comments down below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/viper-browser/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.mozilla.org/en-US/firefox/new/
|
||||
[2]: https://itsfoss.com/beaker-browser-1-release/
|
||||
[3]: https://itsfoss.com/open-source-browsers-linux/
|
||||
[4]: https://www.bleepingcomputer.com/news/google/google-to-kill-chrome-sync-feature-in-third-party-browsers/
|
||||
[5]: https://github.com/LeFroid/Viper-Browser
|
||||
[6]: https://mastodon.social/web/accounts/199851
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-browser.png?resize=800%2C583&ssl=1
|
||||
[8]: https://wiki.qt.io/QtWebEngine
|
||||
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-browser-setup.jpg?resize=793%2C600&ssl=1
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-preferences.jpg?resize=800%2C660&ssl=1
|
||||
[11]: https://www.startpage.com
|
||||
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-browser-tools.jpg?resize=800%2C262&ssl=1
|
||||
[13]: https://github.com/LeFroid/Viper-Browser/releases
|
||||
[14]: https://itsfoss.com/use-appimage-linux/
|
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Configure multi-tenancy with Kubernetes namespaces)
|
||||
[#]: via: (https://opensource.com/article/21/2/kubernetes-namespaces)
|
||||
[#]: author: (Mike Calizo https://opensource.com/users/mcalizo)
|
||||
|
||||
Configure multi-tenancy with Kubernetes namespaces
|
||||
======
|
||||
Namespaces provide basic building blocks of access control for
|
||||
applications, users, or groups of users.
|
||||
![shapes of people symbols][1]
|
||||
|
||||
Most enterprises want a multi-tenancy platform to run their cloud-native applications because it helps manage resources, costs, and operational efficiency and control [cloud waste][2].
|
||||
|
||||
[Kubernetes][3] is the leading open source platform for managing containerized workloads and services. It gained this reputation because of its flexibility in allowing operators and developers to establish automation with declarative configuration. But there is a catch: because Kubernetes adoption grows rapidly, the old problem of velocity resurfaces. The bigger your adoption, the more issues and resource waste you discover.
|
||||
|
||||
### An example of scale
|
||||
|
||||
Imagine your company started small with its Kubernetes adoption by deploying a variety of internal applications. It has multiple project streams running with multiple developers dedicated to each project stream.
|
||||
|
||||
In a scenario like this, you need to make sure your cluster administrator has full control over the cluster to manage its resources and implement cluster policy and security standards. In a way, the admin is herding the cluster's users to use best practices. A namespace is very useful in this instance because it enables different teams to share a single cluster where computing resources are subdivided into multiple teams.
|
||||
|
||||
While namespaces are your first step to Kubernetes multi-tenancy, they are not good enough on their own. There are a number of Kubernetes primitives you need to consider so that you can administer your cluster properly and put it into a production-ready implementation.
|
||||
|
||||
The Kubernetes primitives for multi-tenancy are:
|
||||
|
||||
1. **RBAC:** Role-based access control for Kubernetes
|
||||
2. **Network policies:** To isolate traffic between namespaces
|
||||
3. **Resource quotas:** To control fair access to cluster resources
This article explores how to use Kubernetes namespaces and some basic RBAC configurations to partition a single Kubernetes cluster and take advantage of this built-in Kubernetes tooling.
|
||||
|
||||
### What is a Kubernetes namespace?
|
||||
|
||||
Before digging into how to use namespaces to prepare your Kubernetes cluster to become multi-tenant-ready, you need to know what namespaces are.
|
||||
|
||||
A [namespace][4] is a Kubernetes object that partitions a Kubernetes cluster into multiple virtual clusters. This is done with the aid of [Kubernetes names and IDs][5]. Namespaces use the Kubernetes name object, which means that each object inside a namespace gets a unique name and ID across the cluster to allow virtual partitioning.
|
||||
|
||||
### How namespaces help in multi-tenancy
|
||||
|
||||
Namespaces are one of the Kubernetes primitives you can use to partition your cluster into multiple virtual clusters to allow multi-tenancy. Each namespace is isolated from every other user's, team's, or application's namespace. This isolation is essential in multi-tenancy so that updates and changes in applications, users, and teams are contained within the specific namespace. (Note that namespace does not provide network segmentation.)
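That caveat matters for multi-tenancy: namespaces give you naming and object isolation, not traffic isolation. The network-policy primitive from the list above closes that gap. Here is a minimal sketch, assuming the **test** namespace created below and a CNI plugin that actually enforces NetworkPolicy (for example, Calico or Cilium):

```yaml
# Sketch: deny all ingress into the "test" namespace except traffic
# from pods in that same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: test
spec:
  podSelector: {}        # applies to every pod in the namespace
  ingress:
  - from:
    - podSelector: {}    # only peers from this same namespace may connect
  policyTypes:
  - Ingress
```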
|
||||
|
||||
Before moving ahead, verify the default namespace in a working Kubernetes cluster:
|
||||
|
||||
|
||||
```
[root@master ~]# kubectl get namespace
NAME              STATUS   AGE
default           Active   3d
kube-node-lease   Active   3d
kube-public       Active   3d
kube-system       Active   3d
```
|
||||
|
||||
Then create your first namespace, called **test**:
|
||||
|
||||
|
||||
```
[root@master ~]# kubectl create namespace test
namespace/test created
```
|
||||
|
||||
Verify the newly created namespace:
|
||||
|
||||
|
||||
```
[root@master ~]# kubectl get namespace
NAME              STATUS   AGE
default           Active   3d
kube-node-lease   Active   3d
kube-public       Active   3d
kube-system       Active   3d
test              Active   10s
[root@master ~]#
```
|
||||
|
||||
Describe the newly created namespace:
|
||||
|
||||
|
||||
```
[root@master ~]# kubectl describe namespace test
Name:         test
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.
```
|
||||
|
||||
To delete a namespace:
|
||||
|
||||
|
||||
```
[root@master ~]# kubectl delete namespace test
namespace "test" deleted
```
|
||||
|
||||
Your new namespace is active, but it doesn't have any labels, annotations, or quota-limit ranges defined. However, now that you know how to create, describe, and delete a namespace, I'll show how you can use a namespace to virtually partition a Kubernetes cluster.
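The missing quota is easy to add when you need it. As a sketch, a ResourceQuota like the following caps what the **test** namespace can consume in aggregate; the specific limits are illustrative, not recommendations. Once applied, `kubectl describe namespace test` reports the quota instead of "No resource quota."

```yaml
# Sketch: bound the aggregate resources the "test" namespace may request.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
  namespace: test
spec:
  hard:
    pods: "10"
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
```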
|
||||
|
||||
### Partitioning clusters using namespace and RBAC
|
||||
|
||||
Deploy the following simple application to learn how to partition a cluster using namespace and isolate an application and its related objects from "other" users.
|
||||
|
||||
First, verify the namespace you will use. For simplicity, use the **test** namespace you created above:
|
||||
|
||||
|
||||
```
[root@master ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   3d
kube-node-lease   Active   3d
kube-public       Active   3d
kube-system       Active   3d
test              Active   3h
```
|
||||
|
||||
Then deploy a simple application called **test-app** inside the test namespace by using the following configuration:
|
||||
|
||||
|
||||
```
apiVersion: v1
kind: Pod
metadata:
  name: test-app        # name of the application
  namespace: test       # the namespace where the app runs
  labels:
    app: test-app       # labels for the app
spec:
  containers:
  - name: test-app
    image: nginx:1.14.2 # the image used for the app
    ports:
    - containerPort: 80
```
|
||||
|
||||
Deploy it:
|
||||
|
||||
|
||||
```
$ kubectl create -f test-app.yaml
pod/test-app created
```
|
||||
|
||||
Then verify the application pod was created:
|
||||
|
||||
|
||||
```
$ kubectl get pods -n test
NAME       READY   STATUS    RESTARTS   AGE
test-app   1/1     Running   0          18s
```
|
||||
|
||||
Now that the running application is inside the **test** namespace, test a use case where:
|
||||
|
||||
* **auth-user** can edit and view all the objects inside the test namespace
|
||||
* **un-auth-user** can only view the namespace
I pre-created the users for you to test. If you want to know how I created the users inside Kubernetes, view the commands [here][6].
|
||||
|
||||
|
||||
```
$ kubectl config view -o jsonpath='{.users[*].name}'
auth-user
kubernetes-admin
un-auth-user
```
|
||||
|
||||
With this set up, create a Kubernetes [Role and RoleBindings][7] to isolate the target namespace **test** to allow **auth-user** to view and edit objects inside the namespace and not allow **un-auth-user** to access or view the objects inside the **test** namespace.
|
||||
|
||||
Start by creating a ClusterRole and a Role. These objects are lists of verbs (actions) permitted on specific resources and namespaces.
|
||||
|
||||
Create a ClusterRole:
|
||||
|
||||
|
||||
```
$ cat clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: list-deployments
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list"]
```

(Note that a ClusterRole is cluster-scoped, so it takes no `namespace` field.)
|
||||
|
||||
Create a Role:
|
||||
|
||||
|
||||
```
|
||||
$ cat role.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: Role
|
||||
metadata:
|
||||
name: list-deployments
|
||||
namespace: test
|
||||
rules:
|
||||
- apiGroups: [ apps ]
|
||||
resources: [ deployments ]
|
||||
verbs: [ get, list ]
|
||||
```
|
||||
|
||||
Apply the Role:
|
||||
|
||||
|
||||
```
|
||||
$ kubectl create -f role.yaml
|
||||
roles.rbac.authorization.k8s.io "list-deployments" created
|
||||
```
|
||||
|
||||
Use the same command to create a ClusterRole:
|
||||
|
||||
|
||||
```
|
||||
$ kubectl create -f clusterrole.yaml
|
||||
|
||||
$ kubectl get role -n test
|
||||
NAME CREATED AT
|
||||
list-deployments 2021-01-18T00:54:00Z
|
||||
```
|
||||
|
||||
Verify the Roles:
|
||||
|
||||
|
||||
```
|
||||
$ kubectl describe roles -n test
|
||||
Name: list-deployments
|
||||
Labels: <none>
|
||||
Annotations: <none>
|
||||
PolicyRule:
|
||||
Resources Non-Resource URLs Resource Names Verbs
|
||||
--------- ----------------- -------------- -----
|
||||
deployments.apps [] [] [get list]
|
||||
```
|
||||
|
||||
Remember that RoleBindings are namespaced objects, and each one binds exactly one role. Since **auth-user** needs both the edit and view permissions, you need to create two RoleBindings for that user.
|
||||
|
||||
Here are the sample RoleBinding YAML files to permit **auth-user** to edit and view.
|
||||
|
||||
**To edit:**
|
||||
|
||||
|
||||
```
|
||||
$ cat rolebinding-auth-edit.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: auth-user-edit
|
||||
namespace: test
|
||||
subjects:
|
||||
- kind: User
|
||||
name: auth-user
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: edit
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
```
|
||||
|
||||
**To view:**
|
||||
|
||||
|
||||
```
|
||||
$ cat rolebinding-auth-view.yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: auth-user-view
|
||||
namespace: test
|
||||
subjects:
|
||||
- kind: User
|
||||
name: auth-user
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: view
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
```
|
||||
|
||||
Create these YAML files:
|
||||
|
||||
|
||||
```
|
||||
$ kubectl create -f rolebinding-auth-view.yaml
|
||||
$ kubectl create -f rolebinding-auth-edit.yaml
|
||||
```
|
||||
|
||||
Verify that the RoleBindings were created successfully:
|
||||
|
||||
|
||||
```
|
||||
$ kubectl get rolebindings -n test
|
||||
NAME ROLE AGE
|
||||
auth-user-edit ClusterRole/edit 48m
|
||||
auth-user-view ClusterRole/view 47m
|
||||
```
|
||||
|
||||
With the requirements set up, test the cluster partitioning:
|
||||
|
||||
|
||||
```
|
||||
[root@master]$ sudo su un-auth-user
|
||||
|
||||
[un-auth-user@master ~]$ kubectl get pods -n test
|
||||
Error from server (Forbidden): pods is forbidden: User "un-auth-user" cannot list resource "pods" in API group "" in the namespace "test"
|
||||
```
|
||||
|
||||
Log in as **auth-user**:
|
||||
|
||||
|
||||
```
|
||||
[root@master ]# sudo su auth-user
|
||||
[auth-user@master auth-user]$ kubectl get pods -n test
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
test-app 1/1 Running 0 3h8m
|
||||
[auth-user@master un-auth-user]$
|
||||
|
||||
[auth-user@master auth-user]$ kubectl edit pods/test-app -n test
|
||||
Edit cancelled, no changes made.
|
||||
```
|
||||
|
||||
You can view and edit the objects inside the **test** namespace. How about viewing the cluster nodes?
|
||||
|
||||
|
||||
```
|
||||
[auth-user@master auth-user]$ kubectl get nodes
|
||||
Error from server (Forbidden): nodes is forbidden: User "auth-user" cannot list resource "nodes" in API group "" at the cluster scope
|
||||
[auth-user@master auth-user]$
|
||||
```
|
||||
|
||||
You can't, because the RoleBindings for **auth-user** grant access to view and edit objects only inside the **test** namespace.
|
||||
|
||||
### Enable access control with namespaces
|
||||
|
||||
Namespaces provide basic building blocks of access control using RBAC and isolation for applications, users, or groups of users. But using namespaces alone as your multi-tenancy solution is not enough in an enterprise implementation. It is recommended that you use other Kubernetes multi-tenancy primitives to attain further isolation and implement proper security.
|
||||
|
||||
Namespaces can provide some basic isolation in your Kubernetes cluster; therefore, it is important to consider them upfront, especially when planning a multi-tenant cluster. Namespaces also allow you to logically segregate and assign resources to individual users, teams, or applications.
|
||||
|
||||
By using namespaces, you can increase resource efficiencies by enabling a single cluster to be used for a diverse set of workloads.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/kubernetes-namespaces
|
||||
|
||||
作者:[Mike Calizo][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mcalizo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/Open%20Pharma.png?itok=GP7zqNZE (shapes of people symbols)
|
||||
[2]: https://devops.com/the-cloud-is-booming-but-so-is-cloud-waste/
|
||||
[3]: https://opensource.com/resources/what-is-kubernetes
|
||||
[4]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
|
||||
[5]: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/
|
||||
[6]: https://www.adaltas.com/en/2019/08/07/users-rbac-kubernetes/
|
||||
[7]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
|
@ -0,0 +1,370 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Draw Mandelbrot fractals with GIMP scripting)
|
||||
[#]: via: (https://opensource.com/article/21/2/gimp-mandelbrot)
|
||||
[#]: author: (Cristiano L. Fontana https://opensource.com/users/cristianofontana)
|
||||
|
||||
Draw Mandelbrot fractals with GIMP scripting
|
||||
======
|
||||
Create complex mathematical images with GIMP's Script-Fu language.
|
||||
![Painting art on a computer screen][1]
|
||||
|
||||
The GNU Image Manipulation Program ([GIMP][2]) is my go-to solution for image editing. Its toolset is very powerful and convenient, except for doing [fractals][3], which is one thing you cannot draw by hand easily. These are fascinating mathematical constructs that have the characteristic of being [self-similar][4]. In other words, if they are magnified in some areas, they will look remarkably similar to the unmagnified picture. Besides being interesting, they also make very pretty pictures!
|
||||
|
||||
![Portion of a Mandelbrot fractal using GIMPs Coldfire palette][5]
|
||||
|
||||
Portion of a Mandelbrot fractal using GIMP's Coldfire palette (Cristiano Fontana, [CC BY-SA 4.0][6])
|
||||
|
||||
GIMP can be automated with [Script-Fu][7] to do [batch processing of images][8] or create complicated procedures that are not practical to do by hand; drawing fractals falls in the latter category. This tutorial will show how to draw a representation of the [Mandelbrot fractal][9] using GIMP and Script-Fu.
|
||||
|
||||
![Mandelbrot set drawn using GIMP's Firecode palette][10]
|
||||
|
||||
Portion of a Mandelbrot fractal using GIMP's Firecode palette. (Cristiano Fontana, [CC BY-SA 4.0][6])
|
||||
|
||||
![Rotated and magnified portion of the Mandelbrot set using Firecode.][11]
|
||||
|
||||
Rotated and magnified portion of the Mandelbrot set using the Firecode palette. (Cristiano Fontana, [CC BY-SA 4.0][6])
|
||||
|
||||
In this tutorial, you will write a script that creates a layer in an image and draws a representation of the Mandelbrot set with a colored environment around it.
|
||||
|
||||
### What is the Mandelbrot set?
|
||||
|
||||
Do not panic! I will not go into too much detail here. For the more math-savvy, the Mandelbrot set is defined as the set of [complex numbers][12] _a_ for which the succession
|
||||
|
||||
_zₙ₊₁ = zₙ² + a_
|
||||
|
||||
does not diverge when starting from _z₀ = 0_.
|
||||
|
||||
In reality, the Mandelbrot set is the fancy-looking black blob in the pictures; the nice-looking colors are outside the set. They represent how many iterations are required for the magnitude of the succession of numbers to pass a threshold value. In other words, the color scale shows how many steps are required for the succession to pass an upper-limit value.
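This escape-time idea can be sketched in a few lines of Python (a standalone illustration, independent of the GIMP script below): count how many iterations of the recurrence are needed before the magnitude passes the threshold.

```python
def escape_steps(a: complex, threshold: float = 4.0, max_steps: int = 255) -> int:
    """Number of iterations of z -> z*z + a before |z| exceeds threshold.

    Returns max_steps when the succession stays bounded, i.e. when the
    point a is (probably) inside the Mandelbrot set.
    """
    z = 0j
    for i in range(max_steps):
        z = z * z + a
        if abs(z) > threshold:
            return i
    return max_steps

print(escape_steps(0j))      # 255: 0 is in the set, the succession never escapes
print(escape_steps(2 + 0j))  # 1: diverges almost immediately
```

Mapping that step count to a palette entry is exactly what produces the colored halo around the black blob.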
|
||||
|
||||
### GIMP's Script-Fu
|
||||
|
||||
[Script-Fu][7] is the scripting language built into GIMP. It is an implementation of the [Scheme programming language][13].
|
||||
|
||||
If you want to get more acquainted with Scheme, GIMP's documentation offers an [in-depth tutorial][14]. I also wrote an article about [batch processing images][8] using Script-Fu. Finally, the Help menu offers a Procedure Browser with very extensive documentation with all of Script-Fu's functions described in detail.
|
||||
|
||||
![GIMP Procedure Browser][15]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][6])
|
||||
|
||||
Scheme is a Lisp-like language, so a major characteristic is that it uses a [prefix notation][16] and a [lot of parentheses][17]. Functions and operators are applied to a list of operands by prefixing them:
|
||||
|
||||
|
||||
```
|
||||
(function-name operand operand ...)
|
||||
|
||||
(+ 2 3)
|
||||
↳ Returns 5
|
||||
|
||||
(list 1 2 3 5)
|
||||
↳ Returns a list containing 1, 2, 3, and 5
|
||||
```
|
||||
|
||||
### Write the script
|
||||
|
||||
You can write your first script and save it to the **Scripts** folder found in the preferences window under **Folders → Scripts**. Mine is at `$HOME/.config/GIMP/2.10/scripts`. Write a file called `mandelbrot.scm` with:
|
||||
|
||||
|
||||
```
|
||||
; Complex numbers implementation
|
||||
(define (make-rectangular x y) (cons x y))
|
||||
(define (real-part z) (car z))
|
||||
(define (imag-part z) (cdr z))
|
||||
|
||||
(define (magnitude z)
|
||||
(let ((x (real-part z))
|
||||
(y (imag-part z)))
|
||||
(sqrt (+ (* x x) (* y y)))))
|
||||
|
||||
(define (add-c a b)
|
||||
(make-rectangular (+ (real-part a) (real-part b))
|
||||
(+ (imag-part a) (imag-part b))))
|
||||
|
||||
(define (mul-c a b)
|
||||
(let ((ax (real-part a))
|
||||
(ay (imag-part a))
|
||||
(bx (real-part b))
|
||||
(by (imag-part b)))
|
||||
(make-rectangular (- (* ax bx) (* ay by))
|
||||
(+ (* ax by) (* ay bx)))))
|
||||
|
||||
; Definition of the function creating the layer and drawing the fractal
|
||||
(define (script-fu-mandelbrot image palette-name threshold domain-width domain-height offset-x offset-y)
|
||||
(define num-colors (car (gimp-palette-get-info palette-name)))
|
||||
(define colors (cadr (gimp-palette-get-colors palette-name)))
|
||||
|
||||
(define width (car (gimp-image-width image)))
|
||||
(define height (car (gimp-image-height image)))
|
||||
|
||||
(define new-layer (car (gimp-layer-new image
|
||||
width height
|
||||
RGB-IMAGE
|
||||
"Mandelbrot layer"
|
||||
100
|
||||
LAYER-MODE-NORMAL)))
|
||||
|
||||
(gimp-image-add-layer image new-layer 0)
|
||||
(define drawable new-layer)
|
||||
(define bytes-per-pixel (car (gimp-drawable-bpp drawable)))
|
||||
|
||||
; Fractal drawing section.
|
||||
; Code from: <https://rosettacode.org/wiki/Mandelbrot_set#Racket>
|
||||
(define (iterations a z i)
|
||||
(let ((z′ (add-c (mul-c z z) a)))
|
||||
(if (or (= i num-colors) (> (magnitude z′) threshold))
|
||||
i
|
||||
(iterations a z′ (+ i 1)))))
|
||||
|
||||
(define (iter->color i)
|
||||
(if (>= i num-colors)
|
||||
(list->vector '(0 0 0))
|
||||
(list->vector (vector-ref colors i))))
|
||||
|
||||
(define z0 (make-rectangular 0 0))
|
||||
|
||||
(define (loop x end-x y end-y)
|
||||
(let* ((real-x (- (* domain-width (/ x width)) offset-x))
|
||||
(real-y (- (* domain-height (/ y height)) offset-y))
|
||||
(a (make-rectangular real-x real-y))
|
||||
(i (iterations a z0 0))
|
||||
(color (iter->color i)))
|
||||
(cond ((and (< x end-x) (< y end-y)) (gimp-drawable-set-pixel drawable x y bytes-per-pixel color)
|
||||
(loop (+ x 1) end-x y end-y))
|
||||
((and (>= x end-x) (< y end-y)) (gimp-progress-update (/ y end-y))
|
||||
(loop 0 end-x (+ y 1) end-y)))))
|
||||
(loop 0 width 0 height)
|
||||
|
||||
; These functions refresh the GIMP UI, otherwise the modified pixels would be evident
|
||||
(gimp-drawable-update drawable 0 0 width height)
|
||||
(gimp-displays-flush)
|
||||
)
|
||||
|
||||
(script-fu-register
|
||||
"script-fu-mandelbrot" ; Function name
|
||||
"Create a Mandelbrot layer" ; Menu label
|
||||
; Description
|
||||
"Draws a Mandelbrot fractal on a new layer. For the coloring it uses the palette identified by the name provided as a string. The image boundaries are defined by its domain width and height, which correspond to the image width and height respectively. Finally the image is offset in order to center the desired feature."
|
||||
"Cristiano Fontana" ; Author
|
||||
"2021, C.Fontana. GNU GPL v. 3" ; Copyright
|
||||
"27th Jan. 2021" ; Creation date
|
||||
"RGB" ; Image type that the script works on
|
||||
;Parameter Displayed Default
|
||||
;type label values
|
||||
SF-IMAGE "Image" 0
|
||||
SF-STRING "Color palette name" "Firecode"
|
||||
SF-ADJUSTMENT "Threshold value" '(4 0 10 0.01 0.1 2 0)
|
||||
SF-ADJUSTMENT "Domain width" '(3 0 10 0.1 1 4 0)
|
||||
SF-ADJUSTMENT "Domain height" '(3 0 10 0.1 1 4 0)
|
||||
SF-ADJUSTMENT "X offset" '(2.25 -20 20 0.1 1 4 0)
|
||||
SF-ADJUSTMENT "Y offset" '(1.50 -20 20 0.1 1 4 0)
|
||||
)
|
||||
(script-fu-menu-register "script-fu-mandelbrot" "<Image>/Layer/")
|
||||
```
|
||||
|
||||
I will go through the script to show you what it does.
|
||||
|
||||
### Get ready to draw the fractal
|
||||
|
||||
Since this image is all about complex numbers, I wrote a quick and dirty implementation of complex numbers in Script-Fu. I defined the complex numbers as [pairs][18] of real numbers. Then I added the few functions needed for the script. I used [Racket's documentation][19] as inspiration for function names and roles:
|
||||
|
||||
|
||||
```
|
||||
(define (make-rectangular x y) (cons x y))
|
||||
(define (real-part z) (car z))
|
||||
(define (imag-part z) (cdr z))
|
||||
|
||||
(define (magnitude z)
|
||||
(let ((x (real-part z))
|
||||
(y (imag-part z)))
|
||||
(sqrt (+ (* x x) (* y y)))))
|
||||
|
||||
(define (add-c a b)
|
||||
(make-rectangular (+ (real-part a) (real-part b))
|
||||
(+ (imag-part a) (imag-part b))))
|
||||
|
||||
(define (mul-c a b)
|
||||
(let ((ax (real-part a))
|
||||
(ay (imag-part a))
|
||||
(bx (real-part b))
|
||||
(by (imag-part b)))
|
||||
(make-rectangular (- (* ax bx) (* ay by))
|
||||
(+ (* ax by) (* ay bx)))))
|
||||
```
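For comparison, here is the same pair-based representation in Python (purely illustrative; Python already ships a built-in `complex` type that makes these helpers unnecessary in practice):

```python
import math

def make_rectangular(x, y):
    return (x, y)

def real_part(z):
    return z[0]

def imag_part(z):
    return z[1]

def magnitude(z):
    # |x + iy| = sqrt(x^2 + y^2)
    return math.hypot(real_part(z), imag_part(z))

def add_c(a, b):
    return make_rectangular(real_part(a) + real_part(b),
                            imag_part(a) + imag_part(b))

def mul_c(a, b):
    # (ax + i*ay)(bx + i*by) = (ax*bx - ay*by) + i(ax*by + ay*bx)
    ax, ay = real_part(a), imag_part(a)
    bx, by = real_part(b), imag_part(b)
    return make_rectangular(ax * bx - ay * by, ax * by + ay * bx)

print(mul_c((1, 2), (3, 4)))  # (-5, 10)
```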
|
||||
|
||||
### Draw the fractal
|
||||
|
||||
The new function is called `script-fu-mandelbrot`. The best practice for writing a new function is to call it `script-fu-something` so that it can be identified in the Procedure Browser easily. The function requires a few parameters: an `image` to which it will add a layer with the fractal, the `palette-name` identifying the color palette to be used, the `threshold` value to stop the iteration, the `domain-width` and `domain-height` that identify the image boundaries, and the `offset-x` and `offset-y` to center the image to the desired feature. The script also needs some other parameters that it can deduce from the GIMP interface:
|
||||
|
||||
|
||||
```
|
||||
(define (script-fu-mandelbrot image palette-name threshold domain-width domain-height offset-x offset-y)
|
||||
(define num-colors (car (gimp-palette-get-info palette-name)))
|
||||
(define colors (cadr (gimp-palette-get-colors palette-name)))
|
||||
|
||||
(define width (car (gimp-image-width image)))
|
||||
(define height (car (gimp-image-height image)))
|
||||
|
||||
...
|
||||
```
|
||||
|
||||
Then it creates a new layer and identifies it as the script's `drawable`. A "drawable" is the element you want to draw on:
|
||||
|
||||
|
||||
```
|
||||
(define new-layer (car (gimp-layer-new image
|
||||
width height
|
||||
RGB-IMAGE
|
||||
"Mandelbrot layer"
|
||||
100
|
||||
LAYER-MODE-NORMAL)))
|
||||
|
||||
(gimp-image-add-layer image new-layer 0)
|
||||
(define drawable new-layer)
|
||||
(define bytes-per-pixel (car (gimp-drawable-bpp drawable)))
|
||||
```
|
||||
|
||||
For the code determining the pixels' color, I used the [Racket][20] example on the [Rosetta Code][21] website. It is not the most optimized algorithm, but it is simple to understand. Even a non-mathematician like me can understand it. The `iterations` function determines how many steps the succession requires to pass the threshold value. To cap the iterations, I am using the number of colors in the palette. In other words, if the threshold is too high or the succession does not grow, the calculation stops at the `num-colors` value. The `iter->color` function transforms the number of iterations into a color using the provided palette. If the iteration number is equal to `num-colors`, it uses black because this means that the succession is probably bounded and that pixel is in the Mandelbrot set:
|
||||
|
||||
|
||||
```
|
||||
; Fractal drawing section.
|
||||
; Code from: <https://rosettacode.org/wiki/Mandelbrot_set#Racket>
|
||||
(define (iterations a z i)
|
||||
(let ((z′ (add-c (mul-c z z) a)))
|
||||
(if (or (= i num-colors) (> (magnitude z′) threshold))
|
||||
i
|
||||
(iterations a z′ (+ i 1)))))
|
||||
|
||||
(define (iter->color i)
|
||||
(if (>= i num-colors)
|
||||
(list->vector '(0 0 0))
|
||||
(list->vector (vector-ref colors i))))
|
||||
```
|
||||
|
||||
Because I have the feeling that Scheme users do not like to use loops, I implemented the function looping over the pixels as a recursive function. The `loop` function reads the starting coordinates and their upper boundaries. At each pixel, it defines some temporary variables with the `let*` function: `real-x` and `real-y` are the real coordinates of the pixel in the complex plane, according to the parameters; the `a` variable is the starting point for the succession; the `i` is the number of iterations; and finally `color` is the pixel color. Each pixel is colored with the `gimp-drawable-set-pixel` function that is an internal GIMP procedure. The peculiarity is that it is not undoable, and it does not trigger the image to refresh. Therefore, the image will not be updated during the operation. To play nice with the user, at the end of each row of pixels, it calls the `gimp-progress-update` function, which updates a progress bar in the user interface:
|
||||
|
||||
|
||||
```
|
||||
(define z0 (make-rectangular 0 0))
|
||||
|
||||
(define (loop x end-x y end-y)
|
||||
(let* ((real-x (- (* domain-width (/ x width)) offset-x))
|
||||
(real-y (- (* domain-height (/ y height)) offset-y))
|
||||
(a (make-rectangular real-x real-y))
|
||||
(i (iterations a z0 0))
|
||||
(color (iter->color i)))
|
||||
(cond ((and (< x end-x) (< y end-y)) (gimp-drawable-set-pixel drawable x y bytes-per-pixel color)
|
||||
(loop (+ x 1) end-x y end-y))
|
||||
((and (>= x end-x) (< y end-y)) (gimp-progress-update (/ y end-y))
|
||||
(loop 0 end-x (+ y 1) end-y)))))
|
||||
(loop 0 width 0 height)
|
||||
```
|
||||
|
||||
At the calculation's end, the function needs to inform GIMP that it modified the `drawable`, and it should refresh the interface because the image is not "automagically" updated during the script's execution:
|
||||
|
||||
|
||||
```
|
||||
(gimp-drawable-update drawable 0 0 width height)
|
||||
(gimp-displays-flush)
|
||||
```
|
||||
|
||||
### Interact with the user interface
|
||||
|
||||
To use the `script-fu-mandelbrot` function in the graphical user interface (GUI), the script needs to inform GIMP. The `script-fu-register` function informs GIMP about the parameters required by the script and provides some documentation:
|
||||
|
||||
|
||||
```
|
||||
(script-fu-register
|
||||
"script-fu-mandelbrot" ; Function name
|
||||
"Create a Mandelbrot layer" ; Menu label
|
||||
; Description
|
||||
"Draws a Mandelbrot fractal on a new layer. For the coloring it uses the palette identified by the name provided as a string. The image boundaries are defined by its domain width and height, which correspond to the image width and height respectively. Finally the image is offset in order to center the desired feature."
|
||||
"Cristiano Fontana" ; Author
|
||||
"2021, C.Fontana. GNU GPL v. 3" ; Copyright
|
||||
"27th Jan. 2021" ; Creation date
|
||||
"RGB" ; Image type that the script works on
|
||||
;Parameter Displayed Default
|
||||
;type label values
|
||||
SF-IMAGE "Image" 0
|
||||
SF-STRING "Color palette name" "Firecode"
|
||||
SF-ADJUSTMENT "Threshold value" '(4 0 10 0.01 0.1 2 0)
|
||||
SF-ADJUSTMENT "Domain width" '(3 0 10 0.1 1 4 0)
|
||||
SF-ADJUSTMENT "Domain height" '(3 0 10 0.1 1 4 0)
|
||||
SF-ADJUSTMENT "X offset" '(2.25 -20 20 0.1 1 4 0)
|
||||
SF-ADJUSTMENT "Y offset" '(1.50 -20 20 0.1 1 4 0)
|
||||
)
|
||||
```
|
||||
|
||||
Then the script tells GIMP to put the new function in the Layer menu with the label "Create a Mandelbrot layer":
|
||||
|
||||
|
||||
```
|
||||
(script-fu-menu-register "script-fu-mandelbrot" "<Image>/Layer/")
|
||||
```
|
||||
|
||||
Having registered the function, you can visualize it in the Procedure Browser.
|
||||
|
||||
![script-fu-mandelbrot function][22]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][6])
|
||||
|
||||
### Run the script
|
||||
|
||||
Now that the function is ready and registered, you can draw the Mandelbrot fractal! First, create a square image and run the script from the Layers menu.
|
||||
|
||||
![script running][23]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][6])
|
||||
|
||||
The default values are a good starting point for obtaining the following image. The first time you run the script, create a very small image (e.g., 60x60 pixels) because this implementation is slow! It took my computer several hours to create the following image at the full 1920x1920 pixels. As I mentioned earlier, this is not the most optimized algorithm; rather, it was the easiest for me to understand.
|
||||
|
||||
![Mandelbrot set drawn using GIMP's Firecode palette][10]
|
||||
|
||||
Portion of a Mandelbrot fractal using GIMP's Firecode palette. (Cristiano Fontana, [CC BY-SA 4.0][6])
|
||||
|
||||
### Learn more
|
||||
|
||||
This tutorial showed how to use GIMP's built-in scripting features to draw an image created with an algorithm. These images show GIMP's powerful set of tools that can be used for artistic applications and mathematical images.
|
||||
|
||||
If you want to move forward, I suggest you look at the official documentation and its [tutorial][14]. As an exercise, try modifying this script to draw a [Julia set][24], and please share the resulting image in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/gimp-mandelbrot
|
||||
|
||||
作者:[Cristiano L. Fontana][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/cristianofontana
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
|
||||
[2]: https://www.gimp.org/
|
||||
[3]: https://en.wikipedia.org/wiki/Fractal
|
||||
[4]: https://en.wikipedia.org/wiki/Self-similarity
|
||||
[5]: https://opensource.com/sites/default/files/uploads/mandelbrot_portion.png (Portion of a Mandelbrot fractal using GIMPs Coldfire palette)
|
||||
[6]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[7]: https://docs.gimp.org/en/gimp-concepts-script-fu.html
|
||||
[8]: https://opensource.com/article/21/1/gimp-scripting
|
||||
[9]: https://en.wikipedia.org/wiki/Mandelbrot_set
|
||||
[10]: https://opensource.com/sites/default/files/uploads/mandelbrot.png (Mandelbrot set drawn using GIMP's Firecode palette)
|
||||
[11]: https://opensource.com/sites/default/files/uploads/mandelbrot_portion2.png (Rotated and magnified portion of the Mandelbrot set using Firecode.)
|
||||
[12]: https://en.wikipedia.org/wiki/Complex_number
|
||||
[13]: https://en.wikipedia.org/wiki/Scheme_(programming_language)
|
||||
[14]: https://docs.gimp.org/en/gimp-using-script-fu-tutorial.html
|
||||
[15]: https://opensource.com/sites/default/files/uploads/procedure_browser_0.png (GIMP Procedure Browser)
|
||||
[16]: https://en.wikipedia.org/wiki/Polish_notation
|
||||
[17]: https://xkcd.com/297/
|
||||
[18]: https://www.gnu.org/software/guile/manual/html_node/Pairs.html
|
||||
[19]: https://docs.racket-lang.org/reference/generic-numbers.html?q=make-rectangular#%28part._.Complex_.Numbers%29
|
||||
[20]: https://racket-lang.org/
|
||||
[21]: https://rosettacode.org/wiki/Mandelbrot_set#Racket
|
||||
[22]: https://opensource.com/sites/default/files/uploads/mandelbrot_documentation.png (script-fu-mandelbrot function)
|
||||
[23]: https://opensource.com/sites/default/files/uploads/script_working.png (script running)
|
||||
[24]: https://en.wikipedia.org/wiki/Julia_set
|
@ -0,0 +1,101 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (scvoet)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Add Fingerprint Login in Ubuntu and Other Linux Distributions)
|
||||
[#]: via: (https://itsfoss.com/fingerprint-login-ubuntu/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
How to Add Fingerprint Login in Ubuntu and Other Linux Distributions
|
||||
======
|
||||
|
||||
Many high-end laptops come with fingerprint readers these days. Windows and macOS have supported fingerprint login for some time. On desktop Linux, fingerprint login used to require geeky tweaks, but [GNOME][1] and [KDE][2] have started supporting it through the system settings.
|
||||
|
||||
This means that on newer Linux distribution versions, you can easily use fingerprint login. I am going to enable fingerprint login in Ubuntu here, but you may use the same steps on other distributions running GNOME 3.38.
|
||||
|
||||
#### Prerequisite
|
||||
|
||||
This is obvious, of course. Your computer must have a fingerprint reader.
|
||||
|
||||
This method works for any Linux distribution running GNOME version 3.38 or higher. If you are not certain, you may [check which desktop environment version you are using][3].
|
||||
|
||||
KDE 5.21 also has a fingerprint manager. The screenshots will look different, of course.
|
||||
|
||||
### Adding fingerprint login in Ubuntu and other Linux distributions
|
||||
|
||||
Go to **Settings** and then click on **Users** in the left sidebar. You should see all the user accounts on your system here. You’ll see several options, including **Fingerprint Login**.
|
||||
|
||||
Click on the Fingerprint Login option here.
|
||||
|
||||
![Enable fingerprint login in Ubuntu][4]
|
||||
|
||||
It will immediately ask you to scan a new fingerprint. When you click the + sign to add a fingerprint, it presents a few predefined options so that you can easily identify which finger or thumb it is.
|
||||
|
||||
You could, of course, scan your left thumb after clicking right index finger, though I don’t see a good reason why you would want to do that.
|
||||
|
||||
![Adding fingerprint][5]
|
||||
|
||||
While adding the fingerprint, rotate your finger or thumb as directed.
|
||||
|
||||
![Rotate your finger][6]
|
||||
|
||||
Once the system registers the entire finger, it will give you a green signal that the fingerprint has been added.
|
||||
|
||||
![Fingerprint successfully added][7]
|
||||
|
||||
If you want to test it right away, lock the screen with the Super+L keyboard shortcut in Ubuntu and then use your fingerprint to log in.
|
||||
|
||||
![Login With Fingerprint in Ubuntu][8]
|
||||
|
||||
#### Experience with fingerprint login on Ubuntu
|
||||
|
||||
Fingerprint login is what its name suggests: logging in with your fingerprint. That’s it. You cannot use your finger when a program that needs sudo access asks for authentication. It’s not a replacement for your password.
|
||||
|
||||
One more thing: the [keyring in Ubuntu][9] also remains locked when you log in with your fingerprint.
|
||||
|
||||
Another annoyance comes from GNOME’s GDM login screen. When you log in, you have to click on your account first to get to the password screen; only then can you use your finger. It would be nicer not to have to click the user account first.
|
||||
|
||||
I also noticed that fingerprint reading is not as smooth and quick as it is on Windows. It works, though.
|
||||
|
||||
If you are somewhat disappointed with the fingerprint login on Linux, you may disable it. Let me show you the steps in the next section.
|
||||
|
||||
### Disable fingerprint login
|
||||
|
||||
Disabling fingerprint login is pretty much the same as enabling it in the first place.
|
||||
|
||||
Go to **Settings → Users** and then click on the Fingerprint Login option. It will show a screen with options to add more fingerprints or delete the existing ones. Delete the existing fingerprints to disable fingerprint login.
|
||||
|
||||
![Disable Fingerprint Login][10]
|
||||
|
||||
Fingerprint login does have some benefits, especially for lazy people like me. I don’t have to type my password every time I lock the screen, and I am happy with the limited usage.
|
||||
|
||||
Enabling sudo with fingerprint should not be entirely impossible with [PAM][11]. I remember that when I [set up face unlock in Ubuntu][12], it could be used with sudo as well. Let’s see if future versions add this feature.
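For the curious, the usual approach (untested here, and the exact file and module location may differ per distribution) is to add the fprintd PAM module near the top of the sudo stack, e.g. in `/etc/pam.d/sudo`:

```
auth    sufficient    pam_fprintd.so
```

With a line like that in place, sudo would try the fingerprint reader first and fall back to the password if the scan fails or times out.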
|
||||
|
||||
Do you have a laptop with fingerprint reader? Do you use it often or is it just one of things you don’t care about?
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://itsfoss.com/fingerprint-login-ubuntu/

Author: [Abhishek Prakash][a]

Selected by: [lujun9972][b]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux 中国](https://linux.cn/)

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.gnome.org/
[2]: https://kde.org/
[3]: https://itsfoss.com/find-desktop-environment/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/enable-fingerprint-ubuntu.png?resize=800%2C607&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/adding-fingerprint-login-ubuntu.png?resize=800%2C496&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/adding-fingerprint-ubuntu-linux.png?resize=800%2C603&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/fingerprint-added-ubuntu.png?resize=797%2C510&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/login-with-fingerprint-ubuntu.jpg?resize=800%2C320&ssl=1
[9]: https://itsfoss.com/ubuntu-keyring/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/disable-fingerprint-login.png?resize=798%2C524&ssl=1
[11]: https://tldp.org/HOWTO/User-Authentication-HOWTO/x115.html
[12]: https://itsfoss.com/face-unlock-ubuntu/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage your budget on Linux with this open source finance tool)
[#]: via: (https://opensource.com/article/21/2/linux-skrooge)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Manage your budget on Linux with this open source finance tool
======

Make managing your finances easier with Skrooge, an open source budgeting tool.

![2 cents penny money currency][1]

In 2021, there are more reasons than ever why people love Linux. In this series, I'll share 21 different reasons to use Linux. This article is about personal financial management.

Personal finances can be difficult to manage. It can be frustrating and even scary when you don't have enough money to get by without financial assistance, and it can be surprisingly overwhelming when you do have the money you need but no clear notion of where it all goes each month. To make matters worse, we're often told to "make a budget" as if declaring the amount of money you can spend each month will somehow manifest the money you need. The bottom line is that making a budget is hard, and not meeting your financial goals is discouraging. But it's still important, and Linux has several tools that can help make the task manageable.

### Money management

As with anything else in life, we all have our own ways of keeping track of our money. I used to take a simple and direct approach: My paycheck was deposited into an account, and I'd withdraw some percentage in cash. Once the cash was gone from my wallet, I had to wait until the next payday to spend anything. It only took one day of missing out on lunch to learn that I had to take my goals seriously, and I adjusted my spending behavior accordingly. For the simple lifestyle I had at the time, it was an effective means of keeping myself honest with my income, but it didn't translate well to online business transactions, long-term utility contracts, investments, and so on.

As I continue to refine the way I track my finances, I've learned that personal accounting is always an evolving process. We each have unique financial circumstances, which inform what kind of solution we can or should use to track our income and debt. If you're out of work, then your budgeting goal is likely to spend as little as possible. If you're working but paying off a student loan, then your goal probably favors sending money to the bank. And if you're working but planning for retirement, then you're probably trying to save as much as you can.

The thing to remember about a budget is that it's meant to compare your financial reality with your financial _goals_. You can't avoid some expenses, but after those, you get to set your own priorities. If you don't hit your goals, you can adjust your own behavior or rewrite your goals so that they better reflect reality. Adapting your financial plan doesn't mean you've failed. It just means that your initial projection wasn't accurate. During hard times, you may not be able to hit any budget goals, but if you keep up with your budget, you'll learn a lot about what it takes financially to maintain your current lifestyle (whatever it may be). Over time, you can learn to adjust settings you may never have realized were available to you. For instance, people are moving to rural towns for the lower cost of living now that remote work is a widely accepted option. It's pretty stunning to see how such a lifestyle shift can alter your budget reports.

The point is that budgeting is an often undervalued activity, and in no small part because it's daunting. It's important to realize that you can budget, no matter your level of expertise or interest in finances. Whether you [just use a LibreOffice spreadsheet][2], or try a dedicated financial application, you can set goals, track your own behavior, and learn a lot of valuable lessons that could eventually pay dividends.

### Open source accounting

There are several dedicated [personal finance applications for Linux][3], including [HomeBank][4], [Money Manager EX][5], [GNUCash][6], [KMyMoney][7], and [Skrooge][8]. All of these applications are essentially ledgers, a place you can retreat to at the end of each month (or whenever you look at your accounts), import data from your bank, and review how your expenditures align with whatever budget you've set for yourself.

![Skrooge interface with financial data displayed][9]

Skrooge

I use Skrooge as my personal budget tracker. It's an easy application to set up, even with multiple bank accounts. Skrooge, as with most open source finance apps, can import multiple file formats, so my workflow goes something like this:

1. Log in to my banks.
2. Export the month's bank statement as QIF files.
3. Open Skrooge.
4. Import the QIF files. Each is assigned to its appropriate account automatically.
5. Review my expenditures compared to the budget goals I've set for myself. If I've gone over, then I dock next month's goals (so that I'll ideally spend less to make up the difference). If I've come in under my goal, then I move the excess to December's budget (so I'll have more to spend at the end of the year).
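The rollover rule in step 5 can be sketched in a few lines of shell (my own illustration; the figures and variable names are invented, and Skrooge itself manages this through its budget settings):

```shell
# Toy sketch of the budget rollover rule from step 5.
goal=500          # this month's budget goal
spent=520         # what was actually spent
next_goal=$goal   # next month's goal, adjusted below
december_extra=0  # surplus banked for December

if [ "$spent" -gt "$goal" ]; then
  # Overspent: dock next month's goal by the overage
  next_goal=$(( goal - (spent - goal) ))
else
  # Under budget: bank the excess for December
  december_extra=$(( goal - spent ))
fi

echo "next month: $next_goal, banked for December: $december_extra"
```

With the figures above, this prints `next month: 480, banked for December: 0`.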

I only track a subset of the household budget in Skrooge. Skrooge makes that process easy through a dynamic database that allows me to categorize multiple transactions at once with custom tags. This makes it easy for me to extract my personal expenditures from general household and utility expenses, and I can leverage these categories when reviewing the autogenerated reports Skrooge provides.

![Skrooge budget pie chart][10]

Skrooge budget pie chart

Most importantly, the popular Linux financial apps allow me to manage my budget the way that works best for me. For instance, my partner prefers to use a LibreOffice spreadsheet, but with very little effort, I can extract a CSV file from the household budget, import it into Skrooge, and use an updated set of data. There's no lock-in, no incompatibility. The system is flexible and agile, allowing us to adapt our budget and our method of tracking expenses as we learn more about effective budgeting and about what life has in store.

### Open choice

Money markets worldwide differ, and the way we each interact with them also defines what tools we can use. Ultimately, your choice of what to use for your finances is a decision you must make based on your own requirements. And one thing open source does particularly well is provide its users the freedom of choice.

When setting my own financial goals, I appreciate that I can use whatever application fits in best with my style of personal computing. I get to retain control of how I process the data in my life, even when it's data I don't necessarily enjoy having to process. Linux and its amazing set of applications make it just a little less of a chore.

Try some financial apps on Linux and see if you can inspire yourself to set some goals and save money!

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/linux-skrooge

Author: [Seth Kenlon][a]

Selected by: [lujun9972][b]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux 中国](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/Medical%20Costs%20Transparency_1.jpg?itok=CkZ_J88m (2 cents penny money currency)
[2]: https://opensource.com/article/20/3/libreoffice-templates
[3]: https://opensource.com/life/17/10/personal-finance-tools-linux
[4]: http://homebank.free.fr/en/index.php
[5]: https://www.moneymanagerex.org/download
[6]: https://opensource.com/article/20/2/gnucash
[7]: https://kmymoney.org/download.html
[8]: https://apps.kde.org/en/skrooge
[9]: https://opensource.com/sites/default/files/skrooge.jpg
[10]: https://opensource.com/sites/default/files/skrooge-pie_0.jpg

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (31 open source text editors you need to try)
[#]: via: (https://opensource.com/article/21/2/open-source-text-editors)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

31 open source text editors you need to try
======

Looking for a new text editor? Here are 31 options to consider.

![open source button on keyboard][1]

Computers are text-based, so the more things you do with them, the more you find yourself needing a text-editing application. And the more time you spend in a text editor, the more likely you are to demand more from whatever you use.

If you're looking for a good text editor, you'll find that Linux has plenty to offer. Whether you want to work in the terminal, on your desktop, or in the cloud, you can literally try a different editor every day for a month (or one a month for almost three years) in your relentless search for the perfect typing experience.

### Vim-like editors

![][2]

* [Vi][3] ships with every Linux, BSD, Solaris, and macOS installation. It's the quintessential Unix text editor, with its unique combination of editing modes and super-efficient single-key shortcuts. The original Vi editor was an application written by Bill Joy, creator of the C shell. Modern incarnations of Vi, most notably Vim, have added many features, including multiple levels of undo, better navigation while in insert mode, line folding, syntax highlighting, plugin support, and much more. It takes practice (it even has its own tutor application, vimtutor).
* [Kakoune][4] is a Vim-inspired application with a familiar, minimalistic interface, terse keyboard shortcuts, and separate editing and insert modes. It looks and feels a lot like Vi at first, but with its own unique style, both in design and function. As a special bonus, it features an implementation of the Clippy interface.

### emacs editors

![][5]

* The original free emacs, and one of the first official applications of the GNU project that started the Free Software movement, [GNU Emacs][6] is a wildly popular text editor. It's great for sysadmins, developers, and everyday users alike, with loads of features and seemingly endless extensions. Once you start using Emacs, you might find it difficult to think of a reason to close it because it's just that versatile!
* If you like Emacs but find GNU Emacs too bloated, then you might like [Jove][7]. Jove is a terminal-based emacs editor. It's easy to use, and if you're new to emacsen (the plural of emacs), Jove is also easy to learn, thanks to the teachjove command.
* Another lightweight emacs editor, [Jed][8] is a simple incarnation of a macro-based workflow. One thing that sets it apart from other editors is its use of [S-Lang][9], a C-like scripting language providing extensibility options to developers more comfortable with C than with Lisp.

### Interactive editors

![][10]

* [GNU nano][11] takes a bold stance on terminal-based text editing: it provides a menu. Yes, this humble editor takes a cue from GUI editors by telling the user exactly which key they need to press to perform a specific function. This is a refreshing take on user experience, so it's no wonder that it's nano, not Vi, that's set as the default editor for "user-friendly" distributions.
* [JOE][12] is based on an old text-editing application called WordStar. If you're not familiar with WordStar, JOE can also mimic Emacs or GNU nano. By default, it's a good compromise between something relatively mysterious like Emacs or Vi and the always-on verbosity of GNU nano (for example, it tells you how to activate an onscreen help display, but it's not on by default).
* The excellent [e3][13] application is a tiny text editor with five built-in keyboard shortcut schemes to emulate Emacs, Vi, nano, NEdit, and WordStar. In other words, no matter what terminal-based editor you are used to, you're likely to feel right at home with e3.

### ed and more

* The [ed][14] line editor is part of the [POSIX][15] and Open Group's standard definition of a Unix-based operating system. You can count on it being installed on nearly every Linux or Unix system you'll ever encounter. It's tiny, terse, and tip-top.
* Building upon ed, the [Sed][16] stream editor is popular both for its functionality and its syntax. Most Linux users learn at least one sed command when searching for the easiest and fastest way to update a line in a config file, but it's worth taking a closer look. Sed is a powerful command with lots of useful subcommands. Get to know it better, and you may find yourself opening text editor applications a lot less frequently.
* You don't always need a text editor to edit text. The [heredoc][17] (or Here Doc) system, available in any POSIX terminal, allows you to type text directly into your open terminal and pipes what you type into a text file. It's not the most robust editing experience, but it is versatile and always available.
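The heredoc and sed workflows described above fit in a short, self-contained shell session (the file name is invented for the example; `ed` would accept the same kind of substitution, e.g. `1s/hello/hi/`, as a script on its standard input):

```shell
# heredoc: type text straight into a file, no editor needed
cat > greeting.txt <<'EOF'
hello
goodbye
EOF

# sed: update a line in the file without opening an editor
# (in-place editing with -i is a GNU sed extension)
sed -i 's/^hello$/hi/' greeting.txt

cat greeting.txt   # first line is now "hi"
```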

### Minimalist editors

![][18]

If your idea of a good text editor is a word processor except without all the processing, you're probably looking for one of these classics. These editors let you write and edit text with minimal interference and minimal assistance. What features they do offer are often centered around markup, Markdown, or code. Some have names that follow a certain pattern:

* [Gedit][19] from the GNOME team
* [medit][20] for a classic GNOME feel
* [Xedit][21] uses only the most basic X11 libraries
* [jEdit][22] for Java aficionados

A similar experience is available for KDE users:

* [Kate][23] is an unassuming editor with all the features you need.
* [KWrite][24] hides a ton of useful features in a deceptively simple, easy-to-use interface.

And there are a few for other platforms:

* [Notepad++][25] is a popular Windows application, while Notepadqq takes a similar approach for Linux.
* [Pe][26] is for Haiku OS (the reincarnation of that quirky child of the '90s, BeOS).
* [FeatherPad][27] is a basic editor for Linux but with some support for macOS and Haiku. If you're a Qt hacker looking to port code, take a look!

### IDEs

![][28]

There's quite a crossover between text editors and integrated development environments (IDEs). The latter really is just the former with lots of code-specific features added on. If you use an IDE regularly, you might find an XML or Markdown editor lurking in your extension manager:

* [NetBeans][29] is a handy text editor for Java users.
* [Eclipse][30] offers a robust editing suite with lots of extensions to give you the tools you need.

### Cloud-based editors

![][31]

Working in the cloud? You can write there too, you know.

* [Etherpad][32] is a text editor app that runs on the web. There are free and independent instances for you to use, or you can set up your own.
* [Nextcloud][33] has a thriving app scene and includes both a built-in text editor and a third-party Markdown editor with live preview.

### Newer editors

![][34]

Everybody has an idea about what makes a text editor perfect. For that reason, new editors are released each year. Some reimplement classic old ideas in a new and exciting way, some have unique takes on the user experience, and some focus on specific needs.

* [Atom][35] is an all-purpose modern text editor from GitHub featuring lots of extensions and Git integration.
* [Brackets][36] is an editor from Adobe for web developers.
* [Focuswriter][37] seeks to help you focus on writing with helpful features like a distraction-free fullscreen mode, optional typewriter sound effects, and beautiful configuration options.
* [Howl][38] is a progressive, dynamic editor based on Lua and Moonscript.
* [Norka][39] and [KJots][40] mimic a notebook, with each document representing a "page" in your "binder." You can take individual pages out of your notebook through export functions.

### DIY editor

![][41]

As the saying does _NOT_ go: Why use somebody else's application when you can write your own? Linux has over 30 text editors available, so probably the last thing it really needs is another one. Then again, part of the fun of open source is the ability to experiment.

If you're looking for an excuse to learn how to program, making your own text editor is a great way to get started. You can achieve the basics in about 100 lines of code, and the more you use it, the more you'll be inspired to learn more so you can make improvements. Ready to get started? Go and [create your own text editor][42].
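As a taste of how little it takes to get started, here is a toy sketch of my own (nowhere near a real editor): a shell function that mimics ed's input mode, collecting lines until a lone `.` is typed:

```shell
# A toy "insert mode": read lines until a single "." and print the buffer.
# This is the seed of an editor, not an editor -- but it's a start.
toy_insert() {
  while IFS= read -r line; do
    [ "$line" = "." ] && break
    printf '%s\n' "$line"
  done
}

# Demo: pipe a short "session" through it and save the result
printf 'first line\nsecond line\n.\nnever seen\n' | toy_insert > buffer.txt
cat buffer.txt
```

This prints the two buffered lines; everything after the `.` is discarded.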

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/open-source-text-editors

Author: [Seth Kenlon][a]

Selected by: [lujun9972][b]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux 中国](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx (open source button on keyboard)
[2]: https://opensource.com/sites/default/files/kakoune-screenshot.png
[3]: https://opensource.com/article/20/12/vi-text-editor
[4]: https://opensource.com/article/20/12/kakoune
[5]: https://opensource.com/sites/default/files/jed.png
[6]: https://opensource.com/article/20/12/emacs
[7]: https://opensource.com/article/20/12/jove-emacs
[8]: https://opensource.com/article/20/12/jed
[9]: https://www.jedsoft.org/slang
[10]: https://opensource.com/sites/default/files/uploads/nano-31_days-nano-opensource.png
[11]: https://opensource.com/article/20/12/gnu-nano
[12]: https://opensource.com/article/20/12/31-days-text-editors-joe
[13]: https://opensource.com/article/20/12/e3-linux
[14]: https://opensource.com/article/20/12/gnu-ed
[15]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[16]: https://opensource.com/article/20/12/sed
[17]: https://opensource.com/article/20/12/heredoc
[18]: https://opensource.com/sites/default/files/uploads/gedit-31_days_gedit-opensource.jpg
[19]: https://opensource.com/article/20/12/gedit
[20]: https://opensource.com/article/20/12/medit
[21]: https://opensource.com/article/20/12/xedit
[22]: https://opensource.com/article/20/12/jedit
[23]: https://opensource.com/article/20/12/kate-text-editor
[24]: https://opensource.com/article/20/12/kwrite-kde-plasma
[25]: https://opensource.com/article/20/12/notepad-text-editor
[26]: https://opensource.com/article/20/12/31-days-text-editors-pe
[27]: https://opensource.com/article/20/12/featherpad
[28]: https://opensource.com/sites/default/files/uploads/eclipse-31_days-eclipse-opensource.png
[29]: https://opensource.com/article/20/12/netbeans
[30]: https://opensource.com/article/20/12/eclipse
[31]: https://opensource.com/sites/default/files/uploads/etherpad_0.jpg
[32]: https://opensource.com/article/20/12/etherpad
[33]: https://opensource.com/article/20/12/31-days-text-editors-nextcloud-markdown-editor
[34]: https://opensource.com/sites/default/files/uploads/atom-31_days-atom-opensource.png
[35]: https://opensource.com/article/20/12/atom
[36]: https://opensource.com/article/20/12/brackets
[37]: https://opensource.com/article/20/12/focuswriter
[38]: https://opensource.com/article/20/12/howl
[39]: https://opensource.com/article/20/12/norka
[40]: https://opensource.com/article/20/12/kjots
[41]: https://opensource.com/sites/default/files/uploads/this-time-its-personal-31_days_yourself-opensource.png
[42]: https://opensource.com/article/20/12/31-days-text-editors-one-you-write-yourself

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting to Know the Cryptocurrency Open Patent Alliance (COPA))
[#]: via: (https://www.linux.com/news/getting-to-know-the-cryptocurrency-open-patent-alliance-copa/)
[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/linuxdotcom/)

Getting to Know the Cryptocurrency Open Patent Alliance (COPA)
======

![][1]

### Why is there a need for a patent protection alliance for cryptocurrency technologies?

With the recent surge in popularity of cryptocurrencies and related technologies, Square felt an industry group was needed to protect against litigation and other threats against core cryptocurrency technology and ensure the ecosystem remains vibrant and open for developers and companies.

The same way [Open Invention Network][2] (OIN) and [LOT Network][3] add a layer of patent protection to inter-company collaboration on open source technologies, COPA aims to protect open source cryptocurrency technology. Feeling safe from the threat of lawsuits is a precursor to good collaboration.

* Locking up foundational cryptocurrency technologies in patents stifles innovation and adoption of cryptocurrency in novel and useful applications.
* The offensive use of patents threatens the growth and free availability of cryptocurrency technologies. Many smaller companies and developers do not own patents and cannot deter or defend threats adequately.

By joining COPA, a member can feel secure that it can innovate in the cryptocurrency space without fear of litigation from other members.

### What is Square’s involvement in COPA?

Square’s core purpose is economic empowerment, and they see cryptocurrency as a core technological pillar. Square helped start and fund COPA with the hope that by encouraging innovation in the cryptocurrency space, more useful ideas and products would get created. COPA management has now diversified to an independent board of technology and regulatory experts, and Square maintains a minority presence.

### Do we need cryptocurrency patents to join COPA?

No! Anyone can join and benefit from being a member of COPA, regardless of whether they have patents or not. There is no barrier to entry – members can be individuals, start-ups, small companies, or large corporations. Here is how COPA works:

* First, COPA members pledge never to use their crypto-technology patents against anyone, except for defensive reasons, effectively making their patents freely available for all.
* Second, members pool all of their crypto-technology patents together to form a shared patent library, which provides a forum to allow members to reasonably negotiate lending patents to one another for defensive purposes.
* The patent pledge and the shared patent library work in tandem to help drive down the incidence and threat of patent litigation, benefiting the cryptocurrency community as a whole.
* Additionally, COPA monitors core technologies and entities that support cryptocurrency and does its best to research and help address litigation threats against community members.

### What types of companies should join COPA?

* Financial services companies and technology companies working in regulated industries that use distributed ledger or cryptocurrency technology
* Companies or individuals who are interested in collaborating on developing cryptocurrency products or who hold substantial investments in cryptocurrency

### What companies have joined COPA so far?

* Square, Inc.
* Blockchain Commons
* Carnes Validadas
* Request Network
* Foundation Devices
* ARK
* SatoshiLabs
* Transparent Systems
* Horizontal Systems
* VerifyChain
* Blockstack
* Protocol Labs
* Cloudeya Ltd.
* Mercury Cash
* Bithyve
* Coinbase
* Blockstream
* Stakenet

### How to join

Please express interest and get access to our membership agreement here: <https://opencrypto.org/joining-copa/>

--------------------------------------------------------------------------------

via: https://www.linux.com/news/getting-to-know-the-cryptocurrency-open-patent-alliance-copa/

Author: [Linux.com Editorial Staff][a]

Selected by: [lujun9972][b]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux 中国](https://linux.cn/)

[a]: https://www.linux.com/author/linuxdotcom/
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/wp-content/uploads/2021/02/copa-linuxdotcom.jpg
[2]: https://openinventionnetwork.com/
[3]: https://lotnet.com/
@ -0,0 +1,115 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Unikraft: Pushing Unikernels into the Mainstream)
|
||||
[#]: via: (https://www.linux.com/featured/unikraft-pushing-unikernels-into-the-mainstream/)
|
||||
[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/linuxdotcom/)
|
||||
|
||||
Unikraft: Pushing Unikernels into the Mainstream
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Unikernels have been around for many years and are famous for providing excellent performance in boot times, throughput, and memory consumption, to name a few metrics [1]. Despite their apparent potential, unikernels have not yet seen a broad level of deployment due to three main drawbacks:
|
||||
|
||||
* **Hard to build**: Putting a unikernel image together typically requires expert, manual work that needs redoing for each application. Also, many unikernel projects are not, and don’t aim to be, POSIX compliant, and so significant porting effort is required to have standard applications and frameworks run on them.
|
||||
* **Hard to extract high performance**: Unikernel projects don’t typically expose high-performance APIs; extracting high performance often requires expert knowledge and modifications to the code.
|
||||
* **Little or no tool ecosystem**: Assuming you have an image to run, deploying it and managing it is often a manual operation. There is little integration with major DevOps or orchestration frameworks.
|
||||
|
||||
|
||||
|
||||
While not all unikernel projects suffer from all of these issues (e.g., some provide some level of POSIX compliance but the performance is lacking, others target a single programming language and so are relatively easy to build but their applicability is limited), we argue that no single project has been able to successfully address all of them, hindering any significant level of deployment. For the past three years, Unikraft ([www.unikraft.org][2]), a Linux Foundation project under the Xen Project’s auspices, has had the explicit aim to change this state of affairs to bring unikernels into the mainstream.
|
||||
|
||||
If you’re interested, read on, and please be sure to check out:
|
||||
|
||||
* The [replay of our two FOSDEM talks][3] [2,3] and the [virtual stand ][4]
|
||||
* Our website (unikraft.org) and source code (<https://github.com/unikraft>).
|
||||
* Our upcoming source code release, 0.5 Tethys (more information at <http://www.unikraft.org/release/>)
|
||||
* [unikraft.io][5], for industrial partners interested in Unikraft PoCs (or [info@unikraft.io][6])
|
||||
|
||||
|
||||
|
||||
### High Performance
|
||||
|
||||
To provide developers with the ability to obtain high performance easily, Unikraft exposes a set of composable, performance-oriented APIs. The figure below shows Unikraft’s architecture: all components are libraries with their own **Makefile** and **Kconfig** configuration files, and so can be added to the unikernel build independently of each other.
|
||||
|
||||
![][7]
|
||||
|
||||
**Figure 1. Unikraft ‘s fully modular architecture showing high-performance APIs**
|
||||
|
||||
APIs are also micro-libraries that can be easily enabled or disabled via a Kconfig menu; Unikraft unikernels can compose which APIs to choose to best cater to an application’s needs. For example, an RCP-style application might turn off the **uksched** API (➃ in the figure) to implement a high performance, run-to-completion event loop; similarly, an application developer can easily select an appropriate memory allocator (➅) to obtain maximum performance, or to use multiple different ones within the same unikernel (e.g., a simple, fast memory allocator for the boot code, and a standard one for the application itself).
|
||||
|
||||
![][8] | ![][9]
|
||||
---|---
|
||||
**Figure 2. Unikraft memory consumption vs. other unikernel projects and Linux** | **Figure 3. Unikraft NGINX throughput versus other unikernels, Docker, and Linux/KVM.**
|
||||
|
||||
|
||||
|
||||
These APIs, coupled with the fact that all Unikraft’s components are fully modular, results in high performance. Figure 2, for instance, shows Unikraft having lower memory consumption than other unikernel projects (HermiTux, Rump, OSv) and Linux (Alpine); and Figure 3 shows that Unikraft outperforms them in terms of NGINX requests per second, reaching 90K on a single CPU core.
|
||||
|
||||
Further, we are working on (1) a performance profiler tool to be able to quickly identify potential bottlenecks in Unikraft images and (2) a performance test tool that can automatically run a large set of performance experiments, varying different configuration options to figure out optimal configurations.
|
||||
|
||||
### Ease of Use, No Porting Required
|
||||
|
||||
Forcing users to port applications to a unikernel to obtain high performance is a showstopper. Arguably, a system is only as good as the applications (or programming languages, frameworks, etc.) can run. Unikraft aims to achieve good POSIX compatibility; one way of doing so is supporting a **libc** (e.g., **musl)**, along with a large set of Linux syscalls.

![][10]

**Figure 4. Only a certain percentage of syscalls are needed to support a wide range of applications**

While there are over 300 Linux syscalls, many of them are not needed to run a large set of applications, as shown in Figure 4 (taken from [5]). Supporting in the range of 145 syscalls, for instance, is enough to run 50% of all libraries and applications in a Ubuntu distribution (many of which are irrelevant to unikernels, such as desktop applications). As of this writing, Unikraft supports over 130 syscalls and a number of mainstream applications (e.g., SQLite, Nginx, Redis); programming languages and runtime environments such as C/C++, Go, Python, Ruby, WebAssembly, and Lua; not to mention several different hypervisors (KVM, Xen, and Solo5) and ARM64 bare-metal support.
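The coverage argument can be sketched as a toy shell check (the syscall lists below are made up for illustration): an application is considered supported only if every syscall it needs is implemented.

```shell
#!/bin/sh
# Toy illustration of syscall coverage (made-up data): an app is supported
# only if every syscall it requires appears in the implemented set.
supported="read write open close mmap"

covered() {
    # succeed only if every syscall listed in $1 is in $supported
    for s in $1; do
        case " $supported " in
            *" $s "*) ;;
            *) return 1 ;;
        esac
    done
    return 0
}

covered "read write close" && echo "app1: fully supported"
covered "read ioctl" || echo "app2: missing syscalls"
```

Extending `supported` by a handful of frequently used syscalls typically unlocks many applications at once, which is the effect Figure 4 quantifies.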

### Ecosystem and DevOps

Another apparent downside of unikernel projects is the almost total lack of integration with existing, major DevOps and orchestration frameworks. Working towards the goal of integration, in the past year we created the **kraft** tool, which lets users simply choose an application and a target platform (e.g., KVM on x86_64) and takes care of building and running the image.

Beyond this, we have several ongoing sub-projects that should land in the coming months:

* **Kubernetes**: If you’re already using Kubernetes in your deployments, this work will allow you to deploy much leaner, faster Unikraft images _transparently_.
* **Cloud Foundry**: Similarly, users relying on Cloud Foundry will be able to generate Unikraft images through it, once again transparently.
* **Prometheus**: Unikernels are also notorious for having very primitive or no means for monitoring running instances. Unikraft is targeting Prometheus support to provide a wide range of monitoring capabilities.

In all, we believe Unikraft is getting closer to bridging the gap between unikernel promise and actual deployment. We are very excited about this year’s upcoming features and developments, so please feel free to drop us a line at [**info@unikraft.io**][6] if you have any comments, questions, or suggestions.

_**About the author: [Dr. Felipe Huici][11] is Chief Researcher, Systems and Machine Learning Group, NEC Laboratories Europe GmbH.**_

### References

[1] Unikernels: Rethinking Cloud Infrastructure. <http://unikernel.org/>

[2] Is the Time Ripe for Unikernels to Become Mainstream with Unikraft? FOSDEM 2021, Microkernel developer room. <https://fosdem.org/2021/schedule/event/microkernel_unikraft/>

[3] Severely Debloating Cloud Images with Unikraft. FOSDEM 2021, Virtualization and IaaS developer room. <https://fosdem.org/2021/schedule/event/vai_cloud_images_unikraft/>

[4] Welcome to the Unikraft Stand! <https://stands.fosdem.org/stands/unikraft/>

[5] A Study of Modern Linux API Usage and Compatibility: What to Support When You’re Supporting. EuroSys 2016. <https://dl.acm.org/doi/10.1145/2901318.2901341>

--------------------------------------------------------------------------------

via: https://www.linux.com/featured/unikraft-pushing-unikernels-into-the-mainstream/

Author: [Linux.com Editorial Staff][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.linux.com/author/linuxdotcom/
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/wp-content/uploads/2021/02/unikraft.svg
[2]: http://www.unikraft.org
[3]: https://video.fosdem.org/2021/stands/unikraft/
[4]: https://stands.fosdem.org/stands/unikraft/
[5]: http://www.unikraft.io
[6]: mailto:info@unikraft.io
[7]: https://www.linux.com/wp-content/uploads/2021/02/unikraft1.png
[8]: https://www.linux.com/wp-content/uploads/2021/02/unikraft2.png
[9]: https://www.linux.com/wp-content/uploads/2021/02/unikraft3.png
[10]: https://www.linux.com/wp-content/uploads/2021/02/unikraft4.png
[11]: https://www.linkedin.com/in/felipe-huici-47a559127/

sources/tech/20210211 What-s new with ownCloud in 2021.md

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's new with ownCloud in 2021?)
[#]: via: (https://opensource.com/article/21/2/owncloud)
[#]: author: (Martin Loschwitz https://opensource.com/users/martinloschwitzorg)

What's new with ownCloud in 2021?
======

The open source file sharing and syncing platform gets a total overhaul based on Go and Vue.js and eliminates the need for a database.

![clouds in the sky with blue pattern][1]

The newest version of ownCloud, [ownCloud Infinite Scale][2] (OCIS), is a complete rewrite of the venerable open source enterprise file sharing and syncing software stack. It features a new backend written in Go, a frontend in Vue.js, and many changes, including eliminating the need for a database. This scalable, modular approach replaces ownCloud's PHP, database, and [POSIX][3] filesystem and promises up to 10 times better performance.

Traditionally, ownCloud was centered around the idea of having a POSIX-compatible filesystem to store data uploaded by users—different versions of the data and trash files, as well as configuration files and logs. By default, an ownCloud user's files were found in a path on their ownCloud instance, like `/var/www` or `/srv/www` (a web server's document root).

Every admin who has maintained an ownCloud instance knows that they grow massive; today, they usually start out much larger than ownCloud was originally designed for. One of the largest ownCloud instances is Australia's Academic and Research Network (AARNet), a company that stores more than 100,000 users' data.

### Let's 'Go' for microservices

ownCloud's developers determined that rewriting the codebase with [Go][4] could bring many advantages over PHP. Even when computer programs appear to be one monolithic piece of code, most are split into different components internally. The web servers that are usually deployed with ownCloud (such as Apache) are an excellent example. Internally, one function handles TCP/IP connections, another function might handle SSL, and yet another piece of code executes the requested PHP files and delivers the results to the end user. All of those events must happen in a certain order.

ownCloud's developers wanted the new version to serve multiple steps concurrently so that events can happen simultaneously. Software capable of handling requests in parallel doesn't have to wait around for one process to finish before the next can begin, so they can deliver results faster. Concurrency is one of the reasons Go is so popular in containerized micro-architecture applications.
With OCIS, ownCloud is adapting to an architecture centered around the principle of microservices. OCIS is split into three tiers: storage, core, and frontend. I'll look at each of these tiers, but the only thing that really matters to people is overall performance. Users don't think about software in tiers; they just want the software to work well and work quickly.

### Tier 1: Storage

The storage available to the system is ownCloud's lowest tier. Performance also brings scalability; large ownCloud instances must be able to cope with the load of thousands of clients and add additional disk space if the existing storage fills up.
Like so many other concepts today, object stores and scalable storage weren't available when ownCloud was designed. Administrators are now used to having more choices, so ownCloud permits outsourcing physical storage device handling to an external solution. While S3-based object storage, Samba-based storage, and POSIX-compatible filesystem options are still supported in OCIS, the preferred way to deploy it is with CERN's [EOS][5] storage.

#### EOS to the rescue

EOS is optimized for very low latency when accessing files. It provides disk-based storage to clients through the [XRootD][6] framework but also permits other protocols to access files. ownCloud uses EOS's HTTP protocol extension to talk to the storage solution (using the HTTPS protocol). EOS also allows almost "infinite" scalability. For instance, [CERN's EOS setup][7] includes more than 200PB of disk storage and continues to grow.

By choosing EOS, ownCloud eliminated several shortcomings of traditional storage solutions:

* EOS doesn't have a typical single point of failure.
|
||||
* All relevant services are run redundantly, including the ability to scale out and add instances of all existing services.
|
||||
* EOS promises to never run out of actual disk space and comes with built-in redundancy for stored data.
|
||||
|
||||
|
||||
|
||||
For large environments, ownCloud expects the administrator to deploy an EOS instance with OCIS. In exchange for the burden of maintaining a separate storage system, the admin gets the benefit of not having to worry about the OCIS instance's scalability and performance.

#### What about small setups?

This hints at ownCloud's assumed use case for OCIS: It's no longer a small business all-in-one server nor a small home server. ownCloud's strategy with OCIS targets large data centers. For small or home office setups, EOS is likely to be excessive and overly demanding for a single admin to manage. OCIS serves small setups through the [Reva][8] framework, which enables support for S3, Samba, and even POSIX-compatible filesystems. This is possible because EOS is not hardcoded into OCIS. Reva can't provide the same feature set as EOS, but it accomplishes most of the needs of end users and small installations.

### Tier 2: Core

OCIS's second tier is (due to Go) more of a collection of microservices than a singular core. Each one is responsible for handling a single task in the background (e.g., scanning for viruses). Basically, all of OCIS's functionality results from a specific microservice's work, like authenticating requests using OpenID Connect against an identity provider. In the end, that makes it a simple task to connect existing user directories—such as Active Directory Federation Services (ADFS), Azure AD, or Lightweight Directory Access Protocol (LDAP)—to ownCloud. For those that do not have an existing identity provider, ownCloud ships its own instance, effectively making ownCloud maintain its own user database.

### Tier 3: Frontend

OCIS's third tier, the frontend, is what the vendor calls ownCloud Web. It's a complete rewrite of the user interface and is based on the Vue.js JavaScript framework. Like the OCIS core, the web frontend is written based on microservices principles and hence allows better performance and scalability. The developers also used the opportunity to give the web interface a makeover; compared to previous ownCloud versions, the OCIS web interface looks smaller and slicker.

OCIS's developers did an impressive job complying with modern software design principles. The fundamental problem in building applications according to the microservices approach is making the environment's individual components communicate with each other. APIs can come to the rescue, but that means every micro component must have its own well-defined API interface.

Luckily, there are existing tools to take that burden off developers' shoulders, most notably [gRPC][9]. The idea behind gRPC is to have a set of predefined APIs that trigger actions in one component from within another.

### Other notable design changes

#### Tackling network traffic with Traefik

This new application design brings some challenges to the underlying network. OCIS's developers chose the [Traefik][10] framework to tackle them. Traefik automatically load-balances different instances of microservices, manages automated SSL encryption, and allows additional deployments of firewall rules.
The split between the backend and the frontend adds advantages to OCIS. In fact, a user's actions triggered through ownCloud Web are completely decoupled from the ownCloud engine performing the task in the backend. If a user manually starts a virus check on files stored in ownCloud, they don't have to wait for the check to finish. Instead, the check happens in the background, and the user sees the results after the check is completed. This is the principle of concurrency at work.

#### Extensions as microservices

Like other web services, ownCloud supports extending its capabilities through extensions. OCIS doesn't change this, but it promises to tackle a well-known problem, especially with community apps. Apps of unknown origin can cause trouble in the server, hamper updates, and negatively impact the server's overall performance.
OCIS's new, gRPC-based architecture makes it much easier to create extensions alongside existing microservices. Because the API is predefined by gRPC, developers merely need to create a microservice featuring the desired functionality that can be controlled by gRPC. Traefik, on a per-case basis, ensures that newly deployed add-ons are automatically added to the existing communication mesh.

#### Goodbye, MySQL!

ownCloud's switch to gRPC and microservices eliminates the need for a relational database. Instead, components that need to store metadata do it on their own. Due to Reva and the lack of a MySQL dependency, the complexity of running ownCloud in small environments is reduced considerably—an especially welcome bonus for maintainers of large-scale data centers, but nice for admins of any size installation.

### Getting OCIS up and running

ownCloud published a technical preview of OCIS 1.0 in December 2020, [shipping it][11] as a Docker container and binaries. More examples of getting it running are linked in the deployment section of its [GitHub repository][12].

#### Install with Docker

Getting OCIS up and running with Docker containers is easy, although things can get complicated if you're new to EOS. Docker images for OCIS are available on [Docker Hub][13]. Look for the `latest` tag for the current master branch.

Any standard virtual machine from one of the big cloud providers or any entry-level server in a data center that uses a standard Linux distribution should be sufficient, provided the system has a container runtime installed.

Assuming you have Docker or Podman installed, the command to start OCIS is simple:

```
$ docker run --rm -ti -p 9200:9200 owncloud/ocis
```

That's it! OCIS is now waiting at your service on localhost port 9200. Open a web browser and navigate to `http://localhost:9200` to check it out.

The demo accounts and passwords are `einstein:relativity`, `marie:radioactivity`, and `richard:superfluidity`. Admin accounts are `moss:vista` and `admin:admin`. If OCIS runs on a server with a resolvable hostname, it can request an SSL certificate from Let's Encrypt using Traefik.

![OCIS contains no files at first login][14]

(Martin Loschwitz, [CC BY-SA 4.0][15])

![OCIS user management interface][16]

(Martin Loschwitz, [CC BY-SA 4.0][15])

#### Install with binary

As an alternative to Docker, there also is a pre-compiled binary available. Thanks to Go, users can [download the latest binaries][17] from the Master branch.

OCIS's binary edition expects `/var/tmp/ocis` as the default storage location, but you can change that in its configuration. You can start the OCIS server with:

```
$ ./ocis server
```

Here are some of the subcommands available through the `ocis` binary:

* `ocis health` runs a health check. A result greater than 0 indicates an error.
* `ocis list` prints all running OCIS extensions.
* `ocis run foo` starts a particular extension (`foo`, in this example).
* `ocis kill foo` stops a particular extension (`foo`, in this example).
* `ocis --help` prints a help message.
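As a sketch of how the `ocis health` exit-code convention (any status greater than 0 is an error) can be scripted, the wrapper below reports a status from any command's exit code. Here `true` and `false` stand in for `./ocis health` so the example runs without OCIS installed:

```shell
#!/bin/sh
# Illustrative wrapper around an exit-code-based health check.
# `true`/`false` stand in for `./ocis health`, which is not assumed
# to be installed; any exit status greater than 0 means "error".
report_health() {
    if "$@" >/dev/null 2>&1; then
        echo "healthy"
    else
        echo "unhealthy"
    fi
}

report_health true     # in real use: report_health ./ocis health
report_health false
```

A monitoring cron job could call such a wrapper periodically and alert on the "unhealthy" result.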
The project's GitHub repository contains full [documentation][11].

### Setting up EOS (it's complicated)

Following ownCloud's recommendations to deploy OCIS with EOS for large environments requires some additional steps. EOS not only adds required hardware and increases the whole environment's complexity, but it's also a slightly bigger task to set it up. CERN provides concise [EOS documentation][18] (linked from its [GitHub repository][19]), and ownCloud offers a [step-by-step guide][20].

In a nutshell, users have to get and start EOS and OCIS containers; configure LDAP support; and kill home, users', and metadata storage before starting them with the EOS configuration. Last but not least, the accounts service needs to be set up to work with EOS. All of these steps are "docker-compose" commands documented in the GitHub repository. The Storage Backends page on EOS also provides information on verification, troubleshooting, and a command reference for the built-in EOS shell.

### Weighing risks and rewards

ownCloud Infinite Scale is easy to install, faster than ever before, and better prepared for scalability. The modular design, with microservices and APIs (even for its extensions), looks promising. ownCloud is embracing new technology and developing for the future. If you run ownCloud, or if you've been thinking of trying it, there's never been a better time. Keep in mind that this is still a technology preview and is on a rolling release published every three weeks, so please report any bugs you find.
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/owncloud

Author: [Martin Loschwitz][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/martinloschwitzorg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e (clouds in the sky with blue pattern)
[2]: https://owncloud.com/infinite-scale/
[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[4]: https://golang.org/
[5]: https://en.wikipedia.org/wiki/Earth_Observing_System
[6]: https://xrootd.slac.stanford.edu/
[7]: https://eos-web.web.cern.ch/eos-web/
[8]: https://reva.link/
[9]: https://en.wikipedia.org/wiki/GRPC
[10]: https://opensource.com/article/20/3/kubernetes-traefik
[11]: https://owncloud.github.io/ocis/getting-started/
[12]: https://github.com/owncloud/ocis
[13]: https://hub.docker.com/r/owncloud/ocis
[14]: https://opensource.com/sites/default/files/uploads/ocis5.png (OCIS contains no files at first login)
[15]: https://creativecommons.org/licenses/by-sa/4.0/
[16]: https://opensource.com/sites/default/files/uploads/ocis2.png (OCIS user management interface)
[17]: https://download.owncloud.com/ocis/ocis/
[18]: https://eos-docs.web.cern.ch/
[19]: https://github.com/cern-eos/eos
[20]: https://owncloud.github.io/ocis/storage-backends/eos/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 reasons to choose Linux for art and design)
[#]: via: (https://opensource.com/article/21/2/linux-art-design)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

4 reasons to choose Linux for art and design
======

Open source enhances creativity by breaking you out of a proprietary mindset and opening your mind to possibilities. Explore several open source creative programs.

![Painting art on a computer screen][1]

In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today I'll explain why Linux is an excellent choice for creative work.

Linux gets a lot of press for its amazing server and cloud computing software. It comes as a surprise to some that Linux happens to have a great set of creative tools, too, and that they easily rival popular creative apps in user experience and quality. When I first started using open source creative software, it wasn't because I didn't have access to the other software. Quite the contrary, I started using open source tools when I had the greatest access to the proprietary tools offered by several leading companies. I chose to switch to open source because open source made more sense and produced better results. Those are some big claims, so allow me to explain.

### High availability means high productivity

The term _productivity_ means different things to different people. When I think of productivity, it's that when you sit down to do something, it's rewarding when you're able to meet whatever goal you've set for yourself. If you get interrupted or stopped by something outside your control, then your productivity goes down.
Computers can seem unpredictable, and there are admittedly a lot of things that can go wrong. There are lots of hardware parts to a computer, and any one of them can break at any time. Software has bugs and updates to fix bugs, and then new bugs introduced by those updates. If you're not comfortable with computers, it can feel a little like a timebomb just waiting to ensnare you. With so much potentially working _against_ you in the digital world, it doesn't make sense to me to embrace software that guarantees not to work when certain requirements (like a valid license, or more often, an up-to-date subscription) aren't met.

![Inkscape application][2]

Inkscape

Open source creative apps have no required subscription fee and no licensing requirements. They're available when you need them and usually on any platform. That means when you sit down at a working computer, you know you have access to your must-have software. And if you're having a rough day and you find yourself sitting in front of a computer that isn't working, the fix is to find one that does work, install your creative suite, and get to work.
It's far harder to find a computer that _can't_ run Inkscape, for instance, than it is to find a computer that _is_ running a similar proprietary application. That's called high availability, and it's a game-changer. I've never found myself wasting hours of my day for lack of the software I want to run to get things done.

### Open access is better for diversity

When I was working in the creative industry, it sometimes surprised me how many of my colleagues were self-taught both in their artistic and technical disciplines. Some taught themselves on expensive rigs with all the latest "professional" applications, but there was always a large group of people who perfected their digital trade on free and open source software because, as kids or as poor college students, that was what they could afford and obtain easily.
That's a different kind of high availability, but it's one that's important to me and many other users who wouldn't be in the creative industry but for open source. Even open source projects that do offer a paid subscription, like [Ardour][3], ensure that users have access to the software regardless of an ability to pay.

![Ardour interface][4]

Ardour

When you don't restrict who gets to use your software, you're implicitly inviting more users. And when you do that, you enable a greater diversity of creative voices. Art loves influence, and the greater the variety of experiences and ideas you have to draw from, the better. That's what's possible with open source creative software.

### Resolute format support is more inclusive

We all acknowledge the value of inclusivity in basically every industry. Inviting _more people_ to the party results in a greater spectacle, in nearly every sense. Knowing this, it's painful when I see a project or initiative that invites people to collaborate, only to limit what kind of file formats are acceptable. It feels archaic, like a vestige of elitism out of the far past, and yet it's a real problem even today.
In a surprise and unfortunate twist, it's not because of technical limitations. Proprietary software has access to open file formats because they're open source and free to integrate into any application. Integrating these formats requires no reciprocation. By stark contrast, proprietary file formats are often shrouded in secrecy, locked away for use by the select few who pay to play. It's so bad, in fact, that quite often, you can't open some files to get to your data without the proprietary software available. Amazingly, open source creative applications nevertheless include support for as many proprietary formats as they possibly can. Here's just a sample of Inkscape's staggering support list:

![Available Inkscape file formats][5]

Inkscape file formats

And that's largely without contribution from the companies owning the file formats.
Supporting open file formats is more inclusive, and it's better for everyone.

### No restrictions for fresh ideas

One of the things I've come to love about open source is the sheer diversity of how any given task is interpreted. When you're around proprietary software, you tend to start to see the world based on what's available to you. For instance, if you're thinking of manipulating some photos, then you generally frame your intent based on what you know to be possible. You choose from the three or four or ten applications on the shelf because they're the only options presented.
You generally have several obligatory "obvious" solutions in open source, but you also get an additional _dozen_ contenders hanging out on the fringe. These options are sometimes only half-complete, or they're hyper-focused on a specific task, or they're challenging to learn, but most importantly, they're unique and innovative. Sometimes they've been developed by someone who's never seen the way a task is "supposed to be done," and so the approach is wildly different than anything else on the market. Other times, they're developed by someone familiar with the "right way" of doing something but is trying a different tactic anyway. It's a big, dynamic brainstorm of possibility.
These kinds of everyday innovations can lead to flashes of inspiration, moments of brilliance, or widespread common improvements. For instance, the famous GIMP filter that removes items from photographs and automatically replaces the background was so popular that it later got "borrowed" by proprietary photo editing software. That's one metric of success, but it's the personal impact that matters most for an artist. I marvel at the creativity of new Linux users when I've shown them just one simple audio or video filter or paint application at a tech demo. Without any instruction or context, the ideas that spring out of a simple interaction with a new tool can be exciting and inspiring, and a whole new series of artwork can easily emerge from experimentation with just a few simple tools.
There are also ways of working more efficiently, provided the right set of tools are available. While proprietary software usually isn't opposed to the idea of smarter work habits, there's rarely a direct benefit from concentrating on making it easy for users to automate tasks. Linux and open source are largely built exactly for [automation and orchestration][6], and not just for servers. Tools like [ImageMagick][7] and [GIMP scripts][8] have changed the way I work with images, both for bulk processing and idle experimentation.
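The bulk-processing pattern looks roughly like the loop below. In real use, the command inside the loop would be an ImageMagick invocation such as `convert "$f" -resize 50% …`; here `cp` stands in so the sketch runs without ImageMagick installed:

```shell
#!/bin/sh
# Sketch of a bulk image-processing loop. `cp` is a stand-in for a real
# per-file command such as ImageMagick's `convert` (not assumed installed).
indir=$(mktemp -d)
outdir=$(mktemp -d)
touch "$indir/a.png" "$indir/b.png"   # stand-in input files

for f in "$indir"/*.png; do
    # real usage might be: convert "$f" -resize 50% "$outdir/$(basename "$f")"
    cp "$f" "$outdir/$(basename "$f")"
done

ls "$outdir"
```

The same loop shape works for any per-file tool, which is what makes scripted automation of creative pipelines so reusable.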
You never know what you might create, given tools that you've never imagined existed.

### Linux artists

There's a whole [community of artists using open source][9], from [photography][10] to [makers][11] to [musicians][12], and much much more. If you want to get creative, give Linux a go.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/linux-art-design

Author: [Seth Kenlon][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
[2]: https://opensource.com/sites/default/files/inkscape_0.jpg
[3]: https://community.ardour.org/subscribe
[4]: https://opensource.com/sites/default/files/ardour.jpg
[5]: https://opensource.com/sites/default/files/formats.jpg
[6]: https://opensource.com/article/20/11/orchestration-vs-automation
[7]: https://opensource.com/life/16/6/fun-and-semi-useless-toys-linux#imagemagick
[8]: https://opensource.com/article/21/1/gimp-scripting
[9]: https://librearts.org
[10]: https://pixls.us
[11]: https://www.redhat.com/en/blog/channel/red-hat-open-studio
[12]: https://linuxmusicians.com
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Network address translation part 2 – the conntrack tool)
[#]: via: (https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/)
[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/)

Network address translation part 2 – the conntrack tool
======

![][1]

This is the second article in a series about network address translation (NAT). The first article introduced [how to use the iptables/nftables packet tracing feature][2] to find the source of NAT-related connectivity problems. Part 2 introduces the “conntrack” command. conntrack allows you to inspect and modify tracked connections.

### Introduction

NAT configured via iptables or nftables builds on top of netfilters connection tracking facility. The _conntrack_ command is used to inspect and alter the state table. It is part of the “conntrack-tools” package.
|
||||
|
||||
### Conntrack state table
|
||||
|
||||
The connection tracking subsystem keeps track of all packet flows that it has seen. Run “_sudo conntrack -L_” to see its content:
|
||||
|
||||
```
|
||||
tcp 6 43184 ESTABLISHED src=192.168.2.5 dst=10.25.39.80 sport=5646 dport=443 src=10.25.39.80 dst=192.168.2.5 sport=443 dport=5646 [ASSURED] mark=0 use=1
|
||||
tcp 6 26 SYN_SENT src=192.168.2.5 dst=192.168.2.10 sport=35684 dport=443 [UNREPLIED] src=192.168.2.10 dst=192.168.2.5 sport=443 dport=35684 mark=0 use=1
|
||||
udp 17 29 src=192.168.8.1 dst=239.255.255.250 sport=48169 dport=1900 [UNREPLIED] src=239.255.255.250 dst=192.168.8.1 sport=1900 dport=48169 mark=0 use=1
|
||||
```
|
||||
|
||||
Each line shows one connection tracking entry. You might notice that each line shows the addresses and port numbers twice and even with inverted address and port pairs! This is because each entry is inserted into the state table twice. The first address quadruple (source and destination address and ports) are those recorded in the original direction, i.e. what the initiator sent. The second quadruple is what conntrack expects to see when a reply from the peer is received. This solves two problems:
|
||||
|
||||
1. If a NAT rule matches, such as IP address masquerading, this is recorded in the reply part of the connection tracking entry and can then be automatically applied to all future packets that are part of the same flow.
|
||||
2. A lookup in the state table will be successful even if its a reply packet to a flow that has any form of network or port address translation applied.
|
||||
|
||||
|
||||
|
||||
The original (first shown) quadruple stored never changes: Its what the initiator sent. NAT manipulation only alters the reply (second) quadruple because that is what the receiver will see. Changes to the first quadruple would be pointless: netfilter has no control over the initiators state, it can only influence the packet as it is received/forwarded. When a packet does not map to an existing entry, conntrack may add a new state entry for it. In the case of UDP this happens automatically. In the case of TCP conntrack can be configured to only add the new entry if the TCP packet has the [SYN bit][3] set. By default conntrack allows mid-stream pickups to not cause problems for flows that existed prior to conntrack becoming active.
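To make the dual-tuple bookkeeping concrete, here is a small illustrative Python model (an assumption for illustration only, not conntrack's actual implementation): an entry stores the original tuple plus the reply tuple conntrack expects, and any source NAT is reflected only on the reply side.

```python
from typing import NamedTuple, Optional

class Tuple4(NamedTuple):
    src: str
    dst: str
    sport: int
    dport: int

def make_entry(orig: Tuple4, snat_to: Optional[str] = None):
    """Return (original, expected-reply) tuples; snat_to models masquerading."""
    # The reply direction inverts addresses and ports; source NAT only shows
    # up in the reply tuple, since the peer answers to the rewritten address.
    reply = Tuple4(src=orig.dst, dst=snat_to or orig.src,
                   sport=orig.dport, dport=orig.sport)
    return orig, reply

orig, reply = make_entry(Tuple4("10.0.0.10", "10.8.2.12", 5536, 80),
                         snat_to="192.168.1.2")
print(reply)  # replies expected from 10.8.2.12:80 to 192.168.1.2:5536
```

Without `snat_to`, the reply tuple is just the inverted original direction, matching the first `conntrack -L` example above.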
### Conntrack state table and NAT

As explained in the previous section, the reply tuple listed contains the NAT information. It is possible to filter the output to only show entries with source or destination NAT applied. This makes it possible to see which kind of NAT transformation is active on a given flow. “_sudo conntrack -L -p tcp --src-nat_” might show something like this:

```
tcp 6 114 TIME_WAIT src=10.0.0.10 dst=10.8.2.12 sport=5536 dport=80 src=10.8.2.12 dst=192.168.1.2 sport=80 dport=5536 [ASSURED]
```

This entry shows a connection from 10.0.0.10:5536 to 10.8.2.12:80. But unlike the previous example, the reply direction is not just the inverted original direction: the source address is changed. The destination host (10.8.2.12) sends reply packets to 192.168.1.2 instead of 10.0.0.10. Whenever 10.0.0.10 sends another packet, the router with this entry replaces the source address with 192.168.1.2. When 10.8.2.12 sends a reply, it changes the destination back to 10.0.0.10. This source NAT is due to a [nft masquerade][4] rule:

```
inet nat postrouting meta oifname "veth0" masquerade
```

Other types of NAT rules, such as “dnat to” or “redirect to”, would be shown in a similar fashion, with the reply tuple's destination differing from the original one.

### Conntrack extensions

Two useful extensions are conntrack accounting and timestamping. “_sudo sysctl net.netfilter.nf_conntrack_acct=1_” makes “_sudo conntrack -L_” track byte and packet counters for each flow.

“_sudo sysctl net.netfilter.nf_conntrack_timestamp=1_” records a “start timestamp” for each connection. “_sudo conntrack -L_” then displays the seconds elapsed since the flow was first seen. Add “_--output ktimestamp_” to see the absolute start date as well.

### Insert and change entries

You can add entries to the state table. For example:

```
sudo conntrack -I -s 192.168.7.10 -d 10.1.1.1 --protonum 17 --timeout 120 --sport 12345 --dport 80
```

This is used by conntrackd for state replication. Entries of an active firewall are replicated to a standby system. The standby system can then take over without breaking connectivity, even on established flows. Conntrack can also store metadata not related to the packet data sent on the wire, for example the conntrack mark and connection tracking labels. Change them with the “update” (-U) option:

```
sudo conntrack -U -m 42 -p tcp
```

This changes the connmark of all tcp flows to 42.

### Delete entries

In some cases, you want to delete entries from the state table. For example, changes to NAT rules have no effect on packets belonging to flows that are already in the table. For long-lived UDP sessions, such as tunneling protocols like VXLAN, it might make sense to delete the entry so the new NAT transformation can take effect. Delete entries via “_sudo conntrack -D_” followed by an optional list of address and port information. The following example removes the given entry from the table:

```
sudo conntrack -D -p udp --src 10.0.12.4 --dst 10.0.0.1 --sport 1234 --dport 53
```

### Conntrack error counters

Conntrack also exports statistics:

```
# sudo conntrack -S
cpu=0 found=0 invalid=130 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=10
cpu=1 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0
cpu=2 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=1
cpu=3 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0
```

Most counters will be 0. “Found” and “insert” will always be 0; they only exist for backwards compatibility. Other errors accounted for are:

  * invalid: packet does not match an existing connection and doesn't create a new connection.
  * insert_failed: packet starts a new connection, but insertion into the state table failed. This can happen for example when the NAT engine happens to pick an identical source address and port when masquerading.
  * drop: packet starts a new connection, but no memory is available to allocate a new state entry for it.
  * early_drop: conntrack table is full. In order to accept the new connection, existing connections that did not see two-way communication were dropped.
  * error: icmp(v6) received an icmp error packet that did not match a known connection.
  * search_restart: lookup interrupted by an insertion or deletion on another CPU.
  * clash_resolve: several CPUs tried to insert an identical conntrack entry.

These error conditions are harmless unless they occur frequently. Some can be mitigated by tuning the conntrack sysctls for the expected workload. _net.netfilter.nf_conntrack_buckets_ and _net.netfilter.nf_conntrack_max_ are typical candidates. See the [nf_conntrack-sysctl documentation][5] for a full list.
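When watching these counters over time, for example from a monitoring script, the per-CPU lines are easy to parse programmatically. A minimal sketch, assuming only the `key=value` format shown above:

```python
def parse_conntrack_stats(line: str) -> dict:
    """Parse one 'cpu=0 found=0 ...' line from `conntrack -S` into a dict."""
    return {key: int(value)
            for key, value in (field.split("=") for field in line.split())}

stats = parse_conntrack_stats(
    "cpu=0 found=0 invalid=130 insert=0 insert_failed=0 "
    "drop=0 early_drop=0 error=0 search_restart=10")
print(stats["invalid"], stats["search_restart"])  # 130 10
```

Summing a given key across all CPU lines then gives the system-wide counter.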
Use “_sudo sysctl net.netfilter.nf_conntrack_log_invalid=255_” to get more information when a packet is deemed invalid. For example, conntrack logs the following when it encounters a packet with all tcp flags cleared:

```
nf_ct_proto_6: invalid tcp flag combination SRC=10.0.2.1 DST=10.0.96.7 LEN=1040 TOS=0x00 PREC=0x00 TTL=255 ID=0 PROTO=TCP SPT=5723 DPT=443 SEQ=1 ACK=0
```

### Summary

This article gave an introduction on how to inspect the connection tracking table and the NAT information stored in tracked flows. The next part in the series will expand on the conntrack tool and the connection tracking event framework.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/

作者:[Florian Westphal][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/strlen/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/network-address-translation-part-2-816x345.jpg
[2]: https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/
[3]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure
[4]: https://wiki.nftables.org/wiki-nftables/index.php/Performing_Network_Address_Translation_(NAT)#Masquerading
[5]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/networking/nf_conntrack-sysctl.rst
@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: (scvoet)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why the success of open source depends on empathy)
[#]: via: (https://opensource.com/article/21/2/open-source-empathy)
[#]: author: (Bronagh Sorota https://opensource.com/users/bsorota)

为何开源的成功取决于同理心?
======

随着同理心意识的提高和传播,开源的生产力将会增长,协作者们将会靠拢,开源软件开发的力量也将得到充分发挥。

![践行共情][1]

开源开发的协调创新精神和社区精神改变了世界。吉姆·怀特赫斯特(Jim Whitehurst)在[《开放式组织》][2]中解释说,开源的成功源于“将人们视为社区的一份子,从交易思维转变为基于承诺基础的思维方式”。但是,开源开发模型的核心仍然存在障碍:它经常性地缺乏人与人之间的同理心。

同理心是理解或感受他人感受的能力。在开源社区中,面对面的人际互动和协作十分罕见。任何在 GitHub 提交过拉取请求(Pull Request)或问题(Issue)的开发者都曾收到过评论,这些评论可能来自他们从未见过的人,这些人往往身处地球的另一端,而他们的交流也可能同样疏远。现代开源开发就是建立在这种异步、事务性的沟通基础之上。因此,人们在社交媒体平台上所经历的同类型的网络欺凌和其他虐待行为在开源社区中出现也就不足为奇了。

当然,并非所有开源交流都会事与愿违。许多人在工作中发展出了相互尊重并秉持着良好的行为标准。但是很多时候,人们的沟通也常常缺失了常识性的礼节,他们将对方像机器而非人类一般对待。这种行为是激发开源创新模型全部潜力的障碍,因为它让许多潜在的贡献者望而却步,并扼杀了灵感。

### 恶意交流的历史

代码审查中存在的敌意言论对开源社区来说并不新鲜,它多年来一直被社区所容忍。开源教父莱纳斯·托瓦尔兹(Linus Torvalds)经常在代码不符合他的标准时[抨击][3] Linux 社区,并将贡献者赶走。埃隆大学计算机科学教授梅根·斯奎尔(Megan Squire)借助[机器学习][4]分析了托瓦尔兹的侮辱性言论,发现它们在四年内的数量高达数千次。2018 年,莱纳斯因自己的不良行为而自我停职,责成自己学习同理心,道歉并为 Linux 社区制定了行为准则。

2015 年,[赛格·夏普][5]虽然在技术上受人尊重,但因社区缺乏对个人的尊重,辞去了 FOSS 女性外展计划中的 Linux 内核协调员一职。

PR 审核中存在的贬低性评论会对开发者造成深远的影响。它导致开发者在提交 PR 时产生畏惧感,让他们对预期中的反馈感到恐惧。这吞噬了开发者对自己能力的信心。它逼迫工程师每次都只能追求完美,从而减缓了开发速度,这与许多社区采用的敏捷方法论背道而驰。

### 如何拉近开源中的共情缺口?

通常情况下,冒犯性的评论是无意间发出的,而通过一些指导,作者则可以学会如何在不带负面情绪的情况下表达意见。GitHub 不会监控问题和 PR 的评论是否有滥用内容,相反,它提供了一些工具,使得社区能够对其内容进行审查。所有者可以删除评论和锁定对话,所有贡献者可以报告滥用和屏蔽用户。

制定社区行为准则可为所有级别的贡献者提供一个安全且包容的环境,并且能让所有级别的贡献者参与并定义降低协作者之间冲突的过程。

我们能够克服开源中存在的共情问题。面对面的辩论比文字更有利于产生共鸣,所以尽可能选择视频通话。通过充满同理心地分享反馈来树立榜样。如果你目睹了一个尖锐的评论,请做一个指导员而非旁观者。如果你是受害者,请大声说出来。在面试候选人时,评估共情能力,并将共情能力与绩效评估和奖励挂钩。界定并执行社区行为准则,并管理好你的社区。

随着同理心意识的加深和传播,开源生产力将得到提升,协作者将更加紧密,开源软件开发的活力也将被充分激发。

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/open-source-empathy

作者:[Bronagh Sorota][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bsorota
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/practicing-empathy.jpg?itok=-A7fj6NF (Practicing empathy)
[2]: https://www.redhat.com/en/explore/the-open-organization-book
[3]: https://arstechnica.com/information-technology/2013/07/linus-torvalds-defends-his-right-to-shame-linux-kernel-developers/
[4]: http://flossdata.syr.edu/data/insults/hicssInsultsv2.pdf
[5]: https://en.wikipedia.org/wiki/Sage_Sharp
@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Namespaces are the shamash candle of the Zen of Python)
[#]: via: (https://opensource.com/article/19/12/zen-python-namespaces)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

命名空间是 Python 之禅的精髓
======

> 这是 Python 之禅特别系列的一部分,重点是一个额外的原则:命名空间。

![在建筑物上的笔记本电脑上编程的人][1]

著名的<ruby>光明节<rt>Hanukkah</rt></ruby>有八个晚上的庆祝活动。然而,光明节的灯台有九根蜡烛:八根普通蜡烛和总是偏移的第九根蜡烛。它被称为 “shamash” 或 “shamos”,大致可以翻译为“仆人”或“看门人”的意思。

shamos 是点燃所有其它蜡烛的蜡烛:它是唯一一支火焰可以被使用、而不仅仅是被观看的蜡烛。当我们结束关于 Python 之禅的系列时,我看到命名空间提供了类似的作用。

### Python 中的命名空间

Python 使用命名空间来处理一切。虽然简单,但它们是稀疏的数据结构 —— 这通常是实现目标的最佳方式。

> *命名空间* 是一个从名字到对象的映射。
>
> —— [Python.org][2]

模块是命名空间。这意味着正确地预测模块语义通常只需要熟悉 Python 命名空间的工作方式。类是命名空间,对象也是命名空间。函数可以访问它们的本地命名空间、父命名空间和全局命名空间。

这个简单的模型,即用 `.` 操作符访问一个对象,而这个对象又通常(但并不总是)会进行某种字典查找,这使得 Python 很难优化,但很容易解释。
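下面是一个示意性的小例子(假设性示例,非原文内容),演示类和实例这两层命名空间背后的字典查找:

```python
class Config:
    retries = 3          # 存放在类命名空间 Config.__dict__ 中

c = Config()
c.retries = 5            # 存放在实例命名空间 c.__dict__ 中

# 属性访问通常(但不总是)就是字典查找:
print(Config.__dict__["retries"])   # 3
print(c.__dict__["retries"])        # 5

del c.retries            # 删除实例属性后,查找回退到类命名空间
print(c.retries)         # 3
```

实例命名空间优先于类命名空间,这正是“用 `.` 访问对象”这一简单模型的体现。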
事实上,一些第三方模块也采取了这个准则,并以此来运行。例如,[variants][3] 包把函数变成了“相关功能”的命名空间。这是一个很好的例子,说明 [Python 之禅][4] 是如何激发新的抽象的。

### 结语

感谢你和我一起参加这次以光明节为灵感的、对 [我最喜欢的语言][5] 的探索。

静心参禅,直至悟道。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/12/zen-python-namespaces

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Person programming on a laptop on a building)
[2]: https://docs.python.org/3/tutorial/classes.html
[3]: https://pypi.org/project/variants/
[4]: https://www.python.org/dev/peps/pep-0020/
[5]: https://opensource.com/article/19/10/why-love-python
@ -1,96 +1,90 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ansible Playbooks Quick Start Guide with Examples)
[#]: via: (https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
[#]: collector: "lujun9972"
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "Ansible Playbooks Quick Start Guide with Examples"
[#]: via: "https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/"
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"

Ansible Playbooks Quick Start Guide with Examples
Ansible 剧本快速入门指南
======

We have already written two articles about Ansible, this is the third article.
我们已经写了两篇关于 Ansible 的文章,这是第三篇。

If you are new to Ansible, I advise you to read the two topics below, which will teach you the basics of Ansible and what it is.
如果你是 Ansible 新手,我建议你阅读下面这两篇文章,它们会教你 Ansible 的基础知识以及它是什么。

  * **Part-1: [How to Install and Configure Ansible on Linux][1]**
  * **Part-2: [Ansible ad-hoc Command Quick Start Guide][2]**
  * **第一篇:[如何在 Linux 上安装和配置 Ansible][1]**
  * **第二篇:[Ansible 临时(ad-hoc)命令快速入门指南][2]**

如果你已经读过它们了,那么在阅读本文时就会感到内容是连贯的。

### 什么是 Ansible 剧本?

If you have finished them, you will feel the continuity as you read this article.
剧本比临时命令模式更强大,而且运作方式完全不同。

### What is the Ansible Playbook?
它使用 **"/usr/bin/ansible-playbook"** 二进制文件,并提供丰富的特性,使复杂的任务变得更容易。

Playbooks are much more powerful and completely different way than ad-hoc command mode.
如果你想经常运行某个任务,剧本是非常有用的。

It uses the **“/usr/bin/ansible-playbook”** binary. It provides rich features to make complex task easier.
此外,如果你想在一组服务器上执行多个任务,它也是非常有用的。

Playbooks are very useful if you want to run a task often.
剧本使用 YAML 语言编写。YAML 是 “YAML Ain't Markup Language”(YAML 不是标记语言)的递归缩写,它比 XML 或 JSON 等其它常见数据格式更易于读写。

Also, this is useful if you want to perform multiple tasks at the same time on the group of server.

Playbooks are written in YAML language. YAML stands for Ain’t Markup Language, which is easier for humans to read and write than other common data formats such as XML or JSON.

The Ansible Playbook Flow Chart below will tell you its detailed structure.
下面这张 Ansible 剧本流程图将告诉你它的详细结构。

![][3]

### Understanding the Ansible Playbooks Terminology
### 理解 Ansible 剧本的术语

  * **Control Node:** The machine where Ansible is installed. It is responsible for managing client nodes.
  * **Managed Nodes:** List of hosts managed by the control node.
  * **Playbook:** A Playbook file contains a set of procedures used to automate a task.
  * **Inventory:** The inventory file contains information about the servers you manage.
  * **Task:** Each play has multiple tasks, which are executed one by one against a given machine (a host, multiple hosts, or a group of hosts).
  * **Module:** Modules are a unit of code that is used to gather information from the client node.
  * **Role:** Roles are ways to automatically load some vars_files, tasks, and handlers based on a known file structure.
  * **Play:** Each playbook has multiple plays, and a play is the implementation of a particular automation from beginning to end.
  * **Handlers:** This helps you reduce any service restart in a play. Lists of handler tasks are not really different from regular tasks, and changes are notified by notifiers. If the handler does not receive any notification, it will not work.
  * **控制节点:** 安装了 Ansible 的机器,它负责管理客户端节点。
  * **被控节点:** 由控制节点管理的主机列表。
  * **剧本:** 剧本文件包含一组用于自动化任务的过程。
  * **主机清单:** 这个文件包含所管理服务器的有关信息。
  * **任务:** 每个 play 都有多个任务,任务在指定机器上依次执行(可以是一台或多台主机,或一组主机)。
  * **模块:** 模块是一个代码单元,用于从客户端节点收集信息。
  * **角色:** 角色是根据已知文件结构自动加载变量文件、任务和处理程序的方法。
  * **Play:** 每个剧本含有多个 play,一个 play 从头到尾实现一项特定的自动化。
  * **Handlers:** 它可以帮助你减少剧本中的服务重启动作。处理程序任务列表实际上与常规任务没有什么不同,变更通过通知程序(notify)触发。如果处理程序没有收到任何通知,它就不会执行。

### 基本的剧本是怎样的?

下面是一个剧本的模板:

### How Does the Basic Playbook looks Like?

Here’s how the basic playbook looks.

```
--- [YAML file should begin with a three dash]
- name: [Description about a script]
  hosts: group [Add a host or host group]
  become: true [It requires if you want to run a task as a root user]
  tasks: [What action do you want to perform under task]
    - name: [Enter the module options]
      module: [Enter a module, which you want to perform]
        module_options-1: value [Enter the module options]
```yaml
--- [YAML 文件应该以三个破折号开头]
- name: [脚本描述]
  hosts: group [添加主机或主机组]
  become: true [如果你想以 root 身份运行任务,则标记它]
  tasks: [你想在任务下执行什么动作]
    - name: [输入模块选项]
      module: [输入要执行的模块]
        module_options-1: value [输入模块选项]
        module_options-2: value
        .
        module_options-N: value
```
### How to Understand Ansible Output
### 如何理解 Ansible 的输出

The Ansible Playbook output comes with 4 colors, see below for color definitions.
Ansible 剧本的输出有四种颜色,下面是具体含义:

  * **Green:** **ok –** If that is correct, the associated task data already exists and configured as needed.
  * **Yellow: changed –** Specific data has updated or modified according to the needs of the tasks.
  * **Red: FAILED –** If there is any problem while doing a task, it returns a failure message, it may be anything and you need to fix it accordingly.
  * **White:** It comes with multiple parameters
  * **绿色:ok –** 代表成功,关联的任务数据已经存在,并且已经按需要进行了配置。
  * **黄色:changed –** 指定的数据已经根据任务的需要更新或修改。
  * **红色:FAILED –** 如果在执行任务时出现任何问题,它将返回一个失败消息,失败原因可能多种多样,你需要相应地修复它。
  * **白色:** 表示该任务带有多个参数。

为此,创建一个剧本目录,将它们都放在同一个地方。

To do so, create a playbook directory to keep them all in one place.

```
```bash
$ sudo mkdir /etc/ansible/playbooks
```

### Playbook-1: Ansible Playbook to Install Apache Web Server on RHEL Based Systems
### 剧本-1:在基于 RHEL 的系统上安装 Apache Web 服务器

This sample playbook allows you to install the Apache web server on a given target node.
这个示例剧本允许你在指定的目标节点上安装 Apache Web 服务器:

```
```bash
$ sudo nano /etc/ansible/playbooks/apache.yml

---
@ -108,17 +102,17 @@ $ sudo nano /etc/ansible/playbooks/apache.yml
state: started
```

```
```bash
$ ansible-playbook apache1.yml
```

![][3]

### How to Understand Playbook Execution in Ansible
### 如何理解 Ansible 中剧本的执行

To check the syntax error, run the following command. If it finds no error, it only shows the given file name. If it detects any error, you will get an error as follows, but the contents may differ based on your input file.
使用以下命令检查语法错误。如果没有发现错误,它只显示给定的文件名;如果检测到错误,你将得到类似下面的报错,具体内容可能因你的输入文件而异。

```
```bash
$ ansible-playbook apache1.yml --syntax-check

ERROR! Syntax Error while loading YAML.
@ -143,11 +137,11 @@ Should be written as:
# ^--- all spaces here.
```

Alternatively, you can check your ansible-playbook content from online using the following url @ [YAML Lint][4]
或者,你可以使用 [YAML Lint][4] 这个网站在线检查你的 Ansible 剧本内容。

Run the following command to perform a **“Dry Run”**. When you run a ansible-playbook with the **“--check”** option, it does not make any changes to the remote machine. Instead, it will tell you what changes they have made rather than create them.
执行以下命令进行**“演练”**。当你运行带有 **“--check”** 选项的剧本时,它不会对远程机器进行任何修改,而是会告诉你它将做出什么改变,但不会真的执行。

```
```bash
$ ansible-playbook apache.yml --check

PLAY [Install and Configure Apache Webserver] ********************************************************************
@ -169,9 +163,9 @@ node1.2g.lab : ok=3 changed=2 unreachable=0 failed=0 s
node2.2g.lab : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

If you want detailed information about your ansible playbook implementation, use the **“-vv”** verbose option. It shows what it really does to gather this information.
如果你想了解 ansible 剧本执行的详细信息,使用 **“-vv”** 详细选项,它会展示它是如何收集这些信息的。

```
```bash
$ ansible-playbook apache.yml --check -vv

ansible-playbook 2.9.2
@ -212,11 +206,11 @@ node1.2g.lab : ok=3 changed=2 unreachable=0 failed=0 s
node2.2g.lab : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

### Playbook-2: Ansible Playbook to Install Apache Web Server on Ubuntu Based Systems
### 剧本-2:在 Ubuntu 系统上安装 Apache Web 服务器

This sample playbook allows you to install the Apache web server on a given target node.
这个示例剧本允许你在指定的目标节点上安装 Apache Web 服务器。

```
```bash
$ sudo nano /etc/ansible/playbooks/apache-ubuntu.yml

---
@ -250,13 +244,13 @@ $ sudo nano /etc/ansible/playbooks/apache-ubuntu.yml
enabled: yes
```

### Playbook-3: Ansible Playbook to Install a List of Packages on Red Hat Based Systems
### 剧本-3:在 Red Hat 系统上安装软件包列表

This sample playbook allows you to install a list of packages on a given target node.
这个示例剧本允许你在指定的目标节点上安装一系列软件包。

**Method-1:**
**方法-1:**

```
```bash
$ sudo nano /etc/ansible/playbooks/packages-redhat.yml

---
@ -273,9 +267,9 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat.yml
- htop
```

**Method-2:**
**方法-2:**

```
```bash
$ sudo nano /etc/ansible/playbooks/packages-redhat-1.yml

---
@ -292,9 +286,9 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat-1.yml
- htop
```

**Method-3: Using Array Variable**
**方法-3:使用数组变量**

```
```bash
$ sudo nano /etc/ansible/playbooks/packages-redhat-2.yml

---
@ -309,11 +303,11 @@ $ sudo nano /etc/ansible/playbooks/packages-redhat-2.yml
with_items: "{{ packages }}"
```

### Playbook-4: Ansible Playbook to Install Updates on Linux Systems
### 剧本-4:在 Linux 系统上安装更新

This sample playbook allows you to install updates on your Linux systems, running Red Hat and Debian-based client nodes.
这个示例剧本允许你在基于 Red Hat 或 Debian 的 Linux 客户端节点上安装更新。

```
```bash
$ sudo nano /etc/ansible/playbooks/security-update.yml

---
@ -336,7 +330,7 @@ via: https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,69 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Chao-zhi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Bash tutorials to enhance your command line skills in 2021)
[#]: via: (https://opensource.com/article/21/1/bash)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)

7 个 Bash 教程,助你在 2021 年提高命令行技能
======

Bash 是大多数 Linux 系统上的默认命令行 shell。何不试着学习如何最大限度地利用它呢?

![Terminal command prompt on orange background][1]

Bash 是大多数 Linux 系统上的默认命令行 shell。何不试着学习如何最大限度地利用它呢?今年,Opensource.com 推荐了许多很棒的文章来帮助你充分利用 Bash shell 的强大功能。以下是一些关于 Bash 的阅读次数最多的文章:

## [在 Linux 终端中通过重定向从任何地方读取和写入数据][2]

输入和输出重定向是任何编程或脚本语言的基础功能。从技术上讲,只要你与电脑交互,它就会自然而然地发生。输入从 stdin(标准输入,通常是你的键盘或鼠标)读取,输出送到 stdout(标准输出,一般是文本或数据流),而错误则被发送到 stderr。了解这些数据流的存在,使你能够在使用 Bash 等 shell 时控制信息的去向。Seth Kenlon 分享了这些很棒的技巧,让你不需要大量鼠标移动和按键就能把数据从一个地方送到另一个地方。你可能不常使用重定向,但学会它可以为你节省大量不必要的打开文件和复制粘贴数据的时间。
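stdout 与 stderr 是两条彼此独立的数据流,这一点也可以在程序里直接观察到。下面是一个示意性的 Python 小例子(假设性示例,非原文内容),把两条流分别捕获后再打印出来:

```python
import contextlib
import io
import sys

buf_out, buf_err = io.StringIO(), io.StringIO()
with contextlib.redirect_stdout(buf_out), contextlib.redirect_stderr(buf_err):
    print("normal output")                  # 走 stdout
    print("error output", file=sys.stderr)  # 走 stderr

# 两条流互不混杂,各自只收到发给自己的内容
print(buf_out.getvalue().strip())  # normal output
print(buf_err.getvalue().strip())  # error output
```

这和 shell 里用 `>` 重定向 stdout、用 `2>` 重定向 stderr 是同一个道理。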
## [系统管理员 Bash 脚本入门][3]

Bash 是自由开源软件,所以任何人都可以安装它,不管他们运行的是 Linux、BSD、OpenIndiana、Windows 还是 macOS。Seth Kenlon 帮助你学习如何使用 Bash 的命令和特性,使其成为最强大的 shell 之一。

## [试试这个针对大型文件系统的 Bash 脚本][4]

你是否曾经想列出一个目录中的所有文件,但只显示其中的普通文件,不包括其他内容?或者只显示目录?如果有,那么 Nick Clifton 的文章可能正是你在寻找的。Nick 分享了一个漂亮的 Bash 脚本,它可以列出目录、文件、链接或可执行文件。该脚本使用 **find** 命令进行搜索,然后运行 **ls** 显示详细信息。对于管理大型 Linux 系统的人来说,这是一个漂亮的解决方案。

## [用 Bash 工具对你的 Linux 系统配置进行快照][5]

你可能出于很多原因想与他人分享你的 Linux 配置。你可能需要别人帮助排除系统上的一个问题,或者你对自己创建的环境非常自豪,想向其他开源爱好者展示它。Don Watkins 向我们展示了用 screenFetch 和 Neofetch 来捕获和分享你的系统配置。

## [Git 的 6 个好用的 Bash 脚本][6]

Git 已经成为一个无处不在的代码管理系统。了解如何管理 Git 存储库可以简化你的开发体验。Bob Peterson 分享了 6 个 Bash 脚本,它们将使你在使用 Git 存储库时更加轻松。**gitlog** 打印当前补丁相对于主版本的简要列表,它的变体脚本可以显示补丁的 SHA1 id,或在一组补丁中搜索字符串。

## [改进你 Bash 脚本的 5 种方法][7]

系统管理员通常编写各种或长或短的 Bash 脚本来完成各种任务。Alan Formy-Duval 解释了如何使 Bash 脚本更简单、更健壮、更易于阅读和调试。我们可能会以为需要使用诸如 Python、C 或 Java 之类的语言才能实现更高级的功能,但其实不一定,因为 Bash 脚本语言本身就已经非常强大。要最大限度地发挥它的效用,还有很多东西值得学习。

## [我最喜欢的 Bash 技巧][8]

Katie McLaughlin 帮助你提高工作效率,用别名和其他快捷方式解决你经常忘记的事情。当你整天与计算机打交道时,找到可重复的命令并标记它们以方便以后使用是非常美妙的。Katie 总结了一些有用的 Bash 特性和帮助命令,可以为你节省时间。

这些 Bash 小技巧将一个已经很强大的 shell 提升到了一个全新的级别。也欢迎分享你自己的建议。

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/1/bash

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[Chao-zhi](https://github.com/Chao-zhi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
[2]: https://opensource.com/article/20/6/redirection-bash
[3]: https://opensource.com/article/20/4/bash-sysadmins-ebook
[4]: https://opensource.com/article/20/2/script-large-files
[5]: https://opensource.com/article/20/1/screenfetch-neofetch
[6]: https://opensource.com/article/20/1/bash-scripts-git
[7]: https://opensource.com/article/20/1/improve-bash-scripts
[8]: https://opensource.com/article/20/1/bash-scripts-aliases
@ -0,0 +1,202 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "MjSeven"
|
||||
[#]: reviewer: " "
|
||||
[#]: publisher: " "
|
||||
[#]: url: " "
|
||||
[#]: subject: "How to Create and Manage Archive Files in Linux"
|
||||
[#]: via: "https://www.linux.com/news/how-to-create-and-manage-archive-files-in-linux-2/"
|
||||
[#]: author: "LF Training https://training.linuxfoundation.org/announcements/how-to-create-and-manage-archive-files-in-linux/"
|
||||
|
||||
如何在 Linux 中创建和管理归档文件
|
||||
======
|
||||
|
||||
_由 Matt Zand 和 Kevin Downs 联合写作_
|
||||
|
||||
简而言之,归档是一个包含一系列文件和(或)目录的单一文件。归档文件通常用于在本地或互联网上传输,或成为一个一系列文件和目录的备份副本,从而允许你使用一个文件来工作(如果压缩,则其大小会小于所有文件的总和)。同样的,归档也用于软件应用程序打包。为了方便传输,可以很容易地压缩这个单一文件,而存档中的文件会保留原始结构和权限。
|
||||
|
||||
我们可以使用 tar 工具来创建、列出和提取归档中的文件。用 tar 生成的归档通常称为“tar 文件”、“tar 归档”或者“压缩包”(因为所有已归档的文件被合成了一个文件)。
|
||||
|
||||
本教程会展示如何使用 tar 创建、列出和提取归档中的内容。这三个操作都会使用两个公共选项 "-f" 和 "-v":使用 "-f" 指定归档文件的名称,使用 "-v"("verbose") 选项使 tar 在处理文件时输出文件名。虽然 "-v" 选项不是必需的,但是它可以让你观察 tar 操作的过程。
|
||||
|
||||
在本教程的剩余部分中,会涵盖 3 个主题:1、创建一个归档文件;2、列出归档文件内容;3、提取归档文件内容。另外我们会回答通过调查与归档文件管理的 6 个实际问题来结束本教程。你从本教程学到的内容对于执行与[网络安全][1]和[云技术][2]相关的任务至关重要。
|
||||
|
||||
### 1- 创建一个归档文件
|
||||
|
||||
要使用 tar 创建一个归档文件,使用 "-c"("create") 选项,然后用 "-f" 选项指定要创建的归档文件名。通常的做法是使用带有 ".tar" 扩展名的名称,例如 "my-backup.tar"。注意,除非另有特别说明,否则本文其余部分中使用的所有命令和参数都以小写形式使用。记住,在你的终端上输入本文的命令时,无需输入每个命令行开头的 $ 提示符。
|
||||
|
||||
输入要归档的文件名作为参数;如果要创建一个包含所有文件及其子目录的归档文件,提供目录名称作为参数。
|
||||
|
||||
要归档 "project" 目录内容,输入
|
||||
|
||||
$ _tar -cvf project.tar project_
|
||||
|
||||
这个命令将创建一个名为 "project.tar" 的归档文件,包含 "project" 目录的所有内容,而原目录 "project" 将保持不变。
|
||||
|
||||
使用 "-z" 选项可以对归档文件进行压缩,这样产生的输出与创建未压缩的存档然后用 gzip 压缩是一样的,但它省去了额外的步骤。
|
||||
|
||||
要从 "project" 目录创建一个 "project.tar.gz" 的压缩包,输入:
|
||||
|
||||
$ _tar -zcvf project.tar.gz project_
|
||||
|
||||
这个命令将创建一个 "project.tar.gz" 的压缩包,包含 "project" 目录的所有内容,而原目录 "project" 将保持不变。
|
||||
|
||||
**注意:**在使用 "-z" 选项时,你应该使用 ".tar.gz" 扩展名而不是 ".tar" 扩展名,这样表示已压缩。虽然不是必须的,但这是一个很好的实践。
|
||||
|
||||
Gzip 不是唯一的压缩形式,还有 bzip2 和 xz。扩展名为 .xz 的文件是用 xz 压缩的,扩展名为 .bz2 的文件是用 bzip2 压缩的。由于 bzip2 已不再维护,我们将远离它而关注 xz。xz 压缩需要花费更长的时间,然而等待通常是值得的:它的压缩效果要好得多,生成的压缩包通常比使用其它压缩形式的要小。更好的是,不同压缩形式之间的解压缩方式并没有太大区别。下面是一个使用 tar 进行 xz 压缩的示例:
|
||||
|
||||
$ _tar -Jcvf project.tar.xz project_
|
||||
|
||||
我们只需将 gzip 的 -z 选项转换为 xz 的大写 -J 即可。以下是一些输出,显示压缩形式之间的差异:
|
||||
|
||||
![][3]
|
||||
|
||||
![][4]
|
||||
|
||||
如你所见,xz 的压缩时间最长。但是,它在减小文件大小方面做得最好,所以值得等待。文件越大,压缩效果也越好。
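上面的比较可以在本地复现。下面是一个小演示(目录和文件名均为假设),生成高度重复的数据,分别用 gzip 和 xz 打包并比较大小(需要系统中装有 xz):

```shell
# 生成约 400KB 的高度重复数据,便于观察压缩效果
mkdir -p cmpdemo
head -c 400000 /dev/zero | tr '\0' 'a' > cmpdemo/data.txt

tar -zcf cmpdemo.tar.gz cmpdemo   # gzip 压缩
tar -Jcf cmpdemo.tar.xz cmpdemo   # xz 压缩

# 比较两种压缩结果的大小
ls -l cmpdemo.tar.gz cmpdemo.tar.xz
```

对这类重复性强的数据,xz 生成的压缩包通常不大于 gzip 的结果。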
|
||||
|
||||
### 2- 列出归档文件的内容
|
||||
|
||||
要列出 tar 归档文件的内容但不提取,使用 "-t" 选项。
|
||||
|
||||
要列出 "project.tar" 的内容,输入:
|
||||
|
||||
$ _tar -tvf project.tar_
|
||||
|
||||
这个命令列出了 "project.tar" 归档的内容。"-v" 和 "-t" 选项一起使用会输出每个文件的权限和修改时间,以及文件名。这与 ls 命令使用 "-l" 选项时使用的格式相同。
|
||||
|
||||
要列出 "project.tar.gz" 压缩包的内容,输入:
|
||||
|
||||
$ _tar -tzvf project.tar.gz_
|
||||
|
||||
### 3- 从归档中提取内容
|
||||
|
||||
要提取(解压)tar 归档文件中的内容,使用 "-x"("extract") 选项。
|
||||
|
||||
要提取 "project.tar" 归档的内容,输入:
|
||||
|
||||
$ _tar -xvf project.tar_
|
||||
|
||||
这个命令会将 "project.tar" 归档的内容提取到当前目录。
|
||||
|
||||
如果归档文件是被压缩过的(通常扩展名为 ".tar.gz" 或 ".tgz"),则需要加上 "-z" 选项。
|
||||
|
||||
要提取 "project.tar.gz" 压缩包的内容,输入:
|
||||
|
||||
$ _tar -zxvf project.tar.gz_
|
||||
|
||||
**注意**:如果当前目录中有文件或子目录与归档文件中的内容同名,那么在提取归档文件时,这些文件或子目录将被覆盖。如果你不知道归档中包含哪些文件,请考虑先查看归档文件的内容。
|
||||
|
||||
在提取归档内容之前列出其内容的另一个原因是,确定归档中的内容是否包含在目录中。如果没有,而当前目录中包含许多不相关的文件,那么你可能将它们与归档中提取的文件混淆。
|
||||
|
||||
要将文件提取到它们自己的目录中,新建一个目录,将归档文件移到该目录,然后你就可以在新目录中提取文件。
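把这几步串起来的一个小演示(目录和文件名均为假设):

```shell
# 准备一个演示归档
mkdir -p src && echo data > src/f.txt
tar -cf bundle.tar src/f.txt

# 新建目录,把归档移进去,再在其中提取(-C 指定提取目录)
mkdir -p extracted
mv bundle.tar extracted/
tar -xf extracted/bundle.tar -C extracted

ls extracted
```

这样提取出的文件不会与当前目录中的其它文件混在一起。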
|
||||
|
||||
现在我们已经学习了如何创建归档文件并列出和提取其内容,接下来我们来探讨 Linux 专业人员经常被问到的 6 个实用问题。
|
||||
|
||||
* 可以在不解压缩的情况下添加内容到压缩包中吗?
|
||||
|
||||
很不幸,一旦文件被压缩,就无法直接向其中添加内容。你需要先解压缩或提取其内容,编辑或添加内容之后,再重新压缩文件。如果文件很小,这个过程不会花费很长时间;如果文件较大,则需要耐心等待一会儿。
|
||||
|
||||
* 可以在不解压缩的情况下删除归档文件中的内容吗?
|
||||
|
||||
这取决于归档时使用的 tar 版本。较新版本的 tar 支持 `--delete` 选项。
|
||||
|
||||
例如,假设归档文件中有 file1 和 file2,可以使用以下命令将它们从 file.tar 中删除:
|
||||
|
||||
_$ tar -vf file.tar --delete file1 file2_
|
||||
|
||||
删除目录 dir1:
|
||||
|
||||
_$ tar -f file.tar --delete dir1/*_
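下面是一个可复现的演示(需要支持 `--delete` 的 GNU tar,且只能用于未压缩的归档;文件名均为假设):

```shell
# 创建包含三个文件的归档
mkdir -p deldemo
touch deldemo/file1 deldemo/file2 deldemo/file3
tar -cf deldemo.tar deldemo/file1 deldemo/file2 deldemo/file3

# 无需解包,直接从归档中删除 file1
tar -f deldemo.tar --delete deldemo/file1

# file1 已不在归档中
tar -tf deldemo.tar
```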
|
||||
|
||||
* 压缩和归档之间有什么区别?
|
||||
|
||||
查看归档和压缩之间差异最简单的方法是对比文件大小。归档是将多个文件合并为一个文件,所以如果我们归档 10 个 100kb 的文件,最终会得到一个约 1000kb 的文件。而如果再压缩这些文件,最终可能得到一个只有几 kb 的文件。
|
||||
|
||||
* 如何压缩归档文件?
|
||||
|
||||
如上所说,你可以使用带有 cvf 选项的 tar 命令来创建归档文件。要压缩归档文件,有两个选择:用压缩程序(例如 gzip)处理归档文件,或在使用 tar 命令时加上压缩选项。最常见的压缩标志是:`-z` 表示 gzip,`-j` 表示 bzip2,`-J` 表示 xz。例如:
|
||||
|
||||
_$ gzip file.tar_
|
||||
|
||||
或者,我们可以在使用 tar 命令时使用压缩标志,以下命令使用 gzip 标志 "z":
|
||||
|
||||
_$ tar -cvzf file.tar /some/directory_
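两种方式的效果可以这样演示和验证(文件名均为假设):

```shell
mkdir -p gzdemo && echo data > gzdemo/f.txt

# 方式一:先归档,再用 gzip 压缩(demo2.tar 会被替换为 demo2.tar.gz)
tar -cf demo2.tar gzdemo
gzip demo2.tar

# 方式二:归档时直接使用 -z 一步完成压缩
tar -czf demo3.tar.gz gzdemo

ls demo2.tar.gz demo3.tar.gz
```

两种方式得到的都是 gzip 压缩的归档,只是方式二少了一个步骤。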
|
||||
|
||||
* 如何一次创建多个目录和/或文件的归档?
|
||||
|
||||
一次要归档多个文件,这种情况并不少见。一次归档多个文件和目录并不像你想的那么难,你只需要提供多个文件或目录作为 tar 的参数即可:
|
||||
|
||||
_$ tar -cvzf file.tar file1 file2 file3_
|
||||
|
||||
或者
|
||||
|
||||
_$ tar -cvzf file.tar /some/directory1 /some/directory2_
|
||||
|
||||
* 创建归档时如何跳过目录和/或文件?
|
||||
|
||||
你可能会遇到这样的情况:要归档一个目录或文件,但不是所有文件,这种情况下可以使用 --exclude 选项:
|
||||
|
||||
_$ tar --exclude '/some/directory' -cvf file.tar /home/user_
|
||||
|
||||
在这个示例中,/home/user 目录中除了 /some/directory 之外的内容都将被归档。重要的是,要将 `--exclude` 选项放在源和目标之前,并用单引号将要排除的文件或目录引起来。
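一个可复现的演示(目录结构为假设,模拟上面的排除场景):

```shell
# 构造一个包含需要排除的子目录的目录树
mkdir -p exdemo/user/some/directory
touch exdemo/user/keep.txt
touch exdemo/user/some/directory/skip.txt

# 排除 exdemo/user/some/directory,其余内容归档
tar --exclude='exdemo/user/some/directory' -cf ex.tar exdemo/user

# skip.txt 不会出现在归档中
tar -tf ex.tar
```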
|
||||
|
||||
### 总结
|
||||
|
||||
tar 命令在为文件创建备份或压缩不再需要的文件时非常有用。在更改文件之前备份它们是一个很好的习惯:如果更改后某些东西没有按预期工作,你始终可以还原到旧文件。压缩不再使用的文件有助于保持系统干净,并降低磁盘空间的使用率。还有其它的归档或压缩工具,但 tar 因其多功能性、易用性和受欢迎程度而独占鳌头。
|
||||
|
||||
### 资源
|
||||
|
||||
如果你想了解有关 Linux 的更多信息,强烈建议阅读以下文章和教程:
|
||||
|
||||
* [Linux 文件系统架构和管理综述][5]
|
||||
* [Linux 文件和目录系统工作原理的全面回顾][6]
|
||||
* [所有 Linux 系统发行版的综合列表][7]
|
||||
* [特殊用途 Linux 发行版的综合列表][8]
|
||||
* [Linux 系统管理指南 - 制作和管理备份操作的最佳实践][9]
|
||||
* [Linux 系统管理指南 - Linux 虚拟内存和磁盘缓冲区缓存概述][10]
|
||||
* [Linux 系统管理指南 - 监控 Linux 的最佳实践][11]
|
||||
* [Linux 系统管理指南 - Linux 启动和关闭的最佳实践][12]
|
||||
|
||||
|
||||
|
||||
### 关于作者
|
||||
|
||||
**Matt Zand** 是一位创业者,也是 3 家科技创业公司的创始人: [DC Web Makers][13]、[Coding Bootcamps][14] 和 [High School Technology Services][15]。他也是 [使用 Hyperledger Fabric 进行智能合约开发][16] 一书的主要作者。他为 Hyperledger、以太坊和 Corda R3 平台编写了 100 多篇关于区块链开发的技术文章和教程。在 DC Web Makers,他领导了一个区块链专家团队,负责咨询和部署企业去中心化应用程序。作为首席架构师,他为编码训练营设计和开发了区块链课程和培训项目。他拥有马里兰大学(University of Maryland)工商管理硕士学位。在区块链开发和咨询之前,他曾担任一些初创公司的高级网页和移动应用程序开发和顾问、天使投资人和业务顾问。你可以通过以下这个网址和他取得联系: <https://www.linkedin.com/in/matt-zand-64047871>。
|
||||
|
||||
**Kevin Downs** 是 Red Hat 认证的系统管理员(RHCSA)。他目前在 IBM 担任系统管理员,负责管理数百台运行在不同 Linux 发行版上的服务器。他是[编码训练营][17]的首席 Linux 讲师,并讲授 [5 门自己的课程][18]。
|
||||
|
||||
本文首发在 [Linux 基础培训][20]上。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/news/how-to-create-and-manage-archive-files-in-linux-2/
|
||||
|
||||
作者:[LF Training][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://training.linuxfoundation.org/announcements/how-to-create-and-manage-archive-files-in-linux/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://learn.coding-bootcamps.com/p/essential-practical-guide-to-cybersecurity-for-system-admin-and-developers
|
||||
[2]: https://learn.coding-bootcamps.com/p/introduction-to-cloud-technology
|
||||
[3]: https://training.linuxfoundation.org/wp-content/uploads/2020/12/Linux1-300x94.png
|
||||
[4]: https://training.linuxfoundation.org/wp-content/uploads/2020/12/Linux2-300x110.png
|
||||
[5]: https://blockchain.dcwebmakers.com/blog/linux-os-file-system-architecture-and-management.html
|
||||
[6]: https://coding-bootcamps.com/linux/filesystem/index.html
|
||||
[7]: https://myhsts.org/tutorial-list-of-all-linux-operating-system-distributions.php
|
||||
[8]: https://coding-bootcamps.com/list-of-all-special-purpose-linux-distributions.html
|
||||
[9]: https://myhsts.org/tutorial-system-admin-best-practices-for-managing-backup-operations.php
|
||||
[10]: https://myhsts.org/tutorial-how-linux-virtual-memory-and-disk-buffer-cache-work.php
|
||||
[11]: https://myhsts.org/tutorial-system-admin-best-practices-for-monitoring-linux-systems.php
|
||||
[12]: https://myhsts.org/tutorial-best-practices-for-performing-linux-boots-and-shutdowns.php
|
||||
[13]: https://blockchain.dcwebmakers.com/
|
||||
[14]: http://coding-bootcamps.com/
|
||||
[15]: https://myhsts.org/
|
||||
[16]: https://www.oreilly.com/library/view/hands-on-smart-contract/9781492086116/
|
||||
[17]: https://coding-bootcamps.com/
|
||||
[18]: https://learn.coding-bootcamps.com/courses/author/758905
|
||||
[19]: https://training.linuxfoundation.org/announcements/how-to-create-and-manage-archive-files-in-linux/
|
||||
[20]: https://training.linuxfoundation.org/
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (mengxinayan)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (10 ways to get started with open source in 2021)
|
||||
[#]: via: (https://opensource.com/article/21/1/getting-started-open-source)
|
||||
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
|
||||
|
||||
2021 年开始使用开源的 10 种方法
|
||||
======
|
||||
如果您刚开始接触开源,那么下面这些 2020 年的十大热门文章有助于指引您前进。
|
||||
![看着职业生涯的地图][1]
|
||||
|
||||
opensource.com 存在的意义是向世界宣传开源的一切:从新工具、框架到社区。我们的目标是让想要使用开源或为开源做贡献的人更容易参与其中。
|
||||
|
||||
入门开源可能很难,所以我们定期分享参与其中的提示和建议。无论您想要学习 Python、帮助抗击 COVID-19,还是参与 Kubernetes 社区,我们都能帮到您。
|
||||
|
||||
为了帮助您入门,我们总结了 2020 年发布的 10 篇最受欢迎的开源入门文章。希望它们能激励您在 2021 年学习一些新知识。
|
||||
|
||||
### 利用 Python 实现网络爬虫的新手指南
|
||||
|
||||
您是否想通过实践而不是阅读来学习 Python?在本教程中,Julia Piaskowski 将指导您完成她的第一个 [Python 网页爬取项目][2],具体展示了如何使用 Python 的 requests 库访问网页内容。
|
||||
|
||||
从安装 Python3 到使用 pandas 清理抓取结果,Julia 对每一步都做了详细介绍,并配以大量截图,讲解如何围绕最终目标进行爬取。
|
||||
|
||||
有关爬取相关内容的部分特别有用:当她遇到难点时,会详细解释;而且与本文的其余部分一样,她会指导您完成每个步骤。
|
||||
|
||||
### 在 Linux 上使用 SSH 进行远程连接的初学者指南
|
||||
|
||||
如果你之前从未使用过安全 shell(SSH),那么你在第一次使用时可能会感到困惑。在本教程中,Seth Kenlon 展示了[如何为两台计算机之间配置 SSH 连接][3],以及如何不使用密码而安全地进行连接。
|
||||
|
||||
Seth 解释了建立 SSH 连接的每个步骤,从您应该了解的四个关键术语,到在每台主机上激活 SSH 的步骤。他还提供了有关查找计算机 IP 地址、创建 SSH 密钥以及对远程计算机授予远程访问权限的建议。
|
||||
|
||||
### 学习任何编程语言的 5 个步骤
|
||||
|
||||
如果您已经掌握了一种编程语言,那么您可以[学习任何语言][4]。这是 Seth Kenlon 撰写本文的前提:只要了解基本的编程逻辑,就可以将其拓展到任何语言。
|
||||
|
||||
Seth 分享了程序员在学习一种新的编程语言或编码方式时需要了解的五样东西,语法、内置函数和解析器都在其中,他会逐一讲解。
|
||||
|
||||
那么将它们联系起来的关键是什么?一旦了解了代码的工作原理,您就可以跨语言拓展,没有什么语言会难到学不会。
|
||||
|
||||
### 为 COVID-19 贡献开源医疗项目
|
||||
|
||||
您是否知道一家意大利医院通过 3D 打印机设备挽救了 COVID-19 患者的生命?这是开源贡献者为 2020 年 COVID-19 大流行[建立的众多解决方案之一][5]。
|
||||
|
||||
在本文中,Joshua Pearce 分享了针对 COVID-19 的开源志愿服务项目。虽然 Open Air 是最大的项目,但 Joshua 解释了如何在 wiki 上为开源呼吸机工作,编写开源 COVID-19 医疗供应要求,测试开源氧气浓缩机原型等。
|
||||
|
||||
### GNOME 入门建议
|
||||
|
||||
GNOME 是最受欢迎的 Linux 桌面之一,但是它适合您吗?本文分享了[来自 GNOME 用户的建议][6],以及 opensource.com 上有关此主题的文章。
|
||||
|
||||
想要在配置桌面上寻找一些灵感吗?本文包含了有关 GNOME 拓展入门,将 Dash 安装到 Dock,使用 GNOME Tweak 工具等的链接。
|
||||
|
||||
毕竟,您可能会认为 GNOME 仍然不适合您——不用担心,最后您将找到指向其他 Linux 桌面和窗口管理器的链接。
|
||||
|
||||
### 现在开始为开源做贡献的 3 个理由
|
||||
|
||||
截至 2020 年 6 月,GitHub 托管了超过 180,000 个公共仓库。现如今加入开源社区比以往任何时候都容易,但这是否意味着您也应该加入?在本文中,opensource.com 通讯员 Jason Blais [分享了值得一试的三个理由][7]。
|
||||
|
||||
为开源做贡献可以增强您的信心、丰富您的简历并拓展您的专业人脉。Jason 还给出了许多实用建议,从如何在领英(LinkedIn)个人资料中添加开源贡献,到如何将这些贡献转变为有偿工作,最后还列出了适合初学者的优秀项目。
|
||||
|
||||
### 作为 Linux 系统管理员为开源做贡献的 4 种方法
|
||||
|
||||
系统管理员是开源的无名英雄。他们在代码背后做了大量工作,这些代码非常有价值,但通常是看不见的。
|
||||
|
||||
在本文中,Elizabeth K. Joseph 介绍了她如何以 Linux 系统管理员的身份[改善开源项目][8]。她让社区变得比她发现时更好的几种方式包括:用户支持、托管项目资源以及寻找新的网站环境。
|
||||
|
||||
也许最重要的贡献是什么?文档!Elizabeth 的开源之旅就是从为她使用的项目重写快速入门指南开始的。向您经常使用的项目提交错误报告和补丁,也是参与其中的理想方法。
|
||||
|
||||
### 为 Slack 的开源替代品做出贡献的 6 种方法
|
||||
|
||||
Mattermost 是需要开源消息传递系统的团队的流行平台。其活跃、充满活力的社区是让用户保持忠诚度的关键因素,尤其是对那些具有 Go,React 和 DevOps 经验的用户。
|
||||
|
||||
如果您想[为 Mattermost 做出贡献][9],Jason Blais 具体介绍了如何参与其中。将本文视为您的入门文档:Blais 分享了您要采取的步骤,并介绍了您可以做出的六种贡献。
|
||||
|
||||
无论您是要构建集成还是本地化语言,本文都将介绍如何进行。
|
||||
|
||||
### 如何为 K8s 做贡献
|
||||
|
||||
2018 年,我走进温哥华开源峰会时还不了解 K8s;主题演讲结束后,我离开会场时依然一头雾水。毫不夸张地说,K8s 已经彻底改变了开源:很难找到一个更受欢迎、更具影响力的项目。
|
||||
|
||||
如果您想做出贡献,IBM 工程师 Tara Gu 介绍了[她是如何开始的][10]。本文回顾了她在 All Things Open 2019 会议上的闪电演讲,并附有她现场演讲的视频。还记得线下会议吗?
|
||||
|
||||
### 任何人如何在工作中为开源软件做出贡献
|
||||
|
||||
必要性是发明之母,在开源中尤其如此。许多人针对自己遇到的问题构建开源解决方案。但如果开发人员在构建产品时没有收集目标用户的反馈,从而错失了目标,会发生什么呢?
|
||||
|
||||
产品和设计团队通常会填补企业中的这一空白。如果开源团队中不存在这样的角色,开发人员应该怎么做?
|
||||
|
||||
在本文中,Catherine Robson 介绍了开源团队如何从目标用户那里[收集反馈][11]。这篇文章是为那些希望与开发人员分享自己的使用体验、从而为开源项目贡献反馈的人而写的。
|
||||
|
||||
Catherine 概述的步骤将帮助您与开源团队分享您的见解,并在帮助团队开发更好的产品方面发挥关键作用。
|
||||
|
||||
### 您想要学习什么?
|
||||
|
||||
您想了解开源入门的哪些方面?请在评论中分享您对文章主题的建议。如果您有可以帮助他人开始使用开源的故事,也请考虑为 opensource.com [撰写文章][12]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/1/getting-started-open-source
|
||||
|
||||
作者:[Lauren Maffeo][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[萌新阿岩](https://github.com/mengxinayan)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/lmaffeo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey)
|
||||
[2]: https://opensource.com/article/20/5/web-scraping-python
|
||||
[3]: https://opensource.com/article/20/9/ssh
|
||||
[4]: https://opensource.com/article/20/10/learn-any-programming-language
|
||||
[5]: https://opensource.com/article/20/3/volunteer-covid19
|
||||
[6]: https://opensource.com/article/20/6/gnome-users
|
||||
[7]: https://opensource.com/article/20/6/why-contribute-open-source
|
||||
[8]: https://opensource.com/article/20/7/open-source-sysadmin
|
||||
[9]: https://opensource.com/article/20/7/mattermost
|
||||
[10]: https://opensource.com/article/20/1/contributing-kubernetes-all-things-open-2019
|
||||
[11]: https://opensource.com/article/20/10/open-your-job
|
||||
[12]: https://opensource.com/how-submit-article
|
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "MjSeven"
|
||||
[#]: reviewer: " "
|
||||
[#]: publisher: " "
|
||||
[#]: url: " "
|
||||
[#]: subject: "Why you need to drop ifconfig for ip"
|
||||
[#]: via: "https://opensource.com/article/21/1/ifconfig-ip-linux"
|
||||
[#]: author: "Rajan Bhardwaj https://opensource.com/users/rajabhar"
|
||||
|
||||
|
||||
放弃 ifconfig,拥抱 ip
|
||||
======
|
||||
开始使用现代方法配置 Linux 网络接口。
|
||||
![Tips and gears turning][1]
|
||||
|
||||
在很长一段时间内,`ifconfig` 命令是配置网络接口的默认方法。它为 Linux 用户提供了很好的服务,但是网络很复杂,所以配置网络的命令必须健壮。`ip` 命令是现代系统中新的默认网络命令,在本文中,我将向你展示如何使用它。
|
||||
|
||||
`ip` 命令工作在 [OSI 网络栈][2] 的两层上:数据链路层和网络(IP)层。它能完成之前 `net-tools` 包的所有工作。
|
||||
|
||||
### 安装 ip
|
||||
|
||||
`ip` 命令包含在 `iproute2` 包中,它可能已经在你的 Linux 发行版中安装了。如果没有,你可以从发行版的仓库中进行安装。
|
||||
|
||||
### ifconfig 和 ip 使用对比
|
||||
|
||||
`ip` 和 `ifconfig` 命令可以用来配置网络接口,但它们做事方法不同。接下来,作为对比,我将用它们来执行一些常见的任务。
|
||||
|
||||
#### 查看网口和 IP 地址
|
||||
|
||||
如果你想查看主机的 IP 地址或网络接口信息,`ifconfig` (不带任何参数)命令提供了一个很好的总结。
|
||||
|
||||
|
||||
```
|
||||
$ ifconfig
|
||||
|
||||
eth0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
|
||||
ether bc:ee:7b:5e:7d:d8 txqueuelen 1000 (Ethernet)
|
||||
RX packets 0 bytes 0 (0.0 B)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 0 bytes 0 (0.0 B)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
|
||||
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
|
||||
inet 127.0.0.1 netmask 255.0.0.0
|
||||
inet6 ::1 prefixlen 128 scopeid 0x10<host>
|
||||
loop txqueuelen 1000 (Local Loopback)
|
||||
RX packets 41 bytes 5551 (5.4 KiB)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 41 bytes 5551 (5.4 KiB)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
|
||||
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
|
||||
inet 10.1.1.6 netmask 255.255.255.224 broadcast 10.1.1.31
|
||||
inet6 fdb4:f58e:49f:4900:d46d:146b:b16:7212 prefixlen 64 scopeid 0x0<global>
|
||||
inet6 fe80::8eb3:4bc0:7cbb:59e8 prefixlen 64 scopeid 0x20<link>
|
||||
ether 08:71:90:81:1e:b5 txqueuelen 1000 (Ethernet)
|
||||
RX packets 569459 bytes 779147444 (743.0 MiB)
|
||||
RX errors 0 dropped 0 overruns 0 frame 0
|
||||
TX packets 302882 bytes 38131213 (36.3 MiB)
|
||||
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
|
||||
```
|
||||
|
||||
新的 `ip` 命令提供了类似的结果,但命令是 `ip address show`,或者简写为 `ip a`:
|
||||
|
||||
|
||||
```
|
||||
$ ip a
|
||||
|
||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
|
||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||
inet 127.0.0.1/8 scope host lo
|
||||
valid_lft forever preferred_lft forever
|
||||
inet6 ::1/128 scope host
|
||||
valid_lft forever preferred_lft forever
|
||||
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
|
||||
link/ether bc:ee:7b:5e:7d:d8 brd ff:ff:ff:ff:ff:ff
|
||||
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
|
||||
link/ether 08:71:90:81:1e:b5 brd ff:ff:ff:ff:ff:ff
|
||||
inet 10.1.1.6/27 brd 10.1.1.31 scope global dynamic wlan0
|
||||
valid_lft 83490sec preferred_lft 83490sec
|
||||
inet6 fdb4:f58e:49f:4900:d46d:146b:b16:7212/64 scope global noprefixroute dynamic
|
||||
valid_lft 6909sec preferred_lft 3309sec
|
||||
inet6 fe80::8eb3:4bc0:7cbb:59e8/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
```
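顺带一提,如果只需要一份紧凑的摘要,`ip` 还支持 `-brief`(可缩写为 `-br`)输出模式,这是 `ifconfig` 所没有的。例如(具体输出因主机而异):

```shell
# 每个接口一行的紧凑摘要:名称、状态、地址
ip -brief address show

# 链路层的紧凑摘要:名称、状态、MAC 地址
ip -brief link show
```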
|
||||
|
||||
#### 添加 IP 地址
|
||||
|
||||
使用 `ifconfig` 命令添加 IP 地址命令为:
|
||||
|
||||
|
||||
```bash
|
||||
$ ifconfig eth0 add 192.9.203.21
|
||||
```
|
||||
|
||||
`ip` 类似:
|
||||
|
||||
|
||||
```bash
|
||||
$ ip address add 192.9.203.21 dev eth0
|
||||
```
|
||||
|
||||
`ip` 中的子命令可以缩短,所以下面这个命令同样有效:
|
||||
|
||||
|
||||
```bash
|
||||
$ ip addr add 192.9.203.21 dev eth0
|
||||
```
|
||||
|
||||
你甚至可以更短些:
|
||||
|
||||
|
||||
```bash
|
||||
$ ip a add 192.9.203.21 dev eth0
|
||||
```
|
||||
|
||||
#### 移除一个 IP 地址
|
||||
|
||||
删除 IP 地址的操作与添加正好相反。
|
||||
|
||||
使用 `ifconfig`,命令是:
|
||||
|
||||
|
||||
```bash
|
||||
$ ifconfig eth0 del 192.9.203.21
|
||||
```
|
||||
|
||||
`ip` 命令的语法是:
|
||||
|
||||
|
||||
```bash
|
||||
$ ip a del 192.9.203.21 dev eth0
|
||||
```
|
||||
|
||||
#### 启用或禁用组播
|
||||
|
||||
使用 `ifconfig` 在接口上启用或禁用[组播][3]:
|
||||
|
||||
|
||||
```bash
|
||||
# ifconfig eth0 multicast
|
||||
```
|
||||
|
||||
对于 `ip`,使用 `set` 子命令,加上设备名(`dev`)、`multicast` 选项以及一个布尔值:
|
||||
|
||||
|
||||
```bash
|
||||
# ip link set dev eth0 multicast on
|
||||
```
|
||||
|
||||
#### 启用或禁用网络
|
||||
|
||||
每个系统管理员都熟悉“先关闭,再打开”这个解决问题的老办法。对于网络接口来说,就是关闭再启用接口。
|
||||
|
||||
`ifconfig` 命令使用 `up` 或 `down` 关键字来实现:
|
||||
|
||||
|
||||
```bash
|
||||
# ifconfig eth0 up
|
||||
```
|
||||
|
||||
或者你可以使用一个专用命令:
|
||||
|
||||
|
||||
```bash
|
||||
# ifup eth0
|
||||
```
|
||||
|
||||
`ip` 命令使用 `set` 子命令将网络设置为 `up` 或 `down` 状态:
|
||||
|
||||
|
||||
```bash
|
||||
# ip link set eth0 up
|
||||
```
|
||||
|
||||
#### 开启或关闭地址解析功能(ARP)
|
||||
|
||||
使用 `ifconfig`,你可以通过声明来启用:
|
||||
|
||||
|
||||
```bash
|
||||
# ifconfig eth0 arp
|
||||
```
|
||||
|
||||
使用 `ip`,你可以将 `arp` 属性设置为 `on` 或 `off`:
|
||||
|
||||
|
||||
```bash
|
||||
# ip link set dev eth0 arp on
|
||||
```
|
||||
|
||||
### ip 和 ifconfig 的优缺点
|
||||
|
||||
`ip` 命令比 `ifconfig` 更通用,技术上也更有效,因为它使用的是 `Netlink` 套接字,而不是 `ioctl` 系统调用。
|
||||
|
||||
`ip` 命令可能看起来比 `ifconfig` 更详细、更复杂,但这也是它拥有更多功能的原因之一。一旦你开始使用它,你就会了解它的内部逻辑(例如,统一使用 `set` 子命令,而不是看起来随意的各种声明和设置)。
|
||||
|
||||
最后,`ifconfig` 已经过时了(例如,它缺乏对网络命名空间的支持),而 `ip` 是为现代网络而生的。尝试并学习它吧,你会感激自己的!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/1/ifconfig-ip-linux
|
||||
|
||||
作者:[Rajan Bhardwaj][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rajabhar
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk "Tips and gears turning"
|
||||
[2]: https://en.wikipedia.org/wiki/OSI_model
|
||||
[3]: https://en.wikipedia.org/wiki/Multicast
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (amwps290)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Write GIMP scripts to make image processing faster)
|
||||
[#]: via: (https://opensource.com/article/21/1/gimp-scripting)
|
||||
[#]: author: (Cristiano L. Fontana https://opensource.com/users/cristianofontana)
|
||||
|
||||
编写 GIMP 脚本使图像处理更快
|
||||
======
|
||||
通过向一批图像添加效果来学习 GIMP 的脚本语言 Script-Fu。
|
||||
|
||||
![Painting art on a computer screen][1]
|
||||
|
||||
前一段时间,我想给方程图片加上黑板风格的外观。我开始使用 [GNU Image Manipulation Program (GIMP)][2] 来处理,并对结果感到满意。问题是我必须对图像执行多个操作,当我想再次使用这种样式时,不想对所有图像重复这些步骤。此外,我确信我很快就会忘记这些步骤。
|
||||
|
||||
![Fourier transform equations][3]
|
||||
|
||||
傅立叶变换方程式(Cristiano Fontana,[CC BY-SA 4.0] [4])
|
||||
|
||||
GIMP 是一个很棒的开源图像编辑器。 尽管我已经使用了多年,但从未研究过其批处理功能或 [Script-Fu][5] 菜单。 这是探索它们的绝好机会。
|
||||
|
||||
### 什么是 Script-Fu?
|
||||
|
||||
[Script-Fu][6] 是 GIMP 内置的脚本语言。 是一种基于 [Scheme ][7]的编程语言。 如果您从未使用过 Scheme,请尝试一下,因为它可能非常有用。 我认为 Script-Fu 是一个很好的入门方法,因为它对图像处理具有立竿见影的效果,因此您可以非常快速地提高工作效率。 您也可以使用 [Python][8] 编写脚本,但是 Script-Fu 是默认选项。
|
||||
|
||||
为了帮助您熟悉 Scheme,GIMP 的文档提供了深入的[教程][9]。 Scheme 是一种类似于 [Lisp][10] 的语言,因此主要特征是它使用[前缀][11]表示法和[许多括号][12]。 通过为操作数和操作符添加前缀来将它们应用到操作数列表:
|
||||
|
||||
|
||||
```
|
||||
(function-name operand operand ...)
|
||||
|
||||
(+ 2 3)
|
||||
↳ Returns 5
|
||||
|
||||
(list 1 2 3 5)
|
||||
↳ Returns a list containing 1, 2, 3, and 5
|
||||
```
|
||||
|
||||
我花了一些时间才能找到 GIMP 功能完整列表的文档,但实际上很简单。 在 **Help** 菜单中,有一个 **Procedure Browser**,其中包含有关所有可能功能的非常详尽的文档。
|
||||
|
||||
![GIMP Procedure Browser][13]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
### 使用 GIMP 的批处理模式
|
||||
|
||||
您可以使用 `-b` 选项以批处理方式启动 GIMP。`-b` 选项的参数可以是你想要运行的脚本,也可以是一个 `-`,它会让 GIMP 进入交互模式,从命令行读取命令。正常情况下,启动 GIMP 时会打开图形界面,但你可以使用 `-i` 选项禁用它。
|
||||
|
||||
### 开始编写你的第一个脚本
|
||||
|
||||
创建一个名为 `chalk.scm` 的文件,并把它保存到 **Preferences** 窗口 **Folders** 选项下 **Scripts** 指定的脚本文件夹中。就我而言,它位于 `$HOME/.config/GIMP/2.10/scripts`。
|
||||
|
||||
在 `chalk.scm` 文件中,写入下面的内容:
|
||||
|
||||
|
||||
```
|
||||
(define (chalk filename grow-pixels spread-amount percentage)
|
||||
(let* ((image (car (gimp-file-load RUN-NONINTERACTIVE filename filename)))
|
||||
(drawable (car (gimp-image-get-active-layer image)))
|
||||
(new-filename (string-append "modified_" filename)))
|
||||
(gimp-image-select-color image CHANNEL-OP-REPLACE drawable '(0 0 0))
|
||||
(gimp-selection-grow image grow-pixels)
|
||||
    (gimp-context-set-foreground '(0 0 0))
|
||||
(gimp-edit-bucket-fill drawable BUCKET-FILL-FG LAYER-MODE-NORMAL 100 255 TRUE 0 0)
|
||||
(gimp-selection-none image)
|
||||
(plug-in-spread RUN-NONINTERACTIVE image drawable spread-amount spread-amount)
|
||||
(gimp-drawable-invert drawable TRUE)
|
||||
(plug-in-randomize-hurl RUN-NONINTERACTIVE image drawable percentage 1 TRUE 0)
|
||||
(gimp-file-save RUN-NONINTERACTIVE image drawable new-filename new-filename)
|
||||
(gimp-image-delete image)))
|
||||
```
|
||||
|
||||
### 定义脚本变量
|
||||
|
||||
在脚本中,`(define (chalk filename grow-pixels spread-amount percentage) ...)` 定义了一个名为 `chalk` 的新函数,其参数为 `filename`、`grow-pixels`、`spread-amount` 和 `percentage`。`define` 中的其余内容都是 `chalk` 函数的主体。你可能已经注意到,较长的变量名中间都用破折号分隔,这是类 Lisp 语言的惯用风格。
|
||||
|
||||
`(let* ...)` 是一个特殊的函数,可以让你定义一些只在函数体内有效的临时变量:`image`、`drawable` 和 `new-filename`。脚本使用 `gimp-file-load` 载入图片,它会返回一个包含该图片的列表,再用 `car` 函数取出第一项。然后,它选择第一个活动图层,并将其引用存储在 `drawable` 变量中。最后,它定义了包含图像新文件名的字符串。
|
||||
|
||||
为了帮助您更好地了解该过程,我将对其进行分解。首先,启动带图形界面的 GIMP,然后依次点击 **Filters → Script-Fu → Console** 打开 Script-Fu 控制台。这种情况下不能使用 `let*`,因为变量必须是持久的。使用 `define` 函数定义 `image` 变量,并提供查找图像的正确路径:
|
||||
|
||||
|
||||
```
|
||||
`(define image (car (gimp-file-load RUN-NONINTERACTIVE "Fourier.png" "Fourier.png")))`
|
||||
```
|
||||
|
||||
似乎在 GUI 中什么也没有发生,但是图像已加载。 您需要通过以下方式来让图像显示:
|
||||
|
||||
|
||||
```
|
||||
`(gimp-display-new image)`
|
||||
```
|
||||
|
||||
![GUI with the displayed image][14]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
现在,获取活动层并将其存储在 `drawable` 变量中:
|
||||
|
||||
|
||||
```
|
||||
`(define drawable (car (gimp-image-get-active-layer image)))`
|
||||
```
|
||||
|
||||
最后,定义图像的新文件名:
|
||||
|
||||
|
||||
```
|
||||
`(define new-filename "modified_Fourier.png")`
|
||||
```
|
||||
|
||||
运行命令后,您将在 Script-Fu 控制台中看到以下内容:
|
||||
|
||||
![Script-Fu console][15]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
在对图像执行操作之前,需要定义将在脚本中作为函数参数的变量:
|
||||
|
||||
|
||||
```
|
||||
(define grow-pixels 2)
|
||||
(define spread-amount 4)
|
||||
(define percentage 3)
|
||||
```
|
||||
|
||||
### 处理图片
|
||||
|
||||
现在,所有相关变量都已定义,您可以对图像进行操作了。 脚本的操作可以直接在控制台上执行。第一步是在活动层上选择黑色。 颜色被写成一个由三个数字组成的列表,即 `(list 0 0 0)` 或者是 `'(0 0 0)`:
|
||||
|
||||
|
||||
```
|
||||
`(gimp-image-select-color image CHANNEL-OP-REPLACE drawable '(0 0 0))`
|
||||
```
|
||||
|
||||
![Image with the selected color][16]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
将选区扩大两个像素:
|
||||
|
||||
|
||||
```
|
||||
`(gimp-selection-grow image grow-pixels)`
|
||||
```
|
||||
|
||||
![Image with the selected color][17]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
将前景色设置为黑色,并用它填充选区:
|
||||
|
||||
|
||||
```
|
||||
(gimp-context-set-foreground '(0 0 0))
|
||||
(gimp-edit-bucket-fill drawable BUCKET-FILL-FG LAYER-MODE-NORMAL 100 255 TRUE 0 0)
|
||||
```
|
||||
|
||||
![Image with the selection filled with black][18]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
删除选区:
|
||||
|
||||
|
||||
```
|
||||
`(gimp-selection-none image)`
|
||||
```
|
||||
|
||||
![Image with no selection][19]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
随机移动像素:
|
||||
|
||||
|
||||
```
|
||||
`(plug-in-spread RUN-NONINTERACTIVE image drawable spread-amount spread-amount)`
|
||||
```
|
||||
|
||||
![Image with pixels moved around][20]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
反转图像颜色:
|
||||
|
||||
|
||||
```
|
||||
`(gimp-drawable-invert drawable TRUE)`
|
||||
```
|
||||
|
||||
![Image with pixels moved around][21]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
随机化像素:
|
||||
|
||||
|
||||
```
|
||||
`(plug-in-randomize-hurl RUN-NONINTERACTIVE image drawable percentage 1 TRUE 0)`
|
||||
```
|
||||
|
||||
![Image with pixels moved around][22]
|
||||
|
||||
(Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
将图像保存到新文件:
|
||||
|
||||
|
||||
```
|
||||
`(gimp-file-save RUN-NONINTERACTIVE image drawable new-filename new-filename)`
|
||||
```
|
||||
|
||||
![Equations of the Fourier transform and its inverse][23]
|
||||
|
||||
傅立叶变换方程 (Cristiano Fontana, [CC BY-SA 4.0][4])
|
||||
|
||||
### 以批处理模式运行脚本
|
||||
|
||||
现在您知道了脚本的功能,可以在批处理模式下运行它:
|
||||
|
||||
|
||||
```
|
||||
`gimp -i -b '(chalk "Fourier.png" 2 4 3)' -b '(gimp-quit 0)'`
|
||||
```
|
||||
|
||||
在运行 `chalk` 函数之后,脚本通过第二个 `-b` 选项调用 `gimp-quit` 告诉 GIMP 退出。
|
||||
|
||||
### 了解更多
|
||||
|
||||
本教程向您展示了如何开始使用 GIMP 的内置脚本功能,并介绍了 GIMP 的 Scheme 实现:Script-Fu。如果您想更进一步,建议您查看官方文档及其[入门教程][9]。如果您不熟悉 Scheme 或 Lisp,一开始的语法可能有点吓人,但我还是建议您尝试一下,它可能会给您带来惊喜。
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/1/gimp-scripting
|
||||
|
||||
作者:[Cristiano L. Fontana][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[amwps290](https://github.com/amwps290)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/cristianofontana
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
|
||||
[2]: https://www.gimp.org/
|
||||
[3]: https://opensource.com/sites/default/files/uploads/fourier.png (Fourier transform equations)
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://docs.gimp.org/en/gimp-filters-script-fu.html
|
||||
[6]: https://docs.gimp.org/en/gimp-concepts-script-fu.html
|
||||
[7]: https://en.wikipedia.org/wiki/Scheme_(programming_language)
|
||||
[8]: https://docs.gimp.org/en/gimp-filters-python-fu.html
|
||||
[9]: https://docs.gimp.org/en/gimp-using-script-fu-tutorial.html
|
||||
[10]: https://en.wikipedia.org/wiki/Lisp_%28programming_language%29
|
||||
[11]: https://en.wikipedia.org/wiki/Polish_notation
|
||||
[12]: https://xkcd.com/297/
|
||||
[13]: https://opensource.com/sites/default/files/uploads/procedure_browser.png (GIMP Procedure Browser)
|
||||
[14]: https://opensource.com/sites/default/files/uploads/gui01_image.png (GUI with the displayed image)
|
||||
[15]: https://opensource.com/sites/default/files/uploads/console01_variables.png (Script-Fu console)
|
||||
[16]: https://opensource.com/sites/default/files/uploads/gui02_selected.png (Image with the selected color)
|
||||
[17]: https://opensource.com/sites/default/files/uploads/gui03_grow.png (Image with the selected color)
|
||||
[18]: https://opensource.com/sites/default/files/uploads/gui04_fill.png (Image with the selection filled with black)
|
||||
[19]: https://opensource.com/sites/default/files/uploads/gui05_no_selection.png (Image with no selection)
|
||||
[20]: https://opensource.com/sites/default/files/uploads/gui06_spread.png (Image with pixels moved around)
|
||||
[21]: https://opensource.com/sites/default/files/uploads/gui07_invert.png (Image with pixels moved around)
|
||||
[22]: https://opensource.com/sites/default/files/uploads/gui08_hurl.png (Image with pixels moved around)
|
||||
[23]: https://opensource.com/sites/default/files/uploads/modified_fourier.png (Equations of the Fourier transform and its inverse)
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Run a Shell Script in Linux [Essentials Explained for Beginners])
|
||||
[#]: via: (https://itsfoss.com/run-shell-script-linux/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
如何在 Linux 中运行一个 Shell 脚本 [初学者必知]
|
||||
======
|
||||
|
||||
在 Linux 中有两种运行 shell 脚本的方法。你可以使用:
|
||||
|
||||
```
|
||||
bash script.sh
|
||||
```
|
||||
|
||||
或者,你可以像这样执行 shell 脚本:
|
||||
|
||||
```
|
||||
./script.sh
|
||||
```
|
||||
|
||||
这看起来可能很简单,但光看命令解释不了什么。别担心,我将通过示例来做必要的解释,以便你理解为什么在运行 shell 脚本时要使用这种特定的语法格式。
|
||||
|
||||
我将使用这个只有一行的 shell 脚本,让要解释的事情变得尽可能简单:
|
||||
|
||||
```
|
||||
abhishek@itsfoss:~/Scripts$ cat hello.sh
|
||||
|
||||
echo "Hello World!"
|
||||
```
|
||||
|
||||
### 方法 1: 通过将文件作为参数传递给 shell 以运行 shell 脚本
|
||||
|
||||
第一种方法涉及将脚本文件的名称作为参数传递给 shell 。
|
||||
|
||||
考虑到 bash 是默认 shell,你可以像这样运行一个脚本:
|
||||
|
||||
```
|
||||
bash hello.sh
|
||||
```
|
||||
|
||||
你知道这种方法的优点吗?**你的脚本不需要执行权限**。对于快速而简单的任务来说非常方便。
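可以这样验证这一点(脚本内容与上文相同;最后一行演示在没有执行权限时直接执行会失败):

```shell
# 创建脚本,并确保它没有执行权限
printf 'echo "Hello World!"\n' > hello.sh
chmod -x hello.sh

# 作为参数传给 bash:无需执行权限即可运行
bash hello.sh

# 直接执行则会因权限不足而失败
./hello.sh || echo "没有执行权限,无法直接运行"
```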
|
||||
|
||||
![在 Linux 中运行一个 Shell 脚本][1]
|
||||
|
||||
如果你还不熟悉,我建议你 [阅读我的 Linux 文件权限详细指南][2] 。
|
||||
|
||||
记住,它需要是一个 shell 脚本,你才能将其作为参数传递。shell 脚本是由命令组成的,如果你传入一个普通的文本文件,它将会报告错误的命令。
|
||||
|
||||
![运行一个文本文件为脚本][3]
|
||||
|
||||
在这种方法中,**你明确地指定了想使用 bash 作为脚本的解释器**。
|
||||
|
||||
Shell 只是一个程序,而 bash 只是它的一种实现。还有其它的 shell 程序,像 ksh、[zsh][4] 等等。如果你安装了其它 shell,也可以用它们来代替 bash。
|
||||
|
||||
例如,我已安装了 zsh ,并使用它来运行相同的脚本:
|
||||
|
||||
![使用 Zsh 来执行 Shell 脚本][5]
|
||||
|
||||
**建议阅读:**
|
||||
|
||||
![][6]
|
||||
|
||||
#### [如何在 Linux 终端中一次运行多个 Linux 命令 [初学者必知提示]][7]
|
||||
|
||||
### 方法 2: 通过具体指定 shell 脚本的路径来执行脚本
|
||||
|
||||
另一种运行 shell 脚本的方法是提供它的路径。但要做到这一点,你的文件必须是可执行的。否则,当你尝试执行脚本时,将会得到“拒绝访问”错误。
|
||||
|
||||
因此,你首先需要确保你的脚本有可执行权限。你可以 [使用 chmod 命令][8] 来给予你自己脚本的这种权限,像这样:
|
||||
|
||||
```
|
||||
chmod u+x script.sh
|
||||
```
|
||||
|
||||
当你的脚本可执行之后,你需要做的就是输入文件的名称及其绝对或相对路径。大多数情况下,你和脚本在同一个目录中,因此可以像这样运行它:
|
||||
|
||||
```
|
||||
./script.sh
|
||||
```
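把这两步连起来的一个完整演示(脚本内容为上文的示例):

```shell
# 创建脚本,赋予所有者执行权限,然后直接运行
printf '#!/bin/bash\necho "Hello World!"\n' > script.sh
chmod u+x script.sh
./script.sh
```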
|
||||
|
||||
如果你不与你的脚本在同一个目录中,你可以具体指定脚本的绝对路径或相对路径:
|
||||
|
||||
![在其它的目录中运行 Shell 脚本][9]
|
||||
|
||||
#### 在脚本前的这个 ./ 是非常重要的。(当你与脚本在同一个目录中)
|
||||
|
||||
![][10]
|
||||
|
||||
为什么当你就在脚本所在的目录中,却不能只用脚本名称来运行?这是因为你的 Linux 系统只会在 PATH 变量指定的几个目录中查找可执行文件来运行。
|
||||
|
||||
这里是我的系统的 PATH 变量的值:
|
||||
|
||||
```
|
||||
abhishek@itsfoss:~$ echo $PATH
|
||||
/home/abhishek/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
|
||||
```
|
||||
|
||||
这意味着在下面目录中具有可执行权限的任意文件都可以在系统的任何位置运行:
|
||||
|
||||
* /home/abhishek/.local/bin
|
||||
* /usr/local/sbin
|
||||
* /usr/local/bin
|
||||
* /usr/sbin
|
||||
* /usr/bin
|
||||
* /sbin
|
||||
* /bin
|
||||
* /usr/games
|
||||
* /usr/local/games
|
||||
* /snap/bin
|
||||
|
||||
|
||||
|
||||
Linux 命令(像 ls、cat 等)的二进制文件或可执行文件都位于这些目录之一。这就是为什么你可以在系统的任何位置通过名称来运行这些命令。看,ls 命令就位于 /usr/bin 目录中。
|
||||
|
||||
![][11]
|
||||
|
||||
当你使用脚本名称而不指定其绝对或相对路径时,系统无法在 PATH 变量包含的目录中找到它。
|
||||
|
||||
#### 为什么大多数 shell 脚本在其头部包含 #! /bin/bash ?
|
||||
|
||||
记得我提过 shell 只是一个程序,并且有不同实现的 shell 程序。
|
||||
|
||||
当你使用 #! /bin/bash 时,你是具体指定 bash 作为解释器来运行脚本。如果你不这样做,并且以 ./script.sh 的方式运行一个脚本,它通常会在你正在运行的 shell 中运行。
|
||||
|
||||
有问题吗?可能会有。看看,大多数的 shell 语法是大多数种类的 shell 中通用的,但是有一些语法可能会有所不同。
|
||||
|
||||
例如,bash 和 zsh 中数组的行为是不同的:在 zsh 中,数组索引从 1 开始,而不是从 0 开始。
|
||||
|
||||
![Bash Vs Zsh][12]
|
||||
|
||||
使用 #! /bin/bash 标示该文件是 bash 脚本,并且无论系统上正在使用哪种 shell,都应该使用 bash 作为脚本的解释器来运行。如果你使用了 zsh 的特殊语法,可以在脚本的第一行添加 #! /bin/zsh,以标示其为 zsh 脚本。
|
||||
|
||||
在 #! 和 /bin/bash 之间的空格是没有影响的。你也可以使用 #!/bin/bash 。
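
下面的小演示可以直观地看到 shebang 的效果(脚本路径为假设的示例):

```shell
# 生成一个带 #!/bin/bash 的脚本并直接执行
printf '#!/bin/bash\necho "bash version: $BASH_VERSION"\n' > /tmp/shebang-demo.sh
chmod u+x /tmp/shebang-demo.sh
/tmp/shebang-demo.sh   # 打印解释它的 bash 的版本号
```

无论你从哪个 shell 中启动它,打印的都是 bash 的版本号,因为内核会按第一行的 shebang 调用 /bin/bash 来解释这个脚本。
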
|
||||
|
||||
### 它有帮助吗?

我希望这篇文章能够增加你的 Linux 知识。如果你还有问题或建议,请留下评论。

专家用户可能仍会挑出我遗漏的东西,但对这类面向初学者的话题来说,要在提供足够的信息与避免细节过多或过少之间取得平衡并不容易。

如果你对学习 bash 脚本感兴趣,在我们专注于系统管理的网站 [Linux Handbook][14] 上,有一个 [完整的 Bash 初学者系列][13]。如果你想要,你也可以 [购买附带练习题的电子书][15],以支持 Linux Handbook。

--------------------------------------------------------------------------------

via: https://itsfoss.com/run-shell-script-linux/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/run-a-shell-script-linux.png?resize=741%2C329&ssl=1
[2]: https://linuxhandbook.com/linux-file-permissions/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/running-text-file-as-script.png?resize=741%2C329&ssl=1
[4]: https://www.zsh.org
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/execute-shell-script-with-zsh.png?resize=741%2C253&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/run-multiple-commands-in-linux.png?fit=800%2C450&ssl=1
[7]: https://itsfoss.com/run-multiple-commands-linux/
[8]: https://linuxhandbook.com/chmod-command/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/running-shell-script-in-other-directory.png?resize=795%2C272&ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/executing-shell-scripts-linux.png?resize=800%2C450&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/locating-command-linux.png?resize=795%2C272&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/bash-vs-zsh.png?resize=795%2C386&ssl=1
[13]: https://linuxhandbook.com/tag/bash-beginner/
[14]: https://linuxhandbook.com
[15]: https://www.buymeacoffee.com/linuxhandbook

@ -0,0 +1,184 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage containers with Podman Compose)
[#]: via: (https://fedoramagazine.org/manage-containers-with-podman-compose/)
[#]: author: (Mehdi Haghgoo https://fedoramagazine.org/author/powergame/)

用 Podman Compose 管理容器
======

![][1]

容器很棒,让你可以将应用连同其依赖项一起打包,并在任何地方运行。从 2013 年的 Docker 开始,容器已经让软件开发者的生活变得更加轻松。

Docker 的一个缺点是它有一个中央守护进程,并且以 root 用户的身份运行,这对安全有影响。而这正是 Podman 的用武之地。Podman 是一个 [无守护进程的容器引擎][2],用于在你的 Linux 系统上以 root 或免 root(rootless)模式开发、管理和运行 OCI 容器。

Fedora Magazine 上还有其他文章可以帮助你进一步了解 Podman,下面是其中两篇:

* [在 Fedora 上使用 Podman 的 Pod][3]
* [在 Fedora 上使用具有 Capabilities 的 Podman][4]

如果你使用过 Docker,你很可能也知道 Docker Compose,它是一个用于编排多个可能相互依赖的容器的工具。要了解更多关于 Docker Compose 的信息,请看它的 [文档][5]。

### 什么是 Podman Compose?

[Podman Compose][6] 是一个旨在成为 Docker Compose 替代品的项目,无需对 docker-compose.yaml 文件做任何修改。由于 Podman Compose 基于 pod 工作,最好先看一下 pod 的最新定义:

> 一个 _Pod_(就像一群鲸鱼或一个豌豆荚)是由一个或多个 [容器][7] 组成的组,具有共享的存储/网络资源,以及如何运行容器的规范。
>
> [Pods - Kubernetes 文档][8]

Podman Compose 的基本思路是:它读取 _docker-compose.yaml_ 文件中定义的服务,并为每个服务创建一个容器。Docker Compose 和 Podman Compose 的一个主要区别是,Podman Compose 会把整个项目的容器添加到同一个 pod 中,而且所有容器共享同一个网络。它甚至用和 Docker Compose 一样的方式命名容器,并在创建容器时使用 _--add-host_ 标志,你会在后面的例子中看到。

### 安装

Podman Compose 的完整安装说明可以在 [项目页面][6] 上找到,有几种安装方法。要安装最新的开发版本,使用以下命令:

```
pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
```

确保你也安装了 [Podman][9],因为你同样需要它。在 Fedora 上,使用下面的命令来安装 Podman:

```
sudo dnf install podman
```

### 例子:用 Podman Compose 启动一个 WordPress 网站

想象一下,你的 _docker-compose.yaml_ 文件放在一个叫 _wpsite_ 的文件夹里。一个典型的 WordPress 网站的 _docker-compose.yaml_(或 _docker-compose.yml_)文件是这样的:

```
version: "3.8"
services:
  web:
    image: wordpress
    restart: always
    volumes:
      - wordpress:/var/www/html
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: magazine
      WORDPRESS_DB_NAME: magazine
      WORDPRESS_DB_PASSWORD: 1maGazine!
      WORDPRESS_TABLE_PREFIX: cz
      WORDPRESS_DEBUG: 0
    depends_on:
      - db
    networks:
      - wpnet
  db:
    image: mariadb:10.5
    restart: always
    ports:
      - 6603:3306
    volumes:
      - wpdbvol:/var/lib/mysql
    environment:
      MYSQL_DATABASE: magazine
      MYSQL_USER: magazine
      MYSQL_PASSWORD: 1maGazine!
      MYSQL_ROOT_PASSWORD: 1maGazine!
    networks:
      - wpnet

volumes:
  wordpress: {}
  wpdbvol: {}

networks:
  wpnet: {}
```

如果你用过 Docker,就会知道可以运行 _docker-compose up_ 来启动这些服务。Docker Compose 会创建两个名为 _wpsite_web_1_ 和 _wpsite_db_1_ 的容器,并将它们连接到一个名为 _wpsite_wpnet_ 的网络。

现在,看看当你在项目目录下运行 _podman-compose up_ 时会发生什么。首先,会创建一个以执行命令所在目录命名的 pod。接下来,它查找 YAML 文件中定义的所有命名卷,如果卷不存在,就创建它们。然后,为 YAML 文件 _services_ 部分列出的每个服务创建一个容器,并添加到这个 pod 中。

容器的命名方式与 Docker Compose 类似。例如,为你的 web 服务创建一个名为 _wpsite_web_1_ 的容器。Podman Compose 还为每个命名的容器添加了 localhost 别名。之后,尽管这些容器并不像在 Docker 中那样位于同一个桥接网络上,它们仍然可以通过名字互相解析。这是通过在创建容器时使用 _--add-host_ 选项实现的,例如 _--add-host web:localhost_。

请注意,_docker-compose.yaml_ 为 web 服务包含了一个从主机 8080 端口到容器 80 端口的端口转发。现在你应该可以通过浏览器访问新的 WordPress 实例了,地址为 _<http://localhost:8080>_。

![WordPress Dashboard][10]

### 控制 pod 和容器

要查看正在运行的容器,使用 _podman ps_,它可以显示 web 和数据库容器,以及 pod 中的基础容器(infra 容器):

```
CONTAINER ID  IMAGE                               COMMAND               CREATED      STATUS          PORTS                                         NAMES
a364a8d7cec7  docker.io/library/wordpress:latest  apache2-foregroun...  2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_web_1
c447024aa104  docker.io/library/mariadb:10.5      mysqld                2 hours ago  Up 2 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:6603->3306/tcp  wpsite_db_1
12b1e3418e3e  k8s.gcr.io/pause:3.2
```

你也可以验证 Podman 为这个项目创建了一个 pod,它以你执行命令所在的文件夹命名:

```
POD ID        NAME    STATUS    CREATED      INFRA ID      # OF CONTAINERS
8a08a3a7773e  wpsite  Degraded  2 hours ago  12b1e3418e3e  3
```

要停止这些容器,在另一个命令窗口中输入以下命令:

```
podman-compose down
```

你也可以通过停止并删除 pod 来达到同样的效果。这实质上是先停止并移除 pod 中的所有容器,然后再删除这个 pod。所以,同样的事情也可以通过这两条命令来实现:

```
podman pod stop podname
podman pod rm podname
```

请注意,这不会删除你在 _docker-compose.yaml_ 中定义的卷。所以,你的 WordPress 网站的状态被保存了下来,你可以通过再次运行下面的命令来恢复它:

```
podman-compose up
```

总之,如果你是 Podman 的粉丝,并且用 Podman 进行容器方面的工作,那么你可以使用 Podman Compose 来管理开发和生产环境中的容器。

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/manage-containers-with-podman-compose/

作者:[Mehdi Haghgoo][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/powergame/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/01/podman-compose-1-816x345.jpg
[2]: https://podman.io
[3]: https://fedoramagazine.org/podman-pods-fedora-containers/
[4]: https://fedoramagazine.org/podman-with-capabilities-on-fedora/
[5]: https://docs.docker.com/compose/
[6]: https://github.com/containers/podman-compose
[7]: https://kubernetes.io/docs/concepts/containers/
[8]: https://kubernetes.io/docs/concepts/workloads/pods/
[9]: https://podman.io/getting-started/installation
[10]: https://fedoramagazine.org/wp-content/uploads/2021/01/Screenshot-from-2021-01-08-06-27-29-1024x767.png

@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Filmulator is a Simple, Open Source, Raw Image Editor for Linux Desktop)
[#]: via: (https://itsfoss.com/filmulator/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Filmulator:一个简单、开源的 Linux 桌面 raw 图像编辑器
======

_**简介:Filmulator 是一个开源的、带有照片库管理功能的 raw 照片编辑应用,侧重于简单、易用和简化的工作流程。**_

### Filmulator:适用于 Linux(和 Windows)的 raw 图像编辑器

[Linux 上有不少 raw 照片编辑器][1],[Filmulator][2] 就是其中之一。Filmulator 的目标是只提供必备的要素,让 raw 图像编辑变得简单。它还增加了照片库管理功能,如果你正在为相机图像寻找一个不错的应用,这是一个加分项。

对于不了解 raw 的人来说,[raw 图像文件][3] 是一种只经过最少处理、未经压缩的数字文件。专业摄影师更喜欢用 raw 格式拍摄照片并自行处理;而普通人用智能手机拍摄的照片,通常会被压缩为 JPEG 格式或加上滤镜。

让我们来看看 Filmulator 编辑器都提供了哪些功能。

### Filmulator 的功能

![Filmulator interface][4]

Filmulator 宣称,它并不是典型的“胶片效果滤镜”,那只是复制胶片的外在特征。相反,Filmulator 从根本上再现了胶片魅力的来源:显影过程。

它模拟了胶片的显影过程:从胶片的“曝光”,到每个像素内“银晶”的生长,再到“显影剂”在相邻像素之间、以及与显影槽中大量显影剂之间的扩散。

Filmulator 的开发者表示,这种模拟带来了以下好处:

* 大的明亮区域变得更暗,压缩了输出动态范围。
* 小的明亮区域使周围环境变暗,增强了局部对比度。
* 在明亮区域,饱和度得到增强,有助于保留蓝天、明亮肤色和日落的色彩。
* 在极度饱和的区域,亮度会被减弱,有助于保留细节,例如花朵。

下面是一张 raw 图像经 Filmulator 处理前后的对比:色彩以自然的方式得到增强,而不会产生色彩剪切。

![][5]

### 在 Ubuntu/Linux 上安装 Filmulator

Filmulator 提供了 AppImage,因此你可以在 Linux 上轻松使用它。[AppImage 文件][6] 用起来真的很简单:下载后赋予它可执行权限,然后双击运行即可。

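
如果你更喜欢命令行,对应的操作大致如下(文件名是假设的示例;这里用一个空文件来演示权限操作,实际使用时请对你下载到的 AppImage 文件执行同样的 chmod):

```shell
# 用一个空文件模拟下载好的 AppImage(真实场景中这是你下载的文件)
touch Filmulator.AppImage

# 赋予当前用户可执行权限
chmod u+x Filmulator.AppImage

# 确认权限已生效;之后就可以用 ./Filmulator.AppImage 启动它
[ -x Filmulator.AppImage ] && echo "executable"
```
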
[下载 Filmulator(Linux 版)][7]

它也提供了 Windows 版本。除此之外,你还可以随时前往 [它的 GitHub 仓库][8] 查看它的源代码。

还有一份 [简短的文档][9] 可以帮助你上手 Filmulator。

### 总结

Filmulator 的设计理念是为每项工作提供最好的工具,而且只提供这一个工具。这意味着牺牲了灵活性,但换来了一个大大简化、精简的用户界面。

我连业余摄影师都算不上,更别说专业摄影师了。我没有单反或其他高端摄影设备,因此无法测试并分享 Filmulator 实用性方面的体验。

如果你有更多处理 raw 图像的经验,请尝试一下 Filmulator,并分享你的意见。它提供的 AppImage 可以让你快速测试,看看它是否符合你的需求。

--------------------------------------------------------------------------------

via: https://itsfoss.com/filmulator/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/raw-image-tools-linux/
[2]: https://filmulator.org/
[3]: https://www.findingtheuniverse.com/what-is-raw-in-photography/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/Filmulate.jpg?resize=799%2C463&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/image-without-filmulator.jpeg?ssl=1
[6]: https://itsfoss.com/use-appimage-linux/
[7]: https://filmulator.org/download/
[8]: https://github.com/CarVac/filmulator-gui
[9]: https://github.com/CarVac/filmulator-gui/wiki

@ -0,0 +1,155 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Paru – A New AUR Helper and Pacman Wrapper Based on Yay)
[#]: via: (https://itsfoss.com/paru-aur-helper/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)

Paru – 基于 Yay 的新 AUR 助手及 Pacman 包装器
======

[用户选择 Arch Linux][1] 或 [基于 Arch 的 Linux 发行版][2] 的主要原因之一,就是 [Arch 用户仓库(AUR)][3]。

遗憾的是,[pacman][4],也就是 Arch 的包管理器,不能以类似官方仓库的方式访问 AUR。AUR 中的包是以 [PKGBUILD][5] 的形式存在的,需要手动构建。

AUR 助手可以自动完成这个过程。毫无疑问,[yay][6] 是最受欢迎和备受青睐的 AUR 助手之一。

最近,yay 的两位开发者之一 [Morganamilo][7] [宣布][8] 退出 yay 的维护工作,开始开发自己的 AUR 助手 [paru][9]。paru 的设计基于 yay,用 [Rust][10] 编写,而 yay 是用 [Go][11] 编写的。

请注意,yay 并没有停止维护,它仍然由 [Jguer][12] 积极维护。他还 [评论][13] 说,paru 可能适合那些追求功能丰富的 AUR 助手的用户,因此我推荐大家尝试一下。

### 安装 Paru AUR 助手

要安装 paru,打开你的终端,逐一输入以下命令:

```
sudo pacman -S --needed base-devel
git clone https://aur.archlinux.org/paru.git
cd paru
makepkg -si
```

现在它已经安装好了,让我们来看看如何使用它。

### 使用 Paru AUR 助手的基本命令

在我看来,这些是 paru 最基本的命令。你可以在它的 [GitHub 官方仓库][9] 中探索更多用法。

* **paru <用户输入>**:搜索并安装 <用户输入>
* **paru**:`paru -Syu` 的别名
* **paru -Sua**:仅升级 AUR 包
* **paru -Qua**:打印可用的 AUR 更新
* **paru -Gc <用户输入>**:打印 <用户输入> 的 AUR 评论

### 充分使用 Paru AUR 助手

你可以在 GitHub 上访问 paru 的 [发布页面][14] 来查看完整的变更历史,也可以在 [首个版本][15] 中查看它相对于 yay 的变化。

#### 在 Paru 中启用颜色

要在 paru 中启用彩色输出,你必须先在 pacman 中启用它。所有的 [配置文件][16] 都在 /etc 目录下。在此例中,我 [使用 Nano 文本编辑器][17],不过你可以选用任何 [基于终端的文本编辑器][18]。

```
sudo nano /etc/pacman.conf
```

打开 pacman 配置文件后,取消 “Color” 一行的注释,即可启用此功能。

![][19]

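
下面是一个示意性的片段,展示取消注释后的样子(具体内容以你系统上的实际文件为准):

```
# /etc/pacman.conf(节选)
[options]
# 取消下面这行的注释以启用彩色输出
Color
```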
#### **反转搜索顺序**

根据你的搜索条件,最相关的包通常会显示在搜索结果的顶部。在 paru 中,你可以反转搜索顺序,使你更容易找到想要的包。

与前面的例子类似,打开 paru 的配置文件:

```
sudo nano /etc/paru.conf
```

取消注释 “BottomUp” 项,然后保存文件。

![][20]

如你所见,顺序被反转了,第一个包出现在了底部。

![][21]

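
供参考,修改后的 paru.conf 大致如下(仅为示意片段,以你系统上的实际文件为准):

```
# /etc/paru.conf(节选)
[options]
# 取消注释后,最相关的搜索结果会显示在列表底部
BottomUp
```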
#### **编辑 PKGBUILD**(面向高级用户)

如果你是一个有经验的 Linux 用户,你可以通过 paru 编辑 AUR 包。要做到这一点,你需要在 paru 配置文件中启用该功能,并设置你选用的文件管理器。

在此例中,我将使用配置文件中的默认值,即 vifm 文件管理器。如果你还没有用过它,可能需要先安装它。

```
sudo pacman -S vifm
sudo nano /etc/paru.conf
```

打开配置文件,如下所示取消注释。

![][22]

让我们回到 [Google Calendar][23] 的 AUR 包,并尝试安装它。系统会提示你审查该软件包,输入 `Y` 并按下回车。

![][24]

从文件管理器中选择 PKGBUILD,然后按下回车查看软件包。

![][25]

你所做的任何改变都将是永久性的,下次升级软件包时,你的改变将与上游软件包合并。

![][26]

### 总结

Paru 是 [AUR 助手家族][27] 的又一个有趣的新成员,前途光明。目前我并不建议你替换掉 yay,因为它还在维护,但一定要试试 paru。你可以把它们两个都安装到系统中,然后得出自己的结论。

要获取最新的 [Linux 新闻][28],请关注我们的社交媒体,以便第一时间了解资讯!

--------------------------------------------------------------------------------

via: https://itsfoss.com/paru-aur-helper/

作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/why-arch-linux/
[2]: https://itsfoss.com/arch-based-linux-distros/
[3]: https://itsfoss.com/aur-arch-linux/
[4]: https://itsfoss.com/pacman-command/
[5]: https://wiki.archlinux.org/index.php/PKGBUILD
[6]: https://github.com/Jguer/yay
[7]: https://github.com/Morganamilo
[8]: https://www.reddit.com/r/archlinux/comments/jjn1c1/paru_v100_and_stepping_away_from_yay/
[9]: https://github.com/Morganamilo/paru
[10]: https://www.rust-lang.org/
[11]: https://golang.org/
[12]: https://github.com/Jguer
[13]: https://aur.archlinux.org/packages/yay/#pinned-788241
[14]: https://github.com/Morganamilo/paru/releases
[15]: https://github.com/Morganamilo/paru/releases/tag/v1.0.0
[16]: https://linuxhandbook.com/linux-directory-structure/#-etc-configuration-files
[17]: https://itsfoss.com/nano-editor-guide/
[18]: https://itsfoss.com/command-line-text-editors-linux/
[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/pacman.conf-color.png?resize=800%2C480&ssl=1
[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru.conf-bottomup.png?resize=800%2C480&ssl=1
[21]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru.conf-bottomup-2.png?resize=800%2C480&ssl=1
[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru.conf-vifm.png?resize=732%2C439&ssl=1
[23]: https://aur.archlinux.org/packages/gcalcli/
[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru-proceed-for-review.png?resize=800%2C480&ssl=1
[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru-proceed-for-review-2.png?resize=800%2C480&ssl=1
[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/paru-proceed-for-review-3.png?resize=800%2C480&ssl=1
[27]: https://itsfoss.com/best-aur-helpers/
[28]: https://news.itsfoss.com/