Merge pull request #4 from LCTT/master

update
This commit is contained in:
wyxplus 2021-03-22 17:15:56 +08:00 committed by GitHub
commit 0a27365ad7
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
26 changed files with 2497 additions and 771 deletions

View File

@ -0,0 +1,152 @@
[#]: collector: "lujun9972"
[#]: translator: "wyxplus"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13215-1.html"
[#]: subject: "Managing processes on Linux with kill and killall"
[#]: via: "https://opensource.com/article/20/1/linux-kill-killall"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
在 Linux 上使用 kill 和 killall 命令来管理进程
======
> 了解如何使用 ps、kill 和 killall 命令来终止进程并回收系统资源。
![](https://img.linux.net.cn/data/attachment/album/202103/18/230625q6g65gz6ugdk8ygr.jpg)
在 Linux 中,每个程序和<ruby>守护程序<rt>daemon</rt></ruby>都是一个“<ruby>进程<rt>process</rt></ruby>”。 大多数进程代表一个正在运行的程序。而另外一些程序可以派生出其他进程,比如说它会侦听某些事件的发生,然后对其做出响应。并且每个进程都需要一定的内存和处理能力。你运行的进程越多,所需的内存和 CPU 使用周期就越多。在老式电脑(例如我使用了 7 年的笔记本电脑)或轻量级计算机(例如树莓派)上,如果你关注过后台运行的进程,就能充分利用你的系统。
你可以使用 `ps` 命令来查看正在运行的进程。你通常会使用 `ps` 命令的参数来显示出更多的输出信息。我喜欢使用 `-e` 参数来查看每个正在运行的进程,以及 `-f` 参数来获得每个进程的全部细节。以下是一些例子:
```
$ ps
PID TTY TIME CMD
88000 pts/0 00:00:00 bash
88052 pts/0 00:00:00 ps
88053 pts/0 00:00:00 head
```
```
$ ps -e | head
PID TTY TIME CMD
1 ? 00:00:50 systemd
2 ? 00:00:00 kthreadd
3 ? 00:00:00 rcu_gp
4 ? 00:00:00 rcu_par_gp
6 ? 00:00:02 kworker/0:0H-events_highpri
9 ? 00:00:00 mm_percpu_wq
10 ? 00:00:01 ksoftirqd/0
11 ? 00:00:12 rcu_sched
12 ? 00:00:00 migration/0
```
```
$ ps -ef | head
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 13:51 ? 00:00:50 /usr/lib/systemd/systemd --switched-root --system --deserialize 36
root 2 0 0 13:51 ? 00:00:00 [kthreadd]
root 3 2 0 13:51 ? 00:00:00 [rcu_gp]
root 4 2 0 13:51 ? 00:00:00 [rcu_par_gp]
root 6 2 0 13:51 ? 00:00:02 [kworker/0:0H-kblockd]
root 9 2 0 13:51 ? 00:00:00 [mm_percpu_wq]
root 10 2 0 13:51 ? 00:00:01 [ksoftirqd/0]
root 11 2 0 13:51 ? 00:00:12 [rcu_sched]
root 12 2 0 13:51 ? 00:00:00 [migration/0]
```
最后的例子显示最多的细节。在每一行,`UID`(用户 ID显示了该进程的所有者。`PID`(进程 ID代表每个进程的数字 ID。`PPID`(父进程 ID表示其父进程的数字 ID。在任何 Unix 系统中,进程是从 1 开始编号的,进程 1 是内核启动后运行的第一个进程。在这里,`systemd` 是第一个进程,它催生了 `kthreadd`,而 `kthreadd` 还创建了其他进程,包括 `rcu_gp`、`rcu_par_gp` 等一系列进程。
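如果只关心某几列,还可以用 `ps` 的 `-o` 选项指定输出字段(`pid`、`ppid` 都是 `ps` 的标准输出列,下面只是一个小演示):

```shell
# 只输出当前 shell 进程的 PID 和 PPID
# 列名后面加 = 可以去掉表头,只留下数字
ps -o pid=,ppid= -p $$
```

输出的第一个数字就是当前 shell 的进程 ID第二个数字是启动它的父进程的 ID。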
### 使用 kill 命令来管理进程
系统会处理大多数后台进程,所以你不需要操心这些进程。你只需要关注那些你所运行的应用创建的进程。虽然许多应用一次只运行一个进程(如音乐播放器、终端模拟器或游戏等),但其他应用则可能创建后台进程。其中一些应用可能当你退出后还在后台运行,以便下次你使用的时候能快速启动。
当我运行 Chromium谷歌 Chrome 浏览器所基于的开源项目)时,进程管理便成了问题。Chromium 在我的笔记本电脑上运行非常吃力,并产生了许多额外的进程。现在我仅打开五个选项卡,就能看到这些 Chromium 进程:
```
$ ps -ef | fgrep chromium
jhall 66221 [...] /usr/lib64/chromium-browser/chromium-browser [...]
jhall 66230 [...] /usr/lib64/chromium-browser/chromium-browser [...]
[...]
jhall 66861 [...] /usr/lib64/chromium-browser/chromium-browser [...]
jhall 67329 65132 0 15:45 pts/0 00:00:00 grep -F chromium
```
我已经省略了一些行,其中有 20 个 Chromium 进程和一个正在搜索 “chromium” 字符串的 `grep` 进程。
```
$ ps -ef | fgrep chromium | wc -l
21
```
但是在我退出 Chromium 之后,这些进程仍旧运行。如何关闭它们并回收这些进程占用的内存和 CPU 呢?
`kill` 命令能让你终止一个进程。在最简单的情况下,你把想终止的进程的 PID 告诉 `kill` 命令。例如,要终止这些进程,我需要对 20 个 Chromium 进程 ID 都执行 `kill` 命令。一种方法是使用命令行获取 Chromium 的 PID另一种方法是针对该列表运行 `kill` 命令:
```
$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}'
66221
66230
66239
66257
66262
66283
66284
66285
66324
66337
66360
66370
66386
66402
66503
66539
66595
66734
66848
66861
69702
$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}' > /tmp/pids
$ kill $(cat /tmp/pids)
```
最后两行是关键。第一个命令行为 Chromium 浏览器生成一个进程 ID 列表。第二个命令行针对该进程 ID 列表运行 `kill` 命令。
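其实也可以把这两步合并成一条管道,省去临时文件。下面用一个 `sleep` 进程演示这个思路(以 `sleep` 代替 Chromium纯属示意

```shell
# 启动一个示例后台进程,代替文中的 Chromium
sleep 300 &
pid=$!

# 一条管道完成:找出 PID 并全部 kill
# grep '[s]leep' 的写法可以避免匹配到 grep 进程自身
ps -ef | grep '[s]leep 300' | awk '{print $2}' | xargs -r kill

# 稍等片刻,确认进程已被终止
sleep 1
kill -0 "$pid" 2>/dev/null && echo "仍在运行" || echo "已终止"
```

其中 `xargs -r` 是 GNU 扩展:当管道没有产生任何 PID 时,就不执行 `kill`。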
### 介绍 killall 命令
一次终止多个进程有个更简单方法,使用 `killall` 命令。你或许可以根据名称猜测出,`killall` 会终止所有与该名字匹配的进程。这意味着我们可以使用此命令来停止所有流氓 Chromium 进程。这很简单:
```
$ killall /usr/lib64/chromium-browser/chromium-browser
```
但是要小心使用 `killall`。该命令能够终止与你所给出名称相匹配的所有进程。这就是为什么我喜欢先使用 `ps -ef` 命令来检查我正在运行的进程,然后针对要停止的命令的准确路径运行 `killall`。
你也可以使用 `-i` 或 `--interactive` 参数,来让 `killall` 在停止每个进程之前提示你。
`killall` 还支持使用 `-o` 或 `--older-than` 参数来查找比特定时间更早的进程。例如,如果你发现了一组已经运行了好几天的恶意进程,这将会很有帮助。又或是,你可以查找比特定时间更晚的进程,例如你最近启动的失控进程。使用 `-y` 或 `--younger-than` 参数来查找这些进程。
### 其他管理进程的方式
进程管理是系统维护重要的一部分。在我作为 Unix 和 Linux 系统管理员的早期职业生涯中,杀死非法作业的能力是保持系统正常运行的关键。如今,你可能不需要亲自在 Linux 上终止流氓进程,但是知道 `kill` 和 `killall` 能够在最终出现问题时为你提供帮助。
你也能寻找其他方式来管理进程。在我这个案例中,我并不需要在退出浏览器后,使用 `kill` 或 `killall` 来终止后台 Chromium 进程。在 Chromium 中有个简单设置就可以进行控制:
![Chromium background processes setting][2]
不过,始终关注系统上正在运行哪些进程,并且在需要的时候进行干预是一个明智之举。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/linux-kill-killall
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[wyxplus](https://github.com/wyxplus)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 "Penguin with green background"
[2]: https://opensource.com/sites/default/files/uploads/chromium-settings-continue-running.png "Chromium background processes setting"

View File

@ -3,71 +3,62 @@
[#]: author: (Kevin O'Brien https://opensource.com/users/ahuka)
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13218-1.html)
在 FreeDOS 中设置你的路径
======
学习你的 FreeDOS 的路径,如何设置它,并且如何使用它。
> 学习 FreeDOS 路径的知识,如何设置它,并且如何使用它。
![查看职业生涯地图][1]
你在开源 [FreeDOS][2] 操作系统中所做的一切工作都是通过命令行完成的。命令行以一个 _提示_ 开始,这是计算机说法的方式,"我准备好了。请给我一些事情来做"。你可以配置你的提示符的外观,但是默认情况下,它是:
你在开源 [FreeDOS][2] 操作系统中所做的一切工作都是通过命令行完成的。命令行以一个 _提示符_ 开始,这是计算机说法的方式,“我准备好了。请给我一些事情来做。”你可以配置你的提示符的外观,但是默认情况下,它是:
```
C:\>
```
从命令行中,你可以做两件事:运行一个内部命令或运行一个程序。外部命令是在你的 `FDOS` 目录中可找到的以单独文件形式存在的程序,以便运行程序包括运行外部命令。它也意味着你使用你的计算机运行应用程序软件来做一些东西。你也可以运行一个批处理文件,但是在这种情况下,你所做的全部工作就变成了运行批处理文件中所列出的一系列命令或程序。
从命令行中,你可以做两件事:运行一个内部命令,或运行一个程序。外部命令是在你的 `FDOS` 目录中以单独文件形式存在的程序,因此运行程序也包括运行外部命令。它也意味着你可以使用计算机运行应用程序软件来做一些事情。你也可以运行一个批处理文件,但是在这种情况下,你所做的全部工作就变成了运行批处理文件中所列出的一系列命令或程序。
### 可执行应用程序文件
FreeDOS 可以运行三种类型的应用程序文件:
1. **COM** 是一个用机器语言写的,且小于 64 KB 的文件。
2. **EXE** 也是一个用机器语言写的文件,但是它可以大于 64 KB 。此外,在 EXE 文件的开头位置有信息,用于告诉 DOS 系统该文件是什么类型的以及如何加载和运行。
2. **EXE** 也是一个用机器语言写的文件,但是它可以大于 64 KB 。此外,在 EXE 文件的开头部分有信息,用于告诉 DOS 系统该文件是什么类型的以及如何加载和运行。
3. **BAT** 是一个使用文本编辑器以 ASCII 文本格式编写的 _批处理文件_ ,其中包含以批处理模式执行的 FreeDOS 命令。这意味着每个命令都会按顺序执行到文件的结尾。
如果你所输入的一个文件名称不能被 FreeDOS 识别为一个内部命令或一个程序,你将收到一个错误消息 _Bad command or filename_ 。如果你看到这个错误,它意味着会是下面三种情况中的其中一种:
如果你所输入的一个文件名称不能被 FreeDOS 识别为一个内部命令或一个程序,你将收到一个错误消息 “Bad command or filename”。如果你看到这个错误它意味着是下面三种情况之一
1. 由于某些原因,你所给予的名称是错误的。你可能拼错了文件名称,或者你可能正在使用错误的命令名称。检查名称和拼写,并再次尝试。
2. 可能你正在尝试运行的程序并没有安装在计算机上。确认它已经安装了。
2. 可能你正在尝试运行的程序并没有安装在计算机上。确认它已经安装了。
3. 文件确实存在,但是 FreeDOS 不知道在哪里可以找到它。
在清单上的最后一项就是这篇文章的主题,它被称为 `PATH`。如果你已经习惯于使用 Linux 或 Unix ,你可能已经理解 [PATH 变量][3] 的概念。如果你是命令行的新手,那么路径是一个非常重要的足以让你舒适的东西。
在清单上的最后一项就是这篇文章的主题,它被称为路径。如果你已经习惯于使用 Linux 或 Unix你可能已经理解 [PATH 变量][3] 的概念。如果你是命令行的新手,那么路径是一个你非常需要熟悉的重要概念。
### 路径
当你输入一个可执行应用程序文件的名称时FreeDOS 必须能找到它。FreeDOS 会在一个具体指定的位置层次结构中查找文件:
1. 首先,它查找当前驱动器的活动目录(称为 _工作目录_)。如果你正在目录 `C:\FDOS` 中,接着,你输入名称 `FOOBAR.EXE`FreeDOS 将在 `C:\FDOS` 中查找带有这个名称的文件。你甚至不需要输入完整的名称。如果你输入 `FOOBAR`FreeDOS 将查找任何带有这个名称的可执行文件,不管它是 `FOOBAR.EXE`、`FOOBAR.COM`,还是 `FOOBAR.BAT`。只要 FreeDOS 能找到一个匹配该名称的文件,它就会运行该可执行文件。
2. 如果 FreeDOS 不能找到一个你所输入名称的文件,它将查询被称为 `PATH` 的一些东西。每当 DOS 不能在当前活动命令中找到一个文件时,指示 DOS 检查这个列表中目录。
你可以随时使用 `PATH` 命令来查看你的计算机的路径。只需要在 FreeDOS 提示符中输入 `path` FreeDOS 就会返回你的路径设置:
2. 如果 FreeDOS 不能找到你所输入名称的文件,它将查询被称为 `PATH` 的路径列表。每当 DOS 不能在当前活动目录中找到文件时,就会依次检查这个列表中的目录。
你可以随时使用 `path` 命令来查看你的计算机的路径。只需要在 FreeDOS 提示符中输入 `path`FreeDOS 就会返回你的路径设置:
```
C:\>path
PATH=C:\FDOS\BIN
```
第一行是提示和命令,第二行是计算机返回的东西。你可以看到 DOS 第一个查看的位置就是 `FDOS\BIN` ,它位于 `C` 驱动器上。如果你想更改你的路径,你可以输入一个 path 命令以及你想使用的新路径:
第一行是提示符和命令,第二行是计算机返回的东西。你可以看到 DOS 第一个查看的位置就是位于 `C` 驱动器上的 `FDOS\BIN`。如果你想更改你的路径,你可以输入一个 `path` 命令以及你想使用的新路径:
```
C:\>path=C:\HOME\BIN;C:\FDOS\BIN
```
在这个示例中,我设置我的路径到我个人的 `BIN` 文件夹which I keep in a custom directory called `HOME`, and then to `FDOS\BIN`。现在,当你检查你的路径时:
在这个示例中,我设置我的路径到我个人的 `BIN` 文件夹,我把它放在一个叫 `HOME` 的自定义目录中,然后再设置为 `FDOS\BIN`。现在,当你检查你的路径时:
```
C:\>path
@ -76,7 +67,7 @@ PATH=C:\HOME\BIN;C:\FDOS\BIN
路径设置是按所列目录的顺序处理的。
你可能会注意到有一些字符是小写的有一些字符是大写的。你使用哪一种都真的不重要。FreeDOS 是不区分大小写的并且把所有的东西都作为大写字母对待。在内部FreeDOS 使用所有的大写字母,这就是为什么你看到来自你命令的输出都是大写字母的原因。如果你以小写字母的形式输入命令和文件名称,在一个转换器将自动转换它们为大写字母后,它们将被执行。
你可能会注意到有一些字符是小写的有一些字符是大写的。你使用哪一种都真的不重要。FreeDOS 是不区分大小写的并且把所有的东西都作为大写字母对待。在内部FreeDOS 使用的全是大写字母,这就是为什么你看到来自你命令的输出都是大写字母的原因。如果你以小写字母的形式输入命令和文件名称,在一个转换器将自动转换它们为大写字母后,它们将被执行。
输入一个新的路径来替换先前设置的路径。
@ -84,14 +75,12 @@ PATH=C:\HOME\BIN;C:\FDOS\BIN
你可能遇到的下一个问题的是 FreeDOS 默认使用的第一个路径来自何处。这与其它一些重要的设置一起定义在你的 `C` 驱动器的根目录下的 `AUTOEXEC.BAT` 文件中。这是一个批处理文件,它在你启动 FreeDOS 时会自动执行(由此得名)。你可以使用 FreeDOS 程序 `EDIT` 来编辑这个文件。为查看或编辑这个文件的内容,输入下面的命令:
```
C:\>edit autoexec.bat
```
这一行出现在顶部附近:
```
SET PATH=%dosdir%\BIN
```
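如果你想让启动时的默认路径在 `%dosdir%\BIN` 之外再包含你自己的目录,可以把这一行改成类似下面的样子(其中 `C:\HOME\BIN` 只是一个假设的示例目录):

```
SET PATH=%dosdir%\BIN;C:\HOME\BIN
```

这样每次启动 FreeDOS 时,这两个目录都会自动出现在路径中,不必再手动运行 `path` 命令来设置。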
@ -100,26 +89,22 @@ SET PATH=%dosdir%\BIN
在你查看 `AUTOEXEC.BAT` 后,你可以通过依次按下面的按键来退出 EDIT 应用程序:
1. Alt
2. f
3. x
1. `Alt`
2. `f`
3. `x`
你也可以使用键盘快捷键 **Alt** + **X**
你也可以使用键盘快捷键 `Alt+X`。
### 使用完整的路径
如果你在你的路径中忘记包含 `C:\FDOS\BIN` ,那么你将不能快速访问存储在这里的任何应用程序,因为 FreeDOS 不知道从哪里找到它们。例如,假设我设置我的路径到我个人应用程序集合:
```
C:\>path=C:\HOME\BIN
```
内置在命令行中的命令仍然能正常工作:
```
C:\>cd HOME
C:\HOME>dir
@ -132,7 +117,6 @@ DND
不过,外部的命令将不能运行:
```
C:\HOME\ARTICLES>BZIP2 -c example.txt
Bad command or filename - "BZIP2"
@ -140,14 +124,13 @@ Bad command or filename - "BZIP2"
通过提供命令的 _完整路径_ ,你总是可以执行一个存在于你的系统上但不在你的路径中的命令:
```
C:\HOME\ARTICLES>C:\FDOS\BIN\BZIP2 -c example.txt
C:\HOME\ARTICLES>DIR
example.txb
```
你可以使用同样的方法从外部媒体或其它目录执行应用程序。
你可以使用同样的方法从外部介质或其它目录执行应用程序。
### FreeDOS 路径
@ -157,9 +140,7 @@ example.txb
现在,你知道如何在 FreeDOS 中管理你的路径,你能够以最适合你的方式执行命令和维护你的工作环境。
* * *
_感谢 [DOS 课程 5: 路径][4] (在 CC BY-SA 4.0 协议下发布) 中的一些信息。_
_致谢 [DOS 课程 5: 路径][4] (在 CC BY-SA 4.0 协议下发布) 为本文提供的一些信息。_
--------------------------------------------------------------------------------
@ -168,7 +149,7 @@ via: https://opensource.com/article/21/2/path-freedos
作者:[Kevin O'Brien][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,59 @@
[#]: subject: (4 new open source licenses)
[#]: via: (https://opensource.com/article/21/2/osi-licenses-cal-cern-ohl)
[#]: author: (Pam Chestek https://opensource.com/users/pchestek)
[#]: collector: (lujun9972)
[#]: translator: (wyxplus)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13224-1.html)
四个新式开源许可证
======
> 让我们来看看 OSI 最新批准的加密自治许可证和 CERN 开源硬件许可协议。
![](https://img.linux.net.cn/data/attachment/album/202103/21/221014mw8lhxox0kkjk04z.jpg)
作为<ruby>[开源定义][2]<rt>Open Source Definition</rt></ruby>OSD的管理者<ruby>[开源促进会][3]<rt>Open Source Initiative</rt></ruby>OSI20 年来一直在批准“开源”许可证。这些许可证是开源软件生态系统的基础,可确保每个人都可以使用、改进和共享软件。当一个许可证获批为“开源”时,是因为 OSI 认为该许可证可以促进相互的协作和共享,从而使得每个参与开源生态的人获益。
在过去的 20 年里世界发生了翻天覆地的变化。现如今软件以新的甚至是无法想象的方式在被使用。OSI 已经预料到曾经被人们所熟知的开源许可证现已无法满足如今的要求。因此许可证管理者已经加强了工作为更广泛的用途提交了几个新的许可证。OSI 所面临的挑战是在评估这些新的许可证概念是否会继续推动共享和合作,是否被值得称为“开源”许可证,最终 OSI 批准了一些用于特殊领域的新式许可证。
### 四个新式许可证
第一个是<ruby>[加密自治许可证][4]<rt>Cryptographic Autonomy License</rt></ruby>CAL。该许可证是为分布式密码应用程序而设计的。此许可证所解决的问题是现有的开源许可证无法保证开放性因为如果一个对等体没有与其他对等体共享数据的义务它就有可能损害网络的运行。因此除了是一个强有力的版权保护许可外CAL 还包括向第三方提供独立使用和修改软件所需的权限和资料的义务,而不会让第三方有数据或功能的损失。
随着越来越多的人使用加密结构进行点对点共享,那么更多的开发人员发现自己需要诸如 CAL 之类的法律工具也就不足为奇了。 OSI 的两个邮件列表 License-Discuss 和 License-Review 上的社区,讨论了拟议的新开源许可证,并询问了有关此许可证的诸多问题。我们希望由此产生的许可证清晰易懂,并希望对其他开源从业者有所裨益。
接下来是欧洲核子研究组织CERN提交的 CERN <ruby>开放硬件许可证<rt>Open Hardware Licence</rt></ruby>OHL系列许可证以供审议。它包括三个许可证其主要用于开放硬件这是一个与开源软件相似的开放领域但有其自身的挑战和细微差别。硬件和软件之间的界线现已变得相当模糊因此应用单独的硬件和软件许可证变得越来越困难。CERN 制定了一个可以确保硬件和软件自由的许可证。
OSI 可能在开始时就没考虑将开源硬件许可证添加到其开源许可证列表中,但是世界早已发生变革。因此,尽管 CERN 许可证中的措词涵盖了硬件术语,但它也符合 OSI 认可的所有开源软件许可证的条件。
CERN 开源硬件许可证包括一个 [宽松许可证][5]、一个 [弱互惠许可证][6] 和一个 [强互惠许可证][7]。最近,该许可证已被一个国际研究项目采用,该项目正在制造可用于 COVID-19 患者的简单、易于生产的呼吸机。
### 了解更多
CAL 和 CERN OHL 许可证是针对特殊用途的,并且 OSI 不建议把它们用于其它领域。但是 OSI 想知道这些许可证是否会按预期发展,从而有助于在较新的计算机领域中培育出健壮的开源生态。
可以从 OSI 获得关于 [许可证批准过程][8] 的更多信息。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/osi-licenses-cal-cern-ohl
作者:[Pam Chestek][a]
选题:[lujun9972][b]
译者:[wyxplus](https://github.com/wyxplus)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pchestek
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_lawdotgov3.png?itok=e4eFKe0l "Law books in a library"
[2]: https://opensource.org/osd
[3]: https://opensource.org/
[4]: https://opensource.org/licenses/CAL-1.0
[5]: https://opensource.org/CERN-OHL-P
[6]: https://opensource.org/CERN-OHL-W
[7]: https://opensource.org/CERN-OHL-S
[8]: https://opensource.org/approval

View File

@ -3,165 +3,149 @@
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13219-1.html)
你可以在命令行上用 LibreOffice 做 5 件令人惊讶的事情
5 个用命令行操作 LibreOffice 的技巧
======
直接在命令行中对文件进行转换、打印、保护等操作。
![hot keys for shortcuts or features on computer keyboard][1]
LibreOffice 拥有所有你想要的办公软件套件的生产力功能,使其成为微软 Office 或谷歌套件的流行的开源替代品。LibreOffice 的能力之一是可以从命令行操作。例如Seth Kenlon 最近解释了如何使用 LibreOffice 用全局[命令行选项将多个文件][2]从 DOCX 转换为 EPUB。他的文章启发我分享一些其他 LibreOffice 命令行技巧和窍门
> 直接在命令行中对文件进行转换、打印、保护等操作。
在查看 LibreOffice 命令的一些隐藏功能之前,你需要了解如何使用应用选项。并不是所有的应用都接受选项(除了像 `--help`选项这样的基本选项,它在大多数 Linux 应用中都可以使用)。
![](https://img.linux.net.cn/data/attachment/album/202103/20/110200xjkkijnjixbyi4ui.jpg)
LibreOffice 拥有所有你想要的办公软件套件的生产力功能,使其成为微软 Office 或谷歌套件的流行的开源替代品。LibreOffice 的能力之一是可以从命令行操作。例如Seth Kenlon 最近解释了如何使用 LibreOffice 用全局 [命令行选项将多个文件][2] 从 DOCX 转换为 EPUB。他的文章启发我分享一些其他 LibreOffice 命令行技巧和窍门。
在查看 LibreOffice 命令的一些隐藏功能之前,你需要了解如何使用应用选项。并不是所有的应用都接受选项(除了像 `--help` 选项这样的基本选项,它在大多数 Linux 应用中都可以使用)。
```
`$ libreoffice --help`
$ libreoffice --help
```
这将返回 LibreOffice 接受的其他选项的描述。有些应用没有太多选项,但 LibreOffice 好几页有用的,所以有很多东西可以玩。
这将返回 LibreOffice 接受的其他选项的描述。有些应用没有太多选项,但 LibreOffice 有好几页有用的选项,所以有很多东西可以玩。
就是说,你可以在终端上使用 LibreOffice 进行以下五项有用的操作,让这个软件更加有用。
### 1\. 自定义你的启动选项
### 1自定义你的启动选项
你可以修改你启动 LibreOffice 的方式。例如,如果你想只打开 LibreOffice 的文字处理器组件:
```
`$ libreoffice --writer  #starts the word processor`
$ libreoffice --writer  # 启动文字处理器
```
你可以类似地打开它的其他组件:
```
$ libreoffice --calc  #starts the Calc document
$ libreoffice --draw  #starts an empty Draw document
$ libreoffice --web  #starts and empty HTML document
$ libreoffice --calc  # 启动一个空的电子表格
$ libreoffice --draw  # 启动一个空的绘图文档
$ libreoffice --web   # 启动一个空的 HTML 文档
```
你也可以从命令行访问特定的帮助文件:
```
`$ libreoffice --helpwriter`
$ libreoffice --helpwriter
```
![LibreOffice Writer help][3]
Don Watkins, [CC BY-SA 4.0][4]
或者如果你需要电子表格应用方面的帮助:
```
`$ libreoffice --helpcalc`
$ libreoffice --helpcalc
```
你可以在没有启动屏幕的情况下启动 LibreOffice
你可以在不显示启动屏幕的情况下启动 LibreOffice
```
`$ libreoffice --writer --nologo`
$ libreoffice --writer --nologo
```
你甚至可以在你完成当前窗口的工作时,让它在后台最小化:
你甚至可以在你完成当前窗口的工作时,让它在后台最小化启动:
```
`$ libreoffice --writer --minimized`
$ libreoffice --writer --minimized
```
### 2\. 以只读模式打开一个文件
### 2以只读模式打开一个文件
你可以使用 `--view` 以只读模式打开文件,以防止意外地对重要文件进行修改和保存:
```
`$ libreoffice --view example.odt`
$ libreoffice --view example.odt
```
### 3\. 打开一个模板文档
你是否曾经创建过用作信头或发票表格的文档LibreOffice 具有丰富的内置模板系统,但是你可以使用 -n 选项将任何文档作为模板:
### 3、打开一个模板文档
你是否曾经创建过用作信头或发票表格的文档LibreOffice 具有丰富的内置模板系统,但是你可以使用 `-n` 选项将任何文档作为模板:
```
`$ libreoffice --writer -n example.odt`
$ libreoffice --writer -n example.odt
```
你的文档将在 LibreOffice 中打开,你可以对其进行修改,但保存时不会覆盖原始文件。
### 4\. 转换文档
### 4转换文档
当你需要做一个小任务,比如将一个文件转换为新的格式时,应用启动的时间可能与完成任务的时间一样长。解决办法是 `--headless` 选项,它可以在不启动图形用户界面的情况下执行 LibreOffice 进程。
例如,在 LibreOffice 中,将一个文档转换为 EPUB 是一个非常简单的任务,但使用 `libreoffice` 命令就更容易:
```
`$ libreoffice --headless --convert-to epub example.odt`
$ libreoffice --headless --convert-to epub example.odt
```
使用通配符意味着你可以一次转换几十个文档:
```
`$ libreoffice --headless --convert-to epub *.odt`
$ libreoffice --headless --convert-to epub *.odt
```
你可以将文件转换为多种格式,包括 PDF、HTML、DOC、DOCX、EPUB、纯文本等。
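如果经常要把一个目录下的文件批量转换成不同的格式,可以把 `--convert-to` 调用包在一个小函数里。下面是一个示意(函数名 `convert_all` 是随意取的;为了便于演示,这里先用 `echo` 打印将要执行的命令,实际使用时去掉 `echo` 即可):

```shell
# 对当前目录下的每个 .odt 文件,打印将要执行的转换命令
# 实际使用时去掉 echo真正调用 libreoffice
convert_all() {
  fmt="${1:-pdf}"   # 目标格式,默认为 pdf
  for f in *.odt; do
    [ -e "$f" ] || continue   # 目录里没有 .odt 文件时直接跳过
    echo libreoffice --headless --convert-to "$fmt" "$f"
  done
}
```

用法示例:`convert_all epub` 会为每个 `.odt` 文件生成一条 `libreoffice --headless --convert-to epub …` 命令。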
### 5\. 从终端打印
### 5从终端打印
你可以从命令行打印 LibreOffice 文档,而无需打开应用:
```
`$ libreoffice --headless -p example.odt`
$ libreoffice --headless -p example.odt
```
这个选项不需要打开 LibreOffice 就可以使用默认打印机打印,它只是将文档发送到你的打印机。
要打印一个目录中的所有文件:
```
`$ libreoffice -p *.odt`
$ libreoffice -p *.odt
```
(我不止一次执行了这个命令,然后用完了纸,所以在你开始之前,确保你的打印机里有足够的纸张。)
(我不止一次执行了这个命令,然后用完了纸,所以在你开始之前,确保你的打印机里有足够的纸张。)
你也可以把文件输出成 PDF。通常这和使用 `--convert-to pdf` 选项没有什么区别,但是这种方式更容易记住:
```
`$ libreoffice --print-to-file example.odt --headless`
$ libreoffice --print-to-file example.odt --headless
```
### 额外技巧Flatpak 和命令选项
如果你使用 [Flatpak][5] 安装 LibreOffice所有这些命令选项都可以使用但你必须通过 Flatpak 传递。下面是一个例子:
如果你是使用 [Flatpak][5] 安装的 LibreOffice所有这些命令选项都可以使用但你必须通过 Flatpak 传递。下面是一个例子:
```
`$ flatpak run org.libreoffice.LibreOffice --writer`
$ flatpak run org.libreoffice.LibreOffice --writer
```
它比本地安装要麻烦得多,所以你可能会受到启发[写一个 Bash 别名][6]来使它更容易直接与 LibreOffice 交互。
它比本地安装要麻烦得多,所以你可能会受到启发 [写一个 Bash 别名][6] 来使它更容易直接与 LibreOffice 交互。
### 令人惊讶的终端选项
通过查阅手册页面,了解如何从命令行扩展 LibreOffice 的功能:
```
`$ man libreoffice`
$ man libreoffice
```
你是否知道 LibreOffice 具有如此丰富的命令行选项? 你是否发现了其他人似乎都不了解的其他选项? 请在评论中分享它们!
@ -173,7 +157,7 @@ via: https://opensource.com/article/21/3/libreoffice-command-line
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,52 +3,45 @@
[#]: author: (Javier Pena https://opensource.com/users/jpena)
[#]: collector: (lujun9972)
[#]: translator: (wyxplus)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13222-1.html)
利用树莓派和低功耗显示器来显示你家的日程表
利用树莓派和低功耗显示器来跟踪你的家庭日程表
======
通过利用开源工具和电子水屏,让每个人都清楚家庭的日程安排。
> 通过利用开源工具和电子墨水屏,让每个人都清楚家庭的日程安排。
![Calendar with coffee and breakfast][1]
![](https://img.linux.net.cn/data/attachment/album/202103/21/091512dkbgb3vzgjrz2935.jpg)
有些家庭的日程安排很复杂:孩子们有上学活动和放学后的活动,你想要记住的重要事情、许多会议等等。虽然你可以使用手机和应用程序来关注所有事情,但在家中放置大型低功耗显示器以显示家人的日程不是更好吗? 电子水日程表刚好满足!
有些家庭的日程安排很复杂:孩子们有上学活动和放学后的活动,你想要记住的重要事情,每个人都有多个约会等等。虽然你可以使用手机和应用程序来关注所有事情,但在家中放置一个大型低功耗显示器以显示家人的日程不是更好吗?电子墨水日程表刚好满足这个需求!
![E Ink calendar][2]
(Javier Pena, [CC BY-SA 4.0][3])
### 硬件
这个项目是作为假日项目开始,因此我试着尽可能多的旧物利用。其中包括一台已经闲置了太长时间树莓派 2。由于我没有电子墨屏,因此我需要购买一个。幸运的是,我找到了一家供应商,该供应商为树莓派的屏幕提供了 [开源驱动程序和示例][4],该屏幕使用 [GPIO][5] 端口连接。
这个项目是作为假日项目开始的,因此我试着尽可能多地利用旧物。其中包括一台已经闲置了很长时间的树莓派 2。由于我没有电子墨水屏因此我需要购买一个。幸运的是我找到了一家供应商该供应商为支持树莓派的屏幕提供了 [开源驱动程序和示例][4],该屏幕使用 [GPIO][5] 端口连接。
我的家人还想在不同的日程表之间切换,因此需要某种形式的输入。我没有添加 USB 键盘,而是选择了一种更简单的解决方案,并购买了一个类似于 [这篇文章][6] 中所描述的 1x4 大小的键盘。这使我可以将键盘连接到树莓派中的某些 GPIO 端口。
最后,我需要一个相框来容纳整个设置。虽然背面看起来有些凌乱,但也能搞定
最后,我需要一个相框来容纳整个设置。虽然背面看起来有些凌乱,但它能完成工作
![Calendar internals][7]
(Javier Pena, [CC BY-SA 4.0][3])
### 软件
我从 [一个类似的项目][8] 中获得了灵感,并开始为我的项目编写 Python 代码。我需要从两个地方获取数据:
* 天气信息:从 [OpenWeather API][9] 获取
* 时间信息:我打算使用 [CalDav standard][10] 连接到一个运行在我家服务器上日程表
* 时间信息:我打算使用 [CalDav 标准][10] 连接到一个在我家服务器上运行的日程表
由于必须等待一些零件的送达,因此我使用了模块化的方法来进行输入和显示,这样我可以在没有硬件的情况下调试大多数代码。日程表应用程序需要驱动程序,于是我编写了 [Pygame][11] 驱动程序以便能在台式机上运行它。
由于必须等待一些零件的送达,因此我使用了模块化的方法来进行输入和显示,这样我可以在没有硬件的情况下调试大多数代码。日程表应用程序需要驱动程序,于是我编写了一个 [Pygame][11] 驱动程序以便能在台式机上运行它。
重构现有的开源项目是编码的最高效的方式。因为调用许多的 API 会减轻编码压力。我可以专注于设计用户界面,其中包括每人每周和每个人的每日的日程,以及允许使用小键盘来选择日程。并且我花时间又添加了一些额外的功能,例如特殊日子的自定义屏幕保护程序。
编写代码最好的部分是能够重用现有的开源项目,所以访问不同的 API 很容易。我可以专注于设计用户界面,其中包括每个人的周历和每个人的日历,以及允许使用小键盘来选择日程。并且我花时间又添加了一些额外的功能,例如特殊日子的自定义屏幕保护程序。
![E Ink calendar screensaver][12]
(Javier Pena, [CC BY-SA 4.0][3])
最后的集成步骤将确保我的日程表应用程序将在启动时运行,并且能够容错。我使用了原始的 [树莓派系统][13] 映像并将该应用程序配置到 systemd 服务,以便它可以在出现故障和系统重新启动依旧运行。
最后的集成步骤将确保我的日程表应用程序将在启动时运行,并且能够容错。我使用了一个基本的 [树莓派系统][13] 镜像,并将该应用程序配置到 systemd 服务,以便它可以在出现故障和系统重新启动依旧运行。
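把应用配置为 systemd 服务,大致就是写一个类似下面的单元文件(内容仅为示意,其中的路径、用户名和服务名都是假设的,并非该项目的实际配置):

```
[Unit]
Description=E Ink family calendar
After=network-online.target

[Service]
# 假设应用入口是 /home/pi/calendar/main.py
ExecStart=/usr/bin/python3 /home/pi/calendar/main.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

把它保存为 `/etc/systemd/system/calendar.service`(文件名同样是假设的),再执行 `sudo systemctl enable --now calendar.service`,应用就会开机自启,并在出错退出后由 `Restart=on-failure` 自动拉起。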
做完所有工作,我把代码上传到了 [GitHub][14]。因此,如果你要创建类似的日历,可以随时查看并重构它!
@ -56,16 +49,17 @@
日程表已成为我们厨房中的日常工具。它可以帮助我们记住我们的日常活动,甚至我们的孩子在上学前,都可以使用它来查看日程的安排。
对我而言这个项目让我感受到开源的力量。如果没有开源的驱动程序、库以及API我们依旧还在用纸和笔来安排日程。很疯狂不是吗
对我而言,这个项目让我感受到开源的力量。如果没有开源的驱动程序、库以及开放 API我们依旧还在用纸和笔来安排日程。很疯狂不是吗
需要确保你的日程不冲突吗?学习如何使用这些免费的开源项目来做到这点。
------
via: https://opensource.com/article/21/3/family-calendar-raspberry-pi
作者:[Javier Pena][a]
选题:[lujun9972][b]
译者:[wyxplus](https://github.com/wyxplus)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,28 +3,29 @@
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13216-1.html)
在树莓派上设置网络家长控制
在树莓派上设置家庭网络家长控制
======
用最少的时间和金钱投入,就能保证孩子上网安全。
> 用最少的时间和金钱投入,就能保证孩子上网安全。
![Family learning and reading together at night in a room][1]
家长们一直在寻找保护孩子们上网的方法,从防止恶意软件、横幅广告、弹出窗口、活动跟踪脚本和其他问题,到防止他们在应该做功课的时候玩游戏和看 YouTube。许多企业使用工具来规范员工的网络安全和活动但问题是如何在家里实现这一点
简短的答案是一台小巧、廉价的树莓派电脑,它可以让你为孩子和你在家的工作设置家长控制。本文将为你介绍如何使用树莓派轻松构建自己的家长控制家庭网络
简短的答案是一台小巧、廉价的树莓派电脑,它可以让你为孩子和你在家的工作设置<ruby>家长控制<rt>parental controls</rt></ruby>。本文将引导你了解使用树莓派构建自己的启用了家长控制功能的家庭网络有多么容易
### 安装硬件和软件
对于这个项目,你需要一个树莓派和一个家庭网络路由器。如果你只花 5 分钟浏览在线购物网站,你会发现很多选择。[树莓派 4][2] 和 [TP-Link 路由器][3]是初学者的好选择。
有了网络设备和树莓派后,你需要在 Linux 容器或者受支持的操作系统中安装 [Pi-hole][4]。有几种[安装方法][5],但一个简单的方法是在你的树莓派上执行以下命令:
对于这个项目,你需要一个树莓派和一个家庭网络路由器。只要在在线购物网站上花 5 分钟浏览,就可以发现很多选择。[树莓派 4][2] 和 [TP-Link 路由器][3] 是初学者的好选择。
有了网络设备和树莓派后,你需要在 Linux 容器或者受支持的操作系统中安装 [Pi-hole][4]。有几种 [安装方法][5],但一个简单的方法是在你的树莓派上执行以下命令:
```
`curl -sSL https://install.pi-hole.net | bash`
curl -sSL https://install.pi-hole.net | bash
```
### 配置 Pi-hole 作为你的 DNS 服务器
@ -34,32 +35,26 @@
1. 禁用路由器中的 DHCP 服务器设置
2. 在 Pi-hole 中启用 DHCP 服务器
每台设备都不一样,所以我没有办法告诉你具体需要点击什么来调整设置。一般来说,你可以通过浏览器访问你家的路由器。你的路由器的地址有时会印在路由器的底部,它以 192.168 或 10 开头。
每台设备都不一样,所以我没有办法告诉你具体需要点击什么来调整设置。一般来说,你可以通过浏览器访问你家的路由器。你的路由器的地址有时会印在路由器的底部,它以 192.168 或 10 开头。
在浏览器中,打开你的路由器的地址,并用你收到的网络服务凭证登录。它通常是简单的 `admin` 和一个数字密码(有时这个密码也打印在路由器上)。如果你不知道登录名,请打电话给你的供应商并询问详情。
在浏览器中,打开你的路由器的地址,并用你的凭证登录。它通常是简单的 `admin` 和一个数字密码(有时这个密码也打印在路由器上)。如果你不知道登录名,请打电话给你的供应商并询问详情。
在图形界面中,寻找你的局域网内关于 DHCP 的部分,并停用 DHCP 服务器。 你的路由器界面几乎肯定会与我的不同,但这是一个我设置的例子。取消勾选 **DHCP 服务器**
![Disable DHCP][6]
(Daniel Oh, [CC BY-SA 4.0][7])
接下来,你必须在 Pi-hole 上激活 DHCP 服务器。如果你不这样做,除非你手动分配 IP 地址,否则你的设备将无法上网!
### 让你的网络变得家庭友好
### 让你的网络适合家庭
设置完成了。现在,你的网络设备(如手机、平板电脑、笔记本电脑等)将自动找到树莓派上的 DHCP 服务器。然后,每个设备将被分配一个动态 IP 地址来访问互联网。
注意:如果你的路由器设备支持设置 DNS 服务器,你也可以在路由器中配置 DNS 客户端。客户端将把 Pi-hole 作为你的 DNS 服务器。
要设置你的孩子可以访问哪些网站和活动的规则,打开浏览器进入 Pi-hole 管理页面,`http://pi.hole/admin/`。在仪表板上,点击**白名单**来添加你的孩子可以访问的网页。你也可以将不允许孩子访问的网站(如游戏、成人、广告、购物等)添加到**屏蔽列表**
要设置你的孩子可以访问哪些网站和活动的规则,打开浏览器进入 Pi-hole 管理页面,`http://pi.hole/admin/`。在仪表板上,点击“Whitelist”来添加你的孩子可以访问的网页。你也可以将不允许孩子访问的网站(如游戏、成人、广告、购物等)添加到“Blocklist”
![Pi-hole admin dashboard][8]
(Daniel Oh, [CC BY-SA 4.0][7])
### 接下来是什么?
现在,你已经在树莓派上设置了家长控制,你可以让你的孩子更安全地上网,同时让他们访问经批准的娱乐选项。这还能减少家庭成员的流媒体播放,从而降低你的家庭网络使用量。更多高级使用方法,请访问 Pi-hole 的[文档][9]和[博客][10]。
@ -71,7 +66,7 @@ via: https://opensource.com/article/21/3/raspberry-pi-parental-control
作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,308 @@
[#]: subject: (Top 10 Terminal Emulators for Linux \(With Extra Features or Amazing Looks\))
[#]: via: (https://itsfoss.com/linux-terminal-emulators/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13221-1.html)
10 个常见的 Linux 终端仿真器
======
![](https://img.linux.net.cn/data/attachment/album/202103/21/073043q4j4o6hr33b595j4.jpg)
默认情况下,所有的 Linux 发行版都已经预装了“<ruby>终端<rt>terminal</rt></ruby>”应用程序或“<ruby>终端仿真器<rt>terminal emulator</rt></ruby>”(这才是正确的技术术语)。当然,根据桌面环境的不同,它的外观和感觉会有所不同。
Linux 的特点是,你可以不用局限于你的发行版所提供的东西,你可以用你所选择的替代应用程序。终端也不例外。有几个提供了独特功能的终端仿真器令人印象深刻,可以获得更好的用户体验或更好的外观。
在这里,我将整理一个有趣的终端应用程序的列表,你可以在你的 Linux 发行版上尝试它们。
### 值得赞叹的 Linux 终端仿真器
此列表没有特别的排名顺序,我会先列出一些有趣的,然后是一些最流行的终端仿真器。此外,我还强调了每个提到的终端仿真器的主要功能,你可以选择你喜欢的终端仿真器。
#### 1、Terminator
![][1]
主要亮点:
* 可以在一个窗口中使用多个 GNOME 终端
[Terminator][2] 是一款非常流行的终端仿真器,目前仍在维护中(从 Launchpad 移到了 GitHub
它基本上是在一个窗口中为你提供了多个 GNOME 终端。在它的帮助下,你可以轻松地对终端窗口进行分组和重组。你可能会觉得这像是在使用平铺窗口管理器,不过有一些限制。
##### 如何安装 Terminator
对于基于 Ubuntu 的发行版,你只需在终端输入以下命令:
```
sudo apt install terminator
```
你应该可以在大多数 Linux 发行版的默认仓库中找到它。但是,如果你需要安装帮助,请访问它的 [GitHub 页面][3]。
#### 2、Guake 终端
![][4]
主要亮点:
* 专为在 GNOME 上快速访问终端而设计
* 工作速度快,不需要大量的系统资源
* 可通过快捷键呼出访问
[Guake][6] 终端最初的灵感来自于一款 FPS 游戏 Quake。与其他一些终端仿真器不同的是它的工作方式是覆盖在其他的活动窗口上。
你所要做的就是使用快捷键(`F12`)召唤该仿真器,它就会从顶部出现。你可以自定义该仿真器的宽度或位置,但大多数用户使用默认设置就可以了。
它不仅仅是一个方便的终端仿真器,还提供了大量的功能,比如能够恢复标签、拥有多个标签、对每个标签进行颜色编码等等。你可以查看我关于 [Guake 的单独文章][5] 来了解更多。
##### 如何安装 Guake 终端?
Guake 在大多数 Linux 发行版的默认仓库中都可以找到,你可以参考它的 [官方安装说明][7]。
如果你使用的是基于 Debian 的发行版,只需输入以下命令:
```
sudo apt install guake
```
#### 3、Tilix 终端
![][8]
主要亮点:
* 平铺功能
* 支持拖放
* 下拉式 Quake 模式
[Tilix][10] 终端提供了与 Guake 类似的下拉式体验 —— 但它允许你在平铺模式下拥有多个终端窗口。
如果你的 Linux 发行版中默认没有平铺窗口,而且你有一个大屏幕,那么这个功能就特别有用,你可以在多个终端窗口上工作,而不需要在不同的工作空间之间切换。
如果你想了解更多关于它的信息,我们之前已经 [单独介绍][9] 过了。
##### 如何安装 Tilix
Tilix 在大多数发行版的默认仓库中都有。如果你使用的是基于 Ubuntu 的发行版,只需输入:
```
sudo apt install tilix
```
#### 4、Hyper
![][13]
主要亮点:
* 基于 HTML/CSS/JS 的终端
* 基于 Electron
* 跨平台
* 丰富的配置选项
[Hyper][15] 是另一个有趣的终端仿真器,它建立在 Web 技术之上。它并没有提供独特的用户体验,但看起来很不一样,并提供了大量的自定义选项。
它还支持安装主题和插件来轻松定制终端的外观。你可以在他们的 [GitHub 页面][14] 中探索更多关于它的内容。
##### 如何安装 Hyper
Hyper 在默认的资源库中是不可用的。然而,你可以通过他们的 [官方网站][16] 找到 .deb 和 .rpm 包来安装。
如果你是新手,请阅读文章以获得 [使用 deb 文件][17] 和 [使用 rpm 文件][18] 的帮助。
#### 5、Tilda
![][19]
主要亮点:
* 下拉式终端
* 搜索栏整合
[Tilda][20] 是另一款基于 GTK 的下拉式终端仿真器。与其他一些不同的是,它提供了一个你可以切换的集成搜索栏,还可以让你自定义很多东西。
你还可以设置热键来快速访问或执行某个动作。从功能上来说,它是相当令人印象深刻的。然而,在视觉上,我不喜欢覆盖的行为,而且它也不支持拖放。不过你可以试一试。
##### 如何安装 Tilda
对于基于 Ubuntu 的发行版,你可以简单地键入:
```
sudo apt install tilda
```
你可以参考它的 [GitHub 页面][20],以了解其他发行版的安装说明。
#### 6、eDEX-UI
![][21]
主要亮点:
* 科幻感的外观
* 跨平台
* 自定义主题选项
* 支持多个终端标签
如果你不是特别想找一款可以帮助你更快的完成工作的终端仿真器,那么 [eDEX-UI][23] 绝对是你应该尝试的。
对于科幻迷和只想让自己的终端看起来独特的用户来说,这绝对是一款漂亮的终端仿真器。如果你不知道,它的灵感很大程度上来自于电影《创:战纪》。
不仅仅是设计或界面,总的来说,它为你提供了独特的用户体验,你会喜欢的。它还可以让你 [自定义终端][12]。如果你打算尝试的话,它确实需要大量的系统资源。
你不妨看看我们 [专门介绍 eDEX-UI][22] 的文章,了解更多关于它的信息和安装步骤。
##### 如何安装 eDEX-UI
你可以在一些包含 [AUR][24] 的仓库中找到它。无论是哪种情况,你都可以从它的 [GitHub 发布部分][25] 中抓取一个适用于你的 Linux 发行版的软件包(或 AppImage 文件)。
#### 7、Cool Retro Terminal
![][26]
主要亮点:
* 复古主题
* 动画/效果调整
[Cool Retro Terminal][27] 是一款独特的终端仿真器,它为你提供了一个复古的阴极射线管显示器的外观。
如果你正在寻找一些额外功能的终端仿真器,这可能会让你失望。然而,令人印象深刻的是,它在资源上相当轻盈,并允许你自定义颜色、效果和字体。
##### 如何安装 Cool Retro Terminal
你可以在其 [GitHub 页面][27] 中找到所有主流 Linux 发行版的安装说明。对于基于 Ubuntu 的发行版,你可以在终端中输入以下内容:
```
sudo apt install cool-retro-term
```
#### 8、Alacritty
![][28]
主要亮点:
* 跨平台
* 选项丰富,重点是整合
[Alacritty][29] 是一款有趣的开源跨平台终端仿真器。尽管它被认为是处于“测试”阶段的东西,但它仍然可以工作。
它的目标是为你提供广泛的配置选项,同时考虑到性能。例如,使用键盘点击 URL、将文本复制到剪贴板、使用 “Vi” 模式进行搜索等功能可能会吸引你去尝试。
你可以探索它的 [GitHub 页面][29] 了解更多信息。
##### 如何安装 Alacritty
官方 GitHub 页面上说可以使用包管理器安装 Alacritty但我在 Linux Mint 20.1 的默认仓库或 [synaptic 包管理器][30] 中找不到它。
如果你想尝试的话,可以按照 [安装说明][31] 来手动设置。
#### 9、Konsole
![][32]
主要亮点:
* KDE 的终端
* 轻巧且可定制
如果你不是新手,这个可能不用介绍了。[Konsole][33] 是 KDE 桌面环境的默认终端仿真器。
不仅如此,它还集成了很多 KDE 应用。即使你使用的是其他的桌面环境,你也可以试试 Konsole。它是一个轻量级的终端仿真器拥有众多的功能。
你可以拥有多个标签和多个分组窗口。以及改变终端仿真器的外观和感觉的大量的自定义选项。
##### 如何安装 Konsole
对于基于 Ubuntu 的发行版和大多数其他发行版,你可以使用默认的版本库来安装它。对于基于 Debian 的发行版,你只需要在终端中输入以下内容:
```
sudo apt install konsole
```
#### 10、GNOME 终端
![][34]
主要亮点:
* GNOME 的终端
* 简单但可定制
如果你使用的是任何基于 Ubuntu 的 GNOME 发行版,它已经是自带的了。它可能不像 Konsole 那样可以自定义(取决于你在做什么),但它可以让你轻松配置终端的大部分重要方面。
总的来说,它提供了良好的用户体验和易于使用的界面,并提供了必要的功能。
如果你好奇的话,我还有一篇 [自定义你的 GNOME 终端][12] 的教程。
##### 如何安装 GNOME 终端?
如果你没有使用 GNOME 桌面,但又想尝试一下,你可以通过默认的软件仓库轻松安装它。
对于基于 Debian 的发行版,以下是你需要在终端中输入的内容:
```
sudo apt install gnome-terminal
```
### 总结
有好几个终端仿真器。如果你正在寻找不同的用户体验,你可以尝试任何你喜欢的东西。然而,如果你的目标是一个稳定的和富有成效的体验,你需要测试一下,然后才能依靠它们。
对于大多数用户来说默认的终端仿真器应该足够好用了。但是如果你正在寻找快速访问Quake 模式)、平铺功能或在一个终端中的多个窗口,请试试上述选择。
你最喜欢的 Linux 终端仿真器是什么?我有没有错过列出你最喜欢的?欢迎在下面的评论中告诉我你的想法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-terminal-emulators/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/terminator-terminal.jpg?resize=800%2C436&ssl=1
[2]: https://gnome-terminator.org
[3]: https://github.com/gnome-terminator/terminator
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/guake-terminal-2.png?resize=800%2C432&ssl=1
[5]: https://itsfoss.com/guake-terminal/
[6]: https://github.com/Guake/guake
[7]: https://guake.readthedocs.io/en/latest/user/installing.html#system-wide-installation
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/tilix-screenshot.png?resize=800%2C460&ssl=1
[9]: https://itsfoss.com/tilix-terminal-emulator/
[10]: https://gnunn1.github.io/tilix-web/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/linux-terminal-customization.jpg?fit=800%2C450&ssl=1
[12]: https://itsfoss.com/customize-linux-terminal/
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/hyper-screenshot.png?resize=800%2C527&ssl=1
[14]: https://github.com/vercel/hyper
[15]: https://hyper.is/
[16]: https://hyper.is/#installation
[17]: https://itsfoss.com/install-deb-files-ubuntu/
[18]: https://itsfoss.com/install-rpm-files-fedora/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/tilda-terminal.jpg?resize=800%2C427&ssl=1
[20]: https://github.com/lanoxx/tilda
[21]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/edex-ui-screenshot.png?resize=800%2C450&ssl=1
[22]: https://itsfoss.com/edex-ui-sci-fi-terminal/
[23]: https://github.com/GitSquared/edex-ui
[24]: https://itsfoss.com/aur-arch-linux/
[25]: https://github.com/GitSquared/edex-ui/releases
[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2015/10/cool-retro-term-1.jpg?resize=799%2C450&ssl=1
[27]: https://github.com/Swordfish90/cool-retro-term
[28]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/alacritty-screenshot.png?resize=800%2C496&ssl=1
[29]: https://github.com/alacritty/alacritty
[30]: https://itsfoss.com/synaptic-package-manager/
[31]: https://github.com/alacritty/alacritty/blob/master/INSTALL.md#debianubuntu
[32]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/konsole-screenshot.png?resize=800%2C512&ssl=1
[33]: https://konsole.kde.org/
[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/default-terminal.jpg?resize=773%2C493&ssl=1

View File

@ -1,84 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (cooljelly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Multicloud, security integration drive massive SD-WAN adoption)
[#]: via: (https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Multicloud, security integration drive massive SD-WAN adoption
======
40% year-over-year SD-WAN growth through 2022 is being fueled by relationships built between vendors including Cisco, VMware, Juniper, and Arista and service providers AWS, Microsoft Azure, Google Anthos, and IBM RedHat.
[Gratisography][1] [(CC0)][2]
Increasing cloud adoption as well as improved network security, visibility and manageability are driving enterprise software-defined WAN ([SD-WAN][3]) deployments at a breakneck pace.
According to research from IDC, software- and infrastructure-as-a-service (SaaS and IaaS) offerings in particular have been driving SD-WAN implementations in the past year, said Rohit Mehra, vice president, network infrastructure at  IDC.
**Read about edge networking**
* [How edge networking and IoT will reshape data centers][4]
* [Edge computing best practices][5]
* [How edge computing can help secure the IoT][6]
For example, IDC says that its recent surveys of customers show that 95% will be using [SD-WAN][7] technology within two years, and that 42% have already deployed it. IDC also says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.
“The growth of SD-WAN is a broad-based trend that is driven largely by the enterprise desire to optimize cloud connectivity for remote sites,” Mehra said.
Indeed, the growth of multicloud networking is prompting many businesses to re-tool their networks in favor of SD-WAN technology, Cisco wrote recently. SD-WAN is critical for businesses adopting cloud services, acting as connective tissue between the campus, branch, [IoT][8], [data center][9] and cloud. The company said surveys show Cisco customers have, on average, 30 paid SaaS applications each, and that they are actually using many more: over 100 in several cases.
Part of this trend is driven by the relationships that networking vendors such as Cisco, VMware, Juniper, Arista and others have been building with the likes of Amazon Web Services, Microsoft Azure, Google Anthos and IBM RedHat. 
An indicator of the growing importance of the SD-WAN and multicloud relationship came last December when AWS announced key services for its cloud offering that included new integration technologies such as [AWS Transit Gateway][10], which lets customers connect their Amazon Virtual Private Clouds and their on-premises networks to a single gateway. Aruba, Aviatrix, Cisco, Citrix Systems, Silver Peak and Versa have already announced support for the technology, which promises to simplify and enhance the performance of SD-WAN integration with AWS cloud resources.
Going forward the addition of features such as cloud-based application insights and performance monitoring will be a key part of SD-WAN rollouts, Mehra said.
While the SD-WAN and cloud relationship is growing, so, too, is the need for integrated security features.
“The way SD-WAN offerings integrate security is so much better than traditional ways of securing WAN traffic, which usually involved separate packages and services,” Mehra said. “SD-WAN is a much more agile security environment.” Security, analytics and WAN optimization are viewed as top SD-WAN components, with integrated security being the top requirement for next-generation SD-WAN solutions, Mehra said.
Increasingly, enterprises will look less at point SD-WAN solutions and instead will favor platforms that solve a wider range of network management and security needs, Mehra said. They will look for SD-WAN platforms that integrate with other aspects of their IT infrastructure including corporate data-center networks, enterprise campus LANs, or [public-cloud][12] resources, he said. They will look for security services to be baked in, as well as support for a variety of additional functions such as visibility, analytics, and unified communications, he said.
“As customers continue to integrate their infrastructure components with software they can do things like implement consistent management and security policies based on user, device or application requirements across their LANs and WANs and ultimately achieve a better overall application experience,” Mehra said.
An emerging trend is the need for SD-WAN packages to support [SD-branch][13] technology. More than 70% of IDC's surveyed customers expect to use SD-Branch within the next year, Mehra said. In recent weeks, [Juniper][14] and [Aruba][15] have enhanced their SD-Branch offerings, a trend that is expected to continue this year.
SD-Branch builds on the concepts and support of SD-WAN but is more specific to the networking and management needs of LANs in the branch. Going forward, how SD-Branch integrates other technologies such as analytics, voice, unified communications and video will be key drivers of that technology.  
Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.pexels.com/photo/black-and-white-branches-tree-high-279/
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[5]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[6]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[7]: https://www.networkworld.com/article/3489938/what-s-hot-at-the-edge-for-2020-everything.html
[8]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[9]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[10]: https://aws.amazon.com/transit-gateway/
[11]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[12]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
[13]: https://www.networkworld.com/article/3250664/sd-branch-what-it-is-and-why-youll-need-it.html
[14]: https://www.networkworld.com/article/3487801/juniper-broadens-sd-branch-management-switch-options.html
[15]: https://www.networkworld.com/article/3513357/aruba-reinforces-sd-branch-with-security-management-upgrades.html
[16]: https://www.facebook.com/NetworkWorld/
[17]: https://www.linkedin.com/company/network-world


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (stevenzdg988)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -264,7 +264,7 @@ via: https://opensource.com/article/19/7/python-google-natural-language-api
作者:[JR Oakes][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wyxplus)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,106 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (ShuyRoy )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with distributed tracing using Grafana Tempo)
[#]: via: (https://opensource.com/article/21/2/tempo-distributed-tracing)
[#]: author: (Annanay Agarwal https://opensource.com/users/annanayagarwal)
Get started with distributed tracing using Grafana Tempo
======
Grafana Tempo is a new open source, high-volume distributed tracing
backend.
![Computer laptop in space][1]
Grafana's [Tempo][2] is an easy-to-use, high-scale, distributed tracing backend from Grafana Labs. Tempo has integrations with [Grafana][3], [Prometheus][4], and [Loki][5] and requires only object storage, making it cost-efficient and easy to operate.
I've been involved with this open source project since its inception, so I'll go over some of the basics about Tempo and show why the cloud-native community has taken notice of it.
### Distributed tracing
It's common to want to gather telemetry on requests made to an application. But in the modern server world, a single application is regularly split across many microservices, potentially running on several different nodes.
Distributed tracing is a way to get fine-grained information about the performance of an application that may consist of discrete services. It provides a consolidated view of the request's lifecycle as it passes through an application. Tempo's distributed tracing can be used with monolithic or microservice applications, and it gives you [request-scoped information][6], making it the third pillar of observability (alongside metrics and logs).
The following is an example of a Gantt chart that distributed tracing systems can produce about applications. It uses the Jaeger [HotROD][7] demo application to generate traces and stores them in Grafana Cloud's hosted Tempo. This chart shows the processing time for the request, broken down by service and function.
![Gantt chart from Grafana Tempo][8]
(Annanay Agarwal, [CC BY-SA 4.0][9])
### Reducing index size
Traces have a ton of information in a rich and well-defined data model. Usually, there are two interactions with a tracing backend: filtering for traces using metadata selectors like the service name or duration, and visualizing a trace once it's been filtered.
To enhance search, most open source distributed tracing frameworks index a number of fields from the trace, including the service name, operation name, tags, and duration. This results in a large index and pushes you to use a database like Elasticsearch or [Cassandra][10]. However, these can be tough to manage and costly to operate at scale, so my team at Grafana Labs set out to come up with a better solution.
At Grafana, our on-call debugging workflows start with drilling down for the problem using a metrics dashboard (we use [Cortex][11], a Cloud Native Computing Foundation incubating project for scaling Prometheus, to store metrics from our application), sifting through the logs for the problematic service (we store our logs in Loki, which is like Prometheus, but for logs), and then viewing traces for a given request. We realized that all the indexing information we need for the filtering step is available in Cortex and Loki. However, we needed a strong integration for trace discoverability through these tools and a complementary store for key-value lookup by trace ID.
This was the start of the [Grafana Tempo][12] project. By focusing on retrieving traces given a trace ID, we designed Tempo to be a minimal-dependency, high-volume, cost-effective distributed tracing backend.
### Easy to operate and cost-effective
Tempo uses an object storage backend, which is its only dependency. It can be used in either single binary or microservices mode (check out the [examples][13] in the repo on how to get started easily). Using object storage also means you can store a high volume of traces from applications without any sampling. This ensures that you never throw away traces for those one-in-a-million requests that errored out or had higher latencies.
### Strong integration with open source tools
[Grafana 7.3 includes a Tempo data source][14], which means you can visualize traces from Tempo in the Grafana UI. Also, [Loki 2.0's new query features][15] make trace discovery in Tempo easy. And to integrate with Prometheus, the team is working on adding support for exemplars, which are high-cardinality metadata you can attach to time-series data. The metric storage backends do not index them, but you can retrieve and display them alongside the metric value in the Grafana UI. While exemplars can store various metadata, trace IDs are stored here to integrate strongly with Tempo.
This example shows using exemplars with a request latency histogram where each exemplar data point links to a trace in Tempo.
![Using exemplars in Tempo][16]
(Annanay Agarwal, [CC BY-SA 4.0][9])
### Consistent metadata
Telemetry data emitted from applications running as containerized applications generally has some metadata associated with it. This can include information like the cluster ID, namespace, pod IP, etc. This is great for providing on-demand information, but it's even better if you can use the information contained in metadata for something productive. 
For instance, you can use the [Grafana Cloud Agent to ingest traces into Tempo][17], and the agent leverages the Prometheus Service Discovery mechanism to poll the Kubernetes API for metadata information and adds these as tags to spans emitted by the application. Since this metadata is also indexed in Loki, it makes it easy for you to jump from traces to view logs for a given service by translating metadata into Loki label selectors.
The following is an example of consistent metadata that can be used to view the logs for a given span in a trace in Tempo.
![][18]
### Cloud-native
Grafana Tempo is available as a containerized application, and you can run it on any orchestration engine like Kubernetes, Mesos, etc. The various services can be horizontally scaled depending on the workload on the ingest/query path. You can also use cloud-native object storage, such as Google Cloud Storage, Amazon S3, or Azure Blob Storage with Tempo. For further information, read the [architecture section][19] in Tempo's documentation.
### Try Tempo
If this sounds like it might be as useful for you as it has been for us, [clone the Tempo repo][20] and give it a try.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/tempo-distributed-tracing
作者:[Annanay Agarwal][a]
选题:[lujun9972][b]
译者:[RiaXu](https://github.com/ShuyRoy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/annanayagarwal
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
[2]: https://grafana.com/oss/tempo/
[3]: http://grafana.com/oss/grafana
[4]: https://prometheus.io/
[5]: https://grafana.com/oss/loki/
[6]: https://peter.bourgon.org/blog/2017/02/21/metrics-tracing-and-logging.html
[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod
[8]: https://opensource.com/sites/default/files/uploads/tempo_gantt.png (Gantt chart from Grafana Tempo)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster
[11]: https://cortexmetrics.io/
[12]: http://github.com/grafana/tempo
[13]: https://grafana.com/docs/tempo/latest/getting-started/example-demo-app/
[14]: https://grafana.com/blog/2020/10/29/grafana-7.3-released-support-for-the-grafana-tempo-tracing-system-new-color-palettes-live-updates-for-dashboard-viewers-and-more/
[15]: https://grafana.com/blog/2020/11/09/trace-discovery-in-grafana-tempo-using-prometheus-exemplars-loki-2.0-queries-and-more/
[16]: https://opensource.com/sites/default/files/uploads/tempo_exemplar.png (Using exemplars in Tempo)
[17]: https://grafana.com/blog/2020/11/17/tracing-with-the-grafana-cloud-agent-and-grafana-tempo/
[18]: https://lh5.googleusercontent.com/vNqk-ygBOLjKJnCbTbf2P5iyU5Wjv2joR7W-oD7myaP73Mx0KArBI2CTrEDVi04GQHXAXecTUXdkMqKRq8icnXFJ7yWUEpaswB1AOU4wfUuADpRV8pttVtXvTpVVv8_OfnDINgfN
[19]: https://grafana.com/docs/tempo/latest/architecture/architecture/
[20]: https://github.com/grafana/tempo


@ -2,7 +2,7 @@
[#]: via: (https://opensource.com/article/21/3/webassembly-firefox)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,108 +0,0 @@
[#]: subject: (Kooha is a Nascent Screen Recorder for GNOME With Wayland Support)
[#]: via: (https://itsfoss.com/kooha-screen-recorder/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Kooha is a Nascent Screen Recorder for GNOME With Wayland Support
======
There is not a single [decent screen recording software for Linux][1] that supports the Wayland display server.
[GNOME's built-in screen recorder][1] is probably the rare (and lone) one that works if you are using Wayland. But that screen recorder lacks the visible interface and features you expect in standard screen recording software.
Thankfully, there is a new application in development that provides a few more features than the GNOME screen recorder and works okay-ish on Wayland.
### Meet Kooha: a new screen recorder for GNOME desktop
![][2]
[Kooha][3] is an application in the nascent stage of development. It can be used in GNOME, and it is built with GTK and PyGObject. In fact, it utilizes the same backend as GNOME's built-in screen recorder.
Here are the features Kooha has:
* Record the entire screen or a selected area
* Works on both Wayland and Xorg display servers
* Records audio from microphone along with the video
* Option to include or omit mouse pointer
  * Can add a delay of 5 or 10 seconds before starting the recording
  * Supports recording in WebM and MKV formats
  * Allows changing the default saving location
* Supports a few keyboard shortcuts
### My experience with Kooha
![][4]
I was contacted by its developer, Dave Patrick, and since I desperately want a good screen recorder, I immediately went on to try it.
At present, [Kooha is only available to install via Flatpak][5]. I installed it via Flatpak, but when I tried to use it, nothing was recorded. I had a quick email discussion with Dave, and he told me it was due to a [bug with the GNOME screen recorder in Ubuntu 20.10][6].
Such was my desperation for a screen recorder with Wayland support that I [upgraded my Ubuntu to the beta version][7] of 21.04.
The screen recording worked in 21.04, but it still could not record audio from the microphone.
There are a few more things I noticed that didn't work as smoothly as I'd like.
For example, the counter remains visible on the screen while recording and is included in the recording. I wouldn't want that in a video tutorial, and I guess you wouldn't like to see it either.
![][8]
Another thing is multi-monitor support. There is no option to exclusively select a particular screen. I use two external monitors, and by default it recorded all three of them. You could set a capture region, but dragging it to the exact pixel boundaries of a screen is time-consuming.
There is no option to set the frame rate or encoding that comes with [Kazam][9] or other legacy screen recorders.
### Installing Kooha on Linux (if you are using GNOME)
Please make sure to enable Flatpak support on your Linux distribution. It only works with GNOME for now, so please check which desktop environment you are using.
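You can check both your desktop environment and your display server session from the terminal. A minimal sketch (the environment variables are standard, but their values depend on your setup):

```shell
# Show the current desktop environment (e.g. GNOME) and the session type
# (wayland or x11) - useful before installing a GNOME/Wayland-oriented app.
echo "Desktop: ${XDG_CURRENT_DESKTOP:-unknown}"
echo "Session: ${XDG_SESSION_TYPE:-unknown}"
```

On a headless shell these may print `unknown`; on a desktop login they reflect your running session.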
Use this command to add Flathub to your Flatpak repositories list:
```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
And then use this command to install it:
```
flatpak install flathub io.github.seadve.Kooha
```
You may run it from the menu or by using this command:
```
flatpak run io.github.seadve.Kooha
```
### Conclusion
Kooha is not perfect but considering the huge void in the Wayland domain, I hope that the developers work on fixing the issues and adding more features. This is important considering [Ubuntu 21.04 is switching to Wayland by default][10] and some other popular distros like Fedora and openSUSE already use Wayland by default.
--------------------------------------------------------------------------------
via: https://itsfoss.com/kooha-screen-recorder/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-screen-recorder/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha-screen-recorder.png?resize=800%2C450&ssl=1
[3]: https://github.com/SeaDve/Kooha
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha.png?resize=797%2C364&ssl=1
[5]: https://flathub.org/apps/details/io.github.seadve.Kooha
[6]: https://bugs.launchpad.net/ubuntu/+source/gnome-shell/+bug/1901391
[7]: https://itsfoss.com/upgrade-ubuntu-beta/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha-recording.jpg?resize=800%2C636&ssl=1
[9]: https://itsfoss.com/kazam-screen-recorder/
[10]: https://news.itsfoss.com/ubuntu-21-04-wayland/


@ -1,107 +0,0 @@
[#]: subject: (Use gdu for a Faster Disk Usage Checking in Linux Terminal)
[#]: via: (https://itsfoss.com/gdu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Use gdu for a Faster Disk Usage Checking in Linux Terminal
======
There are two popular [ways to check disk usage in the Linux terminal][1]: the du command and the df command. The [du command is more for checking the space used by a directory][2], while the df command gives you disk utilization at the filesystem level.
There are more friendly [ways to see the disk usage in Linux with graphical tools like GNOME Disks][3]. If you are confined to the terminal, you can use a [TUI][4] tool like [ncdu][5] to get the disk usage information with a sort of graphical touch.
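For reference, the two classic commands mentioned above look like this (paths are examples; substitute whatever directory you care about):

```shell
# du: total space used by a directory (summarized, human-readable)
du -sh .

# df: utilization and free space of the filesystem that directory lives on
df -h .
```

gdu covers the same ground but adds interactive navigation on top.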
### Gdu: Disk usage checking in Linux terminal
[Gdu][6] is such a tool written in Go (hence the 'g' in gdu). The gdu developer has [benchmark tests][7] showing that it is quite fast for disk usage checking, specifically on SSDs. In fact, gdu is intended primarily for SSDs, though it works for HDDs as well.
If you use the gdu command without any options, it shows the disk usage for the current directory you are in.
![][8]
Since it has a terminal user interface (TUI), you can navigate through directories and disks using the arrow keys. You can also sort the results by file name or size.
Here's how to do that:
* Up arrow or k to move cursor up
* Down arrow or j to move cursor down
* Enter to select directory / device
* Left arrow or h to go to parent directory
* Use d to delete the selected file or directory
* Use n to sort by name
* Use s to sort by size
* Use c to sort by items
You'll notice symbols before some file entries. Those have specific meanings.
![][9]
* `!` means an error occurred while reading the directory.
* `.` means an error occurred while reading a subdirectory, size may not be correct.
* `@` means file is a symlink or socket.
* `H` means the file was already counted (hard link).
* `e` means directory is empty.
To see the disk utilization and free space for all mounted disks, use the option `-d`:
```
gdu -d
```
It shows all the details in one screen:
![][10]
Sounds like a handy tool, right? Let's see how to get it on your Linux system.
### Installing gdu on Linux
Gdu is available for Arch and Manjaro users through the [AUR][11]. I presume that as an Arch user, you know how to use AUR.
It is included in the universe repository of the upcoming Ubuntu 21.04, but chances are that you are not using it at present. In that case, you may install it using Snap, though it may seem like a lot of snap commands:
```
snap install gdu-disk-usage-analyzer
snap connect gdu-disk-usage-analyzer:mount-observe :mount-observe
snap connect gdu-disk-usage-analyzer:system-backup :system-backup
snap alias gdu-disk-usage-analyzer.gdu gdu
```
You may also find the source code on its release page:
[Source code download for gdu][12]
I am more used to using the du and df commands, but I can see that some Linux users might like gdu. Are you one of them?
--------------------------------------------------------------------------------
via: https://itsfoss.com/gdu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://linuxhandbook.com/df-command/
[2]: https://linuxhandbook.com/find-directory-size-du-command/
[3]: https://itsfoss.com/check-free-disk-space-linux/
[4]: https://itsfoss.com/gui-cli-tui/
[5]: https://dev.yorhel.nl/ncdu
[6]: https://github.com/dundee/gdu
[7]: https://github.com/dundee/gdu#benchmarks
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-disk-utilization.png?resize=800%2C471&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-entry-symbols.png?resize=800%2C302&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-disk-utilization-for-all-drives.png?resize=800%2C471&ssl=1
[11]: https://itsfoss.com/aur-arch-linux/
[12]: https://github.com/dundee/gdu/releases


@ -0,0 +1,225 @@
[#]: subject: (Get started with an open source customer data platform)
[#]: via: (https://opensource.com/article/21/3/rudderstack-customer-data-platform)
[#]: author: (Amey Varangaonkar https://opensource.com/users/ameypv)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Get started with an open source customer data platform
======
As an open source alternative to Segment, RudderStack collects and
routes event stream (or clickstream) data and automatically builds your
customer data lake on your data warehouse.
![Person standing in front of a giant computer screen with numbers, data][1]
[RudderStack][2] is an open source, warehouse-first customer data pipeline. It collects and routes event stream (or clickstream) data and automatically builds your customer data lake on your data warehouse.
RudderStack is commonly known as the open source alternative to the customer data platform (CDP), [Segment][3]. It provides a more secure, flexible, and cost-effective solution in comparison. You get all the CDP functionality with added security and full ownership of your customer data.
Warehouse-first tools like RudderStack are architected to build functional data lakes in the user's data warehouse. The benefits are improved data control, increased flexibility in tool use, and (frequently) lower costs. Since it's open source, you can see how complicated processes—like building your identity graph—are done without relying on a vendor's black box.
### Getting the RudderStack workspace token
Before you get started, you will need the RudderStack workspace token from your RudderStack dashboard. To get it:
1. Go to the [RudderStack dashboard][4].
2. Log in using your credentials (or sign up for an account, if you don't already have one).
![RudderStack login screen][5]
(RudderStack, [CC BY-SA 4.0][6])
3. Once you've logged in, you should see the workspace token on your RudderStack dashboard.
![RudderStack workspace token][7]
(RudderStack, [CC BY-SA 4.0][6])
### Installing RudderStack
Setting up a RudderStack open source instance is straightforward. You have two installation options:
1. On your Kubernetes cluster, using RudderStack's Helm charts
2. On your Docker container, using the `docker-compose` command
This tutorial explains how to use both options but assumes that you already have [Git installed on your system][8].
#### Deploying with Kubernetes
You can deploy RudderStack on your Kubernetes cluster using the [Helm][9] package manager.
_If you plan to use RudderStack in production, we strongly recommend using this method._ This is because the Docker images are updated with bug fixes more frequently than the GitHub repository (which follows a monthly release cycle).
Before you can deploy RudderStack on Kubernetes, make sure you have the following prerequisites in place:
* [Install and connect kubectl][10] to your Kubernetes cluster.
* [Install Helm][11] on your system, either through the Helm installer scripts or its package manager.
* Finally, get the workspace token from the RudderStack dashboard by following the steps in the [Getting the RudderStack workspace token][12] section.
Once you've completed all the prerequisites, deploy RudderStack on your default Kubernetes cluster:
1. Find the Helm chart required to deploy RudderStack in this [repo][13].
  2. Install the Helm chart with a release name of your choice (`my-release`, in this example) from the root directory of the repo in the previous step:

```
$ helm install \
  my-release ./ --set \
  rudderWorkspaceToken="<your workspace token from RudderStack dashboard>"
```
This deploys RudderStack on your default Kubernetes cluster configured with kubectl using the workspace token you obtained from the RudderStack dashboard.
For more details on the configurable parameters in the RudderStack Helm chart or updating the versions of the images used, consult the [documentation][14].
### Deploying with Docker
Docker is the easiest and fastest way to set up your open source RudderStack instance.
First, get the workspace token from the RudderStack dashboard by following the steps above.
Once you have the RudderStack workspace token:
1. Download the [**rudder-docker.yml**][15] docker-compose file required for the installation.
2. Replace `<your_workspace_token>` in this file with your RudderStack workspace token.
  3. Set up RudderStack on your Docker container by running:

```
docker-compose -f rudder-docker.yml up
```
Now RudderStack should be up and running on your Docker instance.
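Step 2 above (replacing the placeholder in the compose file) can be scripted. A minimal sketch, assuming a downloaded `rudder-docker.yml` in the current directory; the file contents and token value below are stand-ins for illustration only:

```shell
# Stand-in for the downloaded rudder-docker.yml (real file has more content)
cat > rudder-docker.yml <<'EOF'
environment:
  - WORKSPACE_TOKEN=<your_workspace_token>
EOF

# Substitute your actual workspace token (made-up value shown);
# -i.bak keeps a backup and works with both GNU and BSD sed.
TOKEN="1xExampleWorkspaceToken"
sed -i.bak "s/<your_workspace_token>/${TOKEN}/" rudder-docker.yml

grep "WORKSPACE_TOKEN" rudder-docker.yml   # should now show your token
```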
### Verifying the installation
You can verify your RudderStack installation by sending test events using the bundled shell script:
  1. Clone the GitHub repository:

```
git clone https://github.com/rudderlabs/rudder-server.git
```
2. In this tutorial, you will verify RudderStack by sending test events to Google Analytics. Make sure you have a Google Analytics account and keep the tracking ID handy. Also, note that the Google Analytics account needs to have a `Web` property.
3. In the [RudderStack hosted control plane][4]:
* Add a source on the RudderStack dashboard by following the [Adding a source and destination in RudderStack][16] guide. You can use either of RudderStack's event stream software development kits (SDKs) for sending events from your app. This example sets up the [JavaScript SDK][17] as a source on the dashboard. **Note:** You aren't actually installing the RudderStack JavaScript SDK on your site in this step; you are just creating the source in RudderStack.
* Configure a Google Analytics destination on the RudderStack dashboard using the instructions in the guide mentioned previously. Use the Google Analytics tracking ID you kept from step 2 of this section:
![Google Analytics tracking ID][18]
(RudderStack, [CC BY-SA 4.0][6])
4. As mentioned before, RudderStack bundles a shell script that generates test events. Get the **Source write key** from the RudderStack dashboard:
![RudderStack source write key][19]
(RudderStack, [CC BY-SA 4.0][6])
  5. Next, run:

```
./scripts/generate-event <YOUR_WRITE_KEY> https://hosted.rudderlabs.com/v1/batch
```
  6. Finally, log into your Google Analytics account and verify that the events were delivered. In your Google Analytics account, navigate to **RealTime** -> **Events**. The RealTime view is important because some dashboards can take one to two days to refresh.
### Optional: Setting up the open source control plane
RudderStack's core architecture contains two major components: the data plane and the control plane. The data plane, [rudder-server][20], delivers your event data, and the RudderStack hosted control plane manages the configuration of your sources and destinations.
However, if you want to manage the source and destination configurations locally, you can set up an open source control plane in your environment using the RudderStack Config Generator. (You must have [Node.js][21] installed on your system to use it.)
Here are the steps to set up the control plane:
1. Install and set up RudderStack on the platform of your choice by following the instructions above.
2. Run the following commands in this order:
* `cd utils/config-gen`
* `npm install`
* `npm start`
You should now be able to access the open source control plane at `http://localhost:3000` by default. If your setup is successful, you will see the user interface.
![RudderStack open source control plane][22]
(RudderStack, [CC BY-SA 4.0][6])
To export the existing workspace configuration from the RudderStack-hosted control plane and have RudderStack use it, consult the [docs][23].
### RudderStack and open source
The core of RudderStack is in the [rudder-server][20] repository. It is open source, licensed under [AGPL-3.0][24]. The majority of the destination integrations live in the [rudder-transformer][25] repository; they are also open source, licensed under the [MIT License][26]. The SDK and instrumentation repositories, several tool and utility repositories, and even some [dbt][27] model repositories (for use cases like customer-journey analysis and sessionization of the data in your warehouse) are open source under the MIT License and available in the [GitHub repository][28].
You can use RudderStack's open source offering, rudder-server, on your platform of choice. There are setup guides for [Docker][29], [Kubernetes][30], [native installation][31], and [developer machines][32].
RudderStack open source offers:
1. RudderStack event stream
2. 15+ SDKs and source integrations to ingest event data
3. 80+ destination and warehouse integrations
4. Slack community support
#### RudderStack Cloud
RudderStack also offers a managed option, [RudderStack Cloud][33]. It is fast, reliable, and highly scalable with a multi-node architecture and sophisticated error-handling mechanism. You can hit peak event volume without worrying about downtime, loss of events, or latency.
Explore our open source repos on [GitHub][28], subscribe to [our blog][34], and follow us on social media: [Twitter][35], [LinkedIn][36], [dev.to][37], [Medium][38], and [YouTube][39]!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/rudderstack-customer-data-platform
作者:[Amey Varangaonkar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ameypv
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://rudderstack.com/
[3]: https://segment.com/
[4]: https://app.rudderstack.com/
[5]: https://opensource.com/sites/default/files/uploads/rudderstack_login.png (RudderStack login screen)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/rudderstack_workspace-token.png (RudderStack workspace token)
[8]: https://opensource.com/life/16/7/stumbling-git
[9]: https://helm.sh/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[11]: https://helm.sh/docs/intro/install/
[12]: tmp.AhGpFIyrbZ#token
[13]: https://github.com/rudderlabs/rudderstack-helm
[14]: https://docs.rudderstack.com/installing-and-setting-up-rudderstack/kubernetes
[15]: https://raw.githubusercontent.com/rudderlabs/rudder-server/master/rudder-docker.yml
[16]: https://docs.rudderstack.com/get-started/adding-source-and-destination-rudderstack
[17]: https://docs.rudderstack.com/rudderstack-sdk-integration-guides/rudderstack-javascript-sdk
[18]: https://opensource.com/sites/default/files/uploads/googleanalyticstrackingid.png (Google Analytics tracking ID)
[19]: https://opensource.com/sites/default/files/uploads/rudderstack_sourcewritekey.png (RudderStack source write key)
[20]: https://github.com/rudderlabs/rudder-server
[21]: https://nodejs.org/en/download/
[22]: https://opensource.com/sites/default/files/uploads/rudderstack_controlplane.png (RudderStack open source control plane)
[23]: https://docs.rudderstack.com/how-to-guides/rudderstack-config-generator
[24]: https://www.gnu.org/licenses/agpl-3.0-standalone.html
[25]: https://github.com/rudderlabs/rudder-transformer
[26]: https://opensource.org/licenses/MIT
[27]: https://www.getdbt.com/
[28]: https://github.com/rudderlabs
[29]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/docker
[30]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/kubernetes
[31]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/native-installation
[32]: https://docs.rudderstack.com/get-started/installing-and-setting-up-rudderstack/developer-machine-setup
[33]: https://resources.rudderstack.com/rudderstack-cloud
[34]: https://rudderstack.com/blog/
[35]: https://twitter.com/RudderStack
[36]: https://www.linkedin.com/company/rudderlabs/
[37]: https://dev.to/rudderstack
[38]: https://rudderstack.medium.com/
[39]: https://www.youtube.com/channel/UCgV-B77bV_-LOmKYHw8jvBw

[#]: subject: (Practice using the Linux grep command)
[#]: via: (https://opensource.com/article/21/3/grep-cheat-sheet)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Practice using the Linux grep command
======
Learn the basics of searching for info in your files, then download our cheat sheet for a quick reference guide to grep and regex.
![Hand putting a Linux file folder into a drawer][1]
One of the classic Unix commands, developed way back in 1974 by Ken Thompson, is the Global Regular Expression Print (grep) command. It's so ubiquitous in computing that it's frequently used as a verb ("grepping through a file") and, depending on how geeky your audience is, it fits nicely into real-world scenarios, too. (For example, "I'll have to grep my memory banks to recall that information.") In short, grep is a way to search through a file for a specific pattern of characters. If that sounds like the modern Find function available in any word processor or text editor, then you've already experienced grep's effects on the computing industry.
Far from just being a quaint old command that's been supplanted by modern technology, grep's true power lies in two aspects:
* Grep works in the terminal and operates on streams of data, so you can incorporate it into complex processes. You can not only _find_ a word in a text file; you can extract the word, send it to another command, and so on.
* Grep uses regular expressions to provide a flexible search capability.
Learning the `grep` command is easy, although it does take some practice. This article introduces you to some of its features I find most useful.
**[Download our free [grep cheat sheet][2]]**
### Installing grep
If you're using Linux, you already have grep installed.
On macOS, you have the BSD version of grep. This differs slightly from the GNU version, so if you want to follow along exactly with this article, then install GNU grep from a project like [Homebrew][3] or [MacPorts][4].
### Basic grep
The basic grep syntax is always the same. You provide the `grep` command a pattern and a file you want it to search. In return, it prints each line to your terminal with a match.
```
$ grep gnu gpl-3.0.txt
    along with this program.  If not, see <http://www.gnu.org/licenses/>.
<http://www.gnu.org/licenses/>.
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
```
By default, the `grep` command is case-sensitive, so "gnu" is different from "GNU" or "Gnu." You can make it ignore capitalization with the `--ignore-case` option.
```
$ grep --ignore-case gnu gpl-3.0.txt
                    GNU GENERAL PUBLIC LICENSE
  The GNU General Public License is a free, copyleft license for
the GNU General Public License is intended to guarantee your freedom to
GNU General Public License for most of our software; it applies also to
[...16 more results...]
<http://www.gnu.org/licenses/>.
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
```
You can also make the `grep` command return all lines _without_ a match by using the `--invert-match` option:
```
$ grep --invert-match \
--ignore-case gnu gpl-3.0.txt
                      Version 3, 29 June 2007
 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
[...648 lines...]
Public License instead of this License.  But first, please read
```
### Pipes
It's useful to be able to find text in a file, but the true power of [POSIX][8] is its ability to chain commands together through "pipes." I find that my best use of grep is when it's combined with other tools, like cut, tr, or [curl][9].
For instance, assume I have a file that lists some technical papers I want to download. I could open the file and manually click on each link, and then click through Firefox options to save each file to my hard drive, but that's a lot of time and clicking. Instead, I could grep for the links in the file, printing _only_ the matching string by using the `--only-matching` option:
```
$ grep --only-matching http\:\/\/.*pdf example.html
http://example.com/linux_whitepaper.pdf
http://example.com/bsd_whitepaper.pdf
http://example.com/important_security_topic.pdf
```
The output is a list of URLs, each on one line. This is a natural fit for how Bash processes data, so instead of having the URLs printed to my terminal, I can just pipe them into `curl`:
```
$ grep --only-matching http\:\/\/.*pdf \
example.html | xargs -n1 curl --remote-name
```
This downloads each file, saving it according to its remote filename onto my hard drive.
My search pattern in this example may seem cryptic. That's because it uses regular expressions, a kind of "wildcard" language that's particularly useful when searching broadly through lots of text.
### Regular expression
Nobody is under the illusion that regular expressions ("regex" for short) are easy. However, I find they often have a worse reputation than they deserve. Admittedly, there's the potential for people to get a little _too clever_ with regex until it's so unreadable and so broad that it folds in on itself, but you don't have to overdo your regex. Here's a brief introduction to regex the way I use it.
First, create a file called `example.txt` and enter this text into it:
```
Albania
Algeria
Canada
0
1
3
11
```
The most basic element of regex is the humble `.` character. It represents a single character.
```
$ grep Can.da example.txt
Canada
```
The pattern `Can.da` successfully returned `Canada` because the `.` character represented any _one_ character.
The `.` wildcard can be modified to represent more than one character with these notations (the `?`, `+`, and `{n}` operators belong to extended regex, so pass grep the `-E` or `--extended-regexp` option when you use them):
* `?` matches the preceding item zero or one time
* `*` matches the preceding item zero or more times
* `+` matches the preceding item one or more times
* `{4}` matches the preceding item exactly four times (or any number you enter in the braces)
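A quick, throwaway experiment makes these quantifiers concrete. This sketch (the file name and contents are invented for the demo) uses `grep -E` so the `+` and `{n}` operators work without backslashes:

```
printf '%s\n' ct cat caat caaat > /tmp/quant.txt

grep -E 'ca?t' /tmp/quant.txt    # ct and cat: zero or one "a"
grep -E 'ca*t' /tmp/quant.txt    # all four lines: zero or more
grep -E 'ca+t' /tmp/quant.txt    # cat, caat, caaat: one or more
grep -E 'ca{2}t' /tmp/quant.txt  # caat: exactly two
```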
Armed with this knowledge, you can practice regex on `example.txt` all afternoon, seeing what interesting combinations you come up with. Some won't work; others will. The important thing is to analyze the results, so you understand why.
For instance, this fails to return any country:
```
$ grep A.a example.txt
```
It fails because the `.` character can only ever match a single character unless you level it up. Using the `*` character, you can tell `grep` to match a single character zero or as many times as necessary until it reaches the end of the word. Because you know the list you're dealing with, you know that _zero times_ is useless in this instance. There are definitely no three-letter country names in this list. So instead, you can use `+` to match a single character at least once and then again as many times as necessary until the end of the word:
```
$ grep -E A.+a example.txt
Albania
Algeria
```
You can use square brackets to provide a list of letters to match. A comma inside brackets isn't a separator; it's just one more character in the list, so `[AC]` would work equally well here:
```
$ grep -E [A,C].+a example.txt
Albania
Algeria
Canada
```
This works for numbers, too. The results may surprise you:
```
$ grep [1-9] example.txt
1
3
11
```
Are you surprised to see 11 in a search for digits 1 to 9?
What happens if you add 13 to your list?
These numbers are returned because they include 1, which is among the list of digits to match.
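If you want the digit search to match whole lines only, so that `11` (or `13`) stays out of the results, anchor the pattern with `^` and `$`, or use the `--line-regexp` option. A sketch, reusing the same numbers in a scratch file:

```
printf '%s\n' 0 1 3 11 > /tmp/nums.txt

grep '[1-9]' /tmp/nums.txt                # 1, 3, 11: substring match
grep '^[1-9]$' /tmp/nums.txt              # 1, 3: the whole line must be one digit
grep --line-regexp '[1-9]' /tmp/nums.txt  # same result as the anchors
```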
As you can see, regex is something of a puzzle, but through experimentation and practice, you can get comfortable with it and use it to improve the way you grep through your data.
### Download the cheatsheet
The `grep` command has far more options than I demonstrated in this article. There are options to better format results, list files and line numbers containing matches, provide context for results by printing the lines surrounding a match, and much more. If you're learning grep, or you just find yourself using it often and resorting to searching through its `info` pages, you'll do yourself a favor by downloading our cheat sheet for it. The cheat sheet uses short options (`-v` instead of `--invert-match`, for instance) as a way to get you familiar with common grep shorthand. It also contains a regex section to help you remember the most common regex codes. [Download the grep cheat sheet today!][2]
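As a taste of that shorthand, here are short-option equivalents of the long options used in this article, run against a small invented file:

```
printf '%s\n' 'GNU is not Unix' 'gnu gnats' 'penguin' > /tmp/demo.txt

grep -i gnu /tmp/demo.txt   # --ignore-case: matches the first two lines
grep -v gnu /tmp/demo.txt   # --invert-match: lines without lowercase "gnu"
grep -o gnu /tmp/demo.txt   # --only-matching: prints just "gnu"
grep -n gnu /tmp/demo.txt   # adds line numbers: 2:gnu gnats
```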
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/grep-cheat-sheet
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/downloads/grep-cheat-sheet
[3]: https://opensource.com/article/20/6/homebrew-mac
[4]: https://opensource.com/article/20/11/macports
[5]: http://www.gnu.org/licenses/
[6]: http://www.gnu.org/philosophy/why-not-lgpl.html
[7]: http://fsf.org/
[8]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[9]: https://opensource.com/downloads/curl-command-cheat-sheet

[#]: subject: (Reverse Engineering a Docker Image)
[#]: via: (https://theartofmachinery.com/2021/03/18/reverse_engineering_a_docker_image.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Reverse Engineering a Docker Image
======
This started with a consulting snafu: Government organisation A got government organisation B to develop a web application. Government organisation B subcontracted part of the work to somebody. Hosting and maintenance of the project was later contracted out to a private-sector company C. Company C discovered that the subcontracted somebody (who was long gone) had built a custom Docker image and made it a dependency of the build system, but without committing the original Dockerfile. That left company C with a contractual obligation to manage a Docker image they had no source code for. Company C calls me in once in a while to do various things, so doing something about this mystery meat Docker image became my job.
Fortunately, the Docker image format is a lot more transparent than it could be. A little detective work is needed, but a lot can be figured out just by pulling apart an image file. As an example, here's a quick walkthrough of an image for [the Prettier code formatter][1].
First let's get the Docker daemon to pull the image, then extract the image to a file:
```
docker pull tmknom/prettier:2.0.5
docker save tmknom/prettier:2.0.5 > prettier.tar
```
Yes, the file is just an archive in the classic tarball format:
```
$ tar xvf prettier.tar
6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/
6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/VERSION
6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/json
6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.tar
88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json
a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/
a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/VERSION
a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/json
a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar
d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/
d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/VERSION
d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/json
d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/layer.tar
manifest.json
repositories
```
As you can see, Docker uses hashes a lot for naming things. Let's have a look at the `manifest.json`. It's in hard-to-read compacted JSON, but the [`jq` JSON Swiss Army knife][2] can pretty print it for us:
```
$ jq . manifest.json
[
{
"Config": "88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json",
"RepoTags": [
"tmknom/prettier:2.0.5"
],
"Layers": [
"a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar",
"d4f612de5397f1fc91272cfbad245b89eac8fa4ad9f0fc10a40ffbb54a356cb4/layer.tar",
"6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.tar"
]
}
]
```
Note that the three layers correspond to the three hash-named directories. We'll look at them later. For now, let's look at the JSON file pointed to by the `Config` key. It's a little long, so I'll just dump the first bit here:
```
$ jq . 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json | head -n 20
{
"architecture": "amd64",
"config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"--help"
],
"ArgsEscaped": true,
"Image": "sha256:93e72874b338c1e0734025e1d8ebe259d4f16265dc2840f88c4c754e1c01ba0a",
```
The most interesting part is the `history` list, which lists every single layer in the image. A Docker image is a stack of these layers. Almost every statement in a Dockerfile turns into a layer that describes the changes to the image made by that statement. If you have a `RUN script.sh` statement that creates `really_big_file` that you then delete with `RUN rm really_big_file`, you actually get two layers in the Docker image: one that contains `really_big_file`, and one that contains a `.wh.really_big_file` tombstone to cancel it out. The overall image file isn't any smaller. That's why you often see Dockerfile statements chained together like `RUN script.sh && rm really_big_file` — it ensures all changes are coalesced into one layer.
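To make that concrete, here are two hypothetical Dockerfile fragments (`script.sh` and `really_big_file` are invented names); both end with the same filesystem, but only the second keeps the big file's bytes out of the image:

```
# Two layers: really_big_file's bytes persist in the first layer,
# and the second layer only adds a .wh.really_big_file tombstone.
RUN ./script.sh
RUN rm really_big_file

# One layer: the file is created and deleted inside a single step,
# so its bytes never land in any layer.
RUN ./script.sh && rm really_big_file
```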
Here are all the layers recorded in the Docker image. Notice that most layers don't change the filesystem image and are marked `"empty_layer": true`. Only three are non-empty, which matches up with what we saw before.
```
$ jq .history 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json
[
{
"created": "2020-04-24T01:05:03.608058404Z",
"created_by": "/bin/sh -c #(nop) ADD file:b91adb67b670d3a6ff9463e48b7def903ed516be66fc4282d22c53e41512be49 in / "
},
{
"created": "2020-04-24T01:05:03.92860976Z",
"created_by": "/bin/sh -c #(nop) CMD [\"/bin/sh\"]",
"empty_layer": true
},
{
"created": "2020-04-29T06:34:06.617130538Z",
"created_by": "/bin/sh -c #(nop) ARG BUILD_DATE",
"empty_layer": true
},
{
"created": "2020-04-29T06:34:07.020521808Z",
"created_by": "/bin/sh -c #(nop) ARG VCS_REF",
"empty_layer": true
},
{
"created": "2020-04-29T06:34:07.36915054Z",
"created_by": "/bin/sh -c #(nop) ARG VERSION",
"empty_layer": true
},
{
"created": "2020-04-29T06:34:07.708820086Z",
"created_by": "/bin/sh -c #(nop) ARG REPO_NAME",
"empty_layer": true
},
{
"created": "2020-04-29T06:34:08.06429638Z",
"created_by": "/bin/sh -c #(nop) LABEL org.label-schema.vendor=tmknom org.label-schema.name=tmknom/prettier org.label-schema.description=Prettier is an opinionated code formatter. org.label-schema.build-date=2020-04-29T06:34:01Z org.label-schema.version=2.0.5 org.label-schema.vcs-ref=35d2587 org.label-schema.vcs-url=https://github.com/tmknom/prettier org.label-schema.usage=https://github.com/tmknom/prettier/blob/master/README.md#usage org.label-schema.docker.cmd=docker run --rm -v $PWD:/work tmknom/prettier --parser=markdown --write '**/*.md' org.label-schema.schema-version=1.0",
"empty_layer": true
},
{
"created": "2020-04-29T06:34:08.511269907Z",
"created_by": "/bin/sh -c #(nop) ARG NODEJS_VERSION=12.15.0-r1",
"empty_layer": true
},
{
"created": "2020-04-29T06:34:08.775876657Z",
"created_by": "/bin/sh -c #(nop) ARG PRETTIER_VERSION",
"empty_layer": true
},
{
"created": "2020-04-29T06:34:26.399622951Z",
"created_by": "|6 BUILD_DATE=2020-04-29T06:34:01Z NODEJS_VERSION=12.15.0-r1 PRETTIER_VERSION=2.0.5 REPO_NAME=tmknom/prettier VCS_REF=35d2587 VERSION=2.0.5 /bin/sh -c set -x && apk add --no-cache nodejs=${NODEJS_VERSION} nodejs-npm=${NODEJS_VERSION} && npm install -g prettier@${PRETTIER_VERSION} && npm cache clean --force && apk del nodejs-npm"
},
{
"created": "2020-04-29T06:34:26.764034848Z",
"created_by": "/bin/sh -c #(nop) WORKDIR /work"
},
{
"created": "2020-04-29T06:34:27.092671047Z",
"created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"/usr/bin/prettier\"]",
"empty_layer": true
},
{
"created": "2020-04-29T06:34:27.406606712Z",
"created_by": "/bin/sh -c #(nop) CMD [\"--help\"]",
"empty_layer": true
}
]
```
Fantastic! All the statements are right there in the `created_by` fields, so we can almost reconstruct the Dockerfile just from this. Almost. The `ADD` statement at the very top doesn't actually give us the file we need to `ADD`. `COPY` statements are also going to be opaque. We also lose `FROM` statements because they expand out to all the layers inherited from the base Docker image.
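That reconstruction can even be roughed out mechanically. The sketch below feeds a made-up sample of `created_by` values through `sed`, stripping the `/bin/sh -c #(nop)` prefix from metadata statements and turning real shell steps back into `RUN` statements:

```
cat > /tmp/history.txt <<'EOF'
/bin/sh -c #(nop) ARG PRETTIER_VERSION
/bin/sh -c set -x && apk add --no-cache nodejs
/bin/sh -c #(nop) WORKDIR /work
EOF

# Metadata statements lose the #(nop) prefix; everything else becomes RUN.
sed -E -e 's,^/bin/sh -c #\(nop\)[[:space:]]+,,' \
       -e 's,^/bin/sh -c ,RUN ,' /tmp/history.txt
```

This prints an approximate Dockerfile (`ARG PRETTIER_VERSION`, `RUN set -x && apk add --no-cache nodejs`, `WORKDIR /work`); the `ADD`, `COPY`, and `FROM` gaps described above still have to be filled in by hand.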
We can group the layers by Dockerfile by looking at the timestamps. Most layer timestamps are under a minute apart, representing how long each layer took to build. However, the first two layers are from `2020-04-24`, and the rest of the layers are from `2020-04-29`. This would be because the first two layers are from a base Docker image. Ideally we'd figure out a `FROM` statement that gets us that image, so that we have a maintainable Dockerfile.
The `manifest.json` says that the first non-empty layer is `a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/layer.tar`. Let's take a look:
```
$ cd a9cc4ace48cd792ef888ade20810f82f6c24aaf2436f30337a2a712cd054dc97/
$ tar tf layer.tar | head
bin/
bin/arch
bin/ash
bin/base64
bin/bbconfig
bin/busybox
bin/cat
bin/chgrp
bin/chmod
bin/chown
```
Okay, that looks like it might be an operating system base image, which is what you'd expect from a typical Dockerfile. There are 488 entries in the tarball, and if you scroll through them, some interesting ones stand out:
```
...
dev/
etc/
etc/alpine-release
etc/apk/
etc/apk/arch
etc/apk/keys/
etc/apk/keys/alpine-devel@lists.alpinelinux.org-4a6a0840.rsa.pub
etc/apk/keys/alpine-devel@lists.alpinelinux.org-5243ef4b.rsa.pub
etc/apk/keys/alpine-devel@lists.alpinelinux.org-5261cecb.rsa.pub
etc/apk/protected_paths.d/
etc/apk/repositories
etc/apk/world
etc/conf.d/
...
```
Sure enough, it's an [Alpine][3] image, which you might have guessed if you noticed that the other layers used an `apk` command to install packages. Let's extract the tarball and look around:
```
$ mkdir files
$ cd files
$ tar xf ../layer.tar
$ ls
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
$ cat etc/alpine-release
3.11.6
```
If you pull `alpine:3.11.6` and extract it, you'll find that there's one non-empty layer inside it, and the `layer.tar` is identical to the `layer.tar` in the base layer of the Prettier image.
Just for the heck of it, what's in the other two non-empty layers? The second layer is the main layer containing the Prettier installation. It has 528 entries, including Prettier, a bunch of dependencies and certificate updates:
```
...
usr/lib/libuv.so.1
usr/lib/libuv.so.1.0.0
usr/lib/node_modules/
usr/lib/node_modules/prettier/
usr/lib/node_modules/prettier/LICENSE
usr/lib/node_modules/prettier/README.md
usr/lib/node_modules/prettier/bin-prettier.js
usr/lib/node_modules/prettier/doc.js
usr/lib/node_modules/prettier/index.js
usr/lib/node_modules/prettier/package.json
usr/lib/node_modules/prettier/parser-angular.js
usr/lib/node_modules/prettier/parser-babel.js
usr/lib/node_modules/prettier/parser-flow.js
usr/lib/node_modules/prettier/parser-glimmer.js
usr/lib/node_modules/prettier/parser-graphql.js
usr/lib/node_modules/prettier/parser-html.js
usr/lib/node_modules/prettier/parser-markdown.js
usr/lib/node_modules/prettier/parser-postcss.js
usr/lib/node_modules/prettier/parser-typescript.js
usr/lib/node_modules/prettier/parser-yaml.js
usr/lib/node_modules/prettier/standalone.js
usr/lib/node_modules/prettier/third-party.js
usr/local/
usr/local/share/
usr/local/share/ca-certificates/
usr/sbin/
usr/sbin/update-ca-certificates
usr/share/
usr/share/ca-certificates/
usr/share/ca-certificates/mozilla/
usr/share/ca-certificates/mozilla/ACCVRAIZ1.crt
usr/share/ca-certificates/mozilla/AC_RAIZ_FNMT-RCM.crt
usr/share/ca-certificates/mozilla/Actalis_Authentication_Root_CA.crt
...
```
The third layer is created by the `WORKDIR /work` statement, and it contains exactly one entry:
```
$ tar tf 6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.tar
work/
```
[The original Dockerfile is in the Prettier git repo.][4]
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2021/03/18/reverse_engineering_a_docker_image.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://github.com/tmknom/prettier
[2]: https://stedolan.github.io/jq/
[3]: https://www.alpinelinux.org/
[4]: https://github.com/tmknom/prettier/blob/35d2587ec052e880d73f73547f1ffc2b11e29597/Dockerfile

[#]: subject: (4 cool new projects to try in Copr for March 2021)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-march-2021/)
[#]: author: (Jakub Kadlčík https://fedoramagazine.org/author/frostyx/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
4 cool new projects to try in Copr for March 2021
======
![][1]
Copr is a [collection][2] of personal repositories for software that isn't carried in Fedora Linux. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open-source. Copr can offer these projects outside the Fedora set of packages. Software in Copr isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
This article presents a few new and interesting projects in Copr. If youre new to using Copr, see the [Copr User Documentation][3] for how to get started.
### Ytfzf
[Ytfzf][5] is a simple command-line tool for searching and watching YouTube videos. It provides a fast and intuitive interface built around fuzzy find utility [fzf][6]. It uses [youtube-dl][7] to download selected videos and opens an external video player to watch them. Because of this approach, _ytfzf_ is significantly less resource-heavy than a web browser with YouTube. It supports thumbnails (via [ueberzug][8]), history saving, queueing multiple videos or downloading them for later, channel subscriptions, and other handy features. Thanks to tools like [dmenu][9] or [rofi][10], it can even be used outside the terminal.
![][11]
#### Installation instructions
The [repo][13] currently provides Ytfzf for Fedora 33 and 34. To install it, use these commands:
```
sudo dnf copr enable bhoman/ytfzf
sudo dnf install ytfzf
```
### Gemini clients
Have you ever wondered what your internet browsing experience would be if the World Wide Web went an entirely different route and didn't adopt CSS and client-side scripting? [Gemini][15] is a modern alternative to the HTTPS protocol, although it doesn't intend to replace it. The [stenstorp/gemini][16] Copr project provides various clients for browsing Gemini _websites_, namely [Castor][17], [Dragonstone][18], [Kristall][19], and [Lagrange][20].
The [Gemini][21] site provides a list of some hosts that use this protocol. Using Castor to visit this site is shown here:
![][22]
#### Installation instructions
The [repo][16] currently provides Gemini clients for Fedora 32, 33, 34, and Fedora Rawhide, as well as EPEL 7 and 8 and CentOS Stream. To install a browser, choose from the install commands shown here:
```
sudo dnf copr enable stenstorp/gemini
sudo dnf install castor
sudo dnf install dragonstone
sudo dnf install kristall
sudo dnf install lagrange
```
### Ly
[Ly][25] is a lightweight login manager for Linux and BSD. It features an ncurses-like text-based user interface. Theoretically, it should support all X desktop environments and window managers (many of them [were tested][26]). Ly also provides basic Wayland support (Sway works very well). Somewhere in the configuration, there is an easter egg option to enable the famous [PSX DOOM fire][27] animation in the background, which, on its own, is worth checking out.
![][28]
#### Installation instructions
The [repo][30] currently provides Ly for Fedora 32, 33, and Fedora Rawhide. To install it, use these commands:
```
sudo dnf copr enable dhalucario/ly
sudo dnf install ly
```
Before setting up Ly to be your system login screen, run the _ly_ command in a terminal to make sure it works properly. Then proceed with disabling your current login manager and enabling Ly instead.
```
sudo systemctl disable gdm
sudo systemctl enable ly
```
Finally, restart your computer for the changes to take effect.
### AWS CLI v2
[AWS CLI v2][32] brings a steady and methodical evolution based on community feedback, rather than a massive redesign of the original client. It introduces new mechanisms for configuring credentials and now allows the user to import credentials from the _.csv_ files generated in the AWS Console. It also provides support for AWS SSO. Other big improvements are server-side auto-completion and interactive parameter generation. A fresh new feature is interactive wizards, which provide a higher level of abstraction and combine multiple AWS API calls to create, update, or delete AWS resources.
![][33]
#### Installation instructions
The [repo][35] currently provides AWS CLI v2 for Fedora Linux 32, 33, 34, and Fedora Rawhide. To install it, use these commands:
```
sudo dnf copr enable spot/aws-cli-2
sudo dnf install aws-cli-2
```
Naturally, access to an AWS account is necessary.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-march-2021/
作者:[Jakub Kadlčík][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/frostyx/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/10/4-copr-945x400-1-816x345.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html
[4]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#droidcam
[5]: https://github.com/pystardust/ytfzf
[6]: https://github.com/junegunn/fzf
[7]: http://ytdl-org.github.io/youtube-dl/
[8]: https://github.com/seebye/ueberzug
[9]: https://tools.suckless.org/dmenu/
[10]: https://github.com/davatorium/rofi
[11]: https://fedoramagazine.org/wp-content/uploads/2021/03/ytfzf.png
[12]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions
[13]: https://copr.fedorainfracloud.org/coprs/bhoman/ytfzf/
[14]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#gemini-clients
[15]: https://gemini.circumlunar.space/
[16]: https://copr.fedorainfracloud.org/coprs/stenstorp/gemini/
[17]: https://git.sr.ht/~julienxx/castor
[18]: https://gitlab.com/baschdel/dragonstone
[19]: https://kristall.random-projects.net/
[20]: https://github.com/skyjake/lagrange
[21]: https://gemini.circumlunar.space/servers/
[22]: https://fedoramagazine.org/wp-content/uploads/2021/03/gemini.png
[23]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-1
[24]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#ly
[25]: https://github.com/nullgemm/ly
[26]: https://github.com/nullgemm/ly#support
[27]: https://fabiensanglard.net/doom_fire_psx/index.html
[28]: https://fedoramagazine.org/wp-content/uploads/2021/03/ly.png
[29]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-2
[30]: https://copr.fedorainfracloud.org/coprs/dhalucario/ly/
[31]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#aws-cli-v2
[32]: https://aws.amazon.com/blogs/developer/aws-cli-v2-is-now-generally-available/
[33]: https://fedoramagazine.org/wp-content/uploads/2021/03/aws-cli-2.png
[34]: https://github.com/FrostyX/fedora-magazine/blob/main/2021-march.md#installation-instructions-3
[35]: https://copr.fedorainfracloud.org/coprs/spot/aws-cli-2/


@@ -0,0 +1,393 @@
[#]: subject: (Create a countdown clock with a Raspberry Pi)
[#]: via: (https://opensource.com/article/21/3/raspberry-pi-countdown-clock)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Create a countdown clock with a Raspberry Pi
======
Start counting down the days to your next holiday with a Raspberry Pi
and an ePaper display.
![Alarm clocks with different time][1]
For 2021, [Pi Day][2] has come and gone, leaving fond memories and [plenty of Raspberry Pi projects][3] to try out. The days after any holiday can be hard when returning to work after high spirits and plenty of fun, and Pi Day is no exception. As we look into the face of the Ides of March, we can long for the joys of the previous, well, day. But fear no more, dear Pi Day celebrant! For today, we begin the long countdown to the next Pi Day!
OK, but seriously. I made a Pi Day countdown timer, and you can too!
A while back, I purchased a [Raspberry Pi Zero W][4] and recently used it to [figure out why my WiFi was so bad][5]. I was also intrigued by the idea of getting an ePaper display for the little Zero W. I didn't have a good use for one, but, dang it, it looked like fun! I purchased a little 2.13" [Waveshare display][6], which fit perfectly on top of the Raspberry Pi Zero W. It's easy to install: Just slip the display down onto the Raspberry Pi's GPIO headers and you're good to go.
I used [Raspberry Pi OS][7] for this project, and while it surely can be done with other operating systems, the `raspi-config` command, used below, is most easily available on Raspberry Pi OS.
### Set up the Raspberry Pi and the ePaper display
Setting up the Raspberry Pi to work with the ePaper display requires you to enable the Serial Peripheral Interface (SPI) in the Raspberry Pi software, install the BCM2835 C libraries (to access the GPIO functions for the Broadcom BCM 2835 chip on the Raspberry Pi), and install Python GPIO libraries to control the ePaper display. Finally, you need to install the Waveshare libraries for working with the 2.13" display using Python.
Here's a step-by-step walkthrough of how to do these tasks.
#### Enable SPI
The easiest way to enable SPI is with the Raspberry Pi `raspi-config` command. The SPI bus allows serial data communication to be used with devices—in this case, the ePaper display:
```
$ sudo raspi-config
```
From the menu that pops up, select **Interfacing Options** -> **SPI** -> **Yes** to enable the SPI interface, then reboot.
#### Install BCM2835 libraries
As mentioned above, the BCM2835 libraries are software for the Broadcom BCM2385 chip on the Raspberry Pi, which allows access to the GPIO pins and the ability to use them to control devices.
As I'm writing this, the latest version of the Broadcom BCM 2835 libraries for the Raspberry Pi is v1.68. To install the libraries, you need to download the software tarball and build and install the software with `make`:
```
# Download the BCM2835 libraries and extract them
$ curl -sSL http://www.airspayce.com/mikem/bcm2835/bcm2835-1.68.tar.gz -o - | tar -xzf -
# Change directories into the extracted code
$ pushd bcm2835-1.68/
# Configure, build, check and install the BCM2835 libraries
$ sudo ./configure
$ sudo make check
$ sudo make install
# Return to the original directory
$ popd
```
#### Install required Python libraries
You also need some Python libraries to control the ePaper display: the `RPi.GPIO` pip package and the `python3-pil` package for drawing shapes. Apparently, the PIL package is all but dead, but there is an alternative, [Pillow][8]. I have not tested Pillow for this project, but it may work:
```
# Install the required Python libraries
$ sudo apt-get update
$ sudo apt-get install python3-pip python3-pil
$ sudo pip3 install RPi.GPIO
```
_Note: These instructions are for Python 3. You can find Python 2 instructions on Waveshare's website._
#### Download Waveshare examples and Python libraries
Waveshare maintains a Git repository with Python and C libraries for working with its ePaper displays and some examples that show how to use them. For this countdown clock project, you will clone this repository and use the libraries for the 2.13" display:
```
# Clone the WaveShare e-Paper git repository
$ git clone https://github.com/waveshare/e-Paper.git
```
If you're using a different display or a product from another company, you'll need to use the appropriate software for your display.
Waveshare provides instructions for most of the above on its website:
* [WaveShare ePaper setup instructions][9]
* [WaveShare ePaper libraries install instructions][10]
#### Get a fun font (optional)
You can display your timer however you want, but why not do it with a little style? Find a cool font to work with!
There are tons of [Open Font License][11] fonts available out there. I am particularly fond of Bangers. You've seen this if you've ever watched YouTube; it's used _all over_. It can be downloaded and dropped into your user's local shared fonts directory to make it available for any application, including this project:
```
# The "Bangers" font is an Open Font License licensed font by Vernon Adams (https://github.com/vernnobile) from Google Fonts
$ mkdir -p ~/.local/share/fonts
$ curl -sSL https://github.com/google/fonts/raw/master/ofl/bangers/Bangers-Regular.ttf -o ~/.local/share/fonts/Bangers-Regular.ttf
```
### Create a Pi Day countdown timer
Now that you have installed the software to work with the ePaper display and a fun font to use, you can build something cool with it: a timer to count down to the next Pi Day!
If you want, you can just grab the [countdown.py][12] Python file from this project's [GitHub repo][13] and skip to the end of this article.
For the curious, I'll break down that file, section by section.
#### Import some libraries
```
#!/usr/bin/python3
# -*- coding:utf-8 -*-
import logging
import os
import sys
import time
from datetime import datetime
from pathlib import Path
from PIL import Image,ImageDraw,ImageFont
logging.basicConfig(level=logging.INFO)
basedir = Path(__file__).parent
waveshare_base = basedir.joinpath('e-Paper', 'RaspberryPi_JetsonNano', 'python')
libdir = waveshare_base.joinpath('lib')
```
At the start, the Python script imports some standard libraries used later in the script. You also need to add `Image`, `ImageDraw`, and `ImageFont` from the PIL package, which you'll use to draw some simple geometric shapes. Finally, set some variables for the local `lib` directory that contains the Waveshare Python libraries for working with the 2.13" display, and which you can use later to load the library from the local directory.
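The path variables above are ordinary `pathlib` operations. A minimal sketch (using a hypothetical install location, and `PurePosixPath` so it runs anywhere) shows that `libdir` is derived relative to the script's own directory rather than the current working directory:

```python
from pathlib import PurePosixPath

# Stand-in for Path(__file__).parent in the real script; this location is hypothetical
basedir = PurePosixPath("/home/pi/epaper-pi-ex")
waveshare_base = basedir.joinpath('e-Paper', 'RaspberryPi_JetsonNano', 'python')
libdir = waveshare_base.joinpath('lib')

print(libdir)  # /home/pi/epaper-pi-ex/e-Paper/RaspberryPi_JetsonNano/python/lib
```

Because the Waveshare repository is cloned next to the script, this convention works no matter where you invoke the script from.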
#### Font size helper function
The next part of the script has a helper function for setting the font size for your chosen font: Bangers-Regular.ttf. It takes an integer for the font size and returns an ImageFont object you can use with the display:
```
def set_font_size(font_size):
    logging.info("Loading font...")
    return ImageFont.truetype(f"{basedir.joinpath('Bangers-Regular.ttf').resolve()}", font_size)
```
#### Countdown logic
Next is a small function that calculates the meat of this project: how long it is until the next Pi Day. If it were, say, January, it would be relatively straightforward to count how many days are left, but you also need to consider whether Pi Day has already passed for the year (sadface), and if so, count how very, very many days are ahead until you can celebrate again:
```
def countdown(now):
    piday = datetime(now.year, 3, 14)
    # Add a year if we're past PiDay
    if piday < now:
        piday = datetime((now.year + 1), 3, 14)
    days = (piday - now).days
    logging.info(f"Days till piday: {days}")
    return days
```
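Stripped of the logging calls, the rollover logic is easy to verify on its own. This standalone sketch reproduces the date math (note that the real script passes `datetime.now()`, whose time-of-day component can push `piday < now` to true on Pi Day itself):

```python
from datetime import datetime

def countdown(now):
    """Days until the next Pi Day (March 14), rolling over to next year if it has passed."""
    piday = datetime(now.year, 3, 14)
    if piday < now:
        piday = datetime(now.year + 1, 3, 14)
    return (piday - now).days

print(countdown(datetime(2021, 3, 1)))   # 13 days to go
print(countdown(datetime(2021, 3, 14)))  # 0 -- it's Pi Day
print(countdown(datetime(2021, 3, 15)))  # 364 -- counting to March 14, 2022
```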
#### The main function
Finally, you get to the main function, which initializes the display and begins writing data to it. In this case, you'll write a welcome message and then begin the countdown to the next Pi Day. But first, you need to load the Waveshare library:
```
def main():
    if os.path.exists(libdir):
        sys.path.append(f"{libdir}")
        from waveshare_epd import epd2in13_V2
    else:
        logging.fatal(f"not found: {libdir}")
        sys.exit(1)
```
The snippet above checks to make sure the library has been downloaded to a directory alongside the countdown script, and then it loads the `epd2in13_V2` library. If you're using a different display, you will need to use a different library. You can also write your own if you are so inclined. I found it kind of interesting to read the Python code that Waveshare provides with the display. It's considerably less complicated than I would have imagined it to be, if somewhat tedious.
The next bit of code creates an EPD (ePaper Display) object to interact with the display and initializes the hardware:
```
    logging.info("Starting...")
    try:
        # Create a display object
        epd = epd2in13_V2.EPD()
        # Initialize the display, and make sure it's clear
        # ePaper keeps its state unless updated!
        logging.info("Initialize and clear...")
        epd.init(epd.FULL_UPDATE)
        epd.Clear(0xFF)
```
An interesting aside about ePaper: It uses power only when it changes a pixel from white to black or vice-versa. This means when the power is removed from the device or the application stops for whatever reason, whatever was on the screen remains. That's great from a power-consumption perspective, but it also means you need to clear the display when starting up, or your script will just write over whatever is already on the screen. Hence, `epd.Clear(0xFF)` is used to clear the display when the script starts.
Next, create a "canvas" where you will draw the rest of your display output:
```
    # Create an image object
    # NOTE: The "epd.height" is the LONG side of the screen
    # NOTE: The "epd.width" is the SHORT side of the screen
    # Counter-intuitive...
    logging.info(f"Creating canvas - height: {epd.height}, width: {epd.width}")
    image = Image.new('1', (epd.height, epd.width), 255)  # 255: clear the frame
    draw = ImageDraw.Draw(image)
```
This matches the width and height of the display—but it is somewhat counterintuitive, in that the short side of the display is the width. I think of the long side as the width, so this is just something to note. Note that the `epd.height` and `epd.width` are set by the Waveshare library to correspond to the device you're using.
#### Welcome message
Next, you'll start to draw something. This involves setting data on the "canvas" object you created above. This doesn't draw it to the ePaper display yet—you're just building the image you want right now. Create a little welcome message celebrating Pi Day, with an image of a piece of pie, drawn by yours truly just for this project:
![drawing of a piece of pie][14]
(Chris Collins, [CC BY-SA 4.0][15])
Cute, huh?
```
    logging.info("Set text...")
    bangers64 = set_font_size(64)
    draw.text((0, 30), 'PI DAY!', font = bangers64, fill = 0)
    logging.info("Set BMP...")
    bmp = Image.open(basedir.joinpath("img", "pie.bmp"))
    image.paste(bmp, (150,2))
```
Finally, _finally_, you get to display the canvas you drew, and it's a little bit anti-climactic:
```
    logging.info("Display text and BMP")
    epd.display(epd.getbuffer(image))
```
That bit above updates the display to show the image you drew.
Next, prepare another image to display your countdown timer.
#### Pi Day countdown timer
First, create a new image object that you can use to draw the display. Also, set some new font sizes to use for the image:
```
    logging.info("Pi Date countdown; press CTRL-C to exit")
    piday_image = Image.new('1', (epd.height, epd.width), 255)
    piday_draw = ImageDraw.Draw(piday_image)
    # Set some more fonts
    bangers36 = set_font_size(36)
    bangers64 = set_font_size(64)
```
To display a ticker like a countdown, it's more efficient to update part of the image, changing the display for only what has changed in the data you want to draw. The next bit of code prepares the display to function this way:
```
    # Prep for updating display
    epd.displayPartBaseImage(epd.getbuffer(piday_image))
    epd.init(epd.PART_UPDATE)
```
Finally, you get to the timer bit, starting an infinite loop that checks how long it is until the next Pi Day and displays the countdown on the ePaper display. If it actually _is_ Pi Day, you can handle that with a little celebration message:
```
    while (True):
        days = countdown(datetime.now())
        unit = get_days_unit(days)
        # Clear the bottom half of the screen by drawing a rectangle filled with white
        piday_draw.rectangle((0, 50, 250, 122), fill = 255)
        # Draw the Header
        piday_draw.text((10,10), "Days till Pi-day:", font = bangers36, fill = 0)
        if days == 0:
            # Draw the Pi Day celebration text!
            piday_draw.text((0, 50), f"It's Pi Day!", font = bangers64, fill = 0)
        else:
            # Draw how many days until Pi Day
            piday_draw.text((70, 50), f"{str(days)} {unit}", font = bangers64, fill = 0)
        # Render the screen
        epd.displayPartial(epd.getbuffer(piday_image))
        time.sleep(5)
```
The last bit of the script does some error handling, including some code to catch keyboard interrupts so that you can stop the infinite loop with **Ctrl**+**C** and a small function to print "day" or "days" depending on whether or not the output should be singular (for that one, single day each year when it's appropriate):
```
    except IOError as e:
        logging.info(e)
    except KeyboardInterrupt:
        logging.info("Exiting...")
        epd.init(epd.FULL_UPDATE)
        epd.Clear(0xFF)
        time.sleep(1)
        epd2in13_V2.epdconfig.module_exit()
        exit()
def get_days_unit(count):
    if count == 1:
        return "day"
    return "days"
if __name__ == "__main__":
    main()
```
And there you have it! A script to count down and display how many days are left until Pi Day! Here's an action shot on my Raspberry Pi (sped up by 86,400; I don't have nearly enough disk space to save a day-long video):
![Pi Day Countdown Timer In Action][16]
(Chris Collins, [CC BY-SA 4.0][15])
#### Install the systemd service (optional)
If you'd like the countdown display to run whenever the system is turned on and without you having to be logged in and run the script, you can install the optional systemd unit as a [systemd user service][17].
Copy the [piday.service][18] file on GitHub to `${HOME}/.config/systemd/user`, first creating the directory if it doesn't exist. Then you can enable the service and start it:
```
$ mkdir -p ~/.config/systemd/user
$ cp piday.service ~/.config/systemd/user
$ systemctl --user enable piday.service
$ systemctl --user start piday.service
# Enable lingering, to create a user session at boot
# and allow services to run after logout
$ loginctl enable-linger $USER
```
The script will output to the systemd journal, and the output can be viewed with the `journalctl` command.
### It's beginning to look a lot like Pi Day!
And _there_ you have it! A Pi Day countdown timer, displayed on an ePaper display using a Raspberry Pi Zero W, and starting on system boot with a systemd unit file! Now there are just 350-something days until we can once again come together and celebrate the fantastic device that is the Raspberry Pi. And we can see exactly how many days at a glance with our tiny project.
But in truth, anyone can hold Pi Day in their hearts year-round, so enjoy creating some fun and educational projects with your own Raspberry Pi!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/raspberry-pi-countdown-clock
Author: [Chris Collins][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated by the [LCTT](https://github.com/LCTT/TranslateProject) project and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://en.wikipedia.org/wiki/Pi_Day
[3]: https://opensource.com/tags/raspberry-pi
[4]: https://www.raspberrypi.org/products/raspberry-pi-zero-w/
[5]: https://opensource.com/article/21/3/troubleshoot-wifi-go-raspberry-pi
[6]: https://www.waveshare.com/product/displays/e-paper.htm
[7]: https://www.raspberrypi.org/software/operating-systems/
[8]: https://pypi.org/project/Pillow/
[9]: https://www.waveshare.com/wiki/2.13inch_e-Paper_HAT
[10]: https://www.waveshare.com/wiki/Libraries_Installation_for_RPi
[11]: https://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=OFL
[12]: https://github.com/clcollins/epaper-pi-ex/blob/main/countdown.py
[13]: https://github.com/clcollins/epaper-pi-ex/
[14]: https://opensource.com/sites/default/files/uploads/pie.png (drawing of a piece of pie)
[15]: https://creativecommons.org/licenses/by-sa/4.0/
[16]: https://opensource.com/sites/default/files/uploads/piday_countdown.gif (Pi Day Countdown Timer In Action)
[17]: https://wiki.archlinux.org/index.php/systemd/User
[18]: https://github.com/clcollins/epaper-pi-ex/blob/main/piday.service


@@ -0,0 +1,213 @@
[#]: subject: (Managing deb Content in Foreman)
[#]: via: (https://opensource.com/article/21/3/linux-foreman)
[#]: author: (Maximilian Kolb https://opensource.com/users/kolb)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Managing deb Content in Foreman
======
Use Foreman to serve software packages and errata for certain Linux
systems.
![Package wrapped with brown paper and red bow][1]
Foreman is a data center automation tool to deploy, configure, and patch hosts. It relies on Katello for content management, which in turn relies on Pulp to manage repositories. See [_Manage content using Pulp Debian_][2] for more information.
Pulp offers many plugins for different content types, including RPM packages, Ansible roles and collections, PyPI packages, and deb content. The latter is called the **pulp_deb** plugin.
### Content management in Foreman
The basic idea for providing content to hosts is to mirror repositories and provide content to hosts via either the Foreman server or attached Smart Proxies.
This tutorial is a step-by-step guide to adding deb content to Foreman and serving hosts running Debian 10. "Deb content" refers to software packages and errata for Debian-based Linux systems (e.g., Debian and Ubuntu). This article focuses on [Debian 10 Buster][3] but the instructions also work for [Ubuntu 20.04 Focal Fossa][4], unless noted otherwise.
### 1. Create the operating system
#### 1.1. Create an architecture
Navigate to **Hosts > Architectures** and create a new architecture (if the architecture where you want to deploy Debian 10 hosts is missing). This tutorial assumes your hosts run on the x86_64 architecture, as Foreman does.
#### 1.2. Create an installation media
Navigate to **Hosts > Installation Media** and create new Debian 10 installation media. Use the upstream repository URL <http://ftp.debian.org/debian/>.
Select the Debian operating system family for either Debian or Ubuntu.
Alternatively, you can also use a Debian mirror. However, content synced via Pulp does not work for two reasons: first, the `linux` and `initrd.gz` files are not in the expected locations; second, the `Release` file is not signed.
#### 1.3. Create an operating system
Navigate to **Hosts > Operating Systems** and create a new operating system called Debian 10. Use **10** as the major version and leave the minor version field blank. For Ubuntu, use **20.04** as the major version and leave the minor version field blank.
![Creating an operating system entry][5]
(Maximilian Kolb, [CC BY-SA 4.0][6])
Select the Debian operating system family for Debian or Ubuntu, and specify the release name (e.g., **Buster** for Debian 10 or **Stretch** for Debian 9). Select the default partition tables and provisioning templates, i.e., the **Preseed default** ones.
#### 1.4. Adapt default Preseed templates (optional)
Navigate to **Hosts &gt; Partition Tables** and **Hosts &gt; Provisioning Templates** and adapt the default **Preseed** templates if necessary. Note that you need to clone locked templates before editing them. Cloned templates will not receive updates with newer Foreman versions. All Debian-based systems use **Preseed** templates, which are included with Foreman by default.
#### 1.5. Associate the templates
Navigate to **Hosts > Provisioning Templates** and search for **Preseed**. Associate all desired provisioning templates to the operating system. Then, navigate to **Hosts > Operating Systems** and select **Debian 10** as the operating system. Select the **Templates** tab and associate any provisioning templates that you want.
### 2. Synchronize content
#### 2.1. Create content credentials for Debian upstream repositories and Debian client
Navigate to **Content > Content Credentials** and add the required GPG public keys as content credentials for Foreman to verify the deb packages' authenticity. To obtain the necessary GPG public keys, verify the **Release** file and export the corresponding GPG public key as follows:
 * **Debian 10 main:**

   ```
   wget http://ftp.debian.org/debian/dists/buster/Release && wget http://ftp.debian.org/debian/dists/buster/Release.gpg
   gpg --verify Release.gpg Release
   gpg --keyserver keys.gnupg.net --recv-key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC
   gpg --keyserver keys.gnupg.net --recv-key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138
   gpg --keyserver keys.gnupg.net --recv-key 6D33866EDD8FFA41C0143AEDDCC9EFBF77E11517
   gpg --armor --export E0B11894F66AEC98 DC30D7C23CBBABEE DCC9EFBF77E11517 > debian_10_main.txt
   ```

 * **Debian 10 security:**

   ```
   wget http://security.debian.org/debian-security/dists/buster/updates/Release && wget http://security.debian.org/debian-security/dists/buster/updates/Release.gpg
   gpg --verify Release.gpg Release
   gpg --keyserver keys.gnupg.net --recv-key 379483D8B60160B155B372DDAA8E81B4331F7F50
   gpg --keyserver keys.gnupg.net --recv-key 5237CEEEF212F3D51C74ABE0112695A0E562B32A
   gpg --armor --export EDA0D2388AE22BA9 4DFAB270CAA96DFA > debian_10_security.txt
   ```

 * **Debian 10 updates:**

   ```
   wget http://ftp.debian.org/debian/dists/buster-updates/Release && wget http://ftp.debian.org/debian/dists/buster-updates/Release.gpg
   gpg --verify Release.gpg Release
   gpg --keyserver keys.gnupg.net --recv-key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC
   gpg --keyserver keys.gnupg.net --recv-key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138
   gpg --armor --export E0B11894F66AEC98 DC30D7C23CBBABEE > debian_10_updates.txt
   ```

 * **Debian 10 client:**

   ```
   wget --output-document=debian_10_client.txt https://apt.atix.de/atix_gpg.pub
   ```
You can select the respective ASCII-armored TXT files to upload to your Foreman instance.
#### 2.2. Create products called Debian 10 and Debian 10 client
Navigate to **Content > Products** and create two new products.
#### 2.3. Create the necessary Debian 10 repositories
Navigate to **Content > Products** and select the **Debian 10** product. Create three **deb** repositories:
* **Debian 10 main:**
* URL: `http://ftp.debian.org/debian/`
* Releases: `buster`
* Component: `main`
* Architecture: `amd64`
* **Debian 10 security:**
* URL: `http://deb.debian.org/debian-security/`
* Releases: `buster/updates`
* Component: `main`
* Architecture: `amd64`
If you want, you can add a self-hosted errata service: `https://github.com/ATIX-AG/errata_server` and `https://github.com/ATIX-AG/errata_parser`
* **Debian 10 updates:**
* URL: `http://ftp.debian.org/debian/`
* Releases: `buster-updates`
* Component: `main`
* Architecture: `amd64`
Select the content credentials that you created in step 2.1. Adjust the components and architecture as needed. Navigate to **Content &gt; Products** and select the **Debian 10 client** product. Create a **deb** repository as follows:
* **Debian 10 subscription-manager**
* URL: `https://apt.atix.de/Debian10/`
* Releases: `stable`
* Component: `main`
* Architecture: `amd64`
Select the content credentials you created in step 2.1. The Debian 10 client contains the **subscription-manager** package, which runs on each content host to receive content from the Foreman Server or an attached Smart Proxy. Navigate to [apt.atix.de][7] for further instructions.
#### 2.4. Synchronize the repositories
If you want, you can create a sync plan to sync the **Debian 10** and **Debian 10 client** products periodically. To sync the products once, click the **Select Action > Sync Now** button on the **Products** page.
#### 2.5. Create content views
Navigate to **Content > Content Views** and create a content view called **Debian 10** comprising the Debian upstream repositories created in the **Debian 10** product, and publish a new version. Do the same for the **Debian 10 client** repository of the **Debian 10 client** product.
#### 2.6. Create a composite content view
Create a new composite content view called **Composite Debian 10** comprising the previously published **Debian 10** and **Debian 10 client** content views and publish a new version. You may optionally add other content views of your choice (e.g., Puppet).
![Composite content view][8]
(Maximilian Kolb, [CC BY-SA 4.0][6])
#### 2.7. Create an activation key
Navigate to **Content > Activation Keys** and create a new activation key called **debian-10**:
* Select the **Library** lifecycle environment and add the **Composite Debian 10** content view.
* On the **Details** tab, assign the correct lifecycle environment and composite content view.
* On the **Subscriptions** tab, assign the necessary subscriptions, i.e., the **Debian 10** and **Debian 10 client** products.
### 3. Deploy a host
#### 3.1. Enable provisioning via Port 8000
Connect to your Foreman instance via SSH and edit the following file:
```
/etc/foreman-proxy/settings.yml
```
Search for `:http_port: 8000` and make sure it is not commented out (i.e., the line does not start with a `#`).
#### 3.2. Create a host group
Navigate to **Configure > Host Groups** and create a new host group called **Debian 10**. Check out the Foreman documentation on [creating host groups][9], and make sure to select the correct entries on the **Operating System** and **Activation Keys** tabs.
#### 3.3. Create a new host
Navigate to **Hosts > Create Host** and either select the host group described above or manually enter the same information.
> Tip: Deploying hosts running Ubuntu 20.04 is even easier, as you can use its official installation media ISO image and do offline installations. Check out orcharhino's [Managing Ubuntu Systems Guide][10] for more information.
[ATIX][11] has developed several Foreman plugins, and is an integral part of the [Foreman open source ecosystem][12]. The community's feedback on our contributions is passed back to our customers, as we continuously strive to improve our downstream product, [orcharhino][13].
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/linux-foreman
Author: [Maximilian Kolb][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated by the [LCTT](https://github.com/LCTT/TranslateProject) project and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/kolb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brown-package-red-bow.jpg?itok=oxZYQzH- (Package wrapped with brown paper and red bow)
[2]: https://opensource.com/article/20/10/pulp-debian
[3]: https://wiki.debian.org/DebianBuster
[4]: https://releases.ubuntu.com/20.04/
[5]: https://opensource.com/sites/default/files/uploads/foreman-debian_content_deb_operating_system_entry.png (Creating an operating system entry)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://apt.atix.de/
[8]: https://opensource.com/sites/default/files/uploads/foreman-debian_content_deb_composite_content_view.png (Composite content view)
[9]: https://docs.theforeman.org/nightly/Managing_Hosts/index-foreman-el.html#creating-a-host-group
[10]: https://docs.orcharhino.com/or/docs/sources/usage_guides/managing_ubuntu_systems_guide.html#musg_deploy_hosts
[11]: https://atix.de/
[12]: https://theforeman.org/2020/10/atix-in-the-foreman-community.html
[13]: https://orcharhino.com/


@@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: (cooljelly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Multicloud, security integration drive massive SD-WAN adoption)
[#]: via: (https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Multicloud, security integration drive massive SD-WAN adoption
======
The SD-WAN market's projected 40% year-over-year growth through 2022 is driven largely by the tight relationships between network vendors, including Cisco, VMware, Juniper, and Arista, and service providers, including AWS, Microsoft Azure, Google Anthos, and IBM Red Hat.
[Gratisography][1] [(CC0)][2]
A growing number of cloud applications, along with improved network security, visibility, and manageability, are driving enterprise software-defined WAN ([SD-WAN][3]) deployments at a remarkable pace.
According to IDC research, software- and infrastructure-as-a-service (SaaS and IaaS) offerings in particular have driven SD-WAN adoption over the past year, says Rohit Mehra, vice president of network infrastructure at IDC (International Data Corporation).
**Read more about edge computing**
 * [How edge computing and IoT are reshaping data centers][4]
 * [Best practices for edge computing][5]
 * [How edge computing improves IoT security][6]
For example, IDC says that, according to its recent customer surveys, 95% of customers will be using [SD-WAN][7] technology within two years, and 42% have already deployed it. IDC also says the SD-WAN infrastructure market will reach $4.5 billion by 2022, growing at a rate of 40% per year thereafter.
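As a quick sanity check on those projections (a back-of-the-envelope sketch using only the figures quoted above, not additional IDC data), compounding a $4.5 billion market at 40% a year gives:

```python
market = 4.5  # IDC's 2022 estimate, in billions of dollars
for year in (2023, 2024, 2025):
    market *= 1.40  # 40% annual growth
    print(year, round(market, 2))
```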
”SD-WAN 的增长是一个广泛的趋势,很大程度上是由企业希望优化远程站点的云连接性的需求推动的。“ Mehra 说。
思科最近撰文称,多云网络的发展正在促使许多企业改组其网络,以更好地使用 SD-WAN 技术。SD-WAN 对于采用云服务的企业至关重要,它是园区网、分支机构、[物联网][8]、[数据中心][9] 和云之间的连接中间件。思科公司表示,根据调查,平均每个思科的企业客户有 30 个付费的 SaaS 应用程序,而他们实际使用的 SaaS 应用会更多——在某些情况下甚至超过 100 种。
这种趋势的部分原因是由网络供应商(例如 Cisco、VMware、Juniper、Arista 等)(这里的网络供应商指的是提供硬件或软件并可按需组网的厂商,译者注)与服务提供商(例如 Amazon AWS、Microsoft Azure、Google Anthos 和 IBM RedHat 等)建立的关系推动的。
去年 12 月AWS 为其云产品发布了包括 [AWS Transit Gateway][10] 在内的多项关键的新集成技术,这标志着 SD-WAN 与多云场景的关系日益重要。借助 AWS Transit Gateway 技术,客户可以把 AWS 中的 VPCVirtual Private Cloud虚拟私有云和自有网络连接到同一个网关。Aruba、Aviatrix、Cisco、Citrix Systems、Silver Peak 和 Versa 已经宣布支持该技术,这将简化并增强这些公司的 SD-WAN 产品与 AWS 云服务集成后的性能和表现。
Mehra 说,展望未来,对云应用的友好兼容和完善的性能监控等增值功能将是 SD-WAN 部署的关键部分。
随着 SD-WAN 与云的关系不断发展SD-WAN 对集成安全功能的需求也在不断增长。
Mehra 说SD-WAN 产品集成安全性的方式比以往单独打包的广域网安全软件或服务要好得多SD-WAN 是一个更加敏捷的安全环境。安全功能、数据分析功能和广域网优化功能等被公认为 SD-WAN 的主要组成部分,其中安全功能是下一代 SD-WAN 解决方案的首要需求。
Mehra 说,企业将越来越少地关注仅解决某个具体问题的 SD-WAN 解决方案,而将青睐于能够解决更广泛的网络管理和安全需求的 SD-WAN 平台。他们将寻找可以与他们的 IT 基础设施(包括企业数据中心网络、企业园区局域网、[公有云][12] 资源等)更紧密集成的 SD-WAN 平台。他说,企业将寻求无缝融合的安全服务,并希望有其他各种功能的支持,例如可视化、数据分析和统一通信功能。
“随着客户不断将其基础设施与软件集成在一起,他们可以做更多的事情,例如根据其局域网和广域网上的用户、设备或应用程序的需求,实现一致的管理和安全策略,并最终获得更好的整体使用体验。” Mehra 说。
一个新兴趋势是 SD-WAN 产品包需要支持 [SD-Branch][13] 技术。Mehra 说,超过 70% 的 IDC 受调查客户希望在明年内使用 SD-Branch。最近几周[Juniper][14] 和 [Aruba][15] 公司已经强化了各自的 SD-Branch 产品,这一趋势预计将在今年持续下去。
SD-Branch 技术建立在 SD-WAN 的概念和支持之上但更专注于满足分支机构中局域网的组网和管理需求。展望未来SD-Branch 与其他技术(例如数据分析、音视频、统一通信等)的集成将成为该技术的主要驱动力。
加入 [Facebook][16] 和 [LinkedIn][17] 上的 Network World 社区,以评论您最关注的主题。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3527194/multicloud-security-integration-drive-massive-sd-wan-adoption.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[cooljelly](https://github.com/cooljelly)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.pexels.com/photo/black-and-white-branches-tree-high-279/
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[5]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[6]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[7]: https://www.networkworld.com/article/3489938/what-s-hot-at-the-edge-for-2020-everything.html
[8]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[9]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[10]: https://aws.amazon.com/transit-gateway/
[11]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[12]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
[13]: https://www.networkworld.com/article/3250664/sd-branch-what-it-is-and-why-youll-need-it.html
[14]: https://www.networkworld.com/article/3487801/juniper-broadens-sd-branch-management-switch-options.html
[15]: https://www.networkworld.com/article/3513357/aruba-reinforces-sd-branch-with-security-management-upgrades.html
[16]: https://www.facebook.com/NetworkWorld/
[17]: https://www.linkedin.com/company/network-world


@ -1,159 +0,0 @@
[#]: collector: "lujun9972"
[#]: translator: "wyxplus"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "Managing processes on Linux with kill and killall"
[#]: via: "https://opensource.com/article/20/1/linux-kill-killall"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
在 Linux 上使用 kill 和 killall 命令来管理进程
======
了解如何使用 pskill 和 killall 命令来终止进程并回收系统资源。
![Penguin with green background][1]
在 Linux 中,每个程序和守护程序都是一个“进程”。大多数进程代表一个正在运行的程序,而另外一些程序可以派生出其他进程,比如说它会侦听某些事件的发生,然后对其做出响应。并且每个进程都需要一定的内存和处理能力。你运行的进程越多,所需的内存和 CPU 使用周期就越多。在老式电脑(例如我使用了 7 年的笔记本电脑)或轻量级计算机(例如树莓派)上,如果你关注过后台运行的进程,就能充分利用你的系统。
你可以使用 **ps** 命令来查看正在运行的进程。你通常会使用 **ps** 命令的参数来显示出更多的输出信息。我喜欢使用 **-e** 参数来查看每个正在运行的进程,以及 **-f** 参数来获得每个进程的全部细节。以下是一些例子:
```
$ ps
PID TTY TIME CMD
88000 pts/0 00:00:00 bash
88052 pts/0 00:00:00 ps
88053 pts/0 00:00:00 head
```

```
$ ps -e | head
PID TTY TIME CMD
1 ? 00:00:50 systemd
2 ? 00:00:00 kthreadd
3 ? 00:00:00 rcu_gp
4 ? 00:00:00 rcu_par_gp
6 ? 00:00:02 kworker/0:0H-events_highpri
9 ? 00:00:00 mm_percpu_wq
10 ? 00:00:01 ksoftirqd/0
11 ? 00:00:12 rcu_sched
12 ? 00:00:00 migration/0
```

```
$ ps -ef | head
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 13:51 ? 00:00:50 /usr/lib/systemd/systemd --switched-root --system --deserialize 36
root 2 0 0 13:51 ? 00:00:00 [kthreadd]
root 3 2 0 13:51 ? 00:00:00 [rcu_gp]
root 4 2 0 13:51 ? 00:00:00 [rcu_par_gp]
root 6 2 0 13:51 ? 00:00:02 [kworker/0:0H-kblockd]
root 9 2 0 13:51 ? 00:00:00 [mm_percpu_wq]
root 10 2 0 13:51 ? 00:00:01 [ksoftirqd/0]
root 11 2 0 13:51 ? 00:00:12 [rcu_sched]
root 12 2 0 13:51 ? 00:00:00 [migration/0]
```
最后的例子显示的细节最多。在每一行中UID用户 ID是进程的所有者PID进程 ID是该进程的 ID而 PPID父进程 ID是其父进程的 ID。在任何 Unix 系统中,进程都从 1 号开始编号1 号进程是内核启动后运行的第一个进程。在这里,**systemd** 是第一个进程,它派生出了 **kthreadd**,而 **kthreadd** 又创建了其他进程,包括 **rcu_gp**、**rcu_par_gp** 等许多进程。
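如果只想关注进程之间的父子关系,可以让 **ps** 只输出感兴趣的几列,或者(在 Linux 的 GNU ps 中)以树状缩进展示。下面是一个简单的示意,具体列名和参数请以你系统上的 `ps` 手册页为准:

```shell
# 只输出进程号、父进程号和命令名三列
ps -o pid,ppid,comm

# GNU ps--forest 参数以树状缩进展示进程的父子关系
ps -ef --forest | head
```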
### 使用 kill 命令来管理进程
系统会关注大多数后台进程,所以你不必担心这些进程。你只需要关注那些你所运行应用所创建的进程。然而许多应用一次只运行一个进程(如音乐播放器、终端模拟器或游戏等),其他应用则可能产生后台进程。一些应用可能当你退出后还在后台运行,以便下次你使用的时候能快速启动。
当我运行 Chromium作为谷歌 Chrome 浏览器所基于的项目)时,进程管理便成了问题。 Chromium 在我的笔记本电脑上运行非常吃力,并产生了许多额外的进程。现在仅打开五个选项卡,我可以看到这些 Chromium 进程运行情况:
```
$ ps -ef | fgrep chromium
jhall 66221 [...] /usr/lib64/chromium-browser/chromium-browser [...]
jhall 66230 [...] /usr/lib64/chromium-browser/chromium-browser [...]
[...]
jhall 66861 [...] /usr/lib64/chromium-browser/chromium-browser [...]
jhall 67329 65132 0 15:45 pts/0 00:00:00 grep -F chromium
```
我已经省略了一些行,其中有 20 个 Chromium 进程和一个正在搜索 “chromium” 字符串的 **grep** 进程:
```
$ ps -ef | fgrep chromium | wc -l
21
```
但是在我退出 Chromium 之后,这些进程仍旧在运行。如何关闭它们并回收这些进程占用的内存和 CPU 呢?
**kill** 命令能让你终止一个进程。在最简单的情况下,你把要终止的进程的 PID 传给 **kill** 命令。例如,要终止这些进程,我需要对这 20 个 Chromium 进程的 ID 都执行 **kill** 命令。一种方法是先用命令行获取 Chromium 的 PID 列表,再针对该列表运行 **kill**
```
$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}'
66221
66230
66239
66257
66262
66283
66284
66285
66324
66337
66360
66370
66386
66402
66503
66539
66595
66734
66848
66861
69702
$ ps -ef | fgrep /usr/lib64/chromium-browser/chromium-browser | awk '{print $2}' &gt; /tmp/pids
$ kill $( cat /tmp/pids)
```
最后两行是关键。第一个命令行为 Chromium 浏览器生成一个进程 ID 列表,第二个命令行则针对该列表运行 **kill** 命令。
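顺带一提,**kill** 默认发送的是 SIGTERM15信号给进程体面退出的机会只有在进程无响应时才应改用 SIGKILL9。另外较新的 Linux 系统还自带 **pgrep****pkill**,可以把“先找 PID 再 kill”合并为一步。下面用 **sleep** 进程代替 Chromium 做个小演示(仅为示意):

```shell
# 启动一个后台进程,代替文中的 Chromium 进程
sleep 300 &
pid=$!

# 默认发送 SIGTERM15让进程有机会清理后退出
kill "$pid"

# pgrep 按名称列出匹配进程的 PID相当于上文的 ps | fgrep | awk 管道
# pkill 则一步终止所有匹配的进程,等价于上面的 kill $(...) 写法
sleep 300 & sleep 300 &
pgrep -x sleep
pkill -x sleep

# 对顽固进程的最后手段SIGKILL 无法被进程捕获或忽略:
# kill -9 "$pid"
```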
### 介绍 killall 命令
一次终止多个进程有个更简单的方法,那就是使用 **killall** 命令。你或许可以根据名称猜测出,**killall** 会终止所有与该名字匹配的进程。这意味着我们可以用此命令来停止所有这些不受控的 Chromium 进程。这很简单:
```
$ killall /usr/lib64/chromium-browser/chromium-browser
```
但是要小心使用 **killall**。该命令会终止与你给出的名称相匹配的所有进程。这就是为什么我喜欢先用 **ps -ef** 命令检查正在运行的进程,然后针对要终止的命令的准确路径运行 **killall**
你可能会想使用 **-i****--interactive** 参数,让 **killall** 在停止每个进程之前都提示你进行确认。
**killall** 还支持使用 **-o****--older-than** 参数来查找比特定时间更早启动的进程。例如,这有助于你发现一组无人值守运行了好几天的失控进程。你也可以查找比特定时间更晚启动的进程,例如你最近才启动的高占用进程,这可以使用 **-y****--younger-than** 参数来实现。
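举个例子,下面演示了这两个时间过滤参数的用法(这里用 **sleep** 进程代替实际应用,时间可以用 s、m、h、d 等单位,具体请以 `killall` 手册页为准):

```shell
# 启动一个刚刚运行起来的示例进程
sleep 400 &

# 只终止启动时间不足 1 分钟的同名进程
killall --younger-than 1m sleep

# 类似地,--older-than 只会终止已运行超过指定时长的进程,例如:
#   killall --older-than 2d some-batch-job
```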
### 其他管理进程的方式
进程管理是系统维护的重要组成部分。在我早期的职业生涯中,作为 Unix 和 Linux 系统管理员终止失控任务的能力是保持系统正常运行的关键。如今你可能不需要亲手终止Linux 上的恶意进程,但要知道,**kill****killall** 能在最终出现问题时为你提供帮助。
你也可以寻找其他方式来管理进程。就我而言,我并不需要在退出浏览器后用 **kill****killall** 来终止后台的 Chromium 进程。在 Chromium 中有个简单的设置可以进行控制:
![Chromium background processes setting][2]
不过,始终关注系统上正在运行哪些进程,并且在需要的时候进行干预是一个明智之举。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/linux-kill-killall
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[wyxplus](https://github.com/wyxplus)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 "Penguin with green background"
[2]: https://opensource.com/sites/default/files/uploads/chromium-settings-continue-running.png "Chromium background processes setting"


@ -0,0 +1,112 @@
[#]: collector: (lujun9972)
[#]: translator: (ShuyRoy )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with distributed tracing using Grafana Tempo)
[#]: via: (https://opensource.com/article/21/2/tempo-distributed-tracing)
[#]: author: (Annanay Agarwal https://opensource.com/users/annanayagarwal)
使用 Grafana Tempo 开始分布式跟踪
======
Grafana Tempo 是一个新的开源、大容量分布式跟踪后端。
![Computer laptop in space][1]
Grafana 的 [Tempo][2] 是出自 Grafana 实验室的一个简单易用、可大规模扩展的分布式跟踪后端。Tempo 可与 [Grafana][3]、[Prometheus][4] 以及 [Loki][5] 集成,并且它只需要对象存储即可运行,因而具有成本效益,且易于操作。
我从一开始就参与了这个开源项目,所以我将介绍一些关于 Tempo 的基础知识,并说明为什么云原生社区会注意到它。
### 分布式跟踪
想要收集关于应用程序请求的遥测数据是很常见的。但是在如今的服务架构中,单个应用通常被分割为多个微服务,可能运行在几个不同的节点上。
分布式跟踪是一种获取应用性能细粒度信息的方式,即使该应用可能由多个离散的服务组成。当请求到达一个应用时,它提供了该请求整个生命周期的统一视图。Tempo 的分布式跟踪适用于单体或微服务应用,它提供[请求范围的信息][6],使其成为可观察性的第三大支柱(另外两个是度量和日志)。
下面是一个分布式跟踪系统生成应用程序甘特图的示例。它使用 Jaeger [HotROD][7] 演示应用生成跟踪,并把它们存到 Grafana 云托管的 Tempo 上。这张图展示了按照服务和功能划分的请求处理时间。
![Gantt chart from Grafana Tempo][8]
(Annanay Agarwal, [CC BY-SA 4.0][9])
### 减少索引的大小
在丰富且定义良好的数据模型中,跟踪包含了大量信息。用户与跟踪后端通常有两种交互方式:使用元数据选择器(如服务名或者持续时间)筛选跟踪,以及在筛选之后对跟踪进行可视化。
为了加快搜索,大多数开源分布式跟踪框架会对跟踪中的许多字段建立索引,包括服务名称、操作名称、标签和持续时间。这会导致索引非常庞大,并迫使你使用 Elasticsearch 或者 [Cassandra][10] 这样的数据库。但是这些数据库很难管理,而且大规模运营的成本很高,所以我在 Grafana 实验室的团队打算提出一个更好的解决方案。
在 Grafana 中,我的值班调试工作流是:先从指标报表开始(我们使用 [Cortex][11] 来存储应用的指标它是一个云原生计算基金会CNCF孵化项目用于水平扩展 Prometheus然后深入定位问题筛选出问题服务的日志我们把日志存储在 Loki 中,它就像 Prometheus 一样,只不过存的是日志),最后查看给定请求的跟踪。我们意识到,过滤时所需的所有索引信息其实都已经存在于 Cortex 和 Loki 中。但是,我们还需要这些工具与跟踪发现之间的强集成,以及一个能按跟踪 ID 进行键值查找的廉价存储。
这就是 [Grafana Tempo][12] 项目的起点。通过专注于按给定的跟踪 ID 检索跟踪,我们把 Tempo 设计成了最小依赖、大容量、低成本的分布式跟踪后端。
### 容易操作和低成本
Tempo 使用对象存储作为后端,这是它唯一的依赖。它既可以在单二进制模式下使用,也可以在微服务模式下使用(请参考仓库中的[例子][13],了解如何轻松上手)。使用对象存储也意味着,你可以在不进行任何采样的情况下存储应用的大量跟踪。这可以确保你永远不会丢弃那百万分之一的出错或高延迟的请求。
### 与开源工具的强大集成
[Grafana 7.3 包含了 Tempo 数据源][14],这意味着你可以在 Grafana UI 中可视化来自 Tempo 的跟踪。而且,[Loki 2.0 的新查询特性][15]使得 Tempo 中的跟踪发现变得更简单。为了与 Prometheus 集成团队正在添加对“范例exemplar”的支持范例是可以附加到时间序列数据上的高基数元数据信息。度量存储后端不会对它们建立索引但你可以在 Grafana UI 中检索和显示它们。尽管范例可以存储各种元数据,但在这个用例中,存储的是跟踪 ID以便与 Tempo 深度集成。
下面的例子展示了在请求延迟直方图中使用范例,其中每个范例数据点都链接到 Tempo 中的一个跟踪。
![Using exemplars in Tempo][16]
(Annanay Agarwal, [CC BY-SA 4.0][9])
### 元数据一致性
以容器化应用形式运行的应用程序,其发出的遥测数据通常带有一些相关的元数据,包括集群 ID、命名空间、Pod IP 等。这些元数据对按需获取信息很有帮助,但如果能利用元数据中包含的信息做一些高效的工作,那就更好了。
例如,你可以使用 [Grafana 云代理将跟踪信息导入 Tempo][17]。该代理利用 Prometheus 服务发现机制,轮询 Kubernetes API 以获取元数据信息并将这些标签添加到应用程序发出的跨度span数据中。由于这些元数据也在 Loki 中建立了索引,所以通过把元数据转换为对应的 Loki 标签选择器,可以很容易地从跟踪跳转到查看给定服务的日志。
下面是一个元数据一致性的示例,利用它可以从 Tempo 跟踪中的给定跨度跳转查看对应的日志。
![][18]
### 云原生
Grafana Tempo 是以容器化应用的形式提供的,你可以在 Kubernetes、Mesos 等任何编排引擎上运行它。各个服务可以根据摄取/查询路径上的工作负载进行水平伸缩。你还可以使用云原生的对象存储,如 Google 云存储、Amazon S3 或者 Azure Blob 存储。更多信息请阅读 Tempo 文档中的[架构部分][19]。
### 试一试 Tempo
如果 Tempo 对你来说也和对我们一样有用,可以[克隆 Tempo 仓库][20]试一试。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/tempo-distributed-tracing
作者:[Annanay Agarwal][a]
选题:[lujun9972][b]
译者:[RiaXu](https://github.com/ShuyRoy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/annanayagarwal
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
[2]: https://grafana.com/oss/tempo/
[3]: http://grafana.com/oss/grafana
[4]: https://prometheus.io/
[5]: https://grafana.com/oss/loki/
[6]: https://peter.bourgon.org/blog/2017/02/21/metrics-tracing-and-logging.html
[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod
[8]: https://opensource.com/sites/default/files/uploads/tempo_gantt.png (Gantt chart from Grafana Tempo)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://opensource.com/article/19/8/how-set-apache-cassandra-cluster
[11]: https://cortexmetrics.io/
[12]: http://github.com/grafana/tempo
[13]: https://grafana.com/docs/tempo/latest/getting-started/example-demo-app/
[14]: https://grafana.com/blog/2020/10/29/grafana-7.3-released-support-for-the-grafana-tempo-tracing-system-new-color-palettes-live-updates-for-dashboard-viewers-and-more/
[15]: https://grafana.com/blog/2020/11/09/trace-discovery-in-grafana-tempo-using-prometheus-exemplars-loki-2.0-queries-and-more/
[16]: https://opensource.com/sites/default/files/uploads/tempo_exemplar.png (Using exemplars in Tempo)
[17]: https://grafana.com/blog/2020/11/17/tracing-with-the-grafana-cloud-agent-and-grafana-tempo/
[18]: https://lh5.googleusercontent.com/vNqk-ygBOLjKJnCbTbf2P5iyU5Wjv2joR7W-oD7myaP73Mx0KArBI2CTrEDVi04GQHXAXecTUXdkMqKRq8icnXFJ7yWUEpaswB1AOU4wfUuADpRV8pttVtXvTpVVv8_OfnDINgfN
[19]: https://grafana.com/docs/tempo/latest/architecture/architecture/
[20]: https://github.com/grafana/tempo


@ -1,62 +0,0 @@
[#]: subject: (4 new open source licenses)
[#]: via: (https://opensource.com/article/21/2/osi-licenses-cal-cern-ohl)
[#]: author: (Pam Chestek https://opensource.com/users/pchestek)
[#]: collector: (lujun9972)
[#]: translator: (wyxplus)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
四个新式开源许可证
======
让我们来看看 OSI 最新批准的 Cryptographic Autonomy License 和 CERN 开源硬件许可协议。
![Law books in a library][1]
作为[开源定义][2]的管理者,[开源倡议][3]OSI审定“开源”许可证已有 20 多年的历史。这些许可证是开源软件生态系统的基础,可确保每个人都可以使用、改进和共享软件。当一个许可证获批时,意味着 OSI 认为该许可证能够促进协作和共享,使每个参与开源生态的人都从中获益。
在过去的 20 年里世界发生了翻天覆地的变化。现如今软件以新的、甚至是当年无法想象的方式被使用。OSI 发现,人们所熟知的开源许可证有时已无法满足如今的要求。因此,许可证管理者们加紧工作,提交了多个新式许可证以应对新的场景。OSI 面临的挑战则是评估这些新概念是否会继续推动共享与协作是否配得上“开源”许可证之名。最终OSI 批准了一些用于特殊领域的新式许可证。
### 四个新式许可证
第一个是[加密自主许可证Cryptographic Autonomy License][4]。该许可证是为分布式密码应用程序设计的。它所要解决的问题是:现有的开源许可证无法保证这类应用的开放性,因为在没有共享数据义务的情况下,只要有一方不开放数据,其他使用方就会效仿,最终可能损害整个应用的开放性。因此,除了是一个强著作权Copyleft许可证之外CAL 还要求使用方向第三方提供独立使用和修改软件所需的权限和资料,而不让第三方遭受数据或功能上的损失。
随着越来越多的场景需要基于密码学结构的点对点共享,更多开发者发现自己需要像 CAL 这样的法律工具,也就不足为奇了。License-Discuss 和 License-Review 这两个 OSI 邮件列表(拟议的新开源许可证都会在其中讨论)的社区成员,对该许可证提出了许多问题。我们希望最终的许可证清晰易懂,并对其他开源从业者有所裨益。
接下来是欧洲核子研究组织CERN提交的 CERN 开源硬件许可证OHL系列以供审议。它包括三个许可证主要用于开源硬件。开源硬件与开源软件相似但有其自身的困难和细微差别。如今硬件和软件之间的界线已变得相当模糊分别适用单独的硬件许可证和软件许可证变得越来越困难。CERN 制定了力求同时确保硬件和软件自由的许可证。
OSI 在成立之初可能从未想过要把开源硬件许可证纳入其开源许可证列表,但是世界早已发生变革。尽管 CERN 许可证的措辞涵盖了硬件术语,但它同时满足 OSI 认可的所有开源软件许可证的条件。
CERN 开源硬件许可证包括一个[宽松许可证][5]、一个[弱互惠许可证][6]和一个[强互惠许可证][7]。最近,这些许可证已被一个国际研究项目采用,该项目正在制造可用于 COVID-19 患者的简单、易于生产的呼吸机。
### 了解更多
CAL 和 CERN OHL 许可证都是针对特殊场景的OSI 不建议把它们用于其他领域。但 OSI 期待观察这些许可证能否按预期发展,帮助在更新的计算领域中培育出健壮的开源生态。
可以从 OSI 获得关于 [许可证批准过程][8] 的更多信息。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/osi-licenses-cal-cern-ohl
作者:[Pam Chestek][a]
选题:[lujun9972][b]
译者:[wyxplus](https://github.com/wyxplus)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pchestek
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_lawdotgov3.png?itok=e4eFKe0l "Law books in a library"
[2]: https://opensource.org/osd
[3]: https://opensource.org/
[4]: https://opensource.org/licenses/CAL-1.0
[5]: https://opensource.org/CERN-OHL-P
[6]: https://opensource.org/CERN-OHL-W
[7]: https://opensource.org/CERN-OHL-S
[8]: https://opensource.org/approval


@ -0,0 +1,108 @@
[#]: subject: (Kooha is a Nascent Screen Recorder for GNOME With Wayland Support)
[#]: via: (https://itsfoss.com/kooha-screen-recorder/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Kooha 是一款支持 Wayland 的新生 GNOME 屏幕录像机
======
Linux 中没有一个[像样的支持 Wayland 显示服务器的屏幕录制软件][1]。
如果你使用 Wayland 的话,[GNOME 内置的屏幕录像机][1]可能是少数(甚至是唯一)支持它的软件。但是那个屏幕录像机没有可视界面,也没有标准屏幕录像软件应有的功能。
值得庆幸的是,有一个新的应用正在开发中,它提供了比 GNOME 屏幕录像机更多一点的功能,并且在 Wayland 上也能正常工作。
### 遇见 Kooha一个新的 GNOME 桌面屏幕录像机
![][2]
[Kooha][3] 是一个处于开发初期阶段的应用,它可以在 GNOME 中使用,并且用 GTK 和 PyGObject 构建。事实上,它利用了与 GNOME 内置屏幕录像机相同的后端。
以下是 Kooha 的功能:
* 录制整个屏幕或选定区域
* 在 Wayland 和 Xorg 显示服务器上均可使用
* 在视频里用麦克风记录音频
* 包含或忽略鼠标指针的选项
* 可以在开始录制前增加 5 秒或 10 秒的延迟
* 支持 WebM 和 MKV 格式的录制
* 允许更改默认保存位置
* 支持一些键盘快捷键
### 我的 Kooha 体验
![][4]
它的开发者 Dave Patrick 联系了我,由于我急需一款好用的屏幕录像机,所以我马上就去试用了。
目前,[Kooha 只能通过 Flatpak 安装][5]。我通过 Flatpak 安装了它,但当我试用时,它什么都没有录制下来。我和 Dave 进行了快速的邮件沟通,他告诉我这是由于 [Ubuntu 20.10 中 GNOME 屏幕录像机的 bug][6]。
你可以想象我对支持 Wayland 的屏幕录像机的绝望,我[将我的 Ubuntu 升级到 21.04 测试版][7]。
在 21.04 中,可以屏幕录像,但仍然无法录制麦克风的音频。
我注意到了另外几件无法按照我的喜好顺利进行的事情。
例如,在录制时,计时器在屏幕上仍然可见,并且包含在录像中。我不会希望在视频教程中出现这种情况。我想你也不会喜欢看到这些吧。
![][8]
另外就是关于多显示器的支持。没有专门选择某一个屏幕的选项。我连接了两个外部显示器,默认情况下,它录制所有三个显示器。可以使用设置捕捉区域,但精确拖动屏幕区域是一项耗时的任务。
它也没有 [Kazam][9] 或其他传统屏幕录像机中有的设置帧率或者编码的选项。
### 在 Linux 上安装 Kooha如果你使用 GNOME
请确保在你的 Linux 发行版上启用 Flatpak 支持。目前它只适用于 GNOME所以请检查你使用的桌面环境。
使用此命令将 Flathub 添加到你的 Flatpak 仓库列表中:
```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
然后用这个命令来安装:
```
flatpak install flathub io.github.seadve.Kooha
```
你可以通过菜单或使用这个命令来运行它:
```
flatpak run io.github.seadve.Kooha
```
### 总结
Kooha 并不完美,但考虑到 Wayland 领域的巨大空白,我希望开发者努力修复这些问题并增加更多的功能。考虑到 [Ubuntu 21.04 将默认切换到 Wayland][10],以及其他一些流行的发行版如 Fedora 和 openSUSE 已经默认使用 Wayland这一点很重要。
--------------------------------------------------------------------------------
via: https://itsfoss.com/kooha-screen-recorder/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-screen-recorder/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha-screen-recorder.png?resize=800%2C450&ssl=1
[3]: https://github.com/SeaDve/Kooha
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha.png?resize=797%2C364&ssl=1
[5]: https://flathub.org/apps/details/io.github.seadve.Kooha
[6]: https://bugs.launchpad.net/ubuntu/+source/gnome-shell/+bug/1901391
[7]: https://itsfoss.com/upgrade-ubuntu-beta/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/kooha-recording.jpg?resize=800%2C636&ssl=1
[9]: https://itsfoss.com/kazam-screen-recorder/
[10]: https://news.itsfoss.com/ubuntu-21-04-wayland/


@ -0,0 +1,107 @@
[#]: subject: (Use gdu for a Faster Disk Usage Checking in Linux Terminal)
[#]: via: (https://itsfoss.com/gdu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
在 Linux 终端中使用 gdu 进行更快的磁盘使用情况检查
======
在 Linux 终端中有两种常用的[检查磁盘使用情况的方法][1]du 命令和 df 命令。[du 命令更多的是用来检查目录的使用空间][2]df 命令则是提供文件系统级别的磁盘使用情况。
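先快速回顾一下这两个命令的典型用法(目录仅为示例,可换成任何你关心的路径):

```shell
# df文件系统级别的视角-h 以人类可读的单位显示各挂载点的总量和已用量
df -h

# du目录级别的视角统计某个目录占用的空间-s 只输出汇总,-h 同样表示人类可读单位
du -sh .
```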
还有更友好的方式,比如[用 GNOME Disks 等图形工具在 Linux 中查看磁盘使用情况][3]。如果局限于终端,你可以使用 [ncdu][5] 这样的 [TUI][4] 工具,以一种伪图形化的方式获取磁盘使用信息。
### Gdu: 在 Linux 终端中检查磁盘使用情况
[Gdu][6] 就是这样一个用 Go 编写的工具(因此是 gdu 中的 “g”。Gdu 开发者的[基准测试][7]表明,它的磁盘使用情况检查速度相当快,特别是在 SSD 上。事实上gdu 主要是针对 SSD 的,尽管它也可以在 HDD 上工作。
如果你在使用 gdu 命令时没有任何选项,它就会显示你当前所在目录的磁盘使用情况。
![][8]
由于它具有终端用户界面TUI你可以使用箭头浏览目录和磁盘。你也可以按文件名或大小对结果进行排序。
你可以用它做到:
* 向上箭头或 k 键将光标向上移动
* 向下箭头或 j 键将光标向下移动
* 回车选择目录/设备
* 左箭头或 h 键转到上级目录
* 使用 d 键删除所选文件或目录
* 使用 n 键按名称排序
* 使用 s 键按大小排序
* 使用 c 键按项目排序
你会注意到一些条目前面有一些特殊符号,它们各有特定的含义:
![][9]
* `!` 表示读取目录时发生错误。
* `.` 表示在读取子目录时发生错误,大小可能不正确。
* `@` 表示文件是一个符号链接或套接字。
* `H` 表示文件已经被计数(硬链接)。
* `e` 表示目录为空。
要查看所有挂载磁盘的磁盘利用率和可用空间,使用选项 `d`
```
gdu -d
```
它在一屏中显示所有的细节:
![][10]
看起来是个方便的工具,对吧?让我们看看如何在你的 Linux 系统上安装它。
### 在 Linux 上安装 gdu
Gdu 是通过 [AUR][11] 为 Arch 和 Manjaro 用户提供的。我想作为一个 Arch 用户,你应该知道如何使用 AUR。
它包含在即将到来的 Ubuntu 21.04 的 universe 仓库中,但你现在很可能还没有用上它。这种情况下,你可以使用 Snap 安装它,不过这需要执行好几条 snap 命令:
```
snap install gdu-disk-usage-analyzer
snap connect gdu-disk-usage-analyzer:mount-observe :mount-observe
snap connect gdu-disk-usage-analyzer:system-backup :system-backup
snap alias gdu-disk-usage-analyzer.gdu gdu
```
你也可以在其发布页面找到源代码:
[Source code download for gdu][12]
我更习惯于使用 du 和 df 命令,但我可以看到一些 Linux 用户可能会喜欢 gdu。你是其中之一吗
--------------------------------------------------------------------------------
via: https://itsfoss.com/gdu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://linuxhandbook.com/df-command/
[2]: https://linuxhandbook.com/find-directory-size-du-command/
[3]: https://itsfoss.com/check-free-disk-space-linux/
[4]: https://itsfoss.com/gui-cli-tui/
[5]: https://dev.yorhel.nl/ncdu
[6]: https://github.com/dundee/gdu
[7]: https://github.com/dundee/gdu#benchmarks
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-disk-utilization.png?resize=800%2C471&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-entry-symbols.png?resize=800%2C302&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/gdu-disk-utilization-for-all-drives.png?resize=800%2C471&ssl=1
[11]: https://itsfoss.com/aur-arch-linux/
[12]: https://github.com/dundee/gdu/releases