Merge pull request #9 from LCTT/master

update from lctt
Qian.Sun 2021-04-20 12:03:25 +08:00 committed by GitHub
commit c0e682fa96
65 changed files with 7413 additions and 2503 deletions


@ -0,0 +1,529 @@
[#]: collector: (lujun9972)
[#]: translator: (tt67wq)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13307-1.html)
[#]: subject: (Use systemd timers instead of cronjobs)
[#]: via: (https://opensource.com/article/20/7/systemd-timers)
[#]: author: (David Both https://opensource.com/users/dboth)
使用 systemd 定时器代替 cron 作业
======
> 定时器提供了比 cron 作业更为细粒度的事件控制。
![](https://img.linux.net.cn/data/attachment/album/202104/18/104406dgszkj3eeibkea55.jpg)
我正在致力于将我的 [cron][2] 作业迁移到 systemd 定时器上。我已经使用定时器多年了,但通常来说,我的学识只足以支撑我当前的工作。但在我研究 [systemd 系列][3] 的过程中,我发现 systemd 定时器有一些非常有意思的能力。
与 cron 作业类似systemd 定时器可以在特定的时间间隔触发事件shell 脚本和程序),例如每天一次、每月的特定某一天(或许只在周一生效),或在上午 8 点到下午 6 点的工作时间内每隔 15 分钟一次。定时器也可以做到 cron 作业无法做到的一些事情。举个例子,定时器可以在某个事件发生后的一段时间触发脚本或程序去执行,这个事件可以是开机、系统启动完成、上个任务完成,甚至是定时器所调用的上个服务单元的完成。
### 操作系统维护的计时器
当在一个新系统上安装 Fedora 或者是任意一个基于 systemd 的发行版时,作为系统维护过程的一部分,它会在 Linux 宿主机的后台中创建多个定时器。这些定时器会触发事件来执行必要的日常维护任务,比如更新系统数据库、清理临时目录、轮换日志文件,以及更多其他事件。
作为示例,我会查看一些我的主要工作站上的定时器,通过执行 `systemctl status *timer` 命令来展示主机上的所有定时器。星号的作用与文件通配相同,所以这个命令会列出所有的 systemd 定时器单元。
```
[root@testvm1 ~]# systemctl status *timer
● mlocate-updatedb.timer - Updates mlocate database every day
Loaded: loaded (/usr/lib/systemd/system/mlocate-updatedb.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
Trigger: Fri 2020-06-05 00:00:00 EDT; 15h left
Triggers: ● mlocate-updatedb.service
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Updates mlocate database every day.
● logrotate.timer - Daily rotation of log files
Loaded: loaded (/usr/lib/systemd/system/logrotate.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
Trigger: Fri 2020-06-05 00:00:00 EDT; 15h left
Triggers: ● logrotate.service
Docs: man:logrotate(8)
man:logrotate.conf(5)
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Daily rotation of log files.
● sysstat-summary.timer - Generate summary of yesterday's process accounting
Loaded: loaded (/usr/lib/systemd/system/sysstat-summary.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
Trigger: Fri 2020-06-05 00:07:00 EDT; 15h left
Triggers: ● sysstat-summary.service
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Generate summary of yesterday's process accounting.
● fstrim.timer - Discard unused blocks once a week
Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
Trigger: Mon 2020-06-08 00:00:00 EDT; 3 days left
Triggers: ● fstrim.service
Docs: man:fstrim
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Discard unused blocks once a week.
● sysstat-collect.timer - Run system activity accounting tool every 10 minutes
Loaded: loaded (/usr/lib/systemd/system/sysstat-collect.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
Trigger: Thu 2020-06-04 08:50:00 EDT; 41s left
Triggers: ● sysstat-collect.service
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Run system activity accounting tool every 10 minutes.
● dnf-makecache.timer - dnf makecache --timer
Loaded: loaded (/usr/lib/systemd/system/dnf-makecache.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
Trigger: Thu 2020-06-04 08:51:00 EDT; 1min 41s left
Triggers: ● dnf-makecache.service
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started dnf makecache timer.
● systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories
Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.timer; static; vendor preset: disabled)
Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
Trigger: Fri 2020-06-05 08:19:00 EDT; 23h left
Triggers: ● systemd-tmpfiles-clean.service
Docs: man:tmpfiles.d(5)
man:systemd-tmpfiles(8)
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Daily Cleanup of Temporary Directories.
```
每个定时器至少有六行相关信息:
* 定时器的第一行有定时器名字和定时器目的的简短介绍
* 第二行展示了定时器的状态,是否已加载,定时器单元文件的完整路径以及预设信息。
* 第三行指明了其活动状态,包括该定时器激活的日期和时间。
* 第四行包括了该定时器下次被触发的日期和时间和距离触发的大概时间。
* 第五行展示了被定时器触发的事件或服务名称。
* 部分不是全部systemd 单元文件有相关文档的指引。我虚拟机上输出中有三个定时器有文档指引。这是一个很好(但非必要)的信息。
* 最后一行是计时器最近触发的服务实例的日志条目。
你也许有一些不一样的定时器,取决于你的主机。
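另外,如果你想要一个更紧凑的概览,也可以试试 `systemctl list-timers` 命令。它会以表格形式列出每个定时器的下次触发时间、距离触发的剩余时间、上次触发时间以及它所激活的服务(这里省略了输出,具体内容取决于你的主机):
```
[root@testvm1 ~]# systemctl list-timers --all
```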
### 创建一个定时器
尽管我们可以解构一个或多个现有的计时器来了解其工作原理,但让我们创建我们自己的 [服务单元][4] 和一个定时器去触发它。为了保持简单,我们将使用一个相当简单的例子。当我们完成这个实验之后,就能更容易理解其他定时器的工作原理以及发现它们正在做什么。
首先,创建一个运行基础命令的简单服务,例如 `free` 命令。举个例子,你可能想定时监控空余内存。在 `/etc/systemd/system` 目录下创建如下的 `myMonitor.service` 单元文件。它不需要是可执行文件:
```
# This service unit is for testing timer units
# By David Both
# Licensed under GPL V2
#
[Unit]
Description=Logs system statistics to the systemd journal
Wants=myMonitor.timer
[Service]
Type=oneshot
ExecStart=/usr/bin/free
[Install]
WantedBy=multi-user.target
```
这大概是你能创建的最简单的服务单元了。现在我们查看一下服务状态,同时测试一下服务单元,确保它能像我们预期的那样工作。
```
[root@testvm1 system]# systemctl status myMonitor.service
● myMonitor.service - Logs system statistics to the systemd journal
Loaded: loaded (/etc/systemd/system/myMonitor.service; disabled; vendor preset: disabled)
Active: inactive (dead)
[root@testvm1 system]# systemctl start myMonitor.service
[root@testvm1 system]#
```
输出在哪里呢默认情况下systemd 服务单元执行程序的标准输出(`STDOUT`)会被发送到系统日志中,它保留了记录供现在或者之后(直到某个时间点)查看。(在本系列的后续文章中,我将介绍系统日志的记录和保留策略)。专门查看你的服务单元的日志,而且只针对今天。`-S` 选项,即 `--since` 的缩写,允许你指定 `journalctl` 工具搜索条目的时间段。这并不代表你不关心过往结果 —— 在这个案例中,不会有过往记录 —— 如果你的机器以及运行了很长时间且堆积了大量的日志,它可以缩短搜索时间。
```
[root@testvm1 system]# journalctl -S today -u myMonitor.service
-- Logs begin at Mon 2020-06-08 07:47:20 EDT, end at Thu 2020-06-11 09:40:47 EDT. --
Jun 11 09:12:09 testvm1.both.org systemd[1]: Starting Logs system statistics to the systemd journal...
Jun 11 09:12:09 testvm1.both.org free[377966]: total used free shared buff/cache available
Jun 11 09:12:09 testvm1.both.org free[377966]: Mem: 12635740 522868 11032860 8016 1080012 11821508
Jun 11 09:12:09 testvm1.both.org free[377966]: Swap: 8388604 0 8388604
Jun 11 09:12:09 testvm1.both.org systemd[1]: myMonitor.service: Succeeded.
[root@testvm1 system]#
```
由服务触发的任务可以是单个程序、一组程序或者是一个脚本语言写的脚本。通过在 `myMonitor.service` 单元文件里的 `[Service]` 块末尾中添加如下行可以为服务添加另一个任务:
```
ExecStart=/usr/bin/lsblk
```
再次启动服务,查看日志检查结果,结果应该看上去像这样。你应该在日志中看到两条命令的结果输出:
```
Jun 11 15:42:18 testvm1.both.org systemd[1]: Starting Logs system statistics to the systemd journal...
Jun 11 15:42:18 testvm1.both.org free[379961]: total used free shared buff/cache available
Jun 11 15:42:18 testvm1.both.org free[379961]: Mem: 12635740 531788 11019540 8024 1084412 11812272
Jun 11 15:42:18 testvm1.both.org free[379961]: Swap: 8388604 0 8388604
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: sda 8:0 0 120G 0 disk
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: ├─sda1 8:1 0 4G 0 part /boot
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: └─sda2 8:2 0 116G 0 part
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: ├─VG01-root 253:0 0 5G 0 lvm /
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: ├─VG01-swap 253:1 0 8G 0 lvm [SWAP]
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: ├─VG01-usr 253:2 0 30G 0 lvm /usr
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: ├─VG01-tmp 253:3 0 10G 0 lvm /tmp
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: ├─VG01-var 253:4 0 20G 0 lvm /var
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: └─VG01-home 253:5 0 10G 0 lvm /home
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: sr0 11:0 1 1024M 0 rom
Jun 11 15:42:18 testvm1.both.org systemd[1]: myMonitor.service: Succeeded.
Jun 11 15:42:18 testvm1.both.org systemd[1]: Finished Logs system statistics to the systemd journal.
```
现在你知道了你的服务可以按预期工作了,在 `/etc/systemd/system` 目录下创建 `myMonitor.timer` 定时器单元文件,添加如下代码:
```
# This timer unit is for testing
# By David Both
# Licensed under GPL V2
#
[Unit]
Description=Logs some system statistics to the systemd journal
Requires=myMonitor.service
[Timer]
Unit=myMonitor.service
OnCalendar=*-*-* *:*:00
[Install]
WantedBy=timers.target
```
`myMonitor.timer` 文件中的 `OnCalendar` 时间格式,`*-*-* *:*:00`,应该会每分钟触发一次定时器去执行 `myMonitor.service` 单元。我会在文章的后面进一步探索 `OnCalendar` 设置。
到目前为止,你都是在服务被定时器触发运行后,再去查看与之有关的日志记录。你也可以跟踪定时器,但跟踪服务可以让你接近实时地看到结果。执行 `journalctl` 时带上 `-f`(跟踪)选项:
```
[root@testvm1 system]# journalctl -S today -f -u myMonitor.service
-- Logs begin at Mon 2020-06-08 07:47:20 EDT. --
```
启动但是不启用该定时器,看看它运行一段时间后发生了什么:
```
[root@testvm1 ~]# systemctl start myMonitor.timer
[root@testvm1 ~]#
```
一条结果立即就显示出来了,下一条大概在一分钟后出来。观察几分钟日志,看看你有没有跟我发现同样的事情:
```
[root@testvm1 system]# journalctl -S today -f -u myMonitor.service
-- Logs begin at Mon 2020-06-08 07:47:20 EDT. --
Jun 13 08:39:18 testvm1.both.org systemd[1]: Starting Logs system statistics to the systemd journal...
Jun 13 08:39:18 testvm1.both.org systemd[1]: myMonitor.service: Succeeded.
Jun 13 08:39:19 testvm1.both.org free[630566]: total used free shared buff/cache available
Jun 13 08:39:19 testvm1.both.org free[630566]: Mem: 12635740 556604 10965516 8036 1113620 11785628
Jun 13 08:39:19 testvm1.both.org free[630566]: Swap: 8388604 0 8388604
Jun 13 08:39:18 testvm1.both.org systemd[1]: Finished Logs system statistics to the systemd journal.
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: sda 8:0 0 120G 0 disk
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: ├─sda1 8:1 0 4G 0 part /boot
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: └─sda2 8:2 0 116G 0 part
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: ├─VG01-root 253:0 0 5G 0 lvm /
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: ├─VG01-swap 253:1 0 8G 0 lvm [SWAP]
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: ├─VG01-usr 253:2 0 30G 0 lvm /usr
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: ├─VG01-tmp 253:3 0 10G 0 lvm /tmp
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: ├─VG01-var 253:4 0 20G 0 lvm /var
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: └─VG01-home 253:5 0 10G 0 lvm /home
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: sr0 11:0 1 1024M 0 rom
Jun 13 08:40:46 testvm1.both.org systemd[1]: Starting Logs system statistics to the systemd journal...
Jun 13 08:40:46 testvm1.both.org free[630572]: total used free shared buff/cache available
Jun 13 08:40:46 testvm1.both.org free[630572]: Mem: 12635740 555228 10966836 8036 1113676 11786996
Jun 13 08:40:46 testvm1.both.org free[630572]: Swap: 8388604 0 8388604
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: sda 8:0 0 120G 0 disk
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: ├─sda1 8:1 0 4G 0 part /boot
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: └─sda2 8:2 0 116G 0 part
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: ├─VG01-root 253:0 0 5G 0 lvm /
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: ├─VG01-swap 253:1 0 8G 0 lvm [SWAP]
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: ├─VG01-usr 253:2 0 30G 0 lvm /usr
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: ├─VG01-tmp 253:3 0 10G 0 lvm /tmp
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: ├─VG01-var 253:4 0 20G 0 lvm /var
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: └─VG01-home 253:5 0 10G 0 lvm /home
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: sr0 11:0 1 1024M 0 rom
Jun 13 08:40:46 testvm1.both.org systemd[1]: myMonitor.service: Succeeded.
Jun 13 08:40:46 testvm1.both.org systemd[1]: Finished Logs system statistics to the systemd journal.
Jun 13 08:41:46 testvm1.both.org systemd[1]: Starting Logs system statistics to the systemd journal...
Jun 13 08:41:46 testvm1.both.org free[630580]: total used free shared buff/cache available
Jun 13 08:41:46 testvm1.both.org free[630580]: Mem: 12635740 553488 10968564 8036 1113688 11788744
Jun 13 08:41:46 testvm1.both.org free[630580]: Swap: 8388604 0 8388604
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: sda 8:0 0 120G 0 disk
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: ├─sda1 8:1 0 4G 0 part /boot
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: └─sda2 8:2 0 116G 0 part
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: ├─VG01-root 253:0 0 5G 0 lvm /
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: ├─VG01-swap 253:1 0 8G 0 lvm [SWAP]
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: ├─VG01-usr 253:2 0 30G 0 lvm /usr
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: ├─VG01-tmp 253:3 0 10G 0 lvm /tmp
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: ├─VG01-var 253:4 0 20G 0 lvm /var
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: └─VG01-home 253:5 0 10G 0 lvm /home
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: sr0 11:0 1 1024M 0 rom
Jun 13 08:41:47 testvm1.both.org systemd[1]: myMonitor.service: Succeeded.
Jun 13 08:41:47 testvm1.both.org systemd[1]: Finished Logs system statistics to the systemd journal.
```
别忘了检查一下定时器和服务的状态,如下所示。
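最简单的检查方式就是分别查看这两个单元的状态(输出从略,实际内容视你的系统而定):
```
[root@testvm1 ~]# systemctl status myMonitor.timer
[root@testvm1 ~]# systemctl status myMonitor.service
```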
你在日志里大概至少会注意到两件事。第一,你不需要特地做什么,就能让 `myMonitor.service` 单元中 `ExecStart` 所启动的程序产生的 `STDOUT` 存储到日志里。这是使用 systemd 运行服务自带的功能之一。不过,这也意味着你需要小心对待服务单元里执行的脚本,以及它们会产生多少 `STDOUT`。
第二,定时器并不是精确地在每分钟的第 0 秒触发的,甚至每次触发的间隔也不是刚好一分钟。这是特意的设计,但如果这种行为挑战了你作为系统管理员的敏感神经,必要时也可以改变它。
这样设计的初衷是为了防止多个服务在完全相同的时刻被触发。举个例子,你可以使用 `Weekly`、`Daily` 等时间简写格式。这些简写都被定义为在某一天的 00:00:00 执行。当多个定时器都这样定义时,它们很有可能会同时执行。
systemd 定时器被故意设计成在规定时间附近随机波动的时间点触发,以避免同一时间触发。它们在一个时间窗口内半随机触发,时间窗口开始于预设的触发时间,结束于预设时间后一分钟。根据 `systemd.timer` 的手册页,这个触发时间相对于其他已经定义的定时器单元保持在稳定的位置。你可以在日志条目中看到,定时器在启动后立即触发,然后在每分钟后的 46 或 47 秒触发。
大部分情况下,这种概率性抖动的定时器是没有问题的。在调度类似备份这样的任务时,只要它们在下班时间运行,就没有什么影响。系统管理员可以选择确定的开始时间来确保不和其他任务冲突,例如 01:05:00 这样典型的 cron 作业时间,但是有很大范围的时间值可以满足这一点。开始时间上分钟级别的随机性往往是无关紧要的。
然而,对某些任务来说,精确的触发时间是个硬性要求。对于这类任务,你可以向定时器单元文件的 `[Timer]` 部分添加如下声明,来指定更高的触发时间精度(可以精确到微秒以内):
```
AccuracySec=1us
```
时间跨度可用于指定所需的精度,以及定义重复事件或一次性事件的时间跨度。它能识别以下单位:
* `usec`、`us`、`µs`
* `msec`、`ms`
* `seconds`、`second`、`sec`、`s`
* `minutes`、`minute`、`min`、`m`
* `hours`、`hour`、`hr`、`h`
* `days`、`day`、`d`
* `weeks`、`week`、`w`
* `months`、`month`、`M`(定义为 30.44 天)
* `years`、`year`、`y`(定义为 365.25 天)
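如果你想验证某个时间跨度会被 systemd 解析成什么,较新版本的 systemd大约 239 版本及以上)提供了 `systemd-analyze timespan` 工具。下面是一个简单的示例(输出从略),它会把简写形式转换成规范形式和对应的微秒数:
```
$ systemd-analyze timespan "2h 30min"
$ systemd-analyze timespan 1M
```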
所有 `/usr/lib/systemd/system` 中的定时器都指定了一个更宽松的时间精度,因为精准时间没那么重要。看看这些系统创建的定时器的时间格式:
```
[root@testvm1 system]# grep Accur /usr/lib/systemd/system/*timer
/usr/lib/systemd/system/fstrim.timer:AccuracySec=1h
/usr/lib/systemd/system/logrotate.timer:AccuracySec=1h
/usr/lib/systemd/system/logwatch.timer:AccuracySec=12h
/usr/lib/systemd/system/mlocate-updatedb.timer:AccuracySec=24h
/usr/lib/systemd/system/raid-check.timer:AccuracySec=24h
/usr/lib/systemd/system/unbound-anchor.timer:AccuracySec=24h
[root@testvm1 system]#
```
看下 `/usr/lib/systemd/system` 目录下部分定时器单元文件的完整内容,看看它们是如何构建的。
在本实验中不必让这个定时器在启动时激活,但下面这个命令可以设置开机自启:
```
[root@testvm1 system]# systemctl enable myMonitor.timer
```
你创建的单元文件不需要是可执行的。你同样不需要启用服务,因为它是被定时器触发的。如果你需要的话,你仍然可以在命令行里手动触发该服务单元。尝试一下,然后观察日志。
关于定时器精度、事件时间规格和触发事件的详细信息,请参见 systemd.timer 和 systemd.time 的手册页。
### 定时器类型
systemd 定时器还有一些在 cron 中找不到的功能cron 只在确定的、重复的、具体的日期和时间触发。systemd 定时器可以被配置成根据其他 systemd 单元状态发生改变时触发。举个例子,定时器可以配置成在系统开机、启动后,或是某个确定的服务单元激活之后的一段时间被触发。这些被称为单调计时器。“单调”指的是一个持续增长的计数器或序列。这些定时器不是持久的,因为它们在每次启动后都会重置。
表格 1 列出了一些单调定时器以及每个定时器的简短定义,同时有 `OnCalendar` 定时器,这些不是单调的,它们被用于指定未来有可能重复的某个确定时间。这个信息来自于 `systemd.timer` 的手册页,有一些不重要的修改。
定时器 | 单调性 | 定义
---|---|---
`OnActiveSec=` | X | 定义了一个与定时器被激活的那一刻相关的定时器。
`OnBootSec=` | X | 定义了一个与机器启动时间相关的计时器。
`OnStartupSec=` | X | 定义了一个与服务管理器首次启动相关的计时器。对于系统定时器来说,这个定时器与 `OnBootSec=` 类似,因为系统服务管理器在机器启动后很短的时间后就会启动。当以在每个用户服务管理器中运行的单元进行配置时,它尤其有用,因为用户的服务管理器通常在首次登录后启动,而不是机器启动后。
`OnUnitActiveSec=` | X | 定义了一个与将要激活的定时器上次激活时间相关的定时器。
`OnUnitInactiveSec=` | X | 定义了一个与将要激活的定时器上次停用时间相关的定时器。
`OnCalendar=` | | 定义了一个有日期事件表达式语法的实时(即时钟)定时器。查看 `systemd.time(7)` 的手册页获取更多与日历事件表达式相关的语法信息。除此以外,它的语义和 `OnActiveSec=` 类似。
_Table 1: systemd 定时器定义_
单调定时器可以使用与前面提到的 `AccuracySec` 声明相同的时间跨度简写名,不过 systemd 会将这些名字统一转换成秒。举个例子,如果你想规定某个定时器在系统启动后五天触发一次事件,它可能看起来像 `OnBootSec=5d`。如果机器启动于 `2020-06-15 09:45:27`,这个定时器会在 `2020-06-20 09:45:27` 或在这之后的一分钟内触发。
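作为示意,下面是本文 `myMonitor.timer` 的一个使用单调定时器的假想变体:它在开机 15 分钟后首次触发,之后在每次服务激活一小时后再次触发(仅为草稿,数值可按需调整):
```
# 假想示例:单调定时器版本的 myMonitor.timer
[Unit]
Description=Run myMonitor 15 minutes after boot, then hourly
Requires=myMonitor.service
[Timer]
Unit=myMonitor.service
OnBootSec=15min
OnUnitActiveSec=1h
[Install]
WantedBy=timers.target
```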
### 日历事件格式
日历事件格式是让定时器在所需的重复时间触发的关键。我们先来看一些与 `OnCalendar` 设置一起使用的格式。
与 crontab 中的格式相比systemd 及其定时器使用的时间和日历格式风格不同。它比 crontab 更为灵活,而且可以像 `at` 命令那样允许使用模糊的日期和时间。同时它的格式也足够眼熟,让你易于理解。
systemd 定时器使用 `OnCalendar=` 的基础格式是 `DOW YYYY-MM-DD HH:MM:SS`。DOW星期几是选填的其他字段可以用一个星号`*`)来匹配此位置的任意值。所有的日历时间格式会被转换成标准格式。如果时间没有指定,它会被设置为 `00:00:00`。如果日期没有指定但是时间指定了,那么下次匹配的时间可能是今天或者明天,取决于当前的时间。月份和星期可以使用名称或数字。每个单元都可以使用逗号分隔的列表。单元范围可以在开始值和结束值之间用 `..` 指定。
指定日期有一些有趣的选项,波浪号(`~`)可以指定月份的最后一天或者最后一天之前的某几天。`/` 可以用来指定星期几作为修饰符。
这里有几个在 `OnCalendar` 表达式中使用的典型时间格式例子。
日期事件格式 | 描述
---|---
`DOW YYYY-MM-DD HH:MM:SS` |
`*-*-* 00:15:30` | 每年每月每天的 0 点 15 分 30 秒
`Weekly` | 每个周一的 00:00:00
`Mon *-*-* 00:00:00` | 同上
`Mon` | 同上
`Wed 2020-*-*` | 2020 年每个周三的 00:00:00
`Mon..Fri 2021-*-*` | 2021 年的每个工作日(周一到周五)的 00:00:00
`2022-6,7,8-1,15 01:15:00` | 2022 年 6、7、8 月的 1 到 15 号的 01:15:00
`Mon *-05~03` | 每年五月份中既是周一、又是月末倒数第三天的那一天
`Mon..Fri *-08~04` | 任何年份 8 月末的倒数第四天,同时也须是工作日
`*-05~03/2` | 五月末的倒数第三天,然后 2 天后再来一次。每年重复一次。注意这个表达式使用了波浪号(`~`)。
`*-05-03/2` | 五月的第三天,然后每两天重复一次直到 5 月底。注意这个表达式使用了破折号(`-`)。
_Table 2: `OnCalendar` 事件时间格式例子_
### 测试日历格式
systemd 提供了一个绝佳的工具,用于检测和测试定时器中日历时间事件的格式。`systemd-analyze calendar` 工具可以解析一个时间事件格式,提供标准格式和其他有趣的信息,例如下次“经过”(即匹配)的日期和时间,以及距离下次触发的大概时间。
首先,看一个只有日期、没有时间的未来日期(注意 `Next elapse` 和 `UTC` 的时间会根据你当地的时区而改变):
```
[student@studentvm1 ~]$ systemd-analyze calendar 2030-06-17
  Original form: 2030-06-17                
Normalized form: 2030-06-17 00:00:00        
    Next elapse: Mon 2030-06-17 00:00:00 EDT
       (in UTC): Mon 2030-06-17 04:00:00 UTC
       From now: 10 years 0 months left    
[root@testvm1 system]#
```
现在添加一个时间,在这个例子中,日期和时间是当作无关的部分分开解析的:
```
[root@testvm1 system]# systemd-analyze calendar 2030-06-17 15:21:16
  Original form: 2030-06-17                
Normalized form: 2030-06-17 00:00:00        
    Next elapse: Mon 2030-06-17 00:00:00 EDT
       (in UTC): Mon 2030-06-17 04:00:00 UTC
       From now: 10 years 0 months left    
  Original form: 15:21:16                  
Normalized form: *-*-* 15:21:16            
    Next elapse: Mon 2020-06-15 15:21:16 EDT
       (in UTC): Mon 2020-06-15 19:21:16 UTC
       From now: 3h 55min left              
[root@testvm1 system]#
```
为了把日期和时间当作一个单元来分析,可以把它们包在引号里。你在定时器单元里 `OnCalendar=` 时间格式中使用的时候记得把引号去掉,否则会报错:
```
[root@testvm1 system]# systemd-analyze calendar "2030-06-17 15:21:16"
Normalized form: 2030-06-17 15:21:16        
    Next elapse: Mon 2030-06-17 15:21:16 EDT
       (in UTC): Mon 2030-06-17 19:21:16 UTC
       From now: 10 years 0 months left    
[root@testvm1 system]#
```
现在我们测试一下 Table 2 里的例子。我尤其喜欢最后一个:
```
[root@testvm1 system]# systemd-analyze calendar "2022-6,7,8-1,15 01:15:00"
  Original form: 2022-6,7,8-1,15 01:15:00
Normalized form: 2022-06,07,08-01,15 01:15:00
    Next elapse: Wed 2022-06-01 01:15:00 EDT
       (in UTC): Wed 2022-06-01 05:15:00 UTC
       From now: 1 years 11 months left
[root@testvm1 system]#
```
让我们看一个例子,在这个例子中,我们列出了一个时间表达式未来五次经过的时间。
```
[root@testvm1 ~]# systemd-analyze calendar --iterations=5 "Mon *-05~3"
  Original form: Mon *-05~3                
Normalized form: Mon *-05~03 00:00:00      
    Next elapse: Mon 2023-05-29 00:00:00 EDT
       (in UTC): Mon 2023-05-29 04:00:00 UTC
       From now: 2 years 11 months left    
       Iter. #2: Mon 2028-05-29 00:00:00 EDT
       (in UTC): Mon 2028-05-29 04:00:00 UTC
       From now: 7 years 11 months left    
       Iter. #3: Mon 2034-05-29 00:00:00 EDT
       (in UTC): Mon 2034-05-29 04:00:00 UTC
       From now: 13 years 11 months left    
       Iter. #4: Mon 2045-05-29 00:00:00 EDT
       (in UTC): Mon 2045-05-29 04:00:00 UTC
       From now: 24 years 11 months left    
       Iter. #5: Mon 2051-05-29 00:00:00 EDT
       (in UTC): Mon 2051-05-29 04:00:00 UTC
       From now: 30 years 11 months left    
[root@testvm1 ~]#
```
这些应该为你提供了足够的信息去开始测试你的 `OnCalendar` 时间格式。`systemd-analyze` 工具可用于其他有趣的分析,我会在这个系列的下一篇文章来探索这些。
### 总结
systemd 定时器可以用于执行和 cron 工具相同的任务,但是通过按照日历和单调时间格式去触发事件的方法提供了更多的灵活性。
虽然你为此次实验创建的服务单元通常是由定时器调用的,但你也可以随时使用 `systemctl start myMonitor.service` 命令去触发它。可以在一个服务中编写多个维护任务的脚本;它们可以是 Bash 脚本或者其他 Linux 程序。你可以通过触发定时器来运行服务,从而执行所有的脚本;也可以按照需要单独执行某个脚本。
我会在下篇文章中更加深入的探索 systemd 时间格式的用处。
我还没有看到任何迹象表明 cron 和 at 将被废弃。我希望这种情况不会发生,因为至少 `at` 在执行一次性调度任务的时候要比 systemd 定时器容易得多。
### 参考资料
网上有大量的关于 systemd 的参考资料,但是大部分都有点简略、晦涩,甚至有误导性。除了本文中提到的资料,下列的网页提供了更多可靠且详细的 systemd 入门信息。
* Fedora 项目有一篇切实好用的 [systemd 入门][5],它囊括了几乎所有你需要知道的关于如何使用 systemd 配置、管理和维护 Fedora 计算机的信息。
* Fedora 项目也有一个不错的 [备忘录][6],将过去的 SystemV 命令和对应的 systemd 命令做了交叉对照。
* 关于 systemd 的技术细节和创建这个项目的原因,请查看 [Freedesktop.org][7] 上的 [systemd 描述][8]。
* [Linux.com][9] 的“更多 systemd 的乐趣”栏目提供了更多高级的 systemd [信息和技巧][10]。
此外,还有一系列深度的技术文章,是由 systemd 的设计者和主要实现者 Lennart Poettering 为 Linux 系统管理员撰写的。这些文章写于 2010 年 4 月至 2011 年 9 月间,但它们现在和当时一样具有现实意义。关于 systemd 及其生态的许多其他好文章都是基于这些文章:
* [Rethinking PID 1][11]
* [systemd for AdministratorsPart I][12]
* [systemd for AdministratorsPart II][13]
* [systemd for AdministratorsPart III][14]
* [systemd for AdministratorsPart IV][15]
* [systemd for AdministratorsPart V][16]
* [systemd for AdministratorsPart VI][17]
* [systemd for AdministratorsPart VII][18]
* [systemd for AdministratorsPart VIII][19]
* [systemd for AdministratorsPart IX][20]
* [systemd for AdministratorsPart X][21]
* [systemd for AdministratorsPart XI][22]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/systemd-timers
作者:[David Both][a]
选题:[lujun9972][b]
译者:[tt67wq](https://github.com/tt67wq)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://opensource.com/article/17/11/how-use-cron-linux
[3]: https://opensource.com/users/dboth
[4]: https://opensource.com/article/20/5/manage-startup-systemd
[5]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html
[6]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet
[7]: http://Freedesktop.org
[8]: http://www.freedesktop.org/wiki/Software/systemd
[9]: http://Linux.com
[10]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/
[11]: http://0pointer.de/blog/projects/systemd.html
[12]: http://0pointer.de/blog/projects/systemd-for-admins-1.html
[13]: http://0pointer.de/blog/projects/systemd-for-admins-2.html
[14]: http://0pointer.de/blog/projects/systemd-for-admins-3.html
[15]: http://0pointer.de/blog/projects/systemd-for-admins-4.html
[16]: http://0pointer.de/blog/projects/three-levels-of-off.html
[17]: http://0pointer.de/blog/projects/changing-roots
[18]: http://0pointer.de/blog/projects/blame-game.html
[19]: http://0pointer.de/blog/projects/the-new-configuration-files.html
[20]: http://0pointer.de/blog/projects/on-etc-sysinit.html
[21]: http://0pointer.de/blog/projects/instances.html
[22]: http://0pointer.de/blog/projects/inetd.html


@ -0,0 +1,201 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13311-1.html)
[#]: subject: (Getting started with Stratis encryption)
[#]: via: (https://fedoramagazine.org/getting-started-with-stratis-encryption/)
[#]: author: (briansmith https://fedoramagazine.org/author/briansmith/)
Stratis 加密入门
======
![](https://img.linux.net.cn/data/attachment/album/202104/19/094919orzaxwl5axiqqfiu.jpg)
Stratis 在其 [官方网站][2] 上被描述为“_易于使用的 Linux 本地存储管理_”。请看这个 [短视频][3],快速演示基础知识。该视频是在 Red Hat Enterprise Linux 8 系统上录制的。视频中显示的概念也适用于 Fedora 中的 Stratis。
Stratis 2.1 版本引入了对加密的支持。继续阅读以了解如何在 Stratis 中开始加密。
### 先决条件
加密需要 Stratis 2.1 或更高版本。这篇文章中的例子使用的是 Fedora 33 的预发布版本。Stratis 2.1 将用在 Fedora 33 的最终版本中。
你还需要至少一个可用的块设备来创建一个加密池。下面的例子是在 KVM 虚拟机上完成的,使用一个 5GB 的虚拟磁盘驱动器(`/dev/vdb`)。
### 在内核密钥环中创建一个密钥
Linux 内核<ruby>密钥环<rt>keyring</rt></ruby>用于存储加密密钥。关于内核密钥环的更多信息,请参考 `keyrings` 手册页(`man keyrings`)。  
使用 `stratis key set` 命令在内核密钥环中设置密钥。你必须指定从哪里读取密钥:要从标准输入中读取密钥,使用 `--capture-key` 选项;要从文件中读取密钥,使用 `--keyfile-path <file>` 选项。最后一个参数是一个密钥描述,它将在稍后你创建加密的 Stratis 池时使用。
例如,要创建一个描述为 `pool1key` 的密钥,并从标准输入中读取密钥,可以输入:
```
# stratis key set --capture-key pool1key
Enter desired key data followed by the return key:
```
该命令提示我们输入密钥数据/密码,然后密钥就创建在内核密钥环中了。
要验证密钥是否已被创建,运行 `stratis key list`
```
# stratis key list
Key Description
pool1key
```
这将验证是否创建了 `pool1key`。请注意,这些密钥不是持久的。如果主机重启,在访问加密的 Stratis 池之前,需要再次提供密钥(此过程将在后面介绍)。
如果你有多个加密池,它们可以有一个单独的密钥,也可以共享同一个密钥。
也可以使用以下 `keyctl` 命令查看密钥:
```
# keyctl get_persistent @s
318044983
# keyctl show
Session Keyring
701701270 --alswrv 0 0 keyring: _ses
649111286 --alswrv 0 65534 \_ keyring: _uid.0
318044983 ---lswrv 0 65534 \_ keyring: _persistent.0
1051260141 --alswrv 0 0 \_ user: stratis-1-key-pool1key
```
### 创建加密的 Stratis 池
现在已经为 Stratis 创建了一个密钥,下一步是创建加密的 Stratis 池。对池的加密只能在创建池时进行,目前不可能对现有的池进行加密。
使用 `stratis pool create` 命令创建一个池。添加 `--key-desc` 选项和你在上一步提供的密钥描述(`pool1key`),这会告诉 Stratis 该池应该使用提供的密钥进行加密。下面的例子是在 `/dev/vdb` 上创建 Stratis 池,并将其命名为 `pool1`。确保在你的系统中指定一个空的/可用的设备。
```
# stratis pool create --key-desc pool1key pool1 /dev/vdb
```
你可以使用 `stratis pool list` 命令验证该池是否已经创建:
```
# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 37.63 MiB / 4.95 GiB ~Ca, Cr
```
在上面显示的示例输出中,`~Ca` 表示禁用了缓存(`~` 否定了该属性)。`Cr` 表示启用了加密。请注意,缓存和加密是相互排斥的。这两个功能不能同时启用。
接下来,创建一个文件系统。下面的例子演示了创建一个名为 `filesystem1` 的文件系统,将其挂载在 `/filesystem1` 挂载点上,并在新文件系统中创建一个测试文件:
```
# stratis filesystem create pool1 filesystem1
# mkdir /filesystem1
# mount /stratis/pool1/filesystem1 /filesystem1
# cd /filesystem1
# echo "this is a test file" > testfile
```
### 重启后访问加密池
当重新启动时,你会发现 Stratis 不再显示你的加密池或它的块设备:
```
# stratis pool list
Name Total Physical Properties
```
```
# stratis blockdev list
Pool Name Device Node Physical Size Tier
```
要访问加密池,首先要用之前使用的相同的密钥描述和密钥数据/口令重新创建密钥:
```
# stratis key set --capture-key pool1key
Enter desired key data followed by the return key:
```
接下来,运行 `stratis pool unlock` 命令,并验证现在可以看到池和它的块设备:
```
# stratis pool unlock
# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 583.65 MiB / 4.41 GiB ~Ca, Cr
# stratis blockdev list
Pool Name Device Node Physical Size Tier
pool1 /dev/dm-2 4.98 GiB Data
```
接下来,挂载文件系统并验证是否可以访问之前创建的测试文件:
```
# mount /stratis/pool1/filesystem1 /filesystem1/
# cat /filesystem1/testfile
this is a test file
```
### 使用 systemd 单元文件在启动时自动解锁 Stratis 池
可以在启动时自动解锁 Stratis 池,无需手动干预。但是,必须有一个包含密钥的文件。在某些环境下,将密钥存储在文件中可能会有安全问题。
下面所示的 systemd 单元文件提供了一个简单的方法,用于在启动时解锁 Stratis 池并挂载文件系统。欢迎提供更好的/替代方法的反馈,你可以在文章末尾的评论区提出建议。
首先用下面的命令创建你的密钥文件。确保用之前输入的相同的密钥数据/密码来代替 `passphrase`。
```
# echo -n passphrase > /root/pool1key
```
确保该文件只能由 root 读取:
```
# chmod 400 /root/pool1key
# chown root:root /root/pool1key
```
在 `/etc/systemd/system/stratis-filesystem1.service` 创建包含以下内容的 systemd 单元文件:
```
[Unit]
Description = stratis mount pool1 filesystem1 file system
After = stratisd.service
[Service]
ExecStartPre=sleep 2
ExecStartPre=stratis key set --keyfile-path /root/pool1key pool1key
ExecStartPre=stratis pool unlock
ExecStartPre=sleep 3
ExecStart=mount /stratis/pool1/filesystem1 /filesystem1
RemainAfterExit=yes
[Install]
WantedBy = multi-user.target
```
接下来,启用服务,使其在启动时运行:
```
# systemctl enable stratis-filesystem1.service
```
现在重新启动并验证 Stratis 池是否已自动解锁,其文件系统是否已挂载。
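重启后,你可以用文章前面用过的几条命令快速验证(输出从略):
```
# systemctl status stratis-filesystem1.service
# stratis pool list
# cat /filesystem1/testfile
```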
### 结语
在今天的环境中,加密是很多人和组织的必修课。本篇文章演示了如何在 Stratis 2.1 中启用加密功能。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/getting-started-with-stratis-encryption/
作者:[briansmith][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/briansmith/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/11/stratis-encryption-2-816x345.jpg
[2]: https://stratis-storage.github.io/
[3]: https://www.youtube.com/watch?v=CJu3kmY-f5o


@ -0,0 +1,134 @@
[#]: collector: (lujun9972)
[#]: translator: (tt67wq)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13292-1.html)
[#]: subject: (Program a simple game with Elixir)
[#]: via: (https://opensource.com/article/20/12/elixir)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
使用 Elixir 语言编写一个小游戏
======
> 通过编写“猜数字”游戏来学习 Elixir 编程语言,并将它与一个你熟知的语言做对比。
![](https://img.linux.net.cn/data/attachment/album/202104/12/223351t68886wmza1m9jnt.jpg)
学习一门新编程语言时,最好的方法是关注主流语言的一些共有特征:
* 变量
* 表达式
* 语句
这些概念是大多数编程语言的基础。因为这些相似性,只要你通晓了一门编程语言,你可以通过对比差异来熟知另一门编程语言。
另外一个学习新编程语言的好方法是从一个简单的标准程序写起。它可以让你把精力集中在语言上,而非程序的逻辑本身。在这个系列的文章中,我们使用“猜数字”程序来实现这一点,在这个程序中,计算机会选择一个介于 1 到 100 之间的数字,并要求你来猜测它。程序会循环执行,直到你正确猜出该数字为止。
“猜数字”这个程序使用了编程语言的以下概念:
* 变量
* 输入
* 输出
* 条件判断
* 循环
这是一个学习新编程语言的绝佳实践。
### 猜数字的 Elixir 实现
[Elixir][2] 是一门被设计用于构建稳定可维护应用的动态类型的函数式编程语言。它与 [Erlang][3] 运行于同一虚拟机之上,吸纳了 Erlang 的众多长处的同时拥有更加简单的语法。
你可以编写一个 Elixir 版本的“猜数字”游戏来体验这门语言。
这是我的实现方法:
```
defmodule Guess do
def guess() do
random = Enum.random(1..100)
IO.puts "Guess a number between 1 and 100"
Guess.guess_loop(random)
end
def guess_loop(num) do
data = IO.read(:stdio, :line)
{guess, _rest} = Integer.parse(data)
cond do
guess < num ->
IO.puts "Too low!"
guess_loop(num)
guess > num ->
IO.puts "Too high!"
guess_loop(num)
true ->
IO.puts "That's right!"
end
end
end
Guess.guess()
```
Elixir 通过在变量名称后面跟一个 `=` 号来给变量赋值。举个例子,表达式 `random = 0` 给 `random` 变量分配数值 0。
代码以定义一个模块开始。在 Elixir 语言中,只有模块可以包含命名函数。
紧随其后的这行代码定义了入口函数 `guess()`,这个函数:
* 调用 `Enum.random()` 函数来获取一个随机整数
* 打印游戏提示
* 调用循环执行的函数
剩余的游戏逻辑实现在 `guess_loop()` 函数中。
`guess_loop()` 函数利用 [尾递归][4] 来实现循环。Elixir 中有好几种实现循环的方法,尾递归是比较常用的一种方式。`guess_loop()` 函数做的最后一件事就是调用自身。
`guess_loop()` 函数的第一行读取用户输入。下一行调用 `parse()` 函数将输入转换成一个整数。
`cond` 表达式是 Elixir 版本的多重分支表达式。与其他语言中的 `if/elif` 或者 `if/elsif` 表达式不同Elixir 并不对首个分支或最后一个分支区别对待。
这个 `cond` 表达式有三路分支:猜测的结果可以比随机数大、小或者相等。前两个分支先输出不等关系的方向,然后递归调用 `guess_loop()`,回到循环的开始。最后一个分支输出 `That's right!`,然后这个函数就结束了。
### 输出例子
现在你已经编写了你的 Elixir 代码你可以运行它来玩“猜数字”的游戏。每次你执行这个程序Elixir 会选择一个不同的随机数,你可以一直猜下去直到你找到正确的答案:
```
$ elixir guess.exs
Guess a number between 1 and 100
50
Too high
30
Too high
20
Too high
10
Too low
15
Too high
13
Too low
14
That's right!
```
“猜数字”游戏是一个学习一门新编程语言的绝佳入门程序,因为它用了非常直接的方法实践了常用的几个编程概念。通过用不同语言实现这个简单的小游戏,你可以实践各个语言的核心概念并且比较它们的细节。
你是否有你最喜爱的编程语言?你将怎样用它来编写“猜数字”这个游戏?关注这个系列的文章来看看其他你可能感兴趣的语言实现。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/elixir
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[tt67wq](https://github.com/tt67wq)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice_tabletop_board_gaming_game.jpg?itok=y93eW7HN (A die with rainbow color background)
[2]: https://elixir-lang.org/
[3]: https://www.erlang.org/
[4]: https://en.wikipedia.org/wiki/Tail_call


@ -0,0 +1,181 @@
[#]: subject: (3 reasons I use the Git cherry-pick command)
[#]: via: (https://opensource.com/article/21/3/git-cherry-pick)
[#]: author: (Manaswini Das https://opensource.com/users/manaswinidas)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13305-1.html)
我使用 Git cherry-pick 命令的 3 个理由
======
> “遴选”可以解决 Git 仓库中的很多问题。以下是用 `git cherry-pick` 修复错误的三种方法。
![](https://img.linux.net.cn/data/attachment/album/202104/17/174429qw1im6if6mf6zi9i.jpg)
在版本控制系统中摸索前进是一件很棘手的事情。对于一个新手来说,这可能是非常难以应付的,但熟悉版本控制系统(如 Git的术语和基础知识是开始为开源贡献的第一步。
熟悉 Git 也能帮助你在开源之路上走出困境。Git 功能强大,让你感觉自己在掌控之中 —— 没有哪一种方法会让你无法恢复到工作版本。
这里有一个例子可以帮助你理解“<ruby>遴选<rt>cherry-pick</rt></ruby>”的重要性。假设你已经在一个分支上做了好几个提交,但你意识到这是个错误的分支,你现在该怎么办?要么在正确的分支上重复所有的变更,然后重新提交;要么把这个分支合并到正确的分支上。等一下,前者太过繁琐,而你可能并不想做后者。那么还有没有别的办法呢有的Git 已经为你准备好了,这就是“遴选”的作用。顾名思义,你可以用它从一个分支中手工遴选一个提交,然后转移到另一个分支。
使用遴选的原因有很多。以下是其中的三个原因。
### 避免重复性工作
如果你可以直接将相同的提交复制到另一个分支,就没有必要在不同的分支中重做相同的变更。请注意,遴选出来的提交会在另一个分支中创建带有新哈希的新提交,所以如果你看到不同的提交哈希,请不要感到困惑。
如果你想知道什么是提交的哈希,以及它是如何生成的,这里有一个说明可以帮助你。提交哈希是用 [SHA-1][2] 算法生成的字符串。SHA-1 算法接收一个输入,然后输出一个唯一的 40 个字符的哈希值。如果你使用的是 [POSIX][3] 系统,请尝试在你的终端上运行这个命令:
```
$ echo -n "commit" | openssl sha1
```
这将输出一个唯一的 40 个字符的哈希值 `4015b57a143aec5156fd1444a017a32137a3fd0f`。这个哈希代表了字符串 `commit`
Git 在提交时生成的 SHA-1 哈希值不仅仅代表一个字符串。它代表的是:
```
sha1(
    meta data
        commit message
        committer
        commit date
        author
        authoring date
    Hash of the entire tree object
)
```
这就解释了为什么你对代码所做的任何细微改动都会得到一个独特的提交哈希值。哪怕是一个微小的改动都会被发现。这是因为 Git 具有完整性。
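如果你想亲眼看看这些被哈希覆盖的内容,可以用 `git cat-file` 查看任意提交对象的原始数据。在任何 Git 仓库中都可以运行下面这条命令,它会打印出 HEAD 提交的 tree 哈希、父提交、作者、提交者和提交信息:
```
$ git cat-file -p HEAD
```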
### 撤销/恢复丢失的更改
当你想恢复到工作版本时,遴选就很方便。当多个开发人员在同一个代码库上工作时,很可能会丢失更改,最新的版本会被转移到一个陈旧的或非工作版本上。这时,遴选提交到工作版本就可以成为救星。
#### 它是如何工作的?
假设有两个分支:`feature1` 和 `feature2`,你想把 `feature1` 中的提交应用到 `feature2`
`feature1` 分支上,运行 `git log` 命令,复制你想遴选的提交哈希值。你可以看到一系列类似于下面代码示例的提交。`commit` 后面的字母数字代码就是你需要复制的提交哈希。为了方便起见,您可以选择复制前六个字符(本例中为 `966cf3`)。
```
commit 966cf3d08b09a2da3f2f58c0818baa37184c9778 (HEAD -> master)
Author: manaswinidas <me@example.com>
Date: Mon Mar 8 09:20:21 2021 +1300
add instructions
```
然后切换到 `feature2` 分支,在刚刚从日志中得到的哈希值上运行 `git cherry-pick`
```
$ git checkout feature2
$ git cherry-pick 966cf3
```
如果该分支不存在,使用 `git checkout -b feature2` 来创建它。
这里有一个问题。你可能会遇到下面这种情况:
```
$ git cherry-pick 966cf3
On branch feature2
You are currently cherry-picking commit 966cf3d.
nothing to commit, working tree clean
The previous cherry-pick is now empty, possibly due to conflict resolution.
If you wish to commit it anyway, use:
   git commit --allow-empty
Otherwise, please use 'git reset'
```
不要惊慌。只要按照建议运行 `git commit --allow-empty`
```
$ git commit --allow-empty
[feature2 afb6fcb] add instructions
Date: Mon Mar 8 09:20:21 2021 +1300
```
这将打开你的默认编辑器,允许你编辑提交信息。如果你没有什么要补充的,可以保存现有的信息。
就这样,你完成了你的第一次遴选。如上所述,如果你在分支 `feature2` 上运行 `git log`,你会看到一个不同的提交哈希。下面是一个例子:
```
commit afb6fcb87083c8f41089cad58deb97a5380cb2c2 (HEAD -> feature2)
Author: manaswinidas <me@example.com>
Date:   Mon Mar 8 09:20:21 2021 +1300
   add instructions
```
不要对不同的提交哈希感到困惑。这只是区分 `feature1``feature2` 的提交。
### 遴选多个提交
但如果你想遴选多个提交的内容呢?你可以使用:
```
git cherry-pick <commit-hash1> <commit-hash2>... <commit-hashn>
```
请注意,你不必使用整个提交的哈希值,你可以使用前五到六个字符。
同样,这也是很繁琐的。如果你想遴选的提交是一系列的连续提交呢?这种方法太费劲了。别担心,有一个更简单的方法。
假设你有两个分支:
* `feature1` 包括你想复制的提交(从更早的 `commitA``commitB`)。
* `feature2` 是你想把提交从 `feature1` 转移到的分支。
然后:
1. 输入 `git checkout <feature1>`
2. 获取 `commitA``commitB` 的哈希值。
3. 输入 `git checkout <feature2>`。
4. 输入 `git cherry-pick <commitA>^..<commitB>` (请注意,这包括 `commitA``commitB`)。
5. 如果遇到合并冲突,[像往常一样解决][5],然后输入 `git cherry-pick --continue` 恢复遴选过程。
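下面是这个流程的一个最小示例,其中 `a1b2c3` 和 `d4e5f6` 是假设的提交哈希,实际操作时请换成你自己仓库中 `git log` 给出的值:
```
$ git checkout feature1
$ git log --oneline                  # 记下 commitA 和 commitB 的哈希
$ git checkout feature2
$ git cherry-pick a1b2c3^..d4e5f6    # “^”使范围包含 commitA假设为 a1b2c3本身
```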
### 重要的遴选选项
以下是 [Git 文档][6] 中的一些有用的选项,你可以在 `cherry-pick` 命令中使用。
* `-e`、`--edit`:用这个选项,`git cherry-pick` 可以让你在提交前编辑提交信息。
* `-s`、`--signoff`:在提交信息的结尾添加 `Signed-off by` 行。更多信息请参见 `git-commit(1)` 中的 signoff 选项。
* `-S[<keyid>]`、`--gpg-sign[=<keyid>]`:对提交进行 GPG 签名。`keyid` 参数是可选的,默认为提交者身份;如果指定了,则必须紧跟在选项后面,不加空格。
* `--ff`:如果当前 HEAD 与遴选的提交的父级提交相同,则会对该提交进行快进操作。
下面是除了 `--continue` 外的一些其他的后继操作子命令:
* `--quit`:你可以忘记当前正在进行的操作。这可以用来清除遴选或撤销失败后的后继操作状态。
* `--abort`:取消操作并返回到操作序列前状态。
下面是一些关于遴选的例子:
* `git cherry-pick master`:应用 `master` 分支顶端的提交所引入的变更,并创建一个包含该变更的新提交。
* `git cherry-pick master~4 master~2`:应用 `master` 指向的第五个和第三个最新提交所带来的变化,并根据这些变化创建两个新的提交。
感到不知所措?你不需要记住所有的命令。你可以随时在你的终端输入 `git cherry-pick --help` 查看更多选项或帮助。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/git-cherry-pick
作者:[Manaswini Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/manaswinidas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/pictures/cherry-picking-recipe-baking-cooking.jpg?itok=XVwse6hw (Measuring and baking a cherry pie recipe)
[2]: https://en.wikipedia.org/wiki/SHA-1
[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[4]: mailto:me@example.com
[5]: https://opensource.com/article/20/4/git-merge-conflict
[6]: https://git-scm.com/docs/git-cherry-pick

View File

@ -3,24 +3,24 @@
[#]: author: (Ramakrishna Pattnaik https://opensource.com/users/rkpattnaik780)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13293-1.html)
git stash 命令实用指南
======
> 学习如何使用 `git stash` 命令,以及何时应该使用它。
![女人在笔记本上坐在窗口][1]
![](https://img.linux.net.cn/data/attachment/album/202104/12/232830chuyr6lkzevrfuzr.jpg)
版本控制是软件开发人员日常生活中不可分割的一部分。很难想象有哪个团队在开发软件时不使用版本控制工具。同样也很难想象有哪个开发者没有使用过或没有听说过Git。在 2018 年 Stackoverflow 开发者调查中74298 名参与者中 87.2% 的人 [使用 Git][2] 进行版本控制。
版本控制是软件开发人员日常生活中不可分割的一部分。很难想象有哪个团队在开发软件时不使用版本控制工具。同样也很难想象有哪个开发者没有使用过或没有听说过Git。在 2018 年 Stackoverflow 开发者调查中74298 名参与者中 87.2% 的人 [使用 Git][2] 进行版本控制。
Linus Torvalds 在 2005 年创建了 Git 用于开发 Linux 内核。本文将介绍 `git stash` 命令,并探讨一些有用的暂存修改的选项。本文假定你对 [Git 概念][3] 有基本的了解,并对工作树、暂存区和相关命令有良好的理解。
Linus Torvalds 在 2005 年创建了 Git 用于开发 Linux 内核。本文将介绍 `git stash` 命令,并探讨一些有用的暂存变更的选项。本文假定你对 [Git 概念][3] 有基本的了解,并对工作树、暂存区和相关命令有良好的理解。
### 为什么 git stash 很重要?
首先要明白为什么在 Git 中暂存变更很重要。假设 Git 没有暂存变更的命令。当你正在一个有两个分支A 和 B的仓库上工作时这两个分支已经分叉了一段时间并且有不同的头。当你正在处理 A 分支的一些文件时,你的团队要求你修复 B 分支的一个错误。你迅速将你的修改保存到 A 分支,并尝试用 `git checkout B` 来签出 B 分支。Git 立即中止了这个操作,并抛出错误:“你对以下文件的本地修改会被该签出覆盖……请在切换分支之前提交你的修改或将它们暂存起来。”
首先要明白为什么在 Git 中暂存变更很重要。假设 Git 没有暂存变更的命令。当你正在一个有两个分支A 和 B的仓库上工作时这两个分支已经分叉了一段时间并且有不同的头。当你正在处理 A 分支的一些文件时,你的团队要求你修复 B 分支的一个错误。你迅速将你的修改保存到 A 分支(但没有提交),并尝试用 `git checkout B` 来签出 B 分支。Git 立即中止了这个操作,并抛出错误:“你对以下文件的本地修改会被该签出覆盖……请在切换分支之前提交你的修改或将它们暂存起来。”
在这种情况下,有几种方法可以启用分支切换:
@ -31,7 +31,7 @@ Linus Torvalds 在 2005 年创建了 Git 用于开发 Linux 内核。本文将
`git stash` 将未提交的改动保存在本地,让你可以进行修改、切换分支以及其他 Git 操作。然后,当你需要的时候,你可以重新应用这些存储的改动。暂存是本地范围的,不会被 `git push` 推送到远程。
### 如何使用git stash
### 如何使用 git stash
下面是使用 `git stash` 时要遵循的顺序:
@ -54,7 +54,7 @@ $ git stash
Saved working directory and index state WIP on master; d7435644 Feat: configure graphql endpoint
```
默认情况下,`git stash` 存储(或“暂存”)未提交的更改(已暂存和未暂存的文件),并忽略未跟踪和忽略的文件。通常情况下,你不需要暂存未跟踪和忽略的文件,但有时它们可能会干扰你在代码库中要做的其他事情。
默认情况下,`git stash` 存储(或称之为“暂存”)未提交的更改(已暂存和未暂存的文件),并忽略未跟踪和忽略的文件。通常情况下,你不需要暂存未跟踪和忽略的文件,但有时它们可能会干扰你在代码库中要做的其他事情。
你可以使用附加选项让 `git stash` 来处理未跟踪和忽略的文件:
@ -212,7 +212,7 @@ via: https://opensource.com/article/21/4/git-stash
作者:[Ramakrishna Pattnaik][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,135 @@
[#]: subject: (7 Git tips for managing your home directory)
[#]: via: (https://opensource.com/article/21/4/git-home)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13313-1.html)
7 个管理家目录的 Git 技巧
======
> 这是我怎样设置 Git 来管理我的家目录的方法。
![](https://img.linux.net.cn/data/attachment/album/202104/20/095224mtq14szo7opfofq7.jpg)
我有好几台电脑。一台笔记本电脑用于工作,一台工作站放在家里,一台树莓派(或四台),一台 [Pocket CHIP][2],一台 [运行各种不同的 Linux 的 Chromebook][3],等等。我曾经在每台计算机上或多或少地按照相同的步骤设置我的用户环境,也经常告诉自己让每台计算机都略有不同。例如,我在工作中比在家里更经常使用 [Bash 别名][4],并且我在家里使用的辅助脚本可能对工作没有用。
这些年来,我对各种设备的期望开始相融,我会忘记我在家用计算机上建立的功能没有移植到我的工作计算机上,诸如此类。我需要一种标准化我的自定义工具包的方法。使我感到意外的答案是 Git。
Git 是版本跟踪软件。它以既可以用在非常大的开源项目也可以用在极小的开源项目而闻名,甚至最大的专有软件公司也在用它。但是它是为源代码设计的,而不是用在一个装满音乐和视频文件、游戏、照片等的家目录。我听说过有人使用 Git 管理其家目录,但我认为这是程序员们进行的一项附带实验,而不是像我这样的现实生活中的用户。
用 Git 管理我的家目录是一个不断发展的过程。随着时间的推移我一直在学习和适应。如果你决定使用 Git 管理家目录,则可能需要记住以下几点。
### 1、文本和二进制位置
![家目录][5]
当由 Git 管理时,除了配置文件之外,你的家目录对于所有内容而言都是“无人之地”。这意味着当你打开主目录时,除了可预见的目录的列表之外,你什么都看不到。不应有任何杂乱无章的照片或 LibreOffice 文档,也不应有 “我就在这里放一分钟” 的临时文件。
原因很简单:使用 Git 管理家目录时,家目录中所有 _未_ 提交的内容都会变成噪音。每次执行 `git status` 时,你都必须翻过所有 Git 未跟踪的文件,因此将这些文件保存在子目录中(并把子目录添加到 `.gitignore` 文件中)至关重要。
许多 Linux 发行版提供了一组默认目录:
* `Documents`
* `Downloads`
* `Music`
* `Photos`
* `Templates`
* `Videos`
如果需要,你可以创建更多。例如,我把创作的音乐(`Music`)和购买来聆听的音乐(`Albums`)区分开来。同样,我的电影(`Cinema`)目录包含了其他人的电影,而视频(`Videos`)目录包含我需要编辑的视频文件。换句话说,我的默认目录结构比大多数 Linux 发行版提供的默认设置更详细,但是我认为这样做有好处。如果某类文件没有合适的存放目录,你就更有可能把它随手丢在家目录里,因为没有更好的存放位置,因此请提前考虑并规划好适合你工作的目录。你以后总是可以添加更多,但最好一开始就打好基础。
### 2、设置最优的 `.gitignore`
清理家目录后,你可以像往常一样将其作为 Git 存储库实例化:
```
$ cd
$ git init .
```
你的 Git 仓库中还没有任何内容,你的家目录中的所有内容均未被跟踪。你的第一项工作是筛选未跟踪文件的列表,并确定要保持未跟踪状态的文件。要查看未跟踪的文件:
```
$ git status
  .AndroidStudio3.2/
  .FBReader/
  .ICEauthority
  .Xauthority
  .Xdefaults
  .android/
  .arduino15/
  .ash_history
[...]
```
根据你使用家目录的时间长短,此列表可能很长。简单的是你在上一步中确定的目录。通过将它们添加到名为 `.gitignore` 的隐藏文件中,你告诉 Git 停止将它们列为未跟踪文件,并且永远不对其进行跟踪:
```
$ \ls -lg | grep ^d | awk '{print $8}' >> ~/.gitignore
```
完成后,浏览 `git status` 所示的其余未跟踪文件,并确定是否有其他文件需要排除。这个过程帮助我发现了几个陈旧的配置文件和目录,这些文件和目录最终被我全部丢弃了,而且还发现了一些特定于一台计算机的文件和目录。我在这里非常严格,因为许多配置文件在自动生成时会表现得更好。例如,我从不提交我的 KDE 配置文件,因为许多文件包含了诸如最新文档之类的信息以及其他机器上不存在的其他元素。
我会跟踪我的个性化配置文件、脚本和实用程序、配置文件和 Bash 配置,以及速查表和我经常引用的其他文本片段。如果有软件主要负责维护的文件,则将其忽略。当对一个文件不确定时,我将其忽略。你以后总是可以取消忽略它(通过从 `.gitignore` 文件中删除它)。
### 3、了解你的数据
我使用的是 KDE因此我使用开源扫描程序 [Filelight][7] 来了解我的数据概况。Filelight 为你提供了一个图表,可让你查看每个目录的大小。你可以浏览每个目录以查看占用了空间的内容,然后回溯调查其他地方。这是一个令人着迷的系统视图,它使你可以以全新的方式看待你的文件。
![Filelight][8]
使用 Filelight 或类似的实用程序查找不需要提交的意外数据缓存。例如KDE 文件索引器Baloo生成了大量特定于其主机的数据我绝对不希望将其传输到另一台计算机。
### 4、不要忽略你的 `.gitignore` 文件
在某些项目中,我告诉 Git 忽略我的 `.gitignore` 文件,因为有时我要忽略的内容特定于我的工作目录,并且我不认为同一项目中的其他开发人员需要我告诉他们 `.gitignore` 文件应该是什么样子。因为我的家目录仅供我使用,所以我 _不_ 会忽略我的家目录的 `.gitignore` 文件。我将其与其他重要文件一起提交,因此它已在我的所有系统中被继承。当然,从家目录的角度来看,我所有的系统都是相同的:它们具有一组相同的默认文件夹和许多相同的隐藏配置文件。
### 5、不要担心二进制文件
我对我的系统进行了数周的严格测试,确信将二进制文件提交到 Git 绝对不是明智之举。我试过 GPG 加密的密码文件、试过 LibreOffice 文档、JPEG、PNG 等等。我甚至有一个脚本,可以在将 LibreOffice 文件添加到 Git 之前先解压缩,提取其中的 XML以便仅提交 XML然后重新构建 LibreOffice 文件,以便可以在 LibreOffice 中继续工作。我的理论是,与提交 ZIP 文件LibreOffice 文档实际上就是一个 ZIP 文件)相比,提交 XML 会让 Git 存储库更小一些。
令我惊讶的是,我发现偶尔提交一些二进制文件并没有大幅增加我的 Git 存储库的大小。我使用 Git 已经很长时间了,我知道如果我要提交几千兆的二进制数据,我的存储库将会受到影响,但是偶尔提交几个二进制文件也不是不惜一切代价要避免的紧急情况。
有了这种信心,我将字体 OTF 和 TTF 文件添加到我的标准主存储库,以及 GDM 的 `.face` 文件以及其他偶尔小型二进制 Blob 文件。不要想太多,不要浪费时间去避免它。只需提交即可。
### 6、使用私有存储库
即使托管方提供了私人帐户,也不要将你的家目录提交到公共 Git 存储库。如果你像我一样,拥有 SSH 密钥、GPG 密钥链和 GPG 加密的文件,这些文件就不应该出现在任何别人的服务器上,而应该只放在你自己的服务器上。
我在树莓派上 [运行本地 Git 服务器][9](这比你想象的要容易),因此我可以在家里时随时更新任何一台计算机。我是一名远程工作者,所以通常情况下就足够了,但是我也可以在旅行时通过 [虚拟私人网络][10] 访问我的计算机。
### 7、要记得推送
Git 的特点是,只有当你告诉它要推送改动时,它才会把改动推送到你的服务器上。如果你是 Git 的老用户,则此过程可能对你很自然。对于可能习惯于 Nextcloud 或 Syncthing 自动同步的新用户,这可能需要一些时间来适应。
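一个简单的办法是养成习惯,每次修改配置后运行几条固定的命令(这里假设你的远程名为 `origin`、分支名为 `master`,请按你的实际设置调整):
```
$ cd
$ git status          # 查看哪些被跟踪的文件发生了变化
$ git add -u          # 只暂存已跟踪文件的修改
$ git commit -m 'Update dotfiles'
$ git push origin master
```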
### Git 家目录
使用 Git 管理我的常用文件,不仅使我在不同设备上的生活更加便利。我知道我拥有所有配置和实用程序脚本的完整历史记录,这会鼓励我尝试新的想法,因为如果结果变得 _很糟糕_则很容易回滚我的更改。Git 曾将我从在 `.bashrc` 文件中一个欠考虑的 `umask` 设置中解救出来、从深夜对包管理脚本的拙劣添加中解救出来、从当时看似很酷的 [rxvt][11] 配色方案的修改中解救出来,也许还有其他一些错误。在家目录中尝试 Git 吧,因为这些提交会让家目录融合在一起。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/git-home
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
[2]: https://opensource.com/article/17/2/pocketchip-or-pi
[3]: https://opensource.com/article/21/2/chromebook-linux
[4]: https://opensource.com/article/17/5/introduction-alias-command-line-tool
[5]: https://opensource.com/sites/default/files/uploads/home-git.jpg (home directory)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://utils.kde.org/projects/filelight
[8]: https://opensource.com/sites/default/files/uploads/filelight.jpg (Filelight)
[9]: https://opensource.com/life/16/8/how-construct-your-own-git-server-part-6
[10]: https://www.redhat.com/sysadmin/run-your-own-vpn-libreswan
[11]: https://opensource.com/article/19/10/why-use-rxvt-terminal

View File

@ -4,25 +4,25 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13301-1.html)
使用 Git 工作树对你的代码进行自由实验
======
> 获得自由尝试的权利,同时在你的实验出错时可以安全地拥有一个新的、链接的克隆存储库。
![带烧杯的科学实验室][1]
![](https://img.linux.net.cn/data/attachment/album/202104/16/085512x3auafu5uaymk52u.jpg)
Git 的设计部分是为了进行实验。如果你知道你的工作会被安全地跟踪,并且在出现严重错误时有安全状态存在,你就不会害怕尝试新的想法。不过,创新的部分代价是,你很可能会在过程中弄得一团糟。文件会被重新命名、移动、删除、更改、切割成碎片;新的文件被引入;你不打算跟踪的临时文件会在你的工作目录中占据一席之地等等。
Git 的设计部分是为了进行实验。如果你知道你的工作会被安全地跟踪,并且在出现严重错误时有安全状态存在,你就不会害怕尝试新的想法。不过,创新的部分代价是,你很可能会在这个过程中弄得一团糟。文件会被重新命名、移动、删除、更改、切割成碎片;新的文件被引入;你不打算跟踪的临时文件会在你的工作目录中占据一席之地等等。
简而言之,你的工作空间变成了纸牌屋,在“快好了!”和“哦,不,我做了什么?”之间岌岌可危地平衡着。那么,当你需要把仓库恢复到下午的一个已知状态,以便完成一些真正的工作时,该怎么办?我立刻想到了 `git branch` 和 [git stash][2] 这两个经典命令,但这两个命令都不是用来处理未被跟踪的文件的,而且文件路径的改变和其他重大的转变也会让人困惑,它们只能把工作`stash`)起来以备后用。解决这个需求的答案是 Git 工作树。
简而言之,你的工作空间变成了纸牌屋,在“快好了!”和“哦,不,我做了什么?”之间岌岌可危地平衡着。那么,当你需要把仓库恢复到下午的一个已知状态,以便完成一些真正的工作时,该怎么办?我立刻想到了 `git branch` 和 [git stash][2] 这两个经典命令,但这两个命令都不是用来处理未被跟踪的文件的,而且文件路径的改变和其他重大的转变也会让人困惑,它们只能把工作暂存`stash`)起来以备后用。解决这个需求的答案是 Git 工作树。
### 什么是 Git 工作树
Git 工作树是 Git 仓库的一个链接副本,允许你同时签出多个分支。工作树与主工作副本的路径是分开的,它可以处于不同的状态和不同的分支上。在 Git 中新建工作树的好处是,你可以在不干扰当前工作环境的情况下,做出与当前任务无关的修改,提交修改,然后在以后再合并。
Git <ruby>工作树<rt>worktree</rt></ruby>是 Git 仓库的一个链接副本,允许你同时签出多个分支。工作树与主工作副本的路径是分开的,它可以处于不同的状态和不同的分支上。在 Git 中新建工作树的好处是,你可以在不干扰当前工作环境的情况下,做出与当前任务无关的修改、提交修改,然后在以后合并。
直接从 `git-worktree` 手册中找到了一个典型的例子:当你正在为一个项目做一个令人兴奋的新功能时,你的项目经理告诉你有一个紧急的修复工作。问题是你的工作仓库(你的“工作树”)处于混乱状态,因为你正在开发一个重要的新功能。你不想在当前的冲刺中“偷偷地”进行修复,而且你也不愿意把变更藏(`stash`起来,为修复创建一个新的分支。相反,你决定创建一个新的工作树,这样你就可以在那里进行修复:
直接从 `git-worktree` 手册中找到了一个典型的例子:当你正在为一个项目做一个令人兴奋的新功能时,你的项目经理告诉你有一个紧急的修复工作。问题是你的工作仓库(你的“工作树”)处于混乱状态,因为你正在开发一个重要的新功能。你不想在当前的冲刺中“偷偷地”进行修复,而且你也不愿意把变更暂存起来,为修复创建一个新的分支。相反,你决定创建一个新的工作树,这样你就可以在那里进行修复:
```
$ git branch | tee
@ -75,7 +75,7 @@ $ git worktree list
/home/seth/code/hotfix     09e585d [master]
```
你可以在任何一个工作树中使用这个功能。工作树始终是链接的(除非你手动移动它们,破坏 Git 定位工作树的能力,从而切断链接)。
你可以在任何一个工作树中使用这个功能。工作树始终是连接的(除非你手动移动它们,破坏 Git 定位工作树的能力,从而切断连接)。
### 移动工作树
@ -132,4 +132,4 @@ via: https://opensource.com/article/21/4/git-worktree
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/science_experiment_beaker_lab.png?itok=plKWRhlU (Science lab with beakers)
[2]: https://opensource.com/article/21/4/git-stash
[2]: https://linux.cn/article-13293-1.html

View File

@ -0,0 +1,187 @@
[#]: subject: (What is Git cherry-picking?)
[#]: via: (https://opensource.com/article/21/4/cherry-picking-git)
[#]: author: (Rajeev Bera https://opensource.com/users/acompiler)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13295-1.html)
什么是 Git 遴选cherry-pick
======
> 了解 `git cherry-pick` 命令是什么,为什么用以及如何使用。
![](https://img.linux.net.cn/data/attachment/album/202104/14/131735o63v3ow6y2wc281o.jpg)
当你和一群程序员一起工作时,无论项目大小,处理多个 Git 分支之间的变更都会变得很困难。有时,你不想将整个 Git 分支合并到另一个分支,而是想选择并移动几个特定的提交。这个过程被称为 “<ruby>遴选<rt>cherry-pick</rt></ruby>”。
本文将介绍“遴选”是什么、为何使用以及如何使用。
那么让我们开始吧。
### 什么是遴选?
使用遴选(`cherry-pick`命令Git 可以让你将任何分支中的个别提交合并到你当前的 [Git HEAD][2] 分支中。
当执行 `git merge` 或者 `git rebase` 时,一个分支的所有提交都会被合并。`cherry-pick` 命令允许你选择单个提交进行整合。
### 遴选的好处
下面的情况可能会让你更容易理解遴选功能。
想象一下,你正在为即将到来的每周冲刺实现新功能。当你的代码准备好了,你会把它推送到远程分支,准备进行测试。
然而,客户并不是对所有修改都满意,要求你只呈现某些修改。因为客户还没有批准下次发布的所有修改,所以 `git rebase` 不会有预期的结果。为什么会这样?因为 `git rebase` 或者 `git merge` 会把上一个冲刺的每一个调整都纳入其中。
遴选就是答案!因为它只关注在提交中添加的变更,所以遴选只会带入批准的变更,而不添加其他的提交。
还有其他几个原因可以使用遴选:
* 这对于 bug 修复是必不可少的,因为 bug 是出现在开发分支中对应的提交的。
* 你可以通过使用 `git cherry-pick` 来避免不必要的工作,而不用使用其他选项例如 `git diff` 来应用特定变更。
* 如果因为不同 Git 分支的版本不兼容而无法将整个分支联合起来,那么它是一个很有用的工具。
### 使用 cherry-pick 命令
`cherry-pick` 命令的最简单形式中,你只需使用 [SHA][3] 标识符来表示你想整合到当前 HEAD 分支的提交。
要获得提交的哈希值,可以使用 `git log` 命令:
```
$ git log --oneline
```
当你知道了提交的哈希值后,你就可以使用 `cherry-pick` 命令。
语法是:
```
$ git cherry-pick <commit sha>
```
例如:
```
$ git cherry-pick 65be1e5
```
这将会把指定的修改合并到当前已签出的分支上。
如果你想做进一步的修改,也可以让 Git 将提交的变更内容添加到你的工作副本中。
语法是:
```
$ git cherry-pick <commit sha> --no-commit
```
例如:
```
$ git cherry-pick 65be1e5 --no-commit
```
如果你想同时选择多个提交,请将它们的提交哈希值用空格隔开:
```
$ git cherry-pick hash1 hash3
```
当遴选提交时,你不能使用 `git pull` 命令,因为它能获取一个仓库的提交**并**自动合并到另一个仓库。`cherry-pick` 是一个专门不这么做的工具;另一方面,你可以使用 `git fetch`,它可以获取提交,但不应用它们。毫无疑问,`git pull` 很方便,但它不精确。
### 自己尝试
要尝试这个过程,启动终端并生成一个示例项目:
```
$ mkdir fruit.git
$ cd fruit.git
$ git init .
```
创建一些数据并提交:
```
$ echo "Kiwifruit" > fruit.txt
$ git add fruit.txt
$ git commit -m 'First commit'
```
现在,通过创建一个项目的复刻来代表一个远程开发者:
```
$ mkdir ~/fruit.fork
$ cd !$
$ echo "Strawberry" >> fruit.txt
$ git add fruit.txt
$ git commit -m 'Added a fruit'
```
这是一个有效的提交。现在,创建一个不好的提交,代表你不想合并到你的项目中的东西:
```
$ echo "Rhubarb" >> fruit.txt
$ git add fruit.txt
$ git commit -m 'Added a vegetable that tastes like a fruit'
```
返回你的仓库,从你的假想的开发者那里获取提交的内容:
```
$ cd ~/fruit.git
$ git remote add dev ~/fruit.fork
$ git fetch dev
remote: Counting objects: 6, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 6 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (6/6), done...
```
```
$ git log --oneline dev/master
e858ab2 Added a vegetable that tastes like a fruit
0664292 Added a fruit
b56e0f8 First commit
```
你已经从你想象中的开发者那里获取了提交的内容,但你还没有将它们合并到你的版本库中。你想接受第二个提交,但不想接受第三个提交,所以使用 `cherry-pick`
```
$ git cherry-pick 0664292
```
第二次提交现在在你的仓库里了:
```
$ cat fruit.txt
Kiwifruit
Strawberry
```
将你的更改推送到远程服务器上,这就完成了!
### 避免使用遴选的原因
在开发者社区中,通常并不鼓励使用遴选。主要原因是它会造成重复提交,而你也失去了跟踪你的提交历史的能力。
如果你不按顺序地遴选了大量的提交,这些提交会被记录在你的分支中,这可能会在 Git 分支中导致不理想的结果。
遴选是一个强大的命令,如果没有正确理解可能发生的情况,它可能会导致问题。不过,当你搞砸了,提交到错误的分支时,它可能会救你一命(至少是你当天的工作)。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/cherry-picking-git
作者:[Rajeev Bera][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/acompiler
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/pictures/cherry-picking-recipe-baking-cooking.jpg?itok=XVwse6hw (Measuring and baking a cherry pie recipe)
[2]: https://acompiler.com/git-head/
[3]: https://en.wikipedia.org/wiki/Secure_Hash_Algorithms

View File

@ -0,0 +1,107 @@
[#]: subject: (Why I love using bspwm for my Linux window manager)
[#]: via: (https://opensource.com/article/21/4/bspwm-linux)
[#]: author: (Stephen Adams https://opensource.com/users/stevehnh)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13308-1.html)
为什么我喜欢用 bspwm 来做我的 Linux 窗口管理器
======
> 在 Fedora Linux 上安装、配置并开始使用 bspwm 窗口管理器。
![](https://img.linux.net.cn/data/attachment/album/202104/18/114637hxvqp4hfvbbhihb4.jpg)
有些人喜欢重新布置家具。还有的人喜欢尝试新鞋或定期重新装修他们的卧室。我呢,则是尝试 Linux 桌面。
在对网上看到的一些不可思议的桌面环境流口水之后,我对一个窗口管理器特别好奇:[bspwm][2]。
![bspwm desktop][3]
我喜欢 [i3][5] 窗口管理器已经有一段时间了,我很喜欢它的布局方式和上手的便捷性。但 bspwm 的某些特性吸引了我。有几个原因让我决定尝试一下:
* 它_只是_一个窗口管理器WM。
* 它由几个易于配置的脚本管理。
* 它默认支持窗口之间的间隙。
可能最需要指出的第一个原因是:它只是一个窗口管理器。和 i3 一样,默认情况下没有任何图形化的花哨东西。你当然可以随心所欲地定制它,但 _你_ 需要付出努力来使它看起来像你想要的样子。这也是它吸引我的部分原因。
虽然它可以在许多发行版上使用,但在我这个例子中使用的是 Fedora Linux。
### 安装 bspwm
bspwm 在大多数常见的发行版中都有打包,所以你可以用系统的包管理器安装它。下面这个命令还会安装 [sxhkd][6],这是一个 X 窗口系统的守护程序,它“通过执行命令对输入事件做出反应”;还有 [dmenu][7],这是一个通用的 X 窗口菜单:
```
dnf install bspwm sxhkd dmenu
```
因为 bspwm 只是一个窗口管理器,所以没有任何内置的快捷键或键盘命令。这也是它与 i3 等软件的不同之处。所以,在你第一次启动窗口管理器之前,请先配置一下 `sxhkd`
```
systemctl start sxhkd
systemctl enable sxhkd
```
这样就可以在登录时启用 `sxhkd`,但你还需要一些基本功能的配置:
```
curl https://raw.githubusercontent.com/baskerville/bspwm/master/examples/sxhkdrc --output ~/.config/sxhkd/sxhkdrc
```
在你深入了解之前,不妨先看看这个文件,因为有些脚本调用的命令可能在你的系统中并不存在。一个很好的例子是调用 `urxvt``super + Return` 快捷键。把它改成你喜欢的终端,尤其是当你没有安装 `urxvt` 的时候:
```
#
# wm independent hotkeys
#
   
# terminal emulator
super + Return
        urxvt
   
# program launcher
super + @space
        dmenu_run
```
如果你使用的是 GDM、LightDM 或其他显示管理器DM只要在登录前选择 `bspwm` 即可。
### 配置 bspwm
当你登录后,你会看到屏幕上什么都没有。这不是空虚,而是无限的可能性!你现在可以开始摆弄这些年来你认为理所当然的桌面环境的每一个部分了。从头开始构建并不容易,但一旦你掌握了诀窍,就会非常有收获。
任何窗口管理器最困难的是掌握快捷键。你开始会很慢,但在很短的时间内,你就可以只使用键盘在系统中到处操作,在你的朋友和家人面前看起来像一个终极黑客。
你可以通过编辑 `~/.config/bspwm/bspwmrc`,在启动时添加应用、设置桌面和显示器,并为你的窗口应该如何表现设置规则,随心所欲地定制系统。有一些默认设置的例子可以让你开始使用。键盘快捷键都是由 `sxhkdrc` 文件管理的。
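作为参考,下面是一个假想的、最小化的 `~/.config/bspwm/bspwmrc` 草稿(取值仅为示例,请按自己的喜好调整):
```
#! /bin/sh
# 启动快捷键守护进程
sxhkd &
# 在当前显示器上定义四个桌面
bspc monitor -d I II III IV
# 基本外观:边框宽度与窗口间隙
bspc config border_width 2
bspc config window_gap 12
```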
还有更多的开源项目可以安装,让你的电脑看起来更漂亮,比如用于桌面背景的 [Feh][8]、状态栏的 [Polybar][9]、应用启动器的 [Rofi][10],还有 [Compton][11] 可以给你提供阴影和透明度,可以让你的电脑看起来焕然一新。
玩得愉快!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/bspwm-linux
作者:[Stephen Adams][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/stevehnh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows)
[2]: https://github.com/baskerville/bspwm
[3]: https://opensource.com/sites/default/files/uploads/bspwm-desktop.png (bspwm desktop)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://i3wm.org/
[6]: https://github.com/baskerville/sxhkd
[7]: https://linux.die.net/man/1/dmenu
[8]: https://github.com/derf/feh
[9]: https://github.com/polybar/polybar
[10]: https://github.com/davatorium/rofi
[11]: https://github.com/chjj/compton

View File

@ -0,0 +1,84 @@
[#]: subject: (4 ways open source gives you a competitive edge)
[#]: via: (https://opensource.com/article/21/4/open-source-competitive-advantage)
[#]: author: (Jason Blais https://opensource.com/users/jasonblais)
[#]: collector: (lujun9972)
[#]: translator: (DCOLIVERSUN)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13299-1.html)
开源为你带来竞争优势的 4 种方式
======
> 使用开源技术可以帮助组织获得更好的业务结果。
![](https://img.linux.net.cn/data/attachment/album/202104/15/085345a2aani3axxj7wcis.jpg)
构建技术栈是每个组织的主要决策。选择合适的工具将让团队获得成功,选择错误的解决方案或平台会对生产率和利润率产生毁灭性影响。为了在当今快节奏的世界中脱颖而出,组织必须明智地选择数字解决方案,好的数字解决方案可以提升团队行动力与运营敏捷性。
这就是为什么越来越多的组织都采用开源解决方案的原因,这些组织来自各行各业,规模有大有小。根据 [麦肯锡][2] 最近的报告,高绩效组织的最大区别是采用不同的开源方案。
采用开源技术可以帮助组织提高竞争优势、获得更好业务成果的原因有以下四点。
### 1、可拓展性和灵活性
可以说,技术世界发展很快。例如,在 2014 年之前Kubernetes 并不存在;但在今天,它已无处不在,令人印象深刻。根据 CNCF 的 [2020 云原生调查][3]91% 的团队正在以某种形式使用 Kubernetes。
组织投资开源的一个主要原因是因为开源赋予组织行动敏捷性,组织可以迅速地将新技术集成到技术栈中。这与传统方法不同,在传统方法中,团队需要几个季度甚至几年来审查、实施、采用软件,这导致团队不可能实现火速转变。
开源解决方案完整地提供源代码,团队可以轻松将软件与他们每天使用的工具连接起来。
简而言之,开源让开发团队能够为手头的东西构建完美的工具,而不是被迫改变工作方式来适应不灵活的专有工具。
### 2、安全性和高可信的协作
在数据泄露备受瞩目的时代,组织需要高度安全的工具来保护敏感数据的安全。
专有解决方案中的漏洞不易被发现,被发现时为时已晚。不幸的是,使用这些平台的团队无法看到源代码,本质上是他们将安全性外包给特定供应商,并希望得到最好的结果。
采用开源的另一个主要原因是开源工具使组织能够自己把控安全。例如,开源项目——尤其是拥有大型开源社区的项目——往往会收到更负责任的漏洞披露,因为每个人在使用过程中都可以彻底检查源代码。
由于源代码是免费提供的,因此披露通常伴随着修复缺陷的详细建议解决方案。这些方案使得开发团队能够快速解决问题,不断增强软件。
在远程办公时代,对于分布式团队来说,在知道敏感数据受到保护的情况下进行协作比以往任何时候都更重要。开源解决方案允许组织审核安全性、完全掌控自己数据,因此开源方案可以促进远程环境下高可信协作方式的成长。
### 3、不受供应商限制
根据 [最近的一项研究][4]68% 的 CIO 担心受供应商限制。当你受限于一项技术中,你会被迫接受别人的结论,而不是自己做结论。
当组织更换供应商时,专有解决方案通常会 [给你带走数据带来挑战][5]。另一方面,开源工具提供了组织需要的自由度和灵活性,以避免受供应商限制,开源工具可以让组织把数据带去任意地方。
### 4、顶尖人才和社区
随着越来越多的公司 [接受远程办公][6],人才争夺战变得愈发激烈。
在软件开发领域,获得顶尖人才始于赋予工程师先进工具,让工程师在工作中充分发挥潜力。开发人员 [越来越喜欢开源解决方案][7] 而不是专有解决方案,组织应该强烈考虑用开源替代商业解决方案,以吸引市场上最好的开发人员。
除了雇佣、留住顶尖人才更容易,公司能够通过开源平台利用贡献者社区,得到解决问题的建议,从平台中得到最大收益。此外,社区成员还可以 [直接为开源项目做贡献][8]。
### 开源带来自由
开源软件在企业团队中越来越受到欢迎——[这是有原因的][9]。它帮助团队灵活地构建完美的工作工具,同时使团队可以维护高度安全的环境。同时,开源允许团队掌控未来方向,而不是局限于供应商的路线图。开源还帮助公司接触才华横溢的工程师和开源社区成员。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/open-source-competitive-advantage
作者:[Jason Blais][a]
选题:[lujun9972][b]
译者:[DCOLIVERSUN](https://github.com/DCOLIVERSUN)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jasonblais
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab (Open ethernet cords.)
[2]: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/developer-velocity-how-software-excellence-fuels-business-performance#
[3]: https://www.cncf.io/blog/2020/11/17/cloud-native-survey-2020-containers-in-production-jump-300-from-our-first-survey/
[4]: https://solutionsreview.com/cloud-platforms/flexera-68-percent-of-cios-worry-about-vendor-lock-in-with-public-cloud/
[5]: https://www.computerworld.com/article/3428679/mattermost-makes-case-for-open-source-as-team-messaging-market-booms.html
[6]: https://mattermost.com/blog/tips-for-working-remotely/
[7]: https://opensource.com/article/20/6/open-source-developers-survey
[8]: https://mattermost.com/blog/100-most-popular-mattermost-features-invented-and-contributed-by-our-amazing-open-source-community/
[9]: https://mattermost.com/open-source-advantage/


@ -0,0 +1,119 @@
[#]: subject: (How to Install Steam on Fedora [Beginners Tip])
[#]: via: (https://itsfoss.com/install-steam-fedora/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13302-1.html)
How to Install Steam on Fedora
======
![](https://img.linux.net.cn/data/attachment/album/202104/16/090703cg4t5npnseskhxhv.jpg)
Steam is the best thing that has happened to Linux gamers. Thanks to Steam, you can play hundreds and thousands of games on Linux.
In case you are not aware of it already, Steam is the most popular PC gaming platform. It has been available on Linux since 2013. [Steam's latest Proton project][1] lets you play games created for Windows on Linux, which has enlarged the Linux gaming library many times over.
![][2]
Steam provides a desktop client that you can use to download or purchase games from the Steam store, then install and play them.
We have discussed [installing Steam on Ubuntu][3] in the past. In this beginner's tutorial, I will show you the steps for installing Steam on Fedora Linux.
### Installing Steam on Fedora
To use Steam on Fedora, you have to use the RPMFusion repository. [RPMFusion][4] is a series of third-party repositories containing software that Fedora chooses not to ship with its operating system. They provide both free (open source) and nonfree (closed source) repositories. Since Steam is in the nonfree repository, you will install only that one.
I will cover both the terminal and graphical installation methods.
#### Method 1: Install Steam via the terminal
This is the easiest method because it requires the fewest steps. Just enter the following command to enable the repository:
```
sudo dnf install https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
You will be asked for your password, and then asked to verify that you want to install the repository. Once you agree, the repository installation will finish.
To install Steam, simply enter the following command:
```
sudo dnf install steam
```
![Install Steam via command line][5]
Enter your password and press `Y` to accept. Once it is installed, open Steam and play some games.
#### Method 2: Install Steam via the GUI
You can [enable third-party repositories on Fedora][6] from the Software Center. Open the Software Center and click on the menu.
![][7]
In the "Software Repositories" window, you will see a "Third Party Software Repositories" section at the top. Click the "Install" button, enter your password when prompted, and you are done.
![][8]
With the RPM Fusion repository for Steam installed, update your system's software cache (if needed) and search for Steam in the Software Center.
![Steam in GNOME Software Center][9]
Once you find the Steam page in the GNOME Software Center, click "Install" and enter your password when asked; that is all there is to it.
After Steam is installed, launch the application, enter your Steam account details or register for an account, and enjoy your games.
### Using Steam as a Flatpak
Steam is also available as a Flatpak, and Flatpak is installed by default on Fedora. Before installing Steam with this method, we have to install the Flathub repository.
![Install Flathub][10]
First, open the [Flatpak website][11] in your browser. Now, click the blue button labeled "Flathub repository file". The browser will ask whether you want to open the file in the GNOME Software Center; click OK. Once it opens in the GNOME Software Center, click the install button. You will be prompted for your password.
If you get an error when you try to install the Flathub repository, run this command in the terminal:
```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
Once the Flathub repository is installed, all you need to do is search for Steam in the GNOME Software Center. Find it, install it, and you are ready to play. A command-line alternative is sketched below.
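If you prefer the terminal, the same Flatpak can be installed with a single command. This is a sketch that assumes Steam's usual application ID on Flathub, `com.valvesoftware.Steam`; you can confirm the ID with `flatpak search steam`:
```
# Install the Steam Flatpak from the Flathub repository
flatpak install flathub com.valvesoftware.Steam
```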
![Fedora Repo Select][12]
The Flathub version of Steam also has several add-ons you can install, including a DOS compatibility tool and several [Vulkan][13] and Proton tools.
![][14]
I hope this helps you get Steam running on Fedora. Enjoy your games :)
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-steam-fedora/
Author: [John Paul][a]
Curated by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/steam-play-proton/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/05/Steam-Store.jpg?resize=800%2C382&ssl=1
[3]: https://itsfoss.com/install-steam-ubuntu-linux/
[4]: https://rpmfusion.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/install-steam-fedora.png?resize=800%2C588&ssl=1
[6]: https://itsfoss.com/fedora-third-party-repos/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/software-meni.png?resize=800%2C672&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/fedora-third-party-repo-gui.png?resize=746%2C800&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gnome-store-steam.jpg?resize=800%2C434&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/flatpak-install-button.jpg?resize=800%2C434&ssl=1
[11]: https://www.flatpak.org/setup/Fedora/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/fedora-repo-select.jpg?resize=800%2C434&ssl=1
[13]: https://developer.nvidia.com/vulkan
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/steam-flatpak-addons.jpg?resize=800%2C434&ssl=1


@ -0,0 +1,191 @@
[#]: subject: (6 open source tools and tips to securing a Linux server for beginners)
[#]: via: (https://opensource.com/article/21/4/securing-linux-servers)
[#]: author: (Sahana Sreeram https://opensource.com/users/sahanasreeram01gmailcom)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13298-1.html)
6 open source tools and tips for securing a Linux server
======
> Use open source tools to protect your Linux environment from break-ins.
![](https://img.linux.net.cn/data/attachment/album/202104/15/082334ltqtgg40tu7l80rd.jpg)
So much of our personal and professional data is available online today that it is important for everyone, from professionals to everyday internet users, to learn the basics of security and privacy. As a student, I have gained experience in this area through my school's CyberPatriot program, where I have had the opportunity to interact with industry experts and learn the basic steps for finding cyber vulnerabilities and building system security.
Based on what I have learned so far as a beginner, this article details six simple steps to improve the security of a Linux environment for personal use. Throughout my journey, I have relied on open source tools to accelerate my learning and to familiarize myself with higher-level concepts related to hardening Linux servers.
I tested these steps on Ubuntu 18.04, the version I am most familiar with, but they also apply to other Linux distributions.
### 1. Run updates
Developers are constantly finding ways to make servers more stable, fast, and secure by patching known vulnerabilities. Running updates regularly is a good habit that maximizes security. Run them with:
```
sudo apt-get update && sudo apt-get upgrade
```
### 2. Enable firewall protection
[Enabling a firewall][2] makes it easier to control the inbound and outbound traffic on your server. There are many firewall applications available on Linux, including [firewall-cmd][3] and Uncomplicated Firewall ([UFW][4]). I use UFW, so my examples are specific to it, but the principles apply to whichever firewall you choose.
Install UFW:
```
sudo apt-get install ufw
```
If you want to protect your server further, you can deny inbound and outbound connections. Be warned: this cuts your server off from the world, so once you have blocked all traffic, you must specify which outbound connections are allowed from your system:
```
sudo ufw default deny incoming
sudo ufw default allow outgoing
```
You can also write rules to allow the inbound connections you need for personal use:
```
ufw allow <service>
```
For example, to allow SSH connections:
```
ufw allow ssh
```
Finally, enable your firewall:
```
sudo ufw enable
```
### 3. Strengthen password protection
Enforcing a strong password policy is an important aspect of keeping a server secure and preventing cyberattacks and data breaches. Some best practices for password policies include enforcing a minimum length and specifying a password age. I use the libpam-cracklib package for these tasks.
Install the libpam-cracklib package:
```
sudo apt-get install libpam-cracklib
```
To enforce a minimum password length:
* Open the `/etc/pam.d/common-password` file.
* Change the minimum character length required for all passwords by changing the `minlen=12` line to however many characters you want.
To prevent password reuse:
* In the same file (`/etc/pam.d/common-password`), add the `remember=x` option.
* For example, to prevent users from reusing one of their last five passwords, use `remember=5`.
To enforce password age:
* Find the following lines in the `/etc/login.defs` file and replace the values with your preferred durations in days. For example:
```
PASS_MIN_DAYS 3
PASS_MAX_DAYS 90
PASS_WARN_AGE 14
```
To enforce character requirements:
* The four parameters that enforce character requirements in passwords are `lcredit` (lowercase), `ucredit` (uppercase), `dcredit` (digit), and `ocredit` (other characters).
* In the same file (`/etc/pam.d/common-password`), find the line containing `pam_cracklib.so`.
* Add the following to the end of that line: `lcredit=-a ucredit=-b dcredit=-c ocredit=-d`.
* For example, `lcredit=-1 ucredit=-1 dcredit=-1 ocredit=-1` requires passwords to contain at least one of each type of character. You can change the numbers to match the level of password security you prefer. (A combined example follows this list.)
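Putting these options together, the relevant lines in `/etc/pam.d/common-password` on an Ubuntu system might end up looking like the sketch below. The exact options already present on those lines vary by distribution and version, so treat this as an illustration rather than a drop-in replacement; note that the `remember=` option traditionally sits on the `pam_unix.so` line rather than the `pam_cracklib.so` line:
```
# Require 12+ characters with one of each character class, allowing 3 retries
password requisite pam_cracklib.so retry=3 minlen=12 lcredit=-1 ucredit=-1 dcredit=-1 ocredit=-1
# Hash with SHA-512 and remember the last 5 passwords to prevent reuse
password [success=1 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512 remember=5
```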
### 4. Disable nonessential services that are easy to exploit
Disabling unnecessary services is a best practice; it reduces the number of open ports that could be exploited.
Install the systemd package:
```
sudo apt-get install systemd
```
See which services are running:
```
systemctl list-units
```
[Identify][5] which services might expose your system to potential vulnerabilities. For each such service, you can do the following (a worked example appears after this list):
* Stop the service if it is currently running: `systemctl stop <service>`.
* Prevent the service from starting at boot: `systemctl disable <service>`.
* After running these commands, check the service's status: `systemctl status <service>`.
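For instance, if you decided that the printing service `cups` was unnecessary on your server (an arbitrary choice for illustration; pick services that are actually unneeded on your system), the sequence would look like this:
```
sudo systemctl stop cups      # stop the running service
sudo systemctl disable cups   # keep it from starting at boot
systemctl status cups         # confirm it is inactive and disabled
```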
### 5. Check for listening ports
Open ports can pose a security risk, so it is important to check the listening ports on your server. I use the [netstat][6] command to show all network connections:
```
netstat -tulpn
```
Look at the address column to determine the [port numbers][7]. Once you have found the open ports, check whether they are all necessary. If they are not, [adjust the services you are running][8] or adjust your firewall settings.
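On newer distributions, `netstat` may not be installed by default; the `ss` tool from the iproute2 package accepts the same flags for this purpose:
```
# Show listening TCP and UDP sockets with numeric ports and owning processes
ss -tulpn
```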
### 6. Scan for malware
Antivirus scanning software is useful for keeping viruses out of your system, and using it is a simple way to keep your server free of malware. My preferred tool is the open source [ClamAV][9].
Install ClamAV:
```
sudo apt-get install clamav
```
Update the virus signatures:
```
sudo freshclam
```
Scan all files, printing out the infected ones and ringing a bell when one is found:
```
sudo clamscan -r --bell -i /
```
You can, and should, set scans to run automatically so that you do not have to remember to run them or spend time doing so manually. For simple automation like this, you can use a [systemd timer][10] or your [favorite cron tool][11].
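As a sketch of the cron approach, a root crontab entry (added with `sudo crontab -e`) like the following would run a quiet recursive scan every night at 2am and append any hits to a log; the schedule and log path here are arbitrary choices for illustration:
```
# min hour day month weekday  command
0 2 * * * /usr/bin/clamscan -r -i / >> /var/log/clamscan.log 2>&1
```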
### Keep your server secure
We cannot leave the responsibility for securing servers to a single person or organization. As the threat landscape continues to expand rapidly, each of us should be aware of the importance of server security and adopt some simple, effective security best practices.
These are just a few of the many steps you can take to secure a Linux server. Of course, prevention is only part of the solution. These policies should be combined with rigorous monitoring for denial-of-service attacks, system analysis with [Lynis][12], and frequent backups.
What open source tools do you use to keep your server secure? Tell us about them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/securing-linux-servers
Author: [Sahana Sreeram][a]
Curated by: [lujun9972][b]
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/sahanasreeram01gmailcom
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices)
[2]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[3]: https://opensource.com/article/20/2/firewall-cheat-sheet
[4]: https://wiki.ubuntu.com/UncomplicatedFirewall
[5]: http://www.yorku.ca/infosec/Administrators/UNIX_disable.html
[6]: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/netstat
[7]: https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
[8]: https://opensource.com/article/20/5/systemd-units
[9]: https://www.clamav.net/
[10]: https://opensource.com/article/20/7/systemd-timers
[11]: https://opensource.com/article/21/2/linux-automation
[12]: https://opensource.com/article/20/5/linux-security-lynis


@ -0,0 +1,87 @@
[#]: subject: (Encrypt your files with this open source software)
[#]: via: (https://opensource.com/article/21/4/open-source-encryption)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13304-1.html)
Encrypt your files with the open source tool VeraCrypt
======
> VeraCrypt offers open source, cross-platform file encryption.
![](https://img.linux.net.cn/data/attachment/album/202104/17/110244p1g4tbpnw00tqwq3.jpg)
Many years ago, there was an encryption program called [TrueCrypt][2]. Its source code was available, although nobody ever claimed to have audited or contributed to it, and its author was (and remains to this day) anonymous. Still, it was cross-platform, easy to use, and really, really useful.
TrueCrypt allowed you to create an encrypted file "vault" where you could store any kind of sensitive information: text, audio, video, images, PDFs, and so on. As long as you had the correct passphrase, TrueCrypt could decrypt the vault and provide read and write access on any computer running TrueCrypt. It was a useful technique that essentially gave you a virtual, portable, fully encrypted drive (except that it was a file) where you could safely store your data.
TrueCrypt eventually shut down, but an alternative project called VeraCrypt quickly rose to fill the void. [VeraCrypt][3] is based on TrueCrypt 7.1a and features many improvements over the original (including significant changes to the algorithms used for standard encrypted volumes and boot volumes). With VeraCrypt 1.12 and later, you can use custom iterations for increased encryption security. Better yet, VeraCrypt can load old TrueCrypt volumes, so if you were a TrueCrypt user, it is easy to transfer them over to VeraCrypt.
### Install VeraCrypt
You can install VeraCrypt on all major platforms by downloading the appropriate installer file from the [VeraCrypt download page][4].
Alternatively, you can build it yourself from source. On Linux, that requires wxGTK3, makeself, and the usual development stack (Binutils, GCC, and so on).
Once it is installed, launch VeraCrypt from your application menu.
### Create a VeraCrypt volume
If you are new to VeraCrypt, you must create an encrypted volume first (otherwise, you have nothing to decrypt). In the VeraCrypt window, click the "Create Volume" button on the left.
![Creating a volume with VeraCrypt][5]
In the VeraCrypt Volume Creation Wizard window that appears, choose whether to create an encrypted file container or to encrypt an entire drive or partition. The wizard will create a vault for your data, so follow along as it prompts you.
For this article, I created a file container. A VeraCrypt container is a lot like any other file: it can be kept on a hard drive, an external drive, in cloud storage, or anywhere else you can think to store data. Like other files, it can be moved, copied, and deleted. Unlike most other files, it can _contain_ more files, which is why I think of it as a "vault" and the VeraCrypt developers call it a "container." The developers call a VeraCrypt file a "container" because it can contain other data objects; it has nothing to do with the container technology popularized by LXC, Kubernetes, and other modern IT mechanisms.
#### Choose a filesystem
During the volume creation process, you are asked to choose a filesystem, which determines how the files you put into your vault are stored. Microsoft's FAT format is antiquated and non-journaled, and it limits volume and file sizes, but it is the one format that all platforms can read from and write to. If you intend your VeraCrypt vault to be cross-platform, FAT is your best choice.
Beyond that, NTFS works for Windows and Linux, and the open source EXT family works for Linux.
### Mount a VeraCrypt encrypted volume
Once you have created a VeraCrypt volume, you can load it in the VeraCrypt window. To mount an encrypted vault, click the "Select File" button on the right. Select your encrypted file, select one of the numbered slots in the upper half of the VeraCrypt window, and then click the "Mount" button in the lower-left corner of the VeraCrypt window.
The volume you have mounted appears in VeraCrypt's list of available volumes, and you can access it through your file manager as if it were an external drive. For example, on KDE, I open [Dolphin][7], navigate to `/media/veracrypt1`, and then I can copy files into my vault.
As long as VeraCrypt is on a device, you can access your vault on it at any time. The files stay encrypted until you mount the volume manually in VeraCrypt, and they remain decrypted only until you close the volume again.
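VeraCrypt also ships a command-line mode, which is handy on headless machines. The following is a sketch, assuming a container file named `vault.vc` (a hypothetical name) and an existing mount point at `/mnt/vault`; if the options differ on your build, `veracrypt --text --help` lists the supported ones:
```
# Mount the container in text mode; VeraCrypt prompts for the passphrase
veracrypt --text vault.vc /mnt/vault

# Dismount the volume when you are done
veracrypt --text --dismount /mnt/vault
```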
### Close a VeraCrypt volume
To keep your data safe, it is important to close a VeraCrypt volume when you do not need it open. That keeps it safe from prying eyes and from crimes of opportunity.
![Mounting a VeraCrypt volume][8]
Closing a VeraCrypt container is as easy as opening one: select the volume listed in the VeraCrypt window and click "Dismount". You can no longer access the files in the vault, and neither can anyone else.
### VeraCrypt for easy cross-platform encryption
There are lots of ways to keep your data safe, and VeraCrypt tries to make it convenient, whatever platform you need your data on. If you want to experience simple, open source file encryption, try VeraCrypt.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/open-source-encryption
Author: [Seth Kenlon][a]
Curated by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum (Lock)
[2]: https://en.wikipedia.org/wiki/TrueCrypt
[3]: https://www.veracrypt.fr/en/Home.html
[4]: https://www.veracrypt.fr/en/Downloads.html
[5]: https://opensource.com/sites/default/files/uploads/veracrypt-create.jpg (Creating a volume with VeraCrypt)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://en.wikipedia.org/wiki/Dolphin_%28file_manager%29
[8]: https://opensource.com/sites/default/files/uploads/veracrypt-volume.jpg (Mounting a VeraCrypt volume)


@ -0,0 +1,113 @@
[#]: subject: (Create an encrypted file vault on Linux)
[#]: via: (https://opensource.com/article/21/4/linux-encryption)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13296-1.html)
Create an encrypted file vault on Linux
======
> Use the Linux Unified Key Setup (LUKS) to create an encrypted vault for sensitive files on a physical drive or in cloud storage.
![](https://img.linux.net.cn/data/attachment/album/202104/14/151220l5zkkxiukgzix54k.jpg)
Recently, I demonstrated how to do [full-drive encryption][2] on Linux with the Linux Unified Key Setup ([LUKS][3]) and the `cryptsetup` command. While encrypting a whole drive is useful in many situations, there are reasons you might not want to encrypt an entire drive. For instance, you might need a drive to work across several platforms, some of which may not have [LUKS][3] integration. Furthermore, it is the 21st century, and thanks to the cloud, you might not be using a physical drive for all of your data.
A few years ago, there was a system called [TrueCrypt][4] that allowed users to create encrypted file vaults, which could be decrypted by TrueCrypt to provide read/write access. It was a useful technique that essentially provided a virtual, portable, fully encrypted drive where you could store important data. The TrueCrypt project closed down, but it serves as an interesting model.
Fortunately, LUKS is a flexible system, and you can use it with `cryptsetup` to create an encrypted vault in a standalone file, which you can save on a physical drive or in cloud storage.
Here is how to do it.
### 1. Create an empty file
First, you must create an empty file of a predetermined size. It acts as a kind of vault or safe in which you can store other files. The command to use is `fallocate`, from the `util-linux` package:
```
$ fallocate --length 512M vaultfile.img
```
This example creates a 512MB file, but you can make your file any size you want.
### 2. Create a LUKS volume
Next, create a LUKS volume within the empty file:
```
$ cryptsetup --verify-passphrase \
luksFormat vaultfile.img
```
### 3. Open the LUKS volume
To create a filesystem you can store files on, you must first open the LUKS volume and map it on your computer:
```
$ sudo cryptsetup open \
--type luks vaultfile.img myvault
$ ls /dev/mapper
myvault
```
### 4. Create a filesystem
Create a filesystem in your open vault:
```
$ sudo mkfs.ext4 -L myvault /dev/mapper/myvault
```
If you do not need it for anything right now, you can close it:
```
$ sudo cryptsetup close myvault
```
### 5. Start using your encrypted vault
Now that everything is set up, you can use your encrypted vault whenever you need to store or access private data. To access the vault, it must be mounted as a usable filesystem:
```
$ sudo cryptsetup open \
--type luks vaultfile.img myvault
$ ls /dev/mapper
myvault
$ sudo mkdir /myvault
$ sudo mount /dev/mapper/myvault /myvault
```
This example opens the vault with `cryptsetup` and then mounts it from `/dev/mapper` onto a new directory called `/myvault`. As with any volume on Linux, you can mount the LUKS volume wherever you want, so instead of `/myvault`, you could use `/mnt`, `~/myvault`, or any location you prefer.
While it is mounted, your LUKS volume is decrypted. You can read from it and write to it just as if it were a physical drive.
When you are finished with your encrypted vault, unmount and close it:
```
$ sudo umount /myvault
$ sudo cryptsetup close myvault
```
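Because the open-and-mount and unmount-and-close steps always come in pairs, you might wrap them in a small helper script. This is a minimal sketch, assuming the image lives at `~/vaultfile.img` and mounts at `/myvault` (both arbitrary choices carried over from this article's examples):
```
#!/usr/bin/env bash
# vault.sh: open or close the LUKS file vault from this article
set -e
IMG="$HOME/vaultfile.img"

case "$1" in
  open)
    sudo cryptsetup open --type luks "$IMG" myvault
    sudo mkdir -p /myvault
    sudo mount /dev/mapper/myvault /myvault
    ;;
  close)
    sudo umount /myvault
    sudo cryptsetup close myvault
    ;;
  *)
    echo "usage: $0 open|close" >&2
    exit 1
    ;;
esac
```
With that in place, `./vault.sh open` starts a session and `./vault.sh close` ends it.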
### An encrypted file vault
An image file encrypted with LUKS is as portable as any other file, so you can store your vault on a hard drive, an external drive, or even on the internet. As long as you have LUKS available, you can decrypt, mount, and use it to keep your data safe. For easy encryption and better data security, give it a try.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/linux-encryption
Author: [Seth Kenlon][a]
Curated by: [lujun9972][b]
Translated by: [wxy](https://github.com/wxy)
Proofread by: [wxy](https://github.com/wxy)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_bank_vault_secure_safe.png?itok=YoW93h7C (Secure safe)
[2]: https://opensource.com/article/21/3/encryption-luks
[3]: https://gitlab.com/cryptsetup/cryptsetup/blob/master/README.md
[4]: https://en.wikipedia.org/wiki/TrueCrypt


@ -0,0 +1,108 @@
[#]: subject: (Make your data boss-friendly with this open source tool)
[#]: via: (https://opensource.com/article/21/4/visualize-data-eda)
[#]: author: (Juanjo Ortilles https://opensource.com/users/jortilles)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13310-1.html)
Make your data boss-friendly with this open source tool
======
> Enterprise Data Analytics aims to bring data visualization to everyday business users.
![](https://img.linux.net.cn/data/attachment/album/202104/19/092617elri0ff4r6lr06rr.jpg)
Enterprise Data Analytics ([EDA][2]) is a web application that makes information accessible through a simple, clear interface.
After several years at [Jortilles][3], an open source analytics company in Barcelona, we realized that the modern world collects data compulsively but ordinary people have no easy way to view or interpret it. There are powerful open source tools for this purpose, but they are very complex. We could not find a tool designed to be easy for ordinary people with limited technical skills to use.
We developed EDA because we believe that access to information is a requirement and an obligation of modern organizations, and we want to give everyone access to it.
![EDA interface][4]
### Visualize your data
EDA provides a data model expressed in business terms that people already understand. You can select the information you want and view it the way you want to. Its goal is to be user-friendly while remaining powerful.
EDA visualizes and enriches the information in a database through a metadata model. It can read data from BigQuery, Postgres, [MariaDB, MySQL][6], and several other databases, translating technical database models into familiar business concepts.
It is also designed to speed up the spread of information, because it can leverage data already stored in a database. EDA discovers a database's topology and proposes a business model; if you have designed a good database model, you have a good business model. EDA can also connect to production servers to provide real-time analytics.
This combination of data and data model means that you, and anyone in your organization, can analyze its data. To protect the data, though, you can define data security, down to the row level, to grant the right people access to the right data.
Some of EDA's features include:
* Automatic data model generation
* A consistent data model that prevents inconsistent queries
* An SQL mode for advanced users
* Data visualization:
  * Standard charts (such as bar, pie, line, and treemap)
  * Map integration (such as geoJSON shapefiles, latitude, and longitude)
* Email alerts, defined through key performance indicators (KPIs)
* Private and public information controls that enable private and public dashboards, which you can share with a link
* Data caching and programmatic refreshes
### How to use EDA
The first step in visualizing your data with EDA is to create a data model.
#### Create a data model
First, select "New Datasource" in the left-hand menu.
Next, select the database system where your data is stored (such as Postgres, MariaDB, MySQL, Vertica, SqlServer, Oracle, or Big Query) and provide the connection parameters.
EDA automatically generates the data model for you. It reads the tables and columns and defines names for them, along with the relationships between the tables. You can also enrich your data model by adding virtual views or geoJSON maps.
#### Build a dashboard
Now you are ready to build your first dashboard. On the main page of the EDA interface, you should see a "New dashboard" button. Click it, name your dashboard, and select the data model you created. The new dashboard appears with a panel for you to configure.
To configure the panel, click the "Configuration" button in the upper-right corner and select what you want to do. In "Edit query", select the data you want to show. A new window appears with your data model represented as entities and their attributes. Select the entity you want to view and the attributes you want to use. For example, for an entity named "Customers", you might display "Customer Name", and for a "Sales" entity, you might want to show "Total Sales".
Next, run a query and select the visualization you want.
![EDA interface][7]
You can add as many panels, filters, and text fields as you like, all with explanations. Once you save the dashboard, you can view it, share it with colleagues, or even publish it to the internet.
### Get EDA
The quickest way to see EDA in action is the [public demo][8]. But if you want to try it for yourself, you can get the latest EDA version with Docker:
```
$ docker run -p 80:80 jortilles/eda:latest
```
We also offer a SaaS option for anyone who wants to use EDA without handling installation, configuration, and continuous updates. Check out the [cloud option][9] on our website.
If you want to see it in action, you can watch some [demos][10] on YouTube.
EDA is under continuous development, and you can find its [source code][11] on GitHub.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/visualize-data-eda
Author: [Juanjo Ortilles][a]
Curated by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/jortilles
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://eda.jortilles.com/en/jortilles-english/
[3]: https://www.jortilles.com/
[4]: https://opensource.com/sites/default/files/uploads/eda-display.jpeg (EDA interface)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/article/20/10/mariadb-mysql-cheat-sheet
[7]: https://opensource.com/sites/default/files/uploads/eda-chart.jpeg (EDA interface)
[8]: https://demoeda.jortilles.com/
[9]: https://eda.jortilles.com
[10]: https://youtu.be/cBAAJbohHXQ
[11]: https://github.com/jortilles/EDA


@ -0,0 +1,116 @@
[#]: subject: (13 ways to get involved with your favorite open source project)
[#]: via: (https://opensource.com/article/21/4/open-source-project-level)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
13 ways to get involved with your favorite open source project
======
Apply GET/SET principles to connecting with open source projects.
![Looking at a map for career journey][1]
Many of my [blog][2]'s readers already know lots about open source, but I'm also aware that many know little, if anything, about it. I'm a big, big proponent of open source software (and beyond, such as open hardware), and there are lots of great resources you can find to learn more about it.
One very good starting point is the one you're now reading, Opensource.com. It's run for the broader community by a bunch of brilliant people at my current employer, Red Hat. (I should add a disclaimer that I'm not only employed by Red Hat but also a [Correspondent][3] at Opensource.com—a kind of frequent contributor/Elder Thing.) It has articles on pretty much every aspect of open source that you can imagine.
I was thinking about APIs today (they're [in the news][4] as I'm writing this, after a US Supreme Court judgment on an argument between Google and Oracle), and it occurred to me that if I were interested in understanding how to interact with open source at the project level but didn't know much about it, then a quick guide might be useful. The same goes if I were involved in an open source project (e.g., [Enarx][5]) interested in attracting contributors (particularly techie contributors) who aren't already knowledgeable about open source.
Given that most programmers will understand what GET and SET methods do (one reads data, the other writes data), I thought this might be a useful framework for considering engagement.[1][6] I'll start with GET, as that's how you're likely to be starting off—finding out more about the project—and then move to SET methods for getting involved with an open source project.
This is far from an exhaustive list, but I hope that I've hit most of the key ways you're most likely to start getting involved or encouraging others to get involved. The order I've chosen reflects what I suspect is a fairly typical approach to finding out more about a project, particularly for those who aren't open source savvy already but, as they say, YMMV.[3][7]
I've managed to stop myself from using Enarx (which I co-founded) as the sole source of examples and have tried to find a variety of projects to give you a taster. Disclaimer: their inclusion here does not mean that I am a user or contributor to the project, nor is it any guarantee of their open source credentials, code quality, up to date-ness, project maturity, or community health.[4][8]
### GET methods
* **Landing page:** The first encounter you have with a project will probably be its landing page. Some projects go for something basic, others apply more design, but you should be able to use this as the starting point for your adventures around the project. You'd generally hope to find links to various of the other resources listed below from the landing page.
* See [Sigstore][9]'s landing page.
* **Wiki:** In many cases, the project will have a wiki. This could be simple, or it could be complex. It may allow editing by anyone or only by a select band of contributors to the project, and its relevance as a source of truth may be impacted by how up to date it is. Still, the wiki is usually an excellent place to start.
* See the [Fedora Project][10] wiki.
* **Videos:** Some projects maintain a set of videos about their project. These may include introductions to the concepts, talking-head interviews with team members, conference sessions, demos, how-tos, and more. It's also worth looking for videos put up by contributors to the project but not necessarily officially owned by the project.
* See [Rust Language][11] videos.
* **Code of conduct:** Many projects insist that their project members follow a code of conduct to reduce harassment, reduce friction, and generally make the project a friendly, more inclusive, and more diverse place to be.
* See the [Linux kernel][12]'s CoC.
* **Binary downloads:** As projects get more mature, they may choose to provide precompiled binary downloads for users. More technically inclined users may choose to compile their own binaries from the codebase (see below), but binary downloads can be a quick way to try out a project and see whether it does what you want.
* See the binaries from [Chocolate Doom][13] (a Doom port).
* **Design documentation:** Without design documentation, it can be very difficult to get really into a project. (I've written about the [importance of architecture diagrams][14] before.) This documentation is likely to include everything from an API definition to complex use cases and threat models.
* See [Kubernetes][15]' design docs.
* **Codebase:** You've found out all you need to get going: It's time to look at the code! This may vary from a few lines to many thousands, include documentation in comments, or include test cases, but if the code is not there, then the project can't legitimately call itself open source.
* See [Rocket Rust web framework][16]'s code.[5][17]
* **Email/chat:** Most projects like to have a way for contributors to discuss matters asynchronously. The preferred medium varies among projects, but most will choose an email list, a chat server, or both. These are where to get to know other users and contributors, ask questions, celebrate successful compiles, and just hang out.
* See [Enarx chat][18].
* **Meetups, videoconferences, calls, etc.:** Although in-person meetings are tricky for many at the moment (I'm writing as COVID-19 still reduces travel opportunities), having ways for community members and contributors to get together synchronously can be really helpful for everybody. Sometimes these are scheduled on a daily, weekly, or monthly basis; sometimes, they coincide with other, larger meetups, sometimes a project gets big enough to have its own meetups; sometimes, it's so big that there are meetups of subprojects or internal interest groups.
* See the [Linux Security Summit Europe][19].
### SET methods
* **Bug reports:** The first time many of us contribute anything substantive back to an open source project is when we file a bug report. Bug reports from new users can be really helpful for projects, as they not only expose bugs that may not already be known to the project, but they also give clues as to how actual users of the project are trying to use the code. If the project already publishes binary downloads (see above), you don't even need to compile the code to try it and submit a bug report. But bug reports related to compilation and build can also be extremely useful to the project. Sometimes, the mechanism for bug reporting also provides a way to ask more general questions about the project or to ask for new features.
* See the issues page for [exa][20] (a replacement for the _ls_ command).
* **Tests:** Once you've started using the project, another way to get involved (particularly once you start contributing code) can be to design and submit tests for how the project _ought_ to work. This can be a great way to unearth both your assumptions (and lack of knowledge!) about the project and the project's design assumptions (some of which may well be flawed). Tests are often part of the code repository, but not always.
* See [GNOME Shell][21]'s test repository.
* **Wiki:** A wiki can be a great way to contribute to the project, whether you're coding or not. Many projects don't have as much information available as they should, and that information may not be aimed at people coming to the project "fresh." If this is what you've done, then you're in a great position to write material that will help other "newbs" get into the project faster, as you'll know what would have helped you if it had been there.
* See the wiki for [Wine][22] (Windows Emulator for Linux).
* **Code:** Last but not least, you can write code. You may take hours, months, or years to get to this stage—or you may never reach it—but open source software is nothing without its code. If you've paid enough attention to the other steps, gotten involved in the community, understood what the project aims to do, and have the technical expertise (which you may well develop as you go!), then writing code may be the way you want to contribute.
* See [Enarx][23] (again).
* * *
1. I did consider standard RESTful verbs—GET, PUT, POST, and DELETE—but that felt rather contrived.[2][24]
2. And I don't like the idea of DELETE in this context!
3. "Your Mileage May Vary," meaning, basically, that your experience may be different, and that's to be expected.
4. That said, I do use lots of them!
5. I included this one because I've spent _far_ too much of my time looking at this over the past few months…
* * *
_This article was originally published on [Alice, Eve, and Bob][25] and is reprinted with the author's permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/open-source-project-level
Author: [Mike Bursell][a]
Curated by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey)
[2]: https://aliceevebob.com/
[3]: https://opensource.com/correspondent-program
[4]: https://www.eff.org/deeplinks/2021/04/victory-fair-use-supreme-court-reverses-federal-circuit-oracle-v-google
[5]: https://enarx.dev/
[6]: tmp.WF7h0s934j#1
[7]: tmp.WF7h0s934j#3
[8]: tmp.WF7h0s934j#4
[9]: https://sigstore.dev/
[10]: https://fedoraproject.org/wiki/Fedora_Project_Wiki
[11]: https://www.youtube.com/channel/UCaYhcUwRBNscFNUKTjgPFiA
[12]: https://www.kernel.org/doc/html/latest/process/code-of-conduct.html
[13]: https://www.chocolate-doom.org/wiki/index.php/Downloads
[14]: https://opensource.com/article/20/5/diagrams-documentation
[15]: https://kubernetes.io/docs/reference/
[16]: https://github.com/SergioBenitez/Rocket/tree/v0.4
[17]: tmp.WF7h0s934j#5
[18]: https://chat.enarx.dev/
[19]: https://events.linuxfoundation.org/linux-security-summit-europe/
[20]: https://github.com/ogham/exa/issues
[21]: https://gitlab.gnome.org/GNOME/gnome-shell/tree/master/tests/interactive
[22]: https://wiki.winehq.org/Main_Page
[23]: https://github.com/enarx
[24]: tmp.WF7h0s934j#2
[25]: https://aliceevebob.com/2021/04/06/get-set-methods-for-open-source-projects/


@ -0,0 +1,189 @@
[#]: subject: (21 reasons why I think everyone should try Linux)
[#]: via: (https://opensource.com/article/21/4/linux-reasons)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
21 reasons why I think everyone should try Linux
======
Gaming, business, budgeting, art, programming, and more. These are just
a few of the many ways anyone can use Linux.
![Linux keys on the keyboard for a desktop computer][1]
When I go on holiday, I often end up at one or more used bookstores. I always find a good book I've been meaning to read, and I always justify the inevitable purchase by saying, "I'm on vacation; I should treat myself to this book." It works well, and I've acquired some of my favorite books this way. Yet, like so many traditions in life, it doesn't hold up to scrutiny. In reality, I don't need an excuse to buy a good book. All things being equal, I can do it any time I want. But having a reason does seem to make the process more enjoyable, somehow.
In my everyday life, I get a lot of questions about Linux. When caught unaware, I sometimes awkwardly ramble on about the history of open source software or the intellectual and economic benefits of sharing resources. Sometimes, I manage to mention some of my favorite features I enjoy on Linux and then end up reverse-engineering those benefits so they can be enjoyed on another operating system. These discussions are usually enjoyable and informative, but there's just one problem: None of it answers the question that people are really asking.
When a person asks you about Linux, they're often asking you to give them a reason to try it. There are exceptions, of course. People who have never heard the term "Linux" are probably asking for a literal definition of the word. But when your friends and colleagues confide that they're a little dissatisfied with their current operating system, it's probably safe to explain why you enjoy Linux, rather than lecturing them on why Linux is a better option than proprietary systems. In other words, you don't need a sales presentation; you need vacation photos (or used books you bought on vacation, if you're a bookworm).
To that end, the links below connect to 21 reasons I enjoy Linux, given to 21 separate people on 21 separate occasions.
### Gaming
![Gaming on Linux][2]
(Seth Kenlon, [CC BY-SA 4.0][3])
When it comes to enjoying a computer, one of the most obvious activities is gaming, and when it comes to gaming, I love it all. I'm happy to spend an evening playing an 8-bit puzzler or a triple-A studio epic. Other times, I settle in for a board game or a tabletop role-playing game (RPG).
And I [do it all on a Linux computer][4].
### Office
![LibreOffice][5]
(Seth Kenlon, [CC BY-SA 4.0][3])
One size doesn't fit all. This is as true for hats as it is for office work. It pains me to see colleagues locked into a singular workflow that doesn't suit them, and I enjoy the way Linux encourages users to find tools they love. I've used office applications ranging from big suites (like LibreOffice and OpenOffice) to lightweight word processors (such as Abiword) to minimal text editors (with Pandoc for conversion).
Regardless of what users around me are locked into, I have [the freedom to use the tools that work best][6] on my computer and with the way I want to work.
### Choice
![Linux login screen][7]
(Seth Kenlon, [CC BY-SA 4.0][3])
One of open source's most valuable traits is the trust it allows users to have in the software they use. This trust is derived from a network of friends who can read the source code of the applications and operating systems they use. That means, even if you don't know good source code from bad, you can make friends within the [open source community][8] who do. These are important connections that Linux users can make as they explore the distribution they run. If you don't trust the community that builds and maintains a distribution, you can and should move to a different distribution. Many of us have done it, and it's one of the strengths of having many distros to choose from.
[Linux offers choice][9] as a feature. A strong community, filled with real human connections, combined with the freedom of choice that Linux provides all give users confidence in the software they run. Because I've read some source code, and because I trust the people who maintain the code I haven't read, [I trust Linux][10].
### Budgeting
![Skrooge][11]
(Seth Kenlon, [CC BY-SA 4.0][3])
Budgeting isn't fun, but it's important. I learned early, while working menial jobs as I learned a _free_ operating system (Linux!) in my free time, that a budget isn't meant to track your money so much as it tracks your habits. That means that whether you're living paycheck to paycheck or you're well on the way to planning your retirement, you should [maintain a budget][12].
If you're in the United States, you can even [pay your taxes on Linux][13].
### Art
![MyPaint][14]
(Dogchicken, [CC BY-SA 4.0][3])
It doesn't matter whether you paint or do pixel art, [edit video][15], or scratch records, you can create great content on Linux. Some of the best art I've seen has been casually made with tools that aren't "industry standard," and it might surprise you just how much of the content you see is made the same way. Linux is a quiet engine, but it's a powerful one that drives indie artists as well as big producers.
Try using Linux [to create some art][16].
### Programming
![NetBeans][17]
(Seth Kenlon, [CC BY-SA 4.0][3])
Look, using Linux to program is almost a foregone conclusion. Second only to server administration, open source code and Linux are an obvious combination. There are [many reasons for this][18], but the one I cite is that it's just more fun. I run into plenty of roadblocks when inventing something new, so the last thing I need is for an operating system or software development kit (SDK) to be the reason for failure. On Linux, I have access to everything. Literally everything.
### Packaging
![Packaging GNOME software][19]
(Seth Kenlon, [CC BY-SA 4.0][3])
The thing nobody talks about when they tell you about programming is _packaging_. As a developer, you have to get your code to your users, or you won't have any users. Linux makes it easy for developers [to deliver apps][20] and easy for users to [install those applications][21].
It surprises many people, but [Linux can run many Windows applications][22] as if they were native apps. You shouldn't expect a Windows application to be executable on Linux. Still, many of the major common applications either already exist natively on Linux or else can be run through a compatibility layer called Wine.
### Technology
![Data center][23]
([Taylor Vick][24], [Unsplash License][25])
If you're looking for a career in IT, Linux is a great first step. As a former art student who stumbled into Linux to render video faster, I speak from experience!
Cutting-edge technology happens on Linux. Linux drives most of the internet, most of the world's fastest supercomputers, and the cloud itself. Today, Linux drives [edge computing][26], combining the power of cloud data centers with decentralized nodes for quick response.
You don't have to start at the top, though. You can learn to [automate][27] tasks on your laptop or desktop and remotely control systems with a [good terminal][28].
Linux is open to your new ideas and [available for customization][29].
### Share files
![Beach with cloudy sky][30]
(Seth Kenlon, [CC BY-SA 4.0][3])
Whether you're a fledgling sysadmin or just a housemate with files to distribute to friends, Linux makes [file sharing a breeze][31].
### Media
![Waterfall][32]
(Seth Kenlon, [CC BY-SA 4.0][3])
With all the talk about programming and servers, people sometimes envision Linux as just a black screen filled with green 1's and 0's. Unsurprisingly to those of us who use it, Linux [plays all your media][33], too.
### Easy install
![CentOS installation][34]
(Seth Kenlon, [CC BY-SA 4.0][3])
Never installed an operating system before? Linux is shockingly easy. Step-by-step, Linux installers hold your hand through an operating system installation to make you feel like a computer expert in under an hour.
[Go install Linux][35]!
### Try Linux
![Porteus][36]
(Seth Kenlon, [CC BY-SA 4.0][3])
If you're not ready to install Linux, then you can _try_ Linux instead. No idea where to start? It's less intimidating than you may think. Here are some [things you should consider first][37]. Then take your pick, download a distro, and come up with your own 21 reasons to use Linux.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/linux-reasons
Author: [Seth Kenlon][a]
Curated by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://opensource.com/sites/default/files/uploads/game_0ad-egyptianpyramids.jpg (Gaming on Linux)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://opensource.com/article/21/2/linux-gaming
[5]: https://opensource.com/sites/default/files/uploads/office_libreoffice.jpg (LibreOffice)
[6]: https://opensource.com/article/21/2/linux-workday
[7]: https://opensource.com/sites/default/files/uploads/trust_sddm.jpg (Linux login screen)
[8]: https://opensource.com/article/21/2/linux-community
[9]: https://opensource.com/article/21/2/linux-choice
[10]: https://opensource.com/article/21/2/open-source-security
[11]: https://opensource.com/sites/default/files/uploads/skrooge_1.jpg (Skrooge)
[12]: https://opensource.com/article/21/2/linux-skrooge
[13]: https://opensource.com/article/21/2/linux-tax-software
[14]: https://opensource.com/sites/default/files/uploads/art_mypaint.jpg (MyPaint)
[15]: https://opensource.com/article/21/2/linux-python-video
[16]: https://opensource.com/article/21/2/linux-art-design
[17]: https://opensource.com/sites/default/files/uploads/programming_java-netbeans.jpg (NetBeans)
[18]: https://opensource.com/article/21/2/linux-programming
[19]: https://opensource.com/sites/default/files/uploads/packaging_gnome-software.png (Packaging GNOME software)
[20]: https://opensource.com/article/21/2/linux-packaging
[21]: https://opensource.com/article/21/2/linux-package-management
[22]: https://opensource.com/article/21/2/linux-wine
[23]: https://opensource.com/sites/default/files/uploads/edge_taylorvick-unsplash.jpg (Data center)
[24]: https://unsplash.com/@tvick
[25]: https://unsplash.com/license
[26]: https://opensource.com/article/21/2/linux-edge-computing
[27]: https://opensource.com/article/21/2/linux-automation
[28]: https://opensource.com/article/21/2/linux-terminals
[29]: https://opensource.com/article/21/2/linux-technology
[30]: https://opensource.com/sites/default/files/uploads/cloud_beach-sethkenlon.jpg (Beach with cloudy sky)
[31]: https://opensource.com/article/21/3/linux-server
[32]: https://opensource.com/sites/default/files/uploads/media_waterfall.jpg (Waterfall)
[33]: https://opensource.com/article/21/2/linux-media-players
[34]: https://opensource.com/sites/default/files/uploads/install_centos8.jpg (CentOS installation)
[35]: https://opensource.com/article/21/2/linux-installation
[36]: https://opensource.com/sites/default/files/uploads/porteus_0.jpg (Porteus)
[37]: https://opensource.com/article/21/2/try-linux


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( chensanle )
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,551 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use systemd timers instead of cronjobs)
[#]: via: (https://opensource.com/article/20/7/systemd-timers)
[#]: author: (David Both https://opensource.com/users/dboth)
Use systemd timers instead of cronjobs
======
Timers provide finer-grained control of events than cronjobs.
![Team checklist][1]
I am in the process of converting my [cron][2] jobs to systemd timers. I have used timers for a few years, but usually, I learned just enough to perform the task I was working on. While doing research for this [systemd series][3], I learned that systemd timers have some very interesting capabilities.
Like cron jobs, systemd timers can trigger events—shell scripts and programs—at specified time intervals, such as once a day, on a specific day of the month (perhaps only if it is a Monday), or every 15 minutes during business hours from 8am to 6pm. Timers can also do some things that cron jobs cannot. For example, a timer can trigger a script or program to run a specific amount of time after an event such as boot, startup, completion of a previous task, or even the previous completion of the service unit called by the timer.
### System maintenance timers
When Fedora or any systemd-based distribution is installed on a new system, it creates several timers that are part of the system maintenance procedures that happen in the background of any Linux host. These timers trigger events necessary for common maintenance tasks, such as updating system databases, cleaning temporary directories, rotating log files, and more.
As an example, I'll look at some of the timers on my primary workstation by using the `systemctl status *timer` command to list all the timers on my host. The asterisk symbol works the same as it does for file globbing, so this command lists all systemd timer units:
```
[root@testvm1 ~]# systemctl status *timer
● mlocate-updatedb.timer - Updates mlocate database every day
     Loaded: loaded (/usr/lib/systemd/system/mlocate-updatedb.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
    Trigger: Fri 2020-06-05 00:00:00 EDT; 15h left
   Triggers: ● mlocate-updatedb.service
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Updates mlocate database every day.
● logrotate.timer - Daily rotation of log files
     Loaded: loaded (/usr/lib/systemd/system/logrotate.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
    Trigger: Fri 2020-06-05 00:00:00 EDT; 15h left
   Triggers: ● logrotate.service
       Docs: man:logrotate(8)
             man:logrotate.conf(5)
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Daily rotation of log files.
● sysstat-summary.timer - Generate summary of yesterday's process accounting
     Loaded: loaded (/usr/lib/systemd/system/sysstat-summary.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
    Trigger: Fri 2020-06-05 00:07:00 EDT; 15h left
   Triggers: ● sysstat-summary.service
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Generate summary of yesterday's process accounting.
● fstrim.timer - Discard unused blocks once a week
     Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
    Trigger: Mon 2020-06-08 00:00:00 EDT; 3 days left
   Triggers: ● fstrim.service
       Docs: man:fstrim
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Discard unused blocks once a week.
● sysstat-collect.timer - Run system activity accounting tool every 10 minutes
     Loaded: loaded (/usr/lib/systemd/system/sysstat-collect.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
    Trigger: Thu 2020-06-04 08:50:00 EDT; 41s left
   Triggers: ● sysstat-collect.service
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Run system activity accounting tool every 10 minutes.
● dnf-makecache.timer - dnf makecache --timer
     Loaded: loaded (/usr/lib/systemd/system/dnf-makecache.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
    Trigger: Thu 2020-06-04 08:51:00 EDT; 1min 41s left
   Triggers: ● dnf-makecache.service
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started dnf makecache timer.
● systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories
     Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.timer; static; vendor preset: disabled)
     Active: active (waiting) since Tue 2020-06-02 08:02:33 EDT; 2 days ago
    Trigger: Fri 2020-06-05 08:19:00 EDT; 23h left
   Triggers: ● systemd-tmpfiles-clean.service
       Docs: man:tmpfiles.d(5)
             man:systemd-tmpfiles(8)
Jun 02 08:02:33 testvm1.both.org systemd[1]: Started Daily Cleanup of Temporary Directories.
```
Each timer has at least six lines of information associated with it:
* The first line has the timer's file name and a short description of its purpose.
* The second line displays the timer's status, whether it is loaded, the full path to the timer unit file, and the vendor preset.
* The third line indicates its active status, which includes the date and time the timer became active.
* The fourth line contains the date and time the timer will be triggered next and an approximate time until the trigger occurs.
* The fifth line shows the name of the event or the service that is triggered by the timer.
* Some (but not all) systemd unit files have pointers to the relevant documentation. Three of the timers in my virtual machine's output have pointers to documentation. This is a nice (but optional) bit of data.
* The final line is the journal entry for the most recent instance of the service triggered by the timer.
Depending upon your host, you will probably have a different set of timers.
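For a more compact survey, systemd also provides a dedicated verb: `systemctl list-timers` prints one line per timer with its next and most recent trigger times. A quick way to review a host (the optional `--all` flag also includes inactive timers):
```
# One line per timer: NEXT, LEFT, LAST, PASSED, UNIT, ACTIVATES
systemctl list-timers --all
```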
### Create a timer
Although we can deconstruct one or more of the existing timers to learn how they work, let's create our own [service unit][4] and a timer unit to trigger it. We will use a fairly trivial example in order to keep this simple. After we have finished this, it will be easier to understand how the other timers work and to determine what they are doing.
First, create a simple service that will run something basic, such as the `free` command. For example, you may want to monitor free memory at regular intervals. Create the following `myMonitor.service` unit file in the `/etc/systemd/system` directory. It does not need to be executable:
```
# This service unit is for testing timer units
# By David Both
# Licensed under GPL V2
#
[Unit]
Description=Logs system statistics to the systemd journal
Wants=myMonitor.timer
[Service]
Type=oneshot
ExecStart=/usr/bin/free
[Install]
WantedBy=multi-user.target
```
This is about the simplest service unit you can create. Now let's look at the status and test our service unit to ensure that it works as we expect it to.
```
[root@testvm1 system]# systemctl status myMonitor.service
● myMonitor.service - Logs system statistics to the systemd journal
     Loaded: loaded (/etc/systemd/system/myMonitor.service; disabled; vendor preset: disabled)
     Active: inactive (dead)
[root@testvm1 system]# systemctl start myMonitor.service
[root@testvm1 system]#
```
Where is the output? By default, the standard output (`STDOUT`) from programs run by systemd service units is sent to the systemd journal, which leaves a record you can view now or later—up to a point. (I will look at systemd journaling and retention strategies in a future article in this series.) Look at the journal specifically for your service unit and for today only. The `-S` option, which is the short version of `--since`, allows you to specify the time period that the `journalctl` tool should search for entries. This isn't because you don't care about previous results—in this case, there won't be any—it is to shorten the search time if your host has been running for a long time and has accumulated a large number of entries in the journal:
```
[root@testvm1 system]# journalctl -S today -u myMonitor.service
-- Logs begin at Mon 2020-06-08 07:47:20 EDT, end at Thu 2020-06-11 09:40:47 EDT. --
Jun 11 09:12:09 testvm1.both.org systemd[1]: Starting Logs system statistics to the systemd journal...
Jun 11 09:12:09 testvm1.both.org free[377966]:               total        used        free      shared  buff/cache   available
Jun 11 09:12:09 testvm1.both.org free[377966]: Mem:       12635740      522868    11032860        8016     1080012    11821508
Jun 11 09:12:09 testvm1.both.org free[377966]: Swap:       8388604           0     8388604
Jun 11 09:12:09 testvm1.both.org systemd[1]: myMonitor.service: Succeeded.
[root@testvm1 system]#
```
A task triggered by a service can be a single program, a series of programs, or a script written in any scripting language. Add another task to the service by adding the following line to the end of the `[Service]` section of the `myMonitor.service` unit file:
```
ExecStart=/usr/bin/lsblk
```
Start the service again and check the journal for the results, which should look like this. You should see the results from both commands in the journal:
```
Jun 11 15:42:18 testvm1.both.org systemd[1]: Starting Logs system statistics to the systemd journal...
Jun 11 15:42:18 testvm1.both.org free[379961]:               total        used        free      shared  buff/cache   available
Jun 11 15:42:18 testvm1.both.org free[379961]: Mem:       12635740      531788    11019540        8024     1084412    11812272
Jun 11 15:42:18 testvm1.both.org free[379961]: Swap:       8388604           0     8388604
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: sda             8:0    0  120G  0 disk
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: ├─sda1          8:1    0    4G  0 part /boot
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: └─sda2          8:2    0  116G  0 part
Jun 11 15:42:18 testvm1.both.org lsblk[379962]:   ├─VG01-root 253:0    0    5G  0 lvm  /
Jun 11 15:42:18 testvm1.both.org lsblk[379962]:   ├─VG01-swap 253:1    0    8G  0 lvm  [SWAP]
Jun 11 15:42:18 testvm1.both.org lsblk[379962]:   ├─VG01-usr  253:2    0   30G  0 lvm  /usr
Jun 11 15:42:18 testvm1.both.org lsblk[379962]:   ├─VG01-tmp  253:3    0   10G  0 lvm  /tmp
Jun 11 15:42:18 testvm1.both.org lsblk[379962]:   ├─VG01-var  253:4    0   20G  0 lvm  /var
Jun 11 15:42:18 testvm1.both.org lsblk[379962]:   └─VG01-home 253:5    0   10G  0 lvm  /home
Jun 11 15:42:18 testvm1.both.org lsblk[379962]: sr0            11:0    1 1024M  0 rom
Jun 11 15:42:18 testvm1.both.org systemd[1]: myMonitor.service: Succeeded.
Jun 11 15:42:18 testvm1.both.org systemd[1]: Finished Logs system statistics to the systemd journal.
```
Now that you know your service works as expected, create the timer unit file, `myMonitor.timer` in `/etc/systemd/system`, and add the following:
```
# This timer unit is for testing
# By David Both
# Licensed under GPL V2
#
[Unit]
Description=Logs some system statistics to the systemd journal
Requires=myMonitor.service
[Timer]
Unit=myMonitor.service
OnCalendar=*-*-* *:*:00
[Install]
WantedBy=timers.target
```
The `OnCalendar` time specification in the `myMonitor.timer` file, `*-*-* *:*:00`, should trigger the timer to execute the `myMonitor.service` unit every minute. I will explore `OnCalendar` settings a bit later in this article.
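If you want to sanity-check an `OnCalendar` expression before committing it to a timer, the `systemd-analyze calendar` verb parses a specification and reports its normalized form and when it would next elapse:
```
# Validate a calendar expression and show its next elapse time
systemd-analyze calendar "*-*-* *:*:00"
```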
For now, observe any journal entries pertaining to running your service when it is triggered by the timer. You could also follow the timer, but following the service allows you to see the results in near real time. Run `journalctl` with the `-f` (follow) option:
```
[root@testvm1 system]# journalctl -S today -f -u myMonitor.service
-- Logs begin at Mon 2020-06-08 07:47:20 EDT. --
```
Start but do not enable the timer, and see what happens after it runs for a while:
```
[root@testvm1 ~]# systemctl start myMonitor.timer
[root@testvm1 ~]#
```
One result shows up right away, and the next ones come at—sort of—one-minute intervals. Watch the journal for a few minutes and see if you notice the same things I did:
```
[root@testvm1 system]# journalctl -S today -f -u myMonitor.service
-- Logs begin at Mon 2020-06-08 07:47:20 EDT. --
Jun 13 08:39:18 testvm1.both.org systemd[1]: Starting Logs system statistics to the systemd journal...
Jun 13 08:39:18 testvm1.both.org systemd[1]: myMonitor.service: Succeeded.
Jun 13 08:39:19 testvm1.both.org free[630566]:               total        used        free      shared  buff/cache   available
Jun 13 08:39:19 testvm1.both.org free[630566]: Mem:       12635740      556604    10965516        8036     1113620    11785628
Jun 13 08:39:19 testvm1.both.org free[630566]: Swap:       8388604           0     8388604
Jun 13 08:39:18 testvm1.both.org systemd[1]: Finished Logs system statistics to the systemd journal.
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: sda             8:0    0  120G  0 disk
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: ├─sda1          8:1    0    4G  0 part /boot
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: └─sda2          8:2    0  116G  0 part
Jun 13 08:39:19 testvm1.both.org lsblk[630567]:   ├─VG01-root 253:0    0    5G  0 lvm  /
Jun 13 08:39:19 testvm1.both.org lsblk[630567]:   ├─VG01-swap 253:1    0    8G  0 lvm  [SWAP]
Jun 13 08:39:19 testvm1.both.org lsblk[630567]:   ├─VG01-usr  253:2    0   30G  0 lvm  /usr
Jun 13 08:39:19 testvm1.both.org lsblk[630567]:   ├─VG01-tmp  253:3    0   10G  0 lvm  /tmp
Jun 13 08:39:19 testvm1.both.org lsblk[630567]:   ├─VG01-var  253:4    0   20G  0 lvm  /var
Jun 13 08:39:19 testvm1.both.org lsblk[630567]:   └─VG01-home 253:5    0   10G  0 lvm  /home
Jun 13 08:39:19 testvm1.both.org lsblk[630567]: sr0            11:0    1 1024M  0 rom
Jun 13 08:40:46 testvm1.both.org systemd[1]: Starting Logs system statistics to the systemd journal...
Jun 13 08:40:46 testvm1.both.org free[630572]:               total        used        free      shared  buff/cache   available
Jun 13 08:40:46 testvm1.both.org free[630572]: Mem:       12635740      555228    10966836        8036     1113676    11786996
Jun 13 08:40:46 testvm1.both.org free[630572]: Swap:       8388604           0     8388604
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: sda             8:0    0  120G  0 disk
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: ├─sda1          8:1    0    4G  0 part /boot
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: └─sda2          8:2    0  116G  0 part
Jun 13 08:40:46 testvm1.both.org lsblk[630574]:   ├─VG01-root 253:0    0    5G  0 lvm  /
Jun 13 08:40:46 testvm1.both.org lsblk[630574]:   ├─VG01-swap 253:1    0    8G  0 lvm  [SWAP]
Jun 13 08:40:46 testvm1.both.org lsblk[630574]:   ├─VG01-usr  253:2    0   30G  0 lvm  /usr
Jun 13 08:40:46 testvm1.both.org lsblk[630574]:   ├─VG01-tmp  253:3    0   10G  0 lvm  /tmp
Jun 13 08:40:46 testvm1.both.org lsblk[630574]:   ├─VG01-var  253:4    0   20G  0 lvm  /var
Jun 13 08:40:46 testvm1.both.org lsblk[630574]:   └─VG01-home 253:5    0   10G  0 lvm  /home
Jun 13 08:40:46 testvm1.both.org lsblk[630574]: sr0            11:0    1 1024M  0 rom
Jun 13 08:40:46 testvm1.both.org systemd[1]: myMonitor.service: Succeeded.
Jun 13 08:40:46 testvm1.both.org systemd[1]: Finished Logs system statistics to the systemd journal.
Jun 13 08:41:46 testvm1.both.org systemd[1]: Starting Logs system statistics to the systemd journal...
Jun 13 08:41:46 testvm1.both.org free[630580]:               total        used        free      shared  buff/cache   available
Jun 13 08:41:46 testvm1.both.org free[630580]: Mem:       12635740      553488    10968564        8036     1113688    11788744
Jun 13 08:41:46 testvm1.both.org free[630580]: Swap:       8388604           0     8388604
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: sda             8:0    0  120G  0 disk
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: ├─sda1          8:1    0    4G  0 part /boot
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: └─sda2          8:2    0  116G  0 part
Jun 13 08:41:47 testvm1.both.org lsblk[630581]:   ├─VG01-root 253:0    0    5G  0 lvm  /
Jun 13 08:41:47 testvm1.both.org lsblk[630581]:   ├─VG01-swap 253:1    0    8G  0 lvm  [SWAP]
Jun 13 08:41:47 testvm1.both.org lsblk[630581]:   ├─VG01-usr  253:2    0   30G  0 lvm  /usr
Jun 13 08:41:47 testvm1.both.org lsblk[630581]:   ├─VG01-tmp  253:3    0   10G  0 lvm  /tmp
Jun 13 08:41:47 testvm1.both.org lsblk[630581]:   ├─VG01-var  253:4    0   20G  0 lvm  /var
Jun 13 08:41:47 testvm1.both.org lsblk[630581]:   └─VG01-home 253:5    0   10G  0 lvm  /home
Jun 13 08:41:47 testvm1.both.org lsblk[630581]: sr0            11:0    1 1024M  0 rom
Jun 13 08:41:47 testvm1.both.org systemd[1]: myMonitor.service: Succeeded.
Jun 13 08:41:47 testvm1.both.org systemd[1]: Finished Logs system statistics to the systemd journal.
```
Be sure to check the status of both the timer and the service.
You probably noticed at least two things in the journal. First, you do not need to do anything special to cause the `STDOUT` of the commands started by `ExecStart` in the `myMonitor.service` unit to be stored in the journal. That is all part of using systemd for running services. However, it does mean that you might need to be careful about running scripts from a service unit and how much `STDOUT` they generate.
The second thing is that the timer does not trigger exactly on the minute at :00 seconds or even exactly one minute from the previous instance. This is intentional, but it can be overridden if necessary (or if it just offends your sysadmin sensibilities).
The reason for this behavior is to prevent multiple services from triggering at exactly the same time. For example, you can use time specifications such as Weekly, Daily, and more. These shortcuts are all defined to trigger at 00:00:00 hours on the day they are triggered. When multiple timers are specified this way, there is a strong likelihood that they would attempt to start simultaneously.
systemd timers are intentionally designed to trigger somewhat randomly around the specified time to try to prevent simultaneous triggers. They trigger semi-randomly within a time window that starts at the specified trigger time and ends at the specified time plus one minute. This trigger time is maintained at a stable position with respect to all other defined timer units, according to the `systemd.timer` man page. You can see in the journal entries above that the timer triggered immediately when it started and then about 46 or 47 seconds after each minute.
Most of the time, such probabilistic trigger times are fine. When scheduling tasks such as backups, so long as they run during off-hours, there will be no problems. A sysadmin can select a deterministic start time, such as 01:05:00 in a typical cron job specification, to avoid conflicts with other tasks, but there is a large range of time values that will accomplish that. A one-minute bit of randomness in a start time is usually irrelevant.
However, for some tasks, exact trigger times are an absolute requirement. For those, you can specify greater trigger time-span accuracy (to within a microsecond) by adding a statement like this to the `Timer` section of the timer unit file:
```
AccuracySec=1us
```
Time spans can be used to specify the desired accuracy as well as to define the intervals for repeating or one-time events. systemd recognizes the following units (see the sketch after this list for how they fit into a `[Timer]` section):
* usec, us, µs
* msec, ms
* seconds, second, sec, s
* minutes, minute, min, m
* hours, hour, hr, h
* days, day, d
* weeks, week, w
* months, month, M (defined as 30.44 days)
* years, year, y (defined as 365.25 days)
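As an illustration, here is a minimal, hypothetical `[Timer]` section that combines time-span values with the `AccuracySec` setting. The directive names are real systemd options, but the values are only examples:
```
[Timer]
# Trigger 15 minutes after boot, then one hour after each activation,
# accurate to within one second (example values only)
OnBootSec=15min
OnUnitActiveSec=1h
AccuracySec=1s
```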
All the default timers in `/usr/lib/systemd/system` specify a much larger range for accuracy because exact times are not critical. Look at some of the specifications in the system-created timers:
```
[root@testvm1 system]# grep Accur /usr/lib/systemd/system/*timer
/usr/lib/systemd/system/fstrim.timer:AccuracySec=1h
/usr/lib/systemd/system/logrotate.timer:AccuracySec=1h
/usr/lib/systemd/system/logwatch.timer:AccuracySec=12h
/usr/lib/systemd/system/mlocate-updatedb.timer:AccuracySec=24h
/usr/lib/systemd/system/raid-check.timer:AccuracySec=24h
/usr/lib/systemd/system/unbound-anchor.timer:AccuracySec=24h
[root@testvm1 system]#
```
View the complete contents of some of the timer unit files in the `/usr/lib/systemd/system` directory to see how they are constructed.
You do not have to enable the timer in this experiment to activate it at boot time, but the command to do so would be:
```
[root@testvm1 system]# systemctl enable myMonitor.timer
```
The unit files you created do not need to be executable. You also did not enable the service unit because it is triggered by the timer. You can still trigger the service unit manually from the command line, should you want to. Try that and observe the journal.
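For example, using the unit and journal commands from earlier in this experiment:
```
[root@testvm1 system]# systemctl start myMonitor.service
[root@testvm1 system]# journalctl -S today -u myMonitor.service
```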
See the man pages for `systemd.timer` and `systemd.time` for more information about timer accuracy, event-time specifications, and trigger events.
### Timer types
systemd timers have other capabilities that are not found in cron, which triggers only on specific, repetitive, real-time dates and times. systemd timers can be configured to trigger based on status changes in other systemd units. For example, a timer might be configured to trigger a specific elapsed time after system boot, after startup, or after a defined service unit activates. These are called monotonic timers. Monotonic refers to a count or sequence that continually increases. These timers are not persistent because they reset after each boot.
Table 1 lists the monotonic timers along with a short definition of each, as well as the `OnCalendar` timer, which is not monotonic and is used to specify future times that may or may not be repetitive. This information is derived from the `systemd.timer` man page with a few minor changes.
Timer | Monotonic | Definition
---|---|---
`OnActiveSec=` | X | This defines a timer relative to the moment the timer is activated.
`OnBootSec=` | X | This defines a timer relative to when the machine boots up.
`OnStartupSec=` | X | This defines a timer relative to when the service manager first starts. For system timer units, this is very similar to `OnBootSec=`, as the system service manager generally starts very early at boot. It's primarily useful when configured in units running in the per-user service manager, as the user service manager generally starts on first login only, not during boot.
`OnUnitActiveSec=` | X | This defines a timer relative to when the timer that is to be activated was last activated.
`OnUnitInactiveSec=` | X | This defines a timer relative to when the timer that is to be activated was last deactivated.
`OnCalendar=` | | This defines real-time (i.e., wall clock) timers with calendar event expressions. See `systemd.time(7)` for more information on the syntax of calendar event expressions. Otherwise, the semantics are similar to `OnActiveSec=` and related settings. This timer is the one most like those used with the cron service.
_Table 1: systemd timer definitions_
The monotonic timers can use the same shortcut names for their time spans as the `AccuracySec` statement mentioned before, but systemd normalizes those names to seconds. For example, you might want to specify a timer that triggers an event one time, five days after the system boots; that might look like: `OnBootSec=5d`. If the host booted at `2020-06-15 09:45:27`, the timer would trigger at `2020-06-20 09:45:27` or within one minute after.
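A complete timer unit built around a monotonic trigger might look like this minimal sketch. It reuses the `myMonitor.service` unit from earlier in this article, and the five-day value is only an example:
```
[Unit]
Description=Trigger myMonitor.service five days after boot
[Timer]
OnBootSec=5d
Unit=myMonitor.service
[Install]
WantedBy=timers.target
```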
### Calendar event specifications
Calendar event specifications are a key part of triggering timers at desired repetitive times. Start by looking at some specifications used with the `OnCalendar` setting.
systemd and its timers use a different style for time and date specifications than the format used in crontab. It is more flexible than crontab and allows fuzzy dates and times in the manner of the `at` command. It should also be familiar enough that it will be easy to understand.
The basic format for systemd timers using `OnCalendar=` is `DOW YYYY-MM-DD HH:MM:SS`. DOW (day of week) is optional, and other fields can use an asterisk (*) to match any value for that position. All calendar time forms are converted to a normalized form. If the time is not specified, it is assumed to be 00:00:00. If the date is not specified but the time is, the next match might be today or tomorrow, depending upon the current time. Names or numbers can be used for the month and day of the week. Comma-separated lists of each unit can be specified. Unit ranges can be specified with `..` between the beginning and ending values.
There are a couple of interesting options for specifying dates. The tilde (~) can be used to specify the last day of the month or a given number of days prior to the last day of the month. The slash (/) can be used as a modifier that specifies a repetition step, such as a starting day followed by every second day thereafter.
Here are some examples of some typical time specifications used in `OnCalendar` statements.
Calendar event specification | Description
---|---
DOW YYYY-MM-DD HH:MM:SS |
*-*-* 00:15:30 | Every day of every month of every year at 15 minutes and 30 seconds after midnight
Weekly | Every Monday at 00:00:00
Mon *-*-* 00:00:00 | Same as weekly
Mon | Same as weekly
Wed 2020-*-* | Every Wednesday in 2020 at 00:00:00
Mon..Fri 2021-*-* | Every weekday in 2021 at 00:00:00
2022-6,7,8-1,15 01:15:00 | The 1st and 15th of June, July, and August of 2022 at 01:15:00am
Mon *-05~03 | The next occurrence of a Monday in May of any year which is also the 3rd day from the end of the month.
Mon..Fri *-08~04 | The 4th day preceding the end of August for any years in which it also falls on a weekday.
*-05~03/2 | The 3rd day from the end of May and then again two days later. Repeats every year. Note that this expression uses the tilde (~).
*-05-03/2 | The third day of May and then every 2nd day for the rest of May. Repeats every year. Note that this expression uses the dash (-).
_Table 2: Sample `OnCalendar` event specifications_
### Test calendar specifications
systemd provides an excellent tool for validating and examining calendar time event specifications in a timer. The `systemd-analyze calendar` tool parses a calendar time event specification and provides the normalized form as well as other interesting information such as the date and time of the next "elapse," i.e., match, and the approximate amount of time before the trigger time is reached.
First, look at a date in the future without a time (note that the times for `Next elapse` and `UTC` will differ based on your local time zone):
```
[student@studentvm1 ~]$ systemd-analyze calendar 2030-06-17
  Original form: 2030-06-17                
Normalized form: 2030-06-17 00:00:00        
    Next elapse: Mon 2030-06-17 00:00:00 EDT
       (in UTC): Mon 2030-06-17 04:00:00 UTC
       From now: 10 years 0 months left    
[student@studentvm1 ~]$
```
Now add a time. In this example, the date and time are analyzed separately as non-related entities:
```
[root@testvm1 system]# systemd-analyze calendar 2030-06-17 15:21:16
  Original form: 2030-06-17                
Normalized form: 2030-06-17 00:00:00        
    Next elapse: Mon 2030-06-17 00:00:00 EDT
       (in UTC): Mon 2030-06-17 04:00:00 UTC
       From now: 10 years 0 months left    
  Original form: 15:21:16                  
Normalized form: *-*-* 15:21:16            
    Next elapse: Mon 2020-06-15 15:21:16 EDT
       (in UTC): Mon 2020-06-15 19:21:16 UTC
       From now: 3h 55min left              
[root@testvm1 system]#
```
To analyze the date and time as a single unit, enclose them together in quotes. Be sure to remove the quotes when using them in the `OnCalendar=` event specification in a timer unit or you will get errors:
```
[root@testvm1 system]# systemd-analyze calendar "2030-06-17 15:21:16"
Normalized form: 2030-06-17 15:21:16        
    Next elapse: Mon 2030-06-17 15:21:16 EDT
       (in UTC): Mon 2030-06-17 19:21:16 UTC
       From now: 10 years 0 months left    
[root@testvm1 system]#
```
Now test some of the entries in Table 2. I especially like this one:
```
[root@testvm1 system]# systemd-analyze calendar "2022-6,7,8-1,15 01:15:00"
  Original form: 2022-6,7,8-1,15 01:15:00
Normalized form: 2022-06,07,08-01,15 01:15:00
    Next elapse: Wed 2022-06-01 01:15:00 EDT
       (in UTC): Wed 2022-06-01 05:15:00 UTC
       From now: 1 years 11 months left
[root@testvm1 system]#
```
Let's look at one example in which we list the next five elapses for a calendar expression:
```
[root@testvm1 ~]# systemd-analyze calendar --iterations=5 "Mon *-05~3"
  Original form: Mon *-05~3                
Normalized form: Mon *-05~03 00:00:00      
    Next elapse: Mon 2023-05-29 00:00:00 EDT
       (in UTC): Mon 2023-05-29 04:00:00 UTC
       From now: 2 years 11 months left    
       Iter. #2: Mon 2028-05-29 00:00:00 EDT
       (in UTC): Mon 2028-05-29 04:00:00 UTC
       From now: 7 years 11 months left    
       Iter. #3: Mon 2034-05-29 00:00:00 EDT
       (in UTC): Mon 2034-05-29 04:00:00 UTC
       From now: 13 years 11 months left    
       Iter. #4: Mon 2045-05-29 00:00:00 EDT
       (in UTC): Mon 2045-05-29 04:00:00 UTC
       From now: 24 years 11 months left    
       Iter. #5: Mon 2051-05-29 00:00:00 EDT
       (in UTC): Mon 2051-05-29 04:00:00 UTC
       From now: 30 years 11 months left    
[root@testvm1 ~]#
```
This should give you enough information to start testing your `OnCalendar` time specifications. The `systemd-analyze` tool can be used for other interesting analyses, which I will begin to explore in the next article in this series.
### Summary
systemd timers can be used to perform the same kinds of tasks as the cron tool but offer more flexibility in terms of the calendar and monotonic time specifications for triggering events.
Even though the service unit you created for this experiment is usually triggered by the timer, you can also use the `systemctl start myMonitor.service` command to trigger it at any time. Multiple maintenance tasks can be scripted in the single service unit that a timer triggers; these can be Bash scripts or Linux utility programs. You can run the service triggered by the timer to run all the scripts, or you can run individual scripts as needed.
I will explore systemd's use of time and time specifications in much more detail in the next article.
I have not yet seen any indication that `cron` and `at` will be deprecated. I hope that does not happen because `at`, at least, is much easier to use for one-off task scheduling than systemd timers.
### Resources
There is a great deal of information about systemd available on the internet, but much is terse, obtuse, or even misleading. In addition to the resources mentioned in this article, the following webpages offer more detailed and reliable information about systemd startup.
* The Fedora Project has a good, practical [guide to systemd][5]. It has pretty much everything you need to know in order to configure, manage, and maintain a Fedora computer using systemd.
* The Fedora Project also has a good [cheat sheet][6] that cross-references the old SystemV commands to comparable systemd ones.
* For detailed technical information about systemd and the reasons for creating it, check out [Freedesktop.org][7]'s [description of systemd][8].
* [Linux.com][9]'s "More systemd fun" offers more advanced systemd [information and tips][10].
There is also a series of deeply technical articles for Linux sysadmins by Lennart Poettering, the designer and primary developer of systemd. These articles were written between April 2010 and September 2011, but they are just as relevant now as they were then. Much of everything else good that has been written about systemd and its ecosystem is based on these papers.
* [Rethinking PID 1][11]
* [systemd for Administrators, Part I][12]
* [systemd for Administrators, Part II][13]
* [systemd for Administrators, Part III][14]
* [systemd for Administrators, Part IV][15]
* [systemd for Administrators, Part V][16]
* [systemd for Administrators, Part VI][17]
* [systemd for Administrators, Part VII][18]
* [systemd for Administrators, Part VIII][19]
* [systemd for Administrators, Part IX][20]
* [systemd for Administrators, Part X][21]
* [systemd for Administrators, Part XI][22]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/systemd-timers
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://opensource.com/article/17/11/how-use-cron-linux
[3]: https://opensource.com/users/dboth
[4]: https://opensource.com/article/20/5/manage-startup-systemd
[5]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html
[6]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet
[7]: http://Freedesktop.org
[8]: http://www.freedesktop.org/wiki/Software/systemd
[9]: http://Linux.com
[10]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/
[11]: http://0pointer.de/blog/projects/systemd.html
[12]: http://0pointer.de/blog/projects/systemd-for-admins-1.html
[13]: http://0pointer.de/blog/projects/systemd-for-admins-2.html
[14]: http://0pointer.de/blog/projects/systemd-for-admins-3.html
[15]: http://0pointer.de/blog/projects/systemd-for-admins-4.html
[16]: http://0pointer.de/blog/projects/three-levels-of-off.html
[17]: http://0pointer.de/blog/projects/changing-roots
[18]: http://0pointer.de/blog/projects/blame-game.html
[19]: http://0pointer.de/blog/projects/the-new-configuration-files.html
[20]: http://0pointer.de/blog/projects/on-etc-sysinit.html
[21]: http://0pointer.de/blog/projects/instances.html
[22]: http://0pointer.de/blog/projects/inetd.html

View File

@ -1,224 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (11 Linux Distributions You Can Rely on for Your Ancient 32-bit Computer)
[#]: via: (https://itsfoss.com/32-bit-linux-distributions/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
11 Linux Distributions You Can Rely on for Your Ancient 32-bit Computer
======
If you've been keeping up with the latest [Linux distributions][1], you must have noticed that 32-bit support has been dropped from [most of the popular Linux distributions][2]. Arch Linux, Ubuntu, Fedora: everyone has dropped support for this older architecture.
But, what if you have vintage hardware with you that still needs to be revived or you want to make use of it for something? Fret not, there are still a few options left to choose from for your 32-bit system.
In this article, I've tried to compile some of the best Linux distributions that will keep supporting the 32-bit platform for the next few years.
### Top Linux distributions that still offer 32-bit support
![][3]
This list is a bit different from [our earlier list of Linux distributions for old laptops][4]. Even 64-bit computers can be considered old if they were released before 2010. This is why some suggestions listed there included distros that only support 64-bit now.
The information presented here is correct to the best of my knowledge and findings, but if you find otherwise, please let me know in the comment section.
Before you go on, I suppose you know [how to check if you have a 32-bit or 64-bit computer][5].
#### 1\. Debian
![Image Credits: mrneilypops / Deviantart][6]
Debian is a fantastic choice for 32-bit systems because they still support it with their latest stable release. At the time of writing this, the latest stable release **Debian 10 “buster”** offers a 32-bit version and is supported until 2024.
If you're new to Debian, it is worth mentioning that you get solid documentation for everything on their [official wiki][7]. So, it shouldn't be an issue to get started.
You can browse through the [available installers][8] to get it installed. However, before you proceed, I would recommend referring to the list of [things to remember before installing Debian][9] in addition to its [installation manual][10].
[Debian][11]
#### 2\. Slax
![][12]
If you just want to quickly boot up a device for some temporary work, Slax is an impressive option.
It is based on Debian but it aims to be a portable and fast option that is meant to be run through USB devices or DVDs. You can download the 32-bit ISO file from their website for free or purchase a rewritable DVD/encrypted pendrive with Slax pre-installed.
Of course, this isn't meant to replace a traditional desktop operating system. But, yes, you do get 32-bit support with Debian as its base.
[Slax][13]
#### 3\. AntiX
![Image Credits: Opensourcefeed][14]
Yet another impressive Debian-based distribution, AntiX is popularly known as a systemd-free distribution that focuses on performance with a lightweight installation.
It is perfectly suitable for just about any old 32-bit system. To give you an idea, it needs just 256 MB of RAM and 2.7 GB of storage space at the very least. Not only is it easy to install, but the user experience is designed for newbies and experienced users alike.
You should get the latest version, based on Debian's latest stable branch.
[AntiX][15]
#### 4\. openSUSE
![][16]
openSUSE is an independent Linux distribution that supports 32-bit systems as well. Even though the latest regular version (Leap) does not offer 32-bit images, the rolling release edition (Tumbleweed) does provide a 32-bit image.
It will be an entirely different experience if you're new. However, I suggest you go through the [reasons why you should be using openSUSE][17].
It is mostly focused on developers and system administrators, but you can use it as an average desktop user as well. It is worth noting that openSUSE is not meant to run on vintage hardware, so you have to make sure you have at least 2 GB of RAM, 40+ GB of storage space, and a dual-core processor.
[openSUSE][18]
#### 5\. Emmabuntüs
![][19]
Emmabuntüs is an interesting distribution that aims to extend the life of hardware, with 32-bit support, in order to reduce the waste of raw materials. As a group, they're also involved in providing computers and digital technologies to schools.
It offers two different editions, one based on Ubuntu and the other based on Debian. If you want longer 32-bit support, you may want to go with the Debian edition. It may not be the best option, but with a number of pre-configured applications to make the Linux learning experience easy, plus 32-bit support, it is a decent option if you want to support their cause in the process.
[Emmabuntüs][20]
#### 6\. NixOS
![Nixos KDE Edition \(Image Credits: Distrowatch\)][21]
NixOS is yet another independent Linux distribution that supports 32-bit systems. It focuses on providing a reliable system where packages are isolated from each other.
This may not be directly geared towards average users but it is a KDE-powered usable distribution with a unique approach to package management. You can learn more about its [features][22] from its official website.
[NixOS][23]
#### 7\. Gentoo Linux
![][24]
If you're an experienced Linux user looking for a 32-bit Linux distribution, Gentoo Linux should be a great choice.
You can easily configure, compile, and install a kernel through the package manager with Gentoo Linux if you want. It is not just limited to its configurability, which it is popularly known for; you will also be able to run it without any issues on older hardware.
Even if you're not an experienced user and want to give it a try, simply read through the [installation instructions][25] and you will be in for an adventure.
[Gentoo Linux][26]
#### 8\. Devuan
![][27]
[Devuan][28] is yet another systemd-free distribution. It is technically a fork of Debian, just without systemd and encouraging [Init freedom][29].
It may not be a very popular Linux distribution for an average user but if you want a systemd-free distribution and 32-bit support, Devuan should be a good option.
[Devuan][30]
#### 9\. Void Linux
![][31]
Void Linux is an interesting distribution independently developed by volunteers. It aims to be a general purpose OS while offering a stable rolling release cycle. It features runit as the init system instead of systemd and gives you the option of several [desktop environments][32].
It has an extremely impressive minimum requirement specification: just 96 MB of RAM paired with a Pentium 4 (or equivalent) chip. Try it out!
[Void Linux][33]
#### 10\. Q4OS
![][34]
Q4OS is another Debian-based distribution that focuses on providing a minimal and fast desktop user experience. It also happens to be one of the [best lightweight Linux distributions][4] in our list. It features the [Trinity desktop][35] for its 32-bit edition, and you can find KDE Plasma support in the 64-bit version.
Similar to Void Linux, Q4OS also runs on as little as 128 MB of RAM and a 300 MHz CPU, with a 3 GB storage space requirement. That should be more than enough for any vintage hardware. So, I'd say you should definitely try it out!
To know more about it, you can also check out [our review of Q4OS][36].
[Q4OS][37]
#### 11\. MX Linux
![][38]
If you've got a slightly decent configuration (not completely vintage, but old), MX Linux would be my personal recommendation for 32-bit systems. It also happens to be one of the [best Linux distributions][2] for every type of user.
In general, MX Linux is a fantastic lightweight and customizable distribution based on Debian. You get the option to choose from KDE, XFCE or Fluxbox (which is their own desktop environment for older hardware). You can explore more about it on their official website and give it a try.
[MX Linux][39]
### Honorable Mention: Funtoo
Funtoo is a Gentoo-based, community-developed Linux distribution. It focuses on giving you the best performance with Gentoo Linux, along with some extra packages to make the experience complete for users. It is also interesting to note that the development is actually led by Gentoo Linux's creator, **Daniel Robbins**.
Of course, if you're new to Linux, you may not have the best experience here. But it does support 32-bit systems and works well across many older Intel/AMD chipsets. Explore more about it on its official website to see if you want to try it out.
[Funtoo][40]
### Wrapping Up
I focused the list on Debian-based and some independent distributions. However, if you don't mind long-term support and just want to get your hands on a 32-bit supported image, you can try any Ubuntu 18.04-based distribution (or any official flavour) as well.
At the time of writing this, they have just a few more months of software support left. Hence, I avoided mentioning them as primary options. But if you like Ubuntu 18.04-based distros or any of its flavours, you do have options like [LXLE][41], [Linux Lite][42], [Zorin Lite 15][43], and other official flavours.
Even though most modern Ubuntu-based desktop operating systems have dropped 32-bit support, you still have plenty of choices.
What would you prefer to have on your 32-bit system? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/32-bit-linux-distributions/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/what-is-linux-distribution/
[2]: https://itsfoss.com/best-linux-distributions/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/32-bit-linux.png?resize=800%2C450&ssl=1
[4]: https://itsfoss.com/lightweight-linux-beginners/
[5]: https://itsfoss.com/32-bit-64-bit-ubuntu/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/debian-screenshot.png?resize=800%2C450&ssl=1
[7]: https://wiki.debian.org/FrontPage
[8]: https://www.debian.org/releases/buster/debian-installer/
[9]: https://itsfoss.com/before-installing-debian/
[10]: https://www.debian.org/releases/buster/installmanual
[11]: https://www.debian.org/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/slax-screenshot.jpg?resize=800%2C600&ssl=1
[13]: https://www.slax.org
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/antiX-19-1.jpg?resize=800%2C500&ssl=1
[15]: https://antixlinux.com
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/opensuse-15-1.png?resize=800%2C500&ssl=1
[17]: https://itsfoss.com/why-use-opensuse/
[18]: https://www.opensuse.org/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/Emmabuntus-xfce.png?resize=800%2C500&ssl=1
[20]: https://emmabuntus.org/
[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/nixos-kde.jpg?resize=800%2C500&ssl=1
[22]: https://nixos.org/features.html
[23]: https://nixos.org/
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/gentoo-linux.png?resize=800%2C450&ssl=1
[25]: https://www.gentoo.org/get-started/
[26]: https://www.gentoo.org
[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/devuan-beowulf.jpg?resize=800%2C600&ssl=1
[28]: https://itsfoss.com/devuan-3-release/
[29]: https://www.devuan.org/os/init-freedom
[30]: https://www.devuan.org
[31]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/void-linux.jpg?resize=800%2C450&ssl=1
[32]: https://itsfoss.com/best-linux-desktop-environments/
[33]: https://voidlinux.org/
[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os8Debonaire.jpg?resize=800%2C500&ssl=1
[35]: https://en.wikipedia.org/wiki/Trinity_Desktop_Environment
[36]: https://itsfoss.com/q4os-linux-review/
[37]: https://q4os.org/index.html
[38]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde.jpg?resize=800%2C452&ssl=1
[39]: https://mxlinux.org/
[40]: https://www.funtoo.org/Welcome
[41]: https://www.lxle.net/
[42]: https://www.linuxliteos.com
[43]: https://zorinos.com/download/15/lite/32/

View File

@ -1,201 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Stratis encryption)
[#]: via: (https://fedoramagazine.org/getting-started-with-stratis-encryption/)
[#]: author: (briansmith https://fedoramagazine.org/author/briansmith/)
Getting started with Stratis encryption
======
![][1]
Stratis is described on its [official website][2] as an “_easy to use local storage management for Linux_.” See this [short video][3] for a quick demonstration of the basics. The video was recorded on a Red Hat Enterprise Linux 8 system. The concepts shown in the video also apply to Stratis in Fedora.
Stratis version 2.1 introduces support for encryption. Continue reading to learn how to get started with encryption in Stratis.
### Prerequisites
Encryption requires Stratis version 2.1 or greater. The examples in this post use a pre-release of Fedora 33. Stratis 2.1 will be available in the final release of Fedora 33.
You'll also need at least one available block device to create an encrypted pool. The examples shown below were done on a KVM virtual machine with a 5 GB virtual disk drive (_/dev/vdb_).
### Create a key in the kernel keyring
The Linux kernel keyring is used to store the encryption key. For more information on the kernel keyring, refer to the _keyrings_ manual page (_man keyrings_).  
Use the _stratis key set_ command to set up the key within the kernel keyring. You must specify where the key should be read from. To read the key from standard input, use the _\--capture-key_ option. To retrieve the key from a file, use the _\--keyfile-path \<file\>_ option. The last parameter is a key description. It will be used later when you create the encrypted Stratis pool.
For example, to create a key with the description _pool1key_, and to read the key from standard input, you would enter:
```
# stratis key set --capture-key pool1key
Enter desired key data followed by the return key:
```
The command prompts us to type the key data / passphrase, and the key is then created within the kernel keyring.  
To verify that the key was created, run _stratis key list_:
```
# stratis key list
Key Description
pool1key
```
This verifies that the _pool1key_ was created. Note that these keys are not persistent. If the host is rebooted, the key will need to be provided again before the encrypted Stratis pool can be accessed (this process is covered later).
If you have multiple encrypted pools, they can each have a separate key, or they can share the same key.
The keys can also be viewed using the following _keyctl_ commands:
```
# keyctl get_persistent @s
318044983
# keyctl show
Session Keyring
701701270 --alswrv 0 0 keyring: _ses
649111286 --alswrv 0 65534 \_ keyring: _uid.0
318044983 ---lswrv 0 65534 \_ keyring: _persistent.0
1051260141 --alswrv 0 0 \_ user: stratis-1-key-pool1key
```
### Create the encrypted Stratis pool
Now that a key has been created for Stratis, the next step is to create the encrypted Stratis pool. Encrypting a pool can only be done at pool creation. It isn't currently possible to encrypt an existing pool.
Use the _stratis pool create_ command to create a pool. Add _\--key-desc_ and the key description that you provided in the previous step (_pool1key_). This will signal to Stratis that the pool should be encrypted using the provided key. The example below creates the Stratis pool on _/dev/vdb_ and names it _pool1_. Be sure to specify an empty/available device on your system.
```
# stratis pool create --key-desc pool1key pool1 /dev/vdb
```
You can verify that the pool has been created with the _stratis pool list_ command:
```
# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 37.63 MiB / 4.95 GiB ~Ca, Cr
```
In the sample output shown above, _~Ca_ indicates that caching is disabled (the tilde negates the property). _Cr_ indicates that encryption is enabled.  Note that caching and encryption are mutually exclusive. Both features cannot be simultaneously enabled.
Next, create a filesystem. The example below demonstrates creating a filesystem named _filesystem1_, mounting it at the _/filesystem1_ mountpoint, and creating a test file in the new filesystem:
```
# stratis filesystem create pool1 filesystem1
# mkdir /filesystem1
# mount /stratis/pool1/filesystem1 /filesystem1
# cd /filesystem1
# echo "this is a test file" > testfile
```
### Access the encrypted pool after a reboot
When you reboot, you'll notice that Stratis no longer shows your encrypted pool or its block device:
```
# stratis pool list
Name Total Physical Properties
```
```
# stratis blockdev list
Pool Name Device Node Physical Size Tier
```
To access the encrypted pool, first re-create the key with the same key description and key data / passphrase that you used previously:
```
# stratis key set --capture-key pool1key
Enter desired key data followed by the return key:
```
Next, run the _stratis pool unlock_ command, and verify that you can now see the pool and its block device:
```
# stratis pool unlock
# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 583.65 MiB / 4.41 GiB ~Ca, Cr
# stratis blockdev list
Pool Name Device Node Physical Size Tier
pool1 /dev/dm-2 4.98 GiB Data
```
Next, mount the filesystem and verify that you can access the test file you created previously:
```
# mount /stratis/pool1/filesystem1 /filesystem1/
# cat /filesystem1/testfile
this is a test file
```
### Use a systemd unit file to automatically unlock a Stratis pool at boot
It is possible to automatically unlock your Stratis pool at boot without manual intervention. However, a file containing the key must be available. Storing the key in a file might be a security concern in some environments.
The systemd unit file shown below provides a simple method to unlock a Stratis pool at boot and mount the filesystem. Feedback on better/alternative methods is welcome. You can provide suggestions in the comment section at the end of this article.
Start by creating your key file with the following command. Be sure to substitute _passphrase_ with the same key data / passphrase you entered previously.
```
# echo -n passphrase > /root/pool1key
```
Make sure that the file is only readable by root:
```
# chmod 400 /root/pool1key
# chown root:root /root/pool1key
```
Create a systemd unit file at _/etc/systemd/system/stratis-filesystem1.service_ with the following content:
```
[Unit]
Description = stratis mount pool1 filesystem1 file system
After = stratisd.service
[Service]
ExecStartPre=sleep 2
ExecStartPre=stratis key set --keyfile-path /root/pool1key pool1key
ExecStartPre=stratis pool unlock
ExecStartPre=sleep 3
ExecStart=mount /stratis/pool1/filesystem1 /filesystem1
RemainAfterExit=yes
[Install]
WantedBy = multi-user.target
```
Next, enable the service so that it will run at boot:
```
# systemctl enable stratis-filesystem1.service
```
Now reboot and verify that the Stratis pool has been automatically unlocked and that its filesystem is mounted.
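If everything worked, the checks should look something like this sketch; the output values here are approximations based on the earlier examples:
```
# stratis pool list
Name                     Total Physical  Properties
pool1   4.98 GiB / 583.65 MiB / 4.41 GiB    ~Ca, Cr
# cat /filesystem1/testfile
this is a test file
```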
### Summary and conclusion
In today's environment, encryption is a must for many people and organizations. This post demonstrated how to enable encryption in Stratis 2.1.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/getting-started-with-stratis-encryption/
作者:[briansmith][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/briansmith/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/11/stratis-encryption-2-816x345.jpg
[2]: https://stratis-storage.github.io/
[3]: https://www.youtube.com/watch?v=CJu3kmY-f5o

View File

@ -1,141 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Program a simple game with Elixir)
[#]: via: (https://opensource.com/article/20/12/elixir)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Program a simple game with Elixir
======
Learn Elixir by programming a "guess the number" game and comparing the
language against ones you know.
![A die with rainbow color background][1]
When you learn a new programming language, it's good to focus on the things most programming languages have in common:
* Variables
* Expressions
* Statements
These concepts are the basis of most programming languages. Because of these similarities, once you know one programming language, you can start figuring another one out by recognizing its differences.
Another good tool for learning a new language is starting with a standard program. This allows you to focus on the language, not the program's logic. We're doing that in this article series using a "guess the number" program, in which the computer picks a number between one and 100 and asks you to guess it. The program loops until you guess the number correctly.
The "guess the number" program exercises several concepts in programming languages:
* Variables
* Input
* Output
* Conditional evaluation
* Loops
It's a great practical experiment to learn a new programming language.
### Guess the number in Elixir
The [Elixir][2] programming language is a dynamically typed functional language designed for building stable and maintainable applications. It runs on top of the same virtual machine as [Erlang][3] and shares many of its strengths—but with slightly easier syntax.
You can explore Elixir by writing a version of the "guess the number" game.
Here is my implementation:
```
defmodule Guess do
  def guess() do
     random = Enum.random(1..100)
     IO.puts "Guess a number between 1 and 100"
     Guess.guess_loop(random)
  end
  def guess_loop(num) do
    data = IO.read(:stdio, :line)
    {guess, _rest} = Integer.parse(data)
    cond do
      guess < num ->
        IO.puts "Too low!"
        guess_loop(num)
      guess > num ->
        IO.puts "Too high!"
        guess_loop(num)
      true ->
        IO.puts "That's right!"
    end
  end
end
Guess.guess()
```
To assign a value to a variable, list the variable's name followed by the `=` sign. For example, the statement `random = 0` assigns a zero value to the `random` variable.
The script starts by defining a **module**. In Elixir, only modules can have named functions in them.
The next line defines the function that will serve as the entry point, `guess()`, which:
* Calls the `Enum.random()` function to get a random integer
* Prints the game prompt
* Calls the function that will serve as the loop
The rest of the game logic is implemented in the `guess_loop()` function.
The `guess_loop()` function uses [tail recursion][4] to loop. There are several ways to do looping in Elixir, but using tail recursion is a common one. The last thing `guess_loop()` does is call _itself_.
The first line in `guess_loop()` reads the input from the user. The next line uses `parse()` to convert the input to an integer.
The `cond` statement is Elixir's version of a multi-branch statement. Unlike `if/elif` or `if/elsif` in other languages, Elixir does not treat either the first or the last branch differently.
This `cond` statement has a three-way branch: The guess can be smaller, bigger, or equal to the random number. The first two options output the inequality's direction and then tail-call `guess_loop()`, looping back to the beginning. The last option outputs `That's right`, and the function finishes.
### Sample output
Now that you've written your Elixir program, you can run it to play the "guess the number" game. Every time you run the program, Elixir will pick a different random number, and you can guess until you find the correct number:
```
$ elixir guess.exs
Guess a number between 1 and 100
50
Too high!
30
Too high!
20
Too high!
10
Too low!
15
Too high!
13
Too low!
14
That's right!
```
This "guess the number" game is a great introductory program for learning a new programming language because it exercises several common programming concepts in a pretty straightforward way. By implementing this simple game in different programming languages, you can demonstrate some core concepts of the languages and compare their details.
Do you have a favorite programming language? How would you write the "guess the number" game in it? Follow this article series to see examples of other programming languages that might interest you.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/12/elixir
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice_tabletop_board_gaming_game.jpg?itok=y93eW7HN (A die with rainbow color background)
[2]: https://elixir-lang.org/
[3]: https://www.erlang.org/
[4]: https://en.wikipedia.org/wiki/Tail_call

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (amwps290)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (MZqk)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (mengxinayan)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (scvoet)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,139 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (cooljelly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Network address translation part 2 the conntrack tool)
[#]: via: (https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/)
[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/)
Network address translation part 2 the conntrack tool
======
![][1]
This is the second article in a series about network address translation (NAT). The first article introduced [how to use the iptables/nftables packet tracing feature][2] to find the source of NAT-related connectivity problems. Part 2 introduces the “conntrack” command. conntrack allows you to inspect and modify tracked connections.
### Introduction
NAT configured via iptables or nftables builds on top of netfilter's connection tracking facility. The _conntrack_ command is used to inspect and alter the state table. It is part of the “conntrack-tools” package.
### Conntrack state table
The connection tracking subsystem keeps track of all packet flows that it has seen. Run “_sudo conntrack -L_” to see its content:
```
tcp 6 43184 ESTABLISHED src=192.168.2.5 dst=10.25.39.80 sport=5646 dport=443 src=10.25.39.80 dst=192.168.2.5 sport=443 dport=5646 [ASSURED] mark=0 use=1
tcp 6 26 SYN_SENT src=192.168.2.5 dst=192.168.2.10 sport=35684 dport=443 [UNREPLIED] src=192.168.2.10 dst=192.168.2.5 sport=443 dport=35684 mark=0 use=1
udp 17 29 src=192.168.8.1 dst=239.255.255.250 sport=48169 dport=1900 [UNREPLIED] src=239.255.255.250 dst=192.168.8.1 sport=1900 dport=48169 mark=0 use=1
```
Each line shows one connection tracking entry. You might notice that each line shows the addresses and port numbers twice, and even with inverted address and port pairs! This is because each entry is inserted into the state table twice. The first address quadruple (source and destination address and ports) is the one recorded in the original direction, i.e., what the initiator sent. The second quadruple is what conntrack expects to see when a reply from the peer is received. This solves two problems:
1. If a NAT rule matches, such as IP address masquerading, this is recorded in the reply part of the connection tracking entry and can then be automatically applied to all future packets that are part of the same flow.
2. A lookup in the state table will be successful even if its a reply packet to a flow that has any form of network or port address translation applied.
The original (first shown) quadruple stored never changes: it's what the initiator sent. NAT manipulation only alters the reply (second) quadruple because that is what the receiver will see. Changes to the first quadruple would be pointless: netfilter has no control over the initiator's state; it can only influence the packet as it is received/forwarded. When a packet does not map to an existing entry, conntrack may add a new state entry for it. In the case of UDP, this happens automatically. In the case of TCP, conntrack can be configured to only add the new entry if the TCP packet has the [SYN bit][3] set. By default, conntrack allows mid-stream pickups so as not to cause problems for flows that existed before conntrack became active.
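As a minimal sketch, that TCP behavior can be toggled with the _nf_conntrack_tcp_loose_ sysctl (1, the default, permits mid-stream pickups; 0 requires a SYN):
```
# Require a SYN packet before a new TCP conntrack entry is created;
# this disables mid-stream pickup of pre-existing flows
sudo sysctl net.netfilter.nf_conntrack_tcp_loose=0
```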
### Conntrack state table and NAT
As explained in the previous section, the reply tuple listed contains the NAT information. It's possible to filter the output to only show entries with source or destination NAT applied. This lets you see which kind of NAT transformation is active on a given flow. _“sudo conntrack -L -p tcp \--src-nat”_ might show something like this:
```
tcp 6 114 TIME_WAIT src=10.0.0.10 dst=10.8.2.12 sport=5536 dport=80 src=10.8.2.12 dst=192.168.1.2 sport=80 dport=5536 [ASSURED]
```
This entry shows a connection from 10.0.0.10:5536 to 10.8.2.12:80. But unlike the previous example, the reply direction is not just the inverted original direction: the source address is changed. The destination host (10.8.2.12) sends reply packets to 192.168.1.2 instead of 10.0.0.10. Whenever 10.0.0.10 sends another packet, the router with this entry replaces the source address with 192.168.1.2. When 10.8.2.12 sends a reply, it changes the destination back to 10.0.0.10. This source NAT is due to a [nft masquerade][4] rule:
```
inet nat postrouting meta oifname "veth0" masquerade
```
Other types of NAT rules, such as “dnat to” or “redirect to”, would be shown in a similar fashion, with the reply tuple's destination different from the original one.
### Conntrack extensions
Two useful extensions are conntrack accounting and timestamping. _“sudo sysctl net.netfilter.nf_conntrack_acct=1”_ makes _“sudo conntrack -L_” track byte and packet counters for each flow.
_“sudo sysctl net.netfilter.nf_conntrack_timestamp=1”_ records a “start timestamp” for each connection. _“sudo conntrack -L”_ then displays the seconds elapsed since the flow was first seen. Add “_\--output ktimestamp_” to see the absolute start date as well.
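Putting those together, enabling both extensions and listing flows with absolute start dates might look like this sketch:
```
# Enable per-flow accounting and start timestamps (not persistent
# across reboots; add files under /etc/sysctl.d/ to make them permanent)
sudo sysctl net.netfilter.nf_conntrack_acct=1
sudo sysctl net.netfilter.nf_conntrack_timestamp=1
# List flows, including the absolute start date of each
sudo conntrack -L --output ktimestamp
```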
### Insert and change entries
You can add entries to the state table. For example:
```
sudo conntrack -I -s 192.168.7.10 -d 10.1.1.1 --protonum 17 --timeout 120 --sport 12345 --dport 80
```
This is used by conntrackd for state replication. Entries of an active firewall are replicated to a standby system. The standby system can then take over without breaking connectivity even on established flows. Conntrack can also store metadata not related to the packet data sent on the wire, for example the conntrack mark and connection tracking labels. Change them with the “update” (-U) option:
```
sudo conntrack -U -m 42 -p tcp
```
This changes the connmark of all tcp flows to 42.
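To confirm the change, you can filter the listing by mark; this sketch assumes the “_\--mark_” filter described in the conntrack man page:
```
# Show only flows whose connmark is 42
sudo conntrack -L --mark 42
```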
### **Delete entries**
In some cases, you want to delete entries from the state table. For example, changes to NAT rules have no effect on packets belonging to flows that are already in the table. For long-lived UDP sessions, such as tunneling protocols like VXLAN, it might make sense to delete the entry so the new NAT transformation can take effect. Delete entries via _“sudo conntrack -D”_ followed by an optional list of address and port information. The following example removes the given entry from the table:
```
sudo conntrack -D -p udp --src 10.0.12.4 --dst 10.0.0.1 --sport 1234 --dport 53
```
### Conntrack error counters
Conntrack also exports statistics:
```
# sudo conntrack -S
cpu=0 found=0 invalid=130 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=10
cpu=1 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0
cpu=2 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=1
cpu=3 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0
```
Most counters will be 0. “Found” and “insert” will always be 0; they only exist for backwards compatibility. Other errors accounted for are:
  * invalid: packet does not match an existing connection and doesn't create a new connection.
  * insert_failed: packet starts a new connection, but insertion into the state table failed. This can happen, for example, when the NAT engine happens to pick an identical source address and port when masquerading.
* drop: packet starts a new connection, but no memory is available to allocate a new state entry for it.
* early_drop: conntrack table is full. In order to accept the new connection existing connections that did not see two-way communication were dropped.
  * error: an icmp(v6) error packet was received that did not match a known connection.
* search_restart: lookup interrupted by an insertion or deletion on another CPU.
* clash_resolve: Several CPUs tried to insert identical conntrack entry.
These error conditions are harmless unless they occur frequently. Some can be mitigated by tuning the conntrack sysctls for the expected workload. _net.netfilter.nf_conntrack_buckets_ and _net.netfilter.nf_conntrack_max_ are typical candidates. See the [nf_conntrack-sysctl documentation][5] for a full list.
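For example, a quick sketch of inspecting the current values and raising the maximum (the value shown is only an illustration; size it for your workload):
```
# Show the current hash table size and maximum entry count
sudo sysctl net.netfilter.nf_conntrack_buckets net.netfilter.nf_conntrack_max
# Raise the maximum number of tracked connections (example value)
sudo sysctl net.netfilter.nf_conntrack_max=262144
```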
Use “_sudo sysctl net.netfilter.nf_conntrack_log_invalid=255_” to get more information when a packet is invalid. For example, conntrack logs the following when it encounters a packet with all TCP flags cleared:
```
nf_ct_proto_6: invalid tcp flag combination SRC=10.0.2.1 DST=10.0.96.7 LEN=1040 TOS=0x00 PREC=0x00 TTL=255 ID=0 PROTO=TCP SPT=5723 DPT=443 SEQ=1 ACK=0
```
### Summary
This article gave an introduction on how to inspect the connection tracking table and the NAT information stored in tracked flows. The next part in the series will expand on the conntrack tool and the connection tracking event framework.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/
作者:[Florian Westphal][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/strlen/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/network-address-translation-part-2-816x345.jpg
[2]: https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/
[3]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure
[4]: https://wiki.nftables.org/wiki-nftables/index.php/Performing_Network_Address_Translation_(NAT)#Masquerading
[5]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/networking/nf_conntrack-sysctl.rst

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (Tracygcz)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -2,7 +2,7 @@
[#]: via: (https://opensource.com/article/21/3/android-raspberry-pi)
[#]: author: (Sudeshna Sur https://opensource.com/users/sudeshna-sur)
[#]: collector: (lujun9972)
[#]: translator: ( RiaXu)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,198 +0,0 @@
[#]: subject: (3 reasons I use the Git cherry-pick command)
[#]: via: (https://opensource.com/article/21/3/git-cherry-pick)
[#]: author: (Manaswini Das https://opensource.com/users/manaswinidas)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
3 reasons I use the Git cherry-pick command
======
Cherry-picking solves a lot of problems in Git repositories. Here are
three ways to fix your mistakes with git cherry-pick.
![Measuring and baking a cherry pie recipe][1]
Finding your way around a version control system can be tricky. It can be massively overwhelming for a newbie, but being well-versed with the terminology and the basics of a version control system like Git is one of the baby steps to start contributing to open source.
Being familiar with Git can also help you out of sticky situations in your open source journey. Git is powerful and makes you feel in control—there is not a single way in which you cannot revert to a working version.
Here is an example to help you understand the importance of cherry-picking. Suppose you have made several commits in a branch, but you realize it's the wrong branch! What do you do now? Either you repeat all your changes in the correct branch and make a fresh commit, or you merge the branch into the correct branch. Wait, the former is too tedious, and you may not want to do the latter. So, is there a way? Yes, Git's got you covered. Here is where cherry-picking comes into play. As the term suggests, you can use it to hand-pick a commit from one branch and transfer it into another branch.
There are various reasons to use cherry-picking. Here are three of them.
### Avoid redundancy of efforts
There's no need to redo the same changes in a different branch when you can just copy the same commits to the other branch. Please note that cherry-picking commits will create a fresh commit with a new hash in the other branch, so please don't be confused if you see a different commit hash.
In case you are wondering what a commit hash is and how it is generated, here is a note to help you: A commit hash is a string generated using the [SHA-1][2] algorithm. The SHA-1 algorithm takes an input and outputs a unique 40-character hash. If you are on a [POSIX][3] system, try running this in your terminal:
```
`$ echo -n "commit" | openssl sha1`
```
This outputs a unique 40-character hash, `4015b57a143aec5156fd1444a017a32137a3fd0f`. This hash represents the string `commit`.
A SHA-1 hash generated by Git when you make a commit represents much more than just a single string. It represents:
```
sha1(
    meta data
        commit message
        committer
        commit date
        author
        authoring date
    Hash of the entire tree object
)
```
This explains why you get a unique commit hash for the slightest change you make to your code. Not even a single change goes unnoticed. This is because Git has integrity.
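You can verify this sensitivity with the same `openssl` command used above: change a single character of the input and the resulting 40-character hash has nothing in common with the first one.
```
# hash the original string
$ echo -n "commit" | openssl sha1

# capitalize one letter; the output is a completely different hash
$ echo -n "Commit" | openssl sha1
```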
### Undoing/restoring lost changes
Cherry-picking can be handy when you want to restore to a working version. When multiple developers are working on the same codebase, it is very likely for changes to get lost and the latest version to move to a stale or non-working version. That's where cherry-picking commits to the working version can be a savior.
#### How does it work?
Suppose there are two branches, `feature1` and `feature2`, and you want to apply commits from `feature1` to `feature2`.
On the `feature1` branch, run a `git log` command, and copy the commit hash that you want to cherry-pick. You can see a series of commits resembling the code sample below. The alphanumeric code following "commit" is the commit hash that you need to copy. You may choose to copy the first six characters (`966cf3` in this example) for the sake of convenience:
```
commit 966cf3d08b09a2da3f2f58c0818baa37184c9778 (HEAD -> master)
Author: manaswinidas <[me@example.com][4]>
Date:   Mon Mar 8 09:20:21 2021 +1300
   add instructions
```
Then switch to `feature2` and run `git cherry-pick` on the hash you just got from the log:
```
$ git checkout feature2
$ git cherry-pick 966cf3
```
If the branch doesn't exist, use `git checkout -b feature2` to create it.
Here's a catch: You may encounter the situation below:
```
$ git cherry-pick 966cf3
On branch feature2
You are currently cherry-picking commit 966cf3d.
nothing to commit, working tree clean
The previous cherry-pick is now empty, possibly due to conflict resolution.
If you wish to commit it anyway, use:
   git commit --allow-empty
Otherwise, please use 'git reset'
```
Do not panic. Just run `git commit --allow-empty` as suggested:
```
$ git commit --allow-empty
[feature2 afb6fcb] add instructions
Date: Mon Mar 8 09:20:21 2021 +1300
```
This opens your default editor and allows you to edit the commit message. It's acceptable to save the existing message if you have nothing to add.
There you go; you did your first cherry-pick. As discussed above, if you run a `git log` on branch `feature2`, you will see a different commit hash. Here is an example:
```
commit afb6fcb87083c8f41089cad58deb97a5380cb2c2 (HEAD -> feature2)
Author: manaswinidas <[me@example.com][4]>
Date:   Mon Mar 8 09:20:21 2021 +1300
   add instructions
```
Don't be confused about the different commit hash. That just distinguishes between the commits in `feature1` and `feature2`.
### Cherry-pick multiple commits
But what if you want to cherry-pick multiple commits? You can use:
```
`git cherry-pick <commit-hash1> <commit-hash2>... <commit-hashn>`
```
Please note that you don't have to use the entire commit hash; you can use the first five or six characters.
Again, this is tedious. What if the commits you want to cherry-pick are a range of continuous commits? This approach is too much work. Don't worry; there's an easier way.
Assume that you have two branches:
* `feature1` includes commits you want to copy (from `commitA` (older) to `commitB`).
* `feature2` is the branch you want the commits to be transferred to from `feature1`.
Then:
1. Enter `git checkout feature1`.
2. Get the hashes of `commitA` and `commitB`.
3. Enter `git checkout feature2`.
4. Enter `git cherry-pick <commitA>^..<commitB>` (please note that this includes `commitA` and `commitB`).
5. Should you encounter a merge conflict, [solve it as usual][5] and then type `git cherry-pick --continue` to resume the cherry-pick process.
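For example, with hypothetical hashes `a1b2c3` for `commitA` and `f6e5d4` for `commitB`, the whole sequence looks like this:
```
$ git checkout feature1
$ git log --oneline               # note the hashes of commitA and commitB
$ git checkout feature2
$ git cherry-pick a1b2c3^..f6e5d4 # includes both commitA and commitB
```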
### Important cherry-pick options
Here are some useful options from the [Git documentation][6] that you can use with the `cherry-pick` command:
* `-e`, `--edit`: With this option, `git cherry-pick` lets you edit the commit message prior to committing.
* `-s`, `--signoff`: Add a "Signed-off-by" line at the end of the commit message. See the signoff option in git-commit(1) for more information.
  * `-S[<keyid>]`, `--gpg-sign[=<keyid>]`: GPG-sign the commit. The `keyid` argument is optional and defaults to the committer identity; if specified, it must be attached to the option without a space.
* `--ff`: If the current HEAD is the same as the parent of the cherry-picked commit, then a fast-forward to this commit will be performed.
Here are some other sequencer subcommands (apart from `--continue`):
  * `--quit`: Forget about the current operation in progress. This can be used to clear the sequencer state after a failed cherry-pick or revert.
  * `--abort`: Cancel the operation and return to the pre-sequence state.
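As a quick illustration, here is how a couple of these options look in practice, reusing the example hash from earlier:
```
# edit the message and append a Signed-off-by line while picking
$ git cherry-pick -e -s 966cf3

# a pick went wrong: give up and return to the pre-sequence state
$ git cherry-pick --abort
```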
Here are some examples of cherry-picking:
* `git cherry-pick master`: Applies the change introduced by the commit at the tip of the master branch and creates a new commit with this change
* `git cherry-pick master~4 master~2`: Applies the changes introduced by the fifth and third-last commits pointed to by master and creates two new commits with these changes
Feeling overwhelmed? You needn't remember all the commands. You can always type `git cherry-pick --help` in your terminal to look at more options or help.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/git-cherry-pick
作者:[Manaswini Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/manaswinidas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/pictures/cherry-picking-recipe-baking-cooking.jpg?itok=XVwse6hw (Measuring and baking a cherry pie recipe)
[2]: https://en.wikipedia.org/wiki/SHA-1
[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[4]: mailto:me@example.com
[5]: https://opensource.com/article/20/4/git-merge-conflict
[6]: https://git-scm.com/docs/git-cherry-pick

View File

@ -1,142 +0,0 @@
[#]: subject: (7 Git tips for managing your home directory)
[#]: via: (https://opensource.com/article/21/4/git-home)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
7 Git tips for managing your home directory
======
Here is how I set up Git to manage my home directory.
![Houses in a row][1]
I have several computers. I've got a laptop at work, a workstation at home, a Raspberry Pi (or four), a [Pocket CHIP][2], a [Chromebook running various forms of Linux][3], and so on. I used to set up my user environment on each computer by more or less following the same steps, and I often told myself that I enjoyed that each one was slightly unique. For instance, I use [Bash aliases][4] more often at work than at home, and the helper scripts I use at home might not be useful at work.
Over the years, my expectations across devices began to merge, and I'd forget that a feature I'd built up on my home machine wasn't ported over to my work machine, and so on. I needed a way to standardize my customized toolkit. The answer, to my surprise, was Git.
Git is version-tracker software. It's famously used by the biggest and smallest open source projects and even by the largest proprietary software companies. But it was designed for source code—not a home directory filled with music and video files, games, photos, and so on. I'd heard of people managing their home directory with Git, but I assumed that it was a fringe experiment done by coders, not real-life users like me.
Managing my home directory with Git has been an evolving process. I've learned and adapted along the way. Here are the things you might want to keep in mind should you decide to manage your home directory with Git.
### 1\. Text and binary locations
![home directory][5]
(Seth Kenlon, [CC BY-SA 4.0][6])
When managed by Git, your home directory becomes something of a no-man's-land for everything but configuration files. That means when you open your home directory, you should see nothing but a list of predictable directories. There shouldn't be any stray photos or LibreOffice documents, and no "I'll put this here for just a minute" files.
The reason for this is simple: when you manage your home directory with Git, everything in your home directory that's _not_ being committed becomes noise. Every time you do a `git status`, you'll have to scroll past any file that Git isn't tracking, so it's vital that you keep those files in subdirectories (which you add to your `.gitignore` file).
Many Linux distributions provide a set of default directories:
* Documents
* Downloads
* Music
* Photos
* Templates
* Videos
You can create more if you need them. For instance, I differentiate between the music I create (Music) and the music I purchase to listen to (Albums). Likewise, my Cinema directory contains movies by other people, while Videos contains video files I need for editing. In other words, my default directory structure has more granularity than the default set provided by most Linux distributions, but I think there's a benefit to that. Without a directory structure that works for you, you'll be more likely to just stash stuff in your home directory, for lack of a better place for it, so think ahead and plan out directories that work for you. You can always add more later, but it's best to start strong.
### 2\. Setting up your very best .gitignore
Once you've cleaned up your home directory, you can instantiate it as a Git repository as usual:
```
$ cd
$ git init .
```
Your Git repository contains nothing yet, so everything in your home directory is untracked. Your first job is to sift through the list of untracked files and determine what you want to remain untracked. To see untracked files:
```
$ git status
  .AndroidStudio3.2/
  .FBReader/
  .ICEauthority
  .Xauthority
  .Xdefaults
  .android/
  .arduino15/
  .ash_history
[...]
```
Depending on how long you've been using your home directory, this list may be long. The easy ones are the directories you decided on in the first step. By adding these to a hidden file called `.gitignore`, you tell Git to stop listing them as untracked files and never to track them:
```
`$ \ls -lg | grep ^d | awk '{print $8}' >> ~/.gitignore`
```
With that done, go through the remaining untracked files shown by `git status` and determine whether any other files warrant exclusion. This process helped me discover several stale old configuration files and directories, which I ended up trashing altogether, but also some that were very specific to one computer. I was fairly strict here because many configuration files do better when they're auto-generated. For instance, I never commit my KDE configuration files because many contain information like recent documents and other elements that don't exist on another machine.
I track my personalized configuration files, scripts and utilities, profile and Bash configs, and cheat sheets and other snippets of text that I refer to frequently. If the software is mostly responsible for maintaining a file, I ignore it. And when in doubt about a file, I ignore it. You can always un-ignore it later (by removing it from your `.gitignore` file).
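To make this concrete, here is a sketch of what a home-directory `.gitignore` built this way might contain; the entries are illustrative, and yours will differ:
```
# default data directories from step 1 (added by the ls one-liner above)
Documents
Downloads
Music
Photos
Templates
Videos

# examples of auto-generated files I choose not to track
.cache/
.bash_history
```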
### 3\. Get to know your data
I'm on KDE, so I use the open source scanner [Filelight][7] to get an overview of my data. Filelight gives you a chart that lets you see the size of each directory. You can navigate through each directory to see what's taking up all the space and then backtrack to investigate elsewhere. It's a fascinating view of your system, and it lets you see your files in a completely new light.
![Filelight][8]
(Seth Kenlon, [CC BY-SA 4.0][6])
Use Filelight or a similar utility to find unexpected caches of data you don't need to commit. For instance, the KDE file indexer (Baloo) generates quite a lot of data specific to its host that I definitely wouldn't want to transport to another computer.
### 4\. Don't ignore your .gitignore file
On some projects, I tell Git to ignore my `.gitignore` file because what I want to ignore is sometimes specific to my working directory, and I don't presume other developers on the same project need me to tell them what their `.gitignore` file ought to look like. Because my home directory is for my use only, I do _not_ ignore my home's `.gitignore` file. I commit it along with other important files, so it's inherited across all of my systems. And of course, all of my systems are identical from the home directory's viewpoint: they have the same set of default folders and many of the same hidden configuration files.
### 5\. Don't fear the binary
I put my system through weeks and weeks of rigorous testing, convinced that it was _never_ wise to commit binary files to Git. I tried GPG encrypted password files, I tried LibreOffice documents, JPEGs, PNGs, and more. I even had a script that unarchived LibreOffice files before adding them to Git, extracted the XML inside so I could commit just the XML, and then rebuilt the LibreOffice file so that I could work on it within LibreOffice. My theory was that committing XML would render a smaller Git repository than a ZIP file (which is all a LibreOffice document really is).
To my great surprise, I found that committing a few binary files every now and then did not substantially increase the size of my Git repository. I've worked with Git long enough to know that if I were to commit gigabytes of binary data, my repository would suffer, but the occasional binary file isn't an emergency to avoid at all costs.
Armed with this new confidence, I add font OTF and TTF files to my standard home repo, my `.face` file for GDM, and other incidental minor binary blobs. Don't overthink it, don't waste time trying to avoid it; just commit it.
### 6\. Use a private repo
Don't commit your home directory to a public Git repository, even if the host offers private accounts. If you're like me, you have SSH keys, GPG keychains, and GPG-encrypted files that ought not end up on anybody's server but your own.
I [run a local Git server][9] on a Raspberry Pi (it's easier than you think), so I can update any computer any time I'm home. I'm a remote worker, so that's usually good enough, but I can also reach the computer when traveling over my [VPN][10].
### 7\. Remember to push
The thing about Git is that it only pushes changes to your server when you tell it to. If you're a longtime Git user, this process is probably natural to you. For new users who might be accustomed to the automatic synchronization in Nextcloud or Syncthing, this may take some getting used to.
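A typical end-of-session routine might look like the following; the file names and branch are placeholders:
```
$ git status                       # review what changed
$ git add .bashrc bin/cleanup.sh   # stage only what you mean to keep
$ git commit -m 'Update prompt and cleanup script'
$ git push origin main
```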
### Git at home
Managing my common files with Git hasn't just made life more convenient across devices. Knowing that I have a full history for all my configurations and utility scripts encourages me to try out new ideas because it's always easy to roll back my changes if they turn out to be _bad_ ideas. Git has rescued me from an ill-advised umask setting in `.bashrc`, a poorly executed late-night addition to my package management script, and an it-seemed-like-a-cool-idea-at-the-time change of my [rxvt][11] color scheme—and probably a few other mistakes in my past. Try Git in your home because a home that commits together merges together.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/git-home
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
[2]: https://opensource.com/article/17/2/pocketchip-or-pi
[3]: https://opensource.com/article/21/2/chromebook-linux
[4]: https://opensource.com/article/17/5/introduction-alias-command-line-tool
[5]: https://opensource.com/sites/default/files/uploads/home-git.jpg (home directory)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://utils.kde.org/projects/filelight
[8]: https://opensource.com/sites/default/files/uploads/filelight.jpg (Filelight)
[9]: https://opensource.com/life/16/8/how-construct-your-own-git-server-part-6
[10]: https://www.redhat.com/sysadmin/run-your-own-vpn-libreswan
[11]: https://opensource.com/article/19/10/why-use-rxvt-terminal

View File

@ -1,288 +0,0 @@
[#]: subject: (Using network bound disk encryption with Stratis)
[#]: via: (https://fedoramagazine.org/network-bound-disk-encryption-with-stratis/)
[#]: author: (briansmith https://fedoramagazine.org/author/briansmith/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Using network bound disk encryption with Stratis
======
![][1]
Photo by [iMattSmart][2] on [Unsplash][3]
In an environment with many encrypted disks, unlocking them all is a difficult task. Network bound disk encryption (NBDE) helps automate the process of unlocking Stratis volumes. This is a critical requirement in large environments. Stratis version 2.1 added support for encryption, which was introduced in the article “[Getting started with Stratis encryption][4].” Stratis version 2.3 recently introduced support for Network Bound Disk Encryption (NBDE) when using encrypted Stratis pools, which is the topic of this article.
The [Stratis website][5] describes Stratis as an “_easy to use local storage management for Linux_.” The short video [“Managing Storage With Stratis”][6] gives a quick demonstration of the basics. The video was recorded on a Red Hat Enterprise Linux 8 system; however, the concepts shown in the video also apply to Stratis in Fedora Linux.
### Prerequisites
This article assumes you are familiar with Stratis and Stratis pool encryption. If you aren't familiar with these topics, refer to this [article][4] and the [Stratis overview video][6] previously mentioned.
NBDE requires Stratis 2.3 or later. The examples in this article use a pre-release version of Fedora Linux 34. The Fedora Linux 34 final release will include Stratis 2.3.
### Overview of network bound disk encryption (NBDE)
One of the main challenges of encrypting storage is having a secure method to unlock the storage again after a system reboot. In large environments, typing in the encryption passphrase manually doesn't scale well. NBDE addresses this and allows for encrypted storage to be unlocked in an automated manner.
At a high level, NBDE requires a Tang server in the environment. Client systems (using Clevis Pin) can automatically decrypt storage as long as they can establish a network connection to the Tang server. If there is no network connectivity to the Tang server, the storage would have to be decrypted manually.
The idea behind this is that the Tang server would only be available on an internal network, thus if the encrypted device is lost or stolen, it would no longer have access to the internal network to connect to the Tang server, therefore would not be automatically decrypted.
For more information on Tang and Clevis, see the man pages (man tang, man clevis), the [Tang GitHub page][7], and the [Clevis GitHub page][8].
### Setting up the Tang server
This example uses another Fedora Linux system as the Tang server with a hostname of tang-server. Start by installing the tang package:
```
dnf install tang
```
Then enable and start the tangd.socket with systemctl:
```
systemctl enable tangd.socket --now
```
Tang uses TCP port 80, so you also need to open that in the firewall:
```
firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --add-port=80/tcp
```
Finally, run _tang-show-keys_ to display the output signing key thumbprint. You'll need this later.
```
# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s
```
### Creating the encrypted Stratis Pool
The previous article on Stratis encryption goes over how to set up an encrypted Stratis pool in detail, so this article won't cover that in depth.
The first step is capturing a key that will be used to decrypt the Stratis pool. Even when using NBDE, you need to set this, as it can be used to manually unlock the pool in the event that the NBDE server is unreachable. Capture the pool1 key with the following command:
```
# stratis key set --capture-key pool1key
Enter key data followed by the return key:
```
Then create an encrypted Stratis pool (using the pool1key just created) named pool1, using the _/dev/vdb_ device:
```
# stratis pool create --key-desc pool1key pool1 /dev/vdb
```
Next, create a filesystem in this Stratis pool named filesystem1, create a mount point, mount the filesystem, and create a testfile in it:
```
# stratis filesystem create pool1 filesystem1
# mkdir /filesystem1
# mount /dev/stratis/pool1/filesystem1 /filesystem1
# cd /filesystem1
# echo "this is a test file" > testfile
```
### Binding the Stratis pool to the Tang server
At this point, we have the encrypted Stratis pool created, and also have a filesystem created in the pool. The next step is to bind your Stratis pool to the Tang server that you just set up. Do this with the _stratis pool bind nbde_ command.
When you make the Tang binding, you need to pass several parameters to the command:
* the pool name (in this example, pool1)
* the key descriptor name (in this example, pool1key)
* the Tang server name (in this example, <http://tang-server>)
Recall that on the Tang server, you previously ran _tang-show-keys_, which showed that the Tang output signing key thumbprint is _l3fZGUCmnvKQF_OA6VZF9jf8z2s_. In addition to the previous parameters, you either need to pass this thumbprint with the parameter _--thumbprint l3fZGUCmnvKQF_OA6VZF9jf8z2s_, or skip the verification of the thumbprint with the _--trust-url_ parameter.
It is more secure to use the _--thumbprint_ parameter. For example:
```
# stratis pool bind nbde pool1 pool1key http://tang-server --thumbprint l3fZGUCmnvKQF_OA6VZF9jf8z2s
```
### Unlocking the Stratis Pool with NBDE
Next, reboot the host and validate that you can unlock the Stratis pool with NBDE, without requiring the use of the key passphrase. After rebooting the host, the pool is no longer available:
```
# stratis pool list
Name    Total Physical                      Properties
```
To unlock the pool using NBDE, run the following command:
```
# stratis pool unlock clevis
```
Note that you did not need to use the key passphrase. This command could be automated to run during system boot, as sketched below.
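One possible shape for that automation is a small systemd unit. This is only a sketch: the unit name, the ordering on `stratisd.service`, and the binary path are assumptions, and newer Stratis releases may ship their own unlock units, so check your distribution first.
```
# /etc/systemd/system/stratis-nbde-unlock.service (hypothetical name)
[Unit]
Description=Unlock encrypted Stratis pools via NBDE (example)
Wants=network-online.target
After=network-online.target stratisd.service

[Service]
Type=oneshot
ExecStart=/usr/bin/stratis pool unlock clevis

[Install]
WantedBy=multi-user.target
```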
At this point, the pool is now available:
```
# stratis pool list
Name    Total Physical                      Properties
pool1   4.98 GiB / 583.65 MiB / 4.41 GiB    ~Ca, Cr
```
You can mount the filesystem and access the file that was previously created:
```
# mount /dev/stratis/pool1/filesystem1 /filesystem1/
# cat /filesystem1/testfile
this is a test file
```
### Rotating Tang server keys
Best practices recommend that you periodically rotate the Tang server keys and update the Stratis client servers to use the new Tang keys.
To generate new Tang keys, start by logging in to your Tang server and looking at the current status of the /var/db/tang directory. Then, run the _tang-show-keys_ command:
```
# ls -al /var/db/tang
total 8
drwx------. 1 tang tang 124 Mar 15 15:51 .
drwxr-xr-x. 1 root root 16 Mar 15 15:48 ..
-rw-r--r--. 1 tang tang 361 Mar 15 15:51 hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
-rw-r--r--. 1 tang tang 367 Mar 15 15:51 l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk
# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s
```
To generate new keys, run tangd-keygen and point it to the /var/db/tang directory:
```
# /usr/libexec/tangd-keygen /var/db/tang
```
If you look at the /var/db/tang directory again, you will see two new files:
```
# ls -al /var/db/tang
total 16
drwx------. 1 tang tang 248 Mar 22 10:41 .
drwxr-xr-x. 1 root root 16 Mar 15 15:48 ..
-rw-r--r--. 1 tang tang 361 Mar 15 15:51 hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
-rw-r--r--. 1 root root 354 Mar 22 10:41 iyG5HcF01zaPjaGY6L_3WaslJ_E.jwk
-rw-r--r--. 1 root root 349 Mar 22 10:41 jHxerkqARY1Ww_H_8YjQVZ5OHao.jwk
-rw-r--r--. 1 tang tang 367 Mar 15 15:51 l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk
```
And if you run _tang-show-keys_, it will show the keys being advertised by Tang:
```
# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s
iyG5HcF01zaPjaGY6L_3WaslJ_E
```
You can prevent the old key (starting with l3fZ) from being advertised by renaming the two original files to be hidden files, starting with a period. With this method, the old key will no longer be advertised; however, it will still be usable by any existing clients that haven't been updated to use the new key. Once all clients have been updated to use the new key, these old key files can be deleted.
```
# cd /var/db/tang
# mv hbjJEDXy8G8wynMPqiq8F47nJwo.jwk .hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
# mv l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk .l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk
```
At this point, if you run _tang-show-keys_ again, only the new key is being advertised by Tang:
```
# tang-show-keys
iyG5HcF01zaPjaGY6L_3WaslJ_E
```
Next, switch over to your Stratis system and update it to use the new Tang key. Stratis supports doing this while the filesystem(s) are online.
First, unbind the pool:
```
# stratis pool unbind pool1
```
Next, set the key with the original passphrase used when the encrypted pool was created:
```
# stratis key set --capture-key pool1key
Enter key data followed by the return key:
```
Finally, bind the pool to the Tang server with the updated key thumbprint:
```
# stratis pool bind nbde pool1 pool1key http://tang-server --thumbprint iyG5HcF01zaPjaGY6L_3WaslJ_E
```
The Stratis system is now configured to use the updated Tang key. Once any other client systems using the old Tang key have been updated, the two original key files that were renamed to hidden files in the /var/db/tang directory on the Tang server can be backed up and deleted.
### What if the Tang server is unavailable?
Next, shut down the Tang server to simulate it being unavailable, then reboot the Stratis system.
Again, after the reboot, the Stratis pool is not available:
```
# stratis pool list
Name    Total Physical                      Properties
```
If you try to unlock it with NBDE, this fails because the Tang server is unavailable:
```
# stratis pool unlock clevis
Execution failed:
An iterative command generated one or more errors: The operation 'unlock' on a resource of type pool failed. The following errors occurred:
Partial action "unlock" failed for pool with UUID 4d62f840f2bb4ec9ab53a44b49da3f48: Cryptsetup error: Failed with error: Error: Command failed: cmd: "clevis" "luks" "unlock" "-d" "/dev/vdb" "-n" "stratis-1-private-42142fedcb4c47cea2e2b873c08fcf63-crypt", exit reason: 1 stdout: stderr: /dev/vdb could not be opened.
```
At this point, without the Tang server being reachable, the only option to unlock the pool is to use the original key passphrase:
```
# stratis key set --capture-key pool1key
Enter key data followed by the return key:
```
You can then unlock the pool using the key:
```
# stratis pool unlock keyring
```
Next, verify the pool was successfully unlocked:
```
# stratis pool list
Name    Total Physical                      Properties
pool1   4.98 GiB / 583.65 MiB / 4.41 GiB    ~Ca, Cr
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/network-bound-disk-encryption-with-stratis/
作者:[briansmith][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/briansmith/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/stratis-nbde-816x345.jpg
[2]: https://unsplash.com/@imattsmart?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/lock?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/getting-started-with-stratis-encryption/
[5]: https://stratis-storage.github.io/
[6]: https://www.youtube.com/watch?v=CJu3kmY-f5o
[7]: https://github.com/latchset/tang
[8]: https://github.com/latchset/clevis

View File

@ -1,200 +0,0 @@
[#]: subject: (What is Git cherry-picking?)
[#]: via: (https://opensource.com/article/21/4/cherry-picking-git)
[#]: author: (Rajeev Bera https://opensource.com/users/acompiler)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
What is Git cherry-picking?
======
Learn the what, why, and how of the git cherry-pick command.
![Measuring and baking a cherry pie recipe][1]
Whenever you're working with a group of programmers on a project, whether small or large, handling changes between multiple Git branches can become difficult. Sometimes, instead of combining an entire Git branch into a different one, you want to select and move a couple of specific commits. This procedure is known as "cherry-picking."
This article will cover the what, why, and how of cherry-picking.
So let's start.
### What is cherry-pick?
With the `cherry-pick` command, Git lets you incorporate selected individual commits from any branch into your current [Git HEAD][2] branch.
When performing a `git merge` or `git rebase`, all the commits from a branch are combined. The `cherry-pick` command allows you to select individual commits for integration.
### Benefits of cherry-pick
The following situation might make it easier to comprehend the way cherry-picking functions.
Imagine you are implementing new features for your upcoming weekly sprint. When your code is ready, you will push it into the remote branch, ready for testing.
However, the customer is not delighted with all of the modifications and requests that you present only certain ones. Because the client hasn't approved all changes for the next launch, `git rebase` wouldn't create the desired results. Why? Because `git rebase` or `git merge` will incorporate every adjustment from the last sprint.
Cherry-picking is the answer! Because it focuses only on the changes added in the commit, cherry-picking brings in only the approved changes without adding other commits.
There are several other reasons to use cherry-picking:
  * It is essential for bug fixing, because a fix committed in the development branch can be applied to other branches through its commit.
  * You can avoid unnecessary conflicts by using `git cherry-pick` instead of other options that apply the changes from specific commits, such as a patch produced with `git diff`.
  * It is a useful tool if a full branch merge is impossible because of incompatible versions in the various Git branches.
### Using the cherry-pick command
In the `cherry-pick` command's simplest form, you can just use the [SHA][3] identifier for the commit you want to integrate into your current HEAD branch.
To get the commit hash, you can use the `git log` command:
```
`$ git log --oneline`
```
Once you know the commit hash, you can use the `cherry-pick` command.
The syntax is:
```
`$ git cherry-pick <commit sha>`
```
For example:
```
`$ git cherry-pick 65be1e5`
```
This will apply the specified change and commit it to your currently checked-out branch.
If you'd like to make further modifications first, you can instruct Git to apply the commit's changes to your working copy without committing them.
The syntax is:
```
`$ git cherry-pick <commit sha> --no-commit`
```
For example:
```
`$ git cherry-pick 65be1e5 --no-commit`
```
If you would like to select more than one commit simultaneously, add their commit hashes separated by a space:
```
`$ git cherry-pick hash1 hash3`
```
When cherry-picking commits, you can't use the `git pull` command because it fetches _and_ automatically merges commits from one repository into another. The `cherry-pick` command is a tool you use to specifically not do that; instead, use `git fetch`, which fetches commits but does not apply them. There's no doubt that `git pull` is convenient, but it's imprecise.
### Try it yourself
To try the process, launch a terminal and generate a sample project:
```
$ mkdir fruit.git
$ cd fruit.git
$ git init .
```
Create some data and commit it:
```
$ echo "Kiwifruit" &gt; fruit.txt
$ git add fruit.txt
$ git commit -m 'First commit'
```
Now, represent a remote developer by creating a fork of your project:
```
$ mkdir ~/fruit.fork
$ cd !$
$ echo "Strawberry" &gt;&gt; fruit.txt
$ git add fruit.txt
$ git commit -m 'Added a fruit"
```
That's a valid commit. Now, create a bad commit to represent something you wouldn't want to merge into your project:
```
$ echo "Rhubarb" &gt;&gt; fruit.txt
$ git add fruit.txt
$ git commit -m 'Added a vegetable that tastes like a fruit"
```
Return to your authoritative repo and fetch the commits from your imaginary developer:
```
$ cd ~/fruit.git
$ git remote add dev ~/fruit.fork
$ git fetch dev
remote: Counting objects: 6, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 6 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (6/6), done...
```
```
$ git log --oneline dev/master
e858ab2 Added a vegetable that tastes like a fruit
0664292 Added a fruit
b56e0f8 First commit
```
You've fetched the commits from your imaginary developer, but you haven't merged them into your repository yet. You want to accept the second commit but not the third, so use `cherry-pick`:
```
`$ git cherry-pick 0664292`
```
The second commit is now in your repository:
```
$ cat fruit.txt
Kiwifruit
Strawberry
```
Push your changes to your remote server, and you're done!
### Reasons to avoid cherry-picking
Cherry-picking is usually discouraged in the developer community. The primary reason is that it creates duplicate commits, which makes your commit history harder to follow.
If you're cherry-picking a lot of commits out of order, those commits will be recorded in your branch in that order, which might lead to undesirable results.
Cherry-picking is a powerful command that might cause problems if it's used without a proper understanding of what might occur. However, it may save your life (or at least your day job) when you mess up and make commits to the wrong branches.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/cherry-picking-git
作者:[Rajeev Bera][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/acompiler
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/pictures/cherry-picking-recipe-baking-cooking.jpg?itok=XVwse6hw (Measuring and baking a cherry pie recipe)
[2]: https://acompiler.com/git-head/
[3]: https://en.wikipedia.org/wiki/Secure_Hash_Algorithms

View File

@ -1,114 +0,0 @@
[#]: subject: (Why I love using bspwm for my Linux window manager)
[#]: via: (https://opensource.com/article/21/4/bspwm-linux)
[#]: author: (Stephen Adams https://opensource.com/users/stevehnh)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Why I love using bspwm for my Linux window manager
======
Install, configure, and start using the bspwm window manager on Fedora
Linux.
![Tall building with windows][1]
Some folks like to rearrange furniture. Other folks like to try new shoes or redecorate their bedroom on the regular. Me? I try out Linux desktops.
After drooling over some of the incredible desktop environments I've seen online, I got curious about one window manager in particular: [bspwm][2].
![bspwm desktop][3]
(Stephen Adams, [CC BY-SA 4.0][4])
I've been a fan of the [i3][5] window manager for quite a while, and I enjoy the way everything is laid out and the ease of getting started. But something about bspwm called to me. There are a few reasons I decided to try it out:
* It is _only_ a window manager.
* It is managed by a few easy-to-configure scripts.
* It supports gaps between windows by default.
The first reason—that it is simply a window manager—is probably the top thing to point out. Like i3, there are no graphical bells and whistles applied by default. You can certainly customize it to your heart's content, but _you_ will be putting in all the work to make it look like you want. That's part of its appeal to me.
Although it is available on many distributions, my examples use Fedora Linux.
### Install bspwm
Bspwm is packaged in most common distributions, so you can install it with your system's package manager. This command also installs [sxhkd][6], a daemon for the X Window System "that reacts to input events by executing commands," and [dmenu][7], a generic X Window menu:
```
`dnf install bspwm sxhkd dmenu`
```
Since bspwm is _just_ a window manager, there aren't any built-in shortcuts or keyboard commands. This is where it stands in contrast to something like i3. sxhkd makes it easier to get going. So, go ahead and configure sxhkd before you fire up the window manager for the first time:
```
systemctl start sxhkd
systemctl enable sxhkd
```
This enables sxhkd at login, but you also need a configuration with some basic functionality ready to go:
```
`curl https://raw.githubusercontent.com/baskerville/bspwm/master/examples/sxhkdrc --output ~/.config/sxhkd/sxhkdrc`
```
It's worth taking a look at this file before you get much further, as some commands that the scripts call may not exist on your system. A good example is the `super + Return` shortcut that calls `urxvt`. Change this to your preferred terminal, especially if you do not have urxvt installed:
```
#
# wm independent hotkeys
#
   
# terminal emulator
super + Return
        urxvt
   
# program launcher
super + @space
        dmenu_run
```
If you are using GDM, LightDM, or another display manager, just choose bspwm before logging in.
### Configure bspwm
Once you are logged in, you'll see a whole lot of nothing on the screen. That's not a sense of emptiness you feel. It's possibility! You are now ready to start fiddling with all the parts of a desktop environment that you have taken for granted all these years. Building from scratch is not easy, but it's very rewarding once you get the hang of it.
The most difficult thing about any window manager is getting a handle on the shortcuts. You're going to be slow to start, but in a short time, you'll be flying around your system using your keyboard alone and looking like an ultimate hacker to your friends and family.
You can tailor the system as much as you want by editing `~/.config/bspwm/bspwmrc` to add apps at launch, set up your desktops and monitors, and set rules for how your windows should behave. There are a few examples set by default to get you going. Keyboard shortcuts are all managed by the **sxhkdrc** file.
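To give you a feel for that file, here is a minimal `bspwmrc` sketch along the lines of the upstream example; the desktop names and values are just starting points:
```
#! /bin/sh

# name four desktops on the primary monitor
bspc monitor -d I II III IV

# basic look and feel
bspc config border_width 2
bspc config window_gap 12

# example rule: let GIMP windows float
bspc rule -a Gimp state=floating
```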
There are plenty more open source projects to install to really get things looking nice—like [Feh][8] for desktop backgrounds, [Polybar][9] for that all-important status bar, [Rofi][10] to really help your app launcher pop, and [Compton][11] to give you the shadows and transparency to get things nice and shiny.
Happy hacking!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/bspwm-linux
作者:[Stephen Adams][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/stevehnh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows)
[2]: https://github.com/baskerville/bspwm
[3]: https://opensource.com/sites/default/files/uploads/bspwm-desktop.png (bspwm desktop)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://i3wm.org/
[6]: https://github.com/baskerville/sxhkd
[7]: https://linux.die.net/man/1/dmenu
[8]: https://github.com/derf/feh
[9]: https://github.com/polybar/polybar
[10]: https://github.com/davatorium/rofi
[11]: https://github.com/chjj/compton

View File

@ -1,83 +0,0 @@
[#]: subject: (4 ways open source gives you a competitive edge)
[#]: via: (https://opensource.com/article/21/4/open-source-competitive-advantage)
[#]: author: (Jason Blais https://opensource.com/users/jasonblais)
[#]: collector: (lujun9972)
[#]: translator: (DCOLIVERSUN)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
4 ways open source gives you a competitive edge
======
Using open source technology can help organizations drive better
business outcomes.
![Open ethernet cords.][1]
Building a tech stack is a major decision for every organization. While picking the right tools will set your team up for success, picking the wrong solutions or platforms can have devastating effects on productivity and profitability. To succeed in today's fast-paced world, organizations must make smart investments in digital solutions that enable them to move faster and increase operational agility.
This is precisely why more and more organizations of all sizes and across all industries are embracing open source solutions. According to a recent [McKinsey][2] report, open source adoption is the biggest differentiator for top-performing organizations.
Here are four reasons why adopting open source technology can help organizations drive competitive advantage and experience better business outcomes.
### 1\. Extensibility and flexibility
Suffice it to say the world of technology moves quickly. For example, Kubernetes didn't exist before 2014, but today, it's impressively ubiquitous. According to the CNCF's [2020 Cloud Native Survey][3], 91% of teams are using Kubernetes in some form.
One of the main reasons organizations are investing in open source is because it enables them to operate with agility and rapidly integrate new technologies into their stack. That's compared to the more traditional approach, where teams would take quarters or even years to vet, implement, and adopt software—making it impossible for them to pivot with any sense of urgency.
Since open source solutions offer complete access to source code, teams can easily connect the software to the other tools they use every day.
Simply put, open source enables development teams to build the perfect tool for the task at hand instead of being forced to change how they work to fit the design of inflexible proprietary tools.
### 2\. Security and high-trust collaboration
In the age of high-profile data breaches, organizations need highly secure tools that enable them to keep sensitive data secure.
When vulnerabilities exist in proprietary solutions, they're often undiscovered until it's too late. Unfortunately for the teams using these platforms, the lack of visibility into source code means they're essentially outsourcing security to the specific vendor and hoping for the best.
Another main driver of open source adoption is that open source tools enable organizations to take control over their own security. For example, open source projects—particularly those with large communities—tend to receive more responsible vulnerability disclosures because everyone using the product can thoroughly inspect the source code.
Since the source code is freely available, such disclosures often come with detailed proposed solutions for fixing bugs. This enables dev teams to remedy issues faster, continuously strengthening the software.
In the age of remote work, it's more important than ever for distributed teams to collaborate while knowing that sensitive data stays protected. Since open source solutions allow organizations to audit security while maintaining complete control over their data, they can facilitate the high-trust collaboration needed to thrive in remote environments.
### 3\. Freedom from vendor lock-in
According to a [recent study][4], 68% of CIOs are concerned about vendor lock-in. They should be. When you're locked into a piece of technology, you're forced to live with someone else's conclusions instead of making your own.
Proprietary solutions often make it [challenging to take data with you][5] when an organization switches vendors. On the other hand, open source tools offer the freedom and flexibility needed to avoid vendor lock-in and take data wherever an organization wants to go.
### 4\. Top talent and community
As more and more companies [embrace remote work][6], the war for talent is becoming even more competitive.
In the world of software development, landing top talent starts with giving engineers access to modern tools that enable them to reach their full potential at work. Since developers increasingly [prefer open source solutions][7] to proprietary counterparts, organizations should strongly consider open source alternatives to their commercial solutions to attract the best developers on the market.
In addition to making it easier to hire and retain top talent, open source platforms also enable companies to tap into a community of contributors for advice on how to walk through problems and get the most out of the platform. Plus, members of the community also [contribute to open source projects directly][8].
### Open source offers freedom
Open source software is increasingly popular among enterprise teams—[for good reason][9]. It gives teams the flexibility needed to build the perfect tool for the job while enabling them to maintain a highly secure environment. At the same time, an open source approach allows teams to maintain control of their future, rather than being locked into one vendor's roadmap. And it also gives companies access to talented engineers and members of the open source community.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/open-source-competitive-advantage
作者:[Jason Blais][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jasonblais
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab (Open ethernet cords.)
[2]: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/developer-velocity-how-software-excellence-fuels-business-performance#
[3]: https://www.cncf.io/blog/2020/11/17/cloud-native-survey-2020-containers-in-production-jump-300-from-our-first-survey/
[4]: https://solutionsreview.com/cloud-platforms/flexera-68-percent-of-cios-worry-about-vendor-lock-in-with-public-cloud/
[5]: https://www.computerworld.com/article/3428679/mattermost-makes-case-for-open-source-as-team-messaging-market-booms.html
[6]: https://mattermost.com/blog/tips-for-working-remotely/
[7]: https://opensource.com/article/20/6/open-source-developers-survey
[8]: https://mattermost.com/blog/100-most-popular-mattermost-features-invented-and-contributed-by-our-amazing-open-source-community/
[9]: https://mattermost.com/open-source-advantage/

View File

@ -1,77 +0,0 @@
[#]: subject: (5 signs you're a groff programmer)
[#]: via: (https://opensource.com/article/21/4/groff-programmer)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
5 signs you're a groff programmer
======
Learning groff, an old-school text processor, is like learning to ride a
bicycle.
![Typewriter in the grass][1]
I first discovered Unix systems in the early 1990s, when I was an undergraduate at university. I liked it so much that I replaced the MS-DOS system on my home computer with the Linux operating system.
One thing that Linux didn't have in the early to mid-1990s was a word processor. A standard office application on other desktop operating systems, a word processor lets you edit text easily. I often used a word processor on DOS to write my papers for class. I wouldn't find a Linux-native word processor until the late 1990s. Until then, word processing was one of the rare reasons I maintained dual-boot on my first computer, so I could occasionally boot back into DOS to write papers.
Then I discovered that Linux provided kind of a word processor. GNU troff, better known as [groff][2], is a modern implementation of a classic text processing system called troff, short for "typesetter roff," which is an improved version of the nroff system. And nroff was meant to be a new implementation of the original roff (which stood for "run off," as in to "run off" a document).
With text processing, you edit text in a plain text editor, and you add formatting through macros or other processing commands. You then process that text file through a text-processing system such as groff to generate formatted output suitable for a printer. Another well-known text processing system is LaTeX, but groff was simple enough for my needs.
With a little practice, I found I could write my class papers just as easily in groff as I could using a word processor on Linux. While I don't use groff to write documents today, I still remember the macros and commands to generate printed documents with it. And if you're the same and you learned how to write with groff all those years ago, you probably recognize these five signs that you're a groff writer.
### 1\. You have a favorite macro set
You format a document in groff by writing plain text interspersed with macros. A macro in groff is a short command that starts with a single period at the beginning of a line. For example, the `.sp 2` macro command inserts two blank lines into your output. groff supports other basic macros for all kinds of formatting.
To make formatting a document easier for the writer, groff also provides different _macro sets_, collections of macros that let you format documents your own way. The first macro set I learned was the `-me` macro set. Really, the macro set is called the `e` macro set, and you specify the `e` macro set when you process a file using the `-me` option.
groff includes other macro sets, too. For example, the `-man` macro set used to be the standard macro set to format the built-in _manual_ pages on Unix systems, and the `-ms` macro set is often used to format certain other technical documents. If you learned to write with groff, you probably have a favorite macro set.
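If you have never seen `-me` input, here is a tiny sketch from memory; as I recall, `.sh` starts a numbered section heading and `.pp` starts an indented paragraph in the `-me` macro set, while `.sp` is a basic groff command:
```
.sh 1 "Why groff"
.pp
This is the first paragraph of the section.
.sp 2
.pp
This paragraph begins after two blank lines.
```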
### 2\. You want to focus on your content, not the formatting
One great feature of writing with groff is that you can focus on your _content_ and not worry too much about what it looks like. That is a handy feature for technical writers. groff is a great "distraction-free" environment for professional writers. At least, as long as you don't mind delivering your output in any of the formats that groff supports with the `-T` command-line option, including PDF, PostScript, HTML, and plain text. You can't generate a LibreOffice ODT file or Word DOC file directly from groff.
Once you get comfortable writing in groff, the macros start to _disappear_. The formatting macros become part of the background, and you focus purely on the text in front of you. I've done enough writing in groff that I don't even see the macros anymore. Maybe it's like writing programming code, and your mind just switches gears, so you think like a computer and see the code as a set of instructions. For me, writing in groff is like that; I just see my text, and my mind interprets the macros automatically into formatting.
### 3\. You like the old-school feel
Sure, it might be _easier_ to write your documents with a more typical word processor like LibreOffice Writer or even Google Docs or Microsoft Word. And for certain kinds of documents, a desktop word processor is the right fit. But if you want the "old-school" feel, it's hard to beat writing in groff.
I'll admit that I do most of my writing with LibreOffice Writer, which does an outstanding job. But when I get that itch to do it "old-school," I'll open an editor and write my document using groff.
### 4\. You like that you can use it anywhere
groff (and its cousins) are a standard package on almost any Unix system. And with groff, the macros don't change. For example, the `-me` macros should be the same from system to system. So once you've learned to use the macros on one system, you can use them on the next system.
And because groff documents are just plain text, you can use any editor you like to edit your documents for groff. I like to use GNU Emacs to edit my groff documents, but you can use GNOME Gedit, Vim, or your [favorite text editor][3]. Most editors include some kind of "mode" that will highlight the groff macros in a different color from the rest of your text to help you spot errors before processing the file.
### 5\. You wrote this article in -me
When I decided to write this article, I thought the best way would be to use groff directly. I wanted to demonstrate how flexible groff was in preparing documents. So even though you're reading this on a website, the article was originally written using groff.
I hope this has interested you in learning how to use groff to write documents. If you'd like to use more advanced functions in the `-me` macro set, refer to Eric Allman's _Writing papers with groff using -me_, which you should find on your system as **meintro.me** in groff's documentation. It's a great reference document that explains other ways to format papers using the `-me` macros.
I've also included a copy of the original draft of my article that uses the `-me` macros. Save the file to your system as **five-signs-groff.me**, and run it through groff to view it. The `-T` option sets the output type, such as `-Tps` to generate PostScript output or `-Thtml` to create an HTML file. For example:
```
groff -me -Thtml five-signs-groff.me > five-signs-groff.html
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/groff-programmer
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead.png?itok=h3fCkVmU (Typewriter in the grass)
[2]: https://en.wikipedia.org/wiki/Groff_(software)
[3]: https://opensource.com/article/21/2/open-source-text-editors

View File

@ -1,117 +0,0 @@
[#]: subject: (How to Install Steam on Fedora [Beginners Tip])
[#]: via: (https://itsfoss.com/install-steam-fedora/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to Install Steam on Fedora [Beginners Tip]
======
Steam is the best thing that could happen to Linux gamers. Thanks to Steam, you can play thousands of games on Linux.
If you are not already aware of it, Steam is the most popular PC gaming platform. In 2013, it became available for Linux. [Steam's latest Proton project][1] allows you to play games created for the Windows platform on Linux. This has enlarged the Linux gaming library manyfold.
![][2]
Steam provides a desktop client, which you can use to download or purchase games from the Steam store, install them, and play.
We have discussed [installing Steam on Ubuntu][3] in the past. In this beginner's tutorial, I am going to show you the steps for installing Steam on Fedora Linux.
### Installing Steam on Fedora
To get Steam on Fedora, you'll have to use the RPMFusion repository. [RPMFusion][4] is a series of third-party repos that contain software that Fedora chooses not to ship with its operating system. They offer both free (open source) and non-free (closed source) repos. Since Steam is in the non-free repo, you will only install that one.
I shall go over both the terminal and graphical installation methods.
#### Method 1: Install Steam via terminal
This is the easiest method because it requires the fewest steps. Just enter the following command to enable the non-free repo:
```
sudo dnf install https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
You will be asked to enter your password. You will then be asked to verify that you want to install the repo. Once you approve, the installation of the repo will be completed.
To install Steam, simply enter the following command:
```
sudo dnf install steam
```
![Install Steam via command line][5]
Enter your password and press “Y” to accept. Once installed, open Steam and play some games.
#### Method 2: Install Steam via GUI
You can [enable the third-party repository on Fedora][6] from the Software Center. Open the Software Center application and click on the hamburger menu:
![][7]
In the Software Repositories window, you will see a section at the top that says “Third Party Repositories”. Click the Install button. Enter your password when you are prompted and you are done.
![][8]
Once you have installed the RPM Fusion repository for Steam, update your system's software cache (if needed) and search for Steam in the Software Center.
![Steam in GNOME Software Center][9]
Once you locate the Steam page, click **Install**. Enter your password when asked, and you're done.
After installing Steam, start the application, enter your Steam account details or register for an account, and enjoy your games.
### Using Steam as Flatpak
Steam is also available as a Flatpak. Flatpak is installed by default on Fedora. Before we can install Steam using that method, we have to install the Flathub repo.
![Install Flathub][10]
First, open the [Flatpak site][11] in your browser. Now, click the blue button marked "Flathub repository file". The browser will ask you if you want to open the file in GNOME Software Center. Click OK. Once GNOME Software Center opens, click the install button. You will be prompted to enter your password.
If you get an error when you try to install the Flathub repo, run this command in the terminal:
```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
With the Flathub repo installed, all you need to do is search for Steam in the GNOME Software Center. Once you find it, install it, and you are ready to go.
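If you prefer the terminal, you can also install the Flatpak directly once the Flathub remote is configured. The application ID below is the one Flathub uses for Steam at the time of writing:

```
flatpak install flathub com.valvesoftware.Steam
```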
![Fedora Repo Select][12]
The Flathub version of Steam has several add-ons you can install, as well. These include a DOS compatibility tool and a couple of tools for [Vulkan][13] and Proton.
![][14]
I think this should help you with Steam on Fedora. Enjoy your games :)
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-steam-fedora/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/steam-play-proton/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/05/Steam-Store.jpg?resize=800%2C382&ssl=1
[3]: https://itsfoss.com/install-steam-ubuntu-linux/
[4]: https://rpmfusion.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/install-steam-fedora.png?resize=800%2C588&ssl=1
[6]: https://itsfoss.com/fedora-third-party-repos/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/software-meni.png?resize=800%2C672&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/fedora-third-party-repo-gui.png?resize=746%2C800&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/gnome-store-steam.jpg?resize=800%2C434&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/flatpak-install-button.jpg?resize=800%2C434&ssl=1
[11]: https://www.flatpak.org/setup/Fedora/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/fedora-repo-select.jpg?resize=800%2C434&ssl=1
[13]: https://developer.nvidia.com/vulkan
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/steam-flatpak-addons.jpg?resize=800%2C434&ssl=1

View File

@ -0,0 +1,207 @@
[#]: subject: (Scheduling tasks with cron)
[#]: via: (https://fedoramagazine.org/scheduling-tasks-with-cron/)
[#]: author: (Darshna Das https://fedoramagazine.org/author/climoiselle/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Scheduling tasks with cron
======
![][1]
Photo by [Yomex Owo][2] on [Unsplash][3]
Cron is a scheduling daemon that executes tasks at specified intervals. These tasks are called _cron_ jobs and are mostly used to automate system maintenance or administration tasks. For example, you could set a _cron_ job to automate repetitive tasks such as backing up databases or other data, updating the system with the latest security patches, checking disk space usage, sending emails, and so on. The _cron_ jobs can be scheduled to run by the minute, hour, day of the month, month, day of the week, or any combination of these.
### **Some advantages of cron**
These are a few of the advantages of using _cron_ jobs:
* You have much more control over when your job runs, i.e., you can control the minute, the hour, the day, etc., when it will execute.
* It eliminates the need to write code for the looping and logic of the task, and you can turn the job off when you no longer need it.
* Jobs do not occupy memory when they are not executing, so you save on memory allocation.
* If a job fails to execute or exits for some reason, it will run again at the next scheduled time.
### Installing the cron daemon
Luckily, Fedora Linux comes pre-configured to run important system tasks to keep the system updated. There are several utilities that can run scheduled tasks, such as _cron_, _anacron_, _at_, and _batch_. This article will focus on the _cron_ utility only. Cron is installed with the _cronie_ package, which also provides the _cron_ service.
To determine if the package is already present or not, use the rpm command:
```
$ rpm -q cronie
cronie-1.5.2-4.el8.x86_64
```
If the _cronie_ package is installed it will return the full name of the _cronie_ package. If you do not have the package present in your system it will say the package is not installed.
To install type this:
```
$ sudo dnf install cronie
```
### Running the cron daemon
A _cron_ job is executed by the _crond_ service based on information from a configuration file. Before adding a job to the configuration file, however, it is necessary to start the _crond_ service (or, in some cases, install it). _crond_ is simply the name of the cron daemon. To determine whether the _crond_ service is running, type the following command:
```
$ systemctl status crond.service
● crond.service - Command Scheduler
Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor pre>
Active: active (running) since Sat 2021-03-20 14:12:35 PDT; 1 day 21h ago
Main PID: 1110 (crond)
```
If you do not see something similar, including the line "Active: active (running) since…", you will have to start the _crond_ daemon. To run the _crond_ service in the current session, enter the following command:
```
$ sudo systemctl start crond.service
```
To configure the service to start automatically at boot time, type the following:
```
$ sudo systemctl enable crond.service
```
If, for some reason, you wish to stop the _crond_ service from running, use the _stop_ command as follows:
```
$ sudo systemctl stop crond.service
```
To restart it, simply use the _restart_ command:
```
$ sudo systemctl restart crond.service
```
### Defining a cron job
#### The cron configuration
Here is an example of the configuration details for a _cron_ job. This defines a simple _cron_ job to pull the latest changes of a _git_ master branch into a cloned repository:
```
*/59 * * * * username cd /home/username/project/design && git pull origin master
```
There are two main parts:
* The first part is "*/59 * * * *". This is the timer: `*/59` matches every minute evenly divisible by 59, so the job runs at minute 0 and minute 59 of each hour (roughly, though not exactly, every 59 minutes).
* The rest of the line is the command as it would run from the command line.
The command itself in this example has three parts:
* The job will run as the user “username”
* It will change to the directory /home/username/project/design
* The git command runs to pull the latest changes in the master branch.
#### **Timing syntax**
The timing information is the first part of the _cron_ job string, as mentioned above. This determines how often and when the cron job is going to run. It consists of 5 parts in this order:
* minute
* hour
* day of the month
* month
* day of the week
Here is a more graphic way to explain the syntax:
```
.---------------- minute (0 - 59)
| .------------- hour (0 - 23)
| | .---------- day of month (1 - 31)
| | | .------- month (1 - 12) OR jan,feb,mar,apr …
| | | | .---- day of week (0-6) (Sunday=0 or 7)
| | | | | OR sun,mon,tue,wed,thr,fri,sat
| | | | |
* * * * * user-name command-to-be-executed
```
#### Use of the **asterisk**
An asterisk (*) may be used in place of a number to represent all possible values for that position. For example, an asterisk in the minute position would make the job run every minute. The following examples may help you better understand the syntax.
This cron job will run every minute, all the time:
```
* * * * [command]
```
A slash (/) indicates a step value in minutes. The following example will run 12 times per hour, i.e., every 5 minutes:
```
*/5 * * * * [command]
```
The next example will run once a month, on the second day of the month at midnight (e.g. January 2nd 12:00am, February 2nd 12:00am, etc.):
```
0 0 2 * * [command]
```
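The fields can also be combined with ranges and steps. As a hypothetical example, the following entry runs every 15 minutes from 8 a.m. through 5:59 p.m., Monday through Friday:

```
*/15 8-17 * * 1-5 [command]
```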
#### Using crontab to create a cron job
The _cron_ daemon runs in the background and constantly checks the _/etc/crontab_ file and the _/etc/cron.*/_ and _/var/spool/cron/_ directories. Each user has a unique crontab file in _/var/spool/cron/_ .
These _cron_ files are not supposed to be edited directly. The _crontab_ command is the method you use to create, edit, install, uninstall, and list cron jobs.
The same _crontab_ command is used for creating and editing cron jobs. And what's even cooler is that you don't need to restart cron after creating new files or editing existing ones.
```
$ crontab -e
```
This opens your existing _crontab_ file or creates one if necessary. The _vi_ editor opens by default when calling _crontab -e_. Note: To edit the _crontab_ file using the Nano editor instead, you can set the `EDITOR=nano` environment variable.
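Note that a per-user crontab has one fewer field than the system-wide _/etc/crontab_ shown earlier: there is no user-name column, because jobs always run as the crontab's owner. A hypothetical per-user entry might look like this:

```
# back up the project directory at 02:00 every day
0 2 * * * tar czf /home/username/backup.tar.gz /home/username/project
```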
List all your _cron_ jobs using the option _-l_ and specify a user using the _-u_ option, if desired.
```
$ crontab -l
$ crontab -u username -l
```
Remove or erase all your _cron_ jobs using the following command:
```
$ crontab -r
```
To remove jobs for a specific user, you must run the following command as the _root user_:
```
$ crontab -r -u username
```
Thank you for reading. _cron_ jobs may seem like a tool just for system admins, but they are actually relevant to many kinds of web applications and user tasks.
#### Reference
Fedora Linux documentation for [Automated Tasks][4]
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/scheduling-tasks-with-cron/
作者:[Darshna Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/climoiselle/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/schedule_with_cron-816x345.jpg
[2]: https://unsplash.com/@yomex4life?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/clock?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://docs.fedoraproject.org/en-US/Fedora/12/html/Deployment_Guide/ch-autotasks.html

View File

@ -0,0 +1,205 @@
[#]: subject: (Send your scans to a Linux machine over your network)
[#]: via: (https://opensource.com/article/21/4/linux-scan-samba)
[#]: author: (Marc Skinner https://opensource.com/users/marc-skinner)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Send your scans to a Linux machine over your network
======
Set up a Samba share to make a scanner easily accessible by a Linux
computer over your network.
![Files in a folder][1]
The free software movement famously got started [because of a poorly designed printer][2]. Decades later, printer and scanner manufacturers continue to reinvent the wheel, ignoring established and universal protocols. As a result, every now and again, you'll stumble onto a printer or scanner that just doesn't seem to work with your operating system.
This happened to me recently with a Canon 3-in-1 scanner (the Canon Maxify MB2720). I was able to solve the scanner's problem with open source. Specifically, I set up a Samba share to make the scanner available on my network.
The [Samba project][3] is a Windows interoperability suite of programs for Linux and Unix. Although it's mostly low-level code that many users never knowingly interact with, the software makes it easy to share files over your local network, regardless of what platforms are used.
I'm using Fedora, so these instructions should work for any RPM-based Linux distribution. Minor modifications may be necessary for other distributions. Here's how I did it.
### Get the Canon tools
Download the Windows-only Canon Quick Utility Toolbox software from Canon's website. It is required because it is the only way to configure the printer's destination folder location and credentials. Once this is done, you do not need the tool again unless you want to make a change.
Before configuring the printer, you must set up a Samba share on your Linux computer or server. Install Samba with the following command:
```
$ sudo dnf -y install samba
```
Create the `/etc/samba/smb.conf` file with the following content:
```
[global]
        workgroup = WORKGROUP
        netbios name = MYSERVER
        security = user
        #CORE needed for CANON PRINTER SCAN FOLDER
        min protocol = CORE
        #NTLM AUTHv1 needed for CANON PRINTER SCAN FOLDER
        ntlm auth = yes
        passdb backend = tdbsam
        printing = cups
        printcap name = cups
        load printers = no
        cups options = raw
        hosts allow = 127. 192.168.33.
        max smbd processes = 1000
[homes]
        comment = Home Directories
        valid users = %S, %D%w%S
        browseable = No
        writable = yes
        read only = No
        inherit acls = Yes
[SCANS]
        comment = MB2720 SCANS
        path = /mnt/SCANS
        public = yes
        writable = yes
        browseable = yes
        printable = no
        force user = tux
        create mask = 770
```
In the `force user` line near the end, change the username from `tux` to your own username.
Unfortunately, the Canon printer won't work with Server Message Block ([SMB][4]) protocols higher than CORE, nor with NTLM authentication v2. For this reason, the Samba share must be configured with the oldest SMB protocol and NTLM authentication versions. This is not ideal by any means and has security implications, so I created a separate Samba server dedicated to the scanner use case. My other Samba server, which shares all home networked files, still uses SMB protocol 3 and NTLM authentication v2.
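One step to take care of first: the `[SCANS]` share above points at `/mnt/SCANS`, so that directory must exist and be writable by the forced user before the service starts. On Fedora, a sketch of that setup, including SELinux labeling (the `semanage` command assumes the `policycoreutils-python-utils` package is installed):

```
$ sudo mkdir -p /mnt/SCANS
$ sudo chown tux: /mnt/SCANS
$ sudo semanage fcontext -a -t samba_share_t "/mnt/SCANS(/.*)?"
$ sudo restorecon -Rv /mnt/SCANS
```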
Start the Samba server service and enable it for restart:
```
$ sudo systemctl start smb
$ sudo systemctl enable smb
```
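If `firewalld` is running on the server, you will also need to allow Samba traffic through it; a sketch:

```
$ sudo firewall-cmd --permanent --add-service=samba
$ sudo firewall-cmd --reload
```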
### Create a Samba user
Create your Samba user and a password for it:
```
$ sudo smbpasswd -a tux
```
Enter your password at the prompt.
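Before configuring the printer, it's worth verifying that the share and credentials work. Assuming the `samba-client` package is installed, a quick smoke test:

```
$ smbclient //localhost/SCANS -U tux -c 'ls'
```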
Assuming you want to mount your Samba scans on a Linux system, there are a few more steps.
Create a Samba client credentials file. Mine looks like this:
```
$ sudo cat /root/smb-credentials.txt
username=tux
password=mySTRONGpassword
```
Change the permissions so that it isn't world-readable:
```
$ sudo chmod 640 /root/smb-credentials.txt
```
Create a mount point and add it to `/etc/fstab`:
```
$ sudo mkdir /mnt/MB2720-SCANS
```
Add the following line into your `/etc/fstab`:
```
//192.168.33.50/SCANS  /mnt/MB2720-SCANS  cifs vers=3.0,credentials=/root/smb-credentials.txt,gid=1000,uid=1000,_netdev    0 0
```
This mounts the Samba share scans to the new mount point using [CIFS][5], forcing SMBv3, and using the username and password stored in `/root/smb-credentials.txt`. It also passes the user's group identifier (GID) and the user identifier (UID), giving you full ownership of the Linux mount. The `_netdev` option is required so that the mount point is mounted after networking is fully functional (after a reboot, for instance) because this mount requires networking to be accessed.
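To mount the share immediately rather than waiting for a reboot, you can process the new `/etc/fstab` entry by hand and confirm the result:

```
$ sudo mount /mnt/MB2720-SCANS
$ df -h /mnt/MB2720-SCANS
```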
### Configure the Canon software
Now that you have created the Samba share, configured it on the server, and configured the share to be mounted on your Linux client, you need to launch the Canon Quick Utility Toolbox to configure the printer. Because Canon doesn't release this toolbox for Linux, this step requires Windows. You can try [running it on WINE][6], but should that fail, you'll have to either borrow a Windows computer from someone or run a [Windows developer virtual machine][7] in [GNOME Boxes][8] or [VirtualBox][9].
Power on the printer, and then start the Canon Quick Utility Toolbox. It should find your printer. If it can't see your printer, you must configure the printer for either LAN or wireless networking first.
In the toolbox, click on **Destination Folder Settings**.
![Canon Quick Utility Toolbox][10]
(Marc Skinner, [CC BY-SA 4.0][11])
Enter the printer administration password—my default password was **canon**.
Click the **Add** button.
![Add destination folder][12]
Fill out the form with a Displayed Name, your Samba share location, and your Samba username and password.
I left the PIN Code blank, but if you want to require a PIN to be entered each time you scan from the printer, you can set one. This would be useful in an office where each user has their own Samba share and PIN to protect their scans.
Click **Connection Test** to validate the form data.
Click the **OK** button.
Click **Register to Printer** to save your configuration back to the printer.
![Register to Printer ][13]
(Marc Skinner, [CC BY-SA 4.0][11])
Everything is set up. Click **Exit**. You're done with Windows now, and probably the toolbox, unless you need to change something.
### Start scanning
You can now scan from the printer and select your Destination Folder from its LCD menu. Scans are saved directly to the Samba share, which you have access to from your Linux computer.
For convenience, create a symbolic link on your Linux desktop or home directory with the following command:
```
$ sudo ln -sd /mnt/MB2720-SCANS /home/tux/Desktop/MB2720-SCANS
```
That's all there is to it!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/linux-scan-samba
作者:[Marc Skinner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/marc-skinner
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/files_documents_paper_folder.png?itok=eIJWac15 (Files in a folder)
[2]: https://opensource.com/article/18/2/pivotal-moments-history-open-source
[3]: http://samba.org/
[4]: https://en.wikipedia.org/wiki/Server_Message_Block
[5]: https://searchstorage.techtarget.com/definition/Common-Internet-File-System-CIFS
[6]: https://opensource.com/article/21/2/linux-wine
[7]: https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/
[8]: https://opensource.com/article/19/5/getting-started-gnome-boxes-virtualization
[9]: https://www.virtualbox.org/
[10]: https://opensource.com/sites/default/files/uploads/canontoolbox.png (Canon Quick Utility Toolbox)
[11]: https://creativecommons.org/licenses/by-sa/4.0/
[12]: https://opensource.com/sites/default/files/add_destination_folder.png (Add destination folder)
[13]: https://opensource.com/sites/default/files/uploads/canonregistertoprinter.png (Register to Printer )

View File

@ -0,0 +1,472 @@
[#]: subject: (Make Conway's Game of Life in WebAssembly)
[#]: via: (https://opensource.com/article/21/4/game-life-simulation-webassembly)
[#]: author: (Mohammed Saud https://opensource.com/users/saud)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Make Conway's Game of Life in WebAssembly
======
WebAssembly is a good option for computationally expensive tasks due to
its predefined execution environment and memory granularity.
![Woman sitting in front of her computer][1]
Conway's [Game of Life][2] is a popular programming exercise to create a [cellular automaton][3], a system that consists of an infinite grid of cells. You don't play the game in the traditional sense; in fact, it is sometimes referred to as a game for zero players.
Once you start the Game of Life, the game plays itself to multiply and sustain "life." In the game, digital cells representing lifeforms are allowed to change states as defined by a set of rules. When the rules are applied to cells through multiple iterations, they exhibit complex behavior and interesting patterns.
The Game of Life simulation is a very good candidate for a WebAssembly implementation because of how computationally expensive it can be; every cell's state in the entire grid must be calculated for every iteration. WebAssembly excels at computationally expensive tasks due to its predefined execution environment and memory granularity, among many other features.
### Compiling to WebAssembly
Although it's possible to write WebAssembly by hand, it is very unintuitive and error-prone as complexity increases. Most importantly, it's not intended to be written that way. It would be the equivalent of manually writing [assembly language][4] instructions.
Here's a simple WebAssembly function to add two numbers:
```
(func $Add (param $0 i32) (param $1 i32) (result i32)
    local.get $0
    local.get $1
    i32.add
)
```
It is possible to compile WebAssembly modules using many existing languages, including C, C++, Rust, Go, and even interpreted languages like Lua and Python. This [list][5] is only growing.
One of the problems with using existing languages is that WebAssembly does not have much of a runtime. It does not know what it means to [free a pointer][6] or what a [closure][7] is. All these language-specific runtimes have to be included in the resulting WebAssembly binaries. Runtime size varies by language, but it has an impact on module size and execution time.
### AssemblyScript
[AssemblyScript][8] is one language that is trying to overcome some of these challenges with a different approach. AssemblyScript is designed specifically for WebAssembly, with a focus on providing low-level control, producing smaller binaries, and reducing the runtime overhead.
AssemblyScript uses a strictly typed variant of [TypeScript][9], a superset of JavaScript. Developers familiar with TypeScript do not have to go through the trouble of learning an entirely new language.
### Getting started
The AssemblyScript compiler can easily be installed through [Node.js][10]. Start by initializing a new project in an empty directory:
```
npm init
npm install --save-dev assemblyscript
```
If you don't have Node installed locally, you can play around with AssemblyScript in your browser using the nifty [WebAssembly Studio][11] application.
AssemblyScript comes with `asinit`, which should be installed when you run the installation command above. It is a helpful utility to quickly set up an AssemblyScript project with the recommended directory structure and configuration files:
```
npx asinit .
```
The newly created `assembly` directory will contain all the AssemblyScript code, including a simple example function in `assembly/index.ts`. The `package.json` file provides the `asbuild` command, which compiles the code into WebAssembly binaries.
When you run `npm run asbuild` to compile the code, it creates files inside `build`. The `.wasm` files are the generated WebAssembly modules. The `.wat` files are the modules in text format and are generally used for debugging and inspection.
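If you ever need the text format for a `.wasm` file that didn't come from `asbuild`, the WebAssembly Binary Toolkit (wabt) can generate it; a sketch, assuming wabt is installed:

```
wasm2wat build/optimized.wasm -o build/optimized.wat
```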
You have to do a little bit of work to get the binaries to run in a browser.
First, create a simple HTML file, `index.html`:
```
<html>
    <head>
        <meta charset=utf-8>
        <title>Game of life</title>
    </head>

    <body>
        <script src='./index.js'></script>
    </body>
</html>
```
Next, replace the contents of `index.js` with the code snippet below to load the WebAssembly modules:
```
const runWasm = async () => {
  const module = await WebAssembly.instantiateStreaming(fetch('./build/optimized.wasm'));
  const exports = module.instance.exports;
  console.log('Sum = ', exports.add(20, 22));
};
runWasm();
```
This fetches the binary and passes it to `WebAssembly.instantiateStreaming`, the browser API that compiles a module into a ready-to-use instance. Compilation is an asynchronous operation, so it runs inside an async function, where `await` waits for it to finish.
The `module.instance.exports` object contains all the functions exported by AssemblyScript. Use the example function in `assembly/index.ts` and log the result.
You will need a simple development server to host these files. There are a lot of options listed in this [gist][18]. I used [node-static][19]:
```
npm install -g node-static
static
```
You can view the result by pointing your browser to `localhost:8080` and opening the console.
![console output][20]
(Mohammed Saud, [CC BY-SA 4.0][21])
### Drawing to a canvas
You will be drawing all the cells onto a `<canvas>` element:
```
<body>
    <canvas id=canvas></canvas>
    ...
</body>
```
Add some CSS:
```
<head>
    ...
    <style type=text/css>
    body {
      background: #ccc;
    }
    canvas {
      display: block;
      padding: 0;
      margin: auto;
      width: 40%;
      image-rendering: pixelated;
      image-rendering: crisp-edges;
    }
    </style>
</head>
```
The `image-rendering` styles are used to prevent the canvas from smoothing and blurring out pixelated images.
You will need a canvas drawing context in `index.js`:
```
const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');
```
There are many functions in the [Canvas API][24] that you could use for drawing—but you need to draw using WebAssembly, not JavaScript.
Remember that WebAssembly does NOT have access to the browser APIs that JavaScript has, and any call that needs to be made should be interfaced through JavaScript. This also means that your WebAssembly module will run the fastest if there is as little communication with JavaScript as possible.
One method is to create [ImageData][25] (a data type for the underlying pixel data of a canvas), fill it up with the WebAssembly module's memory, and draw it on the canvas. This way, if the memory buffer is updated inside WebAssembly, it will be immediately available to the `ImageData`.
Define the pixel count of the canvas and create an `ImageData` object:
```
const WIDTH = 10, HEIGHT = 10;
const runWasm = async () => {
...
canvas.width = WIDTH;
canvas.height = HEIGHT;
const ctx = canvas.getContext('2d');
const memoryBuffer = exports.memory.buffer;
const memoryArray = new Uint8ClampedArray(memoryBuffer)
const imageData = ctx.createImageData(WIDTH, HEIGHT);
imageData.data.set(memoryArray.slice(0, WIDTH * HEIGHT * 4));
ctx.putImageData(imageData, 0, 0);
```
The memory of a WebAssembly module is provided in `exports.memory.buffer` as an [ArrayBuffer][26]. You need to use it as an array of 8-bit unsigned integers or `Uint8ClampedArray`. Now you can fill up the module's memory with some pixels. In `assembly/index.ts`, you first need to grow the available memory:
```
memory.grow(1);
```
WebAssembly does not have access to memory by default and needs to request it from the browser using the `memory.grow` function. Memory grows in chunks (pages) of 64 KiB, and the number of required chunks can be specified when calling it. You will not need more than one chunk for now.
Keep in mind that memory can be requested multiple times, whenever needed, and once acquired, memory cannot be freed or given back to the browser.
Writing to the memory:
```
store<u32>(0, 0xff101010);
```
A pixel is represented by 32 bits, with the RGBA values taking up 8 bits each. Here, RGBA is defined in reverse—ABGR—because WebAssembly is [little-endian][27].
The `store` function stores the value `0xff101010` at index `0`, taking up 32 bits. The alpha value is `0xff` so that the pixel is fully opaque.
![Byte order for a pixel's color][28]
(Mohammed Saud, [CC BY-SA 4.0][21])
Build the module again with `npm run asbuild` before refreshing the page to see your first pixel on the top-left of the canvas.
### Implementing rules
Let's review the rules. The [Game of Life Wikipedia page][29] summarizes them nicely:
1. Any live cell with fewer than two live neighbors dies, as if by underpopulation.
2. Any live cell with two or three live neighbors lives on to the next generation.
3. Any live cell with more than three live neighbors dies, as if by overpopulation.
4. Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction.
You need to iterate through all the rows, implementing these rules on each cell. You do not know the width and height of the grid, so write a little function to initialize the WebAssembly module with this information:
```
let universe_width: u32;
let universe_height: u32;
let alive_color: u32;
let dead_color: u32;
let chunk_offset: u32;
export function init(width: u32, height: u32): void {
  universe_width = width;
  universe_height = height;
  chunk_offset = width * height * 4;
  alive_color = 0xff101010;
  dead_color = 0xffefefef;
}
```
Now you can use this function in `index.js` to provide data to the module:
```
exports.init(WIDTH, HEIGHT);
```
Next, write an `update` function to iterate over all the cells, count the number of active neighbors for each, and set the current cell's state accordingly:
```
export function update(): void {
  for (let x: u32 = 0; x < universe_width; x++) {
    for (let y: u32 = 0; y < universe_height; y++) {
      const neighbours = countNeighbours(x, y);
      if (neighbours < 2) {
        // less than 2 neighbours, cell is no longer alive
        setCell(x, y, dead_color);
      } else if (neighbours == 3) {
        // cell will be alive
        setCell(x, y, alive_color);
      } else if (neighbours > 3) {
        // cell dies due to overpopulation
        setCell(x, y, dead_color);
      }
    }
  }
  copyToPrimary();
}
```
You have two copies of cell arrays, one representing the current state and the other for calculating and temporarily storing the next state. After the calculation is done, the second array is copied to the first for rendering.
The rules are fairly straightforward, but the `countNeighbours()` function looks interesting. Take a closer look:
```
function countNeighbours(x: u32, y: u32): u32 {
  let neighbours = 0;
  const max_x = universe_width - 1;
  const max_y = universe_height - 1;
  const y_above = y == 0 ? max_y : y - 1;
  const y_below = y == max_y ? 0 : y + 1;
  const x_left = x == 0 ? max_x : x - 1;
  const x_right = x == max_x ? 0 : x + 1;
  // top left
  if(getCell(x_left, y_above) == alive_color) {
    neighbours++;
  }
  // top
  if(getCell(x, y_above) == alive_color) {
    neighbours++;
  }
  // top right
  if(getCell(x_right, y_above) == alive_color) {
    neighbours++;
  }
  ...
  return neighbours;
}
```
![Coordinates of a cell's neighbors][30]
(Mohammed Saud, [CC BY-SA 4.0][21])
Every cell has eight neighbors, and you can check if each one is in the `alive_color` state. The important situation handled here is if a cell is exactly on the edge of the grid. Cellular automata are generally assumed to be on an infinite space, but since infinitely large displays haven't been invented yet, stick to wrapping at the edges. This means when a cell goes off the top, it comes back in its corresponding position on the bottom. This is commonly known as [toroidal space][31].
The `getCell` and `setCell` functions are wrappers to the `store` and `load` functions to make it easier to interact with memory using 2D coordinates:
```
@inline
function getCell(x: u32, y: u32): u32 {
  return load<u32>((x + y * universe_width) << 2);
}
@inline
function setCell(x: u32, y: u32, val: u32): void {
  store<u32>(((x + y * universe_width) << 2) + chunk_offset, val);
}
function copyToPrimary(): void {
  memory.copy(0, chunk_offset, chunk_offset);
}
```
`@inline` is an [annotation][32] that asks the compiler to replace calls to the function with the function body itself.
Call the update function on every iteration from `index.js` and render the image data from the module memory:
```
const FPS = 5;
const runWasm = async () => {
  ...
  const step = () => {
    exports.update();
 
    imageData.data.set(memoryArray.slice(0, WIDTH * HEIGHT * 4));
    ctx.putImageData(imageData, 0, 0);
 
    setTimeout(step, 1000 / FPS);
  };
  step();
```
At this point, if you compile the module and load the page, it shows nothing. The code works fine, but since you don't have any living cells initially, there are no new cells coming up.
Create a new function to randomly add cells during initialization:
```
function fillUniverse(): void {
  for (let x: u32 = 0; x < universe_width; x++) {
    for (let y: u32 = 0; y < universe_height; y++) {
      setCell(x, y, Math.random() > 0.5 ? alive_color : dead_color);
    }
  }
  copyToPrimary();
}
export function init(width: u32, height: u32): void {
  ...
  fillUniverse();
```
Since `Math.random` is used to determine the initial state of a cell, the WebAssembly module needs a seed function to derive a random number from.
AssemblyScript provides a convenient [module loader][33] that does this and a lot more, like wrapping the browser APIs for module loading and providing functions for more fine-grained memory control. You will not be using it here since it abstracts away many details that would otherwise help in learning the inner workings of WebAssembly, so pass in a seed function instead:
```
  const importObject = {
    env: {
      seed: Date.now,
      abort: () => console.log('aborting!')
    }
  };
  const module = await WebAssembly.instantiateStreaming(fetch('./build/optimized.wasm'), importObject);
```
`instantiateStreaming` can be called with an optional second parameter, an object that exposes JavaScript functions to WebAssembly modules. Here, use `Date.now` as the seed to generate random numbers.
It should now be possible to run the `fillUniverse` function and finally have life on your grid!
You can also play around with different `WIDTH`, `HEIGHT`, and `FPS` values and use different cell colors.
![Game of Life result][34]
(Mohammed Saud, [CC BY-SA 4.0][21])
### Try the game
If you use large sizes, make sure to grow the memory accordingly.
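As a worked example (my arithmetic, not from the article): each cell takes 4 bytes, and the module keeps two copies of the grid, so a 100 × 100 universe needs 2 × 100 × 100 × 4 = 80,000 bytes. One WebAssembly page is 65,536 bytes, so `memory.grow(1)` is no longer enough; you would need `memory.grow(2)`.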
Here's the [complete code][35].
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/game-life-simulation-webassembly
作者:[Mohammed Saud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/saud
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_3.png?itok=qw2A18BM (Woman sitting in front of her computer)
[2]: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
[3]: https://en.wikipedia.org/wiki/Cellular_automaton
[4]: https://en.wikipedia.org/wiki/Assembly_language
[5]: https://github.com/appcypher/awesome-wasm-langs
[6]: https://en.wikipedia.org/wiki/C_dynamic_memory_allocation
[7]: https://en.wikipedia.org/wiki/Closure_(computer_programming)
[8]: https://www.assemblyscript.org
[9]: https://www.typescriptlang.org/
[10]: https://nodejs.org/en/download/
[11]: https://webassembly.studio
[12]: http://december.com/html/4/element/html.html
[13]: http://december.com/html/4/element/head.html
[14]: http://december.com/html/4/element/meta.html
[15]: http://december.com/html/4/element/title.html
[16]: http://december.com/html/4/element/body.html
[17]: http://december.com/html/4/element/script.html
[18]: https://gist.github.com/willurd/5720255
[19]: https://www.npmjs.com/package/node-static
[20]: https://opensource.com/sites/default/files/uploads/console_log.png (console output)
[21]: https://creativecommons.org/licenses/by-sa/4.0/
[22]: http://december.com/html/4/element/canvas.html
[23]: http://december.com/html/4/element/style.html
[24]: https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API
[25]: https://developer.mozilla.org/en-US/docs/Web/API/ImageData
[26]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer
[27]: https://en.wikipedia.org/wiki/Endianness
[28]: https://opensource.com/sites/default/files/uploads/color_bits.png (Byte order for a pixel's color)
[29]: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#Rules
[30]: https://opensource.com/sites/default/files/uploads/count_neighbours.png (Coordinates of a cell's neighbors)
[31]: https://en.wikipedia.org/wiki/Torus
[32]: https://www.assemblyscript.org/peculiarities.html#annotations
[33]: https://www.assemblyscript.org/loader.html
[34]: https://opensource.com/sites/default/files/uploads/life.png (Game of Life result)
[35]: https://github.com/rottencandy/game-of-life-wasm

View File

@ -0,0 +1,148 @@
[#]: subject: (What's new with Drupal in 2021?)
[#]: via: (https://opensource.com/article/21/4/drupal-updates)
[#]: author: (Shefali Shetty https://opensource.com/users/shefalishetty)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
What's new with Drupal in 2021?
======
Its newest initiatives include decoupled menus, automated updates, and
other usability-focused updates.
![Computer screen with files or windows open][1]
The success of open source projects is largely carried by the pillars of the community and group collaborations. Without putting a stake in the ground to achieve strategic initiatives, an open source project can lose focus. Open source strategic initiatives should aim at solving impactful problems through collaboration involving the project's stakeholders.
### The why and how of Drupal's strategic initiatives
As one of the leading open source projects, [Drupal][2]'s success largely thrives on implementing its various proposed strategic initiatives. Drupal's focus on strategic initiatives and continuous innovation since Drupal 7 brought huge architectural changes in Drupal 8, 9, and beyond that offer a platform for continuous innovation on the web and an easy upgrade path for end users.
The vision for Drupal's core strategic initiatives is determined by Dries Buytaert, Drupal project lead. These initiatives are backed by community collaboration and lead to significant developments driven by forces like:
* Collaboration with the core maintainers
* Survey data and usability studies
* A vision to build a leading open source digital experience platform
* Relevancy in the market by improving editorial, developer, and customer experiences
* Validation by broader community discussions and collaborations
Once initiatives are **proposed**, they move ahead to the **planned** initiatives stage, where each initiative is nurtured with detailed plans and goals by a strong team of contributors. When an initiative passes through this stage, it moves to the **active** initiatives stage. Here's where the initiatives take structure and come alive.
Some of the most successful Drupal 8 initiatives, like Twig and BigPipe, did not follow the traditional process. However, following a thoughtfully planned process will avoid a lot of [bike-shedding][3].
### Popular past initiatives
In 2011, at DrupalCon Chicago, Dries announced that Drupal 8 would feature core initiatives that would cause big changes to Drupal's architecture. To support the transition, each initiative would have a few leads involved in decision-making and coordination with Dries. Some popular initiatives included:
* **Configuration Management Initiative (CMI):** This was the first key initiative announced at the 2011 DrupalCon. The idea was to offer site builders more powerful, flexible, and traceable configuration handling in Drupal 8 core. As planned, the Configuration Manager module is now a Drupal 8 core module that allows deploying configurations between different environments easily.
* **Web Services and Context Core Initiative:** This initiative aimed at embracing a modern web and turned Drupal into a first-class REST server with a first-class content management system (CMS) on top of it. The result? Drupal is now a competent REST server providing the ability to manage content entities through HTTP requests. This is part of why Drupal has been the leading CMS for decoupled experiences for several years.
* **Layout Initiative:** This initiative's focus was on improving and simplifying the site-building experience by non-technical users, like site builders and content authors. This initiative came alive in Drupal 8 by introducing the Layout Discovery API (a Layout plugin API) in v.8.4 and the Layout Builder module (a complete layout management solution) in v.8.5 core.
* **Media Initiative:** The Media Initiative was proposed to launch a rich, intuitive, easy-to-use, API-based media solution with extensible media functionalities in the core. This resulted in bringing in the Media API (which manages various operations on media entities) and Media Library (a rich digital asset management tool) to Drupal 8 core.
* **Drupal 9 Readiness Initiative:** The focus of this initiative was to get Drupal 9 ready by June 3, 2020, so that Drupal 7 and 8 users had at least 18 months to upgrade. Since Drupal 9 is just a cleaned-up version of the last version of Drupal 8 (8.9), the idea was to update dependencies and remove any deprecated code. And as planned, Drupal 9 was successfully released on June 3, 2020. Drupal 8-compatible modules were ported to Drupal 9 faster than any major version upgrade in Drupal's history, with more than 90% of the top 1,000 modules already ported (and many of the remaining now obsolete).
### The new strategic initiatives
Fast-forward to 2021, where everything is virtual. DrupalCon North America will witness a first-of-its-kind "Initiative Days" event added to the traditional DrupalCon content. Previously, initiatives were proposed during the [Driesnote][4] session, but this time, initiatives are more interactive and detailed. DrupalCon North America 2021 participants can learn about an initiative and participate in building components and contributing back to the project.
#### The Decoupled Menus Initiative
Dries proposed the Decoupled Menus Initiative in his keynote speech during DrupalCon Global 2020. While this initiative's broader intent is to make Drupal the best decoupled CMS, to accomplish the larger goal, the project chose to work on decoupled menus as a first step because menus are used on every project and are not easy to implement in decoupled architectures.
The goals of this initiative are to build APIs, documentation, and examples that can:
* Give JavaScript front-end developers the best way to integrate Drupal-managed menus into their front ends.
* Provide site builders and content editors with an easy-to-use experience to build and update menus independently.
This is because, without web services for decoupled menus in Drupal core, JavaScript developers are often compelled to hard-code menu items. This makes it really hard for a non-developer to edit or remove a menu item without getting a developer involved. The developer needs to make the change, build the JavaScript code, and then deploy it to production. With the Decoupled Menus Initiative, the developer can easily eliminate all these steps and many lines of code by using Drupal's HTTP APIs and using JavaScript-focused resources.
The bigger idea is to establish patterns and a roadmap that can be adapted to solve other decoupled problems. At DrupalCon 2021, on the [Decoupled Menus Initiative day][5], April 13, you can both learn about where it stands and get involved by building custom menu components and contributing them back to the project.
#### The Easy Out-Of-The-Box Initiative
During DrupalCon 2019 in Amsterdam, CMS users were asked about their perceptions of their CMS. The research found that beginners did not favor Drupal as much as intermediate- and expert-level users. However, it was the opposite for other CMS users; they seemed to like their CMS less over time.
![CMS users' preferences][6]
([Driesnote, DrupalCon Global 2020][7])
Hence, the Easy Out-Of-The-Box Initiative's goal is to make Drupal easy to use, especially for non-technical users and beginners. It is an extension of the great work that has been done for Layouts, Media, and Claro. Layout Builder's low-code design flexibility, Media's robust management of audio-visual content, and Claro's modern and accessible administrative UI combine to empower less-technical users with the power Drupal has under the hood.
This initiative bundles all three of these features into one initiative and aims to provide a delightful user experience. The ease of use can help attract new and novice users to Drupal. On April 14, DrupalCon North America's [Easy Out-Of-The-Box Initiative day][8], the initiative leads will discuss the initiative and its current progress. Learn about how you can contribute to the project by building a better editorial experience.
#### Automated Updates Initiative
The results of a Drupal survey in 2020 revealed that automated updating was the most frequently requested feature. Updating a Drupal site manually can be tedious, expensive, and time-consuming. Luckily, the initiative team has been on this task since 2019, when the first prototype for the Automated Update System was developed as a [contributed module][9]. The focus of the initiative now is to bring this feature into Drupal core. As easy as it may sound, there's a lot more work that needs to go in to:
* Ensure site readiness for a safe update
* Integrate composer
* Verify updates with package signing
* Safely apply updates in a way that can be rolled back in case of errors
In its first incarnation, the focus is on Drupal Core patch releases and security updates, but the intent is to support the contributed module ecosystem as well.
The initiative intends to make it easier for small to midsized businesses that sometimes overlook the importance of updating their Drupal site or struggle with the manual process. The [Automated Updates Initiative day][10] is happening on April 15 at DrupalCon North America. You will get an opportunity to know more about this initiative and get involved in the project.
#### Drupal 10 Readiness Initiative
With the release of Drupal 10 not too far away (as early as June 2022), the community is gearing up to welcome a more modern version of Drupal. Drupal now integrates more third-party technologies than ever. Dependencies such as Symfony, jQuery, Guzzle, Composer, CKEditor, and more have their own release cycles that Drupal needs to align with.
![CMS Release Cycles][11]
([Driesnote, DrupalCon 2020][7])
The goal of the initiative is to get Drupal 10 ready, and this involves:
* Releasing Drupal 10 on time
* Getting compatible with the latest versions of the dependencies for security
* Deprecating the dependencies, libraries, modules, and themes that are no longer needed and removing them from Drupal 10 core.
At the [Drupal 10 Readiness Initiative day][12], April 16, you can learn about the tools you'll use to update your websites and modules from Drupal 9 to Drupal 10 efficiently. There are various things you can do to help make Drupal better. Content authors will get an opportunity to peek into the new CKEditor 5, its new features, and improved editing experience.
### Learn more at DrupalCon
Drupal is celebrating its 20th year and its evolution into more relevant, easier-to-adopt open source software. Leading an evolution is close to impossible without taking up strategic initiatives. Although the initial initiatives did not focus on offering great user experiences, today, ease of use and out-of-the-box experience are Drupal's most significant goals.
Our ambition is to create software that works for everyone. At every DrupalCon, the intent is to connect with the community that fosters the same belief, learn from each other, and ultimately, build a better Drupal.
[DrupalCon North America][13], hosted by the Drupal Association, is the largest Drupal event of the year. Drupal experts, enthusiasts, and users will unite online April 12-16, 2021, share lessons learned and best practices, and collaborate on creating better, more engaging digital experiences. PHP and JavaScript developers, designers, marketers, and anyone interested in a career in open source will be able to learn, connect, and build by attending DrupalCon.
The [Drupal Association][14] is the nonprofit organization focused on accelerating Drupal, fostering the Drupal community's growth, and supporting the project's vision to create a safe, secure, and open web for everyone. DrupalCon is the primary source of funding for the Drupal Association. Your support and attendance at DrupalCon make our work possible.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/drupal-updates
作者:[Shefali Shetty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/shefalishetty
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
[2]: https://www.drupal.org/
[3]: https://en.wikipedia.org/wiki/Law_of_triviality
[4]: https://events.drupal.org/global2020/program/driesnote
[5]: https://events.drupal.org/northamerica2021/decoupled-menus-day
[6]: https://opensource.com/sites/default/files/uploads/cms_preferences.png (CMS users' preferences)
[7]: https://youtu.be/RIeRpLgI1mM
[8]: https://events.drupal.org/northamerica2021/easy-out-box-day
[9]: http://drupal.org/project/automatic_updates/
[10]: https://events.drupal.org/northamerica2021/automatic-updates-day
[11]: https://opensource.com/sites/default/files/uploads/cms_releasecycles.png (CMS Release Cycles)
[12]: https://events.drupal.org/northamerica2021/drupal-10-readiness-day
[13]: https://events.drupal.org/northamerica2021?utm_source=replyio&utm_medium=email&utm_campaign=DCNA2021-20210318
[14]: https://www.drupal.org/association

View File

@ -0,0 +1,123 @@
[#]: subject: (3 essential Linux cheat sheets for productivity)
[#]: via: (https://opensource.com/article/21/4/linux-cheat-sheets)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
3 essential Linux cheat sheets for productivity
======
Download cheat sheets for sed, grep, and parted to integrate new
processes into your work.
![Hand putting a Linux file folder into a drawer][1]
Linux is famous for its commands. This is partially because nearly everything that Linux does can also be invoked from a terminal, but it's also that Linux as an operating system is highly modular. Its tools are designed to produce fairly specific results, and when you know a lot about a few commands, you can combine them in interesting ways for useful output. Learning Linux is equal parts learning commands and learning how to string those commands together in interesting combinations.
With so many Linux commands to learn, though, taking the first step can seem daunting. What command should you learn first? Which commands should you learn well, and which commands require only a passing familiarity? I've thought about these questions a lot, and I'm not convinced there's a universal answer. The "basic" commands are probably the same for anyone:
* `ls`
* `cd`
* `mv`
These amount to being able to navigate your Linux file system.
Beyond the basics, though, the "default" commands vary from industry to industry. Sysadmins need tools for [system introspection and monitoring][2]. Artists need tools for [media conversion][3] and [graphic processing][4]. Home users might want tools for [PDF processing][5], or [calendaring][6], or [document conversion][7]. The list goes on and on.
However, some Linux commands stand out as being particularly important—either because they're common low-level tools that everyone needs on occasion or they're all-purpose tools that anyone might find useful most of the time.
Here are three to add to your list.
### Sed
**Purpose:** The `sed` command is a good, all-purpose tool that any Linux user can benefit from knowing. On the surface, it's just a terminal-based "find and replace." That makes it great for quick and easy corrections across multiple documents. The `sed` command has saved me hours (or possibly cumulative days) of opening individual files, searching and replacing a word, saving the file, and closing the file. It alone justifies my investment in learning the Linux terminal. Once you get to know `sed` well, you're likely to discover a whole world of potential editing tricks that make your life easier.
**Strength:** The command's strength is in repetition. If you have just one file to edit, it's easy to open it and do a "find and replace" in a traditional [text editor][8]. However, when you're faced with five or 50 files, a good `sed` command (maybe combined with [GNU Parallel][9] for extra speed) can reclaim hours of your day.
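As a sketch of that repetition at work, this hypothetical one-liner replaces every occurrence of `foo` with `bar`, in place, across all the `.txt` files in a directory:

```
$ sed -i 's/foo/bar/g' *.txt
```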
**Weakness:** You have to balance the time you expect to spend making a change with how long it may take you to construct the right `sed` command. Simple edits with the common `sed 's/foo/bar/g'` syntax are almost always worth the trivial amount of time it takes to type the command, but complex `sed` commands that utilize a hold space and any of the `ed` style subcommands can take serious concentration combined with several rounds of trial and error. It can be, as it turns out, better to do some edits the new-fashioned way.
**Cheat:** Download our [sed cheat sheet][10] for quick reference to its single-letter subcommands and an overview of its syntax.
### Grep
**Purpose:** The `grep` command takes its name from its admittedly clunky description: global regular expression print. In other words, `grep` prints to the terminal any matching pattern it finds in files (or other forms of input). That makes it a great search tool, especially adept at scrubbing through vast amounts of text.
You might use it to find URLs:
```
$ grep --only-matching \
http\:\/\/.* example.txt
```
You could use it to find a specific config option:
```
$ grep --line-number \
foo= example.ini
2:foo=true
```
And of course, you can combine it with other commands:
```
$ grep foo= example.ini | cut -d= -f2
true
```
**Strength:** The `grep` command is a straightforward search command. If you've read the few examples above, then you've essentially learned the command. For even more flexibility, you can use its extended regular expression syntax.
**Weakness:** The problem with `grep` is also one of its strengths: It's just a search function. Once you've found what you're looking for, you might be faced with the larger question of what to do with it. Sometimes the answer is as easy as redirecting the output to a file, which becomes your filtered list of results. However, more complex use cases mean further processing with any number of commands like [awk][11], [curl][12] (incidentally, [we have a cheat sheet for curl][13], too), or any of the thousands of other options you have on a modern computer.
**Cheat:** Download our [grep cheat sheet][14] for a quick reference to its many options and regex syntax.
### Parted
**Purpose:** GNU `parted` isn't a daily-use command for most people, but it is one of the most powerful tools for hard-drive manipulation. The frustrating thing about hard drives is that you spend years ignoring them until you get a new one and have to set it up for your computer. It's only then that you remember that you have no idea how to best format your drive. That's when familiarity with `parted` can be useful. GNU `parted` can create disk labels and create, back up, and rescue partitions. In addition, it can provide you with lots of information about a drive and its layout and generally prepare a drive for a filesystem.
**Strength:** The reason I love `parted` over `fdisk` and similar tools is for its combination of an easy interactive mode and its fully noninteractive option. Regardless of how you choose to use `parted`, its commands follow a consistent syntax, and its help menus are well-written and informative. Better still, the command itself is _smart_. When partitioning a drive, you can specify sizes in anything from sectors to percentages, and `parted` does its best to figure out the finer points of partition table placement.
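As a rough illustration of the noninteractive mode, the following sketch labels a blank disk and carves out a single partition; `/dev/sdX` is a placeholder, so verify the device name before running anything like this:

```
# Write a new GPT disk label to the (placeholder) device
$ sudo parted --script /dev/sdX mklabel gpt

# Create one partition spanning from 1MiB to 100% of the disk
$ sudo parted --script /dev/sdX mkpart primary ext4 1MiB 100%
```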
**Weakness:** It took me a long while to learn GNU `parted` after switching to Linux because, for a very long time, I didn't have a good understanding of how drives actually work. GNU `parted` and most terminal-based drive utilities assume you know what a partition is, that drives have sectors and need disk labels and partition tables that initially lack filesystems, and so on. There's a steep learning curve—not to the command so much as to the foundations of hard-drive technology, and GNU `parted` doesn't do much to bridge the potential gap. It's arguably not the command's job to step you through the process because there are [graphical applications][15] for that, but a workflow-focused option for GNU `parted` could be an interesting addition to the utility.
**Cheat:** Download our [parted cheat sheet][16] for a quick reference to its many subcommands and options.
### Learn more
These are some of my favorite commands, but the list is naturally biased to how I use my computer. I do a lot of shell scripting, so I make heavy use of `grep` to find configuration options, I use `sed` for text editing, and I use `parted` because when I'm working on multimedia projects, there are usually a lot of hard drives involved. You either already have, or you'll soon develop, your own workflows with your own favorite (or at least _frequent_) commands.
When I'm integrating new processes into my daily work, I create or download a cheat sheet (like the ones linked above), and then I practice. We all learn in our own way, though, so find what works best for you, and learn a new essential command. The more you learn about your most frequent commands, the more you can make them work harder for you.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/linux-cheat-sheets
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/life/16/2/open-source-tools-system-monitoring
[3]: https://opensource.com/article/17/6/ffmpeg-convert-media-file-formats
[4]: https://opensource.com/article/17/8/imagemagick
[5]: https://opensource.com/article/20/8/reduce-pdf
[6]: https://opensource.com/article/19/4/calendar-git
[7]: https://opensource.com/article/20/5/pandoc-cheat-sheet
[8]: https://opensource.com/article/21/2/open-source-text-editors
[9]: https://opensource.com/article/18/5/gnu-parallel
[10]: https://opensource.com/downloads/sed-cheat-sheet
[11]: https://opensource.com/article/20/9/awk-ebook
[12]: https://www.redhat.com/sysadmin/social-media-curl
[13]: https://opensource.com/article/20/5/curl-cheat-sheet
[14]: https://opensource.com/downloads/grep-cheat-sheet
[15]: https://opensource.com/article/18/11/partition-format-drive-linux#gui
[16]: https://opensource.com/downloads/parted-cheat-sheet

View File

@@ -0,0 +1,188 @@
[#]: subject: (4 tips for context switching in Git)
[#]: via: (https://opensource.com/article/21/4/context-switching-git)
[#]: author: (Olaf Alders https://opensource.com/users/oalders)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
4 tips for context switching in Git
======
Compare the pros and cons of four options to switch branches while
working in Git.
![Computer screen with files or windows open][1]
Anyone who spends a lot of time working with Git will eventually need to do some form of context switching. Sometimes this adds very little overhead to your workflow, but other times, it can be a real pain.
Let's discuss the pros and cons of some common strategies for dealing with context switching using this example problem:
> Imagine you are working in a branch called `feature-X`. You have just discovered you need to solve an unrelated problem. This cannot be done in `feature-X`. You will need to do this work in a new branch, `feature-Y`.
### Solution #1: stash + branch
Probably the most common workflow to tackle this issue looks something like this:
1. Halt work on the branch `feature-X`
2. `git stash`
3. `git checkout -b feature-Y origin/main`
4. Hack, hack, hack…
5. `git checkout feature-X` or `git switch -`
6. `git stash pop`
7. Resume work on `feature-X`
**Pros:** The nice thing about this approach is that this is a fairly easy workflow for simple changes. It can work quite well, especially for small repositories.
**Cons:** When using this workflow, you can have only one workspace at a time. Also, depending on the state of your repository, working with the stash can be non-trivial.
### Solution #2: WIP commit + branch
A variation on this solution looks quite similar, but it uses a WIP (Work in Progress) commit rather than the stash. When you're ready to switch back, rather than popping the stash, `git reset HEAD~1` unrolls your WIP commit, and you're free to continue, much as you did in the earlier scenario but without touching the stash.
1. Halt work on the branch `feature-X`
2. `git add -u` (adds only modified and deleted files)
3. `git commit -m "WIP"`
4. `git checkout -b feature-Y origin/master`
5. Hack, hack, hack…
6. `git checkout feature-X` or `git switch -`
7. `git reset HEAD~1`
**Pros:** This is an easy workflow for simple changes and also good for small repositories. You don't have to work with the stash.
**Cons:** You can have only one workspace at any time. Also, WIP commits can sneak into your final product if you or your code reviewer are not vigilant.
When using this workflow, you _never_ want to add a `--hard` to `git reset`. If you do this accidentally, you should be able to restore your commit using `git reflog`, but it's less heart-stopping to avoid this scenario entirely.
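If it does happen, a recovery sketch looks roughly like this (the exact reflog entry to reset to depends on your history):

```
# Inspect the reflog to find the state just before the accidental reset
$ git reflog

# Move the branch back to that state, e.g., one reflog step earlier
$ git reset --hard HEAD@{1}
```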
### Solution #3: new repository clone
In this solution, rather than creating a new branch, you make a new clone of the repository for each new feature branch.
**Pros:** You can work in multiple workspaces simultaneously. You don't need `git stash` or even WIP commits.
**Cons:** Depending on the size of your repository, this can use a lot of disk space. (Shallow clones can help with this scenario, but they may not always be a good fit.) Additionally, your repository clones will be agnostic about each other. Since they can't track each other, you must track where your clones live. If you need git hooks, you will need to set them up for each new clone.
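For illustration, a shallow clone for a hypothetical `feature-Y` workspace might look like this:

```
# Fetch only the latest commit to keep the extra clone small
$ git clone --depth 1 https://github.com/oalders/http-browserdetect.git feature-Y-clone
```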
### Solution #4: git worktree
To use this solution, you may need to learn about `git worktree add`. Don't feel bad if you're not familiar with worktrees in Git. Many people get by for years in blissful ignorance of this concept.
#### What is a worktree?
Think of a worktree as the files in the repository that belong to a project. Essentially, it's a kind of workspace. You may not realize that you're already using worktrees. When using Git, you get your first worktree for free.
```
$ mkdir /tmp/foo && cd /tmp/foo
$ git init
$ git worktree list
/tmp/foo  0000000 [master]
```
As you can see, the worktree exists even before the first commit. Now, add a new worktree to an existing project.
#### Add a worktree
To add a new worktree, you need to provide:
1. A location on disk
2. A branch name
3. Something to branch from
```
$ git clone https://github.com/oalders/http-browserdetect.git
$ cd http-browserdetect/
$ git worktree list
/Users/olaf/http-browserdetect  90772ae [master]
$ git worktree add ~/trees/oalders/feature-X -b oalders/feature-X origin/master
$ git worktree add ~/trees/oalders/feature-Y -b oalders/feature-Y e9df3c555e96b3f1
$ git worktree list
/Users/olaf/http-browserdetect       90772ae [master]
/Users/olaf/trees/oalders/feature-X  90772ae [oalders/feature-X]
/Users/olaf/trees/oalders/feature-Y  e9df3c5 [oalders/feature-Y]
```
Like with most other Git commands, you need to be inside a repository when issuing this command. Once the worktrees are created, you have isolated work environments. The Git repository tracks where the worktrees live on disk. If Git hooks are already set up in the parent repository, they will also be available in the worktrees.
Don't overlook that each worktree uses only a fraction of the parent repository's disk space. In this case, the worktree requires about one-third of the original's disk space. This can scale very well. Once your repositories are measured in the gigabytes, you'll really come to appreciate these savings.
```
$ du -sh /Users/olaf/http-browserdetect
2.9M
$ du -sh /Users/olaf/trees/oalders/feature-X
1.0M
```
**Pros:** You can work in multiple workspaces simultaneously. You don't need the stash. Git tracks all of your worktrees. You don't need to set up Git hooks. This is also faster than `git clone` and can save on network traffic since you can do this in airplane mode. You also get more efficient disk space use without needing to resort to a shallow clone.
**Cons:** This is yet another thing to remember. However, if you can get into the habit of using this feature, it can reward you handsomely.
### A few more tips
When you need to clean up your worktrees, you have a couple of options. The preferable way is to let Git remove the worktree:
```
git worktree remove /Users/olaf/trees/oalders/feature-X
```
If you prefer a scorched-earth approach, `rm -rf` is also your friend:
```
rm -rf /Users/olaf/trees/oalders/feature-X
```
However, if you do this, you may want to clean up any remaining files with `git worktree prune`. Or you can skip the `prune` now, and this will happen on its own at some point in the future via `git gc`.
### Notable notes
If you're ready to get started with `git worktree`, here are a few things to keep in mind.
* Removing a worktree does not delete the branch.
* You can switch branches within a worktree.
* You cannot simultaneously check out the same branch in multiple worktrees.
* Like many other Git commands, `git worktree` needs to be run from inside a repository.
* You can have many worktrees at once.
* Create your worktrees from the same local checkout, or they will be agnostic about each other.
### git rev-parse
One final note: When using `git worktree`, your concept of where the root of the repository lives may depend on context. Fortunately, `git rev-parse` allows you to distinguish between the two.
* To find the parent repository's root: `git rev-parse --git-common-dir`
* To find the root of the repository you're in: `git rev-parse --show-toplevel`
### Choose the best method for your needs
As in many things, TIMTOWDI (there's more than one way to do it). What's important is that you find a workflow that suits your needs. What your needs are may vary depending on the problem at hand. Maybe you'll occasionally find yourself reaching for `git worktree` as a handy tool in your revision-control toolbelt.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/context-switching-git
作者:[Olaf Alders][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/oalders
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)

View File

@@ -0,0 +1,145 @@
[#]: subject: (Fedora Workstation 34 feature focus: Btrfs transparent compression)
[#]: via: (https://fedoramagazine.org/fedora-workstation-34-feature-focus-btrfs-transparent-compression/)
[#]: author: (nickavem https://fedoramagazine.org/author/nickavem/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Fedora Workstation 34 feature focus: Btrfs transparent compression
======
![][1]
Photo by [Patrick Lindenberg][2] on [Unsplash][3]
The release of Fedora 34 grows ever closer, and with that, some fun new features! A [previous feature focus][4] talked about some changes coming to GNOME version 40. This article is going to go a little further under the hood and talk about data compression and _transparent compression_ in _btrfs_. A term like that may sound scary at first, but less technical users need not be wary. This change is simple to grasp, and will help many Workstation users in several key areas.
### What is transparent compression exactly?
Transparent compression is complex, but at its core it is simple to understand: it makes files take up less space. It is somewhat like a compressed tar file or ZIP file. Transparent compression will dynamically optimize your file system's bits and bytes into a smaller, reversible format. This has many benefits that will be discussed in more depth later on; however, at its core, it makes files smaller. This may leave most computer users with a question: “I can't just read ZIP files. You need to decompress them. Am I going to need to constantly decompress things when I access them?”. That is where the “transparent” part of this whole concept comes in.
Transparent compression makes a file smaller, but the final version is indistinguishable from the original by the human viewer. If you have ever worked with audio, video, or photography, you have probably heard of the terms “lossless” and “lossy”. Think of transparent compression like a lossless compressed PNG file. You want the image to look exactly like the original. Small enough to be streamed over the web but still readable by a human. Transparent compression works similarly. Your file system will look and behave the same way as before (no ZIP files everywhere, no major speed reductions). Everything will look, feel, and behave the same. However, in the background it is taking up much less disk space. This is because Btrfs will dynamically compress and decompress your files for you. It's “transparent” because, even with all this going on, you won't notice the difference.
> You can learn more about transparent compression at <https://btrfs.wiki.kernel.org/index.php/Compression>
### Transparent compression sounds cool, but also too good to be true…
I would be lying if I said transparent compression doesn't slow some things down. It adds extra CPU cycles to pretty much any I/O operation, and can affect performance in certain scenarios. However, Fedora is using the extremely efficient _zstd:1_ algorithm. [Several tests][5] show that relative to the other benefits, the downsides are negligible (as I mentioned in my explanation before). Better disk space usage is the greatest benefit. You may also receive a reduction in write amplification (which can increase the lifespan of SSDs) and enhanced read/write performance.
Btrfs transparent compression is extremely performant, and chances are you won't even notice a difference when it's there.
### I'm convinced! How do I get this working?
In fresh installations of Fedora 34 and its [corresponding beta][6], it should be enabled by default. However, it is also straightforward to enable before and after an upgrade from Fedora 33. You can even enable it in Fedora 33, if you aren't ready to upgrade just yet.
1. (Optional) Backup any important data. The process itself is completely safe, but human error isn't.
2. To truly begin, you will be editing your _[fstab][7]_. This file tells your computer what file systems exist where, and how they should be handled. You need to be cautious here, but only a few small changes will be made, so don't be intimidated. On an installation of Fedora 33 with the default Btrfs layout, the _/etc/fstab_ file will probably look something like this:
```
$ $EDITOR /etc/fstab

UUID=1234 /                       btrfs   subvol=root     0 0
UUID=1234 /boot                   ext4    defaults        1 2
UUID=1234 /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=1234 /home                   btrfs   subvol=home     0 0
```
NOTE: _While this guide builds around the standard partition layout, you may be an advanced enough user to partition things yourself. If so, you are probably also advanced enough to extrapolate the info given here onto your existing system. However, comments on this article are always open for any questions._
Disregard the _/boot_ and _/boot/efi_ directories, as they aren't ([currently][8]) compressed. You will be adding the argument _compress=zstd:1_. This tells the computer that it should transparently compress any newly written files if they benefit from it. Add this option in the fourth column, which currently only contains the _subvol_ option for both /home and /:
```
UUID=1234 /                       btrfs   subvol=root,compress=zstd:1     0 0
UUID=1234 /boot                   ext4    defaults        1 2
UUID=1234 /boot/efi               vfat    umask=0077,shortname=winnt 0 2
UUID=1234 /home                   btrfs   subvol=home,compress=zstd:1     0 0
```
Once complete, simply save and exit (on the default _nano_ editor this is CTRL-X, SHIFT-Y, then ENTER).
3. Now that fstab has been edited, tell the computer to read it again. After this, it will make all the changes required:
```
$ sudo mount -o remount / /home/
```
Once you've done this, you officially have transparent compression enabled for all newly written files!
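As an optional sanity check (not part of the original steps), you can confirm that the new mount options took effect:

```
# The output for / should now include compress=zstd:1
$ findmnt -no OPTIONS /
```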
### Recommended: Retroactively compress old files
Chances are you already have many files on your computer. While the previous configuration _will_ compress all newly written files, those old files will not benefit. I recommend taking this next (but optional) step to receive the full benefits of transparent compression.
1. (Optional) Clean out any data you don't need (empty trash, etc.). This will speed things up. However, it's not required.
2. Time to compress your data. One simple command can do this, but its form is dependent on your system. Fedora Workstation (and any other desktop spins using the DNF package manager) should use:
```
$ sudo btrfs filesystem defrag -czstd -rv / /home/
```
Fedora Silverblue users should use:
```
$ sudo btrfs filesystem defrag -czstd -rv / /var/home/
```
Silverblue users may take note of the immutability of some parts of the file system as described [here][9] as well as this [Bugzilla entry][10].
NOTE: _You may receive several warnings that say something like “Cannot compress permission denied.”. This is because some files, on Silverblue systems especially, cannot easily be modified by the user. This is a tiny subset of files. They will most likely compress on their own, in time, as the system upgrades._
Compression can take anywhere from a few minutes to an hour depending on how much data you have. Luckily, since all new writes are compressed, you can continue working while this process completes. Just remember it may partially slow down your work at hand and/or the process itself depending on your hardware.
Once this command completes you are officially fully compressed!
### How much file space is used, how big are my files
Due to the nature of transparent compression, utilities like _du_ will only report exact, uncompressed file space usage. This is not the actual space the files take up on the disk. The [_compsize_][11] utility is the best way to see how much space your files are actually taking up on disk. An example of a _compsize_ command is:
```
$ sudo compsize -x / /home/
```
This example provides exact information on how the two locations, / and /home/, are currently, transparently, compressed. If not installed, this utility is available in the Fedora Linux repository.
### Conclusion
Transparent compression is a small but powerful change. It should benefit everyone, from developers to sysadmins, from writers to artists, from hobbyists to gamers. It is one among many of the changes in Fedora 34. These changes will allow us to take further advantage of our hardware, and of the powerful Fedora Linux operating system. I have only just touched the surface here. I encourage those of you with interest to begin at the [Fedora Project Wiki][12] and [Btrfs Wiki][13] to learn more!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-workstation-34-feature-focus-btrfs-transparent-compression/
作者:[nickavem][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/nickavem/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/btrfs_compression-1-816x345.jpg
[2]: https://unsplash.com/@heapdump?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/hdd-compare?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/fedora-34-feature-focus-updated-activities-overview/
[5]: https://fedoraproject.org/wiki/Changes/BtrfsTransparentCompression#Simple_Analysis_of_btrfs_zstd_compression_level
[6]: https://fedoramagazine.org/announcing-fedora-34-beta/
[7]: https://en.wikipedia.org/wiki/Fstab
[8]: https://fedoraproject.org/wiki/Changes/BtrfsTransparentCompression#Q:_Will_.2Fboot_be_compressed.3F
[9]: https://docs.fedoraproject.org/en-US/fedora-silverblue/technical-information/#filesystem-layout
[10]: https://bugzilla.redhat.com/show_bug.cgi?id=1943850
[11]: https://github.com/kilobyte/compsize
[12]: https://fedoraproject.org/wiki/Changes/BtrfsTransparentCompression
[13]: https://btrfs.wiki.kernel.org/index.php/Compression

View File

@@ -0,0 +1,306 @@
[#]: subject: (Using Web Assembly Written in Rust on the Server-Side)
[#]: via: (https://www.linux.com/news/using-web-assembly-written-in-rust-on-the-server-side/)
[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/using-web-assembly-written-in-rust-on-the-server-side/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Using Web Assembly Written in Rust on the Server-Side
======
_By Bob Reselman_
_This article was originally published at [TheNewStack][1]_
WebAssembly allows you to write code in a low-level programming language such as Rust, that gets compiled into a transportable binary. That binary can then be run on the client-side in the WebAssembly virtual machine that is [standard in todays web browsers][2]. Or, the binary can be used on the server-side, as a component consumed by another programming framework — such as Node.js or [Deno][3].
WebAssembly combines the efficiency inherent in low-level code programming with the ease of component transportability typically found in Linux containers. The result is a development paradigm specifically geared toward doing computationally intensive work at scale — for example, artificial intelligence and complex machine learning tasks.
As Solomon Hykes, the creator of Docker, [tweeted][4] on March 27, 2019: “If WASM+WASI existed in 2008, we wouldn't have needed to have created Docker. That's how important it is. WebAssembly on the server is the future of computing.”
WebAssembly is a compelling approach to software development. However, in order to get a true appreciation for the technology, you need to see it in action.
In this article, I am going to show you how to program a WebAssembly binary in Rust and use it in a TypeScript-powered web server running under Deno. I'll show you how to install Rust and prep the runtime environment. We'll compile the source code into a Rust binary. Then, once the binary is created, I'll demonstrate how to run it on the server-side under [Deno][3]. Deno is a TypeScript-based programming framework that was started by Ryan Dahl, the creator of Node.js.
### Understanding the Demonstration Project
The demonstration project that accompanies this article is called Wise Sayings. The project stores a collection of “wise sayings” in a text file named wisesayings.txt. Each line in the text file is a wise saying, for example, “_A friend in need is a friend indeed._”
The Rust code publishes a single function, get_wise_saying(). That function gets a random line from the text file, wisesayings.txt, and returns the random line to the caller. (See Figure 1, below)
<https://cdn.thenewstack.io/media/2021/03/50e3e12f-image1.png>
Figure 1: The demonstration project compiles data in a text file directly into the WebAssembly binary
Both the code and text file are compiled into a single WebAssembly binary file, named wisesayings.wasm. Then another layer of processing is performed to make the WebAssembly binary consumable by the Deno web server code. The Deno code calls the function get_wise_sayings() in the WebAssembly binary, to produce a random wise saying. (See Figure 2.)
<https://cdn.thenewstack.io/media/2021/03/ed3a0b83-image2.png>
Figure 2: WebAssembly binaries can be consumed by a server-side programming framework such as Deno.
_You get the source code for the Wise Sayings demonstration project used in this article [on GitHub][5]. All the steps described in this article are listed on the repositorys main [Readme][6] document._
### Prepping the Development Environment
The first thing we need to do to get the code up and running is to make sure that Rust is installed in the development environment. The following steps describe the process.
**Step 1:** Make sure Rust is installed on your machine by typing:

```
rustc --version
```
You'll get output similar to the following:

```
rustc 1.50.0 (cb75ad5db 2021-02-10)
```
If the call to `rustc --version` fails, you don't have Rust installed. Follow the instructions below and **make sure you do all the tasks presented by the given installation method**.
To install Rust on Linux/macOS, run the following:

```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
… or, to install it on Windows, download and run rustup-init.exe, which you can find at this URL: <https://static.rust-lang.org/rustup/dist/i686-pc-windows-gnu/rustup-init.exe>.
**Step 2:** Modify your system's PATH:

```
export PATH="$HOME/.cargo/bin:$PATH"
```
**Step 3:** If you're working in a Linux environment, do the following steps to install the required additional Linux components:

```
sudo apt-get update -y
sudo apt-get install -y libssl-dev
apt install pkg-config
```
**Developer's Note:** _The optimal development environment in which to run this code is one that uses the Linux operating system._
**Step 4:** Get the CLI tool that you'll use for generating the TypeScript/JavaScript adapter files. These adapter files (a.k.a. shims) do the work of exposing the function get_wise_saying() in the WebAssembly binary to the Deno web server that will be hosting the binary. Execute the following command at the command line to install the tool, [wasm-bindgen-cli][7]:

```
cargo install wasm-bindgen-cli
```
The development environment now has Rust installed, along with the necessary ancillary libraries. Now we need to get the Wise Saying source code.
### Working with the Project Files
The Wise Saying source code is hosted in a GitHub repository. Take the following steps to clone the source code from GitHub onto the local development environment.
**Step 1:** Execute the following command to clone the Wise Sayings source code from GitHub:

```
git clone https://github.com/reselbob/wisesayingswasm.git
```
**Step 2:** Go to the working directory:

```
cd wisesayingswasm/
```
Listing 1, below, lists the files that make up the source code cloned from the GitHub repository.

```
 1  .
 2  ├── Cargo.toml
 3  ├── cheatsheet.txt
 4  ├── LICENSE
 5  ├── lldbconfig
 6  ├── package-lock.json
 7  ├── README.md
 8  ├── server
 9  │   ├── main.ts
10  │   └── package-lock.json
11  └── src
12      ├── fortunes.txt
13      ├── lib.rs
14      └── main.rs
```
_Listing 1: The files for the source code for the Wise Sayings demonstration project hosted in the GitHub repository_
Let's take a moment to describe the source code files listed above in Listing 1. The particular files of interest with regard to creating the WebAssembly binary are the files in the directory named src (at Line 11) and the file Cargo.toml (at Line 2).
Let's discuss Cargo.toml first. The content of Cargo.toml is shown in Listing 2, below.
```
 1  [package]
 2  name = "wise-sayings-wasm"
 3  version = "0.1.0"
 4  authors = ["Bob Reselman <bob@CogArtTech.com>"]
 5  edition = "2018"
 6  
 7  [dependencies]
 8  rand = "0.8.3"
 9  getrandom = { version = "0.2", features = ["js"] }
10  wasm-bindgen = "0.2.70"
11  
12  [lib]
13  name = "wisesayings"
14  crate-type = ["cdylib", "lib"]
```
_Listing 2: The content of Cargo.toml for the demonstration project Wise Sayings_
Cargo.toml is the [manifest file][8] that describes various aspects of the Rust project under development. The Cargo.toml file for the Wise Sayings project is organized into three sections: package, dependencies, and lib. The section names are defined in the Cargo manifest specification, which you can read [here][8].
#### Understanding the Package Section of Cargo.toml
The package section indicates the name of the package (wise-sayings-wasm), the developer-assigned version (0.1.0), the authors (Bob Reselman <[bob@CogArtTech.com][9]>), and the edition of Rust (2018) that is used to program the binary.
#### Understanding the Dependencies Section of Cargo.toml
The dependencies section lists the dependencies that the WebAssembly project needs to do its work. As you can see in Listing 2, above at Line 8, the Cargo.toml lists the rand library as a dependency. The rand library provides the capability to generate a random number which is used to get a random line of wise saying text from the file, wisesayings.txt.
The reference to getrandom at Line 9 in Listing 2 above indicates that the WebAssembly binary's [getrandom][10] is running under JavaScript and that the [JavaScript interface should be used][11]. This condition is very particular to running a WebAssembly binary under JavaScript. The long and short of it is that if the line `getrandom = { version = "0.2", features = ["js"] }` is not included in the Cargo.toml, the WebAssembly binary will not be able to create a random number.
The entry at Line 10 declares the [wasm-bindgen][12] library as a dependency. The wasm-bindgen library provides the capability for wasm modules to talk to JavaScript and JavaScript to talk to wasm modules.
#### Understanding the Lib Section of Cargo.toml
The entry [crate-type = \["cdylib", "lib"\]][13] at Line 14 in the lib section of the Cargo.toml file tells the Rust compiler to create a wasm binary without a start function. Typically when cdylib is indicated, the compiler will create a [dynamic library][14] with the extension .dll in Windows, .so in Linux, or .dylib in MacOS. In this case, because the deployment unit is a WebAssembly binary, the compiler will create a file with the extension .wasm. The name of the wasm file will be wisesayings.wasm, as indicated at Line 13 above in Listing 2.
The important thing to understand about Cargo.toml is that it provides both the design and runtime information needed to get your Rust code up and running. If the Cargo.toml file is not present, the Rust compiler doesn't know what to do and the build will fail.
### Understanding the Core Function, get_wise_saying()
The actual work of getting a random line that contains a wise saying from the text file is done by the function get_wise_saying(). The code for get_wise_saying() is in the Rust library file, ./src/lib.rs. The Rust code is shown below in Listing 3.
```
 1  use rand::seq::IteratorRandom;
 2  use wasm_bindgen::prelude::*;
 3  
 4  #[wasm_bindgen]
 5  pub fn get_wise_saying() -> String {
 6      let str = include_str!("fortunes.txt");
 7      let mut lines = str.lines();
 8  
 9      let line = lines
10          .choose(&mut rand::thread_rng())
11          .expect("File had no lines");
12      return line.to_string();
13  }
```
_Listing 3: The function file, lib.rs contains the function, get_wise_saying()._
The important things to know about the source are that it's tagged at Line 4 with the attribute #[wasm_bindgen], which lets the Rust compiler know that the source code is targeted as a WebAssembly binary. The code publishes one function, get_wise_saying(), at Line 5. The way the wise sayings text file is loaded into memory is to use the [Rust macro][15], [include_str!][16]. This macro does the work of getting the file from disk and loading the data into memory. The macro loads the file as a string, and the function str.lines() separates the lines within the string into an array. (Line 7.)
The rand::thread_rng() call at Line 10 returns a number that is used as an index by the .choose() function at Line 10. The result of it all is an array of characters (a string) that reflects the wise saying returned by the function.
### Creating the WebAssembly Binary
Let's move on to compiling the code into a WebAssembly binary.

**Step 1:** Compile the source code into a WebAssembly binary, as shown below:

```
cargo build --lib --target wasm32-unknown-unknown
```
WHERE
**cargo build** is the command and subcommand to invoke the Rust compiler using the settings in the Cargo.toml file.
**--lib** is the option indicating that you're going to build a library against the source code in the ./lib directory.
**--target wasm32-unknown-unknown** indicates that Rust will use the wasm32-unknown-unknown compiler target and will store the build artifacts, as well as the WebAssembly binary, into directories within the target directory, **wasm32-unknown-unknown**.
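One aside not covered in the original steps: if the build fails because the wasm32 target is not installed locally, adding it through rustup should resolve the error:

```
rustup target add wasm32-unknown-unknown
```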
#### Understanding the Rust Target Triple Naming Convention
Rust has a naming convention for targets. The term used for the convention is a _target triple_. A target triple uses the following format: ARCH-VENDOR-SYS-ABI.
**WHERE**
**ARCH** describes the intended target architecture, for example wasm32 for WebAssembly, or i686 for current-generation Intel chips.
**VENDOR** describes the vendor publishing the target; for example, Apple or Nvidia.
**SYS** describes the operating system; for example, Windows or Linux.
**ABI** describes how the process starts up; for example, eabi is used for bare metal, while gnu is used for glibc.
Thus, the name i686-unknown-linux-gnu means that the Rust binary is targeted to an i686 architecture, the vendor is defined as unknown, the targeted operating system is Linux, and ABI is gnu.
In the case of wasm32-unknown-unknown, the target is WebAssembly, the operating system is unknown, and the ABI is unknown. The informal inference of the name is “it's a WebAssembly binary.”
There is a standard set of built-in targets defined by Rust that can be found [here][17].
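To see which targets your local toolchain knows about, `rustc` can print the list; filtering it for WebAssembly entries is a quick sanity check:

```
rustc --print target-list | grep wasm
```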
If you find the naming convention to be confusing because there are optional fields and sometimes there are four sections to the name, while other times there will be three sections, you are not alone.
### Deploying the Binary Server-Side Using Deno
After we build the base WebAssembly binary, we need to create the adapter (a.k.a. shim) files and a special version of the WebAssembly binary, all of which can be run from within JavaScript. We'll create these artifacts using the [wasm-bindgen][18] tool.
**Step 1:** We create these new artifacts using the command shown below:

```
wasm-bindgen --target deno ./target/wasm32-unknown-unknown/debug/wisesayings.wasm --out-dir ./server
```
WHERE
**wasm-bindgen** is the command for creating the adapter files and the special WebAssembly binary.
**--target deno ./target/wasm32-unknown-unknown/debug/wisesayings.wasm** is the option that indicates the adapter files will be targeted for Deno. Also, the option denotes the location of the original WebAssembly wasm binary that is the basis for the artifact generation process.
**--out-dir ./server** is the option that declares the location where the created adapter files will be stored on disk; in this case, **./server**.
The result of running wasm-bindgen is the server directory shown in Listing 4 below.
```
.
├── Cargo.toml
├── cheatsheet.txt
├── LICENSE
├── lldbconfig
├── package-lock.json
├── README.md
├── server
│   ├── main.ts
│   ├── package-lock.json
│   ├── wisesayings_bg.wasm
│   ├── wisesayings_bg.wasm.d.ts
│   ├── wisesayings.d.ts
│   └── wisesayings.js
└── src
    ├── fortunes.txt
    ├── lib.rs
    └── main.rs
```
_Listing 4: The server directory contains the results of running wasm-bindgen_
Notice that the contents of the server directory, shown above in Listing 4, now has some added JavaScript (js) and TypeScript (ts) files. Also, the server directory has the special version of the WebAssembly binary, named wisesayings_bg.wasm. This version of the WebAssembly binary is a stripped-down version of the wasm file originally created by the initial compilation, done when invoking cargo build earlier. You can think of this new wasm file as a JavaScript-friendly version of the original WebAssembly binary. The suffix, _bg, is an abbreviation for bindgen.
### Running the Deno Server
Once all the artifacts for running WebAssembly have been generated into the server directory, we're ready to invoke the Deno web server. Listing 5 below shows the content of main.ts, which is the source code for the Deno web server.
```
 1  import { serve } from "https://deno.land/std@0.86.0/http/server.ts";
 2  import { get_wise_saying } from "./wisesayings.js";
 3  
 4  const env = Deno.env.toObject();
 5  
 6  let port = 4040;
 7  
 8  if (env.WISESAYING_PORT) {
 9    port = Number(env.WISESAYING_PORT);
10  }
11  
12  const server = serve({ hostname: "0.0.0.0", port });
13  console.log(`HTTP webserver running at ${new Date()}. Access it at: http://localhost:${port}/`);
14  
15  for await (const request of server) {
16    const saying = get_wise_saying();
17    request.respond({ status: 200, body: saying });
18  }
```
_Listing 5: main.ts is the Deno webserver code that uses the WebAssembly binary_
You'll notice that the WebAssembly wasm binary is not imported directly. This is because the work of representing the WebAssembly binary is done by the JavaScript and TypeScript adapter (a.k.a. shim) files generated earlier. The WebAssembly/Rust function, get_wise_saying(), is exposed in the auto-generated JavaScript file, wisesayings.js.
The function get_wise_saying is imported into the webserver code at Line 2 above. The function is used at Line 16 to get a wise saying that will be returned as an HTTP response by the webserver.
To get the Deno web server up and running, execute the following command in a terminal window.
**Step 1:**
```
deno run --allow-read --allow-net --allow-env ./main.ts
```
WHERE
**deno run** is the command set to invoke the webserver.
**--allow-read** is the option that allows the Deno webserver code to have permission to read files from disk.
**--allow-net** is the option that allows the Deno webserver code to have access to the network.
**--allow-env** is the option that allows the Deno webserver code to read environment variables.
**./main.ts** is the TypeScript file that Deno is to run. In this case, it's the webserver code.
When the webserver is up and running, you'll get output similar to the following:

```
HTTP webserver running at Thu Mar 11 2021 17:57:32 GMT+0000 (Coordinated Universal Time). Access it at: http://localhost:4040/
```
**Step 2:**
Run the following command in a terminal on your computer to exercise the Deno/WebAssembly code:

```
curl localhost:4040
```
You'll get a wise saying, for example:
_True beauty lies within._
**Congratulations!** You've created and run a server-side WebAssembly binary.
### Putting It All Together
In this article, I've shown you everything you need to know to create and use a WebAssembly binary in a Deno web server. Yet, as detailed as the information presented in this article is, there is still a lot more to learn about what's under the covers. Remember, Rust is a low-level programming language. It's meant to go right up against the processor and memory directly. That's where its power really is. The real benefit of WebAssembly is using the technology to do computationally intensive work from within a browser. Applications that are well suited to WebAssembly are visually intensive games and activities that require complex machine learning capabilities — for example, real-time voice recognition and language translation. WebAssembly allows you to do computation on the client-side that previously was only possible on the server-side. As Solomon Hykes said, WebAssembly is the future of computing. He might very well be right.
The important thing to understand is that WebAssembly provides enormous opportunities for those wanting to explore cutting-edge approaches to modern distributed computing. Hopefully, the information presented in this piece will motivate you to explore those opportunities.
The post [Using Web Assembly Written in Rust on the Server-Side][19] appeared first on [Linux Foundation Training][20].
--------------------------------------------------------------------------------
via: https://www.linux.com/news/using-web-assembly-written-in-rust-on-the-server-side/
作者:[Dan Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://training.linuxfoundation.org/announcements/using-web-assembly-written-in-rust-on-the-server-side/
[b]: https://github.com/lujun9972
[1]: https://thenewstack.io/using-web-assembly-written-in-rust-on-the-server-side/
[2]: https://www.infoq.com/news/2017/12/webassembly-browser-support/
[3]: https://deno.land/
[4]: https://twitter.com/solomonstre/status/1111004913222324225?s=20
[5]: https://github.com/reselbob/wisesayingswasm
[6]: https://github.com/reselbob/wisesayingswasm/blob/main/README.md
[7]: https://rustwasm.github.io/docs/wasm-bindgen/reference/cli.html
[8]: https://doc.rust-lang.org/cargo/reference/manifest.html
[9]: mailto:bob@CogArtTech.com
[10]: https://docs.rs/getrandom/0.2.2/getrandom/
[11]: https://docs.rs/getrandom/0.2.2/getrandom/#webassembly-support
[12]: https://rustwasm.github.io/docs/wasm-bindgen/
[13]: https://rustwasm.github.io/docs/wasm-pack/tutorials/npm-browser-packages/template-deep-dive/cargo-toml.html#1-crate-type
[14]: https://en.wikipedia.org/wiki/Library_(computing)#Shared_libraries
[15]: https://doc.rust-lang.org/book/ch19-06-macros.html
[16]: https://doc.rust-lang.org/std/macro.include_str.html
[17]: https://docs.rust-embedded.org/embedonomicon/compiler-support.html#built-in-target
[18]: https://rustwasm.github.io/wasm-bindgen/
[19]: https://training.linuxfoundation.org/announcements/using-web-assembly-written-in-rust-on-the-server-side/
[20]: https://training.linuxfoundation.org/

View File

@@ -0,0 +1,204 @@
[#]: subject: (5 reasons sysadmins love systemd)
[#]: via: (https://opensource.com/article/21/4/sysadmins-love-systemd)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
5 reasons sysadmins love systemd
======
Systemd's speed and ease make it a popular way to manage modern Linux
systems.
![Woman sitting in front of her laptop][1]
As systems administrators know, there's a lot happening on modern computers. Applications run in the background, automated events wait to be triggered at a certain time, log files are written, status reports are delivered. Traditionally, these disparate processes have been managed and monitored with a collection of Unix tools to great effect and with great efficiency. However, modern computers are diverse, with local services running alongside containerized applications, easy access to clouds and the clusters they run on, real-time processes, and more data to process than ever.
Having a unified method of managing them is an expectation for users and a useful luxury for busy sysadmins. For this nontrivial task, the system daemon, or **systemd**, was developed and quickly adopted by all major Linux distributions.
Of course, systemd isn't the only way to manage a Linux system. There are many alternative init systems, including sysvinit, OpenRC, runit, s6, and even BusyBox, but systemd treats Linux as a unified data set, meant to be manipulated and queried consistently with robust tools. For a busy systems administrator and many users, the speed and ease of systemd is an important feature. Here are five reasons why.
### Boot management
Booting a Linux computer can be a surprisingly rare event, if you want it to be. Certainly in the server world, uptimes are often counted in _years_ rather than months or weeks. Laptops and desktops tend to be shut down and booted pretty frequently, although even these are as likely to be suspended or hibernated as they are to be shut down. Either way, the time since the most recent boot event can serve as a sort of session manager for a computer health check. It's a useful way to limit what data you look at when monitoring your system or diagnosing problems.
In the likely event that you can't remember the last time you booted your computer, you can list boot sessions with systemd's logging tool, `journalctl`:
```
$ journalctl --list-boots
-42 7fe7c3... Fri 2020-12-04 05:13:59 - Wed 2020-12-16 16:01:23
-41 332e99... Wed 2020-12-16 20:07:39 - Fri 2020-12-18 22:08:13
[...]
-1 e0fe5f... Mon 2021-03-29 20:47:46 - Mon 2021-03-29 21:59:29
 0 37fbe4... Tue 2021-03-30 04:46:13 - Tue 2021-03-30 10:42:08
```
The latest boot sessions appear at the bottom of the list, so you can pipe the output to `tail` for just the latest boots.
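For example, to see just the three most recent sessions:

```
$ journalctl --list-boots | tail -n 3
```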
The numbers on the left (-42, -41, -1, and 0 in this example) are index numbers for each boot session. In other words, to view logs for only a specific boot session, you can use its index number as reference.
### Log reviews
Looking at logs is an important method of extrapolating information about your system. Logs provide a history of much of the activity your computer engages in without your direct supervision. You can see when services launched, when timed jobs ran, what services are running in the background, which activities failed, and more. One of the most common initial troubleshooting steps is to review logs, which is easy to do with `journalctl`:
```
$ journalctl --pager-end
```
The `--pager-end` (or `-e` for short) option starts your view of the logs at the end of the `journalctl` output, so you must scroll up to see events that happened earlier.
Systemd maintains a "catalog" of errors and messages filled with records of errors, possible solutions, pointers to support forums, and developer documentation. This can provide important context to a log event, which can otherwise be a confusing blip in a sea of messages, or worse, could go entirely unnoticed. To integrate error messages with explanatory text, you can use the `--catalog` (or `-x` for short) option:
```
$ journalctl --pager-end --catalog
```
To further limit the log output you need to wade through, you can specify which boot session you want to see logs for. Because each boot session is indexed, you can specify certain sessions with the `--boot` option and view only the logs that apply to it:
```
$ journalctl --pager-end --catalog --boot 42
```
You can also see logs for a specific systemd unit. For instance, to troubleshoot an issue with your secure shell (SSH) service, you can specify `--unit sshd` to see only the logs that apply to the `sshd` daemon:
```
$ journalctl --pager-end \
    --catalog --boot 42 \
    --unit sshd
```
### Service management
The first task for systemd is to boot your computer, and it generally does that promptly, efficiently, and effectively. But the task that's never finished is service management. By design, systemd ensures that the services you want to run do indeed start and continue running during your session. This is nicely robust, because in theory even a crashed service can be restarted without your intervention.
Your interface to help systemd manage services is the `systemctl` command. With it, you can view the unit files that define a service:
```
$ systemctl cat sshd
# /usr/lib/systemd/system/sshd.service
[Unit]
Description=OpenSSH server daemon
Documentation=man:sshd(8) man:sshd_config(5)
After=network.target sshd-keygen.target
Wants=sshd-keygen.target
[Service]
Type=notify
EnvironmentFile=-/etc/crypto-policies/back-ends/opensshserver.config
EnvironmentFile=-/etc/sysconfig/sshd
ExecStart=/usr/sbin/sshd -D $OPTIONS $CRYPTO_POLICY
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=42s
[Install]
WantedBy=multi-user.target
```
Most unit files exist in `/usr/lib/systemd/system/` but, as with many important configurations, you're encouraged to modify them with local changes. There's an interface for that, too:
```
$ systemctl edit sshd
```
You can see whether a service is currently active:
```
$ systemctl is-active sshd
active
$ systemctl is-active foo
inactive
```
Similarly, you can see whether a service has failed with `is-failed`.
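For example, to check on the `sshd` service:

```
$ systemctl is-failed sshd
```

The command prints the unit's state and, usefully for scripting, exits with 0 only when the unit has actually failed.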
Starting and stopping services is nicely intuitive:
```
$ systemctl stop sshd
$ systemctl start sshd
```
And enabling a service to start at boot time is simple:
```
$ systemctl enable sshd
```
Add the `--now` option to both enable a service to start at boot time and start it for your current session.
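For example, to enable `sshd` and start it immediately:

```
$ systemctl enable --now sshd
```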
### Timers
Long ago, when you wanted to automate a task on Linux, the canonical tool for the job was `cron`. There's still a place for the cron command, but there are also some compelling alternatives. For instance, the [`anacron` command][2] is a versatile, cron-like system capable of running tasks that otherwise would have been missed during downtime.
Scheduled events are little more than services activated at a specific time, so systemd manages a cron-like function called [timers][3]. You can list active timers:
```
$ systemctl list-timers
NEXT                          LEFT      
Tue 2021-03-30 12:37:54 NZDT  16min left [...]
Wed 2021-03-31 00:00:00 NZDT  11h left [...]
Wed 2021-03-31 06:42:02 NZDT  18h left [...]
3 timers listed.
Pass --all to see loaded but inactive timers, too.
```
You can enable a timer the same way you enable a service:
```
$ systemctl enable myMonitor.timer
```
### Targets
Targets are the final major component of the systemd matrix. A target is defined by a unit file, the same as services and timers. Targets can also be started and enabled in the same way. What makes targets unique is that they group other unit files in an arbitrarily significant way. For instance, you might want to boot to a text console instead of a graphical desktop, so the `multi-user` target exists. However, the `multi-user` target is only the `graphical` target without the desktop unit files as dependencies.
In short, targets are an easy way for you to collect services, timers, and even other targets together to represent an intended state for your machine.
In fact, within systemd, a reboot, a power-off, or a shut-down action is just another target.
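For example, switching a running system to another target is a single (if disruptive) command; isolating `multi-user.target` will stop your graphical session:

```
$ sudo systemctl isolate multi-user.target
```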
You can list all available targets using the `list-unit-files` option, constraining it with the `--type` option set to `target`:
```
$ systemctl list-unit-files --type target
```
### Taking control with systemd
Modern Linux uses systemd for service management and log introspection. It provides everything from personal Linux systems to enterprise servers with a modern mechanism for monitoring and easy maintenance. The more you use it, the more systemd becomes comfortably predictable and intuitive, and the more you discover how disparate parts of your system are interconnected.
To get better acquainted with systemd, you must use it. And to get comfortable with using it, [download our cheat sheet][4] and refer to it often.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/sysadmins-love-systemd
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_4.png?itok=VGZO8CxT (Woman sitting in front of her laptop)
[2]: https://opensource.com/article/21/2/linux-automation
[3]: https://opensource.com/article/20/7/systemd-timers
[4]: https://opensource.com/downloads/linux-systemd-cheat-sheet

View File

@@ -0,0 +1,84 @@
[#]: subject: (A beginner's guide to load balancing)
[#]: via: (https://opensource.com/article/21/4/load-balancing)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
A beginner's guide to load balancing
======
Load balancing distributes resources to where they're needed most at
that moment.
![eight stones balancing][1]
When the personal computer was young, a household was likely to have one (or fewer) computers in it. Children played games on it during the day, and parents did accounting or programming or roamed through a BBS in the evening. Imagine a one-computer household today, though, and you can predict the conflict it would create. Everyone would want to use the computer at the same time, and there wouldn't be enough keyboard and mouse to go around.
This is, more or less, the same scenario that's been happening to the IT industry as computers have become more and more ubiquitous. Demand for services and servers has increased to the point that they could grind to a halt from overuse. Fortunately, we now have the concept of load balancing to help us handle the demand.
### What is load balancing?
Load balancing is a generic term referring to anything you do to ensure the resources you manage are distributed efficiently. For a web server's systems administrator, load balancing usually means ensuring that the web server software (such as [Nginx][2]) is configured with enough worker nodes to handle a spike in incoming visitors. In other words, should a site suddenly become very popular and its visitor count quadruple in a matter of minutes, the software running the server must be able to respond to each visitor without any of them noticing service degradation. For simple sites, this is as simple as a one-line configuration option, but for complex sites with dynamic content and several database queries for each user, it can be a serious problem.
Cloud computing was supposed to solve this problem, but a web app can still fail to scale out when it experiences an unexpected surge.
The important thing to keep in mind when it comes to load balancing is that distributing resources _efficiently_ doesn't necessarily mean distributing them _evenly_. Not all tasks require all available resources at all times. A smart load-balancing strategy provides resources to users and tasks only when those resources are needed. This is often the application developer's domain rather than the IT infrastructure's responsibility. Asynchronous applications are vital to ensuring that a user who walks away from the computer for a coffee break isn't occupying valuable resources on the server.
### How does load balancing work?
Load balancing avoids bottlenecks by distributing a workload across multiple computational nodes. Those nodes may be physical servers in a data center, containers in a cloud, strategically placed servers enlisted for edge computing, separate Java Virtual Machines (JVMs) in a complex application framework, or daemons running on a single Linux server.
The idea is to divide a large problem into small tasks and assign each task to a dedicated computer. For a website that requires its users to log in, for instance, the website might be hosted on Server A, while the login page and all the authentication lookups that go along with it are hosted on Server B. This way, the process of a new user logging into an account doesn't steal resources from other users actively using the site.
#### Load balancing the cloud
Cloud computing uses [containers][3], so there aren't usually separate physical servers to handle distinct tasks (actually, there are many separate servers, but they're clustered together to act as one computational "brain"). Instead, a "pod" is created from several containers. When one pod starts to run out of resources due to its user or task load, an identical pod is generated. Pods share storage and network resources, and each pod is assigned to a compute node as it's created. Pods can be created or destroyed on demand as the load requires so that users experience consistent quality of service regardless of how many users there are.
#### Edge computing
[Edge computing][4] takes the physical world into account when load balancing. The cloud is naturally a distributed system, but in practice, a cloud's nodes are usually concentrated in a few data centers. The further a user is from the data center running the cloud, the more physical barriers they must overcome for optimal service. Even with fiber connections and proper load balancing, the response time of a server located 3,000 miles away is likely greater than the response time of something just 300 miles away.
Edge computing brings compute nodes to the "edge" of the cloud in an attempt to bridge the geographic divide, forming a sort of satellite network for the cloud, so it also plays a part in a good load-balancing effort.
### What is a load-balancing algorithm?
There are many strategies for load balancing, and they range in complexity depending on what technology is involved and what the requirements demand. Load balancing doesn't have to be complicated, and it's important, even when using specialized software like [Kubernetes][5] or [Keepalived][6], to start load balancing from inception.
Don't rely on containers to balance the load when you could design your application to take simple precautions on its own. If you design your application to be modular and ephemeral from the start, then you'll benefit from the load balancing opportunities made available by clever network design, container orchestration, and whatever tomorrow's technology brings.
Some popular algorithms that can guide your efforts as an application developer or network engineer include:
* Assign tasks to servers sequentially (this is often called _round-robin_).
* Assign tasks to the server that's currently the least busy.
* Assign tasks to the server with the best response time.
* Assign tasks randomly.
These principles can be combined or weighted to favor, for instance, the most powerful server in a group when assigning particularly complex tasks. [Orchestration][7] is commonly used so that an administrator doesn't have to drum up the perfect algorithm or strategy for load balancing, although sometimes it's up to the admin to choose which combination of load balancing schemes to use.
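To make the first of those strategies concrete, here is a minimal round-robin sketch in Bash (the server addresses are hypothetical placeholders, not a real deployment):
```
#!/bin/bash
# Round-robin: hand each task to the next server in a fixed rotation.
servers=("10.0.0.1" "10.0.0.2" "10.0.0.3")   # hypothetical backends
i=0
for task in {1..6}; do
    echo "task $task -> ${servers[i]}"
    i=$(( (i + 1) % ${#servers[@]} ))        # wrap around to the first server
done
```
Real load balancers layer health checks and weighting on top of this rotation, but the core bookkeeping is exactly this simple.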
### Expect the unexpected
Load balancing isn't really about ensuring that all your resources are used evenly across your network. Load balancing is all about guaranteeing a reliable user experience even when the unexpected happens. Good infrastructure can withstand a computer crash, an application overload, an onslaught of network traffic, and user errors. Think about how your service can be resilient and design load balancing accordingly from the ground up.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/load-balancing
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/water-stone-balance-eight-8.png?itok=1aht_V5V (eight stones balancing)
[2]: https://opensource.com/business/15/4/nginx-open-source-platform
[3]: https://opensource.com/resources/what-are-linux-containers
[4]: https://opensource.com/article/18/5/edge-computing
[5]: https://opensource.com/resources/what-is-kubernetes
[6]: https://www.redhat.com/sysadmin/keepalived-basics
[7]: https://opensource.com/article/20/11/orchestration-vs-automation

View File

@ -0,0 +1,140 @@
[#]: subject: (Resolve systemd-resolved name-service failures with Ansible)
[#]: via: (https://opensource.com/article/21/4/systemd-resolved)
[#]: author: (David Both https://opensource.com/users/dboth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Resolve systemd-resolved name-service failures with Ansible
======
Name resolution and the ever-changing networking landscape.
![People work on a computer server with devices][1]
Most people tend to take name services for granted. They are necessary to convert human-readable names, such as `www.example.com`, into IP addresses, like `93.184.216.34`. It is easier for humans to recognize and remember names than IP addresses, and name services let us use those names while converting them to IP addresses for us.
The [Domain Name System][2] (DNS) is the global distributed database that maintains the data required to perform these lookups and reverse lookups, in which the IP address is known and the domain name is needed.
I [installed Fedora 33][3] the first day it became available in October 2020. One of the major changes was a migration from the ancient Name Service Switch (NSS) resolver to [systemd-resolved][4]. Unfortunately, after everything was up and running, I couldn't connect to or even ping any of the hosts on my network by name, although using IP addresses did work.
### The problem
I run my own name server using BIND on my network server, and all has been good for over 20 years. I've configured my DHCP server to provide the IP address of my name server to every workstation connected to my network, and that (along with a couple of backup name servers) is stored in `/etc/resolv.conf`.
[Michael Catanzaro][5] describes how systemd-resolved is supposed to work, but its introduction caused various strange resolution problems on my network hosts. The symptoms varied depending upon the host's purpose. The trouble mostly presented as an inability to find IP addresses for hosts inside the network on most systems. On other systems, it failed to resolve any names at all. For example, even though nslookup sometimes returned the correct IP addresses for hosts inside and outside the network, ping would not contact the designated host, nor could I SSH to that same host. Most of the time, neither the lookup, the ping, nor SSH would work unless I used the IP address in the command.
The new resolver allegedly has four operational modes, described in this [Fedora Magazine article][6]. None of the options seems to work correctly when the network has its own name server designed to perform lookups for internal and external hosts.
In theory, systemd-resolved is supposed to fix some corner issues around the NSS resolver failing to use the correct name server when a host is connected to a VPN, which has become a common problem with so many more people working from home.
The new resolver is supposed to use the fact that `/etc/resolv.conf` is now a symlink to determine how to behave, based on which resolver file it links to. systemd-resolved's man page includes details about this behavior. The man page says that setting `/etc/resolv.conf` as a symlink to `/run/systemd/resolve/resolv.conf` should cause the new resolver to work the same way the old one does, but that didn't work for me.
### My fix
I have seen many options on the internet for resolving this problem, but the only thing that works reliably for me is to stop and disable the new resolver. First, I deleted the existing link for `resolv.conf`, copied my preferred `resolv.conf` file to `/run/NetworkManager/resolv.conf`, and added a new link to that file in `/etc`:
```
# rm -f /etc/resolv.conf
# cp <your-preferred-resolv.conf> /run/NetworkManager/resolv.conf
# ln -s /run/NetworkManager/resolv.conf /etc/resolv.conf
```
These commands stop and disable the systemd-resolved service:
```
# systemctl stop systemd-resolved.service ; systemctl disable systemd-resolved.service
Removed /etc/systemd/system/multi-user.target.wants/systemd-resolved.service.
Removed /etc/systemd/system/dbus-org.freedesktop.resolve1.service.
```
There's no reboot required. The old resolver takes over, and name services work as expected.
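If you want to verify the change, the symlink and a lookup should now behave as expected. Here is a sketch; the hostname, address, and timestamp are placeholders for your own network:
```
$ ls -l /etc/resolv.conf
lrwxrwxrwx. 1 root root 31 Mar 26 09:00 /etc/resolv.conf -> /run/NetworkManager/resolv.conf
$ ping -c 1 host1.example.com
PING host1.example.com (192.168.0.52) 56(84) bytes of data.
...
```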
### Make it easy with Ansible
I set up this Ansible playbook to make the necessary changes if I install a new host, if an update reverts the resolver to systemd-resolved, or if an upgrade to the next release of Fedora does the same. The `resolv.conf` file you want for your network should be located in `/root/ansible/resolver/files/`:
```
################################################################################
#                              fixResolver.yml                                 #
#------------------------------------------------------------------------------#
#                                                                              #
# This playbook configures the old nss resolver on systems that have the new   #
# systemd-resolved service installed. It copies the resolv.conf file for my    #
# network to /run/NetworkManager/resolv.conf and places a link to that file    #
# as /etc/resolv.conf. It then stops and disables systemd-resolved which       #
# activates the old nss resolver.                                              #
#                                                                              #
#------------------------------------------------------------------------------#
#                                                                              #
# Change History                                                               #
# Date        Name         Version   Description                               #
# 2020/11/07  David Both   00.00     Started new code                          #
# 2021/03/26  David Both   00.50     Tested OK on multiple hosts.              #
#                                                                              #
################################################################################
---
- name: Revert to old NSS resolver and disable the systemd-resolved service
  hosts: all_by_IP
################################################################################
  tasks:
    - name: Copy resolv.conf for my network
      copy:
        src: /root/ansible/resolver/files/resolv.conf
        dest: /run/NetworkManager/resolv.conf
        mode: 0644
        owner: root
        group: root
    - name: Delete existing /etc/resolv.conf file or link
      file:
        path: /etc/resolv.conf
        state: absent
    - name: Create link from /etc/resolv.conf to /run/NetworkManager/resolv.conf
      file:
        src: /run/NetworkManager/resolv.conf
        dest: /etc/resolv.conf
        state: link
    - name: Stop and disable systemd-resolved
      systemd:
        name: systemd-resolved
        state: stopped
        enabled: no
```
Whichever Ansible inventory you use must have a group that uses IP addresses instead of hostnames. This command runs the playbook and specifies the name of the inventory file I use for hosts by IP address:
```
`# ansible-playbook -i /etc/ansible/hosts_by_IP fixResolver.yml`
```
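For reference, the inventory file can be as simple as the following sketch; the group name must match the `hosts: all_by_IP` line in the playbook, and the addresses are placeholders for your own hosts:
```
$ cat /etc/ansible/hosts_by_IP
[all_by_IP]
192.168.0.52
192.168.0.53
192.168.0.54
```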
### Final thoughts
Sometimes the best answer to a tech problem is to fall back to what you know. When systemd-resolved is more robust, I'll likely give it another try, but for now I'm glad that open source infrastructure allows me to quickly identify and resolve network problems. Using Ansible to automate the process is a much appreciated bonus. The important lesson here is to do your research when troubleshooting, and to never be afraid to void your warranty.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/systemd-resolved
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR (People work on a computer server with devices)
[2]: https://opensource.com/article/17/4/introduction-domain-name-system-dns
[3]: https://opensource.com/article/20/11/new-gnome
[4]: https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html
[5]: https://blogs.gnome.org/mcatanzaro/2020/12/17/understanding-systemd-resolved-split-dns-and-vpn-configuration/
[6]: https://fedoramagazine.org/systemd-resolved-introduction-to-split-dns/

View File

@ -0,0 +1,228 @@
[#]: subject: (Notes on building debugging puzzles)
[#]: via: (https://jvns.ca/blog/2021/04/16/notes-on-debugging-puzzles/)
[#]: author: (Julia Evans https://jvns.ca/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Notes on building debugging puzzles
======
Hello! This week I started building some choose-your-own-adventure-style puzzles about debugging networking problems. I'm pretty excited about it and I'm trying to organize my thoughts, so here's a blog post!
The two I've made so far are:
* [The Case of the Connection Timeout][1]
* [The Case of the Slow Website][2]
I'll talk about how I came to this idea, design decisions I made, how it works, what I think is hard about making these puzzles, and some feedback I've gotten so far.
### why this choose-your-own-adventure format?
I've been thinking a lot about DNS recently, and how to help people troubleshoot their DNS issues. So on Tuesday I was sitting in a park with a couple of friends chatting about this.
We started out by talking about the idea of flowcharts (“here's a flowchart that will help you debug any DNS problem”). I don't think I've ever seen a flowchart that I found helpful in solving a problem, so I found it really hard to imagine creating one: there are so many possibilities! It's hard to be exhaustive! It would be disappointing if the flowchart failed and didn't give you your answer!
But then someone mentioned choose-your-own-adventure games, and I thought about something I **could** relate to: debugging a problem together with someone who knows things that I don't!
So I thought: what if I made a choose-your-own-adventure game where you're given the symptoms of a specific networking bug, and you have to figure out how to diagnose it?
I got really excited about this and immediately went home and started putting something together in Twine.
Here are some design decisions I've made so far. Some of them might change.
### design decision: the mystery has 1 specific bug
Each mystery has one very specific bug, ideally a bug that I've actually run into in the past. Your mission is to figure out the cause of the bug and fix it.
### design decision: show people the actual output of the tools they're using
All of the bugs I'm starting with are networking issues, and the way you solve them is to use various tools (like dig, curl, tcpdump, ping, etc) to get more information.
Originally I thought of writing the game like this:
1. You choose “Use curl”
2. It says “You run `<command>`. You see that the output tells you `<interpretation>`.”
But I realized that immediately interpreting the output of a command for someone is extremely unrealistic: one of the biggest problems with using some of these command line networking tools is that their output is hard to interpret!
So instead, the puzzle:
1. Asks what tool you want to use
2. Tells you what command they ran, and shows you the output of the command
3. Asks you to interpret the output (you type it into a freeform text box)
4. Tells you the “correct” interpretation of the output and shows you how you could have figured it out (by highlighting the relevant parts of the output)
This really lines up with how I've learned about these tools in real life: I don't learn how to read all of the output all at once, I learn it in bits and pieces by debugging real problems.
### design decision: make the output realistic
This is sort of obvious, but in order to give someone output to help them diagnose a bug, the output needs to be a realistic representation of what would actually happen.
I've been doing this by reproducing the bug in a virtual machine (or on my laptop), running the commands the same way I would to fix the bug in real life, and pasting their output.
Reproducing the bug isn't always easy, but once I've reproduced it, building the puzzle is much more straightforward than trying to imagine what tcpdump would theoretically output in a given situation.
### design decision: let people collect “knowledge” throughout the mystery
When I debug, I think about it as slowly collecting new pieces of information as I go. So in this mystery, every time you figure out a new piece of information, you get a little box that looks like this:
![][3]
And in the sidebar, you have a sort of “inventory” that lists all of the knowledge you've collected so far. It looks like this:
![][4]
### design decision: you always figure out the bug
My friend Sumana pointed out an interesting difference between this and normal choose-your-own-adventure games: in the choose-your-own-adventure games I grew up reading, you lose a lot! You make the wrong choice, and you fall into a pit and die.
But that's not how debugging works in my experience. When debugging, if you make a “wrong” choice (for example, by making a guess about the bug that isn't correct), there's no cost except your time! So you can always go back, keep trying, and eventually figure out what's going on.
I think that “you always win” is sort of realistic in the sense that with any bug you can always figure out what the bug is, given:
1. enough time
2. enough understanding of how the systems you're debugging work
3. tools that can give you information about what's happening
I'm still not sure if I want all bugs to result in “you fix the bug!”; sometimes bugs are impossible to fix if they're caused by a system that's outside of your control! One really interesting idea Sumana had was to have the resolution sometimes be “you submit a really clear and convincing bug report and someone else fixes it”.
### design decision: include red herrings sometimes
In debugging in real life, there are a lot of red herrings! Sometimes you see something that looks weird, and you spend three hours looking into it, and then you realize that wasn't it at all.
One of the mysteries right now has a red herring, and the way I came up with it was that I ran a command and I thought “wait, the output of that is pretty confusing, it's not clear how to interpret that”. So I just included the confusing output in the mystery and said “hey, what do you think it means?”.
One thing I like about including red herrings is that it lets me show how you can prove what the cause of the bug **isn't**, which is even harder than proving what the cause of the bug is.
### design decision: use free form text boxes
Here's an example of what it looks like to be asked to interpret some output. You're asked a question and you fill in the answer in a text box.
![][5]
I think I like using free form text boxes instead of multiple choice because it feels a little more realistic to me: in real life, when you see some output like this, you don't get a list of choices!
### design decision: don't do anything with what you enter in the text box
No matter what you enter in the text box (or if you say “I don't know”), exactly the same thing happens. It'll send you to a page that tells you the answer and explains the reasoning. So you have to think about what you think the answer might be, but if you get it “wrong”, it's no big deal.
The reason I'm doing this is basically “it's very easy to implement”, but I think there's maybe also something nice about it for the person using it: if you don't know, it's totally okay! You can learn something new and keep moving! You don't get penalized for your “wrong” answers in any way.
### design decision: the epilogue
At the end of the game, there's a very short epilogue where it talks about how likely you are to run into this bug in real life / how realistic this is. I think I need to expand on this to answer other questions people might have had while going through it, but I think it's going to be a nice place to wrap up loose ends.
### how long each one takes to play: 5 minutes
People seem to report so far that each mystery takes about 5 minutes to play, which feels reasonable to me. I think I'm most likely to extend this by making lots of different 5-minute mysteries rather than making one really long mystery, but we'll see.
### what's hard: reproducing the bug
Figuring out how to reproduce a given bug is actually not that easy: I think I want to include some pretty weird bugs, and setting up a computer where that bug is happening in a realistic way isn't actually that easy. I think this just takes some work and creativity though.
### what's hard: giving realistic options
The most common critique I got was of the form “In this situation I would have done X but you didn't include X as an option”. Some examples of X: “ping the problem host”, “ssh to the problem host and run tcpdump there”, “look at the log file”, “use netstat”, etc.
I think it's possible to make a lot of progress on this with playtesting: if I playtest a mystery with a bunch of people and ask them to tell me when there was an option they wished they had, I can add that option pretty easily!
Because I can actually reproduce the bug, providing an option like “run netstat” is pretty straightforward: all I have to do is go to the VM where I've reproduced the bug, run `netstat`, and put the output into the game.
A couple of people also said that the game felt too “linear” or didn't branch enough. I'm curious about whether that will naturally come out of having more realistic options.
### how it works: it's a Twine game!
I felt like Twine was the obvious choice for this even though I'd never used it before: I'd heard so many good things about it over the years.
You can see all of the source code for The Case of the Connection Timeout in [connection-timeout.twee][6] and [common.twee][7], which has some shared code between all the games.
A few notes about using Twine:
* I'm using SugarCube; the [sugarcube docs are very good][8]
* I'm using [tweego][9] to translate the `.twee` files into an HTML page. I started out using the visual Twine editor to do my editing but switched to `tweego` pretty quickly because I wanted to use version control and have a more text-based workflow (there's an example command after this list).
* The final output is one big HTML file that includes all the images and CSS and Javascript inline. The final HTML files are about 800K, which seems reasonable to me.
* I base64-encode all the images in the game and include them inline in the file
* The [Twine wiki][10] and forums have a lot of great information and between the Twine wiki, the forums, and the Sugarcube docs I could pretty easily find answers to all my questions.
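For example, here's roughly what compiling the `.twee` sources into a playable HTML file looks like (a sketch using the file names mentioned above):
```
$ tweego -o connection-timeout.html common.twee connection-timeout.twee
```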
I used pretty much the exact Twine workflow from Em Lazerwalker's great post [A Modern Developer's Workflow For Twine][11].
I won't explain how Twine works because it has great documentation and it would make this post way too long.
### some feedback so far
I posted this on Twitter and asked for feedback. Some common pieces of feedback I got:
things people liked:
* maybe 180 replies along the lines of “I love this, this was so fun, I learned something new”
* A bunch of people specifically said that they liked learning how to interpret tcpdump's output format
* A few people specifically mentioned that they liked the “what you know” list and the mechanic of hunting for clues and how it breaks down the debugging process.
some suggestions for improvements:
* Like I mentioned before, lots of people said “I wanted to try X but it wasn't an option”
* One of the puzzles had a resolution to the bug that some people found unsatisfying (they felt it was more of a workaround than a fix, which I agreed with). I updated it to add a different resolution that was more satisfying.
* There were some technical issues (it could be more mobile-friendly, one of the images was hard to read, I needed to add a “Submit” button to one of the forms)
* Right now the way the text boxes work is that no matter what you type, the exact same thing happens. Some people found this a bit confusing, like “why did it act like I answered correctly if my answer was wrong”. This definitely needs some work.
### some goals of this project
Here's what I think the goals of this project are:
1. help people learn about **tools** (like tcpdump, dig, and curl). How do you use each tool? What questions can they be used to answer? How do you interpret their output?
2. help people learn about **bugs**. There are some super common bugs that we run into over and over, and once you've seen a bug, it's easier to recognize the same bug in the future.
3. help people get better at the **debugging process** (gathering data, asking questions)
### what experience is this trying to imitate?
Something I try to keep in mind with all my projects is: what real-life experience does this reproduce? For example, I kind of think of my zines as being the experience “your coworker explains something to you in a really clear way”.
I think the experience here might be “you're debugging a problem together with your coworker and they're really knowledgeable about the tools you're using”.
### that's all!
I'm pretty excited about this project right now. I'm going to build at least a couple more of these and see how it goes! If things go well I might make this into my first non-zine thing for sale: maybe it'll be a collection of 12 small debugging mysteries! We'll see.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2021/04/16/notes-on-debugging-puzzles/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://mysteries.wizardzines.com/connection-timeout.html
[2]: https://mysteries.wizardzines.com/slow-website.html
[3]: https://jvns.ca/images/newinfo.png
[4]: https://jvns.ca/images/sidebar-mystery.png
[5]: https://jvns.ca/images/textboxes.png
[6]: https://github.com/jvns/twine-stories/blob/2914c4326e3ff760a0187b2cfb15161928d6335b/connection-timeout.twee
[7]: https://github.com/jvns/twine-stories/blob/2914c4326e3ff760a0187b2cfb15161928d6335b/common.twee
[8]: https://www.motoslave.net/sugarcube/2/docs/
[9]: https://www.motoslave.net/tweego/
[10]: https://twinery.org/wiki/
[11]: https://dev.to/lazerwalker/a-modern-developer-s-workflow-for-twine-4imp

View File

@ -0,0 +1,209 @@
[#]: subject: (Play a fun math game with Linux commands)
[#]: via: (https://opensource.com/article/21/4/math-game-linux-commands)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Play a fun math game with Linux commands
======
Play the numbers game from the popular British game show "Countdown" at home.
![Math formulas in green writing][1]
Like many people, I've been exploring lots of new TV shows during the pandemic. I recently discovered a British game show called _[Countdown][2]_, where contestants play two types of games: a _words_ game, where they try to make the longest word out of a jumble of letters, and a _numbers_ game, where they calculate a target number from a random selection of numbers. Because I enjoy mathematics, I've found myself drawn to the numbers game.
The numbers game can be a fun addition to your next family game night, so I wanted to share my own variation of it. You start with a collection of random numbers, divided into "small" numbers from 1 to 10 and "large" numbers that are 15, 20, 25, and so on until 100. You pick any combination of six numbers from both large and small numbers.
Next, you generate a random "target" number between 200 and 999. Then use simple arithmetic operations with your six numbers to try to calculate the target number using each "small" and "large" number no more than once. You get the highest number of points if you calculate the target number exactly and fewer points if you can get within 10 of the target number.
For example, if your random numbers were 75, 100, 2, 3, 4, and 1, and your target number was 505, you might say _2+3=5_, _5×100=500_, _4+1=5_, and _5+500=505_. Or more directly: (**2**+**3**)×**100** + **4** + **1** = **505**.
### Randomize lists on the command line
I've found the best way to play this game at home is to pull four "small" numbers from a pool of 1 to 10 and two "large" numbers from multiples of five from 15 to 100. You can use the Linux command line to create these random numbers for you.
Let's start with the "small" numbers. I want these to be in the range of 1 to 10. You can generate a sequence of numbers using the Linux `seq` command. You can run `seq` a few different ways, but the simplest form is to provide the starting and ending numbers for the sequence. To generate a list from 1 to 10, you might run this command:
```
$ seq 1 10
1
2
3
4
5
6
7
8
9
10
```
To randomize this list, you can use the Linux `shuf` ("shuffle") command. `shuf` will randomize the order of whatever you give it, usually a file. For example, if you send the output of the `seq` command to the `shuf` command, you will receive a randomized list of numbers between 1 and 10:
```
$ seq 1 10 | shuf
3
6
8
10
7
4
5
2
1
9
```
To select just four random numbers from a list of 1 to 10, you can send the output to the `head` command, which prints out the first few lines of its input. Use the `-4` option to specify that `head` should print only the first four lines:
```
$ seq 1 10 | shuf | head -4
6
1
8
4
```
Note that this list is different from the earlier example because `shuf` will generate a random order every time.
Now you can take the next step to generate the random list of "large" numbers. The first step is to generate a list of possible numbers starting at 15, incrementing by five, until you reach 100. You can generate this list with the Linux `seq` command. To increment each number by five, insert another option for the `seq` command to indicate the _step_:
```
$ seq 15 5 100
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
```
And just as before, you can randomize this list and select two of the "large" numbers:
```
$ seq 15 5 100 | shuf | head -2
75
40
```
### Generate a random number with Bash
I suppose you could use a similar method to select the game's target number from the range 200 to 999. But the simplest solution for generating a single random value is to use the `RANDOM` variable directly in Bash. When you reference this built-in variable, Bash generates a random integer between 0 and 32767. To put this in the range of 200 to 999, you need to put the random number into the range 0 to 799 first, then add 200.
To put a random number into a specific range starting at 0, you can use the **modulo** arithmetic operation. Modulo calculates the _remainder_ after dividing two numbers. If I started with 801 and divided by 800, the result is 1 _with a remainder of_ 1 (the modulo is 1). Dividing 800 by 800 gives 1 _with a remainder of_ 0 (the modulo is 0). And dividing 799 by 800 results in 0 _with a remainder of_ 799 (the modulo is 799).
Bash supports arithmetic expansion with the `$(( ))` construct. Between the double parentheses, Bash will perform arithmetic operations on the values you provide. To calculate the modulo of 801 divided by 800, then add 200, you would type:
```
$ echo $(( 801 % 800 + 200 ))
201
```
With that operation, you can calculate a random target number between 200 and 999:
```
$ echo $(( RANDOM % 800 + 200 ))
673
```
You might wonder why I used `RANDOM` instead of `$RANDOM` in my Bash statement. In arithmetic expansion, Bash will automatically expand any variables within the double parentheses. You don't need the `$` on the `$RANDOM` variable to reference the value of the variable because Bash will do it for you.
### Playing the numbers game
Let's put all that together to play the numbers game. Generate two random "large" numbers, four random "small" values, and the target value:
```
$ seq 15 5 100 | shuf | head -2
75
100
$ seq 1 10 | shuf | head -4
4
3
10
2
$ echo $(( RANDOM % 800 + 200 ))
868
```
My numbers are **75**, **100**, **4**, **3**, **10**, and **2**, and my target number is **868**.
I can get close to the target number if I do these arithmetic operations using each of the "small" and "large" numbers no more than once:
```
10×75 = 750
750+100 = 850
and:
4×3 = 12
850+12 = 862
862+2 = 864
```
That's only four away—not bad! But I found this way to calculate the exact number using each random number no more than once:
```
4×2 = 8
8×100 = 800
and:
75-10+3 = 68
800+68 = 868
```
Or I could perform _these_ calculations to get the target number exactly. This uses only five of the six random numbers:
```
4×3 = 12
75+12 = 87
and:
87×10 = 870
870-2 = 868
```
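If you'd rather generate a whole round with a single command, you can wrap the three steps in a short script. Here's a sketch (adjust the ranges to taste):
```
#!/bin/bash
# Generate one round of the numbers game
echo "Large numbers:"
seq 15 5 100 | shuf | head -2
echo "Small numbers:"
seq 1 10 | shuf | head -4
echo "Target: $(( RANDOM % 800 + 200 ))"
```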
Give the _Countdown_ numbers game a try, and let us know how well you did in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/math-game-linux-commands
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_math_formulas.png?itok=B59mYTG3 (Math formulas in green writing)
[2]: https://en.wikipedia.org/wiki/Countdown_%28game_show%29

View File

@ -0,0 +1,436 @@
[#]: subject: (Use the DNF local plugin to speed up your home lab)
[#]: via: (https://fedoramagazine.org/use-the-dnf-local-plugin-to-speed-up-your-home-lab/)
[#]: author: (Brad Smith https://fedoramagazine.org/author/buckaroogeek/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Use the DNF local plugin to speed up your home lab
======
![][1]
Photo by [Sven Hornburg][2] on [Unsplash][3]
### Introduction
If you are a Fedora Linux enthusiast or a developer working with multiple instances of Fedora Linux then you might benefit from the [DNF local][4] plugin. An example of someone who would benefit from the DNF local plugin would be an enthusiast who is running a cluster of Raspberry Pis. Another example would be someone running several virtual machines managed by Vagrant. The DNF local plugin reduces the time required for DNF transactions. It accomplishes this by transparently creating and managing a local RPM repository. Because accessing files on a local file system is significantly faster than downloading them repeatedly, multiple Fedora Linux machines will see a significant performance improvement when running _dnf_ with the DNF local plugin enabled.
I recently started using this plugin after reading a tip from Glenn Johnson (aka glennzo) in [a 2018 fedoraforum.org post][5]. While working on a Raspberry Pi based Kubernetes cluster running Fedora Linux and also on several container-based services, I winced every time a DNF update on a Pi or a container downloaded a duplicate set of rpms across my expensive internet connection. In order to improve this situation, I searched for a solution that would cache rpms for local reuse. I wanted something that would not require any changes to repository configuration files on every machine. I also wanted it to continue to use the network of Fedora Linux mirrors. I didn't want to use a single mirror for all updates.
### Prior art
An internet search yields two common solutions that eliminate or reduce repeat downloads of the same RPM set: create a private Fedora Linux mirror or set up a caching proxy.
Fedora provides guidance on setting up a [private mirror][6]. A mirror requires a lot of bandwidth and disk space and significant work to maintain. A full private mirror would be too expensive and it would be overkill for my purposes.
The most common solution I found online was to implement a caching proxy using Squid. I had two concerns with this type of solution. First, I would need to edit repository definitions stored in _/etc/yum.repo.d_ on each virtual and physical machine or container to use the same mirror. Second, I would need to use _http_ and not _https_ connections which would introduce a security risk.
After reading Glenn's 2018 post on the DNF local plugin, I searched for additional information but could not find much of anything besides the sparse documentation for the plugin on the DNF documentation web site. This article is intended to raise awareness of this plugin.
### About the DNF local plugin
The [online documentation][4] provides a succinct description of the plugin: “Automatically copy all downloaded packages to a repository on the local filesystem and generating repo metadata”. The magic happens when there are two or more Fedora Linux machines configured to use the plugin and to share the same local repository. These machines can be virtual machines or containers running on a host and all sharing the host filesystem, or separate physical hardware on a local area network sharing the file system using a network-based file system sharing technology. The plugin, once configured, handles everything else transparently. Continue to use _dnf_ as before. _dnf_ will check the plugin repository for rpms, then proceed to download from a mirror if not found. The plugin will then cache all rpms in the local repository regardless of their upstream source (an official Fedora Linux repository or a third-party RPM repository) and make them available for the next run of _dnf_.
### Install and configure the DNF local plugin
Install the plugin using _dnf_. The _createrepo_c_ packages will be installed as a dependency. The latter is used, if needed, to create the local repository.
```
sudo dnf install python3-dnf-plugin-local
```
The plugin configuration file is stored at _/etc/dnf/plugins/local.conf_. An example copy of the file is provided below. The only change required is to set the _repodir_ option, which defines where on the local filesystem the plugin will keep the RPM repository.
```
[main]
enabled = true
# Path to the local repository.
# repodir = /var/lib/dnf/plugins/local
# Createrepo options. See man createrepo_c
[createrepo]
# This option lets you disable createrepo command. This could be useful
# for large repositories where metadata is periodically generated by cron
# for example. This also has the side effect of only copying the packages
# to the local repo directory.
enabled = true
# If you want to speedup createrepo with the --cachedir option. Eg.
# cachedir = /tmp/createrepo-local-plugin-cachedir
# quiet = true
# verbose = false
```
Change _repodir_ to the filesystem directory where you want the RPM repository stored. For example, change _repodir_ to _/srv/repodir_ as shown below.
```
...
# Path to the local repository.
# repodir = /var/lib/dnf/plugins/local
repodir = /srv/repodir
...
```
Finally, create the directory if it does not already exist. If you skip this step, _dnf_ will display some errors when it first attempts to access the directory; the plugin will still create the directory despite the initial errors.
```
sudo mkdir -p /srv/repodir
```
Repeat this process on any virtual machine or container that you want to share the local repository. See the use cases below for more information. An alternative configuration using NFS (network file system) is also provided below.
### How to use the DNF local plugin
After you have installed the plugin, you do not need to change how you use _dnf_. The plugin will cause a few additional steps to run transparently behind the scenes whenever _dnf_ is called. After _dnf_ determines which rpms to update or install, the plugin will try to retrieve them from the local repository before trying to download them from a mirror. After _dnf_ has successfully completed the requested updates, the plugin will copy any rpms downloaded from a mirror to the local repository and then update the local repositorys metadata. The downloaded rpms will then be available in the local repository for the next _dnf_ client.
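A quick way to confirm the plugin is doing its job is to list the repository directory after an update. You should see the cached rpms alongside the generated metadata; this is a sketch with hypothetical package names, since the contents depend on what you have updated:
```
$ ls /srv/repodir
bash-5.1.0-2.fc34.x86_64.rpm  kernel-core-5.11.12-300.fc34.x86_64.rpm  repodata
```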
There are two points to be aware of. First, benefits from the local repository only occur if multiple machines share the same architecture (for example, x86_64 or aarch64). Virtual machines and containers running on a host will usually share the same architecture as the host. But if there is only one aarch64 device and one x86_64 device, there is little real benefit to a shared local repository unless one of the devices is constantly reset and updated, which is common when developing with a virtual machine or container. Second, I have not explored how robust the local repository is to multiple _dnf_ clients updating the repository metadata concurrently. I therefore run _dnf_ from multiple machines serially rather than in parallel. This may not be a real concern, but I want to be cautious.
The use cases outlined below assume that work is being done on Fedora Workstation. Other desktop environments can work as well but may take a little extra effort. I created a GitHub repository with examples to help with each use case. Click the _Code_ button at <https://github.com/buckaroogeek/dnf-local-plugin-examples> to clone the repository or to download a zip file.
#### Use case 1: networked physical machines
The simplest use case is two or more Fedora Linux computers on the same network. Install the DNF local plugin on each Fedora Linux machine and configure the plugin to use a repository on a network-aware file system. There are many network-aware file systems to choose from. Which file system you will use will probably be influenced by the existing devices on your network.
For example, I have a small Synology Network Attached Storage device (NAS) on my home network. The web admin interface for the Synology makes it very easy to set up a NFS server and export a file system share to other devices on the network. NFS is a shared file system that is well supported on Fedora Linux. I created a share on my NAS named _nfs-dnf_ and exported it to all the Fedora Linux machines on my network. For the sake of simplicity, I am omitting the details of the security settings. However, please keep in mind that security is always important even on your own local network. If you would like more information about NFS, the online Red Hat Enable Sysadmin magazine has an [informative post][7] that covers both client and server configurations on Red Hat Enterprise Linux. They translate well to Fedora Linux.
I configured the NFS client on each of my Fedora Linux machines using the steps shown below. In the below example, _quga.lan_ is the hostname of my NAS device.
Install the NFS client on each Fedora Linux machine.
```
$ sudo dnf install nfs-utils
```
Get the list of exports from the NFS server:
```
$ showmount -e quga.lan
Export list for quga.lan:
/volume1/nfs-dnf pi*.lan
```
Create a local directory to be used as a mount point on the Fedora Linux client:
```
$ sudo mkdir -p /srv/repodir
```
Mount the remote file system on the local directory. See _man mount_ for more information and options.
```
$ sudo mount -t nfs -o vers=4 quga.lan:/volume1/nfs-dnf /srv/repodir
```
The DNF local plugin will now work as long as the client remains up. If you want the NFS export to be automatically mounted when the client is rebooted, then you must edit _/etc/fstab_ as demonstrated below. I recommend making a backup of _/etc/fstab_ before editing it. You can substitute _vi_ with _nano_ or another editor of your choice if you prefer.
```
$ sudo vi /etc/fstab
```
Append the following line at the bottom of _/etc/fstab_, then save and exit.
```
quga.lan:/volume1/nfs-dnf /srv/repodir nfs defaults,timeo=900,retrans=5,_netdev 0 0
```
Finally, notify systemd that it should rescan _/etc/fstab_ by issuing the following command.
```
$ sudo systemctl daemon-reload
```
NFS works across the network and, like all network traffic, may be blocked by firewalls on the client machines. Use _firewall-cmd_ to allow NFS-related network traffic through each Fedora Linux machine's firewall.
```
$ sudo firewall-cmd --permanent --zone=public --add-service=nfs
$ sudo firewall-cmd --reload
```
As you can imagine, replicating these steps correctly on multiple Fedora Linux machines can be challenging and tedious. Ansible automation solves this problem.
In the _rpi-example_ directory of the GitHub repository, I've included an example Ansible playbook (_configure.yaml_) that installs and configures both the DNF plugin and the NFS client on all Fedora Linux machines on my network. There is also a playbook (_update.yaml_) that runs a DNF update across all devices. See this [recent post in Fedora Magazine][8] for more information about Ansible.
To use the provided Ansible examples, first update the inventory file (_inventory_) to include the list of Fedora Linux machines on your network that you want to managed. Next, install two Ansible roles in the roles subdirectory (or another suitable location).
```
$ ansible-galaxy install --roles-path ./roles -r requirements.yaml
```
Run the _configure.yaml_ playbook to install and configure the plugin and NFS client on all hosts defined in the inventory file. The role that installs and configures the NFS client does so via _/etc/fstab_ but also takes it a step further by creating an automount for the NFS share in systemd. The automount is configured to mount the share only when needed and then to automatically unmount. This saves network bandwidth and CPU cycles which can be important for low power devices like a Raspberry Pi. See the github repository for the role and for more information.
```
$ ansible-playbook -i inventory configure.yaml
```
Finally, Ansible can be configured to execute _dnf update_ on all the systems serially by using the _update.yaml_ playbook.
```
$ ansible-playbook -i inventory update.yaml
```
Ansible and other automation tools such as Puppet, Salt, or Chef can be big time savers when working with multiple virtual or physical machines that share many characteristics.
#### Use case 2: virtual machines running on the same host
Fedora Linux has excellent built-in support for virtual machines. The Fedora Project also provides [Fedora Cloud][9] base images for use as virtual machines. [Vagrant][10] is a tool for managing virtual machines. Fedora Magazine has [instructions][11] on how to set up and configure Vagrant. Add the following line to your _.bashrc_ (or other comparable shell configuration file) to tell Vagrant to use _libvirt_ automatically on your workstation instead of the default VirtualBox.
```
export VAGRANT_DEFAULT_PROVIDER=libvirt
```
In your project directory initialize Vagrant and the Fedora Cloud image (use 34-cloud-base for Fedora Linux 34 when available):
```
$ vagrant init fedora/33-cloud-base
```
This creates a Vagrant file in the project directory. Edit the Vagrant file to look like the example below. DNF will likely fail with the default memory settings for _libvirt_, so the example Vagrant file below provides additional memory to the virtual machine. The example below also shares the host's _/srv/repodir_ with the virtual machine. The shared directory will have the same path in the virtual machine: _/srv/repodir_. The Vagrant file can be downloaded from [github][12].
```
# -*- mode: ruby -*-
# vi: set ft=ruby :
# define repo directory; same name on host and vm
REPO_DIR = "/srv/repodir"
Vagrant.configure("2") do |config|
  config.vm.box = "fedora/33-cloud-base"

  config.vm.provider :libvirt do |v|
    v.memory = 2048
    # v.cpus = 2
  end

  # share the local repository with the vm at the same location
  config.vm.synced_folder REPO_DIR, REPO_DIR

  # ansible provisioner - commented out by default
  # the ansible role is installed into a path defined by
  # ansible.galaxy_roles_path below. The extra_vars are ansible
  # variables passed to the playbook.
  #
  # config.vm.provision "ansible" do |ansible|
  #   ansible.verbose = "v"
  #   ansible.playbook = "ansible/playbook.yaml"
  #   ansible.extra_vars = {
  #     repo_dir: REPO_DIR,
  #     dnf_update: false
  #   }
  #   ansible.galaxy_role_file = "ansible/requirements.yaml"
  #   ansible.galaxy_roles_path = "ansible/roles"
  # end
end
```
Once you have Vagrant managing a Fedora Linux virtual machine, you can install the plugin manually. SSH into the virtual machine:
```
$ vagrant ssh
```
When you are at a command prompt in the virtual machine, repeat the steps from the _Install and configure the DNF local plugin_ section above. The Vagrant configuration file should have already made _/srv/repodir_ from the host available in the virtual machine at the same path.
If you are working with several virtual machines or repeatedly re-initializing a new virtual machine, then some simple automation becomes useful. As with the network example above, I use Ansible to automate this process.
In the [vagrant-example directory][12] on github, you will see an _ansible_ subdirectory. Edit the Vagrant file and remove the comment marks under the _ansible provisioner_ section. Make sure the _ansible_ directory and its contents (_playbook.yaml_, _requirements.yaml_) are in the project directory.
After you've uncommented the lines, the _ansible provisioner_ section in the Vagrant file should look similar to the following:
```
....
  # ansible provisioner
  # the ansible role is installed into a path defined by
  # ansible.galaxy_roles_path below. The extra_vars are ansible
  # variables passed to the playbook.
  #
  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "v"
    ansible.playbook = "ansible/playbook.yaml"
    ansible.extra_vars = {
      repo_dir: REPO_DIR,
      dnf_update: false
    }
    ansible.galaxy_role_file = "ansible/requirements.yaml"
    ansible.galaxy_roles_path = "ansible/roles"
  end
....
```
Ansible must be installed (_sudo dnf install ansible_). Note that there are significant changes to how Ansible is packaged beginning with Fedora Linux 34 (use _sudo dnf install ansible-base ansible-collections*_).
If you run Vagrant now (or reprovision: _vagrant provision_), Ansible will automatically download an Ansible role that installs the DNF local plugin. It will then use the downloaded role in a playbook. You can _vagrant ssh_ into the virtual machine to verify that the plugin is installed and to verify that rpms are coming from the DNF local repository instead of a mirror.
#### Use case 3: container builds
Container images are a common way to distribute and run applications. If you are a developer or enthusiast using Fedora Linux containers as a foundation for applications or services, you will likely use _dnf_ to update the container during the development/build process. Application development is iterative and can result in repeated executions of _dnf_ pulling the same RPM set from Fedora Linux mirrors. If you cache these rpms locally then you can speed up the container build process by retrieving them from the local cache instead of re-downloading them over the network each time. One way to accomplish this is to create a custom Fedora Linux container image with the DNF local plugin installed and configured to use a local repository on the host workstation. Fedora Linux offers _podman_ and _buildah_ for managing the container build, run and test life cycle. See the Fedora Magazine post [_How to build Fedora container images_][13] for more about managing containers on Fedora Linux.
Note that the _fedora_minimal_ container uses _microdnf_ by default, which does not support plugins. The _fedora_ container, however, uses _dnf_.
A script that uses _buildah_ and _podman_ to create a custom Fedora Linux image named _myFedora_ is provided below. The script creates a mount point for the local repository at _/srv/repodir_. The below script is also available in the [_container-example_][14] directory of the github repository. It is named _base-image-build.sh_.
```
#!/bin/bash
set -x
# bash script that creates a 'myfedora' image from fedora:latest.
# Adds dnf-local-plugin, points plugin to /srv/repodir for local
# repository and creates an external mount point for /srv/repodir
# that can be used with a -v switch in podman/docker
# custom image name
custom_name=myfedora
# scratch conf file name
tmp_name=local.conf
# location of plugin config file
configuration_name=/etc/dnf/plugins/local.conf
# location of repodir on container
container_repodir=/srv/repodir
# create scratch plugin conf file for container
# using repodir location as set in container_repodir
cat <<EOF > "$tmp_name"
[main]
enabled = true
repodir = $container_repodir
[createrepo]
enabled = true
# If you want to speedup createrepo with the --cachedir option. Eg.
# cachedir = /tmp/createrepo-local-plugin-cachedir
# quiet = true
# verbose = false
EOF
# pull registry.fedoraproject.org/fedora:latest
podman pull registry.fedoraproject.org/fedora:latest
#start the build
mkdev=$(buildah from fedora:latest)
# tag author
buildah config --author "$USER" "$mkdev"
# install dnf-local-plugin, clean
# do not run update as local repo is not operational
buildah run "$mkdev" -- dnf --nodocs -y install python3-dnf-plugin-local createrepo_c
buildah run "$mkdev" -- dnf -y clean all
# create the repo dir
buildah run "$mkdev" -- mkdir -p "$container_repodir"
# copy the scratch plugin conf file from host
buildah copy "$mkdev" "$tmp_name" "$configuration_name"
# mark container repodir as a mount point for host volume
buildah config --volume "$container_repodir" "$mkdev"
# create myfedora image
buildah commit "$mkdev" "localhost/$custom_name:latest"
# clean up working image
buildah rm "$mkdev"
# remove scratch file
rm $tmp_name
```
Given normal security controls for containers, you usually run this script with _sudo_, and likewise use _sudo_ when you use the _myFedora_ image in your development process.
```
$ sudo ./base-image-build.sh
```
To list the images stored locally and see both _fedora:latest_ and _myfedora:latest_ run:
```
$ sudo podman images
```
To run the _myFedora_ image as a container and get a bash prompt in the container run:
```
$ sudo podman run -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash
```
Podman also allows you to run containers rootless (as an unprivileged user). Run the script without _sudo_ to create the _myfedora_ image and store it in the unprivileged users image repository:
```
$ ./base-image-build.sh
```
In order to run the _myfedora_ image as a rootless container on a Fedora Linux host, an additional flag is needed. Without the extra flag, SELinux will block access to _/srv/repodir_ on the host.
```
$ podman run --security-opt label=disable -ti -v /srv/repodir:/srv/repodir:Z myfedora /bin/bash
```
By using this custom image as the base for your Fedora Linux containers, the iterative building and development of applications or services on them will be faster.
**Bonus points**: for even better _dnf_ performance, Dan Walsh describes how to share _dnf_ metadata between a host and container using a file overlay (see <https://www.redhat.com/sysadmin/speeding-container-buildah>). This technique will work in combination with a shared local repository only if the host and the container use the same local repository. The _dnf_ metadata cache includes metadata for the local repository under the name `_dnf_local`.
I have created a container file that uses _buildah_ to do a _dnf_ update on a _fedora:latest_ image. I've also created a container file to repeat the process using a _myfedora_ image. There are 53 MB and 111 RPMs in the _dnf_ update. The only difference between the images is that _myfedora_ has the DNF local plugin installed. Using the local repository cut the elapsed time by more than half in this example and saved 53 MB of internet bandwidth.
With the _fedora:latest_ image the command and elapsed time is:
```
# sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -f Containerfile.3 .
128 Elapsed Time: 0:48.06
```
With the _myfedora_ image, the command and elapsed time is less than half of the base run. The **:Z** on the **-v** volume below is required when running the container on an SELinux-enabled host.
```
# sudo time -f "Elapsed Time: %E" buildah bud -v /var/cache/dnf:/var/cache/dnf:O -v /srv/repodir:/srv/repodir:Z -f Containerfile.4 .
133 Elapsed Time: 0:19.75
```
### Repository management
The local repository will accumulate files over time. Among the files will be many versions of rpms that change frequently. The kernel rpms are one such example. A system upgrade (for example upgrading from Fedora Linux 33 to Fedora Linux 34) will copy many rpms into the local repository. The _dnf repomanage_ command can be used to remove outdated rpm archives. I have not used the plugin long enough to explore this. The interested and knowledgeable reader is welcome to write an article about the _dnf repomanage_ command for Fedora Magazine.
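For reference, the cleanup that _dnf repomanage_ enables might look roughly like the sketch below. I have not battle-tested this; it assumes the local repository lives at _/srv/repodir_ as in the earlier examples, and that _dnf-plugins-core_ and _createrepo_c_ are installed. Review the file list before deleting anything.
```
# list everything except the newest version of each rpm in the local repository
dnf repomanage --old /srv/repodir

# remove the outdated rpms, then rebuild the repository metadata
dnf repomanage --old /srv/repodir | xargs --no-run-if-empty rm -f
createrepo_c --update /srv/repodir
```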
Finally, I keep the _x86_64_ rpms for my workstation, virtual machines and containers in a local repository that is separate from the _aarch64_ local repository for the Raspberry Pis and (future) containers hosting my Kubernetes cluster. I have separated them for reasons of convenience and happenstance. A single repository location should work across all architectures.
### An important note about Fedora Linux system upgrades
Glenn Johnson has more than four years of experience with the DNF local plugin. On occasion he has experienced problems when upgrading to a new release of Fedora Linux with the DNF local plugin enabled. Glenn strongly recommends that the _enabled_ attribute in the plugin configuration file _/etc/dnf/plugins/local.conf_ be set to **false** before upgrading your systems to a new Fedora Linux release. After the system upgrade, re-enable the plugin. Glenn also recommends using a separate local repository for each Fedora Linux release. For example, an NFS server might export _/volume1/dnf-repo/33_ for Fedora Linux 33 systems only. Glenn hangs out on fedoraforum.org, an independent online resource for Fedora Linux users.
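As a minimal sketch of that toggle, assuming the stock configuration file shown in the container script above (note that a plain _sed_ like this flips the _enabled_ line in both the _[main]_ and _[createrepo]_ sections, which is harmless for the duration of an upgrade):
```
# disable the plugin before the release upgrade...
sudo sed -i 's/^enabled = true/enabled = false/' /etc/dnf/plugins/local.conf

# ...then re-enable it once the upgrade has finished
sudo sed -i 's/^enabled = false/enabled = true/' /etc/dnf/plugins/local.conf
```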
### Summary
The DNF local plugin has been beneficial to my ongoing work with a Fedora Linux-based Kubernetes cluster. The containers and virtual machines running on my Fedora Linux desktop have also benefited. I appreciate how it supplements the existing DNF process and does not dictate any changes to how I update my systems or how I work with containers and virtual machines. I also appreciate not having to download the same set of RPMs multiple times, which saves me money, frees up bandwidth, and reduces the load on the Fedora Linux mirror hosts. Give it a try and see if the plugin will help in your situation!
Thanks to Glenn Johnson for his post on the DNF local plugin, which started this journey, and for his helpful reviews of this post.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/use-the-dnf-local-plugin-to-speed-up-your-home-lab/
作者:[Brad Smith][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/buckaroogeek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/dnf-local-816x345.jpg
[2]: https://unsplash.com/@_s9h8_?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://dnf-plugins-core.readthedocs.io/en/latest/local.html
[5]: https://forums.fedoraforum.org/showthread.php?318812-dnf-plugin-local
[6]: https://fedoraproject.org/wiki/%20Infrastructure/Mirroring#How_can_someone_make_a_private_mirror
[7]: https://www.redhat.com/sysadmin/nfs-server-client
[8]: https://fedoramagazine.org/using-ansible-setup-workstation/
[9]: https://alt.fedoraproject.org/cloud/
[10]: https://vagrantup.com/
[11]: https://fedoramagazine.org/vagrant-qemukvm-fedora-devops-sysadmin/
[12]: https://github.com/buckaroogeek/dnf-local-plugin-examples/tree/main/vagrant-example
[13]: https://fedoramagazine.org/how-to-build-fedora-container-images/
[14]: https://github.com/buckaroogeek/dnf-local-plugin-examples/tree/main/container-example

View File

@ -0,0 +1,87 @@
[#]: subject: (WASI, Bringing WebAssembly Way Beyond Browsers)
[#]: via: (https://www.linux.com/news/wasi-bringing-webassembly-way-beyond-browsers/)
[#]: author: (Dan Brown https://training.linuxfoundation.org/announcements/wasi-bringing-webassembly-way-beyond-browsers/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
WASI, Bringing WebAssembly Way Beyond Browsers
======
_By Marco Fioretti_
[WebAssembly (Wasm)][1] is a binary software format that all browsers can run directly, [safely][2] and at near-native speeds, on any operating system (OS). Its biggest promise, however, is to eventually work in the same way [everywhere][3], from IoT devices and edge servers, to mobile devices and traditional desktops. This post introduces the main interface that should make this happen. The next post in this series will describe some of the already available, real-world implementations and applications of the same interface.
**What is portability, again?**
To be safe and portable, software code needs, as a minimum: 
1. guarantees that users and programs can do only what they actually _have_ the right to do, and can do it without creating problems for other programs or users
2. standard, platform-independent methods to declare and apply those guarantees
Traditionally, these services are provided by libraries of “system calls” for each language, that is, functions with which a software program can ask its host OS to perform some low-level or sensitive task. When those libraries follow standards like [POSIX][4], any compiler can automatically combine them with the source code to produce a binary file that can run on _some_ combination of OSes and processors.
**The next level: BINARY compatibility**
System calls only make _source code_ portable across platforms. As useful as they are, they still force developers to generate platform-specific executable files, all too often from more or less different combinations of source code.
WebAssembly instead aims to get to the next level: use any language you want, then compile it once, to produce one binary file that will _just run_, securely, in any environment that recognizes WebAssembly. 
**What Wasm does not need to work outside browsers**
Since WebAssembly already “compiles once” for all major browsers, the easiest way to expand its reach may seem to be creating, for every target environment, a full virtual machine (runtime) that provides everything a Wasm module expects from Firefox or Chrome.
Work like that, however, would be _really_ complex, and above all simply unnecessary, if not impossible, in many cases (e.g. on IoT devices). Besides, there are better ways to secure Wasm modules than dumping them into the one-size-fits-all sandboxes that browsers use today.
**The solution? A virtual operating system and runtime**
Fully portable Wasm modules cannot happen as long as, to give one practical example, access to webcams or websites can only be written with system calls that generate platform-dependent machine code.
Consequently, the most practical way to get such modules, from _any_ programming language, seems to be that of the [WebAssembly System Interface (WASI) project][5]: write and compile code for only _one, obviously virtual,_ but complete operating system.
On one hand WASI gives to all the developers of [Wasm runtimes][6] one single OS to emulate. On the other, WASI gives to all programming languages one set of system calls to talk to that same OS.
In this way, a _binary_ Wasm module calling a certain WASI function would get a different binary object from each runtime that launched it, even if you loaded it on ten different platforms. But since all those objects would interact with that single Wasm module in exactly the same way, it would not matter!
This approach would also work in the first use case of WebAssembly, that is, with the JavaScript virtual machines inside web browsers. To run Wasm modules that use WASI calls, those machines would only need to load the JavaScript versions of the corresponding libraries.
This OS-level emulation is also more secure than simple sandboxing. With WASI, any runtime can implement different versions of each system call, with different security privileges, as long as they all follow the specification. Then that runtime could place every instance of every Wasm module it launches into a separate sandbox, containing only the smallest and least privileged combination of functions that that specific instance really needs.
This “principle of least privilege”, or “[capability-based security model][7]”, is everywhere in WASI. A WASI runtime can pass into a sandbox an instance of the “open” system call that is only capable of opening the specific files, or folders, that were _pre-selected_ by the runtime itself. This is more robust, much more granular control over what programs can do than would be possible with traditional file permissions, or even with chroot systems.
Coding-wise, functions for things like basic management of files, folders, network connections, or time are needed by almost any program. Therefore, the corresponding WASI interfaces are designed to be as similar as possible to their POSIX equivalents, and they are all packaged into one “wasi-core” module that every WASI-compliant runtime must contain.
A version of the [libc][8] standard C library, rewritten using wasi-core functions, is already available and, [according to its developers][9], already “sufficiently stable and usable for many purposes”.
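To make this concrete, here is a hedged sketch of what the workflow looks like with tools that implement these pieces today. It assumes a clang toolchain with a wasm32-wasi sysroot (for example from the wasi-sdk) and the wasmtime runtime, neither of which is mandated by WASI itself:
```
# compile a C program once against wasi-libc, producing one portable binary
clang --target=wasm32-wasi -o hello.wasm hello.c

# run it with a WASI runtime; --dir pre-selects the only directory the
# module's "open" calls may touch (the capability model described above)
wasmtime run --dir=. hello.wasm
```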
All the other virtual interfaces that WASI includes, or will include over time, are standardized and packaged as separate modules, without forcing any runtime to support all of them. In the next article, we will see how some of these WASI components are already used today.
The post [WASI, Bringing WebAssembly Way Beyond Browsers][10] appeared first on [Linux Foundation Training][11].
--------------------------------------------------------------------------------
via: https://www.linux.com/news/wasi-bringing-webassembly-way-beyond-browsers/
作者:[Dan Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://training.linuxfoundation.org/announcements/wasi-bringing-webassembly-way-beyond-browsers/
[b]: https://github.com/lujun9972
[1]: https://training.linuxfoundation.org/announcements/an-introduction-to-webassembly/
[2]: https://training.linuxfoundation.org/announcements/webassembly-security-now-and-in-the-future/
[3]: https://webassembly.org/docs/non-web/
[4]: https://www.gnu.org/software/libc/manual/html_node/POSIX.html#POSIX
[5]: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/
[6]: https://github.com/appcypher/awesome-wasm-runtimes
[7]: https://github.com/WebAssembly/WASI/blob/main/docs/WASI-overview.md#capability-oriented
[8]: https://en.wikipedia.org/wiki/C_standard_library
[9]: https://github.com/WebAssembly/wasi-libc
[10]: https://training.linuxfoundation.org/announcements/wasi-bringing-webassembly-way-beyond-browsers/
[11]: https://training.linuxfoundation.org/

View File

@ -0,0 +1,101 @@
[#]: subject: (How I digitized my CD collection with open source tools)
[#]: via: (https://opensource.com/article/21/4/digitize-cd-open-source-tools)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How I digitized my CD collection with open source tools
======
Clean off your shelves by ripping your CDs and tagging them for easy
playback across your home network.
![11 CDs in a U shape][1]
The restrictions on getting out and about during the pandemic occasionally remind me that time is slipping by—although some days, "slipping" doesn't quite feel like the right word. But it also reminds me there are more than a few tasks around the house that can be great for restoring the sense of accomplishment that so many of us have missed.
One such task, in my home anyway, is converting our CD collection to [FLAC][2] and storing the files on our music server's hard drive. Considering we don't have a huge collection (at least, by some people's standards), I'm surprised we still have so many CDs awaiting conversion—even excluding all the ones that fail to impress and therefore don't merit the effort.
As for that ticking clock—who knows how much longer the CD player will continue working or the CD-ROM drive in the old computer will remain in service? Plus, I'd rather have the CDs shelved in the basement storage instead of cluttering up the family room.
So, here I sit on a rainy Sunday afternoon with a pile of classical music CDs, ready to go…
### Ripping CDs
I like using the open source [Asunder CD ripper][3]. It's a simple and straightforward tool that uses the [cdparanoia][4] tool to handle the conversion chores. This image shows it working away on an album.
![Asunder][5]
(Chris Hermansen, [CC BY-SA 4.0][6])
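For the curious, a rough command-line equivalent of what Asunder drives under the hood might look like the following sketch, assuming the cdparanoia and flac packages are installed and an audio CD is in the drive:
```
# rip every audio track on the disc to trackNN.cdda.wav files
cdparanoia -B

# encode the ripped WAV files to FLAC at maximum compression
flac --best *.wav
```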
When I fired up Asunder, I was surprised that its Compact Disc Database (CDDB) lookup feature didn't seem to find any matching info. A quick online search led me to a Linux Mint forum discussion that [offered alternatives][7] for the freedb.freedb.org online service, which apparently is no longer working. I first tried using gnudb.gnudb.org with no appreciably better result; plus, the suggested link to gnudb.org/howto.php upset Firefox due to an expired certificate.
Next, I tried the freedb.freac.org service (note that it is on port 80, not 8880, as was freedb.freedb.org), which worked well for me… with one notable exception: The contributed database entries don't seem to understand the difference between "artist" (or "performer") and "composer." This isn't a huge problem for popular music, but having JS Bach as the "artist" seems a bit incongruous since he never made it to a recording studio, as far as I know.
Quite a few of the tracks I converted identified the composer in the track title, but if there's one thing I've learned, your metadata can never be too correct. This leads me to the issue of tag editing, or curating the collection.
Oh wait, there's another reason for tag editing, too, at least when using Asunder to rip: getting the albums' cover images.
### Editing tags and curating the collection
My open source go-to tool for [music tag editing continues to be EasyTag][8]. I use it a lot, both for downloads I purchase (it's amazing how messed up their tags can be, and some download services offer untagged WAV format files) and for tidying up the CDs I rip.
Take a look at what Asunder has (and hasn't) accomplished from EasyTag's perspective. One of the CDs I ripped included Ravel's _Daphnis et Chloé Suites 1 and 2_ and Strauss' _Don Quixote_. The freedb.freac.org database seemed to think that the composers Maurice Ravel and Richard Strauss were the artists performing the work, but the artist on this album is the wonderful London Symphony Orchestra led by André Previn. In Asunder, I clicked the "single artist" checkbox and changed the artist name to the LSO. Here's what it looks like in EasyTag:
![EasyTag][9]
(Chris Hermansen, [CC BY-SA 4.0][6])
It's not quite there! But in EasyTag, I can select the first six tracks, tagging the composer on all the files by clicking on that little "A" icon on the right of the Composer field:
![Editing tags in EasyTag][10]
(Chris Hermansen, [CC BY-SA 4.0][6])
I can set the remaining 13 similarly, then select the whole lot and set the Album Artist as well. Finally, I can flip to the Images tab and find and set the album cover image.
Speaking of images, I've found it wise to always name the image "cover.jpg" and make sure it's in the directory with the FLAC files… some players aren't happy with PNG files, some want the file in the same directory, and some are just plain difficult to get along with, as far as images go.
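If you want to go a step further, you can also embed the cover into the files themselves, so players that ignore cover.jpg still show art. A sketch, assuming the flac tools (which include metaflac) are installed and you run it inside the album directory:
```
# embed cover.jpg into every FLAC file in the current directory
for f in *.flac; do
    metaflac --import-picture-from=cover.jpg "$f"
done
```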
What is your favorite open source CD ripping tool? How about the open source tool you like to use to fix your metadata? Let me know in the comments below!
### And speaking of music…
I haven't been as regular with my music and open source column over the past year as I was in previous years. Although I didn't acquire a lot of new music in 2020 and 2021, a few jewels still came my way…
As always, [Erased Tapes][11] continues to develop an amazing collection of hmmm… what would you call it, anyway? The site uses the terms "genre-defying" and "avant-garde," which don't seem overblown for once. A recent favorite is Rival Consoles' [_Night Melody Articulation_][12], guaranteed to transport me from the day-to-day grind to somewhere else.
I've been a huge fan of [Gustavo Santaolalla][13] since I first heard his music on a road trip from Coyhaique to La Tapera in Chile's Aysén Region. You might be familiar with his film scores to _Motorcycle Diaries_ or _Brokeback Mountain_. I recently picked up [_Qhapaq Ñan_, music about the Inca Trail][14], on the Linux-friendly music site [7digital][15], which has a good selection of his work.
Finally, and continuing with the Latin American theme, The Queen's Six recording [_Journeys to the New World_][16] is not to be missed. It is available in FLAC format (including high-resolution versions) from the Linux-friendly [Signum Classics][17] site.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/digitize-cd-open-source-tools
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_cd_dvd.png?itok=RBwVIzmi (11 CDs in a U shape)
[2]: https://en.wikipedia.org/wiki/FLAC
[3]: https://opensource.com/article/17/2/open-music-tagging
[4]: https://www.xiph.org/paranoia/
[5]: https://opensource.com/sites/default/files/uploads/asunsder.png (Asunder)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://forums.linuxmint.com/viewtopic.php?t=322415
[8]: https://opensource.com/article/17/5/music-library-tag-management-tools
[9]: https://opensource.com/sites/default/files/uploads/easytag.png (EasyTag)
[10]: https://opensource.com/sites/default/files/uploads/easytag_editing-tags.png (Editing tags in EasyTag)
[11]: https://www.erasedtapes.com/about
[12]: https://www.erasedtapes.com/release/eratp139-rival-consoles-night-melody-articulation
[13]: https://en.wikipedia.org/wiki/Gustavo_Santaolalla
[14]: https://ca.7digital.com/artist/gustavo-santaolalla/release/qhapaq-%C3%B1an-12885504?f=20%2C19%2C12%2C16%2C17%2C9%2C2
[15]: https://ca.7digital.com/search/release?q=gustavo%20santaolalla&f=20%2C19%2C12%2C16%2C17%2C9%2C2
[16]: https://signumrecords.com/product/journeys-to-the-new-world-hispanic-sacred-music-from-the-16th-17th-centuries/SIGCD626/
[17]: https://signumrecords.com/

View File

@ -0,0 +1,79 @@
[#]: subject: (How to Download Ubuntu via Torrent [Absolute Beginners Tip])
[#]: via: (https://itsfoss.com/download-ubuntu-via-torrent/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to Download Ubuntu via Torrent [Absolute Beginners Tip]
======
Downloading Ubuntu is pretty straightforward. You go to its [official website][1]. Click on the [desktop download section][2], select the appropriate Ubuntu version and hit the download button.
![][3]
Ubuntu is available as a single image of more than 2.5 GB in size. The direct download works well for people with a high-speed internet connection.
However, if you have a slow or inconsistent internet connection, you'll have a difficult time downloading such a big file. The download may be interrupted several times in the process or may take several hours.
![Direct download may take several hours for slow internet connections][4]
### Downloading Ubuntu via Torrent
If you also suffer from limited data or a slow internet connection, using a download manager or torrent would be a better option. I am not going to discuss what a torrent is in this quick tutorial. Just know that with torrents, you can download a large file across a number of sessions.
The good thing is that Ubuntu actually provides downloads via torrents. The bad thing is that this option is hidden on the website and difficult to find if you are not familiar with it.
If you want to download Ubuntu via torrent, go to your chosen Ubuntu version's section and look for **alternative downloads**.
![][5]
**Click on this “alternative downloads” link** and it will open a new web page. **Scroll down** on this page to see the BitTorrent section. You'll see the option to download the torrent files for all the available versions. If you are going to use Ubuntu on your personal computer or laptop, you should go with the desktop version.
![][6]
Read [this article to get some guidance on which Ubuntu version][7] you should be using. Considering that you are going to use this distribution, having some idea about [Ubuntu LTS and non-LTS releases would be helpful][8].
#### How do you use the download torrent file for getting Ubuntu?
I presume that you know how to use torrents. If not, let me quickly summarize it for you.
You have downloaded a .torrent file of a few KB in size. You need to download and install a torrent application like uTorrent, Deluge, or BitTorrent.
I recommend using [uTorrent][9] on Windows. If you are using some Linux distribution, you should already have a [torrent client like Transmission][10]. If not, you can install one from your distribution's software manager.
Once you have installed the torrent application, run it. Now drag and drop the .torrent file you downloaded from the Ubuntu website. You may also use the “Open With” option from the menu.
Once the torrent file has been added to the torrent application, it starts downloading the file. If you turn off the system, the download is paused. Start the torrent application again and the download resumes from the same point.
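If you are comfortable with the terminal, a command-line client works too. Here is a sketch using Transmission's CLI tool; the package name and the ISO file name below are assumptions, so adjust them for your system:
```
# download the ISO described by the .torrent file into ~/Downloads
transmission-cli -w ~/Downloads ubuntu-desktop-amd64.iso.torrent

# optionally verify the finished ISO against Ubuntu's published checksums
sha256sum ~/Downloads/ubuntu-desktop-amd64.iso
```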
When the download is 100% complete, you can use it to [install Ubuntu afresh][11] or in [dual boot with Windows][12].
Enjoy Ubuntu :)
--------------------------------------------------------------------------------
via: https://itsfoss.com/download-ubuntu-via-torrent/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://ubuntu.com
[2]: https://ubuntu.com/download/desktop
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/download-ubuntu.png?resize=800%2C325&ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/slow-direct-download-ubuntu.png?resize=800%2C365&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/ubuntu-torrent-download.png?resize=800%2C505&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/ubuntu-torrent-download-option.png?resize=800%2C338&ssl=1
[7]: https://itsfoss.com/which-ubuntu-install/
[8]: https://itsfoss.com/long-term-support-lts/
[9]: https://www.utorrent.com/
[10]: https://itsfoss.com/best-torrent-ubuntu/
[11]: https://itsfoss.com/install-ubuntu/
[12]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/

View File

@ -0,0 +1,181 @@
[#]: subject: (Hyperbola Linux Review: Systemd-Free Arch With Linux-libre Kernel)
[#]: via: (https://itsfoss.com/hyperbola-linux-review/)
[#]: author: (Sarvottam Kumar https://itsfoss.com/author/sarvottam/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Hyperbola Linux Review: Systemd-Free Arch With Linux-libre Kernel
======
In the last month of 2019, the Hyperbola project made the [major decision][1] to ditch Linux in favor of OpenBSD. We also had a [chat][2] with Hyperbola co-founder Andre Silva, who detailed the reasons for dropping Hyperbola OS and starting the new HyperbolaBSD.
HyperbolaBSD is still under development and its alpha release will be ready by September 2021 for initial testing. The current Hyperbola GNU/Linux-libre v0.3.1 Milky Way will be supported until the legacy [Linux-libre kernel][3] reaches the end of life in 2022.
I thought of giving it a try before it goes away and switches to BSD completely.
### What is Hyperbola GNU/Linux-libre?
![][4]
Back in April 2017, the Hyperbola project was started by its [six co-founders][5] with the aim of delivering a lightweight, stable, secure, freedom- and privacy-focused operating system.
Subsequently, the first stable version of Hyperbola GNU/Linux-libre arrived in July 2017. It was based on Arch Linux snapshots combined with Debian development.
But, unlike Arch's rolling release model, Hyperbola GNU/Linux-libre follows a Long Term Support (LTS) model.
Also, instead of a generic Linux kernel, it includes GNU operating system components and the Linux-libre kernel. Most importantly, Hyperbola is also one of the distributions without the systemd init system.
Even though systemd is widely adopted by major Linux distributions like Ubuntu, Hyperbola replaced it with OpenRC as the default init system. v0.1 of Hyperbola was the first and last version to support systemd.
Moreover, Hyperbola put high emphasis on Keep It Simple Stupid (KISS) methodology. It provides packages for i686 and x86_64 architecture that meets GNU Free System Distribution Guidelines (GNU FSDG).
Not just that, but it also has its own social contract and packaging guidelines that follow the philosophy of the Free Software Movement.
Hence, Free Software Foundation [recognized][6] Hyperbola GNU/Linux-libre as the first completely free Brazilian operating system in 2018.
### Downloading Hyperbola GNU/Linux-libre 0.3.1 Milky Way
The Hyperbola project provides [two live images][7] for installation: one is the regular Hyperbola and the other is Hypertalking. Hypertalking is the ISO optimized and adapted for blind and visually impaired users.
Interestingly, if you already use Arch Linux or an Arch-based distribution like Parabola, you don't need to download a live image. You can easily migrate to Hyperbola by following the official [Arch][8] or [Parabola][9] migration guide.
The ISO image is around 650 MB and contains only essential packages (no desktop environment), booting into a command-line interface only.
### Hardware requirements for Hyperbola
For v0.3.1 (x86_64), you need any 64-bit processor and a minimum of 47 MiB (installed OS) or 302 MiB (live image) of RAM for text mode only, with no desktop environment.
For v0.3.1 (i686), you need at least an Intel Pentium II or AMD Athlon CPU and a minimum of 33 MiB (installed OS) or 252 MiB (live image) of RAM for text mode only, with no desktop environment.
### Installing Hyperbola Linux from scratch
Currently, I don't use Arch or a Parabola distribution. Hence, instead of migrating, I chose to install Hyperbola Linux from scratch.
I also mostly don't dual boot distributions unknown to me on my hardware, as that may create unforeseen problems. So, I decided to use the wonderful GNOME Boxes app to set up a Hyperbola virtual machine with up to 2 GB of RAM and 22 GB of free disk space.
Similar to Arch, Hyperbola does not come with a graphical user interface (GUI) installer. That means you need to set up almost everything from scratch using a command-line interface (CLI).
This also makes it clear that Hyperbola is definitely not for beginners or for those afraid of the command line.
However, Hyperbola does provide separate [installation instructions][10], especially for beginners. But I think they still miss several steps that can trouble beginners during the installation process.
For instance, they do not guide you through connecting to the network, setting up a new user account, or installing a desktop environment.
Hence, there is also another Hyperbola [installation guide][11] that you need to refer to in case you're stuck at any step.
As I booted the live image, the boot menu showed options to install for either 64-bit or 32-bit architecture.
![Live Image Boot Menu][12]
Next, following the installation instructions, I went through setting up the disk partitions, date and time, language, and the password for the root user.
![Disk partition][13]
Once everything was set up, I then installed the most common [Grub bootloader][14] and rebooted the system. Phew! Up to this point all had gone well, and I could log in to my Hyperbola system.
![text mode][15]
### Installing Xfce desktop in Hyperbola Linux
The command-line interface was working fine for me. But to get a graphical user interface, I now needed to manually choose and install a [desktop environment][16], as Hyperbola does not come with any default DE.
For the sake of simplicity and light weight, I chose the popular [Xfce desktop][17]. But before installing it, I also needed the Xorg [display server][18]. So, I installed it along with other important packages using the default pacman package manager.
![Install X.Org][19]
Later, I installed the LightDM cross-desktop [display manager][20], the Xfce desktop, and other necessary packages like elogind for managing user logins.
![Install Xfce desktop environment][21]
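The commands behind those screenshots look roughly like the sketch below. The exact package names are assumptions based on Arch conventions, so check Hyperbola's repositories before copying them:
```
# display server and basic X utilities
pacman -S xorg-server xorg-xinit

# Xfce desktop, the LightDM display manager, and elogind for login management
pacman -S xfce4 lightdm lightdm-gtk-greeter elogind
```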
After the Xfce installation, you also need to add the LightDM service to the default runlevel to automatically boot into GUI mode. Use the command below and reboot the system:
```
rc-update add lightdm default
reboot
```
![Add LightDM at runlevel][22]
#### Pacman Signature Error In Hyperbola Linux
While installing Xorg and Xfce on the latest Hyperbola v0.3.1, I encountered signature errors for some packages, showing “signature is marginal trust” or “invalid or corrupted package.”
![Signature Error In Hyperbola Linux][23]
After searching for a solution, I learned from the Hyperbola [forum][24] that the main author Emulatorman's keys expired on February 1, 2021.
Hence, until the author renews the key or the new version 0.4 arrives, you can change the `SigLevel` from “SigLevel = Required DatabaseOptional” to “SigLevel = Never” in the `/etc/pacman.conf` file to avoid this error.
![][25]
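If you prefer a one-liner over opening an editor, here is a sketch of the same change (run as root, back up `/etc/pacman.conf` first, and remember to revert it once the key is renewed):
```
# relax pacman's signature checking until the packager's key is renewed
sed -i 's/^SigLevel *= *Required DatabaseOptional/SigLevel = Never/' /etc/pacman.conf
```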
### Hyperbola Linux with Xfce desktop
![Hyperbola Linux With Xfce desktop][26]
Hyperbola GNU/Linux-libre with the Xfce 4.12 desktop gives a very clean, light, and smooth user experience. At its core, it runs Linux-libre 4.9 and the OpenRC 0.28 service manager.
![][27]
As Hyperbola does not come with customized desktops and tons of bloated software, it definitely gives you the flexibility and freedom to choose, install, and configure the services you want.
On the memory usage side, it takes around 205 MB of RAM (approx. 10%) while running no applications (except the terminal).
![][28]
### Is Hyperbola a suitable distribution for you?
In my experience, it is definitely not a [Linux distribution that I would like to suggest to complete beginners][29]. Well, the Hyperbola project does not even claim to be beginner-friendly.
If you're well-versed with the command line and have a good grasp of Linux concepts like disk partitioning, you can give it a try and decide for yourself. Spending time hacking around the installation and configuration process can teach you a lot.
Another thing that might matter in choosing Hyperbola Linux is the default init system. If you're looking for a systemd-free distribution that you can customize completely from scratch, there are few better options.
Last but not least, you should also consider the future of Hyperbola: it will no longer be based on the Linux kernel, as it is turning into HyperbolaBSD, with a kernel and userspace forked from OpenBSD.
If you've already tried or are currently using Hyperbola Linux, let us know about your experience in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/hyperbola-linux-review/
作者:[Sarvottam Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sarvottam/
[b]: https://github.com/lujun9972
[1]: https://www.hyperbola.info/news/announcing-hyperbolabsd-roadmap/
[2]: https://itsfoss.com/hyperbola-linux-bsd/
[3]: https://www.fsfla.org/ikiwiki/selibre/linux-libre/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/hyperbola-gnu-linux.png?resize=800%2C450&ssl=1
[5]: https://www.hyperbola.info/members/founders/
[6]: https://www.fsf.org/news/fsf-adds-hyperbola-gnu-linux-libre-to-list-of-endorsed-gnu-linux-distributions
[7]: https://wiki.hyperbola.info/doku.php?id=en:main:downloads&redirect=1
[8]: https://wiki.hyperbola.info/doku.php?id=en:migration:from_arch
[9]: https://wiki.hyperbola.info/doku.php?id=en:migration:from_parabola
[10]: https://wiki.hyperbola.info/doku.php?id=en:guide:beginners
[11]: https://wiki.hyperbola.info/doku.php?id=en:guide:installation
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Live-Image-Boot-Menu.png?resize=640%2C480&ssl=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/Disk-partition.png?resize=600%2C450&ssl=1
[14]: https://itsfoss.com/what-is-grub/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/text-mode.png?resize=600%2C450&ssl=1
[16]: https://itsfoss.com/what-is-desktop-environment/
[17]: https://xfce.org/
[18]: https://itsfoss.com/display-server/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/Install-xorg-package.png?resize=600%2C450&ssl=1
[20]: https://itsfoss.com/display-manager/
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Install-Xfce-desktop-environment-800x600.png?resize=600%2C450&ssl=1
[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Add-LightDM-at-runlevel.png?resize=600%2C450&ssl=1
[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Signature-Error-In-Hyperbola-Linux.png?resize=600%2C450&ssl=1
[24]: https://forums.hyperbola.info/viewtopic.php?id=493
[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/Configure-pacman-SigLevel.png?resize=600%2C450&ssl=1
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Hyperbola-Linux-With-Xfce-desktop.jpg?resize=800%2C450&ssl=1
[27]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/Hyperbola-System-Information.jpg?resize=800%2C450&ssl=1
[28]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/Memory-Usage.jpg?resize=800%2C450&ssl=1
[29]: https://itsfoss.com/best-linux-beginners/

View File

@ -0,0 +1,100 @@
[#]: subject: (4 steps to customizing your Mac terminal theme with open source tools)
[#]: via: (https://opensource.com/article/21/4/zsh-mac)
[#]: author: (Bryant Son https://opensource.com/users/brson)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
4 steps to customizing your Mac terminal theme with open source tools
======
Make your terminal window pretty on your Mac with open source tools.
![4 different color terminal windows with code][1]
Do you ever get bored with seeing the same old terminal window on your macOS computer? If so, add some bells and whistles to your view with the open source Oh My Zsh framework and Powerlevel10k theme.
This basic step-by-step walkthrough (including a video tutorial at the end) will get you started customizing your macOS terminal. If you're a Linux user, check out Seth Kenlon's guide to [Adding themes and plugins to Zsh][2] for in-depth guidance.
### Step 1: Install Oh My Zsh
[Oh My Zsh][3] is an open source, community-driven framework for managing your Z shell (Zsh) configuration.
![Oh My Zsh][4]
(Bryant Son, [CC BY-SA 4.0][5])
Oh My Zsh is released under the MIT License. Install it with:
```
$ sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```
### Step 2: Install Powerlevel10k fonts
![Powerlevel10k][6]
(Bryant Son, [CC BY-SA 4.0][5])
Powerlevel10k is an MIT-Licensed Zsh theme. Before installing Powerlevel10k, you will want to install custom fonts for your terminal.
Go to the [Powerlevel10k GitHub][7] page, and search for "fonts" in the README. The steps for installing the custom fonts will vary depending on your operating system; the video at the bottom of this page explains how to do it on macOS. It should be just a simple click, download, and install series of operations.
![Powerlevel10k fonts][8]
(Bryant Son, [CC BY-SA 4.0][5])
### Step 3: Install the Powerlevel10k theme
Next, install Powerlevel10k by running:
```
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
```
After you finish, open the `~/.zshrc` configuration file with a text editor, such as [Vim][9], set the line `ZSH_THEME="powerlevel10k/powerlevel10k"`, then save the file.
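If you'd rather not open an editor, BSD `sed` (the version that ships with macOS, hence the `-i ''`) can make the same edit; this sketch assumes your `~/.zshrc` already contains a `ZSH_THEME=` line, which the Oh My Zsh installer creates:
```
# point the ZSH_THEME line at Powerlevel10k
sed -i '' 's|^ZSH_THEME=.*|ZSH_THEME="powerlevel10k/powerlevel10k"|' ~/.zshrc
```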
### Step 4: Finalize your Powerlevel10k setup
Open a new terminal, and you should see the Powerlevel10k configuration wizard. If not, run `p10k configure` to bring up the configuration wizard. If you installed the custom fonts in Step 2, the icons and symbols should display correctly. Change the default font to **MesloLGS NF** (see the video below for instructions).
![Powerlevel10k configuration][10]
(Bryant Son, [CC BY-SA 4.0][5])
Once you complete the configuration, you should see a beautiful terminal.
![Oh My Zsh/Powerlevel10k theme][11]
(Bryant Son, [CC BY-SA 4.0][5])
If you want to see an interactive tutorial, please check out this video:
That's it! You should be ready to enjoy your beautiful new terminal. Be sure to check out other Opensource.com articles for more tips and articles on using the shell, Linux administration, and more.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/zsh-mac
作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos.png?itok=aOBLy7Ky (4 different color terminal windows with code)
[2]: https://opensource.com/article/19/9/adding-plugins-zsh
[3]: https://ohmyz.sh/
[4]: https://opensource.com/sites/default/files/uploads/1_ohmyzsh.jpg (Oh My Zsh)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/2_powerlevel10k.jpg (Powerlevel10k)
[7]: https://github.com/romkatv/powerlevel10k
[8]: https://opensource.com/sites/default/files/uploads/3_downloadfonts.jpg (Powerlevel10k fonts)
[9]: https://opensource.com/resources/what-vim
[10]: https://opensource.com/sites/default/files/uploads/4_p10kconfiguration.jpg (Powerlevel10k configuration)
[11]: https://opensource.com/sites/default/files/uploads/5_finalresult.jpg (Oh My Zsh/Powerlevel10k theme)

View File

@ -0,0 +1,282 @@
[#]: subject: (How to Deploy Seafile Server with Docker to Host Your Own File Synchronization and Sharing Solution)
[#]: via: (https://itsfoss.com/deploy-seafile-server-docker/)
[#]: author: (Hunter Wittenborn https://itsfoss.com/author/hunter/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to Deploy Seafile Server with Docker to Host Your Own File Synchronization and Sharing Solution
======
First off, what is Seafile?
[Seafile][1] is a self-hosted file synchronization program that works on the client-server model: you have several devices, like your laptop and phone, that connect to a central server.
Unlike some more popular alternatives like [Nextcloud or ownCloud][2], Seafile tries to follow the philosophy of “do one thing only, but do it well”. Accordingly, Seafile doesn't have extra goodies built in, like contacts or calendar integration.
Seafile instead focuses solely on file syncing, sharing, and the things surrounding it, and that's it. As a result, though, it ends up doing those things _extremely_ well.
### Deploying Seafile Server with Docker and NGINX
Advanced tutorial
Most tutorials on It's FOSS are focused on beginners. This one is not. It is intended for advanced users who tinker a lot with DIY projects and prefer to self-host.
This tutorial presumes that you are comfortable using the command line and that you are reasonably familiar with the programs we'll be using.
While the whole process could be done without using NGINX at all, using NGINX will allow for an easier setup, as well as making it significantly easier to self-host more services in the future.
If you want a full-on Docker setup, you could set up [NGINX inside of Docker][3] as well, but it would only make things more complex without adding much benefit, so it won't be covered in this tutorial.
#### Installing and Setting Up NGINX
_**I will be using Ubuntu in this tutorial and will thus be using apt to install packages. If you use Fedora or some other non-Debian distribution, please use your distribution's [package manager][4].**_
[NGINX][5], as well as being a web server, is what's known as a proxy. It will function as the connection between the Seafile server and the internet, while also making several tasks easier to deal with.
To install NGINX, use the following command:
```
sudo apt install nginx
```
If you want to use HTTPS (that little padlock in your browser), you will also need to install [Certbot][6]:
```
sudo apt install certbot python3-certbot-nginx
```
Next, you need to configure NGINX to connect to the Seafile instance that we will set up later.
First, run the following command:
```
sudo nano /etc/nginx/sites-available/seafile.conf
```
Enter the following text into the file:
```
server {
    server_name localhost;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
**Important**: Replace **localhost** on the **server_name** line with the address you'll be accessing your server from (e.g. **seafile.example.com** or **192.168.0.0**). Not sure what to put?
* If you are testing just for the sake of it, use localhost. This setup will **only allow you to access the server from your computer**, and that's it.
* If you want to use Seafile across your local WiFi connection (any device on the same WiFi network as you), you should enter [your computer's IP address][7]. You may also want to look into [setting a static IP address][8], though it isn't necessary.
* If you have a public IP address that you know points to your system, use that.
* If you have a domain name (e.g. **example.com**, **example.org**) _and_ a public IP address for your system, change your DNS settings to point the domain name to your system's IP address. This will also require the public IP address to point to your system.
Now you need to copy the config file to the directory NGINX looks at for files, then restart NGINX:
```
sudo ln -s /etc/nginx/sites-available/seafile.conf /etc/nginx/sites-enabled/seafile.conf
sudo systemctl restart nginx
```
If you set up Certbot, you'll also need to run the following to set up HTTPS:
```
sudo certbot
```
If asked to redirect HTTP traffic to HTTPS, choose **2**.
Now would be a good time to make sure everything we've set up so far is working. If you visit your site, you should get a screen that says something along the lines of `502 Bad Gateway`.
![][9]
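You can also check from the terminal by asking NGINX for just the response headers (replace the address with whatever you put on the **server_name** line); a 502 at this stage is expected, since nothing is listening behind the proxy yet:
```
# expect "HTTP/1.1 502 Bad Gateway" until the Seafile containers are running
curl -I http://localhost
```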
#### Install Docker and Docker Compose
Now to get into the fun stuff!
First things first, you need to have [Docker][10] and [Docker Compose][11] installed. Docker Compose is needed to utilize a docker-compose.yml file, which will make managing the various Docker [containers][12] Seafile needs easier.
Docker and Docker Compose can be installed with the following command:
```
sudo apt install docker.io docker-compose
```
To check if Docker is installed and running, run the following:
```
sudo docker run --rm hello-world
```
You should see something along the lines of this in your terminal if it completed successfully:
![][13]
If you would like to avoid adding `sudo` to the beginning of the `docker` command, you can run the following commands to add yourself to the `docker` group:
```
sudo groupadd docker
sudo usermod -aG docker $USER
```
The rest of this tutorial assumes you ran the above two commands. If you didn't, add `sudo` to all commands that start with `docker` or `docker-compose`.
#### Installing Seafile Server
This part is significantly easier than the part before this. All you need to do is put some text into a file and run a few commands.
Open up a terminal. Then create a directory where you'd like the contents of the Seafile server to be stored, and enter the directory:
```
mkdir ~/seafile-server && cd ~/seafile-server
```
![][14]
Go to the directory you created and run the following:
```
nano docker-compose.yml
```
Next, enter the text below into the window that pops up:
```
version: '2.0'
services:
  db:
    image: mariadb
    container_name: seafile-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - ./data/mariadb:/var/lib/mysql
    networks:
      - seafile-net

  memcached:
    image: memcached
    container_name: seafile-memcached
    entrypoint: memcached -m 256
    networks:
      - seafile-net

  seafile:
    image: seafileltd/seafile-mc
    container_name: seafile
    ports:
      - "8080:80"
    volumes:
      - ./data/app:/shared
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=password
      - TIME_ZONE=Etc/UTC
      - SEAFILE_ADMIN_EMAIL=me@example.com
      - SEAFILE_ADMIN_PASSWORD=password
      - SEAFILE_SERVER_LETSENCRYPT=false
      - SEAFILE_SERVER_HOSTNAME=docs.seafile.com
    depends_on:
      - db
      - memcached
    networks:
      - seafile-net

networks:
  seafile-net:
```
Before saving the file, a few things will need to be changed:
* **MYSQL_ROOT_PASSWORD**: Change this to a stronger password. You _don't_ need to remember it, so don't try to pick anything easy. If you need help making one, use a [password generator][15] or the sketch shown after this list. I'd recommend 20 characters and avoiding any special characters (all the **!@#$%^&*** symbols).
* **DB_ROOT_PASSWD**: Change to the value you set for **MYSQL_ROOT_PASSWORD**.
* **SEAFILE_ADMIN_EMAIL**: Sets the email address for the admin account.
* **SEAFILE_ADMIN_PASSWORD**: Sets the password for the admin account. Avoid making this the same as **MYSQL_ROOT_PASSWORD** or **DB_ROOT_PASSWD**.
* **SEAFILE_SERVER_HOSTNAME**: Set to the address you set in the NGINX configuration.
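As referenced above, here is one way to generate such a password from the terminal; this is just a sketch and assumes `openssl` is available:
```
# 24 random bytes, base64-encoded, with the special characters stripped out
openssl rand -base64 24 | tr -d '+/='
```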
With that done, you can bring up the whole thing with `docker-compose`:
```
docker-compose up -d
```
It might take a minute or two depending on your internet connection, as it has to pull down the several container images that Seafile needs to run.
After it's done, give it a few more minutes to finish up. You can also check its status by running the following:
```
docker logs seafile
```
When it's done, you'll see the following output:
![][17]
Next, just type the address you set for **SEAFILE_SERVER_HOSTNAME** into your browser, and you should be at a login screen.
![][18]
And there you go! Everything's now fully functional and ready to be used with the clients.
#### Installing the Seafile Clients
Seafile on mobile is available on [Google Play][19], [F-Droid][20], and on the [iOS App Store][21]. Seafile also has desktop clients available for Linux, Windows, and Mac, available [here][22].
Seafile is readily available on Ubuntu systems via the `seafile-gui` package:
```
sudo apt install seafile-gui
```
Seafile is also in the AUR for Arch users via the `seafile-client` package.
### Closing Up
Feel free to explore the clients and all they have to offer. I'll go into all of what the Seafile clients are capable of in a future article (stay tuned 😃).
If something's not working right, or you just have a question in general, feel free to leave it in the comments below, and I'll try to respond whenever I can!
--------------------------------------------------------------------------------
via: https://itsfoss.com/deploy-seafile-server-docker/
作者:[Hunter Wittenborn][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/hunter/
[b]: https://github.com/lujun9972
[1]: https://www.seafile.com/en/home/
[2]: https://itsfoss.com/nextcloud-vs-owncloud/
[3]: https://linuxhandbook.com/nginx-reverse-proxy-docker/
[4]: https://itsfoss.com/package-manager/
[5]: https://www.nginx.com/
[6]: https://certbot.eff.org/
[7]: https://itsfoss.com/check-ip-address-ubuntu/
[8]: https://itsfoss.com/static-ip-ubuntu/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/nginx_bad_gateway.png?resize=489%2C167&ssl=1
[10]: https://www.docker.com/
[11]: https://docs.docker.com/compose/
[12]: https://www.docker.com/resources/what-container
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/seafile-docker-helloworld.png?resize=752%2C416&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/seafile-dir.png?resize=731%2C174&ssl=1
[15]: https://itsfoss.com/password-generators-linux/
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/seafile-running.png?resize=752%2C484&ssl=1
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/seafile-login.jpg?resize=800%2C341&ssl=1
[19]: https://play.google.com/store/apps/details?id=com.seafile.seadroid2
[20]: https://f-droid.org/repository/browse/?fdid=com.seafile.seadroid2
[21]: https://itunes.apple.com/cn/app/seafile-pro/id639202512?l=en&mt=8
[22]: https://www.seafile.com/en/download/

View File

@ -0,0 +1,70 @@
[#]: subject: (Something bugging you in Fedora Linux? Lets get it fixed!)
[#]: via: (https://fedoramagazine.org/something-bugging-you-in-fedora-linux-lets-get-it-fixed/)
[#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Something bugging you in Fedora Linux? Let's get it fixed!
======
![][1]
Software has bugs. Any complicated system is guaranteed to have at least some bits that don't work as planned. Fedora Linux is a _very_ complicated system. It contains thousands of packages created by countless independent upstream projects around the world. There are also hundreds of updates every week. So, it's inevitable that problems creep in. This article addresses the bug fixing process and how some bugs may be prioritized.
### The release development process
As a Linux distribution project, we want to deliver a polished, “everything just works” experience to our users. Our release process starts with “Rawhide”. This is our development area where we integrate new versions of all that updated free and open source software. We're constantly improving our ongoing testing and continuous integration processes to make even Rawhide safe to use for the adventurous. By its nature, however, Rawhide will always be a little bit rough.
Twice a year we take that rough operating system and branch it for a beta release, and then a final release. As we do that, we make a concerted effort to find problems. We run Test Days to check on specific areas and features. “Candidate builds” are made which are checked against our [release validation test plan][2]. We then enter a “freeze” state where only approved changes go into the candidates. This isolates the candidate from the constant development (which still goes into Rawhide!) so new problems are not introduced.
Many bugs, big and small, are squashed as part of the release process. When all goes according to plan, we have a shiny new on-schedule Fedora Linux release for all of our users. (We've done this reliably and repeatedly for the last few years — thanks, everyone who works so hard to make it so!) If something is really wrong, we can mark it as a “release blocker”. That means we won't ship until it's fixed. This is often appropriate for big issues, and definitely turns up the heat and attention that bug gets.
Sometimes, we have issues that are persistent. Perhaps something that's been going on for a release or two, or where we don't have an agreed solution. Some issues are really annoying and frustrating to many users, but individually don't rise to the level we'd normally block a release for. We _can_ mark these things as blockers. But that is a really big sledgehammer. A blocker may cause the bug to get finally smashed, but it can also cause disruption all around. If the schedule slips, all the _other_ bug fixes and improvements, as well as features people have been working on, don't get to users.
### The Prioritized Bugs process
So, we have another way to address annoying bugs! The [Prioritized Bugs process][3] is a different way to highlight issues that result in unpleasantness for a large number of users. There's no hammer here, but something more like a spotlight. Unlike the release blocker process, the Prioritized Bugs process does not have a strictly-defined set of criteria. Each bug is evaluated based on the breadth and severity of impact.
A team of interested contributors helps curate a short list of issues that need attention. We then work to connect those issues to people who can fix them. This helps take pressure off of the release process, by not tying the issues to any specific deadlines. Ideally, we find and fix things before we even get to the beta stage. We try to keep the list short, no more than a handful, so there truly is a focus. This helps the teams and individuals addressing problems because they know we're respectful of their often-stretched-thin time and energy.
Through this process, Fedora has resolved dozens of serious and annoying problems. This includes everything from keyboard input glitches to SELinux errors to that thing where gigabytes of old, obsolete package updates would gradually fill up your disk. But we can do a lot more — we actually aren't getting as many nominations as we can handle. So, if there's something _you_ know that's causing long-term frustration or affecting a lot of people and yet seems not to be reaching a resolution, follow the [Prioritized Bugs process][3] and let _us_ know.
#### **You can help**
All Fedora contributors are invited to participate in the Prioritized Bugs process. Evaluation meetings occur every two weeks on IRC. Anyone is welcome to join and help us evaluate the nominated bugs. See the [calendar][4] for meeting time and location. The Fedora Program Manager sends an agenda to the [triage][5] and [devel][6] mailing lists the day before meetings.
### Bug reports welcome
Big or small, when you find a bug, we really appreciate it if you report it. In many cases, the best place to do that is with the project that creates the software. For example, let's say there is a problem with the way the Darktable photography software renders images from your digital camera. It's best to take that to the Darktable developers. For another example, say there's a problem with the GNOME or KDE desktop environments or with the software that is part of them. Taking these issues to those projects will usually get you the best results.
However, if it's a Fedora-specific problem, like something with our build or configuration of the software, or a problem with how it's integrated, don't hesitate to [file a bug with us][7]. This is also true when there is a problem which you know has a fix that we just haven't included yet.
I know this is kind of complex… it'd be nice to have a one-stop place to handle all of the bugs. But remember that Fedora packagers — the people who do the work of taking upstream software and configuring it to build in our system — are largely volunteers. They are not always the deepest experts in the code for the software they're working with. When in doubt, you can always file a [Fedora bug][7]. The folks in Fedora responsible for the corresponding package can help with their connections to the upstream software project.
Remember, when you find a bug that's gone through diagnosis and doesn't yet have a good fix, when you see something that affects a lot of people, or when there's a long-standing problem that just isn't getting attention, please nominate it as a Prioritized Bug. We'll take a look and see what can be done!
_PS: The famous image in the header is, of course, from the logbook of the Mark II computer at Harvard where Rear Admiral Grace Murray Hopper worked. But contrary to popular belief about the story, this isn't the first use of the term “bug” for a systems problem — it was already common in engineering, which is why it was funny to find a literal bug as the cause of an issue. #nowyouknow #jokeexplainer_
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/something-bugging-you-in-fedora-linux-lets-get-it-fixed/
Author: [Matthew Miller][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://fedoramagazine.org/author/mattdm/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/bugging_you-816x345.jpg
[2]: https://fedoraproject.org/wiki/QA:Release_validation_test_plan
[3]: https://docs.fedoraproject.org/en-US/program_management/prioritized_bugs/
[4]: https://calendar.fedoraproject.org/base/
[5]: https://lists.fedoraproject.org/archives/list/triage%40lists.fedoraproject.org/
[6]: https://lists.fedoraproject.org/archives/list/devel%40lists.fedoraproject.org/
[7]: https://docs.fedoraproject.org/en-US/quick-docs/howto-file-a-bug/

View File

@ -0,0 +1,88 @@
[#]: subject: (Blanket: Ambient Noise App With Variety of Sounds to Stay Focused)
[#]: via: (https://itsfoss.com/blanket-ambient-noise-app/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Blanket: Ambient Noise App With Variety of Sounds to Stay Focused
======
_**Brief: An open-source ambient noise player offering a variety of sounds to help you focus or fall asleep.**_
With so many things happening around you, it is often tough to stay calm and focused.
Sometimes music helps, but in some cases it distracts as well. But ambient noise? That is always soothing to hear. Who doesn't want to hear birds chirping, rain falling, and a crowd chattering in a restaurant? Okay, maybe not the last one, but listening to natural sounds can help you relax and focus. This indirectly boosts your productivity.
Recently, I came across a dedicated player that includes different sounds that could help anyone focus.
### Play Different Ambient Sounds Using Blanket
Blanket is an impressive ambient noise player that features different sounds that can help you fall asleep or just regain focus by helping you forget about the surrounding distractions.
It includes nature sounds like rain, waves, birds chirping, storm, wind, water stream, and summer night.
![][1]
Also, if you are a commuter or someone who is comfortable in a mildly busy environment, you can find sounds for trains, boats, the city, a coffee shop, or a fireplace.
If you are fond of white noise or pink noise (broad blends of the frequencies humans can hear), those are available here too.
It also lets you autostart the app every time you boot, if that is what you prefer.
![][2]
### Install Blanket on Linux
The best way to install Blanket is from [Flathub][3]. Assuming you have [Flatpak][4] enabled, all you have to do is type the following in a terminal to install it:
```
flatpak install flathub com.rafaelmardojai.Blanket
```
In case youre new to Flatpak, you might want to go through our [Flatpak guide][5].
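Once installed, you can launch Blanket from your application menu or, using the same application ID as above, from a terminal with the standard Flatpak run command:
```
flatpak run com.rafaelmardojai.Blanket
```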
If you prefer not to use Flatpaks, you can install it using a PPA maintained by a contributor to the project. Arch Linux users can find it in the [AUR][6] for easy installation.
You can also find packages for Fedora and openSUSE. To explore all the available packages, head to its [GitHub page][7].
**Recommended Read:**
![][8]
#### [Relax With Natural Sounds By Using Ambient Noise Music Player In Ubuntu][9]
Listen to natural white noise music with the Ambient Noise Music Player application in Ubuntu.
### Closing Thoughts
For a simple ambient noise player, the user experience is pretty good. I have a pair of HyperX Alpha S headphones, and I must mention that the quality of the sounds is pleasant.
In other words, it is soothing to hear, and I recommend trying it out if you want to use ambient sounds to focus, ease your anxiety, or simply fall asleep.
Have you tried it yet? Feel free to share your thoughts below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/blanket-ambient-noise-app/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/blanket-screenshot.png?resize=614%2C726&ssl=1
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/blanket-autostart-1.png?resize=514%2C214&ssl=1
[3]: https://flathub.org/apps/details/com.rafaelmardojai.Blanket
[4]: https://itsfoss.com/what-is-flatpak/
[5]: https://itsfoss.com/flatpak-guide/
[6]: https://itsfoss.com/aur-arch-linux/
[7]: https://github.com/rafaelmardojai/blanket
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2015/04/Ambient_Noise_Ubuntu.jpeg?fit=700%2C350&ssl=1
[9]: https://itsfoss.com/ambient-noise-music-player-ubuntu/

View File

@ -0,0 +1,224 @@
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (11 Linux Distributions You Can Rely on for Your Ancient 32-bit Computer)
[#]: via: (https://itsfoss.com/32-bit-linux-distributions/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
11 Linux Distributions You Can Rely on for Your Ancient 32-bit Computer
======
If you keep an eye on the latest [Linux distributions][1], you must have noticed that 32-bit support has been dropped from [most of the popular Linux distributions][2]. Arch Linux, Ubuntu, Fedora: every one of them has dropped support for this older architecture.
But what if you have vintage hardware that still needs to be revived, or you want to put it to some use? Fret not, there are still a few options left for your 32-bit system.
In this article, I have tried to compile a list of some of the best Linux distributions that will keep supporting 32-bit platforms for the next few years.
### Best Linux distributions that still offer 32-bit support
![][3]
This list is a bit different from [our earlier list of Linux distributions for old laptops][4]. Even 64-bit computers can be considered dated if they were released before 2010. That is why some of the suggestions listed there include distributions that now only support 64-bit editions.
The information here is correct to the best of my knowledge and awareness, but if you find anything amiss, please let me know in the comment section.
Before you proceed, I assume you already know [how to check whether you have a 32-bit or a 64-bit computer][5].
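If you want a quick terminal check, here is a minimal sketch using two standard utilities (the exact strings printed vary by distribution and CPU):
```
uname -m                   # prints e.g. i686 for a 32-bit kernel, x86_64 for a 64-bit one
lscpu | grep -i 'op-mode'  # shows whether the CPU itself can operate in 32-bit and/or 64-bit mode
```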
#### 1\. Debian
![Image credit: mrneilypops / Deviantart][6]
Debian is a fantastic option for 32-bit systems because they still support it with their latest stable release. At the time of writing this, the latest stable release, **Debian 10 "buster"**, offers a 32-bit version and is supported until 2024.
If you are new to Debian, it is worth mentioning that you get solid documentation for everything on their [official wiki][7]. So getting started shouldn't be an issue.
You can browse the [available installers][8] to get it installed. However, before you proceed, I recommend going through the list of [things to remember before installing Debian][9], in addition to the [installation manual][10].
[Debian][11]
#### 2\. Slax
![][12]
If you just want to quickly boot up a device for some temporary work, Slax is an impressive option.
It is based on Debian, but it aims to be a portable and fast option that runs from a USB device or a DVD. You can download the 32-bit ISO file from their website for free, or purchase a rewritable DVD or an encrypted flash drive with Slax pre-installed.
Of course, this is not meant to replace a traditional desktop operating system. But yes, you do get 32-bit support with Debian as its base.
[Slax][13]
#### 3\. AntiX
![Image credit: Opensourcefeed][14]
Another impressive Debian-based distribution. AntiX is popularly known as a systemd-free distribution that focuses on performance while being a lightweight installation.
It is perfectly suitable for almost any old 32-bit system. It needs a minimum of 256 MB of RAM and 2.7 GB of storage space. Not only is it easy to install, the user experience is geared toward both newbies and experienced users.
You should get the latest version based on the most recent stable branch of Debian available.
[AntiX][15]
#### 4\. openSUSE
![][16]
openSUSE is an independent Linux distribution that supports 32-bit systems as well. Although the latest regular version (Leap) does not offer a 32-bit image, the rolling-release edition (Tumbleweed) does.
It will be an entirely different experience if you are new to it. However, I suggest you first go through the [reasons why you should be using openSUSE][17].
It is mostly geared toward developers and system administrators, but you can utilize it as an average desktop user as well. It is worth noting that openSUSE is not meant to run on vintage hardware, so you have to make sure that you have at least 2 GB of RAM, 40+ GB of storage space, and a dual-core processor.
[openSUSE][18]
#### 5\. Emmabuntüs
![][19]
Emmabuntüs is an interesting distribution that aims to extend the life of hardware through 32-bit support, in order to reduce the waste of raw materials. As a group, they are also involved in providing computers and digital technology to schools.
It offers two different editions, one based on Ubuntu and the other on Debian. If you need longer-lasting 32-bit support, you may want to go with the Debian edition. It may not be the best option out there, but it comes with a number of pre-configured software packages to make the Linux learning experience easy, and it offers 32-bit support. If you want to support their cause in the process, it is a fairly good choice.
[Emmabuntüs][20]
#### 6\. NixOS
![Nixos KDE Edition \(Image credit: Distrowatch\)][21]
NixOS is yet another independent Linux distribution that supports 32-bit systems. It focuses on providing a reliable system where the packages are isolated from each other.
It may not be directly geared toward average users, but it is a KDE-powered usable distribution with a unique approach to package management. You can learn more about its [features][22] from its official website.
[NixOS][23]
#### 7\. Gentoo Linux
![][24]
If you are an experienced Linux user and are looking for a 32-bit Linux distribution, Gentoo Linux should be a great choice.
You can easily configure, compile, and install the kernel through Gentoo Linux's package manager if you want. It is not just limited to its well-known configurability; it will also run on older hardware without any issues.
Even if you are not an experienced user and want to give it a try, just read the [installation instructions][25] and you will be in for an adventure.
[Gentoo Linux][26]
#### 8\. Devuan
![][27]
[Devuan][28] is another systemd-free distribution. Technically, it is a fork of Debian, just without systemd and with encouragement for [init freedom][29].
It may not be a very popular Linux distribution for average users, but if you want a systemd-free distribution and 32-bit support, Devuan should be a good option.
[Devuan][30]
#### 9\. Void Linux
![][31]
Void Linux is an interesting distribution independently developed by volunteers. It aims to be a general-purpose OS while offering a stable rolling-release cycle. It features `runit` as the init system instead of `systemd`, and gives you the option of several [desktop environments][32].
It has an impressively low minimum requirement specification: just 96 MB of RAM paired with a Pentium 4 (or equivalent) chip. Give it a try!
[Void Linux][33]
#### 10\. Q4OS
![][34]
Q4OS is another Debian-based distribution that focuses on providing a minimal and fast desktop user experience. It also happens to be one of the [best lightweight Linux distributions][4] on our list. Its 32-bit edition features the [Trinity desktop][35], while you can find KDE Plasma support on the 64-bit edition.
Similar to Void Linux, Q4OS can also run on as little as 128 MB of RAM and a 300 MHz CPU, with 3 GB of storage space. That should be more than enough for any vintage hardware. So, I would say you should definitely try it out!
To learn more about it, you can also check out [our review of Q4OS][36].
[Q4OS][37]
#### 11\. MX Linux
![][38]
If you have a slightly decent configuration (not exactly vintage, but just old), I would personally recommend MX Linux for a 32-bit system. It also happens to be one of the [best Linux distributions][2] for every type of user.
In general, MX Linux is a fantastic lightweight and customizable distribution based on Debian. You get the option to choose from KDE, XFCE, or Fluxbox (which is their own desktop environment for older hardware). You can find out more about it on their official website and give it a try.
[MX Linux][39]
### Honorable mention: Funtoo
Funtoo is a Gentoo-based, community-developed Linux distribution. It focuses on giving you the best performance of Gentoo Linux along with some extra packages to provide a complete experience for users. Interestingly, the development is actually led by Daniel Robbins, the creator of Gentoo Linux.
Of course, if you are new to Linux, you may not have the best experience here. But it does support 32-bit systems and works well across many older Intel/AMD chipsets.
[Funtoo][40]
### Wrapping up
I focused the list on Debian-based and some independent distributions. However, if you do not mind the long-term support terms and just want a 32-bit supported image, you can also try any Ubuntu 18.04-based distribution (or any official flavour).
At the time of writing, they have only a few months of software support left. That is why I avoided mentioning them as primary options. But if you like an Ubuntu 18.04-based distro or any of its flavours, you do have options like [LXLE][41], [Linux Lite][42], [Zorin Lite 15][43], and other official flavours.
Even though most modern Ubuntu-based desktop operating systems have dropped support for 32-bit, you still have plenty of options to choose from.
Which one would you prefer on your 32-bit system? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/32-bit-linux-distributions/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [stevenzdg988](https://github.com/stevenzdg988)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/what-is-linux-distribution/
[2]: https://itsfoss.com/best-linux-distributions/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/32-bit-linux.png?resize=800%2C450&ssl=1
[4]: https://itsfoss.com/lightweight-linux-beginners/
[5]: https://itsfoss.com/32-bit-64-bit-ubuntu/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/08/debian-screenshot.png?resize=800%2C450&ssl=1
[7]: https://wiki.debian.org/FrontPage
[8]: https://www.debian.org/releases/buster/debian-installer/
[9]: https://itsfoss.com/before-installing-debian/
[10]: https://www.debian.org/releases/buster/installmanual
[11]: https://www.debian.org/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/slax-screenshot.jpg?resize=800%2C600&ssl=1
[13]: https://www.slax.org
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/antiX-19-1.jpg?resize=800%2C500&ssl=1
[15]: https://antixlinux.com
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/opensuse-15-1.png?resize=800%2C500&ssl=1
[17]: https://itsfoss.com/why-use-opensuse/
[18]: https://www.opensuse.org/
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/Emmabuntus-xfce.png?resize=800%2C500&ssl=1
[20]: https://emmabuntus.org/
[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/nixos-kde.jpg?resize=800%2C500&ssl=1
[22]: https://nixos.org/features.html
[23]: https://nixos.org/
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/08/gentoo-linux.png?resize=800%2C450&ssl=1
[25]: https://www.gentoo.org/get-started/
[26]: https://www.gentoo.org
[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/devuan-beowulf.jpg?resize=800%2C600&ssl=1
[28]: https://itsfoss.com/devuan-3-release/
[29]: https://www.devuan.org/os/init-freedom
[30]: https://www.devuan.org
[31]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/void-linux.jpg?resize=800%2C450&ssl=1
[32]: https://itsfoss.com/best-linux-desktop-environments/
[33]: https://voidlinux.org/
[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os8Debonaire.jpg?resize=800%2C500&ssl=1
[35]: https://en.wikipedia.org/wiki/Trinity_Desktop_Environment
[36]: https://itsfoss.com/q4os-linux-review/
[37]: https://q4os.org/index.html
[38]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde.jpg?resize=800%2C452&ssl=1
[39]: https://mxlinux.org/
[40]: https://www.funtoo.org/Welcome
[41]: https://www.lxle.net/
[42]: https://www.linuxliteos.com
[43]: https://zorinos.com/download/15/lite/32/

View File

@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: (cooljelly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Network address translation part 2 the conntrack tool)
[#]: via: (https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/)
[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/)
Network address translation, part 2: the conntrack tool
======
![][1]
This is the second article in a series about network address translation (NAT). The first article introduced [how to use the iptables/nftables packet tracing feature][2] to locate NAT-related connectivity problems. Part 2 introduces the "conntrack" command, which allows you to view and modify the tracked connections.
### Introduction
NAT configured via iptables or nftables builds on top of netfilter's connection tracking facility. The _conntrack_ command, part of the "conntrack-tools" package, is used to view and alter the connection state table.
### The conntrack state table
The connection tracking subsystem keeps track of all packet flows that it has seen. Run "_sudo conntrack -L_" to view its content:
```
tcp 6 43184 ESTABLISHED src=192.168.2.5 dst=10.25.39.80 sport=5646 dport=443 src=10.25.39.80 dst=192.168.2.5 sport=443 dport=5646 [ASSURED] mark=0 use=1
tcp 6 26 SYN_SENT src=192.168.2.5 dst=192.168.2.10 sport=35684 dport=443 [UNREPLIED] src=192.168.2.10 dst=192.168.2.5 sport=443 dport=35684 mark=0 use=1
udp 17 29 src=192.168.8.1 dst=239.255.255.250 sport=48169 dport=1900 [UNREPLIED] src=239.255.255.250 dst=192.168.8.1 sport=1900 dport=48169 mark=0 use=1
```
Each line shows one connection tracking entry. You might notice that each line shows the addresses and port numbers twice, and the second address/port pair is the reverse of the first one! That is because each entry is inserted into the state table twice. The first quadruple (source address, destination address, source port, destination port) records the original direction, i.e. the direction in which the initiator sent the packet. The second quadruple records what conntrack expects to see in the reply packets coming from the peer. This solves two problems:
1. If a packet matches a NAT rule, such as IP masquerading, the mapping is recorded in the reply part of the connection tracking entry and is then automatically applied to all subsequent packets that are part of the same flow.
2. A lookup in the state table for the reply quadruple will succeed even if the flow is subject to address or port translation.
The original direction (the first quadruple shown) never changes: it is the connection information the initiator sent. NAT manipulation only changes the reply direction (the second quadruple), because that is the connection information the receiver sees. Changing the first quadruple would be pointless: netfilter has no control over the initiator's state; it can only influence packets as they are received or forwarded. When a packet does not map to an existing table entry, conntrack can add a new entry for it. For UDP packets this happens automatically. For TCP packets, conntrack can be configured to only create a new entry if the TCP packet has the [SYN flag][3] set. By default, conntrack allows entries to be created from mid-stream packets, to avoid breaking flows that existed before conntrack became active.
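That mid-stream pickup behavior is controlled by a sysctl; a hedged sketch (the knob is documented in the nf_conntrack-sysctl reference linked at the end of this article):
```
# 1 (the default) allows picking up flows mid-stream; 0 requires a SYN to create new TCP entries
sudo sysctl net.netfilter.nf_conntrack_tcp_loose=0
```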
### The conntrack state table and NAT
As explained in the previous section, the reply quadruple contains the NAT information. It is possible to filter the output to only show entries subject to source NAT or destination NAT. This lets you see which kind of NAT transformation is active for a given flow. For example, running "_sudo conntrack -L -p tcp --src-nat_" shows entries subject to source NAT, with output similar to the following:
```
tcp 6 114 TIME_WAIT src=10.0.0.10 dst=10.8.2.12 sport=5536 dport=80 src=10.8.2.12 dst=192.168.1.2 sport=80 dport=5536 [ASSURED]
```
This entry shows a connection from 10.0.0.10:5536 to 10.8.2.12:80. Unlike the previous example, the reply direction is not just the inverted original direction: the source address is changed. The destination host (10.8.2.12) sends reply packets to 192.168.1.2, not to 10.0.0.10. Whenever 10.0.0.10 sends a new packet, the router holding this entry replaces the source address with 192.168.1.2. When 10.8.2.12 sends a reply, it changes the destination back to 10.0.0.10. This source NAT behavior stems from an [nft masquerade][4] rule:
```
inet nat postrouting meta oifname "veth0" masquerade
```
Entries for other types of NAT rules, such as destination NAT (DNAT) rules or redirect rules, are shown in a similar fashion, with the reply quadruple's remote address or port differing from that of the original quadruple.
### Conntrack extensions
The accounting and timestamping features are two useful conntrack extensions. Running "_sudo sysctl net.netfilter.nf_conntrack_acct=1_" makes "_sudo conntrack -L_" show the byte and packet counters for each flow. Running "_sudo sysctl net.netfilter.nf_conntrack_timestamp=1_" records a start timestamp for each connection; "_sudo conntrack -L_" then shows the seconds elapsed since the flow started. Add "_--output ktimestamp_" to the same command to see the absolute start time as well.
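For quick reference, a minimal sketch that enables both extensions and lists the table with absolute timestamps (these are exactly the knobs named above; the settings do not persist across reboots unless added to your sysctl configuration):
```
sudo sysctl net.netfilter.nf_conntrack_acct=1        # per-flow packet/byte counters
sudo sysctl net.netfilter.nf_conntrack_timestamp=1   # per-flow start timestamps
sudo conntrack -L --output ktimestamp                # list entries with absolute start times
```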
### Insertion and change of entries
You can add entries to the state table manually, for example:
```
sudo conntrack -I -s 192.168.7.10 -d 10.1.1.1 --protonum 17 --timeout 120 --sport 12345 --dport 80
```
This is used by conntrackd for state replication: the connection tracking entries of the primary firewall are replicated to the standby system, so that when a fail-over occurs, the standby system can take over the established connections without causing disruption. Conntrack can also store out-of-band metadata about a packet, such as the conntrack mark and connection tracking labels. Use the "update" (-U) option to change them:
```
sudo conntrack -U -m 42 -p tcp
```
This changes the connmark of all TCP flows to 42.
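A mark set this way can then be matched from a ruleset. As a hedged illustration (the table and chain names are assumptions and must already exist in your ruleset), an nftables rule matching that mark could look like:
```
nft add rule inet filter forward ct mark 42 counter accept
```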
### Delete entries
In some cases, you may want to delete entries from the state table. For example, changes to NAT rules have no effect on packets belonging to flows that are already in the table. For long-lived UDP sessions, such as tunneling protocols like VXLAN, it may therefore make sense to delete the entries so that the new NAT transformation can take effect. Entries are deleted via "sudo conntrack -D", followed by an optional list of addresses and ports, as shown in this example:
```
sudo conntrack -D -p udp --src 10.0.12.4 --dst 10.0.0.1 --sport 1234 --dport 53
```
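If you want to start from a clean slate instead of deleting individual entries, conntrack can also flush the whole table; a hedged sketch (disruptive, since all established mappings are forgotten and must be re-learned):
```
sudo conntrack -F
```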
### Conntrack error counters
Conntrack also exports statistics:
```
# sudo conntrack -S
cpu=0 found=0 invalid=130 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=10
cpu=1 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0
cpu=2 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=1
cpu=3 found=0 invalid=0 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=0
```
Most of the counters will be 0. "Found" and "insert" will always be 0; they only exist for backwards compatibility. Other errors accounted for are:
* invalid: the packet matches no existing connection tracking entry and does not create a new connection.
* insert_failed: the packet starts a new connection, but insertion into the state table failed. This can happen, for example, when the NAT engine happens to pick a duplicate source address and port during masquerading.
* drop: the packet starts a new connection, but no memory is available to allocate a new state entry for it.
* early_drop: the conntrack table is full. In order to accept the new connection, an existing connection that did not see two-way communication was dropped.
* error: icmp(v6) received an icmp error packet that did not match a known connection.
* search_restart: the lookup was interrupted by an insertion or deletion on another CPU.
* clash_resolve: several CPUs tried to insert an identical conntrack entry.
These error conditions are generally harmless unless they occur frequently. Some can be mitigated by tuning the conntrack parameters for the expected workload; _net.netfilter.nf_conntrack_buckets_ and _net.netfilter.nf_conntrack_max_ are typical candidates. See the [nf_conntrack-sysctl documentation][5] for the full list of the corresponding configuration parameters.
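As a hedged tuning sketch (the values are purely illustrative; appropriate sizes depend on available memory and workload):
```
# Raise the maximum number of tracked connections (illustrative value)
sudo sysctl net.netfilter.nf_conntrack_max=262144
# On kernels where the bucket count is not writable via sysctl, it is a module parameter:
echo 65536 | sudo tee /sys/module/nf_conntrack/parameters/hashsize
```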
When a packet is deemed invalid, run "_sudo sysctl net.netfilter.nf_conntrack_log_invalid=255_" to get more information. For example, when conntrack encounters a packet with all TCP flags cleared, it logs the following:
```
nf_ct_proto_6: invalid tcp flag combination SRC=10.0.2.1 DST=10.0.96.7 LEN=1040 TOS=0x00 PREC=0x00 TTL=255 ID=0 PROTO=TCP SPT=5723 DPT=443 SEQ=1 ACK=0
```
### Summary
This article showed how to inspect the connection tracking table and the NAT information stored in tracked flows. The next part of this series will expand on the conntrack tool and the connection tracking event framework.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/network-address-translation-part-2-the-conntrack-tool/
Author: [Florian Westphal][a]
Topic selection: [lujun9972][b]
Translator: [cooljelly](https://github.com/cooljelly)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://fedoramagazine.org/author/strlen/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/network-address-translation-part-2-816x345.jpg
[2]: https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/
[3]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure
[4]: https://wiki.nftables.org/wiki-nftables/index.php/Performing_Network_Address_Translation_(NAT)#Masquerading
[5]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/networking/nf_conntrack-sysctl.rst

View File

@ -0,0 +1,284 @@
[#]: subject: (Using network bound disk encryption with Stratis)
[#]: via: (https://fedoramagazine.org/network-bound-disk-encryption-with-stratis/)
[#]: author: (briansmith https://fedoramagazine.org/author/briansmith/)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
Using network bound disk encryption with Stratis
======
![][1]
In an environment with a large number of encrypted disks, unlocking them all is a difficult task. Network bound disk encryption (NBDE) helps automate the process of unlocking Stratis volumes. This is a crucial requirement in large environments. Stratis version 2.1 added support for encryption, which was introduced in the "[Getting started with Stratis encryption][4]" article. Stratis version 2.3 recently introduced support for network bound disk encryption (NBDE) when using encrypted Stratis pools, which is the topic of this article.
The [Stratis website][5] describes Stratis as an "_easy to use local storage management for Linux_". The short video "[Managing Storage With Stratis][6]" gives a quick demonstration of the basics. The video was recorded on a Red Hat Enterprise Linux 8 system; however, the concepts shown in the video also apply to Stratis in Fedora Linux.
### Prerequisites
This article assumes you are familiar with Stratis, as well as with Stratis pool encryption. If you are not familiar with these topics, refer to this [article][4] and the [Stratis overview video][6] mentioned above.
NBDE requires Stratis 2.3 or later. The examples in this article use a pre-release version of Fedora Linux 34. The final release of Fedora Linux 34 will include Stratis 2.3.
### Overview of network bound disk encryption (NBDE)
One of the main challenges of encrypted storage is having a secure method to unlock the storage again after a system reboot. Manually entering the encryption passphrase does not scale well in large environments. NBDE addresses this and allows encrypted storage to be unlocked in an automated manner.
At a high level, NBDE requires a Tang server somewhere in the environment. Client systems (using the Clevis pin) can automatically decrypt storage as long as they can establish a network connection to the Tang server. If there is no network connectivity to the Tang server, the storage has to be decrypted manually.
The idea behind this is that the Tang server is only available on an internal network. Thus, if the encrypted device is lost or stolen, it will no longer have access to the internal network to connect to the Tang server, and therefore will not be decrypted automatically.
For more information on Tang and Clevis, see the man pages (`man tang`, `man clevis`), the [Tang GitHub page][7], and the [Clevis GitHub page][8].
### Setting up the Tang server
This example uses another Fedora Linux system as the Tang server, with the hostname `tang-server`. Start by installing the `tang` package:
```
dnf install tang
```
Then enable and start `tangd.socket` with `systemctl`:
```
systemctl enable tangd.socket --now
```
Tang uses TCP port 80, so you also need to open that port in the firewall:
```
firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --add-port=80/tcp
```
Finally, run `tang-show-keys` to display the output signing key thumbprint. You will need this later.
```
# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s
```
### Creating the encrypted Stratis pool
The previous article on Stratis encryption went over how to set up an encrypted Stratis pool in detail, so this article won't cover that in depth.
The first step is capturing a key that will be used to decrypt the Stratis pool. Even when using NBDE, you need to set this, as it can be used to manually unlock the pool in the event that the NBDE server is unreachable. Capture the `pool1` key with the following command:
```
# stratis key set --capture-key pool1key
Enter key data followed by the return key:
```
Then I will create an encrypted Stratis pool named `pool1` (using the `pool1key` just created), with the `/dev/vdb` device:
```
# stratis pool create --key-desc pool1key pool1 /dev/vdb
```
Next, create a filesystem named `filesystem1` in this Stratis pool, create a mount point, mount the filesystem, and create a test file in it:
```
# stratis filesystem create pool1 filesystem1
# mkdir /filesystem1
# mount /dev/stratis/pool1/filesystem1 /filesystem1
# cd /filesystem1
# echo "this is a test file" > testfile
```
### Binding the Stratis pool to the Tang server
At this point, we have created the encrypted Stratis pool and a filesystem in the pool. The next step is to bind your Stratis pool to the Tang server that you just set up. Do this with the `stratis pool bind nbde` command.
When you make the Tang binding, you need to pass several parameters to the command:
* the pool name (in this example, `pool1`)
* the key descriptor name (in this example, `pool1key`)
* the Tang server name (in this example, `http://tang-server`)
Recall that `tang-show-keys` was run earlier on the Tang server, and it showed that the Tang output signing key thumbprint is `l3fZGUCmnvKQF_OA6VZF9jf8z2s`. In addition to the previous parameters, you either need to pass this thumbprint with the `--thumbprint l3fZGUCmnvKQF_OA6VZF9jf8z2s` parameter, or skip the validation of the thumbprint with the `--trust-url` parameter.
It is more secure to use the `--thumbprint` parameter. For example:
```
# stratis pool bind nbde pool1 pool1key http://tang-server --thumbprint l3fZGUCmnvKQF_OA6VZF9jf8z2s
```
### Unlocking the Stratis pool with NBDE
Next, reboot the host and validate that you can unlock the Stratis pool with NBDE, without requiring the key passphrase. After rebooting the host, the pool is no longer available:
```
# stratis pool list
Name Total Physical Properties
```
To unlock the pool using NBDE, run the following command:
```
# stratis pool unlock clevis
```
Notice that you did not need to use the key passphrase. This command could be automated to run during system boot-up.
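For example, here is a minimal sketch of a systemd unit that could perform the unlock at boot. The unit name, binary path, and ordering dependencies are illustrative assumptions, not an official Stratis unit:
```
# /etc/systemd/system/stratis-nbde-unlock.service (hypothetical unit)
[Unit]
Description=Unlock encrypted Stratis pools via NBDE
Wants=network-online.target
After=network-online.target stratisd.service

[Service]
Type=oneshot
# Assumes the stratis CLI is installed at /usr/bin/stratis
ExecStart=/usr/bin/stratis pool unlock clevis

[Install]
WantedBy=multi-user.target
```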
此时Stratis 池已经可以使用了:
```
# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 583.65 MiB / 4.41 GiB ~Ca, Cr
```
You can mount the filesystem and access the file that was previously created:
```
# mount /dev/stratis/pool1/filesystem1 /filesystem1/
# cat /filesystem1/testfile
this is a test file
```
### Rotating Tang server keys
Best practice is to periodically rotate the Tang server keys and update the Stratis client servers to use the new Tang key.
To generate new Tang keys, start by logging in to your Tang server and looking at the current status of the `/var/db/tang` directory. Then, run the `tang-show-keys` command:
```
# ls -al /var/db/tang
total 8
drwx------. 1 tang tang 124 Mar 15 15:51 .
drwxr-xr-x. 1 root root 16 Mar 15 15:48 ..
-rw-r--r--. 1 tang tang 361 Mar 15 15:51 hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
-rw-r--r--. 1 tang tang 367 Mar 15 15:51 l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk
# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s
```
To generate new keys, run `tangd-keygen` and point it at the `/var/db/tang` directory:
```
# /usr/libexec/tangd-keygen /var/db/tang
```
If you look at the `/var/db/tang` directory again, you will see two new files:
```
# ls -al /var/db/tang
total 16
drwx------. 1 tang tang 248 Mar 22 10:41 .
drwxr-xr-x. 1 root root 16 Mar 15 15:48 ..
-rw-r--r--. 1 tang tang 361 Mar 15 15:51 hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
-rw-r--r--. 1 root root 354 Mar 22 10:41 iyG5HcF01zaPjaGY6L_3WaslJ_E.jwk
-rw-r--r--. 1 root root 349 Mar 22 10:41 jHxerkqARY1Ww_H_8YjQVZ5OHao.jwk
-rw-r--r--. 1 tang tang 367 Mar 15 15:51 l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk
```
And if you run `tang-show-keys`, it will show the keys now being advertised by Tang:
```
# tang-show-keys
l3fZGUCmnvKQF_OA6VZF9jf8z2s
iyG5HcF01zaPjaGY6L_3WaslJ_E
```
You can prevent the old key (starting with `l3fZ`) from being advertised by renaming the two original files to hidden files, starting with a period. With this method, the old key will no longer be advertised; however, it will still be usable by any existing clients that have not yet been updated to use the new key. Once all clients have been updated to use the new key, these old key files can be deleted:
```
# cd /var/db/tang
# mv hbjJEDXy8G8wynMPqiq8F47nJwo.jwk .hbjJEDXy8G8wynMPqiq8F47nJwo.jwk
# mv l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk .l3fZGUCmnvKQF_OA6VZF9jf8z2s.jwk
```
At this point, if you run `tang-show-keys` again, only the new key is advertised by Tang:
```
# tang-show-keys
iyG5HcF01zaPjaGY6L_3WaslJ_E
```
Next, switch over to your Stratis system and update it to use the new Tang key. Stratis supports doing this while the filesystem(s) are online.
First, unbind the pool:
```
# stratis pool unbind pool1
```
Next, set the key with the original passphrase that was used when the encrypted pool was created:
```
# stratis key set --capture-key pool1key
Enter key data followed by the return key:
```
Finally, bind the Stratis pool to the Tang server with the updated key thumbprint:
```
# stratis pool bind nbde pool1 pool1key http://tang-server --thumbprint iyG5HcF01zaPjaGY6L_3WaslJ_E
```
The Stratis system is now configured to use the updated Tang key. Once any other client systems using the old Tang key have been updated, the two original key files that were renamed to hidden files in the `/var/db/tang` directory on the Tang server can be backed up and deleted.
### What if the Tang server is unavailable?
Next, shut down the Tang server to simulate it being unavailable, then reboot the Stratis system.
Again, after the reboot, the Stratis pool is not available:
```
# stratis pool list
Name Total Physical Properties
```
If you try to unlock it with NBDE, this fails because the Tang server is unavailable:
```
# stratis pool unlock clevis
Execution failed:
An iterative command generated one or more errors: The operation 'unlock' on a resource of type pool failed. The following errors occurred:
Partial action "unlock" failed for pool with UUID 4d62f840f2bb4ec9ab53a44b49da3f48: Cryptsetup error: Failed with error: Error: Command failed: cmd: "clevis" "luks" "unlock" "-d" "/dev/vdb" "-n" "stratis-1-private-42142fedcb4c47cea2e2b873c08fcf63-crypt", exit reason: 1 stdout: stderr: /dev/vdb could not be opened.
```
At this point, with the Tang server unreachable, the only option to unlock the pool is to use the original key passphrase:
```
# stratis key set --capture-key pool1key
Enter key data followed by the return key:
```
You can then unlock the pool using the key:
```
# stratis pool unlock keyring
```
Next, verify that the pool was successfully unlocked:
```
# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 583.65 MiB / 4.41 GiB ~Ca, Cr
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/network-bound-disk-encryption-with-stratis/
Author: [briansmith][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://fedoramagazine.org/author/briansmith/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/stratis-nbde-816x345.jpg
[2]: https://unsplash.com/@imattsmart?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/lock?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/getting-started-with-stratis-encryption/
[5]: https://stratis-storage.github.io/
[6]: https://www.youtube.com/watch?v=CJu3kmY-f5o
[7]: https://github.com/latchset/tang
[8]: https://github.com/latchset/clevis

View File

@ -0,0 +1,76 @@
[#]: subject: (5 signs you're a groff programmer)
[#]: via: (https://opensource.com/article/21/4/groff-programmer)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: (liweitianux)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
5 signs you're a groff programmer
======
Learning groff, an old-school text processor, is like learning to ride a bike.
![Typewriter in the grass][1]
I first discovered Unix systems in the early 1990s, when I was an undergraduate at university. I liked it so much that I replaced the MS-DOS system on my home computer with Linux.
One thing Linux did not have in the early to mid-1990s was a word processor. On other desktop operating systems, a word processor was part of the standard office suite and made it easy to edit text. I often used a word processor on DOS to write my papers for class. I would not find a Linux-native word processor until the late 1990s. Until then, word processing was one of the rare reasons I kept dual boot on my first computer, so I could occasionally switch back to DOS to write papers.
Then I discovered Linux did offer a kind of word processor. GNU troff, better known as [groff][2], is a modern implementation of the classic troff text processing system. troff, short for "typesetter roff", is an improved version of the nroff system, which in turn is a newer implementation of the original roff system. And roff stands for "run off", as in to "run off" a document.
With a text processing system, you edit your content in a plain text editor and add formatting through macros or other processing commands. You then feed the file to a text processing system, such as groff, to generate formatted output suitable for printing. Another well-known text processing system is LaTeX, but groff met my needs and was simple enough.
With a little practice, I found that writing class papers with groff on Linux was as easy as using a word processor. Although I no longer use groff to write documents, I still remember its macros and commands. If you learned to write with groff all those years ago as well, you may recognize these five signs that you are a groff programmer.
### 1\. You have a favorite macro set
You format a document in groff by typing plain text sprinkled with macros. A macro in groff is a short command that starts with a single period at the beginning of a line. For example, if you want to insert a few blank lines into your output, the macro `.sp 2` adds two blank lines. groff has a handful of other basic macros that support all kinds of formatting.
To make it easier for writers to format documents, groff also provides different _macro sets_, collections of macros that let you format documents your own way. The first macro set I learned was the `-me` macro set. The macro set is really named `e`, and you specify the `e` macro set with the `-me` option when you process a file.
groff includes other macro sets, too. For example, the `-man` macro set used to be the standard macro set for formatting the built-in _manual pages_ on Unix systems, and the `-ms` macro set is often used to format other technical documents. If you learned to write with groff, you probably have a favorite macro set.
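To make that concrete, here is a minimal sketch of a `-me` document; the section title and wording are illustrative:
```
.sh 1 "Introduction"
.pp
This indented paragraph opens a short paper.
.sp 2
.lp
The .sp request above inserted two blank lines ahead of this
block paragraph.
```
Processing it with the `-me` option, as shown at the end of this article, produces the formatted output.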
### 2\. You want to focus on content, not formatting
One nice feature of writing with groff is that you can focus on your _content_ and not worry too much about what it looks like. That is a handy feature for technical writers. For professional writers, groff is a great "distraction-free" writing environment. At least, as long as you don't mind delivering your content in any of the formats supported by the groff `-T` option, including PDF, PostScript, HTML, and plain text. You cannot generate a LibreOffice ODT file or a Word DOC file directly from groff.
Once you gain confidence writing in groff, the macros begin to _disappear_. The formatting macros fade into the background, and you focus purely on the text in front of you. I have done enough writing in groff that I do not even see the macros anymore. Maybe it is like writing code, where your mind shifts gears so you think like a computer and see the code as a set of instructions. For me, writing in groff is like that: I just see my text, and my mind translates the macros into formatting automatically.
### 3\. You like the retro feel
Sure, it may be _easier_ to write your documents in a more typical word processor, such as LibreOffice Writer, or even Google Docs or Microsoft Word. And for certain kinds of documents, a desktop word processor is the right choice. But if you want that retro feel, it is hard to beat writing in groff.
I will admit I do most of my writing in LibreOffice Writer, which does an outstanding job. But when I yearn to do things in a retro way, I open an editor and write my document in groff.
### 4\. You like that you can use it anywhere
groff and its cousins are a standard package on almost every Unix system. And groff macros do not change from system to system. For example, the `-me` macro set should be the same across systems. So once you learn the macros on one system, you can use them the same way on the next.
And because groff documents are plain text, you can use any editor you like to edit them. I prefer GNU Emacs to edit my groff documents, but you might use GNOME Gedit, Vim, or another [favorite text editor][3]. Most editors support a mode where the groff macros are highlighted in a different color, helping you spot errors before you process the file.
### 5\. You wrote this article using -me
When I decided to write this article, I figured the best way was to use groff directly. I wanted to demonstrate just how versatile groff is for preparing documents. So even though you are reading this on a website, it was originally written in groff.
I hope this has piqued your interest in learning how to write documents with groff. If you want to learn the more advanced functions in the `-me` macro set, refer to Eric Allman's "Writing papers with groff using -me", which you should find in your system's groff documentation under the filename **meintro.me**. It is a great reference that also explains other ways to format papers with the `-me` macro set.
I am also providing the original draft of this article, which uses the `-me` macro set. Download the file, save it as **five-signs-groff.me**, and run it through groff to view it. The `-T` option sets the output type, such as `-Tps` to generate PostScript output or `-Thtml` to generate an HTML file. For example:
```
groff -me -Thtml five-signs-groff.me > five-signs-groff.html
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/groff-programmer
Author: [Jim Hall][a]
Topic selection: [lujun9972][b]
Translator: [liweitianux](https://github.com/liweitianux)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/doc-dish-lead.png?itok=h3fCkVmU (Typewriter in the grass)
[2]: https://en.wikipedia.org/wiki/Groff_(software)
[3]: https://opensource.com/article/21/2/open-source-text-editors

View File

@ -0,0 +1,102 @@
[#]: subject: (Create and Edit EPUB Files on Linux With Sigil)
[#]: via: (https://itsfoss.com/sigile-epub-editor/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Create and Edit EPUB Files on Linux With Sigil
======
Sigil is an open source EPUB editor for Linux, Windows, and macOS. With Sigil, you can create a new ebook in EPUB format or edit an existing EPUB ebook (a file ending in the .epub extension).
In case you are curious, EPUB is a standard ebook format endorsed by several digital publishing groups. It is supported by many devices and ebook readers, with the exception of Amazon's Kindle.
### Sigil lets you create or edit EPUB files
[Sigil][1] is open source software that allows you to edit EPUB files. And, of course, you can create a new EPUB file from scratch.
![][2]
Many people swear by [Calibre for creating or editing ebooks][3]. It is indeed a complete tool with plenty of features, and it supports more formats than just EPUB. However, Calibre can be heavy on resources at times.
Sigil is focused on EPUB books only, and it comes with the following features:
* Support for EPUB 2 and EPUB 3 (with some limitations)
* A preview alongside the code view
* Editing EPUB syntax
* A table of contents generator with multi-level headings
* A metadata editor
* Spell checking
* Regular-expression find and replace
* Support for importing EPUB and HTML files, images, and style sheets
* Additional plugins
* A multi-language interface
* Support for Linux, Windows, and macOS
Sigil is not the kind of [WYSIWYG][4] editor where you can type the chapters of a new book directly. Since EPUB relies on XML, it is focused on code. Think of it as a [VS Code-like code editor][5] for EPUB files. For this reason, you should use some other [open source writing tool][6], export your files in the epub format (if possible), and then edit them in Sigil.
![][7]
Sigil has a [wiki][8] that provides some documentation on installing and using Sigil.
### Installing Sigil on Linux
Sigil is a cross-platform application, supporting Windows and macOS along with Linux. It is a popular piece of software with more than a decade of history. This is why you should find it in the repositories of your Linux distribution. Just look for it in your distribution's software center application.
![Sigil in Ubuntu Software Center][9]
You may need to enable the universe repository beforehand. You can also use the apt command on Ubuntu-based distributions:
```
sudo apt install sigil
```
Sigil has a number of dependencies on Python libraries and modules, so it downloads and installs a large number of packages.
![][10]
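Once the installation finishes, you can start it from the application menu or, assuming the distribution package puts a `sigil` executable on your PATH (an assumption, though the Ubuntu package does this), from a terminal:
```
sigil
```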
I will not list the commands for Fedora, SUSE, Arch, and other distributions. You probably already know how to use your distribution's package manager, right?
The version offered by your distribution is not necessarily the latest. If you want the latest version of Sigil, you can check out its GitHub repository.
[Sigil on GitHub][11]
### Not for everyone, and certainly not for reading ePUB books
I do not recommend using Sigil for reading ebooks. There are [other dedicated applications on Linux for reading .epub files][12].
If you are a writer who has to deal with EPUB books, or if you are digitizing old books and converting between formats, Sigil may be worth a try.
I have not used Sigil extensively, so I am not offering a review of it. I will let you explore it and share your experience with us here.
--------------------------------------------------------------------------------
via: https://itsfoss.com/sigile-epub-editor/
Author: [Abhishek Prakash][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://sigil-ebook.com/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/open-epub-sigil.png?resize=800%2C621&ssl=1
[3]: https://itsfoss.com/create-ebook-calibre-linux/
[4]: https://www.computerhope.com/jargon/w/wysiwyg.htm
[5]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[6]: https://itsfoss.com/open-source-tools-writers/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/sigil-epub-editor-800x621.png?resize=800%2C621&ssl=1
[8]: https://github.com/Sigil-Ebook/Sigil/wiki
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/sigil-software-center-ubuntu.png?resize=800%2C424&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/installing-sigil-ubuntu.png?resize=800%2C547&ssl=1
[11]: https://github.com/Sigil-Ebook/Sigil
[12]: https://itsfoss.com/open-epub-books-ubuntu-linux/