Merge pull request #12 from LCTT/master

update 0826
This commit is contained in:
SamMa 2021-08-26 18:16:51 +08:00 committed by GitHub
commit 72f3630e15
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
37 changed files with 3419 additions and 1574 deletions

View File

@ -0,0 +1,169 @@
[#]: collector: "lujun9972"
[#]: translator: "fisherue"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13717-1.html"
[#]: subject: "5 ways to improve your Bash scripts"
[#]: via: "https://opensource.com/article/20/1/improve-bash-scripts"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
改进你的脚本程序的 5 个方法
======
> 巧用 Bash 脚本程序能帮助你完成很多极具挑战的任务。
![](https://img.linux.net.cn/data/attachment/album/202108/25/131347yblk4jg4r6blebmg.jpg)
系统管理员经常写脚本程序,不论长短,这些脚本可以完成某种任务。
你是否曾经查看过某个软件发行方提供的安装用的<ruby>脚本<rt>script</rt></ruby>程序?为了能够适应不同用户的系统配置,顺利完成安装,这些脚本程序经常包含很多函数和逻辑分支。多年来,我积累了一些改进脚本程序的技巧,这里分享几个,希望能对朋友们也有用。下面列出一组简短的脚本示例,展示给大家做样本。
### 初步尝试
我尝试写一个脚本程序时,原始程序往往就是一组命令行,通常就是调用标准命令完成诸如更新网页内容之类的工作,这样可以节省时间。其中一个类似的工作是解压文件到 Apache 网站服务器的主目录里,我的最初脚本程序大概是下面这样:
```
cp january_schedule.tar.gz /usr/apache/home/calendar/
cd /usr/apache/home/calendar/
tar zvxf january_schedule.tar.gz
```
这帮我节省了时间,也减少了键入多条命令操作。时日久了,我掌握了另外的技巧,可以用 Bash 脚本程序完成更难的一些工作,比如说创建软件安装包、安装软件、备份文件系统等工作。
### 1、条件分支结构
和众多其他编程语言一样,脚本程序的条件分支结构同样是强大的常用技能。条件分支结构赋予了计算机程序逻辑能力,我的很多实例都是基于条件逻辑分支。
基本的条件分支结构就是 `if` 条件分支结构。通过判定是否满足特定条件,可以控制程序选择执行相应的脚本命令段。比如说,想要判断系统是否安装了 Java ,可以通过判断系统有没有一个 Java 库目录;如果找到这个目录,就把这个目录路径添加到可运行程序路径,也就可以调用 Java 库应用了。
```
if [ -d "$JAVA_HOME/bin" ] ; then
    PATH="$JAVA_HOME/bin:$PATH"
fi
```
### 2、限定运行权限
你或许想只允许特定的用户才能执行某个脚本程序。除了 Linux 的权限许可管理,比如对用户和用户组设定权限、通过 SELinux 设定此类的保护权限等,你还可以在脚本里设置逻辑判断来限制执行权限。类似的情况可能是,你需要确保只有网站程序的所有者才能执行相应的网站初始化操作脚本,甚至你可以限定只有 root 用户才能执行某个脚本。这些都可以通过在脚本程序里设置逻辑判断实现Linux 提供的几个环境变量可以帮忙:一个是保存用户名称的变量 `$USER`,另一个是保存用户识别码的变量 `$UID`。在脚本程序里,执行用户的 UID 值就保存在 `$UID` 变量里。
#### 用户名判别
第一个例子里,我在一个带有几个应用服务器实例的多用户环境里,指定只有用户 `jboss1` 可以执行脚本程序。条件 `if` 语句判断的是:“要求执行这个脚本程序的用户不是 `jboss1` 吗?”当此条件为真时,就会调用第一个 `echo` 语句,接着执行 `exit 1`,退出这个脚本程序。
```
if [ "$USER" != 'jboss1' ]; then
     echo "Sorry, this script must be run as JBOSS1!"
     exit 1
fi
echo "continue script"
```
#### 根用户判别
接下来的例子是要求只有根用户才能执行脚本程序。根用户的用户识别码UID是 0设置的条件判断采用大于操作符`-gt`),所有 UID 值大于 0 的用户都被禁止执行该脚本程序。
```
if [ "$UID" -gt 0 ]; then
     echo "Sorry, this script must be run as ROOT!"
     exit 1
fi
echo "continue script"
```
### 3、带参数执行程序
可执行程序可以附带参数作为执行选项,命令行脚本程序也是一样,下面给出几个例子。在这之前,我想告诉你,好的程序不仅要能完成我们想要它做的事,还要能避免执行我们不想让它执行的操作。如果运行程序时没有提供参数、造成程序缺少足够信息,我希望脚本程序不要做任何破坏性的操作。因而,程序的第一步就是确认命令行是否提供了参数,判定的条件就是参数数量 `$#` 是否为 0如果是意味着没有提供参数就直接终止脚本程序并退出。
```
if [ $# -eq 0 ]; then
    echo "No arguments provided"
    exit 1
fi
echo "arguments found: $#"
```
#### 多个运行参数
可以传递给脚本程序的参数不止一个。脚本使用内部变量指代这些参数,内部变量名用正整数递增标识,也就是 `$1`、`$2`、`$3` 等等。我在这里扩展一下前面的程序,用下面一行输出显示用户提供的前三个参数。显然,要对每个参数都做出相应的处理需要更多的逻辑判断,这里的例子只是简单展示参数的使用。
```
echo $1 $2 $3
```
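如果需要对任意数量的参数逐个处理,可以遍历特殊变量 `$@`。下面是一个简单的示意(输出文字是我假设的):
```
for arg in "$@"; do
    echo "收到参数: $arg"
done
```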
讨论这些参数变量名时,你或许有个疑问:“参数变量名怎么跳过了 `$0`,而直接从 `$1` 开始?”
是的,是这样,这是有原因的。变量名 `$0` 确实存在,也非常有用,它储存的是被执行的脚本程序的名称。
```
echo $0
```
程序执行过程中有一个变量名指代程序名称,很重要的一个原因是,可以在生成的日志文件名称里包含程序名称,最简单的方式应该是调用一个 `echo` 语句。
```
echo test >> $0.log
```
当然,你或许要增加一些代码,确保这个日志文件存放在你希望的路径,日志名称包含你认为有用的信息。
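下面是一个简单的示意(日志目录和日期格式都是我假设的),它把日志统一存放到主目录下的 `logs` 目录,并在日志名里加上脚本名和日期:
```
logdir="$HOME/logs"
mkdir -p "$logdir"
# 用脚本名加当天日期构造日志文件名
logfile="$logdir/$(basename "$0").$(date +%Y%m%d).log"
echo "$(date '+%F %T') 脚本开始运行" >> "$logfile"
```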
### 4、交互输入
脚本程序的另一个好用的特性是可以在执行过程中接受输入,最简单的情况是让用户可以输入一些信息。
```
echo "enter a word please:"
read word
echo $word
```
这样也可以让用户在程序执行中作出选择。
```
read -p "Install Software ?? [Y/n]: " answ
if [ "$answ" == 'n' ]; then
  exit 1
fi
  echo "Installation starting..."
```
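注意,上面的例子只检查了小写的 `n`。如果想同时接受大小写输入,可以改用 `case` 语句,比如下面这个示意:
```
read -p "Install Software ?? [Y/n]: " answ
case "$answ" in
    # 输入以 n 或 N 开头时放弃安装
    [Nn]*) exit 1 ;;
esac
echo "Installation starting..."
```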
### 5、出错退出执行
几年前,我写了个脚本,想在自己的电脑上安装最新版本的 Java 开发工具包JDK。这个脚本把 JDK 文件解压到指定目录创建、更新一些符号链接再做一些设置告诉系统使用这个最新的版本。如果解压过程出现错误继续执行后面的操作就会破坏整个系统上的 Java 环境,使其不能使用。因而,这种情况下需要终止程序:如果解压没有成功,就不应该再继续进行之后的更新操作。下面的语句段可以完成这个功能。
```
tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500; ec=$?
if [ $ec -ne 0 ]; then
     echo "Installation failed - exiting."
     exit 1
fi
```
下面的单行语句可以给你快速展示一下变量 `$?` 的用法。
```
ls T; ec=$?; echo $ec
```
先用 `touch T` 命令创建一个文件名为 `T` 的文件,然后执行这个单行命令,变量 `ec` 的值会是 0。接着用 `rm T` 命令删除文件,再执行该单行命令,这时变量 `ec` 的值会是 2因为文件 `T` 不存在,`ls` 命令找不到指定文件而报错。
在逻辑条件里利用这个出错标识,参照前文我使用的条件判断,可以使脚本文件按需完成设定操作。
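顺带一提,如果不需要单独保存返回码,也可以直接把命令本身写进条件判断,效果相同(这里沿用上面例子里的文件名):
```
# 命令失败(返回码非 0时走 then 分支
if ! tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500; then
     echo "Installation failed - exiting."
     exit 1
fi
```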
### 结语
要完成复杂的功能,或许我们觉得应该使用诸如 Python、C 或 Java 这类的高级编程语言,然而并不尽然,脚本编程语言也很强大,可以完成类似任务。要充分发挥脚本的作用,有很多需要学习的,希望这里的几个例子能让你意识到脚本编程的强大。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/improve-bash-scripts
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[fisherue](https://github.com/fisherue)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl "工作者图片"

View File

@ -1,33 +1,34 @@
[#]: collector: (lujun9972)
[#]: translator: (YungeG)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13720-1.html)
[#]: subject: (Understanding systemd at startup on Linux)
[#]: via: (https://opensource.com/article/20/5/systemd-startup)
[#]: author: (David Both https://opensource.com/users/dboth)
理解 systemd 启动时在做什么
======
> systemd 启动过程提供的重要线索可以在问题出现时助你一臂之力。
![](https://img.linux.net.cn/data/attachment/album/202108/26/110220piwnicwxvvc1s8io.jpg)
在本系列的第一篇文章《[学着爱上 systemd][2]》,我考察了 systemd 的功能和架构,以及围绕 systemd 作为古老的 SystemV 初始化程序和启动脚本的替代品的争论。在这第二篇文章中,我将开始探索管理 Linux 启动序列的文件和工具。我会解释 systemd 启动序列、如何更改默认的启动目标(即 SystemV 术语中的运行级别)、以及在不重启的情况下如何手动切换到不同的目标
我还将考察两个重要的 systemd 工具。第一个 `systemctl` 命令是和 systemd 交互、向其发送命令的基本方式。第二个是 `journalctl`,用于访问 systemd 日志,后者包含了大量系统历史数据,比如内核和服务的消息(包括指示性信息和错误信息)。
务必使用一个非生产系统进行本文和后续文章中的测试和实验。你的测试系统需要安装一个 GUI 桌面(比如 Xfce、LXDE、Gnome、KDE 或其他)。
上一篇文章中我写道计划在这篇文章创建一个 systemd 单元并添加到启动序列。由于这篇文章比我预期中要长,这些内容将留到本系列的下一篇文章。
### 使用 systemd 探索 Linux 的启动
在观察启动序列之前,你需要做几件事情得使引导和启动序列开放可见。正常情况下,大多数发行版使用一个开机动画或者启动画面隐藏 Linux 启动和关机过程中的显示细节,在基于 Red Hat 的发行版中称作 Plymouth 引导画面。这些隐藏的消息能够向寻找信息以排除程序故障、或者只是学习启动序列的系统管理员提供大量有关系统启动和关闭的信息。你可以通过 GRUB<ruby>大统一引导加载器<rt>Grand Unified Boot Loader</rt></ruby>)配置改变这个设置。
主要的 GRUB 配置文件是 `/boot/grub2/grub.cfg` ,但是这个文件在更新内核版本时会被覆盖,你不会想修改它的。相反,应该修改用于改变 `grub.cfg` 默认设置的 `/etc/default/grub` 文件。
首先看一下当前未修改的 `/etc/default/grub` 文件的版本
```
[root@testvm1 ~]# cd /etc/default ; cat grub
@ -43,10 +44,10 @@ GRUB_DISABLE_RECOVERY="true"
[root@testvm1 default]#
```
[GRUB 文档][3] 的第 6 章列出了 `/etc/default/grub` 文件的所有可用项,我只关注下面的部分:
* 我将 GRUB 菜单倒计时的秒数 `GRUB_TIMEOUT`,从 5 改成 10以便在倒计时达到 0 之前有更多的时间响应 GRUB 菜单。
* `GRUB_CMDLINE_LINUX` 列出了引导阶段传递给内核的命令行参数,我删除了其中的最后两个参数。其中的一个参数 `rhgb` 代表 “<ruby>红帽图形化引导<rt>Red Hat Graphical Boot</rt></ruby>”,在内核初始化阶段显示一个小小的 Fedora 图标动画,而不是显示引导阶段的信息。另一个参数 `quiet`,屏蔽显示记录了启动进度和发生错误的消息。系统管理员需要这些信息,因此我删除了 `rhgb``quiet`。如果引导阶段发生了错误,屏幕上显示的信息可以指向故障的原因。
更改之后,你的 GRUB 文件将会像下面一样:
@ -64,7 +65,7 @@ GRUB_DISABLE_RECOVERY="false"
[root@testvm1 default]#
```
`grub2-mkconfig` 程序使用 `/etc/default/grub` 文件的内容生成 `grub.cfg` 配置文件,从而改变一些默认的 GRUB 设置。`grub2-mkconfig` 输出到 `STDOUT`,你可以使用程序的 `-o` 参数指明数据流输出的文件,不过使用重定向也同样简单。执行下面的命令更新 `/boot/grub2/grub.cfg` 配置文件:
```
[root@testvm1 grub2]# grub2-mkconfig > /boot/grub2/grub.cfg
@ -83,17 +84,17 @@ done
重新启动你的测试系统查看本来会隐藏在 Plymouth 开机动画之下的启动信息。但是如果你没有关闭开机动画,又需要查看启动信息的话又该如何操作?或者你关闭了开机动画,而消息流过的速度太快,无法阅读怎么办?(实际情况如此。)
有两个解决方案,都涉及到日志文件和 systemd 日志 —— 两个都是你的好伙伴。你可以使用 `less` 命令查看 `/var/log/messages` 文件的内容。这个文件包含引导和启动信息,以及操作系统执行正常操作时生成的信息。你也可以使用不加任何参数的 `journalctl` 命令查看 systemd 日志,包含基本相同的信息:
```
[root@testvm1 grub2]# journalctl
-- Logs begin at Sat 2020-01-11 21:48:08 EST, end at Fri 2020-04-03 08:54:30 EDT. --
Jan 11 21:48:08 f31vm.both.org kernel: Linux version 5.3.7-301.fc31.x86_64 (mockbuild@bkernel03.phx2.fedoraproject.org) (gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC)) #1 SMP Mon Oct >
Jan 11 21:48:08 f31vm.both.org kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.3.7-301.fc31.x86_64 root=/dev/mapper/VG01-root ro resume=/dev/mapper/VG01-swap rd.lvm.lv=VG01/root rd>
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-provided physical RAM map:
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
@ -116,51 +117,51 @@ Jan 11 21:48:08 f31vm.both.org kernel: clocksource: kvm-clock: mask: 0xfffffffff
Jan 11 21:48:08 f31vm.both.org kernel: tsc: Detected 2807.992 MHz processor
Jan 11 21:48:08 f31vm.both.org kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 11 21:48:08 f31vm.both.org kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
<snip>
```
由于数据流可能长达几十万甚至几百万行,我在这里截断了它。(我的主要工作站上列出的日志长度是 1,188,482 行。)请确保是在你的测试系统尝试的这个命令。如果系统已经运行了一段时间 —— 即使重启过很多次 —— 还是会显示大量的数据。查看这些日志数据,因为它包含了很多信息,在进行问题判断时可能非常有用。了解这个数据文件在正常的引导和启动过程中的模样,可以帮助你在问题出现时定位问题。
我将在本系列之后的文章讨论 systemd 日志、`journalctl` 命令、以及如何整理输出的日志数据来寻找更详细的信息。
内核被 GRUB 加载到内存后,必须先将自己从压缩后的文件中解压出来,才能执行任何有意义的操作。解压自己后,内核开始运行,加载 systemd 并转交控制权。
<ruby>引导<rt>boot</rt></ruby>阶段到此结束,此时 Linux 内核和 systemd 正在运行,但是无法为用户执行任何生产性任务,因为其他的程序都没有执行,没有命令行解释器提供命令行,没有后台进程管理网络和其他的通信链接,也没有任何东西能够控制计算机执行生产功能。
现在 systemd 可以加载所需的功能性单元以便将系统启动到选择的目标运行状态。
### 目标
一个 systemd <ruby>目标<rt>target</rt></ruby>代表一个 Linux 系统当前的或期望的运行状态。与 SystemV 启动脚本十分类似,目标定义了系统运行必须存在的服务,以及处于目标状态下必须激活的服务。图 1 展示了使用 systemd 的 Linux 系统可能的运行状态目标。就像在本系列的第一篇文章以及 systemd 启动的手册页(`man bootup`)所看到的一样,有一些开启不同必要服务的其他中间目标,包括 `swap.target`、`timers.target`、`local-fs.target` 等。一些目标(像 `basic.target`)作为检查点使用,在移动到下一个更高级的目标之前保证所有需要的服务已经启动并运行。
除非开机时在 GRUB 菜单进行更改systemd 总是启动 `default.target`。`default.target` 文件是指向真实的目标文件的符号链接。对于桌面工作站,`default.target` 通常是 `graphical.target`,等同于 SystemV 的运行等级 5。对于服务器默认目标多半是 `multi-user.target`,就像 SystemV 的运行等级 3。`emergency.target` 文件类似单用户模式。目标和<ruby>服务<rt>service</rt></ruby>都是一种 systemd 单元。
下面的表,包含在本系列的上一篇文章中,比较了 systemd 目标和古老的 SystemV 启动运行等级。为了向后兼容systemd 提供了 systemd 目标别名,允许脚本和系统管理员使用像 `init 3` 一样的 SystemV 命令改变运行等级。当然SystemV 命令被转发给 systemd 进行解释和执行。
| **systemd 目标** | **SystemV 运行级别** | **目标别名** | **描述** |
|---|---|---|---|
| `default.target` | | | 这个目标通常是一个符号链接,作为 `multi-user.target` 或 `graphical.target` 的别名。systemd 总是用 `default.target` 启动系统。`default.target` 不能作为 `halt.target`、`poweroff.target` 和 `reboot.target` 的别名。|
| `graphical.target` | 5 | `runlevel5.target` | 带有 GUI 的 `multi-user.target` 。|
| | 4 | `runlevel4.target` | 未使用。运行等级 4 和 SystemV 的运行等级 3 一致,可以创建这个目标并进行定制,用于启动本地服务,而不必更改默认的 `multi-user.target`。 |
| `multi-user.target` | 3 | `runlevel3.target` | 运行所有的服务但是只有命令行界面CLI 。|
| | 2 | `runlevel2.target` | 多用户,没有 NFS但是运行其他所有的非 GUI 服务。|
| `rescue.target` | 1 | `runlevel1.target` | 一个基本的系统,包括挂载文件系统,但是只运行最基础的服务,以及一个主控制台上的用于救援的命令行解释器。|
| `emergency.target` | S | | 单用户模式 —— 没有服务运行;文件系统没有挂载。这是最基础级的操作模式,只有一个运行在主控制台的用于紧急情况的命令行解释器,供用户和系统交互。 |
| `halt.target` | | | 不断电的情况下停止系统 |
| `reboot.target` | 6 | `runlevel6.target` | 重启 |
| `poweroff.target` | 0 | `runlevel0.target` | 停止系统并关闭电源 |
每个目标在配置文件中都描述了一组依赖关系。systemd 启动需要的依赖,即 Linux 主机运行在特定功能级别所需的服务。加载目标配置文件中列出的所有依赖并运行后,系统就运行在那个目标等级。如果愿意,你可以在本系列的第一篇文章《[学着爱上 systemd][2]》中回顾 systemd 的启动序列和运行时目标。
### 探索当前的目标
许多 Linux 发行版默认安装一个 GUI 桌面界面,以便安装的系统可以像工作站一样使用。我总是从 Fedora Live USB 引导驱动器安装 Xfce 或 LXDE 桌面。即使是安装一个服务器或者其他基础类型的主机(比如用于路由器和防火墙的主机),我也使用 GUI 桌面的安装方式。
我可以安装一个没有桌面的服务器(数据中心的典型做法),但是这样不满足我的需求。原因不是我需要 GUI 桌面本身,而是 LXDE 安装包含了许多其他默认的服务器安装没有提供的工具,这意味着初始安装之后我需要做的工作更少。
但是,仅仅因为有 GUI 桌面并不意味着我要使用它。我有一个 16 端口的 KVM可以用于访问我的大部分 Linux 系统的 KVM 接口,但我和它们交互的大部分交互是通过从我的主要工作站建立的远程 SSH 连接。这种方式更安全,而且和 `graphical.target` 相比,运行 `multi-user.target` 使用更少的系统资源。
首先,检查默认目标,确认是 `graphical.target`
```
[root@testvm1 ~]# systemctl get-default
@ -168,7 +169,7 @@ graphical.target
[root@testvm1 ~]#
```
然后确认当前正在运行的目标,应该和默认目标相同。你仍可以使用老方法,输出古老的 SystemV 运行等级。注意,前一个运行等级在左边,这里是 `N`(意思是 None表示主机启动后没有修改过运行等级。数字 5 是当前的目标,正如古老的 SystemV 术语中的定义:
```
[root@testvm1 ~]# runlevel
@ -176,7 +177,7 @@ N 5
[root@testvm1 ~]#
```
注意,`runlevel` 的手册页指出运行等级已经被淘汰,并提供了一个转换表。
你也可以使用 systemd 方式,命令的输出有很多行,但确实用 systemd 术语提供了答案:
@ -213,23 +214,23 @@ SUB    = The low-level unit activation state, values depend on unit type.
To show all installed unit files use 'systemctl list-unit-files'.
```
上面列出了当前加载的和激活的目标,你也可以看到 `graphical.target``multi-user.target`。`multi-user.target` 需要在 `graphical.target` 之前加载。这个例子中,`graphical.target` 是激活的。
### 切换到不同的目标
切换到 `multi-user.target` 很简单:
```
[root@testvm1 ~]# systemctl isolate multi-user.target
```
显示器现在应该从 GUI 桌面或登录界面切换到了一个虚拟控制台。登录并列出当前激活的 systemd 单元,确认 `graphical.target` 不再运行:
```
[root@testvm1 ~]# systemctl list-units --type target
```
务必使用 `runlevel` 确认命令输出了之前的和当前的“运行等级”:
```
[root@testvm1 ~]# runlevel
@ -238,7 +239,7 @@ To show all installed unit files use 'systemctl list-unit-files'.
### 更改默认目标
现在,将默认目标改为 `multi-user.target`,以便系统总是启动进入 `multi-user.target`,从而使用控制台命令行接口而不是 GUI 桌面接口。使用你的测试主机的根用户,切换到保存 systemd 配置的目录,执行一次快速列出操作:
```
[root@testvm1 ~]# cd /etc/systemd/system/ ; ll
@ -256,13 +257,13 @@ drwxr-xr-x. 2 root root 4096 Oct 30 16:54  multi-user.target.wants
为了强调一些有助于解释 systemd 如何管理启动过程的重要事项,我缩短了这个列表。你应该可以在虚拟机看到完整的目录和链接列表。
`default.target` 项是指向目录 `/lib/systemd/system/graphical.target` 的符号链接(软链接),列出那个目录查看目录中的其他内容:
```
[root@testvm1 system]# ll /lib/systemd/system/ | less
```
你应该在这个列表中看到文件、目录、以及更多链接,但是专门寻找一下 `multi-user.target``graphical.target`。现在列出 `default.target`(指向 `/lib/systemd/system/graphical.target` 的链接)的内容:
```
[root@testvm1 system]# cat default.target
@ -286,16 +287,16 @@ AllowIsolate=yes
[root@testvm1 system]#
```
`graphical.target` 文件的这个链接描述了图形用户接口需要的所有必备条件。我会在本系列的下一篇文章至少探讨其中的一些选项。
为了使主机启动到多用户模式,你需要删除已有的链接,创建一个新链接指向正确目标。如果你的 [PWD][5] 不是 `/etc/systemd/system`,切换过去:
```
[root@testvm1 system]# rm -f default.target
[root@testvm1 system]# ln -s /lib/systemd/system/multi-user.target default.target
```
列出 `default.target` 链接,确认其指向了正确的文件:
```
[root@testvm1 system]# ll default.target
@ -303,7 +304,7 @@ lrwxrwxrwx 1 root root 37 Nov 28 16:08 default.target -&gt; /lib/systemd/system/
[root@testvm1 system]#
```
如果你的链接看起来不一样,删除并重试。列出 `default.target` 链接的内容:
```
[root@testvm1 system]# cat default.target
@ -326,9 +327,9 @@ AllowIsolate=yes
[root@testvm1 system]#
```
`default.target`(这里其实是指向 `multi-user.target` 的链接)其中的 `[Unit]` 部分现在有不同的必需条件。这个目标不需要有图形显示管理器。
重启,你的虚拟机应该启动到虚拟控制台 1 的控制台登录,虚拟控制台 1 在显示器标识为 `tty1`。现在你已经知道如何修改默认的目标,使用所需的命令将默认目标改回 `graphical.target`
首先检查当前的默认目标:
@ -341,19 +342,19 @@ Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/g
[root@testvm1 ~]#
```
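上面被截断的输出里的 “Created symlink” 一行,正是用 systemd 内置子命令修改默认目标时打印的信息。除了手动删除、重建符号链接,你也可以直接用这个子命令(一个简单的示意):
```
[root@testvm1 ~]# systemctl set-default graphical.target
Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/graphical.target.
```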
输入下面的命令直接切换到 `graphical.target` 和显示管理器的登录界面,不需要重启:
```
[root@testvm1 system]# systemctl isolate default.target
```
我不清楚为何 systemd 的开发者选择了术语 `isolate` 作为这个子命令。我的研究表明,它指的可能是:运行指定的目标,而把其他所有该目标不需要的目标“隔离”并终结掉。然而,命令执行的效果是从一个运行的目标切换到另一个——在这个例子中,从多用户目标切换到图形目标。上面的命令等同于 SystemV 启动脚本和 `init` 程序中古老的 `init 5` 命令。
登录 GUI 桌面,确认能正常工作。
### 总结
本文探索了 Linux systemd 启动序列,开始探讨两个重要的 systemd 工具 `systemctl` 和 `journalctl`,还说明了如何从一个目标切换到另一个目标,以及如何修改默认目标。
本系列的下一篇文章中将会创建一个新的 systemd 单元,并配置为启动阶段运行。下一篇文章还会查看一些配置选项,可以帮助确定某个特定的单元在序列中启动的位置,比如在网络启动运行后。
@ -362,9 +363,9 @@ Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/g
关于 systemd 网络上有大量的信息,但大部分都简短生硬、愚钝、甚至令人误解。除了本文提到的资源,下面的网页提供了关于 systemd 启动更详细可靠的信息。
* Fedora 项目有一个优质实用的 [systemd 指南][6],几乎有你使用 systemd 配置、管理、维护一个 Fedora 计算机需要知道的一切。
* Fedora 项目还有一个好用的 [速查表][7],交叉引用了古老的 SystemV 命令和对应的 systemd 命令。
* 要获取 systemd 的详细技术信息和创立的原因,查看 [Freedesktop.org][8] 的 [systemd 描述][9]。
* Linux.com 上“systemd 的更多乐趣”提供了更高级的 systemd [信息和提示][11]。
还有一系列针对系统管理员的深层技术文章,由 systemd 的设计者和主要开发者 Lennart Poettering 所作。这些文章写于 2010 年 4 月到 2011 年 9 月之间,但在当下仍然像当时一样有价值。关于 systemd 及其生态的许多其他优秀的作品都是基于这些文章的。
@ -381,7 +382,6 @@ Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/g
* [systemd for Administrators, Part X][22]
* [systemd for Administrators, Part XI][23]
Mentor Graphics 公司的一位 Linux 内核和系统工程师 Alison Chiaken对 systemd 进行了预展...
--------------------------------------------------------------------------------
@ -390,7 +390,7 @@ via: https://opensource.com/article/20/5/systemd-startup
作者:[David Both][a]
选题:[lujun9972][b]
译者:[YungeG](https://github.com/YungeG)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,30 +3,31 @@
[#]: author: "D. Greg Scott https://opensource.com/users/greg-scott"
[#]: collector: "lujun9972"
[#]: translator: "perfiffer"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13714-1.html"
如何在免费 WiFi 中保护隐私(四)
======
> 在 Linux 上安装好“虚拟专用网络”之后,是时候使用它了。
![](https://img.linux.net.cn/data/attachment/album/202108/24/101214ng2afee2gmefgj5z.jpg)
OpenVPN 在两点之间创建了一个加密通道,以阻止第三方访问你的网络流量数据。通过设置你的“虚拟专用网络”服务,你可以成为你自己的“虚拟专用网络”服务商。许多流行的“虚拟专用网络”服务都使用 OpenVPN所以当你可以掌控自己的网络时为什么还要将你的网络连接绑定到特定的提供商呢
本系列的 [第一篇文章][3] 安装了一个“虚拟专用网络”服务器,[第二篇文章][4] 介绍了如何安装和配置 OpenVPN 服务软件,[第三篇文章][5] 解释了如何配置防火墙并启动你的 OpenVPN 服务。第四篇也是最后一篇文章将演示如何从客户端计算机使用你的 OpenVPN 服务器。这就是你做了前三篇文章中所有工作的原因!
### 创建客户端证书
请记住OpenVPN 的身份验证方法要求服务器和客户端都拥有某些东西(证书)并知道某些东西(口令)。是时候设置它了。
首先,为你的客户端计算机创建一个客户端证书和一个私钥。在你的 OpenVPN 服务器上,生成证书请求。它会要求你输入口令;确保你记住它:
```
$ cd /etc/openvpn/ca
$ sudo /etc/openvpn/easy-rsa/easyrsa \
gen-req greglaptop
```
本例中,`greglaptop` 是创建证书的客户端计算机主机名。
@ -36,14 +37,14 @@ gen-req greglaptop
```
$ cd /etc/openvpn/ca
$ /etc/openvpn/easy-rsa/easyrsa \
show-req greglaptop
```
你也可以以客户端身份签署请求:
```
$ /etc/openvpn/easy-rsa/easyrsa \
sign-req client greglaptop
```
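签名完成后,生成的证书和私钥一般位于 easy-rsa 的 `pki` 目录下(具体路径因 easy-rsa 版本和配置而异,下面的路径是基于本系列前几篇文章设置的假设),把它们安全地复制到客户端计算机即可:
```
$ sudo ls /etc/openvpn/ca/pki/issued/greglaptop.crt
$ sudo ls /etc/openvpn/ca/pki/private/greglaptop.key
```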
### 安装 OpenVPN 客户端软件
@ -72,9 +73,9 @@ $ sudo dnf install NetworkManager-openvpn
### 复制和自定义客户端配置文件
在 Linux 系统上,你可以复制服务器上的 `/etc/openvpn/client/OVPNclient2020.ovpn` 文件到 `/etc/NetworkManager/system-connections/` 目录,或者你也可以导航到系统设置中的网络管理器,添加一个“虚拟专用网络”连接。
连接类型选择 <ruby>证书TLS<rt>Certificates (TLS)</rt></ruby>。告知网络管理器你从服务器上复制的证书和密钥。
![VPN displayed in Network Manager][7]
@ -88,20 +89,19 @@ $ sudo dnf install NetworkManager-openvpn
### 将你的客户端连接到服务器
在 Linux 系统上,网络管理器会显示你的 “虚拟专用网络” 连接。选择它进行连接。
![Add a connection in Network Manager][9]
在 Windows 系统上,启动 OpenVPN 图形用户界面。它会在任务栏右侧的 Windows 系统托盘中生成一个图标,通常位于 Windows 桌面的右下角。右键单击图标以连接、断开连接或查看状态。
对于第一次连接,编辑客户端配置文件的 `remote` 行以使用 OpenVPN 服务器的内部 IP 地址。通过右键单击 Windows 系统托盘中的 OpenVPN 图标并单击“<ruby>连接<rt>Connect</rt></ruby>”,从办公室网络内部连接到服务器。调试此连接,这应该可以找到并解决问题,而不会出现任何防火墙问题,因为客户端和服务器都在防火墙的同一侧。
接下来,编辑客户端配置文件的 `remote` 行以使用 OpenVPN 服务器的公共 IP 地址。将 Windows 客户端连接到外部网络并进行连接。调试有可能出现的问题。
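客户端配置文件中的 `remote` 行大致如下这里的地址和域名都是假设值1194 是 OpenVPN 的默认端口):
```
# 第一次测试时,使用服务器的内部 IP 地址(假设值):
remote 192.168.1.10 1194
# 测试通过后,改用服务器的公共 IP 地址或域名(假设值):
remote vpn.example.com 1194
```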
### 安全连接
恭喜!你已经为其他客户端系统准备好了 OpenVPN 网络。对其余客户端重复设置步骤。你甚至可以使用 Ansible 来分发证书和密钥并使其保持最新。
本文基于 D.Greg Scott 的 [博客][10],经许可后重新使用。
@ -112,7 +112,7 @@ via: https://opensource.com/article/21/7/openvpn-client
作者:[D. Greg Scott][a]
选题:[lujun9972][b]
译者:[perfiffer](https://github.com/perfiffer)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -126,5 +126,5 @@ via: https://opensource.com/article/21/7/openvpn-client
[6]: https://winscp.net/eng/index.php
[7]: https://opensource.com/sites/default/files/uploads/network-manager-profile.jpg (VPN displayed in Network Manager)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/network-manager-connect.jpg (Add a VPN connection in Network Manager)
[10]: https://www.dgregscott.com/how-to-build-a-vpn-in-four-easy-steps-without-spending-one-penny/

View File

@ -0,0 +1,188 @@
[#]: subject: "Monitor your Linux system in your terminal with procps-ng"
[#]: via: "https://opensource.com/article/21/8/linux-procps-ng"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13713-1.html"
在终端监控你的 Linux 系统
======
> 本文介绍如何找到一个程序的进程 IDPID。最常见的 Linux 工具是由 procps-ng 包提供的,包括 `ps`、`pstree`、`pidof` 和 `pgrep` 命令。
![](https://img.linux.net.cn/data/attachment/album/202108/24/092948gyyv6nvbn77x7y6o.jpg)
在 [POSIX][2] 术语中,<ruby>进程<rt>process</rt></ruby>是一个正在进行的事件,由操作系统的内核管理。当你启动一个应用时就会产生一个进程,尽管还有许多其他的进程在你的计算机后台运行,包括保持系统时间准确的程序、监测新的文件系统、索引文件,等等。
大多数操作系统都有某种类型的系统活动监视器因此你可以了解在任何特定时刻有哪些进程在运行。Linux 有一些供你选择,包括 GNOME 系统监视器和 KSysGuard。这两个软件在桌面环境都很有用但 Linux 也提供了在终端监控系统的能力。不管你选择哪一种,对于那些积极管理自己电脑的人来说,检查一个特定的进程是一项常见的任务。
在这篇文章中,我演示了如何找到一个程序的进程 IDPID。最常见的工具是由 [procps-ng][3] 包提供的,包括 `ps`、`pstree`、`pidof` 和 `pgrep` 命令。
### 查找一个正在运行的程序的 PID
有时你想得到一个你知道正在运行的特定程序的进程 IDPID。`pidof` 和 `pgrep` 命令可以通过命令名称查找进程。
`pidof` 命令返回一个命令的 PID它按名称搜索确切的命令
```
$ pidof bash
1776 5736
```
`pgrep` 命令允许使用正则表达式:
```
$ pgrep .sh
1605
1679
1688
1776
2333
5736
$ pgrep bash
5736
```
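`pgrep` 还有一些常用的选项,比如用 `-u` 限定用户、用 `-a` 同时显示完整的命令行(下面的输出只是示意):
```
$ pgrep -u tux bash
5736
$ pgrep -a bash
5736 bash
```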
### 通过文件查找 PID
你可以用 `fuser` 命令找到使用特定文件的进程的 PID。
```
$ fuser --user ~/example.txt
/home/tux/example.txt: 3234(tux)
```
### 通过 PID 获得进程名称
如果你有一个进程的 PID 编号,但没有生成它的命令,你可以用 `ps` 做一个“反向查找”:
```
$ ps 3234
PID TTY STAT TIME COMMAND
3234 pts/1 Ss 0:00 emacs
```
### 列出所有进程
`ps` 命令列出进程。你可以用 `-e` 选项列出你系统上的每一个进程:
```
$ ps -e | less
PID TTY TIME CMD
1 ? 00:00:03 systemd
2 ? 00:00:00 kthreadd
3 ? 00:00:00 rcu_gp
4 ? 00:00:00 rcu_par_gp
6 ? 00:00:00 kworker/0:0H-events_highpri
[...]
5648 ? 00:00:00 gnome-control-c
5656 ? 00:00:00 gnome-terminal-
5736 pts/1 00:00:00 bash
5791 pts/1 00:00:00 ps
5792 pts/1 00:00:00 less
(END)
```
### 只列出你的进程
`ps -e` 的输出可能会让人不知所措,所以使用 `-U` 来查看一个用户的进程:
```
$ ps -U tux | less
PID TTY TIME CMD
3545 ? 00:00:00 systemd
3548 ? 00:00:00 (sd-pam)
3566 ? 00:00:18 pulseaudio
3570 ? 00:00:00 gnome-keyring-d
3583 ? 00:00:00 dbus-daemon
3589 tty2 00:00:00 gdm-wayland-ses
3592 tty2 00:00:00 gnome-session-b
3613 ? 00:00:00 gvfsd
3618 ? 00:00:00 gvfsd-fuse
3665 tty2 00:01:03 gnome-shell
[...]
```
这样就把需要梳理的进程数减少到了 200 个(也可能是 100 个,取决于你运行的系统)。
你可以用 `pstree` 命令以不同的格式查看同样的输出:
```
$ pstree -U tux -u --show-pids
[...]
├─gvfsd-metadata(3921)─┬─{gvfsd-metadata}(3923)
│ └─{gvfsd-metadata}(3924)
├─ibus-portal(3836)─┬─{ibus-portal}(3840)
│ └─{ibus-portal}(3842)
├─obexd(5214)
├─pulseaudio(3566)─┬─{pulseaudio}(3640)
│ ├─{pulseaudio}(3649)
│ └─{pulseaudio}(5258)
├─tracker-store(4150)─┬─{tracker-store}(4153)
│ ├─{tracker-store}(4154)
│ ├─{tracker-store}(4157)
│ └─{tracker-store}(4178)
└─xdg-permission-(3847)─┬─{xdg-permission-}(3848)
└─{xdg-permission-}(3850)
```
### 列出进程的上下文
你可以用 `-u` 选项查看你拥有的所有进程的额外上下文。
```
$ ps -U tux -u
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
tux 3545 0.0 0.0 89656 9708 ? Ss 13:59 0:00 /usr/lib/systemd/systemd --user
tux 3548 0.0 0.0 171416 5288 ? S 13:59 0:00 (sd-pam)
tux 3566 0.9 0.1 1722212 17352 ? S<sl 13:59 0:29 /usr/bin/pulseaudio [...]
tux 3570 0.0 0.0 664736 8036 ? SLl 13:59 0:00 /usr/bin/gnome-keyring-daemon [...]
[...]
tux 5736 0.0 0.0 235628 6036 pts/1 Ss 14:18 0:00 bash
tux 6227 0.0 0.4 2816872 74512 tty2 Sl+14:30 0:00 /opt/firefox/firefox-bin [...]
tux 6660 0.0 0.0 268524 3996 pts/1 R+ 14:50 0:00 ps -U tux -u
tux 6661 0.0 0.0 219468 2460 pts/1 S+ 14:50 0:00 less
```
### 用 PID 排除故障
如果你在某个特定的程序上有问题,或者你只是好奇某个程序在你的系统上还使用了什么资源,你可以用 `pmap` 查看运行中的进程的内存图。
```
$ pmap 1776
1776: bash
000055f9060ec000 1056K r-x-- bash
000055f9063f3000 16K r---- bash
000055f906400000 40K rw--- [ anon ]
00007faf0fa67000 9040K r--s- passwd
00007faf1033b000 40K r-x-- libnss_sss.so.2
00007faf10345000 2044K ----- libnss_sss.so.2
00007faf10545000 4K rw--- libnss_sss.so.2
00007faf10546000 212692K r---- locale-archive
00007faf1d4fb000 1776K r-x-- libc-2.28.so
00007faf1d6b7000 2044K ----- libc-2.28.so
00007faf1d8ba000 8K rw--- libc-2.28.so
[...]
```
### 处理进程 ID
procps-ng 软件包有你需要的所有命令,以调查和监控你的系统在任何时候的使用情况。无论你是对 Linux 系统中各个分散的部分如何结合在一起感到好奇,还是要对一个错误进行调查,或者你想优化你的计算机的性能,学习这些命令都会为你了解你的操作系统提供一个重要的优势。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/linux-procps-ng
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/system-monitor-splash.png?itok=0UqsjuBQ (System monitor)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://gitlab.com/procps-ng

View File

@ -0,0 +1,161 @@
[#]: subject: "Schedule a task with the Linux at command"
[#]: via: "https://opensource.com/article/21/8/linux-at-command"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13710-1.html"
用 Linux 的 at 命令来安排一个任务
======
> at 命令是一种在特定时间和日期安排一次性任务的 Linux 终端方法。
![](https://img.linux.net.cn/data/attachment/album/202108/23/144542rmmyzwxsnanm4wpj.jpg)
计算机擅长 [自动化][2],但不是每个人都知道如何使自动化工作。不过,能够在特定的时间为电脑安排一个任务,然后忘记它,这确实是一种享受。也许你有一个文件要在特定的时间上传或下载,或者你需要处理一批还不存在但可以保证在某个时间存在的文件,或者需要监控设置,或者你只是需要一个友好的提醒,在下班回家的路上买上面包和黄油。
这就是 `at` 命令的用处。
### 什么是 Linux at 命令?
`at` 命令是在 Linux 终端让你在特定时间和日期安排一次性工作的方法。它是一种自发的自动化,在终端上很容易实现。
### 安装 at
在 Linux 上,`at` 命令可能已经安装了。你可以使用 `at -V` 命令来验证它是否已经安装。只要返回一个版本号,就说明你已经安装了 `at`
```
$ at -V
at version x.y.z
```
如果你试图使用 `at`,但没有找到该命令,大多数现代的 Linux 发行版会为你提供缺少的 `at` 软件包。
你可能还需要启动 `at` 守护程序,称为 `atd`。在大多数 Linux 系统中,你可以使用 `systemctl` 命令来启用该服务,并将它们设置为从现在开始自动启动:
```
$ sudo systemctl enable --now atd
```
### 用 at 交互式地安排一个作业
当你使用 `at` 命令并加上你希望任务运行的时间,会打开一个交互式 `at` 提示符。你可以输入你想在指定时间运行的命令。
做个比喻,你可以把这个过程看作是一个日历应用,就像你在你的手机上使用的那样。首先,你在某一天的某个时间创建一个事件,然后指定你想要发生什么。
例如,可以试试创建一个未来几分钟的任务,来给自己计划一个备忘录。这里运行一个简单的任务,以减少失败的可能性。要退出 `at` 提示符,请按键盘上的 `Ctrl+D`
```
$ at 11:20 AM
warning: commands will be executed using /bin/sh
at> echo "hello world" > ~/at-test.txt
at> <EOT>
job 3 at Mon Jul 26 11:20:00 2021
```
正如你所看到的,`at` 使用直观和自然的时间定义。你不需要用 24 小时制的时钟,也不需要把时间翻译成 UTC 或特定的 ISO 格式。一般来说,你可以使用你自然想到的任何符号,如 `noon`、`1:30 PM`、`13:37` 等等,来描述你希望一个任务发生的时间。
等待几分钟,然后在你创建的文件上运行 `cat` 或者 `tac` 命令,验证你的任务是否已经运行:
```
$ cat ~/at-test.txt
hello world
```
### 用 at 安排一个任务
你不必使用 `at` 交互式提示符来安排任务。你可以使用 `echo``printf` 向它传送命令。在这个例子中,我使用了 `now` 符号,以及我希望任务从现在开始延迟多少分钟:
```
$ echo "echo 'hello again' >> ~/at-test.txt" | at now +1 minute
```
一分钟后,验证新的命令是否已被执行:
```
$ cat ~/at-test.txt
hello world
hello again
```
### 时间表达式
`at` 命令在解释时间时是非常宽容的。你可以在许多格式中选择,这取决于哪一种对你来说最方便:
* `YYMMDDhhmm[.ss]`(两位的年份、月、日、小时、分钟,及可选的秒)
* `CCYYMMDDhhmm[.ss]`(四位的年份、月、日、时、分钟,及可选的秒)
* `now`(现在)
* `midnight`(午夜 00:00
* `noon`(中午 12:00
* `teatime`(下午 16 点)
* `AM`(上午)
* `PM`(下午)
时间和日期可以是绝对时间,也可以加一个加号(`+`),使其与 `now` 相对。当指定相对时间时,你可以使用你可能用过的词语:
* `minutes`(分钟)
* `hours`(小时)
* `days`(天)
* `weeks`(星期)
* `months`(月)
* `years`(年)
### 时间和日期语法
`at` 命令对日期的输入不像对时间那么宽容。时间必须放在第一位,接着是日期;日期默认为当前日期,并且只有在为未来某天安排任务时才需要指定。
这些是一些有效表达式的例子:
```
$ echo "rsync -av /home/tux me@myserver:/home/tux/" | at 3:30 AM tomorrow
$ echo "/opt/batch.sh ~/Pictures" | at 3:30 AM 08/01/2022
$ echo "echo hello" | at now + 3 days
```
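除了用管道传入命令,`at` 还可以用 `-f` 选项从文件中读取要执行的命令(下面的脚本路径是假设的):
```
$ at -f ~/backup.sh 3:30 AM tomorrow
```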
### 查看你的 at 队列
当你爱上了 `at`,并且正在安排任务,而不是在桌子上的废纸上乱写乱画,你可能想查看一下你是否有任务还在队列中。
要查看你的 `at` 队列,使用 `atq` 命令:
```
$ atq
10 Thu Jul 29 12:19:00 2021 a tux
9 Tue Jul 27 03:30:00 2021 a tux
7 Tue Jul 27 00:00:00 2021 a tux
```
要从队列中删除一个任务,使用 `atrm` 命令和任务号。例如,要删除任务 7
```
$ atrm 7
$ atq
10 Thu Jul 29 12:19:00 2021 a tux
9 Tue Jul 27 03:30:00 2021 a tux
```
要看一个计划中的任务的实际内容,你需要查看 `/var/spool/at` 下的内容。只有 root 用户可以查看该目录的内容,所以你必须使用 `sudo` 来查看或 `cat` 任何任务的内容。
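另外,`at` 自带的 `-c` 选项可以直接打印某个任务保存的完整内容(包括环境变量),不需要翻看 `/var/spool/at`。例如查看任务 10
```
$ at -c 10
```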
### 用 Linux at 安排任务
`at` 系统是一个很好的方法,可以避免忘记在一天中晚些时候运行一个作业,或者在你离开时让你的计算机为你运行一个作业。它不像 `cron` 那样要求任务必须从现在起一直按计划运行到永远,因此它的语法也比 `cron` 简单得多。
等下次你有一个希望你的计算机记住并管理它的小任务,试试 `at` 命令。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/linux-at-command
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://opensource.com/article/20/11/orchestration-vs-automation

View File

@ -0,0 +1,72 @@
[#]: subject: "4 alternatives to cron in Linux"
[#]: via: "https://opensource.com/article/21/7/alternatives-cron-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13716-1.html"
Linux 中 cron 系统的 4 种替代方案
======
> 在 Linux 系统中有一些其他开源项目可以结合或者替代 cron 系统使用。
![](https://img.linux.net.cn/data/attachment/album/202108/25/104033ro6lasn54lq25r2l.jpg)
[Linux cron 系统][2] 是一项经过时间检验的成熟技术,然而在任何情况下它都是最合适的系统自动化工具吗?答案是否定的。有一些开源项目就可以用来与 cron 结合或者直接代替 cron 使用。
### at 命令
cron 适用于长期重复任务。如果你设置了一个工作任务,它会从现在开始定期运行,直到计算机报废为止。但有些情况下你可能只想设置一个一次性命令,以备不在计算机旁时该命令可以自动运行。这时你可以选择使用 `at` 命令。
`at` 的语法比 cron 语法简单和灵活得多,并且兼具交互式和非交互式调度方法。(只要你想,你甚至可以使用 `at` 作业创建一个 `at` 作业。)
```
$ echo "rsync -av /home/tux/ me@myserver:/home/tux/" | at 1:30 AM
```
该命令语法自然且易用,并且不需要用户清理旧作业,因为它们一旦运行后就完全被计算机遗忘了。
阅读有关 [at 命令][3] 的更多信息并开始使用吧。
### systemd
除了管理计算机上的进程外,`systemd` 还可以帮你调度这些进程。与传统的 cron 作业一样systemd 计时器可以在指定的时间点或时间间隔触发事件(例如运行 shell 脚本和程序):可以是每月特定的某一天触发一次(或许只在星期一触发),也可以是在 09:00 到 17:00 的工作时间内每 15 分钟触发一次。
此外 systemd 里的计时器还可以做一些 cron 作业不能做的事情。
例如,计时器可以在一个事件 _之后_ 触发脚本或程序来运行特定时长,这个事件可以是开机,可以是前置任务的完成,甚至可以是计时器本身调用的服务单元的完成!
如果你的系统运行着 systemd 服务,那么你的机器就已经在技术层面上使用 systemd 计时器了。默认计时器会执行一些琐碎的任务,例如滚动日志文件、更新 mlocate 数据库、管理 DNF 数据库等。创建自己的计时器很容易,具体可以参阅 David Both 的文章 [使用 systemd 计时器来代替 cron][4]。
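想看看你的系统上已经有哪些计时器,可以运行下面的命令,它会列出所有计时器及其下一次触发时间:
```
$ systemctl list-timers --all
```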
### anacron 命令
cron 专门用于在特定时间运行命令这适用于从不休眠或断电的服务器。然而对笔记本电脑和台式工作站而言时常有意或无意地关机是很常见的。当计算机处于关机状态时cron 不会运行,因此设定在这段时间内的一些重要工作(例如备份数据)也就会跳过执行。
anacron 系统旨在确保作业定期运行,而不是按计划时间点运行。这就意味着你可以将计算机关机几天,再次启动时仍然靠 anacron 来运行基本任务。anacron 与 cron 协同工作,因此严格来说前者不是后者的替代品,而是一种调度任务的有效可选方案。许多系统管理员配置了一个 cron 作业来在深夜备份远程工作者计算机上的数据结果却发现该作业在过去六个月中只运行过一次。anacron 确保重要的工作在 _可执行的时候_ 发生,而不是必须在安排好的 _特定时间点_ 发生。
点击参阅关于 [使用 anacron 获得更好的 crontab 效果][5] 的更多内容。
### 自动化
计算机和技术旨在让人们的生活更美好工作更轻松。Linux 为用户提供了许多有用的功能以确保完成重要的操作系统任务。查看这些可用的功能然后试着将这些功能用于你自己的工作任务吧。LCTT 译注:作者本段有些语焉不详,读者可参阅譬如 [Ansible 自动化工具安装、配置和快速入门指南](https://linux.cn/article-13142-1.html) 等关于 Linux 自动化的文章)
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/alternatives-cron-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://opensource.com/article/21/7/cron-linux
[3]: https://opensource.com/article/21/7/intro-command
[4]: https://opensource.com/article/20/7/systemd-timers
[5]: https://opensource.com/article/21/2/linux-automation

View File

@ -0,0 +1,126 @@
[#]: subject: "Linux Phones: Here are Your Options"
[#]: via: "https://itsfoss.com/linux-phones/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13711-1.html"
如何选择一台 Linux 手机
======
![](https://img.linux.net.cn/data/attachment/album/202108/23/155159e5s33xo63tz5jddz.jpg)
> 未来取代安卓或 iOS 的可能是 Linux 手机,但如今,有哪些选择可以尝试一下呢?
虽然安卓是基于 Linux 内核的,但它经过了大量修改。因此,这意味着它不是一个完全意义上的基于 Linux 的操作系统。
谷歌正在努力使安卓内核更接近主线 Linux 内核,但这仍然是一个遥远的梦想。
那么,在这种情况下,如果你正在寻找一款 Linux 手机、一款由 Linux 操作系统驱动的智能手机,有哪些可以选择呢?
这并不是一个容易做出的决定,因为你的选择非常有限。因此,我试图推荐一些最好的、不同于主流选择的 Linux 手机。
### 如今你可以使用的顶级 Linux 手机
值得注意的是,这里提到的 Linux 手机或许无法取代你的安卓或 iOS 设备。因此,在做出购买决定之前,请确保你做了一些背景研究。
**注意:** 你需要仔细检查这些 Linux 手机是否可以购买到、预期的发货日期和使用风险。它们大多数只适合于发烧友或早期试用者。
#### 1、PinePhone
![][1]
[PinePhone][2] 是最有性价比和最受欢迎的选择之一,我觉得它是一个有前途的 Linux 手机。
它并不局限于单一的操作系统。你可以尝试使用带有 Plasma mobile OS 的 Manjaro、UBports、Sailfish OS 等系统。PinePhone 的配置不错,它包括一个四核处理器和 2GB 或 3GB 的内存。它支持使用可启动的 microSD 卡来帮助你安装系统,还可选 16/32GB eMMC 存储。
其显示屏是一个基本的 1440×720p IPS 屏幕。你还可以得到特殊的隐私保护,如蓝牙、麦克风和摄像头的断路开关。
PinePhone 还为你提供了使用六个可用的 pogo 引脚添加自定义的硬件扩展的方式。
其基本版2GB 内存和 16GB 存储)默认加载了 Manjaro价格为 149 美元而融合版3GB 内存和 32GB 存储)价格为 199 美元。
#### 2、Fairphone
![][3]
与这个清单上的其他选择相比,[Fairphone][6] 在商业上是成功的。它不是一款 Linux 智能手机,但它具有定制版的安卓系统,即 Fairphone OS并且可以选择 [开源安卓系统替代品][5] 之一 [/e/ OS][4]。如果你想使用 Linux 操作系统,也有一些社区移植版本,但可能有点碰运气。
Fairphone 有两个不同的版本,提供了一些不错的配置规格。你会发现 Fairphone 3+ 有一个 4800 万像素的相机传感器和一个全高清显示屏。另外,你还会发现先进的高通处理器为该设备提供了动力。
他们专注于制造可持续发展的智能手机,并使用了一定量的回收塑料制造。这也为了方便维修。
因此,它不仅是一个非主流智能手机的选择,而且如果你选择了它,你也将为保护环境出了力。
#### 3、Librem 5
![][7]
[Librem 5][9] 是一款非常注重用户隐私的智能手机,同时它采用了开源的操作系统,即 PureOS并非基于安卓。
它所提供的配置规格还不错,有 3GB 内存和四核 Cortex A53 芯片组。但是,这无法与主流选择相竞争。因此,你可能不会觉得它物美价廉。
它的目标是那些对尊重隐私的智能手机感兴趣的发烧友。
与其他产品类似Librem 5 也专注于通过提供用户可更换的电池使手机易于维修。
在隐私方面,你会注意到有蓝牙、相机和麦克风的断路开关。他们还承诺了未来几年的安全更新。
#### 4、Pro 1X
![][10]
[Pro 1X][11] 是一款有趣的智能手机,同时支持 Ubuntu Touch、Lineage OS 和安卓。
它不仅是一款 Linux 智能手机,而且是一款带有独立 QWERTY 键盘的手机,这在现在是很罕见的。
Pro 1 X 的配置规格不错,包括了一个骁龙 662 处理器和 6GB 内存。它还带有一块不错的 AMOLED 全高清显示屏。
它的相机不是特别强大,但在大多数情况下应该是足够了。
#### 5、Volla Phone
![][12]
[Volla Phone][13] 是一个有吸引力的产品,运行在 UBports 的 Ubuntu Touch。
它预装了“虚拟专用网络”,并专注于简化用户体验。它的操作系统是定制的,因此可以快速访问所有重要的东西,而无需自己组织。
它的配置规格令人印象深刻,包括了一个八核联发科处理器和 4700 毫安时的电池。你会得到类似于一些最新的智能手机上的设计。
### 总结
Linux 智能手机不是到处都能买到的,当然也还不适合大众使用。
因此,如果你是一个发烧友,或者想支持这种手机的发展,你可以考虑购买一台。
你已经拥有一台这种智能手机了吗?请不要犹豫,在下面的评论中分享你的经验。
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-phones/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/PinePhone-3.jpg?resize=800%2C800&ssl=1
[2]: https://www.pine64.org/pinephone/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/fairphone.png?resize=360%2C600&ssl=1
[4]: https://itsfoss.com/e-os-review/
[5]: https://itsfoss.com/open-source-alternatives-android/
[6]: https://shop.fairphone.com/en/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/librem-5.png?resize=800%2C450&ssl=1
[8]: https://itsfoss.com/librem-linux-phone/
[9]: https://puri.sm/products/librem-5/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/pro1x.jpg?resize=800%2C542&ssl=1
[11]: https://www.fxtec.com/pro1x
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/volla-smartphone.jpg?resize=695%2C391&ssl=1
[13]: https://www.indiegogo.com/projects/volla-phone-free-your-mind-protect-your-privacy#/

View File

@ -0,0 +1,105 @@
[#]: subject: "KDE Plasma 5.23 New Features and Release Dates"
[#]: via: "https://www.debugpoint.com/2021/08/kde-plasma-5-23/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "imgradeone"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13719-1.html"
KDE Plasma 5.23 的新功能和发布日期
======
![](https://img.linux.net.cn/data/attachment/album/202108/25/222802zwhmvv1vwzusevzw.jpg)
> 我们在这篇文章中总结了 KDE Plasma 5.23(即将到来)的新功能,包括主要特点、下载和测试说明。
KDE Plasma 桌面是当今最流行、最顶级的 Linux 桌面环境,而 KDE Plasma 的热度之高主要得益于其适应能力强、迭代发展迅速,以及性能不断提高。[KDE Plasma 5.22][1] 发布以来KDE 团队一直忙于为即将到来的 KDE Plasma 5.23 合并更改和测试新功能。目前 KDE Plasma 5.23 仍在开发中,如下是暂定的时间表。
### KDE Plasma 5.23 发布时间表
KDE Plasma 5.23 将于 2021 年 10 月 7 日发布,以下是时间表:
* Beta 公测 2021 年 9 月 16 日
* 最终发布 2021 年 10 月 7 日
正如每个 Plasma 版本更新一样,本次更新也同样承诺对核心 Plasma Shell 和 KDE 应用进行大幅更改、代码清理、性能改进、数百个 bug 修复、Wayland 优化等。我们在本篇文章中收集了一些重要的功能,让你对即将发布的新功能有基本了解。下面就让我们看看。
### KDE Plasma 5.23 新功能
* 本次版本更新基于 Qt 5.15 版本KDE 框架 5.86 版本。
#### Plasma Shell 和应用程序更新
* 本次 KDE Plasma 的 Kickoff 程序启动器将有大幅更新,包括 bug 修复、减少内存占用、视觉更新、键鼠导航优化。
* Kickoff 程序启动器菜单允许使用固定按钮固定在桌面上,保持开启状态。
* Kickoff 的标签不会在你滚动时切换(从应用标签到位置标签)。
* Kickoff 里可以使用 `CTRL+F` 快捷键直接聚焦到搜索栏。
* Kickoff 中的操作按钮(如关机等)可以设置为仅显示图标。
* 现在可以针对所有 Kickoff 项目选择使用网格或列表视图(而不仅仅局限于收藏夹)。
![KDE Plasma 5.23 中 Kickoff 程序启动器新增的选项][2]
![Kickoff 程序启动器的更改][3]
* 新增基于 QML 的全新概览视图(类似 GNOME 3.38 的工作区视图),用于展示所有打开的窗口(详见如下视频)。目前我找不到关于此合并请求的更多详情,而且这个新视图也很不稳定。
![](https://www.debugpoint.com/blog/wp-content/uploads/2021/08/New-Overview-effect-in-KDE-Plasma-5.23.mp4)
_视频作者KDE 团队_
* 该概览效果将替代现有的“展现窗口”特效和“虚拟桌面平铺网格”特效(计划中)。
* 未连接触控板时将展示更易察觉的“未找到触摸板”提示。
* “电源配置方案”设置现在呈现于 Plasma UI电池和亮度窗口中。电源配置方案功能从 Linux 内核 5.12 版本开始已经登陆戴尔和联想的笔记本电脑了。因此如果你拥有这些品牌的较新款笔记本电脑你可以将电源配置方案设置为高性能或省电模式。_[注Fedora 35 很有可能会在 GNOME 41 中增加该功能]_
![新的“电源配置方案”设置][4]
* 如果你有多屏幕设置,包括垂直和横向屏幕,那么登录屏幕现在可以正确同步和对齐。这个功能的需求度很高。
* 新的 Breeze 主题预计会有风格上的更新。
* 如前序版本一样,预计会有全新的壁纸(目前壁纸大赛仍在进行中)。
* 新增当硬件从笔记本模式切换到平板模式时是否缩放系统托盘图标的设置。
* 你可以选择在登录时的蓝牙状态:总是启用、总是禁用、记住上一次的状态。该状态在版本升级后仍可保留。
* 用户现在可以更改传感器的显示名称。
* Breeze 风格的滚动条现在比之前版本的更宽。
* Dolphin 文件管理器提供在文件夹前之前优先显示隐藏文件的新选项。
* 你现在可以使用 `DEL` 键删除剪贴板弹窗中选中的项目。
* KDE 现在允许你直接从 Plasma 桌面,向 store.kde.org 提交你制作的图标和主题。
#### Wayland 更新
* 在 Wayland 会话中,运行程序时光标旁也会展示图标反馈动画。
* 现在可以从通知中复制文字。
* 中键单击粘贴功能现在可以在 Wayland 和 XWayland 应用程序中正常使用。
请务必牢记,每个版本都有数以百计的 bug 修复和改进。本文仅仅包括了我收集的表面层次的东西。因此,如果想了解应用程序和 Plasma Shell 的变更详情,请访问 GitLab 或 KDE Planet 社区。
### 不稳定版本下载
你现在可以通过下方的链接下载 KDE neon 的不稳定版本来体验上述全部功能。直接下载 .iso 文件,然后安装测试即可。请务必在发现 bug 后及时反馈。该不稳定版本不适合严肃场合及生产力设备使用。
- [下载 KDE neon 不稳定版本][5]
### 结束语
KDE Plasma 5.23 每次发布都在改进底层、增加新功能。虽然这个版本不是大更新,但一切优化、改进最终都将累积成稳定性、适应性和更好的用户体验。当然,还有更多的 Wayland 改进讲真Wayland 兼容工作看上去一直处于“正在进行中”的状态,就像十年过去了,却还在做一样。当然,这是另一个话题了)。
再会。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/kde-plasma-5-23/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[imgradeone](https://github.com/imgradeone)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/2021/06/kde-plasma-5-22-release/
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/New-Kickoff-Options-in-KDE-Plasma-5.23.jpeg
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Changes-in-kickoff.jpeg
[4]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/New-power-profiles.jpeg
[5]: https://neon.kde.org/download

View File

@ -0,0 +1,99 @@
[#]: subject: "10 steps to more open, focused, and energizing meetings"
[#]: via: "https://opensource.com/open-organization/21/8/10-steps-better-meetings"
[#]: author: "Catherine Louis https://opensource.com/users/catherinelouis"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
10 steps to more open, focused, and energizing meetings
======
Constructing your meetings with open organization principles in mind can
help you avoid wasted time, effort, and talent.
![Open lightbulbs.][1]
The negative impact of poorly run meetings is [huge][2]. So leaders face a challenge: how do we turn poorly run meetings—which hamper team creativity and success, and even cause stress and anxiety—into meetings with positive outcomes? But to make the situation even tougher, we now find most meetings are being held remotely, online, where attendees' cameras are off and you're likely staring at a green dot at the top of your screen. That makes holding genuinely productive and useful meetings an even greater challenge.
Thinking about your meetings differently—and constructing your meetings with [open organization principles][3] in mind—can help turn your next remote meeting into an energizing experience with positive outcomes. Here are some guidelines to get you started. I'll explain steps you can take as you _prepare for_, _hold_, and _follow up from_ your meetings.
### Preparing for your meeting:
#### 1\. Protect everyone's time
First, you'll need to reflect on the reason you're calling a meeting in the first place. As a meeting leader, you must recognize your role as the person who could kill productivity and destroy the ability for attendees to be mindfully present. By holding a meeting and asking people to be there, you are removing hours from people's days, exhausting the time they have to spend—and time is a non-replenishable resource. So imagine instead that you are a _guardian_ of people's time. You need to treat their time with respect. Consider that the only reason _why_ you're holding a meeting in the first place is to _keep from_ wasting time. For example, if you see a group thrashing over a decision again and again because the wrong people were involved in an email chain, instead suggest holding a half-hour meeting to reach a consensus, thereby saving everyone's time in the end. One way to think about this: Treat employees the same way you'd treat your customers. You would never want a customer to feel they were invited to a meeting that was a waste of their time. Adopting this mindset, you'll instantly become sensitive to scheduling meetings over someone's lunch hour. If you commit to becoming a time saver, you'll become more intentional in _all aspects_ of meeting planning, executing, and closing. And you will get better and better at this as a result.
#### 2\. Use tools to be more inclusive
Like all meetings, remote meetings can contain their moments of silence, as people think, reflect, or take notes. But don't take silence as an indication of understanding, agreement, or even presence. You want to hear input and feedback from everyone at the meeting—not just those who are most comfortable or chatty. Familiarize yourself with some of the growing list of fantastic apps (Mentimeter, Klaxoon, Sli.do, Meeting pulse, Poll Everywhere, and other [open source tools][4]) designed to help attendees collaborate during meetings, even vote and reach a consensus. Make use of video when you can. Use your chat room technology for attendees to share if they missed something or raise a hand to speak, or even as a second channel of communication. Remote meeting tools are multiplying at an exponential rate; bone up on new ones to keep meetings interesting and help increase engagement.
#### 3\. Hone your invitation list
When preparing invitations to your meeting, keep the list as small as possible. The larger the group, the more time you'll need, and the quality of a meeting tends to decrease as the size of the meeting increases. One way to make sure you have the right people: instead of sending out topics for discussion, send a preliminary note to participants you think could add value, and solicit questions. Those who answer the preliminary questions will be those who need to attend, and these are the people who you need to invite.
Treat employees the same way you'd treat your customers. You would never want a customer to feel they were invited to a meeting that was a waste of their time.
#### 4\. Time box everything, and adapt
With our shorter attention spans, [stop defaulting to hour-long meetings][5]. Don't hesitate to schedule even just 15 minutes for a meeting. Reducing the meeting length creates positive pressure; [research shows][6] that groups operating under a level of time pressure (using time boxing) perform more optimally due to increased focus. Imagine that after five minutes of one person speaking, everyone else in the meeting will begin to multitask. This means that as a facilitator you have just five minutes to present a concept. Use just these five minutes, then ask for connections: Who knows what about this topic? You'll learn there are experts in the room. Time box this activity, too, to five minutes. Next, break into small groups to discuss concrete practices and steps. Time box this for just 10 minutes, then share how far folks got in that shortened time box. Iterate and adjust for the next time box, and reserve yet another one for takeaways and conclusions.
#### 5\. Make your agenda transparent
Make meeting details as transparent as possible to everyone who's invited. The meeting agenda, for example, should have a desired outcome in the subject line. The opening line of the agenda should state clearly why the meeting needs to be held. For example:
"The choice of go-forward strategy for Product A has been thrashing for two weeks with an estimate of 60 or more hours of back and forth discussion. This meeting is being called with the people involved to agree on our go-forward plan."
Agenda details should outline the time boxes you've outlined to accomplish the goal. Logistics are critical: if you wish cameras to be on, ask for cameras to be on. And even though you've thought thoroughly about your invitee list, note that you may still have invited someone who doesn't need to be there. Provide an opt-out opportunity. For example:
"If you feel you cannot contribute to this meeting and someone else should, please reach out to me with their contact information."
### Conducting your meeting:
#### 6\. Be punctual
Start and end the meeting on time! Arrive early to check the technology. As the meeting leader, recognize that your mood will set the tone for your attendees. So consider beginning the meeting with appreciations, recognitions, and statements of gratitude. Beginning a meeting on a positive note establishes a positive mood and promotes creativity, active listening, and participation. You'll find your meeting will be more constructive as a result.
Like all meetings, remote meetings can contain their moments of silence, as people think, reflect, or take notes. But don't take silence as an indication of understanding, agreement, or even presence.
#### 7\. Engineer your meeting's culture
In the meeting itself, use Strategyzer's [Culture map][7] to create the culture you want for the meeting itself. You do this by agreeing the desired outcome of the meeting, asking what can enable or block attendees from achieving this outcome, and identifying the behaviors the group must exhibit to make this happen. Silently brainstorm with post-its on a jamboard, then have folks actively share what can make this meeting successful for all.
#### 8\. Invite collaboration
In openly run meetings, the best ideas should emerge. But this can only happen with your help. Recognize your role as a meeting leader who must remain neutral and encourage collaboration. Look for those who aren't participating and provide tools (or encouragement) that will help them get involved. For example, instead of verbal brainstorming, do a silent and anonymous brainstorm using stickies in a jamboard. You'll begin to see participation. Stick to the agenda and its time boxes, and watch for folks who talk over others:
"Sara, Fred wasn't finished with his thought. Please let him finish."
### Closing and and reviewing your meeting:
#### 9\. Write it down
Openly run meetings should result in openly recorded outcomes. Be sure your agenda includes time for the group to clarify takeaways, assign action items, and identify stakeholders who'll be responsible for completing work.
#### 10\. Close the loop
Finally, review the meeting with a retrospective. Ask for feedback on the meeting itself. What worked in your facilitation? What was lacking? Does anyone have ideas for ways to improve the next meeting? Were any questions unanswered? Any epiphanies reached? Take this feedback in and come up with a new experiment for the next meeting to address the improvements. Attendees at your next meeting will be more than grateful, and in the long run you'll improve your meeting facilitation skills.
The path to collaboration is usually paved with the best intentions. We all know too well that this...
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/21/8/10-steps-better-meetings
作者:[Catherine Louis][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/catherinelouis
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_openlightbulbs.png?itok=nrv9hgnH (Open lightbulbs.)
[2]: https://ideas.ted.com/the-economic-impact-of-bad-meetings/
[3]: https://theopenorganization.org/definition
[4]: https://opensource.com/article/20/3/open-source-working-home
[5]: https://opensource.com/open-organization/18/3/open-approaches-meetings
[6]: https://learn.filtered.com/hubfs/Definitive%20100%20Most%20Useful%20Productivity%20Hacks.pdf
[7]: https://www.strategyzer.com/blog/posts/2015/10/13/the-culture-map-a-systematic-intentional-tool-for-designing-great-company-culture

View File

@ -0,0 +1,67 @@
[#]: subject: "When Linus Torvalds Was Wrong About Linux (And I am Happy He Was Wrong)"
[#]: via: "https://news.itsfoss.com/trovalds-linux-announcement/"
[#]: author: "Abhishek https://news.itsfoss.com/author/root/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
When Linus Torvalds Was Wrong About Linux (And I am Happy He Was Wrong)
======
Linus Torvalds, the creator of the Linux kernel and Git, needs no introduction.
He is a shy geek who does not talk much in public and prefers mailing lists, who loves code and gadgets more than most other things, and who prefers working from home to spending time in shiny offices.
Torvalds expresses his opinions on Linux-related things quite vocally. We can't forget the finger to Nvidia moment that forced Nvidia to improve Linux support (it was way worse back in 2012).
Generally, I agree with his opinions, and most often his views have turned out to be correct. Except in this one case (and that's a good thing).
### Torvalds “incorrect prediction” on Linux
30 years ago, Torvalds announced the Linux project. He was a university student at that time and wanted to create a UNIX-like operating system because UNIX itself was too costly.
While announcing the project, Torvalds mentioned that the project was just a hobby and wouldn't be big and professional like GNU.
> I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones.
Linus Torvalds while announcing the Linux project
Little did Torvalds know that his hobby would become the backbone of today's IT world and the face of a successful open source project.
Here's the complete message he sent:
Hello everybody out there using minix
I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them 🙂
Linus ([torv…@kruuna.helsinki.fi][1])
PS. Yes it's free of any minix code, and it has a multi-threaded fs. It is NOT protable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.
That was on 25th August 1991. Torvalds announced the Linux project, and then on 5th October 1991, he released the first Linux kernel. An [interesting fact about Linux][2] is that it was not open source initially. It was released under the GPL license a year later.
The Linux Kernel is 30 years old today. Happy 30th to this amazing open source project.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/trovalds-linux-announcement/
作者:[Abhishek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/root/
[b]: https://github.com/lujun9972
[1]: https://groups.google.com/
[2]: https://itsfoss.com/facts-linux-kernel/

View File

@ -1,183 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (unigeorge)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A beginners guide to SSH for remote connection on Linux)
[#]: via: (https://opensource.com/article/20/9/ssh)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
A beginner's guide to SSH for remote connection on Linux
======
Establish connections with remote computers using secure shell.
![woman on laptop sitting at the window][1]
One of Linux's most appealing features is the ability to skillfully use a computer with nothing but commands entered into the keyboard—and better yet, to be able to do that on computers anywhere in the world. Thanks to OpenSSH, [POSIX][2] users can open a secure shell on any computer they have permission to access and use it from a remote location. It's a daily task for many Linux users, but it can be confusing for someone who has yet to try it. This article explains how to configure two computers for secure shell (SSH) connections, and how to securely connect from one to the other without a password.
### Terminology
When discussing more than one computer, it can be confusing to identify one from the other. The IT community has well-established terms to help clarify descriptions of the process of networking computers together.
* **Service:** A service is software that runs in the background so it can be used by computers other than the one it's installed on. For instance, a web server hosts a web-sharing _service_. The term implies (but does not insist) that it's software without a graphical interface.
* **Host:** A host is any computer. In IT, computers are called a _host_ because technically any computer can host an application that's useful to some other computer. You might not think of your laptop as a "host," but you're likely running some service that's useful to you, your mobile, or some other computer.
* **Local:** The local computer is the one you or some software is using. Every computer refers to itself as `localhost`, for example.
* **Remote:** A remote computer is one you're not physically in front of nor physically using. It's a computer in a _remote_ location.
Now that the terminology is settled, you can begin.
### Activate SSH on each host
For two computers to be connected over SSH, each host must have SSH installed. SSH has two components: the command you use on your local machine to start a connection, and a _server_ to accept incoming connection requests. Some computers come with one or both parts of SSH already installed. The commands for verifying whether you have both the command and the server installed vary depending on your system, so the easiest method is to look for the relevant configuration files:
```
$ file /etc/ssh/ssh_config
/etc/ssh/ssh_config: ASCII text
```
Should this return a `No such file or directory` error, then you don't have the SSH command installed.
Do a similar check for the SSH service (note the `d` in the filename):
```
$ file /etc/ssh/sshd_config
/etc/ssh/sshd_config: ASCII text
```
Install one or the other, as needed:
```
$ sudo dnf install openssh-clients openssh-server
```
On the remote computer, enable the SSH service with systemd:
```
$ sudo systemctl enable --now sshd
```
Alternately, you can enable the SSH service from within **System Settings** on GNOME or **System Preferences** on macOS. On the GNOME desktop, it's located in the **Sharing** panel:
![Activate SSH in GNOME System Settings][3]
(Seth Kenlon, [CC BY-SA 4.0][4])
### Start a secure shell
Now that you've installed and enabled SSH on the remote computer, you can try logging in with a password as a test. To access the remote computer, you must have a user account and a password.
Your remote user doesn't have to be the same as your local user. You can log in as any user on the remote machine as long as you have that user's password. For instance, I'm `sethkenlon` on my work computer, but I'm `seth` on my personal computer. If I'm on my personal computer (making it my current local machine) and I want to SSH into my work computer, I can do that by identifying myself as `sethkenlon` and using my work password.
To SSH into the remote computer, you must know its internet protocol (IP) address or its resolvable hostname. To find the remote machine's IP address, use the `ip` command (on the remote computer):
```
$ ip addr show | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.1.1.5/27 brd 10.1.1.31 [...]
```
If the remote computer doesn't have the `ip` command, try `ifconfig` instead (or even `ipconfig` on Windows).
The address 127.0.0.1 is a special one and is, in fact, the address of `localhost`. It's a "loopback" address, which your system uses to reach itself. That's not useful when logging into a remote machine, so in this example, the remote computer's correct IP address is 10.1.1.5. In real life, I would know that because my local network uses the 10.1.1.0 subnet. If the remote computer is on a different network, then the IP address could be nearly anything (never 127.0.0.1, though), and some special routing is probably necessary to reach it through various firewalls. Assume your remote computer is on the same network, but if you're interested in reaching computers more remote than your own network, [read my article about opening ports in your firewall][5].
If you can ping the remote machine by its IP address _or_ its hostname, and have a login account on it, then you can SSH into it:
```
$ ping -c1 10.1.1.5
PING 10.1.1.5 (10.1.1.5) 56(84) bytes of data.
64 bytes from 10.1.1.5: icmp_seq=1 ttl=64 time=4.66 ms
$ ping -c1 akiton.local
PING 10.1.1.5 (10.1.1.5) 56(84) bytes of data.
```
That's a success. Now use SSH to log in:
```
$ whoami
seth
$ ssh sethkenlon@10.1.1.5
bash$ whoami
sethkenlon
```
The test login works, so now you're ready to activate passwordless login.
### Create an SSH key
To log in securely to another computer without a password, you must have an SSH key. You may already have an SSH key, but it doesn't hurt to create a new one. An SSH key begins its life on your local machine. It consists of two components: a private key, which you never share with anyone or anything, and a public one, which you copy onto any remote machine you want to have passwordless access to.
Some people create one SSH key and use it for everything from remote logins to GitLab authentication. However, I use different keys for different groups of tasks. For instance, I use one key at home to authenticate to local machines, a different key to authenticate to web servers I maintain, a separate one for Git hosts, another for Git repositories I host, and so on. In this example, I'll create a unique key to use on computers within my local area network.
To create a new SSH key, use the `ssh-keygen` command:
```
$ ssh-keygen -t ed25519 -f ~/.ssh/lan
```
The `-t` option stands for _type_ and selects the key algorithm (in this case Ed25519, a modern alternative to the default RSA). The `-f` option stands for _file_ and sets the key's file name and location. After running this command, you're left with an SSH private key called `lan` and an SSH public key called `lan.pub`.
To get the public key over to your remote machine, use the `ssh-copy-id` command. For this to work, you must verify that you have SSH access to the remote machine. If you can't log into the remote host with a password, you can't set up passwordless login either:
```
$ ssh-copy-id -i ~/.ssh/lan.pub sethkenlon@10.1.1.5
```
During this process, you'll be prompted for your login password on the remote host.
Upon success, try logging in again, but this time using the `-i` option to point the SSH command to the appropriate key (`lan`, in this example):
```
$ ssh -i ~/.ssh/lan sethkenlon@10.1.1.5
bash$ whoami
sethkenlon
```
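Typing the `-i` option on every connection gets tedious. You can instead record the key in your SSH client configuration; here is a minimal sketch reusing the values from this example, where the alias `myserver` is an arbitrary name I made up:
```
# ~/.ssh/config
Host myserver
    HostName 10.1.1.5
    User sethkenlon
    IdentityFile ~/.ssh/lan
```
With that entry saved, `ssh myserver` connects with the right address, user, and key in one step.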
Repeat this process for all computers on your network, and you'll be able to wander through each host without ever thinking about passwords again. In fact, once you have passwordless authentication set up, you can edit the `/etc/ssh/sshd_config` file to disallow password authentication. This prevents anyone from using SSH to authenticate to a computer unless they have your private key. To do this, open `/etc/ssh/sshd_config` in a text editor with `sudo` permissions and search for the string `PasswordAuthentication`. Change the default line to this:
```
PasswordAuthentication no
```
Save it and restart the SSH server (or just reboot):
```
$ sudo systemctl restart sshd && echo "OK"
OK
$
```
### Using SSH every day
OpenSSH changes your view of computing. No longer are you bound to just the computer in front of you. With SSH, you have access to any computer in your house, or servers you have accounts on, and even mobile and Internet of Things devices. Unlocking the power of SSH also unlocks the power of the Linux terminal. If you're not using SSH every day, start now. Get comfortable with it, collect some keys, live more securely, and expand your world.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/ssh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://opensource.com/sites/default/files/uploads/gnome-activate-remote-login.png (Activate SSH in GNOME System Settings)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/article/20/8/open-ports-your-firewall

View File

@ -1,114 +0,0 @@
[#]: subject: (How to Know if Your System Uses MBR or GPT Partitioning [on Windows and Linux])
[#]: via: (https://itsfoss.com/check-mbr-or-gpt/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (alim0x)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to Know if Your System Uses MBR or GPT Partitioning [on Windows and Linux]
======
Knowing the correct partitioning scheme of your disk could be crucial when you are installing Linux or any other operating system.
There are two popular partitioning schemes: the older MBR and the newer GPT. Most computers use GPT these days.
While creating the live or bootable USB, some tools (like [Rufus][1]) ask you the type of disk partitioning in use. If you choose GPT with an MBR disk, the bootable USB might not work.
In this tutorial, I'll show various methods to check the disk partitioning scheme on Windows and Linux systems.
### Check whether your system uses MBR or GPT on Windows systems
While there are several ways to check the disk partitioning scheme in Windows, including command line ones, I'll stick with the GUI methods.
Press the Windows button and search for disk and then click on “**Create and format disk partitions**“.
![][2]
In here, **right-click on the disk** for which you want to check the partitioning scheme. In the right-click context menu, **select Properties**.
![Right click on the disk and select properties][3]
In the Properties, go to **Volumes** tab and look for **Partition style**.
![In Volumes tab, look for Partition style][4]
As you can see in the screenshot above, the disk is using GPT partitioning scheme. For some other systems, it could show MBR or MSDOS partitioning scheme.
Now you know how to check disk partitioning scheme in Windows. In the next section, youll learn to do the same in Linux.
### Check whether your system uses MBR or GPT on Linux
There are several ways to check whether a disk uses MBR or GPT partitioning scheme in Linux as well. This includes commands and GUI tools.
Let me first show the command line method, and then I'll show a couple of GUI methods.
#### Check disk partitioning scheme in Linux command line
The command line method should work on all Linux distributions.
Open a terminal and use the following command with sudo:
```
sudo parted -l
```
The above command is actually a CLI-based [partitioning manager in Linux][5]. With the `-l` option, it lists the disks on your system along with details about those disks, including partitioning scheme information.
In the output, look for the line starting with **Partition Table**:
![][6]
In the above screenshot, the disk has GPT partitioning scheme. For **MBR**, it would show **msdos**.
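Should `parted` be unavailable for some reason, the `lsblk` command from util-linux can usually report the same information through its `PTTYPE` column (the device name below is just an example; adjust it for your system):
```
lsblk -o NAME,PTTYPE /dev/sda
```
A GPT disk reports `gpt` in this column, while an MBR disk reports `dos`.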
You learned the command line way. But if you are not comfortable with the terminal, you can use graphical tools as well.
#### Checking disk information with GNOME Disks tool
Ubuntu and many other GNOME-based distributions have a built-in graphical tool called Disks that lets you handle the disks in your system.
You can use the same tool for getting the partition type of the disk as well.
![][7]
#### Checking disk information with Gparted graphical tool
If you don't have the option to use the GNOME Disks tool, no worries. There are other tools available.
One such popular tool is Gparted. You should find it in the repositories of most Linux distributions. If not installed already, [install Gparted][8] using your distributions software center or [package manager][9].
In Gparted, select the disk, and from the menu select **View->Device Information**. It will start showing the disk information in the bottom-left area, and this information includes the partitioning scheme.
![][10]
See, not too complicated, was it? Now you know multiple ways of figuring out whether the disks in your system use the GPT or MBR partitioning scheme.
On the same note, I would also like to mention that sometimes disks also have a [hybrid partitioning scheme][11]. This is not common and most of the time it is either MBR or GPT.
Questions? Suggestions? Please leave a comment below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/check-mbr-or-gpt/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://rufus.ie/en_US/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/disc-management-windows.png?resize=800%2C561&ssl=1
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/gpt-check-windows-1.png?resize=800%2C603&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/gpt-check-windows-2-1.png?resize=800%2C600&ssl=1
[5]: https://itsfoss.com/partition-managers-linux/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-if-mbr-or-gpt-in-Linux.png?resize=800%2C446&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-if-mbr-or-gpt-in-Linux-gui.png?resize=800%2C548&ssl=1
[8]: https://itsfoss.com/gparted/
[9]: https://itsfoss.com/package-manager/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-disk-partitioning-scheme-linux-gparted.jpg?resize=800%2C555&ssl=1
[11]: https://www.rodsbooks.com/gdisk/hybrid.html

View File

@ -1,150 +0,0 @@
[#]: subject: "Build a JAR file with fastjar and gjar"
[#]: via: "https://opensource.com/article/21/8/fastjar"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Build a JAR file with fastjar and gjar
======
Utilities like fastjar, gjar, and jar help you manually or
programmatically build JAR files, while other toolchains such as Maven
and Gradle offer features for dependency management.
![Someone wearing a hardhat and carrying code ][1]
One of the many advantages of Java, in my experience, is its ability to deliver applications in a neat and tidy package (called a JAR, or _Java archive_.) JAR files make it easy for users to download and launch an application they want to try, easy to transfer that application from one computer to another (and Java is cross-platform, so sharing liberally can be encouraged), and easy to understand for new programmers to look inside a JAR to find out what makes a Java app run.
There are many ways to create a JAR file, including toolchain solutions such as Maven and Gradle, and one-click build features in your IDE. However, there are also stand-alone commands such as `fastjar`, `gjar`, and just plain old `jar`, which are useful for quick and simple builds, and to demonstrate what a JAR file needs to run.
### Install
On Linux, you may already have the `fastjar`, `gjar`, or `jar` commands as part of an OpenJDK package, or GCJ (GCC-Java.) You can test whether any of these commands are installed by typing the command with no arguments: 
```
$ fastjar
Try 'fastjar --help' for more information.
$ gjar
jar: must specify one of -t, -c, -u, -x, or -i
jar: Try 'jar --help' for more information
$ jar
Usage: jar [OPTION...] [ [--release VERSION] [-C dir] files] ...
Try `jar --help' for more information.
```
I have all of them installed, but you only need one. All of these commands are capable of building a JAR.
On a modern Linux system such as Fedora, typing a missing command causes your OS to prompt you to install it for you.
Alternately, you can just [install Java][2] from [AdoptOpenJDK.net][3] for Linux, MacOS, and Windows.
### Build a JAR 
First, you need a Java application to build.
To keep things simple, create a basic "hello world" application in a file called hello.java:
```
class Main {
    public static void main(String[] args) {
        System.out.println("Hello Java World");
    }
}
```
It's a simple application that somewhat trivializes the real-world importance of managing external dependencies. Still, it's enough to get started with the basic concepts you need to create a JAR.
Next, create a manifest file. A manifest file describes the Java environment of the JAR. In this case, the most important information is identifying the main class, so the Java runtime executing the JAR knows where to find the application's entry point. 
```
$ mkdir META-INF
$ echo "Main-Class: Main" > META-INF/MANIFEST.MF
```
### Compiling Java bytecode
Next, compile your Java file into Java bytecode.
```
$ javac hello.java
```
Alternately, you can use the Java component of GCC to compile:
```
$ gcj -C hello.java
```
Either way, this produces the file `Main.class`:
```
$ file Main.class
Main.class: compiled Java class data, version XX.Y
```
### Creating a JAR 
You have all the components you need so that you can create the JAR file.
I often include the Java source code as a reference for curious users, but all that's _required_ is the `META-INF` directory and the class files.
The `fastjar` command uses syntax similar to the [`tar` command][6].
```
$ fastjar cvf hello.jar META-INF Main.class
```
Alternately, you can use `gjar` in much the same way, except that `gjar` requires you to specify your manifest file explicitly:
```
$ gjar cvf world.jar Main.class -m META-INF/MANIFEST.MF
```
Or you can use the `jar` command. Notice this one doesn't require a Manifest file because it auto-generates one for you, but for safety I define the main class explicitly:
```
$ jar --create --file hello.jar --main-class=Main Main.class
```
Test your application:
```
$ java -jar hello.jar
Hello Java World
```
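If you want to confirm what went into the archive, the `jar` command's `t` (table of contents) mode lists its entries; the exact listing may differ slightly depending on which tool built the JAR:
```
$ jar tf hello.jar
META-INF/
META-INF/MANIFEST.MF
Main.class
```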
### Easy packaging
Utilities like `fastjar`, `gjar`, and `jar` help you manually or programmatically build JAR files, while other toolchains such as Maven and Gradle offer features for dependency management. A good IDE may integrate one or more of these features.
Whatever solution you use, Java provides an easy and unified target for distributing your application code.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/fastjar
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code )
[2]: https://opensource.com/article/19/11/install-java-linux
[3]: https://adoptopenjdk.net/
[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[6]: https://opensource.com/article/17/7/how-unzip-targz-file

View File

@ -1,154 +0,0 @@
[#]: subject: "Check free disk space in Linux with ncdu"
[#]: via: "https://opensource.com/article/21/8/ncdu-check-free-disk-space-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Check free disk space in Linux with ncdu
======
Get an interactive report about disk usage with the ncdu Linux command.
![Check disk usage][1]
Computer users tend to amass a lot of data over the years, whether it's important personal projects, digital photos, videos, music, or code repositories. While hard drives tend to be pretty big these days, sometimes you have to step back and take stock of what you're actually storing on your drives. The classic Linux commands [`df`][2] and [`du`][3] are quick ways to gain insight about what's on your drive, and they provide a reliable report that's easy to parse and process. That's great for scripting and processing, but the human brain doesn't always respond well to hundreds of lines of raw data. In recognition of this, the `ncdu` command aims to provide an interactive report about the space you're using on your hard drive.
### Installing ncdu on Linux
On Linux, you can install `ncdu` from your software repository. For instance, on Fedora or CentOS:
```
$ sudo dnf install ncdu
```
On BSD, you can use [pkgsrc][4].
On macOS, you can install it from [MacPorts][5] or [Homebrew][6].
Alternately, you can [compile ncdu from source code][7].
### Using ncdu
The interface of `ncdu` uses the ncurses library, which turns your terminal window into a rudimentary graphical application so you can use the Arrow keys to navigate visual menus.
![ncdu interface][8]
CC BY-SA Seth Kenlon
That's one of the main appeals of `ncdu`, and what sets it apart from the original `du` command.
To get a complete listing of a directory, launch `ncdu`. It defaults to the current directory.
```
$ ncdu
ncdu 1.16 ~ Use the arrow keys to navigate, press ? for help                                                                  
--- /home/tux -----------------------------------------------
   22.1 GiB [##################] /.var                                                                                        
   19.0 GiB [###############   ] /Iso
   10.0 GiB [########          ] /.local
    7.9 GiB [######            ] /.cache
    3.8 GiB [###               ] /Downloads
    3.6 GiB [##                ] /.mail
    2.9 GiB [##                ] /Code
    2.8 GiB [##                ] /Documents
    2.3 GiB [#                 ] /Videos
[...]
```
The listing shows the largest directory first (in this example, that's the `~/.var` directory, full of many, many flatpaks).
Using the Arrow keys on your keyboard, you can navigate through the listing to move deeper into a directory so you can gain better insight into what's taking up the most space.
### Get the size of a specific directory
You can run `ncdu` on an arbitrary directory by providing the path of a folder when launching it:
```
$ ncdu ~/chromiumos
```
### Excluding directories
By default, `ncdu` includes everything it can, including symbolic links and pseudo-filesystems such as procfs and sysfs. You can exclude these with the `--exclude-kernfs` option.
You can exclude arbitrary files and directories using the `--exclude` option, followed by a pattern to match.
```
$ ncdu --exclude ".var"
   19.0 GiB [##################] /Iso                                                                                          
   10.0 GiB [#########         ] /.local
    7.9 GiB [#######           ] /.cache
    3.8 GiB [###               ] /Downloads
[...]
```
Alternately, you can list files and directories to exclude in a file, and cite the file using the `--exclude-from` option:
```
$ ncdu --exclude-from myexcludes.txt /home/tux                                                                                    
   10.0 GiB [#########         ] /.local
    7.9 GiB [#######           ] /.cache
    3.8 GiB [###               ] /Downloads
[...]
```
### Color scheme
You can add some color to ncdu with the `--color dark` option.
![ncdu color scheme][9]
CC BY-SA Seth Kenlon
### Including symlinks
The `ncdu` output treats symlinks literally, meaning that a symlink pointing to a 9 GB file takes up just 40 bytes.
```
$ ncdu ~/Iso
    9.3 GiB [##################]  CentOS-Stream-8-x86_64-20210427-dvd1.iso                                                    
@   0.0   B [                  ]  fake.iso
```
You can force ncdu to follow symlinks with the `--follow-symlinks` option:
```
$ ncdu --follow-symlinks ~/Iso
    9.3 GiB [##################]  fake.iso                                                                                    
    9.3 GiB [##################]  CentOS-Stream-8-x86_64-20210427-dvd1.iso
```
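Scanning a large filesystem can take a while, so `ncdu` can also save the results of a scan and browse them later without rescanning. The file name below is arbitrary:
```
$ ncdu -o scan.ncdu /home/tux
$ ncdu -f scan.ncdu
```
The first command exports the scan to a file; the second opens the interactive browser on the saved data.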
### Disk usage
It's not fun to run out of disk space, so monitoring your disk usage is important. The `ncdu` command makes it easy and interactive. Try `ncdu` the next time you're curious about what you've got stored on your PC, or just to explore your filesystem in a new way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/ncdu-check-free-disk-space-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/du-splash.png?itok=nRLlI-5A (Check disk usage)
[2]: https://opensource.com/article/21/7/check-disk-space-linux-df
[3]: https://opensource.com/article/21/7/check-disk-space-linux-du
[4]: https://opensource.com/article/19/11/pkgsrc-netbsd-linux
[5]: https://opensource.com/article/20/11/macports
[6]: https://opensource.com/article/20/6/homebrew-mac
[7]: https://dev.yorhel.nl/ncdu
[8]: https://opensource.com/sites/default/files/ncdu.jpg (ncdu interface)
[9]: https://opensource.com/sites/default/files/ncdu-dark.jpg (ncdu color scheme)

View File

@ -2,7 +2,7 @@
[#]: via: "https://itsfoss.com/debian-vs-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: translator: "perfiffer"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,137 +0,0 @@
[#]: subject: "How to Monitor Log Files in Real Time in Linux [Desktop and Server]"
[#]: via: "https://www.debugpoint.com/2021/08/monitor-log-files-real-time/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Monitor Log Files in Real Time in Linux [Desktop and Server]
======
This tutorial explains how you can monitor Linux log files (desktop, server, or applications) in real time for diagnosis and troubleshooting purposes.
When you run into problems on your Linux desktop, or server, or in any application, you first look into the respective log files. Log files are generally streams of text and messages from applications, with a timestamp attached to each entry. They help you narrow down specific instances and find the cause of any problem, and they can also help you get assistance from the web.
In general, all log files are located in `/var/log`. This directory contains log files with the `.log` extension for specific applications and services, and it also contains separate directories which contain their own log files.
![log files in var-log][1]
That said, if you want to monitor a bunch of log files, or a specific one, here are some ways you can do it.
### Monitor log files in real time on Linux
#### Using tail command
Using the tail command is the most basic way of following a log file in real time. Especially if you are on a server with just a terminal and no GUI, this is very helpful.
Examples:
```
tail /path/to/log/file
```
![Monitoring multiple log files via tail][2]
Use the `-f` switch to follow the log file, which updates in real time. For example, if you want to follow syslog, you can use the following command:
```
tail -f /var/log/syslog
```
You can monitor multiple log files with a single command:
```
tail -f /var/log/syslog /var/log/dmesg
```
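If you are only interested in particular messages, you can filter the stream with grep (the pattern here is just an example):
```
tail -f /var/log/syslog | grep -i error
```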
If you want to monitor http or sftp or any server, you can also include their respective log files in this command.
Remember, the above commands require admin privileges.
#### Using lnav (The Logfile Navigator)
![lnav Running][3]
lnav is a nice utility which you can use to monitor log files in a more structured way, with color-coded messages. It is not installed by default on Linux systems. You can install it using the commands below:
```
sudo apt install lnav (Ubuntu)
sudo dnf install lnav (Fedora)
```
The good thing about lnav is that if you do not want to install it, you can just download its pre-compiled executable and run it anywhere, even from a USB stick. No setup is required, and it is loaded with features. Using lnav you can query the log files via SQL, among other cool features which you can learn about on its [official website][4].
Once installed, you can simply run lnav from the terminal with admin privileges, and it will show all the logs from `/var/log` by default and start monitoring them in real time.
#### A note about journalctl of systemd
Almost all modern Linux distributions use systemd, which provides the basic framework and components that run the Linux operating system in general. systemd provides journal services via journalctl, which helps manage logs from all systemd services. You can also monitor the respective systemd services and logs in real time using the following command:
```
journalctl -f
```
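To follow a single service rather than the entire journal, combine `-f` with the `-u` switch and a unit name (the unit below is an example; names vary between distributions):
```
journalctl -u sshd.service -f
```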
Here are some specific journalctl commands which you can use in several cases. You can combine these with the `-f` switch above to start monitoring in real time.
* To see emergency system messages, use
```
journalctl -p 0
```
* Show errors with explanations
```
journalctl -xb -p 3
```
* Use time controls to filter the output
```
journalctl --since "2020-12-04 06:00:00"
journalctl --since "2020-12-03" --until "2020-12-05 03:00:00"
journalctl --since yesterday
journalctl --since 09:00 --until "1 hour ago"
```
If you want to learn more and find out details about journalctl, I have written a [guide here][6].
### Closing Notes
I hope these commands and tricks help you find the root cause of problems or errors on your desktop or servers. For more details, you can always refer to the man pages and play around with the various switches. Let me know using the comment box below if you have any comments, or tell me what you think about this article.
Cheers.
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/monitor-log-files-real-time/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/log-files-in-var-log-1024x312.jpeg
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Monitoring-multiple-log-files-via-tail-1024x444.jpeg
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/lnav-Running-1024x447.jpeg
[4]: https://lnav.org/features
[5]: https://www.debugpoint.com/2016/11/advanced-log-file-viewer-lnav-ubuntu-linux/
[6]: https://www.debugpoint.com/2020/12/systemd-journalctl/

View File

@ -1,134 +0,0 @@
[#]: subject: "Linux Phones: Here are Your Options"
[#]: via: "https://itsfoss.com/linux-phones/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux Phones: Here are Your Options
======
_**Brief:**_ _Linux phones could be the future to replace Android or iOS, but what are some of your options to give it a try?_
While Android is based on a Linux kernel, it has been heavily modified. So, that does not make it a full-fledged Linux-based operating system.
Google is trying to get the Android kernel close to the mainline Linux kernel, but that is still a distant dream.
So, in that case, what are some of the options if you are looking for a Linux phone? A smartphone powered by a Linux operating system.
It is not an easy decision to make because the options are super limited. Hence, I try to highlight some of the best Linux phones and a few different options from the mainstream choices.
### Top Linux phones you can use today
It is worth noting that the Linux phones mentioned here may not be able to replace your Android or iOS devices. So, make sure that you do some background research before making a purchase decision.
**Note:** You need to carefully check the availability, expected shipping date, and risks of using a Linux phone. Most of the options are only suitable for enthusiasts or early adopters.
#### 1\. PinePhone
![][1]
PinePhone is one of the most affordable and popular choices to consider as a promising Linux phone.
It is not limited to a single operating system. You can try it with Manjaro with Plasma mobile OS, UBports, Sailfish OS, and others. PinePhone packs in some decent specifications that include a Quad-core processor and 2/3 Gigs of RAM. It does support a bootable microSD card to help you with installation, along with 16/32 GB eMMC storage options.
The display is a basic 1440×720p IPS screen. You also get special privacy protection tweaks like kill switches for Bluetooth, microphones, and cameras.
PinePhone also gives you an option to add custom hardware extensions using the six pogo pins available.
The base edition (2 GB RAM and 16 GB storage) comes loaded with Manjaro by default and costs $149. And, the convergence edition (3 GB RAM / 32 GB storage) costs $199.
[PinePhone][2]
#### 2\. Fairphone
![][3]
Compared to others on the list, Fairphone is a commercial success. It is not a Linux smartphone, but it features a customized version of Android, i.e., Fairphone OS, and the option to opt for [/e/ OS][4], one of the [open-source Android alternatives][5]. Some community ports are available if you want to use a Linux operating system, but it could be hit or miss.
The Fairphone offers some decent specs, considering there are two different variants. You will find a 48 MP camera sensor for Fairphone 3+ and a full-HD display. Not to forget, you will also find decent Qualcomm processors powering the device.
They focus on making smartphones that are sustainable and have been built using some amount of recycled plastic. Fairphone is also meant to be easily repairable.
So, it is not just an option away from mainstream smartphones, but you will also be helping with protecting the environment if you opt for it.
[Fairphone][6]
#### 3\. Librem 5
![][7]
[Librem 5][8] is a smartphone that focuses heavily on user privacy while featuring an open-source operating system, i.e., PureOS, not based on Android.
The specifications offered are decent, with 3 Gigs of RAM and a quad-core Cortex A53 chipset. But this is not something geared to compete with mainstream options. Hence, you may not find it to be a value-for-money offering.
It is aimed at enthusiasts who are interested in testing privacy-respecting smartphones.
Similar to others, Librem 5 also focuses on making the phone easily repairable by offering user-replaceable batteries.
For privacy, you will notice kill switches for Bluetooth, Cameras, and microphones. They also promise security updates for years to come.
[Librem 5][9]
#### 4\. Pro 1X
![][10]
An interesting smartphone that supports Ubuntu Touch, Lineage OS, and Android as well.
It is not just a Linux smartphone but a mobile phone with a separate QWERTY keypad, which is rare to find these days.
The Pro 1 X features a decent specification, including a Snapdragon 662 processor coupled with 6 GB of RAM. You also get a respectable AMOLED Full HD display with the Pro 1 X.
The camera does not pack in anything crazy, but should be good enough for the most part.
[Pro 1X][11]
#### 5\. Volla Phone
![][12]
An attractive offering that runs on Ubuntu Touch by UBports.
It comes with a pre-built VPN and focuses on making the user experience easy. The operating system has been customized so that everything essential should be accessible quickly without organizing anything yourself.
It packs in some impressive specifications that include an Octa-core MediaTek processor along with a 4700 mAh battery. You get a notch design resembling some of the latest smartphones available.
[Volla Phone][13]
### Wrapping Up
Linux smartphones are not readily available and certainly not yet suitable for the masses.
So, if you are an enthusiast or want to support the development of such phones, you can consider getting one of the devices.
Do you already own one of these smartphones? Please don't hesitate to share your experiences in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-phones/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/PinePhone-3.jpg?resize=800%2C800&ssl=1
[2]: https://www.pine64.org/pinephone/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/fairphone.png?resize=360%2C600&ssl=1
[4]: https://itsfoss.com/e-os-review/
[5]: https://itsfoss.com/open-source-alternatives-android/
[6]: https://shop.fairphone.com/en/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/librem-5.png?resize=800%2C450&ssl=1
[8]: https://itsfoss.com/librem-linux-phone/
[9]: https://puri.sm/products/librem-5/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/pro1x.jpg?resize=800%2C542&ssl=1
[11]: https://www.fxtec.com/pro1x
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/volla-smartphone.jpg?resize=695%2C391&ssl=1
[13]: https://www.indiegogo.com/projects/volla-phone-free-your-mind-protect-your-privacy#/

View File

@ -0,0 +1,107 @@
[#]: subject: "Access your iPhone on Linux with this open source tool"
[#]: via: "https://opensource.com/article/21/8/libimobiledevice-iphone-linux"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Access your iPhone on Linux with this open source tool
======
Communicate with iOS devices from Linux by using Libimobiledevice.
![A person looking at a phone][1]
The iPhone and iPad aren't by any means open source, but they're popular devices. Many people who own an iOS device also happen to use a lot of open source, including Linux. Users of Windows and macOS can communicate with an iOS device by using software provided by Apple, but Apple doesn't support Linux users. Open source programmers came to the rescue back in 2007 (just a year after the iPhone's release) with Libimobiledevice (then called libiphone), a cross-platform solution for communicating with iOS. It runs on Linux, Android, Arm systems such as the Raspberry Pi, Windows, and even macOS.
Libimobiledevice is written in C and uses native protocols to communicate with services running on iOS devices. It doesn't require any libraries from Apple, so it's fully free and open source.
Libimobiledevice is an object-oriented API, and there are a number of terminal utilities that come bundled with it for your convenience. The library supports Apple's earliest iOS devices all the way up to its latest models. This is the result of years of research and development. Applications in the project include **usbmuxd**, **ideviceinstaller**, **idevicerestore**, **ifuse**, **libusbmuxd**, **libplist**, **libirecovery**, and **libideviceactivation**.
### Install Libimobiledevice on Linux
On Linux, you may already have **libimobiledevice** installed by default. You can find out through your package manager or app store, or by running one of the commands included in the project:
```
$ ifuse --help
```
You can install **libimobiledevice** using your package manager. For instance, on Fedora or CentOS:
```
$ sudo dnf install libimobiledevice ifuse usbmuxd
```
On Debian and Ubuntu:
```
$ sudo apt install usbmuxd libimobiledevice6 libimobiledevice-utils
```
Alternatively, you can [download][2] and install **libimobiledevice** from source code.
### Connecting your device
Once you have the required packages installed, connect your iOS device to your computer.
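To confirm that the device is detected over USB, you can list connected devices with `idevice_id`, one of the utilities bundled with the project:
```
$ idevice_id -l
```
This prints the unique device identifier (UDID) of each connected device.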
Make a directory as a mount point for your iOS device:
```
$ mkdir ~/iPhone
```
Next, mount the device:
```
$ ifuse ~/iPhone
```
Your device prompts you to trust the computer you're using to access it.
![iphone prompts to trust the computer][3]
Figure 1: The iPhone prompts you to trust the computer.
Once the trust issue is resolved, you see new icons on your desktop.
![iphone icons appear on desktop][4]
Figure 2: New icons for the iPhone appear on the desktop.
Click on the **iPhone** icon to reveal the folder structure of your iPhone.
![iphone folder structure displayed][5]
Figure 3: The iPhone folder structure is displayed.
The folder I usually access most frequently is **DCIM**, where my iPhone photos are stored. Sometimes I use these photos in articles I write, and sometimes there are photos I want to enhance with open source applications like Gimp. Having direct access to the images instead of emailing them to myself is one of the benefits of using the Libimobiledevice utilities. I can copy any of these folders to my Linux computer. I can create folders on the iPhone and delete them too.
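When you're finished, unmount the device before unplugging it. Because `ifuse` is FUSE-based, the standard FUSE unmount command should work, assuming the mount point created earlier:
```
$ fusermount -u ~/iPhone
```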
### Find out more
[Martin Szulecki][6] is the lead developer for the project. The project is looking for developers to add to their [community][7]. Libimobiledevice can change the way you use your peripherals, regardless of what platform you're on. It's another win for open source, which means it's a win for everyone.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/libimobiledevice-iphone-linux
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
[2]: https://github.com/libimobiledevice/libimobiledevice/
[3]: https://opensource.com/sites/default/files/1trust_0.png
[4]: https://opensource.com/sites/default/files/2docks.png
[5]: https://opensource.com/sites/default/files/2iphoneicon.png
[6]: https://github.com/FunkyM
[7]: https://libimobiledevice.org/#community

View File

@ -0,0 +1,87 @@
[#]: subject: "Apps for daily needs part 4: audio editors"
[#]: via: "https://fedoramagazine.org/apps-for-daily-needs-part-4-audio-editors/"
[#]: author: "Arman Arisman https://fedoramagazine.org/author/armanwu/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Apps for daily needs part 4: audio editors
======
![][1]
Photo by [Brooke Cagle][2] on [Unsplash][3]
Audio editor applications or digital audio workstations (DAWs) were in the past only used by professionals, such as record producers, sound engineers, and musicians. But nowadays many people who are not professionals also need them. These tools are used for narration on presentations, video blogs, and even just as a hobby. This is especially true now since there are so many online platforms that make it easy for everyone to share audio works, such as music, songs, podcasts, etc. This article will introduce some of the open source audio editors or DAWs that you can use on Fedora Linux. You may need to install the software mentioned. If you are unfamiliar with how to add software packages in Fedora Linux, see my earlier article [Things to do after installing Fedora 34 Workstation][4]. Here is a list of a few apps for daily needs in the audio editors or DAW category.
### Audacity
I'm sure many already know Audacity. It is a popular multi-track audio editor and recorder that can be used for post-processing all types of audio. Most people use Audacity to record their voices, then edit the recording to make the results better. The results can be used as a podcast or as narration for a video blog. In addition, people also use Audacity to create music and songs. You can record live audio through a microphone or mixer. It also supports 32-bit sound quality.
Audacity has a lot of features that can support your audio work. It has support for plugins, and you can even write your own. Audacity provides many built-in effects, such as noise reduction, amplification, compression, reverb, echo, limiter, and many more. You can try these effects while listening to the audio directly with the real-time preview feature. The built-in plugin manager lets you manage frequently used plugins and effects.
![][5]
More information is available at this link: <https://www.audacityteam.org/>
* * *
### LMMS
LMMS, or Linux MultiMedia Studio, is a comprehensive music creation application. You can use LMMS to produce your music from scratch with your computer. You can create melodies and beats according to your creativity, and make them better with a selection of sound instruments and various effects. There are several built-in features related to musical instruments and effects, such as 16 built-in synthesizers, an embedded ZynAddSubFX, drop-in VST effect plug-in support, a bundled graphic and parametric equalizer, a built-in analyzer, and many more. LMMS also supports MIDI keyboards and other audio peripherals.
![][6]
More information is available at this link: <https://lmms.io/>
* * *
### Ardour
Ardour has capabilities similar to LMMS as a comprehensive music creation application. It says on its website that Ardour is a DAW application that is the result of collaboration between musicians, programmers, and professional recording engineers from around the world. Ardour has various functions that are needed by audio engineers, musicians, soundtrack editors, and composers.
Ardour provides complete features for recording, editing, mixing, and exporting. It has unlimited multichannel tracks, non-linear editor with unlimited undo/redo, a full featured mixer, built-in plugins, and much more. Ardour also comes with video playback tools, so it is also very helpful in the process of creating and editing soundtracks for video projects.
![][7]
More information is available at this link: <https://ardour.org/>
* * *
### TuxGuitar
TuxGuitar is a tablature and score editor. It comes with a tablature editor, score viewer, multitrack display, time signature management, and tempo management. It includes various effects, such as bend, slide, vibrato, etc. While TuxGuitar focuses on the guitar, it allows you to write scores for other instruments. It can also serve as a basic MIDI editor. You need to have an understanding of tablature and music scoring to be able to use it.
![][8]
More information is available at this link: <http://www.tuxguitar.com.ar/>
* * *
### Conclusion
This article presented four audio editors as apps for your daily needs and use on Fedora Linux. Actually there are many other audio editors, or DAWs, that you can use on Fedora Linux. You can also use Mixxx, Rosegarden, Kwave, Qtractor, MuseScore, musE, and many more. Hopefully this article can help you investigate and choose the right audio editor or DAW. If you have experience using these applications, please share your experiences in the comments.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/apps-for-daily-needs-part-4-audio-editors/
作者:[Arman Arisman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/armanwu/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/07/FedoraMagz-Apps-4-Audio-816x345.jpg
[2]: https://unsplash.com/@brookecagle?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/meeting-on-cafe-computer?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/things-to-do-after-installing-fedora-34-workstation/
[5]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-audacity-1024x575.png
[6]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-lmms-1024x575.png
[7]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-ardour-1024x592.png
[8]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-tuxguitar-1024x575.png

View File

@ -0,0 +1,166 @@
[#]: subject: "Write a chess game using bit-fields and masks"
[#]: via: "https://opensource.com/article/21/8/binary-bit-fields-masks"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Write a chess game using bit-fields and masks
======
Using bit-fields and masks is a common method to combine data without
using structures.
![Chess pieces on a chess board][1]
Let's say you were writing a chess game in C. One way to track the pieces on the board is by defining a structure that defines each possible piece on the board, and its color, so every square contains an element from that structure. For example, you might have a structure that looks like this:
```
struct chess_pc {
   int piece;
   int is_black;
};
```
With this programming structure, your program will know what piece is in every square and its color. You can quickly identify if the piece is a pawn, rook, knight, bishop, queen, or king—and if the piece is black or white. But there's a more straightforward way to track the same information while using less data and memory. Rather than storing a structure of two `int` values for every square on a chessboard, we can store a single `int` value and use binary _bit-fields_ and _masks_ to identify the pieces and color in each square.
### Bits and binary
When using bit-fields to represent data, it helps to think like a computer. Let's start by listing the possible chess pieces and assigning a number to each. I'll help us along to the next step by representing the number in its binary form, the way the computer would track it. Remember that binary numbers are made up of _bits_, which are either zero or one.
* `00000000:` empty (0)
* `00000001:` pawn (1)
* `00000010:` rook (2)
* `00000011:` knight (3)
* `00000100:` bishop (4)
* `00000101:` queen (5)
* `00000110:` king (6)
To list all pieces on a chessboard, we only need the three bits that represent (from right to left) the values 1, 2, and 4. For example, the number 6 is binary `110`. All of the other bits in the binary representation of 6 are zeroes.
And with a bit of cleverness, we can use one of those extra always-zero bits to track if a piece is black or white. We can use the number 8 (binary `00001000`) to indicate if a piece is black. If this bit is 1, it's black; if it's 0, it's white. That's called a _bit-field_, which we can pull out later using a binary _mask_.
### Storing data with bit-fields
To write a chess program using bit-fields and masks, we might start with these definitions:
```
/* game pieces */
#define EMPTY 0
#define PAWN 1
#define ROOK 2
#define KNIGHT 3
#define BISHOP 4
#define QUEEN 5
#define KING 6
/* piece color (bit-field) */
#define BLACK 8
#define WHITE 0
/* piece only (mask) */
#define PIECE 7
```
When you assign a value to a square, such as when initializing the chessboard, you can assign a single `int` value to track both the piece and its color. For example, to store a black rook in position 0,0 of an array, you would use this code:
```
  int board[8][8];
..
  board[0][0] = BLACK | ROOK;
```
The `|` is a binary OR, which means the computer will combine the bits from two numbers. For every bit position, if that bit from _either_ number is 1, the result for that bit position is also 1. Binary OR of the value `BLACK` (8, or binary `00001000`) and the value `ROOK` (2, or binary `00000010`) is binary `00001010`, or 10:
```
    00001000 = 8
 OR 00000010 = 2
    ________
    00001010 = 10
```
Similarly, to store a white pawn in position 6,0 of the array, you could use this:
```
  board[6][0] = WHITE | PAWN;
```
This stores the value 1 because the binary OR of `WHITE` (0) and `PAWN` (1) is just 1:
```
    00000000 = 0
 OR 00000001 = 1
    ________
    00000001 = 1
```
### Getting data out with masks
During the chess game, the program will need to know what piece is in a square and its color. We can separate the piece using a binary mask.
For example, the program might need to know the contents of a specific square on the board during the chess game, such as the array element at `board[5][3]`. What piece is there, and is it black or white? To identify the chess piece, combine the element's value with the `PIECE` mask using the binary AND:
```
  int board[8][8];
  int piece;
..
  piece = board[5][3] & PIECE;
```
The binary AND operator (`&`) combines two binary values so that for any bit position, if that bit in _both_ numbers is 1, then the result is also 1. For example, if the value of `board[5][3]` is 11 (binary `00001011`), then the binary AND of 11 and the mask PIECE (7, or binary `00000111`) is binary `00000011`, or 3. This is a knight, which also has the value 3.
```
    00001011 = 11
AND 00000111 = 7
    ________
    00000011 = 3
```
Separating the piece's color is a simple matter of using binary AND with the value and the `BLACK` bit-field. For example, you might write this as a function called `is_black` to determine if a piece is either black or white:
```
int
is_black(int piece)
{
  return (piece & BLACK);
}
```
This works because the value `BLACK` is 8, or binary `00001000`. And in the C programming language, any non-zero value is treated as True, and zero is always False. So `is_black(board[5][3])` will return a True value (8) if the piece in array element `5,3` is black and will return a False value (0) if it is white.
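To see the mask and the bit-field working together, here is a small standalone sketch. The `print_square` helper is hypothetical (not part of the article's program), and it repeats the relevant `#define` values from above so it compiles on its own:
```
#include <stdio.h>

/* repeated from the definitions above so this sketch is self-contained */
#define EMPTY 0
#define BLACK 8
#define PIECE 7

/* hypothetical helper: decode a square's value into readable text */
void print_square(int square)
{
  /* names indexed by piece value, matching the defines above */
  const char *names[] = { "empty", "pawn", "rook", "knight",
                          "bishop", "queen", "king" };
  int piece = square & PIECE;   /* the mask keeps only the low three bits */

  if (piece == EMPTY) {
    puts("empty");
    return;
  }

  printf("%s %s\n", (square & BLACK) ? "black" : "white", names[piece]);
}

int main(void)
{
  print_square(10);  /* BLACK | ROOK prints "black rook" */
  print_square(1);   /* WHITE | PAWN prints "white pawn" */
  return 0;
}
```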
### Bit fields
Using bit-fields and masks is a common method to combine data without using structures. They are worth adding to your programmer's "tool kit." While data structures are a valuable tool for ordered programming where you need to track related data, using separate elements to track single On or Off values (such as the colors of chess pieces) is less efficient. In these cases, consider using bit-fields and masks to combine your data more efficiently.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/binary-bit-fields-masks
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-chess-games.png?itok=U1lWMZ0y (Chess pieces on a chess board)

View File

@ -0,0 +1,180 @@
[#]: subject: "How to include options in your Bash shell scripts"
[#]: via: "https://opensource.com/article/21/8/option-parsing-bash"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to include options in your Bash shell scripts
======
Give your shell scripts options.
![Terminal commands][1]
Terminal commands usually have [options or switches][2], which you can use to modify how the command does what it does. Options are included in the [POSIX specification][3] for command-line interfaces. It's also a time-honored convention established with the earliest UNIX applications, so it's good to know how to include them in your [Bash scripts][4] when you're creating your own commands.
As with most languages, there are several ways to solve the problem of parsing options in Bash. To this day, my favorite method remains the one I learned from Patrick Volkerding's Slackware build scripts, which served as my introduction to shell scripting back when I first discovered Linux and dared to venture into the plain text files that shipped with the OS.
### Option parsing in Bash
The strategy for parsing options in Bash is to cycle through all arguments passed to your shell script, determine whether they are an option or not, and then shift to the next argument. Repeat this process until no options remain.
Start with a simple Boolean option (sometimes called a _switch_ or a _flag_):
```
#!/bin/bash
while [ True ]; do
if [ "$1" = "--alpha" -o "$1" = "-a" ]; then
    ALPHA=1
    shift 1
else
    break
fi
done
echo $ALPHA
```
In this code, I create a `while` loop which serves as an infinite loop until there are no further arguments to process. An `if` statement attempts to match whatever argument is found in the first position (`$1`) to either `--alpha` or `-a`. (These are arbitrary option names with no special significance. In an actual script, you might use `--verbose` and `-v` to trigger verbose output).
The `shift` keyword causes all arguments to shift by 1, such that an argument in position 2 (`$2`) is moved into position 1 (`$1`). The `else` statement is triggered when the first argument doesn't match either option string (including when no arguments remain), and the `break` it contains ends the `while` loop.
At the end of the script, the value of `$ALPHA` is printed to the terminal.
Test the script:
```
$ bash ./test.sh --alpha
1
$ bash ./test.sh
$ bash ./test.sh -a
1
```
The option is correctly detected.
### Detecting arguments in Bash
There is a problem, though: Extra arguments are ignored.
```
$ bash ./test.sh --alpha foo
1
$
```
To catch arguments that aren't intended as options, you can dump remaining arguments into a [Bash array][5].
```
#!/bin/bash
while [ True ]; do
if [ "$1" = "--alpha" -o "$1" = "-a" ]; then
    ALPHA=1
    shift 1
else
    break
fi
done
echo $ALPHA
ARG=( "${@}" )
for i in ${ARG[@]}; do
    echo $i
done
```
Test the new version of the script:
```
$ bash ./test.sh --alpha foo
1
foo
$ bash ./test.sh foo
foo
$ bash ./test.sh --alpha foo bar
1
foo
bar
```
### Options with arguments
Some options require an argument all their own. For instance, you might want to allow the user to set an attribute such as a color or the resolution of a graphic or to point your application to a custom configuration file.
To implement this in Bash, you can use the `shift` keyword as you do with Boolean switches but shift the arguments by 2 instead of 1.
```
#!/bin/bash
while [ True ]; do
if [ "$1" = "--alpha" -o "$1" = "-a" ]; then
    ALPHA=1
    shift 1
elif [ "$1" = "--config" -o "$1" = "-c" ]; then
    CONFIG=$2
    shift 2
else
    break
fi
done
echo $ALPHA
echo $CONFIG
ARG=( "${@}" )
for i in ${ARG[@]}; do
    echo $i
done
```
In this code, I add an `elif` clause to compare each argument to both `--config` and `-c`. In the event of a match, the value of a variable called `CONFIG` is set to the value of whatever the second argument is (this means that the `--config` option requires an argument). All arguments shift place by 2: 1 to shift `--config` or `-c`, and 1 to move its argument. As usual, the loop repeats until no matching arguments remain.
Here's a test of the new version of the script:
```
$ bash ./test.sh --config my.conf foo bar
my.conf
foo
bar
$ bash ./test.sh -a --config my.conf baz
1
my.conf
baz
```
### Option parsing made easy
There are other ways to parse options in Bash. You can alternately use a `case` statement or the `getopt` command. Whatever you choose to use, options for your users are important features for any application, and Bash makes it easy.
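For comparison, here is a minimal sketch of the `case`-statement approach (same arbitrary option names as above; a sketch, not code from this article):
```
#!/bin/bash
# Parse the same options with a case statement
while [ $# -gt 0 ]; do
    case "$1" in
        --alpha|-a)
            ALPHA=1
            shift
            ;;
        --config|-c)
            CONFIG="$2"
            shift 2
            ;;
        *)
            break
            ;;
    esac
done

echo $ALPHA
echo $CONFIG
```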
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/option-parsing-bash
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/terminal-commands_1.png?itok=Va3FdaMB (Terminal commands)
[2]: https://opensource.com/article/21/8/linux-terminal#options
[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[4]: https://opensource.com/downloads/bash-scripting-ebook
[5]: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays

View File

@ -0,0 +1,207 @@
[#]: subject: "Solve the repository impedance mismatch in CI/CD"
[#]: via: "https://opensource.com/article/21/8/impedance-mismatch-cicd"
[#]: author: "Evan "Hippy" Slatis https://opensource.com/users/hippyod"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Solve the repository impedance mismatch in CI/CD
======
Aligning deployment images and descriptors can be difficult, but here are a few strategies to streamline the process.
![Tips and gears turning][1]
An _impedance mismatch_ in software architecture happens when there's a set of conceptual and technical difficulties between two components. It's actually a term borrowed from electrical engineering, where the impedance of electrical input and output must match for the circuit to work.
In software development, an impedance mismatch exists between images stored in an image repository and their deployment descriptors stored in the SCM. How do you know whether the deployment descriptors stored in the SCM are actually meant for the image in question? The two repositories don't track the data they hold the same way, so matching an image (an immutable binary stored individually in an image repository) to its specific deployment descriptors (text files stored as a series of changes in Git) isn't straightforward.
**NOTE**: This article assumes at least a passing familiarity with the following concepts:
* Source Control Management (SCM) systems and branching
* Docker/OCI-compliant images and containers
* Container Orchestration Platforms (COP) such as Kubernetes
* Continuous Integration/Continuous Delivery (CI/CD)
* Software development lifecycle (SDLC) environments
### Impedance mismatch: SCM and image repositories
To fully understand where this becomes a problem, consider a set of basic SDLC environments typically used in any given project; for example, dev, test, and prod (or release) environments.
The dev environment does not suffer from an impedance mismatch. Best practices, which today include using CI/CD, dictate that the latest commit to your development branch should reflect what's deployed in the development environment. So, given a typical, successful CI/CD development workflow:
1. A commit is made to the development branch in the SCM
2. The commit triggers an image build
3. The new, distinct image is pushed to the image repository and tagged as being in dev
4. The image is deployed to the dev environment in a Container Orchestration Platform (COP) with the latest deployment descriptors pulled from the SCM
In other words, the latest image is always matched to the latest deployment descriptors in the development environment. Rolling back to a previous build isn't an issue, either, because that implies rolling back the SCM, too.
Eventually, though, development progresses to the point where more formal testing needs to occur, so an image—which implicitly relates to a specific commit in the SCM—is promoted to a test environment. Again, assuming a successful build, this isn't much of a problem because the image promoted from development should reflect the latest in the development branch:
1. The latest deployment to development is approved for promotion, and the promotion process is triggered
2. The latest development image is tagged as being in test
3. The image is pulled and deployed to the test environment using the latest deployment descriptors pulled from the SCM
So far, so good, right? But what happens in either of the following scenarios?
**Scenario A**. The image is promoted to the next downstream environment, e.g., user acceptance testing (UAT) or even a production environment.
**Scenario B**. A breaking bug is discovered in the test environment, and the image needs to be rolled back to a known good image.
In either scenario, it's not as if development has stopped, which means one or more commits to the development branch may have occurred, which in turn means it's possible the latest deployment descriptors have changed, and the latest image isn't the same as what was previously deployed in test. Changes to the deployment descriptors may or may not apply to older versions of an image, but they certainly can't be trusted. If they have changed, they certainly aren't the same deployment descriptors you've been testing with up to now with the image you want to deploy.
And that's the crux of the problem: **If the image being deployed isn't the latest from the image repository, how do you identify which deployment descriptors in the SCM apply specifically to the image being deployed?** The short answer is, you can't. The two repositories have an impedance mismatch. The longer answer is that you can, but you have to work for it, which will be the subject of the rest of this article. Note that the following isn't necessarily the only solution to this problem, but it has been put into production and proven to work for dozens of projects that, in turn, have been built and deployed in production for more than a year now.
### Binaries and deployment descriptors
A common artifact produced from building source code is a Docker or OCI-compliant image, and that image will typically be deployed to a Container Orchestration Platform (COP) such as Kubernetes. Deploying to a COP requires deployment descriptors defining how the image is to be deployed and run as a container, e.g., [Kubernetes Deployments][2] or [CronJobs][3]. It is because of the fundamental difference between what an image is and its deployment descriptors where the impedance mismatch manifests itself. For this discussion, think of images as immutable binaries stored in an image repository. Any change in the source code does not change the image but rather replaces it with a distinct, new image.
By contrast, deployment descriptors are text files and thus can be considered source code and mutable. If best practices are being followed, then the deployment descriptors are stored in SCM, and all changes are committed there first to be properly tracked.
### Solving the impedance mismatch
The first part of the proposed solution is to ensure that a method exists of matching the image in the image repository to the source commit in the SCM, which holds the deployment descriptors. The most straightforward solution is to tag the image with its source commit hash. This will keep different versions of the image separate, easily identifiable, and provide enough information to find the correct deployment descriptors so that the image can be properly deployed in the COP.
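For example, a build pipeline step might apply the tag like this (a sketch; the registry and image names are illustrative, not prescribed by this article):
```
# Tag and push the freshly built image using the source commit hash
SRC_COMMIT=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:${SRC_COMMIT} .
docker push registry.example.com/myapp:${SRC_COMMIT}
```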
Reviewing the scenarios above again:
**Scenario A**. _Promoting an image from one downstream environment to the next_: When the image is promoted from test to UAT, the image's tag tells us from which source commit in the SCM to pull the deployment descriptors.
**Scenario B**. _When an image needs to be rolled back in a downstream environment_: Whichever image we choose to roll back to will also tell us from which source commit in the SCM to pull the correct deployment descriptors.
In each case, it doesn't matter how many development branch commits and builds have taken place since a particular image has been deployed in test since every image that's been promoted can find the exact deployment descriptors it was originally deployed with.
This isn't a complete solution to the impedance mismatch, however. Consider two additional scenarios:
**Scenario C**. In a load testing environment, different deployment descriptors are tried at various times to see how a particular build performs.
**Scenario D**. An image is promoted to a downstream environment, and there's an error in the deployment descriptors for that environment.
In each of these scenarios, changes need to be made to the deployment descriptors, but right now all we have is a source commit hash. Remember that best practices require all source code changes to be committed back to SCM first. The commit at that hash is immutable by itself, so a better solution than just tracking the initial source commit hash is clearly needed.
The solution here is a new branch created at the original source commit hash. This will be dubbed a **Deployment Branch**. Every time an image is promoted to a downstream test or release environment, you should create a new Deployment Branch **from the head of the previous SDLC environment's Deployment Branch**.
This will allow the same image to be deployed differently and repeatedly within each SDLC environment and also pick up any changes discovered or applied for that image in each subsequent environment.
**NOTE:** How changes applied in one environment's deployment descriptors are applied to the next, whether by tools that enable sharing values such as Helm Charts or by manually cutting and pasting across directories, is beyond the scope of this article.
So, when an image is promoted from one SDLC environment to the next:
1. A Deployment Branch is created:
   1. If the image is being promoted from the dev environment, the branch is created from the source commit hash that built the image
   2. Otherwise, _the Deployment Branch is created from the head of the current Deployment Branch_
2. The image is deployed into the next SDLC environment using the deployment descriptors from the newly created Deployment Branch for that environment
![deployment branching tree][4]
Figure 1: Deployment branches
1. Development branch
2. First downstream environment's Deployment Branch with a single commit
3. Second downstream environment's Deployment Branch with a single commit
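As a minimal sketch of that promotion flow (the branch names follow the naming convention suggested later in this article; the hash is illustrative):
```
SRC_COMMIT=asdf78s   # source commit hash that built the image being promoted

# Promotion from dev to the first test environment (qa):
# branch from the source commit that built the image
git checkout -b deployment-qa-${SRC_COMMIT} ${SRC_COMMIT}

# Promotion from qa to the next environment (stg):
# branch from the head of the previous Deployment Branch
git checkout -b deployment-stg-${SRC_COMMIT} deployment-qa-${SRC_COMMIT}
```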
Revisiting Scenarios C and D from above with Deployment Branches as a solution:
**Scenario C**. Change the deployment descriptors for an image deployed to a downstream SDLC environment
**Scenario D**. Fix an error in the deployment descriptors for a particular SDLC environment
In each scenario, the workflow is as follows:
1. Commit the changes to the deployment descriptors to the Deployment Branch for the SLDC environment and image
2. Redeploy the image into the SLDC environment using the deployment descriptors at the head of the Deployment Branch
Thus, Deployment Branches fully resolve the impedance mismatch between image repositories storing a single, immutable image representing a unique build and SCM repositories storing mutable deployment descriptors for one or more downstream SDLC environments.
### Practical considerations
While this seems like a workable solution, it also opens up several new practical questions for developers and operations resources alike, such as:
A. Where should deployment descriptors be kept as source to best facilitate Deployment Branch management, i.e., in the same or a different SCM repository than the source that built the image?
Up until now, we've avoided speaking about which repository the deployment descriptors should reside. Without going into too much detail, we recommend putting the deployment descriptors for all SDLC environments into the same SCM repository as the image source. As Deployment Branches are created, the source for the images will follow and act as an easy-to-find reference for what is actually running in the container being deployed.
As mentioned above, images will be associated with the original source commit via their tag. Finding the reference for the source at a particular commit in a separate repository would add a level of difficulty for developers, even with tooling; keeping everything in a single repository makes that unnecessary.
B. Should the source code that built the image be modified on a Deployment Branch?
Short answer: **NEVER**.
Longer answer: No, because images should never be built from Deployment Branches. They're built from development branches. Changing the source that defines an image in a Deployment Branch will destroy the record of what built the image being deployed and doesn't actually modify the functionality of the image. This could also become an issue when comparing two Deployment Branches from different versions. It might give a false positive for differences in functionality between them (a small but additional benefit to using Deployment Branches).
C. Why an image tag? Couldn't image labels be used?
Tags are easily readable and searchable for images stored in a repository. Reading and searching for labels with a particular value over a group of images requires pulling the manifest for each image, which adds complexity and reduces performance. Also, tagging images for different versions is still necessary for historical record and finding different versions, so using the source commit hash is the easiest solution that guarantees uniqueness while also containing instantly useful information.
D. What is the most practical way to create Deployment Branches?
The first three rules of DevOps are _automate_, _automate_, _automate_.
Relying on resources to enforce best practices uniformly is hit and miss at best, so when implementing a CI/CD pipeline for image promotion, rollback, etc., incorporate automated Deployment Branching into the script.
E. Any suggestions for a naming convention for Deployment Branches?
<_**deployment-branch-identifier**_>-<_**env**_>-<_**src-commit-hash**_>
* _**deployment-branch-identifier:**_ A unique string used by every Deployment Branch to identify it as a Deployment Branch; e.g. 'deployment' or 'deploy'
* _**env:**_ The SDLC environment the Deployment Branch pertains to; e.g. 'qa', 'stg', or 'prod' for the test, staging, and production environments, respectively
* _**src-commit-hash:**_ The source code commit hash that holds the original code that built the image being deployed, which allows developers to easily find the original commit that created the image while ensuring the branch name is unique
For example, _**deployment-qa-asdf78s**_ or _**deployment-stg-asdf78s**_ for Deployment Branches promoted to the QA and STG environments, respectively.
F. How do you tell which version of the image is running in the environment?
Our suggestion is to [label][5] all your deployment resources with the latest Deployment Branch commit hash and the source commit hash. These two unique identifiers will allow developers and operations personnel to find everything that was deployed and from where. It also makes cleanup of resources trivial using those selectors on deployments of different versions, e.g., on rollback or roll forward operations.
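As a hedged illustration of that suggestion (the resource name, label keys, and hash values are made up for the example):
```
# Label the deployed resources with both commit hashes
kubectl label deployment myapp \
    deployment-commit=1a2b3c4 src-commit=asdf78s --overwrite

# Later, find everything that was deployed from a given build
kubectl get all -l src-commit=asdf78s
```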
G. When is it appropriate to merge changes from Deployment Branches back into the development branch?
It's completely up to the development team on what makes sense.
If you're making changes for load testing purposes just to see what will break your application, for example, then those changes may not be the best thing to merge back into the development branch. On the other hand, if you find and fix an error or tune a deployment in a downstream environment, merging the Deployment Branch changes back into the development branch makes sense.
H. Is there a working example of Deployment Branching to test with first?
[el-CICD][6] has been successfully using this strategy for a year and a half in production for more than a hundred projects across all SDLC downstream environments, including managing deployments to production. If you have access to an [OKD][7], Red Hat OpenShift lab cluster, or [Red Hat CodeReady Containers][8], you can download the [latest el-CICD version][9] and run through the [tutorial][10] to see how and when Deployment Branches are created and used.
### Wrap up
Using the working example above would be a good exercise to help you better understand the issues surrounding impedance mismatches in development processes. Maintaining alignment between images and deployment descriptors is a critical part of successfully managing deployments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/impedance-mismatch-cicd
作者:[Evan "Hippy" Slatis][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hippyod
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)
[2]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[3]: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
[4]: https://opensource.com/sites/default/files/picture1.png
[5]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
[6]: https://github.com/elcicd
[7]: https://www.okd.io/
[8]: https://cloud.redhat.com/openshift/create/local
[9]: https://github.com/elcicd/el-CICD-RELEASES
[10]: https://github.com/elcicd/el-CICD-docs/blob/master/tutorial.md

View File

@ -0,0 +1,126 @@
[#]: subject: "Ulauncher: A Super Useful Application Launcher for Linux"
[#]: via: "https://itsfoss.com/ulauncher/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ulauncher: A Super Useful Application Launcher for Linux
======
_**Brief:**_ _Ulauncher is a fast application launcher with extension and shortcut support to help you quickly access applications and files in Linux._
An application launcher lets you quickly access or open an app without hovering over the application menu icons.
By default, I found the application launcher with Pop!_OS super handy. But, not every Linux distribution offers an application launcher out-of-the-box.
Fortunately, there is a solution with which you can add the application launcher to most of the popular distros out there.
### Ulauncher: Open Source Application Launcher
![][1]
Ulauncher is a quick application launcher built using Python and GTK+.
It gives a decent amount of customization and control options to tweak. Overall, you can adjust its behavior and experience to suit your taste.
Let me highlight some of the features that you can expect with it.
### Ulauncher Features
The options that you get with Ulauncher are super accessible and easy to customize. Some key highlights include:
* Fuzzy search algorithm, which lets you find applications even if you misspell them
* Remembers your last searched application in the same session
* Frequently used apps display (optional)
* Custom color themes
* Preset color themes that include a dark theme
* Shortcut to summon the launcher can be easily customized
* Browse files and directories
* Support for extensions to get extra functionality (emoji, weather, speed test, notes, password manager, etc.)
* Shortcuts for browsing sites like Google, Wikipedia, and Stack Overflow
It provides almost every helpful ability that you may expect in an application launcher, and even better.
### How to Use Ulauncher in Linux?
By default, after you open Ulauncher from the application menu for the first time, you press **Ctrl + Space** to bring up the launcher.
Start typing to search for an application. And, if you are looking for a file or directory, start typing with “**~**” or “**/**” (ignoring the quotes).
![][2]
There are default shortcuts like “**g XYZ**” where XYZ is the search term you want to search for in Google.
![][3]
Similarly, you can search for something directly taking you to Wikipedia or Stack Overflow, with “**wiki**” and “**so**” shortcuts, respectively.
Without any extensions, you can also calculate things on the go and copy the results directly to the clipboard.
![][4]
This should come in handy for quick calculations without needing to launch the calculator app separately.
You can head to its [extensions page][5] and browse for useful extensions along with screenshots that should instruct you how to use it.
To change how it works, enable frequent applications display, and adjust the theme — click on the gear icon on the right side of the launcher.
![][6]
You can set it to auto-start. But, if that does not work on your systemd-enabled distro, you can refer to its GitHub page to add it to the service manager.
The options are self-explanatory and are easy to customize, as shown in the screenshot below.
![][7]
### Installing Ulauncher in Linux
Ulauncher provides a **.deb** package for Debian or Ubuntu-based distributions. You can explore [how to install deb files][8] if you're new to Linux.
In either case, you can also add its PPA and install it via terminal by following the commands below:
```
sudo add-apt-repository ppa:agornostal/ulauncher
sudo apt update
sudo apt install ulauncher
```
You can also find it in the [AUR][9] for Arch Linux and in Fedora's default repositories.
For more information, you can head to its official website or the [GitHub page][10].
[Ulauncher][11]
Ulauncher should be an impressive addition to any Linux distro. Especially if you want the functionality of a quick launcher like the one Pop!_OS offers, this is a fantastic option to consider.
_Have you tried Ulauncher yet? You are welcome to share your thoughts on how this might help you get things done quickly._
--------------------------------------------------------------------------------
via: https://itsfoss.com/ulauncher/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher.png?resize=800%2C512&ssl=1
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-directory.png?resize=800%2C503&ssl=1
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-google.png?resize=800%2C449&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-calculator.png?resize=800%2C429&ssl=1
[5]: https://ext.ulauncher.io
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-gear-icon.png?resize=800%2C338&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-settings.png?resize=800%2C492&ssl=1
[8]: https://itsfoss.com/install-deb-files-ubuntu/
[9]: https://itsfoss.com/aur-arch-linux/
[10]: https://github.com/Ulauncher/Ulauncher/
[11]: https://ulauncher.io

View File

@ -0,0 +1,281 @@
[#]: subject: "Auto-updating podman containers with systemd"
[#]: via: "https://fedoramagazine.org/auto-updating-podman-containers-with-systemd/"
[#]: author: "Daniel Schier https://fedoramagazine.org/author/danielwtd/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Auto-updating podman containers with systemd
======
![][1]
Auto-Updating containers can be very useful in some cases. Podman provides mechanisms to take care of container updates automatically. This article demonstrates how to use Podman Auto-Updates for your setups.
### Podman
Podman is a daemonless Docker replacement that can handle rootful and rootless containers. It is fully aware of SELinux and Firewalld. Furthermore, it comes pre-installed with Fedora Linux so you can start using it right away.
If Podman is not installed on your machine, use one of the following commands to install it. Select the appropriate command for your environment.
```
# Fedora Workstation / Server / Spins
$ sudo dnf install -y podman
# Fedora Silverblue, IoT, CoreOS
$ rpm-ostree install podman
```
Podman is also available for many other Linux distributions like CentOS, Debian or Ubuntu. Please have a look at the [Podman Install Instructions][2].
### Auto-Updating Containers
Updating the Operating System on a regular basis is somewhat mandatory to get the newest features, bug fixes, and security updates. But what about containers? These are not part of the Operating System.
#### Why Auto-Updating?
If you want to update your Operating System, it can be as easy as:
```
$ sudo dnf update
```
This will not take care of the deployed containers. But why should you take care of these? If you check the content of containers, you will find the application (for example MariaDB in the docker.io/library/mariadb container) and some dependencies, including basic utilities.
Running updates for containers can be tedious and time-consuming, since you have to:
1. pull the new image
2. stop and remove the running container
3. start the container with the new image
This procedure must be done for every container. Updating 10 containers can easily end up taking 30-40 commands that must be run.
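To make the tedium concrete, the routine for a single container looks something like this (a sketch using the MariaDB image mentioned above; the container name and run options are illustrative):
```
# 1. pull the new image
$ sudo podman pull docker.io/library/mariadb:10.5

# 2. stop and remove the running container
$ sudo podman container stop db
$ sudo podman container rm db

# 3. start the container with the new image (original run options elided)
$ sudo podman container run -d --name db ... docker.io/library/mariadb:10.5
```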
Automating these steps will save time and ensure that everything is up to date.
#### Podman and systemd
Podman has built-in support for systemd. This means you can start/stop/restart containers via systemd without the need for a separate daemon. The Podman Auto-Update feature requires you to have containers running via systemd. This is the only way to automatically ensure that all desired containers are running properly. Some articles, like these for [Bitwarden][3] and [Matrix Server][4], have already looked at this feature. For this article, I will use an even simpler [Apache httpd][5] container.
First, start the container with the desired settings.
```
# Run httpd container with some custom settings
$ sudo podman container run -d -t -p 80:80 --name web -v web-volume:/usr/local/apache2/htdocs/:Z docker.io/library/httpd:2.4
# Just a quick check of the container
$ sudo podman container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
58e5b07febdf docker.io/library/httpd:2.4 httpd-foreground 4 seconds ago Up 5 seconds ago 0.0.0.0:80->80/tcp web
# Also check the named volume
$ sudo podman volume ls
DRIVER VOLUME NAME
local web-volume
```
Now, set up systemd to handle the deployment. Podman will generate the necessary file.
```
# Generate systemd service file
$ sudo podman generate systemd --new --name --files web
/home/USER/container-web.service
```
This will generate the file _container-web.service_ in your current directory. Review and edit the file to your liking. Here is the file contents with added newlines and formatting to improve readability.
```
# container-web.service
[Unit]
Description=Podman container-web.service
Documentation=man:podman-generate-systemd(1)
Wants=network.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/container-web.pid %t/container-web.ctr-id
ExecStart=/usr/bin/podman container run \
--conmon-pidfile %t/container-web.pid \
--cidfile %t/container-web.ctr-id \
--cgroups=no-conmon \
--replace \
-d \
-t \
-p 80:80 \
--name web \
-v web-volume:/usr/local/apache2/htdocs/ \
docker.io/library/httpd:2.4
ExecStop=/usr/bin/podman container stop \
--ignore \
--cidfile %t/container-web.ctr-id \
-t 10
ExecStopPost=/usr/bin/podman container rm \
--ignore \
-f \
--cidfile %t/container-web.ctr-id
PIDFile=%t/container-web.pid
Type=forking
[Install]
WantedBy=multi-user.target default.target
```
Now, remove the current container, copy the file to the proper systemd directory, and start/enable the service.
```
# Remove the temporary container
$ sudo podman container rm -f web
# Copy the service file
$ sudo cp container-web.service /etc/systemd/system/container-web.service
# Reload systemd
$ sudo systemctl daemon-reload
# Enable and start the service
$ sudo systemctl enable --now container-web
# Another quick check
$ sudo podman container ls
$ sudo systemctl status container-web
```
Please be aware that the container can now only be managed via systemd. Starting and stopping the container with the `podman` command directly may interfere with systemd.
Now that the general setup is out of the way, have a look at auto-updating this container.
#### Manual Auto-Updates
The first thing to look at is manual auto-updates. Sounds weird? This feature lets you avoid the three steps per container while keeping full control over the update time and date. This is very useful if you only want to update containers in a maintenance window or on the weekend.
Edit the _/etc/systemd/system/container-web.service_ file and add the label shown below to it.
```
--label "io.containers.autoupdate=registry"
```
The changed file will have a section appearing like this:
```
...snip...
ExecStart=/usr/bin/podman container run \
--conmon-pidfile %t/container-web.pid \
--cidfile %t/container-web.ctr-id \
--cgroups=no-conmon \
--replace \
-d \
-t \
-p 80:80 \
--name web \
-v web-volume:/usr/local/apache2/htdocs/ \
--label "io.containers.autoupdate=registry" \
docker.io/library/httpd:2.4
...snip...
```
Now reload systemd and restart the container service to apply the changes.
```
# Reload systemd
$ sudo systemctl daemon-reload
# Restart container-web service
$ sudo systemctl restart container-web
```
After this setup you can run a simple command to update a running instance to the latest available image for the used tag. In this example case, if a new 2.4 image is available in the registry, Podman will download the image and restart the container automatically with a single command.
```
# Update containers
$ sudo podman auto-update
```
#### Scheduled Auto-Updates
Podman also provides a systemd timer unit that enables container updates on a schedule. This can be very useful if you don't want to handle the updates on your own. If you are running a small home server, this might be the right thing for you, so you get the latest updates every week or so.
Enable the systemd timer for podman as follows:
```
# Enable podman auto update timer unit
$ sudo systemctl enable --now podman-auto-update.timer
Created symlink /etc/systemd/system/timers.target.wants/podman-auto-update.timer → /usr/lib/systemd/system/podman-auto-update.timer.
```
Optionally, you can edit the schedule of the timer. By default, the update will run every Monday morning, which is OK for me. Edit the timer unit using this command:
```
$ sudo systemctl edit podman-auto-update.timer
```
This will bring up your default editor. Changing the schedule is beyond the scope of this article but the link to _systemd.timer_ below will help. The Demo section of [Systemd Timers for Scheduling Tasks][6] contains details as well.
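For illustration only (the linked documentation covers the details), a drop-in override created by that command might look like this, assuming you want the update to run daily at 03:00:
```
[Timer]
# Clear the packaged schedule, then set a custom one (illustrative)
OnCalendar=
OnCalendar=*-*-* 03:00:00
```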
That's it. Nothing more to do. Podman will now take care of image updates and also prune old images on a schedule.
### Hints & Tips
Auto-Updating seems like the perfect solution for container updates, but you should consider some things, before doing so.
* avoid using the “latest” tag, since it can include major updates
* consider using tags like “2” or “2.4”, if the image provider has them
* test auto-updates beforehand (does the container support updates without additional steps?)
* consider having backups of your Podman volumes, in case something goes sideways
* auto-updates might not be very useful for production setups, where you need full control over the image version in use
* updating a container also restarts the container and prunes the old image
* occasionally check if the updates are being applied
If you take care of the above hints, you should be good to go.
### Docs & Links
If you want to learn more about this topic, please check out the links below. There is a lot of useful information in the official documentation and some blogs.
* <https://docs.podman.io/en/latest/markdown/podman-auto-update.1.html>
* <https://docs.podman.io/en/latest/markdown/podman-generate-systemd.1.html>
* <https://www.freedesktop.org/software/systemd/man/systemd.service.html>
* <https://www.freedesktop.org/software/systemd/man/systemd.timer.html>
* [Systemd Timers for Scheduling Tasks][6]
### Conclusion
As you can see, without the use of additional tools, you can easily run auto-updates on Podman containers manually or on a schedule. Scheduling allows unattended updates overnight, and you will get all the latest security updates, features, and bug fixes. Some setups I have tested successfully are: MariaDB, Ghost Blog, WordPress, Gitea, Redis, and PostgreSQL.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/auto-updating-podman-containers-with-systemd/
作者:[Daniel Schier][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/danielwtd/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/auto-updating-podman-containers-816x345.jpg
[2]: https://podman.io/getting-started/installation
[3]: https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/
[4]: https://fedoramagazine.org/deploy-your-own-matrix-server-on-fedora-coreos/
[5]: https://hub.docker.com/_/httpd
[6]: https://fedoramagazine.org/systemd-timers-for-scheduling-tasks/

View File

@ -0,0 +1,140 @@
[#]: subject: "Icons Look too Small? Enable Fractional Scaling to Enjoy Your HiDPI 4K Screen in Ubuntu Linux"
[#]: via: "https://itsfoss.com/enable-fractional-scaling-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Icons Look too Small? Enable Fractional Scaling to Enjoy Your HiDPI 4K Screen in Ubuntu Linux
======
A few months ago, I bought a Dell XPS laptop with a 4K UHD screen. The screen resolution is 3840×2400 with a 16:10 aspect ratio.
When I was installing Ubuntu on it, everything looked so small. The desktop icons, applications, menus, items in the top panel, everything.
It's because the screen has too many pixels while the desktop icons and the rest of the elements remain the same size (as on a regular screen of 1920×1080). Hence, they look too small on the HiDPI screen.
![Icons and other elements look too small on a HiDPI screen in Ubuntu][1]
This is not pretty and makes it very difficult to use your Linux system. Thankfully, there is a solution for GNOME desktop users.
If you too have a 2K or 4K screen where the desktop icons and other elements look too small, here's what you need to do.
### Scale-up display if the screen looks too small
If you have a 4K screen, you can scale the display to 200%. This means that you are making every element twice its size.
Press the Windows key and search for Settings:
![Go to Settings][2]
In Settings, go to Display settings.
![Access the Display Settings and look for Scaling][3]
Here, select 200% as the scale factor and click on the Apply button.
![Scaling the display in Ubuntu][4]
It will change the display settings and ask you to confirm whether you want to keep the changed settings or revert to the original. If things look good to you, select “Keep Changes.”
Your display settings will be changed and remain the same even after reboots until you change it again.
### Enable fractional scaling (suitable for 2K screens)
200% scaling is good for 4K screens. However, if you have a 2K screen, 200% scaling will make the icons look too big for the screen.
Now you are in the soup. You have the screen looking too small or too big. What about a mid-point?
Thankfully, [GNOME][5] has a fractional scaling feature that allows you to set the scaling to 125%, 150%, and 175%.
#### Using fractional scaling on Ubuntu 20.04 and newer versions
Ubuntu 20.04 and newer versions ship a newer version of the GNOME desktop environment, which allows you to enable or disable fractional scaling from the Display settings itself.
Just go to the Display settings and look for the Fractional Scaling switch. Toggle it to enable or disable it.
When you enable fractional scaling, you'll see new scaling factors between 100% and 200%. You can choose the one that is suitable for your screen.
![Enable fractional scaling][6]
#### Using fractional scaling on Ubuntu 18.04
You'll have to make some additional effort to make it work on the older Ubuntu 18.04 LTS version.
First, [switch to Wayland from Xorg][7].
Second, enable fractional scaling as an experimental feature using this command:
```
gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"
```
Third, restart your system and then go to the Display settings and you should see the fractional scaling toggle button now.
#### Disabling fractional scaling on Ubuntu 18.04
If you are experiencing issues with fractional scaling, like increased power consumption and mouse lagging, you may want to disable it. Wayland could also be troublesome for some applications.
First, toggle the fractional scaling switch in the display settings. Now use the following command to disable the experimental feature.
```
gsettings reset org.gnome.mutter experimental-features
```
Switch back to Xorg from Wayland again.
### Multi-monitor setup and fractional scaling
A 4K screen is good, but I prefer a multi-monitor setup for work. The problem here is that I have two Full HD (1080p) monitors. Pairing them with my 4K laptop screen requires a few settings changes.
What I do here is to keep the 4K screen at 200% scaling at 3840×2400 resolution. At the same time, I keep the full-HD monitors at 100% scaling with 1920×1080 resolution.
![HiDPI screen is set at 200%][8]
![Full HD screens are set at 100%][9]
![Full HD screens are set at 100%][10]
To ensure a smooth experience, you should take care of the following:
* Use Wayland display server: It is a lot better at handling multi-screens and HiDPI screens than the legacy Xorg.
* Even if you use only 100% and 200% scaling, enabling fractional scaling is a must; otherwise, it doesn't work properly. I know it sounds weird, but that's what I have experienced.
### Did it help?
HiDPI support in Linux is far from perfect, but it is certainly improving. Newer versions of the GNOME and KDE desktop environments keep improving on this front.
Fractional scaling with Wayland works quite well. It is improving with Xorg as well, but it struggles especially on a multi-monitor setup.
I hope this quick tip helped you to enable fractional scaling in Ubuntu and enjoy your Linux desktop on a UHD screen.
Please leave your questions and suggestions in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/enable-fractional-scaling-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2021/08/HiDPI-screen-icons-too-small-in-Ubuntu.webp
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/settings-application-ubuntu.jpg?resize=800%2C247&ssl=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/display-settings-scaling-ubuntu.png?resize=800%2C432&ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/scale-display-ubuntu.png?resize=800%2C443&ssl=1
[5]: https://www.gnome.org/
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/enable-fractional-scaling.png?resize=800%2C452&ssl=1
[7]: https://itsfoss.com/switch-xorg-wayland/
[8]: https://itsfoss.com/wp-content/uploads/2021/08/fractional-scaling-ubuntu-multi-monitor-3.webp
[9]: https://itsfoss.com/wp-content/uploads/2021/08/fractional-scaling-ubuntu-multi-monitor-2.webp
[10]: https://itsfoss.com/wp-content/uploads/2021/08/fractional-scaling-ubuntu-multi-monitor-1.webp

View File

@ -0,0 +1,217 @@
[#]: subject: "Use this open source tool for automated unit testing"
[#]: via: "https://opensource.com/article/21/8/tackle-test"
[#]: author: "Saurabh Sinha https://opensource.com/users/saurabhsinha"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Use this open source tool for automated unit testing
======
Tackle-test is an automatic generator of unit test cases for Java
applications.
![Looking at a map][1]
Modernizing and transforming legacy applications is a challenging activity that involves several tasks. One of the key tasks is validating that the modernized application preserves the functionality of the legacy application. Unfortunately, this can be tedious and hard to perform. Legacy applications often do not have automated test cases, or, if available, test coverage might be inadequate, both in general and specifically for covering modernization-related changes. A poorly maintained test suite might also contain many obsolete tests (accumulated over time as the application evolved). Therefore, validation is mainly done manually in most modernization projects—it is a process that is time-consuming and may not test the application sufficiently. In some reported case studies, testing accounted for approximately 70% to 80% of the time spent on modernization projects [1]. Tackle-test is an automated testing tool designed to address this challenge.
### Overview of Tackle-test
At its core, Tackle-test is an automatic generator of unit test cases for Java applications. It can generate tests with assertions, which makes the tool especially useful in modernization projects, where application transformation is typically functionality-preserving—thus, useful test assertions can be created by observing runtime states of legacy application versions. This can make differential testing between the legacy and modernized application versions much more effective; test cases without assertions would detect only those differences where the modernized version crashes on a test input on which the legacy version executes successfully. The assertions that Tackle-test generates capture created object values after each code statement, as illustrated in the next section.
Tackle-test uses a novel test-generation technique that applies combinatorial test design (CTD)—also called combinatorial testing or combinatorial interaction testing [2]—to method interfaces, with the goal of performing rigorous testing of methods with “complex interfaces,” where interface complexity is characterized over the space of parameter-type combinations that a method can be invoked with. CTD is a well-known, effective, and efficient test-design technique. It typically requires a manual definition of the test space in the form of a CTD model, consisting of a set of parameters, their respective values, and constraints on the value combinations. A valid test in the test space is defined as an assignment of one value to each parameter that satisfies the constraints. A CTD algorithm automatically constructs a subset of the set of valid tests to cover all legitimate value combinations of every _t_ parameters, where _t_ is usually a user input.
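For intuition, here is a tiny illustrative example (not taken from the tool): three Boolean parameters A, B, and C have 2^3 = 8 exhaustive combinations, but a pairwise (_t_ = 2) test plan covers every value pair of every two parameters with only four tests:
```
A B C
0 0 0
0 1 1
1 0 1
1 1 0
```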
Although CTD is typically applied to program inputs in a black-box manner and the CTD model is created manually, Tackle-test automatically builds a parameter-type-based white-box CTD model for each method under test. It then generates a test plan consisting of coverage goals from the model and synthesizes test sequences for covering rows of the test plan. The test plan can be generated at different, user-configurable interaction levels, where higher levels result in the generation of more test cases and more thorough testing, but at the cost of increased test-generation time.
Tackle-test also leverages some existing and commonly used test-generation strategies to maximize code coverage. Specifically, the strategies include feedback-driven random test generation (via the [Randoop][2] open source tool) and evolutionary and constraint-based test generation (via the [EvoSuite][3] open source tool). These tools compute coverage goals in code elements, such as methods, statements, and branches.
![tackle-test components][4]
Figure 1: High-level components of Tackle-test.
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
Figure 1 presents a high-level view of the main components of Tackle-test. It consists of a Java-based core test generator that generates CTD-driven tests and a Python-based command-line interface (CLI), which is the primary mechanism for user interaction.
### Getting started with the tool
Tackle-test is released as open source under the Konveyor organization (<https://github.com/konveyor/tackle-test-generator-cli>). To get started, clone the repo, and follow the instructions for installing and running the tool provided in the repo readme. There are two installation options: using docker/docker-compose or a local installation.
The CLI provides two main commands: `generate` for generating JUnit test cases and `execute` for executing them. To verify your installation completed successfully, use the sample `irs` application located in the test/data folder to run these two commands.
The `generate` command is accompanied by a subcommand specifying the test-generation strategy (`ctd-amplified`, `randoop`, or `evosuite`) and creates JUnit test cases. By default, diff assertions are added to the generated test cases. Let's run the `generate` command on the `irs` sample, using the CTD-guided strategy.
```
$ tkltest --config-file ./test/data/irs/tkltest_config.toml --verbose generate ctd-amplified
[tkltest|18:00:11.171] Loading config file ./test/data/irs/tkltest_config.toml
[tkltest|18:00:11.175] Computing coverage goals using CTD
* CTD interaction level: 1
* Total number of classes: 5
* Targeting 5 classes
* Created a total of 20 test combinations for 20 target methods of 5 target classes
[tkltest|18:00:12.816] Computing test plans with CTD took 1.64 seconds
[tkltest|18:00:12.816] Generating basic block test sequences using CombinedTestGenerator
[tkltest|18:00:12.816] Test generator output will be written to irs_CombinedTestGenerator_output.log
[tkltest|18:01:02.693] Generating basic block test sequences with CombinedTestGenerator took 49.88 seconds
[tkltest|18:01:02.693] Extending sequences to reach coverage goals and generating junit tests
* === total CTD test-plan coverage rate: 90.00% (18/20)
* Added a total of 64 diff assertions across all sequences
* wrote summary file for generation of CTD-amplified tests (JSON)
* wrote 5 test class files to "irs-ctd-amplified-tests/monolithic" with 18 total test methods
* wrote CTD test-plan coverage report (JSON)
[tkltest|18:01:06.694] JUnit tests are saved in ./irs-ctd-amplified-tests
[tkltest|18:01:06.695] Extending test sequences and writing junit tests took 4.0 seconds
[tkltest|18:01:06.700] CTD coverage report is saved in ./irs-tkltest-reports/ctd report/ctdsummary.html
[tkltest|18:01:06.743] Generated Ant build file ./irs-ctd-amplified-tests/build.xml
[tkltest|18:01:06.743] Generated Maven build file ./irs-ctd-amplified-tests/pom.xml
```
Test generation takes a couple of minutes on the `irs` sample. By default, the tool spends 10 seconds per class on initial test sequence generation. However, the overall runtime can be longer due to additional steps, as explained in the following section. Note that the time limit per class option is configurable and that for large applications, test generation might take several hours. Therefore, it is a good practice to start with a limited scope of a few classes to get a feel for the tool before performing test generation on all application classes.
When test generation completes, the test cases are written to a designated directory named `irs-ctd-amplified-tests` as output by the tool, along with Maven and Ant scripts for compiling and executing them. The test cases are in a subdirectory named `monolithic`. A separate test file is created for each application class. Each such file contains multiple test methods for testing the public methods of the class with different combinations of parameter types, as specified by the CTD test plan. A CTD coverage report, summarizing the test-plan parts for which unit tests could be generated, is created in a directory named `irs-tkltest-reports`. In the above output, we can see that Tackle-test created test cases for 18 of the 20 test-plan rows, resulting in 90% test-plan coverage.
![amplified tests][6]
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
Now let's look at one of the generated test methods for the `irs.IRS` class.
```
@Test
public void test1() throws Throwable {
    irs.IRS iRS0 = new irs.IRS();
    java.util.ArrayList<irs.Salary> salaryList1 = new java.util.ArrayList<irs.Salary>();
    irs.Salary salary5 = new irs.Salary(0, 0, (double) 100);
    assertEquals(0, ((irs.Salary) salary5).getEmployerId());
    assertEquals(0, ((irs.Salary) salary5).getEmployeeId());
    assertEquals(100.0, (double) ((irs.Salary) salary5).getSalary(), 1.0E-4);
    boolean boolean6 = salaryList1.add(salary5);
    assertEquals(true, boolean6);
    iRS0.setSalaryList((java.util.List<irs.Salary>) salaryList1);
}
```
This test method is intended to test the `setSalaryList` method of `IRS`, which receives a list of `irs.Salary` objects as its input. We can see that statements of the test case are followed by calls to the `assertEquals` method, comparing the values of generated objects to the values recorded during the generation of this test. When the test executes again, e.g., on the modernized version of the application, and any value differs from the recorded one, an assertion failure occurs, potentially indicating broken code that did not preserve the functionality of the legacy application.
Next, we will compile and run the generated test cases using the CLI `execute` command. Note that these are standard JUnit test cases that can be run in an IDE or using any JUnit test runner; they can also be integrated into a CI pipeline. When executed with the CLI, JUnit reports are generated and, optionally, code-coverage reports as well (created using [JaCoCo][7]).
```
$ tkltest --config-file ./test/data/irs/tkltest_config.toml --verbose execute
[tkltest|18:12:46.446] Loading config file ./test/data/irs/tkltest_config.toml
[tkltest|18:12:46.457] Total test classes: 5
[tkltest|18:12:46.457] Compiling and running tests in ./irs-ctd-amplified-tests
Buildfile: ./irs-ctd-amplified-tests/build.xml
delete-classes:
compile-classes_monolithic:
      [javac] Compiling 5 source files
execute-tests_monolithic:
      [mkdir] Created dir: ./irs-tkltest-reports/junit-reports/monolithic
      [mkdir] Created dir: ./irs-tkltest-reports/junit-reports/monolithic/raw
      [mkdir] Created dir: ./irs-tkltest-reports/junit-reports/monolithic/html
[jacoco:coverage] Enhancing junit with coverage
...
BUILD SUCCESSFUL
Total time: 2 seconds
[tkltest|18:12:49.772] JUnit reports are saved in ./irs-tkltest-reports/junit-reports
[tkltest|18:12:49.773] Jacoco code coverage reports are saved in ./irs-tkltest-reports/jacoco-reports
```
The Ant script executes the unit tests by default, but the user can configure the tool to use Maven instead. Gradle will also be supported soon.
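Because the generated `build.xml` and `pom.xml` are standard build files, you can also invoke them directly, outside the CLI. The sketch below assumes the generated `pom.xml` wires the tests into Maven's usual Surefire `test` phase and that the Ant default target compiles and runs the tests; check the generated files if your layout differs.
```
# run the generated tests with the generated Maven build file
$ mvn -f irs-ctd-amplified-tests/pom.xml test

# or with the generated Ant build file (default target)
$ ant -f irs-ctd-amplified-tests/build.xml
```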
Looking at the JUnit report, located in `irs-tkltest-reports`, we can see that all JUnit test methods passed. This is expected because we executed them on the same version of the application on which they were generated.
![junit report][8]
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
From the JaCoCo code-coverage report, also located in `irs-tkltest-reports`, we can see that CTD-guided test generation achieved 71% overall statement coverage and 94% branch coverage on the `irs` sample. We can also drill down to the class and method levels to see their coverage rates. The missing coverage is the result of test-plan rows for which the test generator was unable to generate a passing sequence. Increasing the per-class test-generation time limit can increase the coverage rate.
![jacoco][9]
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
### CTD-guided test generation
Figure 2 illustrates the test-generation flow for CTD-guided test generation, implemented in the core test-generation engine of Tackle-test. The input to the test-generation flow is a specification of (1) the application classes, (2) the library dependencies of the application, and (3) optionally, the set of application classes to target for test generation (if unspecified, all application classes are targeted). This specification is provided via a [TOML][10] configuration file. The output from the flow consists of: (1) JUnit test cases (with or without assertions), (2) Maven and Ant build files, and (3) JSON files containing a summary of test generation and CTD test-plan coverage.
![ctd-guided test generation][11]
Figure 2: The process for CTD-guided test generation.
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
The flow starts with the generation of the CTD test plan. This involves creating a CTD model for each public method of the targeted classes. The CTD model for each method captures all possible concrete types for every formal parameter of the method, including elements that can be added to collection/map/array parameter types. Tackle-test incorporates lightweight static analysis to deduce the feasible concrete types for each parameter of each method.
Next, a CTD test plan is generated automatically from the model at a given (user-configurable) interaction level. Each row in the test plan describes a specific combination of concrete parameter types with which the method should be invoked. By default, the interaction level is set to one, which results in one-way testing: each possible concrete parameter type appears in at least one row of the test plan. Setting the interaction level to two, a.k.a. pairwise testing, results in a test plan that includes every pair of concrete types for each pair of method parameters in at least one of its rows. For example, for a method with two parameters, each having two feasible concrete types, one-way testing can cover all the types in two rows, whereas pairwise testing needs all four type combinations.
The CTD test plan provides a set of coverage goals for which test sequences need to be synthesized. Tackle-test does this in two steps. In the first step, it uses Randoop and/or EvoSuite (the user can configure which tools are used) to create base test sequences. The base test sequences are analyzed to generate sequence pools at method and class levels from which the test-generation engine samples sequences to put together a covering sequence for each test-plan row. If a covering sequence is successfully created, the engine executes it to ensure that the sequence is valid in the sense that it does not cause the application to crash. During this execution, runtime states in terms of objects created are also recorded to be used later for assertion generation. Failing sequences are discarded. The engine adds assertions to passing sequences if the user specifies the assertion option. Finally, the engine exports the sequences, grouped by classes, to JUnit class files. The engine also creates Ant `build.xml` and Maven `pom.xml` files, which can be used if needed for running the generated test cases.
### Other tool features
Tackle-test is highly configurable and provides several configuration options with which the user can tailor the behavior of the tool: for example, which classes to generate tests for, which tools to use for test generation, how much time to spend on test generation, whether to add assertions to test cases, what interaction level to use for generating CTD test plans, how many executions to perform for extended test sequences, and so on.
### Effectiveness of different test-generation strategies
Tackle-test has been evaluated on several open source Java applications and is currently being applied to enterprise-grade Java applications as well.
![instruction coverage results][12]
Figure 3: Instruction coverage achieved by test cases generated using different strategies and interaction levels for two small open-source Java applications taken from the [SF110 benchmark][13].
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
Figure 3 presents data about statement coverage achieved by tests generated using different testing strategies on two small open source Java applications. The applications were taken from the [SF110 benchmark][13], a large corpus of open source Java applications created to facilitate empirical studies of automated testing techniques. One of the applications, `jni-inchi`, consists of 24 classes and 74 methods; the other, `gaj`, consists of 14 classes and 17 methods. The box plot shows that targeting CTD test-plan rows by itself can achieve good statement coverage and, compared to test suites of the same size as the CTD-guided test suite sampled out of Randoop- and EvoSuite-generated test cases, the CTD-guided test suite achieves higher statement coverage, making it more efficient.
A large-scale evaluation of Tackle-test, using more applications from the SF110 benchmark and some proprietary enterprise Java applications, is currently being conducted.
If you prefer to see a video demonstration, you can watch it [here][14].
We encourage you to try out the tool and provide feedback to help us improve it, whether by opening issues or by submitting pull requests. We also invite you to help improve the tool by contributing to the project.
#### Migrate to Kubernetes with the Konveyor community
Tackle-test is part of the Konveyor community. This community is helping others modernize and migrate their applications to the hybrid cloud by building tools, identifying patterns, and providing advice on breaking down monoliths, adopting containers, and embracing Kubernetes.
This community builds open source tools that migrate virtual machines to KubeVirt, move Cloud Foundry or Docker containers to Kubernetes, and migrate namespaces between Kubernetes clusters. These are a few of the use cases we solve for.
For updates on these tools and invites to meetups where practitioners show how they moved to Kubernetes, [join the community][15].
#### References
[1] COBOL to Java and Newspapers Still Get Delivered, <https://arxiv.org/pdf/1808.03724.pdf>, 2018.
[2] D. R. Kuhn, R. N. Kacker, and Y. Lei. Introduction to Combinatorial Testing. Chapman &amp; Hall/CRC, 2013.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/tackle-test
作者:[Saurabh Sinha][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/saurabhsinha
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
[2]: https://randoop.github.io/randoop/
[3]: https://www.evosuite.org/
[4]: https://opensource.com/sites/default/files/1tackle-test-components.png
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/2amplified-tests.png (amplified tests)
[7]: https://www.eclemma.org/jacoco/
[8]: https://opensource.com/sites/default/files/3junit-report.png (junit report)
[9]: https://opensource.com/sites/default/files/4jacoco.png (jacoco)
[10]: https://toml.io/en/
[11]: https://opensource.com/sites/default/files/5ctd-guided-test-generation.png (ctd-guided test generation)
[12]: https://opensource.com/sites/default/files/6instructioncoverage.png (instruction coverage results)
[13]: https://www.evosuite.org/experimental-data/sf110/
[14]: https://youtu.be/qThqTFh2PM4
[15]: https://www.konveyor.io/

View File

@ -0,0 +1,87 @@
[#]: subject: "Elementary OS 6 Odin Review Late Arrival but a Solid One"
[#]: via: "https://www.debugpoint.com/2021/08/elementary-os-6-odin-review/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "imgradeone"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Elementary OS 6 Odin Review Late Arrival but a Solid One
======
We review elementary OS 6 Odin and give you a glimpse of how our test drive went.
For almost two years, elementary OS fans were waiting for the elementary OS 6 Odin release, because the earlier version, elementary OS 5.1, had grown old in terms of kernel and packages by 2021. It was based on Ubuntu 18.04 LTS, so users were waiting for a flavor based on Ubuntu 20.04 LTS, which is already in its second year, with another LTS coming up.
You get the idea. The wait was long, and some users probably jumped ship to other distributions.
However, the release [arrived in August][1], and it was a hit among users and fans.
So, I ran elementary OS 6 Odin for a week on old hardware (I know newer hardware would do just fine), and this is the review.
![elementary OS 6 ODIN Desktop][2]
### Elementary OS 6 Odin review
Test Hardware
* CPU: Intel Core i3 with 4 GB RAM
* Disk: SSD
* Graphics: Nvidia GeForce (340)
#### Installation
In this release, the team made some usability changes to the elementary Installer, its homegrown tool, reducing the steps required to begin the installation. It still depends on GParted for partitioning (which is a great tool in its own right).
The installation took around 10 minutes on the above hardware and went without any error. Post-installation, GRUB was updated properly, with no surprises, even though this was a triple-boot system with legacy BIOS.
#### First Impression
If you are new to elementary OS or the Pantheon desktop and are coming from traditional menu-driven desktops, you might need a day or two to get familiar with the way this desktop is set up. If you are a long-time elementary user, it feels the same, with some performance benefits and visual polish.
You will notice a couple of the [new features of elementary OS 6][3] right away: the accent colors, native dark mode, and a set of nice wallpapers.
SEE ALSO: [elementary OS 6 Odin: New Features and Release Date][4]
#### Stability and performance
I have used elementary OS 6 Odin for more than a week. After using it daily, I must say it is very stable, with no sudden crashes or surprises. Additional applications (those installed separately via apt) work well with no loss of performance.
In an almost idle state, CPU usage stays around 5% to 10%, and memory consumption is around 900 MB. The CPU and memory are mostly consumed by Gala (Pantheon's window manager), Wingpanel, and AppCenter.
![System performance of elementary OS 6][5]
Considering the look and feel it provides, I think the above numbers are well justified. But remember that if you open more applications, such as LibreOffice, Chrome, or Kdenlive, it will definitely consume more resources.
#### Applications and AppCenter
The application list of elementary OS is well curated, and almost all types of apps are available from AppCenter, including Flatpak apps. However, elementary doesn't preload some important applications in the default install. For example, Firefox, LibreOffice, a torrent client, a disk formatter, and a photo editor are some important ones you need to install manually after a fresh installation. This is one of the improvement areas for the team, I feel.
### Final Notes
I encountered one bug multiple times in my week-long test run: the Wi-Fi disconnected randomly at times. But that is squarely on Ubuntu 20.04, which has had odd Wi-Fi problems over the years. Apart from that, it is a very stable and good Linux distribution. I wish there were a rolling release of elementary; that would have been awesome. That said, it's a recommended distro for all, especially for those coming from macOS.
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/elementary-os-6-odin-review/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://blog.elementary.io/elementary-os-6-odin-released/
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/elementary-OS-6-ODIN-Desktop-1024x576.jpeg
[3]: https://www.debugpoint.com/2021/08/elementary-os-6/
[4]: https://www.debugpoint.com/2020/09/elementary-os-6-odin-new-features-release-date/
[5]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/System-performance-of-elementary-OS-6.jpeg

View File

@ -1,177 +0,0 @@
[#]: collector: "lujun9972"
[#]: translator: "fisherue"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "5 ways to improve your Bash scripts"
[#]: via: "https://opensource.com/article/20/1/improve-bash-scripts"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
提升你的脚本程序的 5 种方法
======
巧用 Bash 脚本程序能帮助你完成很多极具挑战的任务。
![A person working.工作者图片][1]
系统管理员经常写脚本程序,不论长短,这些脚本可以完成某种任务。
你是否曾经查看过某个软件发行方提供的安装用的脚本 (script) 程序?为了能够适应不同用户的系统配置,顺利完成安装,这些脚本程序经常包含很多函数和逻辑分支。多年来,我积累了一些提升脚本程序的技巧,这里分享几个,希望能对朋友们也有用。这里列出一组短脚本示例,展示给大家做脚本样本。
### 初步尝试
我尝试写一个脚本程序时,原始程序往往就是一组命令行,通常就是调用标准命令完成诸如更新网页内容之类的工作,这样可以节省时间。其中一个类似的工作是解压文件到阿帕奇 (Apache) 网站服务器的主目录里,我的最初脚本程序大概是下面这样:
```
cp january_schedule.tar.gz /usr/apache/home/calendar/
cd /usr/apache/home/calendar/
tar zvxf january_schedule.tar.gz
```
这帮我节省了时间,也减少了键入多条命令操作。时日久了,我掌握了另外的技巧,可以用 Bash 脚本程序完成更难的一些工作,比如说创建软件安装包、安装软件、备份文件系统等工作。
### 1\. 条件分支结构
和众多其他编程语言一样,脚本程序的条件分支结构同样是强大的常用技能。条件分支结构赋予了计算机程序逻辑能力,我的很多实例都是基于条件逻辑分支。
基本的条件分支结构就是 IF 条件分支结构。通过判定是否满足特定条件,可以控制程序选择执行相应的脚本命令段。比如说,想要判断系统是否安装了 Java ,可以通过判断系统有没有一个 Java 库目录;如果找到这个目录,就把这个目录路径添加到可运行程序路径,也就可以调用 Java 库应用了。
```
if [ -d "$JAVA_HOME/bin" ] ; then
    PATH="$JAVA_HOME/bin:$PATH"
```
### 2\. 限定运行权限
你或许想只允许特定的用户才能执行某个脚本程序。除了 Linux 的权限许可管理,比如对用户和用户组设定权限、通过 SELinux 设定此类的保护权限等,你还可以在脚本里设置逻辑判断来设置执行权限。类似的情况可能是,你需要确保只有网站程序的所有者才能执行相应的网站初始化操作脚本。甚至你可以限定只有根用户才能执行某个脚本。这个可以通过在脚本程序里设置逻辑判断实现, Linux 提供的几个环境变量可以帮忙。其中一个是保存用户名称的变量 **$USER**, 另一个是保存用户识别码的变量 **$UID** 。在脚本程序里,执行用户的 UID 值就保存在 **$UID** 变量里。
#### 用户名判别
第一个例子里,我在一个多用户环境里指定只有用户 jboss1 可以执行脚本程序。条件 if 语句判断的是:“要求执行这个脚本程序的用户不是 jboss1 吗?”当此条件为真时,就会输出提示信息,然后通过 **exit 1** 直接退出这个脚本程序,返回码为 1。
```
if [ "$USER" != 'jboss1' ]; then
     echo "Sorry, this script must be run as JBOSS1!"
     exit 1
fi
echo "continue script"
```
#### 根用户判别
接下来的例子是要求只有根用户才能执行脚本程序。根用户的用户识别码 (UID) 是0,设置的条件判断采用大于操作符 (**-gt**) ,所有 UID 值大于0的用户都被禁止执行该脚本程序。
```
if [ "$UID" -gt 0 ]; then
     echo "Sorry, this script must be run as ROOT!"
     exit 1
fi
echo "continue script"
```
### 3\. 带参数执行程序
可执行程序可以附带参数作为执行选项,命令行脚本程序也是一样,下面给出几个例子。在这之前,我想告诉你,能写出好的程序,并不只是让它执行我们想要的操作,还要让它不去执行我们不想要的操作。如果运行程序时没有提供参数,造成程序缺少足够信息,我希望脚本程序不要做任何破坏性的操作。因而,程序的第一步就是确认命令行是否提供了参数,判定的条件就是参数数量 **$#** 的值是否为 0如果是意味着没有提供参数就直接终止脚本程序并退出操作。
```
if [ $# -eq 0 ]; then
    echo "No arguments provided"
    exit 1
fi
echo "arguments found: $#"
```
#### 多个运行参数
可以传递给脚本程序的参数不止一个。脚本使用内部变量指代这些参数,内部变量名用非负整数递增标识,也就是 **$1**、**$2**、**$3** 等等递增。我只是扩展前面的程序,输出显示用户提供的前三个参数。显然,要针对所有的每个参数有对应的响应需要更多的逻辑判断,这里的例子只是简单展示参数的使用。
```
echo $1 $2 $3
```
我们在讨论这些参数变量名,你或许有个疑问,“参数变量名怎么跳过了**$0**,(而直接从**$1**开始)?”
是的,是这样,这是有原因的。变量名 **$0** 确实存在,也非常有用,它储存的是被执行的脚本程序的名称。
```
echo $0
```
程序执行过程中有一个变量名指代程序名称,很重要的一个原因是,可以在生成的日志文件名称里包含程序名称,最简单的方式应该是调用一个 "echo" 语句。
```
echo test >> $0.log
```
当然,你或许要增加一些代码,确保这个日志文件存放在你希望的路径,日志名称包含你认为有用的信息。
### 4\. 交互输入
脚本程序的另一个好用的特性是可以在执行过程中接受输入,最简单的情况是让用户可以输入一些信息。
```
echo "enter a word please:"
 read word
 echo $word
```
这样也可以让用户在程序执行中作出选择。
```
read -p "Install Software ?? [Y/n]: " answ
 if [ "$answ" == 'n' ]; then
   exit 1
 fi
   echo "Installation starting..."
```
### 5\. 出错退出执行
几年前,我写了个脚本,想在自己的电脑上安装最新版本的 Java 开发工具包 (JDK) 。这个脚本把 JDK 文件解压到指定目录,创建更新一些符号链接,再做一下设置告诉系统使用这个最新的版本。如果解压过程出现错误,再执行后面的操作就会破坏整个系统上的 Java使其不能使用。因而这种情况下需要终止程序。如果解压过程没有成功就不应该再继续进行之后的更新操作。下面语句段可以完成这个功能。
```
tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500; ec=$?
if [ $ec -ne 0 ]; then
     echo "Installation failed - exiting."
     exit 1
fi
```
下面的一行语句组可以快速展示变量 **$?** 的用法。
```
ls T; ec=$?; echo $ec
```
先用 **touch T** 命令创建一个文件名为 **T** 的文件,然后执行这行示例命令,变量 **ec** 的值会是 0。然后用 **rm T** 命令删除文件,再执行这行示例命令,变量 **ec** 的值会是 2因为文件 T 不存在,命令 **ls** 找不到指定文件报错,相应返回值是 2。
在逻辑条件里利用这个出错标识,参照前文我使用的条件判断,可以使脚本文件按需完成设定操作。
### 结语
要完成复杂的功能,或许我们觉得应该使用诸如 Python, C, 或 Java 这类的高级编程语言,然而并不尽然,脚本编程语言也很强大,可以完成类似任务。要充分发挥脚本的作用,有很多需要学习的,希望这里的几个例子能让你意识到脚本编程的功能。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/improve-bash-scripts
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[fisherue](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl "工作者图片"

View File

@ -0,0 +1,173 @@
[#]: collector: (lujun9972)
[#]: translator: (unigeorge)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A beginners guide to SSH for remote connection on Linux)
[#]: via: (https://opensource.com/article/20/9/ssh)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Linux 远程连接之 SSH 新手指南
======
学会使用安全外壳协议连接远程计算机。
![woman on laptop sitting at the window][1]
使用 Linux你只需要在键盘上输入命令就可以巧妙地使用计算机甚至这台计算机可以在世界上任何地方这正是 Linux 最吸引人的特性之一。有了 OpenSSH[POSIX][2] 用户就可以在有权限连接的计算机上打开安全外壳协议,然后远程使用。这对于许多 Linux 用户来说可能不过是日常任务,但从没操作过的人可能就会感到很困惑。本文介绍了如何为 <ruby>安全外壳协议<rt>secure shell</rt></ruby>(简称 SSH连接配置两台计算机以及如何在没有密码的情况下安全地从一台计算机连接到另一台计算机。
### 相关术语
在讨论多台计算机时如何将不同计算机彼此区分开可能会让人头疼。IT 社区拥有完善的术语来描述计算机联网的过程。
* **<ruby>服务<rt>service</rt></ruby>**
服务是指在后台运行的软件因此它不会局限于仅供安装它的计算机使用。例如Web 服务器通常托管着 Web 共享 _服务_。该术语暗含(但非绝对)它是没有图形界面的软件。
* **<ruby>主机<rt>host</rt></ruby>**
主机可以是任何计算机。在 IT 中,任何计算机都可以称为 _主机_,因为从技术上讲,任何计算机都可以托管对其他计算机有用的应用程序。你可能不会把自己的笔记本电脑视为 `主机`,但其实上面可能正运行着一些对你、你的手机或其他计算机有用的服务。
* **<ruby>本地<rt>local</rt></ruby>**
本地计算机是指用户或某些特定软件正在使用的计算机。例如,每台计算机都会把自己称为 `localhost`
* **<ruby>远程<rt>remote</rt></ruby>**
远程计算机是指你既没在其面前,也没有在实际使用的计算机,是真正意义上在 _远程_ 位置的计算机。
现在术语已经明确好,我们可以开始了。
### 在每台主机上激活 SSH
要通过 SSH 连接两台计算机,每个主机都必须安装 SSH。SSH 有两个组成部分:本地计算机上使用的用于启动连接的命令,以及用于接收连接请求的 _服务器_。有些计算机可能已经安装好了 SSH 的一个或两个部分。验证 SSH 是否完全安装的命令因系统而异,因此最简单的验证方法是查阅相关配置文件:
```
$ file /etc/ssh/ssh_config
/etc/ssh/ssh_config: ASCII text
```
如果返回 `No such file or directory` 错误,说明没有安装 SSH 命令。
SSH 服务的检测与此类似(注意文件名中的 `d`
```
$ file /etc/ssh/sshd_config
/etc/ssh/sshd_config: ASCII text
```
根据缺失情况选择安装两个组件:
```
$ sudo dnf install openssh-clients openssh-server
```
在远程计算机上,使用 systemd 命令启用 SSH 服务:
```
$ sudo systemctl enable --now sshd
```
你也可以在 GNOME 上的 **系统设置** 或 macOS 上的 **系统首选项** 中启用 SSH 服务。在 GNOME 桌面上,该设置位于 **Sharing** 面板中:
![在 GNOME 系统设置中激活 SSH][3]
(Seth Kenlon, [CC BY-SA 4.0][4])
### 开启安全外壳协议
现在你已经在远程计算机上安装并启用了 SSH可以尝试使用密码登录作为测试。要访问远程计算机你需要有用户帐户和密码。
远程用户不必与本地用户相同。只要拥有相应用户的密码,你就可以在远程机器上以任何用户的身份登录。例如,我在我的工作计算机上的用户是 `sethkenlon` ,但在我的个人计算机上是 `seth`。如果我正在使用我的个人计算机(即作为当前的本地计算机),并且想通过 SSH 连接到我的工作计算机,我可以通过将自己标识为 `sethkenlon` 并使用我的工作密码来实现连接。
要通过 SSH 连接到远程计算机,你必须知道其 <ruby>因特网协议<rt>internet protocol</rt></ruby> (简称IP) 地址或可解析的主机名。在远程计算机上使用 `ip` 命令可以查看该机器的 IP 地址:
```
$ ip addr show | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.1.1.5/27 brd 10.1.1.31 [...]
```
如果远程计算机没有 `ip` 命令,可以尝试使用 `ifconfig` 命令(甚至可以试试 Windows 上通用的 `ipconfig` 命令)。
127.0.0.1 是一个特殊的地址,它实际上是 `localhost` 的地址。这是一个 `环回` 地址,系统使用它来找到自己。这在登录远程计算机时并没有什么用,因此在此示例中,远程计算机的正确 IP 地址为 10.1.1.5。在现实生活中,我的本地网络正在使用 10.1.1.0 子网,进而可得知前述正确的 IP 地址。如果远程计算机在不同的网络上,那么 IP 地址几乎可能是任何地址(但绝不会是 127.0.0.1),并且可能需要一些特殊的路由才能通过各种防火墙到达远程。如果你的远程计算机在同一个网络上,但想要访问比自己的网络更远的计算机,请阅读我之前写的关于 [在防火墙中打开端口][5] 的文章。
如果你能通过 IP 地址 _或_ 主机名 ping 到远程机器,并且拥有登录帐户,那么就可以通过 SSH 接入远程机器:
```
$ ping -c1 10.1.1.5
PING 10.1.1.5 (10.1.1.5) 56(84) bytes of data.
64 bytes from 10.1.1.5: icmp_seq=1 ttl=64 time=4.66 ms
$ ping -c1 akiton.local
PING 10.1.1.5 (10.1.1.5) 56(84) bytes of data.
```
至此就成功了一小步。再试试使用 SSH 登录:
```
$ whoami
seth
$ ssh sethkenlon@10.1.1.5
bash$ whoami
sethkenlon
```
测试登录有效,下一节会介绍如何激活无密码登录。
### 创建 SSH 密钥
要在没有密码的情况下安全地登录到另一台计算机,登录者必须拥有 SSH 密钥。可能你的机器上已经有一个 SSH 密钥但再多创建一个新密钥也没有什么坏处。SSH 密钥的生命周期是在本地计算机上开始的,它由两部分组成:一个是永远不会与任何人或任何东西共享的私钥,一个是可以复制到任何你想要无密码访问的远程机器上的公钥。
有的人可能会创建一个 SSH 密钥,并将其用于从远程登录到 GitLab 身份验证的所有操作,但我会选择对不同的任务组使用不同的密钥。例如,我在家里使用一个密钥对本地机器进行身份验证,使用另一个密钥对我维护的 Web 服务器进行身份验证,再一个单独的密钥用于 Git 主机,以及又一个用于我托管的 Git 存储库,等等。在此示例中,我将只创建一个唯一密钥,以在局域网内的计算机上使用。
使用 `ssh-keygen` 命令创建新的 SSH 密钥:
```
$ ssh-keygen -t ed25519 -f ~/.ssh/lan
```
`-t` 选项代表 _类型_ ,上述代码设置了一个高于默认值的密钥加密级别。`-f` 选项代表 _文件_,指定了密钥的文件名和位置。运行此命令后会生成一个名为 `lan` 的 SSH 私钥和一个名为 `lan.pub` 的 SSH 公钥。
使用 `ssh-copy-id` 命令把公钥发送到远程机器上,在此之前要先确保具有远程计算机的 SSH 访问权限。如果你无法使用密码登录远程主机,也就无法设置无密码登录:
```
$ ssh-copy-id -i ~/.ssh/lan.pub sethkenlon@10.1.1.5
```
过程中系统会提示你输入远程主机上的登录密码。
操作成功后,使用 `-i` 选项将 SSH 命令指向对应的密钥(在本例中为 `lan`)再次尝试登录:
```
$ ssh -i ~/.ssh/lan sethkenlon@10.1.1.5
bash$ whoami
sethkenlon
```
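如果不想每次登录都带上 `-i` 选项,可以在客户端的 `~/.ssh/config` 里为这台主机起一个别名。下面是一个最小示意,其中 `workstation` 这个别名是随意取的,主机地址和密钥路径沿用上文的例子:
```
$ cat >> ~/.ssh/config <<'EOF'
Host workstation
    HostName 10.1.1.5
    User sethkenlon
    IdentityFile ~/.ssh/lan
EOF
$ ssh workstation
```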
对局域网上的所有计算机重复此过程,你就将能够无密码浏览这个局域网上的每台主机。实际上,一旦你设置了无密码认证,你就可以编辑 `/etc/ssh/sshd_config` 文件来禁止密码认证。这有助于防止其他人使用 SSH 对计算机进行身份验证,除非他们拥有你的私钥。要想达到这个效果,可以在有 `sudo` 权限的文本编辑器中打开 `/etc/ssh/sshd_config` 并搜索字符串 `PasswordAuthentication`,将默认行更改为:
```
PasswordAuthentication no
```
保存并重启 SSH 服务器:
```
$ sudo systemctl restart sshd && echo "OK"
OK
$
```
### 日常使用 SSH
OpenSSH 改变了人们对操作计算机的看法,使用户不再被束缚在面前的计算机上。使用 SSH你可以访问家中的任何计算机或者拥有帐户的服务器甚至是移动和物联网设备。充分利用 SSH 也意味着解锁 Linux 终端的更多用途。如果你还没有习惯使用 SSH请试一下它吧。试着适应 SSH创建一些适当的密钥以此更安全地使用计算机打破必须与计算机面对面的局限性。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/ssh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://opensource.com/sites/default/files/uploads/gnome-activate-remote-login.png (Activate SSH in GNOME System Settings)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/article/20/8/open-ports-your-firewall

View File

@ -0,0 +1,114 @@
[#]: subject: (How to Know if Your System Uses MBR or GPT Partitioning [on Windows and Linux])
[#]: via: (https://itsfoss.com/check-mbr-or-gpt/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (alim0x)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
如何在 Windows 和 Linux 上确定系统使用 MBR 还是 GPT 分区
======
在你安装 Linux 或任何其他系统的时候,了解你的磁盘的正确分区方案是非常关键的。
目前有两种流行的分区方案,老一点的 MBR 和新一些的 GPT。现在大多数的电脑使用 GPT。
在制作 live 镜像或可启动 USB 设备时,一些工具(比如 [Rufus][1])会问你在用的磁盘分区情况。如果你在 MBR 分区的磁盘上选择 GPT 方案的话,制作出来的可启动 USB 设备可能会不起作用。
在这个教程里,我会展示若干方法,来在 Windows 和 Linux 系统上检查磁盘分区方案。
### 在 Windows 上检查系统使用的是 MBR 还是 GPT
尽管在 Windows 上包括命令行在内有不少方法可以检查磁盘分区方案,这里我还是使用图形界面的方式查看。
按下 Windows 键搜索“disk”然后点击“**创建并格式化硬盘分区**”。
![][2]
在这里,**右键点击**你想要检查分区方案的磁盘。在右键菜单里**选择属性**。
![右键点击磁盘并选择属性][3]
在属性窗口,切换到**卷**标签页,寻找**磁盘分区形式**属性。
![在卷标签页寻找磁盘分区形式属性][4]
正如你在上面截图所看到的,磁盘正在使用 GPT 分区方案。对于一些其他系统,它可能显示的是 MBR 或 MSDOS 分区方案。
现在你知道如何在 Windows 下检查磁盘分区方案了。在下一部分,你会学到如何在 Linux 下进行检查。
### 在 Linux 上检查系统使用的是 MBR 还是 GPT
在 Linux 上也有不少方法可以检查磁盘分区方案使用的是 MBR 还是 GPT。既有命令行方法也有图形界面工具。
让我先给你演示一下命令行方法,然后再看看一些图形界面的方法。
#### 在 Linux 使用命令行检查磁盘分区方案
命令行的方法应该在所有 Linux 发行版上都有效。
打开终端并使用 sudo 运行下列命令:
```
sudo parted -l
```
上述命令实际上调用的是一个基于命令行的 [Linux 分区管理器][5]。命令参数 `-l` 会列出系统中的所有磁盘以及它们的详情,里面包含了分区方案信息。
在命令输出中,寻找以 **Partition Table分区表**开头的行:
![][6]
在上面的截图中,磁盘使用的是 GPT 分区方案。如果是 **MBR**,它会显示为 **msdos**
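另一个快捷的命令行方式是用 `lsblk` 查看分区表类型(需要较新的 util-linux 版本支持 `PTTYPE` 列;设备名 `/dev/sda` 只是示例,请换成你自己的磁盘):
```
# GPT 磁盘会显示 gptMBR 磁盘会显示 dos
$ lsblk -o NAME,PTTYPE /dev/sda
```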
你已经学会了命令行的方式。但如果你不习惯使用终端,你还可以使用图形界面工具。
#### 使用 GNOME Disks 工具检查磁盘信息
Ubuntu 和一些其它基于 GNOME 的发行版内置了叫做 Disks 的图形工具,你可以用它管理系统中的磁盘。
你也可以使用它来获取磁盘的分区类型。
![][7]
#### 使用 Gparted 图形工具检查磁盘信息
如果你没办法使用 GNOME Disks 工具,别担心,还有其它工具可以使用。
其中一款流行的工具是 Gparted。你应该可以在大多数 Linux 发行版的软件源中找到它。如果系统中没有安装的话,使用你的发行版的软件中心或[包管理器][9]来[安装 Gparted][8]。
在 Gparted 中,通过菜单选择 **View-Device Information查看—设备信息**。它会在左下区域显示磁盘信息,这些信息中包含分区方案信息。
![][10]
看吧,也不是太复杂,对吗?现在你了解了好几种途径来确认你的系统使用的是 GPT 还是 MBR 分区方案。
同时我还要提一下,有时候磁盘还会有[混合分区方案][11]。这不是很常见,大多数时候分区不是 MBR 就是 GPT。
有任何问题或建议?请在下方留下评论。
--------------------------------------------------------------------------------
via: https://itsfoss.com/check-mbr-or-gpt/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[alim0x](https://github.com/alim0x)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://rufus.ie/en_US/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/disc-management-windows.png?resize=800%2C561&ssl=1
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/gpt-check-windows-1.png?resize=800%2C603&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/gpt-check-windows-2-1.png?resize=800%2C600&ssl=1
[5]: https://itsfoss.com/partition-managers-linux/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-if-mbr-or-gpt-in-Linux.png?resize=800%2C446&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-if-mbr-or-gpt-in-Linux-gui.png?resize=800%2C548&ssl=1
[8]: https://itsfoss.com/gparted/
[9]: https://itsfoss.com/package-manager/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-disk-partitioning-scheme-linux-gparted.jpg?resize=800%2C555&ssl=1
[11]: https://www.rodsbooks.com/gdisk/hybrid.html

View File

@ -1,196 +0,0 @@
[#]: subject: "Monitor your Linux system in your terminal with procps-ng"
[#]: via: "https://opensource.com/article/21/8/linux-procps-ng"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
用 procps-ng 在终端监控你的 Linux 系统
======
本文介绍如何找到一个程序的进程 IDPID。最常见的 Linux 工具是由 procps-ng 包提供的,包括 ps、pstree、pidof 和 pgrep 命令。
![System monitor][1]
在 [POSIX][2] 术语中,进程是一个正在进行的事件,由操作系统的内核管理。当你启动一个应用时就会产生一个进程,尽管还有许多其他的进程在你的计算机后台运行,包括保持系统时间准确的程序、监测新的文件系统、索引文件,等等。
大多数操作系统都有某种类型的系统活动监视器因此你可以了解在任何特定时刻有哪些进程在运行。Linux 有一些供你选择,包括 GNOME 系统监视器和 KSysGuard。这两个软件在桌面上都很有用但 Linux 也提供了在终端监控系统的能力。不管你选择哪一种,对于那些积极管理自己电脑的人来说,检查一个特定的进程是一项常见的任务。
在这篇文章中,我演示了如何找到一个程序的进程 IDPID。最常见的工具是由 [procps-ng][3] 包提供的,包括 `ps`、`pstree`、`pidof` 和 `pgrep` 命令。
### 查找一个正在运行的程序的 PID
有时你想得到一个你知道正在运行的特定程序的进程 IDPID。`pidof` 和 `pgrep` 命令通过命令名称查找进程。
`pidof` 命令返回一个命令的 PID按名称搜索确切的命令
```
$ pidof bash
1776 5736
```
`pgrep` 命令允许使用正则表达式regex
```
$ pgrep .sh
1605
1679
1688
1776
2333
5736
$ pgrep bash
5736
```
### 通过文件查找 PID
你可以用 `fuser` 命令找到使用特定文件的进程的 PID。
```
$ fuser --user ~/example.txt
/home/tux/example.txt: 3234(tux)
```
### 通过 PID 获得进程名称
如果你有一个进程的 PID _编号_,却不知道生成它的命令,你可以用 `ps` 做一个“反向查找”:
```
$ ps 3234
PID TTY STAT TIME COMMAND
5736 pts/1 Ss 0:00 emacs
```
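如果只想得到命令名本身(例如在脚本里使用),可以用 POSIX `ps` 的标准选项 `-o comm=` 只输出这一列:
```
# -p 指定 PID-o comm= 表示只输出命令名这一列(去掉表头)
$ ps -o comm= -p 3234
```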
### 列出所有进程
`ps` 命令列出进程。你可以用 `-e` 选项列出你系统上的每一个进程:
```
$ ps -e | less
PID TTY TIME CMD
1 ? 00:00:03 systemd
2 ? 00:00:00 kthreadd
3 ? 00:00:00 rcu_gp
4 ? 00:00:00 rcu_par_gp
6 ? 00:00:00 kworker/0:0H-events_highpri
[...]
5648 ? 00:00:00 gnome-control-c
5656 ? 00:00:00 gnome-terminal-
5736 pts/1 00:00:00 bash
5791 pts/1 00:00:00 ps
5792 pts/1 00:00:00 less
(END)
```
### 只列出你的进程
`ps -e` 的输出可能会让人不知所措,所以使用 `-U` 来查看一个用户的进程:
```
$ ps -U tux | less
PID TTY TIME CMD
3545 ? 00:00:00 systemd
3548 ? 00:00:00 (sd-pam)
3566 ? 00:00:18 pulseaudio
3570 ? 00:00:00 gnome-keyring-d
3583 ? 00:00:00 dbus-daemon
3589 tty2 00:00:00 gdm-wayland-ses
3592 tty2 00:00:00 gnome-session-b
3613 ? 00:00:00 gvfsd
3618 ? 00:00:00 gvfsd-fuse
3665 tty2 00:01:03 gnome-shell
[...]
```
这样需要查看的进程就少了大约 200 个(也可能是 100 个,取决于你运行的系统)。
你可以用 `pstree` 命令以不同的格式查看同样的输出:
```
$ pstree -U tux -u --show-pids
[...]
├─gvfsd-metadata(3921)─┬─{gvfsd-metadata}(3923)
│ └─{gvfsd-metadata}(3924)
├─ibus-portal(3836)─┬─{ibus-portal}(3840)
│ └─{ibus-portal}(3842)
├─obexd(5214)
├─pulseaudio(3566)─┬─{pulseaudio}(3640)
│ ├─{pulseaudio}(3649)
│ └─{pulseaudio}(5258)
├─tracker-store(4150)─┬─{tracker-store}(4153)
│ ├─{tracker-store}(4154)
│ ├─{tracker-store}(4157)
│ └─{tracker-store}(4178)
└─xdg-permission-(3847)─┬─{xdg-permission-}(3848)
└─{xdg-permission-}(3850)
```
### 列出进程的上下文
你可以用 `-u` 选项查看你拥有的所有进程的额外上下文。
```
$ ps -U tux -u
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
tux 3545 0.0 0.0 89656 9708 ? Ss 13:59 0:00 /usr/lib/systemd/systemd --user
tux 3548 0.0 0.0 171416 5288 ? S 13:59 0:00 (sd-pam)
tux 3566 0.9 0.1 1722212 17352 ? S<sl 13:59 0:29 /usr/bin/pulseaudio [...]
tux 3570 0.0 0.0 664736 8036 ? SLl 13:59 0:00 /usr/bin/gnome-keyring-daemon [...]
[...]
tux 5736 0.0 0.0 235628 6036 pts/1 Ss 14:18 0:00 bash
tux 6227 0.0 0.4 2816872 74512 tty2 Sl+14:30 0:00 /opt/firefox/firefox-bin [...]
tux 6660 0.0 0.0 268524 3996 pts/1 R+ 14:50 0:00 ps -U tux -u
tux 6661 0.0 0.0 219468 2460 pts/1 S+ 14:50 0:00 less
```
### 用 PID 排除故障
如果你在某个特定的程序上有问题,或者你只是好奇某个程序在你的系统上还使用了什么,你可以用 `pmap` 查看运行中的进程的内存图。
```
$ pmap 1776
5736: bash
000055f9060ec000 1056K r-x-- bash
000055f9063f3000 16K r---- bash
000055f906400000 40K rw--- [ anon ]
00007faf0fa67000 9040K r--s- passwd
00007faf1033b000 40K r-x-- libnss_sss.so.2
00007faf10345000 2044K ----- libnss_sss.so.2
00007faf10545000 4K rw--- libnss_sss.so.2
00007faf10546000 212692K r---- locale-archive
00007faf1d4fb000 1776K r-x-- libc-2.28.so
00007faf1d6b7000 2044K ----- libc-2.28.so
00007faf1d8ba000 8K rw--- libc-2.28.so
[...]
```
### 处理进程 ID
**procps-ng** 软件包有你需要的所有命令,以调查和监控你的系统在任何时候的使用情况。无论你是对 Linux 系统中所有分散的部分如何结合在一起感到好奇,还是对一个错误进行调查,或者你想优化你的计算机的性能,学习这些命令都会为你了解你的操作系统提供一个重要的优势。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/linux-procps-ng
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/system-monitor-splash.png?itok=0UqsjuBQ (System monitor)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://gitlab.com/procps-ng

View File

@ -1,169 +0,0 @@
[#]: subject: "Schedule a task with the Linux at command"
[#]: via: "https://opensource.com/article/21/8/linux-at-command"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
用 Linux 的 at 命令来安排一个任务
======
at 命令是一种在特定时间和日期安排一次性任务的终端方法。
![Team checklist][1]
计算机擅长[自动化][2],但不是每个人都知道如何使自动化工作。不过,能够在特定的时间为电脑安排一个任务,然后忘记它,这确实是一种奢侈。也许你有一个文件要在特定的时间上传或下载,或者你需要处理一批还不存在但保证在某个时间存在的文件,或者需要监控的设置,或者你只是需要一个友好的提醒,在下班回家的路上拿起面包和黄油。
这就是 `at` 命令的用处。
### 什么是 Linux at 命令?
`at` 命令是 Linux 终端允许你在特定时间和日期安排一次性工作的方法。它是一种自发的自动化,在终端上很容易实现。
### 安装 at
在 Linux 上,`at` 命令可能已经安装了。你可以使用 `at -V` 命令来验证它是否已经安装。只要返回一个版本,就说明你已经安装了 `at`
```
$ at -V
at version x.y.z
```
如果你试图使用 `at`,但没有找到该命令,大多数现代的 Linux 发行版会提供缺少的 `at` 包。
### 用 at 交互式地安排一个作业
当你使用 `at` 命令和你希望任务运行的时间时,你会打开一个交互式 `at` 提示。你可以输入你想在你指定的时间运行的命令。
如果有帮助的话,你可以把这个过程看作是一个日历应用,就像你可能在你的手机上使用的那样。首先,你在某一天的某个时间创建一个事件,然后指定你想要发生什么。
例如,尝试通过创建一个未来几分钟的任务来计划给自己的备忘录。让任务变得简单,以减少失败的可能性。要退出 `at` 提示,请按键盘上的 **Ctrl+D**
```
$ at 11:20 AM
warning: commands will be executed using /bin/sh
at> echo "hello world" > ~/at-test.txt
at> <EOT>
job 3 at Mon Jul 26 11:20:00 2021
```
正如你所看到的,`at` 使用直观和自然的时间定义。你不需要知道 24 小时制的时钟,也不需要把时间翻译成 UTC 或特定的 ISO 格式。一般来说,你可以使用你自然想到的任何符号,如 _noon_、_1:30 PM_、_13:37_ 等等,来描述你希望一个任务发生的时间。
等待几分钟,然后在你创建的文件上运行 `cat` 或者 `tac` 命令,验证你的任务是否已经运行:
```
$ cat ~/at-test.txt
hello world
```
### 用 at 安排一个任务
你不必使用 `at` 交互式提示符来安排任务。你可以使用 `echo``printf` 向它传送命令。在这个例子中,我使用了 _now_ 符号,以及我希望任务从现在开始延迟多少分钟:
```
$ echo "echo 'hello again' >> ~/at-test.txt" | at now +1 minute
```
一分钟后,验证新的命令是否已被执行:
```
$ cat ~/at-test.txt
hello world
hello again
```
### 时间表达式
`at` 命令在解释时间时是非常宽容的。你可以在许多格式中选择,这取决于哪一种对你来说最方便:
* `YYMMDDhhmm`[.ss]
(缩写的年、月、日、小时、分钟,也可选择秒)
* `CCYYMMDDhhmm`[.ss]
(完整的年、月、日、时、分,也可选择的秒)
* `now`
* `midnight`
* `noon`
* `teatime`(下午 4 点)
* `AM`
* `PM`
时间和日期可以是绝对的,也可以加一个加号(`+`)使其与 _now_ 相对。当指定相对时间时,你可以使用你可能已经使用的词语:
* `minutes`
* `hours`
* `days`
* `weeks`
* `months`
* `years`
### 时间和日期语法
`at` 命令对日期的输入相比时间不那么宽容。时间必须放在第一位,接着是日期;日期默认为当前日期,并且只有在为未来某天安排任务时才需要指定。
这些是一些有效表达式的例子:
```
$ echo "rsync -av /home/tux me@myserver:/home/tux/" | at 3:30 AM tomorrow
$ echo "/opt/batch.sh ~/Pictures" | at 3:30 AM 08/01/2022
$ echo "echo hello" | at now + 3 days
```
### 查看你的 at 队列
当你接受了 `at`,并且正在安排任务,而不是在桌子上的废纸上乱写乱画,你可能想查看一下你是否有任务还在队列中。
要查看你的 `at` 队列,使用 `atq` 命令:
```
$ atq
10 Thu Jul 29 12:19:00 2021 a tux
9 Tue Jul 27 03:30:00 2021 a tux
7 Tue Jul 27 00:00:00 2021 a tux
```
要从队列中删除一个任务,使用 `atrm` 命令和任务号。例如,要删除任务 7
```
$ atrm 7
$ atq
10 Thu Jul 29 12:19:00 2021 a tux
9 Tue Jul 27 03:30:00 2021 a tux
```
要看一个计划中任务的实际内容,你需要查看 `at` 的 spool。只有 root 用户可以查看 spool所以你必须使用 `sudo` 来查看它或 `cat` 任何任务的内容。
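spool 的具体路径因发行版而异(常见的有 `/var/spool/at` 或 `/var/spool/cron/atjobs`),下面的示意以前者为例:
```
# 列出当前排队任务对应的 spool 文件(路径可能需要按你的发行版调整)
$ sudo ls /var/spool/at
```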
### 用 Linux at 安排任务
`at` 系统是一个很好的方法,可以避免忘记在一天中晚些时候运行一个作业,或者在你离开时让计算机替你运行作业。与 `cron` 不同,它不要求任务从现在起按计划一直运行到永远,因此它的语法也比 `cron` 简单得多。
等下次你有一个希望你的计算机记住并管理它的小任务,试试 `at` 命令。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/linux-at-command
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://opensource.com/article/20/11/orchestration-vs-automation

View File

@ -1,71 +0,0 @@
[#]: subject: "4 alternatives to cron in Linux"
[#]: via: "https://opensource.com/article/21/7/alternatives-cron-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux 中 cron 命令的 4 种替代方案
======
在 Linux 系统中有一些其他开源项目可以结合或者替代 cron 命令使用。
![Alarm clocks with different time][1]
[Linux `cron` 系统][2] 是一项经过时间检验的成熟技术,然而在任何情况下它都是最合适的系统自动化工具吗?答案是否定的。有一些开源项目就可以用来与 `cron` 结合或者直接代替 `cron` 使用。
### at 命令
`cron` 适用于长期重复任务。如果你设置了一个工作任务,它会从现在开始定期运行,直到计算机报废为止。但有些情况下你可能只想设置一个一次性命令,以备不在计算机旁时该命令可以自动运行。这时你可以选择使用 `at` 命令。
`at` 的语法比 `cron` 语法简单和灵活得多,并且兼具交互式和非交互式调度方法。(只要你想,你甚至可以使用 `at` 作业创建一个 `at` 作业。)
```
$ echo "rsync -av /home/tux/ me@myserver:/home/tux/" | at 1:30 AM
```
该命令语法自然且易用,并且不需要用户清理旧作业,因为它们一旦运行后就完全被计算机遗忘了。
阅读有关 [at 命令][3] 的更多信息并开始使用吧。
### systemd 命令
除了管理计算机上的进程外,`systemd` 还可以帮你调度这些进程。与传统的 `cron` 作业一样,`systemd` 计时器可以在指定的时间间隔触发事件,例如 shell 脚本和命令。时间间隔可以是每月特定日期的一天一次(例如在星期一的时候触发),或者在 09:00 到 17:00 的工作时间内每 15 分钟一次。
此外 `systemd` 里的计时器还可以做一些 `cron` 作业不能做的事情。
例如,计时器可以在某个事件发生 _之后_ 间隔特定时长再触发脚本或程序,这个事件可以是开机,可以是前置任务的完成,甚至可以是计时器本身调用的服务单元的完成!
如果你的系统运行着 `systemd` 服务,那么你的机器就已经在技术层面上使用 `systemd` 计时器了。默认计时器会执行一些琐碎的任务,例如滚动日志文件、更新 mlocate 数据库、管理 DNF 数据库等。创建自己的计时器很容易,具体可以参阅 David Both 的文章 [使用 systemd 计时器来代替 cron][4]。
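除了按上面引用的文章手写计时器单元,你也可以用 `systemd-run` 创建一次性的临时计时器来快速体验这一机制(下面示例中的文件路径是随意取的):
```
# 创建一个临时计时器30 秒后运行一次指定命令,无需手写 unit 文件
$ systemd-run --user --on-active=30s touch /tmp/timer-test
# 查看这个临时计时器
$ systemctl --user list-timers
```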
### anacron 命令
`cron` 专门用于在特定时间运行命令,这适用于从不休眠或断电的服务器。然而对笔记本电脑和台式工作站而言,时常有意或无意地关机是很常见的。当计算机处于关机状态时,`cron` 不会运行,因此设定在这段时间内的一些重要工作(例如备份数据)也就会跳过执行。
`anacron` 系统旨在确保作业定期运行,而不是按计划时间点运行。这就意味着你可以将计算机关机几天,再次启动时仍然靠 `anacron` 来运行基本任务。`anacron` 与 `cron` 协同工作,因此严格来说前者不是后者的替代品,而是一种调度任务的有效可选方案。许多系统管理员配置了一个 `cron` 作业来在深夜备份远程工作者计算机上的数据,结果却发现该作业在过去六个月中只运行过一次。`anacron` 确保重要的工作在 _可执行的时候_ 发生,而不是必须在安排好的 _特定时间点_ 发生。
点击参阅关于 [使用 anacron 获得更好的 crontab 效果][5] 的更多内容。
### 自动化
计算机和技术旨在让人们的生活更美好工作更轻松。Linux 为用户提供了许多有用的功能以确保完成重要的操作系统任务。查看这些可用的功能然后试着将这些功能用于你自己的工作任务吧。LCTT译注作者本段有些语焉不详读者可参阅譬如 [Ansible 自动化工具安装、配置和快速入门指南](https://linux.cn/article-13142-1.html) 等关于 Linux 自动化的文章)
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/alternatives-cron-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://opensource.com/article/21/7/cron-linux
[3]: https://opensource.com/article/21/7/intro-command
[4]: https://opensource.com/article/20/7/systemd-timers
[5]: https://opensource.com/article/21/2/linux-automation

View File

@ -0,0 +1,149 @@
[#]: subject: "Build a JAR file with fastjar and gjar"
[#]: via: "https://opensource.com/article/21/8/fastjar"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
用 fastjar 和 gjar 构建一个 JAR 文件
======
fastjar、gjar 和 jar 等工具可以帮助你手动或以编程方式构建 JAR 文件,而 Maven 和 Gradle 等其他工具链还提供了依赖管理的功能。
![Someone wearing a hardhat and carrying code ][1]
JAR 文件使用户很容易下载和启动他们想尝试的应用,很容易将该应用从一台计算机转移到另一台计算机(而且 Java 是跨平台的,所以可以鼓励自由分享),而且对于新的程序员来说,很容易理解 JAR 文件的内容,以找出使 Java 应用运行的原因。
创建 JAR 文件的方法有很多,包括 Maven 和 Gradle 等工具链解决方案,以及 IDE 中的一键构建功能。然而,也有一些独立的命令,如 `fastjar`、`gjar` 和普通的 `jar`,它们对于快速和简单的构建是很有用的,并且可以演示 JAR 文件运行所需要的东西。
### 安装
在 Linux 上,你可能已经有了 `fastjar`、`gjar` 或 `jar` 命令,作为 OpenJDK 包或 GCJGCC-Java的一部分。你可以通过输入不带参数的命令来测试这些命令是否已经安装
```
$ fastjar
Try 'fastjar --help' for more information.
$ gjar
jar: must specify one of -t, -c, -u, -x, or -i
jar: Try 'jar --help' for more information
$ jar
Usage: jar [OPTION...] [ [--release VERSION] [-C dir] files] ...
Try `jar --help' for more information.
```
我安装了所有这些命令,但你只需要一个。所有这些命令都能够构建一个 JAR。
在 Fedora 等现代 Linux 系统上,输入一个缺失的命令会使你的操作系统提示安装。
另外,你可以直接从 [AdoptOpenJDK.net][3] 为 Linux、MacOS 和 Windows [安装 Java][2]。
### 构建 JAR
首先,你需要构建一个 Java 应用。
为了简单起见,在一个名为 hello.java 的文件中创建一个基本的 “hello world” 应用:
```
class Main {
    public static void main(String[] args) {
        System.out.println("Hello Java World");
    }
}
```
这是一个简单的应用,在某种程度上淡化了管理外部依赖关系在现实世界中的重要性。不过,这也足以让你开始了解创建 JAR 所需的基本概念了。
接下来,创建一个清单文件。清单文件描述了 JAR 的 Java 环境。在这种情况下,最重要的信息是识别主类,这样执行 JAR 的 Java 运行时就知道在哪里可以找到应用的入口点。
```
$ mkdir META-INF
$ echo "Main-Class: Main" > META-INF/MANIFEST.MF
```
### 编译 Java 字节码
接下来,把你的 Java 文件编译成 Java 字节码。
```
$ javac hello.java
```
另外,你也可以使用 GCC 的 Java 组件来编译:
```
$ gcj -C hello.java
```
无论哪种方式,都会产生文件 `Main.class`
```
$ file Main.class
Main.class: compiled Java class data, version XX.Y
```
### 创建 JAR
你有了所有需要的组件,这样你就可以创建 JAR 文件了。
我经常把 Java 源码也一起打包,供好奇的用户参考,但 _所有_ 必需的内容只是 `META-INF` 目录和类文件。
`fastjar` 命令使用类似于 [`tar` 命令][6]的语法。
```
$ fastjar cvf hello.jar META-INF Main.class
```
另外,你也可以用 `gjar`,方法大致相同,只是 `gjar` 需要你明确指定清单文件:
```
$ gjar cvf world.jar Main.class -m META-INF/MANIFEST.MF
```
或者你可以使用 `jar` 命令。注意这个命令不需要 Manifest 文件,因为它会自动为你生成一个,但为了安全起见,我明确定义了主类:
```
$ jar --create --file hello.jar --main-class=Main Main.class
```
测试你的应用:
```
$ java -jar hello.jar
Hello Java World
```
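打包完成后,你还可以列出 JAR 的内容,确认清单文件和类文件都已包含在内(`t` 表示列出、`f` 指定文件,与 `tar` 的习惯一致;具体条目顺序可能略有不同):
```
$ jar tf hello.jar
META-INF/
META-INF/MANIFEST.MF
Main.class
```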
### 轻松打包
`fastjar`、`gjar` 和 `jar` 这样的工具可以帮助你手动或以编程方式构建 JAR 文件,而其他工具链如 Maven 和 Gradle 则提供了依赖性管理的功能。一个好的 IDE 可能会集成这些功能中的一个或多个。
无论你使用什么解决方案Java 都为分发你的应用代码提供了一个简单而统一的目标。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/fastjar
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag (Someone wearing a hardhat and carrying code )
[2]: https://opensource.com/article/19/11/install-java-linux
[3]: https://adoptopenjdk.net/
[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[6]: https://opensource.com/article/17/7/how-unzip-targz-file

View File

@ -0,0 +1,153 @@
[#]: subject: "Check free disk space in Linux with ncdu"
[#]: via: "https://opensource.com/article/21/8/ncdu-check-free-disk-space-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
用 ncdu 检查 Linux 中的可用磁盘空间
======
用 ncdu Linux 命令获得关于磁盘使用的交互式报告。
![Check disk usage][1]
计算机用户多年来往往积累了大量的数据,无论是重要的个人项目、数码照片、视频、音乐还是代码库。虽然现在的硬盘往往相当大,但有时你必须退一步,评估一下你在硬盘上实际存储了什么。经典的 Linux 命令 [`df`][2] 和 [`du`][3] 是快速了解硬盘上的内容的方法,它们提供了一个可靠的报告,易于解析和处理。这对脚本和处理来说是很好的,但人的大脑对数百行的原始数据并不总是反应良好。认识到这一点,`ncdu` 命令旨在提供一份关于你在硬盘上使用的空间的交互式报告。
### 在 Linux 上安装 ncdu
在 Linux 上,你可以从你的软件仓库安装 `ncdu`。例如,在 Fedora 或 CentOS 上:
```
$ sudo dnf install ncdu
```
在 BSD 上,你可以使用 [pkgsrc][4]。
在 macOS 上,你可以从 [MacPorts][5] 或 [HomeBrew][6] 安装。
另外,你也可以[从源码编译 ncdu][7]。
### 使用 ncdu
ncdu 界面使用 ncurses 库,它将你的终端窗口变成一个基本的图形应用,所以你可以使用方向键来浏览菜单。
![ncdu interface][8]
CC BY-SA Seth Kenlon
这是 `ncdu` 的主要吸引力之一,也是它与最初的 `du` 命令不同的地方。
要获得一个目录的完整列表,启动 `ncdu`。它默认为当前目录。
```
$ ncdu
ncdu 1.16 ~ Use the arrow keys to navigate, press ? for help
\--- /home/tux -----------------------------------------------
22.1 GiB [##################] /.var
19.0 GiB [############### ] /Iso
10.0 GiB [######## ] /.local
7.9 GiB [###### ] /.cache
3.8 GiB [### ] /Downloads
3.6 GiB [## ] /.mail
2.9 GiB [## ] /Code
2.8 GiB [## ] /Documents
2.3 GiB [# ] /Videos
[...]
```
这个列表首先显示了最大的目录(在这个例子中是 `~/.var` 目录,里面装满了很多 flatpak。
使用键盘上的方向键,你可以浏览列表,深入到一个目录,这样你就可以更好地了解什么东西占用了最大的空间。
### 获取一个特定目录的大小
你可以在启动 `ncdu` 时提供任意一个文件夹的路径:
```
$ ncdu ~/chromiumos
```
### 排除目录
默认情况下,`ncdu` 包括一切可以包括的东西,包括符号链接和伪文件系统,如 procfs 和 sysfs。你可以用 `--exclude-kernfs` 来排除这些。
你可以使用 `--exclude` 选项排除任意文件和目录,并在后面加上一个匹配模式:
```
$ ncdu --exclude ".var"
19.0 GiB [##################] /Iso
10.0 GiB [######### ] /.local
7.9 GiB [####### ] /.cache
3.8 GiB [### ] /Downloads
[...]
```
另外,你可以在文件中列出要排除的文件和目录,并使用 `--exclude-from` 选项来引用该文件:
```
$ ncdu --exclude-from myexcludes.txt /home/tux
10.0 GiB [######### ] /.local
7.9 GiB [####### ] /.cache
3.8 GiB [### ] /Downloads
[...]
```
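`ncdu` 还支持把扫描结果导出到文件、之后再离线浏览,这在服务器上很实用:可以只扫描一次,然后随时查看,甚至把结果拷回本地查看(示例中的文件名是随意取的):
```
# 扫描并把结果导出到文件
$ ncdu -o scan.out /home/tux
# 之后随时从文件中读取并浏览,无需重新扫描
$ ncdu -f scan.out
```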
### 颜色方案
你可以用 `--color dark` 选项给 ncdu 添加一些颜色。
![ncdu color scheme][9]
CC BY-SA Seth Kenlon
### 包括符号链接
`ncdu` 输出按字面意思处理符号链接,这意味着一个指向 9GB 文件的符号链接只占用 40 个字节。
```
$ ncdu ~/Iso
9.3 GiB [##################] CentOS-Stream-8-x86_64-20210427-dvd1.iso
@ 0.0 B [ ] fake.iso
```
你可以用 `--follow-symlinks` 选项强制 ncdu 跟踪符号链接:
```
$ ncdu --follow-symlinks ~/Iso
9.3 GiB [##################] fake.iso
9.3 GiB [##################] CentOS-Stream-8-x86_64-20210427-dvd1.iso
```
### 磁盘使用率
磁盘空间耗尽可不是什么有趣的事,所以监控你的磁盘使用情况很重要。`ncdu` 命令让这件事变得简单而且可交互。下次当你对电脑上存储的东西感到好奇,或者只是想以一种新的方式探索你的文件系统时,不妨试试 `ncdu`。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/ncdu-check-free-disk-space-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/du-splash.png?itok=nRLlI-5A (Check disk usage)
[2]: https://opensource.com/article/21/7/check-disk-space-linux-df
[3]: https://opensource.com/article/21/7/check-disk-space-linux-du
[4]: https://opensource.com/article/19/11/pkgsrc-netbsd-linux
[5]: https://opensource.com/article/20/11/macports
[6]: https://opensource.com/article/20/6/homebrew-mac
[7]: https://dev.yorhel.nl/ncdu
[8]: https://opensource.com/sites/default/files/ncdu.jpg (ncdu interface)
[9]: https://opensource.com/sites/default/files/ncdu-dark.jpg (ncdu color scheme)

View File

@ -0,0 +1,131 @@
[#]: subject: "How to Monitor Log Files in Real Time in Linux [Desktop and Server]"
[#]: via: "https://www.debugpoint.com/2021/08/monitor-log-files-real-time/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
如何在 Linux 中实时监控日志文件(桌面和服务器)
======
本教程解释了如何实时监控 Linux 日志文件(桌面、服务器或应用),以进行诊断和故障排除。
当你在你的 Linux 桌面、服务器或任何应用中遇到问题时,你会首先查看各自的日志文件。日志文件通常是来自应用的文本和信息流,上面有一个时间戳。它可以帮助你缩小具体的实例,并帮助你找到任何问题的原因。它也可以帮助从网络上获得援助。
一般来说,所有的日志文件都位于 `/var/log` 中。这个目录既包含特定应用、服务的以 .log 为扩展名的日志文件,也包含存放各自日志文件的子目录。
![log files in var-log][1]
所以,如果你想监控一批日志文件或某个特定的日志文件,这里有一些可行的方法。
### 实时监控 Linux 日志文件
#### 使用 tail 命令
使用 tail 命令是实时跟踪日志文件的最基本方法,特别是当你所在的服务器只有终端而没有 GUI 时,这种方法非常有用。
比如:
```
tail /path/to/log/file
```
![Monitoring multiple log files via tail][2]
使用 `-f` 开关可以实时跟踪日志文件的更新。例如,如果你想跟踪 syslog你可以使用以下命令
```
tail -f /var/log/syslog
```
你可以用一个命令监控多个日志文件,使用:
```
tail -f /var/log/syslog /var/log/dmesg
```
如果你想监控 http 或 sftp 或任何服务器,你也可以在这个命令中监控它们各自的日志文件。
记住,上述命令需要管理员权限。
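比如要同时盯着 Web 服务器的访问日志和错误日志,可以像下面这样(日志路径因发行版和服务而异,这里假设是 Ubuntu 上的 Apache
```
$ sudo tail -f /var/log/apache2/access.log /var/log/apache2/error.log
```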
#### 使用 lnav日志文件浏览器
![lnav Running][3]
lnav 是一个很好的工具,你可以用它以彩色编码的信息、更有条理的方式监控日志文件。Linux 系统默认没有安装它,你可以用下面的命令来安装:
```
sudo apt install lnav (Ubuntu)
sudo dnf install lnav (Fedora)
```
lnav 的好处是,如果你不想安装它,你可以直接下载其预编译的可执行文件,然后在任何地方运行。甚至从 U 盘上也可以。它不需要设置,而且有很多功能。使用 lnav你可以通过 SQL 查询日志文件,以及其他很酷的功能,你可以在它的[官方网站][4]上了解。
安装完成后,只需以管理员权限在终端里运行 lnav它默认会显示 /var/log 下的所有日志,并开始实时监控。
#### 关于 systemd 的 journalctl 说明
如今的现代 Linux 发行版大多使用 systemd。systemd 提供了运行 Linux 操作系统的基本框架和组件,并通过 journalctl 提供日志服务,帮助管理所有 systemd 服务的日志。你还可以通过以下命令实时监控各个 systemd 服务和日志:
```
journalctl -f
```
下面是一些特定的 journalctl 命令,可以在一些情况下使用。你可以将这些命令与上面的 -f 开关结合起来,开始实时监控。
* 对于紧急级别的系统信息,使用:
```
journalctl -p 0
```
* 显示带有解释的错误
```
journalctl -xb -p 3
```
* 使用时间控制来过滤输出
```
journalctl --since "2020-12-04 06:00:00"
journalctl --since "2020-12-03" --until "2020-12-05 03:00:00"
journalctl --since yesterday
journalctl --since 09:00 --until "1 hour ago"
```
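你也可以只跟踪某一个 systemd 服务单元的日志(单元名因发行版而异,比如 SSH 服务在 Ubuntu 上是 ssh.service、在 Fedora 上是 sshd.service
```
$ journalctl -u ssh.service -f
```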
如果你想了解更多关于 journalctl 的细节,我已经写了一个[指南][6]。
### 结束语
我希望这些命令和技巧能帮助你找出桌面或服务器问题/错误的根本原因。对于更多的细节,你可以随时参考手册,玩弄各种开关。如果你对这篇文章有什么意见或看法,请在下面的评论栏告诉我。
干杯。
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/monitor-log-files-real-time/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/log-files-in-var-log-1024x312.jpeg
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Monitoring-multiple-log-files-via-tail-1024x444.jpeg
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/lnav-Running-1024x447.jpeg
[4]: https://lnav.org/features
[6]: https://www.debugpoint.com/2020/12/systemd-journalctl/