Merge pull request #12 from LCTT/master

Update from LCTT
This commit is contained in:
perfiffer 2021-08-31 11:30:31 +08:00 committed by GitHub
commit f747ec57c7
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
63 changed files with 5972 additions and 2676 deletions

[#]: collector: "lujun9972"
[#]: translator: "fisherue"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13717-1.html"
[#]: subject: "5 ways to improve your Bash scripts"
[#]: via: "https://opensource.com/article/20/1/improve-bash-scripts"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
改进你的脚本程序的 5 个方法
======
> 巧用 Bash 脚本程序能帮助你完成很多极具挑战的任务。
![](https://img.linux.net.cn/data/attachment/album/202108/25/131347yblk4jg4r6blebmg.jpg)
系统管理员经常写脚本程序,不论长短,这些脚本可以完成某种任务。
你是否曾经查看过某个软件发行方提供的安装用的<ruby>脚本<rt>script</rt></ruby>程序?为了能够适应不同用户的系统配置,顺利完成安装,这些脚本程序经常包含很多函数和逻辑分支。多年来,我积累了一些改进脚本程序的技巧,这里分享几个,希望能对朋友们也有用。下面列出一组短小的脚本示例,给大家做个样本。
### 初步尝试
我尝试写一个脚本程序时,原始程序往往就是一组命令行,通常就是调用标准命令完成诸如更新网页内容之类的工作,这样可以节省时间。其中一个类似的工作是解压文件到 Apache 网站服务器的主目录里,我的最初脚本程序大概是下面这样:
```
cp january_schedule.tar.gz /usr/apache/home/calendar/
cd /usr/apache/home/calendar/
tar zvxf january_schedule.tar.gz
```
这帮我节省了时间,也减少了键入多条命令的操作。时间久了,我掌握了另外的技巧,可以用 Bash 脚本程序完成更难的一些工作,比如说创建软件安装包、安装软件、备份文件系统等。
### 1、条件分支结构
和众多其他编程语言一样,脚本程序的条件分支结构同样是强大的常用技能。条件分支结构赋予了计算机程序逻辑能力,我的很多实例都是基于条件逻辑分支。
基本的条件分支结构就是 `if` 条件分支结构。通过判定是否满足特定条件,可以控制程序选择执行相应的脚本命令段。比如说,想要判断系统是否安装了 Java ,可以通过判断系统有没有一个 Java 库目录;如果找到这个目录,就把这个目录路径添加到可运行程序路径,也就可以调用 Java 库应用了。
```
if [ -d "$JAVA_HOME/bin" ] ; then
    PATH="$JAVA_HOME/bin:$PATH"
fi
```
### 2、限定运行权限
你或许想只允许特定的用户才能执行某个脚本程序。除了 Linux 的权限许可管理,比如对用户和用户组设定权限、通过 SELinux 设定此类的保护权限等,你还可以在脚本里加入逻辑判断来限制执行权限。类似的情况可能是,你需要确保只有网站程序的所有者才能执行相应的网站初始化操作脚本,甚至可以限定只有 root 用户才能执行某个脚本。Linux 提供的几个环境变量可以帮忙实现这一点:其中一个是保存用户名称的变量 `$USER`,另一个是保存用户识别码的变量 `$UID`。在脚本程序里,执行用户的 UID 值就保存在 `$UID` 变量里。
#### 用户名判别
第一个例子里,我在一个带有几个应用服务器实例的多用户环境里指定只有用户 `jboss1` 可以执行脚本程序。条件 `if` 语句主要是判断,“要求执行这个脚本程序的用户不是 `jboss1` 吗?”当此条件为真时,就会调用第一个 `echo` 语句,接着是 `exit 1`,即退出这个脚本程序。
```
if [ "$USER" != 'jboss1' ]; then
     echo "Sorry, this script must be run as JBOSS1!"
     exit 1
fi
echo "continue script"
```
#### 根用户判别
接下来的例子是要求只有根用户才能执行脚本程序。根用户的用户识别码(UID)是 0,设置的条件判断采用大于操作符(`-gt`),所有 UID 值大于 0 的用户都被禁止执行该脚本程序。
```
if [ "$UID" -gt 0 ]; then
     echo "Sorry, this script must be run as ROOT!"
     exit 1
fi
echo "continue script"
```
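假如脚本需要兼容不提供 `$UID` 变量的 POSIX sh,也可以改用 `id -u` 命令获取用户识别码。下面是一个示意性写法(函数名 `require_root` 是我为演示虚构的):

```shell
# 把根用户判别封装成函数,返回非零表示当前用户不是 root
require_root() {
    if [ "$(id -u)" -ne 0 ]; then
        echo "Sorry, this script must be run as ROOT!" >&2
        return 1
    fi
}
```

在脚本开头调用 `require_root || exit 1` 即可达到与上面例子相同的效果。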
### 3、带参数执行程序
可执行程序可以附带参数作为执行选项,命令行脚本程序也是一样,下面给出几个例子。在这之前,我想告诉你,好的程序不仅要能完成我们想要它做的事,还要避免执行我们不想让它做的操作。如果运行脚本时没有提供参数,导致程序缺少足够的信息,我希望脚本不要执行任何破坏性的操作。因而,程序的第一步就是确认命令行是否提供了参数,判定的条件就是参数数量 `$#` 是否为 0。如果是(意味着没有提供参数),就直接终止脚本并退出。
```
if [ $# -eq 0 ]; then
    echo "No arguments provided"
    exit 1
fi
echo "arguments found: $#"
```
#### 多个运行参数
可以传递给脚本程序的参数不止一个。脚本使用内部变量指代这些参数,内部变量名用正整数递增标识,也就是 `$1`、`$2`、`$3` 等等。我只是扩展前面的程序,在下面一行输出显示用户提供的前三个参数。显然,要对每个参数都做出对应的响应需要更多的逻辑判断,这里的例子只是简单展示参数的使用。
```
echo $1 $2 $3
```
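如果参数个数不定,可以用 `"$@"` 把全部参数交给循环逐个处理,下面只是一个演示性片段:

```shell
# "$@" 展开为全部位置参数,并保留每个参数内部的空格
for arg in "$@"; do
    echo "processing: $arg"
done
```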
我们在讨论这些参数变量名时,你或许有个疑问:“参数变量名怎么跳过了 `$0`,而直接从 `$1` 开始?”
是的,是这样,这是有原因的。变量名 `$0` 确实存在,也非常有用,它储存的是被执行的脚本程序的名称。
```
echo $0
```
程序执行过程中有一个变量名指代程序名称,很重要的一个原因是,可以在生成的日志文件名称里包含程序名称,最简单的方式应该是调用一个 `echo` 语句。
```
echo test >> $0.log
```
当然,你或许要增加一些代码,确保这个日志文件存放在你希望的路径,日志名称包含你认为有用的信息。
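比如说,可以用 `basename` 去掉 `$0` 里的路径部分,再在文件名中加上日期。下面是一个示意(日志目录 `/tmp` 只是随手选的,请按需替换):

```shell
# 日志文件名由脚本名和当天日期组成,便于按日拆分日志
logfile="/tmp/$(basename "$0")-$(date +%Y%m%d).log"
echo "test" >> "$logfile"
```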
### 4、交互输入
脚本程序的另一个好用的特性是可以在执行过程中接受输入,最简单的情况是让用户可以输入一些信息。
```
echo "enter a word please:"
read word
echo $word
```
这样也可以让用户在程序执行中作出选择。
```
read -p "Install Software ?? [Y/n]: " answ
if [ "$answ" == 'n' ]; then
  exit 1
fi
echo "Installation starting..."
```
### 5、出错退出执行
几年前,我写了个脚本,想在自己的电脑上安装最新版本的 Java 开发工具包(JDK)。这个脚本把 JDK 文件解压到指定目录,创建、更新一些符号链接,再做一下设置告诉系统使用这个最新的版本。如果解压过程出现错误,再执行后面的操作就会破坏整个系统上的 Java 环境,使其不能使用。因而,这种情况下需要终止程序:如果解压过程没有成功,就不应该再继续进行之后的更新操作。下面的语句段可以完成这个功能。
```
tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500; ec=$?
if [ $ec -ne 0 ]; then
     echo "Installation failed - exiting."
     exit 1
fi
```
下面的单行语句可以给你快速展示一下变量 `$?` 的用法。
```
ls T; ec=$?; echo $ec
```
先用 `touch T` 命令创建一个文件名为 `T` 的文件,然后执行这个单行命令,变量 `ec` 的值会是 0。然后,用 `rm T` 命令删除文件,再执行该单行命令,变量 `ec` 的值会是 2,因为文件 `T` 不存在,命令 `ls` 找不到指定文件报错。
在逻辑条件里利用这个出错标识,参照前文我使用的条件判断,可以使脚本文件按需完成设定操作。
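其实 `if` 也可以直接测试一条命令的退出码,不必先把 `$?` 保存到变量里,两种写法效果相同(以下沿用上文的文件 `T` 作演示):

```shell
# if 直接根据 ls 的退出码分支;文件不存在时 ls 返回非零,条件为真
if ! ls T >/dev/null 2>&1; then
    echo "file T not found"
fi
```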
### 结语
要完成复杂的功能,或许我们觉得应该使用诸如 Python、C 或 Java 这类的高级编程语言,然而并不尽然,脚本编程语言也很强大,可以完成类似任务。要充分发挥脚本的作用,有很多需要学习的,希望这里的几个例子能让你意识到脚本编程的强大。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/improve-bash-scripts
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[fisherue](https://github.com/fisherue)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl "工作者图片"

[#]: collector: (lujun9972)
[#]: translator: (YungeG)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13720-1.html)
[#]: subject: (Understanding systemd at startup on Linux)
[#]: via: (https://opensource.com/article/20/5/systemd-startup)
[#]: author: (David Both https://opensource.com/users/dboth)
理解 systemd 启动时在做什么
======
> systemd 启动过程提供的重要线索可以在问题出现时助你一臂之力。
![](https://img.linux.net.cn/data/attachment/album/202108/26/110220piwnicwxvvc1s8io.jpg)
在本系列的第一篇文章《[学着爱上 systemd][2]》,我考察了 systemd 的功能和架构,以及围绕 systemd 作为古老的 SystemV 初始化程序和启动脚本的替代品的争论。在这第二篇文章中,我将开始探索管理 Linux 启动序列的文件和工具。我会解释 systemd 启动序列、如何更改默认的启动目标(即 SystemV 术语中的运行级别)、以及在不重启的情况下如何手动切换到不同的目标
我还将考察两个重要的 systemd 工具。第一个 `systemctl` 命令是和 systemd 交互、向其发送命令的基本方式。第二个是 `journalctl`,用于访问 systemd 日志,后者包含了大量系统历史数据,比如内核和服务的消息(包括指示性信息和错误信息)。
务必使用一个非生产系统进行本文和后续文章中的测试和实验。你的测试系统需要安装一个 GUI 桌面(比如 Xfce、LXDE、Gnome、KDE 或其他)。
上一篇文章中我写道计划在这篇文章创建一个 systemd 单元并添加到启动序列。由于这篇文章比我预期中要长,这些内容将留到本系列的下一篇文章。
### 使用 systemd 探索 Linux 的启动
在观察启动序列之前,你需要做几件事情得使引导和启动序列开放可见。正常情况下,大多数发行版使用一个开机动画或者启动画面隐藏 Linux 启动和关机过程中的显示细节,在基于 Red Hat 的发行版中称作 Plymouth 引导画面。这些隐藏的消息能够向寻找信息以排除程序故障、或者只是学习启动序列的系统管理员提供大量有关系统启动和关闭的信息。你可以通过 GRUB<ruby>大统一引导加载器<rt>Grand Unified Boot Loader</rt></ruby>)配置改变这个设置。
主要的 GRUB 配置文件是 `/boot/grub2/grub.cfg` ,但是这个文件在更新内核版本时会被覆盖,你不会想修改它的。相反,应该修改用于改变 `grub.cfg` 默认设置的 `/etc/default/grub` 文件。
首先看一下当前未修改的 `/etc/default/grub` 文件的版本
```
[root@testvm1 ~]# cd /etc/default ; cat grub
GRUB_DISABLE_RECOVERY="true"
[root@testvm1 default]#
```
[GRUB 文档][3] 的第 6 章列出了 `/etc/default/grub` 文件的所有可用项,我只关注下面的部分:
* 我将 GRUB 菜单倒计时的秒数 `GRUB_TIMEOUT`,从 5 改成 10以便在倒计时达到 0 之前有更多的时间响应 GRUB 菜单。
* `GRUB_CMDLINE_LINUX` 列出了引导阶段传递给内核的命令行参数,我删除了其中的最后两个参数。其中的一个参数 `rhgb` 代表 “<ruby>红帽图形化引导<rt>Red Hat Graphical Boot</rt></ruby>”,在内核初始化阶段显示一个小小的 Fedora 图标动画,而不是显示引导阶段的信息。另一个参数 `quiet`,屏蔽显示记录了启动进度和发生错误的消息。系统管理员需要这些信息,因此我删除了 `rhgb``quiet`。如果引导阶段发生了错误,屏幕上显示的信息可以指向故障的原因。
更改之后,你的 GRUB 文件将会像下面一样:
```
GRUB_DISABLE_RECOVERY="false"
[root@testvm1 default]#
```
`grub2-mkconfig` 程序使用 `/etc/default/grub` 文件的内容生成 `grub.cfg` 配置文件,从而改变一些默认的 GRUB 设置。`grub2-mkconfig` 输出到 `STDOUT`,你可以使用程序的 `-o` 参数指明数据流输出的文件,不过使用重定向也同样简单。执行下面的命令更新 `/boot/grub2/grub.cfg` 配置文件:
```
[root@testvm1 grub2]# grub2-mkconfig > /boot/grub2/grub.cfg
done
```
重新启动你的测试系统查看本来会隐藏在 Plymouth 开机动画之下的启动信息。但是如果你没有关闭开机动画,又需要查看启动信息的话又该如何操作?或者你关闭了开机动画,而消息流过的速度太快,无法阅读怎么办?(实际情况如此。)
有两个解决方案,都涉及到日志文件和 systemd 日志 —— 两个都是你的好伙伴。你可以使用 `less` 命令查看 `/var/log/messages` 文件的内容。这个文件包含引导和启动信息,以及操作系统执行正常操作时生成的信息。你也可以使用不加任何参数的 `journalctl` 命令查看 systemd 日志,包含基本相同的信息:
```
[root@testvm1 grub2]# journalctl
-- Logs begin at Sat 2020-01-11 21:48:08 EST, end at Fri 2020-04-03 08:54:30 EDT. --
Jan 11 21:48:08 f31vm.both.org kernel: Linux version 5.3.7-301.fc31.x86_64 (mockbuild@bkernel03.phx2.fedoraproject.org) (gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC)) #1 SMP Mon Oct >
Jan 11 21:48:08 f31vm.both.org kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.3.7-301.fc31.x86_64 root=/dev/mapper/VG01-root ro resume=/dev/mapper/VG01-swap rd.lvm.lv=VG01/root rd>
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-provided physical RAM map:
Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 11 21:48:08 f31vm.both.org kernel: clocksource: kvm-clock: mask: 0xfffffffff
Jan 11 21:48:08 f31vm.both.org kernel: tsc: Detected 2807.992 MHz processor
Jan 11 21:48:08 f31vm.both.org kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 11 21:48:08 f31vm.both.org kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
<snip>
```
由于数据流可能长达几十万甚至几百万行,我在这里截断了它。(我的主要工作站上列出的日志长度是 1,188,482 行。)请确保是在你的测试系统尝试的这个命令。如果系统已经运行了一段时间 —— 即使重启过很多次 —— 还是会显示大量的数据。查看这些日志数据,因为它包含了很多信息,在进行问题判断时可能非常有用。了解这个数据文件在正常的引导和启动过程中的模样,可以帮助你在问题出现时定位问题。
我将在本系列之后的文章讨论 systemd 日志、`journalctl` 命令、以及如何整理输出的日志数据来寻找更详细的信息。
内核被 GRUB 加载到内存后,必须先将自己从压缩后的文件中解压出来,才能执行任何有意义的操作。解压自己后,内核开始运行,加载 systemd 并转交控制权。
<ruby>引导<rt>boot</rt></ruby>阶段到此结束,此时 Linux 内核和 systemd 正在运行,但是无法为用户执行任何生产性任务,因为其他的程序都没有执行,没有命令行解释器提供命令行,没有后台进程管理网络和其他的通信链接,也没有任何东西能够控制计算机执行生产功能。
现在 systemd 可以加载所需的功能性单元以便将系统启动到选择的目标运行状态。
### 目标
一个 systemd <ruby>目标<rt>target</rt></ruby>代表一个 Linux 系统当前的或期望的运行状态。与 SystemV 启动脚本十分类似,目标定义了系统运行必须存在的服务,以及处于目标状态下必须激活的服务。图 1 展示了使用 systemd 的 Linux 系统可能的运行状态目标。就像在本系列的第一篇文章以及 systemd 启动的手册页(`man bootup`)所看到的一样,有一些开启不同必要服务的其他中间目标,包括 `swap.target`、`timers.target`、`local-fs.target` 等。一些目标(像 `basic.target`)作为检查点使用,在移动到下一个更高级的目标之前保证所有需要的服务已经启动并运行。
除非开机时在 GRUB 菜单进行更改systemd 总是启动 `default.target`。`default.target` 文件是指向真实的目标文件的符号链接。对于桌面工作站,`default.target` 通常是 `graphical.target`,等同于 SystemV 的运行等级 5。对于服务器默认目标多半是 `multi-user.target`,就像 SystemV 的运行等级 3。`emergency.target` 文件类似单用户模式。目标和<ruby>服务<rt>service</rt></ruby>都是一种 systemd 单元。
下面的表,包含在本系列的上一篇文章中,比较了 systemd 目标和古老的 SystemV 启动运行等级。为了向后兼容systemd 提供了 systemd 目标别名,允许脚本和系统管理员使用像 `init 3` 一样的 SystemV 命令改变运行等级。当然SystemV 命令被转发给 systemd 进行解释和执行。
| systemd 目标 | SystemV 运行级别 | 目标别名 | 描述 |
|---|---|---|---|
| `default.target` | | | 这个目标通常是一个符号链接,作为 `multi-user.target` 或 `graphical.target` 的别名。systemd 总是用 `default.target` 启动系统。`default.target` 不能作为 `halt.target`、`poweroff.target` 和 `reboot.target` 的别名。 |
| `graphical.target` | 5 | `runlevel5.target` | 带有 GUI 的 `multi-user.target`。 |
| | 4 | `runlevel4.target` | 未使用。运行等级 4 和 SystemV 的运行等级 3 一致,可以创建这个目标并进行定制,用于启动本地服务,而不必更改默认的 `multi-user.target`。 |
| `multi-user.target` | 3 | `runlevel3.target` | 运行所有的服务,但是只有命令行界面(CLI)。 |
| | 2 | `runlevel2.target` | 多用户,没有 NFS,但是运行其他所有的非 GUI 服务。 |
| `rescue.target` | 1 | `runlevel1.target` | 一个基本的系统,包括挂载文件系统,但是只运行最基础的服务,以及一个主控制台上的用于救援的命令行解释器。 |
| `emergency.target` | S | | 单用户模式:没有服务运行,文件系统没有挂载。这是最基础级的操作模式,只有一个运行在主控制台的用于紧急情况的命令行解释器,供用户和系统交互。 |
| `halt.target` | | | 不断电的情况下停止系统。 |
| `reboot.target` | 6 | `runlevel6.target` | 重启。 |
| `poweroff.target` | 0 | `runlevel0.target` | 停止系统并关闭电源。 |
每个目标在配置文件中都描述了一组依赖关系。systemd 启动需要的依赖,即 Linux 主机运行在特定功能级别所需的服务。加载目标配置文件中列出的所有依赖并运行后,系统就运行在那个目标等级。如果愿意,你可以在本系列的第一篇文章《[学着爱上 systemd][2]》中回顾 systemd 的启动序列和运行时目标。
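顺便一提,如果想查看某个目标具体会拉起哪些依赖单元,可以用 `systemctl list-dependencies` 列出它的依赖树(需要在运行 systemd 的系统上执行,下面的命令仅作演示):

```shell
# 列出 multi-user.target 的依赖树;加上 --reverse 可以反过来查看谁依赖它
systemctl list-dependencies multi-user.target
```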
### 探索当前的目标
许多 Linux 发行版默认安装一个 GUI 桌面界面,以便安装的系统可以像工作站一样使用。我总是从 Fedora Live USB 引导驱动器安装 Xfce 或 LXDE 桌面。即使是安装一个服务器或者其他基础类型的主机(比如用于路由器和防火墙的主机),我也使用 GUI 桌面的安装方式。
我可以安装一个没有桌面的服务器(数据中心的典型做法),但是这样不满足我的需求。原因不是我需要 GUI 桌面本身,而是 LXDE 安装包含了许多其他默认的服务器安装没有提供的工具,这意味着初始安装之后我需要做的工作更少。
但是,仅仅因为有 GUI 桌面并不意味着我要使用它。我有一个 16 端口的 KVM可以用于访问我的大部分 Linux 系统的 KVM 接口,但我和它们交互的大部分交互是通过从我的主要工作站建立的远程 SSH 连接。这种方式更安全,而且和 `graphical.target` 相比,运行 `multi-user.target` 使用更少的系统资源。
首先,检查默认目标,确认是 `graphical.target`
```
[root@testvm1 ~]# systemctl get-default
graphical.target
[root@testvm1 ~]#
```
然后确认当前正在运行的目标,应该和默认目标相同。你仍可以使用老方法,输出古老的 SystemV 运行等级。注意,前一个运行等级在左边,这里是 `N`(意思是 None表示主机启动后没有修改过运行等级。数字 5 是当前的目标,正如古老的 SystemV 术语中的定义:
```
[root@testvm1 ~]# runlevel
N 5
[root@testvm1 ~]#
```
注意,`runlevel` 的手册页指出运行等级已经被淘汰,并提供了一个转换表。
你也可以使用 systemd 方式,命令的输出有很多行,但确实用 systemd 术语提供了答案:
```
SUB    = The low-level unit activation state, values depend on unit type.
To show all installed unit files use 'systemctl list-unit-files'.
```
上面列出了当前加载的和激活的目标,你也可以看到 `graphical.target``multi-user.target`。`multi-user.target` 需要在 `graphical.target` 之前加载。这个例子中,`graphical.target` 是激活的。
### 切换到不同的目标
切换到 `multi-user.target` 很简单:
```
[root@testvm1 ~]# systemctl isolate multi-user.target
```
显示器现在应该从 GUI 桌面或登录界面切换到了一个虚拟控制台。登录并列出当前激活的 systemd 单元,确认 `graphical.target` 不再运行:
```
[root@testvm1 ~]# systemctl list-units --type target
```
务必使用 `runlevel` 确认命令输出了之前的和当前的“运行等级”:
```
[root@testvm1 ~]# runlevel
```
### 更改默认目标
现在,将默认目标改为 `multi-user.target`,以便系统总是启动进入 `multi-user.target`,从而使用控制台命令行接口而不是 GUI 桌面接口。使用你的测试主机的根用户,切换到保存 systemd 配置的目录,执行一次快速列出操作:
```
[root@testvm1 ~]# cd /etc/systemd/system/ ; ll
drwxr-xr-x. 2 root root 4096 Oct 30 16:54  multi-user.target.wants
```
为了强调一些有助于解释 systemd 如何管理启动过程的重要事项,我缩短了这个列表。你应该可以在虚拟机看到完整的目录和链接列表。
`default.target` 项是指向目录 `/lib/systemd/system/graphical.target` 的符号链接(软链接),列出那个目录查看目录中的其他内容:
```
[root@testvm1 system]# ll /lib/systemd/system/ | less
```
你应该在这个列表中看到文件、目录、以及更多链接,但是专门寻找一下 `multi-user.target``graphical.target`。现在列出 `default.target`(指向 `/lib/systemd/system/graphical.target` 的链接)的内容:
```
[root@testvm1 system]# cat default.target
AllowIsolate=yes
[root@testvm1 system]#
```
`graphical.target` 文件的这个链接描述了图形用户接口需要的所有必备条件。我会在本系列的下一篇文章至少探讨其中的一些选项。
为了使主机启动到多用户模式,你需要删除已有的链接,创建一个新链接指向正确目标。如果你的 [PWD][5] 不是 `/etc/systemd/system`,切换过去:
```
[root@testvm1 system]# rm -f default.target
[root@testvm1 system]# ln -s /lib/systemd/system/multi-user.target default.target
```
列出 `default.target` 链接,确认其指向了正确的文件:
```
[root@testvm1 system]# ll default.target
lrwxrwxrwx 1 root root 37 Nov 28 16:08 default.target -> /lib/systemd/system/
[root@testvm1 system]#
```
如果你的链接看起来不一样,删除并重试。列出 `default.target` 链接的内容:
```
[root@testvm1 system]# cat default.target
AllowIsolate=yes
[root@testvm1 system]#
```
`default.target`(这里其实是指向 `multi-user.target` 的链接)其中的 `[Unit]` 部分现在有不同的必需条件。这个目标不需要有图形显示管理器。
重启,你的虚拟机应该启动到虚拟控制台 1 的控制台登录,虚拟控制台 1 在显示器标识为 `tty1`。现在你已经知道如何修改默认的目标,使用所需的命令将默认目标改回 `graphical.target`
首先检查当前的默认目标:
```
Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/g
[root@testvm1 ~]#
```
输入下面的命令直接切换到 `graphical.target` 和显示管理器的登录界面,不需要重启:
```
[root@testvm1 system]# systemctl isolate default.target
```
我不清楚为何 systemd 的开发者选择了术语 `isolate` 作为这个子命令。我的研究表明指的可能是运行指明的目标,但是“隔离”并终结其他所有启动该目标不需要的目标。然而,命令执行的效果是从一个运行的目标切换到另一个——在这个例子中,从多用户目标切换到图形目标。上面的命令等同于 SystemV 启动脚本和 `init` 程序中古老的 `init 5` 命令。
登录 GUI 桌面,确认能正常工作。
### 总结
本文探索了 Linux systemd 启动序列,开始探讨两个重要的 systemd 工具 `systemctl` 和 `journalctl`,还说明了如何从一个目标切换到另一个目标,以及如何修改默认目标。
本系列的下一篇文章中将会创建一个新的 systemd 单元,并配置为启动阶段运行。下一篇文章还会查看一些配置选项,可以帮助确定某个特定的单元在序列中启动的位置,比如在网络启动运行后。
关于 systemd 网络上有大量的信息,但大部分都简短生硬、愚钝、甚至令人误解。除了本文提到的资源,下面的网页提供了关于 systemd 启动更详细可靠的信息。
* Fedora 项目有一个优质实用的 [systemd 指南][6],几乎有你使用 systemd 配置、管理、维护一个 Fedora 计算机需要知道的一切。
* Fedora 项目还有一个好用的 [速查表][7],交叉引用了古老的 SystemV 命令和对应的 systemd 命令。
* 要获取 systemd 的详细技术信息和创立的原因,查看 [Freedesktop.org][8] 的 [systemd 描述][9]。
* Linux.com 上“systemd 的更多乐趣”提供了更高级的 systemd [信息和提示][11]。
还有一系列针对系统管理员的深层技术文章,由 systemd 的设计者和主要开发者 Lennart Poettering 所作。这些文章写于 2010 年 4 月到 2011 年 9 月之间,但在当下仍然像当时一样有价值。关于 systemd 及其生态的许多其他优秀的作品都是基于这些文章的。
* [systemd for Administrators, Part X][22]
* [systemd for Administrators, Part XI][23]
Mentor Graphics 公司的一位 Linux 内核和系统工程师 Alison Chiaken,对 systemd 的未来发展进行了展望……
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/systemd-startup
作者:[David Both][a]
选题:[lujun9972][b]
译者:[YungeG](https://github.com/YungeG)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[#]: collector: (lujun9972)
[#]: translator: (unigeorge)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13726-1.html)
[#]: subject: (A beginners guide to SSH for remote connection on Linux)
[#]: via: (https://opensource.com/article/20/9/ssh)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Linux 远程连接之 SSH 新手指南
======
> 学会使用安全外壳协议连接远程计算机。
![](https://img.linux.net.cn/data/attachment/album/202108/28/105409ztj7akfjpcluwjp3.jpg)
使用 Linux你只需要在键盘上输入命令就可以巧妙地使用计算机甚至这台计算机可以在世界上任何地方这正是 Linux 最吸引人的特性之一。有了 OpenSSH[POSIX][2] 用户就可以在有权限连接的计算机上打开安全外壳协议,然后远程使用。这对于许多 Linux 用户来说可能不过是日常任务,但从没操作过的人可能就会感到很困惑。本文介绍了如何配置两台计算机的 <ruby>安全外壳协议<rt>secure shell</rt></ruby>(简称 SSH连接以及如何在没有密码的情况下安全地从一台计算机连接到另一台计算机。
### 相关术语
在讨论多台计算机时如何将不同计算机彼此区分开可能会让人头疼。IT 社区拥有完善的术语来描述计算机联网的过程。
* <ruby>服务<rt>service</rt></ruby>
服务是指在后台运行的软件因此它不会局限于仅供安装它的计算机使用。例如Web 服务器通常托管着 Web 共享 _服务_。该术语暗含(但非绝对)它是没有图形界面的软件。
* <ruby>主机<rt>host</rt></ruby>
主机可以是任何计算机。在 IT 中,任何计算机都可以称为 _主机_,因为从技术上讲,任何计算机都可以<ruby>托管<rt>host</rt></ruby>对其他计算机有用的应用程序。你可能不会把自己的笔记本电脑视为 **主机**,但其实上面可能正运行着一些对你、你的手机或其他计算机有用的服务。
* <ruby>本地<rt>local</rt></ruby>
本地计算机是指用户或某些特定软件正在使用的计算机。例如,每台计算机都会把自己称为 `localhost`
* <ruby>远程<rt>remote</rt></ruby>
远程计算机是指你既没在其面前,也没有在实际使用的计算机,是真正意义上在 _远程_ 位置的计算机。
现在术语已经明确好,我们可以开始了。
### 在每台主机上激活 SSH
要通过 SSH 连接两台计算机,每个主机都必须安装 SSH。SSH 有两个组成部分:本地计算机上使用的用于启动连接的命令,以及用于接收连接请求的 _服务器_。有些计算机可能已经安装好了 SSH 的一个或两个部分。验证 SSH 是否完全安装的命令因系统而异,因此最简单的验证方法是查阅相关配置文件:
```
$ file /etc/ssh/ssh_config
/etc/ssh/ssh_config: ASCII text
```
如果返回 `No such file or directory` 错误,说明没有安装 SSH 命令。
SSH 服务的检测与此类似(注意文件名中的 `d`
```
$ file /etc/ssh/sshd_config
/etc/ssh/sshd_config: ASCII text
```
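顺带一提,除了查看配置文件,也可以直接检查命令本身是否存在。`command -v` 在命令存在时输出其路径并返回 0,否则返回非零(这只是一个补充性的小技巧):

```shell
# 检查 ssh 客户端命令是否在 PATH 中
if ! command -v ssh >/dev/null 2>&1; then
    echo "ssh client is not installed"
fi
```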
根据缺失情况选择安装两个组件:
```
$ sudo dnf install openssh-clients openssh-server
```
在远程计算机上,使用 systemd 命令启用 SSH 服务:
```
$ sudo systemctl enable --now sshd
```
你也可以在 GNOME 上的 **系统设置** 或 macOS 上的 **系统首选项** 中启用 SSH 服务。在 GNOME 桌面上,该设置位于 **共享** 面板中:
![在 GNOME 系统设置中激活 SSH][3]
### 开启安全外壳协议
现在你已经在远程计算机上安装并启用了 SSH可以尝试使用密码登录作为测试。要访问远程计算机你需要有用户帐户和密码。
远程用户不必与本地用户相同。只要拥有相应用户的密码,你就可以在远程机器上以任何用户的身份登录。例如,我在我的工作计算机上的用户是 `sethkenlon` ,但在我的个人计算机上是 `seth`。如果我正在使用我的个人计算机(即作为当前的本地计算机),并且想通过 SSH 连接到我的工作计算机,我可以通过将自己标识为 `sethkenlon` 并使用我的工作密码来实现连接。
要通过 SSH 连接到远程计算机,你必须知道其 IP 地址或可解析的主机名。在远程计算机上使用 `ip` 命令可以查看该机器的 IP 地址:
```
$ ip addr show | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.1.1.5/27 brd 10.1.1.31 [...]
```
如果远程计算机没有 `ip` 命令,可以尝试使用 `ifconfig` 命令(甚至可以试试 Windows 上通用的 `ipconfig` 命令)。
`127.0.0.1` 是一个特殊的地址,它实际上是 `localhost` 的地址。这是一个<ruby>环回<rt>loopback</rt></ruby>地址,系统使用它来找到自己。这在登录远程计算机时并没有什么用,因此在此示例中,远程计算机的正确 IP 地址为 `10.1.1.5`。在现实生活中,我的本地网络正在使用 `10.1.1.0` 子网,进而可得知前述正确的 IP 地址。如果远程计算机在不同的网络上,那么 IP 地址几乎可能是任何地址(但绝不会是 `127.0.0.1`),并且可能需要一些特殊的路由才能通过各种防火墙到达远程。如果你的远程计算机在同一个网络上,但想要访问比自己的网络更远的计算机,请阅读我之前写的关于 [在防火墙中打开端口][5] 的文章。
如果你能通过 IP 地址 _或_ 主机名 `ping` 到远程机器,并且拥有登录帐户,那么就可以通过 SSH 接入远程机器:
```
$ ping -c1 10.1.1.5
PING 10.1.1.5 (10.1.1.5) 56(84) bytes of data.
64 bytes from 10.1.1.5: icmp_seq=1 ttl=64 time=4.66 ms
$ ping -c1 akiton.local
PING 10.1.1.5 (10.1.1.5) 56(84) bytes of data.
```
至此就成功了一小步。再试试使用 SSH 登录:
```
$ whoami
seth
$ ssh sethkenlon@10.1.1.5
bash$ whoami
sethkenlon
```
测试登录有效,下一节会介绍如何激活无密码登录。
### 创建 SSH 密钥
要在没有密码的情况下安全地登录到另一台计算机,登录者必须拥有 SSH 密钥。可能你的机器上已经有一个 SSH 密钥但再多创建一个新密钥也没有什么坏处。SSH 密钥的生命周期是在本地计算机上开始的,它由两部分组成:一个是永远不会与任何人或任何东西共享的私钥,一个是可以复制到任何你想要无密码访问的远程机器上的公钥。
有的人可能会创建一个 SSH 密钥,并将其用于从远程登录到 GitLab 身份验证的所有操作,但我会选择对不同的任务组使用不同的密钥。例如,我在家里使用一个密钥对本地机器进行身份验证,使用另一个密钥对我维护的 Web 服务器进行身份验证,再一个单独的密钥用于 Git 主机,以及又一个用于我托管的 Git 存储库,等等。在此示例中,我将只创建一个唯一密钥,以在局域网内的计算机上使用。
使用 `ssh-keygen` 命令创建新的 SSH 密钥:
```
$ ssh-keygen -t ed25519 -f ~/.ssh/lan
```
`-t` 选项代表 _类型_ ,上述代码设置了一个高于默认值的密钥加密级别。`-f` 选项代表 _文件_,指定了密钥的文件名和位置。运行此命令后会生成一个名为 `lan` 的 SSH 私钥和一个名为 `lan.pub` 的 SSH 公钥。
使用 `ssh-copy-id` 命令把公钥发送到远程机器上,在此之前要先确保具有远程计算机的 SSH 访问权限。如果你无法使用密码登录远程主机,也就无法设置无密码登录:
```
$ ssh-copy-id -i ~/.ssh/lan.pub sethkenlon@10.1.1.5
```
过程中系统会提示你输入远程主机上的登录密码。
操作成功后,使用 `-i` 选项将 SSH 命令指向对应的密钥(在本例中为 `lan`)再次尝试登录:
```
$ ssh -i ~/.ssh/lan sethkenlon@10.1.1.5
bash$ whoami
sethkenlon
```
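为了避免每次登录都键入 `-i` 参数,可以在本地的 `~/.ssh/config` 里为这台主机写一个配置段(下面的主机别名 `workstation` 是我虚构的,请按需替换):

```
Host workstation
    HostName 10.1.1.5
    User sethkenlon
    IdentityFile ~/.ssh/lan
```

之后只需执行 `ssh workstation` 即可登录。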
对局域网上的所有计算机重复此过程,你就将能够无密码访问这个局域网上的每台主机。实际上,一旦你设置了无密码认证,你就可以编辑 `/etc/ssh/sshd_config` 文件来禁止密码认证。这有助于防止其他人使用 SSH 对计算机进行身份验证,除非他们拥有你的私钥。要想达到这个效果,可以在有 `sudo` 权限的文本编辑器中打开 `/etc/ssh/sshd_config` 并搜索字符串 `PasswordAuthentication`,将默认行更改为:
```
PasswordAuthentication no
```
保存并重启 SSH 服务器:
```
$ sudo systemctl restart sshd && echo "OK"
OK
$
```
### 日常使用 SSH
OpenSSH 改变了人们对操作计算机的看法,使用户不再被束缚在面前的计算机上。使用 SSH你可以访问家中的任何计算机或者拥有帐户的服务器甚至是移动和物联网设备。充分利用 SSH 也意味着解锁 Linux 终端的更多用途。如果你还没有使用过 SSH请试一下它吧。试着适应 SSH创建一些适当的密钥以此更安全地使用计算机打破必须与计算机面对面的局限性。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/ssh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://opensource.com/sites/default/files/uploads/gnome-activate-remote-login.png (Activate SSH in GNOME System Settings)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/article/20/8/open-ports-your-firewall

[#]: subject: (How to Know if Your System Uses MBR or GPT Partitioning [on Windows and Linux])
[#]: via: (https://itsfoss.com/check-mbr-or-gpt/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (alim0x)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13727-1.html)
如何在 Windows 和 Linux 上确定系统使用的是 MBR 还是 GPT 分区
======
![](https://img.linux.net.cn/data/attachment/album/202108/28/165508gqjyigp3yz3gy6yy.jpg)
在你安装 Linux 或任何其他系统的时候,了解你的磁盘的正确分区方案是非常关键的。
目前有两种流行的分区方案,老一点的 MBR 和新一些的 GPT。现在大多数的电脑使用 GPT。
在制作临场镜像或可启动 USB 设备时,一些工具(比如 [Rufus][1])会问你在用的磁盘分区情况。如果你在 MBR 分区的磁盘上选择 GPT 方案的话,制作出来的可启动 USB 设备可能会不起作用。
在这个教程里,我会展示若干方法,来在 Windows 和 Linux 系统上检查磁盘分区方案。
### 在 Windows 上检查系统使用的是 MBR 还是 GPT
尽管在 Windows 上包括命令行在内有不少方法可以检查磁盘分区方案,这里我还是使用图形界面的方式查看。
按下 Windows 键,搜索 “disk”,然后点击 “**创建并格式化硬盘分区**”。
![][2]
在这里,**右键点击**你想要检查分区方案的磁盘。在右键菜单里**选择属性**。
![右键点击磁盘并选择属性][3]
在属性窗口,切换到**卷**标签页,寻找**磁盘分区形式**属性。
![在卷标签页寻找磁盘分区形式属性][4]
正如你在上面截图所看到的,磁盘正在使用 GPT 分区方案。对于一些其他系统,它可能显示的是 MBR 或 MSDOS 分区方案。
现在你知道如何在 Windows 下检查磁盘分区方案了。在下一部分,你会学到如何在 Linux 下进行检查。
### 在 Linux 上检查系统使用的是 MBR 还是 GPT
在 Linux 上也有不少方法可以检查磁盘分区方案使用的是 MBR 还是 GPT。既有命令行方法也有图形界面工具。
让我先给你演示一下命令行方法,然后再看看一些图形界面的方法。
#### 在 Linux 使用命令行检查磁盘分区方案
命令行的方法应该在所有 Linux 发行版上都有效。
打开终端并使用 `sudo` 运行下列命令:
```
sudo parted -l
```
上述命令实际上是一个基于命令行的 [Linux 分区管理器][5]。命令参数 `-l` 会列出系统中的所有磁盘以及它们的详情,里面包含了分区方案信息。
在命令输出中,寻找以 **Partition Table**(分区表)开头的行:
![][6]
在上面的截图中,磁盘使用的是 GPT 分区方案。如果是 **MBR**,它会显示为 **msdos**
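另外,如果系统带有较新版本的 util-linux,也可以用 `lsblk` 直接输出分区表类型(此处假设其支持 `PTTYPE` 输出列;`gpt` 即 GPT,`dos` 即 MBR):

```shell
# -d 只显示磁盘本身而不列出分区,-o 指定要输出的列
lsblk -d -o NAME,PTTYPE
```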
你已经学会了命令行的方式。但如果你不习惯使用终端,你还可以使用图形界面工具。
#### 使用 GNOME Disks 工具检查磁盘信息
Ubuntu 和一些其它基于 GNOME 的发行版内置了叫做 Disks 的图形工具,你可以用它管理系统中的磁盘。
你也可以使用它来获取磁盘的分区类型。
![][7]
#### 使用 Gparted 图形工具检查磁盘信息
如果你没办法使用 GNOME Disks 工具,别担心,还有其它工具可以使用。
其中一款流行的工具是 Gparted。你应该可以在大多数 Linux 发行版的软件源中找到它。如果系统中没有安装的话,使用你的发行版的软件中心或 [包管理器][9] 来 [安装 Gparted][8]。
在 Gparted 中,通过菜单选择 **View->Device Information**(查看—>设备信息)。它会在左下区域显示磁盘信息,这些信息中包含分区方案信息。
![][10]
看吧,也不是太复杂,对吗?现在你了解了好几种途径来确认你的系统使用的是 GPT 还是 MBR 分区方案。
同时我还要提一下,有时候磁盘还会有 [混合分区方案][11]。这不是很常见,大多数时候分区不是 MBR 就是 GPT。
有任何问题或建议,请在下方留下评论。
--------------------------------------------------------------------------------
via: https://itsfoss.com/check-mbr-or-gpt/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[alim0x](https://github.com/alim0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://rufus.ie/en_US/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/disc-management-windows.png?resize=800%2C561&ssl=1
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/gpt-check-windows-1.png?resize=800%2C603&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/gpt-check-windows-2-1.png?resize=800%2C600&ssl=1
[5]: https://itsfoss.com/partition-managers-linux/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-if-mbr-or-gpt-in-Linux.png?resize=800%2C446&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-if-mbr-or-gpt-in-Linux-gui.png?resize=800%2C548&ssl=1
[8]: https://itsfoss.com/gparted/
[9]: https://itsfoss.com/package-manager/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-disk-partitioning-scheme-linux-gparted.jpg?resize=800%2C555&ssl=1
[11]: https://www.rodsbooks.com/gdisk/hybrid.html

View File

@ -0,0 +1,235 @@
[#]: subject: (Brave vs. Firefox: Your Ultimate Browser Choice for Private Web Experience)
[#]: via: (https://itsfoss.com/brave-vs-firefox/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13736-1.html)
Brave vs. Firefox你的私人网络体验的终极浏览器选择
======
![](https://img.linux.net.cn/data/attachment/album/202108/30/223133tqzkg4pjpwwb8u4g.jpg)
Web 浏览器经过多年的发展,从下载文件到访问成熟的 Web 应用程序,已经有了长足的发展。
对于很多用户来说Web 浏览器是他们如今完成工作的唯一需要。
因此,选择合适的浏览器就成为了一项重要的任务,它可以帮助改善你多年来的工作流程。
### Brave vs. Firefox
Brave 和 Mozilla Firefox 是最受注重隐私的用户和开源爱好者欢迎的两款 Web 浏览器。
考虑到两者都非常注重隐私和安全,让我们看看它们到底能提供什么,以帮助你决定应该选择哪一个。
以下是我所使用的比较指标:
### 用户界面
用户界面是使用浏览器时的工作流程和体验的最大区别。
当然,你会有你的个人偏好,但它看起来越容易使用、越轻快、越干净,就越好。
![Brave 浏览器][12]
首先Brave 与 Chrome 和微软 Edge 有着相似的外观和感受。它提供了一种简洁的体验,具有精简的 UI 元素,所有的基本选项都可以通过浏览器菜单访问。
它也提供了一个暗色主题。恰到好处的动画使得互动成为一种愉快的体验。
要定制它,你可以选择使用 Chrome Web 商店中的主题。
说到 Mozilla Firefox多年来它经历了几次重大的重新设计其最新的用户界面试图提供与 Chrome 更接近的体验。
![Firefox 浏览器][13]
Firefox 浏览器的设计看起来令人印象深刻,并提供了干净利落的用户体验。如果需要的话,你还可以选择一个暗色主题,此外还有其它几个主题可供下载使用。
这两个 Web 浏览器都能提供良好的用户体验。
如果你想要一个熟悉的体验但又具有一丝独特之处Mozilla Firefox 是一个不错的选择。
但是如果你想获得更快捷的体验、更好的动画感受Brave 更有优势。
### 性能
实际上,我发现 Brave 加载网页的速度更快,整体的用户体验感觉很轻快。
Firefox 浏览器倒不是非常慢,但它绝对感觉比 Brave 慢。
为了给你一些参考,我还利用 [Basemark][14] 运行了一个基准测试,看看事实上是否真的如此。
你可以使用其他的浏览器基准测试工具来测试一下,但我用 Basemark 进行了各种测试,所以我们在这篇文章中会用它。
![Firefox 基准得分][15]
![Brave 基准得分][16]
Firefox 浏览器成功获得了 **630** 的得分,而 Brave 以大约 **792** 的得分取得了更好的成绩。
请注意,这些基准测试是在没有安装任何浏览器扩展程序的情况下,以默认的浏览器设置进行的。
当然,你的分数可能会有所不同,这取决于你在后台进行的工作和你系统的硬件配置。
这是我在 **i5-7400、16GB 内存和 GTX 1050ti GPU** 配置的桌面电脑上得到的结果。
一般来说与大多数流行的浏览器相比Brave 浏览器是一个快速的浏览器。
这两者都占用了相当大的系统资源,而且在一定程度上随着标签数量、访问的网页类型和使用的拦截扩展的种类而变化。
例如Brave 在默认情况下会主动阻止广告,但 Firefox 在默认情况下不会阻止显示广告。而且,这也影响了系统资源的使用。
### 浏览器引擎
Firefox 浏览器在自己的 Gecko 引擎基础上,使用来自 [servo 研究项目][17] 的组件来进行改进。
目前,它基本上是一个改进的 Gecko 引擎,其项目名称是随着 Firefox Quantum 的发布而推出的 “Quantum”。
另一方面Brave 使用 Chromium 的引擎。
虽然两者都有足够的能力处理现代 Web 体验,但基于 Chromium 的引擎更受欢迎Web 开发人员通常会在基于 Chrome 的浏览器上定制他们的网站以获得最佳体验。
另外,有些服务恰好只支持基于 Chrome 的浏览器。
### 广告 & 追踪器阻止功能
![][18]
正如我之前提到的Brave 在阻止跟踪器和广告方面非常积极。默认情况下,它已经启用了屏蔽功能。
Firefox 浏览器也默认启用了增强的隐私保护功能,但并不阻止显示广告。
如果你想摆脱广告,你得选择火狐浏览器的 “严格隐私保护模式”。
也就是说,火狐浏览器执行了一些独特的跟踪保护技术,包括“全面 Cookie 保护”,可以为每个网站隔离 Cookie 并防止跨站 Cookie 跟踪。
![][19]
这是在 [Firefox 86][20] 中引入的技术,要使用它,你需要启用 “严格隐私保护模式”。
总的来说Brave 可能看起来是一个更好的选择,而 Mozilla Firefox 提供了更好的隐私保护功能。
### 容器
当你访问 Facebook 时Firefox 还提供了一种借助容器来隔离网站活动的方法。换句话说,它可以防止 Facebook 跟踪你的站外活动。
你还可以使用容器来组织你的标签,并在需要时分离会话。
Brave 没有提供任何类似的功能,但它本身可以阻止跨站追踪器和 cookie。
### 奖励
![][21]
与 Firefox 不同Brave 通过屏蔽网络上的其他广告来提供自己的广告网络。
当你选择显示 Brave 的隐私友好型广告时,你会得到可以放到加密货币钱包里的通证奖励,而你可以用这些通证来回馈你喜欢的网站。
虽然这是摆脱主流广告的一个很好的商业策略,但对于不想要任何形式的广告的用户来说,这可能没有用。
因此Brave 以奖励的形式提供了一个替代方案即使你屏蔽了广告也可以帮助网站发展。如果这是你欣赏的东西Brave 将是你的一个好选择。
### 跨平台可用性
你会发现 Brave 和 Firefox 都有 Linux、Windows 和 macOS 版本,也有用于 iOS 和 Android 的移动应用程序。
对于 Linux 用户来说Firefox 浏览器捆绑在大多数的 Linux 发行版中。而且,你也可以在软件中心里找到它。除此之外,还有一个 [Flatpak][22] 包可用。
Brave 不能通过默认的软件库和软件中心获得。因此,你需要按照官方的说明来添加私有仓库,然后 [把 Brave 安装在你的 Linux 发行版中][23]。
### 同步
通过 Mozilla Firefox你可以创建一个 Firefox 账户来跨平台同步你的所有数据。
![][24]
Brave 也可以让你跨平台同步,但你需要能访问其中一个设备才行。
![][25]
因此Firefox 的同步更方便。
另外,你可以通过 Firefox 的账户访问它的“虚拟专用网络”、数据泄露监控器、电子邮件中继,以及密码管理器。
### 服务集成
从一开始 Firefox 就提供了更多的服务集成,包括 Pocket、“虚拟私有网络”、密码管理器还有一些新产品如 Firefox 中继。
如果你想通过你的浏览器访问这些服务Firefox 将是你的方便选择。
虽然 Brave 确实提供了加密货币钱包,但它并不适合所有人。
![][26]
同样,如果你喜欢使用 [Brave Search][27],那么在 Brave 浏览器中使用它时,体验可能会更加顺滑。
### 可定制性 & 安全性
Firefox 浏览器在可定制性方面大放异彩。你可以通过众多选项来调整体验,也可以控制你的浏览器的隐私/安全。
自定义的能力使你可以让 Firefox 比 Brave 浏览器更安全。
而加固 Firefox 浏览器是一个我们将讨论的单独话题。略举一例,[Tor 浏览器][28] 只是一个定制的 Firefox 浏览器。
然而,这并不意味着 Brave 的安全性更低。总的来说,它是一个安全的浏览器,但你确实可以通过 Firefox 浏览器获得更多的选择。
### 扩展支持
毫无疑问Chrome Web 商店提供了更多的扩展。
因此如果你是一个使用大量扩展或不断尝试新扩展的人Brave 明显比 Firefox 更有优势。
可能 Firefox 的扩展清单不是最大的,但它确实支持大多数的扩展。对于常见的使用情况,你很少能找到一个 Firefox 中没有的扩展。
### 你应该选择哪个?
如果你希望尽量兼容现代的 Web 体验并希望有更多的扩展Brave 浏览器似乎更合适。
另一方面Firefox 浏览器是日常浏览的绝佳选择,它具有业界首创的隐私功能,并为不懂技术的用户提供了方便的同步选项。
在选择它们中的任何一个时会有一些取舍。因此,你需要优先考虑你最想要的东西。
请在下面的评论中告诉我你的最终选择!
--------------------------------------------------------------------------------
via: https://itsfoss.com/brave-vs-firefox/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: tmp.5yJseRG2rb#ui
[2]: tmp.5yJseRG2rb#perf
[3]: tmp.5yJseRG2rb#engine
[4]: tmp.5yJseRG2rb#ad
[5]: tmp.5yJseRG2rb#container
[6]: tmp.5yJseRG2rb#reward
[7]: tmp.5yJseRG2rb#cp
[8]: tmp.5yJseRG2rb#sync
[9]: tmp.5yJseRG2rb#service
[10]: tmp.5yJseRG2rb#customise
[11]: tmp.5yJseRG2rb#extensions
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/brave-ui-new.jpg?resize=800%2C450&ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/firefox-ui.jpg?resize=800%2C450&ssl=1
[14]: https://web.basemark.com
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/firefox-basemark.png?resize=800%2C598&ssl=1
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/basemark-brave.png?resize=800%2C560&ssl=1
[17]: https://servo.org
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/brave-blocker.png?resize=800%2C556&ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/firefox-blocker.png?resize=800%2C564&ssl=1
[20]: https://news.itsfoss.com/firefox-86-release/
[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/brave-rewards.png?resize=800%2C560&ssl=1
[22]: https://itsfoss.com/what-is-flatpak/
[23]: https://itsfoss.com/brave-web-browser/
[24]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/firefox-sync.png?resize=800%2C651&ssl=1
[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/brave-sync.png?resize=800%2C383&ssl=1
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/brave-crypto-wallet.png?resize=800%2C531&ssl=1
[27]: https://itsfoss.com/brave-search-features/
[28]: https://itsfoss.com/install-tar-browser-linux/

View File

@ -0,0 +1,72 @@
[#]: subject: "4 alternatives to cron in Linux"
[#]: via: "https://opensource.com/article/21/7/alternatives-cron-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13716-1.html"
Linux 中 cron 系统的 4 种替代方案
======
> 在 Linux 系统中有一些其他开源项目可以结合或者替代 cron 系统使用。
![](https://img.linux.net.cn/data/attachment/album/202108/25/104033ro6lasn54lq25r2l.jpg)
[Linux cron 系统][2] 是一项经过时间检验的成熟技术,然而在任何情况下它都是最合适的系统自动化工具吗?答案是否定的。有一些开源项目就可以用来与 cron 结合或者直接代替 cron 使用。
### at 命令
cron 适用于长期重复任务。如果你设置了一个工作任务,它会从现在开始定期运行,直到计算机报废为止。但有些情况下你可能只想设置一个一次性命令,以备不在计算机旁时该命令可以自动运行。这时你可以选择使用 `at` 命令。
`at` 的语法比 cron 语法简单和灵活得多,并且兼具交互式和非交互式调度方法。(只要你想,你甚至可以使用 `at` 作业创建一个 `at` 作业。)
```
$ echo "rsync -av /home/tux/ me@myserver:/home/tux/" | at 1:30 AM
```
该命令语法自然且易用,并且不需要用户清理旧作业,因为它们一旦运行后就完全被计算机遗忘了。
阅读有关 [at 命令][3] 的更多信息并开始使用吧。
### systemd
除了管理计算机上的进程外,`systemd` 还可以帮你调度这些进程。与传统的 cron 作业一样systemd 计时器可以在指定的时间间隔触发事件,例如 shell 脚本和命令。时间间隔可以是每月特定日期的一天一次(例如在星期一的时候触发),或者在 09:00 到 17:00 的工作时间内每 15 分钟一次。
此外 systemd 里的计时器还可以做一些 cron 作业不能做的事情。
例如,计时器可以在某个事件发生 _之后_ 的特定时长触发脚本或程序,这个事件可以是开机,可以是前置任务的完成,甚至可以是计时器本身调用的服务单元的完成!
如果你的系统运行着 systemd 服务,那么你的机器就已经在技术层面上使用 systemd 计时器了。默认计时器会执行一些琐碎的任务,例如滚动日志文件、更新 mlocate 数据库、管理 DNF 数据库等。创建自己的计时器很容易,具体可以参阅 David Both 的文章 [使用 systemd 计时器来代替 cron][4]。
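作为参考,下面给出一个极简的计时器单元草稿(单元名和脚本路径均为假设的示例,并非上文提到的默认计时器)。它由一个 `.service` 单元和一个同名的 `.timer` 单元组成,后者每 15 分钟触发一次前者:

```
# /etc/systemd/system/mytask.service示例名称
[Unit]
Description=Run my task

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mytask.sh

# /etc/systemd/system/mytask.timer示例名称
[Unit]
Description=Run my task every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```

随后用 `systemctl enable --now mytask.timer` 启用即可(同样是示例命令,细节请参考上文链接的文章)。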
### anacron 命令
cron 专门用于在特定时间运行命令这适用于从不休眠或断电的服务器。然而对笔记本电脑和台式工作站而言时常有意或无意地关机是很常见的。当计算机处于关机状态时cron 不会运行,因此设定在这段时间内的一些重要工作(例如备份数据)也就会跳过执行。
anacron 系统旨在确保作业定期运行,而不是按计划时间点运行。这就意味着你可以将计算机关机几天,再次启动时仍然靠 anacron 来运行基本任务。anacron 与 cron 协同工作,因此严格来说前者不是后者的替代品,而是一种调度任务的有效可选方案。许多系统管理员配置了一个 cron 作业来在深夜备份远程工作者计算机上的数据结果却发现该作业在过去六个月中只运行过一次。anacron 确保重要的工作在 _可执行的时候_ 发生,而不是必须在安排好的 _特定时间点_ 发生。
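anacron 的作业通常定义在 `/etc/anacrontab` 中。下面是一段假设的示例条目(脚本路径是虚构的),第一行表示“每 1 天执行一次,开机后延迟 15 分钟”:

```
# /etc/anacrontab 片段(示例)
# 周期(天)  延迟(分钟)  作业标识       命令
1             15            daily-backup   /usr/local/bin/backup.sh
7             30            weekly-report  /usr/local/bin/report.sh
```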
点击参阅关于 [使用 anacron 获得更好的 crontab 效果][5] 的更多内容。
### 自动化
计算机和技术旨在让人们的生活更美好工作更轻松。Linux 为用户提供了许多有用的功能以确保完成重要的操作系统任务。查看这些可用的功能然后试着将这些功能用于你自己的工作任务吧。LCTT 译注:作者本段有些语焉不详,读者可参阅譬如 [Ansible 自动化工具安装、配置和快速入门指南](https://linux.cn/article-13142-1.html) 等关于 Linux 自动化的文章)
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/alternatives-cron-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://opensource.com/article/21/7/cron-linux
[3]: https://opensource.com/article/21/7/intro-command
[4]: https://opensource.com/article/20/7/systemd-timers
[5]: https://opensource.com/article/21/2/linux-automation

View File

@ -0,0 +1,89 @@
[#]: subject: "Automatically Synchronize Subtitle With Video Using SubSync"
[#]: via: "https://itsfoss.com/subsync/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "turbokernel"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13722-1.html"
使用 SubSync 自动同步视频字幕
======
![](https://img.linux.net.cn/data/attachment/album/202108/27/100003ts3j0odw05j0ooy3.jpg)
让我分享一个场景:当你想要观看一部电影或视频,而又需要字幕时,在你下载字幕后,却发现字幕没有正确同步,也没有其他更好的字幕可用。现在该怎么做?
你可以 [在 VLC 中按 G 或 H 键来同步字幕][1]。它可以为字幕增加延迟。如果字幕在整个视频中的时间延迟相同,这可能会起作用。但如果不是这种情况,就需要 SubSync 出场了。
### SubSync: 字幕语音同步器
[SubSync][2] 是一款实用的开源工具,可用于 Linux、macOS 和 Windows。
它通过监听音轨来同步字幕,这就是它的神奇之处。即使音轨和字幕使用的是不同的语言,它也能发挥作用。如果有必要,它也支持翻译,但我没有测试过这个功能。
我用一段字幕不同步的视频进行了一个简单的测试。令我惊讶的是,它工作得很顺利,我得到了完美同步的字幕。
使用 SubSync 很简单。启动这个应用,它会让你添加字幕文件和视频文件。
![SubSync 用户界面][3]
你需要在界面上选择字幕和视频的语言。它可能会根据选择的语言下载额外的资源。
![SubSync 可下载附加语言支持包][4]
请记住,同步字幕需要一些时间,这取决于视频和字幕的长度。在等待过程完成时,你可以喝杯茶/咖啡或啤酒。
你可以看到正在进行同步的状态,甚至可以在完成之前保存它。
![SubSync 同步中][5]
同步完成后,你就可以点击保存按钮,把修改的内容保存到原文件中,或者把它保存为新的字幕文件。
![同步完成][6]
我不能保证所有情况下都能正常工作,但在我运行的样本测试中它是正常的。
### 安装 SubSync
SubSync 是一个跨平台的应用,你可以从它的 [下载页面][7] 获得 Windows 和 MacOS 的安装文件。
对于 Linux 用户SubSync 是作为一个 Snap 包提供的。如果你的发行版已经提供了 Snap 支持,使用下面的命令来安装 SubSync
```
sudo snap install subsync
```
请记住,下载 SubSync Snap 包将需要一些时间。所以要有一个稳定的网络连接或足够的耐心。
### 最后
就我个人而言,我很依赖字幕。即使我在 Netflix 上看英文电影,我也会把字幕打开。它有助于我清楚地理解每段对话,特别是在有强烈口音的情况下。如果没有字幕,我永远无法理解 [电影 Snatch 中 Mickey O'Neil由 Brad Pitt 扮演)的一句话][8]。
使用 SubSync 比 [Subtitle Editor][9] 同步字幕要容易得多。对于像我这样在整个互联网上搜索不同国家的冷门或推荐(神秘)电影的人来说,除了 [企鹅字幕播放器][10],这是另一个很棒的工具。
如果你是一个“字幕用户”,你会喜欢这个工具。如果你使用过它,请在评论区分享你的使用经验。
--------------------------------------------------------------------------------
via: https://itsfoss.com/subsync/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/
[2]: https://subsync.online/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-interface.png?resize=593%2C280&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize.png?resize=522%2C189&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize-1.png?resize=424%2C278&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize-2.png?resize=424%2C207&ssl=1
[7]: https://subsync.online/en/download.html
[8]: https://www.youtube.com/watch?v=tGDO-9hfaiI
[9]: https://itsfoss.com/subtitld/
[10]: https://itsfoss.com/penguin-subtitle-player/

View File

@ -3,24 +3,24 @@
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13723-1.html"
用 fastjar 和 gjar 构建一个 JAR 文件
======
fastjar、gjar 和 jar 等工具可以帮助你手动或以编程方式构建 JAR 文件,而其他工具链,如 Maven
和 Gradle 提供了依赖性管理的功能。
![Someone wearing a hardhat and carrying code ][1]
JAR 文件使用户很容易下载和启动他们想尝试的应用,很容易将该应用从一台计算机转移到另一台计算机(而且 Java 是跨平台的,所以可以鼓励自由分享),而且对于新的程序员来说,很容易理解 JAR 文件的内容,以找出使 Java 应用运行的原因。
> fastjar、gjar 和 jar 等工具可以帮助你手动或以编程方式构建 JAR 文件,而其他工具链,如 Maven 和 Gradle 提供了依赖性管理的功能。
![](https://img.linux.net.cn/data/attachment/album/202108/27/105207oj4f44t4vbkkv4iq.jpg)
根据我的经验Java 的许多优点之一是它能够以整齐方便的包(称为 JAR或 Java 归档来提供应用程序。JAR 文件使用户很容易下载并启动他们想尝试的应用,很容易将该应用从一台计算机转移到另一台计算机(而且 Java 是跨平台的,所以可以鼓励自由分享),而且对于新的程序员来说,查看 JAR 文件的内容,以找出使 Java 应用运行的原因是很容易理解的。
创建 JAR 文件的方法有很多,包括 Maven 和 Gradle 等工具链解决方案,以及 IDE 中的一键构建功能。然而,也有一些独立的命令,如 `jarfast`、`gjar` 和普通的 `jar`,它们对于快速和简单的构建是很有用的,并且可以演示 JAR 文件运行所需要的东西。
### 安装
在 Linux 上,你可能已经有了 `fastjar`、`gjar` 或 `jar` 命令,作为 OpenJDK 包或 GCJGCC-Java的一部分。你可以通过输入不带参数的命令来测试这些命令是否已经安装
在 Linux 上,你可能已经有了 `fastjar`、`gjar` 或作为 OpenJDK 包或 GCJGCC-Java的一部分的 `jar` 命令。你可以通过输入不带参数的命令来测试这些命令是否已经安装:
```
$ fastjar
@ -35,7 +35,7 @@ Try `jar --help' for more information.
我安装了所有这些命令,但你只需要一个。所有这些命令都能够构建一个 JAR。
在 Fedora 等现代 Linux 系统上,输入一个缺失的命令会使你的操作系统提示安装。
在 Fedora 等现代 Linux 系统上,输入一个缺失的命令时,你的操作系统会提示安装它。
另外,你可以直接从 [AdoptOpenJDK.net][3] 为 Linux、MacOS 和 Windows [安装 Java][2]。
@ -43,45 +43,40 @@ Try `jar --help' for more information.
首先,你需要构建一个 Java 应用。
为了简单起见,在一个名为 hello.java 的文件中创建一个基本的 “hello world” 应用:
为了简单起见,在一个名为 `hello.java` 的文件中创建一个基本的 “hello world” 应用:
```
class Main {
public static void main([String][4][] args) {
[System][5].out.println("Hello Java World");
public static void main(String[] args) {
System.out.println("Hello Java World");
}}
```
这是一个简单的应用,在某种程度上淡化了管理外部依赖关系在现实世界中的重要性。不过,这也足以让你开始了解创建 JAR 所需的基本概念了。
接下来,创建一个清单文件。清单文件描述了 JAR 的 Java 环境。在这种情况下,最重要的信息是识别主类,这样执行 JAR 的 Java 运行时就知道在哪里可以找到应用的入口点。
接下来,创建一个清单文件。清单文件描述了 JAR 的 Java 环境。在这个例子里,最重要的信息是识别主类,这样执行 JAR 的 Java 运行时就知道在哪里可以找到应用的入口点。
```
$ mkdir META-INF
$ echo "Main-Class: Main" &gt; META-INF/MANIFEST.MF
$ echo "Main-Class: Main" > META-INF/MANIFEST.MF
```
### 编译 Java 字节码
接下来,把你的 Java 文件编译成 Java 字节码。
```
`$ javac hello.java`
$ javac hello.java
```
另外,你也可以使用 GCC 的 Java 组件来编译:
```
`$ gcj -C hello.java`
$ gcj -C hello.java
```
无论哪种方式,都会产生文件 `Main.class`
```
$ file Main.class
Main.class: compiled Java class data, version XX.Y
@ -91,32 +86,28 @@ Main.class: compiled Java class data, version XX.Y
你有了所有需要的组件,这样你就可以创建 JAR 文件了。
我经常包含 Java 源码给好奇的用户参考但_所有_需要的只是 `META-INF` 目录和类文件。
`fastjar` 命令使用类似于 [`tar` 命令][6]的语法。
我经常包含 Java 源码给好奇的用户参考,这只需 `META-INF` 目录和类文件即可。
`fastjar` 命令使用类似于 [tar 命令][6]的语法。
```
`$ fastjar cvf hello.jar META-INF Main.class`
$ fastjar cvf hello.jar META-INF Main.class
```
另外,你也可以用 `gjar`,方法大致相同,只是 `gjar` 需要你明确指定清单文件:
```
`$ gjar cvf world.jar Main.class -m META-INF/MANIFEST.MF`
$ gjar cvf world.jar Main.class -m META-INF/MANIFEST.MF
```
或者你可以使用 `jar` 命令。注意这个命令不需要 Manifest 文件,因为它会自动为你生成一个,但为了安全起见,我明确定义了主类:
或者你可以使用 `jar` 命令。注意这个命令不需要清单文件,因为它会自动为你生成一个,但为了安全起见,我明确定义了主类:
```
`$ jar --create --file hello.jar --main-class=Main Main.class`
$ jar --create --file hello.jar --main-class=Main Main.class
```
测试你的应用:
```
$ java -jar hello.jar
Hello Java World
@ -135,7 +126,7 @@ via: https://opensource.com/article/21/8/fastjar
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,145 @@
[#]: subject: "Check free disk space in Linux with ncdu"
[#]: via: "https://opensource.com/article/21/8/ncdu-check-free-disk-space-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13729-1.html"
用 ncdu 检查 Linux 中的可用磁盘空间
======
> 用 ncdu Linux 命令获得关于磁盘使用的交互式报告。
![](https://img.linux.net.cn/data/attachment/album/202108/29/095819e87oz4ox6p40t6q0.jpg)
计算机用户多年来往往积累了大量的数据,无论是重要的个人项目、数码照片、视频、音乐还是代码库。虽然现在的硬盘往往相当大,但有时你必须退一步,评估一下你在硬盘上实际存储了什么。经典的 Linux 命令 [df][2] 和 [du][3] 是快速了解硬盘上的内容的方法,它们提供了一个可靠的报告,易于解析和处理。这对脚本和处理来说是很好的,但人的大脑对数百行的原始数据并不总是反应良好。认识到这一点,`ncdu` 命令旨在提供一份关于你在硬盘上使用的空间的交互式报告。
### 在 Linux 上安装 ncdu
在 Linux 上,你可以从你的软件仓库安装 `ncdu`。例如,在 Fedora 或 CentOS 上:
```
$ sudo dnf install ncdu
```
在 BSD 上,你可以使用 [pkgsrc][4]。
在 macOS 上,你可以从 [MacPorts][5] 或 [HomeBrew][6] 安装。
另外,你也可以 [从源码编译 ncdu][7]。
### 使用 ncdu
`ncdu` 界面使用 ncurses 库,它将你的终端窗口变成一个基本的图形应用,所以你可以使用方向键来浏览菜单。
![ncdu interface][8]
这是 `ncdu` 的主要吸引力之一,也是它与最初的 `du` 命令不同的地方。
要获得一个目录的完整列表,启动 `ncdu`。它默认为当前目录。
```
$ ncdu
ncdu 1.16 ~ Use the arrow keys to navigate, press ? for help
--- /home/tux -----------------------------------------------
22.1 GiB [##################] /.var
19.0 GiB [############### ] /Iso
10.0 GiB [######## ] /.local
7.9 GiB [###### ] /.cache
3.8 GiB [### ] /Downloads
3.6 GiB [## ] /.mail
2.9 GiB [## ] /Code
2.8 GiB [## ] /Documents
2.3 GiB [# ] /Videos
[...]
```
这个列表首先显示了最大的目录(在这个例子中,那是 `~/.var` 目录,塞满了很多的 flatpak 包)。
使用键盘上的方向键,你可以浏览列表,深入到一个目录,这样你就可以更好地了解什么东西占用了最大的空间。
### 获取一个特定目录的大小
你可以在启动 `ncdu` 时提供任意一个文件夹的路径:
```
$ ncdu ~/chromiumos
```
### 排除目录
默认情况下,`ncdu` 包括一切可以包括的东西,包括符号链接和伪文件系统,如 procfs 和 sysfs。你可以用 `--exclude-kernfs` 来排除这些。
你可以使用 `--exclude` 选项排除任意文件和目录,并在后面加上一个匹配模式。
```
$ ncdu --exclude ".var"
19.0 GiB [##################] /Iso
10.0 GiB [######### ] /.local
7.9 GiB [####### ] /.cache
3.8 GiB [### ] /Downloads
[...]
```
另外,你可以在文件中列出要排除的文件和目录,并使用 `--exclude-from` 选项来引用该文件:
```
$ ncdu --exclude-from myexcludes.txt /home/tux
10.0 GiB [######### ] /.local
7.9 GiB [####### ] /.cache
3.8 GiB [### ] /Downloads
[...]
```
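`myexcludes.txt` 的格式很简单:每行一个要排除的模式。例如(文件名和其中的条目都只是示例):

```shell
# 创建一个排除列表文件,每行一个模式
cat > myexcludes.txt << 'EOF'
.var
Iso
.cache
EOF
```

之后即可像上面那样把它传给 `--exclude-from` 选项。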
### 颜色方案
你可以用 `--color dark` 选项给 `ncdu` 添加一些颜色。
![ncdu color scheme][9]
### 包括符号链接
`ncdu` 输出按字面意思处理符号链接,这意味着一个指向 9GB 文件的符号链接只占用 40 个字节。
```
$ ncdu ~/Iso
9.3 GiB [##################] CentOS-Stream-8-x86_64-20210427-dvd1.iso
@ 0.0 B [ ] fake.iso
```
你可以用 `--follow-symlinks` 选项强制 ncdu 跟踪符号链接:
```
$ ncdu --follow-symlinks ~/Iso
9.3 GiB [##################] fake.iso
9.3 GiB [##################] CentOS-Stream-8-x86_64-20210427-dvd1.iso
```
### 磁盘使用率
磁盘空间用完并不有趣,所以监控你的磁盘使用情况很重要。`ncdu` 命令使它变得简单和互动。下次当你对你的电脑上存储的东西感到好奇时,或者只是想以一种新的方式探索你的文件系统时,不妨试试 `ncdu`
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/ncdu-check-free-disk-space-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/du-splash.png?itok=nRLlI-5A (Check disk usage)
[2]: https://opensource.com/article/21/7/check-disk-space-linux-df
[3]: https://opensource.com/article/21/7/check-disk-space-linux-du
[4]: https://opensource.com/article/19/11/pkgsrc-netbsd-linux
[5]: https://opensource.com/article/20/11/macports
[6]: https://opensource.com/article/20/6/homebrew-mac
[7]: https://dev.yorhel.nl/ncdu
[8]: https://opensource.com/sites/default/files/ncdu.jpg (ncdu interface)
[9]: https://opensource.com/sites/default/files/ncdu-dark.jpg (ncdu color scheme)

View File

@ -0,0 +1,121 @@
[#]: subject: "How to Monitor Log Files in Real Time in Linux [Desktop and Server]"
[#]: via: "https://www.debugpoint.com/2021/08/monitor-log-files-real-time/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13733-1.html"
如何在 Linux 中实时监控日志文件
======
> 本教程解释了如何实时监控 Linux 日志文件(桌面、服务器或应用),以进行诊断和故障排除。
![](https://img.linux.net.cn/data/attachment/album/202108/30/082607bmf6nlud6sdy49rm.jpg)
当你在你的 Linux 桌面、服务器或任何应用中遇到问题时,你会首先查看各自的日志文件。日志文件通常是来自应用的文本和信息流,上面有一个时间戳。它可以帮助你缩小具体的实例,并帮助你找到任何问题的原因。它也有助于你从网络上获得帮助。
一般来说,所有的日志文件都位于 `/var/log` 中。这个目录包含以 `.log` 为扩展名的特定应用、服务的日志文件,它还包含单独的其他目录,这些目录包含其日志文件。
![log files in var-log][1]
所以,如果你想监控一堆日志文件或某个特定的日志文件,这里有一些可以做到的方法。
### 实时监控 Linux 日志文件
#### 使用 tail 命令
使用 `tail` 命令是实时跟踪日志文件的最基本方法,特别是当你所在的服务器只有终端、没有 GUI 时,它非常有用。
比如:
```
tail /path/to/log/file
```
![Monitoring multiple log files via tail][2]
使用开关 `-f` 来跟踪日志文件,它是实时更新的。例如,如果你想跟踪 `syslog`,你可以使用以下命令:
```
tail -f /var/log/syslog
```
你可以用一个命令监控多个日志文件,使用:
```
tail -f /var/log/syslog /var/log/dmesg
```
如果你想监控 http 或 sftp 或任何服务器,你也可以在这个命令中监控它们各自的日志文件。
记住,上述命令需要管理员权限。
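如果想直观感受 `tail -f` 的实时效果,又不想动真实的系统日志,可以用一个临时文件自己模拟(下面的文件名和内容均为演示用的假设,`timeout` 只是为了让演示自动结束):

```shell
# 用临时文件模拟一个不断增长的日志,并用 tail -f 实时跟踪
log=$(mktemp)

# 后台进程:每隔 0.2 秒追加一行“日志”
( for i in 1 2 3; do echo "event $i" >> "$log"; sleep 0.2; done ) &

# tail -f 会持续输出新追加的行;这里用 timeout 在 1 秒后结束演示
# timeout 结束 tail 时退出码非零,用 || true 吸收掉
captured=$(timeout 1 tail -f "$log" || true)
wait

echo "$captured"   # 依次输出 event 1、event 2、event 3
rm -f "$log"
```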
#### 使用 lnav日志文件浏览器
![lnav Running][3]
`lnav` 是一个很好的工具,它通过彩色编码的信息,让你以更有条理的方式监控日志文件。它在 Linux 系统中不是默认安装的,你可以用下面的命令来安装它:
```
sudo apt install lnav ### Ubuntu
sudo dnf install lnav ### Fedora
```
好的是,如果你不想安装它,你可以直接下载其预编译的可执行文件,然后在任何地方运行。甚至从 U 盘上也可以。它不需要设置,而且有很多功能。使用 `lnav`,你可以通过 SQL 查询日志文件,以及其他很酷的功能,你可以在它的 [官方网站][4] 上了解。
一旦安装,你可以简单地用管理员权限从终端运行 `lnav`,它将默认显示 `/var/log` 中的所有日志并开始实时监控。
#### 关于 systemd 的 journalctl 说明
今天所有的现代 Linux 发行版大多使用 systemd。systemd 提供了运行 Linux 操作系统的基本框架和组件。systemd 通过 `journalctl` 提供日志服务,帮助管理所有 systemd 服务的日志。你还可以通过以下命令实时监控各个 systemd 服务和日志。
```
journalctl -f
```
下面是一些具体的 `journalctl` 命令,可以在一些情况下使用。你可以将这些命令与上面的 `-f` 开关结合起来,开始实时监控。
* 对紧急系统信息,使用:
```
journalctl -p 0
```
* 显示带有解释的错误:
```
journalctl -xb -p 3
```
* 使用时间控制来过滤输出:
```
journalctl --since "2020-12-04 06:00:00"
journalctl --since "2020-12-03" --until "2020-12-05 03:00:00"
journalctl --since yesterday
journalctl --since 09:00 --until "1 hour ago"
```
如果你想了解更多关于 `journalctl` 的细节,我已经写了一个 [指南][6]。
### 结束语
我希望这些命令和技巧能帮助你找出桌面或服务器问题/错误的根本原因。对于更多的细节,你可以随时参考手册,摆弄各种开关。如果你对这篇文章有什么意见或看法,请在下面的评论栏告诉我。
加油。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/monitor-log-files-real-time/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/log-files-in-var-log-1024x312.jpeg
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Monitoring-multiple-log-files-via-tail-1024x444.jpeg
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/lnav-Running-1024x447.jpeg
[4]: https://lnav.org/features
[6]: https://www.debugpoint.com/2020/12/systemd-journalctl/

View File

@ -0,0 +1,105 @@
[#]: subject: "Access your iPhone on Linux with this open source tool"
[#]: via: "https://opensource.com/article/21/8/libimobiledevice-iphone-linux"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13737-1.html"
用这个开源工具在 Linux 上访问你的 iPhone
======
> 通过使用 Libimobiledevice 从 Linux 与 iOS 设备进行通信。
![](https://img.linux.net.cn/data/attachment/album/202108/31/092907bc26qep3ekc73czl.jpg)
iPhone 和 iPad 绝不是开源的,但它们是流行的设备。许多拥有 iOS 设备的人恰好也在使用大量的开源软件,包括 Linux。Windows 和 macOS 的用户可以通过使用苹果公司提供的软件与 iOS 设备通信,但苹果公司不支持 Linux 用户。开源程序员早在 2007 年(就在 iPhone 发布一年后)就通过 Libimobiledevice当时叫 libiphone伸出了援手这是一个与 iOS 通信的跨平台解决方案。它可以在 Linux、Android、Arm 系统如树莓派、Windows、甚至 macOS 上运行。
Libimobiledevice 是用 C 语言编写的,使用原生协议与 iOS 设备上运行的服务进行通信。它不需要苹果公司的任何库,所以它完全是自由而开源的。
Libimobiledevice 是一个面向对象的 API它捆绑了许多便于你使用的终端工具。该库支持苹果从最早到其最新的型号的 iOS 设备。这是多年来研究和开发的结果。该项目中的应用包括 `usbmuxd`、`ideviceinstaller`、`idevicerestore`、`ifuse`、`libusbmuxd`、`libplist`、`libirecovery` 和 `libideviceactivation`
### 在 Linux 上安装 Libimobiledevice
在 Linux 上,你可能已经默认安装了 `libimobiledevice`。你可以通过你的软件包管理器或应用商店找到,或者通过运行项目中包含的一个命令:
```
$ ifuse --help
```
你可以用你的包管理器安装 `libimobiledevice`。例如,在 Fedora 或 CentOS 上:
```
$ sudo dnf install libimobiledevice ifuse usbmuxd
```
在 Debian 和 Ubuntu 上:
```
$ sudo apt install usbmuxd libimobiledevice6 libimobiledevice-utils
```
或者,你可以从源代码 [下载][2] 并安装 `libimobiledevice`
### 连接你的设备
当你安装了所需的软件包,将你的 iOS 设备连接到你的电脑。
为你的 iOS 设备建立一个目录作为挂载点。
```
$ mkdir ~/iPhone
```
接下来,挂载设备:
```
$ ifuse ~/iPhone
```
你的设备会提示你,是否信任你用来访问它的电脑。
![iphone prompts to trust the computer][3]
*图 1iPhone 提示你要信任该电脑。*
信任问题解决后,你会在桌面上看到新的图标。
![iphone icons appear on desktop][4]
*图 2iPhone 的新图标出现在桌面上。*
点击 “iPhone” 图标,显示出你的 iPhone 的文件夹结构。
![iphone folder structure displayed][5]
*图 3显示了 iPhone 的文件夹结构。*
我通常最常访问的文件夹是 `DCIM`,那里存放着我的 iPhone 照片。有时我在写文章时使用这些照片,有时有一些照片我想用 GIMP 等开源应用来增强。可以直接访问这些图片,而不是通过电子邮件把它们发给我自己,这是使用 `libimobiledevice` 工具的好处之一。我可以把这些文件夹中的任何一个复制到我的 Linux 电脑上。我也可以在 iPhone 上创建文件夹并删除它们。
### 发现更多
[Martin Szulecki][6] 是该项目的首席开发者。该项目正在寻找开发者加入他们的 [社区][7]。Libimobiledevice 可以改变你使用外设的方式,而无论你在什么平台上。这是开源的又一次胜利,这意味着它是所有人的胜利。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/libimobiledevice-iphone-linux
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
[2]: https://github.com/libimobiledevice/libimobiledevice/
[3]: https://opensource.com/sites/default/files/1trust_0.png
[4]: https://opensource.com/sites/default/files/2docks.png
[5]: https://opensource.com/sites/default/files/2iphoneicon.png
[6]: https://github.com/FunkyM
[7]: https://libimobiledevice.org/#community

View File

@ -0,0 +1,105 @@
[#]: subject: "KDE Plasma 5.23 New Features and Release Dates"
[#]: via: "https://www.debugpoint.com/2021/08/kde-plasma-5-23/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "imgradeone"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13719-1.html"
KDE Plasma 5.23 的新功能和发布日期
======
![](https://img.linux.net.cn/data/attachment/album/202108/25/222802zwhmvv1vwzusevzw.jpg)
> 我们在这篇文章中总结了即将到来的 KDE Plasma 5.23 的新功能,包括主要特点、下载和测试说明。
KDE Plasma 桌面是当今最流行、最顶级的 Linux 桌面环境,而 KDE Plasma 的热度之高主要得益于其适应能力强、迭代发展迅速,以及性能不断提高。[KDE Plasma 5.22][1] 发布以来KDE 团队一直忙于为即将到来的 KDE Plasma 5.23 合并更改和测试新功能。目前 KDE Plasma 5.23 仍在开发中,如下是暂定的时间表。
### KDE Plasma 5.23 发布时间表
KDE Plasma 5.23 将于 2021 年 10 月 7 日发布,以下是时间表:
* Beta 公测 2021 年 9 月 16 日
* 最终发布 2021 年 10 月 7 日
正如每个 Plasma 版本更新一样,本次更新也同样承诺对核心 Plasma Shell 和 KDE 应用进行大幅更改、代码清理、性能改进、数百个 bug 修复、Wayland 优化等。我们在本篇文章中收集了一些重要的功能,让你对即将发布的新功能有基本了解。下面就让我们看看。
### KDE Plasma 5.23 新功能
* 本次版本更新基于 Qt 5.15 版本KDE 框架 5.86 版本。
#### Plasma Shell 和应用程序更新
* 本次 KDE Plasma 的 Kickoff 程序启动器将有大幅更新,包括 bug 修复、减少内存占用、视觉更新、键鼠导航优化。
* Kickoff 程序启动器菜单允许使用固定按钮固定在桌面上,保持开启状态。
* Kickoff 的标签不会在你滚动时切换(从应用标签到位置标签)。
* Kickoff 里可以使用 `CTRL+F` 快捷键直接聚焦到搜索栏。
* Kickoff 中的操作按钮(如关机等)可以设置为仅显示图标。
* 现在可以针对所有 Kickoff 项目选择使用网格或列表视图(而不仅仅局限于收藏夹)。
![KDE Plasma 5.23 中 Kickoff 程序启动器新增的选项][2]
![Kickoff 程序启动器的更改][3]
* 新增基于 QML 的全新概览视图(类似 GNOME 3.38 的工作区视图),用于展示所有打开的窗口(详见如下视频)。目前我找不到关于此合并请求的更多详情,而且这个新视图也很不稳定。
![](https://www.debugpoint.com/blog/wp-content/uploads/2021/08/New-Overview-effect-in-KDE-Plasma-5.23.mp4)
_视频作者KDE 团队_
* 该概览效果将替代现有的“展现窗口”特效和“虚拟桌面平铺网格”特效(计划中)。
* 未连接触控板时将展示更易察觉的“未找到触摸板”提示。
* “电源配置方案”设置现在呈现于 Plasma UI电池和亮度窗口中。电源配置方案功能从 Linux 内核 5.12 版本开始已经登陆戴尔和联想的笔记本电脑了。因此如果你拥有这些品牌的较新款笔记本电脑你可以将电源配置方案设置为高性能或省电模式。_[注Fedora 35很大可能会在 GNOME 41 中增加该功能]_
![新的“电源配置方案”设置][4]
* 如果你有多屏幕设置,包括垂直和横向屏幕,那么登录屏幕现在可以正确同步和对齐。这个功能的需求度很高。
* 新的 Breeze 主题预计会有风格上的更新。
* 如前序版本一样,预计会有全新的壁纸(目前壁纸大赛仍在进行中)。
* 新增当硬件从笔记本模式切换到平板模式时是否缩放系统托盘图标的设置。
* 你可以选择在登录时的蓝牙状态:总是启用、总是禁用、记住上一次的状态。该状态在版本升级后仍可保留。
* 用户现在可以更改传感器的显示名称。
* Breeze 风格的滚动条现在比之前版本的更宽。
* Dolphin 文件管理器提供在文件夹前之前优先显示隐藏文件的新选项。
* 你现在可以使用 `DEL` 键删除剪贴板弹窗中选中的项目。
* KDE 现在允许你直接从 Plasma 桌面,向 store.kde.org 提交你制作的图标和主题。
#### Wayland 更新
* 在 Wayland 会话中,运行程序时光标旁也会展示图标反馈动画。
* 现在可以从通知中复制文字。
* 中键单击粘贴功能现在可以在 Wayland 和 XWayland 应用程序中正常使用。
请务必牢记,每个版本都有数以百计的 bug 修复和改进。本文仅仅包括了我收集的表面层次的东西。因此,如果想了解应用程序和 Plasma Shell 的变更详情,请访问 GitLab 或 KDE Planet 社区。
### 不稳定版本下载
你现在可以通过下方的链接下载 KDE neon 的不稳定版本来体验上述全部功能。直接下载 .iso 文件,然后安装测试即可。请务必在发现 bug 后及时反馈。该不稳定版本不适合严肃场合及生产力设备使用。
- [下载 KDE neon 不稳定版本][5]
### 结束语
KDE Plasma 5.23 每次发布都在改进底层、增加新功能。虽然这个版本不是大更新,但一切优化、改进最终都将累积成稳定性、适应性和更好的用户体验。当然,还有更多的 Wayland 改进(讲真Wayland 兼容性看上去一直处在“正在进行中”的状态,就像十年过去了,它还在开发中一样。当然,这是另一个话题了)。
再会。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/kde-plasma-5-23/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[imgradeone](https://github.com/imgradeone)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/2021/06/kde-plasma-5-22-release/
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/New-Kickoff-Options-in-KDE-Plasma-5.23.jpeg
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Changes-in-kickoff.jpeg
[4]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/New-power-profiles.jpeg
[5]: https://neon.kde.org/download

View File

@ -0,0 +1,173 @@
[#]: subject: "How to include options in your Bash shell scripts"
[#]: via: "https://opensource.com/article/21/8/option-parsing-bash"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13730-1.html"
如何在 Bash shell 脚本中解析命令行选项
======
> 给你的 shell 脚本添加选项。
![](https://img.linux.net.cn/data/attachment/album/202108/29/110849lvhr1bjg1r43sfcx.jpg)
终端命令通常具有 [选项或开关][2],用户可以使用它们来修改命令的执行方式。关于命令行界面的 [POSIX 规范][3] 中就对选项做出了规范,这也是最早的 UNIX 应用程序建立的一个由来已久的惯例,因此你在创建自己的命令时,最好知道如何将选项包含进 [Bash 脚本][4] 中。
与大多数语言一样,有若干种方法可以解决 Bash 中解析选项的问题。但直到今天,我最喜欢的方法仍然是我从 Patrick Volkerding 的 Slackware 构建脚本中学到的方法,当我第一次发现 Linux 并敢于冒险探索操作系统所附带的纯文本文件时,这些脚本就是我的 shell 脚本的引路人。
### Bash 中的选项解析
在 Bash 中解析选项的策略是循环遍历所有传递给 shell 脚本的参数,确定它们是否是一个选项,然后转向下一个参数。重复这个过程,直到没有选项为止。
```
#!/bin/bash
while [ True ]; do
if [ "$1" = "--alpha" -o "$1" = "-a" ]; then
    ALPHA=1
    shift 1
else
    break
fi
done
echo $ALPHA
```
在这段代码中,我创建了一个 `while` 循环,它会一直进行循环操作,直到处理完所有参数。`if` 语句会试着将在第一个位置(`$1`)中找到的参数与 `--alpha``-a` 匹配。(此处的待匹配项是任意选项名称,并没有特殊意义。在实际的脚本中,你可以使用 `--verbose``-v` 来触发详细输出)。
`shift` 关键字会使所有参数位移一位,这样位置 2`$2`)的参数移动到位置 1`$1`)。处理完所有参数后会触发 `else` 语句,进而中断 `while` 循环。
在脚本的末尾,`$ALPHA` 的值会输出到终端。
测试一下这个脚本:
```
$ bash ./test.sh --alpha
1
$ bash ./test.sh
$ bash ./test.sh -a
1
```
可以看到,选项被正确地检测到了。
### 在 Bash 中检测参数
但上面的脚本还有一个问题:多余的参数被忽略了。
```
$ bash ./test.sh --alpha foo
1
$
```
要想捕获非选项名的参数,可以将剩余的参数转储到 [Bash 数组][5] 中。
```
#!/bin/bash
while [ True ]; do
if [ "$1" = "--alpha" -o "$1" = "-a" ]; then
    ALPHA=1
    shift 1
else
    break
fi
done
echo $ALPHA
ARG=( "${@}" )
for i in ${ARG[@]}; do
    echo $i
done
```
测试一下新版的脚本:
```
$ bash ./test.sh --alpha foo
1
foo
$ bash ./test.sh foo
foo
$ bash ./test.sh --alpha foo bar
1
foo
bar
```
### 带参选项
有一些选项需要传入参数。比如,你可能希望允许用户设置诸如颜色或图形分辨率之类的属性,或者将应用程序指向自定义配置文件。
要在 Bash 中实现这一点,你仍然可以像使用布尔开关一样使用 `shift` 关键字,但参数需要位移两位而不是一位。
```
#!/bin/bash
while [ True ]; do
if [ "$1" = "--alpha" -o "$1" = "-a" ]; then
    ALPHA=1
    shift 1
elif [ "$1" = "--config" -o "$1" = "-c" ]; then
    CONFIG=$2
    shift 2
else
    break
fi
done
echo $ALPHA
echo $CONFIG
ARG=( "${@}" )
for i in ${ARG[@]}; do
    echo $i
done
```
在这段代码中,我添加了一个 `elif` 子句来将每个参数与 `--config``-c` 进行比较。如果匹配,名为 `CONFIG` 的变量的值就设置为下一个参数的值(这就表示 `--config` 选项需要一个参数)。所有参数都位移两位:其中一位是跳过 `--config``-c`,另一位是跳过其参数。与上节一样,循环重复直到没有匹配的参数。
下面是新版脚本的测试:
```
$ bash ./test.sh --config my.conf foo bar
my.conf
foo
bar
$ bash ./test.sh -a --config my.conf baz
1
my.conf
baz
```
### Bash 让选项解析变得简单
还有一些其他方法也可以解析 Bash 中的选项。你可以替换使用 `case` 语句或 `getopt` 命令。无论使用什么方法,给你的用户提供选项都是应用程序的重要功能,而 Bash 让解析选项成为了一件简单的事。
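作为对照,下面用 `case` 语句重写上文的解析循环(选项名沿用文中的示例,`set --` 只是为了在演示中模拟传入的命令行参数):

```shell
#!/bin/bash

# 模拟脚本收到的命令行参数
set -- --alpha --config my.conf foo bar

while true; do
    case "$1" in
        --alpha|-a)
            ALPHA=1
            shift 1
            ;;
        --config|-c)
            CONFIG=$2
            shift 2
            ;;
        *)
            break
            ;;
    esac
done

echo $ALPHA    # 1
echo $CONFIG   # my.conf
echo "$@"      # foo bar
```

与 `if`/`elif` 链相比,`case` 在分支较多时更易读,也省去了 `-o` 这类组合测试表达式。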
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/option-parsing-bash
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/terminal-commands_1.png?itok=Va3FdaMB (Terminal commands)
[2]: https://opensource.com/article/21/8/linux-terminal#options
[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[4]: https://opensource.com/downloads/bash-scripting-ebook
[5]: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays


@ -0,0 +1,70 @@
[#]: subject: "30 things you didn't know about the Linux kernel"
[#]: via: "https://opensource.com/article/21/8/linux-kernel"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13724-1.html"
关于 Linux 内核的 30 件你不知道的事
======
> Linux 内核今年 30 岁了。
![](https://img.linux.net.cn/data/attachment/album/202108/27/150006o152rdghq0zqr02f.jpg)
Linux 内核今年 30 岁了。这开创性的开源软件的三个十年,让用户能够运行自由软件,让他们能从运行的应用程序中学习,让他们能与朋友分享他们所学到的知识。有人认为,如果没有 Linux 内核,我们如今所享受的 [开源文化][2] 和自由软件的累累硕果,可能就不会应时而出现。如果没有 Linux 作为催化剂,苹果、微软和谷歌所开源的那些就不可能开源。Linux 作为一种现象,对开源文化、软件开发和用户体验的影响,是怎么强调都不为过的,但所有这一切,都滥觞于一个 Linux 内核。
Linux 内核是启动计算机、并识别和确保计算机内外所连接的所有组件之间通信的软件。对于这些大多数用户从未想过、更不用说能理解的代码Linux 内核有很多令人惊讶的地方。以下是 Linux 内核在其三十年生命中每一年的一件事。顺序无关。
1. Linux 是第一个具有 USB 3.0 驱动的操作系统。Sarah Sharp 在 2009 年 6 月 7 日宣布她的 USB 3.0 设备的驱动程序可以使用了,她的代码被包含在内核 2.6.31 版本中。
2. 当某些事件发生时,内核会将自己标记为“受污染”,这在以后的故障排除中可能有用。运行一个“被污染”的内核并不是什么问题。但如果出现错误,首先要做的是在一个没有被污染的内核上重现该问题。
3. 你可以指定一个主机名或域名作为 `ip=` 内核命令行选项的一部分Linux 会保留它,而不是用 DHCP 或 BOOTP 提供的主机名或域名来覆盖它。例如,`ip=::::myhostname::dhcp` 会设置主机名为 `myhostname`。
4. 在文本启动过程中可以选择显示黑白的、16 色的或 224 色的 Tux 徽标之一。
5. 在娱乐业中DRM 是一种用来防止访问媒介的技术。然而,在 Linux 内核中DRM 指的是<ruby>直接渲染管理器<rt>Direct Rendering Manager</rt></ruby>,即用于与显卡的 GPU 对接的库(`libdrm`)和驱动程序。
6. 能够在不重启的情况下给 Linux 内核打补丁。
7. 如果你自己编译内核,你可以将文本控制台配置为超过 80 列宽。
8. Linux 内核提供了内置的 FAT、exFAT 和 NTFS读和写支持。
9. Wacom 平板电脑和许多类似设备的驱动程序都内置在内核中。
10. 大多数内核高手使用 `git send-email` 来提交补丁。
11. 内核使用一个叫做 [Sphinx][3] 的文档工具链,它是用 Python 编写的。
12. Hamlib 提供了具有标准化 API 的共享库,可以通过你的 Linux 电脑控制业余无线电设备。
13. 我们鼓励硬件制造商帮助开发 Linux 内核,以确保兼容性。这样就可以直接处理硬件,而不必从制造商那里下载驱动程序。直接成为内核一部分的驱动程序也会自动从新版本内核的性能和安全改进中受益。
14. 内核中包含了许多树莓派模块Pi Hats的驱动程序。
15. netcat 乐队发布了一张只能作为 [Linux 内核模块][4] 播放的专辑。
16. 受 netcat 发布专辑的启发,人们又开发了一个 [把你的内核变成一个音乐播放器][5] 的模块。
17. Linux 内核的功能支持许多 CPU 架构ARM、ARM64、IA-64、 m68k、MIPS、Nios II、PA-RISC、OpenRISC、PowerPC、s390、 Sparc、x86、Xtensa 等等。
18. 2001 年Linux 内核成为第一个 [以长模式运行 x86-64 CPU 架构][6] 的内核。
19. Linux 3.4 版引入了 x32 ABI允许开发者编译在 64 位模式下运行的代码,而同时只使用 32 位指针和数据段。
20. 内核支持许多不同的文件系统,包括 Ext2、Ext3、Ext4、JFS、XFS、GFS2、GCFS2、BtrFS、NILFS2、NFS、Overlay FS、UDF 等等。
21. <ruby>虚拟文件系统<rt>Virtual File System</rt></ruby>VFS是 Linux 内核中的一个软件层,为用户运行的应用程序提供文件系统接口。它也是内核的一个抽象层,以便不同的文件系统实现可以共存。
22. Linux 内核包括一个实体的盲文输出设备的驱动程序。
23. 在 2.6.29 版本的内核中,启动时的 Tux 徽标被替换为 “Tuz”,以提高人们对当时影响澳大利亚的<ruby>塔斯马尼亚魔鬼<rt>Tasmanian Devil</rt></ruby>(即袋獾)种群的一种侵袭性癌症的认识。
24. <ruby>控制组<rt>Control Groups</rt></ruby>cgroups是使容器Docker、Podman、Kubernetes 等)能够存在的基础技术。
25. 曾经花了大量的法律行动来解放 CIFS以便将其纳入内核中而今天CIFS 模块已被内置于内核,以实现对 SMB 的支持。这使得 Linux 可以挂载微软的远程共享和基于云的文件共享。
26. 对于计算机来说,产生一个真正的随机数是出了名的困难(事实上,到目前为止是不可能的)。`hw_random` 框架可以利用你的 CPU 或主板上的特殊硬件功能,尽量改进随机数的生成。
27. _操作系统抖动_ 是应用程序遇到的干扰,它是由后台进程的调度方式和系统处理异步事件(如中断)的方式的冲突引起的。像这样的问题在内核文档中都有详细的讨论,可以帮助面向 Linux 开发的程序员写出更聪明的代码。
28. `make menuconfig` 命令可以让你在编译前使用 GUI 来配置内核。`Kconfig` 语言定义了内核配置选项。
29. 对于基本的 Linux 服务器,可以实施一个 _看门狗_ 系统来监控服务器的健康状况。在健康检查间隔中,`watchdog` 守护进程将数据写入一个特殊的 `watchdog` 内核设备,以防止系统重置。如果看门狗不能成功记录,系统就会被重置。有许多看门狗硬件的实现,它们对远程任务关键型计算机(如发送到火星上的计算机)至关重要。
30. 在火星上有一个 Linux 内核的副本,虽然它是在地球上开发的。
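上面第 2 条提到的内核“受污染”状态,可以通过标准的 `/proc` 接口在运行中的系统上查询。下面是一个简单的示意脚本(假设运行在 Linux 上;若该文件不存在则视为未受污染):

```shell
#!/bin/sh
# 读取内核的污染位掩码0 表示内核未受污染
taint=$(cat /proc/sys/kernel/tainted 2>/dev/null || echo 0)
if [ "$taint" -eq 0 ]; then
    echo "kernel not tainted"
else
    echo "kernel tainted (mask: $taint)"
fi
```

掩码中的每一位对应一种污染原因(例如加载了专有模块),具体含义可在内核文档中查到。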
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/linux-kernel
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/kernel-30.png?itok=xmwX2pCQ (30 years)
[2]: https://opensource.com/article/18/1/creative-commons-real-world
[3]: https://opensource.com/article/19/11/document-python-sphinx
[4]: https://github.com/usrbinnc/netcat-cpi-kernel-module
[5]: https://github.com/FlaviaR/Netcat-Music-Kernel-Expansion
[6]: http://www.x86-64.org/pipermail/announce/2001-June/000020.html


@ -1,118 +0,0 @@
[#]: subject: "KDE Plasma 5.23 New Features and Release Dates"
[#]: via: "https://www.debugpoint.com/2021/08/kde-plasma-5-23/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
KDE Plasma 5.23 New Features and Release Dates
======
We round up the features of KDE Plasma 5.23 (upcoming) in this post,
with major highlights and download/testing instructions.
KDE Plasma desktop is the most popular and top-class Linux desktop environment today. The main reasons are its adaptability, the rapid iteration of its development, and its performance improvements. Since the release of [KDE Plasma 5.22][1], the team has been busy merging changes and testing new features for the upcoming KDE Plasma 5.23. It is currently under development, with a tentative schedule as below.
### KDE Plasma 5.23 Schedule
KDE Plasma 5.23 releases on Oct 7, 2021. Here's the overall schedule:
* Beta: Sep 16, 2021
* Final release: Oct 7, 2021
Like every Plasma release, this iteration also promises a wide range of changes to the core Plasma Shell, KDE applications, code cleanups, performance improvements, hundreds of bug fixes, Wayland improvements, and more. We collected some important features in this post to give you an idea of what's incoming. Let's take a look.
### KDE Plasma 5.23 New Features
* This release is powered by Qt version 5.15, KDE Frameworks version 5.86.
##### Plasma Shell and App Updates
* The KDE Plasma Kickoff menu brings a huge set of updates that includes bug fixes, lower RAM usage, look-and-feel updates, and keyboard and mouse navigation improvements.
* The Kickoff menu can be kept open using a pin option.
* Kickoff tabs no longer change (from Applications to Places) when you scroll.
* Press Ctrl+F to directly focus the search bar in Kickoff.
* The action button captions in Kickoff (shut down, etc.) can be turned off via an option to show only icons.
* You can now choose either list or grid view for all Kickoff items (not only Favorites).
![New Kickoff Options in KDE Plasma 5.23][2]
![Changes in kickoff][3]
* A new QML-based Overview effect is introduced (much like the GNOME 3.38 workspace view) which shows the opened windows (have a look at this video). I could not find the merge request number for this for further detail, and it's still not in the unstable edition.
_Video credit: KDE team_
* This overview effect may replace the existing Present Windows effect and the Desktop Grid effect as well (planned).
* A more visible "No touchpad found" message is shown when there is no touchpad.
* You can now have the Power Profile settings in the Plasma UI (Battery and Brightness window). This power profile feature has been available since Linux kernel 5.12 for Dell and Lenovo laptops. So, if you have a recent laptop from these brands, you can now set your power profile to either a more performant mode or a power-saving mode. _[Note: Fedora 35 is expected to bring this feature to GNOME 41 (probably)]_
![New power profiles][4]
* If you have a multi-screen setup with, say, a portrait and a landscape screen, the login screen is now properly synced and aligned. This was a much-needed feature.
* A new Breeze theme is expected with style updates.
* A brand-new wallpaper is expected, like prior releases (the competition is still going on).
* A new setting to resize system tray icons when your hardware changes from laptop mode to tablet mode.
* You can now choose the Bluetooth status on login: always enabled, always disabled, or remember the previous status. This status can be carried over version upgrades.
* Users can now change the displayed name of sensors on a per-face basis.
* The scrollbar handle in Breeze style is now a little thicker than previous editions.
* A new option in the Dolphin file manager enables you to show hidden files first, before folders.
* You can delete selected items in the clipboard popup using the DEL key.
* KDE now enables you to contribute your designed icons and themes to store.kde.org directly from the Plasma desktop.
##### Wayland Updates
* When you launch applications, the cursor now shows the animated icon feedback in Wayland sessions.
* Copying text from notifications now works.
* Middle click paste now works in Wayland and XWayland applications.
Remember, hundreds of bug fixes and improvements land in each release. The items I collected here merely scratch the surface. So, make sure to visit GitLab or Planet KDE to learn more about the changes in applications and the Plasma shell.
### Unstable Edition Download
You can experience all the above features right now via the KDE Neon Unstable edition at the link below. Download the .iso and test it. Make sure to report any bugs you find. This unstable edition is not for serious usage or production deployments.
[KDE NEON UNSTABLE EDITION][5]
### Closing Notes
KDE Plasma keeps improving under the hood with new features in every release. Although this release is not going to be a massive one, all these optimizations and improvements eventually add up to stability, adaptability, and a better user experience. And there are more Wayland updates (seriously, Wayland compatibility has seemed like a "work in progress" for about a decade now. That's another topic for discussion).
Cheers.
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/kde-plasma-5-23/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/2021/06/kde-plasma-5-22-release/
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/New-Kickoff-Options-in-KDE-Plasma-5.23.jpeg
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Changes-in-kickoff.jpeg
[4]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/New-power-profiles.jpeg
[5]: https://neon.kde.org/download


@ -0,0 +1,81 @@
[#]: subject: "“Apps for GNOME” is a New Web Portal to Showcase Best Linux Apps for GNOME"
[#]: via: "https://news.itsfoss.com/apps-for-gnome-portal/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
“Apps for GNOME” is a New Web Portal to Showcase Best Linux Apps for GNOME
======
There are several apps built for GNOME. Most of the stock (default) GNOME apps do not get enough spotlight as a separate mention.
While Flathub as a platform helps highlight some fantastic applications for GNOME, it is limited to Flatpak apps only.
Also, it is not just dedicated to GNOME, of course.
Hence, there is a new website to focus more on the GNOME ecosystem and highlight the best GNOME apps.
### Apps for GNOME
![][1]
A [blog post][2] by Sophie Herold on Planet GNOME announced the availability of the platform.
[apps.gnome.org][3] is where you can find all the GNOME apps, both default and third-party applications tailored primarily for the GNOME environment.
With this portal, they aim to encourage users to participate and contribute to the development of such applications.
When you head to explore an app on the platform, you will be presented with plenty of information that includes where to submit feedback for the app, help translate, and contribute financially.
![][4]
It is nothing out of the ordinary, but it presents all the information related to a GNOME app in a single place.
You get a complete picture for a GNOME app starting with the description, screenshots, latest version, information about the maintainers, and translation status.
![][5]
Not just limited to desktop GNOME apps, you will also find applications marked with a mobile icon if they are supported on GNOME mobile devices.
In addition to the key GNOME apps, it also aims to feature applications that do not offer a Flatpak package but suit the GNOME platform well.
[Apps for GNOME][3]
### Making Information More Accessible
I find it much more insightful than what Flathub seems to provide. And, I think this is not just going to help highlight GNOME apps, but it should help new users get to know more about the applications they use.
Of course, it should also encourage users to get involved, which is the primary focus.
While KDE already had an [application portal][6], it might need an upgrade if they take Apps for GNOME as an example to improve.
_What do you think about the Apps for GNOME initiative?_ _Feel free to share your thoughts in the comments._
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/apps-for-gnome-portal/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU2MiIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[2]: https://blogs.gnome.org/sophieh/2021/08/26/apps-gnome-org-is-online/
[3]: https://apps.gnome.org
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjI4MiIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ3OSIgd2lkdGg9Ijc0NCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[6]: https://apps.kde.org


@ -0,0 +1,122 @@
[#]: subject: "Open Source Video Editor OpenShot 2.6 Released With AI Effects & Major Improvements"
[#]: via: "https://news.itsfoss.com/openshot-2-6-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Open Source Video Editor OpenShot 2.6 Released With AI Effects & Major Improvements
======
OpenShot is one of the most popular [open-source video editors][1] out there.
It is not just for Linux, but it is an impressive free video editor for Windows and Mac users as well.
While it was already a functional, easy-to-use, feature-rich video editor, it stepped up a notch with the latest release.
Here, we discuss some key additions in OpenShot 2.6.0 release.
### OpenShot 2.6.0 Released: Whats New?
![][2]
The primary highlight of this release is the inclusion of AI and computer vision effects. But, there is more to it than meets the eye.
Here are the highlights for OpenShot 2.6.0 changes:
* New AI and computer vision effects
* New audio effects
* New zoom slider
* Improved transform tool
* Improved video effects
* Improved snapping
* More emoji support
* Improved performance
* Bug fixes
Considering the fundamental changes, OpenShot is now a more compelling option for professional video editors.
![Official YouTube video for OpenShot 2.6][3]
### AI Effects
Taking the help of an AI to process images/videos is becoming increasingly common these days.
Hence, OpenShot adds support for AI effects to make it easier to enhance and edit videos.
One of the features eliminates shake/motion in a video by calculating and compensating for it.
![][4]
You can also track particular objects in a video. This is undoubtedly helpful for animation or any other creative work where you need to follow a specific element of the video.
Much like a real-time camera feed that detects vehicles, it can also identify objects in a video. While this feature is in beta, it should be fun to experiment with.
### Audio Effects
The OpenShot video editor already featured most of the essential audio effects. In this release, some more important audio effects have been added, including:
* Compressor
* Expander
* Echo
* Delay
* Distortion
* Noise
* EQ
* Robotic voice and whispering voice effects
### New & Improved Tools
![][5]
Vital tools in snapping and transform mode have been improved.
The improved transform tool lets you resize, rotate, and work seamlessly to create complex animations.
Furthermore, when trimming a clip, the snapping tool allows you to better align the edges of the clips.
A new zoom slider tool has been added to give you better control over the timeline. You can easily drag and work with a specific portion of the timeline as needed.
### Other Improvements
In addition to the essential changes, you can find performance improvements and numerous bug fixes.
You can find the latest version as an AppImage file as of now. It should show up soon in the Flathub repository and other sources as well. Consider reading [how to use AppImage files][6] if you are not familiar with them.
[Download OpenShot 2.6.0][7]
To explore more about the release, you may refer to the [official release announcement][8].
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/openshot-2-6-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/open-source-video-editors/
[2]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjYwOSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[3]: https://i0.wp.com/i.ytimg.com/vi/06sgvsYB378/hqdefault.jpg?w=780&ssl=1
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQzOCIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjM2MSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[6]: https://itsfoss.com/use-appimage-linux/
[7]: https://www.openshot.org/download/
[8]: https://www.openshot.org/blog/2021/08/25/new_openshot_release_260/


@ -0,0 +1,154 @@
[#]: subject: "Linux Kernel 5.14 Released Right After the 30th Anniversary of Linux"
[#]: via: "https://news.itsfoss.com/kernel-5-14-release/"
[#]: author: "Jacob Crume https://news.itsfoss.com/author/jacob/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux Kernel 5.14 Released Right After the 30th Anniversary of Linux
======
Back in June, I looked at [Linux Kernel 5.13][1], where we received preliminary support for the M1, RISC-V improvements, and support for new GPUs.
Now, Linux kernel 5.14 is here! Linus Torvalds just [announced it on the kernel mailing list][2]:
![Kernel 5.14 announcement mail][3]
While this release is not quite as large as the aforementioned one, it still has many improvements, especially for ARM devices.
Let us take a quick look at the key highlights of this release.
### Linux Kernel 5.14: Whats New?
Linux kernel 5.14 contains a wide variety of new features, especially for ARM-based systems. This is all happening despite Linus Torvalds claiming that this is a relatively small release in the initial [kernel announcement][4].
Fast forward to release candidate 7, just before the final release, where Linus mentioned:
> Most of the changes here are drivers (GPU and networking stand out), and the rest is pretty random stuff: arch, tracing, core networking, a couple of VM fixes..
Linus Torvalds, Linux kernel 5.14 RC7 announcement
This release contains a variety of new features. Here is a list of the key new features present in Linux kernel 5.14:
* The [Raspberry Pi 400][5] can now work completely with this kernel, thanks to the work done for the past couple of months.
* The [Rockchip RK3568 SoC][6] is now supported
* Initial support for the Sony Xperia 1/1II and 5/5II
* Various updates added for Microsoft Surface Duo
* Updates to DIY BananaPi M5 board added
* [Important updates][7] for RISC-V
* Improved support for Intel Alder Lake P and Alder Lake M graphics cards
* New hot-unplug support on AMD Radeon graphics cards
* Secret memory areas introduced with a new system called memfd_secret
* Improvements to [lower the latency of its USB audio drivers][8]
* Improved support for USB4
* Initial groundwork to support Intel Alder Lake processors
In this article, we will be looking at what these features are, and what they mean for the end user.
#### Raspberry Pi 400
Last year, the Raspberry Pi Foundation launched the [Raspberry Pi 400][5], a keyboard computer similar to those of the 1980s. Unfortunately, this computer requires a custom kernel version to function due to non-mainline drivers.
However, with the kernel 5.14 release, this appears to have changed. After months of development, the Raspberry Pi 400 can now be booted using Linux kernel 5.14. While it is unfortunate that support took this long, it is much better late than never.
#### RK35xx SoC Support
This year has truly been a glorious year for [Rockchip][9]. They started off by launching their rk35xx series of SoCs, with many manufacturers integrating the newly-released SoCs into their products.
One of the most notable uses of the RK35xx series is in the Quartz64, an SBC developed by [Pine64][10] (which I am currently helping mainline). And Linux 5.14 brings support for one of these SoCs, the RK3568.
For all the upcoming boards based on this SoC, this inclusion is extremely important as it greatly simplifies distro porting.
#### Initial Support for Sony Xperia 1/1II and 5/5II
[Sony][11] is one of the few mobile phone manufacturers that actively support running Linux on their phones. This is demonstrated through their compatibility with operating systems such as [Sailfish OS][12] and [Ubuntu Touch][13].
Now, with the Sony Xperia 1/1II and 5/5II being mainlined, it should be much easier to get an even wider variety of distributions booted. However, it should also be kept in mind that this is only initial support, much like Linux 5.13's M1 support.
#### RISC-V Updates
One of the trends I have noticed over the past few kernel updates is the ever-improving support for [RISC-V][14] processors. Last update, we got some significant build system improvements, a re-arranged kernel memory map, and support for the kernel debugging module KProbes.
This time, it appears that this trend is continuing, with the addition of a few RISC-V-specific improvements. These include:
* Support for transparent huge pages
* An optimized copy_{to,from}_user.
* Generic PCI resources mapping support
* Support for KFENCE (Kernel Electric Fence) for memory safety error detection/validation
While mostly minor, these updates should pave the way for future RISC-V based devices.
#### Radeon Hot-Unplug
Perhaps my favorite feature of this release, AMD Radeon cards are getting a new hot-unplug feature. Previously, ripping your GPU out while your system was running would result in a kernel panic. Now, you can remove your (Radeon) GPU at any time and your system will continue to function normally, at least in theory.
I just hope that this feature works better on Linux than my experience with it on Windows. While I wouldn't recommend randomly pulling your GPU out of your system mid-update, it is still a nice feature to see, and it will be interesting to see what people do with it.
#### USB 4 Support
As we see an increasing number of new laptops shipping with USB 4, it has become more and more important for Linux to start supporting it. Fortunately, the Linux kernel 5.14 has a wide variety of improvements for USB 4 users.
These include:
* More USB 4 support added to the thunderbolt core
* Build warning fixes all over the place
* USB-serial driver updates and new device support
* A wide variety of driver updates
* Lots of other tiny things
While not game-changing, these improvements should help many current and future users of USB 4.
### Wrapping Up
Between the improved USB support, multitude of updates for ARM and RISC-V devices, and minor GPU upgrades, this release is looking pretty good. As I mentioned before, I am most excited about the Radeon hot-unplug support, as this should make GPU swapping that little bit easier.
Similarly to last time, I'd recommend waiting for your distribution to offer official updates before upgrading to Linux kernel 5.14. Fortunately, users of distributions such as Arch and Manjaro should receive the updates very shortly. [Advanced Ubuntu users can install the latest mainline kernel][15] with some effort, though it should be avoided.
_What do you think about the improvements in Linux Kernel 5.14? Let me know down in the comments!_
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/kernel-5-14-release/
作者:[Jacob Crume][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/linux-kernel-5-13-release/
[2]: https://lkml.org/lkml/2021/8/29/382
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ1NiIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[4]: http://lkml.iu.edu/hypermail/linux/kernel/2107.1/02943.html
[5]: https://www.raspberrypi.org/products/raspberry-pi-400/
[6]: https://www.96rocks.com/blog/2020/11/28/introduce-rockchip-rk3568/
[7]: https://lore.kernel.org/lkml/mhng-423e8bdb-977e-4b99-a1bb-b8c530664a51@palmerdabbelt-glaptop/
[8]: http://lkml.iu.edu/hypermail/linux/kernel/2107.1/00919.html
[9]: https://www.rock-chips.com/a/en/index.html
[10]: http://pine64.org
[11]: https://electronics.sony.com/c/mobile
[12]: https://sailfishos.org/
[13]: https://ubuntu-touch.io/
[14]: https://riscv.org/
[15]: https://itsfoss.com/upgrade-linux-kernel-ubuntu/


@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/open-organization/21/8/leadership-cultural-social-norms"
[#]: author: "Ron McFarland https://opensource.com/users/ron-mcfarland"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: translator: "zz-air"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
@ -57,7 +57,7 @@ via: https://opensource.com/open-organization/21/8/leadership-cultural-social-no
作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[zz-air](https://github.com/zz-air)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,99 @@
[#]: subject: "10 steps to more open, focused, and energizing meetings"
[#]: via: "https://opensource.com/open-organization/21/8/10-steps-better-meetings"
[#]: author: "Catherine Louis https://opensource.com/users/catherinelouis"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
10 steps to more open, focused, and energizing meetings
======
Constructing your meetings with open organization principles in mind can
help you avoid wasted time, effort, and talent.
![Open lightbulbs.][1]
The negative impact of poorly run meetings is [huge][2]. So leaders face a challenge: how do we turn poorly run meetings—which have a negative impact on team creativity and success, and even cause stress and anxiety—into meetings with positive outcomes? To make the situation even tougher, we now find most meetings are being held remotely, online, where attendees' cameras are off and you're likely staring at a green dot at the top of your screen. That makes holding genuinely productive and useful meetings an even greater challenge.
Thinking about your meetings differently—and constructing your meetings with [open organization principles][3] in mind—can help turn your next remote meeting into an energizing experience with positive outcomes. Here are some guidelines to get you started. I'll explain steps you can take as you _prepare for_, _hold_, and _follow up from_ your meetings.
### Preparing for your meeting:
#### 1. Protect everyone's time
First, you'll need to reflect on the reason you're calling a meeting in the first place. As a meeting leader, you must recognize your role as the person who could kill productivity and destroy the ability for attendees to be mindfully present. By holding a meeting and asking people to be there, you are removing hours from people's days, exhausting the time they have to spend—and time is a non-replenishable resource. So imagine instead that you are a _guardian_ of people's time. You need to treat their time with respect. Consider that the only reason _why_ you're holding a meeting in the first place is to _keep from_ wasting time. For example, if you see a group thrashing over a decision again and again because the wrong people were involved in an email chain, instead suggest holding a half-hour meeting to reach a consensus, thereby saving everyone's time in the end. One way to think about this: Treat employees the same way you'd treat your customers. You would never want a customer to feel they were invited to a meeting that was a waste of their time. Adopting this mindset, you'll instantly become sensitive to scheduling meetings over someone's lunch hour. If you commit to becoming a time saver, you'll become more intentional in _all aspects_ of meeting planning, executing, and closing. And you will get better and better at this as a result.
#### 2. Use tools to be more inclusive
Like all meetings, remote meetings can contain their moments of silence, as people think, reflect, or take notes. But don't take silence as an indication of understanding, agreement, or even presence. You want to hear input and feedback from everyone at the meeting—not just those who are most comfortable or chatty. Familiarize yourself with some of the growing list of fantastic apps (Mentimeter, Klaxoon, Sli.do, Meeting pulse, Poll Everywhere, and other [open source tools][4]) designed to help attendees collaborate during meetings, even vote and reach a consensus. Make use of video when you can. Use your chat room technology for attendees to share if they missed something or raise a hand to speak, or even as a second channel of communication. Remote meeting tools are multiplying at an exponential rate; bone up on new ones to keep meetings interesting and help increase engagement.
#### 3\. Hone your invitation list
When preparing invitations to your meeting, keep the list as small as possible. The larger the group, the more time you'll need, and the quality of a meeting tends to decrease as its size increases. One way to make sure you have the right people: instead of sending out topics for discussion, send a preliminary note to participants you think could add value, and solicit questions. Those who answer the preliminary questions are the people who need to attend, so those are the people to invite.
Treat employees the same way you'd treat your customers. You would never want a customer to feel they were invited to a meeting that was a waste of their time.
#### 4\. Time box everything, and adapt
With our shorter attention spans, [stop defaulting to hour-long meetings][5]. Don't hesitate to schedule even just 15 minutes for a meeting. Reducing the meeting length creates positive pressure; [research shows][6] that groups operating under a level of time pressure (using time boxing) perform better due to increased focus. Imagine that after five minutes of one person speaking, everyone else in the meeting will begin to multitask. This means that as a facilitator you have just five minutes to present a concept. Use just these five minutes, then ask for connections: Who knows what about this topic? You'll learn there are experts in the room. Time box this activity, too, to five minutes. Next, break into small groups to discuss concrete practices and steps. Time box this for just 10 minutes, then share how far folks got in that shortened time box. Iterate and adjust for the next time box, and reserve yet another one for takeaways and conclusions.
#### 5\. Make your agenda transparent
Make meeting details as transparent as possible to everyone who's invited. The meeting agenda, for example, should have a desired outcome in the subject line. The opening line of the agenda should state clearly why the meeting needs to be held. For example:
"The choice of go-forward strategy for Product A has been thrashing for two weeks with an estimate of 60 or more hours of back and forth discussion. This meeting is being called with the people involved to agree on our go-forward plan."
Agenda details should outline the time boxes you've planned to accomplish the goal. Logistics are critical: if you wish cameras to be on, ask for cameras to be on. And even though you've thought thoroughly about your invitee list, note that you may still have invited someone who doesn't need to be there. Provide an opt-out opportunity. For example:
"If you feel you cannot contribute to this meeting and someone else should, please reach out to me with their contact information."
### Conducting your meeting:
#### 6\. Be punctual
Start and end the meeting on time! Arrive early to check the technology. As the meeting leader, recognize that your mood will set the tone for your attendees. So consider beginning the meeting with appreciations, recognitions, and statements of gratitude. Beginning a meeting on a positive note establishes a positive mood and promotes creativity, active listening, and participation. You'll find your meeting will be more constructive as a result.
Like all meetings, remote meetings can contain their moments of silence, as people think, reflect, or take notes. But don't take silence as an indication of understanding, agreement, or even presence.
#### 7\. Engineer your meeting's culture
In the meeting itself, use Strategyzer's [Culture map][7] to create the culture you want for that meeting. You do this by agreeing on the desired outcome of the meeting, asking what can enable or block attendees from achieving this outcome, and identifying the behaviors the group must exhibit to make this happen. Silently brainstorm with post-its on a jamboard, then have folks actively share what can make this meeting successful for all.
#### 8\. Invite collaboration
In openly run meetings, the best ideas should emerge. But this can only happen with your help. Recognize your role as a meeting leader who must remain neutral and encourage collaboration. Look for those who aren't participating and provide tools (or encouragement) that will help them get involved. For example, instead of verbal brainstorming, do a silent and anonymous brainstorm using stickies in a jamboard. You'll begin to see participation. Stick to the agenda and its time boxes, and watch for folks that talk over others: 
"Sara, Fred wasn't finished with his thought. Please let him finish."
### Closing and reviewing your meeting:
#### 9\. Write it down
Openly run meetings should result in openly recorded outcomes. Be sure your agenda includes time for the group to clarify takeaways, assign action items, and identify stakeholders who'll be responsible for completing work.
#### 10\. Close the loop
Finally, review the meeting with a retrospective. Ask for feedback on the meeting itself. What worked in your facilitation? What was lacking? Does anyone have ideas for ways to improve the next meeting? Were any questions unanswered? Any epiphanies reached? Taking in this feedback, come up with a new experiment for the next meeting to address the improvements. Attendees at your next meeting will be more than grateful, and in the long run you'll improve your meeting facilitation skills.
The path to collaboration is usually paved with the best intentions. We all know too well that this...
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/21/8/10-steps-better-meetings
作者:[Catherine Louis][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/catherinelouis
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_openlightbulbs.png?itok=nrv9hgnH (Open lightbulbs.)
[2]: https://ideas.ted.com/the-economic-impact-of-bad-meetings/
[3]: https://theopenorganization.org/definition
[4]: https://opensource.com/article/20/3/open-source-working-home
[5]: https://opensource.com/open-organization/18/3/open-approaches-meetings
[6]: https://learn.filtered.com/hubfs/Definitive%20100%20Most%20Useful%20Productivity%20Hacks.pdf
[7]: https://www.strategyzer.com/blog/posts/2015/10/13/the-culture-map-a-systematic-intentional-tool-for-designing-great-company-culture

View File

@ -0,0 +1,67 @@
[#]: subject: "When Linus Torvalds Was Wrong About Linux (And I am Happy He Was Wrong)"
[#]: via: "https://news.itsfoss.com/trovalds-linux-announcement/"
[#]: author: "Abhishek https://news.itsfoss.com/author/root/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
When Linus Torvalds Was Wrong About Linux (And I am Happy He Was Wrong)
======
Linus Torvalds, the creator of the Linux kernel and Git, needs no introduction.

A shy geek who does not talk much in public and prefers mailing lists. He loves code and gadgets more than most things, and prefers working from home to spending time in shiny offices.

Torvalds expresses his opinions on Linux-related matters quite vocally. We can't forget the "finger to Nvidia" moment that forced Nvidia to improve its Linux support (it was way worse back in 2012).

Generally, I agree with his opinions, and most often his views have turned out to be correct. Except in this one case (and that's a good thing).

### Torvalds' "incorrect prediction" on Linux
30 years ago, Torvalds announced the Linux project. He was a university student at the time and wanted to create a UNIX-like operating system because UNIX itself was too costly.

While announcing the project, Torvalds mentioned that it was just a hobby and wouldn't be big and professional like GNU.

> I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones.

Linus Torvalds while announcing the Linux project

Little did Torvalds know that his hobby would become the backbone of today's IT world and the face of a successful open source project.

Here's the complete message he sent:
Hello everybody out there using minix

I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them 🙂

Linus ([torv…@kruuna.helsinki.fi][1])

PS. Yes it's free of any minix code, and it has a multi-threaded fs. It is NOT protable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.
That was on 25th August 1991. Torvalds announced the Linux project, and then on 5th October 1991, he released the first Linux kernel. The [interesting fact about Linux][2] is that it was not open source initially. It was released under the GPL license a year later.
The Linux Kernel is 30 years old today. Happy 30th to this amazing open source project.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/trovalds-linux-announcement/
作者:[Abhishek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/root/
[b]: https://github.com/lujun9972
[1]: https://groups.google.com/
[2]: https://itsfoss.com/facts-linux-kernel/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (unigeorge)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -292,7 +292,7 @@ via: https://opensource.com/article/20/2/external-libraries-java
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,183 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (unigeorge)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A beginners guide to SSH for remote connection on Linux)
[#]: via: (https://opensource.com/article/20/9/ssh)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
A beginners guide to SSH for remote connection on Linux
======
Establish connections with remote computers using secure shell.
![woman on laptop sitting at the window][1]
One of Linux's most appealing features is the ability to skillfully use a computer with nothing but commands entered into the keyboard—and better yet, to be able to do that on computers anywhere in the world. Thanks to OpenSSH, [POSIX][2] users can open a secure shell on any computer they have permission to access and use it from a remote location. It's a daily task for many Linux users, but it can be confusing for someone who has yet to try it. This article explains how to configure two computers for secure shell (SSH) connections, and how to securely connect from one to the other without a password.
### Terminology
When discussing more than one computer, it can be confusing to identify one from the other. The IT community has well-established terms to help clarify descriptions of the process of networking computers together.
* **Service:** A service is software that runs in the background so it can be used by computers other than the one it's installed on. For instance, a web server hosts a web-sharing _service_. The term implies (but does not insist) that it's software without a graphical interface.
  * **Host:** A host is any computer. In IT, computers are called _hosts_ because technically any computer can host an application that's useful to some other computer. You might not think of your laptop as a "host," but you're likely running some service that's useful to you, your mobile, or some other computer.
* **Local:** The local computer is the one you or some software is using. Every computer refers to itself as `localhost`, for example.
* **Remote:** A remote computer is one you're not physically in front of nor physically using. It's a computer in a _remote_ location.
Now that the terminology is settled, you can begin.
### Activate SSH on each host
For two computers to be connected over SSH, each host must have SSH installed. SSH has two components: the command you use on your local machine to start a connection, and a _server_ to accept incoming connection requests. Some computers come with one or both parts of SSH already installed. The commands to verify whether you have both the command and the server installed vary from system to system, so the easiest method is to look for the relevant configuration files:
```
$ file /etc/ssh/ssh_config
/etc/ssh/ssh_config: ASCII text
```
Should this return a `No such file or directory` error, then you don't have the SSH command installed.
Do a similar check for the SSH service (note the `d` in the filename):
```
$ file /etc/ssh/sshd_config
/etc/ssh/sshd_config: ASCII text
```
Install one or the other, as needed:
```
$ sudo dnf install openssh-clients openssh-server
```
On the remote computer, enable the SSH service with systemd:
```
$ sudo systemctl enable --now sshd
```
Alternatively, you can enable the SSH service from within **System Settings** on GNOME or **System Preferences** on macOS. On the GNOME desktop, it's located in the **Sharing** panel:
![Activate SSH in GNOME System Settings][3]
(Seth Kenlon, [CC BY-SA 4.0][4])
### Start a secure shell
Now that you've installed and enabled SSH on the remote computer, you can try logging in with a password as a test. To access the remote computer, you must have a user account and a password.
Your remote user doesn't have to be the same as your local user. You can log in as any user on the remote machine as long as you have that user's password. For instance, I'm `sethkenlon` on my work computer, but I'm `seth` on my personal computer. If I'm on my personal computer (making it my current local machine) and I want to SSH into my work computer, I can do that by identifying myself as `sethkenlon` and using my work password.
To SSH into the remote computer, you must know its internet protocol (IP) address or its resolvable hostname. To find the remote machine's IP address, use the `ip` command (on the remote computer):
```
$ ip addr show | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.1.1.5/27 brd 10.1.1.31 [...]
```
If the remote computer doesn't have the `ip` command, try `ifconfig` instead (or even `ipconfig` on Windows).
The address 127.0.0.1 is a special one and is, in fact, the address of `localhost`. It's a "loopback" address, which your system uses to reach itself. That's not useful when logging into a remote machine, so in this example, the remote computer's correct IP address is 10.1.1.5. In real life, I would know that because my local network uses the 10.1.1.0 subnet. If the remote computer is on a different network, then the IP address could be nearly anything (never 127.0.0.1, though), and some special routing is probably necessary to reach it through various firewalls. Assume your remote computer is on the same network, but if you're interested in reaching computers more remote than your own network, [read my article about opening ports in your firewall][5].
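The addressing logic in the preceding paragraph can be sanity-checked with Python's standard `ipaddress` module; a quick sketch using this example's addresses (the `/27` network comes from the `ip addr` output above):

```python
# Sanity-checking the addressing logic with the stdlib; addresses come
# from this article's example network.
import ipaddress

loopback = ipaddress.ip_address("127.0.0.1")
remote = ipaddress.ip_address("10.1.1.5")
lan = ipaddress.ip_network("10.1.1.0/27")

print(loopback.is_loopback)  # True: 127.0.0.1 only ever reaches the local machine
print(remote in lan)         # True: 10.1.1.5 sits inside 10.1.1.0/27 (broadcast 10.1.1.31)
print(remote.is_private)     # True: 10.0.0.0/8 is private space, so outside routing is needed
```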
If you can ping the remote machine by its IP address _or_ its hostname, and have a login account on it, then you can SSH into it:
```
$ ping -c1 10.1.1.5
PING 10.1.1.5 (10.1.1.5) 56(84) bytes of data.
64 bytes from 10.1.1.5: icmp_seq=1 ttl=64 time=4.66 ms
$ ping -c1 akiton.local
PING 10.1.1.5 (10.1.1.5) 56(84) bytes of data.
```
That's a success. Now use SSH to log in:
```
$ whoami
seth
$ ssh sethkenlon@10.1.1.5
bash$ whoami
sethkenlon
```
The test login works, so now you're ready to activate passwordless login.
### Create an SSH key
To log in securely to another computer without a password, you must have an SSH key. You may already have an SSH key, but it doesn't hurt to create a new one. An SSH key begins its life on your local machine. It consists of two components: a private key, which you never share with anyone or anything, and a public one, which you copy onto any remote machine you want to have passwordless access to.
Some people create one SSH key and use it for everything from remote logins to GitLab authentication. However, I use different keys for different groups of tasks. For instance, I use one key at home to authenticate to local machines, a different key to authenticate to web servers I maintain, a separate one for Git hosts, another for Git repositories I host, and so on. In this example, I'll create a unique key to use on computers within my local area network.
To create a new SSH key, use the `ssh-keygen` command:
```
$ ssh-keygen -t ed25519 -f ~/.ssh/lan
```
The `-t` option stands for _type_ and ensures that the encryption used for the key is higher than the default. The `-f` option stands for _file_ and sets the key's file name and location. After running this command, you're left with an SSH private key called `lan` and an SSH public key called `lan.pub`.
To get the public key over to your remote machine, use the `ssh-copy-id` command. For this to work, you must verify that you have SSH access to the remote machine. If you can't log into the remote host with a password, you can't set up passwordless login either:
```
$ ssh-copy-id -i ~/.ssh/lan.pub sethkenlon@10.1.1.5
```
During this process, you'll be prompted for your login password on the remote host.
Upon success, try logging in again, but this time using the `-i` option to point the SSH command to the appropriate key (`lan`, in this example):
```
$ ssh -i ~/.ssh/lan sethkenlon@10.1.1.5
bash$ whoami
sethkenlon
```
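Typing `-i` on every connection gets tedious. An entry in `~/.ssh/config` can map a host alias to the right key; a minimal sketch using this example's address and key (the alias `lan-host` is an invented name, not from the article):

```
Host lan-host
    HostName 10.1.1.5
    User sethkenlon
    IdentityFile ~/.ssh/lan
```

With that in place, `ssh lan-host` selects the `lan` key automatically.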
Repeat this process for all computers on your network, and you'll be able to wander through each host without ever thinking about passwords again. In fact, once you have passwordless authentication set up, you can edit the `/etc/ssh/sshd_config` file to disallow password authentication. This prevents anyone from using SSH to authenticate to a computer unless they have your private key. To do this, open `/etc/ssh/sshd_config` in a text editor with `sudo` permissions and search for the string `PasswordAuthentication`. Change the default line to this:
```
PasswordAuthentication no
```
Save it and restart the SSH server (or just reboot):
```
$ sudo systemctl restart sshd && echo "OK"
OK
$
```
### Using SSH every day
OpenSSH changes your view of computing. No longer are you bound to just the computer in front of you. With SSH, you have access to any computer in your house, or servers you have accounts on, and even mobile and Internet of Things devices. Unlocking the power of SSH also unlocks the power of the Linux terminal. If you're not using SSH every day, start now. Get comfortable with it, collect some keys, live more securely, and expand your world.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/ssh
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://opensource.com/sites/default/files/uploads/gnome-activate-remote-login.png (Activate SSH in GNOME System Settings)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/article/20/8/open-ports-your-firewall

View File

@ -1,265 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (chunibyo-wly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Deploy a deep learning model on Kubernetes)
[#]: via: (https://opensource.com/article/20/9/deep-learning-model-kubernetes)
[#]: author: (Chaimaa Zyani https://opensource.com/users/chaimaa)
Deploy a deep learning model on Kubernetes
======
Learn how to deploy, scale, and manage a deep learning model that serves up image recognition predictions with Kubermatic Kubernetes Platform.
![Brain on a computer screen][1]
As enterprises increase their use of artificial intelligence (AI), machine learning (ML), and deep learning (DL), a critical question arises: How can they scale and industrialize ML development? These conversations often focus on the ML model; however, this is only one step along the way to a complete solution. To achieve in-production application and scale, model development must include a repeatable process that accounts for the critical activities that precede and follow development, including getting the model into a public-facing deployment.
This article demonstrates how to deploy, scale, and manage a deep learning model that serves up image recognition predictions using [Kubermatic Kubernetes Platform][2].
Kubermatic Kubernetes Platform is a production-grade, open source Kubernetes cluster-management tool that offers flexibility and automation to integrate with ML/DL workflows with full cluster lifecycle management.
### Get started
This example deploys a deep learning model for image recognition. It uses the [CIFAR-10][3] dataset that consists of 60,000 32x32 color images in 10 classes with the [Gluon][4] library in [Apache MXNet][5] and NVIDIA GPUs to accelerate the workload. If you want to use a pre-trained model on the CIFAR-10 dataset, check out the [getting started guide][6].
The model was trained over a span of 200 epochs, as long as the validation error kept decreasing slowly without causing the model to overfit. This plot shows the training process:
![Deep learning model training plot][7]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
After training, it's essential to save the model's parameters so they can be loaded later:
```
file_name = "net.params"
net.save_parameters(file_name)
```
Once the model is ready, wrap your prediction code in a Flask server. This allows the server to accept an image as an argument to its request and return the model's prediction in the response:
```
from gluoncv.model_zoo import get_model
import matplotlib.pyplot as plt
from mxnet import gluon, nd, image
from mxnet.gluon.data.vision import transforms
from gluoncv import utils
from PIL import Image
import io
import flask
app = flask.Flask(__name__)
@app.route("/predict", methods=["POST"])
def predict():
    prediction = ""
    if flask.request.method == "POST":
        if flask.request.files.get("img"):
            img = Image.open(io.BytesIO(flask.request.files["img"].read()))
            transform_fn = transforms.Compose([
                transforms.Resize(32),
                transforms.CenterCrop(32),
                transforms.ToTensor(),
                transforms.Normalize([0.4914, 0.4822, 0.4465],
                                     [0.2023, 0.1994, 0.2010])])
            img = transform_fn(nd.array(img))
            net = get_model('cifar_resnet20_v1', classes=10)
            net.load_parameters('net.params')
            pred = net(img.expand_dims(axis=0))
            class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                           'dog', 'frog', 'horse', 'ship', 'truck']
            ind = nd.argmax(pred, axis=1).astype('int')
            prediction = ('The input picture is classified as [%s], with probability %.3f.'
                          % (class_names[ind.asscalar()], nd.softmax(pred)[0][ind].asscalar()))
    return prediction

if __name__ == '__main__':
    app.run(host='0.0.0.0')
```
### Containerize the model
Before you can deploy your model to Kubernetes, you need to install Docker and create a container image with your model.
1. Download, install, and start Docker:

```
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce
sudo systemctl start docker
```

2. Create a directory where you can organize your code and dependencies:

```
mkdir kubermatic-dl
cd kubermatic-dl
```

3. Create a `requirements.txt` file to contain the packages the code needs to run:

```
flask
gluoncv
matplotlib
mxnet
requests
Pillow
```

4. Create the Dockerfile that Docker will read to build and run the model:

```
FROM python:3.6
WORKDIR /app
COPY requirements.txt /app
RUN pip install -r ./requirements.txt
COPY app.py /app
CMD ["python", "app.py"]
```

This Dockerfile can be broken down into three steps. First, it instructs Docker to download a base image of Python 3. Next, it asks Docker to use the Python package manager `pip` to install the packages in `requirements.txt`. Finally, it tells Docker to run your script via `python app.py`.

5. Build the Docker container:

```
sudo docker build -t kubermatic-dl:latest .
```

This instructs Docker to build a container for the code in your current working directory, `kubermatic-dl`.

6. Check that your container is working by running it on your local machine:

```
sudo docker run -d -p 5000:5000 kubermatic-dl
```

7. Check the status of your container by running `sudo docker ps -a`:
![Checking the container's status][9]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
### Upload the model to Docker Hub
Before you can deploy the model on Kubernetes, it must be publicly available. Do that by adding it to [Docker Hub][10]. (You will need to create a Docker Hub account if you don't have one.)
1. Log into your Docker Hub account:

```
sudo docker login
```

2. Tag the image so you can refer to it for versioning when you upload it to Docker Hub:

```
sudo docker tag <your-image-id> <your-docker-hub-name>/<your-app-name>
sudo docker push <your-docker-hub-name>/<your-app-name>
```
![Tagging the image][11]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
3. Check your image ID by running `sudo docker images`.
### Deploy the model to a Kubernetes cluster
1. Create a project on the Kubermatic Kubernetes Platform, then create a Kubernetes cluster using the [quick start tutorial][12].
![Create a Kubernetes cluster][13]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
2. Download the `kubeconfig` used to configure access to your cluster, change it into the download directory, and export it into your environment:
![Kubernetes cluster example][14]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
3. Using `kubectl`, check the cluster information, such as the services that `kube-system` starts on your cluster:

```
kubectl cluster-info
```
![Checking the cluster info][15]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
4. To run the container in the cluster, you need to create a deployment (`deployment.yaml`) and apply it to the cluster:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubermatic-dl-deployment
spec:
  selector:
    matchLabels:
      app: kubermatic-dl
  replicas: 3
  template:
    metadata:
      labels:
        app: kubermatic-dl
    spec:
      containers:
      - name: kubermatic-dl
        image: kubermatic00/kubermatic-dl:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
```

```
kubectl apply -f deployment.yaml
```
5. To expose your deployment to the outside world, you need a service object that will create an externally reachable IP for your container:

```
kubectl expose deployment kubermatic-dl-deployment --type=LoadBalancer --port 80 --target-port 5000
```
6. You're almost there! Check your services to determine the status of your deployment and get the IP address to call your image recognition API:

```
kubectl get service
```
![Get the IP address to call your image recognition API][16]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
7. Test your API with these two images using the external IP:
![Horse][17]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
![Dog][18]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
![Testing the API][19]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
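If you would rather script step 7 than click through a browser, the request the server expects is a multipart/form-data POST with the file under the field name `img` (matching `flask.request.files.get("img")` in the server code above). Here is a stdlib-only sketch that builds such a request; `EXTERNAL_IP` is a placeholder for the address reported by `kubectl get service`:

```python
# Build the multipart POST that the /predict endpoint expects.
# "EXTERNAL_IP" is a placeholder; substitute the LoadBalancer address.
import uuid
import urllib.request

def predict_request(host: str, image_bytes: bytes) -> urllib.request.Request:
    # One file field named "img", which the Flask server reads
    boundary = uuid.uuid4().hex
    body = (
        (f'--{boundary}\r\n'
         f'Content-Disposition: form-data; name="img"; filename="test.jpg"\r\n'
         f'Content-Type: application/octet-stream\r\n\r\n').encode()
        + image_bytes
        + f'\r\n--{boundary}--\r\n'.encode()
    )
    return urllib.request.Request(
        f"http://{host}/predict",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
        method="POST",
    )

req = predict_request("EXTERNAL_IP", b"<raw image bytes>")
print(req.full_url, req.get_method())  # prints: http://EXTERNAL_IP/predict POST
```

Passing the returned request to `urllib.request.urlopen` would send it and return the prediction string.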
### Summary
In this tutorial, you created a deep learning model to be served as a [REST API][20] using Flask. You put the application inside a Docker container, uploaded the container to Docker Hub, and deployed it with Kubernetes. Then, with just a few commands, Kubermatic Kubernetes Platform deployed the app and exposed it to the world.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/deep-learning-model-kubernetes
作者:[Chaimaa Zyani][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/chaimaa
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_computer_solve_fix_tool.png?itok=okq8joti (Brain on a computer screen)
[2]: https://www.loodse.com/products/kubermatic/
[3]: https://www.cs.toronto.edu/~kriz/cifar.html
[4]: https://gluon.mxnet.io/
[5]: https://mxnet.apache.org/
[6]: https://gluon-cv.mxnet.io/build/examples_classification/demo_cifar10.html
[7]: https://opensource.com/sites/default/files/uploads/trainingplot.png (Deep learning model training plot)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/containerstatus.png (Checking the container's status)
[10]: https://hub.docker.com/
[11]: https://opensource.com/sites/default/files/uploads/tagimage.png (Tagging the image)
[12]: https://docs.kubermatic.com/kubermatic/v2.13/installation/install_kubermatic/_installer/
[13]: https://opensource.com/sites/default/files/uploads/kubernetesclusterempty.png (Create a Kubernetes cluster)
[14]: https://opensource.com/sites/default/files/uploads/kubernetesexamplecluster.png (Kubernetes cluster example)
[15]: https://opensource.com/sites/default/files/uploads/clusterinfo.png (Checking the cluster info)
[16]: https://opensource.com/sites/default/files/uploads/getservice.png (Get the IP address to call your image recognition API)
[17]: https://opensource.com/sites/default/files/uploads/horse.jpg (Horse)
[18]: https://opensource.com/sites/default/files/uploads/dog.jpg (Dog)
[19]: https://opensource.com/sites/default/files/uploads/testapi.png (Testing the API)
[20]: https://www.redhat.com/en/topics/api/what-is-a-rest-api

View File

@ -1,114 +0,0 @@
[#]: subject: (How to Know if Your System Uses MBR or GPT Partitioning [on Windows and Linux])
[#]: via: (https://itsfoss.com/check-mbr-or-gpt/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (alim0x)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to Know if Your System Uses MBR or GPT Partitioning [on Windows and Linux]
======
Knowing the correct partitioning scheme of your disk could be crucial when you are installing Linux or any other operating system.
There are two popular partitioning schemes: the older MBR and the newer GPT. Most computers use GPT these days.
While creating the live or bootable USB, some tools (like [Rufus][1]) ask you the type of disk partitioning in use. If you choose GPT with an MBR disk, the bootable USB might not work.
In this tutorial, I'll show various methods to check the disk partitioning scheme on Windows and Linux systems.
### Check whether your system uses MBR or GPT on Windows systems
While there are several ways to check the disk partitioning scheme in Windows including command line ones, Ill stick with the GUI methods.
Press the Windows button and search for disk and then click on “**Create and format disk partitions**”.
![][2]
In here, **right-click on the disk** for which you want to check the partitioning scheme. In the right-click context menu, **select Properties**.
![Right click on the disk and select properties][3]
In the Properties, go to **Volumes** tab and look for **Partition style**.
![In Volumes tab, look for Partition style][4]
As you can see in the screenshot above, the disk is using the GPT partitioning scheme. For some other systems, it could show the MBR or MSDOS partitioning scheme.
Now you know how to check disk partitioning scheme in Windows. In the next section, youll learn to do the same in Linux.
### Check whether your system uses MBR or GPT on Linux
There are several ways to check whether a disk uses MBR or GPT partitioning scheme in Linux as well. This includes commands and GUI tools.
Let me first show the command line method and then Ill show a couple of GUI methods.
#### Check disk partitioning scheme in Linux command line
The command line method should work on all Linux distributions.
Open a terminal and use the following command with sudo:
```
sudo parted -l
```
The above command invokes a CLI-based [partitioning manager in Linux][5]. With the `-l` option, it lists the disks on your system along with details about those disks, including the partitioning scheme information.
In the output, look for the line starting with **Partition Table**:
![][6]
In the above screenshot, the disk has GPT partitioning scheme. For **MBR**, it would show **msdos**.
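If you want to script this check, the "Partition Table" value can be extracted and mapped to the friendlier labels used above. This is a small sketch of my own (the `pt_scheme` helper name is an assumption, not a standard command):

```shell
# Map the "Partition Table" value reported by `sudo parted -l`
# (or by `lsblk -o NAME,PTTYPE`) to the scheme name.
pt_scheme() {
    case "$1" in
        gpt)   echo "GPT" ;;
        msdos) echo "MBR" ;;
        *)     echo "unknown ($1)" ;;
    esac
}

# On a real system you could feed it like this (requires root):
#   sudo parted -l 2>/dev/null | awk '/^Partition Table:/ {print $3}' \
#       | while read -r t; do pt_scheme "$t"; done
pt_scheme gpt    # -> GPT
pt_scheme msdos  # -> MBR
```

The function only classifies a string, so it can be tried safely without root before wiring it up to `parted`.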
You learned the command line way. But if you are not comfortable with the terminal, you can use graphical tools as well.
#### Checking disk information with GNOME Disks tool
Ubuntu and many other GNOME-based distributions have a built-in graphical tool called Disks that lets you handle the disks in your system.
You can use the same tool for getting the partition type of the disk as well.
![][7]
#### Checking disk information with Gparted graphical tool
If you dont have the option to use GNOME Disks tool, no worries. There are other tools available.
One such popular tool is Gparted. You should find it in the repositories of most Linux distributions. If not installed already, [install Gparted][8] using your distributions software center or [package manager][9].
In Gparted, select the disk and, from the menu, select **View → Device Information**. It will show the disk information in the bottom-left area, and this information includes the partitioning scheme.
![][10]
See, not too complicated, was it? Now you know multiple ways of figuring out whether the disks in your system use the GPT or MBR partitioning scheme.
On the same note, I would also like to mention that sometimes disks also have a [hybrid partitioning scheme][11]. This is not common and most of the time it is either MBR or GPT.
Questions? Suggestions? Please leave a comment below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/check-mbr-or-gpt/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://rufus.ie/en_US/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/03/disc-management-windows.png?resize=800%2C561&ssl=1
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/gpt-check-windows-1.png?resize=800%2C603&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/gpt-check-windows-2-1.png?resize=800%2C600&ssl=1
[5]: https://itsfoss.com/partition-managers-linux/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-if-mbr-or-gpt-in-Linux.png?resize=800%2C446&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-if-mbr-or-gpt-in-Linux-gui.png?resize=800%2C548&ssl=1
[8]: https://itsfoss.com/gparted/
[9]: https://itsfoss.com/package-manager/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/check-disk-partitioning-scheme-linux-gparted.jpg?resize=800%2C555&ssl=1
[11]: https://www.rodsbooks.com/gdisk/hybrid.html


@@ -1,404 +0,0 @@
[#]: subject: (Analyze the Linux kernel with ftrace)
[#]: via: (https://opensource.com/article/21/7/linux-kernel-ftrace)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
[#]: collector: (lujun9972)
[#]: translator: (mengxinayan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Analyze the Linux kernel with ftrace
======
Ftrace is a great way to learn more about the internal workings of the
Linux kernel.
![Linux keys on the keyboard for a desktop computer][1]
An operating system's kernel is one of the most elusive pieces of software out there. It's always there running in the background from the time your system gets turned on. Every user achieves their computing work with the help of the kernel, yet they never interact with it directly. The interaction with the kernel occurs by making system calls or having those calls made on behalf of the user by various libraries or applications that they use daily.
I've covered how to trace system calls in an earlier article using `strace`. However, with `strace`, your visibility is limited. It allows you to view the system calls invoked with specific parameters and, after the work gets done, see the return value or status indicating whether they passed or failed. But you had no idea what happened inside the kernel during this time. Besides just serving system calls, there's a lot of other activity happening inside the kernel that you're oblivious to.
### Ftrace Introduction
This article aims to shed some light on tracing the kernel functions by using a mechanism called `ftrace`. It makes kernel tracing easily accessible to any Linux user, and with its help you can learn a lot about Linux kernel internals.
The default output generated by `ftrace` is often massive, given that the kernel is always busy. To save space, I've kept the output to a minimum and, in many cases, truncated it entirely.
I am using Fedora for these examples, but they should work on any of the latest Linux distributions.
### Enabling ftrace
`Ftrace` is part of the Linux kernel now, and you no longer need to install anything to use it. It is likely that, if you are using a recent Linux OS, `ftrace` is already enabled. To verify that the `ftrace` facility is available, run the mount command and search for `tracefs`. If you see output similar to what is below, `ftrace` is enabled, and you can easily follow the examples in this article:
```
$ sudo mount | grep tracefs
none on /sys/kernel/tracing type tracefs (rw,relatime,seclabel)
```
To make use of `ftrace`, you first must navigate to the special directory as specified in the mount command above, from where you'll run the rest of the commands in the article:
```
$ cd /sys/kernel/tracing
```
### General work flow
First of all, you must understand the general workflow of capturing a trace and obtaining the output. If you're using `ftrace` directly, there aren't any special `ftrace`-specific commands to run. Instead, you basically write to and read from a set of files using standard command-line Linux utilities.
The general steps:
1. Write to some specific files to enable/disable tracing.
2. Write to some specific files to set/unset filters to fine-tune tracing.
3. Read generated trace output from files based on 1 and 2.
4. Clear earlier output or buffer from files.
5. Narrow down to your specific use case (kernel functions to trace) and repeat steps 1, 2, 3, 4.
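The steps above can be sketched as a single shell function. This is my own sketch (the `ftrace_cycle` name and the `ext4_*` filter are assumptions for illustration); the tracing directory is a parameter so the shape can be tried safely, but on a real system it is `/sys/kernel/tracing` and the commands must run as root:

```shell
# Sketch of the general ftrace workflow as one cycle.
# On a real system: ftrace_cycle /sys/kernel/tracing (as root).
ftrace_cycle() {
    dir="$1"
    echo function > "$dir/current_tracer"     # 1. enable tracing
    echo 'ext4_*' > "$dir/set_ftrace_filter"  # 2. set a filter to fine-tune it
    head -5 "$dir/trace"                      # 3. read the generated output
    echo > "$dir/trace"                       # 4. clear the earlier output
    echo nop > "$dir/current_tracer"          # 5. turn tracing back off
}
```

Each step is nothing more than a write to, or read from, one of the files in the tracing directory, which is exactly why no dedicated `ftrace` command is needed.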
### Types of available tracers
There are several different kinds of tracers available to you. As mentioned earlier, you need to be in a specific directory before running any of these commands because the files of interest are present there. I use relative paths (as opposed to absolute paths) in my examples.
You can view the contents of the `available_tracers` file to see all the types of tracers available. You can see a few listed below. Don't worry about all of them just yet:
```
$ pwd
/sys/kernel/tracing
$ sudo cat available_tracers
hwlat blk mmiotrace function_graph wakeup_dl wakeup_rt wakeup function nop
```
Out of all the given tracers, I focus on three specific ones: `function` and `function_graph` to enable tracing, and `nop` to disable tracing.
### Identify current tracer
By default, the tracer is set to `nop` ("no operation") in the special file `current_tracer`, which usually means tracing is currently off:
```
$ pwd
/sys/kernel/tracing
$ sudo cat current_tracer
nop
```
### View tracing output
Before you enable any tracing, take a look at the file where the tracing output gets stored. You can view the contents of the file named `trace` using the [cat][2] command:
```
$ sudo cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 0/0   #P:8
#
#                                _-----=> irqs-off
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| /     delay
#           TASK-PID     CPU#  ||||   TIMESTAMP  FUNCTION
#              | |         |   ||||      |         |
```
### Enable function tracer
You can enable your first tracer called `function` by writing `function` to the file `current_tracer` (its earlier content was `nop`, indicating that tracing was off.) Think of this operation as a way of enabling tracing:
```
$ pwd
/sys/kernel/tracing
$ sudo cat current_tracer
nop
$ echo function | sudo tee current_tracer
function
$ sudo cat current_tracer
function
```
### View updated tracing output for function tracer
Now that you've enabled tracing, it's time to view the output. If you view the contents of the `trace` file, you see a lot of data being written to it continuously. I've piped the output and am currently viewing only the top 20 lines to keep things manageable. If you follow the headers in the output on the left, you can see which task and Process ID are running on which CPU. Toward the right side of the output, you see the exact kernel function running, followed by its parent function. There is also time stamp information in the center:
```
$ sudo cat trace | head -20
# tracer: function
#
# entries-in-buffer/entries-written: 409936/4276216   #P:8
#
#                                _-----=> irqs-off
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| /     delay
#           TASK-PID     CPU#  ||||   TIMESTAMP  FUNCTION
#              | |         |   ||||      |         |
          <idle>-0       [000] d...  2088.841739: tsc_verify_tsc_adjust <-arch_cpu_idle_enter
          <idle>-0       [000] d...  2088.841739: local_touch_nmi <-do_idle
          <idle>-0       [000] d...  2088.841740: rcu_nocb_flush_deferred_wakeup <-do_idle
          <idle>-0       [000] d...  2088.841740: tick_check_broadcast_expired <-do_idle
          <idle>-0       [000] d...  2088.841740: cpuidle_get_cpu_driver <-do_idle
          <idle>-0       [000] d...  2088.841740: cpuidle_not_available <-do_idle
          <idle>-0       [000] d...  2088.841741: cpuidle_select <-do_idle
          <idle>-0       [000] d...  2088.841741: menu_select <-do_idle
          <idle>-0       [000] d...  2088.841741: cpuidle_governor_latency_req <-menu_select
```
Remember that tracing is on, which means the output of tracing continues to get written to the trace file until you turn tracing off.
### Turn off tracing
Turning off tracing is simple: all you have to do is replace the `function` tracer with `nop` in the `current_tracer` file. Note that a plain `sudo echo nop > current_tracer` would fail, because the redirection is performed by your non-root shell; piping through `sudo tee` writes the file as root:
```
$ sudo cat current_tracer
function
$ echo nop | sudo tee current_tracer
nop
$ sudo cat current_tracer
nop
```
### Enable function_graph tracer
Now try the second tracer, called `function_graph`. You can enable this using the same steps as before: write `function_graph` to the `current_tracer` file:
```
$ echo function_graph | sudo tee current_tracer
function_graph
$ sudo cat current_tracer
function_graph
```
### Tracing output of function_graph tracer
Notice that the output format of the `trace` file has changed. Now, you can see the CPU ID and the duration of the kernel function execution. Next, you see curly braces indicating the beginning of a function and what other functions were called from inside it:
```
$ sudo cat trace | head -20
# tracer: function_graph
#
# CPU  DURATION                  FUNCTION CALLS
# |     |   |                     |   |   |   |
 6)               |              n_tty_write() {
 6)               |                down_read() {
 6)               |                  __cond_resched() {
 6)   0.341 us    |                    rcu_all_qs();
 6)   1.057 us    |                  }
 6)   1.807 us    |                }
 6)   0.402 us    |                process_echoes();
 6)               |                add_wait_queue() {
 6)   0.391 us    |                  _raw_spin_lock_irqsave();
 6)   0.359 us    |                  _raw_spin_unlock_irqrestore();
 6)   1.757 us    |                }
 6)   0.350 us    |                tty_hung_up_p();
 6)               |                mutex_lock() {
 6)               |                  __cond_resched() {
 6)   0.404 us    |                    rcu_all_qs();
 6)   1.067 us    |                  }
```
### Enable trace settings to increase the depth of tracing
You can always tweak the tracer slightly to trace function calls to a greater depth using the steps below, after which you can view the contents of the `trace` file and see that the output is slightly more detailed. For readability, the output of this example is omitted:
```
$ sudo cat max_graph_depth
0
$ echo 1 | sudo tee max_graph_depth
1
$ # or
$ echo 2 | sudo tee max_graph_depth
2
$ sudo cat trace
```
### Finding functions to trace
The steps above are sufficient to get started with tracing. However, the amount of output generated is enormous, and you can often get lost while trying to find items of interest. Often you want the ability to trace specific functions only and ignore the rest. But how do you know which functions to trace if you don't know their exact names? There is a file that can help with this: `available_filter_functions` provides a list of the functions available for tracing:
```
$ sudo wc -l available_filter_functions  
63165 available_filter_functions
```
### Search for general kernel functions
Now try searching for a simple kernel function that you are aware of. User-space has `malloc` to allocate memory, while the kernel has its `kmalloc` function, which provides similar functionality. Below are all the `kmalloc` related functions:
```
$ sudo grep kmalloc available_filter_functions
debug_kmalloc
mempool_kmalloc
kmalloc_slab
kmalloc_order
kmalloc_order_trace
kmalloc_fix_flags
kmalloc_large_node
__kmalloc
__kmalloc_track_caller
__kmalloc_node
__kmalloc_node_track_caller
[...]
```
### Search for kernel module or driver related functions
From the output of `available_filter_functions`, you can see some lines ending with text in brackets, such as `[kvm_intel]` in the example below. These functions are related to the kernel module `kvm_intel`, which is currently loaded. You can run the `lsmod` command to verify:
```
$ sudo grep kvm available_filter_functions | tail
__pi_post_block [kvm_intel]
vmx_vcpu_pi_load [kvm_intel]
vmx_vcpu_pi_put [kvm_intel]
pi_pre_block [kvm_intel]
pi_post_block [kvm_intel]
pi_wakeup_handler [kvm_intel]
pi_has_pending_interrupt [kvm_intel]
pi_update_irte [kvm_intel]
vmx_dump_dtsel [kvm_intel]
vmx_dump_sel [kvm_intel]
$ lsmod  | grep -i kvm
kvm_intel             335872  0
kvm                   987136  1 kvm_intel
irqbypass              16384  1 kvm
```
### Trace specific functions only
To enable tracing of specific functions or patterns, you can make use of the `set_ftrace_filter` file to specify which functions from the above output you want to trace.
This file also accepts the `*` wildcard, which expands the pattern to include additional functions. As an example, I am using the `ext4` filesystem on my machine, so I can specify the `ext4`-specific kernel functions to trace using the following commands:
```
$ sudo mount | grep home
/dev/mapper/fedora-home on /home type ext4 (rw,relatime,seclabel)
$ pwd
/sys/kernel/tracing
$ sudo cat set_ftrace_filter
#### all functions enabled ####
$
$ echo 'ext4_*' | sudo tee set_ftrace_filter
ext4_*
$
$ sudo cat set_ftrace_filter
ext4_has_free_clusters
ext4_validate_block_bitmap
ext4_get_group_number
ext4_get_group_no_and_offset
ext4_get_group_desc
[...]
```
Now, when you view the tracing output, you only see the `ext4`-related kernel functions for which you set a filter earlier. All other output gets ignored:
```
$ sudo cat trace | head -20
# tracer: function
#
# entries-in-buffer/entries-written: 3871/3871   #P:8
#
#                                _-----=> irqs-off
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| /     delay
#           TASK-PID     CPU#  ||||   TIMESTAMP  FUNCTION
#              | |         |   ||||      |         |
           cupsd-1066    [004] ....  3308.989545: ext4_file_getattr <-vfs_fstat
           cupsd-1066    [004] ....  3308.989547: ext4_getattr <-ext4_file_getattr
           cupsd-1066    [004] ....  3308.989552: ext4_file_getattr <-vfs_fstat
           cupsd-1066    [004] ....  3308.989553: ext4_getattr <-ext4_file_getattr
           cupsd-1066    [004] ....  3308.990097: ext4_file_open <-do_dentry_open
           cupsd-1066    [004] ....  3308.990111: ext4_file_getattr <-vfs_fstat
           cupsd-1066    [004] ....  3308.990111: ext4_getattr <-ext4_file_getattr
           cupsd-1066    [004] ....  3308.990122: ext4_llseek <-ksys_lseek
           cupsd-1066    [004] ....  3308.990130: ext4_file_read_iter <-new_sync_read
```
### Exclude functions from being traced
You don't always know what you want to trace, but you surely know what you don't want to trace. For that, there is a file aptly named `set_ftrace_notrace` (notice the "no" in there). You can write your desired pattern in this file and enable tracing, upon which everything except the mentioned pattern gets traced. This is often helpful to remove common functionality that clutters the output:
```
$ sudo cat set_ftrace_notrace
#### no functions disabled ####
```
### Targeted tracing
So far, you've been tracing everything that happens in the kernel. But that won't help if you wish to trace events related to a specific command. To achieve this, you can turn tracing on and off on demand and run your command of choice in between, so that you don't get extra output in your trace. You can enable tracing by writing `1` to `tracing_on`, and write `0` to turn it off:
```
$ sudo cat tracing_on
0
$ echo 1 | sudo tee tracing_on
1
$ sudo cat tracing_on
1
$ # Run the specific command you wish to trace here
$ echo 0 | sudo tee tracing_on
0
$ sudo cat tracing_on
0
```
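Wrapped in a helper function, that on/off toggling around a single command looks like this. This is a sketch of my own (the `trace_this` name is an assumption); the path to the switch file is passed in, so on a real system you would point it at `/sys/kernel/tracing/tracing_on` and run as root:

```shell
# Toggle tracing around exactly one command, so the trace buffer
# only covers that command's activity.
trace_this() {
    switch="$1"; shift        # path to the tracing_on file
    echo 1 > "$switch"        # turn tracing on
    "$@"                      # run the command to be traced
    status=$?
    echo 0 > "$switch"        # turn tracing off again
    return $status
}

# Usage (as root): trace_this /sys/kernel/tracing/tracing_on ls /etc
```

The function also preserves the traced command's exit status, so it can be dropped into existing scripts without changing their behavior.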
### Tracing specific PID
If you want to trace activity related to a specific process that is already running, you can write that PID to a file named `set_ftrace_pid` and then enable tracing. That way, tracing is limited to this PID only, which is very helpful in some instances:
```
$ echo $PID | sudo tee set_ftrace_pid
```
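Since a typo in that write would silently mistarget the trace, a tiny guard around it can help. This is a sketch of my own (`set_trace_pid` is not an ftrace command); the tracing directory is a parameter defaulting to `/sys/kernel/tracing`:

```shell
# Validate that the argument really is a numeric PID before writing
# it to set_ftrace_pid; refuse anything else.
set_trace_pid() {
    pid="$1"
    dir="${2:-/sys/kernel/tracing}"
    case "$pid" in
        ''|*[!0-9]*)
            echo "not a numeric PID: '$pid'" >&2
            return 1 ;;
    esac
    echo "$pid" > "$dir/set_ftrace_pid"
}

# Usage (as root), e.g. for the oldest matching process:
#   set_trace_pid "$(pgrep -o cupsd)"
```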
### Conclusion
`Ftrace` is a great way to learn more about the internal workings of the Linux kernel. With some practice, you can learn to fine-tune `ftrace` and narrow down your searches. To understand `ftrace` in more detail and its advanced usage, see these excellent articles written by Steven Rostedt, the core author of `ftrace` himself:
* [Debugging the Linux kernel, part 1][3]
* [Debugging the Linux kernel, part 2][4]
* [Debugging the Linux kernel, part 3][5]
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/linux-kernel-ftrace
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://opensource.com/article/19/2/getting-started-cat-command
[3]: https://lwn.net/Articles/365835/
[4]: https://lwn.net/Articles/366796/
[5]: https://lwn.net/Articles/370423/


@@ -2,7 +2,7 @@
[#]: via: (https://opensource.com/article/21/7/linux-kernel-trace-cmd)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (mengxinayan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -364,7 +364,7 @@ via: https://opensource.com/article/21/7/linux-kernel-trace-cmd
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -1,247 +0,0 @@
[#]: subject: (Brave vs. Firefox: Your Ultimate Browser Choice for Private Web Experience)
[#]: via: (https://itsfoss.com/brave-vs-firefox/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Brave vs. Firefox: Your Ultimate Browser Choice for Private Web Experience
======
Web browsers have evolved over the years. From downloading files to accessing a full-fledged web application, we have come a long way.
For a lot of users, the web browser is the only thing they need to get their work done these days.
Hence, choosing the right browser becomes an important task that could help improve your workflow over the years.
### Brave vs. Firefox Browser
Brave and Mozillas Firefox are two of the most popular web browsers for privacy-conscious users and open-source enthusiasts.
Considering that both focus heavily on privacy and security, let us look at what exactly they have to offer, to help you decide what you should go with.
Here are the comparison pointers that I've used; you can directly navigate to any of them:
* [User Interface][1]
* [Performance][2]
* [Browser Engine][3]
  * [Ad & Tracking Blocking Capabilities][4]
* [Containers][5]
* [Rewards][6]
* [Cross-Platform Availability][7]
* [Synchronization][8]
* [Service Integrations][9]
* [Customizability][10]
* [Extension Support][11]
### User Interface
The user interface is what makes the biggest difference with the workflow and experience when using the browser.
Of course, you can have your personal preferences, but the easier, snappier, and cleaner it looks, the better it is.
![Brave browser][12]
To start with, Brave shares a similar look and feel with Chrome and Microsoft Edge. It offers a clean experience with minimal UI elements, and all the essential options are accessible through the browser menu.
It offers a black theme as well. The subtle animations make the interaction a pleasant experience.
To customize it, you can choose to use themes available from the chrome web store.
When it comes to Mozilla Firefox, it has had a couple of major redesigns over the years, and the latest user interface tries to offer a closer experience to Chrome.
![Firefox browser][13]
The Firefox design looks impressive and provides a clean user experience. It also lets you opt for a dark theme if needed and there are several theme options to download/apply as well.
Both web browsers offer a good user experience.
If you want a familiar experience, but with a pinch of uniqueness, Mozillas Firefox can be a good pick.
But, if you want a snappier experience with a better feel for the animations, Brave gets the edge.
### Performance
Practically, I find Brave loading web pages faster. Also, the overall user experience feels snappy.
Firefox is not terribly slow, but it definitely felt slower than Brave.
To give you some perspective, I also utilized [Basemark][14] to run a benchmark to see if that is true on paper.
You can check with other browser benchmark tools available, but Basemark performs a variety of tests, so well go with that for this article.
![Firefox benchmark score][15]
![Brave benchmark score][16]
Firefox managed to score **630** and Brave pulled it off better with ~**792**.
Do note that these benchmarks were run with default browser settings without any browser extensions installed.
Of course, synthetic scores may vary depending on what you have going on in the background and the hardware configuration of your system.
This is what I got with **i5-7400, 16 GB RAM, and GTX 1050ti GPU** on my desktop.
In general, Brave browser is a fast browser compared to most of the popular options available.
Both utilize a decent chunk of system resources and that varies to a degree with the number of tabs, types of webpages accessed, and the kind of blocking extension used.
For instance, Brave blocks aggressively by default but Firefox does not block display advertisements by default. And, this affects the system resource usage.
### Browser Engine
Firefox utilizes its own Gecko engine as the foundation and is using components on top of that from [servo research project][17] to improve.
Currently, it is essentially an improved Gecko engine dubbed by a project name “Quantum” which was introduced with the release of Firefox Quantum.
On the other hand, Brave uses Chromiums engine.
While both are capable enough to handle modern web experiences, the Chromium-based engine is more popular, and web developers often tailor their sites for the best experience on Chrome-based browsers.
Also, some services happen to exclusively support Chrome-based browsers.
### Ad & Tracker Blocking Capabilities
![][18]
As I have mentioned before, Brave is aggressive in blocking trackers and advertisements. By default, it comes with the blocking feature enabled.
Firefox also enables the enhanced privacy protection by default but does not block display advertisements.
You will have to opt for the “**Strict**” privacy protection mode with Firefox if you want to get rid of display advertisements.
With that being said, Firefox enforces some unique tracking protection technology that includes Total Cookie Protection which isolates cookies for each site and prevents cross-site cookie tracking.
![][19]
This was introduced with [Firefox 86][20] and to use it, you need to enable a strict privacy protection mode.
Overall, Brave might look like a better option out of the box, and Mozilla Firefox offers better privacy protection features.
### Containers
Firefox also offers a way to isolate site activity when you use Facebook with the help of a container. In other words, it prevents Facebook from tracking your offsite activity.
You can also use containers to organize your tabs and separate sessions when needed.
Brave does not offer anything similar but it does block cross-site trackers and cookies out-of-the-box.
### Rewards
![][21]
Unlike Firefox, Brave offers its own advertising network by blocking other advertisements on the web.
When you opt in to display privacy-friendly ads by Brave, you get rewarded with tokens to a crypto wallet. And you can use these tokens to give back to your favorite websites.
While this is a good business strategy to get away from mainstream advertising, for users who do not want any kind of advertisements, it may not be useful.
So, Brave offers an alternative in the form of rewards to help websites even if you block advertisements. If it is something you appreciate, Brave will be a good pick for you.
### Cross-Platform Availability
You will find both Brave and Firefox available for Linux, Windows, and macOS. Mobile apps are also available for iOS and Android.
For Linux users, Firefox comes baked in with most of the Linux distributions. And, you can also find it available in the software center. In addition to that, there is also a [Flatpak][22] package available.
Brave is not available through default repositories and the software center. Hence, you need to follow the official instructions to add the private repository and then [get Brave installed in your Linux distro][23].
### Synchronization
With Mozilla Firefox, you get to create a Firefox account to sync all your data cross-platform.
![][24]
Brave also lets you sync cross-platform but you need access to one of the devices in order to successfully do it.
![][25]
Hence, Firefox sync is more convenient.
Also, you get access to Firefoxs VPN, data breach monitor, email relay, and password manager with the Firefox account.
### Service Integrations
Right off the bat, Firefox offers more service integrations that include Pocket, VPN, password manager, and also some of its new offerings like Firefox relay.
If you want access to these services through your browser, Firefox will be the convenient option for you.
While Brave does offer crypto wallets, it is not for everyone.
![][26]
Similarly, if you like using [Brave Search][27], you may have a seamless experience when using it with Brave browser because of the user experience.
### Customizability & Security
Firefox shines when it comes to customizability. You get more options to tweak the experience and also take control of the privacy/security of your browser.
The ability to customize lets you make Firefox more secure than the Brave browser.
Hardening Firefox is a separate topic that we'll talk about another time. To give you an example, [Tor Browser][28] is just a customized Firefox browser.
However, that does not make Brave less secure. It is a secure browser overall but you do get more options with Firefox.
### Extension Support
Theres no doubt that the Chrome web store offers way more extensions.
So, Brave gets a clear edge over Firefox if you are someone who utilizes a lot of extensions (or constantly try new ones).
Firefox may not have the biggest catalog of extensions, but it does support most of them. For common use cases, you will rarely find an extension that is not available as an add-on for Firefox.
### What Should You Choose?
If you want the best compatibility with the modern web experience and want access to more extensions, Brave browser seems to make more sense.
On the other hand, Firefox is an excellent choice for everyday browsing with industry-first privacy features, and a convenient sync option for non-tech savvy users.
You will have a few trade-offs when selecting either of them. So, you will have to prioritize what you want the most.
Let me know about your final choice for your use case in the comments down below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/brave-vs-firefox/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: tmp.5yJseRG2rb#ui
[2]: tmp.5yJseRG2rb#perf
[3]: tmp.5yJseRG2rb#engine
[4]: tmp.5yJseRG2rb#ad
[5]: tmp.5yJseRG2rb#container
[6]: tmp.5yJseRG2rb#reward
[7]: tmp.5yJseRG2rb#cp
[8]: tmp.5yJseRG2rb#sync
[9]: tmp.5yJseRG2rb#service
[10]: tmp.5yJseRG2rb#customise
[11]: tmp.5yJseRG2rb#extensions
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/brave-ui-new.jpg?resize=800%2C450&ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/firefox-ui.jpg?resize=800%2C450&ssl=1
[14]: https://web.basemark.com
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/firefox-basemark.png?resize=800%2C598&ssl=1
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/basemark-brave.png?resize=800%2C560&ssl=1
[17]: https://servo.org
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/brave-blocker.png?resize=800%2C556&ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/firefox-blocker.png?resize=800%2C564&ssl=1
[20]: https://news.itsfoss.com/firefox-86-release/
[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/07/brave-rewards.png?resize=800%2C560&ssl=1
[22]: https://itsfoss.com/what-is-flatpak/
[23]: https://itsfoss.com/brave-web-browser/
[24]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/firefox-sync.png?resize=800%2C651&ssl=1
[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/07/brave-sync.png?resize=800%2C383&ssl=1
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/07/brave-crypto-wallet.png?resize=800%2C531&ssl=1
[27]: https://itsfoss.com/brave-search-features/
[28]: https://itsfoss.com/install-tar-browser-linux/


@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/21/8/linux-terminal"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: translator: "fisherue "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
@ -108,7 +108,7 @@ via: https://opensource.com/article/21/8/linux-terminal
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[译者ID][c]
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -127,3 +127,4 @@ via: https://opensource.com/article/21/8/linux-terminal
[10]: https://opensource.com/article/21/7/terminal-basics-copying-files-linux-terminal
[11]: https://opensource.com/article/21/7/terminal-basics-removing-files-and-folders-linux-terminal
[12]: https://opensource.com/downloads/bash-scripting-ebook
[c]: https://github.com/fisherue


@ -1,154 +0,0 @@
[#]: subject: "Check free disk space in Linux with ncdu"
[#]: via: "https://opensource.com/article/21/8/ncdu-check-free-disk-space-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Check free disk space in Linux with ncdu
======
Get an interactive report about disk usage with the ncdu Linux command.
![Check disk usage][1]
Computer users tend to amass a lot of data over the years, whether it's important personal projects, digital photos, videos, music, or code repositories. While hard drives tend to be pretty big these days, sometimes you have to step back and take stock of what you're actually storing on your drives. The classic Linux commands [`df`][2] and [`du`][3] are quick ways to gain insight about what's on your drive, and they provide a reliable report that's easy to parse and process. That's great for scripting and processing, but the human brain doesn't always respond well to hundreds of lines of raw data. In recognition of this, the `ncdu` command aims to provide an interactive report about the space you're using on your hard drive.
### Installing ncdu on Linux
On Linux, you can install `ncdu` from your software repository. For instance, on Fedora or CentOS:
```
$ sudo dnf install ncdu
```
On BSD, you can use [pkgsrc][4].
On macOS, you can install it from [MacPorts][5] or [Homebrew][6].
Alternately, you can [compile ncdu from source code][7].
### Using ncdu
The interface of `ncdu` uses the ncurses library, which turns your terminal window into a rudimentary graphical application so you can use the Arrow keys to navigate visual menus.
![ncdu interface][8]
CC BY-SA Seth Kenlon
That's one of the main appeals of `ncdu`, and what sets it apart from the original `du` command.
To get a complete listing of a directory, launch `ncdu`. It defaults to the current directory.
```
$ ncdu
ncdu 1.16 ~ Use the arrow keys to navigate, press ? for help                                                                  
\--- /home/tux -----------------------------------------------
   22.1 GiB [##################] /.var                                                                                        
   19.0 GiB [###############   ] /Iso
   10.0 GiB [########          ] /.local
    7.9 GiB [######            ] /.cache
    3.8 GiB [###               ] /Downloads
    3.6 GiB [##                ] /.mail
    2.9 GiB [##                ] /Code
    2.8 GiB [##                ] /Documents
    2.3 GiB [#                 ] /Videos
[...]
```
The listing shows the largest directory first (in this example, that's the `~/.var` directory, full of many, many Flatpaks).
Using the Arrow keys on your keyboard, you can navigate through the listing to move deeper into a directory so you can gain better insight into what's taking up the most space.
### Get the size of a specific directory
You can run `ncdu` on an arbitrary directory by providing the path of a folder when launching it:
```
$ ncdu ~/chromiumos
```
### Excluding directories
By default, `ncdu` includes everything it can, including symbolic links and pseudo-filesystems such as procfs and sysfs. You can exclude these with the `--exclude-kernfs` option.
You can exclude arbitrary files and directories using the `--exclude` option, followed by a pattern to match.
```
$ ncdu --exclude ".var"
   19.0 GiB [##################] /Iso                                                                                          
   10.0 GiB [#########         ] /.local
    7.9 GiB [#######           ] /.cache
    3.8 GiB [###               ] /Downloads
[...]
```
Alternately, you can list files and directories to exclude in a file, and cite the file using the `--exclude-from` option:
```
$ ncdu --exclude-from myexcludes.txt /home/tux                                                                                    
   10.0 GiB [#########         ] /.local
    7.9 GiB [#######           ] /.cache
    3.8 GiB [###               ] /Downloads
[...]
```
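The exclude file itself is just a plain list, one pattern per line. A minimal sketch (the file name `myexcludes.txt` and the patterns here are only examples):

```shell
# Sketch of an exclude file for ncdu: one pattern per line.
cat > myexcludes.txt <<'EOF'
.var
.cache
EOF

# Then point ncdu at it (shown as a comment since ncdu is interactive):
#   ncdu --exclude-from myexcludes.txt /home/tux
```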
### Color scheme
You can add some color to ncdu with the `--color dark` option.
![ncdu color scheme][9]
CC BY-SA Seth Kenlon
### Including symlinks
The `ncdu` output treats symlinks literally, meaning that a symlink pointing to a 9 GB file takes up just 40 bytes.
```
$ ncdu ~/Iso
    9.3 GiB [##################]  CentOS-Stream-8-x86_64-20210427-dvd1.iso                                                    
@   0.0   B [                  ]  fake.iso
```
You can force ncdu to follow symlinks with the `--follow-symlinks` option:
```
$ ncdu --follow-symlinks ~/Iso
    9.3 GiB [##################]  fake.iso                                                                                    
    9.3 GiB [##################]  CentOS-Stream-8-x86_64-20210427-dvd1.iso
```
### Disk usage
It's not fun to run out of disk space, so monitoring your disk usage is important. The `ncdu` command makes it easy and interactive. Try `ncdu` the next time you're curious about what you've got stored on your PC, or just to explore your filesystem in a new way.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/ncdu-check-free-disk-space-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/du-splash.png?itok=nRLlI-5A (Check disk usage)
[2]: https://opensource.com/article/21/7/check-disk-space-linux-df
[3]: https://opensource.com/article/21/7/check-disk-space-linux-du
[4]: https://opensource.com/article/19/11/pkgsrc-netbsd-linux
[5]: https://opensource.com/article/20/11/macports
[6]: https://opensource.com/article/20/6/homebrew-mac
[7]: https://dev.yorhel.nl/ncdu
[8]: https://opensource.com/sites/default/files/ncdu.jpg (ncdu interface)
[9]: https://opensource.com/sites/default/files/ncdu-dark.jpg (ncdu color scheme)


@ -1,182 +0,0 @@
[#]: subject: "Debian vs Ubuntu: What's the Difference? Which One Should You Use?"
[#]: via: "https://itsfoss.com/debian-vs-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Debian vs Ubuntu: What's the Difference? Which One Should You Use?
======
You can [use apt-get commands][1] for managing applications in both Debian and Ubuntu. You can install DEB packages in both distributions as well. Many times, you'll find common package installation instructions for both distributions.
So, what's the difference between the two, if they are so similar?
Debian and Ubuntu belong to the same side of the distribution spectrum. Debian is the original distribution created by Ian Murdock in 1993. Ubuntu was created in 2004 by Mark Shuttleworth and it is based on Debian.
### Ubuntu is based on Debian: What does it mean?
While there are hundreds of Linux distributions, only a handful of them are independent ones, created from scratch. [Debian][2], Arch, Red Hat are some of the biggest distributions that do not derive from any other distribution.
Ubuntu is derived from Debian. It means that Ubuntu uses the same APT packaging system as Debian and shares a huge number of packages and libraries from Debian repositories. It utilizes the Debian infrastructure as base.
![Ubuntu uses Debian as base][3]
That's what most derived distributions do. They use the same package management system and share packages as the base distribution. But they also add some packages and changes of their own. And that is how Ubuntu is different from Debian despite being derived from it.
### Difference between Ubuntu and Debian
So, Ubuntu is built on Debian architecture and infrastructure and uses .DEB packages same as Debian.
Does it mean using Ubuntu is the same as using Debian? Not quite so. There are many more factors involved that distinguish one distribution from the other.
Let me discuss these factors one by one to compare Ubuntu and Debian. Please keep in mind that some comparisons are applicable to desktop editions while some apply to the server editions.
![][4]
#### 1\. Release cycle
Ubuntu has two kinds of releases: LTS and regular. [Ubuntu LTS (long term support) release][5] comes out every two years and they get support for five years. You have the option to upgrade to the next available LTS release. The LTS releases are considered more stable.
There are also non-LTS releases every six months. These releases are supported for nine months only, but they have newer software versions and features. You have to upgrade to the next Ubuntu version when the current one reaches end of life.
So basically, you have the option to choose between stability and new features based on these releases.
On the other hand, Debian has three different releases: Stable, Testing and Unstable. Unstable is for actual testing and should be avoided.
The testing branch is not that unstable. It is used for preparing the next stable branch. Some Debian users prefer the testing branch to get newer features.
And then comes the stable branch. This is the main Debian release. It may not have the latest software and feature but when it comes to stability, Debian Stable is rock solid.
There is a new stable release every two years and it is supported for a total of three years. After that, you have to upgrade to the next available stable release.
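If you are ever unsure which release you are running, the standard `/etc/os-release` file (present on both Debian and Ubuntu, and on practically every modern Linux system) tells you:

```shell
# Print the distribution name and version; fields vary slightly
# between distributions, so the pattern is kept loose.
if [ -f /etc/os-release ]; then
    grep -E '^(NAME|VERSION_ID)=' /etc/os-release || true
fi
```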
#### 2\. Software freshness
![][6]
Debian's focus on stability means that it does not always aim for the latest versions of the software. For example, the latest Debian 11 features GNOME 3.38, not the latest GNOME 3.40.
The same goes for other software like GIMP, LibreOffice, etc. This is a compromise you have to make with Debian. This is why “Debian stable = Debian stale” joke is popular in the Linux community.
Ubuntu LTS releases also focus on stability. But they usually have more recent versions of the popular software.
You should note that for _some software_, installing from the developer's repository is also an option. For example, if you want the latest Docker version, you can add the Docker repository in both Debian and Ubuntu.
Overall, software in Debian Stable often has older versions when compared to Ubuntu.
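Adding such a vendor repository on Debian or Ubuntu mostly boils down to importing the vendor's GPG key and dropping a one-line apt source file in place. A rough sketch of what that source line looks like for Docker (the codename, key path, and architecture below are illustrative; always follow Docker's official instructions for your release):

```shell
# Compose the apt source entry locally (illustrative values only).
# On a real system, codename would come from: $(lsb_release -cs)
codename="focal"
entry="deb [arch=amd64 signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu ${codename} stable"
printf '%s\n' "$entry" > docker.list
cat docker.list

# On a real system this file would live at /etc/apt/sources.list.d/docker.list
# (after importing Docker's GPG key), followed by:
#   sudo apt update && sudo apt install docker-ce
```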
#### 3\. Software availability
Both Debian and Ubuntu have huge repositories of software. However, [Ubuntu also has PPA][7] (Personal Package Archive). With a PPA, installing newer software or getting the latest software version becomes a bit easier.
![][8]
You may try using PPA in Debian but it won't be a smooth experience. You'll encounter issues most of the time.
#### 4\. Supported platforms
Ubuntu is available on 64-bit x86 and ARM platforms. It does not provide 32-bit ISO anymore.
Debian, on the other hand, supports both 32-bit and 64-bit architectures. Apart from that, Debian also supports 64-bit ARM (arm64), ARM EABI (armel), ARMv7 (EABI hard-float ABI, armhf), little-endian MIPS (mipsel), 64-bit little-endian MIPS (mips64el), 64-bit little-endian PowerPC (ppc64el) and IBM System z (s390x).
No wonder it is called the universal operating system.
#### 5\. Installation
[Installing Ubuntu][9] is a lot easier than installing Debian. I am not kidding. Debian could be confusing even for intermediate Linux user.
When you download Debian, it provides a minimal ISO by default. This ISO has no non-free (not open source) firmware. You go on to install it and realize that your network adapters and other hardware won't be recognized.
There is a separate non-free ISO that contains firmware but it is hidden and if you do not know that, you are in for a bad surprise.
![Getting non-free firmware is a pain in Debian][10]
Ubuntu is a lot more forgiving when it comes to including proprietary drivers and firmware in the default ISO.
Also, the Debian installer looks old, whereas the Ubuntu installer looks modern. The Ubuntu installer also recognizes other operating systems installed on the disk and gives you the option to install Ubuntu alongside the existing ones (dual boot). I have not noticed this option with the Debian installer in my testing.
![Installing Ubuntu is smoother][11]
#### 6\. Out of the box hardware support
As mentioned earlier, Debian focuses primarily on [FOSS][12] (free and open source software). This means that the kernel provided by Debian does not include proprietary drivers and firmware.
It's not that you cannot make it work, but you'll have to add/enable additional repositories and install things manually. This could be discouraging, especially for beginners.
Ubuntu is not perfect but it is a lot better than Debian for providing drivers and firmware out of the box. This means less hassle and a more complete out-of-the-box experience.
#### 7\. Desktop environment choices
Ubuntu uses a customized GNOME desktop environment by default. You may install [other desktop environments][13] on top of it or opt for [various desktop based Ubuntu flavors][14] like Kubuntu (for KDE), Xubuntu (for Xfce) etc.
Debian also installs GNOME by default. But its installer gives you the choice to install the desktop environment you want during the installation process.
![][15]
You may also get [DE specific ISO images from its website][16].
#### 8\. Gaming
Gaming on Linux has improved in general thanks to Steam and its Proton project. Still, gaming depends a lot on hardware.
And when it comes to hardware compatibility, Ubuntu is better than Debian for supporting proprietary drivers.
Not that it cannot be done in Debian, but it will require some time and effort to achieve.
#### 9\. Performance
There is no clear winner in the performance section, whether it is on the server or on the desktop. Both Debian and Ubuntu are popular as desktop as well as server operating systems.
The performance depends on your system's hardware and the software components you use. You can tweak and control your system in both operating systems.
#### 10\. Community and support
Debian is a true community project. Everything about this project is governed by its community members.
Ubuntu is backed by [Canonical][17]. However, it is not entirely a corporate project. It does have a community, but the final decision on any matter is in Canonical's hands.
As far as support goes, both Ubuntu and Debian have dedicated forums where users can seek help and advice.
Canonical also offers professional support for a fee to its enterprise clients. Debian has no such offering.
### Conclusion
Both Debian and Ubuntu are solid choices for desktop or server operating systems. The apt package manager and DEB packaging are common to both, thus giving a somewhat similar experience.
However, Debian still needs a certain level of expertise, especially on the desktop front. If you are new to Linux, sticking with Ubuntu will be the better choice for you. In my opinion, you should gain some experience, get familiar with Linux in general, and then try your hand at Debian.
It's not that you cannot jump onto the Debian wagon from the start, but it is more likely to be an overwhelming experience for Linux beginners.
**Your opinion on this Debian vs Ubuntu debate is welcome.**
--------------------------------------------------------------------------------
via: https://itsfoss.com/debian-vs-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/apt-get-linux-guide/
[2]: https://www.debian.org/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Debian-ubuntu-upstream.png?resize=800%2C400&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/debian-vs-ubuntu.png?resize=800%2C450&ssl=1
[5]: https://itsfoss.com/long-term-support-lts/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/apt-cache-policy.png?resize=795%2C456&ssl=1
[7]: https://itsfoss.com/ppa-guide/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ffmpeg_add_ppa.jpg?resize=800%2C222&ssl=1
[9]: https://itsfoss.com/install-ubuntu/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Debian-firmware.png?resize=800%2C600&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/choose-something-else-installing-ubuntu.png?resize=800%2C491&ssl=1
[12]: https://itsfoss.com/what-is-foss/
[13]: https://itsfoss.com/best-linux-desktop-environments/
[14]: https://itsfoss.com/which-ubuntu-install/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/debian-install-desktop-environment.png?resize=640%2C479&ssl=1
[16]: https://cdimage.debian.org/debian-cd/current-live/amd64/iso-hybrid/
[17]: https://canonical.com/


@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/21/8/linux-stat-file-status"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: translator: "New-World-2019"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "


@ -2,7 +2,7 @@
[#]: via: "https://itsfoss.com/youtube-dl-audio-only/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "


@ -1,109 +0,0 @@
[#]: subject: "How to set up your printer on Linux"
[#]: via: "https://opensource.com/article/21/8/add-printer-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "fisherue "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to set up your printer on Linux
======
In the event that your printer isn't auto-detected, this article teaches
you how to add a printer on Linux manually.
![printing on Linux][1]
Even though it's the future now and we're all supposed to be using e-ink and AR, there are still times when a printer is useful. Printer manufacturers have yet to standardize how their peripherals communicate with computers, so there's a necessary maze of printer drivers out there, regardless of what platform you're on. The IEEE-ISTO Printer Working Group (PWG) and the OpenPrinting.org site are working tirelessly to make printing as easy as possible, though. Today, many printers are autodetected with no interaction from the user.
In the event that your printer isn't auto-detected, this article teaches you how to add a printer on Linux manually. This article assumes you're on the GNOME desktop, but the basic workflow is the same for KDE and most other desktops.
### Printer drivers
Before attempting to interface with a printer from Linux, you should first verify that you have updated printer drivers.
There are three varieties of printer drivers:
* Open source [Gutenprint drivers][2] bundled with Linux and as an installable package
* Drivers provided by the printer manufacturer
* Drivers created by a third party
It's worth installing the open source drivers because there are over 700 of them, so having them available increases the chance of attaching a printer and having it automatically configured for you.
### Installing open source drivers
Your Linux distribution probably already has these installed, but if not, you can install them with your package manager. For example, on Fedora, CentOS, Mageia, and similar:
```
$ sudo dnf install gutenprint
```
For HP printers, also install Hewlett-Packard's Linux Imaging and Printing (HPLIP) project. For example, on Debian, Linux Mint, and similar:
```
$ sudo apt install hplip
```
### Installing vendor drivers
Sometimes a printer manufacturer uses non-standard protocols, so the open source drivers don't work. Other times, the open source drivers work but may lack special vendor-only features. When that happens, you must visit the manufacturer's website and search for a Linux driver for your printer model. The install process varies, so read the install instructions carefully.
In the event that your printer isn't supported at all by the vendor, there are [third-party driver authors][3] that may support your printer. These drivers aren't open source, but neither are most vendor drivers. It's frustrating to have to spend an extra $45 to get support for a printer, but the alternative is to throw the printer into the rubbish, and now you know at least one brand to avoid when you purchase your next printer!
### Common Unix Printing System (CUPS)
The Common Unix Printing System (CUPS) was developed in 1997 by Easy Software Products, and purchased by Apple in 2007. It's the open source basis for printing on Linux, but most modern distributions provide a customized interface for it. Thanks to CUPS, your computer can find printers attached to it by a USB cable and even a shared printer over a network.
Once you've gotten the necessary drivers installed, you can add your printer manually. First, attach your printer to your computer and power them both on. Then open the **Printers** application from the **Activities** screen or application menu.
![printer settings][4]
CC BY-SA Opensource.com
There's a possibility that your printer is autodetected by Linux, by way of the drivers you've installed, and that no further configuration is required.
![printer settings][5]
CC BY-SA Opensource.com
Provided that you see your printer listed, you're all set, and you can already print from Linux!
If you see that you need to add a printer, click the **Unlock** button in the top right corner of the **Printers** window. Enter your administrative password and the button transforms into an **Add** button.
Click the **Add** button.
Your computer searches for attached printers (also called a _local_ printer). To have your computer look for a shared network printer, enter the IP address of the printer or its host.
![searching for a printer][6]
CC BY-SA Opensource.com
Select the printer you want to add to your system and click the **Add** button.
### Print from Linux
Printing from Linux is as easy as printing can be, whether you're using a local or networked printer. If you're looking for a printer to purchase, then check the [OpenPrinting.org database][7] to confirm that a printer has an open source driver before you spend your money. If you already have a printer, you now know how to use it on your Linux computer.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/add-printer-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[fisherue](https://github.com/fisherue)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/happy-printer.png?itok=9J44YaDs (printing on Linux)
[2]: http://gimp-print.sourceforge.net/
[3]: https://www.turboprint.info/
[4]: https://opensource.com/sites/default/files/system-settings-printer_0.png (printer settings)
[5]: https://opensource.com/sites/default/files/settings-printer.png (printer settings)
[6]: https://opensource.com/sites/default/files/printer-search.png (searching for a printer)
[7]: http://www.openprinting.org/printers/


@ -1,137 +0,0 @@
[#]: subject: "How to Monitor Log Files in Real Time in Linux [Desktop and Server]"
[#]: via: "https://www.debugpoint.com/2021/08/monitor-log-files-real-time/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Monitor Log Files in Real Time in Linux [Desktop and Server]
======
This tutorial explains how you can monitor Linux log files (desktop, server, or applications) in real time for diagnosis and troubleshooting purposes.
When you run into problems on your Linux desktop, server, or in any application, you first look into the respective log files. Log files are generally streams of text and messages from applications, with a timestamp attached. They help you narrow down specific instances and find the cause of any problem. They can also help you get assistance from the web.
In general, all log files are located in `/var/log`. This directory contains log files with the extension `.log` for specific applications and services, and it also contains separate directories for other applications' log files.
![log files in var-log][1]
So, that said, if you want to monitor a bunch of log files, or a specific one, here are some ways you can do it.
### Monitor Log Files in Real Time in Linux
#### Using tail command
Using the tail command is the most basic way of following a log file in real time. Especially if you are on a server with just a terminal and no GUI, this is very helpful.
Examples:
```
tail /path/to/log/file
```
![Monitoring multiple log files via tail][2]
Use the `-f` switch to follow the log file, which updates in real time. For example, if you want to follow syslog, you can use the following command.
```
tail -f /var/log/syslog
```
You can monitor multiple log files with a single command:
```
tail -f /var/log/syslog /var/log/dmesg
```
If you want to monitor an HTTP or SFTP server, or any other server, you can also add their respective log files to this command.
Remember, the above commands require admin privileges.
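If you want to experiment with these tail switches without touching system logs, a throwaway file works just as well (the path below is arbitrary):

```shell
# Build a small throwaway log file
printf 'line %d\n' 1 2 3 4 5 6 7 8 9 10 > /tmp/demo.log

# Show only the last three lines (prints: line 8, line 9, line 10)
tail -n 3 /tmp/demo.log

# "tail -f /tmp/demo.log" in one terminal keeps printing as another
# terminal appends, e.g.: echo "line 11" >> /tmp/demo.log
```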
#### Using lnav (The Logfile Navigator)
![lnav Running][3]
lnav is a nice utility you can use to monitor log files in a more structured way, with color-coded messages. It is not installed by default on Linux systems. You can install it using the commands below:
```
sudo apt install lnav (Ubuntu)
sudo dnf install lnav (Fedora)
```
The good thing about lnav is that if you do not want to install it, you can just download its pre-compiled executable and run it anywhere, even from a USB stick. No setup is required, and it is loaded with features. Using lnav, you can query log files via SQL, among other cool features you can learn about on its [official website][4].
Once installed, you can simply run lnav from the terminal with admin privileges, and it will show all the logs from /var/log by default and start monitoring them in real time.
#### A note about journalctl of systemd
Most modern Linux distributions today use systemd, which provides the basic framework and components that run the Linux operating system in general. systemd provides journal services via journalctl, which helps manage logs from all systemd services. You can also monitor the respective systemd services and logs in real time using the following command.
```
journalctl -f
```
Here are some specific journalctl commands you can use in several cases. You can combine these with the `-f` switch above to start monitoring in real time.
* To see emergency system messages, use:
```
journalctl -p 0
```
* Show errors with explanations
```
journalctl -xb -p 3
```
* Use time controls to filter output:
```
journalctl --since "2020-12-04 06:00:00"
journalctl --since "2020-12-03" --until "2020-12-05 03:00:00"
journalctl --since yesterday
journalctl --since 09:00 --until "1 hour ago"
```
If you want to learn more and find out the details about journalctl, I have written a [guide here][6].
### Closing Notes
I hope these commands and tricks help you find the root cause of problems/errors on your desktop or servers. For more details, you can always refer to the man pages and play around with the various switches. Let me know in the comment box below if you have any feedback or what you think about this article.
Cheers.
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/monitor-log-files-real-time/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/log-files-in-var-log-1024x312.jpeg
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/Monitoring-multiple-log-files-via-tail-1024x444.jpeg
[3]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/lnav-Running-1024x447.jpeg
[4]: https://lnav.org/features
[5]: https://www.debugpoint.com/2016/11/advanced-log-file-viewer-lnav-ubuntu-linux/
[6]: https://www.debugpoint.com/2020/12/systemd-journalctl/


@ -1,107 +0,0 @@
[#]: subject: "Access your iPhone on Linux with this open source tool"
[#]: via: "https://opensource.com/article/21/8/libimobiledevice-iphone-linux"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Access your iPhone on Linux with this open source tool
======
Communicate with iOS devices from Linux by using Libimobiledevice.
![A person looking at a phone][1]
The iPhone and iPad aren't by any means open source, but they're popular devices. Many people who own an iOS device also happen to use a lot of open source, including Linux. Users of Windows and macOS can communicate with an iOS device by using software provided by Apple, but Apple doesn't support Linux users. Open source programmers came to the rescue back in 2007 (just a year after the iPhone's release) with Libimobiledevice (then called libiphone), a cross-platform solution for communicating with iOS. It runs on Linux, Android, Arm systems such as the Raspberry Pi, Windows, and even macOS.
Libimobiledevice is written in C and uses native protocols to communicate with services running on iOS devices. It doesn't require any libraries from Apple, so it's fully free and open source.
Libimobiledevice is an object-oriented API, and there are a number of terminal utilities that come bundled with it for your convenience. The library supports Apple's earliest iOS devices all the way up to its latest models. This is the result of years of research and development. Applications in the project include **usbmuxd**, **ideviceinstaller**, **idevicerestore**, **ifuse**, **libusbmuxd**, **libplist**, **libirecovery**, and **libideviceactivation**.
### Install Libimobiledevice on Linux
On Linux, you may already have **libimobiledevice** installed by default. You can find out through your package manager or app store, or by running one of the commands included in the project:
```
$ ifuse --help
```
You can install **libimobiledevice** using your package manager. For instance, on Fedora or CentOS:
```
$ sudo dnf install libimobiledevice ifuse usbmuxd
```
On Debian and Ubuntu:
```
$ sudo apt install usbmuxd libimobiledevice6 libimobiledevice-utils
```
Alternatively, you can [download][2] and install **libimobiledevice** from source code.
### Connecting your device
Once you have the required packages installed, connect your iOS device to your computer.
Make a directory as a mount point for your iOS device.
```
$ mkdir ~/iPhone
```
Next, mount the device:
```
$ ifuse ~/iPhone
```
Your device prompts you to trust the computer you're using to access it.
![iphone prompts to trust the computer][3]
Figure 1: The iPhone prompts you to trust the computer.
Once the trust issue is resolved, you see new icons on your desktop.
![iphone icons appear on desktop][4]
Figure 2: New icons for the iPhone appear on the desktop.
Click on the **iPhone** icon to reveal the folder structure of your iPhone.
![iphone folder structure displayed][5]
Figure 3: The iPhone folder structure is displayed.
The folder I usually access most frequently is **DCIM**, where my iPhone photos are stored. Sometimes I use these photos in articles I write, and sometimes there are photos I want to enhance with open source applications like Gimp. Having direct access to the images instead of emailing them to myself is one of the benefits of using the Libimobiledevice utilities. I can copy any of these folders to my Linux computer. I can create folders on the iPhone and delete them too.
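As a minimal sketch of that workflow (it assumes the device is already mounted at `~/iPhone` via `ifuse`; the destination folder name is my own choice, not part of the tooling):

```shell
# Copy photos off the iPhone, assuming it is already mounted at ~/iPhone.
SRC="$HOME/iPhone/DCIM"
DEST="$HOME/Pictures/iphone-import"

# Create the local destination folder
mkdir -p "$DEST"

# Copy everything under DCIM if the mount is actually present
if [ -d "$SRC" ]; then
    cp -r "$SRC"/. "$DEST"/
fi
```

When you are finished, unmount the device with `fusermount -u ~/iPhone`.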
### Find out more
[Martin Szulecki][6] is the lead developer for the project. The project is looking for developers to add to their [community][7]. Libimobiledevice can change the way you use your peripherals, regardless of what platform you're on. It's another win for open source, which means it's a win for everyone.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/libimobiledevice-iphone-linux
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
[2]: https://github.com/libimobiledevice/libimobiledevice/
[3]: https://opensource.com/sites/default/files/1trust_0.png
[4]: https://opensource.com/sites/default/files/2docks.png
[5]: https://opensource.com/sites/default/files/2iphoneicon.png
[6]: https://github.com/FunkyM
[7]: https://libimobiledevice.org/#community


@ -1,87 +0,0 @@
[#]: subject: "Apps for daily needs part 4: audio editors"
[#]: via: "https://fedoramagazine.org/apps-for-daily-needs-part-4-audio-editors/"
[#]: author: "Arman Arisman https://fedoramagazine.org/author/armanwu/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Apps for daily needs part 4: audio editors
======
![][1]
Photo by [Brooke Cagle][2] on [Unsplash][3]
In the past, audio editor applications or digital audio workstations (DAWs) were used only by professionals, such as record producers, sound engineers, and musicians. But nowadays many people who are not professionals also need them. These tools are used for narration on presentations, video blogs, and even just as a hobby. This is especially true now that there are so many online platforms that make it easy for everyone to share audio works, such as music, songs, podcasts, etc. This article will introduce some of the open source audio editors or DAWs that you can use on Fedora Linux. You may need to install the software mentioned. If you are unfamiliar with how to add software packages in Fedora Linux, see my earlier article [Things to do after installing Fedora 34 Workstation][4]. Here is a list of a few apps for daily needs in the audio editors or DAW category.
### Audacity
Im sure many already know Audacity. It is a popular multi-track audio editor and recorder that can be used for post-processing all types of audio. Most people use Audacity to record their voices, then do editing to make the results better. The results can be used as a podcast or a narration for a video blog. In addition, people also use Audacity to create music and songs. You can record live audio through a microphone or mixer. It also supports 32 bit sound quality.
Audacity has a lot of features that can support your audio works. It has support for plugins, and you can even write your own plugin. Audacity provides many built-in effects, such as noise reduction, amplification, compression, reverb, echo, limiter, and many more. You can try these effects while listening to the audio directly with the real-time preview feature. The built in plugin-manager lets you manage frequently used plugins and effects.
![][5]
More information is available at this link: <https://www.audacityteam.org/>
* * *
### LMMS
LMMS, or Linux MultiMedia Studio, is a comprehensive music creation application. You can use LMMS to produce your music from scratch with your computer. You can create melodies and beats according to your creativity, and make them better with a selection of sound instruments and various effects. There are several built-in features related to musical instruments and effects, such as 16 built-in synthesizers, an embedded ZynAddSubFX, drop-in VST effect plug-in support, a bundled graphic and parametric equalizer, a built-in analyzer, and many more. LMMS also supports MIDI keyboards and other audio peripherals.
![][6]
More information is available at this link: <https://lmms.io/>
* * *
### Ardour
Ardour has capabilities similar to LMMS as a comprehensive music creation application. It says on its website that Ardour is a DAW application that is the result of collaboration between musicians, programmers, and professional recording engineers from around the world. Ardour has various functions that are needed by audio engineers, musicians, soundtrack editors, and composers.
Ardour provides complete features for recording, editing, mixing, and exporting. It has unlimited multichannel tracks, non-linear editor with unlimited undo/redo, a full featured mixer, built-in plugins, and much more. Ardour also comes with video playback tools, so it is also very helpful in the process of creating and editing soundtracks for video projects.
![][7]
More information is available at this link: <https://ardour.org/>
* * *
### TuxGuitar
TuxGuitar is a tablature and score editor. It comes with a tablature editor, score viewer, multitrack display, time signature management, and tempo management. It includes various effects, such as bend, slide, vibrato, etc. While TuxGuitar focuses on the guitar, it allows you to write scores for other instruments. It can also serve as a basic MIDI editor. You need to have an understanding of tablature and music scoring to be able to use it.
![][8]
More information is available at this link: <http://www.tuxguitar.com.ar/>
* * *
### Conclusion
This article presented four audio editors as apps for your daily needs and use on Fedora Linux. Actually, there are many other audio editors, or DAWs, that you can use on Fedora Linux. You can also use Mixxx, Rosegarden, Kwave, Qtractor, MuseScore, MusE, and many more. Hopefully this article can help you investigate and choose the right audio editor or DAW. If you have experience using these applications, please share your experiences in the comments.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/apps-for-daily-needs-part-4-audio-editors/
作者:[Arman Arisman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/armanwu/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/07/FedoraMagz-Apps-4-Audio-816x345.jpg
[2]: https://unsplash.com/@brookecagle?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/meeting-on-cafe-computer?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/things-to-do-after-installing-fedora-34-workstation/
[5]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-audacity-1024x575.png
[6]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-lmms-1024x575.png
[7]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-ardour-1024x592.png
[8]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-tuxguitar-1024x575.png


@ -0,0 +1,207 @@
[#]: subject: "Solve the repository impedance mismatch in CI/CD"
[#]: via: "https://opensource.com/article/21/8/impedance-mismatch-cicd"
[#]: author: "Evan "Hippy" Slatis https://opensource.com/users/hippyod"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Solve the repository impedance mismatch in CI/CD
======
Aligning deployment images and descriptors can be difficult, but here are a few strategies to streamline the process.
![Tips and gears turning][1]
An _impedance mismatch_ in software architecture happens when there's a set of conceptual and technical difficulties between two components. It's actually a term borrowed from electrical engineering, where the impedance of electrical input and output must match for the circuit to work.
In software development, an impedance mismatch exists between images stored in an image repository and their deployment descriptors stored in the SCM. How do you know whether the deployment descriptors stored in the SCM are actually meant for the image in question? The two repositories don't track the data they hold the same way, so matching an image (an immutable binary stored individually in an image repository) to its specific deployment descriptors (text files stored as a series of changes in Git) isn't straightforward.
**NOTE**: This article assumes at least a passing familiarity with the following concepts:
* Source Control Management (SCM) systems and branching
* Docker/OCI-compliant images and containers
* Container Orchestration Platforms (COP) such as Kubernetes
* Continuous Integration/Continuous Delivery (CI/CD)
* Software development lifecycle (SDLC) environments
### Impedance mismatch: SCM and image repositories
To fully understand where this becomes a problem, consider a set of basic Software Development LifeCycle (SDLC) environments typically used in any given project; for example, dev, test, and prod (or release) environments.
The dev environment does not suffer from an impedance mismatch. Best practices, which today include using CI/CD, dictate that the latest commit to your development branch should reflect what's deployed in the development environment. So, given a typical, successful CI/CD development workflow:
1. A commit is made to the development branch in the SCM
2. The commit triggers an image build
3. The new, distinct image is pushed to the image repository and tagged as being in dev
4. The image is deployed to the dev environment in a Container Orchestration Platform (COP) with the latest deployment descriptors pulled from the SCM
In other words, the latest image is always matched to the latest deployment descriptors in the development environment. Rolling back to a previous build isn't an issue, either, because that implies rolling back the SCM, too.
Eventually, though, development progresses to the point where more formal testing needs to occur, so an image—which implicitly relates to a specific commit in the SCM—is promoted to a test environment. Again, assuming a successful build, this isn't much of a problem because the image promoted from development should reflect the latest in the development branch:
1. The latest deployment to development is approved for promotion, and the promotion process is triggered
2. The latest development image is tagged as being in test
3. The image is pulled and deployed to the test environment using the latest deployment descriptors pulled from the SCM
So far, so good, right? But what happens in either of the following scenarios?
**Scenario A**. The image is promoted to the next downstream environment, e.g., user acceptance testing (UAT) or even a production environment.
**Scenario B**. A breaking bug is discovered in the test environment, and the image needs to be rolled back to a known good image.
In either scenario, it's not as if development has stopped, which means one or more commits to the development branch may have occurred, which in turn means it's possible the latest deployment descriptors have changed, and the latest image isn't the same as what was previously deployed in test. Changes to the deployment descriptors may or may not apply to older versions of an image, but they certainly can't be trusted. If they have changed, they certainly aren't the same deployment descriptors you've been testing with up to now with the image you want to deploy.
And that's the crux of the problem: **If the image being deployed isn't the latest from the image repository, how do you identify which deployment descriptors in the SCM apply specifically to the image being deployed?** The short answer is, you can't. The two repositories have an impedance mismatch. The longer answer is that you can, but you have to work for it, which will be the subject of the rest of this article. Note that the following isn't necessarily the only solution to this problem, but it has been put into production and proven to work for dozens of projects that, in turn, have been built and deployed in production for more than a year now.
### Binaries and deployment descriptors
A common artifact produced from building source code is a Docker or OCI-compliant image, and that image will typically be deployed to a Container Orchestration Platform (COP) such as Kubernetes. Deploying to a COP requires deployment descriptors defining how the image is to be deployed and run as a container, e.g., [Kubernetes Deployments][2] or [CronJobs][3]. It is because of the fundamental difference between what an image is and its deployment descriptors where the impedance mismatch manifests itself. For this discussion, think of images as immutable binaries stored in an image repository. Any change in the source code does not change the image but rather replaces it with a distinct, new image.
By contrast, deployment descriptors are text files and thus can be considered source code and mutable. If best practices are being followed, then the deployment descriptors are stored in SCM, and all changes are committed there first to be properly tracked.
### Solving the impedance mismatch
The first part of the proposed solution is to ensure that a method exists of matching the image in the image repository to the source commit in the SCM, which holds the deployment descriptors. The most straightforward solution is to tag the image with its source commit hash. This will keep different versions of the image separate, easily identifiable, and provide enough information to find the correct deployment descriptors so that the image can be properly deployed in the COP.
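As a sketch of what that tagging might look like in a build script (the helper name, registry path, and choice of the short commit hash are my own assumptions, not part of any specific tooling):

```shell
# build_and_tag <registry/image> -- hypothetical helper that tags the
# image with the current source commit hash, so the image can always be
# traced back to its deployment descriptors in the SCM.
build_and_tag() {
    local image="$1"
    local hash
    # The short commit hash is unique enough and instantly useful as a tag
    hash=$(git rev-parse --short HEAD)
    docker build -t "${image}:${hash}" . &&
    docker push "${image}:${hash}"
}
```

A CI job would then call something like `build_and_tag registry.example.com/myapp` after checking out the development branch.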
Reviewing the scenarios above again:
**Scenario A**. _Promoting an image from one downstream environment to the next_: When the image is promoted from test to UAT, the image's tag tells us from which source commit in the SCM to pull the deployment descriptors.
**Scenario B**. _When an image needs to be rolled back in a downstream environment_: Whichever image we choose to roll back to will also tell us from which source commit in the SCM to pull the correct deployment descriptors.
In each case, it doesn't matter how many development branch commits and builds have taken place since a particular image has been deployed in test since every image that's been promoted can find the exact deployment descriptors it was originally deployed with.
This isn't a complete solution to the impedance mismatch, however. Consider two additional scenarios:
**Scenario C**. In a load testing environment, different deployment descriptors are tried at various times to see how a particular build performs.
**Scenario D**. An image is promoted to a downstream environment, and there's an error in the deployment descriptors for that environment.
In each of these scenarios, changes need to be made to the deployment descriptors, but right now all we have is a source commit hash. Remember that best practices require all source code changes to be committed back to SCM first. The commit at that hash is immutable by itself, so a better solution than just tracking the initial source commit hash is clearly needed.
The solution here is a new branch created at the original source commit hash. This will be dubbed a **Deployment Branch**. Every time an image is promoted to a downstream test or release environment, you should create a new Deployment Branch **from the head of the previous SDLC environment's Deployment Branch**.
This will allow the same image to be deployed differently and repeatedly within each SDLC environment and also pick up any changes discovered or applied for that image in each subsequent environment.
**NOTE:** How changes applied in one environment's deployment descriptors are applied to the next, whether by tools that enable sharing values such as Helm Charts or by manually cutting and pasting across directories, is beyond the scope of this article.
So, when an image is promoted from one SDLC environment to the next:
1. A Deployment Branch is created
1. If the image is being promoted from the dev environment, the branch is created from the source commit hash that built the image
2. Otherwise, _the Deployment Branch is created from the head of the current Deployment Branch_
2. The image is deployed into the next SDLC environment using the deployment descriptors from the newly created Deployment Branch for that environment
![deployment branching tree][4]
Figure 1: Deployment branches
1. Development branch
2. First downstream environment's Deployment Branch with a single commit
3. Second downstream environment's Deployment Branch with a single commit
Revisiting Scenarios C and D from above with Deployment Branches as a solution:
**Scenario C**. Change the deployment descriptors for an image deployed to a downstream SDLC environment
**Scenario D**. Fix an error in the deployment descriptors for a particular SDLC environment
In each scenario, the workflow is as follows:
1. Commit the changes to the deployment descriptors to the Deployment Branch for the SDLC environment and image
2. Redeploy the image into the SDLC environment using the deployment descriptors at the head of the Deployment Branch
Thus, Deployment Branches fully resolve the impedance mismatch between image repositories storing a single, immutable image representing a unique build and SCM repositories storing mutable deployment descriptors for one or more downstream SDLC environments.
### Practical considerations
While this seems like a workable solution, it also opens up several new practical questions for developers and operations resources alike, such as:
A. Where should deployment descriptors be kept as source to best facilitate Deployment Branch management, i.e., in the same or a different SCM repository than the source that built the image?
Up until now, we've avoided speaking about which repository the deployment descriptors should reside. Without going into too much detail, we recommend putting the deployment descriptors for all SDLC environments into the same SCM repository as the image source. As Deployment Branches are created, the source for the images will follow and act as an easy-to-find reference for what is actually running in the container being deployed.
As mentioned above, images will be associated with the original source commit via their tag. Finding the reference for the source at a particular commit in a separate repository would add a level of difficulty to developers, even with tooling, which is unnecessary by keeping everything in a single repository.
B. Should the source code that built the image be modified on a Deployment Branch?
Short answer: **NEVER**.
Longer answer: No, because images should never be built from Deployment Branches. They're built from development branches. Changing the source that defines an image in a Deployment Branch will destroy the record of what built the image being deployed and doesn't actually modify the functionality of the image. This could also become an issue when comparing two Deployment Branches from different versions. It might give a false positive for differences in functionality between them (a small but additional benefit to using Deployment Branches).
C. Why an image tag? Couldn't image labels be used?
Tags are easily readable and searchable for images stored in a repository. Reading and searching for labels with a particular value over a group of images requires pulling the manifest for each image, which adds complexity and reduces performance. Also, tagging images for different versions is still necessary for historical record and finding different versions, so using the source commit hash is the easiest solution that guarantees uniqueness while also containing instantly useful information.
D. What is the most practical way to create Deployment Branches?
The first three rules of DevOps are _automate_, _automate_, _automate_.
Relying on resources to enforce best practices uniformly is hit and miss at best, so when implementing a CI/CD pipeline for image promotion, rollback, etc., incorporate automated Deployment Branching into the script.
E. Any suggestions for a naming convention for Deployment Branches?
`<deployment-branch-identifier>-<env>-<src-commit-hash>`
* _**deployment-branch-identifier:**_ A unique string used by every Deployment Branch to identify it as a Deployment Branch; e.g. 'deployment' or 'deploy'
  * _**env:**_ The SDLC environment the Deployment Branch pertains to; e.g. 'qa', 'stg', or 'prod' for the test, staging, and production environments, respectively
* _**src-commit-hash:**_ The source code commit hash that holds the original code that built the image being deployed, which allows developers to easily find the original commit that created the image while ensuring the branch name is unique
For example, _**deployment-qa-asdf78s**_ or _**deployment-stg-asdf78s**_ for Deployment Branches promoted to the QA and STG environments, respectively.
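Putting the automation advice and this naming convention together, a promotion pipeline could create Deployment Branches with a small helper along these lines (a sketch; the function name and arguments are hypothetical):

```shell
# create_deployment_branch <env> <src-commit-hash> <start-point>
# <src-commit-hash> is the commit that built the image (used in the name);
# <start-point> is that same hash when promoting out of dev, or the head
# of the previous environment's Deployment Branch otherwise.
create_deployment_branch() {
    local env="$1" src_hash="$2" start="$3"
    local branch="deployment-${env}-${src_hash}"
    git branch "$branch" "$start"   # branch from the chosen start point
    git push origin "$branch"       # share it so later environments can branch from it
    echo "$branch"
}
```

For instance, promoting the image built at commit `asdf78s` into QA would call `create_deployment_branch qa asdf78s asdf78s`, and a later promotion to STG would use the head of `deployment-qa-asdf78s` as the start point.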
F. How do you tell which version of the image is running in the environment?
Our suggestion is to [label][5] all your deployment resources with the latest Deployment Branch commit hash and the source commit hash. These two unique identifiers will allow developers and operations personnel to find everything that was deployed and from where. It also makes cleanup of resources trivial using those selectors on deployments of different versions, e.g., on rollback or roll forward operations.
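As a sketch of that labeling (the label keys, resource name, and helper function are my own illustrations, not prescribed conventions):

```shell
# label_deployment <name> <deployment-branch-commit> <src-commit-hash>
# Hypothetical helper; assumes the kubectl CLI and label keys of my choosing.
label_deployment() {
    kubectl label deployment "$1" \
        deployment-branch-commit="$2" \
        src-commit-hash="$3" --overwrite
}

# Later, find everything deployed from a given source commit:
# kubectl get all -l src-commit-hash=asdf78s
```

The same selectors make rollback and roll-forward cleanup straightforward, since every resource belonging to a given version can be listed and deleted by label.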
G. When is it appropriate to merge changes from Deployment Branches back into the development branch?
It's completely up to the development team to decide what makes sense.
If you're making changes for load testing purposes just to see what will break your application, for example, then those changes may not be the best thing to merge back into the development branch. On the other hand, if you find and fix an error or tune a deployment in a downstream environment, merging the Deployment Branch changes back into the development branch makes sense.
H. Is there a working example of Deployment Branching to test with first?
[el-CICD][6] has been successfully using this strategy for a year and a half in production for more than a hundred projects across all SDLC downstream environments, including managing deployments to production. If you have access to an [OKD][7], Red Hat OpenShift lab cluster, or [Red Hat CodeReady Containers][8], you can download the [latest el-CICD version][9] and run through the [tutorial][10] to see how and when Deployment Branches are created and used.
### Wrap up
Using the working example above would be a good exercise to help you better understand the issues surrounding impedance mismatches in development processes. Maintaining alignment between images and deployment descriptors is a critical part of successfully managing deployments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/impedance-mismatch-cicd
作者:[Evan "Hippy" Slatis][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hippyod
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)
[2]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[3]: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
[4]: https://opensource.com/sites/default/files/picture1.png
[5]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
[6]: https://github.com/elcicd
[7]: https://www.okd.io/
[8]: https://cloud.redhat.com/openshift/create/local
[9]: https://github.com/elcicd/el-CICD-RELEASES
[10]: https://github.com/elcicd/el-CICD-docs/blob/master/tutorial.md


@ -1,126 +0,0 @@
[#]: subject: "Ulauncher: A Super Useful Application Launcher for Linux"
[#]: via: "https://itsfoss.com/ulauncher/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ulauncher: A Super Useful Application Launcher for Linux
======
_**Brief:**_ _Ulauncher is a fast application launcher with extension and shortcut support to help you quickly access applications and files in Linux._
An application launcher lets you quickly access or open an app without hovering over the application menu icons.
By default, I found the application launcher with Pop!_OS super handy. But, not every Linux distribution offers an application launcher out-of-the-box.
Fortunately, there is a solution with which you can add the application launcher to most of the popular distros out there.
### Ulauncher: Open Source Application Launcher
![][1]
Ulauncher is a quick application launcher built in Python using GTK+.
It gives a decent amount of customization and control options to tweak. Overall, you can adjust its behavior and experience to suit your taste.
Let me highlight some of the features that you can expect with it.
### Ulauncher Features
The options that you get with Ulauncher are super accessible and easy to customize. Some key highlights include:
* Fuzzy search algorithm, which lets you find applications even if you misspell them
* Remembers your last searched application in the same session
* Frequently used apps display (optional)
* Custom color themes
* Preset color themes that include a dark theme
* Shortcut to summon the launcher can be easily customized
* Browse files and directories
* Support for extensions to get extra functionality (emoji, weather, speed test, notes, password manager, etc.)
* Shortcuts for browsing sites like Google, Wikipedia, and Stack Overflow
It provides almost every helpful ability that you may expect in an application launcher, and even better.
### How to Use Ulauncher in Linux?
After you open Ulauncher from the application menu for the first time, press **Ctrl + Space** (the default shortcut) to summon the launcher.
Start typing to search for an application. And, if you are looking for a file or directory, start typing with “**~**” or “**/**” (without the quotes).
![][2]
There are default shortcuts like “**g XYZ**” where XYZ is the search term you want to search for in Google.
![][3]
Similarly, you can search for something directly taking you to Wikipedia or Stack Overflow, with “**wiki**” and “**so**” shortcuts, respectively.
Without any extensions, you can also calculate things on the go and copy the results directly to the clipboard.
![][4]
This should come in handy for quick calculations without needing to launch the calculator app separately.
You can head to its [extensions page][5] and browse for useful extensions along with screenshots that should instruct you how to use it.
To change how it works, enable frequent applications display, and adjust the theme — click on the gear icon on the right side of the launcher.
![][6]
You can set it to auto-start. But if that does not work on your systemd-enabled distro, you can refer to its GitHub page to add it to the service manager yourself.
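As a rough sketch of that approach (the unit file contents below are my own assumptions; refer to the project's GitHub page for its recommended service configuration):

```shell
# Write a hypothetical systemd user unit for Ulauncher.
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/ulauncher.service <<'EOF'
[Unit]
Description=Ulauncher application launcher

[Service]
ExecStart=/usr/bin/ulauncher --hide-window
Restart=on-failure

[Install]
WantedBy=default.target
EOF
```

After writing the unit, reload and enable it with `systemctl --user daemon-reload && systemctl --user enable --now ulauncher.service`.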
The options are self-explanatory and are easy to customize, as shown in the screenshot below.
![][7]
### Installing Ulauncher in Linux
Ulauncher provides a **.deb** package for Debian or Ubuntu-based distributions. You can explore [how to install Deb files][8] if you're new to Linux.
In either case, you can also add its PPA and install it via terminal by following the commands below:
```
sudo add-apt-repository ppa:agornostal/ulauncher
sudo apt update
sudo apt install ulauncher
```
You can also find it in the [AUR][9] for Arch Linux and in Fedora's default repositories.
For more information, you can head to its official website or the [GitHub page][10].
[Ulauncher][11]
Ulauncher should be an impressive addition to any Linux distro. Especially if you want the functionality of a quick launcher like the one Pop!_OS offers, it is a fantastic option to consider.
_Have you tried Ulauncher yet? You are welcome to share your thoughts on how this might help you get things done quickly._
--------------------------------------------------------------------------------
via: https://itsfoss.com/ulauncher/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher.png?resize=800%2C512&ssl=1
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-directory.png?resize=800%2C503&ssl=1
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-google.png?resize=800%2C449&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-calculator.png?resize=800%2C429&ssl=1
[5]: https://ext.ulauncher.io
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-gear-icon.png?resize=800%2C338&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-settings.png?resize=800%2C492&ssl=1
[8]: https://itsfoss.com/install-deb-files-ubuntu/
[9]: https://itsfoss.com/aur-arch-linux/
[10]: https://github.com/Ulauncher/Ulauncher/
[11]: https://ulauncher.io


@ -0,0 +1,281 @@
[#]: subject: "Auto-updating podman containers with systemd"
[#]: via: "https://fedoramagazine.org/auto-updating-podman-containers-with-systemd/"
[#]: author: "Daniel Schier https://fedoramagazine.org/author/danielwtd/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Auto-updating podman containers with systemd
======
![][1]
Auto-updating containers can be very useful in some cases. Podman provides mechanisms to take care of container updates automatically. This article demonstrates how to use Podman auto-updates in your setups.
### Podman
Podman is a daemonless Docker replacement that can handle rootful and rootless containers. It is fully aware of SELinux and Firewalld. Furthermore, it comes pre-installed with Fedora Linux, so you can start using it right away.
If Podman is not installed on your machine, use one of the following commands to install it. Select the appropriate command for your environment.
```
# Fedora Workstation / Server / Spins
$ sudo dnf install -y podman
# Fedora Silverblue, IoT, CoreOS
$ rpm-ostree install podman
```
Podman is also available for many other Linux distributions like CentOS, Debian or Ubuntu. Please have a look at the [Podman Install Instructions][2].
### Auto-Updating Containers
Updating the operating system on a regular basis is more or less mandatory to get the newest features, bug fixes, and security updates. But what about containers? They are not part of the operating system.
#### Why Auto-Updating?
If you want to update your Operating System, it can be as easy as:
```
$ sudo dnf update
```
This will not take care of the deployed containers. But why should you care about these? If you check the contents of a container, you will find the application (for example, MariaDB in the docker.io/library/mariadb container) and some dependencies, including basic utilities.
Running updates for containers can be tedious and time-consuming, since you have to:
1. pull the new image
2. stop and remove the running container
3. start the container with the new image
This procedure must be done for every container. Updating 10 containers can easily require 3040 commands.
Automating these steps saves time and ensures that everything stays up to date.
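The three manual steps above can be sketched as a small helper script. This is an illustrative dry run that only prints the Podman commands it would execute; the container name and image are the examples used in this article.

```shell
#!/bin/sh
# Dry-run sketch of the three manual update steps for a single container.
# It only prints the commands; drop the echo prefixes to actually run them.
update_container() {
    name=$1
    image=$2
    echo "podman image pull $image"                     # 1. pull the new image
    echo "podman container stop $name"                  # 2. stop and remove the running container
    echo "podman container rm $name"
    echo "podman container run -d --name $name $image"  # 3. start the container with the new image
}

update_container web docker.io/library/httpd:2.4
```

Even in this compressed form, the per-container repetition is obvious, which is exactly what the auto-update feature below removes.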
#### Podman and systemd
Podman has built-in support for systemd. This means you can start/stop/restart containers via systemd without the need for a separate daemon. The Podman auto-update feature requires you to have containers running via systemd. This is the only way to automatically ensure that all desired containers are running properly. Articles on [Bitwarden][3] and [Matrix Server][4] have already looked at this feature. For this article, I will use an even simpler [Apache httpd][5] container.
First, start the container with the desired settings.
```
# Run httpd container with some custom settings
$ sudo podman container run -d -t -p 80:80 --name web -v web-volume:/usr/local/apache2/htdocs/:Z docker.io/library/httpd:2.4
# Just a quick check of the container
$ sudo podman container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
58e5b07febdf docker.io/library/httpd:2.4 httpd-foreground 4 seconds ago Up 5 seconds ago 0.0.0.0:80->80/tcp web
# Also check the named volume
$ sudo podman volume ls
DRIVER VOLUME NAME
local web-volume
```
Now, set up systemd to handle the deployment. Podman will generate the necessary file.
```
# Generate systemd service file
$ sudo podman generate systemd --new --name --files web
/home/USER/container-web.service
```
This will generate the file _container-web.service_ in your current directory. Review and edit the file to your liking. Here are the file contents, with newlines and formatting added to improve readability.
```
# container-web.service
[Unit]
Description=Podman container-web.service
Documentation=man:podman-generate-systemd(1)
Wants=network.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/container-web.pid %t/container-web.ctr-id
ExecStart=/usr/bin/podman container run \
--conmon-pidfile %t/container-web.pid \
--cidfile %t/container-web.ctr-id \
--cgroups=no-conmon \
--replace \
-d \
-t \
-p 80:80 \
--name web \
-v web-volume:/usr/local/apache2/htdocs/ \
docker.io/library/httpd:2.4
ExecStop=/usr/bin/podman container stop \
--ignore \
--cidfile %t/container-web.ctr-id \
-t 10
ExecStopPost=/usr/bin/podman container rm \
--ignore \
-f \
--cidfile %t/container-web.ctr-id
PIDFile=%t/container-web.pid
Type=forking
[Install]
WantedBy=multi-user.target default.target
```
Now, remove the current container, copy the file to the proper systemd directory, and start/enable the service.
```
# Remove the temporary container
$ sudo podman container rm -f web
# Copy the service file
$ sudo cp container-web.service /etc/systemd/system/container-web.service
# Reload systemd
$ sudo systemctl daemon-reload
# Enable and start the service
$ sudo systemctl enable --now container-web
# Another quick check
$ sudo podman container ls
$ sudo systemctl status container-web
```
Please be aware that the container can now only be managed via systemd. Starting and stopping the container with the `podman` command directly may interfere with systemd.
Now that the general setup is out of the way, have a look at auto-updating this container.
#### Manual Auto-Updates
The first thing to look at is manual auto-updates. Sounds weird? This feature allows you to avoid the three steps per container while retaining full control over the update time and date. This is very useful if you only want to update containers in a maintenance window or on the weekend.
Edit the _/etc/systemd/system/container-web.service_ file and add the label shown below to it.
```
--label "io.containers.autoupdate=registry"
```
The changed file will have a section appearing like this:
```
...snip...
ExecStart=/usr/bin/podman container run \
--conmon-pidfile %t/container-web.pid \
--cidfile %t/container-web.ctr-id \
--cgroups=no-conmon \
--replace \
-d \
-t \
-p 80:80 \
--name web \
-v web-volume:/usr/local/apache2/htdocs/ \
--label "io.containers.autoupdate=registry" \
docker.io/library/httpd:2.4
...snip...
```
Now reload systemd and restart the container service to apply the changes.
```
# Reload systemd
$ sudo systemctl daemon-reload
# Restart container-web service
$ sudo systemctl restart container-web
```
After this setup, you can update a running instance to the latest available image for the tag in use with a single command. In this example, if a new 2.4 image is available in the registry, Podman will download it and restart the container automatically.
```
# Update containers
$ sudo podman auto-update
```
#### Scheduled Auto-Updates
Podman also provides a systemd timer unit that enables container updates on a schedule. This can be very useful if you don't want to handle the updates on your own. If you are running a small home server, this might be the right thing for you, so you get the latest updates every week or so.
Enable the systemd timer for podman as follows:
```
# Enable podman auto update timer unit
$ sudo systemctl enable --now podman-auto-update.timer
Created symlink /etc/systemd/system/timers.target.wants/podman-auto-update.timer → /usr/lib/systemd/system/podman-auto-update.timer.
```
Optionally, you can edit the schedule of the timer. By default, the update will run every Monday morning, which is fine for me. Edit the timer unit using this command:
```
$ sudo systemctl edit podman-auto-update.timer
```
This will bring up your default editor. Changing the schedule is beyond the scope of this article but the link to _systemd.timer_ below will help. The Demo section of [Systemd Timers for Scheduling Tasks][6] contains details as well.
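As an illustration, a drop-in created via the `systemctl edit` command above that moves the run to Saturday at 03:00 might look like the following. The schedule value is only an example; the `OnCalendar` syntax is documented in _systemd.timer_.

```ini
# Drop-in override for podman-auto-update.timer (example schedule)
[Timer]
# An empty assignment clears the default schedule before setting a new one
OnCalendar=
OnCalendar=Sat *-*-* 03:00:00
```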
That's it. Nothing more to do. Podman will now take care of image updates and also prune old images on a schedule.
### Hints & Tips
Auto-updating seems like the perfect solution for container updates, but you should consider a few things before relying on it:
* avoid using the “latest” tag, since it can include major updates
* consider using tags like “2” or “2.4”, if the image provider has them
* test auto-updates beforehand (does the container support updates without additional steps?)
* consider having backups of your Podman volumes, in case something goes sideways
* auto-updates might not be very useful for production-critical setups where you need full control over the image version in use
* updating a container also restarts the container and prunes the old image
* occasionally check if the updates are being applied
If you take care of the above hints, you should be good to go.
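The volume-backup hint can be sketched similarly. This is a dry run that only prints the command it would execute; `web-volume` is the example volume used earlier, and `podman volume export` is the assumed subcommand (available in recent Podman releases).

```shell
#!/bin/sh
# Dry-run sketch: timestamped tar backup of a named Podman volume
# taken before the scheduled auto-update runs.
vol="web-volume"
backup="${vol}-$(date +%Y%m%d).tar"
echo "podman volume export $vol --output $backup"
```

Pairing a backup like this with a pre-update systemd unit or cron entry gives you a restore point if an image update goes sideways.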
### Docs & Links
If you want to learn more about this topic, please check out the links below. There is a lot of useful information in the official documentation and some blogs.
* <https://docs.podman.io/en/latest/markdown/podman-auto-update.1.html>
* <https://docs.podman.io/en/latest/markdown/podman-generate-systemd.1.html>
* <https://www.freedesktop.org/software/systemd/man/systemd.service.html>
* <https://www.freedesktop.org/software/systemd/man/systemd.timer.html>
* [Systemd Timers for Scheduling Tasks][6]
### Conclusion
As you can see, without the use of additional tools, you can easily run auto-updates on Podman containers manually or on a schedule. Scheduling allows unattended updates overnight, and you will get all the latest security updates, features, and bug fixes. Some setups I have tested successfully are: MariaDB, Ghost Blog, WordPress, Gitea, Redis, and PostgreSQL.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/auto-updating-podman-containers-with-systemd/
作者:[Daniel Schier][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/danielwtd/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/auto-updating-podman-containers-816x345.jpg
[2]: https://podman.io/getting-started/installation
[3]: https://fedoramagazine.org/manage-your-passwords-with-bitwarden-and-podman/
[4]: https://fedoramagazine.org/deploy-your-own-matrix-server-on-fedora-coreos/
[5]: https://hub.docker.com/_/httpd
[6]: https://fedoramagazine.org/systemd-timers-for-scheduling-tasks/


@ -0,0 +1,140 @@
[#]: subject: "Icons Look too Small? Enable Fractional Scaling to Enjoy Your HiDPI 4K Screen in Ubuntu Linux"
[#]: via: "https://itsfoss.com/enable-fractional-scaling-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Icons Look too Small? Enable Fractional Scaling to Enjoy Your HiDPI 4K Screen in Ubuntu Linux
======
A few months ago, I bought a Dell XPS laptop with a 4K UHD screen. The screen resolution is 3840×2400 with a 16:10 aspect ratio.
When I was installing Ubuntu on it, everything looked so small. The desktop icons, applications, menus, items in the top panel, everything.
It's because the screen has too many pixels while the desktop icons and other elements remain the same size as on a regular 1920×1080 screen. Hence, they look too small on the HiDPI screen.
![Icons and other elements look too small on a HiDPI screen in Ubuntu][1]
This is not pretty and makes it very difficult to use your Linux system. Thankfully, there is a solution for GNOME desktop users.
If you too have a 2K or 4K screen where the desktop icons and other elements look too small, here's what you need to do.
### Scale-up display if the screen looks too small
If you have a 4K screen, you can scale the display to 200%. This means that you are making every element twice its size.
Press the Windows key and search for Settings:
![Go to Settings][2]
In Settings, go to Display settings.
![Access the Display Settings and look for Scaling][3]
Here, select 200% as the scale factor and click the Apply button.
![Scaling the display in Ubuntu][4]
It will change the display settings and ask you to confirm whether you want to keep the changed settings or revert to the original. If things look good to you, select “Keep Changes.”
Your display settings will be changed and remain the same even after reboots until you change it again.
### Enable fractional scaling (suitable for 2K screens)
200% scaling is good for 4K screens. However, if you have a 2K screen, 200% scaling will make the icons look too big.
Now you are in the soup: the screen elements look either too small or too big. What about a mid-point?
Thankfully, [GNOME][5] has a fractional scaling feature that allows you to set the scaling to 125%, 150%, and 175%.
#### Using fractional scaling on Ubuntu 20.04 and newer versions
Ubuntu 20.04 and newer versions ship newer releases of the GNOME desktop environment, which let you enable or disable fractional scaling from the Display settings itself.
Just go to the Display settings and look for the Fractional Scaling switch. Toggle it to enable or disable it.
When you enable fractional scaling, you'll see new scaling factors between 100% and 200%. You can choose the one that is suitable for your screen.
![Enable fractional scaling][6]
#### Using fractional scaling on Ubuntu 18.04
You'll have to make some additional efforts to make it work on the older Ubuntu 18.04 LTS version.
First, [switch to Wayland from Xorg][7].
Second, enable fractional scaling as an experimental feature using this command:
```
gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"
```
Third, restart your system and then go to the Display settings and you should see the fractional scaling toggle button now.
#### Disabling fractional scaling on Ubuntu 18.04
If you are experiencing issues with fractional scaling, like increased power consumption and mouse lagging, you may want to disable it. Wayland could also be troublesome for some applications.
First, toggle the fractional scaling switch in the display settings. Now use the following command to disable the experimental feature.
```
gsettings reset org.gnome.mutter experimental-features
```
Switch back to Xorg from Wayland again.
### Multi-monitor setup and fractional scaling
A 4K screen is good, but I prefer a multi-monitor setup for work. The problem is that I have two Full HD (1080p) monitors. Pairing them with my 4K laptop screen requires a small settings change.
What I do is keep the 4K screen at 200% scaling with 3840×2400 resolution, and the Full HD monitors at 100% scaling with 1920×1080 resolution.
![HiDPI screen is set at 200%][8]
![Full HD screens are set at 100%][9]
![Full HD screens are set at 100%][10]
To ensure a smooth experience, you should take care of the following:
* Use the Wayland display server: it is a lot better at handling multiple screens and HiDPI screens than the legacy Xorg.
* Even if you use only 100% and 200% scaling, enabling fractional scaling is a must; otherwise, it doesn't work properly. I know it sounds weird, but that's what I have experienced.
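To check which display server your current session is using, you can inspect the `XDG_SESSION_TYPE` environment variable that most desktop sessions set (shown here with a fallback in case it is unset):

```shell
#!/bin/sh
# Prints "wayland" or "x11" for a typical desktop session;
# falls back to "unknown" if the variable is not set.
session="${XDG_SESSION_TYPE:-unknown}"
echo "Session type: $session"
```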
### Did it help?
HiDPI support in Linux is far from perfect but it is certainly improving. Newer desktop environment versions of GNOME and KDE keep on improving on this front.
Fractional scaling with Wayland works quite well. It is improving with Xorg as well, but it struggles, especially on a multi-monitor setup.
I hope this quick tip helped you to enable fractional scaling in Ubuntu and enjoy your Linux desktop on a UHD screen.
Please leave your questions and suggestions in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/enable-fractional-scaling-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2021/08/HiDPI-screen-icons-too-small-in-Ubuntu.webp
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/settings-application-ubuntu.jpg?resize=800%2C247&ssl=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/display-settings-scaling-ubuntu.png?resize=800%2C432&ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/scale-display-ubuntu.png?resize=800%2C443&ssl=1
[5]: https://www.gnome.org/
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/enable-fractional-scaling.png?resize=800%2C452&ssl=1
[7]: https://itsfoss.com/switch-xorg-wayland/
[8]: https://itsfoss.com/wp-content/uploads/2021/08/fractional-scaling-ubuntu-multi-monitor-3.webp
[9]: https://itsfoss.com/wp-content/uploads/2021/08/fractional-scaling-ubuntu-multi-monitor-2.webp
[10]: https://itsfoss.com/wp-content/uploads/2021/08/fractional-scaling-ubuntu-multi-monitor-1.webp


@ -0,0 +1,217 @@
[#]: subject: "Use this open source tool for automated unit testing"
[#]: via: "https://opensource.com/article/21/8/tackle-test"
[#]: author: "Saurabh Sinha https://opensource.com/users/saurabhsinha"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Use this open source tool for automated unit testing
======
Tackle-test is an automatic generator of unit test cases for Java
applications.
![Looking at a map][1]
Modernizing and transforming legacy applications is a challenging activity that involves several tasks. One of the key tasks is validating that the modernized application preserves the functionality of the legacy application. Unfortunately, this can be tedious and hard to perform. Legacy applications often do not have automated test cases, or, if available, test coverage might be inadequate, both in general and specifically for covering modernization-related changes. A poorly maintained test suite might also contain many obsolete tests (accumulated over time as the application evolved). Therefore, validation is mainly done manually in most modernization projects—it is a process that is time-consuming and may not test the application sufficiently. In some reported case studies, testing accounted for approximately 70% to 80% of the time spent on modernization projects [1]. Tackle-test is an automated testing tool designed to address this challenge.
### Overview of Tackle-test
At its core, Tackle-test is an automatic generator of unit test cases for Java applications. It can generate tests with assertions, which makes the tool especially useful in modernization projects, where application transformation is typically functionality-preserving—thus, useful test assertions can be created by observing runtime states of legacy application versions. This can make differential testing between the legacy and modernized application versions much more effective; test cases without assertions would detect only those differences where the modernized version crashes on a test input on which the legacy version executes successfully. The assertions that Tackle-test generates capture created object values after each code statement, as illustrated in the next section.
Tackle-test uses a novel test-generation technique that applies combinatorial test design (CTD)—also called combinatorial testing or combinatorial interaction testing [2]—to method interfaces, with the goal of performing rigorous testing of methods with “complex interfaces,” where interface complexity is characterized over the space of parameter-type combinations that a method can be invoked with. CTD is a well-known, effective, and efficient test-design technique. It typically requires a manual definition of the test space in the form of a CTD model, consisting of a set of parameters, their respective values, and constraints on the value combinations. A valid test in the test space is defined as an assignment of one value to each parameter that satisfies the constraints. A CTD algorithm automatically constructs a subset of the set of valid tests to cover all legitimate value combinations of every _t_ parameters, where _t_ is usually a user input.
Although CTD is typically applied to program inputs in a black-box manner and the CTD model is created manually, Tackle-test automatically builds a parameter-type-based white-box CTD model for each method under test. It then generates a test plan consisting of coverage goals from the model and synthesizes test sequences for covering rows of the test plan. The test plan can be generated at different, user-configurable interaction levels, where higher levels result in the generation of more test cases and more thorough testing, but at the cost of increased test-generation time.
Tackle-test also leverages some existing and commonly used test-generation strategies to maximize code coverage. Specifically, the strategies include feedback-driven random test generation (via the [Randoop][2] open source tool) and evolutionary and constraint-based test generation (via the [EvoSuite][3] open source tool). These tools compute coverage goals in code elements, such as methods, statements, and branches.
![tackle-test components][4]
Figure 1: High-level components of Tackle-test.
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
Figure 1 presents a high-level view of the main components of Tackle-test. It consists of a Java-based core test generator that generates CTD-driven tests and a Python-based command-line interface (CLI), which is the primary mechanism for user interaction.
### Getting started with the tool
Tackle-test is released as open source under the Konveyor organization (<https://github.com/konveyor/tackle-test-generator-cli>). To get started, clone the repo, and follow the instructions for installing and running the tool provided in the repo readme. There are two installation options: using docker/docker-compose or a local installation.
The CLI provides two main commands: `generate` for generating JUnit test cases and `execute` for executing them. To verify your installation completed successfully, use the sample `irs` application located in the test/data folder to run these two commands.
The `generate` command is accompanied by a subcommand specifying the test-generation strategy (`ctd-amplified`, `randoop`, or `evosuite`) and creates JUnit test cases. By default, diff assertions are added to the generated test cases. Let's run the `generate` command on the `irs` sample, using the CTD-guided strategy.
```
$ tkltest --config-file ./test/data/irs/tkltest_config.toml --verbose generate ctd-amplified
[tkltest|18:00:11.171] Loading config file ./test/data/irs/tkltest_config.toml
[tkltest|18:00:11.175] Computing coverage goals using CTD
* CTD interaction level: 1
* Total number of classes: 5
* Targeting 5 classes
* Created a total of 20 test combinations for 20 target methods of 5 target classes
[tkltest|18:00:12.816] Computing test plans with CTD took 1.64 seconds
[tkltest|18:00:12.816] Generating basic block test sequences using CombinedTestGenerator
[tkltest|18:00:12.816] Test generator output will be written to irs_CombinedTestGenerator_output.log
[tkltest|18:01:02.693] Generating basic block test sequences with CombinedTestGenerator took 49.88 seconds
[tkltest|18:01:02.693] Extending sequences to reach coverage goals and generating junit tests
* === total CTD test-plan coverage rate: 90.00% (18/20)
* Added a total of 64 diff assertions across all sequences
* wrote summary file for generation of CTD-amplified tests (JSON)
* wrote 5 test class files to "irs-ctd-amplified-tests/monolithic" with 18 total test methods
* wrote CTD test-plan coverage report (JSON)
[tkltest|18:01:06.694] JUnit tests are saved in ./irs-ctd-amplified-tests
[tkltest|18:01:06.695] Extending test sequences and writing junit tests took 4.0 seconds
[tkltest|18:01:06.700] CTD coverage report is saved in ./irs-tkltest-reports/ctd report/ctdsummary.html
[tkltest|18:01:06.743] Generated Ant build file ./irs-ctd-amplified-tests/build.xml
[tkltest|18:01:06.743] Generated Maven build file ./irs-ctd-amplified-tests/pom.xml
```
Test generation takes a couple of minutes on the `irs` sample. By default, the tool spends 10 seconds per class on initial test sequence generation. However, the overall runtime can be longer due to additional steps, as explained in the following section. Note that the time limit per class option is configurable and that for large applications, test generation might take several hours. Therefore, it is a good practice to start with a limited scope of a few classes to get a feel for the tool before performing test generation on all application classes.
When test generation completes, the test cases are written to a directory named `irs-ctd-amplified-tests`, as shown in the tool output, along with Maven and Ant scripts for compiling and executing them. The test cases are in a subdirectory named `monolithic`. A separate test file is created for each application class. Each such file contains multiple test methods that exercise the public methods of the class with different combinations of parameter types, as specified by the CTD test plan. A CTD coverage report, summarizing the test-plan parts for which unit tests could be generated, is created in a directory named `irs-tkltest-reports`. In the above output, we can see that Tackle-test created test cases for 18 of the 20 test-plan rows, resulting in 90% test-plan coverage.
![amplified tests][6]
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
Now let's look at one of the generated test methods for the `irs.IRS` class.
```
@Test
public void test1() throws Throwable {
    irs.IRS iRS0 = new irs.IRS();
    java.util.ArrayList<irs.Salary> salaryList1 = new java.util.ArrayList<irs.Salary>();
    irs.Salary salary5 = new irs.Salary(0, 0, (double) 100);
    assertEquals(0, ((irs.Salary) salary5).getEmployerId());
    assertEquals(0, ((irs.Salary) salary5).getEmployeeId());
    assertEquals(100.0, (double) ((irs.Salary) salary5).getSalary(), 1.0E-4);
    boolean boolean6 = salaryList1.add(salary5);
    assertEquals(true, boolean6);
    iRS0.setSalaryList((java.util.List<irs.Salary>) salaryList1);
}
```
This test method intends to test the `setSalaryList` method of IRS, which receives a list of `irs.Salary` objects as its input. We can see that statements of the test case are followed by calls to the `assertEquals` method, comparing the values of generated objects to the values recorded during the generation of this test. When the test executes again, e.g., on the modernized version of the application, if any value differs from the recorded one, an assertion failure would occur, potentially indicating broken code that did not preserve the functionality of the legacy application.
Next, we will compile and run the generated test cases using the CLI `execute` command. Note that these are standard JUnit test cases that can be run in an IDE or using any JUnit test runner; they can also be integrated into a CI pipeline. When executed with the CLI, JUnit reports are generated, optionally along with code-coverage reports (created using [JaCoCo][7]).
```
$ tkltest --config-file ./test/data/irs/tkltest_config.toml --verbose execute
[tkltest|18:12:46.446] Loading config file ./test/data/irs/tkltest_config.toml
[tkltest|18:12:46.457] Total test classes: 5
[tkltest|18:12:46.457] Compiling and running tests in ./irs-ctd-amplified-tests
Buildfile: ./irs-ctd-amplified-tests/build.xml
delete-classes:
compile-classes_monolithic:
      [javac] Compiling 5 source files
execute-tests_monolithic:
      [mkdir] Created dir: ./irs-tkltest-reports/junit-reports/monolithic
      [mkdir] Created dir: ./irs-tkltest-reports/junit-reports/monolithic/raw
      [mkdir] Created dir: ./irs-tkltest-reports/junit-reports/monolithic/html
[jacoco:coverage] Enhancing junit with coverage
...
BUILD SUCCESSFUL
Total time: 2 seconds
[tkltest|18:12:49.772] JUnit reports are saved in ./irs-tkltest-reports/junit-reports
[tkltest|18:12:49.773] Jacoco code coverage reports are saved in ./irs-tkltest-reports/jacoco-reports
```
The Ant script executes the unit tests by default, but the user can configure the tool to use Maven instead. Gradle will also be supported soon.
Looking at the JUnit report, located in `irs-tkltest-reports`, we can see that all JUnit test methods passed. This is expected because we executed them on the same version of the application on which they were generated.
![junit report][8]
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
From the JaCoCo code-coverage report, also located in `irs-tkltest-reports`, we can see that CTD-guided test generation achieved 71% overall statement coverage and 94% branch coverage on the `irs` sample. We can also drill down to the class and method levels to see their coverage rates. The missing coverage is the result of test-plan rows for which the test generator was unable to generate a passing sequence. Increasing the test-generation time limit per class can increase the coverage rate.
![jacoco][9]
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
### CTD-guided test generation
Figure 2 illustrates the test-generation flow for CTD-guided test generation, implemented in the core test-generation engine of Tackle-test. The input to the test-generation flow is a specification of (1) the application classes, (2) the library dependencies of the application, and (3) optionally, the set of application classes to target for test generation (if unspecified, all application classes are targeted). This specification is provided via a [TOML][10] configuration file. The output from the flow consists of: (1) JUnit test cases (with or without assertions), (2) Maven and Ant build files, and (3) JSON files containing a summary of test generation and CTD test-plan coverage.
![ctd-guided test generation][11]
Figure 2: The process for CTD-guided test generation.
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
The flow starts with the generation of the CTD test plan. This involves creating a CTD model for each public method of the targeted classes. The CTD model for each method captures all possible concrete types for every formal parameter of the method, including elements that can be added to collection/map/array parameter types. Tackle-test incorporates lightweight static analysis to deduce the feasible concrete types for each parameter of each method.
Next, a CTD test plan is generated automatically from the model at a given (user-configurable) interaction level. Each row in the test plan describes a specific combination of concrete parameter types with which the method should be invoked. By default, the interaction level is set to one, which results in one-way testing: each possible concrete parameter type appears in at least one row of the test plan. Setting the interaction level to two, a.k.a. pairwise testing, would result in a test plan that includes every pair of concrete types for each pair of method parameters in at least one of its rows.
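To make the interaction levels concrete, here is a toy enumeration (not actual Tackle-test output) for a hypothetical method with two parameters, where `p1` can be `List` or `ArrayList` and `p2` can be `int` or `Integer`. At interaction level two (pairwise), every pair of concrete types must appear in some row, which for two parameters means the full cross product:

```shell
#!/bin/sh
# Toy pairwise (interaction level 2) test plan for two parameters:
# every (p1, p2) type combination becomes one test-plan row.
for p1 in List ArrayList; do
    for p2 in int Integer; do
        echo "row: ($p1, $p2)"
    done
done
```

At interaction level one, by contrast, two rows would suffice, since each concrete type only needs to appear in at least one row.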
The CTD test plan provides a set of coverage goals for which test sequences need to be synthesized. Tackle-test does this in two steps. In the first step, it uses Randoop and/or EvoSuite (the user can configure which tools are used) to create base test sequences. The base test sequences are analyzed to generate sequence pools at method and class levels from which the test-generation engine samples sequences to put together a covering sequence for each test-plan row. If a covering sequence is successfully created, the engine executes it to ensure that the sequence is valid in the sense that it does not cause the application to crash. During this execution, runtime states in terms of objects created are also recorded to be used later for assertion generation. Failing sequences are discarded. The engine adds assertions to passing sequences if the user specifies the assertion option. Finally, the engine exports the sequences, grouped by classes, to JUnit class files. The engine also creates Ant `build.xml` and Maven `pom.xml` files, which can be used if needed for running the generated test cases.
### Other tool features
Tackle-test is highly configurable and provides several configuration options with which the user can tailor the behavior of the tool: for example, which classes to generate tests for, which tools to use for test generation, how much time to spend on test generation, whether to add assertions to test cases, what interaction level to use for generating CTD test plans, and how many executions to perform for extended test sequences.
### Effectiveness of different test-generation strategies
Tackle-test has been evaluated on several open source Java applications and is currently being applied to enterprise-grade Java applications as well.
![instruction coverage results][12]
Figure 3: Instruction coverage achieved by test cases generated using different strategies and interaction levels for two small open-source Java applications taken from the [SF110 benchmark][13].
(Saurabh Sinha and Rachel Tzoref-Brill, [CC BY-SA 4.0][5])
Figure 3 presents data about statement coverage achieved by tests generated using different testing strategies on two small open source Java applications. The applications were taken from the [SF110 benchmark][13], a large corpus of open source Java applications created to facilitate empirical studies of automated testing techniques. One of the applications, `jni-inchi`, consists of 24 classes and 74 methods; the other, `gaj`, consists of 14 classes and 17 methods. The box plot shows that targeting CTD test-plan rows can by itself achieve good statement coverage. Moreover, compared with test suites of the same size sampled out of the Randoop- and EvoSuite-generated test cases, the CTD-guided test suite achieves higher statement coverage, making it more efficient.
A large-scale evaluation of Tackle-test, using more applications from the SF110 benchmark and some proprietary enterprise Java applications, is currently being conducted.
If you prefer to see a video demonstration, you can watch it [here][14].
We encourage you to try out the tool and provide feedback to help us improve it by submitting a pull request. We also invite you to help improve the tool by contributing to the project.
#### Migrate to Kubernetes with the Konveyor community
Tackle-test is part of the Konveyor community. This community is helping others modernize and migrate their applications to the hybrid cloud by building tools, identifying patterns, and providing advice on breaking down monoliths, adopting containers, and embracing Kubernetes.
This community includes open source tools that migrate virtual machines to KubeVirt, Cloud Foundry, or Docker containers to Kubernetes, or namespaces between Kubernetes clusters. These are a few of the use cases we solve for.
For updates on these tools and invites to meetups where practitioners show how they moved to Kubernetes, [join the community][15].
#### References
[1] COBOL to Java and Newspapers Still Get Delivered, <https://arxiv.org/pdf/1808.03724.pdf>, 2018.
[2] D. R. Kuhn, R. N. Kacker, and Y. Lei. Introduction to Combinatorial Testing. Chapman & Hall/CRC, 2013.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/tackle-test
作者:[Saurabh Sinha][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/saurabhsinha
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
[2]: https://randoop.github.io/randoop/
[3]: https://www.evosuite.org/
[4]: https://opensource.com/sites/default/files/1tackle-test-components.png
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/2amplified-tests.png (amplified tests)
[7]: https://www.eclemma.org/jacoco/
[8]: https://opensource.com/sites/default/files/3junit-report.png (junit report)
[9]: https://opensource.com/sites/default/files/4jacoco.png (jacoco)
[10]: https://toml.io/en/
[11]: https://opensource.com/sites/default/files/5ctd-guided-test-generation.png (ctd-guided test generation)
[12]: https://opensource.com/sites/default/files/6instructioncoverage.png (instruction coverage results)
[13]: https://www.evosuite.org/experimental-data/sf110/
[14]: https://youtu.be/qThqTFh2PM4
[15]: https://www.konveyor.io/

[#]: subject: "Automatically Light Up a Sign When Your Webcam is in Use"
[#]: via: "https://fedoramagazine.org/automatically-light-up-a-sign-when-your-webcam-is-in-use/"
[#]: author: "John Boero https://fedoramagazine.org/author/boeroboy/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Automatically Light Up a Sign When Your Webcam is in Use
======
![][1]
Automatic WFH sign tells others when you're in a conference.
At the beginning of the COVID lockdown, with multiple people working from home, it was obvious there was a need to let others know when I'm in a meeting or on a live webcam. So naturally it took me one year to finally do something about it. Now I'm here to share what I learned along the way. You too can have your very own "do not disturb" sign automatically light up outside your door to tell people not to walk in half-dressed on laundry day.
At first I was surprised Zoom doesn't have this kind of feature built in. But then again I might use Teams, Meet, Hangouts, WebEx, Bluejeans, or any number of future video collaboration apps. Wouldn't it make sense to just use a system-wide watch for active webcams or microphones? Like most problems in life, this one can be helped with the Linux kernel. A simple check of the _uvcvideo_ module will show if a video device is in use. Without using events, all that is left is to poll it for changes. I chose to build a taskbar icon for this. I would normally do this with my trusty C++, but I decided to step out of my usual comfort zone and use Python in case someone wanted to port it to other platforms. I also wanted to renew my lesser Python-fu and face my inner white space demons. I came up with the following ~90 lines of practical and simple but insecure Python:
<https://github.com/jboero/livewebcam/blob/main/livewebcam>
Aside from the icon bits, a daemon thread performs the following basic check every second, calling the appropriate script when the status changes:
```
def run(self):
    while True:
        val = subprocess.check_output(['lsmod | grep \'^uvcvideo\' | awk \'{print $3}\''], shell=True, text=True).strip()
        if val != self.status:
            self.status = val
            if val == '0':
                val = subprocess.check_output(['~/bin/webcam_deactivated.sh'])
            else:
                val = subprocess.check_output(['~/bin/webcam_activated.sh'])
        time.sleep(1)
```
Rather than implement the parsing of modules, just using a hard-coded shell command got the job done. Now whatever scripts you choose to put in ~/bin/ will be run when at least one webcam activates or deactivates. I recently had a futile go at the kernel maintainers regarding a bug in usb_core triggered by uvcvideo, so I would just as soon not go a step further and attempt an events patch to uvcvideo. Also, this leaves room for Mac or Windows users to port their own simple checks.
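Since _lsmod_ is itself just a formatter for /proc/modules, the same check could also parse that file directly instead of shelling out. A hypothetical variant of the polling logic (the sample line below is illustrative):

```python
def uvcvideo_use_count(modules_text):
    """Return the use count of the uvcvideo module ('0' if not loaded).

    modules_text is the content of /proc/modules; the fields per line are
    name, size, use count, dependencies, state, address.
    """
    for line in modules_text.splitlines():
        fields = line.split()
        if fields and fields[0] == "uvcvideo":
            return fields[2]
    return "0"

# On a Linux box you would feed it the real file:
# with open("/proc/modules") as f:
#     status = uvcvideo_use_count(f.read())

sample = "uvcvideo 114688 1 videobuf2_v4l2, Live 0x0000000000000000\n"
print(uvcvideo_use_count(sample))  # 1
```

This avoids spawning a shell pipeline every second, at the cost of a few more lines of Python.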
Now that I had a happy icon sitting in my KDE system tray, I could implement scripts for on and off. This is where things got complicated. At first I was going to stick a magnetic Bluetooth LED badge on my door to flash "LIVE" whenever I was in a call. These things are ubiquitous on the internet and cost about $10 for basically an embedded ARM Cortex-M0 with an LED screen, Bluetooth, and battery. They are basically a full Raspberry Pi Pico kit, but soldered onto the board.
![These Bluetooth LED badges with 48Mhz ARM Cortex-M0 chips have a lot of potential, but they need custom firmware to be any use.][2]
Unfortunately, these badges use a fixed firmware that is either listening for Bluetooth transmissions or showing your message; it doesn't do both, which is silly. Many people have posted feedback that they should be so much more. Sure enough, someone has already tinkered with [custom firmware][3]. Unfortunately, that firmware was for older USB variants, and I'm not about to de-solder or buy an ISP programmer to flash EEPROM just for this. That would be a super interesting project for later and would be a great Rpi alternative, but all I want right now is a remote-controlled light outside my door. I looked at everything, including WiFi [smart bulbs][4] to replace my recessed lighting bulbs, and [BTLE candles][5], which are an interesting option. Along the way I learned a lot about Bluetooth Low Energy, including how a kernel update can waste 4 hours of a weekend with Bluetooth stack crashes. BTLE is really interesting and makes a lot more sense after reading up on it. Sure enough, there is Python that can set the display [message on your LED badge][6] from across the room, but once it is set, Bluetooth will stop listening for you to change it or shut it off. Darn. I guess I should just make do with USB, which actually has a standard command to control power to ports. Let's see if something exists for this already.
![A programmable Bluetooth LED sign costs £10 or for £30 you can have a single LED up to 59 inches away.][7]
It looked like there are options out there, even if they're not ideal. Then suddenly I found it: a neon "ON AIR" sign for £15, and it's as dumb as they come, just using 5V from USB power. Perfect.
![Bingo now all I needed to do was control the power to it.][8]
The command to control USB power is _uhubctl_, which is in the Fedora repos. Unfortunately, most USB hubs don't support this command. In fact, very few support it [going back 20 years][9], which seems silly. Hubs will happily report that power has been disconnected even though no such disconnection has been made. I assume it's just a few cents extra to build in this feature, but I'm not a USB hub manufacturer. Therefore I needed to source a pre-owned one. In the end, I found a BYTECC BT-UH340 from the US. This was all I needed to finalize it. After adding udev rules to allow the _wheel_ group to control USB power, I can now perform a simple _uhubctl -a off -l 1-1 -p 1_ to turn anything off.
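To drive this from the Python tray app rather than the shell scripts, a small wrapper can build the same _uhubctl_ invocation. The location `1-1` and port `1` defaults here are just the values from my hub; run `uhubctl` with no arguments to list yours:

```python
import subprocess

def port_power_command(on, location="1-1", port=1):
    """Build the uhubctl argument list for switching one port's power."""
    return ["uhubctl", "-a", "on" if on else "off", "-l", location, "-p", str(port)]

def set_port_power(on, location="1-1", port=1):
    """Actually run it (requires a compatible hub and the udev rules above)."""
    subprocess.run(port_power_command(on, location, port), check=True)

print(port_power_command(False))  # ['uhubctl', '-a', 'off', '-l', '1-1', '-p', '1']
```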
![The BYTECC BT-UH340 is one of few hubs I could actually find to support uhubctl power.][10]
Now, with a spare USB extension cable leading to my door, I finally have a complete solution. There is an "ON AIR" sign on the outside of my door that lights up automatically whenever any of my webcams are in use. I would love to see a Mac port or improvements in pull requests. I'm sure it can all be better. Even further, I would love to hone my IoT skills and sort out flashing those Bluetooth badges. If anybody wants to replicate this, please be my guest, and suggestions are always welcome.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/automatically-light-up-a-sign-when-your-webcam-is-in-use/
作者:[John Boero][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/boeroboy/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/onair-1890x800-1-816x345.jpg
[2]: https://fedoramagazine.org/wp-content/uploads/2021/03/IMG_20210322_164346-1024x768.jpg
[3]: https://github.com/Effix/LedBadge
[4]: https://www.amazon.co.uk/AvatarControls-Dimmable-Bluetooth-Connection-2700K-6100K/dp/B08P21MSTW/ref=sr_1_6_mod_primary_lightning_deal?dchild=1&keywords=bluetooth+bulb+spot&qid=1616345349&sbo=Tc8eqSFhUl4VwMzbE4fw%2Fw%3D%3D&smid=A2GE8P68TQ1YXI&sr=8-6
[5]: http://nilhcem.com/iot/reverse-engineering-simple-bluetooth-devices
[6]: http://nilhcem.com/iot/reverse-engineering-bluetooth-led-name-badge
[7]: https://fedoramagazine.org/wp-content/uploads/2021/03/image-7-1024x416.png
[8]: https://fedoramagazine.org/wp-content/uploads/2021/03/IMG_20210322_163624-1024x768.jpg
[9]: https://github.com/mvp/uhubctl#compatible-usb-hubs
[10]: https://c1.neweggimages.com/ProductImage/17-145-089-02.jpg

[#]: subject: "Calculate date and time ranges in Groovy"
[#]: via: "https://opensource.com/article/21/8/groovy-date-time"
[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Calculate date and time ranges in Groovy
======
Use Groovy date and time to discover and display time increments.
![clock][1]
Every so often, I need to do some calculations related to dates. A few days ago, a colleague asked me to set up a new project definition in our (open source, of course!) project management system. This project is to start on the 1st of August and finish on the 31st of December. The service to be provided is budgeted at 10 hours per week.
So, yeah, I had to figure out how many weeks there are between 2021-08-01 and 2021-12-31, inclusive.
This is the perfect sort of problem to solve with a tiny [Groovy][2] script.
### Install Groovy on Linux
Groovy is based on Java, so it requires a Java installation. Recent, decent versions of both Java and Groovy might be in your Linux distribution's repositories. Alternatively, you can install Groovy by following the instructions on [groovy-lang.org][2].
A nice alternative for Linux users is [SDKMan][3], which can be used to get multiple versions of Java, Groovy, and many other related tools. For this article, I'm using my distro's OpenJDK11 release and SDKMan's latest Groovy release.
### Solving the problem with Groovy
Since Java 8, time and date calculations have been folded into a new package called **java.time**, and Groovy provides access to that. Here's the script:
```
import java.time.*
import java.time.temporal.*
def start = LocalDate.parse('2021-08-01','yyyy-MM-dd')
def end = LocalDate.parse('2022-01-01','yyyy-MM-dd')
println "${ChronoUnit.WEEKS.between(start,end)} weeks between $start and $end"
```
Copy this code into a file called **wb.groovy** and run it on the command line to see the results:
```
$ groovy wb.groovy
21 weeks between 2021-08-01 and 2022-01-01
```
Let's review what's going on.
### Date and time
The [**java.time.LocalDate** class][4] provides many useful static methods (like **parse()** shown above, which lets us convert from a string to a **LocalDate** instance according to a pattern, in this case, **yyyy-MM-dd**). The format characters are explained in quite a number of places; for example, the documentation for [**java.time.format.DateTimeFormat**][5]. Notice that **M** represents "month," not **m**, which represents "minute." So this pattern defines a date formatted as a four-digit year, followed by a hyphen, followed by a two-digit month number (1-12), followed by another hyphen, followed by a two-digit day-of-month number (1-31).
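For comparison only (this article's code is Groovy), the equivalent pattern in Python's **strptime** syntax is **%Y-%m-%d**, where case likewise distinguishes month from minute (**%m** vs. **%M**):

```python
from datetime import datetime

# %Y = four-digit year, %m = two-digit month, %d = two-digit day of month
d = datetime.strptime("2021-08-01", "%Y-%m-%d").date()
print(d.isoformat())  # 2021-08-01
```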
Notice as well that in Java, **parse()** requires an instance of **DateTimeFormat**:
```
parse(CharSequence text, DateTimeFormatter formatter)
```
As a result, parsing becomes a two-step operation, whereas Groovy provides an additional version of **parse()** that accepts the format string directly in place of the **DateTimeFormat** instance.
The [**java.time.temporal.ChronoUnit** class][6], actually an **Enum**, provides several **Enum constants**, like **WEEKS** (or **DAYS**, or **CENTURIES**...) which in turn provide the **between()** method that allows us to calculate the interval of those units between two **LocalDates** (or other similar date or time data types). Note that I used January 1, 2022, as the value for **end**; this is because **between()** spans the time period starting on the first date given up to but not including the second date given.
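As a quick cross-check of that half-open semantics, counting complete weeks between the same two dates in Python (standard **datetime** module) gives the same answer the Groovy script prints:

```python
from datetime import date

start = date(2021, 8, 1)
end = date(2022, 1, 1)  # exclusive upper bound, as with between()
weeks = (end - start).days // 7  # complete weeks in [start, end)
print(weeks)  # 21
```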
### More date arithmetic
Every so often, I need to know how many working days are in a specific time frame (like, say, a month). This handy script will calculate that for me:
```
import java.time.*
def holidaySet = [LocalDate.parse('2021-01-01'), LocalDate.parse('2021-04-02'),
    LocalDate.parse('2021-04-03'), LocalDate.parse('2021-05-01'),
    LocalDate.parse('2021-05-15'), LocalDate.parse('2021-05-16'),
    LocalDate.parse('2021-05-21'), LocalDate.parse('2021-06-13'),
    LocalDate.parse('2021-06-21'), LocalDate.parse('2021-06-28'),
    LocalDate.parse('2021-06-16'), LocalDate.parse('2021-06-18'),
    LocalDate.parse('2021-08-15'), LocalDate.parse('2021-09-17'),
    LocalDate.parse('2021-09-18'), LocalDate.parse('2021-09-19'),
    LocalDate.parse('2021-10-11'), LocalDate.parse('2021-10-31'),
    LocalDate.parse('2021-11-01'), LocalDate.parse('2021-11-21'),
    LocalDate.parse('2021-12-08'), LocalDate.parse('2021-12-19'),
    LocalDate.parse('2021-12-25')] as Set
def weekendDaySet = [DayOfWeek.SATURDAY,DayOfWeek.SUNDAY] as Set
int calcWorkingDays(start, end, holidaySet, weekendDaySet) {
    (start..<end).inject(0) { subtotal, d ->
        if (!(d in holidaySet || DayOfWeek.from(d) in weekendDaySet))
            subtotal + 1
        else
            subtotal
    }
}
def start = LocalDate.parse('2021-08-01')
def end = LocalDate.parse('2021-09-01')
println "${calcWorkingDays(start,end,holidaySet,weekendDaySet)} working day(s) between $start and $end"
```
Copy this code into a file called **wdb.groovy** and run it from the command line to see the results:
```
$ groovy wdb.groovy
22 working day(s) between 2021-08-01 and 2021-09-01
```
Let's review this.
First, I create a set of holiday dates (these are Chile's "días feriados" for 2021, in case you wondered) called **holidaySet**. Note that the default pattern for **LocalDate.parse()** is **yyyy-MM-dd**, so I've left the pattern out here. Note as well that I'm using the Groovy shorthand **[a,b,c]** to create a **List** and then coercing it to a **Set**.
Next, I want to skip Saturdays and Sundays, so I create another set incorporating two **enum** values of [**java.time.DayOfWeek**][8]**SATURDAY** and **SUNDAY**.
Then I define a method **calcWorkingDays()** that takes as arguments the start date, the end date (which, following the previous example of **between()**, is the first value outside the range I want to consider), the holiday set, and the weekend day set. Line by line, this method:
* Defines a range between **start** and **end**, open on the **end** (that's what **<end** means), and executes the closure argument passed to the **inject()** method (**inject()** implements the 'reduce' operation on **List** in Groovy) on the successive elements **d** in the range:
* As long as **d** is neither in the **holidaySet** nor in the **weekendDaySet**, increments the **subtotal** by 1
* Returns the value of the result returned by **inject()**
Next, I define the **start** and **end** dates between which I want to calculate working days.
Finally, I call **println** using a Groovy [**GString**][9] to evaluate the **calcWorkingDays()** method and display the result.
Note that I could have used the **each** closure instead of **inject**, or even a **for** loop. I could have also used Java Streams rather than Groovy ranges, lists, and closures. Lots of options.
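For the curious, the same reduce-style count ports naturally to Python as well. This sketch uses only the one August holiday from the set above (which in 2021 falls on a Sunday anyway) and reproduces the script's output:

```python
from datetime import date, timedelta

def calc_working_days(start, end, holidays, weekend=frozenset({5, 6})):
    """Count days in the half-open range [start, end) that are neither
    weekend days (Mon=0 .. Sun=6) nor holidays."""
    total, d = 0, start
    while d < end:
        if d.weekday() not in weekend and d not in holidays:
            total += 1
        d += timedelta(days=1)
    return total

holidays = {date(2021, 8, 15)}
print(calc_working_days(date(2021, 8, 1), date(2021, 9, 1), holidays))  # 22
```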
### But why not use groovy.Date?
Some of you old Groovy users may be wondering why I'm not using good old [**groovy.Date**][10]. The answer is, I could use it. But Groovy Date is based on Java Date, and there are some good reasons for moving to **java.time**, even though Groovy Date added quite a few nice things to Java Date.
For me, the main reason is that there are some not-so-great design decisions buried in the implementation of Java Date, the worst being that it is unnecessarily mutable. I spent a while tracking down a weird bug that arose from my poor understanding of the **clearTime()** method on Groovy Date. I learned it actually clears the time field of the date instance, rather than returning the date value with the time part set to 00:00:00.
Date instances also aren't thread-safe, which can be kind of challenging for multithreaded applications.
Finally, having both date and time wrapped up in a single field isn't always convenient and can lead to some weird data modeling contortions. Think, for instance, of a day on which multiple events occur: Ideally, the _date_ field would be on the day, and the _time_ field would be on each event; but that's not easy to do with Groovy Date.
### Groovy is groovy
Groovy is an Apache project, and it provides a simplified syntax for Java so you can use it for quick and simple scripts in addition to complex applications. You retain the power of Java, but you access it with an efficient toolset. [Try it soon][11], and see if you find your groove with Groovy.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/groovy-date-time
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/clock_1.png?itok=lbyiCJWV (clock)
[2]: https://groovy-lang.org/
[3]: https://sdkman.io/
[4]: https://docs.groovy-lang.org/latest/html/groovy-jdk/java/time/LocalDate.html
[5]: https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html
[6]: https://docs.oracle.com/javase/8/docs/api/java/time/temporal/ChronoUnit.html
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+set
[8]: https://docs.oracle.com/javase/8/docs/api/java/time/DayOfWeek.html
[9]: https://docs.groovy-lang.org/latest/html/api/groovy/lang/GString.html
[10]: https://docs.groovy-lang.org/latest/html/groovy-jdk/java/util/Date.html
[11]: https://groovy.apache.org/download.html

[#]: subject: "How to Easily Install Debian Linux"
[#]: via: "https://itsfoss.com/install-debian-easily/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "guevaraya "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Easily Install Debian Linux
======
Installing Debian could be easy or complicated depending upon the ISO you choose.
If you go with the default ISO provided by the Debian website, you'll have a hard time installing Debian. You'll be stuck at a screen that asks for network drivers to be installed from external removable media.
![Installing Debian from default ISO is problematic for new users][1]
You may, of course, troubleshoot that, but it makes things unnecessarily complicated.
Don't worry. Let me show you the steps for installing Debian comfortably and easily.
### The easy way of installing Debian as a desktop
Before you see the steps, please have a look at things you need.
* A USB key (pen drive) with at least 4 GB in size.
* A system with internet connection (could be the same system where it will be installed).
* A system where youll be installing Debian. It will wipe out everything on this system so please copy your important data on some other external disk.
What kind of system specifications should you have for Debian? It depends on the [desktop environment][2] you are going to use. For example, the GNOME desktop environment could work on 4 GB of RAM, but it will work a lot better with 8 GB. If you have 4 GB or less, try using the KDE, Cinnamon, or Xfce desktops.
Debian also has both [32-bit and 64-bit architecture][3] support. Youll have to get the Debian ISO according to your processor architecture.
Your system should have at least 25 GB of disk space to function. The more, the merrier.
Warning!
This method removes all the other operating systems along with the data present on the disk.
You may save your personal files, documents, pictures etc on an external USB disk or cloud storage if you want to use it later.
In this tutorial, I am going to show the steps for installing Debian 11 Bullseye with GNOME desktop environment. The steps should be the same even if you choose some other desktop environment.
_**This tutorial is tested on a UEFI system with GPT partitioning. If you have [MBR instead of GPT][4] or [legacy BIOS instead of UEFI][5], the live USB creation step will be different.**_
#### Step 1: Getting the correct Debian ISO
Half of the battle in installing Debian is choosing the correct ISO. Surprisingly, it is really difficult to navigate the project's website and find the ISO that is easiest for a new Debian user.
If you click the Download button on the [homepage of the Debian website][6], it downloads a minimal net install file which will be super complicated for a regular user. Please DO NOT use this.
Instead, you should go for the live ISO. But here is a catch: there are separate live versions with non-free software (which includes drivers for your networking hardware).
You should download this non-free live ISO. Another problem is that it is not mentioned prominently on the website, and there are various URLs for torrents or direct downloads for the various architectures.
Let me link them here.
[Main repo for 32 and 64 bit][7]
[Debian 11 Direct][8]
[Debian 11 Torrent][9]
You'll see several files with the desktop environment mentioned in the filename. Choose the one with the desktop environment of your choice. For direct downloads, click on the links that end with .iso.
![Downloading the Debian Live Non-free ISO][10]
Once you have the appropriate ISO downloaded, the rest is standard procedure that you may have experienced with other Linux distributions.
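Before creating the live USB, it is worth checking the download against the SHA256SUMS file that Debian publishes alongside the images (the ISO filename below is illustrative). A minimal check in Python:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a multi-gigabyte ISO in 1 MB chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# expected = "<value for your ISO from the SHA256SUMS file>"
# assert sha256_of("debian-live-11.0.0-amd64-gnome+nonfree.iso") == expected
```

The same check can of course be done with the `sha256sum` command, but the function above works anywhere Python does.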
#### Step 2: Creating live USB of Debian
Plug the USB into your system. It would be wise to format it first, although it will be formatted anyway during the flashing process.
You can use any live USB creation tool of your choice. If you are using Windows, go with Rufus. I am going to use Etcher here because it is available for both Windows and Linux.
Download Etcher from its website.
[Download Etcher][11]
I have a dedicated [tutorial on using Etcher in Linux][12] and thus I am not going to go in detail here. Just run the downloaded executable file, browse to the Debian ISO, make sure that correct USB is selected and then hit the Flash button.
![Creating Live Debian USB with Etcher][13]
It may take a couple of minutes to create the live USB. Once that is ready, it is time to boot from it.
#### Step 3: Boot from the live USB
Restart the system where you want to install Debian. When it is showing the manufacturer's logo, press the F2/F10 or F12 key to access the boot settings. You may also [access the UEFI firmware settings from Windows][14].
Some systems do not allow booting from live USB if secure boot is enabled. If that is the case, please [disable secure boot from the BIOS settings][15].
The screen may look different for different manufacturers.
![][16]
Once you make the change, press F10 to save and exit. Your system will boot once again.
Again, press F2/F10 or F12 to access the boot settings when it shows the manufacturer's logo. You should see the option to boot from the USB. Select it.
![][17]
It takes a little bit of time and then you should see a screen like this. Go with the first option here.
![Debian live boot screen][18]
#### Step 4: Start Debian installation
When you enter the live Debian session, it may show a welcome screen with options to choose your keyboard and language if you are using the GNOME desktop. Just hit next when you see those screens.
![Debian live welcome screen][19]
Once you are past the welcome screen, press the Windows/Super key to bring up the activities area. You should see the Debian install button here.
![Start Debian Installation][20]
It opens the friendly [Calamares graphical installer][21]. Things are pretty straightforward from here.
![Debian 11 Calamares graphical installer][22]
It asks you to select your geographical location and time zone.
![Select your location and time zone][23]
On the next screen, you'll be asked to select the keyboard. Please **pay attention** here. Your keyboard is automatically selected based on your location. For example, I had used India as my location, and it automatically set the default Indian keyboard with the Hindi language. I had to change it to English (India).
![Choosing keyboard layout][24]
The next screen is about the disk partition and where you would like to install Debian. In this case, you are going to install Debian as the only operating system on your computer.
The easiest option would be to go with the Erase disk option. Debian will put everything under root except the mandatory ESP partition and swap space. In fact, it shows what your disk would look like after your chosen installation method.
![Disk partitioning][25]
If you want to take matters into your own hands, you may also opt for manual partitioning and choose how much you want to allot to root, home, boot, or swap. Only do that when you know what you are doing.
On the next screen, you have to provide the username and password. It does not set a root password; that is kept empty.
![Set Username and password][26]
This also means that you can use sudo with the newly created user. In the complicated Debian install, you could also set a root password, but then you'll have to add the normal user to the sudoers list manually. See, this installation method is easier for beginners, right?
Before it goes on with the actual installation, it presents you with a summary of the choices you have made. If things look good, hit the install button.
![Summary of your installation choices][27]
Now it is just a matter of waiting for the installation to finish.
![Installing Debian][28]
It takes a few minutes to complete the installation. When the installation finishes, it asks for a restart.
![Finished Debian installation][29]
Restart your system and if everything goes well, you should see the grub screen with Debian.
![Debian boot screen][30]
### Troubleshooting tip (if your system does not boot into Debian)
In my case, my Dell system did not recognize any operating system to boot. This was weird because I had seen Debian create an ESP partition.
If it is the same case with you, go to BIOS settings. Check the boot sequence. If you do not see anything, click on the Add boot option.
![Add new boot option][31]
It should give you an option to add an EFI file.
![Browse to EFi file][32]
Since Debian created ESP partition during installation, there is an EFI directory created with necessary files.
![Select EFI directory][33]
It should show a Debian folder along with some other folders. Select the Debian folder.
![Select Debian][34]
In this Debian folder, you'll find files like grubx64.efi and shimx64.efi. Select shimx64.efi.
![Select shim.efi][35]
You may give this file an appropriate name. The final screen may look like this.
![Adding the new boot option with efi file][36]
Now, you should have this boot option. Since I named it Debian, it shows two Debian boot options (one of them coming from the efi file I guess). Press F10 to save and exit the BIOS settings.
![New boot option added][37]
When your system boots now, you should see the grub screen with Debian boot option. You can start enjoying Debian now.
![][30]
### Were you able to install Debian?
I hope I made things simpler here. It is not that you cannot install Debian from the default net installer ISO. It just takes (a lot) more effort.
Was this tutorial helpful for you in installing Debian? Are you still facing issues? Please let me know in the comment section and I'll try to help you out.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-debian-easily/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Debian-firmware.png?resize=800%2C600&ssl=1
[2]: https://itsfoss.com/what-is-desktop-environment/
[3]: https://itsfoss.com/32-bit-64-bit-ubuntu/
[4]: https://itsfoss.com/check-mbr-or-gpt/
[5]: https://itsfoss.com/check-uefi-or-bios/
[6]: https://www.debian.org/
[7]: https://cdimage.debian.org/images/unofficial/non-free/images-including-firmware/11.0.0-live+nonfree/
[8]: https://cdimage.debian.org/images/unofficial/non-free/images-including-firmware/11.0.0-live+nonfree/amd64/iso-hybrid/
[9]: https://cdimage.debian.org/images/unofficial/non-free/images-including-firmware/11.0.0-live+nonfree/amd64/bt-hybrid/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/downloading-Debian-live-non-free-iso.png?resize=800%2C490&ssl=1
[11]: https://www.balena.io/etcher/
[12]: https://itsfoss.com/install-etcher-linux/
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/creating-live-debian-usb-with-etcher-800x518.png?resize=800%2C518&ssl=1
[14]: https://itsfoss.com/access-uefi-settings-windows-10/
[15]: https://itsfoss.com/disable-secure-boot-windows/
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2014/05/Disable_Secure_Boot_Windows8.jpg?resize=700%2C525&ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/boot-from-windows-disk-ventoy.jpg?resize=800%2C611&ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/debian-live-boot-screen.png?resize=617%2C432&ssl=1
[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/debian-live-welcome-screen.png?resize=800%2C450&ssl=1
[20]: https://itsfoss.com/wp-content/uploads/2021/08/start-Debian-installation-800x473.webp
[21]: https://calamares.io/
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/Installing-Debian-1.png?resize=800%2C441&ssl=1
[23]: https://itsfoss.com/wp-content/uploads/2021/08/Installing-Debian-2-800x441.webp
[24]: https://itsfoss.com/wp-content/uploads/2021/08/Installing-Debian-4-800x441.webp
[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Installing-Debian-5.png?resize=800%2C441&ssl=1
[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Installing-Debian-6.png?resize=800%2C441&ssl=1
[27]: https://itsfoss.com/wp-content/uploads/2021/08/Installing-Debian-7-800x500.webp
[28]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/Installing-Debian-8.png?resize=800%2C500&ssl=1
[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Installing-Debian-9.png?resize=800%2C500&ssl=1
[30]: https://itsfoss.com/wp-content/uploads/2021/08/debian-boot-screen.webp
[31]: https://itsfoss.com/wp-content/uploads/2021/08/add-new-boot-option.webp
[32]: https://itsfoss.com/wp-content/uploads/2021/08/add-efi-file-for-boot-option.webp
[33]: https://itsfoss.com/wp-content/uploads/2021/08/select-efi-file-boot-option.webp
[34]: https://itsfoss.com/wp-content/uploads/2021/08/select-debian-folder-for-uefi.webp
[35]: https://itsfoss.com/wp-content/uploads/2021/08/select-shim-boot.webp
[36]: https://itsfoss.com/wp-content/uploads/2021/08/new-boot-option.webp
[37]: https://itsfoss.com/wp-content/uploads/2021/08/new-boot-option-added.webp

View File

@ -0,0 +1,121 @@
[#]: subject: "Linux kernel modules we can't live without"
[#]: via: "https://opensource.com/article/21/8/linux-kernel-module"
[#]: author: "Jen Wike Huger https://opensource.com/users/jen-wike"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux kernel modules we can't live without
======
Open source enthusiasts weigh in on the Linux kernel modules they love.
![Linux keys on the keyboard for a desktop computer][1]
The Linux kernel is turning 30 this year! If you're like us, that's a big deal and we are celebrating Linux this week with a couple of special posts.
Today we start with a roundup of responses from around the community answering "What Linux kernel module can you not live without? And, why?" Let's hear what these 10 enthusiasts have to say.
* * *
I guess some kernel developers will run away screaming when they hear my answer. Still, I list here two of the most controversial modules:
* First is NVIDIA, as I have an NVIDIA graphics card on my work laptop and my personal desktop.
* The other one probably generates less hatred—the VMNET and VMMON modules from VMware to be able to run VMware Workstation. —[Peter Czanik][2]
* * *
My favorite is the [zram][3] module. It creates a compressed block device in memory, which can then be used as a swap partition. Using a zram-based swap partition is ideal when memory is limited (for example, on virtual machines) and if you are worried about wearing out your SSD or, even worse, your flash-based storage because of frequent I/O operations. —[Stephan Avenwedde][4]
* * *
The most useful kernel module is definitely snd-hda-intel since it supports most integrated sound cards. I listen to music while coding an audio sequencer on the Linux desktop. —[Joël Krähemann][5]
* * *
My laptop would be worthless without the kmod-wl that I generate with the Broadcom file. I sometimes get messages about tainting the kernel, but what good is a laptop without wireless? —[Gregory Pittman][6]
* * *
I can't live without Bluetooth. Without it, my mouse, keyboard, speakers, and headset would be doorstops. —[Gary Smith][7]
* * *
I'm going to go out on a limb and say _all of them_. Seriously, we've gotten to the point where I grab a random piece of hardware, plug it in, and it just works.
* USB serial adapter just works
* Video card just works (though maybe not at its best)
* Network card just works
* Sound card just works
It's tough not to be utterly impressed with the broad scope of the driver work that all the modules bring to the whole. I remember the bad old days when we used to yell out xrandr magic strings to make projectors work, and now—yeah, it's a genuine rarity when stuff doesn't (mostly) just work.
If I had to nail it down to one, though, it'd be raid6. —[John 'Warthog9' Hawley][8]
* * *
I'm going to go back to the late 1990s for this one. I was a Unix systems administrator (and double duty as IS manager) for a small company. Our tape backup system died, and because of "small company" limited budgets, we didn't have a rush replacement or onsite repair on it. So we had to send it in for repair.
During those two weeks, we didn't have a way to make tape backups. No systems administrator wants to be in that position.
But then I remembered reading the [Floppy Tape How-to][9], and we happened to have a tower PC we'd just replaced that had a floppy tape drive.
So I reinstalled it with Linux, set up the **ftape** kernel driver module, ran a few backup/recovery tests, then ran our most important backups to QIC tapes. For those two weeks, we relied on **ftape** backups of critical data.
So to the unsung hero out there who made floppy tape drives work on 1990s Linux, you are awesome! —[Jim Hall][10]
* * *
Well, that's easy. It's the kvm kernel modules. On a personal front, I cannot imagine doing my day-to-day work without VMs. I'd like to believe that's the case with most of us. The kvm modules also play a big part in making Linux central to the cloud strategy. —[Gaurav Kamathe][11]
* * *
For me, it's dm-crypt, which is used for LUKS. See:
* <https://www.redhat.com/sysadmin/disk-encryption-luks>
* <https://manpages.debian.org/unstable/cryptsetup-bin/cryptsetup.8.en.html>
It's fantastic to know others cannot see what's on your disk, for example, if you lose your notebook or it gets stolen. —[Maximilian Kolb][12]
* * *
For cryptography basics, it's hard to beat the crypto module and its C API, which is straightforward.
For day-to-day life, is there anything more valuable than the plug-and-play that Bluetooth provides? —[Marty Kalin][13]
* * *
Share with us in the comments: What Linux kernel module can you not live without?
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/linux-kernel-module
作者:[Jen Wike Huger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jen-wike
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://opensource.com/users/czanik
[3]: https://en.wikipedia.org/wiki/Zram
[4]: https://opensource.com/users/hansic99
[5]: https://opensource.com/users/joel2001k
[6]: https://opensource.com/users/greg-p
[7]: https://opensource.com/users/greptile
[8]: https://opensource.com/users/warthog9
[9]: https://tldp.org/HOWTO/Ftape-HOWTO.html
[10]: https://opensource.com/users/jim-hall
[11]: https://opensource.com/users/gkamathe
[12]: https://opensource.com/users/kolb
[13]: https://opensource.com/users/mkalindepauledu

View File

@ -0,0 +1,184 @@
[#]: subject: "Parse command-line options in Groovy"
[#]: via: "https://opensource.com/article/21/8/parsing-command-options-groovy"
[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Parse command-line options in Groovy
======
Learn to add options to your Groovy applications.
![Woman sitting in front of her computer][1]
A recent article provided an [introduction to parsing command-line options in Java][2]. Because I really like Groovy, and because Groovy is well suited for scripting, and because it's fun to compare Java and Groovy solutions, I decided to paraphrase Seth's article, but using Groovy.
### Install Groovy
Groovy is based on Java, so it requires a Java installation. Recent and decent versions of both Java and Groovy might be in your Linux distribution's repositories. Alternatively, you can install Groovy by following the instructions on [groovy-lang.org][3].
A nice alternative for Linux users is [SDKMan][4], which can be used to get multiple versions of Java, Groovy, and many other related tools. For this article, I'm using my distro's OpenJDK11 release and SDKMan's latest Groovy release.
### Parsing command-line options in Groovy
When we create a script—a kind of short, often informal program—to be run from the command line, we normally follow the practice of passing arguments to the script on the command line. A good example of this is the `ls` command, used to list all the files and subfolders in a given folder, perhaps showing attributes and sorted in reverse order of last modification date, as in:
```
$ ls -lt /home/me
```
This shows the contents of my home folder:
```
total 252
drwxr-xr-x 5 me me 4096 Aug 10 12:23 Downloads
drwx------ 11 me me 4096 Aug 10 08:59 Dropbox
drwxr-xr-x 27 me me 12288 Aug 9 11:58 Pictures
-rw-rw-r-- 1 me me 235 Jul 28 16:22 wb.groovy
drwxr-xr-x 2 me me 4096 Jul 20 22:04 Desktop
drwxrwxr-x 2 me me 4096 Jul 20 15:16 Fixed
drwxr-xr-x 2 me me 16384 Jul 19 08:49 Music
-rw-rw-r-- 1 me me 433 Jul 7 13:24 foo
drwxr-xr-x 6 me me 4096 Jun 29 10:25 Documents
drwxr-xr-x 2 me me 4096 Jun 14 22:15 Templates
-rw-rw-r-- 1 me me 803 Jun 14 11:33 bar
```
Of course, arguments to commands can be handled by inspecting them and deciding what to do in each case; but this ends up being a duplication of effort that can be avoided by using a library designed for that purpose.
Seth's Java article introduces the [Apache Commons CLI library][5], a great API for handling command-line options. In fact, this library is so great that the good people who develop Groovy make it available by default in the Groovy installation. Therefore, once you have Groovy installed, you have access to this library through [**groovy.cli.picocli.CliBuilder**][6], which is already imported for you by default.
Here's a Groovy script that uses this CLI builder to achieve the same results as Seth's Java program:
```
1 def cli = new CliBuilder(usage: 'ho.groovy [-a] -c')
2 cli.with {
3    a longOpt: 'alpha', 'Activate feature alpha'
4    c longOpt: 'config', args:1, argName: 'config', required: true, 'Set config file'
5 }
6 def options = cli.parse(args)
7 if (!options) {
8    return
9 }
10 if (options.a) {
11    println 'Alpha activated'
12 }
13 if (options.c) {
14    println "Config set to ${options.c}"
15 }
```
I've included line numbers here to facilitate the discussion. Save this script without the line numbers in a file called **ho.groovy**.
On line 1, we define the variable **cli** and set it to a new instance of **CliBuilder** with a defined **usage** attribute. This is a string that will be printed if the **usage()** method is called.
On lines 2-5, we use [the **with()** method][7] that Groovy adds to objects, together with the DSL defined by **CliBuilder**, to set up the option definitions.
On line 3, we define the option '**a**', setting its **longOpt** field to '**alpha**' and its description to '**Activate feature alpha**'.
Similarly, on line 4, we define the option '**c**', setting its **longOpt** field to '**config**' and specifying that this option takes one argument whose name is '**config**'. Moreover, this is a **required** option (sounds funny, I know), and its description is '**Set config file**'.
Pausing briefly here for a bit of background, you can read all about these various options at the **CliBuilder** link above. More generally, things written in the form **longOpt: 'alpha'** are Groovy notation for key-value entries to be put in a **Map** instance, which you can read about [here][8]. Each key, in this case, corresponds to a method of the same name provided by the CliBuilder. If you're wondering what's going on with a line like:
```
a longOpt: 'alpha', 'Activate feature alpha'
```
then it may be useful to mention that Groovy allows us to drop parentheses in certain circumstances; so the above is equivalent to:
```
a(longOpt: 'alpha', 'Activate feature alpha')
```
i.e., it's a method call. Moreover, Groovy allows both positional and named parameters, the latter using that key: value syntax.
Onward! On lines 6-9, we call the **parse()** method of the **CliBuilder** instance **cli**, passing **args**, an array of **String** values created by the Groovy run-time and containing the arguments from the command line. This method returns a **Map** of the options, where the keys are the short forms of the predefined options, in this case '**a**' and '**c**'. If the parsing fails, **parse()** emits the **usage** message and a reasonable error message, and returns a null value, so we don't have to use a try-catch block (which one doesn't see as often in Groovy). So here, on line 8, we just return, since all our work is done for us.
On lines 10-12, we check to see if option '_a_' was included on the command line and if it is, print a message saying so.
Similarly, on lines 13-15, we check to see if option '**c**' was included on the command line and if so, print a message showing the argument provided to it.
### Running the command
Let's run the script a few times; first with no arguments:
```
$ groovy ho.groovy
error: Missing required option: c
usage: ho.groovy [-a] -c
 -a,--alpha             Activate feature alpha
 -c,--config <config>   Set config file
$
```
Notice the complaint about missing the required option '**c**'.
Then with the '**c**' option but no argument:
```
$ groovy ho.groovy -c
error: Missing argument for option: c
usage: ho.groovy [-a] -c
 -a,--alpha             Activate feature alpha
 -c,--config <config>   Set config file
$
```
Cool, the **CliBuilder** instance method **parse()** noticed no argument was provided to '**c**'.
Finally, let's try with both options and an argument to '**c**', in their long form:
```
$ groovy ho.groovy --alpha --config bar
Alpha activated
Config set to bar
$
```
Looks good!
Since the idea of the '**c**' option is to provide a config file, we could also tell the **CliBuilder** instance that the type of this argument is File, and it will return that instead of a String. But we'll leave that for another day.
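For the curious, that change might look like the sketch below. The `type:` entry is part of **CliBuilder**'s map-style option DSL and converts the argument before it reaches us, so **options.c** comes back as a **java.io.File**:

```groovy
def cli = new CliBuilder(usage: 'ho.groovy [-a] -c')
cli.with {
    a longOpt: 'alpha', 'Activate feature alpha'
    // type: File makes CliBuilder convert the argument to java.io.File
    c longOpt: 'config', args: 1, argName: 'config', type: File, required: true, 'Set config file'
}
def options = cli.parse(args)
if (options) {
    // options.c is now a File, so File methods are available directly
    println "Config file ${options.c} exists? ${options.c.exists()}"
}
```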
So, there you have it—command-line option parsing in Groovy.
### Groovy resources
The Groovy website has a lot of great documentation. Another great Groovy resource is [Mr. Haki][10], and specifically [this lovely article on CliBuilder][11].
Another great reason to learn Groovy is [Grails][12], a wonderfully productive full-stack web framework built on top of excellent components like Hibernate, Spring Boot, and Micronaut.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/parsing-command-options-groovy
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_3.png?itok=qw2A18BM (Woman sitting in front of her computer)
[2]: https://opensource.com/article/21/8/java-commons-cli
[3]: https://groovy-lang.org/
[4]: https://sdkman.io/
[5]: https://commons.apache.org/proper/commons-cli/
[6]: https://docs.groovy-lang.org/latest/html/gapi/groovy/cli/picocli/CliBuilder.html
[7]: https://objectpartners.com/2014/07/09/groovys-with-and-multiple-assignment/
[8]: https://www.baeldung.com/groovy-maps
[10]: https://blog.mrhaki.com/
[11]: https://blog.mrhaki.com/2009/09/groovy-goodness-parsing-commandline.html
[12]: https://grails.org/

View File

@ -0,0 +1,138 @@
[#]: subject: "Position text on your screen in Linux with ncurses"
[#]: via: "https://opensource.com/article/21/8/ncurses-linux"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Position text on your screen in Linux with ncurses
======
Use ncurses in Linux to place text at specific locations on the screen
and enable more user-friendly interfaces.
![Person using a laptop][1]
Most Linux utilities just scroll text from the bottom of the screen. But what if you wanted to position text on the screen, such as for a game or a data display? That's where **ncurses** comes in.
**curses** is an old Unix library that supports cursor control on a text terminal screen. The name _curses_ comes from the term _cursor control_. Years later, others wrote an improved version of **curses** to add new features, called _new curses_ or **ncurses**. You can find **ncurses** in every modern Linux distribution, although the development libraries, header files, and documentation may not be installed by default. For example, on Fedora, you will need to install the **ncurses-devel** package with this command:
```
$ sudo dnf install ncurses-devel
```
### Using ncurses in a program
To directly address the screen, you'll first need to initialize the **ncurses** library. Most programs will do that with these three lines:
* initscr(); Initialize the screen and the **ncurses** code
* cbreak(); Disable buffering and make typed input immediately available
* noecho(); Turn off echo, so user input is not displayed to the screen
These functions are defined in the **curses.h** header file, which you'll need to include in your program with:
```
#include <curses.h>
```
After initializing the terminal, you're free to use any of the **ncurses** functions, some of which we'll explore in a sample program.
When you're done with **ncurses** and want to go back to regular terminal mode, use **endwin();** to reset everything. This command resets any screen colors, moves the cursor to the lower-left of the screen, and makes the cursor visible. You usually do this right before exiting the program.
### Addressing the screen
The first thing to know about **ncurses** is that screen coordinates are _row,col_, and start in the upper-left at 0,0. **ncurses** defines two global variables to help you identify the screen size: LINES is the number of lines on the screen, and COLS is the number of columns. The bottom-right position is LINES-1,COLS-1.
For example, if you wanted to move the cursor to line 10 and column 30, you could use the move function with those coordinates:
```
move(10, 30);
```
Any text you display after that will start at that screen location. To display a single character, use the **addch(c)** function with a single character. To display a string, use **addstr(s)** with your string. For formatted output that's similar to **printf**, use **printw(fmt, …)** with the usual options.
Moving to a screen location and displaying text is such a common thing that **ncurses** provides a shortcut to do both at once. The **mvaddch(row, col, c)** function will display a character at screen location _row,col_. And the **mvaddstr(row, col, s)** function will display a string at that location. For a more direct example, using **mvaddstr(10, 30, "Welcome to ncurses");** in a program will display the text "Welcome to ncurses" starting at row 10 and column 30. And the line **mvaddch(0, 0, '+');** will display a single plus sign in the upper-left corner at row 0 and column 0.
Drawing text to the terminal screen can have a performance impact on certain systems, especially on older hardware terminals. So **ncurses** lets you "stack up" a bunch of text to display to the screen, then use the **refresh()** function to make all of those changes visible to the user.
Let's look at a simple example that pulls everything together:
```
#include <curses.h>
int
main()
{
  initscr();
  cbreak();
  noecho();
  mvaddch(0, 0, '+');
  mvaddch(LINES - 1, 0, '-');
  mvaddstr(10, 30, "press any key to quit");
  refresh();
  getch();
  endwin();
}
```
The program starts by initializing the terminal, then prints a plus sign in the upper-left corner, a minus in the lower-left corner, and the text "press any key to quit" at row 10 and column 30. The program gets a single character from the keyboard using the getch() function, then uses **endwin()** to reset the terminal before the program exits completely.
**getch()** is a useful function that you could use for many things. I often use it as a way to pause before I quit the program. And as with most **ncurses** functions, there's also a version of **getch()** called **mvgetch(row, col)** to move to screen position _row,col_ before waiting for a character.
### Compiling with ncurses
If you try to compile that sample program in the usual way, such as `gcc pause.c`, you'll probably get a huge list of errors from the linker. That's because the **ncurses** library is not linked automatically by the GNU C Compiler. Instead, you'll need to link it in with the `-lncurses` command-line option.
```
$ gcc -o pause pause.c -lncurses
```
Running the new program will print a simple "press any key to quit" message that's more or less centered on the screen:
![centered message in a program window][2]
Figure 1: A centered "press any key to quit" message in a program.
### Building better programs with ncurses
Explore the **ncurses** library functions to learn about other ways to display text to the screen. You can find a list of all **ncurses** functions in the **ncurses** manual page (`man ncurses`). It gives a general overview of **ncurses** and provides a table-like list of the different **ncurses** functions, with a reference to the manual page that has full details. For example, **printw** is described in the _curs_printw(3X)_ manual page, which you can view with:
```
$ man 3x curs_printw
```
or just:
```
$ man curs_printw
```
With **ncurses**, you can create more interesting programs. By printing text at specific locations on the screen, you can create games and advanced utilities to run in the terminal.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/ncurses-linux
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://opensource.com/sites/default/files/press-key_0.png

View File

@ -0,0 +1,232 @@
[#]: subject: "How to install only security and bugfixes updates with DNF"
[#]: via: "https://fedoramagazine.org/how-to-install-only-security-and-bugfixes-updates-with-dnf/"
[#]: author: "Mateus Rodrigues Costa https://fedoramagazine.org/author/mateusrodcosta/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to install only security and bugfixes updates with DNF
======
![][1]
Photo by [Scott Webb][2] on [Unsplash][3]
This article explores how to filter the updates available to your Fedora Linux system by type. This way you can choose to, for example, install only security or bug fix updates. The article demos running the _dnf_ commands inside a toolbox instead of on a real Fedora Linux install.
You might also want to read [Use dnf updateinfo to read update changelogs][4] before reading this article.
### Introduction
If you have been managing system updates for Fedora Linux or any other GNU/Linux distro, you might have noticed how, when you run a system update (with _dnf update_, in the case of Fedora Workstation), you usually are not installing only security updates.
Due to how package management in a GNU/Linux distro works, generally (with the exception of software running in a container, under Flatpak, or similar technologies) you are updating every single package, regardless of whether it's "system" software or an "app".
DNF divides updates into three types: "security", "bugfix", and "enhancement". And, as you will see, DNF allows filtering which types you want to operate on.
But, why would you want to update only a subset of packages?
Well, this might depend on how you personally choose to deal with system updates. If you are not comfortable at the moment with updating everything, then restricting the current update to security updates only might be a good choice. You could install bug fix updates as well, and leave enhancements and other types of updates for a future opportunity.
### How to filter security and bug fix updates
Start by creating a Fedora Linux 34 toolbox:
```
toolbox create --distro fedora --release f34 updatefilter-demo
```
Then enter that toolbox:
```
toolbox enter updatefilter-demo
```
From now on, the commands can be run inside the toolbox just as they would be on a real Fedora Linux install.
First, run _dnf check-update_ to see the unfiltered list of packages:
```
$ dnf check-update
audit-libs.x86_64 3.0.5-1.fc34 updates
avahi.x86_64 0.8-14.fc34 updates
avahi-libs.x86_64 0.8-14.fc34 updates
...
vim-minimal.x86_64 2:8.2.3318-1.fc34 updates
xkeyboard-config.noarch 2.33-1.fc34 updates
yum.noarch 4.8.0-1.fc34 updates
```
DNF supports passing the types of updates to operate on as parameters: _security_ for security updates, _bugfix_ for bug fix updates, and _enhancement_ for enhancement updates. These work with commands such as _dnf check-update_, _dnf update_, and _dnf updateinfo_.
For example, this is how you filter the list of available updates by security updates only:
```
$ dnf check-update --security
avahi.x86_64 0.8-14.fc34 updates
avahi-libs.x86_64 0.8-14.fc34 updates
curl.x86_64 7.76.1-7.fc34 updates
...
libgcrypt.x86_64 1.9.3-3.fc34 updates
nettle.x86_64 3.7.3-1.fc34 updates
perl-Encode.x86_64 4:3.12-460.fc34 updates
```
And now same thing but by bug fix updates only:
```
$ dnf check-update --bugfix
audit-libs.x86_64 3.0.5-1.fc34 updates
ca-certificates.noarch 2021.2.50-1.0.fc34 updates
coreutils.x86_64 8.32-30.fc34 updates
...
systemd-pam.x86_64 248.7-1.fc34 updates
systemd-rpm-macros.noarch 248.7-1.fc34 updates
yum.noarch 4.8.0-1.fc34 updates
```
They can even be combined, so you can use two or more of them at the same time. For example, you can filter the list to show both security and bug fix updates:
```
$ dnf check-update --security --bugfix
audit-libs.x86_64 3.0.5-1.fc34 updates
avahi.x86_64 0.8-14.fc34 updates
avahi-libs.x86_64 0.8-14.fc34 updates
...
systemd-pam.x86_64 248.7-1.fc34 updates
systemd-rpm-macros.noarch 248.7-1.fc34 updates
yum.noarch 4.8.0-1.fc34 updates
```
As mentioned, _dnf updateinfo_ also works with this filtering, so you can filter _dnf updateinfo_, _dnf updateinfo list_ and _dnf updateinfo info_. For example, for the list of security updates and their IDs:
```
$ dnf updateinfo list --security
FEDORA-2021-74ebf2f06f Moderate/Sec. avahi-0.8-14.fc34.x86_64
FEDORA-2021-74ebf2f06f Moderate/Sec. avahi-libs-0.8-14.fc34.x86_64
FEDORA-2021-83fdddca0f Moderate/Sec. curl-7.76.1-7.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec. glibc-2.33-20.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec. glibc-common-2.33-20.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec. glibc-minimal-langpack-2.33-20.fc34.x86_64
FEDORA-2021-8b25e4642f Low/Sec. krb5-libs-1.19.1-14.fc34.x86_64
FEDORA-2021-83fdddca0f Moderate/Sec. libcurl-7.76.1-7.fc34.x86_64
FEDORA-2021-31fdc84207 Moderate/Sec. libgcrypt-1.9.3-3.fc34.x86_64
FEDORA-2021-d1fc0b9d32 Moderate/Sec. nettle-3.7.3-1.fc34.x86_64
FEDORA-2021-92e07de1dd Important/Sec. perl-Encode-4:3.12-460.fc34.x86_64
```
If desired, you can install only security updates:
```
# dnf update --security
================================================================================
Package Arch Version Repository Size
================================================================================
Upgrading:
avahi x86_64 0.8-14.fc34 updates 289 k
avahi-libs x86_64 0.8-14.fc34 updates 68 k
curl x86_64 7.76.1-7.fc34 updates 297 k
...
perl-Encode x86_64 4:3.12-460.fc34 updates 1.7 M
Installing weak dependencies:
glibc-langpack-en x86_64 2.33-20.fc34 updates 563 k
Transaction Summary
================================================================================
Install 1 Package
Upgrade 11 Packages
Total download size: 9.7 M
Is this ok [y/N]:
```
Or even to install both security and bug fix updates while ignoring enhancement updates:
```
# dnf update --security --bugfix
================================================================================
Package Arch Version Repo Size
================================================================================
Upgrading:
audit-libs x86_64 3.0.5-1.fc34 updates 116 k
avahi x86_64 0.8-14.fc34 updates 289 k
avahi-libs x86_64 0.8-14.fc34 updates 68 k
...
rpm-plugin-systemd-inhibit x86_64 4.16.1.3-1.fc34 fedora 23 k
shared-mime-info x86_64 2.1-2.fc34 fedora 374 k
sqlite x86_64 3.34.1-2.fc34 fedora 755 k
Transaction Summary
================================================================================
Install 11 Packages
Upgrade 45 Packages
Total download size: 32 M
Is this ok [y/N]:
```
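If you want this security-only behavior on a schedule rather than by hand, the separate _dnf-automatic_ package honors the same classification. A minimal sketch of _/etc/dnf/automatic.conf_, assuming dnf-automatic is installed and its timer enabled with _systemctl enable --now dnf-automatic.timer_:

```ini
# /etc/dnf/automatic.conf (excerpt)
[commands]
# Only consider advisories classified as security updates
upgrade_type = security
# Apply the updates automatically (set to "no" to only download them)
apply_updates = yes
```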
### Install only specific updates
You may also choose to install only the updates with a specific ID, such as _FEDORA-2021-74ebf2f06f_ for avahi, by using _\--advisory_ and specifying the ID:
```
# dnf update --advisory=FEDORA-2021-74ebf2f06f
================================================================================
Package Architecture Version Repository Size
================================================================================
Upgrading:
avahi x86_64 0.8-14.fc34 updates 289 k
avahi-libs x86_64 0.8-14.fc34 updates 68 k
Transaction Summary
================================================================================
Upgrade 2 Packages
Total download size: 356 k
Is this ok [y/N]:
```
Or even multiple updates at once, with _\--advisories_:
```
# dnf update --advisories=FEDORA-2021-74ebf2f06f,FEDORA-2021-83fdddca0f
================================================================================
Package Architecture Version Repository Size
================================================================================
Upgrading:
avahi x86_64 0.8-14.fc34 updates 289 k
avahi-libs x86_64 0.8-14.fc34 updates 68 k
curl x86_64 7.76.1-7.fc34 updates 297 k
libcurl x86_64 7.76.1-7.fc34 updates 284 k
Transaction Summary
================================================================================
Upgrade 4 Packages
Total download size: 937 k
Is this ok [y/N]:
```
### Conclusion
In the end, it all comes down to how you personally prefer to manage your updates. But if you need, for whatever reason, to install only security updates, then these filters will surely come in handy!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-to-install-only-security-and-bugfixes-updates-with-dnf/
作者:[Mateus Rodrigues Costa][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/mateusrodcosta/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/how-to-install-only-security-and-bugfixes-updates-with-dnf-816x345.jpg
[2]: https://unsplash.com/@scottwebb?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/security?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/use-dnf-updateinfo-to-read-update-changelogs/


@ -0,0 +1,143 @@
[#]: subject: "Linux Jargon Buster: What is sudo rm -rf? Why is it Dangerous?"
[#]: via: "https://itsfoss.com/sudo-rm-rf/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux Jargon Buster: What is sudo rm -rf? Why is it Dangerous?
======
When you are new to Linux, you'll often come across advice to never run `sudo rm -rf /`. There are many memes in the Linux world around `sudo rm -rf`.
![][1]
But it seems that there is some confusion around it. In the tutorial on [cleaning Ubuntu to make free space][2], I advised running some commands that involved sudo and rm -rf. An It's FOSS reader asked me why I was advising that if sudo rm -rf is a dangerous Linux command that should not be run.
And thus I thought of writing this chapter of the Linux Jargon Buster series to clear up the misconceptions.
### sudo rm -rf: what does it do?
Let's learn things in steps.
The rm command is used for [removing files and directories in Linux command line][3].
```
abhishek@itsfoss:~$ rm agatha
abhishek@itsfoss:~$
```
But some files will not be removed immediately because of read-only [file permissions][4]. They have to be force-deleted with the `-f` option.
```
abhishek@itsfoss:~$ rm books
rm: remove write-protected regular file 'books'? y
abhishek@itsfoss:~$ rm -f christie
abhishek@itsfoss:~$
```
However, the rm command cannot be used to delete directories (folders) directly. You have to use the recursive option `-r` with the rm command.
```
abhishek@itsfoss:~$ rm new_dir
rm: cannot remove 'new_dir': Is a directory
```
And thus, ultimately, the rm -rf command means: recursively force-delete the given directory.
```
abhishek@itsfoss:~$ rm -r new_dir
rm: remove write-protected regular file 'new_dir/books'? ^C
abhishek@itsfoss:~$ rm -rf new_dir
abhishek@itsfoss:~$
```
Here's a screenshot of all the above commands:
![Example explaining rm command][5]
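For those who script deletions in Python instead of calling rm directly, the recursive force-delete behaves much like `shutil.rmtree`. This is a small illustrative sketch of my own (the `new_dir`/`books` names simply echo the example above), not something from the original article:

```python
import os
import shutil
import tempfile

# Recreate a tiny version of the example: a directory with a file inside.
base = tempfile.mkdtemp()
target = os.path.join(base, "new_dir")
os.makedirs(target)
with open(os.path.join(target, "books"), "w") as f:
    f.write("agatha christie")

# shutil.rmtree deletes the directory and everything in it, recursively,
# much like `rm -rf new_dir`: no prompt, no undo.
shutil.rmtree(target)
print(os.path.exists(target))  # False
```

Like rm -rf, `rmtree` gives you no confirmation step, which is exactly why you should double-check the path you pass to it.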
If you add sudo to the rm -rf command, you are deleting files with root power. That means you could delete system files owned by [root user][6].
### So, sudo rm -rf is a dangerous Linux command?
Well, any command that deletes something could be dangerous if you are not sure what you are deleting.
Consider the **rm -rf command** a knife. Is a knife dangerous? Possibly. If you cut vegetables with the knife, it's good. If you cut your fingers with the knife, it is bad, of course.
The same goes for the rm -rf command. It is not dangerous in itself; it is used for deleting files, after all. But if you use it to delete important files unknowingly, then it is a problem.
Now coming to sudo rm -rf /.
You know that with sudo, you run a command as root, which allows you to make any changes to the system.
/ is the symbol for the root directory. /var means the var directory under root, and /var/log/apt means the apt directory under log, under var, under root.
![Linux directory hierarchy representation][7]
As per [Linux directory hierarchy][8], everything in a Linux file system starts at root. If you delete root, you are basically removing all the files of your system.
And this is why it is advised not to run the `sudo rm -rf /` command: you'll wipe out your entire Linux system.
Please note that in some cases, you could be running a command like sudo rm -rf /var/log/apt, which could be fine. Again, you have to pay attention to what you are deleting, the same as you have to pay attention to what you are cutting with a knife.
### I play with danger: what if I run sudo rm -rf / to see what happens?
Most Linux distributions provide a failsafe protection against accidentally deleting the root directory.
```
abhishek@itsfoss:~$ sudo rm -rf /
[sudo] password for abhishek:
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe
```
I mean, it is human to make typos, and if you accidentally typed “/ var/log/apt” instead of “/var/log/apt” (a space between / and var, meaning you are providing the / and var directories for deletion), you'll be deleting the root directory.
![Pay attention when using sudo rm -rf][9]
That's quite good. Your Linux system takes care of such accidents.
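If you write your own cleanup tooling, you can imitate this preserve-root failsafe. Below is a hypothetical Python sketch; the `safe_rmtree` name and its behavior are my own invention, not a coreutils or standard-library API:

```python
import os
import shutil

def safe_rmtree(path, no_preserve_root=False):
    """Recursively delete path, but refuse the root directory unless the
    caller explicitly overrides the failsafe, as rm's --preserve-root does."""
    resolved = os.path.realpath(path)
    if resolved == os.path.sep and not no_preserve_root:
        raise ValueError("it is dangerous to operate recursively on '/'")
    shutil.rmtree(resolved)

# The guard fires before anything is touched.
try:
    safe_rmtree("/")
except ValueError as err:
    print(err)  # it is dangerous to operate recursively on '/'
```

Resolving the path first means sneaky spellings that point back at the root directory are also caught before any deletion starts.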
Now, what if you are hell-bent on destroying your system with sudo rm -rf /? You'll have to use the `--no-preserve-root` option with it.
No, please do not do that on your own. Let me show it to you.
So, I have elementary OS running in a virtual machine. I run `sudo rm -rf / --no-preserve-root` and you can see the lights going out literally in the video below (around 1 minute).
[Subscribe to our YouTube channel for more Linux videos][10]
### Clear or still confused?
Linux has an active community where most people try to help new users. Most people, because there are some evil trolls lurking to mess with new users. They will often suggest running rm -rf / for the simplest of problems faced by beginners. These idiots seem to get some sort of supremacist satisfaction from such evil acts. I ban them immediately from the forums and groups I administer.
I hope this article made things clearer for you. It's possible that you still have some confusion, especially because it involves root, file permissions, and other things new users might not be familiar with. If that's the case, please let me know your doubts in the comment section and I'll try to clear them up.
In the end, remember: Don't drink and root. Stay safe while running your Linux system :)
![][11]
--------------------------------------------------------------------------------
via: https://itsfoss.com/sudo-rm-rf/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/04/sudo-rm-rf.gif?resize=400%2C225&ssl=1
[2]: https://itsfoss.com/free-up-space-ubuntu-linux/
[3]: https://linuxhandbook.com/remove-files-directories/
[4]: https://linuxhandbook.com/linux-file-permissions/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/rm-rf-command-example-800x487.png?resize=800%2C487&ssl=1
[6]: https://itsfoss.com/root-user-ubuntu/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/linux-directory-structure.png?resize=800%2C400&ssl=1
[8]: https://linuxhandbook.com/linux-directory-structure/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/sudo-rm-rf-example.png?resize=798%2C346&ssl=1
[10]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/dont-drink-and-root.jpg?resize=800%2C450&ssl=1


@ -0,0 +1,113 @@
[#]: subject: "Print from anywhere with CUPS on Linux"
[#]: via: "https://opensource.com/article/21/8/share-printer-cups"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Print from anywhere with CUPS on Linux
======
Share your printer with the Common Unix Printing System (CUPS).
![Two hands holding a resume with computer, clock, and desk chair ][1]
I have a printer in my office, but sometimes I work on my laptop in another room of the house. This isn't a problem for me for two reasons. First of all, I rarely print anything on paper and have gone months without using the printer. Secondly, though, I've set the printer to be shared over my home network, so I can send files to print from anywhere in the house. I didn't need any special equipment for this setup. It's accomplished with just my usual Linux computer and the Common Unix Printing System (CUPS).
### Installing CUPS on Linux
If you're running Linux, BSD, or macOS, then you probably already have CUPS installed. CUPS has been the open source solution to Unix printing since 1997. Apple relied on it so heavily for their fledgling Unix-based OS X that they ended up buying it in 2007 to ensure its continued development and maintenance.
If your system doesn't already have CUPS installed, you can install it with your package manager. For example, on Fedora, Mageia, or CentOS:
```
$ sudo dnf install cups
```
On Debian, Linux Mint, and similar:
```
$ sudo apt install cups
```
### Accessing CUPS on Linux and Mac
To access CUPS, open a web browser and navigate to `localhost:631`, which tells your computer to open whatever's on port 631 on itself (your computer always [refers to itself as localhost][2]).
Your web browser opens a page providing you access to your system's printer settings. From here, you can add printers, modify printer defaults, monitor queued jobs, and allow printers to be shared over your local network.
![CUPS web user interface][3]
Figure 1: The CUPS web user interface.
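Before browsing to `localhost:631`, you can confirm that something is actually listening on that port with a small socket probe. This Python sketch is my own addition, not part of CUPS:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If cupsd is running, this should print True and localhost:631 will load.
print(port_open("localhost", 631))
```

If this prints False, check that the CUPS service is started before troubleshooting anything in the browser.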
### Configuring a printer with CUPS
You can either add a new printer or modify an existing one from within the CUPS interface. Modifying a printer involves the exact same pages as adding a new one, except that when you're adding a printer, you make new choices, and when you're modifying a printer, you confirm or change existing ones.
First, click on the **Administration** tab, and then the **Add Printer** button.
If you're only modifying an existing printer, click **Manage Printers** instead, and then choose the printer you want to change. Choose **Modify Printer** from the **Administration** drop-down menu.
Regardless of whether you're modifying or adding, you must enter administrative authorization before CUPS allows you to continue. You can either log in as root, if that's available to you, or as your normal user identity, as long as you have `sudo` privileges.
Next, you're presented with a list of printer interfaces and protocols that you can use for a printer. If your printer is plugged directly into your computer and is on, it's listed as a _Local Printer_. If the printer has networking built into it and is attached to a switch or router on your network, you can usually use the Internet Printing Protocol (ipp) to access it (you may have to look at your router to determine the IP address of the printer, but read your printer's documentation for details). If the printer is a Hewlett-Packard, you may also be able to use HPLIP to access it.
Use whatever protocol makes sense for your physical setup. If you're unsure of what to use, you can try one, attempt to print a test page, and then try a different one in the case of failure.
The next screen asks for human-friendly details about the printer. This is mostly for your reference. Enter a name for the printer that makes sense (I usually use the model number, but large organizations sometimes name their printers after things like fictional starships or capital cities), a description, and the location.
You may also choose to share the printer with other computers on your network.
![CUPS web UI to share printers][4]
Figure 2: CUPS web user interface to share printers.
If sharing is not currently enabled, click the checkbox to enable sharing.
### Drivers
On the next screen, you must set your printer driver. Open source drivers for printers can often be found on [openprinting.org][5]. There's a good chance you already have a valid driver, as long as you have the `gutenprint` package installed, or have installed drivers bundled with the printer. If the printer is a PostScript printer (many laser printers are), you may only need a PPD file from [openprinting.org][5] rather than a driver.
Assuming you have drivers installed, you can choose your printer's make (manufacturer) for a list of available drivers. Select the appropriate driver and continue.
### Connecting to a shared printer
Now that you have successfully installed and configured your printer, you can connect to it from any other computer on your network. For example, suppose you have a laptop called **client** that you use around the house. You want to add your shared printer to it.
On the GNOME and Plasma desktops, you can add a printer from the **Printer** screen of **Settings:**
* If you have your printer connected to a computer, then you enter the IP address of the _computer_ (because the printer is accessible through its host).
* If you have your printer connected to a switch or router, then enter the IP address of the printer itself.
On macOS, printer settings can be found in **System Preferences**.
Alternately, you can keep using the CUPS interface on your client computer. The process to access CUPS is the same: ensure CUPS is installed, open a web browser, and navigate to `localhost:631`.
Once you've accessed the CUPS web interface, select the **Administration** tab. Click the **Find New Printers** button in the **Printers** section, and then add the shared printer to your network. You can also set the printer's IP address manually in CUPS by going through the normal **Add Printer** process.
### Print from anywhere
It's the 21st century! Put the USB thumb drive down, stop emailing yourself files to print from another computer, and make your printer available to your home network. It's surprisingly easy and supremely convenient. And best of all, you'll look like a networking wizard to all of your housemates!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/share-printer-cups
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI (Two hands holding a resume with computer, clock, and desk chair )
[2]: https://opensource.com/article/21/4/network-management
[3]: https://opensource.com/sites/default/files/cups-web-ui.jpeg
[4]: https://opensource.com/sites/default/files/cups-web-ui-share_0.jpeg
[5]: http://openprinting.org


@ -0,0 +1,164 @@
[#]: subject: "Write a guessing game in ncurses on Linux"
[#]: via: "https://opensource.com/article/21/8/guess-number-game-ncurses-linux"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Write a guessing game in ncurses on Linux
======
Use the flexibility and power of ncurses to create a guess-the-number
game on Linux.
![question mark in chalk][1]
In my [last article][2], I gave a brief introduction to using the **ncurses** library to write text-mode interactive applications in C. With **ncurses**, we can control where and how text gets displayed on the terminal. If you explore the **ncurses** library functions by reading the manual pages, you'll find there are a ton of different ways to display text, including bold text, colors, blinking text, windows, borders, graphic characters, and other features to make your application stand out.
If you'd like to explore a more advanced program that demonstrates a few of these interesting features, here's a simple “guess the number” game, updated to use **ncurses**. The program picks a random number in a range, then asks the user to make repeated guesses until they find the secret number. As the user makes their guess, the program lets them know if the guess was too low or too high.
Note that this program limits the possible numbers from 0 to 7. Keeping the values to a limited range of single-digit numbers makes it easier to use **getch()** to read a single number from the user. I also used the **getrandom** kernel system call to generate random bits, masked with the number 7 to pick a random number from 0 (binary 0000) to 7 (binary 0111).
```
#include <curses.h>
#include <string.h>          /* for strlen */
#include <sys/random.h>      /* for getrandom */
int
random0_7()
{
   int num;
   getrandom(&num, sizeof(int), GRND_NONBLOCK);
   return (num & 7); /* from 0000 to 0111 */
}
int
read_guess()
{
  int ch;
  do {
    ch = getch();
  } while ((ch < '0') || (ch > '7'));
  return (ch - '0'); /* turn into a number */
}
```
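The bit-masking trick in `random0_7()` is language-independent. Here is the same idea sketched in Python as a quick sanity check; this is my own illustration, not part of the article:

```python
import os

def random0_7():
    """Mask one random byte with 7 (binary 0111) so only the low three
    bits survive, yielding a value from 0 to 7 -- the same trick as the C code."""
    num = os.urandom(1)[0]
    return num & 7

samples = [random0_7() for _ in range(1000)]
print(min(samples), max(samples))  # always within the 0..7 range
```

Because 8 is a power of two, masking with 7 gives a uniform result; the trick would bias the distribution if the range were not a power of two.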
By using **ncurses**, we can add some visual interest. Lets add functions to display important text at the top of the screen and a message line to display status information at the bottom of the screen.
```
void
print_header(const char *text)
{
  move(0, 0);
  clrtoeol();
  attron(A_BOLD);
  mvaddstr(0, (COLS / 2) - (strlen(text) / 2), text);
  attroff(A_BOLD);
  refresh();
}
void
print_status(const char *text)
{
  move(LINES - 1, 0);
  clrtoeol();
 
  attron(A_REVERSE);
  mvaddstr(LINES - 1, 0, text);
  attroff(A_REVERSE);
  refresh();
}
```
With these functions, we can construct the main part of our number-guessing game. First, the program sets up the terminal for **ncurses**, then picks a random number from 0 to 7. After displaying a number scale, the program then enters a loop to ask the user for their guess.
As the user makes their guess, the program provides visual feedback. If the guess is too low, the program prints a left square bracket under the number on the screen. If the guess is too high, the game prints a right square bracket. This helps the user to narrow their choice until they guess the correct number.
```
int
main()
{
  int number, guess;
  initscr();
  cbreak();
  noecho();
  number = random0_7();
  mvprintw(1, COLS - 1, "%d", number); /* debugging */
  print_header("Guess the number 0-7");
  mvaddstr(9, (COLS / 2) - 7, "0 1 2 3 4 5 6 7");
  print_status("Make a guess...");
  do {
    guess = read_guess();
    move(10, (COLS / 2) - 7 + (guess * 2));
    if (guess < number) {
      addch('[');
      print_status("Too low");
    }
    else if (guess > number) {
      addch(']');
      print_status("Too high");
    }
    else {
      addch('^');
    }
  } while (guess != number);
  print_header("That's right!");
  print_status("Press any key to quit");
  getch();
  endwin();
  return 0;
}
```
Copy this program and compile it for yourself to try it out. Don't forget that you need to tell GCC to link with the **ncurses** library:
```
$ gcc -o guess guess.c -lncurses
```
I've left the debugging line in there, so you can see the secret number near the upper-right corner of the screen:
![guess number game interface][3]
Figure 1: Guess the number game. Notice the secret number in the upper right.
### Get yourself going with ncurses
This program uses a bunch of other features of **ncurses** that you can use as a starting point. For example, the print_header function prints a message in bold text centered at the top of the screen, and the print_status function prints a message in reverse text at the bottom-left of the screen. Use this to help you get started with **ncurses** programming.
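As one small starting exercise, the guess-feedback decision from the main loop can be pulled out and tested without any ncurses at all; this Python sketch (my own, with invented names) mirrors that logic:

```python
def feedback(guess, number):
    """Mirror the game loop's decision: report the guess relative to the secret."""
    if guess < number:
        return "Too low"
    if guess > number:
        return "Too high"
    return "That's right!"

print(feedback(3, 5))  # Too low
print(feedback(6, 5))  # Too high
print(feedback(5, 5))  # That's right!
```

Separating the decision from the screen-drawing code like this makes the logic easy to verify before wiring it into the **ncurses** display calls.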
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/guess-number-game-ncurses-linux
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/question-mark_chalkboard.jpg?itok=DaG4tje9 (question mark in chalk)
[2]: https://opensource.com/article/21/8/ncurses-linux
[3]: https://opensource.com/sites/default/files/guessnumber07.png


@ -0,0 +1,99 @@
[#]: subject: "Zulip: An Interesting Open-Source Alternative to Slack"
[#]: via: "https://itsfoss.com/zulip/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Zulip: An Interesting Open-Source Alternative to Slack
======
_**Brief:** Zulip is an open-source collaboration platform that pitches itself as a better replacement for Slack. Let us take a closer look._
Messaging and collaboration platforms make a big difference when it comes to your work.
While there are several options available, Slack is a popular one used by many organizations. But, what about an open-source alternative to Slack that you can self-host?
Zulip is one such software.
### Zulip: Open Source Collaboration Messaging App
![][1]
If you want to explore, I must mention that there are more [open-source alternatives to Slack][2] out there.
Here, I focus on Zulip.
Zulip is a free and open-source messaging application with paid hosted options and the ability to self-host.
It aims to provide a similar experience to Slack while striving to help you improve the effectiveness of conversations using topics.
In contrast to channels in Slack, Zulip chat adds topics (which are like tags) to quickly filter through the conversations that matter to you.
### Features of Zulip
![][3]
You get most of the essential features with Zulip. To list the key highlights, you can find:
* Markdown support
* Topics for channels
* Drag and drop file support
* Code blocks
* GitHub integration to track issues
* Email notification support
* Self-host option
* Message editing
* GIPHY integration
* Video calls with Zoom, Jitsi, or BigBlueButton
In addition to the features mentioned, you should expect the basic options that you usually get with Slack and others.
Also, you can integrate it with Matrix and IRC if you want.
![][4]
In my brief test usage, the user interface is good enough for effective communication. However, I failed to find any dark mode or the ability to change a theme.
It looks more straightforward than Slack, which could improve the user-experience side of things.
### Install Zulip in Linux
Zulip is available as an AppImage file from its official website. You may refer to our guide on [using AppImage in Linux][5] in case you need help.
It is also available as a snap package. So, you can utilize either of them for any Linux distro.
You can also install it through the terminal for Ubuntu/Debian-based distros using APT. Take a look at its [official instructions][6] if you want that.
Zulip is available for Windows, Mac, and Linux. You should also find it available for Android and iOS mobile phones.
[Zulip][7]
Considering that you can use Zulip on the web, desktop, and smartphones, it is a suitable replacement for Slack.
_Have you tried it yet? What messaging platform do you use to collaborate for work? Feel free to share your thoughts in the comments._
--------------------------------------------------------------------------------
via: https://itsfoss.com/zulip/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/zulip-chat-new.png?resize=800%2C551&ssl=1
[2]: https://itsfoss.com/open-source-slack-alternative/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/zulip-chat-screenshot.png?resize=800%2C550&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/zulip-settings.png?resize=800%2C546&ssl=1
[5]: https://itsfoss.com/use-appimage-linux/
[6]: https://zulip.com/help/desktop-app-install-guide
[7]: https://zulip.com/


@ -1,177 +0,0 @@
[#]: collector: "lujun9972"
[#]: translator: "fisherue"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "5 ways to improve your Bash scripts"
[#]: via: "https://opensource.com/article/20/1/improve-bash-scripts"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
5 ways to improve your Bash scripts
======
Making good use of Bash scripts can help you accomplish some very challenging tasks.
![A person working.][1]
A system administrator often writes Bash scripts, some short and some quite lengthy, to accomplish various tasks.
Have you ever looked at an installation script provided by a software vendor? To account for different user configurations and ensure the install completes smoothly, these scripts often include many functions and logical branches. Over the years, I have collected some tips for improving my scripts, and I share a few of them here in the hope that they are useful to others. Here is a set of short script examples to serve as samples.
### My first attempts
When I start writing a script, it is usually just a set of command lines, typically calls to standard commands that perform a task such as updating web content, which saves time. One of those tasks was extracting files into the home directory of an Apache web server; my initial script looked something like this:
```
cp january_schedule.tar.gz /usr/apache/home/calendar/
cd /usr/apache/home/calendar/
tar zvxf january_schedule.tar.gz
```
This saved me time and reduced the amount of typing. Over time, I learned other techniques that let me use Bash scripts for harder jobs, such as creating software packages, installing software, and backing up file systems.
### 1\. Conditionals
As in many other programming languages, the conditional is a powerful and common feature of scripting. A conditional provides a computer program with logic, and many of my examples are based on conditional logic.
The basic form is the `if` conditional. By determining whether a particular condition is met, you can control which section of the script runs. For instance, to check whether Java is installed, you can test for the existence of a Java library directory; if the directory is found, add its path to the executable path so that Java applications can be invoked.
```
if [ -d "$JAVA_HOME/bin" ] ; then
    PATH="$JAVA_HOME/bin:$PATH"
```
### 2\. Limit execution
You might want to allow only specific users to execute a script. Besides the permission mechanisms Linux provides, such as permissions for users and groups and SELinux protections, you can add logic inside the script itself. A typical case is ensuring that only the owner of a web application can run its initialization script. You can even restrict a script to the root user. This can be done with conditional logic in the script, helped by a couple of environment variables Linux provides: **$USER**, which holds the user name, and **$UID**, which holds the user ID. When a script runs, the UID of the executing user is stored in the **$UID** variable.
#### User name check
In the first example, I specify that in a multiuser environment, only the user jboss1 may run the script. The `if` condition essentially asks, "Is the user running this script not jboss1?" If so, the first `echo` statement runs, followed by **exit 1**, which terminates the script.
```
if [ "$USER" != 'jboss1' ]; then
     echo "Sorry, this script must be run as JBOSS1!"
     exit 1
fi
echo "continue script"
```
#### Root user check
The next example requires the root user to execute the script. Since root's user ID (UID) is 0, the condition uses the greater-than operator (**-gt**) to block any user whose UID is greater than 0 from running the script.
```
if [ "$UID" -gt 0 ]; then
     echo "Sorry, this script must be run as ROOT!"
     exit 1
fi
echo "continue script"
```
### 3\. Use arguments
Like executable programs, scripts can accept arguments as options; here are a few examples. Before diving in, I want to point out that writing a good program means not only making it do what we want, but also making sure it does not do what we don't want. I like my scripts to do nothing destructive when they are run without enough information. So the first step is to confirm that an argument was supplied on the command line, by checking whether **$#**, the number of arguments, equals 0; if it does (meaning no arguments were provided), the script terminates immediately.
```
if [ $# -eq 0 ]; then
    echo "No arguments provided"
    exit 1
fi
echo "arguments found: $#"
```
#### Multiple arguments
You can pass more than one argument to a script. The script refers to them through internal variables named with increasing non-negative integers: **$1**, **$2**, **$3**, and so on. I will just extend the previous example to print the first three arguments the user provided. Obviously, responding to every argument would require more logic, but this example simply demonstrates how arguments are used.
```
echo $1 $2 $3
```
While we are discussing these argument variables, you might wonder, "Why does the numbering skip **$0** (and start at **$1**)?"
Yes, there is a reason for that. The variable **$0** does exist and is very useful: it holds the name of the script being executed.
```
echo $0
```
One important reason to have a variable holding the program's name during execution is that you can include that name in the log files the program generates; the simplest way is probably to call an `echo` statement.
```
echo test >> $0.log
```
Of course, you may want to add some code to make sure the log file is stored in the path you expect and that its name includes the information you find useful.
### 4\. User input
Another useful feature of scripts is the ability to accept input during execution; the simplest case is letting the user enter some information.
```
echo "enter a word please:"
 read word
 echo $word
```
You can also let the user make a choice while the program is running.
```
read -p "Install Software ?? [Y/n]: " answ
 if [ "$answ" == 'n' ]; then
   exit 1
 fi
   echo "Installation starting..."
```
### 5\. Exit on error
A few years ago, I wrote a script to install the latest version of the Java Development Kit (JDK) on my computer. The script extracts the JDK archive into a specified directory, creates and updates some symbolic links, and adjusts settings to tell the system to use the new version. If the extraction fails, carrying on with the later steps would leave the Java installation broken and unusable, so in that case the script must stop: if the extraction did not succeed, the subsequent update operations should not be performed. The following lines accomplish this.
```
tar kxzmf jdk-8u221-linux-x64.tar.gz -C /jdk --checkpoint=.500; ec=$?
if [ $ec -ne 0 ]; then
     echo "Installation failed - exiting."
     exit 1
fi
```
The following one-liner gives a quick demonstration of the **$?** variable.
```
ls T; ec=$?; echo $ec
```
First create a file named **T** with the **touch T** command, then run this line; the value of the variable **ec** will be 0. Then delete the file with **rm T** and run the example line again; **ec** will now be 2, because the file T no longer exists and the **ls** command reports an error, whose return value is 2.
You can use this error status in your conditional logic, with tests like the ones I used earlier, to make the script perform whatever action is needed.
### Conclusion
We might think that a high-level programming language such as Python, C, or Java is required for complex work, but that is not necessarily so. Scripting languages are powerful as well and can accomplish similar tasks. There is a lot to learn to make the most of scripting; I hope the few examples here give you a sense of what scripting can do.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/improve-bash-scripts
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[fisherue](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl "工作者图片"


@ -0,0 +1,267 @@
[#]: collector: (lujun9972)
[#]: translator: (chunibyo-wly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Deploy a deep learning model on Kubernetes)
[#]: via: (https://opensource.com/article/20/9/deep-learning-model-kubernetes)
[#]: author: (Chaimaa Zyani https://opensource.com/users/chaimaa)
Deploy a deep learning model on Kubernetes
======
Learn how to deploy, scale, and manage a deep learning model for image recognition with Kubermatic Kubernetes Platform.
![Brain on a computer screen][1]
As enterprises increase their use of artificial intelligence (AI), machine learning (ML), and deep learning (DL), a critical question arises: how can they scale and industrialize ML development? These discussions often focus on the ML model itself; however, the model is only one piece of the complete solution. To reach production-grade application and scale, the model-development process must include a repeatable workflow that accounts for the critical activities before and after development and makes the model publicly deployable.
This article demonstrates how to deploy, scale, and manage a deep learning model that serves image-recognition predictions, using the [Kubermatic Kubernetes Platform][2].
Kubermatic Kubernetes Platform is a production-grade, open source Kubernetes cluster-management tool that offers the flexibility and automation to integrate with ML/DL workflows, with full cluster lifecycle management.
### Get started
This example deploys a deep learning model for image recognition. It uses the [CIFAR-10][3] dataset, which consists of 60,000 32x32 color images in 10 classes, together with the [Gluon][4] library from [Apache MXNet][5] and NVIDIA GPUs to accelerate the computation. If you want to use a pretrained model on the CIFAR-10 dataset, check out the [getting started guide][6].
The model was trained over 200 epochs on samples from the training set; as long as the training error keeps decreasing slowly, the model will not overfit. The chart below shows the training process:
![Deep learning model training loss][7]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
After training, the parameters the model learned must be saved so they can be loaded later:
```python
file_name = "net.params"
net.save_parameters(file_name)
```
Once your model is trained, wrap it with a Flask server. The program below shows how to receive an image in the request as a parameter and return the model's prediction in the response:
```python
from gluoncv.model_zoo import get_model
import matplotlib.pyplot as plt
from mxnet import gluon, nd, image
from mxnet.gluon.data.vision import transforms
from gluoncv import utils
from PIL import Image
import io
import flask
app = flask.Flask(__name__)
@app.route("/predict",methods=["POST"])
def predict():
    if flask.request.method == "POST":
        if flask.request.files.get("img"):
           img = Image.open(io.BytesIO(flask.request.files["img"].read()))
            transform_fn = transforms.Compose([
            transforms.Resize(32),
            transforms.CenterCrop(32),
            transforms.ToTensor(),
            transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])])
            img = transform_fn(nd.array(img))
            net = get_model('cifar_resnet20_v1', classes=10)
            net.load_parameters('net.params')
            pred = net(img.expand_dims(axis=0))
            class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                       'dog', 'frog', 'horse', 'ship', 'truck']
            ind = nd.argmax(pred, axis=1).astype('int')
            prediction = 'The input picture is classified as [%s], with probability %.3f.' % (
                class_names[ind.asscalar()], nd.softmax(pred)[0][ind].asscalar())
    return prediction
if __name__ == '__main__':
   app.run(host='0.0.0.0')
```
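服务启动后,客户端只需向 `/predict` 发送一个带 `img` 文件字段的 multipart/form-data POST 请求。下面是一个只用 Python 标准库构造这种请求的示意(其中的 IP 地址是假设值,需替换为你的实际服务地址):

```python
import io
import urllib.request
import uuid

def multipart_body(field, filename, data):
    """手工构造一个最小的 multipart/form-data 请求体。"""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field}"; '
        f'filename="{filename}"\r\n\r\n'.encode())
    body.write(data)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"

# 用图片字节构造请求1.2.3.4 为假设的服务地址)
payload, content_type = multipart_body("img", "horse.jpg", b"...jpeg bytes...")
req = urllib.request.Request(
    "http://1.2.3.4:5000/predict",
    data=payload,
    headers={"Content-Type": content_type},
)
# resp = urllib.request.urlopen(req)   # 服务可达时再取消注释
# print(resp.read().decode())
```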
### 容器化模型
在将模型部署到 Kubernetes 前,你需要先安装 Docker 并使用你的模型创建一个镜像。
1. 下载、安装并启动 Docker
```bash
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce
sudo systemctl start docker
```
2. 创建一个你用来管理代码与依赖的文件夹:
```bash
mkdir kubermatic-dl
cd kubermatic-dl
```
3. 创建 `requirements.txt` 文件管理代码运行时需要的所有依赖:
```
flask
gluoncv
matplotlib
mxnet
requests
Pillow
```
4. 创建 DockerfileDocker 将根据这个文件创建镜像:
```
FROM python:3.6
WORKDIR /app
COPY requirements.txt /app
RUN pip install -r ./requirements.txt
COPY app.py /app
CMD ["python", "app.py"]
```
这个 Dockerfile 主要可以分为三个部分。首先Docker 会下载 Python 的基础镜像。然后Docker 会使用 Python 的包管理工具 `pip` 安装 `requirements.txt` 记录的包。最后Docker 会通过执行 `python app.py` 来运行你的脚本。
5. 构建 Docker 镜像:`sudo docker build -t kubermatic-dl:latest .`。这条命令基于你当前工作目录中的代码构建了一个名为 `kubermatic-dl` 的镜像。
6. 使用 `sudo docker run -d -p 5000:5000 kubermatic-dl` 命令检查你的容器可以在你的主机上正常运行。
7. 使用 `sudo docker ps -a` 命令查看你本地容器的运行状态:
![查看容器的运行状态][9]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
### 将你的模型上传到 Docker Hub
在向 Kubernetes 上部署模型前,你的镜像首先需要是公开可用的。你可以通过将你的模型上传到 [Docker Hub][10] 来将它公开。(如果你没有 Docker Hub 的账号,你需要先创建一个)
1. 在终端中登录 Docker Hub 账号:`sudo docker login`
2. 给你的镜像打上标签,这样你的模型上传到 Docker Hub 后也能拥有版本信息:
```bash
sudo docker tag <your-image-id> <your-docker-hub-name>/<your-app-name>
sudo docker push <your-docker-hub-name>/<your-app-name>
```
![给镜像打上 tag][11]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
3. 使用 `sudo docker images` 命令检查你的镜像的 ID。
### 部署你的模型到 Kubernetes 集群
1. 首先在 Kubermatic Kubernetes 平台创建一个项目, 然后根据 [快速开始][12] 创建一个 Kubernetes 集群。
![创建一个 Kubernetes 集群][13]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
2. 下载用于访问你的集群的 `kubeconfig`,将它放置在下载目录中,并记得设置合适的环境变量,使得你的环境能找到它:
![Kubernetes 集群示例][14]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
3. 使用 `kubectl` 命令检查集群信息。例如,可以使用 `kubectl cluster-info` 命令检查 `kube-system` 是否在你的集群中正常启动:
![查看集群信息][15]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
4. 为了在集群中运行容器,你需要创建一个部署用的配置文件(`deployment.yaml`),再运行 `apply` 命令将其应用于集群中:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubermatic-dl-deployment
spec:
  selector:
    matchLabels:
      app: kubermatic-dl
  replicas: 3
  template:
    metadata:
      labels:
        app: kubermatic-dl
    spec:
     containers:
     - name: kubermatic-dl
       image: kubermatic00/kubermatic-dl:latest
       imagePullPolicy: Always
       ports:
       - containerPort: 5000
```

然后运行 `kubectl apply -f deployment.yaml` 将其应用到集群中。
5. 为了将你的部署开放到公网环境,你需要一个能够给你的容器创建外部可达 IP 地址的服务:`kubectl expose deployment kubermatic-dl-deployment  --type=LoadBalancer --port 80 --target-port 5000`
6. 就快大功告成了!首先检查你部署的服务的状态,然后通过 IP 请求你的图像识别 API`kubectl get service`
![获取请求图像识别 API 的 IP 地址][16]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
7. 最后根据你的外部 IP 使用以下两张图片对你的图像识别服务进行测试:
![马][17]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
![狗][18]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
![测试 API][19]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
### 总结
在这篇教程中,你创建了一个深度学习模型,并使用 Flask 通过 [REST API][20] 提供服务。它介绍了如何将应用放进 Docker 容器,如何将镜像上传到 Docker Hub以及如何使用 Kubernetes 部署你的服务。只需几个简单的命令,你就可以使用 Kubermatic Kubernetes 平台部署该应用程序,并且开放服务给别人使用。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/deep-learning-model-kubernetes
作者:[Chaimaa Zyani][a]
选题:[lujun9972][b]
译者:[chunibyo-wly](https://github.com/chunibyo-wly)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/chaimaa
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_computer_solve_fix_tool.png?itok=okq8joti (Brain on a computer screen)
[2]: https://www.loodse.com/products/kubermatic/
[3]: https://www.cs.toronto.edu/~kriz/cifar.html
[4]: https://gluon.mxnet.io/
[5]: https://mxnet.apache.org/
[6]: https://gluon-cv.mxnet.io/build/examples_classification/demo_cifar10.html
[7]: https://opensource.com/sites/default/files/uploads/trainingplot.png (Deep learning model training plot)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/containerstatus.png (Checking the container's status)
[10]: https://hub.docker.com/
[11]: https://opensource.com/sites/default/files/uploads/tagimage.png (Tagging the image)
[12]: https://docs.kubermatic.com/kubermatic/v2.13/installation/install_kubermatic/_installer/
[13]: https://opensource.com/sites/default/files/uploads/kubernetesclusterempty.png (Create a Kubernetes cluster)
[14]: https://opensource.com/sites/default/files/uploads/kubernetesexamplecluster.png (Kubernetes cluster example)
[15]: https://opensource.com/sites/default/files/uploads/clusterinfo.png (Checking the cluster info)
[16]: https://opensource.com/sites/default/files/uploads/getservice.png (Get the IP address to call your image recognition API)
[17]: https://opensource.com/sites/default/files/uploads/horse.jpg (Horse)
[18]: https://opensource.com/sites/default/files/uploads/dog.jpg (Dog)
[19]: https://opensource.com/sites/default/files/uploads/testapi.png (Testing the API)
[20]: https://www.redhat.com/en/topics/api/what-is-a-rest-api


@ -0,0 +1,381 @@
[#]: subject: (Analyze the Linux kernel with ftrace)
[#]: via: (https://opensource.com/article/21/7/linux-kernel-ftrace)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
[#]: collector: (lujun9972)
[#]: translator: (mengxinayan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
通过 `ftrace` 来分析 Linux 内核
======
通过 `ftrace` 来了解 Linux 内核内部工作方式是一个好方法。
![Linux keys on the keyboard for a desktop computer][1]
操作系统的内核是最难以理解的软件之一。自系统启动后,它就一直在后台运行。尽管用户并不直接与内核交互,但他们的计算任务都是在内核的帮助下完成的。与内核的交互发生在进行系统调用时,或者经由用户日常使用的各种库或应用间接地发起这些调用。
在之前的文章里我介绍了如何使用 `strace` 来追踪系统调用。然而,使用 `strace` 时你的可见性是受限的:它只能让你看到带特定参数的系统调用,并在调用完成后看到返回值或状态,以表明成功还是失败。但你无法知道内核在这段时间内发生了什么。除了系统调用之外,内核中还有很多其他活动在发生,却被忽略了。
### `ftrace` 介绍
本文的目的是介绍一种名为 `ftrace` 的内核函数追踪机制。任何 Linux 用户都可以借助它轻松地追踪内核,并了解更多关于 Linux 内核内部工作方式的知识。
`ftrace` 默认产生的输出是巨大的,因为内核总是忙的。为了节省空间,很多情况下我会通过截断来给出最小输出。
我使用 Fedora 来演示下面的例子,但是它们应该在其他最新的 Linux 发行版上同样可以运行。
### 启用 `ftrace`
`ftrace` 现在已经是内核的一部分了,你不再需要事先安装它。也就是说,如果你在使用最近的 Linux 系统,那么 `ftrace` 已经启用了。为了验证 `ftrace` 是否可用,运行 `mount` 命令并查找 `tracefs`。如果你看到类似下面的输出,表示 `ftrace` 已经启用,你可以轻松地尝试本文下面的例子。下面的一些命令需要以 root 用户运行(只用 `sudo` 是不够的):
```
$ sudo mount | grep tracefs
none on /sys/kernel/tracing type tracefs (rw,relatime,seclabel)
```
要想使用 `ftrace`,你首先需要进入上面 `mount` 命令中找到的特定目录中,在那个目录下运行文章中的其他命令。
```
$ cd /sys/kernel/tracing
```
### 一般的工作流程
首先,你需要理解捕捉追踪和获取输出的一般流程。如果你直接使用 `ftrace`,并没有任何 `ftrace` 专属的特殊命令可以运行;相反,你基本上是通过标准的 Linux 命令行工具来写入或读取一些文件。
通用的步骤如下:
1. 通过写入一些特定文件来启用/结束追踪
2. 通过写入一些特定文件来设置/取消追踪时的过滤规则
3. 读取基于第 1 和 2 步的追踪输出
4. 清除输出的文件或缓存
5. 针对特定用例,缩小要追踪的内核函数范围,重复第 1、2、3、4 步
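上面的步骤可以用一个简短的脚本来示意(需要 root 权限和已挂载的 tracefs非 root 环境下会直接跳过而不做任何修改):

```bash
# ftrace 一般工作流程的脚本示意(需要 root 权限)
T=/sys/kernel/tracing
if [ -w "$T/tracing_on" ]; then
    echo function > "$T/current_tracer"   # 第 1 步:启用 function 追踪器
    echo 1 > "$T/tracing_on"              # 第 2 步:打开追踪
    sleep 1                               # 在这里运行你想观察的工作负载
    echo 0 > "$T/tracing_on"              # 第 3 步:关闭追踪
    head -5 "$T/trace"                    # 第 4 步:读取追踪输出
    echo > "$T/trace"                     # 第 5 步:清空缓冲区
    echo nop > "$T/current_tracer"        # 恢复默认追踪器
    STATUS="traced"
else
    STATUS="skipped: 需要 root 权限才能写入 tracefs"
fi
echo "$STATUS"
```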
### 可用的追踪器类型
有多种不同的追踪器可供使用。之前提到,在运行任何命令前,你需要进入一个特定的目录,因为相关文件都位于该目录下。在我的例子中,我使用相对路径(而不是绝对路径)。
你可以查看 `available_tracers` 文件的内容来了解所有可用的追踪器类型。下面列出了其中几个,不必担心数量太多。
```
$ pwd
/sys/kernel/tracing
$ sudo cat available_tracers
hwlat blk mmiotrace function_graph wakeup_dl wakeup_rt wakeup function nop
```
在所有输出的追踪器中,我会聚焦于下面三个:用于启用追踪的 `function` 和 `function_graph`,以及用于停止追踪的 `nop`。
### 确认当前的追踪器
通常情况下,默认的追踪器被设定为 `nop`,即特殊文件 `current_tracer` 中的“无操作no-op追踪器这意味着追踪目前是关闭的
```
$ pwd
/sys/kernel/tracing
$ sudo cat current_tracer
nop
```
### 查看追踪输出
在启用任何追踪功能之前,请先看一下保存追踪输出的文件。你可以用 [cat][2] 命令查看名为 `trace` 的文件的内容:
```
$ sudo cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 0/0   #P:8
#
#                                _-----=> irqs-off
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| /     delay
#           TASK-PID     CPU#  ||||   TIMESTAMP  FUNCTION
#              | |         |   ||||      |         |
```
### 启用 `function` 追踪器
你可以通过向 `current_tracer` 文件写入 `function` 来启用第一个追踪器 `function`(文件原本内容为 `nop`,意味着追踪是关闭的)。把这个操作看成是启用追踪的一种方式。
```
$ pwd
/sys/kernel/tracing
$ sudo cat current_tracer
nop
$ echo function > current_tracer
$
$ cat current_tracer
function
```
### 查看 `function` 追踪器的更新追踪输出
现在你已启用追踪,是时候查看输出了。如果你查看 `trace` 文件的内容,你将会看到许多被连续写入的内容。我通过管道只展示了文件内容的前 20 行。根据左边输出的标题,你可以看到在某个 CPU 上运行的任务和进程 ID。根据右边的输出你可以看到具体的内核函数及其父函数。中间显示的是时间戳信息。
```
$ sudo cat trace | head -20
# tracer: function
#
# entries-in-buffer/entries-written: 409936/4276216   #P:8
#
#                                _-----=> irqs-off
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| /     delay
#           TASK-PID     CPU#  ||||   TIMESTAMP  FUNCTION
#              | |         |   ||||      |         |
          <idle>-0       [000] d...  2088.841739: tsc_verify_tsc_adjust <-arch_cpu_idle_enter
          <idle>-0       [000] d...  2088.841739: local_touch_nmi <-do_idle
          <idle>-0       [000] d...  2088.841740: rcu_nocb_flush_deferred_wakeup <-do_idle
          <idle>-0       [000] d...  2088.841740: tick_check_broadcast_expired <-do_idle
          <idle>-0       [000] d...  2088.841740: cpuidle_get_cpu_driver <-do_idle
          <idle>-0       [000] d...  2088.841740: cpuidle_not_available <-do_idle
          <idle>-0       [000] d...  2088.841741: cpuidle_select <-do_idle
          <idle>-0       [000] d...  2088.841741: menu_select <-do_idle
          <idle>-0       [000] d...  2088.841741: cpuidle_governor_latency_req <-menu_select
```
请记住,追踪打开后,追踪结果会被连续写入,直至你关闭追踪。
### 关闭追踪
关闭追踪是简单的。你只需要在 `current_tracer` 文件中用 `nop` 替换 `function` 追踪器即可:
```
$ sudo cat current_tracer
function
$ sudo echo nop > current_tracer
$ sudo cat current_tracer
nop
```
### 启用 `function_graph` 追踪器
现在尝试第二个名为 `function_graph` 的追踪器。你可以使用和上面相同的步骤:在 `current_tracer` 文件中写入 `function_graph`
```
$ sudo echo function_graph > current_tracer
$ sudo cat current_tracer
function_graph
```
### `function_graph` 追踪器的追踪输出
注意到目前 `trace` 文件的输出格式已经发生变化。现在,你可以看到 CPU ID 和内核函数的执行时间。接下来,一个花括号表示一个函数的开始,以及它内部调用了哪些其他函数。
```
$ sudo cat trace | head -20
# tracer: function_graph
#
# CPU  DURATION                  FUNCTION CALLS
# |     |   |                     |   |   |   |
 6)               |              n_tty_write() {
 6)               |                down_read() {
 6)               |                  __cond_resched() {
 6)   0.341 us    |                    rcu_all_qs();
 6)   1.057 us    |                  }
 6)   1.807 us    |                }
 6)   0.402 us    |                process_echoes();
 6)               |                add_wait_queue() {
 6)   0.391 us    |                  _raw_spin_lock_irqsave();
 6)   0.359 us    |                  _raw_spin_unlock_irqrestore();
 6)   1.757 us    |                }
 6)   0.350 us    |                tty_hung_up_p();
 6)               |                mutex_lock() {
 6)               |                  __cond_resched() {
 6)   0.404 us    |                    rcu_all_qs();
 6)   1.067 us    |                  }
```
### 调整追踪设置以增加追踪深度
你可以使用下面的步骤来调整追踪器以看到更深层次的函数调用。完成之后,你可以查看 `trace` 文件的内容并发现输出变得更加详细了。为了文章的可读性,这个例子的输出被省略了:
```
$ sudo cat max_graph_depth
0
$ sudo echo 1 > max_graph_depth
$ # or
$ sudo echo 2 > max_graph_depth
$ sudo cat trace
```
### 查找要追踪的函数
上面的步骤足以让你开始追踪,但是它产生的输出内容是巨大的,当你想从中找到自己感兴趣的内容时,往往会很困难。通常你更希望只追踪特定的函数而忽略其他函数。但如果你不知道它们确切的名称,怎么知道要追踪哪些函数呢?有一个文件可以帮助你解决这个问题:`available_filter_functions` 文件提供了一个可供追踪的函数列表。
```
$ sudo wc -l available_filter_functions  
63165 available_filter_functions
```
### 查找一般的内核函数
现在试着搜索一个你所知道的简单内核函数。用户空间有 `malloc` 函数用来分配内存,而内核有 `kmalloc` 函数,它提供类似的功能。下面是所有与 `kmalloc` 相关的函数。
```
$ sudo grep kmalloc available_filter_functions
debug_kmalloc
mempool_kmalloc
kmalloc_slab
kmalloc_order
kmalloc_order_trace
kmalloc_fix_flags
kmalloc_large_node
__kmalloc
__kmalloc_track_caller
__kmalloc_node
__kmalloc_node_track_caller
[...]
```
### 查找内核模块或者驱动相关函数
`available_filter_functions` 文件的输出中,你可以看到一些以括号内文字结尾的行,例如下面的例子中的 `[kvm_intel]`。这些函数与当前加载的内核模块 `kvm_intel` 有关。你可以运行 `lsmod` 命令来验证。
```
$ sudo grep kvm available_filter_functions | tail
__pi_post_block [kvm_intel]
vmx_vcpu_pi_load [kvm_intel]
vmx_vcpu_pi_put [kvm_intel]
pi_pre_block [kvm_intel]
pi_post_block [kvm_intel]
pi_wakeup_handler [kvm_intel]
pi_has_pending_interrupt [kvm_intel]
pi_update_irte [kvm_intel]
vmx_dump_dtsel [kvm_intel]
vmx_dump_sel [kvm_intel]
$ lsmod  | grep -i kvm
kvm_intel             335872  0
kvm                   987136  1 kvm_intel
irqbypass              16384  1 kvm
```
### 仅追踪特定的函数
为了实现对特定函数或模式的追踪,你可以利用 `set_ftrace_filter` 文件来指定你要追踪上述输出中的哪些函数。这个文件也接受 `*` 通配符模式,以匹配具有给定模式的其他函数。例如,我的机器上使用 ext4 文件系统,我可以用下面的命令指定追踪 ext4 相关的内核函数:
```
$ sudo mount | grep home
/dev/mapper/fedora-home on /home type ext4 (rw,relatime,seclabel)
$ pwd
/sys/kernel/tracing
$ sudo cat set_ftrace_filter
#### all functions enabled ####
$
$ echo ext4_* > set_ftrace_filter
$
$ cat set_ftrace_filter
ext4_has_free_clusters
ext4_validate_block_bitmap
ext4_get_group_number
ext4_get_group_no_and_offset
ext4_get_group_desc
[...]
```
现在,当你查看追踪输出时,你只能看到与 `ext4` 有关的内核函数,也就是你之前设置过滤器所指定的那些。所有其他的输出都被忽略了。
```
$ sudo cat trace |head -20
# tracer: function
#
# entries-in-buffer/entries-written: 3871/3871   #P:8
#
#                                _-----=> irqs-off
#                               / _----=> need-resched
#                              | / _---=> hardirq/softirq
#                              || / _--=> preempt-depth
#                              ||| /     delay
#           TASK-PID     CPU#  ||||   TIMESTAMP  FUNCTION
#              | |         |   ||||      |         |
           cupsd-1066    [004] ....  3308.989545: ext4_file_getattr <-vfs_fstat
           cupsd-1066    [004] ....  3308.989547: ext4_getattr <-ext4_file_getattr
           cupsd-1066    [004] ....  3308.989552: ext4_file_getattr <-vfs_fstat
           cupsd-1066    [004] ....  3308.989553: ext4_getattr <-ext4_file_getattr
           cupsd-1066    [004] ....  3308.990097: ext4_file_open <-do_dentry_open
           cupsd-1066    [004] ....  3308.990111: ext4_file_getattr <-vfs_fstat
           cupsd-1066    [004] ....  3308.990111: ext4_getattr <-ext4_file_getattr
           cupsd-1066    [004] ....  3308.990122: ext4_llseek <-ksys_lseek
           cupsd-1066    [004] ....  3308.990130: ext4_file_read_iter <-new_sync_read
```
### 排除要被追踪的函数
你并不总是知道你想追踪什么,但你肯定知道你不想追踪什么。为此,有一个 `set_ftrace_notrace` 文件(注意其中的“no”。你可以在这个文件中写入你想排除的模式然后启用追踪这样除了所写入的模式外其他所有函数都会被追踪到。这通常有助于排除那些使输出变得混乱的常见函数。
```
$ sudo cat set_ftrace_notrace
#### no functions disabled ####
```
### 具有目标性的追踪
到目前为止,你一直在追踪内核中发生的一切,但这无法帮助你追踪与某个特定命令有关的事件。为了达到这个目的,你可以按需打开和关闭追踪,并在两者之间运行你选择的命令,这样你就不会在追踪输出中得到多余的内容。你可以通过向 `tracing_on` 写入 `1` 来启用追踪,写入 `0` 来关闭追踪。
```
$ sudo cat tracing_on
0
$ sudo echo 1 > tracing_on
$ sudo cat tracing_on
1
$ # 在这里运行你想要追踪的特定命令
$ sudo echo 0 > tracing_on
$ cat tracing_on
0
```
### 追踪特定的 PID
如果你想追踪与正在运行的特定进程有关的活动,你可以将该进程的 PID 写入名为 `set_ftrace_pid` 的文件,然后启用追踪。这样一来,追踪就只限于这个 PID这在某些情况下是非常有帮助的。
```
$ sudo echo $PID > set_ftrace_pid
```
### 总结
`ftrace` 是一个了解 Linux 内核内部工作的很好方式。通过一些练习,你可以学会对 `ftrace` 进行调整以缩小搜索范围。要想更详细地了解 `ftrace` 和它的高级用法,请看 `ftrace` 的核心作者 Steven Rostedt 写的这些优秀文章。
* [调试 Linux 内核,第一部分][3]
* [调试 Linux 内核,第二部分][4]
* [调试 Linux 内核,第三部分][5]
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/linux-kernel-ftrace
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://opensource.com/article/19/2/getting-started-cat-command
[3]: https://lwn.net/Articles/365835/
[4]: https://lwn.net/Articles/366796/
[5]: https://lwn.net/Articles/370423/


@ -1,71 +0,0 @@
[#]: subject: "4 alternatives to cron in Linux"
[#]: via: "https://opensource.com/article/21/7/alternatives-cron-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux 中 cron 命令的 4 种替代方案
======
在 Linux 系统中有一些其他开源项目可以结合或者替代 cron 命令使用。
![Alarm clocks with different time][1]
[Linux `cron` 系统][2] 是一项经过时间检验的成熟技术,然而在任何情况下它都是最合适的系统自动化工具吗?答案是否定的。有一些开源项目就可以用来与 `cron` 结合或者直接代替 `cron` 使用。
### at 命令
`cron` 适用于长期重复任务。如果你设置了一个工作任务,它会从现在开始定期运行,直到计算机报废为止。但有些情况下你可能只想设置一个一次性命令,以备不在计算机旁时该命令可以自动运行。这时你可以选择使用 `at` 命令。
`at` 的语法比 `cron` 语法简单和灵活得多,并且兼具交互式和非交互式调度方法。(只要你想,你甚至可以使用 `at` 作业创建一个 `at` 作业。)
```
$ echo "rsync -av /home/tux/ me@myserver:/home/tux/" | at 1:30 AM
```
该命令语法自然且易用,并且不需要用户清理旧作业,因为它们一旦运行后就完全被计算机遗忘了。
阅读有关 [at 命令][3] 的更多信息并开始使用吧。
### systemd 命令
除了管理计算机上的进程外,`systemd` 还可以帮你调度这些进程。与传统的 `cron` 作业一样,`systemd` 计时器可以在指定的时间间隔触发事件,例如运行 shell 脚本和命令。触发时机可以是每月特定的一天一次(例如只在星期一触发),或者在 09:00 到 17:00 的工作时间内每 15 分钟一次。
此外 `systemd` 里的计时器还可以做一些 `cron` 作业不能做的事情。
例如,计时器可以在一个事件 _之后_ 触发脚本或程序来运行特定时长,这个事件可以是开机,可以是前置任务的完成,甚至可以是计时器本身调用的服务单元的完成!
如果你的系统运行着 `systemd` 服务,那么你的机器就已经在技术层面上使用 `systemd` 计时器了。默认计时器会执行一些琐碎的任务,例如滚动日志文件、更新 mlocate 数据库、管理 DNF 数据库等。创建自己的计时器很容易,具体可以参阅 David Both 的文章 [使用 systemd 计时器来代替 cron][4]。
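作为示意,一个每天凌晨两点运行备份脚本的计时器大致需要下面这样一对单元文件(单元名 `backup` 和脚本路径都是假设的,仅供参考):

```
# /etc/systemd/system/backup.service示例
[Unit]
Description=Nightly backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
```

```
# /etc/systemd/system/backup.timer示例
[Unit]
Description=Run backup.service daily at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

然后执行 `sudo systemctl enable --now backup.timer` 启用它。其中 `Persistent=true` 可以让错过的触发在开机后补跑,效果与 anacron 类似。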
### anacron 命令
`cron` 专门用于在特定时间运行命令,这适用于从不休眠或断电的服务器。然而对笔记本电脑和台式工作站而言,时常有意或无意地关机是很常见的。当计算机处于关机状态时,`cron` 不会运行,因此设定在这段时间内的一些重要工作(例如备份数据)也就会跳过执行。
`anacron` 系统旨在确保作业定期运行,而不是按计划时间点运行。这就意味着你可以将计算机关机几天,再次启动时仍然靠 `anacron` 来运行基本任务。`anacron` 与 `cron` 协同工作,因此严格来说前者不是后者的替代品,而是一种调度任务的有效可选方案。许多系统管理员配置了一个 `cron` 作业来在深夜备份远程工作者计算机上的数据,结果却发现该作业在过去六个月中只运行过一次。`anacron` 确保重要的工作在 _可执行的时候_ 发生,而不是必须在安排好的 _特定时间点_ 发生。
点击参阅关于 [使用 anacron 获得更好的 crontab 效果][5] 的更多内容。
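作为示意,`/etc/anacrontab` 中一条任务的定义大致如下(四个字段依次为:周期(天)、开机后的延迟(分钟)、任务标识和要执行的命令;命令本身只是示例):

```
# 周期(天)  延迟(分钟)  任务标识         命令
7            15           weekly-backup    rsync -a /home/tux/ /backup/tux/
```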
### 自动化
计算机和技术旨在让人们的生活更美好工作更轻松。Linux 为用户提供了许多有用的功能以确保完成重要的操作系统任务。查看这些可用的功能然后试着将这些功能用于你自己的工作任务吧。LCTT译注作者本段有些语焉不详读者可参阅譬如 [Ansible 自动化工具安装、配置和快速入门指南](https://linux.cn/article-13142-1.html) 等关于 Linux 自动化的文章)
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/alternatives-cron-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://opensource.com/article/21/7/cron-linux
[3]: https://opensource.com/article/21/7/intro-command
[4]: https://opensource.com/article/20/7/systemd-timers
[5]: https://opensource.com/article/21/2/linux-automation


@ -1,87 +0,0 @@
[#]: subject: "Automatically Synchronize Subtitle With Video Using SubSync"
[#]: via: "https://itsfoss.com/subsync/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
使用 SubSync 自动将字幕与视频同步化
======
让我分享一个场景。你正试图观看一部电影或视频,你需要字幕。你下载了字幕,却发现字幕没有正确同步。没有其他好的字幕可用。现在该怎么做?
你可以[在 VLC 中按 G 或 H 键来同步字幕][1]。它为字幕增加了一个延迟。如果字幕在整个视频中以相同的时间间隔不同步这可能会起作用。但如果不是这种情况SubSync 在这里会有很大帮助。
### SubSync: 字幕语音同步器
[SubSync][2] 是一个灵巧的开源工具,可用于 Linux、macOS 和 Windows。
它通过收听音轨来同步字幕,这就是它的神奇之处。即使音轨和字幕使用的是不同的语言,它也能发挥作用。如果有必要,它也可以被翻译,但我没有测试这个功能。
我做了一个简单的测试,使用一个与我正在播放的视频不同步的字幕。令我惊讶的是,它工作得很顺利,我得到了完美的同步字幕。
使用 SubSync 很简单。你启动应用,它要求你添加字幕文件和视频文件。
![User interface for SubSync][3]
你必须在界面上指定字幕和视频的语言。它可能会根据使用的语言下载额外的资源。
![SubSync may download additional packages for language support][4]
请记住,同步字幕需要一些时间,这取决于视频和字幕的长度。在等待过程完成时,你可以拿起你的茶/咖啡或啤酒。
你可以看到正在进行的同步状态,甚至可以在完成之前保存它。
![SubSync synchronization in progress][5]
同步完成后,你就可以点击保存按钮,把修改的内容保存到原文件中,或者把它保存为新的字幕文件。
![Synchronization completed][6]
我不能说它在所有情况下都能工作,但在我运行的样本测试中它是有效的。
### 安装 SubSync
SubSync 是一个跨平台的应用,你可以从它的[下载页面][7]获得 Windows 和 MacOS 的安装文件。
对于 Linux 用户SubSync 是作为一个 Snap 包提供的。如果你的发行版已经启用了 Snap 支持,使用下面的命令来安装 SubSync
```
sudo snap install subsync
```
请记住,下载 SubSync snap 包将需要一些时间。所以要有一个良好的网络连接或足够的耐心。
### 最后
就我个人而言,我对字幕很上瘾。即使我在 Netflix 上看英文电影,我也会把字幕打开。它有助于清楚地理解每段对话,特别是在有强烈口音的情况下。如果没有字幕,我永远无法理解[电影 Snatch 中 Mickey O'Neil由 Brad Pitt 扮演)的一句话][8]。
使用 SubSync 比[使用 Subtitle Editor][9] 同步字幕要容易得多。在[企鹅字幕播放器][10]之后,对于像我这样在整个互联网上搜索不同国家的稀有或推荐(神秘)电影的人来说,这是另一个很棒的工具。
如果你是一个“字幕用户”,我感觉你会喜欢这个工具。如果你使用过它,请在评论区分享你的使用经验。
--------------------------------------------------------------------------------
via: https://itsfoss.com/subsync/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/how-to-synchronize-subtitles-with-movie-quick-tip/
[2]: https://subsync.online/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-interface.png?resize=593%2C280&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize.png?resize=522%2C189&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize-1.png?resize=424%2C278&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/subsync-subtitle-synchronize-2.png?resize=424%2C207&ssl=1
[7]: https://subsync.online/en/download.html
[8]: https://www.youtube.com/watch?v=tGDO-9hfaiI
[9]: https://itsfoss.com/subtitld/
[10]: https://itsfoss.com/penguin-subtitle-player/


@ -0,0 +1,182 @@
[#]: subject: "Debian vs Ubuntu: Whats the Difference? Which One Should You Use?"
[#]: via: "https://itsfoss.com/debian-vs-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "perfiffer"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Debian 和 Ubuntu有什么不同应该选择哪一个
======
在 Debian 和 Ubuntu 系统中,你都可以 [使用 apt-get 命令][1] 来管理应用。你也可以在这两个发行版中安装 DEB 安装包。很多时候,你会在这两个发行版中发现相同的软件包安装说明。
它们两者是如此的相似,那么,它们两者之间有什么区别呢?
Debian 和 Ubuntu 属于同一系列的发行版。Debian 是由 Ian Murdock 在 1993 年创建的最初的发行版Ubuntu 则是 Mark Shuttleworth 在 2004 年基于 Debian 创建的发行版。
### Ubuntu 基于 Debian这意味着什么
Linux 发行版虽然有数百个,但其中只有少数是从零开始的独立发行版。[Debian][2]、Arch、Red Hat 是其中几个不派生于其它发行版、使用最广泛的发行版。
Ubuntu 源自 Debian。这意味着 Ubuntu 使用与 Debian 相同的 APT 包管理系统,并共享 Debian 软件仓库中的大量软件包和库。它建立在 Debian 的基础架构上。
![Ubuntu uses Debian as base][3]
这就是大多数“衍生”发行版所做的:它们使用相同的包管理器,并与基础发行版共享软件包,同时也会做一些改变、添加一些自己的软件包。这就是 Ubuntu 和 Debian 的不同之处,尽管前者是从后者衍生而来的。
### Ubuntu 和 Debian 的不同之处
因此Ubuntu 构建在 Debian 的架构和基础设施上,和 Debian 一样使用 `.DEB` 格式的软件包。
这意味着使用 Ubuntu 和使用 Debian 是一样的吗?并不完全如此。有很多因素可以用来区分两个不同的发行版。
让我逐一讨论这些因素来比较 Ubuntu 和 Debian。请记住有些比较适用于桌面版本而有些比较适用于服务器版本。
![][4]
#### 1\. 发布周期
Ubuntu 有两种发布版本LTS 版和常规版。[Ubuntu LTS长期支持版本][5] 每两年发布一次,并提供五年的支持。你可以选择升级到下一个可用的 LTS 版本。LTS 版本更稳定。
还有一个非 LTS 版本,每六个月发布一次。这些版本仅提供九个月的支持,但是它们会有更新的软件版本和功能。当当前版本不再维护时,你必须升级到下一个 Ubuntu 版本。
所以基本上,你可以根据这些版本在稳定性和新特性之间进行选择。
另一方面Debian 有三个不同的版本:稳定版、测试版和非稳定版。非稳定版用于开发中的测试,日常使用时应该避免。
测试版并不是非稳定版。它是用来为下一个稳定版做准备。有一些 Debian 用户更倾向于使用测试版来获取新的特性。
然后是稳定版。这是 Debian 的主要版本。Debian 稳定版可能没有最新的软件和功能,但在稳定性方面毋庸置疑。
每两年 Debian 会发布一个稳定版,并且会提供三年的支持。此后,你必须升级到下一个可用的稳定版。
#### 2\. 软件更新
![][6]
Debian 更关注稳定性,这意味着它并不总是使用最新版本的软件。例如,最新的 Debian 11 用的 GNOME 版本为 3.38,并不是最新版的 GNOME 3.40。
对于 GIMP、LibreOffice 等其它软件也是如此。这是你必须对 Debian 做出的妥协。这就是“Debian stable = Debian stale”笑话在 Linux 社区流行的原因。
Ubuntu LTS 版本也关注稳定性。但是它们通常拥有较新版本的常见软件。
你应该注意,对于某些软件,从开发人员仓库安装也是一种选择。例如,如果你想要安装最新版的 Docker你可以在 Debian 和 Ubuntu 中添加 Docker 仓库。
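以 Docker 为例,在这两个发行版上添加其官方软件仓库的步骤大致如下(脚本只是打印将要执行的命令而不实际修改系统;发行版名称和代号需按你的系统替换,具体命令请以 Docker 官方文档为准):

```bash
# 在 Debian/Ubuntu 上添加 Docker 官方 apt 仓库的大致步骤(示意:
# 只打印将要执行的命令,不实际修改系统)
DISTRO=debian        # Ubuntu 系统请改为 ubuntu
CODENAME=bullseye    # 按实际发行版代号替换
CMDS=$(cat <<EOF
curl -fsSL https://download.docker.com/linux/$DISTRO/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker.gpg
echo "deb [signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/$DISTRO $CODENAME stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt update && sudo apt install docker-ce
EOF
)
echo "$CMDS"
```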
总体来说,相比较于 Ubuntu Debian 稳定版的软件版本会更旧。
#### 3\. 软件可用性
Debian 和 Ubuntu 都拥有一个巨大的软件仓库。然而,[Ubuntu 同时有 PPA][7]<ruby>个人软件包存档<rt>Personal Package Archive</rt></ruby>)。通过 PPA安装更新版本的软件或者获取最新版本的软件都会变得更容易。
![][8]
你可以在 Debian 中尝试使用 PPA但是体验并不好。大多数时候你都会遇到问题。
#### 4\. 支持的平台
Ubuntu 可以在 64 位的 x86 和 ARM 平台上使用。它不再提供 32 位的镜像。
另一方面Debian 支持 32 位和 64 位架构。除此之外Debian 还支持 64 位 ARMarm64、ARM EABIarmel、ARMv7EABI hard-float ABIarmhf、小端 MIPSmipsel、64 位小端 MIPSmips64el、64 位小端 PowerPCppc64el和 IBM System zs390x。
所以它也被称为 “通用操作系统”。
#### 5\. 安装
[安装 Ubuntu][9] 比安装 Debian 容易得多。我并不是在骗你。即使对于中级 Linux 用户Debian 也可能令人困惑。
当你下载 Debian 的时候,它默认提供的是最小化镜像。 此镜像没有非免费(非开源)固件。如果你继续安装它,你就可能会发现你的网络适配器和其它硬件将无法识别。
有一个单独的非免费镜像包含固件,但它是隐藏的,如果你不知道,你可能会大吃一惊。
![Getting non-free firmware is a pain in Debian][10]
Ubuntu 在默认提供的镜像中包含专有驱动程序和固件时要宽容的多。
此外Debian 安装程序看起来很旧,而 Ubuntu 安装程序看起来就比较现代化。Ubuntu 安装程序还可以识别磁盘上其它已安装的操作系统,并为你提供将 Ubuntu 与现有操作系统一起安装的选项(双引导)。但我在测试时并没有注意到 Debian 有此选项。
![Installing Ubuntu is smoother][11]
#### 6\. 开箱即用的硬件支持
就像之前提到的Debian 主要关注 [FOSS][12](自由和开源软件)。这意味着 Debian 提供的内核不包括专有驱动程序和固件。
这并不是说你无法使其工作,而是你必须添加/启动其它存储库并手动安装。这可能令人沮丧,特别是对于初学者来说。
Ubuntu 并不完美,但在提供开箱即用的驱动程序和固件方面,它比 Debian 好得多。这意味着更少的麻烦和更完整的开箱即用体验。
#### 7\. 桌面环境选择
Ubuntu 默认使用定制的 GNOME 桌面环境。你可以在其上安装其它桌面环境,或者选择各种基于桌面的 Ubuntu 风格,如 Kubuntu使用 KDE 桌面Xubuntu使用 Xfce 桌面)等。
Debian 也默认使用的 GNOME 桌面。但是它会让你在安装的过程中选择你要安装的桌面环境。
![][15]
你还可以从其网站获取 [特定桌面环境的 ISO 镜像][16]。
#### 8\. 游戏性
由于 Steam 及其 Proton 项目Linux 上的游戏体验总体上有所改善。尽管如此,游戏在很大程度上取决于硬件。
在硬件兼容性上Ubuntu 比 Debian 更好地支持专有驱动程序。
并不是说它在 Debian 中不能完成,而是需要一些时间和精力来实现。
#### 9\. 性能
性能部分没有明显的“赢家”,无论是在服务器版本还是在桌面版本。 Debian 和 Ubuntu 作为桌面和服务器操作系统都很受欢迎。
性能取决于你系统的硬件和你所使用的软件组件。你可以在你的操作系统中调整和控制你的系统。
#### 10\. 社区和支持
Debian 是社区项目。此项目的一切都由其社区成员管理。
Ubuntu 由 [Canonical][17] 提供支持。然而,它并不是一个真正意义上的企业项目。它确实有一个社区,但任何事情的最终决定权都掌握在 Canonical 手中。
就支持而言Ubuntu 和 Debian 都有专门的论坛,用户可以在其中寻求帮助和提出建议。
Canonical 还为其企业客户提供收费的专业支持Debian 则没有这样的付费支持服务。
### 结论
Debian 和 Ubuntu 都是桌面或服务器操作系统的可靠选择。 apt 包管理器和 DEB 包对两者都是通用的,因此提供了一些相似的体验。
然而Debian 仍然需要一定程度的专业知识,特别是在桌面方面。如果你是 Linux 新手,坚持使用 Ubuntu 将是你更好的选择。在我看来,你应该获得一些经验,熟悉 Linux然后尝试使用 Debian。
并不是说你不能从一开始就使用 Debian但对于 Linux 初学者来说,这并不是一种很好的体验。
**欢迎你对这场 Debian 与 Ubuntu 辩论发表意见。**
--------------------------------------------------------------------------------
via: https://itsfoss.com/debian-vs-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[perfiffer](https://github.com/perfiffer)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/apt-get-linux-guide/
[2]: https://www.debian.org/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Debian-ubuntu-upstream.png?resize=800%2C400&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/debian-vs-ubuntu.png?resize=800%2C450&ssl=1
[5]: https://itsfoss.com/long-term-support-lts/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/apt-cache-policy.png?resize=795%2C456&ssl=1
[7]: https://itsfoss.com/ppa-guide/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ffmpeg_add_ppa.jpg?resize=800%2C222&ssl=1
[9]: https://itsfoss.com/install-ubuntu/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/Debian-firmware.png?resize=800%2C600&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/choose-something-else-installing-ubuntu.png?resize=800%2C491&ssl=1
[12]: https://itsfoss.com/what-is-foss/
[13]: https://itsfoss.com/best-linux-desktop-environments/
[14]: https://itsfoss.com/which-ubuntu-install/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/debian-install-desktop-environment.png?resize=640%2C479&ssl=1
[16]: https://cdimage.debian.org/debian-cd/current-live/amd64/iso-hybrid/
[17]: https://canonical.com/


@ -0,0 +1,105 @@
[#]: subject: "How to set up your printer on Linux"
[#]: via: "https://opensource.com/article/21/8/add-printer-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "fisherue "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
如何在 Linux 系统设置打印机
======
如果系统没有自动检测到你的打印机,这篇文章教你如何在 Linux 系统手动添加打印机。
![printing on Linux][1]
即使在电子墨水e-ink和增强现实技术得到实际应用的未来我们还是会用到打印机。打印机制造商至今仍未能让各自的专有打印机与各种计算机以完全标准化的方式传递信息因此无论在哪种操作系统上我们都需要各种打印机驱动程序。IEEE-ISTO 下属的打印机工作组PWG和开放打印组织OpenPrinting.org长期合作致力于让人们可以使用任何型号的打印机轻松打印。带来的便利就是很多打印机无需用户配置就可以被自动识别使用。
如果系统没有自动检测到你的打印机,你可以在这篇文章中找到如何在 Linux 系统手动添加打印机。文中假定你使用的是 GNOME 图形桌面系统,其设置流程同样适用于 KDE 或其他多数桌面系统。
### 打印机驱动程序
在你尝试用打印机打印文件时,要先确认你的 Linux 系统上是不是已经安装了匹配的打印机驱动程序。
可以尝试安装的打印机驱动程序有三大类:
* 在你的 Linux 系统作为安装包提供的开源打印机驱动程序 [Gutenprint drivers][2]
* 打印机制造商提供的专用驱动程序
* 第三方开发提供的打印机驱动程序
开源打印机驱动程序库可以驱动 700 多种打印机,值得安装。这里面可能就有你的打印机的驱动,说不定可以自动设置好你的打印机,你直接就可以使用它了。
### 安装开源驱动程序包(库)
有些 Linux 发行版已经预装了开源打印机驱动程序包,如果没有,你可以用包管理器来安装。比如说,在 Fedora、CentOS、Mageia 等类似发行版的 Linux 系统上,执行下面的命令来安装:
```
$ sudo dnf install gutenprint
```
惠普HP系列的打印机还需要安装惠普的 Linux 成像和打印系统软件包HPLIP。在 Debian、Linux Mint 等系列的系统上,使用下面的命令:
```
$ sudo apt install hplip
```
### 安装制造商提供的驱动程序
很多时候,打印机制造商使用非标准的接口协议,这种情况下开源打印机驱动程序就无法驱动打印机。还有一种情况是,开源驱动程序可以驱动打印机工作,但会缺少品牌特有的某些功能。这些情况下,你需要访问制造商的网站,找到适合你的打印机型号的 Linux 平台驱动。安装过程各异,请仔细阅读安装指南逐步安装。
如果厂家的驱动也不能驱动你的打印机工作,你或许只能尝试 [第三方驱动开发者][3] 提供的该型号打印机的驱动软件了。这类第三方驱动程序不是开源的,和打印机专用驱动程序一样闭源。如果你需要额外花费 45 美元(约 400 元人民币)从供应商那里获取帮助服务才能安装好驱动并使用你的打印机,那是很心疼的;或者你索性把这台打印机扔掉,至少你知道下次再也不会购买这个品牌的打印机了。LCTT 译注:国内售后服务收费没有北美那么高,有需要还是先电话咨询售后,有没有 Linux 平台的专用驱动可真是碰运气。)
### 通用 UNIX 打印系统CUPS
<ruby>通用 UNIX 打印系统<rt>Common UNIX Printing System</rt></ruby>CUPS由 Easy Software Products 公司于 1997 年开发,并于 2007 年被苹果公司Apple收购。这是 Linux 平台打印的开源基础软件包,很多发行版在其上提供定制化的界面。得益于 CUPS 技术,你可以使用通过 USB 接口连接到电脑的打印机,甚至是连接在同一网络中的共享打印机。
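除了图形界面CUPS 也自带命令行工具。下面是一个查看与添加打印机的命令示意(假设系统已安装 CUPS`lpadmin` 一行中的打印机名和设备 URI 均为假设值):

```bash
# 列出已配置的打印机与默认打印机(若未安装 CUPS 命令行工具则给出提示)
if command -v lpstat >/dev/null 2>&1; then
    STATUS="$(lpstat -p -d 2>&1)"
else
    STATUS="未找到 lpstat请先安装 CUPS"
fi
STATUS="${STATUS:-(无输出)}"
echo "$STATUS"
# 不经图形界面手动添加一台网络打印机(示例值,按需替换):
# sudo lpadmin -p myprinter -E -v ipp://192.168.1.50/ipp/print -m everywhere
```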
一旦你安装了需要的驱动程序包,你就能手工添加你的打印机了。首先,把打印机连接到电脑上,并打开打印机电源。然后从启动器(**Activities**)或者应用列表中找到并打开打印机设置(**Printers**)。![printer settings][4]
CC BY-SA Opensource.com
基于你已经安装的驱动包,你的 Linux 系统有可能自动检测识别到你的打印机型号,不需要额外的设置就可以使用你的打印机了。
![printer settings][5]
CC BY-SA Opensource.com
一旦你在列表中找到你的打印机型号,设置使用这个驱动,恭喜你就可以在 Linux 系统上用它打印了。
如果你的打印机没有被自动识别,你就需要自行添加打印机。在打印机设置界面(**Printers**)中,点击右上角的解锁按钮(**Unlock**),输入管理员密码,该按钮会变成**添加打印机**按钮(**Add**)。
点击这个**添加打印机**按钮(**Add**),电脑会搜索已经连接的本地打印机并匹配相应的驱动程序。如果要添加网络共享打印机,则在搜索框中输入打印机或其服务器的 IP 地址。
![searching for a printer][6]
CC BY-SA Opensource.com
选中你想添加的打印机型号,点击**添加**按钮(**Add**)把它加入系统,就可以使用它打印了。
### 在 Linux 系统上打印
在 Linux 系统上打印很容易,不管你是在使用本地打印机还是网络打印机。如果你计划购买打印机,建议查看开放打印组织的数据库([OpenPrinting.org database][7]),看看你想购买的打印机是否有相应的开源驱动程序。如果你已经拥有一台打印机,你现在也知道怎样在你的 Linux 系统上使用它了。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/add-printer-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[fisherue](https://github.com/fisherue)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/happy-printer.png?itok=9J44YaDs "printing on Linux"
[2]: http://gimp-print.sourceforge.net/
[3]: https://www.turboprint.info/
[4]: https://opensource.com/sites/default/files/system-settings-printer_0.png "printer settings"
[5]: https://opensource.com/sites/default/files/settings-printer.png "printer settings"
[6]: https://opensource.com/sites/default/files/printer-search.png "searching for a printer"
[7]: http://www.openprinting.org/printers/

[#]: subject: "Apps for daily needs part 4: audio editors"
[#]: via: "https://fedoramagazine.org/apps-for-daily-needs-part-4-audio-editors/"
[#]: author: "Arman Arisman https://fedoramagazine.org/author/armanwu/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
满足日常需求的应用第四部分:音频编辑器
======
![][1]
照片由 [Brooke Cagle][2] 在 [Unsplash][3] 上发布。
音频编辑应用或数字音频工作站DAW过去只被专业人士使用如唱片制作人、音响工程师和音乐家。但现在许多非专业人士也需要它们这些工具被用于演示文稿解说、视频博客甚至只是作为一种爱好。如今有众多在线平台方便大家分享音乐、歌曲、播客等音频作品这一需求就更普遍了。本文将介绍一些可以在 Fedora Linux 上使用的开源音频编辑器或 DAW。文中提到的软件可能需要你自行安装如果你不熟悉如何在 Fedora Linux 中添加软件包,请参阅我之前的文章《[安装 Fedora 34 工作站后要做的事情][4]》。下面就是几款能满足日常需求的音频编辑器或 DAW。
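顺带一提,文中介绍的应用都可以通过 dnf 安装。下面的命令仅作示意包名按常见写法给出属于假设Ardour 的包名会随大版本变化,例如 `ardour6`),请以 Fedora 仓库的实际包名为准:

```shell
# 一次性安装文中介绍的四款应用(包名为假设的常见写法,请以实际仓库为准)
pkgs="audacity lmms ardour6 tuxguitar"
if command -v dnf >/dev/null 2>&1 && command -v sudo >/dev/null 2>&1; then
  sudo dnf install -y $pkgs || echo "安装失败,请检查包名与网络连接"
else
  echo "此命令适用于 Fedora需要安装的软件包$pkgs"
fi
```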
### Audacity
我相信很多人已经知道 Audacity 了。它是一个流行的多轨音频编辑器和录音机,可用于对所有类型的音频进行后期处理。大多数人使用 Audacity 来记录他们的声音,然后进行编辑,使其结果更好。其结果可以作为播客或视频博客的解说词。此外,人们还用 Audacity 来创作音乐和歌曲。你可以通过麦克风或调音台录制现场音频。它还支持 32 位的声音质量。
Audacity 有很多功能可以支持你的音频作品。它有对插件的支持你甚至可以自己编写插件。Audacity 提供了许多内置效果,如降噪、放大、压缩、混响、回声、限制器等。你可以利用实时预览功能在直接聆听音频的同时尝试这些效果。内置的插件管理器可以让你管理经常使用的插件和效果。
![][5]
更多信息可在此链接中找到: <https://www.audacityteam.org/>
* * *
### LMMS
LMMS即 Linux MultiMedia Studio是一个全面的音乐创作应用。你可以用 LMMS 在电脑上从零开始制作音乐,按照自己的创意编写旋律和节拍,并通过选择声音乐器和各种效果使作品更加完美。它内置了不少与乐器和效果有关的功能,比如 16 个内置合成器、内嵌的 ZynAddSubFX、对 VST 效果插件的支持、自带的图形和参数均衡器、内置分析器等等。LMMS 还支持 MIDI 键盘和其他音频外设。
![][6]
更多信息可在此链接中获得: <https://lmms.io/>
* * *
### Ardour
Ardour 作为一个全面的音乐创作应用,其功能与 LMMS 相似。它在其网站上说Ardour 是一个 DAW 应用是来自世界各地的音乐家、程序员和专业录音工程师合作的结果。Ardour 拥有音频工程师、音乐家、配乐编辑和作曲家需要的各种功能。
Ardour 为录音、编辑、混音和输出提供了完整的功能。它有无限的多声道音轨、无限撤销/重做的非线性编辑器、一个全功能的混音器、内置插件等。Ardour 还带有视频播放工具,所以它在为视频项目创建和编辑配乐的过程中也很有帮助。
![][7]
更多信息可在此链接中获得: <https://ardour.org/>
* * *
### TuxGuitar
TuxGuitar 是一个指法谱和乐谱编辑器。它配备了指法编辑器、乐谱查看器、多轨显示、拍号管理和速度管理。它包括各种效果,如弯曲、滑动、颤音等。虽然 TuxGuitar 专注于吉他,但它也允许你为其他乐器写乐谱。它也可以作为一个基本的 MIDI 编辑器。你需要对指法谱和乐谱有一定的了解才能使用它。
![][8]
更多的信息可以在这个链接上获得: <http://www.tuxguitar.com.ar/>
* * *
### 总结
这篇文章介绍了四款音频编辑器,可以满足你在 Fedora Linux 上的日常使用需求。实际上Fedora Linux 上还有许多其他可用的音频编辑器或 DAW比如 Mixxx、Rosegarden、Kwave、Qtractor、MuseScore、MusE 等等。希望这篇文章能帮助你考察并选出合适的音频编辑器或 DAW。如果你有使用这些应用的经验请在评论中分享。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/apps-for-daily-needs-part-4-audio-editors/
作者:[Arman Arisman][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/armanwu/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/07/FedoraMagz-Apps-4-Audio-816x345.jpg
[2]: https://unsplash.com/@brookecagle?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/meeting-on-cafe-computer?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/things-to-do-after-installing-fedora-34-workstation/
[5]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-audacity-1024x575.png
[6]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-lmms-1024x575.png
[7]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-ardour-1024x592.png
[8]: https://fedoramagazine.org/wp-content/uploads/2021/08/audio-tuxguitar-1024x575.png

[#]: subject: "Ulauncher: A Super Useful Application Launcher for Linux"
[#]: via: "https://itsfoss.com/ulauncher/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ulauncher一个超级实用的 Linux 应用启动器
======
_**简介:**_ _Ulauncher 是一个快速的应用启动器,支持扩展和快捷方式,帮助你在 Linux 中快速访问应用和文件。_
一个应用启动器可以让你快速访问或打开某个应用,而无需在应用菜单里翻找图标。
我发现 Pop!_OS 默认提供的应用启动器超级方便。但是,并不是每个 Linux 发行版都开箱即用地提供应用启动器。
幸运的是,有一个方案可以让你在大多数流行的发行版中添加应用启动器。
### Ulauncher开源应用启动器
![][1]
Ulauncher 是一个使用 Python 和 GTK+ 构建的快速应用启动器。
它提供了相当多的自定义和控制选项供你调整。总的来说,你可以按自己的喜好调整它的行为和体验。
下面说说你可以期待的一些功能。
### Ulauncher 功能
Ulauncher 的选项非常容易找到,也易于定制。一些关键亮点包括:
* 模糊搜索算法,让你找到应用,即使你拼错了它们
* 记住你在同一会话中最后搜索的应用
* 经常使用的应用显示(可选)
* 自定义颜色主题
* 预设颜色主题,包括一个黑暗主题
* 唤出启动器的快捷键可以轻松定制
* 浏览文件和目录
* 支持扩展,以获得额外的功能(表情符号、天气、速度测试、笔记、密码管理器等)
* 浏览谷歌、维基百科和 Stack Overflow 等网站的快捷方式
它几乎提供了你在一个应用启动器中所期望的所有有用的能力,甚至更好。
### 如何在 Linux 中使用 Ulauncher
默认情况下,首次从应用菜单打开 Ulauncher 之后,你就可以随时按 **Ctrl + Space** 唤出它。
输入内容即可搜索应用。如果要查找文件或目录,则以 “**~**” 或者 “**/**” 开头输入(不含引号)。
![][2]
有一些默认的快捷键,如 “**g XYZ**”,其中 XYZ 是你想在谷歌中搜索的搜索词。
![][3]
同样,你可以通过 “**wiki**” 和 “**so**” 快捷键,直接在维基百科或 Stack Overflow 搜索。
在没有任何扩展的情况下,你也可以直接进行计算,并把结果一键复制到剪贴板。
![][4]
这在快速计算时应该很方便,不需要单独启动计算器应用。
你可以前往它的[扩展页面][5],浏览有用的扩展,以及指导你如何使用它的截图。
要改变它的工作方式,启用频繁的应用显示,并调整主题,请点击启动器右侧的齿轮图标。
![][6]
你可以把它设置为自动启动。但是,如果自动启动在你的支持 systemd 的发行版上不起作用,你可以参考它的 GitHub 页面,把它添加到服务管理器中。
这些选项非常直观,且易于定制,如下图所示。
![][7]
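如果你的发行版使用 systemd又希望 Ulauncher 随登录自动启动,可以参考下面这个假设性的用户服务示例(`ExecStart` 的路径和 `--hide-window` 参数均为假设,请先用 `command -v ulauncher` 确认实际路径Ulauncher 的 GitHub 页面也提供了类似的服务文件):

```shell
# 写入一个 systemd 用户服务,让 Ulauncher 随图形会话自动启动
# 注意ExecStart 路径与 --hide-window 参数均为假设,请按实际情况调整
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/ulauncher.service <<'EOF'
[Unit]
Description=Ulauncher application launcher
After=graphical-session.target

[Service]
ExecStart=/usr/bin/ulauncher --hide-window
Restart=on-failure

[Install]
WantedBy=graphical-session.target
EOF
echo "然后执行systemctl --user enable --now ulauncher.service"
```

写好服务文件后,执行 `systemctl --user enable --now ulauncher.service` 即可启用。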
### 在 Linux 中安装 Ulauncher
Ulauncher 为基于 Debian 或 Ubuntu 的发行版提供了 **.deb** 包。如果你是 Linux 新手,可以了解一下 [如何安装 Deb 文件][8]。
你也可以添加它的 PPA然后在终端中按照下面的命令安装
```
sudo add-apt-repository ppa:agornostal/ulauncher
sudo apt update
sudo apt install ulauncher
```
Arch 用户可以在 [AUR][9] 中找到它Fedora 用户则可以从默认仓库中安装。
更多信息可以参见其官方网站或 [GitHub 页面][10]。
[Ulauncher][11]
Ulauncher 对任何 Linux 发行版来说都是一个令人印象深刻的补充。特别是,如果你想要类似 Pop!_OS 那样的快速启动器功能,它是一个值得考虑的绝佳选择。
_你试过 Ulauncher 了吗?欢迎分享你的想法,说说它是如何帮助你快速完成工作的。_
--------------------------------------------------------------------------------
via: https://itsfoss.com/ulauncher/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher.png?resize=800%2C512&ssl=1
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-directory.png?resize=800%2C503&ssl=1
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-google.png?resize=800%2C449&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-calculator.png?resize=800%2C429&ssl=1
[5]: https://ext.ulauncher.io
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-gear-icon.png?resize=800%2C338&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/ulauncher-settings.png?resize=800%2C492&ssl=1
[8]: https://itsfoss.com/install-deb-files-ubuntu/
[9]: https://itsfoss.com/aur-arch-linux/
[10]: https://github.com/Ulauncher/Ulauncher/
[11]: https://ulauncher.io

[#]: subject: "Elementary OS 6 Odin Review Late Arrival but a Solid One"
[#]: via: "https://www.debugpoint.com/2021/08/elementary-os-6-odin-review/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "imgradeone"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
elementary OS 6 Odin 评测:迟到的新版本,但也实至名归
======
> 这篇 elementary OS 6 的评测将为您呈现该系统在旧款测试设备上的表现。
elementary OS 的粉丝们焦急等待 elementary OS 6 Odin 的发布已将近两年。之所以如此焦急,主要是因为旧版本 elementary OS 5.1 的内核和软件包对 2021 年来说已经过于陈旧,而且该版本还是基于 Ubuntu 18.04 LTS 构建的。因此,用户都急切地等待着基于 Ubuntu 20.04 LTS 的全新版本 —— 要知道Ubuntu 20.04 LTS 已经发布一年多,下一个 LTS 版本也即将到来。
你应该也明白的,过长的等待时间,很可能导致用户跳槽到其他发行版。
但即便如此,新版本终于还是 [在 8 月发布了][1],它也在用户和粉丝群体中引起了很大的轰动。
于是,我在一周前为一台旧设备(我知道新设备的体验会更好)安装了 elementary OS 6 Odin下面就是测评。
![elementary OS 6 Odin 的桌面][2]
### elementary OS 6 Odin 测评
测试设备:
* CPUIntel Core i3
* 内存4 GB
* 硬盘SSD 固态硬盘
* 显卡Nvidia GeForce340
#### 安装
在这一版本中elementary 团队对其自制的安装器做了易用性优化。新安装器减少了安装前的准备步骤,不过分区操作仍需借助 GParted 完成(当然 GParted 本身是一款不错的工具)。
在上述测试设备上,安装过程大约花费了 10 分钟,没有任何报错。安装完成后 GRUB 也正常更新,没有出现意外。这是一台使用传统 BIOSLegacy BIOS、安装了三个系统多重引导的机器。
#### 初见印象
如果你刚听说 elementary OS 和 Pantheon 桌面,或者从其他传统菜单型桌面环境迁移过来,你可能需要一两天时间来适应这款桌面。当然,如果你已经是 elementary OS 的老用户的话,那么你将获得一致的体验,外加性能和外观的优化。
你应该可以察觉到一些明显可见的 [elementary OS 6 的新特性][3],像是强调色、原生暗黑模式,以及一组不错的新壁纸。
[][4]
#### 稳定性与性能
我已经使用 elementary OS 6 Odin 超过一周的时间。在日常使用后,我只能说,它很稳定,没有突然的崩溃和意外。其他额外软件(需要从 apt 独立安装)也运作正常,没有性能损耗。
在近乎闲置的情况下CPU 使用率处在 5%-10% 之间,内存占用约为 900 MB。CPU 和内存的消耗主要来自 GalaPantheon 的窗口管理器、Wingpanel顶栏和应用中心。
![elementary OS 6 的系统性能][5]
考虑到系统的视觉效果,我认为这些占用数据也十分合理。不过,当你打开更多软件,例如 LibreOffice、Chrome、Kdenlive 之后,消耗的资源肯定会更多。
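如果你想在自己的机器上复现这类资源占用观察,不依赖图形监视器也可以做到。下面是一段通用的示意脚本(适用于任何 Linux 发行版):

```shell
# 粗略查看整体内存情况和内存占用最高的几个进程
grep -E 'MemTotal|MemAvailable' /proc/meminfo     # 总内存与当前可用内存(单位 kB
if command -v ps >/dev/null 2>&1; then
  ps -eo comm,%cpu,%mem --sort=-%mem | head -n 6  # 占用内存最多的进程
fi
```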
#### 应用程序与应用中心
elementary OS 的应用程序列表经过精选,几乎所有类型的软件都可以从应用中心获取,包括 Flatpak 应用。不过elementary OS 并没有预装一些重要的应用程序,像是 Firefox、LibreOffice、Torrent 客户端、硬盘分区工具、照片编辑器之类 —— 这些重要的程序需要在安装系统后再自行安装。我认为预装软件这一块有很大的改进空间。
### 结束语
在这一周的测试中,我多次遇到一个 bugWi-Fi 有时会突然断开,不过这完全是上游 Ubuntu 20.04 的问题 —— 多年来它一直存在奇怪的 Wi-Fi 问题。抛开这一点elementary OS 确实是一款稳定、优秀的 Linux 发行版;如果它能提供滚动更新的版本,也许会更好。总之,这是一款值得推荐的发行版,尤其适合从 macOS 迁移过来的用户。
* * *
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2021/08/elementary-os-6-odin-review/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://blog.elementary.io/elementary-os-6-odin-released/
[2]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/elementary-OS-6-ODIN-Desktop-1024x576.jpeg
[3]: https://www.debugpoint.com/2021/08/elementary-os-6/
[4]: https://www.debugpoint.com/2020/09/elementary-os-6-odin-new-features-release-date/
[5]: https://www.debugpoint.com/blog/wp-content/uploads/2021/08/System-performance-of-elementary-OS-6.jpeg