Merge pull request #3 from LCTT/master

update
This commit is contained in:
MjSeven 2018-05-06 11:07:30 +08:00 committed by GitHub
commit f485bcb062
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
51 changed files with 5333 additions and 1674 deletions

View File

@ -3,9 +3,10 @@ Caffeinated 6.828:练习 shell
通过在 shell 中实现多项功能,该作业将使你更加熟悉 Unix 系统调用接口和 shell。你可以在支持 Unix API 的任何操作系统(一台 Linux Athena 机器、装有 Linux 或 Mac OS 的笔记本电脑等)上完成此作业。请在第一次上课前将你的 shell 提交到[网站][1]。
如果你在练习中遇到困难或不理解某些内容,请不要羞于给[员工邮件列表][2]发送邮件,但我们确实希望全班同学能够自行处理这个级别的 C 编程。如果你对 C 不是很熟悉,可以把这个练习当作是对你 C 熟悉程度的检查。再说一次,如果你有任何问题,鼓励你向我们寻求帮助。
下载 xv6 shell 的[框架][3],然后查看它。框架 shell 包含两个主要部分:解析 shell 命令并实现它们。解析器只能识别简单的 shell 命令,如下所示:
```
ls > y
cat < y | sort | uniq | wc > y1
@ -13,31 +14,30 @@ cat y1
rm y1
ls | sort | uniq | wc
rm y
```
将这些命令剪切并粘贴到 `t.sh` 中。
你可以按如下方式编译框架 shell
```
$ gcc sh.c
```
它会生成一个名为 `a.out` 的文件,你可以运行它:
```
$ ./a.out < t.sh
```
执行会崩溃,因为你还没有实现其中的几个功能。在本作业的其余部分中,你将实现这些功能。
### 执行简单的命令
实现简单的命令,例如:
```
$ ls
```
解析器已经为你构建了一个 `execcmd`,所以你唯一需要编写的代码是 `runcmd` 中的 case ' '。要测试它,你可以运行 `ls`。你可能会发现查看 `exec` 的手册页是很有用的,输入 `man 3 exec`。
@ -47,10 +47,10 @@ $ ls
### I/O 重定向
实现 I/O 重定向命令,这样你可以运行:
```
echo "6.828 is cool" > x.txt
cat < x.txt
```
解析器已经识别出 `>` 和 `<`,并且为你构建了一个 `redircmd`,所以你的工作就是在 `runcmd` 中为这些符号填写缺少的代码。确保你的实现在上面的测试输入中正确运行。你可能会发现 `open`(`man 2 open`)和 `close` 的 man 手册页很有用。
@ -60,9 +60,9 @@ cat < x.txt
### 实现管道
实现管道,这样你可以运行命令管道,例如:
```
$ ls | sort | uniq | wc
```
解析器已经识别出 `|`,并且为你构建了一个 `pipecmd`,所以你必须编写的唯一代码是 `runcmd` 中的 case '|'。要测试,你可以运行上面的管道命令。你可能会发现 `pipe`、`fork`、`close` 和 `dup` 的 man 手册页很有用。
@ -71,7 +71,6 @@ $ ls | sort | uniq | wc
```
$ ./a.out < t.sh
```
无论是否完成挑战任务,不要忘记将你的答案提交给[网站][1]。
@ -80,11 +79,10 @@ $ ./a.out < t.sh
如果你想进一步尝试,可以将所选的任何功能添加到你的 shell。你可以尝试以下建议之一
* 实现由 `;` 分隔的命令列表
* 通过实现 `(` 和 `)` 来实现子 shell
* 通过支持 `&` 和 `wait` 在后台执行命令
* 实现参数引用
所有这些都需要改变解析器和 `runcmd` 函数。
@ -95,7 +93,7 @@ via: https://sipb.mit.edu/iap/6.828/lab/shell/
作者:[mit][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,12 +1,13 @@
Pet一个简单的命令行片段管理器
=====
![](https://www.ostechnix.com/wp-content/uploads/2018/01/pet-6-720x340.png)
我们不可能记住所有的命令,对吧?是的。除了经常使用的命令之外,我们几乎不可能记住一些很少使用的长命令。这就是为什么需要一些外部工具来帮助我们在需要时找到命令。在过去,我们已经点评了两个有用的工具,名为 “Bashpast” 和 “Keep”。使用 Bashpast我们可以轻松地为 Linux 命令添加书签,以便更轻松地重复调用。而 Keep 实用程序可以用来在终端中保留一些重要且冗长的命令,以便你可以随时使用它们。今天,我们将看到该系列中的另一个工具,以帮助你记住命令。现在让我们认识一下 “Pet”这是一个用 Go 语言编写的简单的命令行代码管理器。
使用 Pet你可以
* 注册/添加你重要的冗长和复杂的命令片段。
* 以交互方式来搜索保存的命令片段。
* 直接运行代码片段而无须一遍又一遍地输入。
* 轻松编辑保存的代码片段。
@ -14,68 +15,78 @@
* 在片段中使用变量
* 还有很多特性即将来临。
### 安装 Pet 命令行接口代码管理器
由于它是用 Go 语言编写的,所以确保你在系统中已经安装了 Go。
安装 Go 后,从 [Pet 发布页面][3] 获取最新的二进制文件。
```
wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_amd64.zip
```
对于 32 位计算机:
```
wget https://github.com/knqyf263/pet/releases/download/v0.2.4/pet_0.2.4_linux_386.zip
```
解压下载的文件:
```
unzip pet_0.2.4_linux_amd64.zip
```
对于 32 位:
```
unzip pet_0.2.4_linux_386.zip
```
`pet` 二进制文件复制到 PATH`/usr/local/bin` 之类的)。
```
sudo cp pet /usr/local/bin/
```
最后,让它可以执行:
```
sudo chmod +x /usr/local/bin/pet
```
如果你使用的是基于 Arch 的系统,那么你可以使用任何 AUR 帮助工具从 AUR 安装它。
使用 [Pacaur][4]
```
pacaur -S pet-git
```
使用 [Packer][5]
```
packer -S pet-git
```
使用 [Yaourt][6]
```
yaourt -S pet-git
```
使用 [Yay][7]
```
yay -S pet-git
```
此外,你需要安装 [fzf][8] 或 [peco][9] 工具以启用交互式搜索。请参阅官方 GitHub 链接了解如何安装这些工具。
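例如,fzf 可以按照其官方说明从 GitHub 安装(下面只是一个示意,假设系统中已装有 `git`,具体步骤请以官方文档为准):
```
git clone --depth 1 https://github.com/junegunn/fzf.git ~/.fzf
~/.fzf/install
```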
### 用法
运行没有任何参数的 `pet` 来查看可用命令和常规选项的列表。
```
$ pet
pet - Simple command-line snippet manager.
@ -103,21 +114,23 @@ Use "pet [command] --help" for more information about a command.
```
要查看特定命令的帮助部分,运行:
```
$ pet [command] --help
```
#### 配置 Pet
默认配置其实工作得挺好。但是,你可以更改保存片段的默认目录、选择要使用的选择器(fzf 或 peco)、编辑片段的默认文本编辑器、添加 GIST id 详细信息等。
要配置 Pet运行
```
$ pet configure
```
该命令将在默认的文本编辑器中打开默认配置(例如我是 vim根据你的要求更改或编辑特定值。
```
[General]
snippetfile = "/home/sk/.config/pet/snippet.toml"
@ -133,24 +146,27 @@ $ pet configure
~
```
#### 创建片段
为了创建一个新的片段,运行:
```
$ pet new
```
添加命令和描述,然后按下回车键保存它。
```
Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'
Description> Remove numbers from output.
```
![][11]
这是一个简单的命令,用于从 `echo` 命令输出中删除所有数字。你可以很轻松地记住它。但是,如果你很少使用它,几天后你可能会完全忘记它。当然,我们可以使用 `CTRL+R` 搜索历史记录,但 Pet 会更容易。另外Pet 可以帮助你添加任意数量的条目。
另一个很酷的功能是我们可以轻松添加以前的命令。为此,在你的 `.bashrc``.zshrc` 文件中添加以下行。
```
function prev() {
PREV=$(fc -lrn | head -n 1)
@ -159,46 +175,53 @@ function prev() {
```
执行以下命令来使保存的更改生效。
```
source .bashrc
```
或者:
```
source .zshrc
```
现在,运行任何命令,例如:
```
$ cat Documents/ostechnix.txt | tr '|' '\n' | sort | tr '\n' '|' | sed "s/.$/\\n/g"
```
要添加上述命令,你不必使用 `pet new` 命令。只需要:
```
$ prev
```
将说明添加到该命令代码片段中,然后按下回车键保存。
![][12]
#### 片段列表
要查看保存的片段,运行:
```
$ pet list
```
![][13]
#### 编辑片段
如果你想编辑代码片段的描述或命令,运行:
```
$ pet edit
```
这将在你的默认文本编辑器中打开所有保存的代码片段,你可以根据需要编辑或更改片段。
```
[[snippets]]
description = "Remove numbers from output."
@ -211,33 +234,35 @@ $ pet edit
output = ""
```
#### 在片段中使用标签
要为片段打上标签,使用下面的 `-t` 标志。
```
$ pet new -t
Command> echo 'Hell1o, Welcome1 2to OSTechNix4' | tr -d '1-9'
Description> Remove numbers from output.
Tag> tr command examples
```
#### 执行片段
要执行一个保存的片段,运行:
```
$ pet exec
```
从列表中选择你要运行的代码段,然后按回车键来运行它:
![][14]
记住你需要安装 fzf 或 peco 才能使用此功能。
#### 寻找片段
如果你保存的片段很多,你可以像下面这样使用字符串或关键词轻松搜索它们。
```
$ pet search
```
@ -246,40 +271,43 @@ $ pet search
![][15]
#### 同步片段
首先,你需要获取访问令牌。转到此链接 <https://github.com/settings/tokens/new> 并创建访问令牌(只需要 “gist” 范围)。
使用以下命令来配置 Pet
```
$ pet configure
```
标记设置到 **[Gist]** 字段中的 **access_token**
令牌设置到 `[Gist]` 字段中的 `access_token`
设置完成后,你可以像下面一样将片段上传到 Gist。
```
$ pet sync -u
Gist ID: 2dfeeeg5f17e1170bf0c5612fb31a869
Upload success
```
你也可以在其他 PC 上下载片段。为此,编辑配置文件并在 `[Gist]` 中将 `gist_id` 设置为 GIST id。
之后,使用以下命令下载片段:
```
$ pet sync
Download success
```
获取更多细节,参阅帮助选项:
```
pet -h
```
或者:
```
pet [command] -h
```
@ -289,14 +317,13 @@ pet [command] -h
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/pet-simple-command-line-snippet-manager/
作者:[SK][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,17 +1,17 @@
如何使用 rsync 通过 SSH 恢复部分传输的文件
======
![](https://www.ostechnix.com/wp-content/uploads/2016/02/Resume-Partially-Transferred-Files-Over-SSH-Using-Rsync.png)
由于诸如电源故障、网络故障或用户干预等各种原因,使用 `scp` 命令通过 SSH 复制的大型文件可能会中断、取消或损坏。有一天,我将 Ubuntu 16.04 ISO 文件复制到我的远程系统。不幸的是断电了,网络连接立即断了。结果么复制过程终止这只是一个简单的例子。Ubuntu ISO 并不是那么大,一旦电源恢复,我就可以重新启动复制过程。但在生产环境中,当你在传输大型文件时,你可能并不希望这样做。
而且,你不能继续使用 `scp` 命令恢复被中止的进度。因为,如果你这样做,它只会覆盖现有的文件。这时你会怎么做?别担心!这是 `rsync` 派上用场的地方!`rsync` 可以帮助你恢复中断的复制或下载过程。对于那些好奇的人,`rsync` 是一个快速、多功能的文件复制程序,可用于复制和传输远程和本地系统中的文件或文件夹。
它提供了大量控制其各种行为的选项,并允许非常灵活地指定要复制的一组文件。它以增量传输算法而闻名,该算法通过仅发送源文件和目标中现有文件之间的差异来减少通过网络发送的数据量。`rsync` 广泛用于备份和镜像,也可作为日常使用中改进的复制命令。
就像 `scp` 一样,`rsync` 也会通过 SSH 复制文件。如果你想通过 SSH 下载或传输大文件和文件夹,我建议你使用 `rsync`。请注意,应该在两边(远程和本地系统)都安装 `rsync` 来恢复部分传输的文件。
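如果系统中还没有 `rsync`,可以先用发行版的包管理器安装(一个示意,以 Debian/Ubuntu 和 CentOS/RHEL 为例):
```
# Debian/Ubuntu
$ sudo apt-get install rsync

# CentOS/RHEL
$ sudo yum install rsync
```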
### 使用 rsync 恢复部分传输的文件
好吧,让我给你看一个例子。我将使用以下命令将 Ubuntu 16.04 ISO 从本地系统复制到远程系统:
@ -21,33 +21,32 @@ $ scp Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.
这里,
* `sk` 是我的远程系统的用户名。
* `192.168.43.2` 是远程机器的 IP 地址。
现在,我按下 `CTRL+C` 结束它。
示例输出:
```
sk@192.168.43.2's password:
ubuntu-16.04-desktop-amd64.iso 26% 372MB 26.2MB/s 00:39 ETA^c
```
![][2]
正如你在上面的输出中看到的,当它达到 26% 时,我终止了复制过程。
如果我重新运行上面的命令,它只会覆盖现有的文件。换句话说,复制过程不会在我断开的地方恢复。
为了恢复复制过程,我们可以使用 `rsync` 命令,如下所示。
```
$ rsync -P -rsh=ssh Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
```
示例输出:
```
sk@192.168.1.103's password:
sending incremental file list
@ -55,14 +54,15 @@ ubuntu-16.04-desktop-amd64.iso
                   380.56M 26% 41.05MB/s 0:00:25
```
![][4]
看见了吗?现在,复制过程在我们之前断开的地方恢复了。你也可以像下面那样使用 `-partial` 而不是 `-P` 参数。
```
$ rsync --partial -rsh=ssh Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
```
这里,参数 `-partial``-P` 告诉 `rsync` 命令保留部分下载的文件并恢复进度。
或者,我们也可以使用以下命令通过 SSH 恢复部分传输的文件。
@ -76,26 +76,24 @@ $ rsync -avP Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.
rsync -av --partial Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
```
就是这样了。你现在知道如何使用 `rsync` 命令恢复取消、中断和部分下载的文件。正如你所看到的,它也不是那么难。如果两个系统都安装了 `rsync`,我们可以轻松地通过上面描述的那样恢复复制进度。
如果你觉得本教程有帮助,请在你的社交、专业网络上分享,并支持我们。还有更多的好东西。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-resume-partially-downloaded-or-transferred-files-using-rsync/
作者:[SK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2016/02/scp.png
[3]:/cdn-cgi/l/email-protection
[4]:http://www.ostechnix.com/wp-content/uploads/2016/02/rsync.png

View File

@ -0,0 +1,69 @@
如何使用 Linux 防火墙隔离本地欺骗地址
======
> 如何使用 iptables 防火墙保护你的网络免遭黑客攻击。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA)
即便是被入侵检测和隔离系统所保护的远程网络黑客们也在寻找各种精巧的方法入侵。IDS/IPS 不能停止或者减少那些想要接管你的网络控制权的黑客攻击。不恰当的配置允许攻击者绕过所有部署的安全措施。
在这篇文章中,我将会解释安全工程师或者系统管理员该怎样避免这些攻击。
几乎所有的 Linux 发行版都带着一个内建的防火墙来保护运行在 Linux 主机上的进程和应用程序。大多数防火墙都按照 IDS/IPS 解决方案设计,这样的设计的主要目的是检测和避免恶意包获取网络的进入权。
Linux 防火墙通常有两种接口iptables 和 ipchains 程序LCTT 译注:在支持 systemd 的系统上,采用的是更新的接口 firewalld。大多数人将这些接口称作 iptables 防火墙或者 ipchains 防火墙。这两个接口都被设计成包过滤器。iptables 是有状态防火墙其基于先前的包做出决定。ipchains 不会基于先前的包做出决定,它被设计为无状态防火墙。
在这篇文章中,我们将会专注于内核 2.4 之后出现的 iptables 防火墙。
有了 iptables 防火墙,你可以创建策略或者有序的规则集,规则集可以告诉内核该如何对待特定的数据包。在内核中的是 Netfilter 框架。Netfilter 既是框架,也是 iptables 防火墙的项目名称。作为一个框架,Netfilter 允许 iptables 挂钩那些被设计来操作数据包的功能。概括地说,iptables 依靠 Netfilter 框架构筑诸如过滤数据包数据之类的功能。
每个 iptables 规则都被应用到一个表中的链上。一个 iptables 链就是一个比较包中相似特征的规则集合。而表(例如 `nat` 或者 `mangle`)则描述不同的功能目录。例如, `mangle` 表用于修改包数据。因此,特定的修改包数据的规则被应用到这里;而过滤规则被应用到 `filter` 表,因为 `filter` 表过滤包数据。
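例如(一个示意,需要 root 权限),可以分别列出默认的 filter 表和 mangle 表中的链与规则,直观地看到“表—链—规则”这个层次:
```
# 列出 filter 表(默认表)中的链和规则
iptables -L -n -v

# 列出 mangle 表中的链和规则
iptables -t mangle -L -n -v
```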
iptables 规则有一个匹配集,以及一个诸如 `Drop` 或者 `Deny` 的目标,目标告诉 iptables 对符合规则的包做什么。因此,没有目标和匹配集,iptables 就不能有效地处理包。如果一个包匹配了一条规则,目标会指向一个将要采取的特定措施。另一方面,每个数据包必须与某条规则匹配,iptables 才能处理它。
现在我们已经知道 iptables 防火墙如何工作,让我们着眼于如何使用 iptables 防火墙检测并拒绝或丢弃欺骗地址吧。
### 打开源地址验证
作为一个安全工程师,在处理远程的欺骗地址的时候,我采取的第一步是在内核打开源地址验证。
源地址验证是一种内核层级的特性,这种特性丢弃那些伪装成来自你的网络的包。它使用反向路径过滤器方法来检查收到的包的源地址是否可以通过包到达的接口到达。(LCTT 译注:到达的包的源地址应该可以从它到达的网络接口反向到达,只需反转源地址和目的地址就可以达到这样的效果)
利用下面简单的脚本可以打开源地址验证而不用手工操作:
```
#!/bin/sh
# 作者:Michael K Aboagye
# 程序目标:打开反向路径过滤
# 日期:7/02/18
# 在屏幕上显示 “Enabling source address verification…”
echo -n "Enabling source address verification…"
# 将值 0 覆盖为 1,以打开源地址验证
echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter
echo "completed"
```
上面的脚本在执行的时候只显示了 `Enabling source address verification` 这条信息而不会换行。默认的反向路径过滤的值是 `0``0` 表示没有源验证。因此,第二行简单地将默认值 `0` 覆盖为 `1`。`1` 表示内核将会通过确认反向路径来验证源地址。
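注意,直接写入 `/proc` 的设置在重启后会失效。如果想让它持久生效,一种常见做法(示意,假设你的系统使用 `/etc/sysctl.conf`)如下:
```
# 将设置写入 sysctl 配置,并立即加载生效
echo "net.ipv4.conf.default.rp_filter = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```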
最后,你可以使用下面的命令通过选择 `DROP` 或者 `REJECT` 目标之一来丢弃或者拒绝来自远端主机的欺骗地址。但是,出于安全原因的考虑,我建议使用 `DROP` 目标。
像下面这样,用你自己的 IP 地址代替 `IP_address` 占位符。另外,你必须选择使用 `REJECT` 或者 `DROP` 中的一个,这两个目标不能同时使用。
```
iptables -A INPUT -i internal_interface -s IP_address -j REJECT / DROP
iptables -A INPUT -i internal_interface -s 192.168.0.0/16 -j REJECT / DROP
```
这篇文章只提供了如何使用 iptables 防火墙来避免远端欺骗攻击的基础知识。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/block-local-spoofed-addresses-using-linux-firewall
作者:[Michael Kwaku Aboagye][a]
译者:[leemeans](https://github.com/leemeans)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/revoks

View File

@ -1,43 +1,43 @@
使用 PGP 保护代码完整性(三):生成 PGP 子密钥
======
> 在第三篇文章中,我们将解释如何生成用于日常工作的 PGP 子密钥。
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/binary.jpg?itok=h62HujOC)
在本系列教程中,我们提供了使用 PGP 的实用指南。在此之前,我们介绍了[基本工具和概念][1],并介绍了如何[生成并保护您的主 PGP 密钥][2]。在第三篇文章中,我们将解释如何生成用于日常工作的 PGP 子密钥。
### 清单
1. 生成 2048 位加密子密钥(必要)
  2. 生成 2048 位签名子密钥(必要)
  3. 生成一个 2048 位验证子密钥(推荐)
  4. 将你的公钥上传到 PGP 密钥服务器(必要)
  5. 设置一个刷新的定时任务(必要)
#### 注意事项
现在我们已经创建了主密钥,让我们创建用于日常工作的密钥。我们创建了 2048 位密钥,因为很多专用硬件(我们稍后会讨论这个)不能处理更长的密钥,但同样也是出于实用的原因。如果我们发现自己处于一个 2048 位 RSA 密钥也不够好的世界,那将是由于计算或数学的基本突破,因此更长的 4096 位密钥不会产生太大的差别。
### 创建子密钥
要创建子密钥,请运行:
```
$ gpg --quick-add-key [fpr] rsa2048 encr
$ gpg --quick-add-key [fpr] rsa2048 sign
```
用你密钥的完整指纹替换 `[fpr]`
你也可以创建验证密钥,这能让你将你的 PGP 密钥用于 ssh
```
$ gpg --quick-add-key [fpr] rsa2048 auth
```
你可以使用 `gpg --list-key [fpr]` 来查看你的密钥信息:
```
pub rsa4096 2017-12-06 [C] [expires: 2019-12-06]
111122223333444455556666AAAABBBBCCCCDDDD
@ -45,55 +45,57 @@ uid [ultimate] Alice Engineer <alice@example.org>
uid [ultimate] Alice Engineer <allie@example.net>
sub rsa2048 2017-12-06 [E]
sub rsa2048 2017-12-06 [S]
```
### 上传你的公钥到密钥服务器
你的密钥创建已完成,因此现在需要你将其上传到一个公共密钥服务器,使其他人能更容易找到密钥。(如果你不打算实际使用你创建的密钥,请跳过这一步,因为这只会在密钥服务器上留下垃圾数据。)
```
$ gpg --send-key [fpr]
```
如果此命令不成功,你可以尝试指定一台密钥服务器以及端口,这很有可能成功:
```
$ gpg --keyserver hkp://pgp.mit.edu:80 --send-key [fpr]
```
大多数密钥服务器彼此进行通信,因此你的密钥信息最终将与所有其他密钥信息同步。
**关于隐私的注意事项:**密钥服务器是完全公开的,因此在设计上会泄露有关你的潜在敏感信息,例如你的全名、昵称以及个人或工作邮箱地址。如果你签名了其他人的钥匙或某人签名你的钥匙,那么密钥服务器还会成为你的社交网络的泄密者。一旦这些个人信息发送给密钥服务器,就不可能编辑或删除。即使你撤销签名或身份,它也不会将你的密钥记录删除,它只会将其标记为已撤消 —— 这甚至会显得更显眼
也就是说,如果你参与公共项目的软件开发,以上所有信息都是公开记录,因此通过密钥服务器另外让这些信息可见,不会导致隐私的净损失。
### 上传你的公钥到 GitHub
如果你在开发中使用 GitHub谁不是呢则应按照他们提供的说明上传密钥
- [添加 PGP 密钥到你的 GitHub 账户](https://help.github.com/articles/adding-a-new-gpg-key-to-your-github-account/)
要生成适合粘贴的公钥输出,只需运行:
```
$ gpg --export --armor [fpr]
```
### 设置一个刷新定时任务
你需要定期刷新你的 keyring以获取其他人公钥的最新更改。你可以设置一个定时任务来做到这一点
```
$ crontab -e
```
在新行中添加以下内容:
```
@daily /usr/bin/gpg2 --refresh >/dev/null 2>&1
```
**注意:**检查你的 `gpg` 或 `gpg2` 命令的完整路径,如果你的 `gpg` 是旧式的 GnuPG v.1,请使用 `gpg2`。
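例如,可以用下面的命令确认系统上实际的命令路径(一个简单示意):
```
$ command -v gpg2 || command -v gpg
```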
通过 Linux 基金会和 edX 的免费“[Introduction to Linux](https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux)” 课程了解关于 Linux 的更多信息。
--------------------------------------------------------------------------------
@ -101,10 +103,10 @@ via: https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-p
作者:[Konstantin Ryabitsev][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/mricon
[1]:https://linux.cn/article-9524-1.html
[2]:https://linux.cn/article-9529-1.html

View File

@ -1,60 +1,58 @@
如何将树莓派配置为打印服务器
======
> 用树莓派和 CUPS 打印服务器将你的打印机变成网络打印机。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2)
我喜欢在家做一些小项目,因此,今年我买了一个 [树莓派 3 Model B][1],这是一个非常适合像我这样的业余爱好者的东西。使用树莓派 3 Model B 的内置无线功能,我可以不使用线缆就将树莓派连接到我的家庭网络中。这样可以很容易地将树莓派用到各种所需要的地方。
在家里,我和我的妻子都使用笔记本电脑,但是我们只有一台打印机:一台使用的并不频繁的 HP 彩色激光打印机。因为我们的打印机并不内置无线网卡,因此,它不能直接连接到无线网络中,我们一般把打印机连接到我的笔记本电脑上,因为通常是我在打印。虽然这种安排在大多数时间都没有问题,但是,有时候,我的妻子想在不 “麻烦” 我的情况下,自己去打印一些东西。
我觉得我们需要一个将打印机连接到无线网络的解决方案,以便于我们都能够随时随地打印。我本想买一个无线打印服务器将我的 USB 打印机连接到家里的无线网络上。后来,我决定使用我的树莓派,将它设置为打印服务器,这样就可以让家里的每个人都可以随时来打印。
### 基本设置
设置树莓派是非常简单的事。我下载了 [Raspbian][2] 镜像,并将它写入到我的 microSD 卡中。然后,使用它来引导一个连接了 HDMI 显示器、 USB 键盘和 USB 鼠标的树莓派。之后,我们开始对它进行设置!
这个树莓派系统自动引导到一个图形桌面,然后我做了一些基本设置:设置键盘语言、连接无线网络、设置普通用户帐户(`pi`)的密码、设置管理员用户(`root`)的密码。
我并不打算将树莓派运行在桌面环境下。我一般是通过我的普通的 Linux 计算机远程来使用它。因此,我使用树莓派的图形化管理工具,去设置将树莓派引导到控制台模式,不以 `pi` 用户自动登入。
重新启动树莓派之后,我需要做一些其它的系统方面的小调整,以便于我在家用网络中使用树莓派做为“服务器”。我设置它的 DHCP 客户端为使用静态 IP 地址;默认情况下,DHCP 客户端可能任选一个可用的网络地址,这样我会不知道应该用哪个地址连接到树莓派。我的家用网络使用一个私有的 A 类地址,因此,我的路由器的 IP 地址是 `10.0.0.1`,并且我的全部可用的 IP 地址是 `10.0.0.x`。在我的案例中,低位的 IP 地址是安全的,因此,我通过在 `/etc/dhcpcd.conf` 中添加如下的行,设置它的无线网络使用 `10.0.0.11` 这个静态地址。
```
interface wlan0
static ip_address=10.0.0.11/24
static routers=10.0.0.1
static domain_name_servers=8.8.8.8 8.8.4.4
```
在我再次重启之前,我需要去确认安全 shell 守护程序(SSHD)已经正常运行(你可以在“偏好”中设置哪些服务在引导时启动)。这样我就可以使用 SSH 从普通的 Linux 系统上通过网络连接到树莓派中。
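例如,也可以用 systemd 来检查并启用它(一个示意;在 Raspbian 上,该服务名为 `ssh`):
```
$ sudo systemctl status ssh
$ sudo systemctl enable ssh
```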
### 打印设置
现在,我的树莓派已经连到网络上了,我通过 SSH 从我的 Linux 电脑上远程连接它,接着做剩余的设置。在继续设置之前,确保你的打印机已经连接到树莓派上。
设置打印机很容易。现在的打印服务器都称为 CUPS,它是标准的通用 Unix 打印系统。任何最新的 Unix 系统都可以通过 CUPS 打印服务器来打印。为了在树莓派上设置 CUPS 打印服务器,你需要通过几个命令去安装 CUPS 软件,并使用新的配置来重启打印服务器,这样就可以允许其它系统来打印了。
```
$ sudo apt-get install cups
$ sudo cupsctl --remote-any
$ sudo /etc/init.d/cups restart
```
在 CUPS 中设置打印机也是非常简单的,你可以通过一个 Web 界面来完成。CUPS 监听端口是 631因此你可以在浏览器中收藏这个地址
在 CUPS 中设置打印机也是非常简单的,你可以通过一个 Web 界面来完成。CUPS 监听端口是 631因此你用常用的浏览器来访问这个地址
```
https://10.0.0.11:631/
```
你的 Web 浏览器可能会弹出警告,因为它不认可这个 Web 浏览器的 https 证书;选择 “接受它”,然后以管理员用户登入系统,你将看到如下的标准的 CUPS 面板:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-1-home.png?itok=t9OFJgSX)
这时候,导航到管理标签,选择 “Add Printer
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/cups-2-administration.png?itok=MlEINoYC)
@ -64,9 +62,9 @@ https://10.0.0.11:631/
### 客户端设置
从 Linux 中设置一台网络打印机非常简单。我的桌面环境是 GNOME你可以从 GNOME 的设置应用程序中添加网络打印机。只需要导航到“设备和打印机”,然后解锁这个面板。点击 “添加” 按钮去添加打印机。
在我的系统中GNOME 设置”自动发现网络打印机并添加它。如果你的系统不是这样,你需要通过树莓派的 IP 地址,手动去添加打印机。
在我的系统中GNOME 的“设置”应用程序会自动发现网络打印机并添加它。如果你的系统不是这样,你需要通过树莓派的 IP 地址,手动去添加打印机。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gnome-settings-printers.png?itok=NOQLTaLs)
@ -78,7 +76,7 @@ via: https://opensource.com/article/18/3/print-server-raspberry-pi
作者:[Jim Hall][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -2,66 +2,67 @@
=====
![](https://www.ostechnix.com/wp-content/uploads/2018/03/rwho-1-720x340.png)
有很多监控工具可用来监控本地和远程 Linux 系统,一个很好的例子是 [Cockpit][1]。但是,这些工具的安装和使用比较复杂,至少对于新手管理员来说是这样。新手管理员可能需要花一些时间来弄清楚如何配置这些工具来监视系统。如果你想要快速且粗略地在局域网中一次监控多台主机,你可能需要了解一下 “rwho” 工具。只要安装了 rwho 实用程序,它将立即快速地监控本地和远程系统。你什么都不用配置!你所要做的就是在要监视的系统上安装 “rwho” 工具。
请不要将 rwho 视为功能丰富且完整的监控工具。这只是一个简单的工具,它只监视远程系统的“正常运行时间”(`uptime`)、“负载”(`load`)和“登录的用户”。使用 rwho 实用程序,我们可以发现谁在哪台计算机上登录;被监视的计算机的列表,列出了正常运行时间(自上次重新启动以来的时间);有多少用户登录了;以及在过去的 1、5、15 分钟的平均负载。不多不少!而且,它只监视同一子网中的系统。因此,它非常适合小型和家庭办公网络。
### 在 Linux 中监控多台主机
让我来解释一下 `rwho` 是如何工作的。每个在网络上使用 `rwho` 的系统都将广播关于它自己的信息,其他计算机可以使用 `rwhod` 守护进程来访问这些信息。因此,网络上的每台计算机都必须安装 `rwho`。此外,为了分发或访问其他主机的信息,必须允许 `rwho` 端口(例如端口 `513/UDP`)通过防火墙/路由器。
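例如(一个示意,假设你分别使用 ufw 或 firewalld,请按实际环境调整):
```
# Ubuntu(ufw)
$ sudo ufw allow 513/udp

# CentOS/RHEL/Fedora(firewalld)
$ sudo firewall-cmd --permanent --add-port=513/udp
$ sudo firewall-cmd --reload
```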
好的,让我们来安装它。
我在 Ubuntu 16.04 LTS 服务器上进行了测试rwho 在默认仓库中可用,所以,我们可以使用像下面这样的 APT 软件包管理器来安装它。
我在 Ubuntu 16.04 LTS 服务器上进行了测试,`rwho` 在默认仓库中可用,所以,我们可以使用像下面这样的 APT 软件包管理器来安装它。
```
$ sudo apt-get install rwho
```
在基于 RPM 的系统如 CentOS, Fedora, RHEL上使用以下命令来安装它
在基于 RPM 的系统如 CentOS、 Fedora、 RHEL 上,使用以下命令来安装它:
```
$ sudo yum install rwho
```
如果你在防火墙/路由器之后,确保你已经允许使用 rwhod 513 端口。另外,使用命令验证 `rwhod` 守护进程是否正在运行:
```
$ sudo systemctl status rwhod
```
如果它尚未启动,运行以下命令启用并启动 `rwhod` 服务:
```
$ sudo systemctl enable rwhod
$ sudo systemctl start rwhod
```
现在是时候来监视系统了。运行以下命令以发现谁在哪台计算机上登录:
```
$ rwho
ostechni ostechnix:pts/5 Mar 12 17:41
root server:pts/0 Mar 12 17:42
```
正如你所看到的,目前我的局域网中有两个系统。本地系统用户是 `ostechnix` Ubuntu 16.04 LTS远程系统的用户是 `root` CentOS 7。可能你已经猜到了`rwho` 与 `who` 命令相似,但它会监视远程系统。
而且,我们可以使用以下命令找到网络上所有正在运行的系统的正常运行时间:
```
$ ruptime
ostechnix up 2:17, 1 user, load 0.09, 0.03, 0.01
server up 1:54, 1 user, load 0.00, 0.01, 0.05
```
这里,ruptime类似于 “uptime” 命令)显示了我的 Ubuntu本地 and CentOS远程系统的总运行时间。明白了吗棒极了以下是我的 Ubuntu 16.04 LTS 系统的示例屏幕截图:
这里,`ruptime`(类似于 `uptime` 命令)显示了我的 Ubuntu本地 CentOS远程系统的总运行时间。明白了吗棒极了以下是我的 Ubuntu 16.04 LTS 系统的示例屏幕截图:
![][3]
你可以在以下位置找到有关局域网中所有其他机器的信息:
```
$ ls /var/spool/rwho/
whod.ostechnix whod.server
```
它很小,但却非常有用,可以发现谁在哪台计算机上登录,以及正常运行时间和系统负载详情。
@ -71,23 +72,22 @@ whod.ostechnix whod.server
请注意,这种方法有一个严重的漏洞。由于有关每台计算机的信息都通过网络进行广播,因此该子网中的每个人都可能获得此信息。通常情况下这没什么问题,但另一方面,当有关网络的信息分发给非授权用户时,这可能会是不想要的副作用。因此,强烈建议在受信任和受保护的局域网中使用它。
更多的信息,请查阅 man 手册页:
```
$ man rwho
```
好了,这就是全部了。更多好东西要来了,敬请期待!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-quickly-monitor-multiple-hosts-in-linux/
作者:[SK][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,77 +1,74 @@
在 Linux 下 9 个有用的 touch 命令示例
=====
`touch` 命令用于创建空文件,也可以更改 Unix 和 Linux 系统上现有文件时间戳。这里所说的更改时间戳意味着更新文件和目录的访问以及修改时间。
![touch-command-examples-linux][2]
让我们来看看 `touch` 命令的语法和选项:
**语法**
```
# touch {选项} {文件}
```
`touch` 命令中使用的选项:
![touch-command-options][3]
在这篇文章中,我们将介绍 Linux 中 9 个有用的 `touch` 命令示例。
### 示例1 使用 touch 创建一个空文件
要在 Linux 系统上使用 `touch` 命令创建空文件,键入 `touch`,然后输入文件名。如下所示:
```
[root@linuxtechi ~]# touch devops.txt
[root@linuxtechi ~]# ls -l devops.txt
-rw-r--r--. 1 root root 0 Mar 29 22:39 devops.txt
[root@linuxtechi ~]#
```
### 示例2 使用 touch 创建批量空文件
可能会出现一些情况,我们必须为某些测试创建大量空文件,这可以使用 `touch` 命令轻松实现:
```
[root@linuxtechi ~]# touch sysadm-{1..20}.txt
```
在上面的例子中,我们创建了 20 个名为 `sysadm-1.txt``sysadm-20.txt` 的空文件,你可以根据需要更改名称和数字。
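创建之后,可以用下面的命令快速确认文件数量(一个简单示意):
```
[root@linuxtechi ~]# ls sysadm-*.txt | wc -l
20
```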
### 示例3 改变/更新文件和目录的访问时间
假设我们想要改变名为 `devops.txt` 文件的访问时间,在 `touch` 命令中使用 `-a` 选项,然后输入文件名。如下所示:
```
[root@linuxtechi ~]# touch -a devops.txt
[root@linuxtechi ~]#
```
现在使用 `stat` 命令验证文件的访问时间是否已更新:
```
[root@linuxtechi ~]# stat devops.txt
File: 'devops.txt'
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: fd00h/64768d Inode: 67324178 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Context: unconfined_u:object_r:admin_home_t:s0
Access: 2018-03-29 23:03:10.902000000 -0400
Modify: 2018-03-29 22:39:29.365000000 -0400
Change: 2018-03-29 23:03:10.902000000 -0400
Birth: -
```
**改变目录的访问时间**
假设我们在 `/mnt` 目录下有一个 `nfsshare` 文件夹,让我们用下面的命令改变这个文件夹的访问时间:
```
[root@linuxtechi ~]# touch -m /mnt/nfsshare/
[root@linuxtechi ~]#
[root@linuxtechi ~]# stat /mnt/nfsshare/
  File: '/mnt/nfsshare/'
  Size: 6               Blocks: 0          IO Block: 4096   directory
Device: fd00h/64768d    Inode: 2258        Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
@ -80,37 +77,34 @@ Access: 2018-03-29 23:34:38.095000000 -0400
Modify: 2018-03-03 10:42:45.194000000 -0500
Change: 2018-03-29 23:34:38.095000000 -0400
 Birth: -
```
### 示例4 更改访问时间而不用创建新文件
在某些情况下,如果文件存在,我们希望更改文件的访问时间,并避免创建文件。在 `touch` 命令中使用 `-c` 选项即可:如果文件存在,那么我们可以改变文件的访问时间;如果不存在,也不会创建它。
```
[root@linuxtechi ~]# touch -c sysadm-20.txt
[root@linuxtechi ~]# touch -c winadm-20.txt
[root@linuxtechi ~]# ls -l winadm-20.txt
ls: cannot access winadm-20.txt: No such file or directory
```
### 示例5 更改文件和目录的修改时间
`touch` 命令中使用 `-m` 选项,我们可以更改文件和目录的修改时间。
让我们更改名为 `devops.txt` 文件的更改时间:
```
[root@linuxtechi ~]# touch -m devops.txt
```
现在使用 `stat` 命令来验证修改时间是否改变:
```
[root@linuxtechi ~]# stat devops.txt
  File: 'devops.txt'
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: fd00h/64768d    Inode: 67324178    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
@ -119,21 +113,19 @@ Access: 2018-03-29 23:03:10.902000000 -0400
Modify: 2018-03-29 23:59:49.106000000 -0400
Change: 2018-03-29 23:59:49.106000000 -0400
 Birth: -
```
同样的,我们可以改变一个目录的修改时间:
```
[root@linuxtechi ~]# touch -m /mnt/nfsshare/
```
使用 `stat` 交叉验证访问和修改时间:
```
[root@linuxtechi ~]# stat devops.txt
  File: 'devops.txt'
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: fd00h/64768d    Inode: 67324178    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
@ -142,47 +134,47 @@ Access: 2018-03-30 00:06:20.145000000 -0400
Modify: 2018-03-30 00:06:20.145000000 -0400
Change: 2018-03-30 00:06:20.145000000 -0400
 Birth: -
```
### 示例7 将访问和修改时间设置为特定的日期和时间
每当我们使用 `touch` 命令更改文件和目录的访问和修改时间时,它将当前时间设置为该文件或目录的访问和修改时间。
假设我们想要将特定的日期和时间设置为文件的访问和修改时间,这可以使用 `touch` 命令中的 `-c``-t` 选项来实现。
日期和时间可以使用以下格式指定:
```
{CCYY}MMDDhhmm.ss
```
其中:
* `CC` 年份的前两位数字
* `YY` 年份的后两位数字
* `MM` 月份 (01-12)
* `DD` 天 (01-31)
* `hh` 小时 (00-23)
* `mm` 分钟 (00-59)
让我们将 `devops.txt` 文件的访问和修改时间设置为未来的一个时间2025 年 10 月 19 日 18 时 20 分)。
```
[root@linuxtechi ~]# touch -c -t 202510191820 devops.txt
```
使用 `stat` 命令查看更新后的访问和修改时间:
![stat-command-output-linux][4]
根据日期字符串设置访问和修改时间,在 `touch` 命令中使用 `-d` 选项,然后指定日期字符串,后面跟文件名。如下所示:
```
[root@linuxtechi ~]# touch -c -d "2010-02-07 20:15:12.000000000 +0530" sysadm-29.txt
```
使用 `stat` 命令验证文件的状态:
```
[root@linuxtechi ~]# stat sysadm-20.txt
  File: sysadm-20.txt
@ -194,39 +186,43 @@ Access: 2010-02-07 20:15:12.000000000 +0530
Modify: 2010-02-07 20:15:12.000000000 +0530
Change: 2018-03-30 10:23:31.584000000 +0530
 Birth: -
```
**注意:**在上述命令中,如果我们不指定 `-c`,当系统中不存在该文件时,`touch` 命令将创建一个新文件,并将时间戳设置为命令中给出的值。
### 示例8 使用参考文件设置时间戳(-r
`touch` 命令中,我们可以使用参考文件来设置文件或目录的时间戳。假设我想在 `devops.txt` 文件上设置与文件 `sysadm-20.txt` 文件相同的时间戳,`touch` 命令中使用 `-r` 选项可以轻松实现。
**语法:**
```
# touch -r {参考文件} 真正文件
```
```
[root@linuxtechi ~]# touch -r sysadm-20.txt devops.txt
[root@linuxtechi ~]#
```
### 示例9 在符号链接文件上更改访问和修改时间
默认情况下,每当我们尝试使用 `touch` 命令更改符号链接文件的时间戳时,它只会更改原始文件的时间戳。如果你想更改符号链接文件的时间戳,则可以使用 `touch` 命令中的 `-h` 选项来实现。
**语法:**
```
# touch -h {符号链接文件}
```
```
[root@linuxtechi opt]# ls -l /root/linuxgeeks.txt
lrwxrwxrwx. 1 root root 15 Mar 30 10:56 /root/linuxgeeks.txt -> linuxadmins.txt
[root@linuxtechi ~]# touch -t 203010191820 -h linuxgeeks.txt
[root@linuxtechi ~]# ls -l linuxgeeks.txt
lrwxrwxrwx. 1 root root 15 Oct 19  2030 linuxgeeks.txt -> linuxadmins.txt
```
这就是本教程的全部了。我希望这些例子能帮助你理解 `touch` 命令。请分享你的宝贵意见和评论。
--------------------------------------------------------------------------------
@ -234,7 +230,7 @@ via: https://www.linuxtechi.com/9-useful-touch-command-examples-linux/
作者:[Pradeep Kumar][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,7 +1,10 @@
每个 Linux 新手都应该知道的 10 个命令
=====
> 通过这 10 个基础命令开始掌握 Linux 命令行。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah)
你可能认为你是 Linux 新手,但实际上并不是。全球有 [37.4 亿][1]互联网用户,他们都以某种方式使用 Linux,因为 Linux 服务器占据了互联网的 90%。大多数现代路由器运行 Linux 或 Unix,[TOP500 超级计算机][2] 也依赖于 Linux。如果你拥有一台 Android 智能手机,那么你的操作系统就是由 Linux 内核构建的。
换句话说Linux 无处不在。
@ -10,118 +13,124 @@
下面是你需要知道的基本的 Linux 命令。每一个都很简单,也很容易记住。换句话说,你不必成为比尔盖茨就能理解它们。
### 1 ls
你可能会想:“这是(is)什么东西?”不,那不是一个印刷错误 —— 我真的打算输入一个小写的 l。`ls`,或者说 “list”,是你需要知道的使用 Linux CLI 的第一个命令。这个 list 命令在 Linux 终端中运行,以显示存放在相应文件系统下的所有主要目录。例如,这个命令:
```
ls /applications
```
显示存储在 `applications` 文件夹下的每个文件夹,你将使用它来查看文件、文件夹和目录。
要显示所有隐藏的文件,可以使用命令 `ls -a`。
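例如(一个示意):
```
# 列出主目录下的所有文件,包括以 . 开头的隐藏文件
ls -a ~
```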
### 2 cd
这个命令是你用来跳转(或“更改”)到一个目录的。它指导你如何从一个文件夹导航到另一个文件夹。假设你位于 `Downloads` 文件夹中,但你想到名为 `Gym Playlist` 的文件夹中,简单地输入 `cd Gym Playlist` 将不起作用,因为 shell 不会识别它,并会报告你正在查找的文件夹不存在(LCTT 译注:这是因为目录名中有空格)。要跳转到那个文件夹,你需要包含一个反斜杠。该命令如下所示:
```
cd Gym\ Playlist
```
要从当前文件夹返回到上一个文件夹,你可以在该文件夹输入 `cd ..`。把这两个点想象成一个后退按钮。
### 3 mv
该命令将文件从一个文件夹转移到另一个文件夹;`mv` 代表“移动”。你可以使用这个简单的命令,就像你把一个文件拖到 PC 上的一个文件夹一样。
例如,如果我想创建一个名为 `testfile` 的文件来演示所有基本的 Linux 命令,并且我想将它移动到我的 `Documents` 文件夹中,我将输入这个命令:
```
mv /home/sam/testfile /home/sam/Documents/
```
命令的第一部分(`mv`)说我想移动一个文件,第二部分(`home/sam/testfile`)表示我想移动的文件,第三部分(`/home/sam/Documents/`)表示我希望传输文件的位置。
### 4 快捷键
好吧,这不止一个命令,但我忍不住把它们都包括进来。为什么?因为它们能节省时间并避免经历头痛。
- `CTRL+K` 从光标处剪切文本直至本行结束
- `CTRL+Y` 粘贴文本
- `CTRL+E` 将光标移到本行的末尾
- `CTRL+A` 将光标移动到本行的开头
- `ALT+F` 跳转到下一个空格处
- `ALT+B` 回到前一个空格处
- `ALT+Backspace` 删除前一个词
- `CTRL+W` 剪切光标前一个词
- `Shift+Insert` 将文本粘贴到终端中
- `Ctrl+D` 注销
这些命令在许多方面都能派上用场。例如,假设你在命令行文本中拼错了一个单词:
```
sudo apt-get intall programname
```
你可能注意到 `intall` 拼写错了,因此该命令无法工作。但是快捷键可以让你很容易回去修复它。如果我的光标在这一行的末尾,我可以按下两次 `ALT+B` 来将光标移动到下面用 `^` 符号标记的地方:
```
sudo apt-get^intall programname
```
现在,我们可以快速地添加字母 `s` 来修复 `install`,十分简单!
### 5 mkdir
这是你用来在 Linux 环境下创建目录或文件夹的命令。例如,如果你像我一样喜欢 DIY你可以输入 `mkdir DIY` 为你的 DIY 项目创建一个目录。
### 6 at
如果你想在特定时间运行 Linux 命令,你可以将 `at` 添加到语句中。语法是 `at` 后面跟着你希望命令运行的日期和时间,然后命令提示符变为 `at>`,这样你就可以输入在上面指定的时间运行的命令。
例如:
```
at 4:08 PM Sat
at> cowsay 'hello'
at> CTRL+D
```
这将会在周六下午 4:08 运行 `cowsay` 程序。
### 7 rmdir
这个命令允许你通过 Linux CLI 删除一个目录。例如:
```
rmdir testdirectory
```
请记住,这个命令不会删除里面有文件的目录。这只在删除空目录时才起作用。
### 8 rm
如果你想删除文件,`rm` 命令就是你想要的。它可以删除文件和目录。要删除一个文件,键入 `rm testfile`,或者删除一个目录和里面的文件,键入 `rm -r`
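例如(一个示意,假设这些文件和目录存在):
```
# 删除单个文件
rm testfile
# 递归删除目录及其中的所有文件
rm -r testdirectory
```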
### 9 touch
`touch` 命令,也就是所谓的 “make file 的命令”,允许你使用 Linux CLI 创建新的、空的文件。很像 `mkdir` 创建目录,`touch` 会创建文件。例如,`touch testfile` 将会创建一个名为 `testfile` 的空文件。
### 10 locate
这个命令是你在 Linux 系统中用来查找文件的命令。就像在 Windows 中搜索一样,如果你忘了存储文件的位置或它的名字,这是非常有用的。
例如,如果你有一个关于区块链用例的文档,但是你忘了标题,你可以输入 `locate -i`,并用星号(`*`)分隔你记得的单词来查找 “blockchain use cases”。例如:
```
locate -i*blockchain*use*cases*
```
还有很多其他有用的 Linux CLI 命令,比如 `pkill` 命令,如果你开始关机但是你意识到你并不想这么做,那么这条命令很棒。但是这里描述的 10 个简单而有用的命令是你开始使用 Linux 命令行所需的基本知识。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/10-commands-new-linux-users
作者:[Sam Bocetta][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,44 +1,41 @@
vrms 助你在 Debian 中查找非自由软件
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/vrms-1-720x340.png)
有一天,我在 Digital Ocean 上读到一篇有趣的指南,它解释了[自由和开源软件之间的区别][1]。在此之前,我认为两者都差不多。但是,我错了。它们之间有一些显著差异。在阅读那篇文章时,我想知道如何在 Linux 中找到非自由软件,因此有了这篇文章。
### 向 “Virtual Richard M. Stallman” 问好,这是一个在 Debian 中查找非自由软件的 Perl 脚本
**Virtual Richard M. Stallman** ,简称 **vrms**,是一个用 Perl 编写的程序,它在你基于 Debian 的系统上分析已安装软件的列表,并报告所有来自非自由和 contrib 树的已安装软件包。对于那些不太清楚区别的人,自由软件应该符合以下[**四项基本自由**][2]。
* **自由 0** 不管任何目的,随意运行程序的自由。
* **自由 1** 自由研究程序如何工作,并根据你的需求进行调整。访问源代码是一个先决条件。
* **自由 2** 自由重新分发拷贝,这样你可以帮助别人。
* **自由 3** 自由改进程序,并向公众发布改进,以便整个社区获益。访问源代码是一个先决条件。
* **自由 1** 研究程序如何工作的自由,并根据你的需求进行调整。访问源代码是一个先决条件。
* **自由 2** 重新分发副本的自由,这样你可以帮助别人。
* **自由 3** 改进程序,并向公众发布改进的自由,以便整个社区获益。访问源代码是一个先决条件。
任何不满足上述四个条件的软件都不被视为自由软件。简而言之,**自由软件意味着用户有运行、复制、分发、研究、修改和改进软件的自由。**
现在让我们来看看安装的软件是自由的还是非自由的,好么?
vrms 包存在于 Debian 及其衍生版(如 Ubuntu的默认仓库中。因此你可以使用 `apt` 包管理器安装它,使用下面的命令。
```
$ sudo apt-get install vrms
```
安装完成后,运行以下命令,在基于 Debian 的系统中查找非自由软件。
```
$ vrms
```
在我的 Ubuntu 16.04 LTS 桌面版上输出的示例。
```
Non-free packages installed on ostechnix
unrar Unarchiver for .rar files (non-free version)
1 non-free packages, 0.0% of 2103 installed packages.
```
![][4]
@ -46,33 +43,30 @@ unrar Unarchiver for .rar files (non-free version)
如你在上面的截图中看到的那样,我的 Ubuntu 中安装了一个非自由软件包。
如果你的系统中没有任何非自由软件包,则应该看到以下输出。
```
No non-free or contrib packages installed on ostechnix! rms would be proud.
```
vrms 不仅可以在 Debian 上找到非自由软件包,还可以在 Ubuntu、Linux Mint 和其他基于 deb 的系统中找到非自由软件包。
**限制**
vrms 有一些限制。就像我已经提到的那样,它列出了安装的非自由和 contrib 部分的软件包。但是,某些发行版并不确保专有软件只存在于 vrms 识别为“非自由”的仓库中,也不努力维护这种分离。在这种情况下,vrms 将不能识别非自由软件,并且始终会报告你的系统上安装了非自由软件。如果你使用的是像 Debian 和 Ubuntu 这样遵循“将专有软件保留在非自由仓库”策略的发行版,vrms 一定会帮助你找到非自由软件包。
就是这些。希望它对你有用。还会有更多好东西,敬请关注!
祝世上所有的泰米尔人在泰米尔新年快乐!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/the-vrms-program-helps-you-to-find-non-free-software-in-debian/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,66 @@
9 ways to improve collaboration between developers and designers
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab1.png?itok=ULQdGjlV)
This article was co-written with [Jason Porter][1].
Design is a crucial element in any software project. Sooner or later, the developers' reasons for writing all this code will be communicated to the designers, human beings who aren't as familiar with its inner workings as the development team.
Stereotypes exist on both sides of the divide; engineers often expect designers to be flaky and irrational, while designers often expect engineers to be inflexible and demanding. The truth is considerably more nuanced and, at the end of the day, the fates of designers and developers are forever intertwined.
Here are nine things that can improve collaboration between the two.
### 1\. First, knock down the wall. Seriously.
There are loads of memes about the "wall of confusion" in just about every industry. No matter what else you do, the first step toward tearing down this wall is getting both sides to agree it needs to be gone. Once everyone agrees the existing processes aren't functioning optimally, you can pick and choose from the rest of these ideas to begin fixing the problems.
### 2\. Learn to empathize.
Before rolling up any sleeves to build better communication, take a break. This is a great junction point for team building. A time to recognize that we're all people, we all have strengths and weaknesses, and most importantly, we're all on the same team. Discussions around workflows and productivity can become feisty, so it's crucial to build a foundation of trust and cooperation before diving on in.
### 3\. Recognize differences.
Designers and developers attack the same problem from different angles. Given a similar problem, designers will seek the solution with the biggest impact while developers will seek the solution with the least amount of waste. These two viewpoints do not have to be mutually exclusive. There is plenty of room for negotiation and compromise, and somewhere in the middle is where the end user receives the best experience possible.
### 4\. Embrace similarities.
This is all about workflow. CI/CD, scrum, agile, etc., are all basically saying the same thing: Ideate, iterate, investigate, and repeat. Iteration and reiteration are common denominators for both kinds of work. So instead of running a design cycle followed by a development cycle, it makes much more sense to run them concurrently and in tandem. Syncing cycles allows teams to communicate, collaborate, and influence each other every step of the way.
### 5\. Manage expectations.
All conflict can be distilled down to one simple idea: incompatible expectations. Therefore, an easy way to prevent systemic breakdowns is to manage expectations by ensuring that teams are thinking before talking and talking before doing. Setting expectations often evolves organically through everyday conversation. Forcing them to happen by having meetings can be counterproductive.
### 6\. Meet early and meet often.
Meeting once at the beginning of work and once at the end simply isn't enough. This doesn't mean you need daily or even weekly meetings. Setting a cadence for meetings can also be counterproductive. Let them happen whenever they're necessary. Great things can happen with impromptu meetings—even at the watercooler! If your team is distributed or has even one remote employee, video conferencing, text chat, or phone calls are all excellent ways to meet. It's important that everyone on the team has multiple ways to communicate with each other.
### 7\. Build your own lexicon.
Designers and developers sometimes have different terms for similar ideas. One person's card is another person's tile is a third person's box. Ultimately, the fit and accuracy of a term aren't as important as everyone's agreement to use the same term consistently.
### 8\. Make everyone a communication steward.
Everyone in the group is responsible for maintaining effective communication, regardless of how or when it happens. Each person should strive to say what they mean and mean what they say.
### 9\. Give a darn.
It only takes one member of a team to sabotage progress. Go all in. If every individual doesn't care about the product or the goal, there will be problems with motivation to make changes or continue the process.
This article is based on [Designers and developers: Finding common ground for effective collaboration][2], a talk the authors will be giving at [Red Hat Summit 2018][3], which will be held May 8-10 in San Francisco. [Register by May 7][3] to save US$ 500 off of registration. Use discount code **OPEN18** on the payment page to apply the discount.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/9-ways-improve-collaboration-developers-designers
作者:[Jason Brock][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jkbrock
[1]:https://opensource.com/users/lightguardjp
[2]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154267
[3]:https://www.redhat.com/en/summit/2018

View File

@ -0,0 +1,143 @@
Cloud Commander A Web File Manager With Console And Editor
======
![](https://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-A-Web-File-Manager-With-Console-And-Editor-720x340.png)
**Cloud Commander** is a web-based file manager application that allows you to view, access, and manage the files and folders of your system from any computer, mobile, or tablet PC via a web browser. It has two simple and classic panels, and it automatically adapts its size to your device's display size. It also has two built-in editors, namely **Dword** and **Edward**, with syntax-highlighting support, and one **Console** with support for your system's command line. So you can edit your files on the go. The Cloud Commander server is a cross-platform application that runs on Linux, Windows, and Mac OS X operating systems, and the client will run in any web browser. It is written in **JavaScript/Node.js**, and is licensed under **MIT**.
In this brief tutorial, let us see how to install Cloud Commander on an Ubuntu 18.04 LTS server.
### Prerequisites
As I mentioned earlier, Cloud Commander is written in Node.js. So, in order to install Cloud Commander, we need to install Node.js first. To do so, refer to the following guide.
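For example, on Ubuntu, one quick way is to install the distribution packages (just a sketch; the packaged version may be old, and the official Node.js site documents alternatives):
```
$ sudo apt-get install nodejs npm
```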
### Install Cloud Commander
After installing Node.Js, run the following command to install Cloud Commander:
```
$ npm i cloudcmd -g
```
Congratulations! Cloud Commander has been installed. Let us go ahead and see the basic usage of Cloud Commander.
### Getting started with Cloud Commander
Run the following command to start Cloud Commander:
```
$ cloudcmd
```
**Sample output:**
```
url: http://localhost:8000
```
Now, open your web browser and navigate to the URL: **<http://localhost:8000>** or **<http://IP-address:8000>**.
From now on, you can create, delete, view, manage files or folders right in the web browser from the local system or remote system, or mobile, tablet etc.
![][2]
As you can see in the above screenshot, Cloud Commander has two panels, ten hotkeys (F1 to F10), and Console.
Each hotkey does a unique job.
* F1 Help
* F2 Rename file/folder
* F3 View files and folders
* F4 Edit files
* F5 Copy files/folders
* F6 Move files/folders
* F7 Create new directory
* F8 Delete file/folder
* F9 Open Menu
* F10 Open config
#### Cloud Commander console
Click on the Console icon. This will open your default systems shell.
![][3]
From this console you can do all sort of administration tasks such as installing packages, removing packages, update your system etc. You can even shutdown or reboot system. Therefore, Cloud Commander is not just a file manager, but also has the functionality of a remote administration tool.
#### Creating files/folders
To create a new file or folder, right-click on any empty place and go to **New -> File or Directory**.
![][4]
#### View files
You can view pictures, watch audio and video files.
![][5]
#### Upload files
The other cool feature is we can easily upload a file to Cloud Commander system from any system or device.
To upload a file, right click on any empty space in the Cloud Commander panel, and click on the **Upload** option.
![][6]
Select the files you want to upload.
Also, you can upload files from cloud services like Google Drive, Dropbox, Amazon Cloud Drive, Facebook, Twitter, Gmail, GitHub, Picasa, Instagram and many more.
To upload files from Cloud, right click on any empty space in the panel and select **Upload from Cloud**.
![][7]
Select any web service of your choice, for example Google drive. Click **Connect to Google drive** button.
![][8]
In the next step, authenticate your google drive with Cloud Commander. Finally, select the files from your Google drive and click **Upload**.
![][9]
#### Update Cloud Commander
To update Cloud Commander to the latest available version, run the following command:
```
$ npm update cloudcmd -g
```
#### Conclusion
As far as I tested Cloud Commander, it worked like a charm. I didn't face a single issue during testing on my Ubuntu server. Also, Cloud Commander is not just a web-based file manager; it also acts as a remote administration tool that performs most Linux administration tasks. You can create files/folders, rename, delete, edit, and view them. Also, you can install, update, upgrade, and remove any package the way you do in the local system from the Terminal. And, of course, you can even shutdown or restart the system from the Cloud Commander console itself. What more do you need? Give it a try, you will find it useful.
That's all for now. I will be here soon with another interesting article. Until then, stay tuned with OSTechNix.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/cloud-commander-a-web-file-manager-with-console-and-editor/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-Google-Chrome_006-4.jpg
[3]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-Google-Chrome_007-2.jpg
[4]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-commander-file-folder-1.png
[5]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-home-sk-Google-Chrome_008-1.jpg
[6]:http://www.ostechnix.com/wp-content/uploads/2016/05/cloud-commander-upload-2.png
[7]:http://www.ostechnix.com/wp-content/uploads/2016/05/upload-from-cloud-1.png
[8]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-home-sk-Google-Chrome_009-2.jpg
[9]:http://www.ostechnix.com/wp-content/uploads/2016/05/Cloud-Commander-home-sk-Google-Chrome_010-1.jpg

View File

@ -1,113 +0,0 @@
Advanced image viewing tricks with ImageMagick
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-green.png?itok=qiDqmXV1)
In my [introduction to ImageMagick][1], I showed how to use the application's menus to edit and add effects to your images. In this follow-up, I'll show additional ways to use this open source image editor to view your images.
### Another effect
Before diving into advanced image viewing with ImageMagick, I want to share another interesting, yet simple, effect using the **convert** command, which I discussed in detail in my previous article. This involves the **-edge** option, then **negate**:
```
convert DSC_0027.JPG -edge 3 -negate edge3+negate.jpg
```
![Using the edge and negate options on an image.][3]
Before and after example of using the edge and negate options on an image.
There are a number of things I like about the edited image--the appearance of the sea, the background and foreground vegetation, but especially the sun and its reflection, and also the sky.
### Using display to view a series of images
If you're a command-line user like I am, you know that the shell provides a lot of flexibility and shortcuts for complex tasks. Here I'll show one example: the way ImageMagick's **display** command can overcome a problem I've had reviewing images I import with the [Shotwell][4] image manager for the GNOME desktop.
Shotwell creates a nice directory structure that uses each image's [Exif][5] data to store imported images based on the date they were taken or created. You end up with a top directory for the year, subdirectories for each month (01, 02, 03, and so on), followed by another level of subdirectories for each day of the month. I like this structure, because finding an image or set of images based on when they were taken is easy.
This structure is not so great, however, when I want to review all my images for the last several months or even the whole year. With a typical image viewer, this involves a lot of jumping up and down the directory structure, but ImageMagick's **display** command makes it simple. For example, imagine that I want to look at all my pictures for this year. If I enter **display** on the command line like this:
```
display -resize 35% 2017/*/*/*.JPG
```
I can march through the year, month by month, day by day.
Now imagine I'm looking for an image, but I can't remember whether I took it in the first half of 2016 or the first half of 2017. This command:
```
display -resize 35% 201[6-7]/0[1-6]/*/*.JPG
```
restricts the images shown to January through June of 2016 and 2017.
### Using montage to view thumbnails of images
Now say I'm looking for an image that I want to edit. One problem is that **display** shows each image's filename, but not its place in the directory structure, so it's not obvious where I can find that image. Also, when I (sporadically) download images from my camera, I clear them from the camera's storage, so the filenames restart at **DSC_0001.jpg** at unpredictable times. Finally, it can take a lot of time to go through 12 months of images when I use **display** to show an entire year.
This is where the **montage** command, which puts thumbnail versions of a series of images into a single image, can be very useful. For example:
```
montage -label %d/%f -title 2017 -tile 5x -resize 10% -geometry +4+4 2017/0[1-4]/*/*.JPG 2017JanApr.jpg
```
From left to right, this command starts by specifying a label for each image that consists of the filename ( **%f** ) and its directory ( **%d** ) structure, separated with **/**. Next, the command specifies the main directory as the title, then instructs the montage to tile the images in five columns, with each image resized to 10% (which fits my monitor's screen easily). The geometry setting puts whitespace around each image. Finally, it specifies which images to include in the montage, and an appropriate filename to save the montage ( **2017JanApr.jpg** ). So now the image **2017JanApr.jpg** becomes a reference I can use over and over when I want to view all my images from this time period.
### Managing memory
You might wonder why I specified just a four-month period (January to April) for this montage. Here is where you need to be a bit careful, because **montage** can consume a lot of memory. My camera creates image files that are about 2.5MB each, and I have found that my system's memory can pretty easily handle 60 images or so. When I get to around 80, my computer freezes when other programs, such as Firefox and Thunderbird, are running in the background. This seems to relate to memory usage, which goes up to 80% or more of available RAM for **montage**. (You can check this by running **top** while you do this procedure.) If I shut down all other programs, I can manage 80 images before my system freezes.
Here's how you can get some sense of how many files you're dealing with before running the **montage** command:
```
ls 2017/0[1-4]/*/*.JPG > filelist; wc -l filelist
```
The command **ls** generates a list of the files in our search and saves it to the arbitrarily named filelist. Then, the **wc** command with the **-l** option reports how many lines are in the file, in other words, how many files **ls** found. Here's my output:
```
163 filelist
```
Oops! There are 163 images taken from January through April, and creating a montage of all of them would almost certainly freeze up my system. I need to trim down the list a bit, maybe just to March or even earlier. But suppose I took a lot of pictures from April 20 to 30, and I think that's a big part of my problem. Here's how the shell can help us figure this out:
```
ls 2017/0[1-3]/*/*.JPG > filelist; ls 2017/04/0[1-9]/*.JPG >> filelist; ls 2017/04/1[0-9]/*.JPG >> filelist; wc -l filelist
```
This is a series of four commands all on one line, separated by semicolons. The first command lists the images taken from January to March; the second adds April 1 through 9 using the **>>** append operator; the third appends April 10 through 19. The fourth command, **wc -l**, reports:
```
81 filelist
```
I know 81 files should be doable if I shut down my other applications.
Managing this with the **montage** command is easy, since we're just transposing what we did above:
```
montage -label %d/%f -title 2017 -tile 5x -resize 10% -geometry +4+4 2017/0[1-3]/*/*.JPG 2017/04/0[1-9]/*.JPG 2017/04/1[0-9]/*.JPG 2017Jan01Apr19.jpg
```
The last filename in the **montage** command will be the output; everything before that is input and is read from left to right. This took just under three minutes to run and resulted in an image about 2.5MB in size, but my system was sluggish for a bit afterward.
### Displaying the montage
When you first view a large montage using the **display** command, you may see that the montage's width is OK, but the image is squished vertically to fit the screen. Don't worry; just left-click the image and select **View > Original Size**. Click again to hide the menu.
I hope this has been helpful in showing you new ways to view your images. In my next article, I'll discuss more complex image manipulation.
### About The Author
Greg Pittman - Greg is a retired neurologist in Louisville, Kentucky, with a long-standing interest in computers and programming, beginning with Fortran IV. When Linux and open source software came along, it kindled a commitment to learning more, and eventually contributing. He is a member of the Scribus Team.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/9/imagemagick-viewing-images
作者:[Greg Pittman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/greg-p
[1]:https://opensource.com/article/17/8/imagemagick
[2]:/file/370946
[3]:https://opensource.com/sites/default/files/u128651/edge3negate.jpg (Using the edge and negate options on an image.)
[4]:https://wiki.gnome.org/Apps/Shotwell
[5]:https://en.wikipedia.org/wiki/Exif

View File

@ -0,0 +1,119 @@
Testing IPv6 Networking in KVM: Part 2
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_4.png?itok=yZBHylwd)
When last we met, in [Testing IPv6 Networking in KVM: Part 1][1], we learned about IPv6 private addressing. Today, we're going to use KVM to create networks for testing IPv6 to our heart's content.
Should you desire a refresher on using KVM, see [Creating Virtual Machines in KVM: Part 1][2] and [Creating Virtual Machines in KVM: Part 2 - Networking][3].
### Creating Networks in KVM
You need at least two virtual machines in KVM. Of course, you may create as many as you like. My little setup has Fedora, Ubuntu, and openSUSE. To create a new IPv6 network, open Edit > Connection Details > Virtual Networks in the main Virtual Machine Manager window. Click on the button with the green cross on the bottom left to create a new network (Figure 1).
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kvm-fig-1_0.png?itok=ruqjPXxd)
Figure 1: Create a network.
Give your new network a name, then click the Forward button. You may opt to not create an IPv4 network if you wish. When you create a new IPv4 network the Virtual Machine Manager will not let you create a duplicate network, or one with an invalid address. On my host Ubuntu system a valid address is highlighted in green, and an invalid address is highlighted in a tasteful rosy hue. On my openSUSE machine there are no colored highlights. Enable DHCP or not, and create a static route or not, then move on to the next window.
Check "Enable IPv6 network address space definition" and enter your private address range. You may use any IPv6 address class you wish, being careful, of course, to not allow your experiments to leak out of your network. We shall use the nice IPv6 unique local addresses (ULA), and use the online address generator at [Simple DNS Plus][4] to create our network address. Copy the "Combined/CID" address into the Network field (Figure 2).
![network address][6]
Figure 2: Copy the "Combined/CID" address into the Network field.
[Used with permission][7]
Virtual Machine Manager thinks my address is not valid, as evidenced by the rose highlight. Can it be right? Let us use ipv6calc to check:
```
$ ipv6calc -qi fd7d:844d:3e17:f3ae::/64
Address type: unicast, unique-local-unicast, iid, iid-local
Registry for address: reserved(RFC4193#3.1)
Address type has SLA: f3ae
Interface identifier: 0000:0000:0000:0000
Interface identifier is probably manual set
```
ipv6calc thinks it's fine. Just for fun, change one of the numbers to something invalid, like the letter g, and try it again. (Asking "What if...?" and trial and error is the awesomest way to learn.)
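For example, here is the same check with one quad deliberately corrupted (our own variation on the command above; ipv6calc will report an error instead of the address details):
```
$ ipv6calc -qi fd7d:844d:3e17:g3ae::/64
```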
Let us carry on and enable DHCPv6 (Figure 3). You can accept the default values, or set your own.
![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/kvm-fig-3.png?itok=F-oAAtN9)
We shall skip creating a default route definition and move on to the next screen, where we shall enable "Isolated Virtual Network" and "Enable IPv6 internal routing/networking".
### VM Network Selection
Now you can configure your virtual machines to use your new network. Open your VMs, and then click the "i" button at the top left to open its "Show virtual hardware details" screen. In the "Add Hardware" column click on the NIC button to open the network selector, and select your nice new IPv6 network. Click Apply, and then reboot. (Or use your favorite method for restarting networking, or renewing your DHCP lease.)
### Testing
What does ifconfig tell us?
```
$ ifconfig
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.30.207  netmask 255.255.255.0
broadcast 192.168.30.255
inet6 fd7d:844d:3e17:f3ae::6314
prefixlen 128  scopeid 0x0<global>
inet6 fe80::4821:5ecb:e4b4:d5fc
prefixlen 64  scopeid 0x20<link>
```
And there is our nice new ULA, fd7d:844d:3e17:f3ae::6314, and the auto-generated link-local address that is always present. Let's have some ping fun, pinging another VM on the network:
```
vm1 ~$ ping6 -c2 fd7d:844d:3e17:f3ae::2c9f
PING fd7d:844d:3e17:f3ae::2c9f(fd7d:844d:3e17:f3ae::2c9f) 56 data bytes
64 bytes from fd7d:844d:3e17:f3ae::2c9f: icmp_seq=1 ttl=64 time=0.635 ms
64 bytes from fd7d:844d:3e17:f3ae::2c9f: icmp_seq=2 ttl=64 time=0.365 ms
```
When you're struggling to understand subnetting, this gives you a fast, easy way to try different addresses and see whether they work. You can assign multiple IP addresses to a single interface and then ping them to see what happens. In a ULA, the interface, or host, portion of the IP address is the last four quads, so you can do anything to those and still be in the same subnet, which in this example is f3ae. This example changes only the interface ID on one of my VMs, to show how you really can do whatever you want with those four quads:
```
vm1 ~$ sudo /sbin/ip -6 addr add fd7d:844d:3e17:f3ae:a:b:c:6314 dev ens3
vm2 ~$ ping6 -c2 fd7d:844d:3e17:f3ae:a:b:c:6314
PING fd7d:844d:3e17:f3ae:a:b:c:6314(fd7d:844d:3e17:f3ae:a:b:c:6314) 56 data bytes
64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=1 ttl=64 time=0.744 ms
64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=2 ttl=64 time=0.364 ms
```
Now try it with a different subnet, which in this example is f4ae instead of f3ae:
```
$ ping6 -c2 fd7d:844d:3e17:f4ae:a:b:c:6314
PING fd7d:844d:3e17:f4ae:a:b:c:6314(fd7d:844d:3e17:f4ae:a:b:c:6314) 56 data bytes
From fd7d:844d:3e17:f3ae::1 icmp_seq=1 Destination unreachable: No route
From fd7d:844d:3e17:f3ae::1 icmp_seq=2 Destination unreachable: No route
```
This is also a great time to practice routing, which we will do in a future installment along with setting up auto-addressing without DHCP.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-2
作者:[CARLA SCHRODER][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-1
[2]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-1
[3]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-2-networking
[4]:http://simpledns.com/private-ipv6.aspx
[5]:/files/images/kvm-fig-2png
[6]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/kvm-fig-2.png?itok=gncdPGj- (network address)
[7]:https://www.linux.com/licenses/category/used-permission

View File

@ -1,3 +1,5 @@
pinewall translating
Kubernetes distributed application deployment with sample Face Recognition App
============================================================

View File

@ -1,349 +0,0 @@
Translating by jessie-pang
How To Edit Multiple Files Using Vim Editor
======
![](https://www.ostechnix.com/wp-content/uploads/2018/03/Edit-Multiple-Files-Using-Vim-Editor-720x340.png)
Sometimes, you will find yourself in a situation where you want to make changes in multiple files, or you might want to copy the contents of one file to another. In GUI mode, you could simply open the files in any graphical text editor, like gedit, and use CTRL+C and CTRL+V to copy/paste the contents. In CLI mode, you can't use such editors. No worries! Where there is the vim editor, there is a way! In this tutorial, we are going to learn to edit multiple files at the same time using the Vim editor. Trust me, this is a very interesting read.
### Installing Vim
Vim editor is available in the official repositories of most Linux distributions. So you can install it using the default package manager. For example, on Arch Linux and its variants you can install it using command:
```
$ sudo pacman -S vim
```
On Debian, Ubuntu:
```
$ sudo apt-get install vim
```
On RHEL, CentOS:
```
$ sudo yum install vim
```
On Fedora:
```
$ sudo dnf install vim
```
On openSUSE:
```
$ sudo zypper install vim
```
### Edit multiple files at a time using Vim editor in Linux
Let us now get down to business. We can do this in two ways.
#### Method 1
I have two files namely **file1.txt** and **file2.txt** , with a bunch of random words. Let us have a look at them.
```
$ cat file1.txt
ostechnix
open source
technology
linux
unix
$ cat file2.txt
line1
line2
line3
line4
line5
```
Now, let us edit these two files at a time. To do so, run:
```
$ vim file1.txt file2.txt
```
Vim will display the files one after another: the first file's contents are shown first, then the second file, and so on.
![][2]
**Switch between files**
To move to the next file, type:
```
:n
```
![][3]
To go back to previous file, type:
```
:N
```
Vim won't allow you to move to the next file if there are any unsaved changes. To save the changes in the current file, type:
```
ZZ
```
Please note that it is double capital letters ZZ (SHIFT+zz).
To abandon the changes and move to the previous file, type:
```
:N!
```
To view the files which are being currently edited, type:
```
:buffers
```
![][4]
You will see the list of loaded files at the bottom.
![][5]
To switch to a particular file, type **:buffer** followed by the buffer number. For example, to switch to the first file, type:
```
:buffer 1
```
![][6]
**Opening additional files for editing**
We are currently editing two files, namely file1.txt and file2.txt. I want to open another file named **file3.txt** for editing.
What will you do? It's easy! Just type **:e** followed by the file name, like below.
```
:e file3.txt
```
![][7]
Now you can edit file3.txt.
To view how many files are being edited currently, type:
```
:buffers
```
![][8]
Please note that you cannot switch between files opened with **:e** using **:n** or **:N**. To switch to another file, type **:buffer** followed by the file's buffer number.
**Copying contents of one file into another**
You know how to open and edit multiple files at the same time. Sometimes, you might want to copy the contents of one file into another. It is possible too. Switch to a file of your choice. For example, let us say you want to copy the contents of file1.txt into file2.txt.
To do so, first switch to file1.txt:
```
:buffer 1
```
Place the cursor on the line you want to copy and type **yy** to yank (copy) it. Then, move to file2.txt:
```
:buffer 2
```
Place the cursor where you want to paste the copied lines from file1.txt and type **p**. For example, to paste the copied line between line2 and line3, put the cursor on line2 and type **p**.
Sample output:
```
line1
line2
ostechnix
line3
line4
line5
```
![][9]
To save the changes made in the current file, type:
```
ZZ
```
Again, please note that this is double capital ZZ (SHIFT+zz).
To save the changes in all files and exit the vim editor, type:
```
:wqa
```
Similarly, you can copy any line from any file to other files.
**Copying entire file contents into another**
We know how to copy a single line. What about the entire file contents? That's also possible. Let us say you want to copy the entire contents of file1.txt into file2.txt.
To do so, open the file2.txt first:
```
$ vim file2.txt
```
If the files are already loaded, you can switch to file2.txt by typing:
```
:buffer 2
```
Move the cursor to the place where you want to copy the contents of file1.txt. I want to copy the contents of file1.txt after line5 in file2.txt, so I moved the cursor to line 5. Then, type the following command and hit the ENTER key:
```
:r file1.txt
```
![][10]
Here, **r** means **read**.
Now you will see that the contents of file1.txt are pasted after line5 in file2.txt.
```
line1
line2
line3
line4
line5
ostechnix
open source
technology
linux
unix
```
![][11]
To save the changes in the current file, type:
```
ZZ
```
To save all changes in all loaded files and exit the vim editor, type:
```
:wqa
```
#### Method 2
Another method to open multiple files at once is by using either the **-o** or **-O** flag.
To open multiple files in horizontal windows, run:
```
$ vim -o file1.txt file2.txt
```
![][12]
To switch between windows, press **CTRL-w w** (i.e., press **CTRL+w** and then press **w** again). Or, use the following shortcuts to move between windows.
* **CTRL-w k** top window
* **CTRL-w j** bottom window
To open multiple files in vertical windows, run:
```
$ vim -O file1.txt file2.txt file3.txt
```
![][13]
To switch between windows, press **CTRL-w w** (i.e., press **CTRL+w** and then press **w** again). Or, use the following shortcuts to move between windows.
* **CTRL-w h** left window
* **CTRL-w l** right window
Everything else is the same as described in method 1.
For example, to list currently loaded files, run:
```
:buffers
```
To switch between files:
```
:buffer 1
```
To open an additional file, type:
```
:e file3.txt
```
To copy entire contents of a file into another:
```
:r file1.txt
```
The only difference in method 2 is that once you save the changes in the current file using **ZZ**, the window closes by itself, and you need to close the files one by one by typing **:wq**. In method 1, however, typing **:wqa** saves the changes in all files and closes them all at once, as shown below.
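To save every modified file and close them all in one step (a standard Vim command), type:
```
:wqa
```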
For more details, refer to the man pages.
```
$ man vim
```
You now know how to edit multiple files using the vim editor in Linux. As you can see, editing multiple files is not that difficult. The vim editor has many more powerful features. We will write more about Vim in the days to come.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-edit-multiple-files-using-vim-editor/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-1-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-5.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-6.png
[6]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-7.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-8.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-10-1.png
[9]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-11.png
[10]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-12.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-13.png
[12]:http://www.ostechnix.com/wp-content/uploads/2018/03/Edit-multiple-files-16.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/03/Edit-multiple-files-17.png

View File

@ -1,3 +1,6 @@
Translating by MjSeven
Useful Resources for Those Who Want to Know More About Linux
======

View File

@ -0,0 +1,429 @@
Some Common Concurrent Programming Mistakes
============================================================
Go is a language supporting built-in concurrent programming. By using the `go` keyword to create goroutines (lightweight threads) and by [using][8] [channels][9] and [other concurrency][10] [synchronization techniques][11] provided in Go, concurrent programming becomes easy, flexible and enjoyable.
On the other hand, Go doesn't prevent Go programmers from making concurrent programming mistakes caused by carelessness or lack of experience. The remainder of this article will show some common mistakes in Go concurrent programming, to help Go programmers avoid making them.
### No Synchronizations When Synchronizations Are Needed
Code lines may [not be executed in the order they appear][2].
There are two mistakes in the following program.
* First, the read of `b` in the main goroutine and the write of `b` in the new goroutine might cause data races.
* Second, the condition `b == true` can't ensure that `a != nil` in the main goroutine. Compilers and CPUs may make optimizations by [reordering instructions][1] in the new goroutine, so the assignment of `b` may happen before the assignment of `a` at run time, which means slice `a` may still be `nil` when its elements are modified in the main goroutine.
```
package main
import (
"time"
"runtime"
)
func main() {
var a []int // nil
var b bool // false
// a new goroutine
go func () {
a = make([]int, 3)
b = true // write b
}()
for !b { // read b
time.Sleep(time.Second)
runtime.Gosched()
}
a[0], a[1], a[2] = 0, 1, 2 // might panic
}
```
The above program may run well on one computer, but panic on another one. Or it may run well _N_ times, but panic the _(N+1)th_ time.
We should use channels or the synchronization techniques provided in the `sync` standard package to ensure memory ordering. For example,
```
package main
func main() {
var a []int = nil
c := make(chan struct{})
// a new goroutine
go func () {
a = make([]int, 3)
c <- struct{}{}
}()
<-c
a[0], a[1], a[2] = 0, 1, 2
}
```
### Use `time.Sleep` Calls To Do Synchronizations
Let's view a simple example.
```
package main
import (
"fmt"
"time"
)
func main() {
var x = 123
go func() {
x = 789 // write x
}()
time.Sleep(time.Second)
fmt.Println(x) // read x
}
```
We expect the program to print `789`, and if we run it, it almost always does. But is it a program with good synchronization? No! The reason is that the Go runtime doesn't guarantee the write of `x` happens before the read of `x`. Under certain conditions, such as when most CPU resources are consumed by other programs running on the same OS, the write of `x` might happen after the read of `x`. This is why we should never use `time.Sleep` calls to do synchronization in formal projects.
Let's view another example.
```
package main
import (
"fmt"
"time"
)
var x = 0
func main() {
var num = 123
var p = &num
c := make(chan int)
go func() {
c <- *p + x
}()
time.Sleep(time.Second)
num = 789
fmt.Println(<-c)
}
```
What do you expect the program will output? `123`, or `789`? In fact, the output is compiler dependent. For the standard Go compiler 1.10, it is very likely the program will output `123`. But in theory, it might output `789`, or another random number.
Now, change `c <- *p + x` to `c <- *p` and run the program again. You will find the output becomes `789` (for the standard Go compiler 1.10). Again, the output is compiler dependent.
Yes, there are data races in the above program. The expression `*p` might be evaluated before, after, or while the assignment `num = 789` is processed. The `time.Sleep` call can't guarantee that the evaluation of `*p` happens before the assignment is processed.
For this specific example, we should store the value to be sent in a temporary variable before creating the new goroutine, and send that temporary value in the new goroutine instead, to remove the data races.
```
...
tmp := *p + x
go func() {
c <- tmp
}()
...
```
### Leave Goroutines Hanging
Hanging goroutines are goroutines that stay in blocking state forever. There are many ways to lead a goroutine into hanging. For example,
* a goroutine tries to receive a value from a nil channel or from a channel to which no other goroutines will ever send values.
* a goroutine tries to send a value to a nil channel or to a channel from which no other goroutines will ever receive values.
* a goroutine is dead locked by itself.
* a group of goroutines are dead locked by each other.
* a goroutine is blocked when executing a `select` code block without `default` branch, and all the channel operations following the `case` keywords in the `select` code block keep blocking for ever.
Except when we deliberately let the main goroutine of a program hang to avoid the program exiting, most other hanging goroutine cases are unexpected. It is hard for the Go runtime to judge whether a goroutine in blocking state is hanging or only temporarily blocked, so the Go runtime will never release the resources consumed by a hanging goroutine.
In the [first-response-wins][12] channel use case, if the capacity of the channel which is used as a future is not large enough, some slower-response goroutines will hang when trying to send a result to the future channel. For example, if the following function is called, four goroutines will stay in blocking state forever.
```
func request() int {
c := make(chan int)
for i := 0; i < 5; i++ {
i := i
go func() {
c <- i // 4 goroutines will hang here.
}()
}
return <-c
}
```
To avoid the four goroutines hanging, the capacity of channel `c` must be at least `4`.
In [the second way to implement the first-response-wins][13] channel use case, if the channel which is used as a future is an unbuffered channel, it is possible that the channel receiver will never get a response and will hang. For example, if the following function is called in a goroutine, the goroutine might hang. The reason is that if the five try-send operations all happen before the receive operation `<-c` is ready, then all five try-send operations will fail to send values, so the caller goroutine will never receive a value.
```
func request() int {
c := make(chan int)
for i := 0; i < 5; i++ {
i := i
go func() {
select {
case c <- i:
default:
}
}()
}
return <-c
}
```
Changing channel `c` to a buffered channel guarantees that at least one of the five try-send operations succeeds, so the caller goroutine will never hang in the above function.
### Copy Values Of The Types In The `sync` Standard Package
In practice, values of the types in the `sync` standard package shouldn't be copied. We should only copy pointers to such values.
The following is a bad concurrent programming example. In this example, when the `Counter.Value` method is called, the `Counter` receiver value is copied. As a field of the receiver value, its `Mutex` field is also copied. The copy is not synchronized, so the copied `Mutex` value might be corrupt. Even if it is not corrupt, what it protects is access to the copied `Counter` receiver value, which is generally meaningless.
```
import "sync"
type Counter struct {
sync.Mutex
n int64
}
// This method is okay.
func (c *Counter) Increase(d int64) (r int64) {
c.Lock()
c.n += d
r = c.n
c.Unlock()
return
}
// The method is bad. When it is called, a Counter
// receiver value will be copied.
func (c Counter) Value() (r int64) {
c.Lock()
r = c.n
c.Unlock()
return
}
```
We should change the receiver type of the `Value` method to the pointer type `*Counter` to avoid copying `Mutex` values.
The `go vet` command provided in the official Go SDK will report potential bad value copies.
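For example, to check every package under the current directory (a standard invocation of the tool):
```
$ go vet ./...
```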
### Call Methods Of `sync.WaitGroup` At Wrong Places
Each `sync.WaitGroup` value maintains a counter internally. The initial value of the counter is zero. If the counter of a `WaitGroup` value is zero, a call to the `Wait` method of the `WaitGroup` value will not block; otherwise, the call blocks until the counter value becomes zero.
To make the uses of `WaitGroup` value meaningful, when the counter of a `WaitGroup` value is zero, a call to the `Add` method of the `WaitGroup` value must happen before the corresponding call to the `Wait` method of the `WaitGroup` value.
For example, in the following program, the `Add` method is called at an improper place, which means the final printed number is not always `100`. In fact, the final printed number may be an arbitrary number in the range `[0, 100]`. The reason is that none of the `Add` method calls are guaranteed to happen before the `Wait` method call.
```
package main
import (
"fmt"
"sync"
"sync/atomic"
)
func main() {
var wg sync.WaitGroup
var x int32 = 0
for i := 0; i < 100; i++ {
go func() {
wg.Add(1)
atomic.AddInt32(&x, 1)
wg.Done()
}()
}
fmt.Println("To wait ...")
wg.Wait()
fmt.Println(atomic.LoadInt32(&x))
}
```
To make the program behave as expected, we should move the `Add` method calls out of the new goroutines created in the `for` loop, as shown in the following code.
```
...
for i := 0; i < 100; i++ {
wg.Add(1)
go func() {
atomic.AddInt32(&x, 1)
wg.Done()
}()
}
...
```
### Use Channels As Futures Improperly
From the article [channel use cases][14], we know that some functions will return [channels as futures][15]. Assume `fa` and `fb` are two such functions, then the following call uses future arguments improperly.
```
doSomethingWithFutureArguments(<-fa(), <-fb())
```
In the above code line, the two channel receive operations are processed sequentially instead of concurrently. We should modify it as follows to process them concurrently.
```
ca, cb := fa(), fb()
doSomethingWithFutureArguments(<-ca, <-cb)
```
### Close Channels Not From The Last Active Sender Goroutine
A common mistake made by Go programmers is closing a channel while some other goroutines may still send values to it later. When such a send (to the closed channel) really happens, a panic occurs.
This mistake has even been made in some famous Go projects, such as [this bug][3] and [this bug][4] in the kubernetes project.
Please read [this article][5] for explanations on how to safely and gracefully close channels.
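Here is a minimal sketch of the mistake (our own example, not code from the projects above). Because the channel is closed while four senders are still active, the program is likely to panic with `send on closed channel`:
```
package main

import "time"

func main() {
	c := make(chan int)
	for i := 0; i < 5; i++ {
		go func(v int) {
			c <- v // panics if it executes after close(c)
		}(i)
	}
	<-c      // take the first response,
	close(c) // then close the channel while other senders are still active
	time.Sleep(time.Second) // give the remaining senders time to run (and panic)
}
```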
### Do 64-bit Atomic Operations On Values Which Are Not Guaranteed To Be 64-bit Aligned
Up to now (Go 1.10), for the standard Go compiler, the address of the value involved in a 64-bit atomic operation is required to be 64-bit aligned. Failure to ensure this may make the current goroutine panic. For the standard Go compiler, such failures can only happen on 32-bit architectures. Please read [memory layouts][6] to learn how to guarantee that the addresses of 64-bit words are 64-bit aligned on 32-bit OSes.
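As a minimal sketch of the usual workaround (our own example): the first word of an allocated struct is guaranteed to be 64-bit aligned, so place atomically accessed 64-bit fields first:
```
package main

import "sync/atomic"

type stats struct {
	count int64 // first field of the allocated struct: guaranteed 64-bit aligned
	done  bool
	// Declaring count after the bool field instead could make the
	// atomic operation below panic on 32-bit architectures.
}

func main() {
	s := new(stats) // allocated, so s.count is 64-bit aligned
	atomic.AddInt64(&s.count, 1)
}
```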
### Not Paying Attention To Too Many Resources Being Consumed By Calls To The `time.After` Function
The `After` function in the `time` standard package returns [a channel for delay notification][7]. The function is convenient; however, each call to it creates a new value of the `time.Timer` type. The newly created `Timer` value stays alive for the duration specified by the argument passed to the `After` function. If the function is called many times within that duration, many `Timer` values stay alive, consuming a lot of memory and computation.
For example, if the following `longRunning` function is called and there are millions of messages coming in one minute, then there will be millions of `Timer` values alive in a certain period, even if most of these `Timer` values have already become useless.
```
import (
"fmt"
"time"
)
// The function will return if a message arrival interval
// is larger than one minute.
func longRunning(messages <-chan string) {
for {
select {
case <-time.After(time.Minute):
return
case msg := <-messages:
fmt.Println(msg)
}
}
}
```
To avoid too many `Timer` values being created in the above code, we should use a single `Timer` value to do the same job.
```
func longRunning(messages <-chan string) {
timer := time.NewTimer(time.Minute)
defer timer.Stop()
for {
select {
case <-timer.C:
return
case msg := <-messages:
fmt.Println(msg)
if !timer.Stop() {
<-timer.C
}
}
// The above "if" block can also be put here.
timer.Reset(time.Minute)
}
}
```
### Use `time.Timer` Values Incorrectly
An idiomatic use example of a `time.Timer` value was shown in the last section. One detail to note is that the `Reset` method should always be invoked on stopped or expired `time.Timer` values.
At the end of the first `case` branch of the `select` block, the `time.Timer` value has expired, so we don't need to stop it. But we must stop the timer in the second branch. If the `if` code block in the second branch is missing, it is possible that a send (by the Go runtime) to the channel `timer.C` races with the `Reset` method call, and the `longRunning` function may return earlier than expected, for the `Reset` method only resets the internal timer to zero; it does not clear (drain) the value which has already been sent to the `timer.C` channel.
For example, the following program is very likely to exit in about one second instead of ten seconds. More importantly, the program is not data-race free.
```
package main
import (
"fmt"
"time"
)
func main() {
start := time.Now()
timer := time.NewTimer(time.Second/2)
select {
case <-timer.C:
default:
time.Sleep(time.Second) // go here
}
timer.Reset(time.Second * 10)
<-timer.C
fmt.Println(time.Since(start)) // 1.000188181s
}
```
A `time.Timer` value can be left unstopped when it is no longer used, but it is recommended to stop it in the end.
It is bug-prone and not recommended to use a `time.Timer` value concurrently in multiple goroutines.
We should not rely on the return value of a `Reset` method call. The return result of the `Reset` method exists just for compatibility purposes.
--------------------------------------------------------------------------------
via: https://go101.org/article/concurrent-common-mistakes.html
作者:[go101.org ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:go101.org
[1]:https://go101.org/article/memory-model.html
[2]:https://go101.org/article/memory-model.html
[3]:https://github.com/kubernetes/kubernetes/pull/45291/files?diff=split
[4]:https://github.com/kubernetes/kubernetes/pull/39479/files?diff=split
[5]:https://go101.org/article/channel-closing.html
[6]:https://go101.org/article/memory-layout.html
[7]:https://go101.org/article/channel-use-cases.html#timer
[8]:https://go101.org/article/channel-use-cases.html
[9]:https://go101.org/article/channel.html
[10]:https://go101.org/article/concurrent-atomic-operation.html
[11]:https://go101.org/article/concurrent-synchronization-more.html
[12]:https://go101.org/article/channel-use-cases.html#first-response-wins
[13]:https://go101.org/article/channel-use-cases.html#first-response-wins-2
[14]:https://go101.org/article/channel-use-cases.html
[15]:https://go101.org/article/channel-use-cases.html#future-promise

View File

@ -1,349 +0,0 @@
pinewall translating
How to do math on the Linux command line
======
![](https://images.techhive.com/images/article/2014/12/math_blackboard-100534564-large.jpg)
Can you do math on the Linux command line? You sure can! In fact, there are quite a few commands that can make the process easy and some you might even find interesting. Let's look at some very useful commands and syntax for command line math.
### expr
First and probably the most obvious and commonly used command for performing mathematical calculations on the command line is the **expr** (expression) command. It can manage addition, subtraction, division, and multiplication. It can also be used to compare numbers. Here are some examples:
#### Incrementing a variable
```
$ count=0
$ count=`expr $count + 1`
$ echo $count
1
```
#### Performing simple calculations
```
$ expr 11 + 123
134
$ expr 134 / 11
12
$ expr 134 - 11
123
$ expr 11 * 123
expr: syntax error <== oops!
$ expr 11 \* 123
1353
$ expr 20 % 3
2
```
Notice that you have to use a \ character in front of * to avoid the syntax error. The % operator is for modulo calculations.
Here's a slightly more complex example:
```
participants=11
total=156
share=`expr $total / $participants`
remaining=`expr $total - $participants \* $share`
echo $share
14
echo $remaining
2
```
If we have 11 participants in some event and 156 prizes to distribute, each participant's fair share of the take is 14, leaving 2 in the pot.
#### Making comparisons
Now let's look at the logic for comparisons. These statements may look a little odd at first. They are not setting values, but only comparing the numbers. What **expr** is doing in the examples below is determining whether the statements are true. If the result is 1, the statement is true; otherwise, it's false.
```
$ expr 11 = 11
1
$ expr 11 = 12
0
```
Read them as "Does 11 equal 11?" and "Does 11 equal 12?" and you'll get used to how this works. Of course, no one would be asking if 11 equals 11 on the command line, but they might ask if $age equals 11.
```
$ age=11
$ expr $age = 11
1
```
If you put the numbers in quotes, you'd actually be doing a string comparison rather than a numeric one.
```
$ expr "11" = "11"
1
$ expr "eleven" = "11"
0
```
In the following examples, we're asking whether 10 is greater than 5 and, then, whether it's greater than 99.
```
$ expr 10 \> 5
1
$ expr 10 \> 99
0
```
Of course, having true comparisons resulting in 1 and false resulting in 0 goes against what we generally expect on Linux systems. The example below shows that using **expr** in this kind of context doesn't work because **if** works with the opposite orientation (0=true).
```
#!/bin/bash
echo -n "Cost to us> "
read cost
echo -n "Price we're asking> "
read price
if [ `expr $price \> $cost` ]; then
echo "We make money"
else
echo "Don't sell it"
fi
```
Now, let's run this script:
```
$ ./checkPrice
Cost to us> 11.50
Price we're asking> 6
We make money
```
That sure isn't going to help with sales! With a small change, this would work as we'd expect:
```
#!/bin/bash
echo -n "Cost to us> "
read cost
echo -n "Price we're asking> "
read price
if [ `expr $price \> $cost` == 1 ]; then
echo "We make money"
else
echo "Don't sell it"
fi
```
### factor
The **factor** command works just like you'd probably expect. You feed it a number, and it tells you what its factors are.
```
$ factor 111
111: 3 37
$ factor 134
134: 2 67
$ factor 17894
17894: 2 23 389
$ factor 1987
1987: 1987
```
NOTE: The factor command didn't get very far on factoring that last value because 1987 is a **prime number**.
### jot
The **jot** command allows you to create a list of numbers. Provide it with the number of values you want to see and the number that you want to start with.
```
$ jot 8 10
10
11
12
13
14
15
16
17
```
You can also use **jot** like this. Here we're asking it to decrease the numbers by telling it we want to stop when we get to 2:
```
$ jot 8 10 2
10
9
8
7
5
4
3
2
```
The **jot** command can be useful if you want to iterate through a series of numbers to create a list for some other purpose.
```
$ for i in `jot 7 17`; do echo April $i; done
April 17
April 18
April 19
April 20
April 21
April 22
April 23
```
### bc
The **bc** command is probably one of the best tools for doing calculations on the command line. Enter the calculation that you want performed, and pipe it to the command like this:
```
$ echo "123.4+5/6-(7.89*1.234)" | bc
113.664
```
Notice that **bc** doesn't shy away from precision and that the string you need to enter is fairly straightforward. It can also make comparisons, handle Booleans, and calculate square roots, sines, cosines, tangents, etc.
```
$ echo "sqrt(256)" | bc
16
$ echo "s(90)" | bc -l
.89399666360055789051
```
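Here is a quick sketch of the comparison and Boolean handling just mentioned (our own examples; **bc** prints 1 for true and 0 for false):
```
$ echo "10 > 5" | bc
1
$ echo "10 == 5 || 3 < 4" | bc
1
```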
In fact, **bc** can even calculate pi. You decide how many decimal points you want to see:
```
$ echo "scale=5; 4*a(1)" | bc -l
3.14156
$ echo "scale=10; 4*a(1)" | bc -l
3.1415926532
$ echo "scale=20; 4*a(1)" | bc -l
3.14159265358979323844
$ echo "scale=40; 4*a(1)" | bc -l
3.1415926535897932384626433832795028841968
```
And **bc** isn't just for receiving data through pipes and sending answers back. You can also start it interactively and enter the calculations you want it to perform. Setting the scale (as shown below) determines how many decimal places you'll see.
```
$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale=2
3/4
.75
2/3
.66
quit
```
Using **bc** , you can also convert numbers between different bases. The **obase** setting determines the output base.
```
$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
obase=16
16 <=== entered
10 <=== response
256 <=== entered
100 <=== response
quit
```
One of the easiest ways to convert between hex and decimal is to use **bc** like this:
```
$ echo "ibase=16; F2" | bc
242
$ echo "obase=16; 242" | bc
F2
```
In the first example above, we're converting from hex to decimal by setting the input base (ibase) to hex (base 16). In the second, we're doing the reverse by setting the output base (obase) to hex.
### Easy bash math
With sets of double-parentheses, we can do some easy math in bash. In the examples below, we create a variable and give it a value and then perform addition, decrement the result, and then square the remaining value.
```
$ ((e=11))
$ (( e = e + 7 ))
$ echo $e
18
$ ((e--))
$ echo $e
17
$ ((e=e**2))
$ echo $e
289
```
The arithmetic operators allow you to:
```
+ - Add and subtract
++ -- Increment and decrement
* / % Multiply, divide, find remainder
**      Get exponent (note: in bash arithmetic, ^ is bitwise XOR, not exponentiation)
```
You can also use both logical and boolean operators:
```
$ ((x=11)); ((y=7))
$ if (( x > y )); then
> echo "x > y"
> fi
x > y
$ ((x=11)); ((y=7)); ((z=3))
$ if (( x > y )) && (( y > z )); then
> echo "letters roll downhill"
> fi
letters roll downhill
```
or if you prefer ...
```
$ if [ x > y ] && [ y > z ]; then echo "letters roll downhill"; fi
letters roll downhill
```
Now let's raise 2 to the 3rd power:
```
$ echo "2 ^ 3"
2 ^ 3
$ echo "2 ^ 3" | bc
8
```
### Wrap-up
There are sure a lot of different ways to work with numbers and perform calculations on the command line on Linux systems. I hope you picked up a new trick or two by reading this post.
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3268964/linux/how-to-do-math-on-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world

View File

@ -1,3 +1,5 @@
pinewall translating
Getting started with Anaconda Python for data science
======

View File

@ -1,82 +0,0 @@
translating---geekpi
A Perl module for better debugging
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs)
It's occasionally useful to have a block of Perl code that you use only for debugging or development tweaking. That's fine, but having blocks like this can be expensive to performance, particularly if the decision whether to execute it is made at runtime.
[Curtis "Ovid" Poe][1] recently wrote a module that can help with this problem: [Keyword::DEVELOPMENT][2]. The module utilizes Keyword::Simple and the pluggable keyword architecture introduced in Perl 5.012 to create a new keyword: DEVELOPMENT. It uses the value of the PERL_KEYWORD_DEVELOPMENT environment variable to determine whether or not a block of code is to be executed.
Using it couldn't be easier:
```
use Keyword::DEVELOPMENT;

sub doing_my_big_loop {
    my $self = shift;
    DEVELOPMENT {
        # insert expensive debugging code here!
    }
}
```
At compile time, the code inside the DEVELOPMENT block is optimized away and simply doesn't exist.
Do you see the advantage here? Set up the PERL_KEYWORD_DEVELOPMENT environment variable to be true on your sandbox and false on your production environment, and valuable debugging tools can be committed to your code repo, always there when you need them.
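For example, you might toggle it from the shell like this (the script name is hypothetical):
```
$ PERL_KEYWORD_DEVELOPMENT=1 perl myscript.pl   # DEVELOPMENT blocks run
$ PERL_KEYWORD_DEVELOPMENT=0 perl myscript.pl   # DEVELOPMENT blocks compiled away
```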
You could also use this module, in the absence of a more evolved configuration management system, to handle variations in settings between production and development or test environments:
```
sub connect_to_my_database {

    my $dsn = "dbi:mysql:productiondb";
    my $user = "db_user";
    my $pass = "db_pass";

    DEVELOPMENT {
        # Override some of that config information
        $dsn = "dbi:mysql:developmentdb";
    }

    my $db_handle = DBI->connect($dsn, $user, $pass);
}
```
Later enhancement to this snippet would have you reading in configuration information from somewhere else, perhaps from a YAML or INI file, but I hope you see the utility here.
I looked at the source code for Keyword::DEVELOPMENT and spent about a half hour wondering, "Gosh, why didn't I think of that?" Once Keyword::Simple is installed, the module that Curtis has given us is surprisingly simple. It's an elegant solution to something I've needed in my own coding practice for a long time.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/perl-module-debugging-code
作者:[Ruth Holloway][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://metacpan.org/author/OVID
[2]:https://metacpan.org/pod/release/OVID/Keyword-DEVELOPMENT-0.04/lib/Keyword/DEVELOPMENT.pm

View File

@ -1,124 +0,0 @@
translating----geekpi
How to start developing on Java in Fedora
======
![](https://fedoramagazine.org/wp-content/uploads/2018/04/java-getting-started-816x345.jpg)
Java is one of the most popular programming languages in the world. It is widely used to develop IoT appliances, Android apps, web applications, and enterprise applications. This article will provide a quick guide to installing and configuring your workstation using [OpenJDK][1].
### Installing the compiler and tools
Installing the compiler, or Java Development Kit (JDK), is easy to do in Fedora. At the time of this article, versions 8 and 9 are available. Simply open a terminal and enter:
```
sudo dnf install java-1.8.0-openjdk-devel
```
This will install the JDK for version 8. For version 9, enter:
```
sudo dnf install java-9-openjdk-devel
```
For the developer who requires additional tools and libraries such as Ant and Maven, the **Java Development** group is available. To install the suite, enter:
```
sudo dnf group install "Java Development"
```
To verify the compiler is installed, run:
```
javac -version
```
The output shows the compiler version and looks like this:
```
javac 1.8.0_162
```
### Compiling applications
You can use any basic text editor such as nano, vim, or gedit to write applications. This example provides a simple “Hello Fedora” program.
Open your favorite text editor and enter the following:
```
public class HelloFedora {
      public static void main (String[] args) {
              System.out.println("Hello Fedora!");
      }
}
```
Save the file as HelloFedora.java. In the terminal change to the directory containing the file and do:
```
javac HelloFedora.java
```
The compiler will complain if it runs into any syntax errors. Otherwise it will simply display the shell prompt beneath.
You should now have a file called HelloFedora, which is the compiled program. Run it with the following command:
```
java HelloFedora
```
And the output will display:
```
Hello Fedora!
```
### Installing an Integrated Development Environment (IDE)
Some programs may be more complex and an IDE can make things flow smoothly. There are quite a few IDEs available for Java programmers including:
+ Geany, a basic IDE that loads quickly, and provides built-in templates
+ Anjuta
+ GNOME Builder, which was covered in the article "Builder: a new IDE specifically for GNOME app developers"
However, one of the most popular open-source IDEs, mainly written in Java, is [Eclipse][2]. Eclipse is available in the official repositories. To install it, run this command:
```
sudo dnf install eclipse-jdt
```
When the installation is complete, a shortcut for Eclipse appears in the desktop menu.
For more information on how to use Eclipse, consult the [User Guide][3] available on their website.
### Browser plugin
If youre developing web applets and need a plugin for your browser, [IcedTea-Web][4] is available. Like OpenJDK, it is open source and easy to install in Fedora. Run this command:
```
sudo dnf install icedtea-web
```
As of Firefox 52, the web plugin no longer works. For details visit the Mozilla support site at [https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct][5].
Congratulations, your Java development environment is ready to use.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/start-developing-java-fedora/
作者:[Shaun Assam][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/sassam/
[1]:http://openjdk.java.net/
[2]:https://www.eclipse.org/
[3]:http://help.eclipse.org/oxygen/nav/0
[4]:https://icedtea.classpath.org/wiki/IcedTea-Web
[5]:https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct

View File

@ -1,93 +0,0 @@
translating---geekpi
How to reset a root password on Fedora
======
![](https://fedoramagazine.org/wp-content/uploads/2018/04/resetrootpassword-816x345.jpg)
A system administrator can easily reset a password for a user that has forgotten their password. But what happens if the system administrator forgets the root password? This guide will show you how to reset a lost or forgotten root password. Note that to reset the root password, you need to have physical access to the machine in order to reboot and to access GRUB settings. Additionally, if the system is encrypted, you will also need to know the LUKS passphrase.
### Edit the GRUB settings
First you need to interrupt the boot process, so you'll need to turn on the system, or restart it if it's already powered on. The first step is tricky because the GRUB menu tends to flash by very quickly on the screen.
Press **E** on your keyboard when you see the GRUB menu:
![][1]
After pressing e the following screen is shown:
![][2]
Use your arrow keys to move to the **linux16** line.
![][3]
Using your **del** key or **backspace** key, remove **rhgb quiet** and replace with the following.
```
rd.break enforcing=0
```
![][4]
After editing the lines, press **Ctrl-x** to start the system. If the system is encrypted, you will be prompted for the LUKS passphrase here.
**Note:** Setting enforcing=0 avoids performing a complete SELinux relabeling of the system. Once the system is rebooted, restore the correct SELinux context for the /etc/shadow file (this is explained a little later in this process).
### Mounting the filesystem
The system will now be in emergency mode. Remount the hard drive with read-write access:
```
# mount -o remount,rw /sysroot
```
### Password Change
Run chroot to access the system.
```
# chroot /sysroot
```
You can now change the root password.
```
# passwd
```
Type the new root password twice when prompted. If you are successful, you should see a message that **all authentication tokens updated successfully.**
Type **exit** twice to reboot the system.
Log in as root and restore the SELinux label to the /etc/shadow file.
```
# restorecon -v /etc/shadow
```
Turn SELinux back to enforcing mode.
```
# setenforce 1
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/reset-root-password-fedora/
作者:[Curt Warfield][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/rcurtiswarfield/
[1]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub.png
[2]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub2.png
[3]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub3.png
[4]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub4.png

View File

@ -2,6 +2,7 @@ Configuring local storage in Linux with Stratis
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl)
Configuring local storage is something desktop Linux users do very infrequently—maybe only once, during installation. Linux storage tech moves slowly, and many storage tools used 20 years ago are still used regularly today. But some things have improved since then. Why aren't people taking advantage of these new capabilities?
This article is about Stratis, a new project that aims to bring storage advances to all Linux users, from the simple laptop single SSD to a hundred-disk array. Linux has the capabilities, but its lack of an easy-to-use solution has hindered widespread adoption. Stratis's goal is to make Linux's advanced storage features accessible.

View File

@ -1,3 +1,5 @@
translating---geekpi
Enhance your Python with an interactive shell
======
![](https://fedoramagazine.org/wp-content/uploads/2018/03/python-shells-816x345.jpg)

View File

@ -0,0 +1,106 @@
Continuous Profiling of Go programs
============================================================
One of the most interesting parts of Google is our fleet-wide continuous profiling service. We can see who is accountable for CPU and memory usage, we can continuously monitor our production services for contention and blocking profiles, and we can generate analyses and reports that easily tell us which high-impact optimization projects to work on.
I briefly worked on [Stackdriver Profiler][2], our new product that is filling the gap of a cloud-wide profiling service for Cloud users. Note that you DON'T need to run your code on Google Cloud Platform in order to use it. Actually, I use it at development time on a daily basis now. It also supports Java and Node.js.
#### Profiling in production
pprof is safe to use in production. We target an additional 5% overhead for CPU and heap allocation profiling. Collection happens for 10 seconds out of every minute, from a single instance. If you have multiple replicas of a Kubernetes pod, we make sure we do amortized collection. For example, if you have 10 replicas of a pod, the overhead will be 0.5%. This makes it possible for users to keep profiling always on.
We currently support CPU, heap, mutex and thread profiles for Go programs.
#### Why?
Before explaining how you can use the profiler in production, it would be helpful to explain why you would ever want to profile in production. Some very common cases are:
* Debug performance problems only visible in production.
* Understand the CPU usage to reduce billing.
* Understand where contention accumulates and optimize.
* Understand the impact of new releases, e.g. seeing the difference between canary and production.
* Enrich your distributed traces by [correlating][1] them with profiling samples to understand the root cause of latency.
#### Enabling
Stackdriver Profiler doesn't work with the _net/http/pprof_ handlers; it requires you to install and configure a one-line agent in your program.
```
go get cloud.google.com/go/profiler
```
And in your main function, start the profiler:
```
if err := profiler.Start(profiler.Config{
Service: "indexing-service",
ServiceVersion: "1.0",
ProjectID: "bamboo-project-606", // optional on GCP
}); err != nil {
log.Fatalf("Cannot start the profiler: %v", err)
}
```
Once you start running your program, the profiler package will report profiles for 10 seconds out of every minute.
#### Visualization
As soon as profiles are reported to the backend, you will start seeing a flamegraph at [https://console.cloud.google.com/profiler][4]. You can filter by tags and change the time span, as well as break down by service name and version. The data is retained for up to 30 days.
![](https://cdn-images-1.medium.com/max/900/1*JdCm1WwmTgExzee5-ZWfNw.gif)
You can choose one of the available profiles and break down by service, zone, and version. You can move around the flame graph and filter by tags.
#### Reading the flame
Flame graph visualization is explained by [Brendan Gregg][5] very comprehensively. Stackdriver Profiler adds a little bit of its own flavor.
![](https://cdn-images-1.medium.com/max/900/1*QqzFJlV9v7U1s1reYsaXog.png)
We will examine a CPU profile, but everything here also applies to the other profile types.
1. The top-most x-axis represents the entire program. Each box on the flame represents a frame on the call path. The width of the box is proportional to the CPU time spent to execute that function.
2. Boxes are sorted from left to right, left being the most expensive call path.
3. Frames from the same package have the same color. All runtime functions are represented with green in this case.
4. You can click on any box to expand the execution tree further.
![](https://cdn-images-1.medium.com/max/900/1*1jCm6f-Fl2mpkRe3-57mTg.png)
You can hover on any box to see detailed information for any frame.
#### Filtering
You can show, hide, and highlight by symbol name. These are extremely useful if you specifically want to understand the cost of a particular call or package.
![](https://cdn-images-1.medium.com/max/900/1*ka9fA-AAuKggAuIBq_uhGQ.png)
1. Choose your filter. You can combine multiple filters. In this case, we are highlighting runtime.memmove.
2. The flame graph filters the frames accordingly and visualizes the matching boxes. In this case, it is highlighting all runtime.memmove boxes.
--------------------------------------------------------------------------------
via: https://medium.com/google-cloud/continuous-profiling-of-go-programs-96d4416af77b
作者:[JBD ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@rakyll?source=post_header_lockup
[1]:https://rakyll.org/profiler-labels/
[2]:https://cloud.google.com/profiler/
[3]:http://cloud.google.com/go/profiler
[4]:https://console.cloud.google.com/profiler
[5]:http://www.brendangregg.com/flamegraphs.html

View File

@ -1,6 +1,5 @@
What Stratis learned from ZFS, Btrfs, and Linux Volume Manager | Opensource.com
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows-building-containers.png?itok=0XvZLZ8k)
As discussed in [Part 1][1] of this series, Stratis is a volume-managing filesystem (VMF) with functionality similar to [ZFS][2] and [Btrfs][3]. In designing Stratis, we studied the choices that developers of existing solutions made.

View File

@ -0,0 +1,135 @@
3 Python template libraries compared
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/library-libraries-search.png?itok=xH8xSU_G)
In my day job, I spend a lot of time wrangling data from various sources into human-readable information. While a lot of the time this just takes the form of a spreadsheet or some type of chart or other data visualization, there are other times when it makes sense to present the data instead in a written format.
But a pet peeve of mine is copying and pasting. If you're moving data from its source to a standardized template, you shouldn't be copying and pasting either. It's error-prone, and honestly, it's not a good use of your time.
So for any piece of information I send out regularly which follows a common pattern, I tend to find some way to automate at least a chunk of it. Maybe that involves creating a few formulas in a spreadsheet, a quick shell script, or some other solution to autofill a template with information pulled from an outside source.
But lately, I've been exploring Python templating to do much of the work of creating reports and graphs from other datasets.
Python templating engines are hugely powerful. My use case of simplifying report creation only scratches the surface of what they can be put to work for. Many developers are making use of these tools to build full-fledged web applications and content management systems. But you don't have to have a grand vision of a complicated web app to make use of Python templating tools.
### Why templating?
Each templating tool is a little different, and you should read the documentation to understand the exact usage. But let's create a hypothetical example. Let's say I'd like to create a short page listing all of the Python topics I've written about recently. Something like this:
```
<html>
  <head>
    <title>My Python articles</title>
  </head>
  <body>
    <p>These are some of the things I have written about Python:</p>
    <ul>
      <li>Python GUIs</li>
      <li>Python IDEs</li>
      <li>Python web scrapers</li>
    </ul>
  </body>
</html>
```
Simple enough to maintain when it's just these three items. But what happens when I want to add a fourth, or fifth, or sixty-seventh? Rather than hand-coding this page, could I generate it from a CSV or other data file containing a list of all of my pages? Could I easily create duplicates of this for every topic I've written on? Could I programmatically change the text or title or heading on each one of those pages? That's where a templating engine can come into play.
There are many different options to choose from, and today I'll share with you three, in no particular order: [Mako][6], [Jinja2][7], and [Genshi][8].
### Mako
[Mako][6] is a Python templating tool released under the MIT license that is designed for fast performance (not unlike Jinja2). Mako has been used by Reddit to power their web pages, as well as being the default templating language for web frameworks like Pyramid and Pylons. It's also fairly simple and straightforward to use; you can design templates with just a couple of lines of code. Supporting both Python 2.x and 3.x, it's a powerful and feature-rich tool with [good documentation][9], which I consider a must. Features include filters, inheritance, callable blocks, and a built-in caching system, which could be important for large or complex web projects.
### Jinja2
Jinja2 is another speedy and full-featured option, available for both Python 2.x and 3.x under a BSD license. Jinja2 has a lot of overlap from a feature perspective with Mako, so for a newcomer, your choice between the two may come down to which formatting style you prefer. Jinja2 also compiles your templates to bytecode, and has features like HTML escaping, sandboxing, template inheritance, and the ability to sandbox portions of templates. Its users include Mozilla, SourceForge, NPR, Instagram, and others, and also features [strong documentation][10]. Unlike Mako, which uses Python inline for logic inside your templates, Jinja2 uses its own syntax.
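To give a feel for that difference, here is a minimal sketch of how the topic list from the example below might be rendered with Jinja2 (the template string and variable names are my own illustration, not from the article):

```
from jinja2 import Template

# Jinja2 uses its own {% ... %} and {{ ... }} delimiters rather than inline Python
template = Template("""<ul>
{% for topic in topics %}  <li>{{ topic }}</li>
{% endfor %}</ul>""")

print(template.render(topics=["Python GUIs", "Python IDEs", "Python web scrapers"]))
```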
### Genshi
[Genshi][8] is the third option I'll mention. It's really an XML tool which has a strong templating component, so if the data you are working with is already in XML format, or you need to work with formatting beyond a web page, Genshi might be a good solution for you. HTML is basically a type of XML (well, not precisely, but that's beyond the scope of this article and a bit pedantic), so formatting them is quite similar. Since a lot of the data I work with commonly is in one flavor of XML or another, I appreciated working with a tool I could use for multiple things.
The release version currently only supports Python 2.x. Although Python 3 support exists in trunk, I would caution you that it does not appear to be receiving active development. Genshi is made available under a BSD license.
### Example
So in our hypothetical example above, rather than update the HTML file every time I write about a new topic, I can update it programmatically. I can create a template, which might look like this:
```
<html>
  <head>
    <title>My Python articles</title>
  </head>
  <body>
    <p>These are some of the things I have written about Python:</p>
    <ul>
      %for topic in topics:
      <li>${topic}</li>
      %endfor
    </ul>
  </body>
</html>
```
And then I can iterate across each topic with my templating library, in this case, Mako, like this:
```
from mako.template import Template
mytemplate = Template(filename='template.txt')
print(mytemplate.render(topics=("Python GUIs","Python IDEs","Python web scrapers")))
```
Of course, in a real-world usage, rather than listing the contents manually in a variable, I would likely pull them from an outside data source, like a database or an API.
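As a minimal sketch of that idea, assuming a hypothetical topics.csv with one topic title per row, only the data source changes while the render call stays the same:

```
import csv

from mako.template import Template

# Hypothetical data source: one topic title per row in topics.csv
with open('topics.csv', newline='') as f:
    topics = [row[0] for row in csv.reader(f) if row]

mytemplate = Template(filename='template.txt')
print(mytemplate.render(topics=topics))
```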
These are not the only Python templating engines out there. If you're starting down the path of creating a new project which will make heavy use of templates, you'll want to consider more than just these three. Check out the much more comprehensive list on the [Python wiki][11] for more projects that are worth considering.
--------------------------------------------------------------------------------
via: https://opensource.com/resources/python/template-libraries
作者:[Jason Baker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jason-baker
[1]:https://opensource.com/resources/python?intcmp=7016000000127cYAAQ
[2]:https://opensource.com/resources/python/ides?intcmp=7016000000127cYAAQ
[3]:https://opensource.com/resources/python/gui-frameworks?intcmp=7016000000127cYAAQ
[4]:https://opensource.com/tags/python?intcmp=7016000000127cYAAQ
[5]:https://developers.redhat.com/?intcmp=7016000000127cYAAQ
[6]:http://www.makotemplates.org/
[7]:http://jinja.pocoo.org/
[8]:https://genshi.edgewall.org/
[9]:http://docs.makotemplates.org/en/latest/
[10]:http://jinja.pocoo.org/docs/2.10/
[11]:https://wiki.python.org/moin/Templating

View File

@ -0,0 +1,130 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
## Introduction to the Go compiler
`cmd/compile` contains the main packages that form the Go compiler. The compiler
may be logically split in four phases, which we will briefly describe alongside
the list of packages that contain their code.
You may sometimes hear the terms "front-end" and "back-end" when referring to
the compiler. Roughly speaking, these translate to the first two and last two
phases we are going to list here. A third term, "middle-end", often refers to
much of the work that happens in the second phase.
Note that the `go/*` family of packages, such as `go/parser` and `go/types`,
have no relation to the compiler. Since the compiler was initially written in C,
the `go/*` packages were developed to enable writing tools working with Go code,
such as `gofmt` and `vet`.
It should be clarified that the name "gc" stands for "Go compiler", and has
little to do with uppercase GC, which stands for garbage collection.
### 1. Parsing
* `cmd/compile/internal/syntax` (lexer, parser, syntax tree)
In the first phase of compilation, source code is tokenized (lexical analysis),
parsed (syntactic analysis), and a syntax tree is constructed for each source
file.
Each syntax tree is an exact representation of the respective source file, with
nodes corresponding to the various elements of the source such as expressions,
declarations, and statements. The syntax tree also includes position information
which is used for error reporting and the creation of debugging information.
### 2. Type-checking and AST transformations
* `cmd/compile/internal/gc` (create compiler AST, type checking, AST transformations)
The gc package includes an AST definition carried over from when it was written
in C. All of its code is written in terms of it, so the first thing that the gc
package must do is convert the syntax package's syntax tree to the compiler's
AST representation. This extra step may be refactored away in the future.
The AST is then type-checked. The first steps are name resolution and type
inference, which determine which object belongs to which identifier, and what
type each expression has. Type-checking includes certain extra checks, such as
"declared and not used" as well as determining whether or not a function
terminates.
Certain transformations are also done on the AST. Some nodes are refined based
on type information, such as string additions being split from the arithmetic
addition node type. Some other examples are dead code elimination, function call
inlining, and escape analysis.
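You can watch some of these decisions yourself with a standard Go toolchain: the compiler's `-m` flag prints inlining and escape analysis results. The sample output lines below are illustrative:

```
$ go build -gcflags='-m' main.go
# ./main.go:10:6: can inline add
# ./main.go:15:13: x escapes to heap
```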
### 3. Generic SSA
* `cmd/compile/internal/gc` (converting to SSA)
* `cmd/compile/internal/ssa` (SSA passes and rules)
In this phase, the AST is converted into Static Single Assignment (SSA) form, a
lower-level intermediate representation with specific properties that make it
easier to implement optimizations and to eventually generate machine code from
it.
During this conversion, function intrinsics are applied. These are special
functions that the compiler has been taught to replace with heavily optimized
code on a case-by-case basis.
Certain nodes are also lowered into simpler components during the AST to SSA
conversion, so that the rest of the compiler can work with them. For instance,
the copy builtin is replaced by memory moves, and range loops are rewritten into
for loops. Some of these currently happen before the conversion to SSA due to
historical reasons, but the long-term plan is to move all of them here.
Then, a series of machine-independent passes and rules are applied. These do not
concern any single computer architecture, and thus run on all `GOARCH` variants.
Some examples of these generic passes include dead code elimination, removal of
unneeded nil checks, and removal of unused branches. The generic rewrite rules
mainly concern expressions, such as replacing some expressions with constant
values, and optimizing multiplications and float operations.
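One way to inspect these passes on a particular function is the `GOSSAFUNC` environment variable, which makes the compiler dump the SSA form after every pass to an `ssa.html` file (the function and file names here are illustrative):

```
$ GOSSAFUNC=main go build main.go
# writes ssa.html showing the chosen function's SSA after each pass
```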
### 4. Generating machine code
* `cmd/compile/internal/ssa` (SSA lowering and arch-specific passes)
* `cmd/internal/obj` (machine code generation)
The machine-dependent phase of the compiler begins with the "lower" pass, which
rewrites generic values into their machine-specific variants. For example, on
amd64 memory operands are possible, so many load-store operations may be combined.
Note that the lower pass runs all machine-specific rewrite rules, and thus it
currently applies lots of optimizations too.
Once the SSA has been "lowered" and is more specific to the target architecture,
the final code optimization passes are run. This includes yet another dead code
elimination pass, moving values closer to their uses, the removal of local
variables that are never read from, and register allocation.
Other important pieces of work done as part of this step include stack frame
layout, which assigns stack offsets to local variables, and pointer liveness
analysis, which computes which on-stack pointers are live at each GC safe point.
At the end of the SSA generation phase, Go functions have been transformed into
a series of obj.Prog instructions. These are passed to the assembler
(`cmd/internal/obj`), which turns them into machine code and writes out the
final object file. The object file will also contain reflect data, export data,
and debugging information.
### Further reading
To dig deeper into how the SSA package works, including its passes and rules,
head to `cmd/compile/internal/ssa/README.md`.
--------------------------------------------------------------------------------
via: https://github.com/golang/go/blob/master/src/cmd/compile/README.md
作者:[mvdan ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://github.com/mvdan

View File

@ -0,0 +1,144 @@
How to Compile a Linux Kernel
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/chester-alvarez-644-unsplash.jpg?itok=aFxG9kUZ)
Once upon a time the idea of upgrading the Linux kernel sent fear through the hearts of many a user. Back then, the process of upgrading the kernel involved a lot of steps and even more time. Now, installing a new kernel can be easily handled with package managers like apt. With the addition of certain repositories, you can even easily install experimental or specific kernels (such as real-time kernels for audio production) without breaking a sweat.
Considering how easy it is to upgrade your kernel, why would you bother compiling one yourself? Here are a few possible reasons:
* You simply want to know how it's done.
* You need to enable or disable specific options in a kernel that simply aren't available via the standard options.
* You want to enable hardware support that might not be found in the standard kernel.
* You're using a distribution that requires you to compile the kernel.
* You're a student and this is an assignment.
Regardless of why, knowing how to compile a Linux kernel is very useful and can even be seen as a rite of passage. When I first compiled a new Linux kernel (a long, long time ago) and managed to boot from said kernel, I felt a certain thrill coursing through my system (which was quickly crushed the next time I attempted and failed).
With that said, let's walk through the process of compiling a Linux kernel. I'll be demonstrating on Ubuntu 16.04 Server. After running through a standard sudo apt upgrade, the installed kernel is 4.4.0-121. I want to upgrade to kernel 4.17. Let's take care of that.
A word of warning: I highly recommend you practice this procedure on a virtual machine. By working with a VM, you can always create a snapshot and back out of any problems with ease. DO NOT upgrade the kernel this way on a production machine… not until you know what you're doing.
### Downloading the kernel
The first thing to do is download the kernel source file. This can be done by finding the URL of the kernel you want to download (from [Kernel.org][1]). Once you have the URL, download the source file with the following command (I'll demonstrate with kernel 4.17 RC2):
```
wget https://git.kernel.org/torvalds/t/linux-4.17-rc2.tar.gz
```
While that file is downloading, there are a few bits to take care of.
### Installing requirements
In order to compile the kernel, we'll need to first install a few requirements. This can be done with a single command:
```
sudo apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison
```
Do note: You will need at least 12GB of free space on your local drive to get through the kernel compilation process. So make sure you have enough space.
### Extracting the source
From within the directory housing our newly downloaded kernel, extract the kernel source with the command:
```
tar xvzf linux-4.17-rc2.tar.gz
```
Change into the newly created directory with the command cd linux-4.17-rc2.
### Configuring the kernel
Before we actually compile the kernel, we must first configure which modules to include. There is actually a really easy way to do this. With a single command, you can copy the current kernel's config file and then use the tried and true menuconfig command to make any necessary changes. To do this, issue the command:
```
cp /boot/config-$(uname -r) .config
```
Now that you have a configuration file, issue the command make menuconfig. This command will open up a configuration tool (Figure 1) that allows you to go through every module available and enable or disable what you need or don't need.
![menuconfig][3]
Figure 1: The make menuconfig in action.
[Used with permission][4]
It is quite possible you might disable a critical portion of the kernel, so step through menuconfig with care. If you're not sure about an option, leave it alone. Or, better yet, stick with the configuration we just copied from the running kernel (as we know it works). Once you've gone through the entire list (it's quite long), you're ready to compile!
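As an aside, if you'd rather skip the interactive menus entirely, a common shortcut (not part of this walkthrough) is to take the copied configuration and accept the default answer for every new option:

```
make olddefconfig
```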
### Compiling and installing
Now it's time to actually compile the kernel. The first step is to compile using the make command. So issue make and then answer the necessary questions (Figure 2). The questions asked will be determined by what kernel you're upgrading from and what kernel you're upgrading to. Trust me when I say there's a ton of questions to answer, so give yourself plenty of time here.
![make][6]
Figure 2: Answering the questions for the make command.
[Used with permission][4]
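Another general kernel-build convention worth knowing, though not part of this walkthrough: on a multi-core machine the build goes much faster if you let make run jobs in parallel, one per CPU core:

```
make -j $(nproc)
```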
After answering the litany of questions, you can then install the modules you've enabled with the command:
```
make modules_install
```
Once again, this command will take some time, so either sit back and watch the output, or go do something else (as it will not require your input). Chances are, you'll want to undertake another task (unless you really enjoy watching output fly by in a terminal).
Now we install the kernel with the command:
```
sudo make install
```
Again, another command that's going to take a significant amount of time. In fact, the make install command will take even longer than the make modules_install command. Go have lunch, configure a router, install Linux on a few servers, or take a nap.
### Enable the kernel for boot
Once the make install command completes, it's time to enable the kernel for boot. To do this, issue the command:
```
sudo update-initramfs -c -k 4.17-rc2
```
Of course, you would substitute the kernel number above for the kernel you've compiled. When that command completes, update grub with the command:
```
sudo update-grub
```
You should now be able to restart your system and select the newly installed kernel.
### Congratulations!
You've compiled a Linux kernel! It's a process that may take some time; but, in the end, you'll have a custom kernel for your Linux distribution, as well as an important skill that many Linux admins tend to overlook.
Learn more about Linux through the free ["Introduction to Linux" ][7] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/4/how-compile-linux-kernel-0
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.kernel.org/
[2]:/files/images/kernelcompile1jpg
[3]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kernel_compile_1.jpg?itok=ZNybYgEt (menuconfig)
[4]:/licenses/category/used-permission
[5]:/files/images/kernelcompile2jpg
[6]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kernel_compile_2.jpg?itok=TYfV02wC (make)
[7]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,3 +1,5 @@
translating---geekpi
How to use FIND in Linux
======

View File

@ -0,0 +1,311 @@
A Beginners Guide To Flatpak
======
![](https://www.ostechnix.com/wp-content/uploads/2016/06/flatpak-720x340.jpg)
A while ago, we wrote about [**Ubuntu's Snaps**][1]. Snaps were introduced by Canonical for the Ubuntu operating system, and later adopted by other Linux distributions such as Arch, Gentoo, and Fedora. A snap is a single binary package bundled with all required libraries and dependencies, and you can install it on any Linux distribution, regardless of its version and architecture. Similar to Snaps, there is another tool called **Flatpak**. As you may already know, packaging distributed applications for different Linux distributions is a quite time-consuming and difficult process. Each distributed application has a different set of libraries and dependencies for various Linux distributions. But Flatpak, a new framework for desktop applications, greatly reduces this burden. Now, you can build a single Flatpak app and install it on various operating systems. How cool is that?
Also, the users don't have to worry about the libraries and dependencies; everything is bundled within the app itself. Most importantly, Flatpak apps are sandboxed and isolated from the rest of the host operating system and other applications. Another notable feature is that we can install multiple versions of the same application at the same time on the same system. For example, you can install VLC player versions 2.1, 2.2, and 2.3 on the same system. So developers can test different versions of the same application at a time.
In this tutorial, we will see how to install Flatpak in GNU/Linux.
### Install Flatpak
Flatpak is available for many popular Linux distributions such as Arch Linux, Debian, Fedora, Gentoo, Red Hat, Linux Mint, openSUSE, Solus, Mageia and Ubuntu distributions.
To install Flatpak on Arch Linux, run:
```
$ sudo pacman -S flatpak
```
Flatpak is available in the default repositories of Debian Stretch and newer. To install it, run:
```
$ sudo apt install flatpak
```
On Fedora, Flatpak is installed by default. All you have to do is enable Flathub as described in the next section.
Just in case it is not installed for any reason, run:
```
$ sudo dnf install flatpak
```
On RHEL 7, run:
```
$ sudo yum install flatpak
```
On Linux Mint 18.3, flatpak is installed by default. So, no setup required.
On openSUSE Tumbleweed, Flatpak can also be installed using Zypper:
```
$ sudo zypper install flatpak
```
On Ubuntu, add the following repository and install Flatpak as shown below.
```
$ sudo add-apt-repository ppa:alexlarsson/flatpak
$ sudo apt update
$ sudo apt install flatpak
```
The Flatpak plugin for the Software app makes it possible to install apps without needing the command line. To install this plugin, run:
```
$ sudo apt install gnome-software-plugin-flatpak
```
For other Linux distributions, refer to the official installation [**link**][2].
### Getting Started With Flatpak
There are many popular applications, such as Gimp, Kdenlive, Steam, Spotify, and Visual Studio Code, available as flatpaks.
Let us now see the basic usage of flatpak command.
First of all, we need to add remote repositories.
#### Adding Remote Repositories
**Enable Flathub Repository:**
**Flathub** is nothing but a central repository where all flatpak applications are made available to users. To enable it, just run:
```
$ sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
Flathub is enough to install most popular apps. Just in case you want to try some GNOME apps, add the GNOME repository.
**Enable GNOME Repository:**
The GNOME repository contains all GNOME core applications. The GNOME flatpak repository itself is available in two versions, **stable** and **nightly**.
To add GNOME stable repository, run the following commands:
```
$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg
$ sudo flatpak remote-add --gpg-import=gnome-sdk.gpg --if-not-exists gnome-apps https://sdk.gnome.org/repo-apps/
```
Applications in this repository require the **3.20 version of the org.gnome.Platform runtime**.
To install the stable runtimes, run:
```
$ sudo flatpak remote-add --gpg-import=gnome-sdk.gpg gnome https://sdk.gnome.org/repo/
```
To add the GNOME nightly apps repository, run:
```
$ wget https://sdk.gnome.org/nightly/keys/nightly.gpg
$ sudo flatpak remote-add --gpg-import=nightly.gpg --if-not-exists gnome-nightly-apps https://sdk.gnome.org/nightly/repo-apps/
```
Applications in this repository require the **nightly version of the org.gnome.Platform runtime**.
To install the nightly runtimes, run:
```
$ sudo flatpak remote-add --gpg-import=nightly.gpg gnome-nightly https://sdk.gnome.org/nightly/repo/
```
#### Listing Remotes
To list all configured remote repositories, run:
```
$ flatpak remotes
Name Options
flathub system
gnome system
gnome-apps system
gnome-nightly system
gnome-nightly-apps system
```
As you can see, the above command lists the remotes that you have added to your system. It also lists whether the remote has been added per-user or system-wide.
#### Removing Remotes
To remove a remote, for example flathub, simply do:
```
$ sudo flatpak remote-delete flathub
```
Here **flathub** is the remote name.
#### Installing Flatpak Applications
In this section, we will see how to install flatpak apps. To install an application, simply run:
```
$ sudo flatpak install flathub com.spotify.Client
```
All the apps in the GNOME stable repository use the version name “stable”.
To install any stable GNOME application, for example **Evince**, run:
```
$ sudo flatpak install gnome-apps org.gnome.Evince stable
```
All the apps in the GNOME nightly repository use the version name “master”.
For example, to install gedit, run:
```
$ sudo flatpak install gnome-nightly-apps org.gnome.gedit master
```
If you don't want to install apps system-wide, you can also install flatpak apps per-user like below.
```
$ flatpak install --user <name-of-app>
```
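As a concrete sketch, assuming the Flathub remote from earlier, a fully per-user setup might look like this:

```
$ flatpak --user remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak --user install flathub com.spotify.Client
```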
Data for installed apps will be stored under the **$HOME/.var/app/** directory.
```
$ ls $HOME/.var/app/
com.spotify.Client
```
#### Running Flatpak Applications
You can launch installed applications at any time from the application launcher. From the command line, you can run an application, for example Spotify, using the command:
```
$ flatpak run com.spotify.Client
```
#### Listing Applications
To view the installed applications and runtimes, run:
```
$ flatpak list
```
To view only the applications, not runtimes, use this command instead:
```
$ flatpak list --app
```
You can also view the list of available applications and runtimes from all remotes using command:
```
$ flatpak remote-ls
```
To list only applications, not runtimes, run:
```
$ flatpak remote-ls --app
```
To list applications and runtimes from a specific repository, for example **gnome-apps** , run:
```
$ flatpak remote-ls gnome-apps
```
To list only the applications from a remote repository, run:
```
$ flatpak remote-ls flathub --app
```
#### Updating Applications
To update all your flatpak applications, run:
```
$ flatpak update
```
To update a specific application, we do:
```
$ flatpak update com.spotify.Client
```
#### Getting Details Of Applications
To display the details of an installed application, run:
```
$ flatpak info io.github.mmstick.FontFinder
```
Sample output:
```
Ref: app/io.github.mmstick.FontFinder/x86_64/stable
ID: io.github.mmstick.FontFinder
Arch: x86_64
Branch: stable
Origin: flathub
Date: 2018-04-11 15:10:31 +0000
Subject: Workaround appstream issues (391ef7f5)
Commit: 07164e84148c9fc8b0a2a263c8a468a5355b89061b43e32d95008fc5dc4988f4
Parent: dbff9150fce9fdfbc53d27e82965010805f16491ec7aa1aa76bf24ec1882d683
Location: /var/lib/flatpak/app/io.github.mmstick.FontFinder/x86_64/stable/07164e84148c9fc8b0a2a263c8a468a5355b89061b43e32d95008fc5dc4988f4
Installed size: 2.5 MB
Runtime: org.gnome.Platform/x86_64/3.28
```
#### Removing Applications
To remove a flatpak application, run:
```
$ sudo flatpak uninstall com.spotify.Client
```
For details, refer to the flatpak help section:
```
$ flatpak --help
```
And, that's all for now. Hope you now have a basic idea of Flatpak.
If you find this guide useful, please share it on your social and professional networks and support OSTechNix.
More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:http://www.ostechnix.com/introduction-ubuntus-snap-packages/
[2]:https://flatpak.org/setup/

View File

@ -0,0 +1,273 @@
Asynchronous Processing with Go using Kafka and MongoDB
============================================================
In my previous blog post ["My First Go Microservice using MongoDB and Docker Multi-Stage Builds"][9], I created a Go microservice sample which exposes a REST http endpoint and saves the data received from an HTTP POST to a MongoDB database.
In this example, I decoupled the saving of data to MongoDB and created another microservice to handle it. I also added Kafka to serve as the messaging layer, so the microservices can work on their own concerns asynchronously.
> In case you have time to watch, I recorded a walkthrough of this blog post in the [video below][1] :)
Here is the high-level architecture of this simple asynchronous processing example with 2 microservices.
![rest-kafka-mongo-microservice-draw-io](https://www.melvinvivas.com/content/images/2018/04/rest-kafka-mongo-microservice-draw-io.jpg)
Microservice 1 - is a REST microservice which receives data from a /POST http call to it. After receiving the request, it retrieves the data from the http request and saves it to Kafka. After saving, it responds to the caller with the same data sent via /POST
Microservice 2 - is a microservice which subscribes to a topic in Kafka where Microservice 1 saves the data. Once a message is consumed by the microservice, it then saves the data to MongoDB.
Before you proceed, we need a few things to be able to run these microservices:
1. [Download Kafka][2] - I used version kafka_2.11-1.1.0
2. Install [librdkafka][3] - unfortunately, this library must be present on the target system
3. Install the [Kafka Go Client by Confluent][4]
4. Run MongoDB. You can check my [previous blog post][5] about this where I used a MongoDB docker image.
Let's get rolling!
Start Kafka first. You need Zookeeper running before you run the Kafka server. Here's how:
```
$ cd /<download path>/kafka_2.11-1.1.0
$ bin/zookeeper-server-start.sh config/zookeeper.properties
```
Then run Kafka - I am using port 9092 to connect to Kafka. If you need to change the port, just configure it in config/server.properties. If you are a beginner like me, I suggest just using the default ports for now.
```
$ bin/kafka-server-start.sh config/server.properties
```
After running Kafka, we need MongoDB. To make it simple, just use this docker-compose.yml.
```
version: '3'
services:
mongodb:
image: mongo
ports:
- "27017:27017"
volumes:
- "mongodata:/data/db"
networks:
- network1
volumes:
mongodata:
networks:
network1:
```
Run the MongoDB docker container using Docker Compose
```
docker-compose up
```
Here is the relevant code of Microservice 1. I just modified my previous example to save to Kafka rather than MongoDB.
[rest-to-kafka/rest-kafka-sample.go][10]
```
func jobsPostHandler(w http.ResponseWriter, r *http.Request) {
//Retrieve body from http request
b, err := ioutil.ReadAll(r.Body)
defer r.Body.Close()
if err != nil {
panic(err)
}
//Save data into Job struct
var _job Job
err = json.Unmarshal(b, &_job)
if err != nil {
http.Error(w, err.Error(), 500)
return
}
saveJobToKafka(_job)
//Convert job struct into json
jsonString, err := json.Marshal(_job)
if err != nil {
http.Error(w, err.Error(), 500)
return
}
//Set content-type http header
w.Header().Set("content-type", "application/json")
//Send back data as response
w.Write(jsonString)
}
func saveJobToKafka(job Job) {
fmt.Println("save to kafka")
jsonString, err := json.Marshal(job)
jobString := string(jsonString)
fmt.Print(jobString)
p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
if err != nil {
panic(err)
}
// Produce messages to topic (asynchronously)
topic := "jobs-topic1"
for _, word := range []string{string(jobString)} {
p.Produce(&kafka.Message{
TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
Value: []byte(word),
}, nil)
}
}
```
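One caveat about the producer code above: `Produce` is asynchronous, and the handler creates a brand-new producer on every request without flushing it. A more typical pattern, sketched below under my own names (not from the post), is one shared producer that is flushed on shutdown:

```
package main

import "github.com/confluentinc/confluent-kafka-go/kafka"

// One shared producer for the whole process instead of one per request.
var producer *kafka.Producer

func initProducer() error {
	var err error
	producer, err = kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
	return err
}

func shutdownProducer() {
	// Produce is asynchronous; wait up to 15 seconds for buffered
	// messages to be delivered before closing.
	producer.Flush(15 * 1000)
	producer.Close()
}
```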
Here is the code of Microservice 2. What is important in this code is the consumption from Kafka; the saving part I already discussed in my previous blog post. Here are the important parts of the code which consume the data from Kafka.
[kafka-to-mongo/kafka-mongo-sample.go][11]
```
func main() {
//Create MongoDB session
session := initialiseMongo()
mongoStore.session = session
receiveFromKafka()
}
func receiveFromKafka() {
fmt.Println("Start receiving from Kafka")
c, err := kafka.NewConsumer(&kafka.ConfigMap{
"bootstrap.servers": "localhost:9092",
"group.id": "group-id-1",
"auto.offset.reset": "earliest",
})
if err != nil {
panic(err)
}
c.SubscribeTopics([]string{"jobs-topic1"}, nil)
for {
msg, err := c.ReadMessage(-1)
if err == nil {
fmt.Printf("Received from Kafka %s: %s\n", msg.TopicPartition, string(msg.Value))
job := string(msg.Value)
saveJobToMongo(job)
} else {
fmt.Printf("Consumer error: %v (%v)\n", err, msg)
break
}
}
c.Close()
}
func saveJobToMongo(jobString string) {
fmt.Println("Save to MongoDB")
col := mongoStore.session.DB(database).C(collection)
//Save data into Job struct
var _job Job
b := []byte(jobString)
err := json.Unmarshal(b, &_job)
if err != nil {
panic(err)
}
//Insert job into MongoDB
errMongo := col.Insert(_job)
if errMongo != nil {
panic(errMongo)
}
fmt.Printf("Saved to MongoDB : %s", jobString)
}
```
Let's get down to the demo. Run Microservice 1. Make sure Kafka is running.
```
$ go run rest-kafka-sample.go
```
I used Postman to send data to Microservice 1
![Screenshot-2018-04-29-22.20.33](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.20.33.png)
Here is the log you will see in Microservice 1. Once you see this, it means data has been received from Postman and saved to Kafka.
![Screenshot-2018-04-29-22.22.00](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.22.00.png)
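If you want to double-check that the message actually landed in Kafka before starting Microservice 2, the console consumer that ships with Kafka can read the topic back (the topic name comes from the code above):

```
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic jobs-topic1 --from-beginning
```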
Since we are not running Microservice 2 yet, the data saved by Microservice 1 will just be in Kafka. Let's consume it and save to MongoDB by running Microservice 2.
```
$ go run kafka-mongo-sample.go
```
Now you'll see that Microservice 2 consumes the data and saves it to MongoDB
![Screenshot-2018-04-29-22.24.15](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.24.15.png)
Check if data is saved in MongoDB. If it is there, we're good!
![Screenshot-2018-04-29-22.26.39](https://www.melvinvivas.com/content/images/2018/04/Screenshot-2018-04-29-22.26.39.png)
Complete source code can be found here
[https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice][12]
Shameless plug! If you like this blog post, please follow me on Twitter [@donvito][6]. I tweet about Docker, Kubernetes, GoLang, Cloud, DevOps, Agile and Startups. Would love to connect on [GitHub][7] and [LinkedIn][8]
[VIDEO](https://youtu.be/xa0Yia1jdu8)
Enjoy!
--------------------------------------------------------------------------------
via: https://www.melvinvivas.com/developing-microservices-using-kafka-and-mongodb/
作者:[Melvin Vivas ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.melvinvivas.com/author/melvin/
[1]:https://www.melvinvivas.com/developing-microservices-using-kafka-and-mongodb/#video1
[2]:https://kafka.apache.org/downloads
[3]:https://github.com/confluentinc/confluent-kafka-go
[4]:https://github.com/confluentinc/confluent-kafka-go
[5]:https://www.melvinvivas.com/my-first-go-microservice/
[6]:https://twitter.com/donvito
[7]:https://github.com/donvito
[8]:https://www.linkedin.com/in/melvinvivas/
[9]:https://www.melvinvivas.com/my-first-go-microservice/
[10]:https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice/rest-to-kafka
[11]:https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice/kafka-to-mongo
[12]:https://github.com/donvito/learngo/tree/master/rest-kafka-mongo-microservice

View File

@ -0,0 +1,633 @@
3 practical Python tools: magic methods, iterators and generators, and method magic
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/serving-bowl-forks-dinner.png?itok=a3YqPwr5)
Python offers a unique set of tools and language features that help make your code more elegant, readable, and intuitive. By selecting the right tool for the right problem, your code will be easier to maintain. In this article, we'll examine three of those tools: magic methods, iterators and generators, and method magic.
### Magic methods
Magic methods can be considered the plumbing of Python. They're the methods that are called "under the hood" for certain built-in methods, symbols, and operations. A common magic method you may be familiar with is `__init__()`, which is called when we want to initialize a new instance of a class.
You may have seen other common magic methods, like `__str__` and `__repr__`. There is a whole world of magic methods, and by implementing a few of them, we can greatly modify the behavior of an object or even make it behave like a built-in datatype, such as a number, list, or dictionary.
Let's take this `Money` class for example:
```
class Money:
    currency_rates = {
        '$': 1,
        '€': 0.88,
    }
    def __init__(self, symbol, amount):
        self.symbol = symbol
        self.amount = amount
    def __repr__(self):
        return '%s%.2f' % (self.symbol, self.amount)
    def convert(self, other):
        """ Convert other amount to our currency """
        new_amount = (
            other.amount / self.currency_rates[other.symbol]
            * self.currency_rates[self.symbol])
        return Money(self.symbol, new_amount)
```
The class defines a currency rate for a given symbol and exchange rate, specifies an initializer (also known as a constructor), and implements `__repr__`, so when we print out the class, we see a nice representation such as `$2.00` for an instance `Money('$', 2.00)` with the currency symbol and amount. Most importantly, it defines a method that allows you to convert between different currencies with different exchange rates.
Using a Python shell, let's say we've defined the costs for two food items in different currencies, like so:
```
>>> soda_cost = Money('$', 5.25)
>>> soda_cost
    $5.25
>>> pizza_cost = Money('€', 7.99)
>>> pizza_cost
    €7.99
```
We could use magic methods to help instances of this class interact with each other. Let's say we wanted to be able to add two instances of this class together, even if they were in different currencies. To make that a reality, we could implement the `__add__` magic method on our `Money` class:
```
class Money:
    # ... previously defined methods ...
    def __add__(self, other):
        """ Add 2 Money instances using '+' """
        new_amount = self.amount + self.convert(other).amount
        return Money(self.symbol, new_amount)
```
Now we can use this class in a very intuitive way:
```
>>> soda_cost = Money('$', 5.25)
>>> pizza_cost = Money('€', 7.99)
>>> soda_cost + pizza_cost
    $14.33
>>> pizza_cost + soda_cost
    €12.61
```
When we add two instances together, we get a result in the first defined currency. All the conversion is done seamlessly under the hood. If we wanted to, we could also implement `__sub__` for subtraction, `__mul__` for multiplication, and many more. Read about [emulating numeric types][1], or read this [guide to magic methods][2] for others.
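As a minimal sketch following the exact same pattern as `__add__`, subtraction could look like this:

```
class Money:
    # ... previously defined methods ...
    def __sub__(self, other):
        """ Subtract 2 Money instances using '-' """
        new_amount = self.amount - self.convert(other).amount
        return Money(self.symbol, new_amount)
```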
We learned that `__add__` maps to the built-in operator `+`. Other magic methods can map to symbols like `[]`. For example, to access an item by index or key (in the case of a dictionary), use the `__getitem__` method:
```
>>> d = {'one': 1, 'two': 2}
>>> d['two']
2
>>> d.__getitem__('two')
2
```
Some magic methods even map to built-in functions, such as `__len__()`, which maps to `len()`.
```
class Alphabet:
    letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    def __len__(self):
        return len(self.letters)
>>> my_alphabet = Alphabet()
>>> len(my_alphabet)
    26
```
### Custom iterators
Custom iterators are an incredibly powerful but unfortunately confusing topic to new and seasoned Pythonistas alike.
Many built-in types, such as lists, sets, and dictionaries, already implement the protocol that allows them to be iterated over under the hood. This allows us to easily loop over them.
```
>>> for food in ['Pizza', 'Fries']:
         print(food + '. Yum!')
Pizza. Yum!
Fries. Yum!
```
How can we iterate over our own custom classes? First, let's clear up some terminology.
* To be iterable, a class needs to implement `__iter__()`
* The `__iter__()` method needs to return an iterator
* To be an iterator, a class needs to implement `__next__()` (or `next()` [in Python 2][3]), which must raise a `StopIteration` exception when there are no more items to iterate over.
Whew! It sounds complicated, but once you remember these fundamental concepts, you'll be able to iterate in your sleep.
When might we want to use a custom iterator? Let's imagine a scenario where we have a `Server` instance running different services such as `http` and `ssh` on different ports. Some of these services have an `active` state while others are `inactive`.
```
class Server:
    services = [
        {'active': False, 'protocol': 'ftp', 'port': 21},
        {'active': True, 'protocol': 'ssh', 'port': 22},
        {'active': True, 'protocol': 'http', 'port': 80},
    ]
```
When we loop over our `Server` instance, we only want to loop over `active` services. Let's create a new class, an `IterableServer`:
```
class IterableServer:
    def __init__(self):
        self.current_pos = 0
    def __next__(self):
        pass  # TODO: Implement and remember to raise StopIteration
```
First, we initialize our current position to `0`. Then, we define a `__next__()` method, which will return the next item. We'll also ensure that we raise `StopIteration` when there are no more items to return. So far so good! Now, let's implement this `__next__()` method.
```
class IterableServer:
    services = Server.services  # assumes the Server class defined above

    def __init__(self):
        self.current_pos = 0  # we initialize our current position to zero

    def __iter__(self):  # we can return self here, because __next__ is implemented
        return self

    def __next__(self):
        while self.current_pos < len(self.services):
            service = self.services[self.current_pos]
            self.current_pos += 1
            if service['active']:
                return service['protocol'], service['port']
        raise StopIteration

    next = __next__  # optional python2 compatibility
```
We keep looping over the services in our list while our current position is less than the length of the services but only returning if the service is active. Once we run out of services to iterate over, we raise a `StopIteration` exception.
Because we implement a `__next__()` method that raises `StopIteration` when it is exhausted, we can return `self` from `__iter__()` because the `IterableServer` class adheres to the `iterable` protocol.
Now we can loop over an instance of `IterableServer`, which will allow us to look at each active service, like so:
```
>>> for protocol, port in IterableServer():
        print('service %s is running on port %d' % (protocol, port))
service ssh is running on port 22
service http is running on port 80
```
That's pretty great, but we can do better! In an instance like this, where our iterator doesn't need to maintain a lot of state, we can simplify our code and use a [generator][4] instead.
```
class Server:
    services = [
        {'active': False, 'protocol': 'ftp', 'port': 21},
        {'active': True, 'protocol': 'ssh', 'port': 22},
        {'active': True, 'protocol': 'http', 'port': 80},
    ]
    def __iter__(self):
        for service in self.services:
            if service['active']:
                yield service['protocol'], service['port']
```
What exactly is the `yield` keyword? Yield is used when defining a generator function. It's sort of like a `return`. While a `return` exits the function after returning the value, `yield` suspends execution until the next time it's called. This allows your generator function to maintain state until it resumes. Check out [yield's documentation][5] to learn more. With a generator, we don't have to manually maintain state by remembering our position. A generator knows only two things: what it needs to do right now and what it needs to do to calculate the next item. Once we reach a point of execution where `yield` isn't called again, we know to stop iterating.
This works because of some built-in Python magic. In the [Python documentation for `__iter__()`][6] we can see that if `__iter__()` is implemented as a generator, it will automatically return an iterator object that supplies the `__iter__()` and `__next__()` methods. Read this great article for a deeper dive of [iterators, iterables, and generators][7].
### Method magic
Due to its unique aspects, Python provides some interesting method magic as part of the language.
One example of this is aliasing functions. Since functions are just objects, we can assign them to multiple variables. For example:
```
>>> def foo():
       return 'foo'
>>> foo()
'foo'
>>> bar = foo
>>> bar()
'foo'
```
We'll see later on how this can be useful.
Python provides a handy built-in, [called `getattr()`][8], that takes the `object, name, default` parameters and returns the attribute `name` on `object`. This programmatically allows us to access instance variables and methods. For example:
```
>>> class Dog:
        sound = 'Bark'
        def speak(self):
            print(self.sound + '!', self.sound + '!')
>>> fido = Dog()
>>> fido.sound
'Bark'
>>> getattr(fido, 'sound')
'Bark'
>>> fido.speak
<bound method Dog.speak of <__main__.Dog object at 0x102db8828>>
>>> getattr(fido, 'speak')
<bound method Dog.speak of <__main__.Dog object at 0x102db8828>>
>>> fido.speak()
Bark! Bark!
>>> speak_method = getattr(fido, 'speak')
>>> speak_method()
Bark! Bark!
```
Cool trick, but how could we practically use `getattr`? Let's look at an example that allows us to write a tiny command-line tool to dynamically process commands.
```
class Operations:
    def say_hi(self, name):
        print('Hello,', name)
    def say_bye(self, name):
        print ('Goodbye,', name)
    def default(self, arg):
        print ('This operation is not supported.')
if __name__ == '__main__':
    operations = Operations()
    # let's assume we do error handling
    command, argument = input('> ').split()
    func_to_call = getattr(operations, command, operations.default)
    func_to_call(argument)
```
The output of our script is:
```
$ python getattr.py
> say_hi Nina
Hello, Nina
> blah blah
This operation is not supported.
```
Next, we'll look at `partial`. For example, **`functools.partial(func, *args, **kwargs)`** allows you to return a new [partial object][9] that behaves like `func` called with `args` and `kwargs`. If more `args` are passed in, they're appended to `args`. If more `kwargs` are passed in, they extend and override `kwargs`. Let's see it in action with a brief example:
```
>>> from functools import partial
>>> basetwo = partial(int, base=2)
>>> basetwo
<functools.partial object at 0x1085a09f0>
>>> basetwo('10010')
18
# This is the same as
>>> int('10010', base=2)
```
Let's see how this method magic ties together in some sample code from a library I enjoy using called [`agithub`][10], which is a (poorly named) REST API client with transparent syntax that allows you to rapidly prototype any REST API (not just GitHub) with minimal configuration. I find this project interesting because it's incredibly powerful yet only about 400 lines of Python. You can add support for any REST API in about 30 lines of configuration code. `agithub` knows everything it needs to about protocol (`REST`, `HTTP`, `TCP`), but it assumes nothing about the upstream API. Let's dive into the implementation.
Here's a simplified version of how we'd define an endpoint URL for the GitHub API and any other relevant connection properties. View the [full code][11] instead.
```
class GitHub(API):
    def __init__(self, token=None, *args, **kwargs):
        props = ConnectionProperties(api_url = kwargs.pop('api_url', 'api.github.com'))
        self.setClient(Client(*args, **kwargs))
        self.setConnectionProperties(props)
```
Then, once your [access token][12] is configured, you can start using the [GitHub API][13].
```
>>> gh = GitHub('token')
>>> status, data = gh.user.repos.get(visibility='public', sort='created')
>>> # ^ Maps to GET /user/repos
>>> data
... ['tweeter', 'snipey', '...']
```
Note that it's up to you to spell things correctly. There's no validation of the URL. If the URL doesn't exist or anything else goes wrong, the error thrown by the API will be returned. So, how does this all work? Let's figure it out. First, we'll check out a simplified example of the [`API` class][14]:
```
class API:
    # ... other methods ...
    def __getattr__(self, key):
        return IncompleteRequest(self.client).__getattr__(key)
    __getitem__ = __getattr__
```
Each call on the `API` class ferries the call to the [`IncompleteRequest` class][15] for the specified `key`.
```
class IncompleteRequest:
    # ... other methods ...
    def __getattr__(self, key):
        if key in self.client.http_methods:
            htmlMethod = getattr(self.client, key)
            return partial(htmlMethod, url=self.url)
        else:
            self.url += '/' + str(key)
            return self
    __getitem__ = __getattr__
class Client:
    http_methods = ('get',)  # note the trailing comma makes this a tuple; ... and post, put, patch, etc.
    def get(self, url, headers={}, **params):
        return self.request('GET', url, None, headers)
```
If the last call is not an HTTP method (like 'get', 'post', etc.), it returns an `IncompleteRequest` with an appended path. Otherwise, it gets the right function for the specified HTTP method from the [`Client` class][16] and returns a `partial`.
What happens if we give a non-existent path?
```
>>> status, data = this.path.doesnt.exist.get()
>>> status
... 404
```
And because `__getitem__` is aliased to `__getattr__`:
```
>>> owner, repo = 'nnja', 'tweeter'
>>> status, data = gh.repos[owner][repo].pulls.get()
>>> # ^ Maps to GET /repos/nnja/tweeter/pulls
>>> data
.... # {....}
```
Now that's some serious method magic!
### Learn more
Python provides plenty of tools that allow you to make your code more elegant and easier to read and understand. The challenge is finding the right tool for the job, but I hope this article added some new ones to your toolbox. And, if you'd like to take this a step further, you can read about decorators, context managers, context generators, and `NamedTuple`s on my blog [nnja.io][17]. As you become a better Python developer, I encourage you to get out there and read some source code for well-architected projects. [Requests][18] and [Flask][19] are two great codebases to start with.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/elegant-solutions-everyday-python-problems
作者:[Nina Zakharenko][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/nnja
[1]:https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types
[2]:https://rszalski.github.io/magicmethods/
[3]:https://docs.python.org/2/library/stdtypes.html#iterator.next
[4]:https://docs.python.org/3/library/stdtypes.html#generator-types
[5]:https://docs.python.org/3/reference/expressions.html#yieldexpr
[6]:https://docs.python.org/3/reference/datamodel.html#object.__iter__
[7]:http://nvie.com/posts/iterators-vs-generators/
[8]:https://docs.python.org/3/library/functions.html#getattr
[9]:https://docs.python.org/3/library/functools.html#functools.partial
[10]:https://github.com/mozilla/agithub
[11]:https://github.com/mozilla/agithub/blob/master/agithub/GitHub.py
[12]:https://github.com/settings/tokens
[13]:https://developer.github.com/v3/repos/#list-your-repositories
[14]:https://github.com/mozilla/agithub/blob/dbf7014e2504333c58a39153aa11bbbdd080f6ac/agithub/base.py#L30-L58
[15]:https://github.com/mozilla/agithub/blob/dbf7014e2504333c58a39153aa11bbbdd080f6ac/agithub/base.py#L60-L100
[16]:https://github.com/mozilla/agithub/blob/dbf7014e2504333c58a39153aa11bbbdd080f6ac/agithub/base.py#L102-L231
[17]:http://nnja.io
[18]:https://github.com/requests/requests
[19]:https://github.com/pallets/flask
[20]:https://us.pycon.org/2018/schedule/presentation/164/
[21]:https://us.pycon.org/2018/

View File

@ -0,0 +1,87 @@
Easily Search And Install Google Web Fonts In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/Font-Finder-720x340.png)
**Font Finder** is a Rust implementation of the good old [**Typecatcher**][1], which is used to easily search and install Google web fonts from [**Google's font archive**][2]. It helps you to install hundreds of free and open source fonts on your Linux desktop. In case you're looking for beautiful fonts for your web projects, apps, and whatever else, Font Finder can easily get them for you. It is a free, open source GTK3 application written in the Rust programming language. Unlike Typecatcher, which is written in Python, Font Finder can filter fonts by category, has zero Python runtime dependencies, and has much better performance and resource consumption.
In this brief tutorial, we are going to see how to install and use Font Finder in Linux.
### Install Font Finder
Since Font Finder is written in the Rust programming language, you need to install Rust on your system as described below.
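For example, one common route (an assumption on my part; use whatever method your distribution documents) is the official rustup installer:
```
$ curl https://sh.rustup.rs -sSf | sh
```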
After installing Rust, run the following command to install Font Finder:
```
$ cargo install fontfinder
```
Font Finder is also available as a [**flatpak app**][3]. First install Flatpak on your system, for instance as shown below.
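On Ubuntu, that boils down to something like the following (hypothetical for other distributions; check your own package manager):
```
$ sudo apt install flatpak
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```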
Then, install Font Finder using this command:
```
$ flatpak install flathub io.github.mmstick.FontFinder
```
### Search And Install Google Web Fonts In Linux Using Font Finder
You can launch Font Finder either from the application launcher or by running the following command:
```
$ flatpak run io.github.mmstick.FontFinder
```
This is what the default Font Finder interface looks like.
![][5]
As you can see, the Font Finder user interface is very simple. All Google web fonts are listed in the left pane, and a preview of the selected font is shown in the right pane. You can type any words into the preview box to see how they will look in the selected font. There is also a search box at the top left which allows you to quickly search for a font of your choice.
By default, Font Finder displays all types of fonts. You can, however, filter the fonts by category using the drop-down box above the search box.
![][6]
To install a font, just choose it and click the **Install** button on the top.
![][7]
You can test the newly installed fonts in any text processing applications.
![][8]
Similarly, to remove a font, just choose it from the Font Finder dashboard and click the **Uninstall** button. It's that simple!
The Settings button (the gear button) on the top left corner provides the option to switch to dark preview.
![][9]
As you can see, Font Finder is very simple and does the job exactly as advertised on its home page. If you're looking for an application to install Google web fonts, Font Finder is one such application.
And, that's all for now. Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/font-finder-easily-search-and-install-google-web-fonts-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/install-google-web-fonts-ubuntu/
[2]:https://fonts.google.com/
[3]:https://flathub.org/apps/details/io.github.mmstick.FontFinder
[4]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[5]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-1.png
[6]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-2.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-3.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-5.png
[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/font-finder-4.png
View File
@ -0,0 +1,136 @@
PCGen: An easy way to generate RPG characters
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_gaming.png?itok=poe7HXJ7)
Do you remember the first time you built a role-playing game (RPG) character? It was exciting and full of possibility, and your imagination ran wild. If you're an avid gamer, it was probably a major milestone for you.
But do you also remember struggling to decipher an empty character sheet and what you were supposed to write down in each box? Remember poring over the core rulebook, cross-referencing one table with a class write-up, the spellbook with your chosen school of magic, and skills to your race?
Whether you thought it was fun or perplexing (or both), if you play RPGs, the process of building and tracking a character is probably as natural to you now as using a computer.
That's an appropriate analogy, because, as we all know, character sheets have been computerized. It's a sensible match; computers are great for tracking information that changes frequently. They certainly handle it a lot better than scratches on paper worn thin by repeated erasing and scribbling and more erasing.
Sure, you could build custom spreadsheets in [LibreOffice][1], but then again you could also try [PCGen][2], a Java-based application that makes character creation and maintenance sublimely simple without taking the fun out of either. While it doesn't have a mobile version, there is a [PCGen viewer][3] for Android so you can access your build whenever you need to.
### Downloading and installing
PCGen is a Java application, so it runs on anything that has Java installed. This isn't quite the same thing as Java in your web browser; PCGen is a downloadable application that runs locally on your computer. You likely already have Java installed; if not, download and install it from your distribution's repository. If you're not sure whether you have it installed, you can [download PCGen][4] first, try to run it, and install Java if it fails to run.
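For instance, on a Fedora-style system, installing Java might look like this (an assumption on my part; the package name varies by distribution):
```
$ sudo dnf install java-1.8.0-openjdk
```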
Since PCGen is a Java application, you don't have to install it after you download it (because you've already got the Java runtime installed). The application should just run if you double-click the `pcgen.jar` file, but if your computer hasn't been told what to do with a Java app yet, you may need to tell it explicitly to run in Java. You usually do this by right-clicking and specifying what application to open the file in. The application you want, of course, is Java or, if you're asked to input the application launch command manually, `java -jar`.
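As a quick sketch (assuming you unzipped PCGen into `~/bin/pcgen`, as in the steps below), launching from a terminal looks like:
```
$ cd ~/bin/pcgen
$ java -jar pcgen.jar
```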
Linux and BSD users can customize this experience:
1. Download PCGen to a directory, such as `/opt` or `~/bin`.
2. Unzip the archive with `unzip pcgen-x.yy.zz-full.zip`.
3. Download a suitable icon (e.g., `wget https://openclipart.org/image/300px/svg_to_png/285672/d20-blank.png -O pcgen/d20.png`).
4. Create a file called `pcgen.desktop` in your `~/.local/share/applications` directory. Open it in a text editor and type the following, adjusting as appropriate:
```
[Desktop Entry]
Version=1.0
Type=Application
Name=PCGen
Exec="/home/your-username/bin/pcgen/pcgen.sh"
Encoding=UTF-8
Icon=/home/your-username/bin/pcgen/d20.png
```
Now you can launch PCGen from your system's application menu as you would any other application.
### Player agency
Many hours of my childhood were spent poring over my friends' D&D Player Handbooks, rolling up characters that I'd never play (thanks to the infamous "satanic panic," I wasn't allowed to play the game). What I learned from this is that creating a character for an RPG is a kind of mini-game in itself. There are rules to follow, important choices to make, and ultimately a narrative needed to make it all come together.
A new player might think it's a good idea to allow an application to do a build for them, but most experienced players probably agree that the best way to learn is by doing. And besides, letting something build your character would rob you of the mini-game that is character building. If an application is nothing more than a pre-gen factory, one of the most important parts of being a player is removed from the game, and nobody wants that.
On the other hand, nobody wants the character building process to discourage new players.
PCGen manages to strike a perfect balance between guiding you through a character build and staying out of your way as you tinker. Primarily it does this by using an unobtrusive alert system that keeps you updated about any required tasks left in your character build. It's helpful without grabbing the steering wheel out of your hands to take over completely.
![PCGen to-do list][6]
No annoying Clippy, but plenty of helpful hints
### Getting started
PCGen essentially has two modes: the character build and the character sheet. When you launch it, PCGen first asks you to choose the game system you're building for.
![System selection][8]
Selecting your game system
The included systems are OGL (Open Game License) games, including D&D 5e (as well as the 3 and 3.5 editions), Pathfinder, and Fantasy Craft. Better still, PCGen comes preloaded with all manner of add-on material, so not only can you design characters from advanced and third-party modules, the dungeon master (DM) can even create stats for monsters and villains.
Once you've double-clicked the system you're using, you're presented with a helpful screen letting you either load an existing build you have saved or start building a new one. Each new character gets its own tab in PCGen, so if you want to build a whole party or if a DM wants to track a whole hoard of monsters, it's easy to load up a cast of characters and have them at the ready.
Starting from the top left, your character build starts with the basics: choosing a name, gender, and alignment. PCGen includes a random-name generator with lots of knobs and switches to adjust for etymology (real and fantasy), race, and gender.
### Rolling for abilities
When it's time to roll for ability scores, PCGen has lots of options. It defaults to manual entry. You roll physical dice and enter the numbers.
Alternately, you can let PCGen roll for you, and you can set the rolling style. You can have PCGen roll 4d6 and drop the lowest, roll 3d6, or roll 2d6 and add 6.
You can also choose to use a point-purchasing mode with a budget of anything between 15 and 25. This method might appeal to players coming from video games, many of which use this method to allocate attributes.
### Classes and levels
Once you pick a class and add your first level, your attributes are locked in and you get a budget for all remaining class- and level-dependent aspects of your character. What exactly those are, of course, depends on what system you're playing, but PCGen keeps you updated on any remaining required tasks as you go.
There are a lot of tabs in PCGen, and it can sometimes seem just as overwhelming as staring at a physical 300-page Player's Handbook, but as long as you follow the tips, PCGen can keep you on the straight and narrow.
### Character sheets
As if building your character wasn't enough fun, the most satisfying step is yet to come: seeing all your choices laid out in a proper character-sheet format.
The final tab of PCGen is the Character Sheet tab, and it formats your character's attributes into a layout of your choice. The default is a standard, generic layout. It's easily printable and easy to understand.
There are several variations and addendums available, too. For spellcasters, there's a spellbook sheet that lists available spells, and there are several alternate layouts, some optimized for screen and others for print.
If you're using PCGen as you play, you can add temporary effects to your character sheet to let PCGen adjust your attributes accordingly.
If you export your character and import it into the PCGen Importer on your Android phone or tablet, you can use your digital character sheet to track spells and current health and make temporary adjustments to armour class and other attribute modifiers.
![Exported character sheet on Android][10]
Exported character sheet on Android
### PCGen's advantages
The market is full of digital character-sheet trackers, but PCGen stands out for several reasons. First, it's cross-platform. That may or may not mean much to you, because we tend to standardize our workflow to devices that play nice with one another. In my gaming groups, though, we have Linux users and Windows users, and we have people who want to work off a mobile device and others who prefer a proper computer. Choosing an application that everyone can use and become comfortable with makes the process of updating stats more efficient.
Second, because PCGen is open source, all your character data remains in an open and parsable format. As an XML fan, I find it invaluable to get my character data as an XML dump, and it's doubly useful to me as a DM as I prepare for an upcoming adventure and need custom monster stat blocks to insert in my notes.
On a related note, knowing that PCGen will always be available regardless of a player's financial circumstance is also nice. When I changed jobs a year ago, I was lucky to go from one job to the next without interruption in income. In one of my gaming groups, however, two members have been made redundant recently and a third is a university student without much disposable income. The fact that we don't have to worry about monthly membership fees or whether we can all afford to invest in software that is, at the end of the day, a minor convenience over pen and paper gives us confidence in our choice of using digital tools.
PCGen's open source license also lends itself to rapid development and expansion and ensured maintenance. The reason there's a mobile edition is that the code and data are open. Who knows what's next?
While PCGen's default datasets revolve, naturally, around OGL content (because the OGL is open and allows content to be freely redistributed), since the application is also open, you can add whatever data you want. It's not necessarily trivial, but games like Open Legend, Dungeon Delvers, and other openly licensed games are ripe for porting to PCGen.
### Pen and paper
The pen-and-paper tradition remains important. PCGen strikes a healthy balance between the desire to make stats accounting more convenient by leveraging the latest technology while maintaining the joy of manually updating characters.
Whether you're an old-school gamer who banishes digital devices from your table or a progressive gamer who embraces technology, it's fair to say most of us have encountered a few times when a game has come to a halt because of a phone or tablet. The fact is, everyone has a mobile device now (even me, even if it's only because my job pays for it), so they will make their way onto your game table. I have found that encouraging game-relevant information to be on screens has helped focus players on the game; I'd rather my players stare at their character sheets and game reference documents than surf social media sites on their devices.
PCGen, in my experience, is the most true-to-form digital character sheet available. It allows for user control, it offers useful guidance as needed, and it's as close to pen-and-paper convenience as possible. Take a look at it, open gamers!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/pcgen-rpg-character-generator
作者:[Seth Kenlon][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/seth
[1]:http://libreoffice.org
[2]:http://pcgen.org
[3]:https://play.google.com/store/apps/details?id=com.dysfunctional.apps.pcgencharactersheet
[4]:http://pcgen.org/download/
[5]:/file/394491
[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pcgen_tip.jpg?itok=GXOz_OJ_ (PCGen to-do list)
[7]:/file/394486
[8]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pcgen_sys.jpg?itok=Zn0_9hkQ (System selection)
[9]:/file/394481
[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pcgen_screen.jpg?itok=4V6AZPud (Exported character sheet on Android)
View File
@ -0,0 +1,82 @@
translating---geekpi
Reset a lost root password in under 5 minutes
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum)
A system administrator can easily reset passwords for users who have forgotten theirs. But what happens if the system administrator forgets the root password, or leaves the company? This guide will show you how to reset a lost or forgotten root password on a Red Hat-compatible system, including Fedora and CentOS, in less than 5 minutes.
Please note, if the entire system hard disk has been encrypted with LUKS, you would need to provide the LUKS password when prompted. Also, this procedure is applicable to systems running systemd, which has been the default init system since Fedora 15, CentOS 7, and Red Hat Enterprise Linux 7.0.
First, you need to interrupt the boot process, so you'll need to turn the system on or restart it if it's already powered on. The first step is tricky because the GRUB menu tends to flash very quickly on the screen. You may need to try this a few times until you are able to do it.
Press **e** on your keyboard when you see this screen:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub0.png?itok=cz9nk5BT)
If you've done this correctly, you should see a screen similar to this one:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub1.png?itok=3ZY5uiGq)
Use your arrow keys to move to the Linux16 line:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub2_0.png?itok=8epRyqOl)
Using your **del** key or your **backspace** key, remove `rhgb quiet` and replace with the following:
`rd.break enforcing=0`
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub3.png?itok=JDdMXnUb)
Setting `enforcing=0` will allow you to avoid performing a complete system SELinux relabeling. Once the system is rebooted, you'll only have to restore the correct SELinux context for the `/etc/shadow` file. I'll show you how to do this too.
Press **Ctrl-x** to start.
**The system will now be in emergency mode.**
Remount the hard drive with read-write access:
```
# mount -o remount,rw /sysroot
```
Run `chroot` to access the system:
```
# chroot /sysroot
```
You can now change the root password:
```
# passwd
```
Type the new root password twice when prompted. If you are successful, you should see a message that reads "**all authentication tokens updated successfully**."
Type **exit** twice to reboot the system.
Log in as root and restore the SELinux label to the `/etc/shadow` file.
```
# restorecon -v /etc/shadow
```
Turn SELinux back to enforcing mode:
```
# setenforce 1
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/reset-lost-root-password
作者:[Curt Warfield][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rcurtiswarfield
View File
@ -0,0 +1,100 @@
How To Use Vim Editor To Input Text Anywhere
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-720x340.png)
Howdy Vim users! Today, I have some good news for all of you. Say hello to **Vim-anywhere**, a simple script that allows you to use the Vim editor to input text anywhere on your Linux box. That means you can simply invoke your favorite Vim editor, type whatever you want, and paste the text into any application or website. The text will be available in your clipboard until you restart your system. This utility is absolutely useful for those who love to use the Vim keybindings in non-Vim environments.
### Install Vim-anywhere in Linux
The Vim-anywhere utility will work on any GNOME-based Linux distribution (or derivatives). Also, make sure you have installed the following prerequisites.
* Curl
* Git
* gVim
* xclip
For instance, you can install those utilities on Ubuntu as shown below.
```
$ sudo apt install curl git vim-gnome xclip
```
Then, run the following command to install Vim-anywhere:
```
$ curl -fsSL https://raw.github.com/cknadler/vim-anywhere/master/install | bash
```
Vim-anywhere has been installed. Now let us see how to use it.
### Use Vim Editor To Input Text Anywhere
Let us say you need to create a word document, but you're much more comfortable using the Vim editor than LibreOffice Writer. No problem, this is where Vim-anywhere comes in handy. It automates the entire process. It simply invokes the Vim editor, so you can write whatever you want in it and paste it into the .doc file.
Let me show you an example. Open LibreOffice Writer or any graphical text editor of your choice. Then, open Vim-anywhere. To do so, simply press **CTRL+ALT+V**. It will open the gVim editor. Press **i** to switch to insert mode and input the text. Once done, save and close it by typing **:wq**.
![][2]
The text will be available in the clipboard until you restart the system. After you close the editor, your previous application is refocused. Just press **CTRL+V** to paste the text into it.
![][3]
It's just an example. You can even use Vim-anywhere to write something in an annoying web form or any other application. Once Vim-anywhere is invoked, it will open a buffer. Close it, and its contents are automatically copied to your clipboard and your previous application is refocused.
The vim-anywhere utility will create a temporary file in **/tmp/vim-anywhere** when invoked. These temporary files stick around until you restart your system, giving you a temporary history.
```
$ ls /tmp/vim-anywhere
```
You can re-open your most recent file using command:
```
$ vim $( ls /tmp/vim-anywhere | sort -r | head -n 1 )
```
**Update Vim-anywhere**
Run the following command to update Vim-anywhere:
```
$ ~/.vim-anywhere/update
```
**Change the keyboard shortcut**
The default keybinding to invoke Vim-anywhere is CTRL+ALT+V. You can change it to any custom keybinding using the gconf tool.
```
$ gconftool -t str --set /desktop/gnome/keybindings/vim-anywhere/binding <custom binding>
```
**Uninstall Vim-anywhere**
Some of you might think that opening the Vim editor each time to input text, and then pasting it back into another application, is pointless and completely unnecessary.
If you don't find this utility useful, simply uninstall it using this command:
```
$ ~/.vim-anywhere/uninstall
```
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-use-vim-editor-to-input-text-anywhere/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-1-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/vim-anywhere-2.png
View File
@ -0,0 +1,163 @@
Customizing your text colors on the Linux command line
======
![](https://images.idgesg.net/images/article/2018/05/numbers-100756457-large.jpg)
If you spend much time on the Linux command line (and you probably wouldn't be reading this if you didn't), you've undoubtedly noticed that the ls command displays your files in a number of different colors. You've probably also come to recognize some of the distinctions — directories appearing in one color, executable files in another, etc.
How that all happens and what options are available for you to change the color assignments might not be so obvious.
One way to get a big dose of data showing how these colors are assigned is to run the **dircolors** command. It will show you something like this:
```
$ dircolors
LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do
=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg
=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01
;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01
;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=0
1;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31
:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.
xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.t
bz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.j
ar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.a
lz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.r
z=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.
mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:
*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:
*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;3
5:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;
35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01
;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01
;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01
;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;3
5:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;3
5:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;3
6:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;
36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;
36:*.spx=00;36:*.xspf=00;36:';
export LS_COLORS
```
If you're good at parsing, you probably noticed that there's a pattern to this listing. Break it on the colons, and you'll see something like this:
```
$ dircolors | tr ":" "\n" | head -10
LS_COLORS='rs=0
di=01;34
ln=01;36
mh=00
pi=40;33
so=01;35
do=01;35
bd=40;33;01
cd=40;33;01
or=40;31;01
```
OK, so we have a pattern here — a series of definitions that have one to three numeric components. Let's home in on one of the definitions.
```
pi=40;33
```
The first question someone is likely to ask is "What is pi?" We're working with colors and file types here, so this clearly isn't the intriguing number that starts with 3.14. No, this "pi" stands for "pipe" — a particular type of file on Linux systems that makes it possible to send data from one program to another. So, let's set one up.
```
$ mknod /tmp/mypipe p
$ ls -l /tmp/mypipe
prw-rw-r-- 1 shs shs 0 May 1 14:00 /tmp/mypipe
```
When we look at our pipe and a couple other files in a terminal window, the color differences are quite obvious.
![font colors][1] Sandra Henry-Stocker
The "40" in the definition of pi (shown above) makes the file show up in the terminal (or PuTTY) window with a black background. The 33 makes the font color orange. Pipes are special files, and this special handling makes them stand out in a directory listing.
The **bd** and **cd** definitions are identical to each other (40;33;01) and have an extra setting. The settings cause block (bd) and character (cd) devices to be displayed with a black background, an orange font, and one other effect — the characters will be in bold.
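You can preview any of these combinations without touching a file type. For example, this quick demo (my own, not part of the dircolors output) prints a string with the bd/cd styling:
```
$ echo -e "\e[40;33;01mblock/character device style\e[0m"
```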
The following list shows the color and font assignments that are made by **file type** :
```
setting file type
======= =========
rs=0 reset to no color
di=01;34 directory
ln=01;36 link
mh=00 multi-hard link
pi=40;33 pipe
so=01;35 socket
do=01;35 door
bd=40;33;01 block device
cd=40;33;01 character device
or=40;31;01 orphan
mi=00 missing file (target of a broken link)
su=37;41 setuid
sg=30;43 setgid
ca=30;41 file with capability
tw=30;42 directory with sticky bit and world writable
ow=34;42 directory that is world writable
st=37;44 directory with sticky bit
ex=01;32 executable
```
You may have noticed that in our **dircolors** command output, most of our definitions started with asterisks (e.g., *.wav=00;36). These define display attributes by **file extension** rather than file type. Here's a sampling:
```
$ dircolors | tr ":" "\n" | tail -10
*.mpc=00;36
*.ogg=00;36
*.ra=00;36
*.wav=00;36
*.oga=00;36
*.opus=00;36
*.spx=00;36
*.xspf=00;36
';
export LS_COLORS
```
These settings (all 00;36 in the listing above) would have these file names displaying in cyan. The available colors are shown below.
![all colors][2] Sandra Henry-Stocker
### How to change your settings
The colors and font changes described require that you use an alias for ls that turns on the color feature. This is usually the default on Linux systems and will look like this:
```
alias ls='ls --color=auto'
```
If you wanted to turn off font colors, you could run the **unalias ls** command and your file listings would then show in only the default font color.
You can alter your text colors by modifying your $LS_COLORS settings and exporting the modified setting:
```
$ export LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;...
```
NOTE: The command above is truncated.
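Rather than retyping the entire string, a lower-risk approach (my own suggestion, not from the dircolors documentation) is to append an override to the existing value, since a later entry for the same key generally wins:
```
$ export LS_COLORS="$LS_COLORS:di=01;33"    # directories now bold yellow
```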
If you want your modified text colors to be permanent, you would need to add your modified LS_COLORS definition to one of your startup files (e.g., .bashrc).
### More on command line text
You can find additional information on text colors in this [November 2016][3] post on NetworkWorld.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3269587/linux/customizing-your-text-colors-on-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://images.idgesg.net/images/article/2018/05/font-colors-100756483-large.jpg
[2]:https://images.techhive.com/images/article/2016/11/all-colors-100691990-large.jpg
[3]:https://www.networkworld.com/article/3138909/linux/coloring-your-world-with-ls-colors.html
View File
@ -0,0 +1,108 @@
Writing Systemd Services for Fun and Profit
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minetest.png?itok=Houi9zf9)
Let's say you want to run a games server, a server that runs [Minetest][1], a very cool and open source mining and crafting sandbox game. You want to set it up for your school or friends and have it running on a server in your living room. Because, you know, if that's good enough for the kernel mailing list admins, then it's good enough for you.
However, you soon realize it is a chore to remember to run the server every time you switch your computer on and a nuisance to power down safely when you want to switch off.
First, you have to run the server as a daemon:
```
minetest --server &
```
Take note of the PID (you'll need it later).
Then you have to tell your friends the server is up by emailing or messaging them. After that you can start playing.
Suddenly it is 3 am. Time to call it a day! But you can't just switch off your machine and go to bed. First, you have to tell the other players the server is coming down, locate the bit of paper where you wrote the PID we were talking about earlier, and kill the Minetest server gracefully...
```
kill -2 <PID>
```
...because just pulling the plug is a great way to end up with corrupted files. Then and only then can you power down your computer.
There must be a way to make this easier.
### Systemd Services to the Rescue
Let's start off by making a systemd service you can run (manually) as a regular user and build up from there.
Services you can run without admin privileges live in _~/.config/systemd/user/_ , so start by creating that directory:
```
cd
mkdir -p ~/.config/systemd/user/
```
There are several types of systemd _units_ (the formal name of systemd scripts), such as _timers_ , _paths_ , and so on; but what you want is a service. Create a file in _~/.config/systemd/user/_ called _minetest.service_ and open it with your text editor and type the following into it:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
ExecStart= /usr/games/minetest --server
```
Notice how units have different sections: The `[Unit]` section is mainly informative. It contains information for users describing what the unit is and where you can read more about it.
The meat of your script is in the `[Service]` section. Here you start by stating what kind of service it is using the `Type` directive. [There are several types][2] of service. If, for example, the process you run first sets up an environment and then calls in another process (which is the main process) and then exits, you would use the `forking` type; if you needed to block the execution of other units until the process in your unit finished, you would use `oneshot`; and so on.
None of the above is the case for the Minetest server, however. You want to start the server, make it go to the background, and move on. This is what the `simple` type does.
Next up is the `ExecStart` directive. This directive tells systemd what program to run. In this case, you are going to run `minetest` as a headless server. You can add options to your executables as shown above, but you can't chain a bunch of Bash commands together. A line like:
```
ExecStart= lsmod | grep nvidia > videodrive.txt
```
would not work. If you need to chain Bash commands, it is best to wrap them in a script and execute that.
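A hypothetical wrapper (the path and filename are my own; adapt as needed) would look like this, with the unit's `ExecStart` then pointing at the script by its full path:
```
#!/bin/bash
# /home/your-username/bin/videodrive.sh
# Chain the piped commands inside a script instead of in ExecStart.
lsmod | grep nvidia > /tmp/videodrive.txt
```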
Also notice that systemd requires you to give the full path to the program. So, even if you have to run something as simple as _ls_, you will have to use `ExecStart= /bin/ls`.
There is also an `ExecStop` directive that you can use to customize how your service should be terminated. We'll be talking about this directive more in part two, but for now you must know that, if you don't specify an `ExecStop`, systemd will take it on itself to finish the process as gracefully as possible.
There is a full list of directives in the _systemd.directives_ man page or, if you prefer, [you can check them out on the web][3] and click through to see what each does.
Although only 6 lines long, your _minetest.service_ is already a fully functional systemd unit. You can run it by executing
```
systemctl --user start minetest
```
And stop it with
```
systemctl --user stop minetest
```
The `--user` option tells systemd to look for the service in your own directories and to execute the service with your user's privileges.
That wraps up this part of our server management story. In part two, we'll go beyond starting and stopping and look at how to send emails to players, alerting them of the server's availability. Stay tuned.
Learn more about Linux through the free ["Introduction to Linux"][4] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.minetest.net/
[2]:http://man7.org/linux/man-pages/man5/systemd.service.5.html
[3]:http://man7.org/linux/man-pages/man7/systemd.directives.7.html
[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
View File
@ -0,0 +1,367 @@
zzupdate - Single Command To Upgrade Ubuntu
======
Ubuntu 18.04 is already out and has received good feedback from the community, as it is the most exciting release of Ubuntu in years.
By default, Ubuntu and its derivatives can be upgraded from one version to another using standard commands, which is the official and recommended way to upgrade the system to the latest version.
### Ubuntu 18.04 Features/Highlights
This release contains a vast number of improvements and features, and I have picked out only the major ones. Navigate to the [Ubuntu 18.04 official][1] release page if you want more detailed release information.
* It ships with Linux kernel 4.15, which delivers new features inherited from upstream.
* It features the latest GNOME 3.28.
* It offers a minimal install option similar to RHEL's, which allows users to install a basic desktop environment with a web browser and core system utilities.
* For new installs, a swap file will be used by default instead of a swap partition.
* You can enable Livepatch to install Kernel updates without rebooting.
* Laptops will automatically suspend after 20 minutes of inactivity while on battery power.
* 32-bit installer images are no longer provided for Ubuntu Desktop
**Note :**
1) Don't forget to back up your important/valuable data. If something goes wrong, you can install freshly and restore the data.
2) The upgrade will take time depending on your Internet connection and the applications you have installed.
### What Is zzupdate?
We can upgrade an Ubuntu PC/server from one version to another with just a single command using the [zzupdate][2] utility. It's a free and open source utility, and it doesn't require any scripting knowledge because it's a purely config-file-driven script.
The utility ships two shell scripts that do the work. The provided setup.sh auto-installs/updates the code and makes the script available as a new, simple shell command (zzupdate). zzupdate.sh then performs the actual upgrade from one version to the next available one.
### How To Install zzupdate?
To install zzupdate, just execute the following command.
```
$ curl -s https://raw.githubusercontent.com/TurboLabIt/zzupdate/master/setup.sh | sudo sh
.
.
Installing...
-------------
Cloning into 'zzupdate'...
remote: Counting objects: 57, done.
remote: Total 57 (delta 0), reused 0 (delta 0), pack-reused 57
Unpacking objects: 100% (57/57), done.
Checking connectivity... done.
Already up-to-date.
Setup completed!
----------------
See https://github.com/TurboLabIt/zzupdate for the quickstart guide.
```
To upgrade an Ubuntu system from one version to another, you don't have to run multiple commands, and there is no need to initiate the reboot yourself. Just fire the zzupdate command below, sit back, and it will take care of the rest.
Make a note: when you are upgrading a remote system, I would advise you to use one of the utilities below, because they will help you reconnect the session in case of a disconnection.
**Suggested Read :** [How To Keep A Process/Command Running After Disconnecting SSH Session][3]
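For instance, tmux (one such utility; GNU screen works just as well) keeps the upgrade session alive across disconnects:
```
$ tmux new -s upgrade       # start a named session on the remote host
$ sudo zzupdate             # run the upgrade inside it
$ tmux attach -t upgrade    # reattach later if the connection drops
```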
### How To Configure zzupdate [optional]
By default, zzupdate works out of the box and nothing needs to be configured. Configuration is optional, and if you do want to adjust something, you can. To do so, copy the provided sample configuration file `zzupdate.default.conf` to your own `zzupdate.conf` and set your preferences.
```
$ sudo cp /usr/local/turbolab.it/zzupdate/zzupdate.default.conf /etc/turbolab.it/zzupdate.conf
```
Open the file and the default values are below.
```
$ sudo nano /etc/turbolab.it/zzupdate.conf
REBOOT=1
REBOOT_TIMEOUT=15
VERSION_UPGRADE=1
VERSION_UPGRADE_SILENT=0
COMPOSER_UPGRADE=1
SWITCH_PROMPT_TO_NORMAL=0
```
* **`REBOOT=1 :`** The system will automatically reboot once the upgrade is done.
* **`REBOOT_TIMEOUT=15 :`** The default timeout value for the reboot.
* **`VERSION_UPGRADE=1 :`** Performs the version upgrade from one release to the next.
* **`VERSION_UPGRADE_SILENT=0 :`** Keeps the version upgrade interactive; setting it to 1 makes the upgrade unattended.
* **`COMPOSER_UPGRADE=1 :`** Automatically upgrades Composer.
* **`SWITCH_PROMPT_TO_NORMAL=0 :`** If it's "0", zzupdate looks for the same kind of version upgrade: if you are running an LTS version, it will look for an LTS upgrade and not a normal release upgrade. If it's "1", it looks for the latest release whether you are running an LTS or a normal release.
I'm currently running Ubuntu 17.10; see the details:
```
$ cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=17.10
DISTRIB_CODENAME=artful
DISTRIB_DESCRIPTION="Ubuntu 17.10"
NAME="Ubuntu"
VERSION="17.10 (Artful Aardvark)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 17.10"
VERSION_ID="17.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=artful
UBUNTU_CODENAME=artful
```
To upgrade Ubuntu to the latest release, just execute the command below.
```
$ sudo zzupdate
O===========================================================O
zzupdate - Wed May 2 17:31:16 IST 2018
O===========================================================O
Self-update and update of other zzScript
----------------------------------------
.
.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Updating...
----------
Already up-to-date.
Setup completed!
----------------
See https://github.com/TurboLabIt/zzupdate for the quickstart guide.
Channel switching is disabled: using pre-existing setting
---------------------------------------------------------
Cleanup local cache
-------------------
Update available packages informations
--------------------------------------
Hit:1 https://download.docker.com/linux/ubuntu artful InRelease
Ign:2 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:3 http://security.ubuntu.com/ubuntu artful-security InRelease
Hit:4 http://in.archive.ubuntu.com/ubuntu artful InRelease
Hit:5 http://dl.google.com/linux/chrome/deb stable Release
Hit:6 http://in.archive.ubuntu.com/ubuntu artful-updates InRelease
Hit:7 http://in.archive.ubuntu.com/ubuntu artful-backports InRelease
Hit:9 http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful InRelease
Hit:10 http://ppa.launchpad.net/papirus/papirus/ubuntu artful InRelease
Hit:11 http://ppa.launchpad.net/twodopeshaggy/jarun/ubuntu artful InRelease
.
.
UPGRADE PACKAGES
----------------
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
The following packages were automatically installed and are no longer required:
.
.
Interactively upgrade to a new release, if any
----------------------------------------------
Reading cache
Checking package manager
Reading package lists... Done
Building dependency tree
Reading state information... Done
Ign http://dl.google.com/linux/chrome/deb stable InRelease
Hit https://download.docker.com/linux/ubuntu artful InRelease
Hit http://security.ubuntu.com/ubuntu artful-security InRelease
Hit http://dl.google.com/linux/chrome/deb stable Release
Hit http://in.archive.ubuntu.com/ubuntu artful InRelease
Hit http://in.archive.ubuntu.com/ubuntu artful-updates InRelease
Hit http://in.archive.ubuntu.com/ubuntu artful-backports InRelease
Hit http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful InRelease
Hit http://ppa.launchpad.net/papirus/papirus/ubuntu artful InRelease
Hit http://ppa.launchpad.net/twodopeshaggy/jarun/ubuntu artful InRelease
Fetched 0 B in 6s (0 B/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
```
We need to disable the `Third Party` repositories; hit the `Enter` button to move the upgrade forward.
```
Updating repository information
Third party sources disabled
Some third party entries in your sources.list were disabled. You can
re-enable them after the upgrade with the 'software-properties' tool
or your package manager.
To continue please press [ENTER]
.
.
Get:35 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe i386 Packages [2,180 B]
Get:36 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [1,644 B]
Fetched 38.2 MB in 6s (1,276 kB/s)
Checking package manager
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating the changes
Calculating the changes
```
zzupdate now starts downloading the `Ubuntu 18.04 LTS` packages; this will take a while depending on your Internet connection. It's time to have a cup of coffee.
```
Do you want to start the upgrade?
63 installed packages are no longer supported by Canonical. You can
still get support from the community.
4 packages are going to be removed. 175 new packages are going to be
installed. 1307 packages are going to be upgraded.
You have to download a total of 999 M. This download will take about
12 minutes with your connection.
Installing the upgrade can take several hours. Once the download has
finished, the process cannot be canceled.
Continue [yN] Details [d]y
Fetching
Get:1 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 base-files amd64 10.1ubuntu2 [58.2 kB]
Get:2 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 debianutils amd64 4.8.4 [85.7 kB]
Get:3 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 bash amd64 4.4.18-2ubuntu1 [614 kB]
Get:4 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 locales all 2.27-3ubuntu1 [3,612 kB]
.
.
Get:1477 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 liblouisutdml-bin amd64 2.7.0-1 [9,588 B]
Get:1478 http://in.archive.ubuntu.com/ubuntu bionic/universe amd64 libtbb2 amd64 2017~U7-8 [110 kB]
Get:1479 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 libyajl2 amd64 2.1.0-2build1 [20.0 kB]
Get:1480 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 usb-modeswitch amd64 2.5.2+repack0-2ubuntu1 [53.6 kB]
Get:1481 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 usb-modeswitch-data all 20170806-2 [30.7 kB]
Get:1482 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 xbrlapi amd64 5.5-4ubuntu2 [61.8 kB]
Fetched 999 MB in 6s (721 kB/s)
```
A few services need to be restarted while installing the new packages. Hit the `Yes` button, and it will automatically restart the required services.
```
Upgrading
Inhibiting until Ctrl+C is pressed...
Preconfiguring packages ...
Preconfiguring packages ...
Preconfiguring packages ...
Preconfiguring packages ...
(Reading database ... 441375 files and directories currently installed.)
Preparing to unpack .../base-files_10.1ubuntu2_amd64.deb ...
Warning: Stopping motd-news.service, but it can still be activated by:
motd-news.timer
Unpacking base-files (10.1ubuntu2) over (9.6ubuntu102) ...
Setting up base-files (10.1ubuntu2) ...
Installing new version of config file /etc/debian_version ...
Installing new version of config file /etc/issue ...
Installing new version of config file /etc/issue.net ...
Installing new version of config file /etc/lsb-release ...
motd-news.service is a disabled or a static unit, not starting it.
(Reading database ... 441376 files and directories currently installed.)
.
.
Progress: [ 80%]
Progress: [ 85%]
Progress: [ 90%]
Progress: [ 95%]
```
It's time to remove obsolete packages (which are no longer needed by the system). Hit `y` to remove them.
```
Searching for obsolete software
ing package lists... 97%
ding package lists... 98%
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reading state information... 23%
Reading state information... 47%
Reading state information... 71%
Reading state information... 94%
Reading state information... Done
Remove obsolete packages?
88 packages are going to be removed.
Continue [yN] Details [d]y
.
.
.
done
Removing perlmagick (8:6.9.7.4+dfsg-16ubuntu6) ...
Removing snapd-login-service (1.23-0ubuntu1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for man-db (2.8.3-2) ...
Processing triggers for dbus (1.12.2-1ubuntu1) ...
Fetched 0 B in 0s (0 B/s)
```
The upgrade has completed successfully, and the system needs to restart. Hit `y` to restart the system.
```
System upgrade is complete.
Restart required
To finish the upgrade, a restart is required.
If you select 'y' the system will be restarted.
Continue [yN]y
```
**`Note :`** A few times, it will ask you to confirm configuration file replacements before the installation can move forward.
See the upgraded system details:
```
$ cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"
NAME="Ubuntu"
VERSION="18.04 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/zzupdate-single-command-to-upgrade-ubuntu-18-04/
作者:[PRAKASH SUBRAMANIAN][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/prakash/
[1]:https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes
[2]:https://github.com/TurboLabIt/zzupdate
[3]:https://www.2daygeek.com/how-to-keep-a-process-command-running-after-disconnecting-ssh-session/
View File
@ -0,0 +1,135 @@
How to build container images with Buildah
======
![](https://fedoramagazine.org/wp-content/uploads/2018/04/buildah-816x345.png)
Project Atomic, through its efforts on the Open Container Initiative (OCI), has created a great tool called [Buildah][1]. Buildah helps with creating, building, and updating container images, supporting Docker-formatted images as well as OCI-compliant images.
Buildah handles building container images without the need to have a full container runtime or daemon installed. This particularly shines for setting up a continuous integration and continuous delivery pipeline for building containers.
Buildah makes the container's filesystem directly available to the build host, meaning that the build tooling is available on the host and not needed in the container image, keeping the build faster and the image smaller and safer. There are Buildah packages for CentOS, Fedora, and Debian.
### Installing Buildah
Since Fedora 26, Buildah can be installed using dnf.
```
$ sudo dnf install buildah -y
```
The current version of buildah is 0.16, which can be displayed by the following command.
```
$ buildah --version
```
### Basic commands
The first step needed to build a container image is to get a base image; in a Dockerfile, this is done by the FROM statement. Buildah handles this in a similar way.
```
$ sudo buildah from fedora
```
This command pulls the Fedora-based image and stores it on the host. It is possible to inspect the images available on the host by running the following.
```
$ sudo buildah images
IMAGE ID IMAGE NAME CREATED AT SIZE
9110ae7f579f docker.io/library/fedora:latest Mar 7, 2018 20:51 234.7 MB
```
After pulling the base image, a running container instance of this image is available; this is a “working-container”.
The following command displays the running containers.
```
$ sudo buildah containers
CONTAINER ID BUILDER IMAGE ID IMAGE NAME
CONTAINER NAME
6112db586ab9 * 9110ae7f579f docker.io/library/fedora:latest fedora-working-container
```
Buildah also provides a very useful command to remove all the current working containers at once.
```
$ sudo buildah rm --all
```
The full list of commands is available using the help option.
```
$ buildah --help
```
### Building an Apache web server container image
Let's see how to use Buildah to install an Apache web server on a Fedora base image, then copy a custom index.html to be served by the server.
First, let's create the custom index.html.
```
$ echo "Hello Fedora Magazine !!!" > index.html
```
Then install the httpd package inside the running container.
```
$ sudo buildah from fedora
$ sudo buildah run fedora-working-container dnf install httpd -y
```
Let's copy index.html to /var/www/html/.
```
$ sudo buildah copy fedora-working-container index.html /var/www/html/index.html
```
Then configure the container entrypoint to start httpd.
```
$ sudo buildah config --entrypoint "/usr/sbin/httpd -DFOREGROUND" fedora-working-container
```
Now to make the “working-container” available, the commit command saves the container to an image.
```
$ sudo buildah commit fedora-working-container hello-fedora-magazine
```
The hello-fedora-magazine image is now available, and can be pushed to a registry to be used.
```
$ sudo buildah images
IMAGE ID IMAGE NAME CREATED
AT SIZE
9110ae7f579f docker.io/library/fedora:latest
Mar 7, 2018 22:51 234.7 MB
49bd5ec5be71 docker.io/library/hello-fedora-magazine:latest
Apr 27, 2018 11:01 427.7 MB
```
It is also possible to use Buildah to test this image by running the following steps.
```
$ sudo buildah from --name=hello-magazine docker.io/library/hello-fedora-magazine
$ sudo buildah run hello-magazine
```
Accessing <http://localhost> will display “Hello Fedora Magazine !!!“
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/daemon-less-container-management-buildah/
作者:[Ashutosh Sudhakar Bhakare][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/ashutoshbhakare/
[1]:https://github.com/projectatomic/buildah
View File
@ -1,47 +1,48 @@
#[递归:梦中梦][1]
递归是很神奇的,但是在大多数的编程类书藉中对递归讲解的并不好。它们只是给你展示一个递归阶乘的实现,然后警告你递归运行的很慢,并且还有可能因为栈缓冲区溢出而崩溃。“你可以将头伸进微波炉中去烘干你的头发,但是需要警惕颅内高压以及让你的头发生爆炸,或者你可以使用毛巾来擦干头发。”这就是人们不愿意使用递归的原因。这是很糟糕的,因为在算法中,递归是最强大的。
[递归:梦中梦][1]
======
**递归**是很神奇的,但是在大多数的编程类书藉中对递归讲解的并不好。它们只是给你展示一个递归阶乘的实现,然后警告你递归运行的很慢,并且还有可能因为栈缓冲区溢出而崩溃。“你可以将头伸进微波炉中去烘干你的头发,但是需要警惕颅内高压以及让你的头发生爆炸,或者你可以使用毛巾来擦干头发。”难怪人们不愿意使用递归。但这种建议是很糟糕的,因为在算法中,递归是一个非常强大的观点。
我们来看一下这个经典的递归阶乘:
递归阶乘 - factorial.c
```
#include <stdio.h>
int factorial(int n)
{
int previous = 0xdeadbeef;
if (n == 0 || n == 1) {
return 1;
}
int previous = 0xdeadbeef;
if (n == 0 || n == 1) {
return 1;
}
previous = factorial(n-1);
return n * previous;
previous = factorial(n-1);
return n * previous;
}
int main(int argc)
{
int answer = factorial(5);
printf("%d\n", answer);
int answer = factorial(5);
printf("%d\n", answer);
}
```
函数的目的是调用它自己,这在一开始是让人很难理解的。为了解具体的内容,当调用 `factorial(5)` 并且达到 `n == 1` 时,[在栈上][3] 究竟发生了什么?
函数调用自身的这个观点在一开始是让人很难理解的。为了让这个过程更形象具体,下图展示的是当调用 `factorial(5)` 并且达到 `n == 1`这行代码 时,[栈上][3] 端点的情况:
![](https://manybutfinite.com/img/stack/factorial.png)
每次调用 `factorial` 都生成一个新的 [栈帧][4]。这些栈帧的创建和 [销毁][5] 是递归慢于迭代的原因。在调用返回之前,累积的这些栈帧可能会耗尽栈空间,进而使你的程序崩溃。
每次调用 `factorial` 都生成一个新的 [栈帧][4]。这些栈帧的创建和 [销毁][5] 是使得递归版本的阶乘慢于其相应的迭代版本的原因。在调用返回之前,累积的这些栈帧可能会耗尽栈空间,进而使你的程序崩溃。
而这些担心经常是存在于理论上的。例如,对于每个 `factorial` 的栈帧 16 字节(这可能取决于栈排列以及其它因素)。如果在你的电脑上运行着现代的 x86 的 Linux 内核,一般情况下你拥有 8 GB 的栈空间,因此,`factorial` 最多可以被运行 ~512,000 次。这是一个 [巨大无比的结果][6],它相当于 8,971,833 比特,因此,栈空间根本就不是什么问题:一个极小的整数 - 甚至是一个 64 位的整数 - 在我们的栈空间被耗尽之前就早已经溢出了成千上万次了。
而这些担心经常是存在于理论上的。例如,对于每个 `factorial` 的栈帧占用 16 字节(这可能取决于栈排列以及其它因素)。如果在你的电脑上运行着现代的 x86 的 Linux 内核,一般情况下你拥有 8 GB 的栈空间,因此,`factorial` 程序中的 `n` 最多可以达到 512,000 左右。这是一个 [巨大无比的结果][6],它将花费 8,971,833 比特来表示这个结果,因此,栈空间根本就不是什么问题:一个极小的整数 - 甚至是一个 64 位的整数 - 在我们的栈空间被耗尽之前就早已经溢出了成千上万次了。
过一会儿我们再去看 CPU 的使用,现在,我们先从比特和字节回退一步,把递归看作一种通用技术。我们的阶乘算法总结为将整数 N、N-1、 … 1 推入到一个栈,然后将它们按相反的顺序相乘。实际上我们使用了程序调用栈来实现这一点,这是它的细节:我们在堆上分配一个栈并使用它。虽然调用栈具有特殊的特性,但是,你只是把它用作一种另外的数据结构。我希望示意图可以让你明白这一点。
过一会儿我们再去看 CPU 的使用,现在,我们先从比特和字节回退一步,把递归看作一种通用技术。我们的阶乘算法可归结为:将整数 N、N-1、 … 1 推入到一个栈,然后将它们按相反的顺序相乘。实际上我们使用了程序调用栈来实现这一点,这是它的细节:我们在堆上分配一个栈并使用它。虽然调用栈具有特殊的特性,但是它也只是额外的一种数据结构,你可以随意处置。我希望示意图可以让你明白这一点。
当你看到栈调用作为一种数据结构使用,有些事情将变得更加清晰明了:将那些整数堆积起来,然后再将它们相乘,这并不是一个好的想法。那是一种有缺陷的实现:就像你拿螺丝刀去钉钉子一样。相对更合理的是使用一个迭代过程去计算阶乘。
当你将栈调用视为一种数据结构,有些事情将变得更加清晰明了:将那些整数堆积起来,然后再将它们相乘,这并不是一个好的想法。那是一种有缺陷的实现:就像你拿螺丝刀去钉钉子一样。相对更合理的是使用一个迭代过程去计算阶乘。
但是,螺丝钉太多了,我们只能挑一个。有一个经典的面试题,在迷宫里有一只老鼠,你必须帮助这只老鼠找到一个奶酪。假设老鼠能够在迷宫中向左或者向右转弯。你该怎么去建模来解决这个问题?
就像现实生活中的很多问题一样,你可以将这个老鼠找奶酪的问题简化为一个图,一个二叉树的每个结点代表在迷宫中的一个位置。然后你可以让老鼠在任何可能的地方都左转,而当它进入一个死胡同时,再返回来右转。这是一个老鼠行走的 [迷宫示例][7]:
就像现实生活中的很多问题一样,你可以将这个老鼠找奶酪的问题简化为一个图,一个二叉树的每个结点代表在迷宫中的一个位置。然后你可以让老鼠在任何可能的地方都左转,而当它进入一个死胡同时,再回溯回去,再右转。这是一个老鼠行走的 [迷宫示例][7]:
![](https://manybutfinite.com/img/stack/mazeGraph.png)
@ -55,40 +56,40 @@ int main(int argc)
int explore(maze_t *node)
{
int found = 0;
int found = 0;
if (node == NULL)
{
return 0;
}
if (node->hasCheese){
return 1;// found cheese
}
if (node == NULL)
{
return 0;
}
if (node->hasCheese){
return 1;// found cheese
}
found = explore(node->left) || explore(node->right);
return found;
}
found = explore(node->left) || explore(node->right);
return found;
}
int main(int argc)
{
int found = explore(&maze);
}
int main(int argc)
{
int found = explore(&maze);
}
```
当我们在 `maze.c:13` 中找到奶酪时,栈的情况如下图所示。你也可以在 [GDB 输出][8] 中看到更详细的数据,它是使用 [命令][9] 采集的数据。
![](https://manybutfinite.com/img/stack/mazeCallStack.png)
它展示了递归的良好表现,因为这是一个适合使用递归的问题。而且这并不奇怪:当涉及到算法时,递归是一种使用较多的算法,而不是被排除在外的。当进行搜索时、当进行遍历树和其它数据结构时、当进行解析时、当需要排序时:它的用途无处不在。正如众所周知的 pi 或者 e它们在数学中像“神”一样的存在因为它们是宇宙万物的基础而递归也和它们一样只是它在计算的结构中。
它展示了递归的良好表现,因为这是一个适合使用递归的问题。而且这并不奇怪:当涉及到算法时,递归是规则,而不是例外。它出现在如下情景中:当进行搜索时、当进行遍历树和其它数据结构时、当进行解析时、当需要排序时:它无处不在。正如众所周知的 pi 或者 e它们在数学中像“神”一样的存在因为它们是宇宙万物的基础而递归也和它们一样只是它在计算的结构中。
Steven Skienna 的优秀著作 [算法设计指南][10] 的精彩之处在于,他通过“战争故事” 作为手段来诠释工作,以此来展示解决现实世界中的问题背后的算法。这是我所知道的拓展你的算法知识的最佳资源。另一个较好的做法是,去读 McCarthy 的 [LISP 上的原创论文][11]。递归在语言中既是它的名字也是它的基本原理。这篇论文既可读又有趣,在工作中能看到大师的作品是件让人兴奋的事情。
Steven Skienna 的优秀著作 [算法设计指南][10] 的精彩之处在于,他通过“战争故事”作为手段来诠释工作,以此来展示解决现实世界中的问题背后的算法。这是我所知道的拓展你的算法知识的最佳资源。另一个好的读物是 McCarthy 的 [关于 LISP 实现的原创论文][11]。递归既出现在论文的标题中,也是该语言的基本原理。这篇论文既可读又有趣,在工作中能看到大师的作品是件让人兴奋的事情。
回到迷宫问题上。虽然它在这里很难离开递归,但是并不意味着必须通过调用栈的方式来实现。你可以使用像 `RRLL` 这样的字符串去跟踪转向,然后,依据这个字符串去决定老鼠下一步的动作。或者你可以分配一些其它的东西来记录追寻奶酪的整个状态。你仍然是去实现一个递归的过程,但是需要你实现一个自己的数据结构。
那样似乎更复杂一些,因为调用栈实在太合适了。每个栈帧记录的不仅是当前节点,也记录那个节点上的计算状态(在这个案例中,我们是只走了左边,还是已经尝试过右边)。因此,代码就变得非常简单。然而,有时候我们却因为害怕栈溢出和担心性能而放弃这种优雅的算法。那是很愚蠢的!
正如我们所见,栈空间是非常大的,在耗尽栈空间之前往往会先遇到其它的限制。一方面可以通过检查问题的规模来确保它能够被安全地处理。而对 CPU 的担心,主要是由两个广为流传的有问题的示例所导致的哑阶乘dumb factorial和可怕的没有记忆化的 O(2^n) [Fibonacci 递归][12]。它们并不是栈递归算法的正确代表。
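顺带一提,在 Linux 上可以用下面的命令查看(以及在当前 shell 中调整)栈空间的限制,输出单位为 KB常见默认值为 8192即 8 MB具体数值因系统而异

```
$ ulimit -s
8192
$ ulimit -s 16384    # 示例:把当前 shell 的栈限制提高到 16 MB
```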
事实上栈操作是非常快的。通常,栈数据的偏移量是可以精确知道的,栈在 [缓存][13] 中是热点,并且有专门的指令来操作它。相比之下,使用自己定义的、在堆上分配的数据结构,其相关开销是很大的。经常能看到人们写出的实现比栈调用递归更复杂、性能却更差。最后,现代 CPU 的性能都是 [非常好的][14],而且一般 CPU 都不会是性能瓶颈所在。在打算为性能而牺牲简单性时要特别注意,并且一如既往地,先做[测量][15]。
下一篇文章将是探秘栈系列的最后一篇了,我们将了解尾调用、闭包、以及其它相关概念。然后,我们就该深入我们的老朋友—— Linux 内核了。感谢你的阅读!
@ -100,7 +101,7 @@ via:https://manybutfinite.com/post/recursion/
作者:[Gustavo Duarte][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[FSSlc](https://github.com/FSSlc)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,108 @@
ImageMagick 的一些高级图片查看技巧
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-photo-camera-green.png?itok=qiDqmXV1)
图片源于 [Internet Archive Book Images](https://www.flickr.com/photos/internetarchivebookimages/14759826206/in/photolist-ougY7b-owgz5y-otZ9UN-waBxfL-oeEpEf-xgRirT-oeMHfj-wPAvMd-ovZgsb-xhpXhp-x3QSRZ-oeJmKC-ovWeKt-waaNUJ-oeHPN7-wwMsfP-oeJGTK-ovZPKv-waJnTV-xDkxoc-owjyCW-oeRqJh-oew25u-oeFTm4-wLchfu-xtjJFN-oxYznR-oewBRV-owdP7k-owhW3X-oxXxRg-oevDEY-oeFjP1-w7ZB6f-x5ytS8-ow9C7j-xc6zgV-oeCpG1-oewNzY-w896SB-wwE3yA-wGNvCL-owavts-oevodT-xu9Lcr-oxZqZg-x5y4XV-w89d3n-x8h6fi-owbfiq),Opensource.com 修改,[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)协议
在我先前的[ImageMagick 入门:使用命令行来编辑图片](https://linux.cn/article-8851-1.html) 文章中,我展示了如何使用 ImageMagick 的菜单栏进行图片的编辑和变换风格。在这篇续文里,我将向你展示使用这个开源的图像编辑器来查看图片的额外方法。
### 别样的风格
在深入 ImageMagick 的高级图片查看技巧之前,我想先分享另一个使用 **convert** 达到的有趣但简单的效果。在上一篇文章中我已经详细地介绍了 **convert** 命令,这个技巧涉及该命令的 `edge` 和 `negate` 选项:
```
convert DSC_0027.JPG -edge 3 -negate edge3+negate.jpg
```
![在图片上使用 `edge` 和 `negate` 选项][3]
使用`edge` 和 `negate` 选项前后的图片对比
编辑后的图片让我更加喜爱,具体是因为如下因素:海的外观,作为前景和背景的植被,特别是太阳及其在海上的反射,最后是天空。
### 使用 `display` 来查看一系列图片
假如你跟我一样是个命令行用户,你就知道 shell 为复杂任务提供了更多的灵活性和快捷方法。下面我将展示一个例子来佐证这个观点。ImageMagick 的 **display** 命令可以克服我在 GNOME 桌面上使用 [Shotwell][4] 图像管理器导入图片时遇到的问题。
Shotwell 会根据每张导入图片的 [Exif][5] 数据,创建以图片被生成或者拍摄时的日期为名称的目录结构。最终的效果是最上层的目录以年命名,接着的子目录是以月命名 (01, 02, 03 等等),然后是以每月的日期命名的子目录。我喜欢这种结构,因为当我想根据图片被创建或者拍摄时的日期来查找它们时将会非常方便。
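以我的照片库为例,导入后的目录结构大致如下(其中的日期仅为示例):

```
2017/
├── 01/
│   ├── 08/
│   └── 21/
├── 02/
│   └── 14/
└── ...
```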
但这种结构也并不是非常完美的,当我想查看最近几个月或者最近一年的所有图片时就会很麻烦。使用常规的图片查看器,我将不停地在不同层级的目录间跳转,但 ImageMagick 的 **display** 命令可以使得查看更加简单。例如,假如我想查看最近一年的图片,我便可以在命令行中键入下面的 **display** 命令:
```
display -resize 35% 2017/*/*/*.JPG
```
我可以匹配一年中的每一月和每一天。
现在假如我想查看某张图片,但我不确定我是在 2016 年的上半年还是在 2017 的上半年拍摄的,那么我便可以使用下面的命令来找到它:
```
display -resize 35% 201[6-7]/0[1-6]/*/*.JPG
```
限制查看的图片拍摄于 2016 和 2017 年的一月到六月
### 使用 `montage` 来查看图片的缩略图
假如现在我要查找一张我想要编辑的图片,使用 **display** 的一个问题是它只会显示每张图片的文件名,而不显示其在目录结构中的位置,所以想要找到那张图片并不容易。另外,假如我偶然在下载图片之后清除了相机内存中的这些图片,那么下次拍摄的照片又会从 **DSC_0001.jpg** 开始重新编号;这样当使用 **display** 来展示一整年的图片时,要在这 12 个月的同名图片中找到目标,将会花费很长时间。
这时 **montage** 命令便可以派上用场了。它可以将一系列的图片放在一张图片中,这样就会非常有用。例如可以使用下面的命令来完成上面的任务:
```
montage -label %d/%f -title 2017 -tile 5x -resize 10% -geometry +4+4 2017/0[1-4]/*/*.JPG 2017JanApr.jpg
```
从左到右,这个命令以标签开头,标签的形式是包含文件名( **%f** )和以 **/** 分隔的目录( **%d** )结构,接着这个命令以目录的名称2017来作为标题然后将图片排成 5 列,每个图片缩放为 10% (这个参数可以很好地匹配我的屏幕)。`geometry` 的设定将在每张图片的四周留白,最后再接上要处理的对象和一个合适的名称( **2017JanApr.jpg** )。现在图片 **2017JanApr.jpg** 便可以成为一个索引,使得我可以不时地使用它来查看这个时期的所有图片。
### 注意内存消耗
你可能会好奇为什么我在上面的合成图中只特别指定了为期 4 个月(从一月到四月)的图片。因为 **montage** 将会消耗大量内存,所以你需要多加注意。我的相机产生的图片每张大约有 2.5MB我发现我的系统可以很轻松地处理 60 张图片。但一旦图片增加到 80 张,如果此时还有另外的程序(例如 Firefox、Thunderbird在后台工作那么我的电脑将会死机。这似乎和内存使用相关**montage** 可能会占用可用 RAM 的 80% 乃至更多(你可以在此期间运行 **top** 命令来查看内存占用)。假如我关掉其他的程序,我便可以在我的系统死机前处理 80 张图片。
下面的命令可以让你知晓在你运行 **montage** 命令前你需要处理图片张数:
```
ls 2017/0[1-4]/*/*.JPG > filelist; wc -l filelist
```
**ls** 命令生成我们搜索的文件的列表,然后通过重定向将这个列表保存在任意以 filelist 为名称的文件中。接着带有 **-l** 选项的 **wc** 命令输出该列表文件共有多少行,换句话说,展示出需要处理的文件个数。下面是我运行命令后的输出:
```
163 filelist
```
啊呀!从一月到四月我居然有 163 张图片,使用这些图片来创建一张合成图一定会使得我的系统死机的。我需要缩减这个列表,可能只处理到 3 月份或者更早的图片。但假如我在 4 月 20 号到 30 号期间拍摄了很多照片呢?我想这便是问题所在。下面的命令便可以帮助指出这个问题:
```
ls 2017/0[1-3]/*/*.JPG > filelist; ls 2017/04/0[1-9]/*.JPG >> filelist; ls 2017/04/1[0-9]/*.JPG >> filelist; wc -l filelist
```
上面一行中共有 4 个命令,它们以分号分隔。第一个命令特别指定从一月到三月期间拍摄的照片;第二个命令使用 **>>** 将拍摄于 4 月 1 日至 9 日的照片追加到这个列表文件中;第三个命令将拍摄于 4 月 10 日到 19 日的照片追加到列表中。最终它的显示结果为:
```
81 filelist
```
我知道假如我关掉其他的程序,处理 81 张图片是可行的。
使用 **montage** 来处理它们是很简单的,因为我们只需要将上面所做的处理添加到 **montage** 命令的后面即可:
```
montage -label %d/%f -title 2017 -tile 5x -resize 10% -geometry +4+4 2017/0[1-3]/*/*.JPG 2017/04/0[1-9]/*.JPG 2017/04/1[0-9]/*.JPG 2017Jan01Apr19.jpg
```
从左到右,**montage** 命令后面最后的那个文件名将会作为输出,在它之前的都是输入。这个命令将花费大约 3 分钟来运行,并生成一张大小约为 2.5MB 的图片,但我的系统只是有一点反应迟钝而已。
### 展示合成图片
当你第一次使用 **display** 查看一张巨大的合成图片时,你将看到合成图的宽度很合适,但图片的高度被压缩了,以便和屏幕相适应。不要慌,只需要左击图片,然后选择 **View > Original Size** 便会显示整个图片。再次点击图片便可以使菜单栏隐藏。
我希望这篇文章可以在你使用新方法查看图片时帮助你。在我的下一篇文章中,我将讨论更加复杂的图片操作技巧。
### 作者简介
Greg Pittman - Greg 是肯塔基州路易斯维尔的一名退休的神经科医生,对计算机和程序设计有着长期的兴趣,最早可以追溯到 1960 年代的 Fortran IV。当 Linux 和开源软件相继出现时,他开始学习更多的相关知识,并分享自己的心得。他是 Scribus 团队的成员。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/9/imagemagick-viewing-images
作者:[Greg Pittman][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/greg-p
[1]:https://opensource.com/article/17/8/imagemagick
[2]:/file/370946
[3]:https://opensource.com/sites/default/files/u128651/edge3negate.jpg "Using the edge and negate options on an image."
[4]:https://wiki.gnome.org/Apps/Shotwell
[5]:https://en.wikipedia.org/wiki/Exif

View File

@ -1,80 +0,0 @@
如何使用 Linux 防火墙隔离局域网受欺骗地址
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA)
即便是受到入侵检测和隔离系统保护的远程网络黑客们也在寻找各种精巧的方法进行入侵。IDS/IPS 并不能阻止或减少那些想要接管你的网络控制权的黑客的攻击。不恰当的配置会让攻击者绕过所有已部署的安全措施。
在这篇文章中,我将会解释安全工程师或者系统管理员怎样可以避免这些攻击。
几乎所有的 Linux 发行版都带有一个内建的防火墙,来保护运行在 Linux 主机上的进程和应用程序。大多数防火墙都按照 IDS/IPS 解决方案设计,这类设计的主要目的是检测和阻止恶意数据包进入网络。
Linux 防火墙通常带有两个接口iptables 和 ipchains 程序。大多数人将这些接口分别称作 iptables 防火墙和 ipchains 防火墙。这两个接口都被设计成包过滤器。iptables 是有状态防火墙,它会基于先前的数据包做出决定。
在这篇文章中,我们将会专注于内核 2.4 之后出现的 iptables 防火墙。
有了 iptables 防火墙你可以创建策略或者说有序的规则集规则集可以告诉内核如何对待特定的数据包。在内核中的是 Netfilter 框架。Netfilter 既是框架,也是 iptables 防火墙的项目名称。作为一个框架Netfilter 允许 iptables 挂钩hook那些设计用来操作数据包的函数。概括地说iptables 依靠 Netfilter 框架构筑诸如过滤数据包这样的功能。
每条 iptables 规则都被应用到某个表中的一条链上。一条 iptables 链就是一组规则的集合这些规则会与具有相似特征的数据包进行比较而表例如 nat 或者 mangle则描述不同的功能类别。例如mangle 表用于修改包数据,因此特定的修改包数据的规则被应用到这里;而过滤规则被应用到 filter 表,因为 filter 表用于过滤包数据。
每条 iptables 规则都包含一组匹配条件,以及一个诸如 `Drop` 或者 `Deny` 的目标target目标告诉 iptables 对符合规则的数据包做什么。因此如果没有目标和一组匹配条件iptables 就不能有效地处理数据包。如果一个数据包匹配某条规则目标就指向将要对它采取的特定动作。另一方面只有数据包满足全部匹配条件iptables 才会据此处理它。
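举个例子,在下面这条规则中,`-p tcp --dport 80 -s 203.0.113.5` 是匹配条件,`-j DROP` 指定了目标(其中的地址和端口仅为示例):

```
# 丢弃来自 203.0.113.5、发往本机 80 端口的 TCP 数据包
iptables -A INPUT -p tcp --dport 80 -s 203.0.113.5 -j DROP
```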
现在,我们已经知道 iptables 防火墙是如何工作的,下面开始着眼于如何使用 iptables 防火墙来检测并拒绝或丢弃被欺骗的地址吧。
### 打开源地址验证
作为一个安全工程师,在处理远程主机被欺骗地址的时候,我采取的第一步是在内核打开源地址验证。
源地址验证是一种内核层级的特性,这种特性丢弃那些伪装成来自你的网络的包。这种特性使用反向路径过滤器方法来检查收到的包的源地址是否可以通过包到达的接口可以到达。(译注:到达的包的源地址应该可以从它到达的网络接口反向到达,只需反转源地址和目的地址就可以达到这样的效果)
利用下面简单的脚本可以打开源地址验证而不用手工操作:
```
#!/bin/sh
#作者: Michael K Aboagye
#程序目标: 打开反向路径过滤
#日期: 7/02/18
#在屏幕上显示 “enabling source address verification”
echo -n "Enabling source address verification…"
#将值0覆盖为1来打开源地址验证
echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter
echo "completed"
```
先前的脚本在执行的时候只显示了 `Enabling source address verification` 这条信息,且不换行。默认的反向路径过滤值是 00 表示没有源地址验证。因此,第二行简单地将默认值 0 覆盖为 1。1 表示内核将会通过确认反向路径来验证源地址。
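可以用下面的命令确认该设置是否已经生效(输出为 1 表示已开启):

```
sysctl net.ipv4.conf.default.rp_filter
# 或者直接读取对应的 proc 文件
cat /proc/sys/net/ipv4/conf/default/rp_filter
```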
最后,你可以使用下面的命令,通过选择 `DROP` 或者 `REJECT` 目标中的一个,来丢弃或者拒绝来自远端主机的被欺骗地址。但是,出于安全原因的考虑,我建议使用 `DROP` 目标。
像下面这样,用你自己的 IP 地址代替 `IP_address` 占位符。另外,你必须在 `REJECT` 和 `DROP` 这两个目标中选择一个,它们不能同时使用。
```
iptables -A INPUT -i internal_interface -s IP_address -j REJECT / DROP
iptables -A INPUT -i internal_interface -s 192.168.0.0/16 -j REJECT / DROP
```
这篇文章只提供了如何使用 iptables 防火墙来避免远端欺骗攻击的基础知识。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/block-local-spoofed-addresses-using-linux-firewall
作者:[Michael Kwaku Aboagye][a]
译者:[leemeans](https://github.com/leemeans)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/revoks

View File

@ -0,0 +1,351 @@
如何使用 Vim 编辑器编辑多个文件
======
![](https://www.ostechnix.com/wp-content/uploads/2018/03/Edit-Multiple-Files-Using-Vim-Editor-720x340.png)
有时候,您可能需要修改多个文件,或要将一个文件的内容复制到另一个文件中。在图形用户界面中,您可以在任何图形文本编辑器(如 gedit中打开文件并使用 CTRL + C 和 CTRL + V 复制和粘贴内容。在命令行模式下,您不能使用这种编辑器。不过别担心,只要有 vim 编辑器就有办法。在本教程中,我们将学习使用 Vim 编辑器同时编辑多个文件。相信我,很有意思哒。
### 安装 Vim
Vim 编辑器可在大多数 Linux 发行版的官方软件仓库中找到,所以您可以用默认的软件包管理器来安装它。例如,在 Arch Linux 及其变体上,您可以使用如下命令:
```
$ sudo pacman -S vim
```
在 Debian 和 Ubuntu 上:
```
$ sudo apt-get install vim
```
在 RHEL 和 CentOS 上:
```
$ sudo yum install vim
```
在 Fedora 上:
```
$ sudo dnf install vim
```
在 openSUSE 上:
```
$ sudo zypper install vim
```
### 使用 Linux 的 Vim 编辑器同时编辑多个文件
现在让我们谈谈正事,我们可以用两种方法做到这一点。
#### 方法一
有两个文件,即 **file1.txt****file2.txt**,带有一堆随机单词:
```
$ cat file1.txt
ostechnix
open source
technology
linux
unix
$ cat file2.txt
line1
line2
line3
line4
line5
```
现在,让我们同时编辑这两个文件。请运行:
```
$ vim file1.txt file2.txt
```
Vim 将按顺序显示文件的内容。首先显示第一个文件的内容,然后显示第二个文件,依此类推。
![][2]
**在文件中切换**
要移至下一个文件,请键入:
```
:n
```
![][3]
要返回到前一个文件,请键入:
```
:N
```
如果有任何未保存的更改Vim 将不允许您移动到下一个文件。要保存当前文件中的更改,请键入:
```
ZZ
```
请注意,是两个大写字母 ZZ即按住 SHIFT 键并连按两次 z
要放弃更改并移至上一个文件,请键入:
```
:N!
```
要查看当前正在编辑的文件,请键入:
```
:buffers
```
![][4]
您将在底部看到加载文件的列表。
![][5]
要切换到下一个文件,请输入 **:buffer**,后跟缓冲区编号。例如,要切换到第一个文件,请键入:
```
:buffer 1
```
![][6]
**打开其他文件进行编辑**
目前我们正在编辑两个文件,即 file1.txt 和 file2.txt。我想打开另一个名为 **file3.txt** 的文件进行编辑。
您会怎么做?这很容易。只需键入 **:e**,然后输入如下所示的文件名即可:
```
:e file3.txt
```
![][7]
现在你可以编辑 file3.txt 了。
要查看当前正在编辑的文件数量,请键入:
```
:buffers
```
![][8]
请注意,对于使用 **:e** 打开的文件,您无法使用 **:n** 或 **:N** 进行切换。要切换到另一个文件,请输入 **:buffer**,然后输入文件缓冲区编号。
**将一个文件的内容复制到另一个文件中**
您已经知道了如何同时打开和编辑多个文件。有时,您可能想要将一个文件的内容复制到另一个文件中。这也是可以做到的。切换到您选择的文件,例如,假设您想将 file1.txt 的内容复制到 file2.txt 中:
首先,请切换到 file1.txt
```
:buffer 1
```
将光标移动至想要复制的行上,并键入 **yy** 以抽出(复制)该行。然后,移至 file2.txt
```
:buffer 2
```
将光标移至要粘贴从 file1.txt 复制的那一行的位置,然后键入 **p**。例如,您想要将复制的行粘贴到 line2 和 line3 之间,请将光标置于 line2 所在行并键入 **p**
输出示例:
```
line1
line2
ostechnix
line3
line4
line5
```
![][9]
要保存当前文件中所做的更改,请键入:
```
ZZ
```
再次提醒,是两个大写字母 ZZ即按住 SHIFT 键并连按两次 z
保存所有文件的更改并退出 vim 编辑器,键入:
```
:wq
```
同样,您可以将任何文件的任何行复制到其他文件中。
**将整个文件内容复制到另一个文件中**
我们知道如何复制一行,那么整个文件的内容呢?也是可以的。比如说,您要将 file1.txt 的全部内容复制到 file2.txt 中。
先打开 file2.txt
```
$ vim file2.txt
```
如果文件已经加载,您可以通过输入以下命令切换到 file2.txt
```
:buffer 2
```
将光标移动到您想要粘贴 file1.txt 内容的位置。我想在 file2.txt 的第 5 行之后粘贴 file1.txt 的内容,所以我将光标移动到第 5 行。然后,键入以下命令并按回车键:
```
:r file1.txt
```
![][10]
这里,**r** 代表 **read**
现在您会看到 file1.txt 的内容被粘贴在 file2.txt 的第 5 行之后。
```
line1
line2
line3
line4
line5
ostechnix
open source
technology
linux
unix
```
![][11]
要保存当前文件中的更改,请键入:
```
ZZ
```
要保存所有文件的所有更改并退出 vim 编辑器,请输入:
```
:wq
```
#### 方法二
另一种同时打开多个文件的方法是使用 **-o** 或 **-O** 标志。
要在水平窗口中打开多个文件,请运行:
```
$ vim -o file1.txt file2.txt
```
![][12]
要在窗口之间切换,请按 **CTRL-w w**(即按 **CTRL + w** 并再次按 **w**)。或者,您可以使用以下快捷方式在窗口之间移动:
* **CTRL-w k** 上面的窗口
* **CTRL-w j** 下面的窗口
要在垂直窗口中打开多个文件,请运行:
```
$ vim -O file1.txt file2.txt file3.txt
```
![][13]
要在窗口之间切换,请按 **CTRL-w w**(即按 **CTRL + w** 并再次按 **w**)。或者,使用以下快捷方式在窗口之间移动:
* **CTRL-w l** 右面的窗口
* **CTRL-w h** 左面的窗口
其他的一切都与方法一的描述相同。
例如,要列出当前加载的文件,请运行:
```
:buffers
```
在文件之间切换:
```
:buffer 1
```
打开其他文件,请键入:
```
:e file3.txt
```
将文件的全部内容复制到另一个文件中:
```
:r file1.txt
```
方法二的唯一区别是,只要您使用 **ZZ** 保存对当前文件的更改,该文件就会自动关闭。此时,您需要依次键入 **:wq** 来逐个关闭文件。但是,如果您按照方法一进行操作,输入 **:wq** 时,所有更改将保存在所有文件中,并且所有文件将立即关闭。
有关更多详细信息,请参阅手册页。
```
$ man vim
```
您现在掌握了如何在 Linux 中使用 vim 编辑器编辑多个文件。正如您所见编辑多个文件并不难。Vim 编辑器还有更强大的功能。我们接下来会提供更多关于 Vim 编辑器的内容。
再见!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-edit-multiple-files-using-vim-editor/
作者:[SK][a]
译者:[jessie-pang](https://github.com/jessie-pang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-1-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-5.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-6.png
[6]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-7.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-8.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-10-1.png
[9]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-11.png
[10]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-12.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-13.png
[12]:http://www.ostechnix.com/wp-content/uploads/2018/03/Edit-multiple-files-16.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/03/Edit-multiple-files-17.png

View File

@ -1,90 +1,76 @@
在容器中运行 Jenkins 构建
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf)
由于 [Docker][1] 和 [Kubernetes][2] (K8s) 目前提供了可扩展、可管理的应用平台,将应用运行在容器中的实践已经被企业广泛接受。近些年势头很猛的[微服务架构][3]也很适合用容器实现。
容器应用平台可以动态启动指定资源配额、互相隔离的容器,这是其最主要的优势之一。让我们看看这会对我们运行持续集成/持续部署 (continuous integration/continuous development, CI/CD) 任务的方式产生怎样的改变。
构建并打包应用需要一定的环境,要求能够下载源代码、使用相关依赖及已经安装构建工具。作为构建的一部分,运行单元及组件测试可能会用到本地端口或需要运行第三方应用 (例如数据库及消息中间件等)。另外,我们一般定制化多台构建服务器,每台执行一种指定类型的构建任务。为方便测试,我们维护一些实例专门用于运行第三方应用 (或者试图在构建服务器上启动这些第三方应用),避免并行运行构建任务防止结果互相干扰。为 CI/CD 环境定制化构建服务器是一项繁琐的工作,而且随着开发团队使用的开发平台或其版本变更,所需构建服务器的数量也会变更。
一旦我们有了容器管理平台 (自建或在云端),将资源密集型的 CI/CD 任务在动态生成的容器中执行是比较合理的。在这种方案中,每个构建任务运行在独立启动并配置的构建环境中。构建过程中,构建任务的测试环节可以任意使用隔离环境中的可用资源;此外,我们也可以在辅助容器中启动一个第三方应用,只在构建任务生命周期中为测试提供服务。
听上去不错,让我们在现实环境中实践一下。
注:本文基于现实中已有的解决方案,即在 [Red Hat OpenShift][4] v3.7 集群上运行项目。OpenShift 是企业就绪版本的 Kubernetes故这些实践也适用于 K8s 集群。如果愿意尝试,可以下载 [Red Hat CDK][5],运行 `jenkins-ephemeral``jenkins-persistent` [模板][6]在 OpenShift 上创建定制化好的 Jenkins 管理节点。
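如果想快速体验,在一个已登录的 OpenShift 集群上,大致可以这样实例化模板(模板名以你所用版本实际提供的为准):

```
oc new-app jenkins-persistent    # 或者 jenkins-ephemeral
```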
### 解决方案概述
在 OpenShift 容器中执行 CI/CD 任务 (构建和测试等) 的方案基于[分布式 Jenkins 构建][7],具体如下:
* 我们需要一个 Jenkins 主节点;可以运行在集群中,也可以是外部提供
* 支持 Jenkins 特性和插件,故已有项目仍可使用
* 可以用 Jenkins GUI 配置、运行任务或查看任务输出
* 如果你愿意编码,也可以使用 [Jenkins Pipeline][8]
从技术角度来看,运行任务的动态容器是 Jenkins 代理节点。当构建启动时,首先是一个新节点启动,通过 Jenkins 主节点的 JNLP (5000 端口) 告知就绪状态。在代理节点启动并提取构建任务之前,构建任务处于排队状态。就像通常 Jenkins 代理服务器那样,构建输出会送达主节点;不同的是,构建完成后代理节点容器会自动关闭。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/1_running_jenkinsincontainers.png?itok=fR4ntnn8)
不同类型的构建任务 (例如 Java, NodeJS, Python等) 对应不同的代理节点。这并不新奇,之前也是使用标签来限制哪些代理节点可以运行指定的构建任务。启动用于构建任务的 Jenkins 代理节点容器需要配置参数,具体如下:
* 用于启动容器的 Docker 镜像
* 资源限制
* 环境变量
* 挂载卷
这里用到的关键组件是 [Jenkins Kubernetes 插件][9]。该插件 (通过使用一个服务账号) 与 K8s 集群交互,可以启动和关闭代理节点。在插件的配置管理中,多种代理节点类型表现为多种 Kubernetes pod 模板,它们通过项目标签对应。
这些[代理节点镜像][10]以开箱即用的方式提供 (也有 [CentOS7][11] 系统的版本):
* [jenkins-slave-base-rhel7][12]:基础镜像,启动与 Jenkins 主节点连接的代理节点;其中 Java 堆大小根据容器内容设置
* [jenkins-slave-maven-rhel7][13]:用于 Maven 和 Gradle 构建的镜像 (从基础镜像扩展)
* [jenkins-slave-nodejs-rhel7][14]:包含 NodeJS4 工具的镜像 (从基础镜像扩展)
注意:本解决方案与 OpenShift 中的 [Source-to-Image (S2I)][15] 构建不同,虽然后者也可以用于某些特定的 CI/CD 任务。
### 入门学习资料
有很多不错的博客和文档介绍了如何在 OpenShift 上执行 Jenkins 构建。不妨从下面这些开始:
* [OpenShift Jenkins][29] 镜像文档及 [源代码][30]
* 网络直播[基于 OpenShift 的 CI/CD][31]
* [外部 Jenkins 集成][32] playbook
阅读这些博客和文档有助于完整的理解本解决方案。在本文中,我们主要关注具体实践中遇到的各类问题。
### 构建我的应用
作为[示例项目][16],我们选取了包含如下构建步骤的 Java 项目:
* **代码源:** 从一个Git代码库中获取项目代码
* **使用 Maven 编译:** 依赖可从内部仓库获取,(不妨使用 Apache Nexus) 从外部 Maven 仓库镜像
* **发布 artifact** 将编译好的 JAR 上传至内部仓库
在 CI/CD 过程中,我们需要与 Git 和 Nexus 交互,故 Jenkins 任务需要能够访问这些系统。这涉及参数配置和已存储凭证,可以在下列位置进行存放及管理:
* **在 Jenkins 中:** 我们可以在 Jenkins 中添加凭证,通过 Git 插件获取项目代码 (使用容器不会改变操作)
* **在 OpenShift 中:** 使用 ConfigMap 和 Secret 对象,以文件或环境变量的形式附加到 Jenkins 代理容器中
* **在高度定制化的 Docker 容器中:** 镜像是定制化的,已包含完成特定类型构建的全部特性;从一个代理镜像进行扩展即可得到。
你可以按自己的喜好选择一种实现方式,甚至你最终可能混用多种实现方式。下面我们采用第二种实现方式,即首选在 OpenShift 中管理参数配置。使用 Kubernetes 插件配置来定制化 Maven 代理容器,包括设置环境变量和映射文件等。
注意:对于 Kubernetes 插件 v1.0 版,由于 [bug][17],在 UI 界面增加环境变量并不生效。可以升级插件,或 (作为变通方案) 直接修改 `config.xml` 文件并重启 Jenkins。
### 从 Git 获取源代码
从公共 Git 仓库获取源代码很容易。但对于私有 Git 仓库,不仅需要认证操作,客户端还需要信任服务器以便建立安全连接。一般而言,通过两种协议获取源代码:
* HTTPS验证通过用户名/密码完成。Git 服务器的 SSL 证书必须被代理节点信任,这仅在证书被自建 CA 签名时才需要特别关注。
```
@ -92,7 +78,7 @@ git clone https://git.mycompany.com:443/myapplication.git
```
* SSH:验证通过私钥完成。如果服务器的公钥指纹出现在 `known_hosts` 文件中,那么该服务器是被信任的。
```
@ -100,24 +86,23 @@ git clone ssh://git@git.mycompany.com:22/myapplication.git
```
对于手动操作,使用用户名/密码通过 HTTP 方式下载源代码是可行的但对于自动构建而言SSH 是更佳的选择。
#### 通过 SSH 方式使用 Git
要通过 SSH 方式下载源代码,我们需要保证代理容器与 Git 的 SSH 端口之间可以建立 SSH 连接。首先,我们需要创建一个私钥-公钥对。使用如下命令生成:
```
ssh-keygen -t rsa -b 2048 -f my-git-ssh -N ''
```
命令生成的私钥位于 `my-git-ssh` 文件中 (无密码口令),对应的公钥位于 `my-git-ssh.pub` 文件中。将公钥添加至 Git 服务器的对应用户下 (推荐使用服务账号);网页界面一般支持公钥上传。为建立 SSH 连接,我们还需要在代理容器上配置两个文件:
* 私钥文件位于 `~/.ssh/id_rsa`
* 服务器的公钥位于 `~/.ssh/known_hosts`。要得到它,可以运行 `ssh git.mycompany.com` 并接受服务器指纹,这会在本机的 `known_hosts` 文件中增加一行,把这一行用在这里即可。
`id_rsa` 对应的私钥和 `known_hosts` 对应的公钥保存到一个 OpenShift secret (或 config map) 对象中。
```
apiVersion: v1
@ -143,7 +128,7 @@ stringData:
```
在 Kubernetes 插件中将 secret 对象配置为卷,挂载到 `/home/jenkins/.ssh/`,供 Maven pod 使用。secret 中的每个对象对应挂载目录的一个文件,文件名与 key 名称相符。我们可以使用 UI (管理 Jenkins / 配置 / 云 / Kubernetes),也可以直接编辑 Jenkins 配置文件 `/var/lib/jenkins/config.xml`:
```
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
@ -165,9 +150,9 @@ Then configure this as a volume in the Kubernetes plugin for the Maven pod at mo
```
此时,在代理节点上运行的任务应该可以通过 SSH 方式从 Git 代码库获取源代码。
注:我们也可以在 `~/.ssh/config` 文件中自定义 SSH 连接。例如,如果你不想处理 `known_hosts` 或 私钥位于其它挂载目录中:
```
Host git.mycompany.com
@ -177,11 +162,11 @@ Host git.mycompany.com
```
#### 通过 HTTP 方式使用 Git
如果你选择使用 HTTP 方式下载,在指定的 [Git-credential-store][18] 文件中添加用户名/密码:
* 例如,通过一个 OpenShift secret 对象提供 `/home/jenkins/.config/git-secret/credentials` 文件,其中每个站点对应文件中的一行:
```
@ -191,7 +176,7 @@ https://user:pass@github.com
```
* 在 [git-config][19] 配置中启用该文件,其中配置文件默认路径为 `/home/jenkins/.config/git/config`
```
@ -200,11 +185,10 @@ https://user:pass@github.com
  helper = store --file=/home/jenkins/.config/git-secret/credentials
```
如果 Git 服务使用了自有 CA 签名的证书,为代理容器设置环境变量 `GIT_SSL_NO_VERIFY=true` 是最便捷的方式。更恰当的解决方案包括如下两步:
* 利用 config map 将自有 CA 的公钥映射到一个路径下的文件中,例如 `/usr/ca/myTrustedCA.pem`)。
* 通过环境变量 `GIT_SSL_CAINFO=/usr/ca/myTrustedCA.pem` 或上面提到的 `git-config` 文件的方式,将证书路径告知 Git。
```
@ -214,26 +198,25 @@ If the Git service has a certificate signed by a custom certificate authority (C
```
注:在 OpenShift v3.7 及早期版本中config map 及 secret 的挂载点之间[不能相互覆盖][20],故我们不能同时映射 `/home/jenkins``/home/jenkins/dir`。因此,上面的代码中并没有使用常见的文件路径。预计 OpenShift v3.9 版本会修复这个问题。
### Maven
要完成 Maven 构建,一般需要完成如下两步:
* 建立一个企业内部的 Maven 库 (例如 Apache Nexus),充当外部库的代理,将其当作镜像使用。
* 这个内部库可能提供 HTTPS 服务,其中使用自建 CA 签名的证书。
对于容器中运行构建的实践而言,使用内部 Maven 库是非常关键的,因为容器启动后并没有本地库或缓存,这导致每次构建时 Maven 都下载全部的 Jar 文件。在本地网络使用内部代理库下载明显快于从因特网下载。
[Maven Jenkins 代理][13]镜像允许配置环境变量,指定代理的 URL。在 Kubernetes 插件的容器模板中设置如下:
```
MAVEN_MIRROR_URL=https://nexus.mycompany.com/repository/maven-public
```
构建好的 artifacts (JARs) 也应该保存到库中可以是上面提到的用于提供依赖的镜像库也可以是其它库。Maven 完成 `deploy` 操作需要在 `pom.xml` 的[分发管理][21] 下配置库 URL这与代理镜像无关。
```
<project ...>
@ -259,9 +242,9 @@ The build artifacts (JARs) should also be archived in a repository, which may or
```
上传 artifact 可能涉及认证。在这种情况下,需要在 `settings.xml` 中配置用户名/密码,其中 server ID 要与 `pom.xml` 文件中的 server ID 对应。我们可以使用 OpenShift secret 将包含 URL、用户名和密码的完整 `settings.xml` 映射到 Maven Jenkins 代理容器中。另外,也可以使用环境变量。具体如下:
* 利用 secret 为容器添加环境变量:
```
@ -271,7 +254,7 @@ MAVEN_SERVER_PASSWORD=admin123
```
* 利用 config map 将 `settings.xml` 挂载至 `/home/jenkins/.m2/settings.xml`
```
@ -309,9 +292,9 @@ MAVEN_SERVER_PASSWORD=admin123
```
禁用交互模式 (即使用批处理模式) 可以忽略下载日志,一种方式是在 Maven 命令中增加 `-B` 参数,另一种方式是在 `settings.xml` 配置文件中增加 `<interactiveMode>false</interactiveMode>` 配置。
如果 Maven 库的 HTTPS 服务使用自建 CA 签名的证书,我们需要使用 [keytool][22] 工具创建一个将 CA 公钥添加至信任列表的 Java KeyStore。在 OpenShift 中使用 config map 将这个 Keystore 上传。使用 `oc` 命令基于文件创建一个 config map
```
oc create configmap maven-settings --from-file=settings.xml=settings.xml --from-file=myTruststore.jks=myTruststore.jks
```
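顺便一提,上面命令中用到的信任库本身可以用 keytool 生成,大致如下(证书别名、文件名与口令均为示例假设):

```
keytool -importcert -alias myca -file myTrustedCA.pem \
        -keystore myTruststore.jks -storepass changeit -noprompt
```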
将这个 config map 挂载至 Jenkins 代理容器。在本例中我们使用 `/home/jenkins/.m2` 目录,但这仅仅是因为配置文件 `settings.xml` 也对应这个 config mapKeyStore 可以放置在任意路径下。
接着在容器环境变量 `MAVEN_OPTS` 中设置 Java 参数,以便让 Maven 对应的 Java 进程使用该文件:
```
MAVEN_OPTS=
@ -331,30 +314,28 @@ MAVEN_OPTS=
```
### 内存使用量
这可能是最重要的一部分设置,如果没有正确的设置最大内存,我们会遇到间歇性构建失败,虽然每个组件都似乎工作正常。
如果没有在 Java 命令行中设置堆大小,在容器中运行 Java 可能导致高内存使用量的报错。JVM [可以利用全部的主机内存][23],因而使用[默认的堆大小限制][24]。这通常会超过容器的内存资源总额,故当 Java 进程为堆分配过多内存时OpenShift 会直接杀掉容器。
虽然 `jenkins-slave-base` 镜像包含一个内建[脚本][25],将最大堆设置为容器内存的一半 (可以通过环境变量 `CONTAINER_HEAP_PERCENT=0.50` 修改),但这只适用于 Jenkins 代理节点中的 Java 进程。在 Maven 构建中,还有其它重要的 Java 进程运行:
* `mvn` 命令本身就是一个 Java 工具。
* [Maven Surefire 插件][26] 按默认参数派生的 JVM 用于运行单元测试。
总结一下,容器中同时运行着三个重要的 Java 进程,预估内存使用量以避免 pod 被误杀是很重要的。每个进程都有不同的方式设置 JVM 参数:
* 我们在上面提到了 Jenkins 代理容器堆最大值的计算方法,但我们显然不应该让代理容器使用如此大的堆,毕竟还有两个 JVM 需要使用内存。对于 Jenkins 代理容器,可以设置 `JAVA_OPTS`
* `mvn` 工具被 Jenkins 任务调用。设置 `MAVEN_OPTS` 可以用于自定义这类 Java 进程。
* Maven `surefire` 插件派生的用于单元测试的 JVM 可以通过 Maven 的 [argLine][27] 属性自定义。可以在 `pom.xml``settings.xml` 的某个配置文件中设置,也可以直接在 `MAVEN_OPTS` 中增加 `-DargLine=…`
下面例子给出 Maven 代理容器环境变量设置方法:
```
JAVA_OPTS=-Xms64m -Xmx64m
MAVEN_OPTS=-Xms128m -Xmx128m -DargLine=${env.SUREFIRE_OPTS}
@ -362,17 +343,17 @@ SUREFIRE_OPTS=-Xms256m -Xmx256m
```
我们的测试环境是具有 1024Mi 内存限额的代理容器,使用上述参数可以正常构建一个 SpringBoot 应用并进行单元测试。测试环境使用的资源相对较小,对于复杂的 Maven 项目和对应的单元测试,我们需要更大的堆大小及更大的容器内存限额。
Java8 进程的实际内存使用量包括 `堆大小 + 元数据 + 堆外内存`,因此内存使用量会明显高于设置的最大堆大小。在我们上面的测试环境中,三个 Java 进程使用了超过 900Mi 的内存。可以在容器内查看进程的 RSS 内存使用情况,命令如下:`ps -e -o pid,user,rss,comm,args`
Jenkins 代理镜像同时安装了 JDK 64 位和 32 位版本。对于 `mvn``surefire`,默认使用 64 位版本 JVM。为减低内存使用量只要 `-Xmx` 不超过 1.5 GB强制使用 32 位 JVM 都是有意义的。
```
JAVA_HOME=/usr/lib/jvm/Java-1.8.0-openjdk-1.8.0.1610.b14.el7_4.i386
```
注意到我们可以在 `JAVA_TOOL_OPTIONS` 环境变量中设置 Java 参数,每个 JVM 启动时都会读取该参数。`JAVA_OPTS` 和 `MAVEN_OPTS` 中的参数会覆盖 `JAVA_TOOL_OPTIONS` 中的对应值,故我们可以不使用 `argLine`,实现对 Java 进程同样的堆配置:
```
JAVA_OPTS=-Xms64m -Xmx64m
@ -382,11 +363,11 @@ JAVA_TOOL_OPTIONS=-Xms256m -Xmx256m
```
但缺点是每个 JVM 的日志中都会显示 `Picked up JAVA_TOOL_OPTIONS:`,这可能让人感到迷惑。
### Jenkins 流水线
完成上述配置,我们应该已经可以完成一次成功的构建。我们可以获取源代码,下载依赖,运行单元测试并将 artifact 上传到我们的库中。我们可以通过创建一个 Jenkins 流水线项目来完成上述操作。
```
pipeline {
@ -438,15 +419,15 @@ pipeline {
```
当然对应真实项目CI/CD 流水线不仅仅完成 Maven 构建,还可以部署到开发环境,运行集成测试,提升至更接近于生产的环境等。上面给出的学习资料中有执行这些操作的案例。
### 多容器
一个 pod 可以运行多个容器,每个容器有单独的资源限制。这些容器共享网络接口,故我们可以从 `localhost` 访问已启动的服务,但我们需要考虑端口冲突的问题。在一个 Kubernetes pod 模板中,每个容器的环境变量是单独设置的,但挂载的卷是统一的。
当一个外部服务需要单元测试且嵌入式方案无法工作 (例如,数据库、消息中间件等) 时,可以启动多个容器。在这种情况下,第二个容器会随着 Jenkins 代理容器启停。
查看 Jenkins `config.xml` 片段,其中我们启动了一个辅助的 `httpbin` 服务用于 Maven 构建:
```
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
@ -504,9 +485,9 @@ See the Jenkins `config.xml` snippet where we start an `httpbin` service on the
```
### 总结
作为总结,下面列出按照上述配置所创建的 OpenShift 资源,以及 Jenkins `config.xml` 中 Kubernetes 插件的配置。
```
apiVersion: v1
@ -604,7 +585,7 @@ items:
```
基于文件创建另一个 config map
```
 oc create configmap maven-settings --from-file=settings.xml=settings.xml
@ -612,7 +593,7 @@ One additional config map was created from files:
```
Kubernetes 插件配置如下:
```
<?xml version='1.0' encoding='UTF-8'?>
@ -910,16 +891,16 @@ MIIC6jCC...
```
尝试愉快的构建吧!
原文发表于 [ITNext][28],已获得转载授权。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/running-jenkins-builds-containers
作者:[Balazs Szeti][a]
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
@ -954,3 +935,7 @@ via: https://opensource.com/article/18/4/running-jenkins-builds-containers
[26]:http://maven.apache.org/surefire/maven-surefire-plugin/examples/fork-options-and-parallel-execution.html
[27]:http://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#argLine
[28]:https://itnext.io/running-jenkins-builds-in-containers-458e90ff2a7b
[29]:https://docs.openshift.com/container-platform/3.7/using_images/other_images/jenkins.html
[30]:https://github.com/openshift/jenkins
[31]:https://blog.openshift.com/cicd-with-openshift/
[32]:http://v1.uncontained.io/playbooks/continuous_delivery/external-jenkins-integration.html

View File

@ -0,0 +1,344 @@
Linux 命令行下的数学运算
======
![](https://images.techhive.com/images/article/2014/12/math_blackboard-100534564-large.jpg)
可以在 Linux 命令行下做数学运算吗?当然可以!事实上,有不少命令可以轻松完成这些操作,其中一些甚至让你大吃一惊。让我们来学习这些有用的数学运算命令或命令语法吧。
### expr
首先,对于在命令行使用命令进行数学运算,可能最容易想到、最常用的命令就是 **expr** (expression)。它可以完成四则运算,也可以用于比较大小。下面是几个例子:
#### 变量递增
```
$ count=0
$ count=`expr $count + 1`
$ echo $count
1
```
#### 完成简单运算
```
$ expr 11 + 123
134
$ expr 134 / 11
12
$ expr 134 - 11
123
$ expr 11 * 123
expr: syntax error <== oops!
$ expr 11 \* 123
1353
$ expr 20 % 3
2
```
注意,你需要在 `*` 运算符之前增加 `\` 符号以避免语法错误。`%` 运算符用于取余运算。
下面是一个稍微复杂的例子:
```
participants=11
total=156
share=`expr $total / $participants`
remaining=`expr $total - $participants \* $share`
echo $share
14
echo $remaining
2
```
假设某个活动中有 11 位参与者,需要颁发的奖项总数为 156那么平均每个参与者获得 14 项奖项,额外剩余 2 个奖项。
#### 比较大小
下面让我们看一下比较大小的操作。从第一印象来看,语句看似有些怪异;这里并不是设置数值,而是进行数字大小比较。在本例中 **expr** 判断表达式是否为真:如果结果是 1那么表达式为真反之表达式为假。
```
$ expr 11 = 11
1
$ expr 11 = 12
0
```
请将它们读作 “11 是否等于 11” 及 “11 是否等于 12”,你很快就会习惯这种写法。当然,我们一般不会在命令行上做上面这种字面比较,更可能的比较是 $age 是否等于 11。
```
$ age=11
$ expr $age = 11
1
```
如果将数字放到引号中间,那么你将进行字符串比较,而不是数值比较。
```
$ expr "11" = "11"
1
$ expr "eleven" = "11"
0
```
在本例中,我们判断 10 是否大于 5以及 10 是否大于 99。
```
$ expr 10 \> 5
1
$ expr 10 \> 99
0
```
的确,返回 1 和 0 分别代表比较的结果为真和假,我们一般预期在 Linux 上得到这个结果。在下面的例子中,按照上述逻辑使用 **expr** 并不正确,因为 **if** 的工作原理刚好相反,即 0 代表真。
```
#!/bin/bash
echo -n "Cost to us> "
read cost
echo -n "Price we're asking> "
read price
if [ `expr $price \> $cost` ]; then
echo "We make money"
else
echo "Don't sell it"
fi
```
下面,我们运行这个脚本:
```
$ ./checkPrice
Cost to us> 11.50
Price we're asking> 6
We make money
```
这显然与我们预期不符!我们稍微修改一下,以便使其按我们预期工作:
```
#!/bin/bash
echo -n "Cost to us> "
read cost
echo -n "Price we're asking> "
read price
if [ `expr $price \> $cost` == 1 ]; then
echo "We make money"
else
echo "Don't sell it"
fi
```
### factor
**factor** 命令的功能基本与你预期相符。你给出一个数字,该命令会给出对应数字的因子。
```
$ factor 111
111: 3 37
$ factor 134
134: 2 67
$ factor 17894
17894: 2 23 389
$ factor 1987
1987: 1987
```
factor 命令对于最后一个数字没有返回很多,这是因为 1987 是一个 **质数**
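借助这一点,可以在 shell 中把 **factor** 当作简单的质数过滤器来用(下面只是一个示意:质数的 factor 输出恰好只有两个字段):
```
$ for n in 1987 1988 1989; do
>   [ $(factor $n | awk '{print NF}') -eq 2 ] && echo "$n 是质数"
> done
1987 是质数
```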
### jot
**jot** 命令可以创建一系列数字。给定数字总数及起始数字即可。
```
$ jot 8 10
10
11
12
13
14
15
16
17
```
你也可以用如下方式使用 **jot**,这里我们要求从 10 递减至 2。注意输出中没有 6这是因为 jot 会在 10 到 2 之间均匀地取 8 个值并四舍五入。
```
$ jot 8 10 2
10
9
8
7
5
4
3
2
```
**jot** 可以帮你构造一系列数字组成的列表,该列表可以用于其它任务。
```
$ for i in `jot 7 17`; do echo April $i; done
April 17
April 18
April 19
April 20
April 21
April 22
April 23
```
### bc
**bc** 基本上是命令行数学运算最佳工具之一。输入你想执行的运算,使用管道发送至该命令即可:
```
$ echo "123.4+5/6-(7.89*1.234)" | bc
113.664
```
可见 **bc** 并没有忽略精度,而且输入的字符串也相当直截了当。它还可以进行大小比较、处理布尔值、计算平方根、正弦、余弦和正切等。
```
$ echo "sqrt(256)" | bc
16
$ echo "s(90)" | bc -l
.89399666360055789051
```
事实上,**bc** 甚至可以计算 pi。你需要指定需要的精度。
```
$ echo "scale=5; 4*a(1)" | bc -l
3.14156
$ echo "scale=10; 4*a(1)" | bc -l
3.1415926532
$ echo "scale=20; 4*a(1)" | bc -l
3.14159265358979323844
$ echo "scale=40; 4*a(1)" | bc -l
3.1415926535897932384626433832795028841968
```
除了通过管道接收数据并返回结果,**bc** 还可以交互式运行,输入你想执行的运算即可。本例中提到的 scale 设置用于指定小数点后保留的位数。
```
$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale=2
3/4
.75
2/3
.66
quit
```
你还可以使用 **bc** 完成数字进制转换。**obase** 用于设置输出的数字进制。
```
$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
obase=16
16 <=== entered
10 <=== response
256 <=== entered
100 <=== response
quit
```
按如下方式使用 **bc** 也是完成十六进制与十进制转换的最简单方式之一:
```
$ echo "ibase=16; F2" | bc
242
$ echo "obase=16; 242" | bc
F2
```
在上面第一个例子中,我们将输入进制 (ibase) 设置为十六进制,完成十六进制到十进制的转换。在第二个例子中,我们执行相反的操作,即将输出进制 (obase) 设置为十六进制。
### 简单的 bash 数学运算
通过使用双括号,我们可以在 bash 中完成简单的数学运算。在下面的例子中,我们创建一个变量,为变量赋值,然后依次执行加法、自减和平方。
```
$ ((e=11))
$ (( e = e + 7 ))
$ echo $e
18
$ ((e--))
$ echo $e
17
$ ((e=e**2))
$ echo $e
289
```
允许使用的运算符包括:
```
+ - 加法及减法
++ -- 自增与自减
* / % 乘法,除法及求余数
** 指数运算(注意:在 bash 算术中 ^ 是按位异或,而不是指数)
```
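可以快速验证一下 `**``^` 在 bash 算术中的区别:
```
$ echo $(( 2 ** 3 ))
8
$ echo $(( 2 ^ 3 ))
1
```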
你还可以使用逻辑运算符和布尔运算符:
```
$ ((x=11)); ((y=7))
$ if (( x > y )); then
> echo "x > y"
> fi
x > y
$ ((x=11)); ((y=7)); ((z=3))
$ if (( x > y )) && (( y > z )); then
> echo "letters roll downhill"
> fi
letters roll downhill
```
或者如下方式:
```
$ if [ x > y ] && [ y > z ]; then echo "letters roll downhill"; fi
letters roll downhill
```
下面计算 2 的 3 次幂:
```
$ echo "2 ^ 3"
2 ^ 3
$ echo "2 ^ 3" | bc
8
```
### 总结
在 Linux 系统中,有很多不同的命令行工具可以完成数字运算。希望你在读完本文之后,能掌握一两个新工具。
使用 [Facebook][1] 或 [LinkedIn][2] 加入 Network World 社区,点评你最喜爱的主题。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3268964/linux/how-to-do-math-on-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,80 @@
一个更好的调试 Perl 模块
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs)
有时候,有一段只在调试或开发调整时才运行的 Perl 代码块会非常有用。这很好,但是这样的代码块可能会对性能产生很大的影响,尤其是在运行时才决定是否执行它的时候。
[Curtis “Ovid” Poe][1] 编写了一个可以帮助解决这个问题的模块:[Keyword::DEVELOPMENT][2]。该模块利用 Keyword::Simple 和 Perl 5.012 中引入的可插入关键字架构,创建了新的关键字 DEVELOPMENT。它使用 PERL_KEYWORD_DEVELOPMENT 环境变量的值来确定是否要执行一段代码。
使用它再简单不过了:
```
use Keyword::DEVELOPMENT;
       
sub doing_my_big_loop {
    my $self = shift;
    DEVELOPMENT {
        # insert expensive debugging code here!
    }
}
```
如果 PERL_KEYWORD_DEVELOPMENT 的值为假,那么在编译时 DEVELOPMENT 块内的代码就会被优化掉,根本就不存在。
你看到好处了么?在沙盒中将 PERL_KEYWORD_DEVELOPMENT 环境变量设置为 true在生产环境设为 false并且可以将有价值的调试工具提交到你的代码库中在你需要的时候随时可用。
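一个最简单的用法示意(脚本名 myscript.pl 仅为假设):

```
# 开发/测试环境:执行 DEVELOPMENT 块中的调试代码
PERL_KEYWORD_DEVELOPMENT=1 perl myscript.pl

# 生产环境DEVELOPMENT 块在编译期被优化掉
PERL_KEYWORD_DEVELOPMENT=0 perl myscript.pl
```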
在缺乏高级配置管理的系统中,你也可以使用此模块来处理生产和开发或测试环境之间的设置差异:
```
sub connect_to_my_database {
       
    my $dsn = "dbi:mysql:productiondb";
    my $user = "db_user";
    my $pass = "db_pass";
   
    DEVELOPMENT {
        # Override some of that config information
        $dsn = "dbi:mysql:developmentdb";
    }
   
    my $db_handle = DBI->connect($dsn, $user, $pass);
}
```
以后还可以对此代码片段进一步增强,例如从 YAML 或 INI 等其它地方读取配置信息,但我希望你已经从这里看到了这个工具的用处。
我查看了 Keyword::DEVELOPMENT 的源码,只花了大约半个小时就研究明白了,不禁感叹:“天哪,我为什么没有想到这个?”在安装好 Keyword::Simple 的前提下Curtis 给我们的这个模块实现得非常简洁。这正是我长期以来在自己的编码实践中需要的一个优雅解决方案。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/perl-module-debugging-code
作者:[Ruth Holloway][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://metacpan.org/author/OVID
[2]:https://metacpan.org/pod/release/OVID/Keyword-DEVELOPMENT-0.04/lib/Keyword/DEVELOPMENT.pm

View File

@ -0,0 +1,122 @@
如何在 Fedora 上开始 Java 开发
======
![](https://fedoramagazine.org/wp-content/uploads/2018/04/java-getting-started-816x345.jpg)
Java 是世界上最流行的编程语言之一。它广泛用于开发物联网设备、Android 程序Web 和企业应用。本文将提供使用 [OpenJDK][1] 安装和配置工作站的指南。
### 安装编译器和工具
在 Fedora 中安装编译器或 Java Development KitJDK很容易。在写这篇文章时可以用 v8 和 v9。只需打开一个终端并输入
```
sudo dnf install java-1.8.0-openjdk-devel
```
这将安装 JDK v8。对于 v9请输入
```
sudo dnf install java-9-openjdk-devel
```
对于需要其他工具和库(如 Ant 和 Maven的开发人员可以使用 **Java Development** 组。要安装这个软件包组,请输入:
```
sudo dnf group install "Java Development"
```
要验证编译器是否已安装,请运行:
```
javac -version
```
输出显示编译器版本,如下所示:
```
javac 1.8.0_162
```
### 编译程序
你可以使用任何基本的文本编辑器(如 nano、vim 或 gedit编写程序。这个例子提供了一个简单的 “Hello Fedora” 程序。
打开你最喜欢的文本编辑器并输入以下内容:
```
public class HelloFedora {
      public static void main (String[] args) {
              System.out.println("Hello Fedora!");
      }
}
```
将文件保存为 HelloFedora.java。在终端切换到包含该文件的目录并执行以下操作
```
javac HelloFedora.java
```
如果编译器遇到任何语法错误,它会发出错误。否则,它只会在下面显示 shell 提示符。
你现在应该有一个名为 HelloFedora.class 的文件,它是编译好的程序。使用以下命令运行它:
```
java HelloFedora
```
输出将显示:
```
Hello Fedora!
```
### 安装集成开发环境IDE
有些程序可能更复杂IDE 可以帮助顺利进行。Java 程序员有很多可用的 IDE其中包括
+ Geany一个加载快速的基本 IDE并提供内置模板
+ Anjuta
+ GNOME Builder已经在 Builder 的文章中介绍过 - 这是一个专门面向 GNOME 程序开发人员的新 IDE
然而,主要用 Java 编写的最流行的开源 IDE 之一是 [Eclipse][2]。 Eclipse 在官方仓库中有。要安装它,请运行以下命令:
```
sudo dnf install eclipse-jdt
```
安装完成后Eclipse 的快捷方式会出现在桌面菜单中。
有关如何使用 Eclipse 的更多信息,请参阅其网站上的[用户指南][3]。
### 浏览器插件
如果你正在开发 Web 小程序并需要一个用于浏览器的插件,则可以使用 [IcedTea-Web][4]。像 OpenJDK 一样,它是开源的并易于在 Fedora 中安装。运行这个命令:
```
sudo dnf install icedtea-web
```
从 Firefox 52 开始Web 插件不再有效。有关详细信息,请访问 Mozilla 支持网站 [https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct][5]。
恭喜,你的 Java 开发环境已准备完毕。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/start-developing-java-fedora/
作者:[Shaun Assam][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/sassam/
[1]:http://openjdk.java.net/
[2]:https://www.eclipse.org/
[3]:http://help.eclipse.org/oxygen/nav/0
[4]:https://icedtea.classpath.org/wiki/IcedTea-Web
[5]:https://support.mozilla.org/en-US/kb/npapi-plugins?as=u&utm_source=inproduct

View File

@ -0,0 +1,89 @@
如何重置 Fedora 上的 root 密码
======
![](https://fedoramagazine.org/wp-content/uploads/2018/04/resetrootpassword-816x345.jpg)
系统管理员可以轻松地为忘记密码的用户重置密码。但是如果系统管理员忘记 root 密码会发生什么?本指南将告诉你如何重置遗失或忘记的 root 密码。请注意,要重置 root 密码,你需要能够接触到本机以重新启动并访问 GRUB 设置。此外,如果系统已加密,你还需要知道 LUKS 密码。
### 编辑 GRUB 设置
首先你需要中断启动过程。所以你需要打开系统,如果已经打开就重新启动。第一步很棘手,因为 grub 菜单往往会在屏幕上快速闪过。
当你看到 GRUB 菜单时,请按键盘上的 **E** 键:
![][1]
按下 e 后显示以下屏幕:
![][2]
使用箭头键移动到 **linux16** 这行。
![][3]
使用您的 **删除** 键或 **退格** 键,删除 **rhgb quiet** 并替换为以下内容。
```
rd.break enforcing=0
```
![][4]
编辑好后,按下 **Ctrl-x** 启动系统。如果系统已加密,则系统会提示你输入 LUKS 密码。
**注意:** 设置 enforcing=0 可以避免执行完整的系统 SELinux 重新标记。系统重启后,需要为 /etc/shadow 恢复正确的 SELinux 上下文(下文会进一步解释)。
### 挂载文件系统
系统现在将处于紧急模式。以读写权限重新挂载硬盘:
```
# mount -o remount,rw /sysroot
```
### 更改密码
运行 chroot 访问系统。
```
# chroot /sysroot
```
你现在可以更改 root 密码。
```
# passwd
```
出现提示时输入新的 root 密码两次。如果成功,你应该看到一条 **all authentication tokens updated successfully** 消息。
输入 **exit** 两次重新启动系统。
以 root 身份登录并将 SELinux 标签恢复到 /etc/shadow 中。
```
# restorecon -v /etc/shadow
```
将 SELinux 变回 enforce 模式。
```
# setenforce 1
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/reset-root-password-fedora/
作者:[Curt Warfield][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/rcurtiswarfield/
[1]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub.png
[2]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub2.png
[3]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub3.png
[4]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub4.png