Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu.Wang 2018-08-04 16:01:58 +08:00
commit 161bdca6f2
14 changed files with 1514 additions and 625 deletions

View File

@ -1,7 +1,7 @@
Linux 下 cut 命令的 4 个本质且实用的示例
Linux 下 cut 命令的 4 个基础实用的示例
============================================================
`cut` 命令是用来从文本文件中移除“某些列”的经典工具。在本文中的“一列”可以被定义为按照一行中位置区分的一系列字符串或者字节, 或者是以某个分隔符为间隔的某些域。
`cut` 命令是用来从文本文件中移除“某些列”的经典工具。在本文中的“一列”可以被定义为按照一行中位置区分的一系列字符串或者字节,或者是以某个分隔符为间隔的某些域。
先前我已经介绍了[如何使用 AWK 命令][13]。在本文中,我将解释 Linux 下 `cut` 命令的 4 个基础且实用的例子,有时这些例子将帮你节省很多时间。
@ -11,26 +11,13 @@ Linux 下 cut 命令的 4 个本质且实用的示例
假如你想,你可以观看下面的视频,视频中解释了本文中我列举的 cut 命令的使用例子。
目录:
- https://www.youtube.com/PhE_cFLzVFw
* [作用在一系列字符上][8]
* [范围如何定义?][1]
### 1、 作用在一系列字符上
* [作用在一系列字节上][9]
* [作用在多字节编码的字符上][2]
当启用 `-c` 命令行选项时,`cut` 命令将移除一系列字符。
* [作用在域上][10]
* [处理不包含分隔符的行][3]
* [改变输出的分隔符][4]
* [非 POSIX GNU 扩展][11]
### 1\. 作用在一系列字符上
当启用 `-c` 命令行选项时cut 命令将移除一系列字符。
和其他的过滤器类似, cut 命令不会就地改变输入的文件,它将复制已修改的数据到它的标准输出里去。你可以通过重定向命令的结果到一个文件中来保存修改后的结果,或者使用管道将结果送到另一个命令的输入中,这些都由你来负责。
和其他的过滤器类似, `cut` 命令不会直接改变输入的文件,它将复制已修改的数据到它的标准输出里去。你可以通过重定向命令的结果到一个文件中来保存修改后的结果,或者使用管道将结果送到另一个命令的输入中,这些都由你来负责。
假如你已经下载了上面视频中的[示例测试文件][26],你将看到一个名为 `BALANCE.txt` 的数据文件,这些数据是直接从我妻子在她工作中使用的某款会计软件中导出的:
@ -50,7 +37,7 @@ ACCDOC ACCDOCDATE ACCOUNTNUM ACCOUNTLIB ACCDOCLIB
上述文件是一个固定宽度的文本文件,因为对于每一项数据,都使用了不定长的空格做填充,使得它看起来是一个对齐的列表。
这样一来,每一列数据开始和结束的位置都是一致的。从 cut 命令的字面意思去理解会给我们带来一个小陷阱:`cut` 命令实际上需要你指出你想_保留_的数据范围而不是你想_移除_的范围。所以假如我_只_需要上面文件中的 `ACCOUNTNUM``ACCOUNTLIB` 列,我需要这么做:
这样一来,每一列数据开始和结束的位置都是一致的。从 `cut` 命令的字面意思去理解会给我们带来一个小陷阱:`cut` 命令实际上需要你指出你想_保留_的数据范围,而不是你想_移除_的范围。所以假如我_只_需要上面文件中的 `ACCOUNTNUM` 和 `ACCOUNTLIB` 列,我需要这么做:
```
sh$ cut -c 25-59 BALANCE.txt | head
@ -68,17 +55,17 @@ ACCOUNTNUM ACCOUNTLIB
#### 范围如何定义?
正如我们上面看到的那样, cut 命令需要我们特别指定需要保留的数据的_范围_。所以下面我将更正式地介绍如何定义范围对于 `cut` 命令来说,范围是由连字符(`-`)分隔的起始和结束位置组成,范围是基于 1 计数的,即每行的第一项是从 1 开始计数的,而不是从 0 开始。范围是一个闭区间开始和结束位置都将包含在结果之中正如它们之间的所有字符那样。如果范围中的结束位置比起始位置小则这种表达式是错误的。作为快捷方式你可以省略起始_或_结束值正如下面的表格所示
正如我们上面看到的那样, `cut` 命令需要我们特别指定需要保留的数据的_范围_。所以下面我将更正式地介绍如何定义范围:对于 `cut` 命令来说,范围是由连字符(`-`)分隔的起始和结束位置组成,范围是基于 1 计数的,即每行的第一项是从 1 开始计数的,而不是从 0 开始。范围是一个闭区间,开始和结束位置都将包含在结果之中,正如它们之间的所有字符那样。如果范围中的结束位置比起始位置小,则这种表达式是错误的。作为快捷方式,你可以省略起始_或_结束值,正如下面的表格所示:
|||
|--|--|
| 范围 | 含义 |
|---|---|
| `a-b` | a 和 b 之间的范围(闭区间) |
|`a` | 与范围 `a-a` 等价 |
| `-b` | 与范围 `1-b` 等价 |
| `b-` | 与范围 `b-∞` 等价 |
cut 命令允许你通过逗号分隔多个范围,下面是一些示例:
`cut` 命令允许你通过逗号分隔多个范围,下面是一些示例:
```
# 保留 1 到 24 之间(闭区间)的字符
@ -108,8 +95,7 @@ Files /dev/fd/63 and /dev/fd/62 are identical
类似的,`cut` 命令 _不会重复数据_
```
# One might expect that could be a way to repeat
# the first column three times, but no...
# 某人或许期待这可以把第一列重复三次,但并不会……
cut -c -10,-10,-10 BALANCE.txt | head -5
ACCDOC
4
@ -118,13 +104,13 @@ ACCDOC
5
```
值得提及的是,曾经有一个提议,建议使用 `-o` 选项来实现上面提到的两个限制,使得 `cut` 工具可以重排或者重复数据。但这个提议被 [POSIX 委员会拒绝了][14]_“因为这类增强不属于 IEEE P1003.2b 草案标准的范围”_。
值得提及的是,曾经有一个提议,建议使用 `-o` 选项来去除上面提到的两个限制,使得 `cut` 工具可以重排或者重复数据。但这个提议被 [POSIX 委员会拒绝了][14]_“因为这类增强不属于 IEEE P1003.2b 草案标准的范围”_。
据我所知,我还没有见过哪个版本的 cut 程序实现了上面的提议,以此来作为扩展,假如你知道某些例外,请使用下面的评论框分享给大家!
据我所知,我还没有见过哪个版本的 `cut` 程序实现了上面的提议,以此来作为扩展,假如你知道某些例外,请使用下面的评论框分享给大家!
### 2\. 作用在一系列字节上
### 2 作用在一系列字节上
当使用 `-b` 命令行选项时cut 命令将移除字节范围。
当使用 `-b` 命令行选项时,`cut` 命令将移除字节范围。
乍一看,使用_字符_范围和使用_字节_范围没有什么明显的不同:
@ -197,11 +183,11 @@ ACCDOC ACCDOCDATE ACCOUNTLIB DEBIT CREDIT
36 1012017 VAT BS/ENC 00000000013,83
```
已经_毫无删减地_复制了上面命令的输出。所以可以很明显地看出列对齐那里有些问题。
我_毫无删减地_复制了上面命令的输出。所以可以很明显地看出列对齐那里有些问题。
对此我的解释是原来的数据文件只包含 US-ASCII 编码的字符(符号、标点符号、数字和没有发音符号的拉丁字母)。
但假如你仔细地查看经软件升级后产生的文件你可以看到新导出的数据文件保留了带发音符号的字母。例如名为“ALNÉENRE”的公司现在被合理地记录了,而不是先前的 “ALNEENRE”没有发音符号
但假如你仔细地查看经软件升级后产生的文件,你可以看到新导出的数据文件保留了带发音符号的字母。例如现在合理地记录了名为 “ALNÉENRE” 的公司,而不是先前的 “ALNEENRE”没有发音符号
`file -i` 正确地识别出了改变,因为它报告道现在这个文件是 [UTF-8 编码][15] 的。
@ -231,28 +217,26 @@ sh$ sed '2!d' BALANCE-V2.txt | hexdump -C
在 `hexdump`  输出的 00000030 那行,在一系列的空格(字节 `20`)之后,你可以看到:
* 字母 `A` 被编码为 `41`
* 字母 `L` 被编码为 `4c`
* 字母 `N` 被编码为 `4e`
但对于大写的[带有注音的拉丁大写字母 E][16] (这是它在 Unicode 标准中字母 _É_ 的官方名称),则是使用 _2_ 个字节 `c3 89` 来编码的。
这样便出现问题了:对于使用固定宽度编码的文件, 使用字节位置来表示范围的 `cut` 命令工作良好,但这并不适用于使用变长编码的 UTF-8 或者 [Shift JIS][17] 编码。这种情况在下面的 [POSIX标准的非规范性摘录][18] 中被明确地解释过:
这样便出现问题了:对于使用固定宽度编码的文件, 使用字节位置来表示范围的 `cut` 命令工作良好,但这并不适用于使用变长编码的 UTF-8 或者 [Shift JIS][17] 编码。这种情况在下面的 [POSIX 标准的非规范性摘录][18] 中被明确地解释过:
> 先前版本的 cut 程序将字节和字符视作等同的环境下运作(正如在某些实现下对 退格键<backspace> 和制表键<tab> 的处理)。在针对多字节字符的情况下,特别增加了 `-b` 选项。
> 先前版本的 `cut` 程序将字节和字符视作等同的环境下运作(正如在某些实现下对退格键 `<backspace>` 和制表键 `<tab>` 的处理)。在针对多字节字符的情况下,特别增加了 `-b` 选项。
嘿,等一下!我并没有在上面“有错误”的例子中使用 '-b' 选项,而是 `-c` 选项呀所以难道_不应该_能够成功处理了吗
是的确实_应该_但是很不幸即便我们现在已身处 2018 年GNU Coreutils 的版本为 8.30 了cut 程序的 GNU 版本实现仍然不能很好地处理多字节字符。引用 [GNU 文档][19] 的话说_`-c` 选项“现在和 `-b` 选项是相同的,但对于国际化的情形将有所不同[...]”_。需要提及的是这个问题距今已有 10 年之久了!
是的确实_应该_但是很不幸即便我们现在已身处 2018 年GNU Coreutils 的版本为 8.30 了,`cut` 程序的 GNU 版本实现仍然不能很好地处理多字节字符。引用 [GNU 文档][19] 的话说_`-c` 选项“现在和 `-b` 选项是相同的,但对于国际化的情形将有所不同[...]”_。需要提及的是这个问题距今已有 10 年之久了!
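如果你想自己快速验证这一点,可以试试下面这个假设性的小测试(结果取决于你所用的 `cut` 实现和 locale 设置):
```
# 在 UTF-8 locale 下,取一个以多字节字符“É”开头的字符串的第一个“字符”
printf 'École\n' | cut -c 1
# GNU cut 按字节处理,通常只会输出 É 的第一个字节(显示为乱码);
# 符合 POSIX 行为的实现(如下文的 OpenBSD 版本)则应输出完整的 É
```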
另一方面,[OpenBSD][20] 的实现版本和 POSIX 相吻合,这将归功于当前的本地化(locale) 设定来合理地处理多字节字符:
另一方面,[OpenBSD][20] 的实现版本和 POSIX 相吻合,这将归功于当前的本地化`locale`设定来合理地处理多字节字符:
```
# 确保随后的命令知晓我们现在处理的是 UTF-8 编码的文本文件
openbsd-6.3$ export LC_CTYPE=en_US.UTF-8
# 使用 `-c` 选项, cut 能够合理地处理多字节字符
# 使用 `-c` 选项, `cut` 能够合理地处理多字节字符
openbsd-6.3$ cut -c -24,36-59,93- BALANCE-V2.txt
ACCDOC ACCDOCDATE ACCOUNTLIB DEBIT CREDIT
4 1012017 TIDE SCHEDULE 00000001615,00
@ -286,7 +270,7 @@ ACCDOC ACCDOCDATE ACCOUNTLIB DEBIT CREDIT
36 1012017 VAT BS/ENC 00000000013,83
```
正如期望的那样,当使用 `-b` 选项而不是 `-c` 选项后, OpenBSD 版本的 cut 实现和传统的 `cut` 表现是类似的:
正如期望的那样,当使用 `-b` 选项而不是 `-c` 选项后, OpenBSD 版本的 `cut` 实现和传统的 `cut` 表现是类似的:
```
openbsd-6.3$ cut -b -24,36-59,93- BALANCE-V2.txt
@ -322,7 +306,7 @@ ACCDOC ACCDOCDATE ACCOUNTLIB DEBIT CREDIT
36 1012017 VAT BS/ENC 00000000013,83
```
### 3\. 作用在域上
### 3 作用在域上
从某种意义上说,使用 `cut` 来处理用特定分隔符隔开的文本文件要更加容易一些,因为只需要确定好每行中域之间的分隔符,然后复制域的内容到输出就可以了,而不需要烦恼任何与编码相关的问题。
@ -342,9 +326,9 @@ ACCDOC;ACCDOCDATE;ACCOUNTNUM;ACCOUNTLIB;ACCDOCLIB;DEBIT;CREDIT
6;1012017;623795;TOURIST GUIDE BOOK;FACT FA00006253 - BIT QUIROBEN;00000001531,00;
```
你可能知道上面文件是一个 [CSV][29] 格式的文件(它以逗号来分隔),即便有时候域分隔符不是逗号。例如分号(`;`)也常被用来作为分隔符,并且对于那些总使用逗号作为 [十进制分隔符][30]的国家(例如法国,所以上面我的示例文件中选用了他们国家的字符),当导出数据为 "CSV" 格式时,默认将使用分号来分隔数据。另一种常见的情况是使用 [tab 键][32] 来作为分隔符,从而生成叫做 [tab 分隔值][32] 的文件。最后,在 Unix 和 Linux 领域,冒号 (`:`) 是另一种你能找到的常见分隔符号,例如在标准的 `/etc/passwd``/etc/group` 这两个文件里。
你可能知道上面文件是一个 [CSV][29] 格式的文件(它以逗号来分隔),即便有时候域分隔符不是逗号。例如分号(`;`)也常被用来作为分隔符,并且对于那些总使用逗号作为 [十进制分隔符][30]的国家(例如法国,所以上面我的示例文件中选用了他们国家的字符),当导出数据为 “CSV” 格式时,默认将使用分号来分隔数据。另一种常见的情况是使用 [tab 键][32] 来作为分隔符,从而生成叫做 [tab 分隔值][32] 的文件。最后,在 Unix 和 Linux 领域,冒号 (`:`) 是另一种你能找到的常见分隔符号,例如在标准的 `/etc/passwd``/etc/group` 这两个文件里。
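举个例子(这是一个假设性的示例,`-d` 和 `-f` 选项会在下面正式介绍):要用冒号作为分隔符,从 `/etc/passwd` 中取出每个账户的用户名和登录 shell,可以这样写:
```
# 第 1 个域是用户名,第 7 个域是登录 shell
cut -d':' -f 1,7 /etc/passwd
```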
当处理使用分隔符隔开的文本文件格式时,你可以向带有 `-f` 选项的 cut 命令提供需要保留的域的范围,并且你也可以使用 `-d` 选项来定分隔符(当没有使用 `-d` 选项时,默认以 tab 字符来作为分隔符):
当处理使用分隔符隔开的文本文件格式时,你可以向带有 `-f` 选项的 `cut` 命令提供需要保留的域的范围,并且你也可以使用 `-d` 选项来指定分隔符(当没有使用 `-d` 选项时,默认以 tab 字符来作为分隔符):
```
sh$ cut -f 5- -d';' BALANCE.csv | head
@ -362,9 +346,9 @@ FACT FA00006253 - BIT QUIROBEN;00000001531,00;
#### 处理不包含分隔符的行
但要是输入文件中的某些行没有分隔符又该怎么办呢?很容易地认为可以将这样的行视为只包含第一个域。但 cut 程序并 _不是_ 这样做的。
但要是输入文件中的某些行没有分隔符又该怎么办呢?很容易地认为可以将这样的行视为只包含第一个域。但 `cut` 程序并 _不是_ 这样做的。
默认情况下,当使用 `-f` 选项时, cut 将总是原样输出不包含分隔符的那一行(可能假设它是非数据行,就像表头或注释等):
默认情况下,当使用 `-f` 选项时,`cut` 将总是原样输出不包含分隔符的那一行(可能假设它是非数据行,就像表头或注释等):
```
sh$ (echo "# 2018-03 BALANCE"; cat BALANCE.csv) > BALANCE-WITH-HEADER.csv
@ -388,8 +372,7 @@ DEBIT;CREDIT
00000001333,00;
```
假如你好奇心强,你还可以探索这种特性,来作为一种相对
隐晦的方式去保留那些只包含给定字符的行:
假如你好奇心强,你还可以探索这种特性,来作为一种相对隐晦的方式去保留那些只包含给定字符的行:
```
# 保留含有一个 `e` 的行
@ -398,7 +381,7 @@ sh$ printf "%s\n" {mighty,bold,great}-{condor,monkey,bear} | cut -s -f 1- -d'e'
#### 改变输出的分隔符
作为一种扩展, GNU 版本实现的 cut 允许通过使用 `--output-delimiter` 选项来为结果指定一个不同的域分隔符:
作为一种扩展, GNU 版本实现的 `cut` 允许通过使用 `--output-delimiter` 选项来为结果指定一个不同的域分隔符:
```
sh$ cut -f 5,6- -d';' --output-delimiter="*" BALANCE.csv | head
@ -416,10 +399,12 @@ FACT FA00006253 - BIT QUIROBEN*00000001531,00*
需要注意的是,在上面这个例子中,所有出现域分隔符的地方都被替换掉了,而不仅仅是那些在命令行中指定的作为域范围边界的分隔符。
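下面这个假设性的小例子可以更直观地说明这一点:第 3、4 域虽然同属 `3-` 这一个范围,它们之间的分隔符同样会被替换为新的输出分隔符:
```
sh$ echo "a;b;c;d" | cut -f 1,3- -d';' --output-delimiter='*'
a*c*d
```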
### 4\. 非 POSIX GNU 扩展
### 4 非 POSIX GNU 扩展
说到非 POSIX GNU 扩展,它们中的某些特别有用。特别需要提及的是下面的扩展也同样对字节、字符或者域范围工作良好(相对于当前的 GNU 实现来说)。
`--complement`
想想在 sed 地址中的感叹符号(`!`),使用它,`cut` 将只保存**没有**被匹配到的范围:
```
@ -436,7 +421,9 @@ ACCDOC;ACCDOCDATE;ACCOUNTNUM;ACCOUNTLIB;DEBIT;CREDIT
4;1012017;445452;VAT BS/ENC;00000000323,00;
```
使用 [NUL 字符][6] 来作为行终止符,而不是 [新行newline字符][7]。当你的数据包含 新行 字符时, `-z` 选项就特别有用了,例如当处理文件名的时候(因为在文件名中 新行 字符是可以使用的,而 NUL 则不可以)。
`--zero-terminated (-z)`
使用 [NUL 字符][6] 来作为行终止符,而不是 [<ruby>新行<rt>newline</rt></ruby>字符][7]。当你的数据包含 新行字符时, `-z` 选项就特别有用了,例如当处理文件名的时候(因为在文件名中新行字符是可以使用的,而 NUL 则不可以)。
为了展示 `-z` 选项,让我们先做一点实验。首先,我们将创建一个文件名中包含换行符的文件:
@ -448,7 +435,7 @@ BALANCE-V2.txt
EMPTY?FILE?WITH FUNKY?NAME.txt
```
现在假设我想展示每个 `*.txt` 文件的前 5 个字符。一个想当然的解法将会失败:
现在假设我想展示每个 `*.txt` 文件的前 5 个字符。一个想当然的解决方法将会失败:
```
sh$ ls -1 *.txt | cut -c 1-5
@ -460,7 +447,7 @@ WITH
NAME.
```
你可以已经知道 `[ls][21]` 是为了[方便人类使用][33]而特别设计的,并且在一个命令管道中使用它是一个反模式(确实是这样的)。所以让我们用 `[find][22]` 来替换它:
你可以已经知道 [ls][21] 是为了[方便人类使用][33]而特别设计的,并且在一个命令管道中使用它是一个反模式(确实是这样的)。所以让我们用 [find][22] 来替换它:
```
sh$ find . -name '*.txt' -printf "%f\n" | cut -c 1-5
@ -484,11 +471,11 @@ EMPTY
BALAN
```
通过上面最后的例子,我们就达到了本文的最后部分了,所以我将让你自己试试 `-printf` 后面那个有趣的 `"%f\0"` 参数或者理解为什么我在管道的最后使用了 `[tr][23]` 命令。
通过上面最后的例子,我们就达到了本文的最后部分了,所以我将让你自己试试 `-printf` 后面那个有趣的 `"%f\0"` 参数或者理解为什么我在管道的最后使用了 [tr][23] 命令。
### 使用 cut 命令可以实现更多功能
我只是列举了 cut 命令的最常见且在我眼中最实质的使用方式。你甚至可以将它以更加实用的方式加以运用,这取决于你的逻辑和想象。
我只是列举了 `cut` 命令的最常见且在我眼中最基础的使用方式。你甚至可以将它以更加实用的方式加以运用,这取决于你的逻辑和想象。
不要再犹豫了,请使用下面的评论框贴出你的发现。最后一如既往的,假如你喜欢这篇文章,请不要忘记将它分享到你最喜爱网站和社交媒体中!
@ -496,9 +483,9 @@ BALAN
via: https://linuxhandbook.com/cut-command/
作者:[Sylvain Leroux ][a]
作者:[Sylvain Leroux][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,32 +3,35 @@
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
COPR 是个人软件仓库[集合][1],它不在 Fedora 中。这是因为某些软件不符合轻松打包的标准。或者它可能不符合其他 Fedora 标准,尽管它是免费和开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不被 Fedora 基础设施不支持或没有被该项目所签名。但是,这是一种尝试新的或实验性的软件的一种巧妙的方式。
COPR 是个人软件仓库[集合][1],它不在 Fedora 中。这是因为某些软件不符合轻松打包的标准。或者它可能不符合其他 Fedora 标准,尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施支持,也没有被该项目所签名。但是,这是一种尝试新的或实验性软件的巧妙方式。
这是 COPR 中一组新的有趣项目。
### Hledger
[Hledger][2]是用于跟踪货币或其他商品的命令行程序。它使用简单的纯文本格式日志来存储数据和复式记帐。除了命令行界面hledger 还提供终端界面和 Web 客户端,可以显示帐户余额图。
[Hledger][2] 是用于跟踪货币或其他商品的命令行程序。它使用简单的纯文本格式日志来存储数据和复式记帐。除了命令行界面,hledger 还提供终端界面和 Web 客户端,可以显示帐户余额图。
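安装(见下面的安装说明)之后,可以大致这样体验它(以下只是一个假设性的示例,具体命令请以 hledger 的文档为准):
```
hledger add        # 交互式地记一笔账,数据保存为纯文本日志
hledger balance    # 在命令行查看账户余额
# 终端界面和 Web 客户端分别由 hledger-ui 和 hledger-web 提供(若打包中包含)
```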
![][3]
#### 安装说明
该仓库目前为 Fedora 27、28 和 Rawhide 提供了 hledger。要安装 hledger,请使用以下命令:
```
sudo dnf copr enable kefah/HLedger
sudo dnf install hledger
```
### Neofetch
[Neofetch][4] 是一个命令行工具,可显示有关操作系统、软件和硬件的信息。其主要目的是以紧凑的方式显示数据,方便截图展示。你可以使用命令行标志和配置文件将 Neofetch 配置为完全按照你希望的方式显示。
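例如,安装(见下文)之后可以这样运行它(假设性示例,可用的标志和信息块名称请以 `neofetch --help` 为准):
```
neofetch                                # 显示默认的系统信息摘要
neofetch --disable packages resolution  # 关闭其中某些信息块
```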
![][5]
#### 安装说明
仓库目前为 Fedora 28 提供 Neofetch。要安装 Neofetch,请使用以下命令:
```
sudo dnf copr enable sysek/neofetch
sudo dnf install neofetch
@ -38,29 +41,31 @@ sudo dnf install neofetch
### Remarkable
[Remarkable][6] 是 Markdown 文本编辑器,它使用类似 GitHub 的 Markdown 风格。它提供了文档的预览,以及导出为 PDF 和 HTML 的选项。Markdown 有几种可用的样式,包括使用 CSS 创建自己的样式的选项。此外,Remarkable 支持用于编写方程的 LaTeX 语法和源代码的语法高亮。
![][7]
#### 安装说明
该仓库目前为 Fedora 28 和 Rawhide 提供 Remarkable。要安装 Remarkable,请使用以下命令:
```
sudo dnf copr enable neteler/remarkable
sudo dnf install remarkable
```
### Aha
[Aha][8](或 ANSI HTML Adapter是一个命令行工具可将终端转义成 HTML 代码。这允许你将 git diff 或 htop 的输出共享为静态 HTML 页面。
[Aha][8](即 ANSI HTML Adapter)是一个命令行工具,可将终端转义序列转换成 HTML 代码。这允许你将 git diff 或 htop 的输出共享为静态 HTML 页面。
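例如(一个假设性的用法示例,具体管道写法请以 aha 的文档为准),安装后可以这样把带颜色的 `git diff` 输出转换成 HTML 页面:
```
git diff --color | aha > diff.html
```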
![][9]
#### 安装说明
[仓库][10] 目前为 Fedora 26、27、28 和 Rawhide、EPEL 6 和 7 以及其他发行版提供 aha。要安装 aha,请使用以下命令:
```
sudo dnf copr enable scx/aha
sudo dnf install aha
```
@ -71,7 +76,7 @@ via: https://fedoramagazine.org/4-try-copr-july-2018/
作者:[Dominik Turecek][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,3 +1,6 @@
Translating by MjSeven
API Star: Python 3 API Framework Polyglot.Ninja()
======
For building quick APIs in Python, I have mostly depended on [Flask][1]. Recently I came across a new API framework for Python 3 named “API Star” which seemed really interesting to me for several reasons. Firstly the framework embraces modern Python features like type hints and asyncio. And then it goes ahead and uses these features to provide awesome development experience for us, the developers. We will get into those features soon but before we begin, I would like to thank Tom Christie for all the work he has put into Django REST Framework and now API Star.

View File

@ -1,3 +1,5 @@
Translating by jessie-pang
How To Check All Running Services In Linux
======

View File

@ -1,290 +0,0 @@
Translating by MjSeven
A Set Of Useful Utilities For Debian And Ubuntu Users
======
![](https://www.ostechnix.com/debian-goodies-a-set-of-useful-utilities-for-debian-and-ubuntu-users/)
Are you using a Debian-based system? Great! I am here today with good news for you. Say hello to **“Debian-goodies”**, a collection of useful utilities for Debian-based systems like Ubuntu and Linux Mint. This set of utilities provides some additional useful commands which are not available by default in Debian-based systems. Using these tools, users can find which programs consume the most disk space, which services need to be restarted after updating the system, search for a file matching a pattern in a package, list the installed packages based on a search string, and a lot more. In this brief guide, we will discuss some useful Debian goodies.
### Debian-goodies Useful Utilities For Debian And Ubuntu Users
The debian-goodies package is available in the official repositories of Debian and its derivatives such as Ubuntu and Linux Mint. To install the debian-goodies package, simply run:
```
$ sudo apt-get install debian-goodies
```
Debian-goodies has just been installed. Let us go ahead and see some useful utilities.
#### 1. Checkrestart
Let me start with one of my favorites, the **“checkrestart”** utility. When installing security updates, some running applications might still use the old libraries. In order to apply the security updates completely, you need to find and restart all of them. This is where checkrestart comes in handy. This utility will find which processes are still using the old versions of libraries. You can then restart those services.
To check which daemons need to be restarted after library upgrades, run:
```
$ sudo checkrestart
[sudo] password for sk:
Found 0 processes using old versions of upgraded files
```
Since I didn't perform any security updates lately, it shows nothing.
Please note that the checkrestart utility does work well. However, there is a newer, similar tool named “needrestart” available in the latest Debian systems. needrestart is inspired by the checkrestart utility and does exactly the same job. It is actively maintained and supports newer technologies such as containers (LXC, Docker).
Here are the features of Needrestart:
* supports (but does not require) systemd
* binary blacklisting (i.e. display managers)
* tries to detect pending kernel upgrades
* tries to detect required restarts of interpreter based daemons (supports Perl, Python, Ruby)
* fully integrated into apt/dpkg using hooks
It is available in the default repositories too, so you can install it using the command:
```
$ sudo apt-get install needrestart
```
Now you can check the list of daemons that need to be restarted after updating your system using the command:
```
$ sudo needrestart
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
Failed to check for processor microcode upgrades.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
```
The good thing is that needrestart works on other Linux distributions too. For example, you can install it on Arch Linux and its variants from the AUR using any AUR helper program, like below.
```
$ yaourt -S needrestart
```
On fedora:
```
$ sudo dnf install needrestart
```
#### 2. Check-enhancements
The check-enhancements utility is used to find packages which enhance the installed packages. It will list all packages that enhance other packages but are not strictly necessary to run them. You can find enhancements for a single package or for all installed packages using the “-ip” or “--installed-packages” flag.
For example, I am going to list the enhancements for the gimp package.
```
$ check-enhancements gimp
gimp => gimp-data: Installed: (none) Candidate: 2.8.22-1
gimp => gimp-gmic: Installed: (none) Candidate: 1.7.9+zart-4build3
gimp => gimp-gutenprint: Installed: (none) Candidate: 5.2.13-2
gimp => gimp-help-ca: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-de: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-el: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-en: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-es: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-fr: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-it: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-ja: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-ko: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-nl: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-nn: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-pt: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-ru: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-sl: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-sv: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-plugin-registry: Installed: (none) Candidate: 7.20140602ubuntu3
gimp => xcftools: Installed: (none) Candidate: 1.0.7-6
```
To list the enhancements for all installed packages, run:
```
$ check-enhancements -ip
autoconf => autoconf-archive: Installed: (none) Candidate: 20170928-2
btrfs-progs => snapper: Installed: (none) Candidate: 0.5.4-3
ca-certificates => ca-cacert: Installed: (none) Candidate: 2011.0523-2
cryptsetup => mandos-client: Installed: (none) Candidate: 1.7.19-1
dpkg => debsig-verify: Installed: (none) Candidate: 0.18
[...]
```
#### 3. dgrep
As the name implies, dgrep is used to search all files in the specified packages based on the given regex. For instance, I am going to search for files that contain the regex “text” in the Vim package.
```
$ sudo dgrep "text" vim
Binary file /usr/bin/vim.tiny matches
/usr/share/doc/vim-tiny/copyright: that they must include this license text. You can also distribute
/usr/share/doc/vim-tiny/copyright: include this license text. You are also allowed to include executables
/usr/share/doc/vim-tiny/copyright: 1) This license text must be included unmodified.
/usr/share/doc/vim-tiny/copyright: text under a) applies to those changes.
/usr/share/doc/vim-tiny/copyright: context diff. You can choose what license to use for new code you
/usr/share/doc/vim-tiny/copyright: context diff will do. The e-mail address to be used is
/usr/share/doc/vim-tiny/copyright: On Debian systems, the complete text of the GPL version 2 license can be
[...]
```
dgrep supports most of grep's options. Refer to the following guide to learn more about grep commands.
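For example, here is a hedged sketch assuming grep's usual flags pass through, as the text above suggests:
```
# case-insensitive search inside the vim-tiny package (grep's -i)
$ sudo dgrep -i "general public license" vim-tiny
# list only the names of the matching files (grep's -l)
$ sudo dgrep -l "license" vim-tiny
```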
#### 4. dglob
The dglob utility generates a list of package names which match a pattern. For example, find the list of packages that match the string “vim”.
```
$ sudo dglob vim
vim-tiny:amd64
vim:amd64
vim-common:all
vim-runtime:all
```
By default, dglob will display only the installed packages. If you want to list all packages (installed and not installed), use **-a** flag.
```
$ sudo dglob vim -a
```
#### 5. debget
The **debget** utility will download a .deb for a package from APT's database. Please note that it will only download the given package, not its dependencies.
```
$ debget nano
Get:1 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 nano amd64 2.9.3-2 [231 kB]
Fetched 231 kB in 2s (113 kB/s)
```
#### 6. dpigs
This is another useful utility in this collection. The **dpigs** utility will find and show you which installed packages occupy the most disk space.
```
$ dpigs
260644 linux-firmware
167195 linux-modules-extra-4.15.0-20-generic
75186 linux-headers-4.15.0-20
64217 linux-modules-4.15.0-20-generic
55620 snapd
31376 git
31070 libicu60
28420 vim-runtime
25971 gcc-7
24349 g++-7
```
As you can see, the linux-firmware package occupies the most disk space. By default, dpigs will display the **top 10** packages that occupy the most disk space. If you want to display more packages, for example 20, run the following command:
```
$ dpigs -n 20
```
#### 7. debman
The **debman** utility allows you to easily view man pages from a binary **.deb** without extracting it. You don't even need to install the .deb package. The following command displays the man page of the nano package.
```
$ debman -f nano_2.9.3-2_amd64.deb nano
```
If you don't have a local copy of the .deb package, use the **-p** flag to download and view the package's man page.
```
$ debman -p nano nano
```
#### 8. debmany
An installed Debian package includes not only a man page, but also other files such as acknowledgements, copyright, and readme files. The **debmany** utility allows you to view and read those files.
```
$ debmany vim
```
![][1]
Choose the file you want to view using arrow keys and hit ENTER to view the selected file. Press **q** to go back to the main menu.
If the specified package is not installed, debmany will download it from the APT database and display the man pages. The **dialog** package should be installed to read the man pages.
#### 9. popbugs
If you're a developer, the **popbugs** utility will be quite useful. It will display a customized release-critical bug list based on packages you use (using popularity-contest data). For those who don't know, the popularity-contest package sets up a cron job that periodically and anonymously submits statistics to the Debian developers about the most used Debian packages on this system. This information helps Debian make decisions such as which packages should go on the first CD. It also lets Debian improve future versions of the distribution so that the most popular packages are the ones which are installed automatically for new users.
To generate a list of critical bugs and display the result in your default web browser, run:
```
$ popbugs
```
Also, you can save the result in a file as shown below.
```
$ popbugs --output=bugs.txt
```
#### 10. which-pkg-broke
This command will display all the dependencies of the given package and when each dependency was installed. Using this information, you can easily find which package might have broken another, and at what time, after upgrading the system or a package.
```
$ which-pkg-broke vim
Package <debconf-2.0> has no install time info
debconf Wed Apr 25 08:08:40 2018
gcc-8-base:amd64 Wed Apr 25 08:08:41 2018
libacl1:amd64 Wed Apr 25 08:08:41 2018
libattr1:amd64 Wed Apr 25 08:08:41 2018
dpkg Wed Apr 25 08:08:41 2018
libbz2-1.0:amd64 Wed Apr 25 08:08:41 2018
libc6:amd64 Wed Apr 25 08:08:42 2018
libgcc1:amd64 Wed Apr 25 08:08:42 2018
liblzma5:amd64 Wed Apr 25 08:08:42 2018
libdb5.3:amd64 Wed Apr 25 08:08:42 2018
[...]
```
#### 11. dhomepage
The dhomepage utility will display the official website of the given package in your default web browser. For example, the following command will open the Vim editor's home page.
```
$ dhomepage vim
```
And, that's all for now. Debian-goodies is a must-have tool in your arsenal. Even though we don't use all of these utilities often, they are worth learning, and I am sure they will be really helpful at times.
I hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/debian-goodies-a-set-of-useful-utilities-for-debian-and-ubuntu-users/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:http://www.ostechnix.com/wp-content/uploads/2018/05/debmany.png

View File

@ -1,3 +1,5 @@
Translating by jessie-pang
How CERN Is Using Linux and Open Source
============================================================

View File

@ -0,0 +1,218 @@
BriFuture is translating
Twitter Sentiment Analysis using NodeJS
============================================================
![](https://i.imgur.com/7hIfpzt.png)
If you want to know how people feel about something, there is no better place than Twitter. It is a continuous stream of opinion, with around 6,000 new tweets being created every second. The internet is quick to react to events and if you want to be updated with the latest and hottest, Twitter is the place to be.
Now, we live in an age where data is king, and companies put Twitter's data to good use. From gauging the reception of their new products to trying to predict the next market trend, analysis of Twitter data has many uses. Businesses use it to market their product to the right customers, to gather feedback on their brand and improve, or to assess the reasons for the failure of a product or promotional campaign. Not only businesses: many political and economic decisions are made based on observation of people's opinion. Today, I will try and give you a taste of simple [sentiment analysis][1] of tweets to determine whether a tweet is positive, negative or neutral. It won't be as sophisticated as those used by professionals, but nonetheless, it will give you an idea about opinion mining.
We will be using NodeJs since JavaScript is ubiquitous nowadays and is one of the easiest languages to get started with.
### Prerequisite:
* NodeJs and NPM installed
* A little experience with NodeJs and NPM packages
* some familiarity with the command line.
Alright, that's it. Let's get started.
### Getting Started
Make a new directory for your project. Open a terminal (or command line), go inside the newly created directory, and run the `npm init -y` command. This will create a `package.json` in your directory. Now we can install the npm packages we need. We just need to create a new file named `index.js` and then we are all set to start coding.
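If it helps, here is a rough sketch of those setup steps in a shell (the individual package installs are also shown one at a time in the sections below):
```
mkdir twitter-sentiment && cd twitter-sentiment
npm init -y                          # creates package.json
touch index.js                       # the script we will write below
npm i twit dotenv sentiment colors   # the packages used later in this article
```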
### Getting the tweets
Well, we want to analyze tweets, and for that we need programmatic access to Twitter. For this, we will use the [twit][2] package. So, let's install it with the `npm i twit` command. We also need to register an app through our account to gain access to the Twitter API. Head over to this [link][3], fill in all the details, and copy the Consumer Key, Consumer Secret, Access Token and Access Token Secret from the 'Keys and Access Tokens' tab into a `.env` file like this:
```
# .env
# replace the stars with values you copied
CONSUMER_KEY=************
CONSUMER_SECRET=************
ACCESS_TOKEN=************
ACCESS_TOKEN_SECRET=************
```
Now, let's begin.
Open `index.js` in your favorite code editor. We need to install the `dotenv` package to read from the `.env` file, with the command `npm i dotenv`. Alright, let's create an API instance.
```
const Twit = require('twit');
const dotenv = require('dotenv');
dotenv.config();
const { CONSUMER_KEY
, CONSUMER_SECRET
, ACCESS_TOKEN
, ACCESS_TOKEN_SECRET
} = process.env;
const config_twitter = {
consumer_key: CONSUMER_KEY,
consumer_secret: CONSUMER_SECRET,
access_token: ACCESS_TOKEN,
access_token_secret: ACCESS_TOKEN_SECRET,
timeout_ms: 60*1000
};
let api = new Twit(config_twitter);
```
Here we have established a connection to the Twitter with the required configuration. But we are not doing anything with it. Let's define a function to get tweets.
```
async function get_tweets(q, count) {
let tweets = await api.get('search/tweets', {q, count, tweet_mode: 'extended'});
return tweets.data.statuses.map(tweet => tweet.full_text);
}
```
This is an async function because `api.get` returns a promise, and instead of chaining `then`s, I wanted an easy way to extract the text of the tweets. It accepts two arguments, `q` and `count`: `q` is the query or keyword we want to search for, and `count` is the number of tweets we want the `api` to return.
So now we have an easy way to get the full texts from the tweets. But we still have a problem, the text that we will get now may contain some links or may be truncated if it's a retweet. So we will write another function that will extract and return the text of the tweets, even for retweets and remove the links if any.
```
function get_text(tweet) {
let txt = tweet.retweeted_status ? tweet.retweeted_status.full_text : tweet.full_text;
return txt.split(/ |\n/).filter(v => !v.startsWith('http')).join(' ');
}
async function get_tweets(q, count) {
let tweets = await api.get('search/tweets', {q, count, 'tweet_mode': 'extended'});
return tweets.data.statuses.map(get_text);
}
```
So, now we have the text of the tweets. Our next step is getting the sentiment from the text. For this, we will use another package from `npm` - the [`sentiment`][4] package. Let's install it like the other packages and add it to our script.
```
const sentiment = require('sentiment')
```
Using `sentiment` is very easy. We just have to call the `sentiment` function on the text that we want to analyze, and it will return the comparative score of the text. If the score is below 0, it expresses a negative sentiment, a score above 0 is positive, and 0, as you may have guessed, is neutral. Based on this, we will print the tweets in different colors - green for positive, red for negative and blue for neutral. For this, we will use the [`colors`][5] package. Let's install it like the other packages and add it to our script.
```
const colors = require('colors/safe');
```
Alright, now let us bring it all together in a `main` function.
```
async function main() {
let keyword = /* define the keyword that you want to search for */;
let count = /* define the count of tweets you want */;
let tweets = await get_tweets(keyword, count);
for (tweet of tweets) {
let score = sentiment(tweet).comparative;
tweet = `${tweet}\n`;
if (score > 0) {
tweet = colors.green(tweet);
} else if (score < 0) {
tweet = colors.red(tweet);
} else {
tweet = colors.blue(tweet);
}
console.log(tweet);
}
}
```
And finally, execute the `main` function.
```
main();
```
There you have it, a short script of analyzing the basic sentiments of a tweet.
```
// full script
const Twit = require('twit');
const dotenv = require('dotenv');
const sentiment = require('sentiment');
const colors = require('colors/safe');
dotenv.config();
const { CONSUMER_KEY
, CONSUMER_SECRET
, ACCESS_TOKEN
, ACCESS_TOKEN_SECRET
} = process.env;
const config_twitter = {
consumer_key: CONSUMER_KEY,
consumer_secret: CONSUMER_SECRET,
access_token: ACCESS_TOKEN,
access_token_secret: ACCESS_TOKEN_SECRET,
timeout_ms: 60*1000
};
let api = new Twit(config_twitter);
function get_text(tweet) {
let txt = tweet.retweeted_status ? tweet.retweeted_status.full_text : tweet.full_text;
return txt.split(/ |\n/).filter(v => !v.startsWith('http')).join(' ');
}
async function get_tweets(q, count) {
let tweets = await api.get('search/tweets', {q, count, 'tweet_mode': 'extended'});
return tweets.data.statuses.map(get_text);
}
async function main() {
let keyword = 'avengers';
let count = 100;
let tweets = await get_tweets(keyword, count);
for (tweet of tweets) {
let score = sentiment(tweet).comparative;
tweet = `${tweet}\n`;
if (score > 0) {
tweet = colors.green(tweet);
} else if (score < 0) {
tweet = colors.red(tweet);
} else {
tweet = colors.blue(tweet)
}
console.log(tweet)
}
}
main();
```
--------------------------------------------------------------------------------
via: https://boostlog.io/@anshulc95/twitter-sentiment-analysis-using-nodejs-5ad1331247018500491f3b6a
作者:[Anshul Chauhan][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://boostlog.io/@anshulc95
[1]:https://en.wikipedia.org/wiki/Sentiment_analysis
[2]:https://github.com/ttezel/twit
[3]:https://boostlog.io/@anshulc95/apps.twitter.com
[4]:https://www.npmjs.com/package/sentiment
[5]:https://www.npmjs.com/package/colors
[6]:https://boostlog.io/tags/nodejs
[7]:https://boostlog.io/tags/twitter
[8]:https://boostlog.io/@anshulc95

View File

@ -0,0 +1,143 @@
bestony is translating
Becoming a senior developer: 9 experiences you'll encounter
============================================================
![](https://www.hpe.com/content/dam/hpe/insights/articles/2018/07/becoming-a-senior-developer-9-experiences-youll-encounter/featuredStory/do-You-Want-To-Be-a-Master-Programmer.jpg.transform/nxt-1043x496-crop/image.jpeg)
Plenty of career guides suggest appropriate steps to take if you want a management track. But what if you want to stay technical—and simply become the best possible programmer? These non-obvious markers let you know you're on the right path.
Many programming career guidelines stress the skills a software developer is expected to acquire. Such general advice suggests that someone who wants to focus on a technical track—as opposed to, say, [taking a management path to CIO][5]—should go after the skills needed to mentor junior developers, design future application features, build out release engineering systems, and set company standards.
That isn't this article.
Being a developer—a good one—isn't just about writing code. To be successful, you do a lot of planning, you deal with catastrophes, and you prevent catastrophes. Not to mention you spend plenty of time [working with other humans][6] about what your code should do.
Following are a number of markers you'll likely encounter as your career progresses and you become a more accomplished developer. You'll have highs that boost you up and remind you how awesome you are. You'll also encounter lows that keep you humble and give you wisdom—at least in retrospect, if you respond to them appropriately.
These experiences may feel good, they may be uncomfortable, or they may be downright scary. They're all learning experiences—at least for those developers who sincerely want to move forward, in both skills and professional ambition. These experiences often change the way developers look at their job or how they approach the next problem. It's why an experienced developer's value to a company is more than just a list of technology buzzwords.
Here, in no particular order, is a sampling of what you'll run into on your way to becoming a senior developer—not in terms of a specific job title but being confident about creating quality code that serves users.
### You write your first big bug into production
Probably your initial step into the big leagues is the first bug you write into production. It's a sickening feeling. You know that the software you're working on is now broken in some significant way because of something you did, code you wrote, or a test you didn't run.
No matter how good a programmer you are, you'll make mistakes. You're a human, and that's part of what we do.
Most developers learn from the “bug that went live” experience. You promise never to make the same bug again. You analyze what happened, and you think about how the bug could have been prevented. For me, one effect of discovering I let a bug into production code is that it reinforced my belief that compiler warnings and static analysis tools are a programmer's best friend.
You repeat the process when it happens again. It  _will_  happen again, but as your programming skill improves, it happens less frequently.
### You delete production data for the first time
It might be a `DROP TABLE` in production or [a mistaken `rm -rf`][7]. Maybe you clicked on the wrong volume to format. You get an uneasy feeling that "this is taking longer to run than I would expect. It's not running on... oh, no!" followed by a mad scramble to fix it.
Data loss has long-term effects on a growing-wiser developer much like the production bug. Afterward, you re-examine how you work. It teaches you to take more safeguards than you did previously. Maybe you decide to create a more rigorous rotation schedule for backups, or even start having a backup schedule at all.
As with the bug in production, you learn that you can survive making a mistake, and it's not the end of the world.
### You automate away part of your job
There's an old saying that you can't get promoted if you can't be replaced. Anything that ties you to a specific job or task is an anchor on your ability to move up in the company or be assigned newer and more interesting tasks.
When good programmers find themselves doing drudgework as part of their job, they find a way to let a machine do it. If they are stuck [scanning server logs][8] every Monday looking for problems, they'll install a tool like Logwatch to summarize the results. When there are many servers to be monitored, a good programmer will turn to a more capable tool that analyzes logs on multiple servers.
In each case, wise programmers provide more value to their company, because an automated system is much cheaper than a senior programmer's salary. They also grow personally by eliminating drudgery, leaving them more time to work on more challenging tasks.
### You use existing code instead of writing your own
A senior programmer knows that code that doesn't get written doesn't have bugs, and that many problems, both common and uncommon, have already been solved—in many cases, multiple times.
Senior programmers know that the chances are very low that they can write, test, and debug their own code for a task faster or cheaper than existing code that does what they want. It doesn't have to be perfect to make it worth their while.
It might take a little bit of turning down your ego to make it happen, but that's an excellent skill for senior programmers to have, too.
### You are publicly recognized for achievements
Many people aren't comfortable with public recognition. It's embarrassing. We have these amazing skills, and we like the feeling of helping others, but we can be embarrassed when it's called out.
Praise comes in many forms and many sizes. Maybe it's winning an "employee of the quarter" award for a project you drove and being presented a plaque onstage. It could be as low-key as your team leader saying, "Thanks to Cheryl for implementing that new microservice."
Whatever it is, accept it graciously and appreciatively, even if you're embarrassed by the attention. Don't diminish the praise you receive with, "Oh, it was nothing" or anything similar. Accept credit for the things that users and co-workers appreciate. Thank the speaker and say you were glad you could be of service.
First, this is the polite thing to do. When people praise you, they want it to be acknowledged. In addition, that warm recognition helps you in the future. Remembering it gets you through those crappy days, such as when you uncover bugs in your code.
### You turn down a user request
As much as we love being superheroes who can do amazing things with computers, sometimes turning down a request is best for the organization. Part of being a senior programmer is knowing when not to write code. A senior programmer knows that every bit of code in a codebase is a chance for things to go wrong and a potential future cost for maintenance.
You might be uncomfortable the first time you tell a user that you won't be incorporating his maybe-even-useful suggestion. But this is a notable occasion. It means you understand the application and its role in a larger context. It also means you “own” the software, in a positive, confident way.
The organization need not be an employer, either. Open source project managers deal with this all the time, when they have to tell a user, "Sorry, it doesn't fit with where the project is going.”
### You know when to fight for what's right and when it really doesn't matter
Rookie programmers are full of knowledge straight from school, having learned all the right ways to do things. They're eager to apply their knowledge and make amazing things happen for their employers. However, they're often surprised to find that out in the business world, things sometimes don't get done the "right" way.
There's an old military saying: No plan survives contact with the enemy. It's the same with new programmers and project plans. Sometimes in the heat of the battle of business, the purist computer science techniques learned in school fall by the wayside.
Maybe the database schema gets slapped together in a way that isn't perfect [fifth normal form][9]. Sometimes code gets cut and pasted rather than refactored out into a new function or library. Plenty of production systems run on shell scripts and prayers. The wise programmer knows when to push for the right way to do things and when to take the cheap way out.
The first time you do it, it feels like you're selling out your principles. It's not. The balance between academic purism and the realities of getting work done can be a delicate one, and that knowledge of when to do things less than perfectly is part of the wisdom you'll acquire.
### You are asked what to do
After a while, you'll have earned a reputation in your organization for getting things done. It won't be just for having expertise in a certain area—it'll be wisdom. Someone will come to you and ask for guidance with a project or a problem.
That person isn't just asking you for help with a problem. You are being asked to lead.
A common situation is when you are asked to help a team of less-experienced developers that's navigating difficult new terrain or needs shepherding on a project. That's when you'll be called on to help not just do things but show people how to improve their own skills.
It might also be leadership from a technical point of view. Your boss might say, "We need a new indexing solution. Find out what you can about FooIndex and BarSearch, and let me know what you propose." That's the sort of responsibility given only to someone who has demonstrated wisdom and experience.
### You are seriously headhunted for the first time
Recruiting professionals are always looking for talent. Most recruiters seem to do random emailing and LinkedIn harvesting. But every so often, they find out about talented performers and hunt them down.
When that happens, it's a feather in your cap. Maybe a former colleague spoke to a recruiter friend trying to place a developer at a company that needs the skills you have. If you get a personal recommendation for a position—even if you don't want the job—it means you've really arrived. You're recognized as an expert, or someone who brings value to an organization, enough to recommend you to others.
### Onward
I hope that my little list helps prompt some thought about [where you are in your career][10] or [where you might be headed][11]. Markers and milestones can help you understand what's around you and what to expect.
This list is far from complete, of course. Everyone has their own story. In fact, one of the ways to know you've hit a milestone is when you find yourself telling a story about it to others. When you do find yourself looking back at a tough situation, make sure to reflect on what it means to you and why. Experience is a great teacher—if you listen to it.
What are your markers? How did you know you had finally become a senior programmer? Tweet at [@enterprisenxt][12] and let me know.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.
[![](https://www.hpe.com/content/dam/hpe/insights/contributors/andy-lester/AndyLester_headshot-400x400.jpg.transform/nxt-116x116/image.jpeg)][13]
### 作者简介
Andy Lester has been a programmer and developer since the 1980s, when COBOL walked the earth. He is the author of the job-hunting guide [Land the Tech Job You Love][2] (2009, Pragmatic Bookshelf). Andy has been an active contributor to the open source community for decades, most notably as the creator of the grep-like code search tool [ack][3].
--------------------------------------------------------------------------------
via: https://www.hpe.com/us/en/insights/articles/becoming-a-senior-developer-9-experiences-youll-encounter-1807.html
作者:[Andy Lester][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.hpe.com/us/en/insights/contributors/andy-lester.html
[1]:https://www.hpe.com/us/en/insights/contributors/andy-lester.html
[2]:https://pragprog.com/book/algh/land-the-tech-job-you-love
[3]:https://beyondgrep.com/
[4]:https://www.hpe.com/us/en/resources/storage/containers-for-dummies.html?jumpid=in_510384402_seniordev0718
[5]:https://www.hpe.com/us/en/insights/articles/7-career-milestones-youll-meet-on-the-cio-and-it-management-track-1805.html
[6]:https://www.hpe.com/us/en/insights/articles/how-to-succeed-in-it-without-social-skills-1705.html
[7]:https://www.hpe.com/us/en/insights/articles/the-linux-commands-you-should-never-use-1712.html
[8]:https://www.hpe.com/us/en/insights/articles/back-to-basics-what-sysadmins-must-know-about-logging-and-monitoring-1805.html
[9]:http://www.bkent.net/Doc/simple5.htm
[10]:https://www.hpe.com/us/en/insights/articles/career-interventions-when-your-it-career-needs-a-swift-kick-1806.html
[11]:https://www.hpe.com/us/en/insights/articles/how-to-avoid-an-it-career-dead-end-1806.html
[12]:https://twitter.com/enterprisenxt
[13]:https://www.hpe.com/us/en/insights/contributors/andy-lester.html

View File

@ -0,0 +1,121 @@
FSSlc is translating
netdev day 1: IPsec!
============================================================
Hello! This year, like last year, I'm at the [netdev conference][3]. (here are my [notes from last year][4]).
Today at the conference I learned a lot about IPsec, so we're going to talk about IPsec! There was an IPsec workshop given by Sowmini Varadhan and [Paul Wouters][5]. All of the mistakes in this post are 100% my fault though :).
### what's IPsec?
IPsec is a protocol used to encrypt IP packets. Some VPNs are implemented with IPsec. One big thing I hadn't really realized until today is that there isn't just one protocol used for VPNs - I think VPN is just a general term meaning “your IP packets get encrypted and sent through another server”, and VPNs can be implemented using a bunch of different protocols (OpenVPN, PPTP, SSTP, IPsec, etc) in a bunch of different ways.
Why is IPsec different from other VPN protocols? (like, why was there a tutorial about it at netdev and not the other protocols?) My understanding is that there are 2 things that make it different:
* It's an IETF standard, documented in eg [RFC 6071][1] (did you know the IETF is the group that makes RFCs? I didn't until today!)
* it's implemented in the Linux kernel (so it makes sense that there was a netdev tutorial on it, since netdev is a Linux kernel networking conference :))
### How does IPsec work?
So let's say your laptop is using IPsec to encrypt its packets and send them through another device. How does that work? There are 2 parts to IPsec: a userspace part, and a kernel part.
The userspace part of IPsec is responsible for key exchange, using a protocol called [IKE][6] (“internet key exchange”). Basically when you open a new VPN connection, you need to talk to the VPN server and negotiate a key to do encryption.
The kernel part of IPsec is responsible for the actual encryption of packets - once a key is generated using IKE, the userspace part of IPsec will tell the kernel which encryption key to use. Then the kernel will use that key to encrypt packets!
### Security Policy & Security Associations
The kernel part of IPSec has two databases: the security policy database(SPD) and the security association database (SAD).
The security policy database has IP ranges and rules for what to do to packets for that IP range (“do IPsec to it”, “drop the packet”, “let it through”). I find this a little confusing because I'm used to rules about what to do to packets in various IP ranges being in the route table (`sudo ip route list`), but apparently you can have IPsec rules too and they're in a different place!
The security association database I think has the encryption keys to use for various IPs.
The way you inspect these databases is, extremely unintuitively, using a command called `ip xfrm`. What does xfrm mean? I don't know!
```
# security policy database
$ sudo ip xfrm policy
$ sudo ip x p
# security association database
$ sudo ip xfrm state
$ sudo ip x s
```
```
### Why is IPsec implemented in the Linux kernel and TLS isn't?
For both TLS and IPsec, you need to do a key exchange when opening the connection (using Diffie-Hellman or something). For some reason that might be obvious but that I don't understand yet (??) people don't want to do key exchange in the kernel.
The reason IPsec is easier to implement in the kernel is that with IPsec, you need to negotiate key exchanges much less frequently (once for every IP address you want to open a VPN connection with), and IPsec sessions are much longer lived. So it's easy for userspace to do a key exchange, get the key, and hand it off to the kernel which will then use that key for every IP packet.
With TLS, there are a couple of problems:
a. you're constantly doing new key exchanges every time you open a new TLS connection, and TLS connections are shorter-lived
b. there isn't a natural protocol boundary - with IPsec, you just encrypt every IP packet in a given IP range, but with TLS you need to look at your TCP stream, recognize whether the TCP packet is a data packet or not, and decide to encrypt it
There's actually a patch [implementing TLS in the Linux kernel][7] which lets userspace do key exchange and then pass the kernel the keys, so this obviously isn't impossible, but it's a much newer thing and I think it's more complicated with TLS than with IPsec.
### What software do you use to do IPsec?
The ones I know about are Libreswan and Strongswan. Today's tutorial focused on Libreswan.
Somewhat confusingly, even though Libreswan and Strongswan are different software packages, they both install a binary called `ipsec` for managing IPsec connections, and the two `ipsec` binaries are not the same program (even though they do have the same role).
Strongswan and Libreswan do what's described in the “how does IPsec work” section above - they do key exchange with IKE and tell the kernel about keys to configure it to do encryption.
### IPsec isn't only for VPNs!
At the beginning of this post I said “IPsec is a VPN protocol”, which is true, but you don't have to use IPsec to implement VPNs! There are actually two ways to use IPsec:
1. “transport mode”, where the IP header is unchanged and only the contents of the IP packet are encrypted. This mode is a little more like using TLS - you talk to the server you're communicating with directly (not through a VPN server or something), it's just that the contents of the IP packet get encrypted
2. “tunnel mode”, where the IP header and its contents are all encrypted and encapsulated into another UDP packet. This is the mode that's used for VPNs - you take your packet that you're sending to secret_site.com, encrypt it, send it to your VPN server, and the VPN server passes it on for you.
### opportunistic IPsec
An interesting application of “transport mode” IPsec I learned about today (where you open an IPsec connection directly with the host you're communicating with instead of some other intermediary server) is this thing called “opportunistic IPsec”. There's an opportunistic IPsec server here: [http://oe.libreswan.org/][8].
I think the idea is that if you set up Libreswan and unbound on your computer, then when you connect to [http://oe.libreswan.org][9], what happens is:
1. `unbound` makes a DNS query for the IPSECKEY record of oe.libreswan.org (`dig ipseckey oe.libreswan.org`) to get a public key to use for that domain. (this requires DNSSEC to be secure which when I learn about it will be a whole other blog post, but you can just run that DNS query with dig and it will work if you want to see the results)
2. `unbound` gives the public key to libreswan, which uses it to do a key exchange with the IKE server running on oe.libreswan.org
3. `libreswan` finishes the key exchange, gives the encryption key to the kernel, and tells the kernel to use that encryption key when talking to `oe.libreswan.org`
4. Your connection is now encrypted! Even though it's an HTTP connection! so interesting!
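If you want to poke at this yourself, here is a small sketch based on the steps above (assuming libreswan and unbound are configured as described; the output will of course vary):
```
# look up the public key record mentioned in step 1
dig IPSECKEY oe.libreswan.org
# talk to the opportunistic IPsec server
curl -s http://oe.libreswan.org/ > /dev/null
# then check whether the kernel now has a security association for it
sudo ip xfrm state
```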
### IPsec and TLS learn from each other
One interesting tidbit from the tutorial today was that the IPsec and TLS protocols have actually learned from each other over time - like they said IPsec's IKE protocol had perfect forward secrecy before TLS, and IPsec has also learned some things from TLS. It's neat to hear about how different internet protocols are learning & changing over time!
### IPsec is interesting!
I've spent quite a lot of time learning about TLS, which is obviously a super important networking protocol (let's encrypt the internet! :D). But IPsec is an important internet encryption protocol too, and it has a different role from TLS! Apparently some mobile phone protocols (like 5G/LTE) use IPsec to encrypt their network traffic!
I'm happy I know a little more about it now! As usual several things in this post are probably wrong, but hopefully not too wrong :)
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/07/11/netdev-day-1--ipsec/
作者:[Julia Evans][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about
[1]:https://tools.ietf.org/html/rfc6071
[2]:https://jvns.ca/categories/netdev
[3]:https://www.netdevconf.org/0x12/
[4]:https://jvns.ca/categories/netdev/
[5]:https://nohats.ca/
[6]:https://en.wikipedia.org/wiki/Internet_Key_Exchange
[7]:https://blog.filippo.io/playing-with-kernel-tls-in-linux-4-13-and-go/
[8]:http://oe.libreswan.org/
[9]:http://oe.libreswan.org/

View File

@ -1,272 +0,0 @@
Translating by qhwdw
A sysadmin's guide to SELinux: 42 answers to the big questions
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum)
> "It is an important and popular fact that things are not always what they seem…"
> ―Douglas Adams, The Hitchhiker's Guide to the Galaxy
Security. Hardening. Compliance. Policy. The Four Horsemen of the SysAdmin Apocalypse. In addition to our daily tasks—monitoring, backup, implementation, tuning, updating, and so forth—we are also in charge of securing our systems. Even those systems where the third-party provider tells us to disable the enhanced security. It seems like a job for Mission Impossible's [Ethan Hunt][1].
Faced with this dilemma, some sysadmins decide to [take the blue pill][2] because they think they will never know the answer to the big question of life, the universe, and everything else. And, as we all know, that answer is **[42][3]**.
In the spirit of The Hitchhiker's Guide to the Galaxy, here are the 42 answers to the big questions about managing and using [SELinux][4] with your systems.
1. SELinux is a LABELING system, which means every process has a LABEL. Every file, directory, and system object has a LABEL. Policy rules control access between labeled processes and labeled objects. The kernel enforces these rules.
2. The two most important concepts are: Labeling (files, process, ports, etc.) and Type enforcement (which isolates processes from each other based on types).
3. The correct Label format is `user:role:type:level` (optional).
4. The purpose of Multi-Level Security (MLS) enforcement is to control processes (domains) based on the security level of the data they will be using. For example, a secret process cannot read top-secret data.
5. Multi-Category Security (MCS) enforcement protects similar processes from each other (like virtual machines, OpenShift gears, SELinux sandboxes, containers, etc.).
6. Kernel parameters for changing SELinux modes at boot:
* `autorelabel=1` → forces the system to relabel
* `selinux=0` → kernel doesn't load any part of the SELinux infrastructure
* `enforcing=0` → boot in permissive mode
7. If you need to relabel the entire system:
`# touch /.autorelabel`
`# reboot`
If the system labeling contains a large amount of errors, you might need to boot in permissive mode in order for the autorelabel to succeed.
8. To check if SELinux is enabled: `# getenforce`
9. To temporarily enable/disable SELinux: `# setenforce [1|0]`
10. SELinux status tool: `# sestatus`
11. Configuration file: `/etc/selinux/config`
12. How does SELinux work? Here's an example of labeling for an Apache Web Server:
* Binary: `/usr/sbin/httpd`→`httpd_exec_t`
* Configuration directory: `/etc/httpd`→`httpd_config_t`
* Logfile directory: `/var/log/httpd` → `httpd_log_t`
* Content directory: `/var/www/html` → `httpd_sys_content_t`
* Startup script: `/usr/lib/systemd/system/httpd.service` → `httpd_unit_file_d`
* Process: `/usr/sbin/httpd -DFOREGROUND` → `httpd_t`
* Ports: `80/tcp, 443/tcp` → `httpd_t, http_port_t`
A process running in the `httpd_t` context can interact with an object with the `httpd_something_t` label.
13. Many commands accept the argument `-Z` to view, create, and modify context:
* `ls -Z`
* `id -Z`
* `ps -Z`
* `netstat -Z`
* `cp -Z`
* `mkdir -Z`
Contexts are set when files are created based on their parent directory's context (with a few exceptions). RPMs can set contexts as part of installation.
14. There are four key causes of SELinux errors, which are further explained in items 15-21 below:
* Labeling problems
* Something SELinux needs to know
* A bug in an SELinux policy/app
* Your information may be compromised
15. Labeling problem: If your files in `/srv/myweb` are not labeled correctly, access might be denied. Here are some ways to fix this:
* If you know the label:
`# semanage fcontext -a -t httpd_sys_content_t '/srv/myweb(/.*)?'`
* If you know the file with the equivalent labeling:
`# semanage fcontext -a -e /srv/myweb /var/www`
* Restore the context (for both cases):
`# restorecon -vR /srv/myweb`
16. Labeling problem: If you move a file instead of copying it, the file keeps its original context. To fix these issues:
* Change the context command with the label:
`# chcon -t httpd_system_content_t /var/www/html/index.html`
* Change the context command with the reference label:
`# chcon --reference /var/www/html/ /var/www/html/index.html`
* Restore the context (for both cases): `# restorecon -vR /var/www/html/`
17. If SELinux needs to know HTTPD listens on port 8585, tell SELinux:
`# semanage port -a -t http_port_t -p tcp 8585`
18. SELinux needs to know booleans allow parts of SELinux policy to be changed at runtime without any knowledge of SELinux policy writing. For example, if you want httpd to send email, enter: `# setsebool -P httpd_can_sendmail 1`
19. SELinux needs to know booleans are just off/on settings for SELinux:
* To see all booleans: `# getsebool -a`
* To see the description of each one: `# semanage boolean -l`
* To set a boolean execute: `# setsebool [_boolean_] [1|0]`
* To configure it permanently, add `-P`. For example:
`# setsebool httpd_enable_ftp_server 1 -P`
20. SELinux policies/apps can have bugs, including:
* Unusual code paths
* Configurations
* Redirection of `stdout`
* Leaked file descriptors
* Executable memory
* Badly built libraries
Open a ticket (do not file a Bugzilla report; there are no SLAs with Bugzilla).
21. Your information may be compromised if you have confined domains trying to:
* Load kernel modules
* Turn off the enforcing mode of SELinux
* Write to `etc_t/shadow_t`
* Modify iptables rules
22. SELinux tools for the development of policy modules:
`# yum -y install setroubleshoot setroubleshoot-server`
Reboot or restart `auditd` after you install.
23. Use `journalctl` for listing all logs related to `setroubleshoot`:
`# journalctl -t setroubleshoot --since=14:20`
24. Use `journalctl` for listing all logs related to a particular SELinux label. For example:
`# journalctl _SELINUX_CONTEXT=system_u:system_r:policykit_t:s0`
25. Use `setroubleshoot` log when an SELinux error occurs and suggest some possible solutions. For example, from `journalctl`:
```
Jun 14 19:41:07 web1 setroubleshoot: SELinux is preventing httpd from getattr access on the file /var/www/html/index.html. For complete message run: sealert -l 12fd8b04-0119-4077-a710-2d0e0ee5755e
# sealert -l 12fd8b04-0119-4077-a710-2d0e0ee5755e
SELinux is preventing httpd from getattr access on the file /var/www/html/index.html.
***** Plugin restorecon (99.5 confidence) suggests ************************
If you want to fix the label,
/var/www/html/index.html default label should be httpd_syscontent_t.
Then you can restorecon.
Do
# /sbin/restorecon -v /var/www/html/index.html
```
26. Logging: SELinux records information all over the place:
* `/var/log/messages`
* `/var/log/audit/audit.log`
* `/var/lib/setroubleshoot/setroubleshoot_database.xml`
27. Logging: Looking for SELinux errors in the audit log:
`# ausearch -m AVC,USER_AVC,SELINUX_ERR -ts today`
28. To search for SELinux Access Vector Cache (AVC) messages for a particular service:
`# ausearch -m avc -c httpd`
29. The `audit2allow` utility gathers information from logs of denied operations and then generates SELinux policy-allow rules. For example:
* To produce a human-readable description of why the access was denied: `# audit2allow -w -a`
* To view the type enforcement rule that allows the denied access: `# audit2allow -a`
* To create a custom module: `# audit2allow -a -M mypolicy`
The `-M` option creates a type enforcement file (.te) with the name specified and compiles the rule into a policy package (.pp): `mypolicy.pp mypolicy.te`
* To install the custom module: `# semodule -i mypolicy.pp`
30. To configure a single process (domain) to run permissive: `# semanage permissive -a httpd_t`
31. If you no longer want a domain to be permissive: `# semanage permissive -d httpd_t`
32. To disable all permissive domains: `# semodule -d permissivedomains`
33. Enabling SELinux MLS policy: `# yum install selinux-policy-mls`
In `/etc/selinux/config:`
`SELINUX=permissive`
`SELINUXTYPE=mls`
Make sure SELinux is running in permissive mode: `# setenforce 0`
Use the `fixfiles` script to ensure that files are relabeled upon the next reboot:
`# fixfiles -F onboot # reboot`
34. Create a user with a specific MLS range: `# useradd -Z staff_u john`
Using the `useradd` command, map the new user to an existing SELinux user (in this case, `staff_u`).
35. To view the mapping between SELinux and Linux users: `# semanage login -l`
36. Define a specific range for a user: `# semanage login --modify --range s2:c100 john`
37. To correct the label on the user's home directory (if needed): `# chcon -R -l s2:c100 /home/john`
38. To list the current categories: `# chcat -L`
39. To modify the categories or to start creating your own, modify the file as follows:
`/etc/selinux/_<selinuxtype>_/setrans.conf`
40. To run a command or script in a specific file, role, and user context:
`# runcon -t initrc_t -r system_r -u user_u yourcommandhere`
* `-t` is the file context
* `-r` is the role context
* `-u` is the user context
41. Containers running with SELinux disabled:
* With Podman: `# podman run --security-opt label=disable`
* With Docker: `# docker run --security-opt label=disable`
42. If you need to give a container full access to the system:
* With Podman: `# podman run --privileged`
* With Docker: `# docker run --privileged`
And with this, you already know the answer. So please: **Don't panic, and turn on SELinux**.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/sysadmin-guide-selinux
作者:[Alex Callejas][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/darkaxl
[1]:https://en.wikipedia.org/wiki/Ethan_Hunt
[2]:https://en.wikipedia.org/wiki/Red_pill_and_blue_pill
[3]:https://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy#Answer_to_the_Ultimate_Question_of_Life,_the_Universe,_and_Everything_%2842%29
[4]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux

View File

@@ -0,0 +1,283 @@
FSSlc is translating
A sysadmin's guide to SELinux: 42 answers to the big questions
============================================================
> Get answers to the big questions about life, the universe, and everything else about Security-Enhanced Linux.
![Lock](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum "Lock")
Image credits : [JanBaby][13], via Pixabay [CC0][14].
> "It is an important and popular fact that things are not always what they seem…"
> ―Douglas Adams,  _The Hitchhiker's Guide to the Galaxy_
Security. Hardening. Compliance. Policy. The Four Horsemen of the SysAdmin Apocalypse. In addition to our daily tasks—monitoring, backup, implementation, tuning, updating, and so forth—we are also in charge of securing our systems. Even those systems where the third-party provider tells us to disable the enhanced security. It seems like a job for  _Mission Impossible_ 's [Ethan Hunt][15].
Faced with this dilemma, some sysadmins decide to [take the blue pill][16] because they think they will never know the answer to the big question of life, the universe, and everything else. And, as we all know, that answer is **[42][2]**.
In the spirit of  _The Hitchhiker's Guide to the Galaxy_ , here are the 42 answers to the big questions about managing and using [SELinux][17] with your systems.
1. SELinux is a LABELING system, which means every process has a LABEL. Every file, directory, and system object has a LABEL. Policy rules control access between labeled processes and labeled objects. The kernel enforces these rules.
1. The two most important concepts are:  _Labeling_  (files, process, ports, etc.) and  _Type enforcement_  (which isolates processes from each other based on types).
1. The correct Label format is `user:role:type:level` ( _optional_ ).
1. The purpose of  _Multi-Level Security (MLS) enforcement_  is to control processes ( _domains_ ) based on the security level of the data they will be using. For example, a secret process cannot read top-secret data.
1. _Multi-Category Security (MCS) enforcement_  protects similar processes from each other (like virtual machines, OpenShift gears, SELinux sandboxes, containers, etc.).
1. Kernel parameters for changing SELinux modes at boot:
* `autorelabel=1` → forces the system to relabel
* `selinux=0` → kernel doesn't load any part of the SELinux infrastructure
* `enforcing=0` → boot in permissive mode
1. If you need to relabel the entire system:
`# touch /.autorelabel
#reboot`
If the system labeling contains a large amount of errors, you might need to boot in permissive mode in order for the autorelabel to succeed.
1. To check if SELinux is enabled: `# getenforce`
1. To temporarily enable/disable SELinux: `# setenforce [1|0]`
1. SELinux status tool: `# sestatus`
1. Configuration file: `/etc/selinux/config`
1. How does SELinux work? Here's an example of labeling for an Apache Web Server:
* Binary: `/usr/sbin/httpd`→`httpd_exec_t`
* Configuration directory: `/etc/httpd`→`httpd_config_t`
* Logfile directory: `/var/log/httpd` → `httpd_log_t`
* Content directory: `/var/www/html` → `httpd_sys_content_t`
* Startup script: `/usr/lib/systemd/system/httpd.service` → `httpd_unit_file_d`
* Process: `/usr/sbin/httpd -DFOREGROUND` → `httpd_t`
* Ports: `80/tcp, 443/tcp` → `httpd_t, http_port_t`
A process running in the `httpd_t` context can interact with an object with the `httpd_something_t` label.
1. Many commands accept the argument `-Z` to view, create, and modify context:
* `ls -Z`
* `id -Z`
* `ps -Z`
* `netstat -Z`
* `cp -Z`
* `mkdir -Z`
Contexts are set when files are created based on their parent directory's context (with a few exceptions). RPMs can set contexts as part of installation.
1. There are four key causes of SELinux errors, which are further explained in items 15-21 below:
* Labeling problems
* Something SELinux needs to know
* A bug in an SELinux policy/app
* Your information may be compromised
1. _Labeling problem:_  If your files in `/srv/myweb` are not labeled correctly, access might be denied. Here are some ways to fix this:
* If you know the label:
`# semanage fcontext -a -t httpd_sys_content_t '/srv/myweb(/.*)?'`
* If you know the file with the equivalent labeling:
`# semanage fcontext -a -e /srv/myweb /var/www`
* Restore the context (for both cases):
`# restorecon -vR /srv/myweb`
1. _Labeling problem:_  If you move a file instead of copying it, the file keeps its original context. To fix these issues:
* Change the context command with the label:
`# chcon -t httpd_system_content_t /var/www/html/index.html`
* Change the context command with the reference label:
`# chcon --reference /var/www/html/ /var/www/html/index.html`
* Restore the context (for both cases): `# restorecon -vR /var/www/html/`
1. If  _SELinux needs to know_  HTTPD listens on port 8585, tell SELinux:
`# semanage port -a -t http_port_t -p tcp 8585`
1. _SELinux needs to know_  booleans allow parts of SELinux policy to be changed at runtime without any knowledge of SELinux policy writing. For example, if you want httpd to send email, enter: `# setsebool -P httpd_can_sendmail 1`
1. _SELinux needs to know_  booleans are just off/on settings for SELinux:
* To see all booleans: `# getsebool -a`
* To see the description of each one: `# semanage boolean -l`
* To set a boolean execute: `# setsebool [_boolean_] [1|0]`
* To configure it permanently, add `-P`. For example:
`# setsebool httpd_enable_ftp_server 1 -P`
1. SELinux policies/apps can have bugs, including:
* Unusual code paths
* Configurations
* Redirection of `stdout`
* Leaked file descriptors
* Executable memory
* Badly built libraries
Open a ticket (do not file a Bugzilla report; there are no SLAs with Bugzilla).
1. _Your information may be compromised_  if you have confined domains trying to:
* Load kernel modules
* Turn off the enforcing mode of SELinux
* Write to `etc_t/shadow_t`
* Modify iptables rules
1. SELinux tools for the development of policy modules:
`# yum -y install setroubleshoot setroubleshoot-server`
Reboot or restart `auditd` after you install.
1. Use `journalctl` for listing all logs related to `setroubleshoot`:
`# journalctl -t setroubleshoot --since=14:20`
1. Use `journalctl` for listing all logs related to a particular SELinux label. For example:
`# journalctl _SELINUX_CONTEXT=system_u:system_r:policykit_t:s0`
1. Use `setroubleshoot` log when an SELinux error occurs and suggest some possible solutions. For example, from `journalctl`:
```
Jun 14 19:41:07 web1 setroubleshoot: SELinux is preventing httpd from getattr access on the file /var/www/html/index.html. For complete message run: sealert -l 12fd8b04-0119-4077-a710-2d0e0ee5755e
# sealert -l 12fd8b04-0119-4077-a710-2d0e0ee5755e
SELinux is preventing httpd from getattr access on the file /var/www/html/index.html.
***** Plugin restorecon (99.5 confidence) suggests ************************
If you want to fix the label,
/var/www/html/index.html default label should be httpd_syscontent_t.
Then you can restorecon.
Do
# /sbin/restorecon -v /var/www/html/index.html
```
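As a complement to the `sealert -l` command shown above, the `sealert` tool from the same setroubleshoot package can also scan an entire audit log in one pass and print its analysis for every denial it finds. A quick sketch (the log path below is the usual default, but it may differ on your system):
```
# analyze the whole audit log and print setroubleshoot's suggestion for each denial
sealert -a /var/log/audit/audit.log
```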
1. Logging: SELinux records information all over the place:
* `/var/log/messages`
* `/var/log/audit/audit.log`
* `/var/lib/setroubleshoot/setroubleshoot_database.xml`
1. Logging: Looking for SELinux errors in the audit log:
`# ausearch -m AVC,USER_AVC,SELINUX_ERR -ts today`
1. To search for SELinux Access Vector Cache (AVC) messages for a particular service:
`# ausearch -m avc -c httpd`
1. The `audit2allow` utility gathers information from logs of denied operations and then generates SELinux policy-allow rules. For example:
* To produce a human-readable description of why the access was denied: `# audit2allow -w -a`
* To view the type enforcement rule that allows the denied access: `# audit2allow -a`
* To create a custom module: `# audit2allow -a -M mypolicy`
The `-M` option creates a type enforcement file (.te) with the name specified and compiles the rule into a policy package (.pp): `mypolicy.pp mypolicy.te`
* To install the custom module: `# semodule -i mypolicy.pp`
1. To configure a single process (domain) to run permissive: `# semanage permissive -a httpd_t`
1. If you no longer want a domain to be permissive: `# semanage permissive -d httpd_t`
1. To disable all permissive domains: `# semodule -d permissivedomains`
1. Enabling SELinux MLS policy: `# yum install selinux-policy-mls`
In `/etc/selinux/config:`
`SELINUX=permissive`
`SELINUXTYPE=mls`
Make sure SELinux is running in permissive mode: `# setenforce 0`
Use the `fixfiles` script to ensure that files are relabeled upon the next reboot:
`# fixfiles -F onboot # reboot`
1. Create a user with a specific MLS range: `# useradd -Z staff_u john`
Using the `useradd` command, map the new user to an existing SELinux user (in this case, `staff_u`).
1. To view the mapping between SELinux and Linux users: `# semanage login -l`
1. Define a specific range for a user: `# semanage login --modify --range s2:c100 john`
1. To correct the label on the user's home directory (if needed): `# chcon -R -l s2:c100 /home/john`
1. To list the current categories: `# chcat -L`
1. To modify the categories or to start creating your own, modify the file as follows:
`/etc/selinux/_<selinuxtype>_/setrans.conf`
1. To run a command or script in a specific file, role, and user context:
`# runcon -t initrc_t -r system_r -u user_u yourcommandhere`
* `-t` is the  _file context_
* `-r` is the  _role context_
* `-u` is the  _user context_
1. Containers running with SELinux disabled:
* With Podman: `# podman run --security-opt label=disable` …
* With Docker: `# docker run --security-opt label=disable` …
1. If you need to give a container full access to the system:
* With Podman: `# podman run --privileged` …
* With Docker: `# docker run --privileged` …
And with this, you already know the answer. So please: **Don't panic, and turn on SELinux**.
### About the author
Alex Callejas - Alex Callejas is a Technical Account Manager of Red Hat in the LATAM region, based in Mexico City. With more than 10 years of experience as SysAdmin, he has strong expertise on Infrastructure Hardening. Enthusiast of the Open Source, supports the community sharing his knowledge in different events of public access and universities. Geek by nature, Linux by choice, Fedora of course.[More about me][11]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/sysadmin-guide-selinux
作者:[Alex Callejas][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/darkaxl
[1]:https://opensource.com/article/18/7/sysadmin-guide-selinux?rate=hR1QSlwcImXNksBPPrLOeP6ooSoOU7PZaR07aGFuYVo
[2]:https://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy#Answer_to_the_Ultimate_Question_of_Life,_the_Universe,_and_Everything_%2842%29
[3]:https://fedorapeople.org/~dwalsh/SELinux/SELinux
[4]:https://opensource.com/users/rhatdan
[5]:https://opensource.com/business/13/11/selinux-policy-guide
[6]:http://people.redhat.com/tcameron/Summit2018/selinux/SELinux_for_Mere_Mortals_Summit_2018.pdf
[7]:http://twitter.com/thomasdcameron
[8]:http://blog.linuxgrrl.com/2014/04/16/the-selinux-coloring-book/
[9]:https://opensource.com/users/mairin
[10]:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/index
[11]:https://opensource.com/users/darkaxl
[12]:https://opensource.com/user/219886/feed
[13]:https://pixabay.com/en/security-secure-technology-safety-2168234/
[14]:https://creativecommons.org/publicdomain/zero/1.0/deed.en
[15]:https://en.wikipedia.org/wiki/Ethan_Hunt
[16]:https://en.wikipedia.org/wiki/Red_pill_and_blue_pill
[17]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux
[18]:https://opensource.com/users/darkaxl
[19]:https://opensource.com/users/darkaxl
[20]:https://opensource.com/article/18/7/sysadmin-guide-selinux#comments
[21]:https://opensource.com/tags/security
[22]:https://opensource.com/tags/linux
[23]:https://opensource.com/tags/sysadmin

View File

@@ -0,0 +1,124 @@
FSSlc is translating
netdev day 2: moving away from "as fast as possible" in networking code
============================================================
Hello! Today was day 2 of netdev. I only made it to the morning of the conference, but the morning was VERY EXCITING. The highlight of this morning was a keynote by [Van Jacobson][1] about the future of congestion control on the internet (!!!) called “Evolving from As Fast As Possible: Teaching NICs about time”
I'm going to try to summarize what I learned from this talk. I almost certainly have some things wrong, but let's go!
This talk was about how the internet has changed since 1988, why we need new algorithms today, and how we can change Linux's networking stack to implement those algorithms more easily.
### whats congestion control?
Everyone on the internet is sending packets all at once, all the time. The links on the internet are of dramatically different speeds (some are WAY slower than others), and sometimes they get full! When a device on the internet receives packets at a rate faster than it can handle, it drops the packets.
The most naive way you could imagine sending packets is:
1. Send all the packets you have to send all at once
2. If you discover any of those packets got dropped, resend the packet right away
It turns out that if you implemented TCP that way, the internet would collapse and grind to a halt. We know that it would collapse because it did kinda collapse, in 1986. To fix this, folks invented congestion control algorithms: the original paper describing how they avoided collapsing the internet is [Congestion Avoidance and Control][2], by Van Jacobson, from 1988. (30 years ago!)
### How has the internet changed since 1988?
The main thing he said has changed about the internet is it used to be that switches would always have faster network cards than servers on the internet. So the servers in the middle of the internet would be a lot faster than the clients, and it didn't matter as much how fast clients sent packets.
Today apparently that's not true! As we all know, computers today aren't really faster than computers 5 years ago (we ran into some problems with the speed of light). So what happens (I think) is that the big switches in routers are not really that much faster than the NICs on servers in datacenters.
This is bad because it means that clients are much more easily able to saturate the links in the middle, which results in the internet getting slower. (and there's [buffer bloat][3] which results in high latency)
So to improve performance on the internet and not saturate all the queues on every router, clients need to be a little better behaved and to send packets a bit more slowly.
### sending more packets more slowly results in better performance
Here's an idea that was really surprising to me: sending packets more slowly often actually results in better performance (even if you are the only one doing it). Here's why!
Suppose you're trying to send 10MB of data, and there's a link somewhere in the middle between you and the client you're trying to talk to that is SLOW, like 1MB/s or something. Assuming that you can tell the speed of this slow link (more on that later), you have 2 choices:
1. Send the entire 10MB of data at once and see what happens
2. Slow it down so you send it at 1MB/s
Now either way, you're probably going to end up with some packet loss. So it seems like you might as well just send all the data at once if you're going to end up with packet loss either way, right? No!! The key observation is that packet loss in the middle of your stream is much better than packet loss at the end of your stream. If a few packets in the middle are dropped, the client you're sending to will realize, tell you, and you can just resend them. No big deal! But if packets at the END are dropped, the client has no way of knowing you sent those packets at all! So you basically need to time out at some point when you don't get an ACK for those packets and resend them. And timeouts typically take a long time to happen!
So why is sending data more slowly better? Well, if you send data faster than the bottleneck for the link, what will happen is that all the packets will pile up in a queue somewhere, the queue will get full, and then the packets at the END of your stream will get dropped. And, like we just explained, the packets at the end of the stream are the worst packets to drop! So then you have all these timeouts, and sending your 10MB of data will take way longer than if you'd just sent your packets at the correct speed in the first place.
I thought this was really cool because it doesn't require cooperation from anybody else on the internet: even if everybody else is sending all their packets really fast, it's _still_ more advantageous for you to send your packets at the correct rate (the rate of the bottleneck in the middle).
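If you want to play with this yourself, Linux's `netem` qdisc can emulate a slow, lossy bottleneck on a test machine. Here's a rough sketch (the interface name `eth0` and all the numbers are just placeholders, and you need root):
```
# emulate a 1MB/s-ish bottleneck with 50ms of delay and a tiny queue on eth0
sudo tc qdisc add dev eth0 root netem rate 8mbit delay 50ms limit 20

# watch the qdisc statistics (especially the "dropped" counter) while you send traffic
tc -s qdisc show dev eth0

# remove the emulated bottleneck when you're done
sudo tc qdisc del dev eth0 root
```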
### how to tell the right speed to send data at: BBR!
Earlier I said “assuming that you can tell the speed of the slow link between your client and server…”. How do you do that? Well, some folks from Google (where Jacobson works) came up with an algorithm for measuring the speed of bottlenecks! It's called BBR. This post is already long enough, but for more about BBR, see [BBR: Congestion-based congestion control][4] and [the summary from the morning paper][5].
(as an aside, [https://blog.acolyer.org][6]'s daily “the morning paper” summaries are basically the only way I learn about / understand CS papers, it's possibly the greatest blog on the internet)
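As another aside, if you want to try BBR on a Linux box, kernels since 4.9 ship it as a module and you can switch the default congestion control with sysctl. Here's a rough sketch of the usual recipe (the exact steps might differ on your distro):
```
# see which congestion control algorithms this kernel offers
sysctl net.ipv4.tcp_available_congestion_control

# BBR is usually paired with the fq qdisc, which does the actual pacing
sudo sysctl -w net.core.default_qdisc=fq
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr

# confirm the change
sysctl net.ipv4.tcp_congestion_control
```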
### networking code is designed to run “as fast as possible”
So! Let's say we believe we want to send data a little more slowly, at the speed of the bottleneck in our connection. This is all very well, but networking software isn't really designed to send data at a controlled rate! This (as far as I understand it) is how most networking stuff is designed:
1. There's a queue of packets coming in
2. It reads off the queue and sends the packets out as fast as possible
3. That's it
This is pretty inflexible! Like suppose I have one really fast connection I'm sending packets on, and one really slow connection. If all I have is a queue to put packets on, I don't get that much control over when the packets I'm sending actually get sent out. I can't slow down the queue!
### a better way: give every packet an “earliest departure time”
His proposal was to modify the skb data structure in the Linux kernel (which is the data structure used to represent network packets) to have a TIMESTAMP on it representing the earliest time that packet should go out.
I don't know a lot about the Linux network stack, but the interesting thing to me about this proposal is that it doesn't sound like a huge change! It's just an extra timestamp.
### replace queues with timing wheels!!!
Once we have all these packets with times on them, how do we get them sent out at the right time? TIMING WHEELS!
At Papers We Love a while back ([some good links in the meetup description][7]) there was a talk about timing wheels. Timing wheels are the algorithm the Linux process scheduler uses to decide when to run processes.
He said that timing wheels actually perform better than queues for scheduling work: they both offer constant time operations, but the timing wheel's constant is smaller because of some stuff to do with cache performance. I didn't really follow the performance arguments.
One point he made about timing wheels is that you can easily implement a queue with a timing wheel (though not vice versa!): if every time you add a new packet, you say that you want it to be sent RIGHT NOW at the earliest, then you effectively end up with a queue. So this timing wheel approach is backwards compatible, but it makes it much easier to implement more complex traffic shaping algorithms where you send out different packets at different rates.
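As far as I understand it, the `fq` qdisc is the piece of Linux you can already play with today that paces packets instead of just draining a queue as fast as possible. Here's a rough sketch (the interface name and the rate are just placeholders):
```
# replace the root qdisc on eth0 with fq, which paces flows rather than just sending as fast as possible
sudo tc qdisc replace dev eth0 root fq

# optionally cap the pacing rate of every flow on this interface
sudo tc qdisc change dev eth0 root fq maxrate 1mbit

# look at the statistics, including how many packets got throttled (delayed) by pacing
tc -s qdisc show dev eth0
```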
### maybe we can fix the internet by improving Linux!
With any internet-scale problem, the tricky thing about making progress on it is that you need cooperation from SO MANY different parties to change how internet protocols are implemented. You have Linux machines, BSD machines, Windows machines, different kinds of phones, Juniper/Cisco routers, and lots of other devices!
But Linux is in kind of an interesting position in the networking landscape!
* Android phones run Linux
* Most consumer wifi routers run Linux
* Lots of servers run Linux
So in any given network connection, you're actually relatively likely to have a Linux machine at both ends (a Linux server, and either a Linux router or Android device).
So the point is that if you want to improve congestion on the internet in general, it would make a huge difference to just change the Linux networking stack (and maybe the iOS networking stack too). Which is why there was a keynote at this Linux networking conference about it!
### the internet is still changing! Cool!
I usually think of TCP/IP as something that we figured out in the 80s, so it was really fascinating to hear that folks think that there are still serious issues with how we're designing our networking protocols, and that there's work to do to design them differently.
And of course it makes sense: the landscape of networking hardware and the relative speeds of everything and the kinds of things people are using the internet for (netflix!) is changing all the time, so it's reasonable that at some point we need to start designing our algorithms differently for the internet of 2018 instead of the internet of 1998.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/07/12/netdev-day-2--moving-away-from--as-fast-as-possible/
作者:[Julia Evans][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about
[1]:https://en.wikipedia.org/wiki/Van_Jacobson
[2]:https://cs162.eecs.berkeley.edu/static/readings/jacobson-congestion.pdf
[3]:https://apenwarr.ca/log/?m=201101#10
[4]:https://queue.acm.org/detail.cfm?id=3022184
[5]:https://blog.acolyer.org/2017/03/31/bbr-congestion-based-congestion-control/
[6]:https://blog.acolyer.org/
[7]:https://www.meetup.com/Papers-We-Love-Montreal/events/235100825/

View File

@@ -0,0 +1,290 @@
献给 Debian 和 Ubuntu 用户的一组实用程序
======
![](https://www.ostechnix.com/debian-goodies-a-set-of-useful-utilities-for-debian-and-ubuntu-users/)
你使用的是基于 Debian 的系统吗?如果是,太好了!我今天在这里给你带来了一个好消息。先向 **“Debian-goodies”** 打个招呼,这是一组面向 Debian 及其衍生系统(比如 Ubuntu、Linux Mint的实用工具集。这些工具提供了一些额外的、在这类系统中默认并不具备的有用命令。借助这些工具用户可以找出哪些程序占用了更多磁盘空间、更新系统后需要重新启动哪些服务、在一个软件包中搜索与模式匹配的文件、根据搜索字符串列出已安装的软件包等等。在这个简短的指南中我们将讨论一些有用的 Debian 好东西。
### Debian-goodies 给 Debian 和 Ubuntu 用户的实用程序
debian-goodies 包可以在 Debian 和其衍生的 Ubuntu 以及其它 Ubuntu 变体(如 Linux Mint的官方仓库中找到。要安装 debian-goodies只需简单运行
```
$ sudo apt-get install debian-goodies
```
debian-goodies 安装完成后,让我们继续看一看一些有用的实用程序。
#### 1. Checkrestart
让我从我最喜欢的 **“checkrestart”** 实用程序开始。安装某些安全更新后,一些正在运行的应用程序可能仍然在使用旧的库。要彻底应用安全更新,你需要找出并重新启动所有仍在使用旧库的进程。这正是 checkrestart 派上用场的地方:该实用程序会找出哪些进程仍在使用旧版本的库,然后你就可以重新启动对应的服务。
在进行库更新后,要检查哪些守护进程应该被重新启动,运行:
```
$ sudo checkrestart
[sudo] password for sk:
Found 0 processes using old versions of upgraded files
```
由于我最近没有执行任何安全更新,因此没有显示任何内容。
请注意Checkrestart 实用程序确实运行良好。但是,有一个名为 “needrestart” 的类似工具可用于最新的 Debian 系统。Needrestart 的灵感来自 checkrestart 实用程序,它完成了同样的工作。 Needrestart 得到了积极维护并支持容器LXC, Docker等新技术。
以下是 Needrestart 的特点:
* 支持但不要求systemd
* 二进制文件黑名单(比如显示管理器)
* 尝试检测尚未生效的内核升级
* 尝试检测基于解释器的守护进程所需的重启(支持 Perl、Python、Ruby
* 使用钩子完全集成到 apt/dpkg 中
它在默认仓库中也可以使用。所以,你可以使用如下命令安装它:
```
$ sudo apt-get install needrestart
```
现在,你可以使用以下命令检查更新系统后需要重新启动的守护程序列表:
```
$ sudo needrestart
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
Failed to check for processor microcode upgrades.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
```
好消息是 Needrestart 同样也适用于其它 Linux 发行版。例如,你可以从 Arch Linux 及其衍生版的 AUR 或者其它任何 AUR 帮助程序来安装,就像下面这样:
```
$ yaourt -S needrestart
```
在 Fedora 上:
```
$ sudo dnf install needrestart
```
#### 2. Check-enhancements
Check-enhancements 实用程序用于查找那些可以增强已安装软件包功能的软件包。它会列出那些能够增强其它软件包、但又不是运行后者所必需的软件包。你可以使用 `-ip``--installed-packages` 选项来查找增强所有已安装软件包的软件包,也可以只针对某个单独的软件包进行查找。
例如,我将列出增强 gimp 包功能的包:
```
$ check-enhancements gimp
gimp => gimp-data: Installed: (none) Candidate: 2.8.22-1
gimp => gimp-gmic: Installed: (none) Candidate: 1.7.9+zart-4build3
gimp => gimp-gutenprint: Installed: (none) Candidate: 5.2.13-2
gimp => gimp-help-ca: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-de: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-el: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-en: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-es: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-fr: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-it: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-ja: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-ko: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-nl: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-nn: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-pt: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-ru: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-sl: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-help-sv: Installed: (none) Candidate: 2.8.2-0.1
gimp => gimp-plugin-registry: Installed: (none) Candidate: 7.20140602ubuntu3
gimp => xcftools: Installed: (none) Candidate: 1.0.7-6
```
要列出增强所有已安装软件包的软件包,请运行:
```
$ check-enhancements -ip
autoconf => autoconf-archive: Installed: (none) Candidate: 20170928-2
btrfs-progs => snapper: Installed: (none) Candidate: 0.5.4-3
ca-certificates => ca-cacert: Installed: (none) Candidate: 2011.0523-2
cryptsetup => mandos-client: Installed: (none) Candidate: 1.7.19-1
dpkg => debsig-verify: Installed: (none) Candidate: 0.18
[...]
```
#### 3. dgrep
顾名思义dgrep 会根据给定的正则表达式,搜索指定软件包中的所有文件。例如,我将在 vim 软件包中搜索包含正则表达式 “text” 的文件。
```
$ sudo dgrep "text" vim
Binary file /usr/bin/vim.tiny matches
/usr/share/doc/vim-tiny/copyright: that they must include this license text. You can also distribute
/usr/share/doc/vim-tiny/copyright: include this license text. You are also allowed to include executables
/usr/share/doc/vim-tiny/copyright: 1) This license text must be included unmodified.
/usr/share/doc/vim-tiny/copyright: text under a) applies to those changes.
/usr/share/doc/vim-tiny/copyright: context diff. You can choose what license to use for new code you
/usr/share/doc/vim-tiny/copyright: context diff will do. The e-mail address to be used is
/usr/share/doc/vim-tiny/copyright: On Debian systems, the complete text of the GPL version 2 license can be
[...]
```
dgrep 支持大多数 grep 的选项。参阅以下指南以了解 grep 命令。
* [献给初学者的 Grep 命令教程][2]
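下面是一个简单的示意,演示如何把常见的 grep 选项(这里用 `-i` 表示忽略大小写)直接传给 dgrep具体支持哪些选项取决于你安装的 debian-goodies 版本,示例中的软件包也只是随意挑选的:
```
# 在 nano 软件包安装的所有文件中,不区分大小写地搜索 "license"
$ sudo dgrep -i "license" nano
```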
#### 4. dglob
dglob 实用程序生成与给定模式匹配的包名称列表。例如,找到与字符串 “vim” 匹配的包列表。
```
$ sudo dglob vim
vim-tiny:amd64
vim:amd64
vim-common:all
vim-runtime:all
```
默认情况下dglob 将仅显示已安装的软件包。如果要列出所有包(包括已安装的和未安装的),使用 **-a** 标志。
```
$ sudo dglob vim -a
```
#### 5. debget
**debget** 实用程序会从 APT 数据库(即已配置的软件源)中下载一个软件包的 .deb 文件。请注意,它只会下载给定的软件包,而不包括依赖项。
```
$ debget nano
Get:1 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 nano amd64 2.9.3-2 [231 kB]
Fetched 231 kB in 2s (113 kB/s)
```
#### 6. dpigs
这是此次集合中另一个有用的实用程序。**dpigs** 实用程序将查找并显示那些占用磁盘空间最多的已安装包。
```
$ dpigs
260644 linux-firmware
167195 linux-modules-extra-4.15.0-20-generic
75186 linux-headers-4.15.0-20
64217 linux-modules-4.15.0-20-generic
55620 snapd
31376 git
31070 libicu60
28420 vim-runtime
25971 gcc-7
24349 g++-7
```
如你所见linux-firmware 包占用的磁盘空间最多。默认情况下,它将显示占用磁盘空间最多的 **前 10 个** 软件包。如果要显示更多的包,例如 20 个,运行以下命令:
```
$ dpigs -n 20
```
#### 7. debman
**debman** 实用程序可以让你在不解包、也不安装 **.deb** 软件包的情况下,轻松查看其中的手册页。以下命令显示 nano 软件包的手册页。
```
$ debman -f nano_2.9.3-2_amd64.deb nano
```
如果你没有 .deb 软件包的本地副本,使用 **-p** 标志下载并查看包的手册页。
```
$ debman -p nano nano
```
**建议阅读:**
[每个 Linux 用户都应该知道的 3 个 man 的替代品][3]
#### 8. debmany
安装的 Debian 软件包中不仅包含手册页,还包括致谢、版权信息和自述readme等其它文件。**debmany** 实用程序可以让你查看和阅读这些文件。
```
$ debmany vim
```
![][1]
使用方向键选择要查看的文件,然后按 ENTER 键查看所选文件。按 **q** 返回主菜单。
如果未安装指定的软件包debmany 将从 APT 数据库下载并显示手册页。应安装 **dialog** 包来阅读手册页。
#### 9. popbugs
如果你是开发人员,**popbugs** 实用程序将非常有用。它会基于你所使用的软件包利用 popularity-contest 的数据显示一份定制的发行版关键release-criticalbug 列表。对于不了解它的人来说popularity-contest 包会设置一个 cron定时任务定期匿名地向 Debian 开发人员提交有关该系统上最常用的 Debian 软件包的统计信息。这些信息有助于 Debian 做出决定,例如哪些软件包应该放在第一张 CD 上。它还可以让 Debian 改进未来的发行版本,以便为新用户自动安装最流行的软件包。
要生成严重 bug 列表并在默认 Web 浏览器中显示结果,运行:
```
$ popbugs
```
此外,你可以将结果保存在文件中,如下所示。
```
$ popbugs --output=bugs.txt
```
#### 10. which-pkg-broke
此命令将显示给定包的所有依赖项以及安装每个依赖项的时间。通过使用此信息,你可以在升级系统或软件包之后轻松找到哪个包可能会在什么时间损坏另一个包。
```
$ which-pkg-broke vim
Package <debconf-2.0> has no install time info
debconf Wed Apr 25 08:08:40 2018
gcc-8-base:amd64 Wed Apr 25 08:08:41 2018
libacl1:amd64 Wed Apr 25 08:08:41 2018
libattr1:amd64 Wed Apr 25 08:08:41 2018
dpkg Wed Apr 25 08:08:41 2018
libbz2-1.0:amd64 Wed Apr 25 08:08:41 2018
libc6:amd64 Wed Apr 25 08:08:42 2018
libgcc1:amd64 Wed Apr 25 08:08:42 2018
liblzma5:amd64 Wed Apr 25 08:08:42 2018
libdb5.3:amd64 Wed Apr 25 08:08:42 2018
[...]
```
#### 11. dhomepage
dhomepage 实用程序将在默认 Web 浏览器中显示给定包的官方网站。例如,以下命令将打开 Vim 编辑器的主页。
```
$ dhomepage vim
```
这就是全部了。Debian-goodies 是你武器库中必备的工具。即使我们不经常使用所有这些实用程序,但它们值得学习,我相信它们有时会非常有用。
我希望这很有用。更多好东西要来了。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/debian-goodies-a-set-of-useful-utilities-for-debian-and-ubuntu-users/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:http://www.ostechnix.com/wp-content/uploads/2018/05/debmany.png
[2]:https://www.ostechnix.com/the-grep-command-tutorial-with-examples-for-beginners/
[3]:https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/

View File

@@ -0,0 +1,273 @@
系统管理员的 SELinux 指南:这个大问题的 42 个答案
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum)
> "一个重要而普遍的事实是,事情并不总是你看上去的那样 …"
> ―Douglas Adams银河系漫游指南
安全、加固、合规、策略系统管理员启示录的四骑士。除了日常任务之外监控、备份、实施、调优、更新等等我们还要负责系统的安全甚至包括那些第三方供应商要求我们禁用安全增强功能的系统。这看起来就像《碟中谍》里 [Ethan Hunt][1] 的工作一样。
面对这种窘境,一些系统管理员决定去[服用蓝色小药丸][2],因为他们认为自己永远也不会知道诸如生命、宇宙以及一切这类大问题的答案。而我们都知道,那个答案就是 **[42][3]**。
本着《银河系漫游指南》的精神,这里是关于在你的系统上管理和使用 [SELinux][4] 这些大问题的 42 个答案。
1. SELinux 是一个标签系统,这意味着每个进程都有一个标签。每个文件、目录、以及系统对象都有一个标签。策略规则负责控制标签化进程和标签化对象之间的访问。由内核强制执行这些规则。
2. 两个最重要的概念是标签化Labeling文件、进程、端口等等和类型强制Type Enforcement基于类型将进程彼此隔离
3. 正确的标签格式是 `user:role:type:level`(其中 level 部分是可选的)。
4. 多级别安全MLS强制机制的目的是基于进程即域所要使用的数据的安全级别来控制进程。比如一个“秘密”级别的进程不能读取“绝密”级别的数据。
5. 多类别安全MCS强制机制则用于把相似的进程彼此隔离开来加以保护比如虚拟机、OpenShift gear、SELinux 沙盒、容器等等)。
6. 在引导时内核参数可以改变 SELinux 模式:
* `autorelabel=1` → 强制系统重新打标签
* `selinux=0` → 内核不加载 SELinux 基础设施的任何部分
* `enforcing=0` → 引导为 permissive 模式
7. 如果需要给整个系统重新打标签:
`# touch /.autorelabel #reboot`
如果系统的标签中存在大量错误,你可能需要以 permissive 模式引导系统,这样 autorelabel 才能成功完成。
8. 检查 SELinux 是否启用:`# getenforce`
9. 临时启用/禁用 SELinux`# setenforce [1|0]`
10. SELinux 状态工具:`# sestatus`
11. 配置文件:`/etc/selinux/config`
12. SELinux 是如何工作的?这是一个为 Apache Web Server 标签化的示例:
* 二进制文件:`/usr/sbin/httpd`→`httpd_exec_t`
* 配置文件目录:`/etc/httpd`→`httpd_config_t`
* 日志文件目录:`/var/log/httpd` → `httpd_log_t`
* 内容目录:`/var/www/html` → `httpd_sys_content_t`
* 启动脚本:`/usr/lib/systemd/system/httpd.service` → `httpd_unit_file_d`
* 进程:`/usr/sbin/httpd -DFOREGROUND` → `httpd_t`
* 端口:`80/tcp, 443/tcp` → `httpd_t, http_port_t`
`httpd_t` 环境中运行的一个进程可以与具有 `httpd_something_t` 标签的对象交互。
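作为补充,你可以用下面两条命令(仅为示意,假定系统上已经安装并运行了 httpd实际查看进程和文件的标签验证上面列出的对应关系
```
# 查看 httpd 进程运行时的标签(应该是 httpd_t
ps -eZ | grep httpd

# 查看内容目录的标签(应该是 httpd_sys_content_t
ls -dZ /var/www/html
```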
13. 许多命令都可以接收一个 `-Z` 参数去查看、创建、和修改环境:
* `ls -Z`
* `id -Z`
* `ps -Z`
* `netstat -Z`
* `cp -Z`
* `mkdir -Z`
文件在创建时,它的环境会基于其父目录的环境来设置(有少数例外情况。RPM 包可以在安装时设置环境。
14. 这里有导致 SELinux 出错的四个关键原因,它们将在下面的 15 - 21 号问题中展开描述:
* 标签化问题
* SELinux 需要知道一些东西
* 在一个 SELinux 策略/app 中有 bug
* 你的信息可能已经泄露
15. 标签化问题:如果在 `/srv/myweb` 中你的文件没有正确的标签,访问可能会被拒绝。这里有一些修复这类问题的方法:
* 如果你知道标签:
`# semanage fcontext -a -t httpd_sys_content_t '/srv/myweb(/.*)?'`
* 如果你知道使用等价标签的文件:
`# semanage fcontext -a -e /srv/myweb /var/www`
* 恢复环境(对于以上两种情况):
`# restorecon -vR /srv/myweb`
16. 标签化问题:如果你是移动了一个文件,而不是去复制它,那么这个文件将保持原始的环境。修复这类问题:
* 用标签改变环境的命令:
`# chcon -t httpd_system_content_t /var/www/html/index.html`
* 用引用标签改变环境的命令:
`# chcon --reference /var/www/html/ /var/www/html/index.html`
* 恢复环境(对于以上两种情况):
`# restorecon -vR /var/www/html/`
17. 如果 SELinux 需要知道 HTTPD 是在 8585 端口上监听,告诉 SELinux
`# semanage port -a -t http_port_t -p tcp 8585`
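然后你可以用下面的命令(示意)确认这个端口定义已经生效;如果以后不再需要,也可以把它删掉:
```
# 列出 http_port_t 类型当前包含的端口8585 应该出现在其中)
semanage port -l | grep http_port_t

# 不再需要时,删除这个自定义端口定义
semanage port -d -t http_port_t -p tcp 8585
```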
18. SELinux 需要知道:布尔值允许在运行时调整 SELinux 策略的某些部分,而无需掌握任何 SELinux 策略编写知识。例如,如果希望 httpd 能够发送邮件,输入:`# setsebool -P httpd_can_sendmail 1`
19. SELinux 需要知道:布尔值只是 SELinux 的开/关设置:
* 查看所有的布尔值:`# getsebool -a`
* 查看每个布尔值的描述:`# semanage boolean -l`
* 设置布尔值:`# setsebool [_boolean_] [1|0]`
* 将它配置为永久值,添加 `-P` 标志。例如:
`# setsebool httpd_enable_ftp_server 1 -P`
20. SELinux 策略/apps 可能有 bug包括
* 不寻常的代码路径
* 配置问题
* 重定向 `stdout`
* 泄漏的文件描述符
* 可执行内存
* 构建不当的库
遇到这类问题请提交一个工单(不要提交 Bugzilla 报告Bugzilla 没有服务级别协议)。
21. 如果你的受限域confined domain尝试执行以下操作你的信息可能已经泄露
* 加载内核模块
* 关闭 SELinux 的强制模式
* 写入 `etc_t/shadow_t`
* 修改 iptables 规则
22. 开发策略模块的 SELinux 工具:
`# yum -y install setroubleshoot setroubleshoot-server`
安装完成之后重引导机器或重启 `auditd` 服务。
23. 使用 `journalctl` 去列出所有与 `setroubleshoot` 相关的日志:
`# journalctl -t setroubleshoot --since=14:20`
24. 使用 `journalctl` 去列出所有与特定 SELinux 标签相关的日志。例如:
`# journalctl _SELINUX_CONTEXT=system_u:system_r:policykit_t:s0`
25. 当发生 SELinux 错误时,查看 `setroubleshoot` 的日志,它会给出一些可能的解决方案。例如,来自 `journalctl` 的输出:
```
Jun 14 19:41:07 web1 setroubleshoot: SELinux is preventing httpd from getattr access on the file /var/www/html/index.html. For complete message run: sealert -l 12fd8b04-0119-4077-a710-2d0e0ee5755e
# sealert -l 12fd8b04-0119-4077-a710-2d0e0ee5755e
SELinux is preventing httpd from getattr access on the file /var/www/html/index.html.
***** Plugin restorecon (99.5 confidence) suggests ************************
If you want to fix the label,
/var/www/html/index.html default label should be httpd_syscontent_t.
Then you can restorecon.
Do
# /sbin/restorecon -v /var/www/html/index.html
```
26. 日志SELinux 记录的信息全部在这些地方:
* `/var/log/messages`
* `/var/log/audit/audit.log`
* `/var/lib/setroubleshoot/setroubleshoot_database.xml`
27. 日志:在审计日志中查找 SELinux 错误:
`# ausearch -m AVC,USER_AVC,SELINUX_ERR -ts today`
28. 为特定的服务去搜索 SELinux 的访问向量缓存AVC信息
`# ausearch -m avc -c httpd`
29. `audit2allow` 实用工具从拒绝的操作的日志中采集信息,然后生成 SELinux policy-allow 规则。例如:
* 产生一个人类可读的关于为什么拒绝访问的描述:`# audit2allow -w -a`
* 查看能够允许该次被拒绝访问的类型强制规则:`# audit2allow -a`
* 创建一个自定义模块:`# audit2allow -a -M mypolicy`
`-M` 选项使用一个指定的名字去创建一个类型强制文件(.te并编译这个规则到一个策略包.pp`mypolicy.pp mypolicy.te`
* 安装自定义模块:`# semodule -i mypolicy.pp`
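把第 28 条和第 29 条结合起来,常见的做法是直接把 `ausearch` 的原始输出通过管道交给 `audit2allow`。下面是一个示意(模块名 `my-httpd` 是随意取的;安装之前务必人工检查生成的规则,以免把真正的安全问题也一并放行):
```
# 只针对 httpd 的 AVC 拒绝记录,生成一个自定义策略模块
ausearch -m avc -c httpd --raw | audit2allow -M my-httpd

# 先检查生成的 .te 文件,确认这些规则确实是你想允许的
cat my-httpd.te

# 安装该模块
semodule -i my-httpd.pp
```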
30. 配置单个进程(域)运行在 permissive 模式:`# semanage permissive -a httpd_t`
31. 如果不再希望一个域在 permissive 模式中:`# semanage permissive -d httpd_t`
32. 禁用所有的 permissive 域:`# semodule -d permissivedomains`
33. 启用 SELinux MLS 策略:`# yum install selinux-policy-mls`
`/etc/selinux/config` 中:
`SELINUX=permissive`
`SELINUXTYPE=mls`
确保 SELinux 运行在 permissive 模式:`# setenforce 0`
使用 `fixfiles` 脚本去确保那个文件在下次重引导后重打标签:
`# fixfiles -F onboot # reboot`
34. 使用一个特定的 MLS 范围创建用户:`# useradd -Z staff_u john`
使用 `useradd` 命令,映射新用户到一个已存在的 SELinux 用户(上面例子中是 `staff_u`)。
35. 查看 SELinux 和 Linux 用户之间的映射:`# semanage login -l`
36. 为用户定义一个指定的范围:`# semanage login --modify --range s2:c100 john`
37. 调整用户 home 目录上的标签(如果需要的话):`# chcon -R -l s2:c100 /home/john`
38. 列出当前分类:`# chcat -L`
39. 要修改分类或者创建你自己的分类,请修改以下文件:
`/etc/selinux/_<selinuxtype>_/setrans.conf`
40. 在指定的文件、角色、和用户环境中运行一个命令或脚本:
`# runcon -t initrc_t -r system_r -u user_u yourcommandhere`
* `-t` 是文件环境
* `-r` 是角色环境
* `-u` 是用户环境
41. 在容器中禁用 SELinux
* 使用 Podman`# podman run --security-opt label=disable` …
* 使用 Docker`# docker run --security-opt label=disable` …
42. 如果需要给容器提供完全访问系统的权限:
* 使用 Podman`# podman run --privileged` …
* 使用 Docker`# docker run --privileged` …
就这些了,你已经知道了答案。因此请相信我:**不用恐慌,去打开 SELinux 吧**。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/sysadmin-guide-selinux
作者:[Alex Callejas][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/darkaxl
[1]:https://en.wikipedia.org/wiki/Ethan_Hunt
[2]:https://en.wikipedia.org/wiki/Red_pill_and_blue_pill
[3]:https://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy#Answer_to_the_Ultimate_Question_of_Life,_the_Universe,_and_Everything_%2842%29
[4]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux