翻译完成

This commit is contained in:
liuyakun 2017-12-10 23:12:24 +08:00
commit be435886f8
186 changed files with 17314 additions and 3061 deletions

View File

@ -0,0 +1,179 @@
使用 groff 编写 man 手册页
===================
`groff` 是大多数 Unix 系统上所提供的流行的文本格式化工具 nroff/troff 的 GNU 版本。它一般用于编写手册页,即命令、编程接口等的在线文档。在本文中,我们将给你展示如何使用 `groff` 编写你自己的 man 手册页。
在 Unix 系统上最初有两个文本处理系统troff 和 nroff它们是由贝尔实验室为初始的 Unix 所开发的(事实上,开发 Unix 系统的部分原因就是为了支持这样的一个文本处理系统)。这个文本处理器的第一个版本被称作 roff意为 “runoff”——径流稍后出现了 troff当时用于为特定的<ruby>排字机<rt>Typesetter</rt></ruby>生成输出。nroff 是更晚一些的版本,它成为了各种 Unix 系统的标准文本处理器。groff 是 nroff 和 troff 的 GNU 实现,用在 Linux 系统上。它包括了几个扩展功能和一些打印设备的驱动程序。
`groff` 能够生成文档、文章和书籍,很多时候它与其它的文本格式化系统(如 TeX的作用非常类似。然而`groff`(以及原来的 nroff有一个固有的功能是 TeX 及其变体所缺乏的:生成普通 ASCII 输出。其它的系统在生成打印的文档方面做得很好,而 `groff` 却能够生成可以在线浏览的普通 ASCII甚至可以在最简单的打印机上直接以普通文本打印。如果要生成在线浏览的文档以及打印的表单`groff` 也许是你所需要的(虽然也有替代品,如 Texinfo、Lametex 等等)。
`groff` 还有一个好处是它比 TeX 小很多;它所需要的支持文件和可执行程序甚至比最小化的 TeX 版本都少。
`groff` 一个特定的用途是用于格式化 Unix 的 man 手册页。如果你是一个 Unix 程序员,你肯定需要编写和生成各种 man 手册页。在本文中,我们将通过编写一个简短的 man 手册页来介绍 `groff` 的使用。
像 TeX 一样,`groff` 使用特定的文本格式化语言来描述如何处理文本。这种语言比 TeX 之类的系统更加神秘一些,但是更加简洁。此外,`groff` 在基本的格式化器之上提供了几个宏软件包;这些宏软件包是为一些特定类型的文档所定制的。举个例子, mgs 宏对于写作文章或论文很适合,而 man 宏可用于 man 手册页。
### 编写 man 手册页
用 `groff` 编写 man 手册页十分简单。要让你的 man 手册页看起来和其它的一样,你需要在源文件中遵循几个惯例,如下所示。在这个例子中,我们将为一个虚构的命令 `coffee` 编写 man 手册页,它用于以各种方式控制你的联网咖啡机。
使用任意文本编辑器,输入如下代码,并保存为 `coffee.man`。不要输入每行的行号,它们仅用于本文中的说明。
```
.TH COFFEE 1 "23 March 94"
.SH NAME
coffee \- Control remote coffee machine
.SH SYNOPSIS
\fBcoffee\fP [ -h | -b ] [ -t \fItype\fP ]
\fIamount\fP
.SH DESCRIPTION
\fBcoffee\fP queues a request to the remote
coffee machine at the device \fB/dev/cf0\fR.
The required \fIamount\fP argument specifies
the number of cups, generally between 0 and
12 on ISO standard coffee machines.
.SS Options
.TP
\fB-h\fP
Brew hot coffee. Cold is the default.
.TP
\fB-b\fP
Burn coffee. Especially useful when executing
\fBcoffee\fP on behalf of your boss.
.TP
\fB-t \fItype\fR
Specify the type of coffee to brew, where
\fItype\fP is one of \fBcolumbian\fP,
\fBregular\fP, or \fBdecaf\fP.
.SH FILES
.TP
\fC/dev/cf0\fR
The remote coffee machine device
.SH "SEE ALSO"
milk(5), sugar(5)
.SH BUGS
May require human intervention if coffee
supply is exhausted.
```
*清单 1示例 man 手册页源文件*
不要让这些晦涩的代码吓坏了你。字符串序列 `\fB`、`\fI` 和 `\fR` 分别用来改变字体为粗体、斜体和正体(罗马字体)。`\fP` 设置字体为前一个选择的字体。
其它的 `groff` <ruby>请求<rt>request</rt></ruby>以点(`.`)开头出现在行首。第 1 行中,我们看到的 `.TH` 请求用于设置该 man 手册页的标题为 `COFFEE`、man 的部分为 `1`、以及该 man 手册页的最新版本的日期。说明man 手册的第 1 部分用于用户命令,第 2 部分用于系统调用,等等;使用 `man man` 命令可以了解各个部分。)
在第 2 行,`.SH` 请求用于标记一个<ruby><rt>section</rt></ruby>的开始,并将该节命名为 `NAME`。注意,大部分的 Unix man 手册页依次使用 `NAME`、`SYNOPSIS`、`DESCRIPTION`、`FILES`、`SEE ALSO`、`NOTES`、`AUTHOR` 和 `BUGS` 等节,个别情况下也需要一些额外的可选节。这只是编写 man 手册页的惯例,并不强制所有软件都如此。
第 3 行给出命令的名称,并在一个横线(`-`)后给出简短描述。在 `NAME` 节使用这个格式,以便你的 man 手册页可以加到 whatis 数据库中——它可以用于 `man -k` 或 `apropos` 命令。
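这一格式生效后可以快速验证一下。假设手册页已经安装、whatis 数据库也已更新(不同系统使用 `makewhatis` 或 `mandb` 等工具更新),查询结果大致如下(输出仅为示意):

```
$ man -k coffee
coffee (1)           - Control remote coffee machine
```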
第 4-6 行我们给出了 `coffee` 命令格式的大纲。注意,斜体 `\fI...\fP` 用于表示命令行的参数,可选参数用方括号括起来。
第 7-12 行给出了该命令的摘要介绍。粗体通常用于表示程序或文件的名称。
在 13 行,使用 `.SS` 开始了一个名为 `Options` 的子节。
接着第 14-25 行是选项列表,会使用参数列表样式表示。参数列表中的每一项以 `.TP` 请求来标记;`.TP` 后的行是参数,再之后是该项的文本。例如,第 14-16 行:
```
.TP
\fB-h\fP
Brew hot coffee. Cold is the default.
```
将会显示如下:
```
-h Brew hot coffee. Cold is the default.
```
第 26-29 行创建该 man 手册页的 `FILES` 节,它用于描述该命令可能使用的文件。可以使用 `.TP` 请求来表示文件列表。
第 30-31 行,给出了 `SEE ALSO` 节,它提供了其它可以参考的 man 手册页。注意,第 30 行的 `.SH` 请求中 `"SEE ALSO"` 使用引号括起来,这是因为 `.SH` 使用第一个空格来分隔该节的标题。任何超过一个单词的标题都需要使用引号括起来,成为一个单一参数。
最后,第 32-34 行,是 `BUGS` 节。
### 格式化和安装 man 手册页
为了在你的屏幕上查看这个手册页格式化的样式,你可以使用如下命令:
```
$ groff -Tascii -man coffee.man | more
```
`-Tascii` 选项告诉 `groff` 生成普通 ASCII 输出;`-man` 告诉 `groff` 使用 man 手册页宏集合。如果一切正常,这个 man 手册页显示应该如下。
```
COFFEE(1) COFFEE(1)
NAME
coffee - Control remote coffee machine
SYNOPSIS
coffee [ -h | -b ] [ -t type ] amount
DESCRIPTION
coffee queues a request to the remote coffee machine at
the device /dev/cf0. The required amount argument speci-
fies the number of cups, generally between 0 and 12 on ISO
standard coffee machines.
Options
-h Brew hot coffee. Cold is the default.
-b Burn coffee. Especially useful when executing cof-
fee on behalf of your boss.
-t type
Specify the type of coffee to brew, where type is
one of columbian, regular, or decaf.
FILES
/dev/cf0
The remote coffee machine device
SEE ALSO
milk(5), sugar(5)
BUGS
May require human intervention if coffee supply is
exhausted.
```
*格式化的 man 手册页*
如之前提到过的,`groff` 能够生成其它类型的输出。使用 `-Tps` 选项替代 `-Tascii` 将会生成 PostScript 输出,你可以将其保存为文件,用 GhostView 查看,或用一个 PostScript 打印机打印出来。`-Tdvi` 会生成设备无关的 .dvi 输出,类似于 TeX 的输出。
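比如,要生成 PostScript 文件并查看或打印,可以像下面这样做(示意;查看器的命令名依安装而定,可能是 `ghostview` 或 `gv`

```
$ groff -Tps -man coffee.man > coffee.ps   # 生成 PostScript 输出
$ ghostview coffee.ps                      # 在屏幕上预览
$ lpr coffee.ps                            # 发送给 PostScript 打印机
```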
如果你希望让别人在你的系统上也可以查看这个 man 手册页,你需要安装这个 groff 源文件到其它用户的 `$MANPATH` 目录里面。标准的 man 手册页放在 `/usr/man`。第一部分的 man 手册页应该放在 `/usr/man/man1` 下,因此,使用命令:
```
$ cp coffee.man /usr/man/man1/coffee.1
```
这将安装该 man 手册页到 `/usr/man` 中供所有人使用(注意使用 `.1` 扩展名而不是 `.man`)。当接下来执行 `man coffee` 命令时,该 man 手册页会被自动重新格式化,并且可查看的文本会被保存到 `/usr/man/cat1/coffee.1.Z` 中。
如果你不能直接复制 man 手册页的源文件到 `/usr/man`(比如说你不是系统管理员),你可创建你自己的 man 手册页目录树,并将其加入到你的 `$MANPATH`。`$MANPATH` 环境变量的格式同 `$PATH` 一样,举个例子,要添加目录 `/home/mdw/man` 到 `$MANPATH` ,只需要:
```
$ export MANPATH=/home/mdw/man:$MANPATH
```
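一个完整的流程示意如下(沿用上文的示例路径 `/home/mdw/man`

```
$ mkdir -p /home/mdw/man/man1               # 创建个人手册页目录树
$ cp coffee.man /home/mdw/man/man1/coffee.1
$ export MANPATH=/home/mdw/man:$MANPATH
$ man coffee                                # 现在可以直接查看了
```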
`groff` 和 man 手册页宏还有许多其它的选项和格式化命令。找到它们的最好办法是查看 `/usr/lib/groff` 中的文件;`tmac` 目录包含了宏文件,这些文件自身通常包含了其所提供的命令的文档。要让 `groff` 使用特定的宏集合,只需要使用 `-m macro`(或 `-macro`)选项。例如,要使用 mgs 宏,使用命令:
```
groff -Tascii -mgs files...
```
`groff` 的 man 手册页对这些选项有更详细的描述。
不幸的是,随同 `groff` 提供的宏集合没有完善的文档。第 7 部分的 man 手册页提供了一些,例如,`man 7 groff_mm` 会给你 mm 宏集合的信息。然而,该文档通常只覆盖了 `groff` 实现中的不同之处和新增的功能,而假设你已经了解过原来的 nroff/troff 宏集合(称作 DWBthe Documentor's Work Bench。最佳的信息来源或许是一本讲述那些经典宏集合细节的书。要了解更多编写 man 手册页的信息你可以看看 man 手册页的源文件(在 `/usr/man` 中),并将它们与格式化后的输出进行比较。
这篇文章是《Running Linux》 中的一章,由 Matt Welsh 和 Lar Kaufman 著奥莱理出版ISBN 1-56592-100-3。在本书中还包括了 Linux 下使用的各种文本格式化系统的教程。这期的《Linux Journal》中的内容及《Running Linux》应该可以给你提供在 Linux 上使用各种文本工具的良好开端。
### 祝好,撰写快乐!
Matt Welsh[mdw@cs.cornell.edu][1])是康奈尔大学的一名学生和系统程序员,在机器人和视觉实验室从事实时机器视觉研究。
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/article/1158
作者:[Matt Welsh][a]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxjournal.com/user/800006
[1]:mailto:mdw@cs.cornell.edu

View File

@ -0,0 +1,331 @@
ARMv8 上的 kprobes 事件跟踪
==============
![core-dump](http://www.linaro.org/wp-content/uploads/2016/02/core-dump.png)
### 介绍
kprobes 是一种内核功能,它允许通过在执行(或模拟)断点指令之前和之后,设置调用开发者提供例程的任意断点来检测内核。可参见 kprobes 文档^注1 获取更多信息。基本的 kprobes 功能可使用 `CONFIG_KPROBES` 来选择。arm64 的 kprobes 支持在 v4.8 版本内核中被合并到主线。
在这篇文章中,我们将介绍 kprobes 在 arm64 上的使用,通过在命令行中使用 debugfs 事件追踪接口来收集动态追踪事件。这个功能在一些架构(包括 arm32上可用已经有段时间了现在在 arm64 上也能使用了。通过这个功能,无需编写任何代码就能使用 kprobes。
### 探针类型
kprobes 子系统提供了三种不同类型的动态探针,如下所述。
#### kprobes
基本探针是 kprobes 插入的一个软件断点,用以替代你正在探测的指令,当探测点被命中时,它为最终的单步执行(或模拟)保存下原始指令。
#### kretprobes
kretprobes 是 kprobes 的一部分,它允许拦截函数的返回,而不必在该函数的各个返回点上一一设置探针(可能有多个)。对于支持的架构(包括 ARMv8只要选择了 kprobes就可以选择此功能。
#### jprobes
jprobes 允许通过提供一个具有相同<ruby>调用签名<rt>call signature</rt></ruby>的中间函数来拦截对一个函数的调用这里中间函数将被首先调用。jprobes 只是一个编程接口,它不能通过 debugfs 事件追踪子系统来使用。因此,我们将不会在这里进一步讨论 jprobes。如果你想使用 jprobes请参考 kprobes 文档。
### 调用 kprobes
kprobes 提供一系列能从内核代码中调用的 API 来设置探测点和当探测点被命中时调用的注册函数。在不往内核中添加代码的情况下kprobes 也是可用的,这是通过写入特定事件追踪的 debugfs 文件来实现的,需要在文件中设置探针地址和信息,以便在探针被命中时记录到追踪日志中。后者是本文将要讨论的重点。最后kprobes 还可以通过 perf 命令来使用。
#### kprobes API
内核开发人员可以在内核中编写函数(通常在专用的调试模块中完成)来设置探测点,并且在探测指令执行前和执行后立即执行任何所需操作。这在 kprobes.txt 中有很好的解释。
#### 事件追踪
事件追踪子系统有自己的文档^注2 ,对于了解一般追踪事件的背景可能值得一读。事件追踪子系统是<ruby>追踪点<rt>tracepoints</rt></ruby>和 kprobes 事件追踪的基础。事件追踪文档重点关注追踪点所以请在查阅文档时记住这一点。kprobes 与追踪点不同的是没有预定义的追踪点列表,而是采用动态创建的用于触发追踪事件信息收集的任意探测点。事件追踪子系统通过一系列 debugfs 文件来控制和监视。事件追踪(`CONFIG_EVENT_TRACING`)将在被如 kprobe 事件追踪子系统等需要时自动选择。
##### kprobes 事件
使用 kprobes 事件追踪子系统用户可以在内核任意断点处指定要报告的信息只需要给出任意现有可探测指令的地址以及格式化信息即可。在执行过程中遇到断点时kprobes 将所请求的信息传递给事件追踪子系统的公共部分这些部分将数据格式化并追加到追踪日志中就像追踪点的工作方式一样。kprobes 使用一个类似的但是大部分是独立的 debugfs 文件来控制和显示追踪事件信息。该功能可使用 `CONFIG_KPROBE_EVENT` 来选择。kprobetrace 文档^注3 提供了如何使用 kprobes 事件追踪的基本信息,在了解以下示例的详细内容时应当参考它。
#### kprobes 和 perf
perf 工具为 kprobes 提供了另一个命令行接口。特别地,`perf probe` 允许探测点除了由函数名加偏移量和地址指定外还可由源文件和行号指定。perf 接口实际上是使用 kprobes 的 debugfs 接口的封装器。
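下面是一个假设性的示意(需要内核符号信息,事件名以 `perf probe` 的实际输出为准),演示如何用 `perf` 在 `_do_fork` 上设置并记录探针:

```
$ sudo perf probe --add _do_fork                    # 在函数入口创建探测点
$ sudo perf record -e probe:_do_fork -aR sleep 10   # 在全系统范围记录 10 秒
$ sudo perf report                                  # 查看记录结果
$ sudo perf probe --del _do_fork                    # 用完后删除探测点
```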
### Arm64 kprobes
上述所有 kprobes 的方面现在都在 arm64 上得到实现,然而实际上与其它架构上的有一些不同:
* 寄存器名称参数当然是依架构而特定的,可以在 ARM ARM 中找到。
* 目前不是所有的指令类型都可被探测。当前不可探测的指令包括 mrs/msr除了 DAIF 读取、异常生成指令、eret 和 hint除了 nop 变体)。在这些情况下,只探测一个附近的指令来代替是最简单的。这些指令在探测的黑名单里是因为在 kprobes 单步执行或者指令模拟时它们对处理器状态造成的改变是不安全的,这是由于 kprobes 构造的单步执行上下文和指令所需要的不一致,或者是由于指令不能容忍在 kprobes 中额外的处理时间和异常处理ldx/stx
* kprobes 会试图识别处于 ldx/stx 序列中的指令并拒绝对其探测,但是理论上这种检查可能会失败,导致被探测的原子序列永远无法成功完成。在探测原子代码序列附近时应该小心。
* 注意,由于 Linux ARM64 调用约定的具体细节,可靠地为被探测函数复制栈帧是不可能的,因此不要试图用 jprobes 这样做,这一点与支持 jprobes 的大多数其它架构不同。原因是被调用者没有足够的信息来确定所需的栈空间大小。
* 注意当探针被命中时,一个探针记录的栈指针信息将反映出使用中的特定栈指针,它是内核栈指针或者中断栈指针。
* 有一组内核函数是不能被探测的,通常因为它们作为 kprobes 处理的一部分被调用。这组函数的一部分是依架构特定的,并且也包含如异常入口代码等。
### 使用 kprobes 事件追踪
kprobes 的一个常用例子是检测函数入口和/或出口。由于只需要使用函数名来作为探针地址安装探针特别简单。kprobes 事件追踪将查看符号名称并且确定地址。ARMv8 调用标准定义了函数参数和返回值的位置,并且这些可以作为 kprobes 事件处理的一部分被打印出来。
#### 例子: 函数入口探测
检测 USB 以太网驱动程序复位功能:
```
$ pwd
/sys/kernel/debug/tracing
$ cat > kprobe_events <<EOF
p ax88772_reset %x0
EOF
$ echo 1 > events/kprobes/enable
```
此时,每次该驱动的 `ax88772_reset()` 函数被调用,追踪事件都将会被记录。这个事件将显示指向 `usbnet` 结构的指针,按照 ARMv8 调用标准,它作为该函数的唯一参数通过 `X0` 寄存器传入。插入一个需要该以太网驱动程序的 USB 适配器后,我们看见以下追踪信息:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 1/1 #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
kworker/0:0-4 [000] d... 10972.102939: p_ax88772_reset_0:
(ax88772_reset+0x0/0x230) arg1=0xffff800064824c80
```
这里我们可以看见传入到我们的探测函数的指针参数的值。由于我们没有使用 kprobes 事件追踪的可选标签功能,我们需要的信息自动被标注为 `arg1`。注意,这个标签指的是我们要求 kprobes 为此探针记录的一组值中的第一个,而不是函数参数的实际位置;只是在这个例子里,它恰好也是被探测函数的第一个参数。
#### 例子: 函数入口和返回探测
kretprobe 功能专门用于探测函数返回。在函数入口 kprobes 子系统将会被调用,并且建立钩子以便在函数返回时调用,钩子将记录所需的事件信息。最常见的情况是返回值位于 `X0` 寄存器中,这一点非常有用。`%x0` 中的返回值也可以写作 `$retval`。以下例子也演示了如何提供一个可读的标签来展示有趣的信息。
使用 kprobes 和 kretprobe 检测内核 `do_fork()` 函数来记录参数和结果的例子:
```
$ cd /sys/kernel/debug/tracing
$ cat > kprobe_events <<EOF
p _do_fork %x0 %x1 %x2 %x3 %x4 %x5
r _do_fork pid=%x0
EOF
$ echo 1 > events/kprobes/enable
```
此时每次对 `_do_fork()` 的调用都会产生两个记录到 trace 文件的 kprobe 事件,一个报告调用参数值,另一个报告返回值。返回值在 trace 文件中将被标记为 `pid`。这里是三次 fork 系统调用执行后的 trace 文件的内容:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 6/6 #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
bash-1671 [001] d... 204.946007: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
bash-1671 [001] d..1 204.946391: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x724
bash-1671 [001] d... 208.845749: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
bash-1671 [001] d..1 208.846127: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x725
bash-1671 [001] d... 214.401604: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
bash-1671 [001] d..1 214.401975: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x726
```
#### 例子: 解引用指针参数
对于指针值kprobes 事件处理子系统也允许解引用和打印所需的内存内容,适用于各种基本数据类型。为了展示所需字段,手动计算结构的偏移量是必要的。
检测 `_do_wait()` 函数:
```
$ cat > kprobe_events <<EOF
p:wait_p do_wait wo_type=+0(%x0):u32 wo_flags=+4(%x0):u32
r:wait_r do_wait $retval
EOF
$ echo 1 > events/kprobes/enable
```
注意在第一个探针中使用的参数标签是可选的,并且可用于更清晰地识别记录在追踪日志中的信息。带符号的偏移量和括号表明了寄存器参数是指向记录在追踪日志中的内存内容的指针。`:u32` 表明了内存位置包含一个无符号的 4 字节宽的数据(在这个例子中分别是局部定义的结构中的一个 enum 和一个 int
探针标签(冒号后)是可选的,并且将用来识别日志中的探针。对每个探针来说标签必须是独一无二的。如果没有指定,将从附近的符号名称自动生成一个有用的标签,如前面的例子所示。
也要注意,`$retval` 参数也可以直接指定为 `%x0`,如下面的示意。
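也就是说,上面例子里的返回探针在 ARM64 上可以等价地写成(示意):

```
r:wait_r do_wait %x0
```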
这里是两次 fork 系统调用执行后的 trace 文件的内容:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 4/4 #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
bash-1702 [001] d... 175.342074: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xe
bash-1702 [002] d..1 175.347236: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0x757
bash-1702 [002] d... 175.347337: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xf
bash-1702 [002] d..1 175.347349: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0xfffffffffffffff6
```
#### 例子: 探测任意指令地址
在前面的例子中,我们已经为函数的入口和出口插入探针,然而探测一个任意指令(除少数例外)也是可能的。如果我们正在 C 函数中放置一个探针,第一步是查看代码的汇编版本,以确定我们要放置探针的位置。一种方法是在 vmlinux 文件上使用 gdb显示要放置探针的函数中的指令。下面是一个对 `arch/arm64/kernel/modules.c` 中 `module_alloc` 函数执行此操作的示例。在这个例子中,由于 gdb 似乎更偏向于使用该函数的弱符号定义及与之关联的桩代码,所以我们从 System.map 中来获取符号值:
```
$ grep module_alloc System.map
ffff2000080951c4 T module_alloc
ffff200008297770 T kasan_module_alloc
```
在这个例子中我们使用了交叉开发工具,并且在我们的主机系统上调用 gdb来检查包含我们感兴趣的函数的指令。
```
$ ${CROSS_COMPILE}gdb vmlinux
(gdb) x/30i 0xffff2000080951c4
0xffff2000080951c4 <module_alloc>: sub sp, sp, #0x30
0xffff2000080951c8 <module_alloc+4>: adrp x3, 0xffff200008d70000
0xffff2000080951cc <module_alloc+8>: add x3, x3, #0x0
0xffff2000080951d0 <module_alloc+12>: mov x5, #0x713 // #1811
0xffff2000080951d4 <module_alloc+16>: mov w4, #0xc0 // #192
0xffff2000080951d8 <module_alloc+20>: mov x2, #0xfffffffff8000000 // #-134217728
0xffff2000080951dc <module_alloc+24>: stp x29, x30, [sp,#16]
0xffff2000080951e0 <module_alloc+28>: add x29, sp, #0x10
0xffff2000080951e4 <module_alloc+32>: movk x5, #0xc8, lsl #48
0xffff2000080951e8 <module_alloc+36>: movk w4, #0x240, lsl #16
0xffff2000080951ec <module_alloc+40>: str x30, [sp]
0xffff2000080951f0 <module_alloc+44>: mov w7, #0xffffffff // #-1
0xffff2000080951f4 <module_alloc+48>: mov x6, #0x0 // #0
0xffff2000080951f8 <module_alloc+52>: add x2, x3, x2
0xffff2000080951fc <module_alloc+56>: mov x1, #0x8000 // #32768
0xffff200008095200 <module_alloc+60>: stp x19, x20, [sp,#32]
0xffff200008095204 <module_alloc+64>: mov x20, x0
0xffff200008095208 <module_alloc+68>: bl 0xffff2000082737a8 <__vmalloc_node_range>
0xffff20000809520c <module_alloc+72>: mov x19, x0
0xffff200008095210 <module_alloc+76>: cbz x0, 0xffff200008095234 <module_alloc+112>
0xffff200008095214 <module_alloc+80>: mov x1, x20
0xffff200008095218 <module_alloc+84>: bl 0xffff200008297770 <kasan_module_alloc>
0xffff20000809521c <module_alloc+88>: tbnz w0, #31, 0xffff20000809524c <module_alloc+136>
0xffff200008095220 <module_alloc+92>: mov sp, x29
0xffff200008095224 <module_alloc+96>: mov x0, x19
0xffff200008095228 <module_alloc+100>: ldp x19, x20, [sp,#16]
0xffff20000809522c <module_alloc+104>: ldp x29, x30, [sp],#32
0xffff200008095230 <module_alloc+108>: ret
0xffff200008095234 <module_alloc+112>: mov sp, x29
0xffff200008095238 <module_alloc+116>: mov x19, #0x0 // #0
```
在这种情况下,我们将在此函数中显示以下源代码行的结果:
```
p = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START,
VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
NUMA_NO_NODE, __builtin_return_address(0));
```
……以及在此代码行的函数调用的返回值:
```
if (p && (kasan_module_alloc(p, size) < 0)) {
```
我们可以通过汇编代码中对外部函数的调用来识别这些位置。为了展示这些值,我们将在目标系统上的 `0xffff20000809520c` 和 `0xffff20000809521c` 处放置探针。
```
$ cat > kprobe_events <<EOF
p 0xffff20000809520c %x0
p 0xffff20000809521c %x0
EOF
$ echo 1 > events/kprobes/enable
```
现在将一个以太网适配器加密狗插入到 USB 端口后,我们看到以下写入追踪日志的内容:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 12/12 #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
systemd-udevd-2082 [000] d... 77.200991: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001188000
systemd-udevd-2082 [000] d... 77.201059: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
systemd-udevd-2082 [000] d... 77.201115: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001198000
systemd-udevd-2082 [000] d... 77.201157: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
systemd-udevd-2082 [000] d... 77.227456: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011a0000
systemd-udevd-2082 [000] d... 77.227522: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
systemd-udevd-2082 [000] d... 77.227579: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b0000
systemd-udevd-2082 [000] d... 77.227635: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
modprobe-2097 [002] d... 78.030643: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b8000
modprobe-2097 [002] d... 78.030761: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
modprobe-2097 [002] d... 78.031132: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001270000
modprobe-2097 [002] d... 78.031187: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
```
kprobes 事件系统的另一个功能是记录统计信息,这可在 `kprobe_profile` 文件中找到。在以上追踪后,该文件的内容为:
```
$ cat kprobe_profile
p_0xffff20000809520c 6 0
p_0xffff20000809521c 6 0
```
这表明我们设置的两处断点每个共发生了 6 次命中,这当然与追踪日志数据是一致的。在 kprobetrace 文档中有更多 kprobe_profile 的功能描述。
也可以进一步过滤 kprobes 事件。用来控制这点的 debugfs 文件在 kprobetrace 文档中被列出,然而它们内容的详细信息大多在 trace events 文档中被描述。
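例如,下面是一个假设性的过滤与清理流程(事件名沿用前文的 `p__do_fork_0`,过滤条件的语法详见 trace events 文档):

```
# 只记录指定进程触发的事件
$ echo 'common_pid == 1671' > events/kprobes/p__do_fork_0/filter
# 停止追踪并清除所有已定义的 kprobes 事件
$ echo 0 > events/kprobes/enable
$ echo > kprobe_events
```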
### 总结
现在Linux ARMv8 上对 kprobes 功能的支持已与其它架构相当。有人正在做添加 uprobes 和 systemtap 支持的工作。这些功能/工具和其他已经完成的功能(如 perf、coresight允许 Linux ARMv8 用户像在其它更老的架构上一样调试和测试性能。
* * *
参考文献
- 注1 Jim Keniston, Prasanna S. Panchamukhi, Masami Hiramatsu. “Kernel Probes (kprobes).” _GitHub_. GitHub, Inc., 15 Aug. 2016. Web. 13 Dec. 2016.
- 注2 Tso, Theodore, Li Zefan, and Tom Zanussi. “Event Tracing.” _GitHub_. GitHub, Inc., 3 Mar. 2016. Web. 13 Dec. 2016.
- 注3 Hiramatsu, Masami. “Kprobe-based Event Tracing.” _GitHub_. GitHub, Inc., 18 Aug. 2016. Web. 13 Dec. 2016.
----------------
作者简介:[David Long][8] 在 Linaro Kernel - Core Development 团队中担任工程师。在加入 Linaro 之前,他在商业和国防行业工作了数年,既做嵌入式实时系统,又为 Unix 提供软件开发工具。之后,在 Digital又名 Compaq公司工作了十几年负责 Unix、标准 C 编译器和运行时库的工作。之后 David 又去了一系列初创公司,做嵌入式 Linux 和安卓系统、嵌入式定制操作系统和 Xen 虚拟化。他拥有 MIPS、Alpha 和 ARM 等平台的经验。他使用过从 1979 年贝尔实验室 V6 开始的大部分 Unix 操作系统,并且长期以来一直是 Linux 用户和倡导者。他偶尔也因使用烙铁和数字示波器调试设备驱动而知名。
--------------------------------------------------------------------------------
via: http://www.linaro.org/blog/kprobes-event-tracing-armv8/
作者:[David Long][a]
译者:[kimii](https://github.com/kimii)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linaro.org/author/david-long/
[1]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
[2]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
[3]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
[4]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
[5]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
[6]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
[7]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
[8]:http://www.linaro.org/author/david-long/
[9]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#comments
[10]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
[11]:http://www.linaro.org/tag/arm64/
[12]:http://www.linaro.org/tag/armv8/
[13]:http://www.linaro.org/tag/jprobes/
[14]:http://www.linaro.org/tag/kernel/
[15]:http://www.linaro.org/tag/kprobes/
[16]:http://www.linaro.org/tag/kretprobes/
[17]:http://www.linaro.org/tag/perf/
[18]:http://www.linaro.org/tag/tracing/

View File

@ -0,0 +1,340 @@
如何在 Linux 系统里用 Scrot 截屏
============================================================
最近,我们介绍过 [gnome-screenshot][17] 工具,这是一个很优秀的屏幕抓取工具。但如果你想找一个在命令行运行的更好用的截屏工具,你一定要试试 Scrot。这个工具有一些 gnome-screenshot 没有的独特功能。在这篇文章里,我们会通过简单易懂的例子来详细介绍 Scrot。
请注意一下,这篇文章里的所有例子都在 Ubuntu 16.04 LTS 上测试过,我们用的 scrot 版本是 0.8。
### 关于 Scrot
[Scrot][18] **SCR**eensh**OT** 是一个屏幕抓取工具,使用 imlib2 库来获取和保存图片。由 Tom Gilbert 用 C 语言开发完成,通过 BSD 协议授权。
### 安装 Scrot
scrot 工具可能在你的 Ubuntu 系统里预装了,不过如果没有的话,你可以用下面的命令安装:
```
sudo apt-get install scrot
```
安装完成后,你可以通过下面的命令来使用:
```
scrot [options] [filename]
```
**注意**:方括号里的参数是可选的。
### Scrot 的使用和特点
在这个小节里,我们会介绍如何使用 Scrot 工具,以及它的所有功能。
如果不带任何选项执行命令,它会抓取整个屏幕。
[
![使用 Scrot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/scrot.png)
][19]
默认情况下,抓取的截图会用带时间戳的文件名保存到当前目录下,不过你也可以在运行命令时指定截图文件名。比如:
```
scrot [image-name].png
```
### 获取程序版本
你想的话,可以用 `-v` 选项来查看 scrot 的版本。
```
scrot -v
```
这是例子:
[
![获取 scrot 版本](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/version.png)
][20]
### 抓取当前窗口
这个工具可以限制抓取当前的焦点窗口。这个功能可以通过 `-u` 选项打开。
```
scrot -u
```
例如,这是我在命令行执行上边命令时的桌面:
[
![用 scrot 截取窗口](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/desktop.png)
][21]
这是另一张用 scrot 抓取的截图:
[
![用 scrot 抓取的图片](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/active.png)
][22]
### 抓取选定窗口
这个工具还可以让你抓取任意用鼠标点击的窗口。这个功能可以用 `-s` 选项打开。
```
scrot -s
```
例如,在下面的截图里你可以看到,我有两个互相重叠的终端窗口。我在上层的窗口里执行上面的命令。
[
![选择窗口](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/select1.png)
][23]
现在假如我想抓取下层的终端窗口。这样我只要在执行命令后点击窗口就可以了 —— 在你用鼠标点击之前,命令的执行不会结束。
这是我点击了下层终端窗口后的截图:
[
![窗口截图](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/select2.png)
][24]
**注意**:你可以在上面的截图里看到,下层终端窗口的整个显示区域都被抓取下来了,甚至包括了上层窗口的部分叠加内容。
### 在截屏时包含窗口边框
我们之前介绍的 `-u` 选项在截屏时不会包含窗口边框。不过,需要的话你也可以在截屏时包含窗口边框。这个功能可以通过 `-b` 选项打开(当然要和 `-u` 选项一起)。
```
scrot -ub
```
下面是示例截图:
[
![截屏时包含窗口边框](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/border-new.png)
][25]
**注意**:截屏时包含窗口边框同时也会增加一点额外的背景。
### 延时截屏
你可以在开始截屏时增加一点延时。需要在 `--delay` 或 `-d` 选项后设定一个时间值参数。
```
scrot --delay [NUM]
scrot --delay 5
```
例如:
[
![延时截屏](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/delay.png)
][26]
在这例子里scrot 会等待 5 秒再截屏。
### 截屏前倒数
这个工具也可以在你使用延时功能后显示一个倒计时。这个功能可以通过 `-c` 选项打开。
```
scrot --delay [NUM] -c
scrot -d 5 -c
```
下面是示例截图:
[
![延时截屏示例](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/countdown.png)
][27]
### 图片质量
你可以使用这个工具来调整截图的图片质量,范围是 1-100 之间。较大的值意味着更大的文件大小以及更低的压缩率。默认值是 75不过最终效果根据选择的文件类型也会有一些差异。
这个功能可以通过 `--quality` 或 `-q` 选项打开,但是你必须提供一个 1 - 100 之间的数值作为参数。
```
scrot --quality [NUM]
scrot --quality 10
```
下面是示例截图:
[
![截屏质量](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/img-quality.jpg)
][28]
你可以看到,`-q` 选项的参数更靠近 1 让图片质量下降了很多。
### 生成缩略图
scrot 工具还可以生成截屏的缩略图。这个功能可以通过 `--thumb` 选项打开。这个选项也需要一个 NUM 数值作为参数,基本上是指定原图大小的百分比。
```
scrot --thumb NUM
scrot --thumb 50
```
**注意**:加上 `--thumb` 选项也会同时保存原始截图文件。
例如,下面是我测试的原始截图:
[
![原始截图](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/orig.png)
][29]
下面是保存的缩略图:
[
![截图缩略图](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/thmb.png)
][30]
### 拼接多显示器截屏
如果你的电脑接了多个显示设备,你可以用 scrot 抓取并拼接这些显示设备的截图。这个功能可以通过 `-m` 选项打开。
```
scrot -m
```
下面是示例截图:
[
![拼接截屏](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/multiple.png)
][31]
### 在保存截图后执行操作
使用这个工具,你可以在保存截图后执行各种操作 —— 例如,用像 gThumb 这样的图片编辑器打开截图。这个功能可以通过 `-e` 选项打开。下面是例子:
```
scrot abc.png -e 'gthumb abc.png'
```
这个命令里的 gthumb 是一个图片编辑器,它会在上面的命令执行后自动打开刚保存的截图。
下面是命令的截图:
[
![截屏后执行命令](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec1.png)
][32]
这个是上面命令执行后的效果:
[
![示例截图](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec2.png)
][33]
你可以看到 scrot 抓取了屏幕截图,然后再启动了 gThumb 图片编辑器打开刚才保存的截图图片。
如果你截图时没有指定文件名,截图将会用带有时间戳的文件名保存到当前目录 —— 这是 scrot 的默认设定,我们前面已经说过。
下面是一个使用默认名字并且加上 `-e` 选项来截图的例子:
```
scrot -e 'gthumb $n'
```
[
![scrot 截屏后运行 gthumb](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec3.png)
][34]
有个地方要注意的是 `$n` 是一个特殊字符串,用来获取当前截图的文件名。关于特殊字符串的更多细节,请继续看下个小节。
### 特殊字符串
scrot 的 `-e`(或 `--exec`)选项和文件名参数可以使用格式说明符。有两种类型格式。第一种是以 `%` 加字母组成,用来表示日期和时间,第二种以 `$` 开头scrot 内部使用。
下面介绍几个 `--exec` 和文件名参数接受的说明符。
`$f`  让你可以使用截图的全路径(包括文件名)。
例如:
```
scrot ashu.jpg -e 'mv $f ~/Pictures/Scrot/ashish/'
```
下面是示例截图:
[
![示例](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/f.png)
][35]
如果你没有指定文件名scrot 默认会用日期格式的文件名保存截图。这个是 scrot 的默认文件名格式:`%Y-%m-%d-%H%M%S_$wx$h_scrot.png`。
`$n`  提供截图文件名。下面是示例截图:
[
![scrot $n variable](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/n.png)
][36]
`$s`  获取截图的文件大小。这个功能可以像下面这样使用。
```
scrot abc.jpg -e 'echo $s'
```
下面是示例截图:
[
![scrot $s 变量](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/s.png)
][37]
类似的,你也可以使用其他格式字符串 `$p`、`$w`、 `$h`、`$t`、`$$` 以及 `\n` 来分别获取图片像素大小、图像宽度、图像高度、图像格式、输入 `$` 字符、以及换行。你可以像上面介绍的 `$s` 格式那样使用这些字符串。
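例如,下面这条示意命令(文件名与输出内容仅作演示)在截屏后打印图像的宽度、高度和格式;注意要用单引号,让这些格式字符串由 scrot 而不是 shell 来展开:

```
$ scrot demo.png -e 'echo width=$w height=$h format=$t'
```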
### 结论
这个应用能轻松地安装在 Ubuntu 系统上对初学者比较友好。scrot 也提供了一些高级功能,比如支持格式化字符串,方便专业用户用脚本处理。当然,要用好这些高级功能,会有一点学习成本。
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/
作者:[Himanshu Arora][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/
[1]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#get-the-applicationnbspversion
[2]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#capturing-current-window
[3]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#selecting-a-window
[4]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#includenbspwindow-border-in-screenshots
[5]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#delay-in-taking-screenshots
[6]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#countdown-before-screenshot
[7]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#image-quality
[8]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#generating-thumbnails
[9]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#join-multiple-displays-shots
[10]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#executing-operations-on-saved-images
[11]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#special-strings
[12]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#about-scrot
[13]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#scrot-installation
[14]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#scrot-usagefeatures
[15]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#conclusion
[16]:https://www.howtoforge.com/subscription/
[17]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/
[18]:https://en.wikipedia.org/wiki/Scrot
[19]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/scrot.png
[20]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/version.png
[21]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/desktop.png
[22]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/active.png
[23]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/select1.png
[24]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/select2.png
[25]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/border-new.png
[26]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/delay.png
[27]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/countdown.png
[28]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/img-quality.jpg
[29]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/orig.png
[30]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/thmb.png
[31]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/multiple.png
[32]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec1.png
[33]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec2.png
[34]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec3.png
[35]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/f.png
[36]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/n.png
[37]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/s.png

View File

@ -0,0 +1,300 @@
用户指南Linux 文件系统的链接
============================================================
> 学习如何使用链接,通过从 Linux 文件系统多个位置来访问文件,可以让日常工作变得轻松。
![linux 文件链接用户指南](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/links.png?itok=enaPOi4L "A user's guide to links in the Linux filesystem")
Image by : [Paul Lewin][8]. Modified by Opensource.com. [CC BY-SA 2.0][9]
在我为 opensource.com 写过的关于 Linux 文件系统方方面面的文章中,包括 [Linux 的 EXT4 文件系统的历史、特性以及最佳实践][10]、[在 Linux 中管理设备][11]、[Linux 文件系统概览][12] 和 [用户指南:逻辑卷管理][13],我曾简要地提到过 Linux 文件系统一个有趣的特性,它允许用户从多个位置来访问 Linux 文件目录树中的文件来简化一些任务。
Linux 文件系统中有两种<ruby>链接<rt>link</rt></ruby><ruby>硬链接<rt>hard link</rt></ruby><ruby>软链接<rt>soft link</rt></ruby>。虽然二者差别显著,但都用来解决相似的问题。它们都提供了对单个文件的多个目录项(引用)的访问,但实现却大为不同。链接的强大功能赋予了 Linux 文件系统灵活性,因为[一切皆是文件][14]。
举个例子,我曾发现一些程序要求特定版本的库方可运行。当用升级后的库替代旧库后,程序会崩溃,提示旧版本库缺失。通常,库名的唯一变化就是版本号。出于直觉,我仅仅给程序添加了一个新库的链接,并以旧库名称命名。我试着再次启动程序,运行良好。那个程序是一个游戏,而人人都知道,玩家们会想尽办法让游戏能玩下去。
事实上,几乎所有的应用程序链接库都使用通用的命名规则,链接名称中包含了主版本号,链接所指向的文件的文件名中同样包含了小版本号。再比如,程序的一些必需文件为了迎合 Linux 文件系统规范,从一个目录移动到另一个目录中,系统为了向后兼容那些不能获取这些文件新位置的程序在旧的目录中存放了这些文件的链接。如果你对 `/lib64` 目录做一个长清单列表,你会发现很多这样的例子。
```
lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.hwm -> ../../usr/share/cracklib/pw_dict.hwm
lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwd -> ../../usr/share/cracklib/pw_dict.pwd
lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwi -> ../../usr/share/cracklib/pw_dict.pwi
lrwxrwxrwx. 1 root root 27 Jun 9 2016 libaccountsservice.so.0 -> libaccountsservice.so.0.0.0
-rwxr-xr-x. 1 root root 288456 Jun 9 2016 libaccountsservice.so.0.0.0
lrwxrwxrwx 1 root root 15 May 17 11:47 libacl.so.1 -> libacl.so.1.1.0
-rwxr-xr-x 1 root root 36472 May 17 11:47 libacl.so.1.1.0
lrwxrwxrwx. 1 root root 15 Feb 4 2016 libaio.so.1 -> libaio.so.1.0.1
-rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.0
-rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.1
lrwxrwxrwx. 1 root root 30 Jan 16 16:39 libakonadi-calendar.so.4 -> libakonadi-calendar.so.4.14.26
-rwxr-xr-x. 1 root root 816160 Jan 16 16:39 libakonadi-calendar.so.4.14.26
lrwxrwxrwx. 1 root root 29 Jan 16 16:39 libakonadi-contact.so.4 -> libakonadi-contact.so.4.14.26
```
`/lib64` 目录下的一些链接
在上面展示的 `/lib64` 目录清单列表中,文件模式第一个字母 `l` (小写字母 l表示这是一个软链接又称符号链接
### 硬链接
在 [Linux 的 EXT4 文件系统的历史、特性以及最佳实践][15]一文中,我曾探讨过这样一个事实,每个文件都有一个包含该文件信息的 inode包含了该文件的位置信息。上述文章中的[图2][16]展示了一个指向 inode 的单一目录项。每个文件都至少有一个目录项指向描述该文件信息的 inode ,目录项是一个硬链接,因此每个文件至少都有一个硬链接。
如下图 1 所示,多个目录项指向了同一 inode 。这些目录项都是硬链接。我曾在三个目录项中使用波浪线 (`~`) 的缩写,这是用户目录的惯例表示,因此在该例中波浪线等同于 `/home/user` 。值得注意的是,第四个目录项是一个完全不同的目录,`/home/shared`,可能是该计算机上用户的共享文件目录。
![fig1directory_entries.png](https://opensource.com/sites/default/files/images/life/fig1directory_entries.png)
*图 1*
硬链接被限制在一个单一的文件系统中。此处的“文件系统” 是指挂载在特定挂载点上的分区或逻辑卷,此例中是 `/home`。这是因为在每个文件系统中的 inode 号都是唯一的。而在不同的文件系统中,如 `/var``/opt`,会有和 `/home` 中相同的 inode 号。
因为所有的硬链接都指向了包含文件元信息的单一 inode ,这些属性都是文件的一部分,像所属关系、权限、到该 inode 的硬链接数目,对每个硬链接来说这些特性没有什么不同的。这是一个文件所具有的一组属性。唯一能区分这些文件的是文件名,而文件名并不包含在 inode 中。链接到同一目录中的同一个文件/inode 的硬链接必须拥有不同的文件名,这是基于同一目录下不能存在重复的文件名的事实。
文件的硬链接数目可通过 `ls -l` 来查看;如果你想查看实际的 inode 号,可使用 `ls -li` 命令。
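例如(具体输出见后文的实验):

```
$ ls -l main.file.txt    # 权限串后的数字就是硬链接数
$ ls -li main.file.txt   # 最左边一列是 inode 号
```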
### 符号(软)链接
硬链接和软链接(也称为<ruby>符号链接<rt>symlink</rt></ruby>)的区别在于,硬链接直接指向属于该文件的 inode ,而软链接直接指向一个目录项,即指向一个硬链接。因为软链接指向的是一个文件的硬链接而非该文件的 inode ,所以它们并不依赖于 inode 号,这使得它们能跨越不同的文件系统、分区和逻辑卷起作用。
软链接的缺点是,一旦它所指向的硬链接被删除或重命名后,该软链接就失效了。软链接虽然还在,但所指向的硬链接已不存在。所幸的是,`ls` 命令能以红底白字的方式在其列表中高亮显示失效的软链接。
### 实验项目: 链接实验
我认为最容易理解链接用法及其差异的方法是动手做一个实验。这个实验应以非超级用户的身份在一个空目录下进行。我创建了 `~/temp` 目录做这个实验,你也可以这么做。这么做可为实验创建一个安全的环境,并提供一个新的空目录,如此一来这里仅存放和实验有关的文件。
#### 初始工作
首先在你要进行实验的目录下为该项目中的任务创建一个临时目录确保当前工作目录PWD是你的主目录然后键入下列命令。
```
mkdir temp
```
使用这个命令将当前工作目录切换到 `~/temp`
```
cd temp
```
实验开始,我们需要创建一个能够链接到的文件,下列命令可完成该工作并向其填充内容。
```
du -h > main.file.txt
```
使用 `ls -l` 长列表命令确认文件正确地创建了。运行结果应类似于我的。注意文件大小只有 7 字节,但你的可能会有一两个字节的差异。
```
[dboth@david temp]$ ls -l
total 4
-rw-rw-r-- 1 dboth dboth 7 Jun 13 07:34 main.file.txt
```
在列表中,文件模式串后的数字 `1` 代表存在于该文件上的硬链接数。现在应该是 1 ,因为我们还没有为这个测试文件建立任何硬链接。
#### 对硬链接进行实验
硬链接创建一个指向同一 inode 的新目录项,当为文件添加一个硬链接时,你会看到链接数目的增加。确保当前工作目录仍为 `~/temp`。创建一个指向 `main.file.txt` 的硬链接,然后查看该目录下文件列表。
```
[dboth@david temp]$ ln main.file.txt link1.file.txt
[dboth@david temp]$ ls -l
total 8
-rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 link1.file.txt
-rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 main.file.txt
```
目录中两个文件都有两个链接且大小相同,时间戳也一样。这就是有一个 inode 和两个硬链接(即该文件的目录项)的一个文件。再建立一个该文件的硬链接,并列出目录清单内容。新硬链接的目标可以是 `link1.file.txt` 或 `main.file.txt` 中的任意一个。
```
[dboth@david temp]$ ln link1.file.txt link2.file.txt ; ls -l
total 16
-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link1.file.txt
-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link2.file.txt
-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 main.file.txt
```
注意,该目录下的每个硬链接必须使用不同的名称,因为同一目录下的两个文件不能拥有相同的文件名。试着创建一个和现存链接名称相同的硬链接。
```
[dboth@david temp]$ ln main.file.txt link2.file.txt
ln: failed to create hard link 'link2.file.txt': File exists
```
显然不行,因为 `link2.file.txt` 已经存在。目前为止我们只在同一目录下创建硬链接,接着在临时目录的父目录(你的主目录)中创建一个链接。
```
[dboth@david temp]$ ln main.file.txt ../main.file.txt ; ls -l ../main*
-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
```
上面的 `ls` 命令显示 `main.file.txt` 文件确实存在于主目录中,且与该文件在 `temp` 目录中的名称一致。当然它们不是不同的文件,它们是指向同一个 inode 的两个目录项(链接)。为了帮助说明下一点,在 `temp` 目录中添加一个非链接文件。
```
[dboth@david temp]$ touch unlinked.file ; ls -l
total 12
-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
-rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
使用 `ls` 命令的 `-i` 选项,查看这些硬链接文件和新创建文件的 inode 号。
```
[dboth@david temp]$ ls -li
total 12
657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
注意上面文件模式左边的数字 `657024` ,这是三个硬链接文件所指的同一文件的 inode 号,你也可以使用 `-i` 选项查看主目录中所创建的链接的 inode 号,其值相同。而那个只有一个链接的文件的 inode 号和其他的不同,在你的系统上看到的 inode 号或许不同于本文中的。
接着改变其中一个硬链接文件的大小。
```
[dboth@david temp]$ df -h > link2.file.txt ; ls -li
total 12
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
现在所有的硬链接文件大小都比原来大了,因为多个目录项都链接着同一文件。
下个实验之所以能在我的电脑上演示,是因为我的 `/tmp` 目录位于一个独立的逻辑卷上。如果你有单独的逻辑卷,或者位于其它分区上的文件系统(如果未使用逻辑卷),请确定你是否能访问那个分区或逻辑卷;如果不能,你可以在电脑上挂载一个 U 盘来进行这个实验。
试着在 `/tmp` 目录中建立一个 `~/temp` 目录下文件的链接(或你的文件系统所在的位置)。
```
[dboth@david temp]$ ln link2.file.txt /tmp/link3.file.txt
ln: failed to create hard link '/tmp/link3.file.txt' => 'link2.file.txt':
Invalid cross-device link
```
为什么会出现这个错误呢? 原因是每一个单独的可挂载文件系统都有一套自己的 inode 号。简单的通过 inode 号来跨越整个 Linux 文件系统结构引用一个文件会使系统困惑,因为相同的节点号会存在于每个已挂载的文件系统中。
有时你可能会想找到一个 inode 的所有硬链接。可以先使用 `ls -li` 命令查到 inode 号,然后使用 `find` 命令查找具有该 inode 号的所有链接。
```
[dboth@david temp]$ find . -inum 657024
./main.file.txt
./link1.file.txt
./link2.file.txt
```
注意 `find` 命令没能找到属于该 inode 的全部四个硬链接,因为我们只在 `~/temp` 目录中查找。`find` 命令仅在当前工作目录及其子目录中查找文件。要找到所有的硬链接,我们可以使用下列命令,指定你的主目录作为起始查找位置。
```
[dboth@david temp]$ find ~ -samefile main.file.txt
/home/dboth/temp/main.file.txt
/home/dboth/temp/link1.file.txt
/home/dboth/temp/link2.file.txt
/home/dboth/main.file.txt
```
如果你是非超级用户,没有权限,可能会看到错误信息。这个命令也使用了 `-samefile` 选项而不是指定文件的 inode 号。如果你知道其中一个硬链接的名称,这个办法与使用 inode 号的效果相同,而且更容易。
#### 对软链接进行实验
如你刚才看到的,不能跨越文件系统边界创建硬链接,即在逻辑卷或文件系统中从一个文件系统到另一个文件系统。软链接给出了这个问题的解决方案。虽然它们可以达到相同的目的,但它们是非常不同的,知道这些差异是很重要的。
让我们在 `~/temp` 目录中创建一个符号链接来开始我们的探索。
```
[dboth@david temp]$ ln -s link2.file.txt link3.file.txt ; ls -li
total 12
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
link2.file.txt
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
拥有节点号 `657024` 的那些硬链接没有变化,且硬链接的数目也没有变化。新创建的符号链接有不同的 inode 号 `658270`。 名为 `link3.file.txt` 的软链接指向了 `link2.file.txt` 文件。使用 `cat` 命令查看 `link3.file.txt` 文件的内容。符号链接的 inode 信息以字母 `l` (小写字母 l开头意味着这个文件实际是个符号链接。
上例中软链接文件 `link3.file.txt` 的大小只有 14 字节。这是其目标文本 `link2.file.txt`14 个字符)的大小,即该目录项的实际内容。目录项 `link3.file.txt` 并不指向一个 inode ;它指向了另一个目录项,这在跨越文件系统建立链接时很有帮助。现在试着创建一个软链接,之前在 `/tmp` 目录中尝试过的。
```
[dboth@david temp]$ ln -s /home/dboth/temp/link2.file.txt
/tmp/link3.file.txt ; ls -l /tmp/link*
lrwxrwxrwx 1 dboth dboth 31 Jun 14 21:53 /tmp/link3.file.txt ->
/home/dboth/temp/link2.file.txt
```
#### 删除链接
当你删除硬链接或硬链接所指的文件时,需要考虑一些问题。
首先,让我们删除硬链接文件 `main.file.txt`。注意指向 inode 的每个目录项就是一个硬链接。
```
[dboth@david temp]$ rm main.file.txt ; ls -li
total 8
657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt
657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link2.file.txt
658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
link2.file.txt
657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
`main.file.txt` 是该文件被创建时所创建的第一个硬链接。现在删除它,文件的数据以及所有剩余的硬链接仍然保留在硬盘上。要真正删除该文件,你必须删除它的所有硬链接。
现在删除 `link2.file.txt` 硬链接文件。
```
[dboth@david temp]$ rm link2.file.txt ; ls -li
total 8
657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt
658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
link2.file.txt
657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 main.file.txt
657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
注意软链接的变化。删除软链接所指的硬链接会使该软链接失效。在我的系统中,断开的链接用颜色高亮显示,目标的硬链接会闪烁显示。如果需要修复这个损坏的软链接,你需要在同一目录下建立一个和旧链接相同名字的硬链接,只要不是所有硬链接都已删除就行。您还可以重新创建链接本身,链接保持相同的名称,但指向剩余的硬链接中的一个。当然如果软链接不再需要,可以使用 `rm` 命令删除它们。
`unlink` 命令在删除文件和链接时也有用。它非常简单,不像 `rm` 命令那样有各种选项。然而,它更准确地反映了删除的底层过程,因为它删除的是目录项与被删除文件之间的链接。
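`unlink` 的用法示意如下,效果等同于对同一文件执行 `rm`

```
$ unlink link1.file.txt
```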
### 写在最后
我用过这两种类型的链接很长一段时间后,我开始了解它们的能力和特质。我为我所教的 Linux 课程编写了一个实验室项目,以充分理解链接是如何工作的,并且我希望增进你的理解。
--------------------------------------------------------------------------------
作者简介:
戴维·布斯David Both是 Linux 和开源倡导者,居住在北卡罗莱纳的罗利。他在 IT 行业工作了四十年,其中为 IBM 工作了 20 多年,从事过 OS/2 等方面的工作。在 IBM 时,他在 1981 年编写了最初的 IBM PC 的第一个培训课程。他为 Red Hat 教授过 RHCE 班,并曾在 MCI Worldcom、思科和北卡罗莱纳州工作。他已经用 Linux 和开源软件工作将近 20 年了。
---------------------------------
via: https://opensource.com/article/17/6/linking-linux-filesystem
作者:[David Both][a]
译者:[yongshouzhang](https://github.com/yongshouzhang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?src=linux_resource_menu&intcmp=7016000000127cYAAQ
[5]:https://opensource.com/tags/linux?src=linux_resource_menu
[6]:https://opensource.com/article/17/6/linking-linux-filesystem?rate=YebHxA-zgNopDQKKOyX3_r25hGvnZms_33sYBUq-SMM
[7]:https://opensource.com/user/14106/feed
[8]:https://www.flickr.com/photos/digypho/7905320090
[9]:https://creativecommons.org/licenses/by/2.0/
[10]:https://linux.cn/article-8685-1.html
[11]:https://linux.cn/article-8099-1.html
[12]:https://linux.cn/article-8887-1.html
[13]:https://opensource.com/business/16/9/linux-users-guide-lvm
[14]:https://opensource.com/life/15/9/everything-is-a-file
[15]:https://linux.cn/article-8685-1.html
[16]:https://linux.cn/article-8685-1.html#3_19182
[17]:https://opensource.com/users/dboth
[18]:https://opensource.com/article/17/6/linking-linux-filesystem#comments

View File

@ -0,0 +1,42 @@
vim 的酷功能:会话!
============================================================
昨天我在编辑我的 [vimrc][5] 的时候(主要是为了添加 fzf 和 ripgrep 插件),了解到一个很酷的 vim 功能!这是一个内置功能,不需要特别的插件。
所以我画了一个漫画。
基本上,你可以用下面的命令保存所有你打开的文件和当前的状态:
```
:mksession ~/.vim/sessions/foo.vim
```
接着用 `:source ~/.vim/sessions/foo.vim` 或者 `vim -S ~/.vim/sessions/foo.vim` 还原会话。非常酷!
一些 vim 插件给 vim 会话添加了额外的功能:
* [https://github.com/tpope/vim-obsession][1]
* [https://github.com/mhinz/vim-startify][2]
* [https://github.com/xolox/vim-session][3]
这是漫画:
![](https://jvns.ca/images/vimsessions.png)
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2017/09/10/vim-sessions/
作者:[Julia Evans][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about
[1]:https://github.com/tpope/vim-obsession
[2]:https://github.com/mhinz/vim-startify
[3]:https://github.com/xolox/vim-session
[4]:https://jvns.ca/categories/vim
[5]:https://github.com/jvns/vimconfig/blob/master/vimrc

View File

@ -0,0 +1,75 @@
在 Linux 启动或重启时执行命令与脚本
======
有时可能会需要在重启时或者每次系统启动时运行某些命令或者脚本。我们要怎样做呢?本文中我们就对此进行讨论。我们会用两种方法来描述如何在 CentOS/RHEL 以及 Ubuntu 系统上做到重启或者系统启动时执行命令和脚本。两种方法都通过了测试。
### 方法 1 使用 rc.local
这种方法会利用 `/etc/` 中的 `rc.local` 文件来在启动时执行脚本与命令。我们在文件中加上一行来执行脚本,这样每次启动系统时,都会执行该脚本。
不过我们首先需要为 `/etc/rc.local` 添加执行权限,
```
$ sudo chmod +x /etc/rc.local
```
然后将要执行的脚本加入其中:
```
$ sudo vi /etc/rc.local
```
在文件最后加上:
```
sh /root/script.sh &
```
然后保存文件并退出。使用 `rc.local` 文件来执行命令也是一样的,但是一定要记得填写命令的完整路径。 想知道命令的完整路径可以运行:
```
$ which command
```
比如:
```
$ which shutter
/usr/bin/shutter
```
如果是 CentOS我们修改的是文件 `/etc/rc.d/rc.local` 而不是 `/etc/rc.local`。 不过我们也需要先为该文件添加可执行权限。
注意:启动时执行的脚本,请一定保证整个文件是以 `exit 0` 结尾的,如下面的完整示意。
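综上,一个最小的 `/etc/rc.local` 大致如下(`/root/script.sh` 沿用上文的示例路径):

```
#!/bin/sh
# 在后台执行启动脚本,避免阻塞启动过程
sh /root/script.sh &
# 务必以 exit 0 结尾
exit 0
```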
### 方法 2 使用 Crontab
该方法最简单了。我们创建一个 cron 任务,这个任务在系统启动后等待 90 秒,然后执行命令和脚本。
要创建 cron 任务,打开终端并执行
```
$ crontab -e
```
然后输入下行内容,
```
@reboot ( sleep 90 ; sh /location/script.sh )
```
这里 `/location/script.sh` 就是待执行脚本的完整路径。
我们的文章至此就完了。如有疑问,欢迎留言。
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/executing-commands-scripts-at-reboot/
作者:[Shusain][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linuxtechlab.com/author/shsuain/

View File

@ -0,0 +1,72 @@
Linux 上如何禁用 USB 存储
======
为了保护数据不被泄漏,我们使用软件和硬件防火墙来限制外部未经授权的访问,但是数据泄露也可能发生在内部。为了消除这种可能性,机构会限制和监测对互联网的访问,同时禁用 USB 存储设备。
在本教程中,我们将讨论三种不同的方法来禁用 Linux 机器上的 USB 存储设备。所有这三种方法都在 CentOS 6 和 CentOS 7 机器上通过测试。那么让我们一一讨论这三种方法。
(另请阅读: [Ultimate guide to securing SSH sessions][1]
### 方法 1 伪安装
在本方法中,我们往配置文件中添加一行 `install usb-storage /bin/true`,这会让安装 usb-storage 模块的操作实际上变成运行 `/bin/true`,这也是为什么这种方法叫做“伪安装”的原因。具体来说就是,在文件夹 `/etc/modprobe.d` 中创建并打开一个名为 `block_usb.conf`(也可能叫其他名字)的文件:
```
$ sudo vim /etc/modprobe.d/block_usb.conf
```
然后将下行内容添加进去:
```
install usb-storage /bin/true
```
最后保存文件并退出。
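保存后可以这样验证效果(示意):由于那行 `install` 配置,`modprobe` 会“成功”返回,但模块实际上并没有被加载。

```
$ sudo modprobe usb-storage
$ lsmod | grep usb_storage || echo "usb-storage 模块未加载"
```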
### 方法 2 删除 USB 驱动
这种方法要求我们将 USB 存储的驱动程序(`usb_storage.ko`)删掉或者移走,从而达到无法再访问 USB 存储设备的目的。 执行下面命令可以将驱动从它默认的位置移走:
```
$ sudo mv /lib/modules/$(uname -r)/kernel/drivers/usb/storage/usb-storage.ko /home/user1
```
现在在默认的位置上无法再找到驱动程序了,因此当 USB 存储器连接到系统上时也就无法加载驱动程序了,从而导致设备不可用。但是这个方法有一个小问题,那就是当系统内核更新的时候,`usb-storage` 模块会再次出现在它的默认位置。
### 方法 3 - 将 USB 存储器纳入黑名单
我们也可以通过 `/etc/modprobe.d/blacklist.conf` 文件将 usb-storage 纳入黑名单。这个文件在 RHEL/CentOS 6 是现成就有的,但在 7 上可能需要自己创建。 要将 USB 存储列入黑名单,请使用 vim 打开/创建上述文件:
```
$ sudo vim /etc/modprobe.d/blacklist.conf
```
并输入以下行将 USB 纳入黑名单:
```
blacklist usb-storage
```
保存文件并退出。`usb-storage` 模块就会被系统阻止加载,但这种方法有一个很大的缺点,即任何特权用户都可以通过执行以下命令来加载 `usb-storage` 模块:
```
$ sudo modprobe usb-storage
```
这个问题使得这个方法不是那么理想,但是对于非特权用户来说,这个方法效果很好。
在更改完成后重新启动系统,以使更改生效。请尝试用这些方法来禁用 USB 存储,如果您遇到任何问题或有什么疑问,请告知我们。
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/disable-usb-storage-linux/
作者:[Shusain][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject)原创编译,[Linux 中国](https://linux.cn/)荣誉推出
[a]:http://linuxtechlab.com/author/shsuain/
[1]:http://linuxtechlab.com/ultimate-guide-to-securing-ssh-sessions/

View File

@ -1,9 +1,9 @@
并发服务器(三):事件驱动
============================================================
这是并发服务器系列的第三节。[第一节][26] 介绍了阻塞式编程,[第二节线程][27] 探讨了多线程,将其作为一种可行的方法来实现服务器并发编程。
另一种常见的实现并发的方法叫做 _事件驱动编程_,也可以叫做 _异步_ 编程 ^注1 。这种方法变化万千,因此我们会从最基本的开始,使用一些基本的 API 而非从封装好的高级方法开始。本系列以后的文章会讲高层次抽象,还有各种混合的方法。
本系列的所有文章:
@ -13,13 +13,13 @@
### 阻塞式 vs. 非阻塞式 I/O
作为本篇的介绍,我们先讲讲阻塞和非阻塞 I/O 的区别。阻塞式 I/O 更好理解,因为这是我们使用 I/O 相关 API 时的“标准”方式。从套接字接收数据的时候,调用 `recv` 函数会发生 _阻塞_,直到它从端口上接收到了来自另一端套接字的数据。这恰恰是第一部分讲到的顺序服务器的问题。
因此阻塞式 I/O 存在着固有的性能问题。第二节里我们讲过一种解决方法,就是用多线程。哪怕一个线程的 I/O 阻塞了,别的线程仍然可以使用 CPU 资源。实际上,阻塞 I/O 通常在利用资源方面非常高效,因为线程就等待着 —— 操作系统将线程变成休眠状态,只有满足了线程需要的条件才会被唤醒。
_非阻塞式_ I/O 是另一种思路。把套接字设成非阻塞模式时,调用 `recv` 时(还有 `send`,但是我们现在只考虑接收),函数会很快返回,哪怕没有接收到数据。这时,就会返回一个特殊的错误状态 ^注2 来通知调用者,此时没有数据传进来。调用者可以去做其他的事情,或者尝试再次调用 `recv` 函数。
示范阻塞式和非阻塞式的 `recv` 区别的最好方式就是贴一段示例代码。这里有个监听套接字的小程序,一直在 `recv` 这里阻塞着;当 `recv` 返回了数据,程序就报告接收到了多少个字节 ^注3
```
int main(int argc, const char** argv) {
@ -70,7 +70,6 @@ socket world
^D # to end the connection>
```
监听程序会输出以下内容:
```
@ -144,7 +143,6 @@ int main(int argc, const char** argv) {
这里与阻塞版本有些差异,值得注意:
1. `accept` 函数返回的 `newsocktfd` 套接字因调用了 `fcntl` 被设置成非阻塞的模式。
2. 检查 `recv` 的返回状态时,我们对 `errno` 进行了检查,判断它是否被设置成表示没有可供接收的数据的状态。这时,我们仅仅是休眠了 200 毫秒然后进入到下一轮循环。
同样用 `nc` 进行测试,以下是非阻塞监听器的输出:
@ -183,19 +181,19 @@ Peer disconnected; I'm done.
作为练习,给输出添加一个时间戳,确认调用 `recv` 得到结果之间花费的时间是比输入到 `nc` 中所用的多还是少(每一轮是 200 ms
这里就实现了使用非阻塞的 `recv` 让监听者检查套接字变为可能,并且在没有数据的时候重新获得控制权。换句话说,用编程的语言说这就是 <ruby>轮询<rt>polling</rt></ruby> —— 主程序周期性的查询套接字以便读取数据。
对于顺序响应的问题,这似乎是个可行的方法。非阻塞的 `recv` 让同时与多个套接字通信变成可能,轮询这些套接字,仅当有新数据到来时才处理。就是这样,这种方式 _可以_ 用来写并发服务器;但实际上一般不这么做,因为轮询的方式很难扩展。
首先,我在代码中引入的 200ms 延迟对于演示非常好(监听器在我输入 `nc` 之间只打印几行 “Calling recv...”,但实际上应该有上千行)。但它也增加了多达 200ms 的服务器响应时间,这无疑是不必要的。实际的程序中,延迟会低得多,休眠时间越短,进程占用的 CPU 资源就越多。有些时钟周期只是浪费在等待,这并不好,尤其是在移动设备上,这些设备的电量往往有限。
但是当我们实际这样来使用多个套接字的时候,更严重的问题出现了。想像下监听器正在同时处理 1000 个客户端。这意味着每一个循环迭代里面,它都得为 _这 1000 个套接字中的每一个_ 执行一遍非阻塞的 `recv`找到其中准备好了数据的那一个。这非常低效并且极大的限制了服务器能够并发处理的客户端数。这里有个准则每次轮询之间等待的间隔越久服务器响应性越差而等待的时间越少CPU 在无用的轮询上耗费的资源越多。
讲真,所有的轮询都像是无用功。当然操作系统应该是知道哪个套接字是准备好了数据的,因此没必要逐个扫描。事实上,就是这样,接下来就会讲一些 API让我们可以更优雅地处理多个客户端。
### select
`select` 系统调用是标准 Unix API 中可移植的POSIX部分。它是为上一节最后一部分描述的问题而设计的 —— 允许一个线程可以监视许多文件描述符 ^注4 的变化,不用在轮询中执行不必要的代码。我并不打算在这里引入一个关于 `select` 的全面教程,有很多网站和书籍讲这个,但是在涉及到问题的相关内容时,我会介绍一下它的 API然后再展示一个非常复杂的例子。
`select` 允许 _多路 I/O_,监视多个文件描述符,查看其中任何一个的 I/O 是否可用。
@ -209,30 +207,25 @@ int select(int nfds, fd_set *readfds, fd_set *writefds,
`select` 的调用过程如下:
1. 在调用之前,用户先要为所有不同种类的要监视的文件描述符创建 `fd_set` 实例。如果想要同时监视读取和写入事件,`readfds` 和 `writefds` 都要被创建并且引用。
2. 用户可以使用 `FD_SET` 来设置集合中想要监视的特殊描述符。例如,如果想要监视描述符 2、7 和 10 的读取事件,在 `readfds` 这里调用三次 `FD_SET`,分别设置 2、7 和 10。
3. `select` 被调用。
4. 当 `select` 返回时(现在先不管超时),就是说集合中有多少个文件描述符已经就绪了。它也修改 `readfds``writefds` 集合,来标记这些准备好的描述符。其它所有的描述符都会被清空。
5. 这时用户需要遍历 `readfds``writefds`,找到哪个描述符就绪了(使用 `FD_ISSET`)。
作为完整的例子,我在并发的服务器程序上使用 `select`,重新实现了我们之前的协议。[完整的代码在这里][18];接下来的是代码中的重点部分及注释。警告:示例代码非常复杂,因此第一次看的时候,如果没有足够的时间,快速浏览也没有关系。
### 使用 select 的并发服务器
使用 `select` 这样的 I/O 多路复用 API 会给我们服务器的设计带来一些限制;这不会马上显现出来,但这值得探讨,因为它们是理解事件驱动编程到底是什么的关键。
最重要的是,要记住这种方法本质上是单线程的 ^注5 。服务器实际上在 _同一时刻只能做一件事_。因为我们想要同时处理多个客户端请求,我们需要换一种方式重构代码。
首先,让我们谈谈主循环。它看起来是什么样的呢?先让我们想象一下服务器有一堆任务,它应该监视哪些东西呢?两种类型的套接字活动:
1. 新客户端尝试连接。这些客户端应该被 `accept`
2. 已连接的客户端发送数据。这个数据要用 [第一节][11] 中所讲到的协议进行传输,有可能会有一些数据要被回送给客户端。
尽管这两种活动在本质上有所区别,我们还是要把它们放在一个循环里,因为只能有一个主循环。循环会包含 `select` 的调用。这个 `select` 的调用会监视上述的两种活动。
这里是部分代码,设置了文件描述符集合,并在主循环里转到被调用的 `select` 部分。
@ -264,9 +257,7 @@ while (1) {
这里的一些要点:
1. 由于每次调用 `select` 都会重写传递给函数的集合,调用器就得维护一个 “master” 集合,在循环迭代中,保持对所监视的所有活跃的套接字的追踪。
2. 注意我们所关心的,最开始的唯一那个套接字是怎么变成 `listener_sockfd` 的,这就是最开始的套接字,服务器借此来接收新客户端的连接。
3. `select` 的返回值,是在作为参数传递的集合中,那些已经就绪的描述符的个数。`select` 修改这个集合,用来标记就绪的描述符。下一步是在这些描述符中进行迭代。
```
@ -298,7 +289,7 @@ for (int fd = 0; fd <= fdset_max && nready > 0; fd++) {
}
```
这部分循环检查 _可读的_ 描述符。让我们跳过监听器套接字(要浏览所有内容,[看这个代码][20] 然后看看当其中一个客户端准备好了之后会发生什么。出现了这种情况后,我们调用一个叫做 `on_peer_ready_recv`_回调_ 函数,传入相应的文件描述符。这个调用意味着客户端连接到套接字上,发送某些数据,并且对套接字上 `recv` 的调用不会被阻塞 ^[注6][21]。这个回调函数返回结构体 `fd_status_t`
这部分循环检查 _可读的_ 描述符。让我们跳过监听器套接字(要浏览所有内容,[看这个代码][20] 然后看看当其中一个客户端准备好了之后会发生什么。出现了这种情况后,我们调用一个叫做 `on_peer_ready_recv`_回调_ 函数,传入相应的文件描述符。这个调用意味着客户端连接到套接字上,发送某些数据,并且对套接字上 `recv` 的调用不会被阻塞 ^注6 。这个回调函数返回结构体 `fd_status_t`
```
typedef struct {
@ -307,7 +298,7 @@ typedef struct {
} fd_status_t;
```
这个结构体告诉主循环,是否应该监视套接字的读取事件、写入事件,或者两者都监视。上述代码展示了 `FD_SET` 和 `FD_CLR` 是怎么在合适的描述符集合中被调用的。对于主循环中某个准备好了写入数据的描述符,代码是类似的,除了它所调用的回调函数,这个回调函数叫做 `on_peer_ready_send`。
现在来花点时间看看这个回调:
@ -464,37 +455,36 @@ INFO:2017-09-26 05:29:18,070:conn0 disconnecting
INFO:2017-09-26 05:29:18,070:conn2 disconnecting
```
和线程的情况相似,客户端之间没有延迟,它们被同时处理。而且在 `select-server` 中也没有用线程!主循环 _多路_ 处理所有的客户端,通过高效使用 `select` 轮询多个套接字。回想下 [第二节中][22] 顺序的 vs 多线程的客户端处理过程的图片。对于我们的 `select-server`,三个客户端的处理流程像这样:
![多客户端处理流程](https://eli.thegreenplace.net/images/2017/multiplexed-flow.png)
所有的客户端在同一个线程中同时被处理,通过多路复用:做一点这个客户端的任务,然后切换到另一个,再切换到下一个,最后切换回到最开始的那个客户端。注意,这里没有什么循环调度,客户端在它们发送数据的时候被处理,这实际上是受客户端左右的。
### 同步、异步、事件驱动、回调
`select-server` 示例代码为讨论什么是异步编程它和事件驱动及基于回调的编程有何联系,提供了一个良好的背景。因为这些词汇在并发服务器的(非常矛盾的)讨论中很常见。
让我们从一段 `select` 的手册页面中引用的一句开始:
> selectpselectFD\_CLRFD\_ISSETFD\_SETFD\_ZERO - 同步 I/O 处理
因此 `select`_同步_ 处理。但我刚刚演示了大量代码的例子,使用 `select` 作为 _异步_ 处理服务器的例子。有哪些东西?
答案是:这取决于你的观角度。同步常用作阻塞处理,并且对 `select` 的调用实际上是阻塞的。和第 1、2 节中讲到的顺序的、多线程的服务器中对 `send``recv` 是一样的。因此说 `select`_同步的_ API 是有道理的。可是,服务器的设计却可以是 _异步的_,或是 _基于回调的_,或是 _事件驱动的_,尽管其中有对 `select` 的使用。注意这里的 `on_peer_*` 函数是回调函数;它们永远不会阻塞,并且只有网络事件触发的时候才会被调用。它们可以获得部分数据,并能够在调用过程中保持稳定的状态。
答案是:这取决于你的观察角度。同步常用作阻塞处理,并且对 `select` 的调用实际上是阻塞的。和第 1、2 节中讲到的顺序的、多线程的服务器中对 `send` 和 `recv` 的调用是一样的。因此说 `select` 是 _同步的_ API 是有道理的。可是,服务器的设计却可以是 _异步的_,或是 _基于回调的_,或是 _事件驱动的_,尽管其中有对 `select` 的使用。注意这里的 `on_peer_*` 函数是回调函数;它们永远不会阻塞,并且只有网络事件触发的时候才会被调用。它们可以获得部分数据,并能够在多次调用之间保持一致的状态。
如果你曾经做过一些 GUI 编程,这些东西对你来说应该很亲切。有个 “事件循环”,常常完全隐藏在框架里,应用的 “业务逻辑” 建立在回调上,这些回调会在各种事件触发后被调用,用户点击鼠标、选择菜单、定时器触发、数据到达套接字等等。曾经最常见的编程模型是客户端的 JavaScript这里面有一堆回调函数它们在浏览网页时用户的行为被触发。
### select 的局限
使用 `select` 作为第一个异步服务器的例子对于说明这个概念很有用,而且由于 `select` 是很常见可移植的 API。但是它也有一些严重的缺陷在监视的文件描述符非常大的时候就会出现。
1. 有限的文件描述符的集合大小。
2. 糟糕的性能。
从文件描述符的大小开始。`FD_SETSIZE` 是一个编译期常数,在如今的操作系统中,它的值通常是 1024。它被硬编码在 `glibc` 的头文件里,并且不容易修改。它把 `select` 能够监视的文件描述符的数量限制在 1024 以内。曾有些想要写出能够处理上万个并发访问的客户端请求的服务器,所以这个问题很有现实意义。有一些方法,但是不可移植,也很难用。
糟糕的性能问题就好解决得多,但是依然非常严重。注意当 `select` 返回的时候,它向调用者提供的信息是“就绪的”描述符的个数,还有被修改过的描述符集合。描述符集合将描述符映射为“就绪/未就绪”,但是并没有提供什么有效的方法去遍历所有就绪的描述符。如果只有一个描述符是就绪的,最坏的情况是调用者需要遍历 _整个集合_ 来找到那个描述符。这在监视的描述符数量比较少的时候还行,但是如果数量变得很大的时候,这种方法的弊端就凸显出来了 ^注7 。
由于这些原因,为了写出高性能的并发服务器, `select` 已经不怎么用了。每一个流行的操作系统有独特的不可移植的 API允许用户写出非常高效的事件循环像框架这样的高级结构还有高级语言通常在一个可移植的接口中包含这些 API。
@ -541,30 +531,23 @@ while (1) {
}
```
通过调用 `epoll_ctl` 来配置 `epoll`。这时,配置监听的套接字数量,也就是 `epoll` 监听的描述符的数量。然后分配一个缓冲区,把就绪的事件传给 `epoll` 以供修改。在主循环里对 `epoll_wait` 的调用是魅力所在。它阻塞着,直到某个描述符就绪了(或者超时),返回就绪的描述符数量。但这时,不用盲目地迭代所有监视的集合,我们知道 `epoll_wait` 会修改传给它的 `events` 缓冲区,缓冲区中有就绪的事件,从 0 到 `nready-1`,因此我们只需迭代必要的次数。
与 `select` 需要重新遍历相比,这有明显的差异:如果在监视着 1000 个描述符,只有两个就绪,`epoll_wait` 返回的是 `nready=2`,然后修改 `events` 缓冲区最前面的两个元素,因此我们只需要“遍历”两个描述符。用 `select` 我们就需要遍历 1000 个描述符,找出哪个是就绪的。因此,在繁忙的服务器上,有许多活跃的套接字时 `epoll` 比 `select` 更加容易扩展。
剩下的代码很直观,因为我们已经很熟悉 “select 服务器” 了。实际上“epoll 服务器” 中的所有“业务逻辑”和 “select 服务器” 是一样的,回调构成相同的代码。
这种相似是通过将事件循环抽象分离到一个库/框架中。我将会详述这些内容,因为很多优秀的程序员曾经也是这样做的。相反,下一篇文章里我们会了解 libuv一个最近出现的更加受欢迎的时间循环抽象层。像 libuv 这样的库让我们能够写出并发的异步服务器,并且不用考虑系统调用下繁琐的细节。
* * *
- 注1我试着通过网络浏览和阅读来弄清这两个术语的实际差别但常常弄得头疼。有很多不同的观点从“它们是一样的东西”到“一个是另一个的子集”再到“它们是完全不同的东西”。在面对这样主观的观点时最好完全放弃这个问题专注于具体的例子和用例。
- 注2POSIX 表示这可以是 `EAGAIN`,也可以是 `EWOULDBLOCK`,可移植应用应该对这两个都进行检查。
- 注3和这个系列所有的 C 示例类似,代码中用到了某些助手工具来设置监听套接字。这些工具的完整代码在这个 [仓库][4] 的 `utils` 模块里。
- 注4`select` 不是网络/套接字专用的函数,它可以监视任意的文件描述符,有可能是硬盘文件、管道、终端、套接字或者 Unix 系统中用到的任何文件描述符。这篇文章里,我们主要关注它在套接字方面的应用。
- 注5有多种方式用多线程来实现事件驱动我会把它放在稍后的文章中进行讨论。
- 注6由于各种并不简单的原因它 _仍然_ 可能阻塞,即使是在 `select` 说它就绪了之后。因此服务器上打开的所有套接字都被设置成非阻塞模式,如果对 `recv` 和 `send` 的调用返回了 `EAGAIN` 或者 `EWOULDBLOCK`,回调函数就装作没有事件发生。阅读示例代码的注释可以了解更多细节。
- 注7注意这比该文章前面所讲的异步轮询的例子要稍好一点。轮询需要 _一直_ 发生,而 `select` 实际上会阻塞到有一个或多个套接字准备好读取/写入;`select` 会比一直询问浪费少得多的 CPU 时间。
--------------------------------------------------------------------------------
@ -572,7 +555,7 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
作者:[Eli Bendersky][a]
译者:[GitFuture](https://github.com/GitFuture)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -587,9 +570,9 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
[8]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id9
[9]:https://eli.thegreenplace.net/tag/concurrency
[10]:https://eli.thegreenplace.net/tag/c-c
[11]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/
[12]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/
[13]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/
[11]:https://linux.cn/article-8993-1.html
[12]:https://linux.cn/article-8993-1.html
[13]:https://linux.cn/article-9002-1.html
[14]:http://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
[15]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id11
[16]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id12
@ -598,10 +581,10 @@ via: https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
[19]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id14
[20]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/select-server.c
[21]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id15
[22]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/
[22]:https://linux.cn/article-9002-1.html
[23]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id16
[24]:https://github.com/eliben/code-for-blog/blob/master/2017/async-socket-server/epoll-server.c
[25]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/
[26]:http://eli.thegreenplace.net/2017/concurrent-servers-part-1-introduction/
[27]:http://eli.thegreenplace.net/2017/concurrent-servers-part-2-threads/
[26]:https://linux.cn/article-8993-1.html
[27]:https://linux.cn/article-9002-1.html
[28]:https://eli.thegreenplace.net/2017/concurrent-servers-part-3-event-driven/#id10

View File

@ -1,24 +1,20 @@
translating---geekpi
Examining network connections on Linux systems
检查 Linux 系统上的网络连接
============================================================
### Linux systems provide a lot of useful commands for reviewing network configuration and connections. Here's a look at a few, including ifquery, ifup, ifdown and ifconfig.
> Linux 系统提供了许多有用的命令来检查网络配置和连接。下面来看几个,包括 `ifquery`、`ifup`、`ifdown` 和 `ifconfig`
Linux 上有许多可用于查看网络设置和连接的命令。在今天的文章中,我们将会通过一些非常方便的命令来看看它们是如何工作的。
There are a lot of commands available on Linux for looking at network settings and connections. In today's post, we're going to run through some very handy commands and see how they work.
### ifquery 命令
### ifquery command
One very useful command is the **ifquery** command. This command should give you a quick list of network interfaces. However, you might only see something like this —showing only the loopback interface:
一个非常有用的命令是 `ifquery`。这个命令应该会显示一个网络接口列表。但是,你可能只会看到类似这样的内容 - 仅显示回环接口:
```
$ ifquery --list
lo
```
If this is the case, your **/etc/network/interfaces** file doesn't include information on network interfaces except for the loopback interface. You can add lines like the last two in the example below — assuming DHCP is used to assign addresses — if you'd like it to be more useful.
如果是这种情况,那说明你的 `/etc/network/interfaces` 文件不包括除了回环接口之外的网络接口信息。如果你希望这个命令更有用,可以仿照下面例子中的最后两行添加配置(这里假设使用 DHCP 来分配地址)。
```
# interfaces(5) file used by ifup(8) and ifdown(8)
@ -28,15 +24,13 @@ auto eth0
iface eth0 inet dhcp
```
### ifup and ifdown commands
### ifup 和 ifdown 命令
The related **ifup** and **ifdown** commands can be used to bring network connections up and shut them down as needed provided this file has the required descriptive data. Just keep in mind that "if" means "interface" in these commands just as it does in the **ifconfig** command, not "if" as in "if I only had a brain".
可以使用相关的 `ifup``ifdown` 命令来打开网络连接并根据需要将其关闭只要该文件具有所需的描述性数据即可。请记住“if” 在这里意思是<ruby>接口<rt>interface</rt></ruby>,这与 `ifconfig` 命令中的一样,而不是<ruby>如果我只有一个大脑<rt>if I only had a brain</rt></ruby> 中的 “if”。
<aside class="nativo-promo smartphone" id="" style="overflow: hidden; margin-bottom: 16px; max-width: 620px;"></aside>
### ifconfig 命令
### ifconfig command
The **ifconfig** command, on the other hand, doesn't read the /etc/network/interfaces file at all and still provides quite a bit of useful information on network interfaces -- configuration data along with packet counts that tell you how busy each interface has been. The ifconfig command can also be used to shut down and restart network interfaces (e.g., ifconfig eth0 down).
另外,`ifconfig` 命令完全不读取 `/etc/network/interfaces`,但是仍然提供了网络接口相当多的有用信息 —— 配置数据以及可以告诉你每个接口有多忙的数据包计数。`ifconfig` 命令也可用于关闭和重新启动网络接口(例如:`ifconfig eth0 down`)。
```
$ ifconfig eth0
@ -51,15 +45,13 @@ eth0 Link encap:Ethernet HWaddr 00:1e:4f:c8:43:fc
Interrupt:21 Memory:fe9e0000-fea00000
```
The RX and TX packet counts in this output are extremely low. In addition, no errors or packet collisions have been reported. The **uptime** command will likely confirm that this system has only recently been rebooted.
输出中的 RX 和 TX 数据包计数很低。此外,没有报告错误或数据包冲突。或许可以用 `uptime` 命令确认此系统最近才重新启动。
The broadcast (Bcast) and network mask (Mask) addresses shown above indicate that the system is operating on a Class C equivalent network (the default) so local addresses will range from 192.168.0.1 to 192.168.0.254.
上面显示的广播 Bcast 和网络掩码 Mask 地址表明系统运行在 C 类等效网络(默认)上,所以本地地址范围从 `192.168.0.1``192.168.0.254`
### netstat command
### netstat 命令
The **netstat** command provides information on routing and network connections. The **netstat -rn** command displays the system's routing table.
<aside class="nativo-promo tablet desktop" id="" style="overflow: hidden; margin-bottom: 16px; max-width: 620px;"></aside>
`netstat` 命令提供有关路由和网络连接的信息。`netstat -rn` 命令显示系统的路由表。192.168.0.1 是本地网关(`Flags=UG`)。
```
$ netstat -rn
@ -70,7 +62,7 @@ Destination Gateway Genmask Flags MSS Window irtt Iface
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
```
That **169.254.0.0** entry in the above output is only necessary if you are using or planning to use link-local communications. You can comment out the related lines in the **/etc/network/if-up.d/avahi-autoipd** file like this if this is not the case:
上面输出中的 `169.254.0.0` 条目仅在你正在使用或计划使用本地链路通信时才有必要。如果不是这样的话,你可以在 `/etc/network/if-up.d/avahi-autoipd` 中注释掉相关的行:
```
$ tail -12 /etc/network/if-up.d/avahi-autoipd
@ -87,9 +79,9 @@ $ tail -12 /etc/network/if-up.d/avahi-autoipd
#fi
```
### netstat -a command
### netstat -a 命令
The **netstat -a** command will display  **_all_**  network connections. To limit this to listening and established connections (generally much more useful), use the **netstat -at** command instead.
`netstat -a` 命令将显示“所有”网络连接。为了将其限制为显示正在监听和已建立的连接(通常更有用),请改用 `netstat -at` 命令。
```
$ netstat -at
@ -105,21 +97,9 @@ tcp6 0 0 ip6-localhost:ipp [::]:* LISTEN
tcp6 0 0 ip6-localhost:smtp [::]:* LISTEN
```
### netstat -rn command
### host 命令
The **netstat -rn** command displays the system's routing table. The 192.168.0.1 address is the local gateway (Flags=UG).
```
$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
```
### host command
The **host** command works a lot like **nslookup** by looking up the remote system's IP address, but also provides the system's mail handler.
`host` 命令就像 `nslookup` 一样,用来查询远程系统的 IP 地址,但是还会给出该系统的邮件处理主机。
```
$ host world.std.com
@ -127,9 +107,9 @@ world.std.com has address 192.74.137.5
world.std.com mail is handled by 10 smtp.theworld.com.
```
### nslookup command
### nslookup 命令
The **nslookup** also provides information on the system (in this case, the local system) that is providing DNS lookup services.
`nslookup` 还会给出为查询提供 DNS 服务的系统(本例中是本地系统)的信息。
```
$ nslookup world.std.com
@ -141,9 +121,9 @@ Name: world.std.com
Address: 192.74.137.5
```
### dig command
### dig 命令
The **dig** command provides quitea lot of information on connecting to a remote system -- including the name server we are communicating with and how long the query takes to respond and is often used for troubleshooting.
`dig` 命令提供了很多有关连接到远程系统的信息 - 包括与我们通信的名称服务器以及查询需要多长时间进行响应,并经常用于故障排除。
```
$ dig world.std.com
@ -168,9 +148,9 @@ world.std.com. 78146 IN A 192.74.137.5
;; MSG SIZE rcvd: 58
```
### nmap command
### nmap 命令
The **nmap** command is most frequently used to probe remote systems, but can also be used to report on the services being offered by the local system. In the output below, we can see that ssh is available for logins, that smtp is servicing email, that a web site is active, and that an ipp print service is running.
`nmap` 经常用于探查远程系统,但是同样也用于报告本地系统提供的服务。在下面的输出中,我们可以看到 ssh 可用于登录、smtp 正在为电子邮件服务、web 站点处于活动状态,并且 ipp 打印服务正在运行。
```
$ nmap localhost
@ -188,15 +168,15 @@ PORT STATE SERVICE
Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds
```
Linux systems provide a lot of useful commands for reviewing their network configuration and connections. If you run out of commands to explore, keep in mind that **apropos network** might point you toward even more.
Linux 系统提供了很多有用的命令用于查看网络配置和连接。如果你都探索完了,请记住 `apropos network` 或许会让你了解更多。
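举个例子(输出会因系统安装的软件包而异):

```
# 在手册页描述中搜索与 network 相关的命令,只看前几条
$ apropos network | head
```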
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3230519/linux/examining-network-connections-on-linux-systems.html
作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,80 @@
面向初学者的 Linux 网络硬件:软件思维
===========================================================
![island network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/soderskar-island.jpg?itok=wiMaF66b "island network")
> 没有路由和桥接,我们将会成为孤独的小岛,你将会在这个网络教程中学到更多知识。
[Commons Zero][3]Pixabay
上周,我们学习了本地网络硬件知识,本周,我们将学习网络互联技术和在移动网络中的一些很酷的黑客技术。
### 路由器
网络路由器就是计算机网络中的一切,因为路由器连接着网络,没有路由器,我们就会成为孤岛。图一展示了一个简单的有线本地网络和一个无线接入点,所有设备都接入到互联网上:本地局域网的计算机连接到一个以太网交换机上,交换机连接到防火墙或者路由器,防火墙或者路由器再连接到网络服务供应商ISP提供的电缆箱、调制解调器、卫星上行系统……说到底这些都是计算机就像是一个个带着不停闪烁的小灯的盒子。当你的网络数据包离开你的局域网进入广阔的互联网它们穿过一个又一个路由器直到到达自己的目的地。
![simple LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_7.png?itok=lsazmf3- "simple LAN")
*图一:一个简单的有线局域网和一个无线接入点。*
路由器可以是各种样式:一个只专注于路由的小巧特殊的小盒子,一个将会提供路由、防火墙、域名服务,以及 VPN 网关功能的大点的盒子,一台重新设计的台式电脑或者笔记本,一个树莓派计算机或者一个 Arduino或是像 PC Engines 这样结实小巧的单板计算机。除了最苛刻的用途以外,普通的商品硬件都能良好地运行。高端的路由器使用特殊设计的硬件,每秒能够传输最大量的数据包。它们有多路数据总线,多个中央处理器和极快的存储。(可以通过了解 Juniper 和思科的路由器来感受一下高端路由器是什么样子的,而且能看看里面是什么样的构造。)
接入你的局域网的无线接入点要么作为一个以太网网桥,要么作为一个路由器。桥接器扩展了这个网络,所以在这个桥接器上的任意一端口上的主机都连接在同一个网络中。一台路由器连接的是两个不同的网络。
### 网络拓扑
有多种设置你的局域网的方式,你可以把所有主机接入到一个单独的<ruby>平面网络<rt>flat network</rt></ruby>,也可以把它们划分为不同的子网。如果你的交换机支持 VLAN 的话,你也可以把它们分配到不同的 VLAN 中。
平面网络是最简单的网络,只需把每一台设备接入到同一个交换机上即可。如果一台交换机上的端口不够使用,你可以将更多的交换机连接在一起。有些交换机有专门的上行端口,有些则不限制用哪个端口做上行连接;你可能需要使用交叉类型的以太网线,具体要查阅你的交换机的说明文档。
平面网络是最容易管理的,你不需要路由器,也不需要计算子网,但它也有一些缺点。它的伸缩性不好,所以当网络规模变得越来越大的时候就会被广播流量所淹没。将你的局域网进行分段将会提升安全保障,把局域网分成可管理的不同网段也有助于管理更大的网络。图二展示了一个分成两个子网的局域网络:内部的有线和无线主机,和一个托管公开服务的主机。包含面向公共的服务器的子网称作<ruby>非军事区域<rt>DMZ</rt></ruby>(你有没有注意到,对于一群主要是坐在电脑前打字的人来说,这些术语未免太军事化了?),它被阻止访问任何内部网络。
![LAN](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_4.png?itok=LpXq7bLf "LAN")
*图二:一个分成两个子网的简单局域网。*
即使像图二那样的小型网络也可以有不同的配置方法。你可以将防火墙和路由器放置在一台单独的设备上。你可以为你的非军事区域设置一个专用的网络连接,把它完全从你的内部网络隔离,这将引导我们进入下一个主题:一切基于软件。
### 软件思维
你可能已经注意到在这个简短的系列中我们所讨论的硬件,只有网络接口、交换机,和线缆是特殊用途的硬件。
其它的都是通用的商用硬件,而且都是由软件来定义它的用途。Linux 是一个真正的网络操作系统它支持大量的网络操作网关、虚拟专用网关、以太网桥网页、邮件以及文件等等服务器负载均衡、代理、服务质量、多种认证、中继、故障转移……你可以在运行着 Linux 系统的标准硬件上运行你的整个网络。你甚至可以使用 Linux 交换应用LISA和 VDE2 协议来模拟以太网交换机。
有一些用于小型硬件的特殊发行版,如 DD-WRT、OpenWRT以及树莓派发行版也不要忘记 BSD 们和它们的特殊衍生用途如 pfSense 防火墙/路由器,和 FreeNAS 网络存储服务器。
你知道有些人坚持认为硬件防火墙和软件防火墙有区别?其实是没有区别的,就像说硬件计算机和软件计算机一样。
### 端口聚合和以太网绑定
聚合和绑定,也称链路聚合,是把两条以太网通道绑定在一起成为一条通道。一些交换机支持端口聚合,就是把两个交换机端口绑定在一起,成为一个是它们原来带宽之和的一条新的连接。对于一台承载很多业务的服务器来说这是一个增加通道带宽的有效的方式。
你也可以在以太网口进行同样的配置,而且绑定汇聚的驱动是内置在 Linux 内核中的,所以不需要任何其他的专门的硬件。
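举个例子,假设系统中已经创建了一个名为 `bond0` 的绑定接口(接口名只是示意),你可以这样查看它的状态:

```
# 确认内核的 bonding 驱动已经加载
$ lsmod | grep bonding

# 查看 bond0 的绑定模式、从属接口和链路状态
$ cat /proc/net/bonding/bond0
```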
### 随心所欲选择你的移动宽带
我期望移动宽带能够迅速增长,来替代 DSL 和有线网络。我住在一座有 25 万人口的城市附近,但是在城市以外,要想接入互联网就要靠运气了,即使那里有很大的用户上网需求。我居住的小角落离城镇有 20 分钟的距离,但对于网络服务供应商来说,他们几乎不会考虑到为这个地方提供网络。我唯一的选择就是移动宽带:这里没有拨号网络、卫星网络(即使它很糟糕)或者是 DSL、电缆、光纤,但这并没有阻止网络供应商把 Xfinity 以及其它我在这个区域根本用不上的高速网络服务的传单塞进我的邮箱。
我试用了 AT&T、Version 和 T-Mobile。Version 的信号覆盖范围最广,但是 Version 和 AT&T 是最昂贵的。
我居住的地方在 T-Mobile 信号覆盖的边缘,但迄今为止他们给了最大的优惠。为了能够有效地使用,我必须购买一个 WeBoost 信号放大器和一台中兴的移动热点设备。当然你也可以使用一部手机作为热点,但是专用的热点设备有着最强的信号。如果你正在考虑购买一台信号放大器,最好的选择就是 WeBoost因为他们的服务支持最棒而且他们会尽最大努力去帮助你。在一个名为 [SignalCheck Pro][8] 的小应用的协助下,你可以精准地调整放大器来增强你的网络信号。它有一个功能较少的免费版本,但你一点都不会后悔花两美元去使用专业版。
那个小巧的中兴热点设备能够支持 15 台主机,而且还拥有基本的防火墙功能。但如果你使用像 Linksys WRT54GL 这样的设备,就可以用 Tomato、OpenWRT 或者 DD-WRT 来替代普通的固件,这样你就能完全控制你的防火墙规则、路由配置,以及任何其它你想要设置的服务。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-think-software
作者:[CARLA SCHRODER][a]
译者:[FelixYFZ](https://github.com/FelixYFZ)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/creative-commons-zero
[4]:https://www.linux.com/files/images/fig-1png-7
[5]:https://www.linux.com/files/images/fig-2png-4
[6]:https://www.linux.com/files/images/soderskar-islandjpg
[7]:https://www.linux.com/learn/intro-to-linux/2017/10/linux-networking-hardware-beginners-lan-hardware
[8]:http://www.bluelinepc.com/signalcheck/

View File

@ -0,0 +1,135 @@
如何使用 GPG 加解密文件
=================
目标:使用 GPG 加密文件
发行版:适用于任何发行版
要求:安装了 GPG 的 Linux 或者拥有 root 权限来安装它。
难度:简单
约定:
* `#` - 需要使用 root 权限来执行指定命令,可以直接使用 root 用户来执行,也可以使用 `sudo` 命令
* `$` - 可以使用普通用户来执行指定命令
### 介绍
加密非常重要。它对于保护敏感信息来说是必不可少的。你的私人文件应该要被加密,而 GPG 提供了很好的解决方案。
### 安装 GPG
GPG 的使用非常广泛。你在几乎每个发行版的仓库中都能找到它。如果你还没有安装它,那现在就来安装一下吧。
**Debian/Ubuntu**
```
$ sudo apt install gnupg
```
**Fedora**
```
# dnf install gnupg2
```
**Arch**
```
# pacman -S gnupg
```
**Gentoo**
```
# emerge --ask app-crypt/gnupg
```
### 创建密钥
你需要一个密钥对来加解密文件。如果你为 SSH 已经生成过了密钥对那么你可以直接使用它。如果没有GPG 包含工具来生成密钥对。
```
$ gpg --full-generate-key
```
GPG 有一个命令行程序可以帮你一步一步的生成密钥。它还有一个简单得多的工具,但是这个工具不能让你设置密钥类型,密钥的长度以及过期时间,因此不推荐使用这个工具。
GPG 首先会询问你密钥的类型。没什么特别的话选择默认值就好。
下一步需要设置密钥长度。`4096` 是一个不错的选择。
之后,可以设置过期的日期。 如果希望密钥永不过期则设置为 `0`
然后,输入你的名称。
最后,输入电子邮件地址。
如果你需要的话,还能添加一个注释。
所有这些都完成后GPG 会让你校验一下这些信息。
GPG 还会问你是否需要为密钥设置密码。这一步是可选的, 但是会增加保护的程度。若需要设置密码,则 GPG 会收集你的操作信息来增加密钥的健壮性。 所有这些都完成后, GPG 会显示密钥相关的信息。
### 加密的基本方法
现在你拥有了自己的密钥,加密文件非常简单。 使用下面的命令在 `/tmp` 目录中创建一个空白文本文件。
```
$ touch /tmp/test.txt
```
然后用 GPG 来加密它。这里 `-e` 标志告诉 GPG 你想要加密文件, `-r` 标志指定接收者。
```
$ gpg -e -r "Your Name" /tmp/test.txt
```
GPG 需要知道这个文件的接收者和发送者。由于这个文件是你的,因此无需指定发送者,而接收者就是你自己。
### 解密的基本方法
你收到加密文件后,就需要对它进行解密。 你无需指定解密用的密钥。 这个信息被编码在文件中。 GPG 会尝试用其中的密钥进行解密。
```
$ gpg -d /tmp/test.txt.gpg
```
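默认情况下,解密后的内容会直接输出到终端。如果想把结果写入文件,可以用 `-o`(即 `--output`)选项指定输出文件,比如下面这个小示例(文件名只是示意):

```
$ gpg -d -o /tmp/test-decrypted.txt /tmp/test.txt.gpg
```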
### 发送文件
假设你需要发送文件给别人。你需要有接收者的公钥。 具体怎么获得密钥由你自己决定。 你可以让他们直接把公钥发送给你, 也可以通过密钥服务器来获取。
收到对方公钥后,导入公钥到 GPG 中。
```
$ gpg --import yourfriends.key
```
这些公钥与你自己创建的密钥一样,自带了名称和电子邮件地址的信息。 记住,为了让别人能解密你的文件,别人也需要你的公钥。 因此导出公钥并将之发送出去。
```
$ gpg --export -a "Your Name" > your.key
```
现在可以开始加密要发送的文件了。它跟之前的步骤差不多, 只是需要指定你自己为发送人。
```
$ gpg -e -u "Your Name" -r "Their Name" /tmp/test.txt
```
### 结语
就这样了。GPG 还有一些高级选项, 不过你在 99% 的时间内都不会用到这些高级选项。 GPG 就是这么易于使用。你也可以使用创建的密钥对来发送和接受加密邮件,其步骤跟上面演示的差不多, 不过大多数的电子邮件客户端在拥有密钥的情况下会自动帮你做这个动作。
--------------------------------------------------------------------------------
via: https://linuxconfig.org/how-to-encrypt-and-decrypt-individual-files-with-gpg
作者:[Nick Congleton][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
[a]:https://linuxconfig.org

View File

@ -0,0 +1,43 @@
回复:块层介绍第一部分 - 块 I/O 层
============================================================
### 块层介绍第一部分:块 I/O 层
回复amarao 在[块层介绍第一部分:块 I/O 层][1] 中提的问题
先前的文章:[块层介绍第一部分:块 I/O 层][2]
![](https://static.lwn.net/images/2017/neil-blocklayer.png)
嗨,
你在这里描述的问题与块层不直接相关。这可能是一个驱动错误、可能是一个 SCSI 层错误,但绝对不是一个块层的问题。
不幸的是,报告针对 Linux 的错误是一件难事。有些开发者拒绝去看 bugzilla有些开发者喜欢它有些像我这样只能勉强地使用它。
另一种方法是发送电子邮件。为此,你需要选择正确的邮件列表,也许还要找到正确的开发人员,并在他们心情愉快、不太忙、也不在休假的时候联系上他们。有些人会努力回复所有邮件,有些人则完全不可预知;对我来说,这通常意味着在发送错误报告的同时附上一个补丁。如果你只是有一个你自己几乎都不了解的 bug那么你的预期响应率可能会更低。很遗憾但这是真的。
许多 bug 都会得到回应和处理,但很多 bug 都没有。
我不认为说没有人关心是公平的,但可能没有人认为它像你想的那样重要。如果你想要一个解决方案,那么你需要去推动它。一个推动的方法是花钱请顾问或者与经销商签订支持合同,我猜想这些在你的情况下都不可行。另一种方法是了解代码如何工作,并自己找到解决方案。很多人都这么做,但是这对你来说可能不是一种选择。另一种方法是在不同的相关论坛上不断提出问题,直到得到回复。坚持可以见效。你需要做好准备去执行任何别人要求你做的测试,可能包括构建一个新的内核来测试。
如果你能在最近的内核(4.12 或者更新)上复现这个 bug我建议你邮件报告给 linux-kernel@vger.kernel.org、linux-scsi@vger.kernel.org 和我neilb@suse.com注意你不必订阅这些列表来发送邮件只需要发送就行。描述你的硬件以及如何触发问题的。
包含所有进程状态是 “D” 的栈追踪。你可以用 “cat /proc/$PID/stack” 来得到它,这里的 “$PID” 是进程的 pid。
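如果卡住的进程不止一个,可以用类似下面的小脚本一次性收集(仅为示意,需要 root 权限):

```
# 找出所有处于不可中断睡眠D状态的进程并打印它们的内核栈
for pid in $(ps -eo pid=,stat= | awk '$2 ~ /^D/ {print $1}'); do
    echo "=== PID $pid ==="
    cat "/proc/$pid/stack"
done
```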
确保避免抱怨或者说这个已经坏了好几年了以及这是多么严重不足。没有人关心这个。我们关心的是 bug 以及如何修复它。因此只要报告相关的事实就行。
尝试在邮件中而不是链接到其他地方的链接中包含所有事实。有时链接是需要的,但是对于你的脚本,它只有 8 行,所以把它包含在邮件中就行(并避免像 “fuckup” 之类的描述。只需称它为“坏的”broken或者类似的。同样确保你的邮件发送的不是 HTML 格式。我们喜欢纯文本。HTML 被所有的 @vger.kernel.org 邮件列表拒绝。你或许需要配置你的邮箱程序不发送 HTML。
--------------------------------------------------------------------------------
via: https://lwn.net/Articles/737655/
作者:[neilbrown][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://lwn.net/Articles/737655/
[1]:https://lwn.net/Articles/737588/
[2]:https://lwn.net/Articles/736534/

View File

@ -0,0 +1,57 @@
操作系统何时运行?
============================================================
请各位思考以下问题:在你阅读本文的这段时间内,计算机中的操作系统在**运行**吗?又或者仅仅是 Web 浏览器在运行?又或者它们也许均处于空闲状态,等待着你的指示?
这些问题并不复杂,但它们深入涉及到系统软件工作的本质。为了准确回答这些问题,我们需要透彻理解操作系统的行为模型,包括性能、安全和除错等方面。在该系列文章中,我们将以 Linux 为主举例来帮助你建立操作系统的行为模型OS X 和 Windows 在必要的时候也会有所涉及。对那些深度探索者,我会在适当的时候给出 Linux 内核源码的链接。
这里有一个基本认知,就是,在任意给定时刻,某个 CPU 上仅有一个任务处于活动状态。大多数情形下这个任务是某个用户程序,例如你的 Web 浏览器或音乐播放器,但它也可能是一个操作系统线程。可以确信的是,它是**一个任务**,不是两个或更多,也不是零个,对,**永远**是一个。
这听上去可能会有些问题。比如,你的音乐播放器是否会独占 CPU 而阻止其它任务运行?从而使你不能打开任务管理工具去杀死音乐播放器,甚至让鼠标点击也失效,因为操作系统没有机会去处理这些事件。你可能会愤而喊出,“它究竟在搞什么鬼?”,并引发骚乱。
此时便轮到**中断**大显身手了。中断就好比一声巨响或一次拍肩,让神经系统通知大脑去感知外部刺激一般。计算机主板上的[芯片组][1]同样会中断 CPU 运行,以传递新的外部事件,例如键盘上的某个键被按下、网络数据包的到达、一次硬盘读取的完成,等等。硬件外设、主板上的中断控制器和 CPU 本身,它们共同协作实现了中断机制。
中断对于记录我们最珍视的资源——时间——也至关重要。计算机[启动过程][2]中,操作系统内核会设置一个硬件计时器以让其产生周期性**计时中断**,例如每隔 10 毫秒触发一次。每当计时中断到来,内核便会收到通知以更新系统统计信息和盘点如下事项:当前用户程序是否已运行了足够长时间?是否有某个 TCP 定时器超时了?中断给予了内核一个处理这些问题并采取合适措施的机会。这就好像你给自己设置了整天的周期闹铃并把它们用作检查点:我是否应该去做我正在进行的工作?是否存在更紧急的事项?直到你发现 10 年时间已逝去……
这些内核对 CPU 周期性的劫持被称为<ruby>滴答<rt>tick</rt></ruby>,也就是说,是中断让你的操作系统滴答了一下。不止如此,中断也被用作处理一些软件事件,如整数溢出和页错误,其中未涉及外部硬件。**中断是进入操作系统内核最频繁也是最重要的入口**。对于学习电子工程的人而言,这些并无古怪,它们是操作系统赖以运行的机制。
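在 Linux 上,你可以亲眼看到这些计时中断在不停地发生,比如下面的命令(具体的行和数值因硬件和内核而异):

```
# LOC 一行显示每个 CPU 上本地 APIC 计时器中断的累计次数
$ grep LOC /proc/interrupts

# 每秒刷新一次,可以看到计数不断增长,这就是“滴答”
$ watch -n1 'grep LOC /proc/interrupts'
```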
说到这里,让我们再来看一些实际情形。下图示意了 Intel Core i5 系统中的一个网卡中断。图片中的部分元素设置了超链,你可以点击它们以获取更为详细的信息,例如每个设备均被链接到了对应的 Linux 驱动源码。
![](http://duartes.org/gustavo/blog/img/os/hardware-interrupt.png)
链接如下:
- network card https://github.com/torvalds/linux/blob/v3.17/drivers/net/ethernet/intel/e1000e/netdev.c
- USB keyboard https://github.com/torvalds/linux/blob/v3.16/drivers/hid/usbhid/usbkbd.c
- I/O APIC https://github.com/torvalds/linux/blob/v3.16/arch/x86/kernel/apic/io_apic.c
- HPET https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/hpet.c
让我们来仔细研究下。首先,由于系统中存在众多中断源,如果硬件只是通知 CPU “嘿,这里发生了一些事情”然后什么也不做,则不太行得通。这会带来难以忍受的冗长等待。因此,计算机上电时,每个设备都被授予了一根**中断线**,或者称为 IRQ。这些 IRQ 然后被系统中的中断控制器映射成值介于 0 到 255 之间的**中断向量**。等到中断到达 CPU它便具有了一个定义良好的编号从而将操作系统与硬件的种种怪异行为隔离开来。
相应地CPU 中还存有一个由内核维护的指针,指向一个包含 255 个函数指针的数组,其中每个函数被用来处理某个特定的中断向量。后文中,我们将继续深入探讨这个数组,它也被称作**中断描述符表**IDT
每当中断到来CPU 会用中断向量的值去索引中断描述符表并执行相应处理函数。这相当于在当前正在执行任务的上下文中发生了一个特殊函数调用从而允许操作系统以较小开销快速对外部事件作出反应。考虑下述场景Web 服务器在发送数据时CPU 却间接调用了操作系统函数,这听上去要么很炫酷要么令人惊恐。下图展示了 Vim 编辑器运行过程中一个中断到来的情形。
![](http://duartes.org/gustavo/blog/img/os/vim-interrupted.png)
此处请留意,中断的到来是如何触发 CPU 到 [Ring 0][3] 内核模式的切换而未有改变当前活跃的任务。这看上去就像Vim 编辑器直接面向操作系统内核产生了一次神奇的函数调用,但 Vim 还在那里,它的[地址空间][4]原封未动,等待着执行流返回。
这很令人振奋,不是么?不过让我们暂且告一段落吧,我需要合理控制篇幅。我知道还没有回答完这个开放式问题,甚至还实质上翻开了新的问题,但你至少知道了在你读这个句子的同时**滴答**正在发生。我们将在充实了对操作系统动态行为模型的理解之后再回来寻求问题的答案,对 Web 浏览器情形的理解也会变得清晰。如果你仍有问题,尤其是在这篇文章公诸于众后,请尽管提出。我将会在文章或后续评论中回答它们。下篇文章将于明天在 RSS 和 Twitter 上发布。
--------------------------------------------------------------------------------
via: http://duartes.org/gustavo/blog/post/when-does-your-os-run/
作者:[gustavo][a]
译者:[Cwndmiao](https://github.com/Cwndmiao)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://duartes.org/gustavo/blog/about/
[1]:http://duartes.org/gustavo/blog/post/motherboard-chipsets-memory-map
[2]:http://duartes.org/gustavo/blog/post/kernel-boot-process
[3]:http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
[4]:http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
[5]:http://feeds.feedburner.com/GustavoDuarte
[6]:http://twitter.com/food4hackers

View File

@ -1,27 +1,27 @@
介绍 MOBY 项目:推进软件容器化运动的一个新的开源项目
介绍 Moby 项目:推进软件容器化运动的一个新的开源项目
============================================================
![Moby Project](https://i0.wp.com/blog.docker.com/wp-content/uploads/1-2.png?resize=763%2C275&ssl=1)
自从 Docker 四年前将软件容器推向民主化以来,整个生态系统都围绕着容器化而发展,在这段压缩的时期,它经历了两个不同的增长阶段。在这每一个阶段,生产容器系统的模式已经演变适应用户群体以及项目的规模和需求和不断增长的贡献者生态系统
自从 Docker 四年前将软件容器推向大众化以来,整个生态系统都围绕着容器化而发展,在这段这么短的时期内,它经历了两个不同的增长阶段。在这每一个阶段,生产容器系统的模式已经随着项目和不断增长的容器生态系统而演变适应用户群体的规模和需求。
Moby 是一个新的开源项目,旨在推进软件容器化运动,帮助生态系统将容器作为主流。它提供了一个组件库,一个将它们组装到定制的基于容器的系统的框架,以及所有容器爱好者进行实验和交换想法的地方。
Moby 是一个新的开源项目,旨在推进软件容器化运动,帮助生态系统将容器作为主流。它提供了一个组件库,一个将它们组装到定制的基于容器的系统的框架,也是所有容器爱好者进行实验和交换想法的地方。
让我们来回顾一下我们如何走到今天。在 2013-2014 年开拓者开始使用容器并在一个单一的开源代码库Docker 和其他一些项目中进行协作,以帮助工具成熟。
![Docker Open Source](https://i0.wp.com/blog.docker.com/wp-content/uploads/2-2.png?resize=975%2C548&ssl=1)
然后在 2015-2016 年云原生应用中大量采用容器用于生产环境。在这个阶段用户社区已经发展到支持成千上万个部署由数百个生态系统项目和成千上万的贡献者支持。正是在这个阶段Docker 将其产模式演变为基于开放式组件的方法。这样,它使我们能够增加创新和合作的方面。
然后在 2015-2016 年云原生应用中大量采用容器用于生产环境。在这个阶段用户社区已经发展到支持成千上万个部署由数百个生态系统项目和成千上万的贡献者支持。正是在这个阶段Docker 将其生产模式演变为基于开放式组件的方法。这样,它使我们能够增加创新和合作的方面。
涌现出来的新独立的 Docker 组件项目帮助刺激了合作伙伴生态系统和用户社区的发展。在此期间,我们从 Docker 代码库中提取并快速创新组件,以便系统制造商可以在构建自己的容器系统时独立重用它们:[runc][7]、[HyperKit][8]、[VPNKit][9]、[SwarmKit][10]、[InfraKit][11]、[containerd][12] 等。
涌现出来的新独立的 Docker 组件项目帮助促进了合作伙伴生态系统和用户社区的发展。在此期间,我们从 Docker 代码库中提取并快速创新组件,以便系统制造商可以在构建自己的容器系统时独立重用它们:[runc][7]、[HyperKit][8]、[VPNKit][9]、[SwarmKit][10]、[InfraKit][11]、[containerd][12] 等。
![Docker Open Components](https://i1.wp.com/blog.docker.com/wp-content/uploads/3-2.png?resize=975%2C548&ssl=1)
站在容器浪潮的最前沿,我们看到 2017 年出现的一个趋势是容器将成为主流,传播到计算、服务器、数据中心、云、桌面、物联网和移动的各个领域。每个行业和垂直市场金融、医疗、政府、旅游、制造。以及每一个使用案例,现代网络应用、传统服务器应用、机器学习、工业控制系统、机器人技术。容器生态系统中许多新进入者的共同点是,它们建立专门的系统,针对特定的基础设施、行业或使用案例。
站在容器浪潮的最前沿,我们看到 2017 年出现的一个趋势是容器将成为主流,传播到计算、服务器、数据中心、云、桌面、物联网和移动的各个领域。每个行业和垂直市场金融、医疗、政府、旅游、制造。以及每一个使用案例,现代网络应用、传统服务器应用、机器学习、工业控制系统、机器人技术。容器生态系统中许多新进入者的共同点是,它们建立专门的系统,针对特定的基础设施、行业或使用案例。
作为一家公司Docker 使用开源作为我们的创新实验室而与整个生态系统合作。Docker 的成功取决于容器生态系统的成功:如果生态系统成功,我们就成功了。因此,我们一直在计划下一阶段的容器生态系统增长:什么样的产模式将帮助我们扩大容器生态系统,实现容器成为主流的承诺?
作为一家公司Docker 使用开源作为我们的创新实验室而与整个生态系统合作。Docker 的成功取决于容器生态系统的成功:如果生态系统成功,我们就成功了。因此,我们一直在计划下一阶段的容器生态系统增长:什么样的生产模式将帮助我们扩大容器生态系统,实现容器成为主流的承诺?
去年,我们的客户开始在 Linux 以外的许多平台上要求有 DockerMac 和 Windows 桌面、Windows Server、云平台如亚马逊网络服务AWS、Microsoft Azure 或 Google 云平台),并且我们专门为这些平台创建了[许多 Docker 版本][13]。为了在一个相对较短的时间与更小的团队,以可扩展的方式构建和发布这些专业版本,而不必重新发明轮子,很明显,我们需要一个新的方。我们需要我们的团队不仅在组件上进行协作,而且还在组件组合上进行协作,这借用[来自汽车行业的想法][14],其中组件被重用于构建完全不同的汽车。
去年,我们的客户开始在 Linux 以外的许多平台上要求有 DockerMac 和 Windows 桌面、Windows Server、云平台如亚马逊网络服务AWS、Microsoft Azure 或 Google 云平台),并且我们专门为这些平台创建了[许多 Docker 版本][13]。为了在一个相对较短的时间和更小的团队中,以可扩展的方式构建和发布这些专业版本,而不必重新发明轮子,很明显,我们需要一个新的方法。我们需要我们的团队不仅在组件上进行协作,而且还在组件组合上进行协作,这借用[来自汽车行业的想法][14],其中组件被重用于构建完全不同的汽车。
![Docker production model](https://i1.wp.com/blog.docker.com/wp-content/uploads/4-2.png?resize=975%2C548&ssl=1)
@ -29,15 +29,13 @@ Moby 是一个新的开源项目,旨在推进软件容器化运动,帮助生
![Moby Project](https://i0.wp.com/blog.docker.com/wp-content/uploads/5-2.png?resize=975%2C548&ssl=1)
为了实现这种新的合作高度,今天我们宣布推出软件容器化运动的新开源项目 Moby。它是提供了数十个组件的“乐高”,一个将它们组合成定制容器系统的框架,以及所有容器爱好者进行试验和交换意见的场所。可以把 Moby 认为是容器系统的“乐高俱乐部”。
为了实现这种新的合作高度,今天2017 年 4 月 18 日)我们宣布推出软件容器化运动的新开源项目 Moby。它是提供了数十个组件的“乐高组件”,一个将它们组合成定制容器系统的框架,以及所有容器爱好者进行试验和交换意见的场所。可以把 Moby 认为是容器系统的“乐高俱乐部”。
Moby包括
1. 容器化后端组件**库**例如底层构建器、日志记录设备、卷管理、网络、镜像管理、containerd、SwarmKit 等)
Moby 包括:
1. 容器化后端组件**库**例如低层构建器、日志记录设备、卷管理、网络、镜像管理、containerd、SwarmKit 等)
2. 将组件组合到独立容器平台中的**框架**,以及为这些组件构建、测试和部署构件的工具。
3. 一个名为 **Moby Origin** 的引用组件,它是 Docker 容器平台的开放基础,以及使用 Moby 库或其他项目的各种组件的容器系统示例。
3. 一个名为 “Moby Origin” 的引用组件,它是 Docker 容器平台的开放基础,以及使用 Moby 库或其他项目的各种组件的容器系统示例。
Moby 专为系统构建者而设计,他们想要构建自己的基于容器的系统,而不是可以使用 Docker 或其他容器平台的应用程序开发人员。Moby 的参与者可以从源自 Docker 的组件库中进行选择或者可以选择将“自己的组件”BYOC打包为容器以便在所有组件之间进行混合和匹配以创建定制的容器系统。
@ -49,9 +47,9 @@ Docker 将 Moby 作为一个开放的研发实验室来试验、开发新的组
via: https://blog.docker.com/2017/04/introducing-the-moby-project/
作者:[Solomon Hykes ][a]
作者:[Solomon Hykes][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,96 @@
Linux 上的科学图像处理
============================================================
我此前介绍过几款能够以图形方式出色地展示数据和成果的科学软件,但还没有怎么涉及相反方向的工作。因此在这篇文章中,我将谈到一款叫 ImageJ 的热门图像处理软件。特别的,我会介绍 [Fiji][4],这是一款绑定了一系列用于科学图像处理插件的 ImageJ 软件。
Fiji 这个名字是一个递归缩写,很像 GNU代表着 “Fiji Is Just ImageJ”。ImageJ 是科学研究领域进行图像分析的实用工具,例如你可以用它来辨认航拍风景图中树的种类。它还能对图像中的物体进行划分。它以插件架构构建,海量插件可供选择以提升使用灵活度。
首先是安装 ImageJ或 Fiji。大多数发行版的仓库中都有 ImageJ 软件包。你愿意的话,可以以这种方式安装它,然后根据你的研究安装所需的独立插件。另一种选择是安装 Fiji同时获取最常用的插件。不幸的是大多数 Linux 发行版的软件仓库中没有现成的 Fiji 安装包。幸而,官网上提供了简单的安装文件。这是一个 zip 文件,包含了运行 Fiji 需要的所有文件目录。第一次启动时,你只会看到一个列出了菜单项的工具栏(图 1。
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif1.png)
*图 1. 第一次打开 Fiji 有一个最小化的界面。*
如果你没有备好图片来练习使用 ImageJ Fiji 安装包包含了一些示例图片。点击“File”->“Open Samples”的下拉菜单选项图 2。这些示例包含了许多你可能有兴趣做的任务。
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif2.jpg)
*图 2. 案例图片可供学习使用 ImageJ。*
如果你安装了 Fiji而不是单纯的 ImageJ ,那么大量插件也会被安装。首先要注意的是自动更新器插件。每次打开 ImageJ ,该插件将联网检验 ImageJ 和已安装插件的更新。
所有已安装的插件都在“插件”菜单项中可选。一旦你安装了很多插件列表会变得冗杂所以需要精简你选择的插件。你想手动更新的话点击“Help”->“Update Fiji” 菜单项强制检测并获取可用更新的列表(图 3
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif3.png)
*图 3. 强制手动检测可用更新。*
那么,现在,用 Fiji/ImageJ 可以做什么呢举一例统计图片中的物品数。你可以通过点击“File”->“Open Samples”->“Embryos”来载入示例。
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif4.jpg)
*图 4. 用 ImageJ 算出图中的物品数。*
第一步是给图片设定比例,这样你可以告诉 ImageJ 如何判别物体的大小。首先在工具栏选择线条按钮,沿图中的比例尺画一条线,然后选择“Analyze”->“Set Scale”它就会记下这条线所占的像素点个数图 5。把“known distance”设置为 100单位设置为“um”。
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif5.png)
*图 5. 很多图片分析任务需要对图片设定一个范围。*
接下来的步骤是简化图片内的信息。点击“Image”->“Type”->“8-bit”来减少信息量到 8 比特灰度图片。要分隔独立物体点击“Process”->“Binary”->“Make Binary”以自动设置图片门限。(图 6)。
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif6.png)
*图 6. 有些工具可以自动完成像门限一样的任务。*
在对图片内的物体计数之前,你需要移除比例尺之类的人为标记。可以用矩形选择工具选中它,并点击“Edit”->“Clear”来完成这项操作。现在你可以分析图片看看这里有哪些物体。
确保图中没有区域被选中点击“Analyze”->“Analyze Particles”来弹出窗口来选择最小尺寸这决定了最后的图片会展示什么图 7
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif7.png)
*图 7. 你可以通过确定最小尺寸生成一个缩减过的图片。*
图 8 在汇总窗口展示了一个概览。每个识别出的颗粒也有独立的细节窗口。
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif8.png)
*图 8. 包含了已识别颗粒总览清单的输出结果。*
当你有一个可用于给定图片类型的分析流程时,你通常需要将相同的步骤应用到一系列图片当中。这可能数以千计,你当然不会想对每张图片手动重复操作。这时候,你可以把必要步骤集中录制到宏里,这样它们可以被应用多次。点击“Plugins”->“Macros”->“Record”会弹出一个新的窗口记录你随后的所有命令。所有步骤完成后你可以将之保存为一个宏文件并且通过点击“Plugins”->“Macros”->“Run”来在其它图片上重复运行。
如果你有非常特定的工作步骤,你可以简单地打开宏文件并手动编辑它,因为它是一个简单的文本文件。事实上有一套完整的宏语言可供你更加充分地控制图片处理过程。
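顺带一提,录制好的宏也可以脱离图形界面在命令行中运行,大致像下面这样(启动器路径和宏文件名都是假设的,请以你的实际安装为准):

```
# 在无界面headless模式下运行一个录制好的宏
$ ./ImageJ-linux64 --headless -macro /path/to/my-macro.ijm
```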
然而如果你有真的有非常多的系列图片需要处理这也将是冗长乏味的工作。这种情况下前往“Process”->“Batch”->“Macro”会弹出一个你可以设置批量处理工作的新窗口图 9
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12172fijif9.png)
*图 9. 对批量输入的图片用单一命令运行宏。*
这个窗口中你能选择应用哪个宏文件、输入图片所在的源目录和你想写入输出图片的输出目录。也可以设置输出文件格式及通过文件名筛选输入图片中需要使用的。万事具备之后点击窗口下方的的“Process”按钮开始批量操作。
若这是会重复多次的工作你可以点击窗口底部的“Save”按钮保存批量处理到一个文本文件。点击也在窗口底部的“Open”按钮重新加载相同的工作。这个功能可以使得研究中最冗余部分自动化这样你就可以在重点放在实际的科学研究中。
考虑到单单是 ImageJ 主页就有超过 500 个插件和超过 300 种宏可供使用,简短起见,我只能在这篇短文中提出最基本的话题。幸运的是,还有很多专业领域的教程可供使用,项目主页上还有关于 ImageJ 核心的非常棒的文档。如果你觉得这个工具对研究有用,你研究的专业领域也会有很多信息指引你。
--------------------------------------------------------------------------------
作者简介:
Joey Bernard 有物理学和计算机科学的相关背景。这对他在新不伦瑞克大学当计算研究顾问的日常工作大有裨益。他也教计算物理和并行程序规划。
--------------------------------
via: https://www.linuxjournal.com/content/image-processing-linux
作者:[Joey Bernard][a]
译者:[XYenChi](https://github.com/XYenChi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxjournal.com/users/joey-bernard
[1]:https://www.linuxjournal.com/tag/science
[2]:https://www.linuxjournal.com/tag/statistics
[3]:https://www.linuxjournal.com/users/joey-bernard
[4]:https://imagej.net/Fiji

View File

@ -1,34 +1,35 @@
# [用 coredumpctl 更好地记录 bug][1]
用 coredumpctl 更好地记录 bug
===========
![](https://fedoramagazine.org/wp-content/uploads/2017/11/coredump.png-945x400.jpg)
一个不幸的事实是,所有的软件都有 bug一些 bug 会导致系统崩溃。当它出现的时候,它经常会在磁盘上留下一个名为 _core dump_ 的数据文件。该文件包含有关系统崩溃时的相关数据,可能有助于确定发生崩溃的原因。通常开发者要求有显示导致崩溃的指令流的 _backtrace_ 形式的数据。开发人员可以使用它来修复 bug 并改进系统。如果系统崩溃,以下是如何轻松生成 backtrace 的方法。
一个不幸的事实是,所有的软件都有 bug一些 bug 会导致系统崩溃。当它出现的时候,它经常会在磁盘上留下一个被称为“<ruby>核心转储<rt>core dump</rt></ruby>的数据文件。该文件包含有关系统崩溃时的相关数据,可能有助于确定发生崩溃的原因。通常开发者要求提供 “<ruby>回溯<rt>backtrace</rt></ruby>” 形式的数据,以显示导致崩溃的指令流。开发人员可以使用它来修复 bug 以改进系统。如果系统发生了崩溃,以下是如何轻松生成 <ruby>回溯<rt>backtrace</rt></ruby> 的方法。
### 开始使用 coredumpctl
### 从使用 coredumpctl 开始
大多数 Fedora 系统使用[自动错误报告工具 ABRT][2]来自动捕获崩溃文件并记录 bug。但是如果你禁用了此服务或删除了该软件包则此方法可能会有所帮助。
大多数 Fedora 系统使用[自动错误报告工具ABRT][2]来自动捕获崩溃文件并记录 bug。但是如果你禁用了此服务或删除了该软件包则此方法可能会有所帮助。
如果你遇到系统崩溃,请首先确保你运行的是最新的软件。更新通常包含修复程序,这些更新通常含有已经发现的会导致严重错误和崩溃的错误的修复。当你更新后,请尝试重新创建导致错误的情况。
如果你遇到系统崩溃,请首先确保你运行的是最新的软件。更新通常含有对已发现的、会导致严重错误和崩溃的问题的修复。当你更新后,请尝试重现导致错误的情况。
如果崩溃仍然发生,或者你已经在运行最新的软件,那么可以使用有用的 _coredumpctl_。此程序可帮助查找和处理崩溃。要查看系统上所有核心转储列表,请运行以下命令:
如果崩溃仍然发生,或者你已经在运行最新的软件,那么可以使用有用的 `coredumpctl` 工具。此程序可帮助查找和处理崩溃。要查看系统上所有核心转储列表,请运行以下命令:
```
coredumpctl list
```
如果你看到比预期长的列表,请不要感到惊讶。有时系统组件在后台默默地崩溃,并自行恢复。现在快速查找转储的简单方法是使用 _-since_ 选项:
如果你看到比预期长的列表,请不要感到惊讶。有时系统组件在后台默默地崩溃,并自行恢复。快速查找今天的转储的简单方法是使用 `--since` 选项:
```
coredumpctl list --since=today
```
_PID_ 列包含用于标识转储的进程 ID。请注意这个数字因为你会之后再用到它。或者如果你不想记住它使用下面的命令将它赋值给一个变量
“PID” 列包含用于标识转储的进程 ID。请注意这个数字因为你会之后再用到它。或者如果你不想记住它使用下面的命令将它赋值给一个变量
```
MYPID=<PID>
```
要查看关于核心转储的信息,请使用此命令(使用 _$MYPID_ 变量或替换 PID 编号):
要查看关于核心转储的信息,请使用此命令(使用 `$MYPID` 变量或替换 PID 编号):
```
coredumpctl info $MYPID
@ -36,42 +37,42 @@ coredumpctl info $MYPID
### 安装 debuginfo 包
在核心转储中的数据以及原始代码中的指令之间调试符号转义。这个符号数据可能相当大。因此,符号以 _debuginfo_ 软件包的形式与大多数用户使用的 Fedora 系统分开安装。要确定你必须安装哪些 debuginfo 包,请先运行以下命令:
调试符号负责在核心转储中的数据和源代码中的指令之间进行转换。这些符号数据可能相当大。与大多数用户运行在 Fedora 系统上的软件包不同,这些符号以 “debuginfo” 软件包的形式单独安装。要确定你必须安装哪些 debuginfo 包,请先运行以下命令:
```
coredumpctl gdb $MYPID
```
这可能会在屏幕上显示大量信息。最后一行可能会告诉你使用 _dnf_ 安装更多的 debuginfo 软件包。[用 sudo ][3]运行该命令:
这可能会在屏幕上显示大量信息。最后一行可能会告诉你使用 `dnf` 安装更多的 debuginfo 软件包。[用 sudo ][3]运行该命令以安装
```
sudo dnf debuginfo-install <packages...>
```
然后再次尝试 _coredumpctl gdb $MYPID_ 命令。**你可能需要重复执行此操作**,因为其他符号会在 trace 中展开。
然后再次尝试 `coredumpctl gdb $MYPID` 命令。**你可能需要重复执行此操作**,因为其他符号会在回溯中展开。
### 捕获 backtrace
### 捕获回溯
运行以下命令以在调试器中记录信息:
在调试器中运行以下命令以记录信息:
```
set logging file mybacktrace.txt
set logging on
```
你可能会发现关闭分页有帮助。对于长的 backtrace,这可以节省时间。
你可能会发现关闭分页有帮助。对于长的回溯,这可以节省时间。
```
set pagination off
```
现在运行 backtrace
现在运行回溯
```
thread apply all bt full
```
现在你可以输入 _quit_ 来退出调试器。_mybacktrace.txt_ 包含可附加到 bug 或问题的追踪信息。或者,如果你正在与某人实时合作,则可以将文本上传到 pastebin。无论哪种方式你现在可以向开发人员提供更多的帮助来解决问题。
现在你可以输入 `quit` 来退出调试器。`mybacktrace.txt` 包含可附加到 bug 或问题的追踪信息。或者,如果你正在与某人实时合作,则可以将文本上传到 pastebin。无论哪种方式你现在可以向开发人员提供更多的帮助来解决问题。
---------------------------------
@ -85,9 +86,9 @@ Paul W. Frields 自 1997 年以来一直是 Linux 用户和爱好者,并于 20
via: https://fedoramagazine.org/file-better-bugs-coredumpctl/
作者:[Paul W. Frields ][a]
作者:[Paul W. Frields][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,92 @@
如何轻松记住 Linux 命令
=================
![](https://www.maketecheasier.com/assets/uploads/2017/10/rc-feat.jpg)
Linux 新手往往对命令行心存畏惧。部分原因是因为需要记忆大量的命令,毕竟掌握命令是高效使用命令行的前提。
不幸的是,学习这些命令并无捷径,然而在你开始学习命令之初,有些工具还是可以帮到你的。
### history
![Linux Bash History 命令](https://www.maketecheasier.com/assets/uploads/2017/10/rc-bash-history.jpg)
首先要介绍的是命令行工具 `history`,它能帮你记住那些你曾经用过的命令。包括应用最广泛的 Bash 在内的大多数 [Linux shell][1],都会创建一个历史文件来包含那些你输入过的命令。如果你用的是 Bash这个历史文件就是 `/home/<username>/.bash_history`
这个历史文件是纯文本格式的,你可以用任意的文本编辑器打开来浏览和搜索。
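举个例子,想找回之前用过的某条命令时,可以把 `history` 和 `grep` 结合起来用:

```shell
# 在命令历史中查找用过的 ssh 命令
$ history | grep ssh

# 也可以直接搜索历史文件本身
$ grep ssh ~/.bash_history
```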
### apropos
确实存在一个可以帮你找到其他命令的命令。这个命令就是 `apropos`,它能帮你找出合适的命令来完成你的搜索。比如,假设你需要知道哪个命令可以列出目录的内容,你可以运行下面命令:
```shell
apropos "list directory"
```
![Linux Apropos](https://www.maketecheasier.com/assets/uploads/2017/10/rc-apropos.jpg)
这就搜索出结果了,非常直接。给 “directory” 加上复数后再试一下。
```shell
apropos "list directories"
```
这次没用了。`apropos` 所做的其实就是搜索一系列命令的描述。描述不匹配的命令不会纳入结果中。
还有其他的用法。通过 `-a` 标志,你可以以更灵活的方式来增加搜索关键字。试试这条命令:
```shell
apropos "match pattern"
```
![Linux Apropos -a Flag](https://www.maketecheasier.com/assets/uploads/2017/10/rc-apropos-a.jpg)
你会觉得应该会有一些匹配的内容出现,比如 [grep][2],对吗?然而实际上并没有匹配出任何结果。再说一次,`apropos` 只会根据字面内容进行搜索。
现在让我们试着用 `-a` 标志来把单词分割开来。LCTT 译注该选项的意思是“and”即多个关键字都存在但是不需要正好是连在一起的字符串。
```shell
apropos "match" -a "pattern"
```
这一下,你可以看到很多期望的结果了。
`apropos` 是一个很棒的工具,不过你需要留意它的缺陷。
### ZSH
![Linux ZSH Autocomplete](https://www.maketecheasier.com/assets/uploads/2017/10/rc-zsh.jpg)
ZSH 其实并不是用于记忆命令的工具,它是一种 shell。你可以用 [ZSH][3] 来替代 Bash 作为你的命令行 shell。ZSH 包含了自动纠错机制,能在你输入命令的时候给你提示。开启该功能后,它会提示你相近的选择。在 ZSH 中你可以像往常一样使用命令行,同时还能享受到这种保护机制以及其他一些非常好用的特性。充分利用 ZSH 的最简单方法就是使用 [Oh-My-ZSH][4]。
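作为参考,切换到 ZSH 通常只需要安装它并把它设为默认 shell下面以 Ubuntu 为例,具体包名因发行版而异):

```shell
# 安装 ZSH
$ sudo apt install zsh

# 把当前用户的默认 shell 改为 zsh重新登录后生效
$ chsh -s $(which zsh)
```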
### 速记表
最后,也可能是最简单的方法就是使用[速记表][5]。
有很多在线的速记表,比如[这个][6] 可以帮助你快速查询命令。
![linux-commandline-cheatsheet](https://www.maketecheasier.com/assets/uploads/2013/10/linux-commandline-cheatsheet.gif)
为了快速查询,你可以寻找图片格式的速记表,然后将它设置为你的桌面墙纸。
这并不是记忆命令的最好方法,但是这么做可以帮你节省在线搜索遗忘命令的时间。
在学习时依赖这些方法,最终你会发现你会越来越少地使用这些工具。没有人能够记住所有的事情,因此偶尔遗忘掉某些东西或者遇到某些没有见过的东西也很正常。这也是这些工具以及因特网存在的意义。
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/remember-linux-commands/
作者:[Nick Congleton][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.maketecheasier.com/author/nickcongleton/
[1]: https://www.maketecheasier.com/alternative-linux-shells/
[2]: https://www.maketecheasier.com/what-is-grep-and-uses/
[3]: https://www.maketecheasier.com/understanding-the-different-shell-in-linux-zsh-shell/
[4]: https://github.com/robbyrussell/oh-my-zsh
[5]: https://www.maketecheasier.com/premium/cheatsheet/linux-command-line/
[6]: https://www.cheatography.com/davechild/cheat-sheets/linux-command-line/

View File

@ -0,0 +1,157 @@
tmate秒级分享你的终端会话
=================
不久前,我们写过一篇关于 [teleconsole](https://www.2daygeek.com/teleconsole-share-terminal-session-instantly-to-anyone-in-seconds/) 的介绍,该工具可用于快速分享终端给任何人(任何你信任的人)。今天我们要聊一聊另一款类似的应用,名叫 `tmate`
`tmate` 有什么用?它可以让你在需要帮助时向你的朋友们求助。
### 什么是 tmate
[tmate](https://tmate.io/) 的意思是 `teammates`,它是 tmux 的一个分支,并且使用相同的配置信息(例如快捷键配置,配色方案等)。它是一个终端多路复用器,同时具有即时分享终端的能力。它允许在单个屏幕中创建并操控多个终端,同时这些终端还能与其他同事分享。
你可以分离会话,让作业在后台运行,然后在想要查看状态时重新连接会话。`tmate` 提供了一个即时配对的方案,让你可以与一个或多个队友共享一个终端。
在屏幕的底部有一个状态栏,显示了当前会话的一些诸如 ssh 命令之类的共享信息。
### tmate 是怎么工作的?
- 运行 `tmate` 时,会通过 `libssh` 在后台创建一个连接到 tmate.io (由 tmate 开发者维护的后台服务器)的 ssh 连接。
- tmate.io 服务器的 ssh 密钥通过 DH 交换进行校验。
- 客户端通过本地 ssh 密钥进行认证。
- 连接创建后,本地 tmux 服务器会生成一个 150 位(不可猜测的随机字符)会话令牌。
- 队友能通过用户提供的 SSH 会话 ID 连接到 tmate.io。
### 使用 tmate 的必备条件
由于 `tmate.io` 服务器需要通过本地 ssh 密钥来认证客户机,因此其中一个必备条件就是生成 SSH 密钥。
记住,每个系统都要有自己的 SSH 密钥。
```shell
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/magi/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/magi/.ssh/id_rsa.
Your public key has been saved in /home/magi/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:3ima5FuwKbWyyyNrlR/DeBucoyRfdOtlUmb5D214NC8 magi@magi-VirtualBox
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| . |
| . . = o |
| *ooS= . + o |
| . =.@*o.o.+ E .|
| =o==B++o = . |
| o.+*o+.. . |
| ..o+o=. |
+----[SHA256]-----+
```
### 如何安装 tmate
`tmate` 已经包含在某些发行版的官方仓库中,可以通过包管理器来安装。
对于 Debian/Ubuntu可以使用 [APT-GET 命令](https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/) 或者 [APT 命令](https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/) 来安装。
```shell
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:tmate.io/archive
$ sudo apt-get update
$ sudo apt-get install tmate
```
你也可以从官方仓库中安装 tmate。
```shell
$ sudo apt-get install tmate
```
对于 Fedora使用 [DNF 命令](https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/) 来安装。
```shell
$ sudo dnf install tmate
```
对于基于 Arch Linux 的系统,使用 [Yaourt 命令](https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/) 或 [Packer 命令](https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/) 来从 AUR 仓库中安装。
```shell
$ yaourt -S tmate
```
```shell
$ packer -S tmate
```
对于 openSUSE使用 [Zypper 命令](https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/) 来安装。
```shell
$ sudo zypper in tmate
```
### 如何使用 tmate
成功安装后,打开终端然后输入下面命令,就会打开一个新的会话,在屏幕底部,你能看到 SSH 会话的 ID。
```shell
$ tmate
```
![](https://www.2daygeek.com/wp-content/uploads/2017/11/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds-1.png)
要注意的是SSH 会话 ID 会在几秒后消失,不过不要紧,你可以通过下面命令获取到这些详细信息。
```shell
$ tmate show-messages
```
`tmate``show-messages` 命令会显示 tmate 的日志信息,其中包含了该 ssh 连接内容。
![](https://www.2daygeek.com/wp-content/uploads/2017/11/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds-2.png)
现在,分享你的 SSH 会话 ID 给你的朋友或同事从而允许他们观看终端会话。除了 SSH 会话 ID 以外,你也可以分享 web URL。
另外你还可以选择分享的是只读会话还是可读写会话。
### 如何通过 SSH 连接会话
只需要在终端上运行你从朋友那得到的 SSH 终端 ID 就行了。类似下面这样。
```shell
$ ssh 3KuRj95sEZRHkpPtc2y6jcokP@sg2.tmate.io
```
![](https://www.2daygeek.com/wp-content/uploads/2017/11/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds-4.png)
### 如何通过 Web URL 连接会话
打开浏览器然后访问朋友给你的 URL 就行了。像下面这样。
![](https://www.2daygeek.com/wp-content/uploads/2017/11/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds-3.png)
只需要输入 `exit` 就能退出会话了。
```
[Source System Output]
[exited]
[Remote System Output]
[server exited]
Connection to sg2.tmate.io closed by remote host.
Connection to sg2.tmate.io closed.
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds/
作者:[Magesh Maruthamuthu][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,81 @@
容器技术和 K8S 的下一站
============================================================
> 想知道容器编排管理和 K8S 的最新展望么?来看看专家怎么说。
![CIO_Big Data Decisions_2](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_Big%20Data%20Decisions_2.png?itok=Y5zMHxf8 "CIO_Big Data Decisions_2")
如果你想对容器在未来的发展方向有一个整体把握,那么你一定要跟着钱走,看看钱都投在了哪里。当然了,有很多很多的钱正在投入容器的进一步发展。相关研究预计到 2020 年,容器技术的市场规模将达到 [27 亿美元][4],而 2016 年容器相关技术的投入总额为 7.62 亿美元,不到前者的三分之一。巨额投入的背后是一些显而易见的基本因素,包括容器化的迅速增长以及并行化的大趋势。随着容器被大面积推广和使用,容器编排管理也会被理所当然地推广应用起来。
来自 [The new stack][5] 的调研数据表明,容器的推广使用是编排管理被推广的主要的催化剂。根据调研参与者的反馈数据,在已经将容器技术使用到生产环境中的使用者里,有六成使用者正在将 KubernetesK8S编排管理广泛的应用在生产环境中另外百分之十九的人员则表示他们已经处于部署 K8S 的初级阶段。在容器部署初期的使用者当中,虽然只有百分之五的人员表示已经在使用 K8S ,但是百分之五十八的人员表示他们正在计划和准备使用 K8S。总而言之容器和 Kubernetes 的关系就好比是鸡和蛋一样,相辅相成紧密关联。众多专家一致认为编排管理工具对容器的[长周期管理][6] 以及其在市场中的发展有至关重要的作用。正如 [Cockroach 实验室][7] 的 Alex Robinson 所说,容器编排管理被更广泛的拓展和应用是一个总体的大趋势。毫无疑问,这是一个正在快速演变的领域,且未来潜力无穷。鉴于此,我们对 Robinson 和其他的一些容器的实际使用和推介者做了采访,来从他们作为容器技术的践行者的视角上展望一下容器编排以及 K8S 的下一步发展。
### 容器编排将被主流接受
像任何重要技术的转型一样,我们就像是处在一个高崖之上一般,在经过了初期步履蹒跚的跋涉之后将要来到一望无际的广袤平原。广大的新天地和平实真切的应用需求将会让这种新技术在主流应用中被迅速推广,尤其是在大企业环境中。正如 Alex Robinson 说的那样,容器技术的淘金阶段已经过去,早期的技术革新创新正在减速,随之而来的则是市场对容器技术的稳定性和可用性的强烈需求。这意味着未来我们将不会再见到大量的新的编排管理系统的涌现,而是会看到容器技术方面更多的安全解决方案,更丰富的管理工具,以及基于目前主流容器编排系统的更多的新特性。
### 更好的易用性
人们将在简化容器的部署方面下大功夫,因为容器部署的初期工作对很多公司和组织来说还是比较复杂的,尤其是容器的[长期管理维护][8]更是需要投入大量的精力。正如 [Codemill AB][9] 公司的 My Karlsson 所说,容器编排技术还是太复杂了,这导致很多使用者难以娴熟驾驭和充分利用容器编排的功能。很多容器技术的新用户都需要花费很多精力,走很多弯路,才能搭建小规模的或单个的以隔离方式运行的容器系统。这种现象在那些没有针对容器技术设计和优化的应用中更为明显。在简化容器编排管理方面有很多优化可以做,这些优化和改造将会使容器技术更加具有可用性。
### 在混合云以及多云技术方面会有更多侧重
随着容器和容器编排技术被越来越多的使用,更多的组织机构会选择扩展他们现有的容器技术的部署,从之前的把非重要系统部署在单一环境的使用情景逐渐过渡到更加[复杂的使用情景][10]。对很多公司来说,这意味着他们必须开始学会在 [混合云][11] 和 [多云][12] 的环境下,全局化的去管理那些容器化的应用和微服务。正如红帽 [Openshift 部门产品战略总监][14] [Brian Gracely][13] 所说,“容器和 K8S 技术的使用使得我们成功的实现了混合云以及应用的可移植性。结合 Open Service Broker API 的使用,越来越多的结合私有云和公有云资源的新应用将会涌现出来。”
据 [CloudBees][15] 公司的高级工程师 Carlos Sanchez 分析联合服务Federation将会得到极大推动使一些诸如多地区部署和多云部署等的备受期待的新特性成为可能。
**[ 想知道 CIO 们对混合云和多云的战略构想么? 请参看我们的这条相关资源, [Hybrid Cloud: The IT leader's guide][16]。 ]**
### 平台和工具的持续整合及加强
对任何一种技术来说,持续的整合和加强从来都是大势所趋,容器编排管理技术也不例外。来自 [Sumo Logic][17] 的首席分析师 Ben Newton 表示,随着容器化渐成主流,软件工程师们正在把运行微服务所用的技术收敛到极少数几种上。容器和 K8S 将会毫无疑问地成为容器编排管理方面的主流平台,并轻松碾压其它的一些小众平台方案。因为 K8S 提供了一个相当清晰的可以摆脱各种特有云生态的途径K8S 将被大量公司使用,逐渐形成一个不依赖于某个特定云服务的<ruby>“中立云”<rt>cloud-neutral</rt></ruby>。
### K8S 的下一站
来自 [Alcide][18] 的 CTO 和联合创始人 Gadi Naor 表示K8S 将会是一个有长期和远景发展的技术,虽然我们的社区正在大力推广和发展 K8SK8S 仍有很长的路要走。
专家们对[日益流行的 K8S 平台][19]也作出了以下一些预测:
**_来自 Alcide 的 Gadi Naor 表示_** “运营商会持续演进并趋于成熟,直到在 K8S 上运行的应用可以完全自治。利用 [OpenTracing][20] 和诸如 [istio][21] 技术的 service mesh 架构,在 K8S 上部署和监控微应用将会带来很多新的可能性。”
**_来自 Red Hat 的 Brian Gracely 表示_** “K8S 所支持的应用的种类越来越多。今后在 K8S 上,你不仅可以运行传统的应用程序,还可以运行原生的云应用、大数据应用以及 HPC 或者基于 GPU 运算的应用程序,这将为灵活的架构设计带来无限可能。”
**_来自 Sumo Logic 的 Ben Newton 表示_** “随着 K8S 成为一个具有统治地位的平台,我预计更多的操作机制将会被统一化,尤其是 K8S 将和第三方管理和监控平台融合起来。”
**_来自 CloudBees 的 Carlos Sanchez 表示_** “在不久的将来我们就能看到不依赖于 Docker 而使用其它运行时环境的系统,这将会有助于消除任何可能的 lock-in 情景“ [编辑提示:[CRI-O][22] 就是一个可以借鉴的例子。]“而且我期待将来会出现更多的针对企业环境的存储服务新特性,包括数据快照以及在线的磁盘容量的扩展。”
**_来自 Cockroach Labs 的 Alex Robinson 表示_** “ K8S 社区正在讨论的一个重大发展议题就是加强对[有状态程序][23]的管理。目前在 K8S 平台下,实现状态管理仍然非常困难,除非你所使用的云服务商可以提供远程固定磁盘。现阶段也有很多人在多方面试图改善这个状况,包括在 K8S 平台内部以及在外部服务商一端做出的一些改进。”
-------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next
作者:[Kevin Casey][a]
译者:[yunfengHe](https://github.com/yunfengHe)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/11/kubernetes-numbers-10-compelling-stats
[2]:https://enterprisersproject.com/article/2017/11/how-enterprise-it-uses-kubernetes-tame-container-complexity
[3]:https://enterprisersproject.com/article/2017/11/5-kubernetes-success-tips-start-smart?sc_cid=70160000000h0aXAAQ
[4]:https://451research.com/images/Marketing/press_releases/Application-container-market-will-reach-2-7bn-in-2020_final_graphic.pdf
[5]:https://thenewstack.io/
[6]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul
[7]:https://www.cockroachlabs.com/
[8]:https://enterprisersproject.com/article/2017/10/microservices-and-containers-6-management-tips-long-haul
[9]:https://codemill.se/
[10]:https://www.redhat.com/en/challenges/integration?intcmp=701f2000000tjyaAAA
[11]:https://enterprisersproject.com/hybrid-cloud
[12]:https://enterprisersproject.com/article/2017/7/multi-cloud-vs-hybrid-cloud-whats-difference
[13]:https://enterprisersproject.com/user/brian-gracely
[14]:https://www.redhat.com/en
[15]:https://www.cloudbees.com/
[16]:https://enterprisersproject.com/hybrid-cloud?sc_cid=70160000000h0aXAAQ
[17]:https://www.sumologic.com/
[18]:http://alcide.io/
[19]:https://enterprisersproject.com/article/2017/10/how-explain-kubernetes-plain-english
[20]:http://opentracing.io/
[21]:https://istio.io/
[22]:http://cri-o.io/
[23]:https://opensource.com/article/17/2/stateful-applications
[24]:https://enterprisersproject.com/article/2017/11/containers-and-kubernetes-whats-next?rate=PBQHhF4zPRHcq2KybE1bQgMkS2bzmNzcW2RXSVItmw8
[25]:https://enterprisersproject.com/user/kevin-casey

View File

@ -0,0 +1,74 @@
Mark McIntyre与 Fedora 的那些事
===========================
![](https://fedoramagazine.org/wp-content/uploads/2017/11/mock-couch-945w-945x400.jpg)
最近我们采访了 Mark McIntyre谈了他是如何使用 Fedora 系统的。这也是 Fedora 杂志上[系列文章的一部分][2]。该系列简要介绍了 Fedora 用户,以及他们是如何用 Fedora 把事情做好的。如果你想成为采访对象,请通过[反馈表][3]与我们联系。
### Mark McIntyre 是谁?
Mark McIntyre 为极客而生,以 Linux 为乐趣。他说:“我在 13 岁开始编程,当时自学 BASIC 语言,我体会到其中的乐趣,并在乐趣的引导下,一步步成为专业的码农。” Mark 和他的侄女都是披萨饼的死忠粉。“去年秋天,我和我的侄女开始了一个任务,去尝试诺克斯维尔的许多披萨饼连锁店。点击[这里][4]可以了解我们的进展情况。”Mark 也是一名业余的摄影爱好者,并且在 Flickr 上 [发布自己的作品][5]。
![](https://fedoramagazine.org/wp-content/uploads/2017/11/31456893222_553b3cac4d_k-1024x575.jpg)
作为一名开发者Mark 有着丰富的工作背景。他用过 Visual Basic 编写应用程序,用过 LotusScript、 PL/SQLOracle、 Tcl/TK 编写代码,也用过基于 Python 的 Django 框架。他的强项是 Python。这也是目前他作为系统工程师的工作语言。“我经常使用 Python。由于我的工作变得更像是自动化工程师 Python 用得就更频繁了。”
McIntyre 自称是个书呆子,喜欢科幻电影,但他最喜欢的一部电影却不是科幻片。“尽管我是个书呆子,喜欢看《<ruby>星际迷航<rt>Star Trek</rt></ruby>》、《<ruby>星球大战<rt>Star Wars</rt></ruby>》之类的影片,但《<ruby>光荣战役<rt>Glory</rt></ruby>》或许才是我最喜欢的电影。”他还提到,电影《<ruby>冲出宁静号<rt>Serenity</rt></ruby>》是一个著名电视剧的精彩后续(指《萤火虫》)。
Mark 比较看重他人的谦逊、知识与和气。他欣赏能够设身处地为他人着想的人。“如果你决定为另一个人服务,那么你会选择自己愿意亲近的人,而不是让自己备受折磨的人。”
McIntyre 目前在 [Scripps Networks Interactive][6] 工作,这家公司是 HGTV、Food Network、Travel Channel、DIY、GAC 以及其他几个有线电视频道的母公司。“我现在是一名系统工程师负责非线性视频内容这是所有媒体要开展线上消费所需要的。”他为一些开发团队提供支持他们编写应用程序将线性视频从有线电视发布到线上平台比如亚马逊、Hulu。这些系统既包括本地部署的系统也包括云端系统。Mark 还开发了一些自动化工具,将这些应用程序主要部署到云基础结构中。
### Fedora 社区
Mark 形容 Fedora 社区是一个富有活力的社区,充满着像 Fedora 用户一样热爱生活的人。“从设计师到封包人,这个团体依然非常活跃,生机勃勃。” 他继续说道:“这使我对该操作系统抱有一种信心。”
2002 年左右Mark 开始经常使用 IRC 上的 #fedora 频道“那时候Wi-Fi 在启用适配器和配置模块功能时,有许多还是靠手工实现的。”为了让他的 Wi-Fi 能够工作,他不得不重新去编译 Fedora 内核。
McIntyre 鼓励他人参与 Fedora 社区。“这里有许多来自不同领域的机会。前端设计、测试部署、开发、应用程序打包以及新技术实现。”他建议选择一个感兴趣的领域,然后向那个团体提出疑问。“这里有许多机会去奉献自己。”
对于帮助他起步的社区成员Mark 赞道“Ben Williams 非常乐于助人。在我第一次接触 Fedora 时,他帮我搞定了一些 #fedora 支持频道中的安装补丁。” Ben 也鼓励 Mark 去做 Fedora [大使][7]。
### 什么样的硬件和软件?
McIntyre 将 Fedora Linux 系统用在他的笔记本和台式机上。在服务器上他选择了 CentOS因为它有更长的生命周期支持。他现在的台式机是自己组装的配有 Intel 酷睿 i5 处理器32GB 的内存和2TB 的硬盘。“我装了个 4K 的显示屏,有足够大的地方来同时查看所有的应用。”他目前工作用的笔记本是戴尔灵越二合一,配备 13 英寸的屏16 GB 的内存和 525 GB 的 m.2 固态硬盘。
![](https://fedoramagazine.org/wp-content/uploads/2017/11/Screenshot-from-2017-10-26-08-51-41-1024x640.png)
Mark 现在将 Fedora 26 运行在他过去几个月装配的所有机器中。当一个新版本正式发布的时候,他倾向于避开这个高峰期。“除非在它即将发行的时候,我的工作站中有个正在运行下一代测试版本,通常情况下,一旦它发展成熟,我都会试着去获取最新的版本。”他经常采取就地更新:“这种就地更新方法利用 dnf 系统升级插件,目前表现得非常好。”
为了搞摄影McIntyre 用上了 [GIMP][8]、[Darktable][9],以及其他一些照片查看包和快速编辑包。当不用 Web 电子邮件时Mark 会使用 [Geary][10],还有 [GNOME Calendar][11]。Mark 选用 HexChat 作为 IRC 客户端,[HexChat][12] 与在 Fedora 服务器实例上运行的 [ZNC bouncer][13] 联机。他的部门通过 Slack 进行沟通交流。
“我从来都不是 IDE 粉,所以大多数的编辑任务都是在 [vim][14] 上完成的。”Mark 偶尔也会打开一个简单的文本编辑器,如 [gedit][15],或者 [xed][16]。他用 [GPaste][17] 做复制和粘贴工作。“对于终端的选择,我已经变成 [Tilix][18] 的忠粉。”McIntyre 通过 [Rhythmbox][19] 来管理他喜欢的播客,并用 [Epiphany][20] 实现快速网络查询。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/mark-mcintyre-fedora/
作者:[Charles Profitt][a]
译者:[zrszrszrs](https://github.com/zrszrszrs)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/cprofitt/
[1]:https://fedoramagazine.org/mark-mcintyre-fedora/
[2]:https://fedoramagazine.org/tag/how-do-you-fedora/
[3]:https://fedoramagazine.org/submit-an-idea-or-tip/
[4]:https://knox-pizza-quest.blogspot.com/
[5]:https://www.flickr.com/photos/mockgeek/
[6]:http://www.scrippsnetworksinteractive.com/
[7]:https://fedoraproject.org/wiki/Ambassadors
[8]:https://www.gimp.org/
[9]:http://www.darktable.org/
[10]:https://wiki.gnome.org/Apps/Geary
[11]:https://wiki.gnome.org/Apps/Calendar
[12]:https://hexchat.github.io/
[13]:https://wiki.znc.in/ZNC
[14]:http://www.vim.org/
[15]:https://wiki.gnome.org/Apps/Gedit
[16]:https://github.com/linuxmint/xed
[17]:https://github.com/Keruspe/GPaste
[18]:https://fedoramagazine.org/try-tilix-new-terminal-emulator-fedora/
[19]:https://wiki.gnome.org/Apps/Rhythmbox
[20]:https://wiki.gnome.org/Apps/Web

View File

@ -0,0 +1,75 @@
如何在 Linux 下安装安卓文件传输助手
===============
如果你尝试在 Ubuntu 下连接你的安卓手机,你也许可以试试 Linux 下的安卓文件传输助手。
本质上来说,这个应用是谷歌 macOS 版本的一个克隆。它是用 Qt 编写的,用户界面非常简洁,使得你能轻松在 Ubuntu 和安卓手机之间传输文件和文件夹。
现在可能有人想知道有什么事是这个应用能做、而 NautilusUbuntu 默认的文件资源管理器)做不到的,答案是:没有。
当我将我的 Nexus 5X记得选择 [媒体传输协议 MTP][7] 选项)连接在 Ubuntu 上时,在 [GVfs][8]LCTT 译注: GNOME 桌面下的虚拟文件系统)的帮助下,我可以打开、浏览和管理我的手机,就像它是一个普通的 U 盘一样。
[![Nautilus MTP integration with a Nexus 5X](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg)][9]
但是*一些*用户在使用默认的文件管理器时,在 MTP 的某些功能上会出现问题:比如文件夹没有正确加载,创建新文件夹后此文件夹不存在,或者无法在媒体播放器中使用自己的手机。
这就是为 Linux 系统用户设计一个安卓文件传输助手应用的原因:把它当做在 Linux 下挂载 MTP 设备的另一种选择。如果你使用 Linux 下的默认应用时一切正常,你也许并不需要尝试使用它(除非你真的很想尝试新鲜事物)。
![Android File Transfer Linux App](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/android-file-transfer-for-linux-750x662.jpg)
该 app 特点:
*   简洁直观的用户界面
*   支持文件拖放功能(从 Linux 系统到手机)
*   支持批量下载 (从手机到 Linux系统)
*   显示传输进程对话框
*   FUSE 模块支持
*   没有文件大小限制
*   可选命令行工具
### Ubuntu 下安装安卓手机文件助手的步骤
以上就是对这个应用的介绍,下面是如何安装它的具体步骤。
这有一个 PPA个人软件包档案为 Ubuntu 14.04 LTS、16.04 LTS 和 Ubuntu 17.10 提供可用应用。
为了将这一 PPA 加入你的软件资源列表中,执行这条命令:
```
sudo add-apt-repository ppa:samoilov-lex/aftl-stable
```
接着,为了在 Ubuntu 下安装 Linux 版本的安卓文件传输助手,执行:
```
sudo apt-get update && sudo apt install android-file-transfer
```
这样就行了。
你会在你的应用列表中发现这一应用的启动图标。
在你启动这一应用之前,要确保没有其他应用(比如 Nautilus已经挂载了你的手机。如果其它应用正在使用你的手机就会显示“无法找到 MTP 设备”。要解决这一问题,将你的手机从 Nautilus或者任何正在使用你的手机的应用上移除然后再重新启动安卓文件传输助手。
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux
作者:[JOEY SNEDDON][a]
译者:[wenwensnow](https://github.com/wenwensnow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://plus.google.com/117485690627814051450/?rel=author
[2]:http://www.omgubuntu.co.uk/category/app
[3]:http://www.omgubuntu.co.uk/category/download
[4]:https://github.com/whoozle/android-file-transfer-linux
[5]:http://www.omgubuntu.co.uk/2017/11/android-file-transfer-app-linux
[6]:http://android.com/filetransfer?linkid=14270770
[7]:https://en.wikipedia.org/wiki/Media_Transfer_Protocol
[8]:https://en.wikipedia.org/wiki/GVfs
[9]:http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/browsing-android-mtp-nautilus.jpg
[10]:https://launchpad.net/~samoilov-lex/+archive/ubuntu/aftl-stable

View File

@ -0,0 +1,72 @@
开源云技能认证:系统管理员的核心竞争力
=========
![os jobs](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-house-sysadmin.jpg?itok=i5FHc3lu "os jobs")
> [2017年开源工作报告][1](以下简称“报告”)显示,具有开源云技术认证的系统管理员往往能获得更高的薪酬。
报告调查的受访者中53% 认为系统管理员是雇主们最期望被填补的职位空缺之一,因此,技术娴熟的系统管理员更受青睐而收获高薪职位,但这一职位,并没想象中那么容易填补。
系统管理员主要负责服务器和其他电脑操作系统的安装、服务支持和维护,及时处理服务中断和预防其他问题的出现。
总的来说今年的报告指出开源领域人才需求最大的有开源云47%应用开发44%大数据43%开发运营和安全42%)。
此外报告对人事经理的调查显示58% 期望招揽更多的开源人才67% 认为开源人才的需求增长会比业内其他领域更甚。将招揽开源人才列为首选的单位比例较上年增长了 2 个百分点。
同时89% 的人事经理认为很难找到颇具天赋的开源人才。
### 为什么要获取认证
报告显示,对系统管理员的需求刺激着人事经理为 53% 的组织/机构提供正规的培训和专业技术认证,而这一比例去年为 47%。
对系统管理方面感兴趣的 IT 人才考虑获取 Linux 认证已成为行业规律。随便查看几个知名的招聘网站,你就能发现:[CompTIA Linux+][3] 认证是入门级 Linux 系统管理员的最高认证;如果想胜任高级别的系统管理员职位,获取[红帽认证工程师RHCE][4]和[红帽认证系统管理员RHCSA][5]则是不可或缺的。
戴士Dice[2017 技术行业薪资调查][6]显示2016 年系统管理员的薪水为 79,538 美元,较上年下降了 0.8%;系统架构师的薪水为 125,946 美元,同比下降 4.7%。尽管如此,该调查发现“高水平专业人才仍最受欢迎,特别是那些精通支持产业转型发展所需技术的人才”。
在开源技术方面HBase一个开源的分布式数据库技术人才的薪水在戴士 2017 技术行业薪资调查中排第一。在计算机网络和数据库领域,掌握 OpenVMS 操作系统技术也能获得高薪。
### 成为出色的系统管理员
出色的系统管理员须在问题出现时马上处理,这意味着你必须时刻准备应对可能出现的状况。这个职位追求“无责备的、精益的、在流程或技术上迭代式改进的”思维方式和善于自我完善的品格。成为一个系统管理员意味着“你必将与开源软件如 Linux、BSD 甚至开源 Solaris 等结下不解之缘”Paul English ^译注1 在 [opensource.com][7] 上发文指出。
Paul English 认为,现在的系统管理员较以前而言,要更多地与软件打交道,而且要能够编写脚本来协助系统管理。
>译注1Paul English计算机科学学士UNIX/Linux 系统管理员PreOS Security Inc. 公司 CEO2015-2017 年于为推动系统管理员发展实践的非盈利组织——<ruby>专业系统管理员联盟<rt>League of Professional System Administrator</rt></ruby>担任董事会成员。
### 展望 2018
[Robert Half 2018 年技术人才薪资导览][8]预测 2018 年北美地区许多单位将聘用大量系统管理方面的专业人才,同时个人软实力和领导力水平作为优秀人才的考量因素,越来越受到重视。
该报告指出:“良好的聆听能力和批判性思维能力对于理解和解决用户的问题和担忧至关重要,也是 IT 从业者必须具备的重要技能,特别是从事服务台和桌面支持工作相关的技术人员。”
这与[Linux基金会][9]^译注2 提出的不同阶段的系统管理员必备技能相一致,都强调了强大的分析能力和快速处理问题的能力。
>译注2<ruby>Linux 基金会<rt>The Linux Foundation</rt></ruby>,成立于 2000 年,致力于围绕开源项目构建可持续发展的生态系统,以加速开源项目的技术开发和商业应用;它是世界上最大的开源非盈利组织,在推广、保护和推进 Linux 发展,协同开发,维护“历史上最大的共享资源”上功勋卓越。
如果想逐渐爬上系统管理员职位的金字塔上层,还应该对系统配置的结构化方法充满兴趣;且拥有解决系统安全问题的经验;用户身份验证管理的经验;与非技术人员进行非技术交流的能力;以及优化系统以满足最新的安全需求的能力。
- [下载][10]2017年开源工作报告全文以获取更多信息。
-----------------------
via: https://www.linux.com/blog/open-source-cloud-skills-and-certification-are-key-sysadmins
作者:[linux.com][a]
译者:[wangy325](https://github.com/wangy325)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/blog/open-source-cloud-skills-and-certification-are-key-sysadmins
[1]:https://www.linuxfoundation.org/blog/2017-jobs-report-highlights-demand-open-source-skills/
[2]:https://www.linux.com/licenses/category/creative-commons-zero
[3]:https://certification.comptia.org/certifications/linux?tracking=getCertified/certifications/linux.aspx
[4]:https://www.redhat.com/en/services/certification/rhce
[5]:https://www.redhat.com/en/services/certification/rhcsa
[6]:http://marketing.dice.com/pdf/Dice_TechSalarySurvey_2017.pdf?aliId=105832232
[7]:https://opensource.com/article/17/7/truth-about-sysadmins
[8]:https://www.roberthalf.com/salary-guide/technology
[9]:https://www.linux.com/learn/10-essential-skills-novice-junior-and-senior-sysadmins%20%20
[10]:http://bit.ly/2017OSSjobsreport

View File

@ -0,0 +1,134 @@
Photon 也许能成为你最喜爱的容器操作系统
============================================================
![Photon OS](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon-linux.jpg?itok=jUFHPR_c "Photon OS")
> Photon OS 专注于容器,是一个非常出色的平台。 —— Jack Wallen
容器在当下的火热,并不是没有原因的。正如[之前][13]讨论的,容器可以使您轻松快捷地将新的服务与应用部署到您的网络上,而且并不耗费太多的系统资源。比起专用硬件和虚拟机,容器都是更加划算的,除此之外,它们更容易更新与重用。
更重要的是,容器喜欢 Linux反之亦然。不需要太多时间和麻烦你就可以启动一台 Linux 服务器,运行[Docker][14],然后部署容器。但是,哪种 Linux 发行版最适合部署容器呢?我们的选择很多。你可以使用标准的 Ubuntu 服务器平台(更容易安装 Docker 并部署容器)或者是更轻量级的发行版 —— 专门用于部署容器。
[Photon][15] 就是这样的一个发行版。这个特殊的版本是由 [VMware][16] 于 2015 年创建的,它包含了 Docker 的守护进程,并可与容器框架(如 Mesos 和 Kubernetes一起使用。Photon 经过优化,可与 [VMware vSphere][17] 协同工作,而且可用于裸机、[Microsoft Azure][18]、 [Google Compute Engine][19]、 [Amazon Elastic Compute Cloud][20] 或者 [VirtualBox][21] 等。
Photon 通过只安装 Docker 守护进程所必需的东西来保持它的轻量。而这样做的结果是,这个发行版的大小大约只有 300MB。但这足以让 Linux 的运行一切正常。除此之外Photon 的主要特点还有:
* 内核为性能而调整。
* 内核根据[内核自防护项目][6]KSPP进行了加固。
* 所有安装的软件包都根据加固的安全标识来构建。
* 操作系统在信任验证后启动。
* Photon 的管理进程可以管理防火墙、网络、软件包,和远程登录在 Photon 机器上的用户。
* 支持持久卷。
* [Project Lightwave][7] 整合。
* 及时的安全补丁与更新。
Photon 可以通过 [ISO 镜像][22]、[OVA][23]、[Amazon Machine Image][24]、[Google Compute Engine 镜像][25] 和 [Azure VHD][26] 安装使用。现在我将向您展示如何使用 ISO 镜像在 VirtualBox 上安装 Photon。整个安装过程大概需要五分钟在最后您将有一台随时可以部署容器的虚拟机。
### 创建虚拟机
在部署第一台容器之前,您必须先创建一台虚拟机并安装 Photon。为此打开 VirtualBox 并点击“新建”按钮。跟着创建虚拟机向导进行配置(根据您的容器将需要的用途,为 Photon 提供必要的资源)。在创建好虚拟机后,您所需要做的第一件事就是更改配置。选择新建的虚拟机(在 VirtualBox 主窗口的左侧面板中),然后单击“设置”。在弹出的窗口中,点击“网络”(在左侧的导航中)。
在“网络”窗口(图 1你需要在“连接”的下拉窗口中选择“桥接”。这可以确保您的 Photon 服务与您的网络相连。完成更改后,单击“确定”。
![change settings](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_0.jpg?itok=Q0yhOhsZ "change setatings")
*图 1 更改 Photon 在 VirtualBox 中的网络设置。[经许可使用][1]*
从左侧的导航选择您的 Photon 虚拟机,点击启动。系统会提示您去加载 ISO 镜像。当您完成之后Photon 安装程序将会启动并提示您按回车后开始安装。安装过程基于 ncurses没有 GUI但它非常简单。
接下来(图 2系统会询问您是要最小化安装、完整安装还是安装 OSTree 服务器。我选择了完整安装。选择您所需要的任意选项,然后按回车继续。
![installation type](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_2.jpg?itok=QL1Rs-PH "Photon")
*图 2 选择您的安装类型。[经许可使用][2]*
在下一个窗口,选择您要安装 Photon 的磁盘。由于我们将其安装在虚拟机中因此只有一块磁盘会被列出图 3。选择“自动”并按下回车。然后安装程序会让您输入并验证管理员密码。在这之后镜像开始安装在您的磁盘上并在不到 5 分钟的时间内结束。
![Photon](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_1.jpg?itok=OdnMVpaA "installation type")
*图 3 选择安装 Photon 的硬盘。[经许可使用][3]*
安装完成后,重启虚拟机并使用安装时创建的用户 root 和它的密码登录。一切就绪,你准备好开始工作了。
在开始使用 Docker 之前,您需要更新一下 Photon。Photon 使用 `yum` 软件包管理器,因此在以 root 用户登录后输入命令 `yum update`。如果有任何可用更新则会询问您是否确认图 4。
![Updating](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/photon_3.jpg?itok=vjqrspE2 "Updating")
*图 4 更新 Photon。[经许可使用][4]*
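作为参考,下面是一个简单的更新操作草稿(假设 Photon 中的 `yum` 与标准用法一致;`-y` 选项用于自动确认):
```
yum update
# 或者在无人值守的场景下自动确认所有更新
yum -y update
```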
### 用法
正如我所说的Photon 提供了部署容器甚至创建 Kubernetes 集群所需要的所有包。但是,在使用之前还要做一些事情。首先要启动 Docker 守护进程。为此,执行以下命令:
```
systemctl start docker
systemctl enable docker
```
现在我们需要创建一个标准用户,以便我们可以不用 root 去运行 `docker` 命令。为此,执行以下命令:
```
useradd -m USERNAME
passwd USERNAME
```
其中 “USERNAME” 是我们新增的用户的名称。
接下来,我们需要将这个新用户添加到 “docker” 组,执行命令:
```
usermod -a -G docker USERNAME
```
其中 “USERNAME” 是刚刚创建的用户的名称。
注销 root 用户并切换为新增的用户。现在,您已经可以不必使用 `sudo` 命令或者切换到 root 用户来使用 `docker` 命令了。从 Docker Hub 中取出一个镜像开始部署容器吧。
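例如,下面是一个从 Docker Hub 拉取并运行 nginx 镜像的简单示例(镜像与容器名仅作演示,并非文章指定的内容):
```
# 从 Docker Hub 拉取官方 nginx 镜像
docker pull nginx
# 在后台运行容器,并将主机的 8080 端口映射到容器的 80 端口
docker run -d --name my-nginx -p 8080:80 nginx
```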
### 一个优秀的容器平台
在专注于容器方面Photon 毫无疑问是一个出色的平台。请注意Photon 是一个开源项目,因此没有任何付费支持。如果您对 Photon 有任何的问题,请移步 Photon 项目的 GitHub 下的 [Issues][27],那里可以供您阅读相关问题,或者提交您的问题。如果您对 Photon 感兴趣,您也可以在该项目的官方 [GitHub][28]中找到源码。
尝试一下 Photon 吧,看看它是否能够使得 Docker 容器和 Kubernetes 集群的部署更加容易。
欲了解 Linux 的更多信息,可以通过学习 Linux 基金会和 edX 的免费课程,[“Linux 入门”][29]。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/11/photon-could-be-your-new-favorite-container-os
作者:[JACK WALLEN][a]
译者:[KeyLD](https://github.com/KeyLd)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/creative-commons-zero
[6]:https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project
[7]:http://vmware.github.io/lightwave/
[8]:https://www.linux.com/files/images/photon0jpg
[9]:https://www.linux.com/files/images/photon1jpg
[10]:https://www.linux.com/files/images/photon2jpg
[11]:https://www.linux.com/files/images/photon3jpg
[12]:https://www.linux.com/files/images/photon-linuxjpg
[13]:https://www.linux.com/learn/intro-to-linux/2017/11/how-install-and-use-docker-linux
[14]:https://www.docker.com/
[15]:https://vmware.github.io/photon/
[16]:https://www.vmware.com/
[17]:https://www.vmware.com/products/vsphere.html
[18]:https://azure.microsoft.com/
[19]:https://cloud.google.com/compute/
[20]:https://aws.amazon.com/ec2/
[21]:https://www.virtualbox.org/
[22]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS
[23]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS
[24]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS
[25]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS
[26]:https://github.com/vmware/photon/wiki/Downloading-Photon-OS
[27]:https://github.com/vmware/photon/issues
[28]:https://github.com/vmware/photon
[29]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,149 @@
如何判断 Linux 服务器是否被入侵?
=========================
本指南中所谓的服务器被入侵或者说被黑了的意思,是指未经授权的人或程序为了自己的目的登录到服务器上去并使用其计算资源,通常会产生不好的影响。
免责声明:若你的服务器被类似 NSA 这样的国家机关或者某个犯罪集团入侵,那么你并不会注意到有任何问题,这些技术也无法发觉他们的存在。
然而,大多数被攻破的服务器都是被类似自动攻击程序这样的程序或者类似“脚本小子”这样的廉价攻击者,以及蠢蛋罪犯所入侵的。
这类攻击者会在访问服务器的同时滥用服务器资源,并且不怎么会采取措施来隐藏他们正在做的事情。
### 被入侵服务器的症状
当服务器被没有经验的攻击者或者自动攻击程序入侵了的话,他们往往会消耗 100% 的资源。他们可能消耗 CPU 资源来进行数字货币的挖矿或者发送垃圾邮件,也可能消耗带宽来发动 DoS 攻击。
因此出现问题的第一个表现就是服务器 “变慢了”。这可能表现在网站的页面打开的很慢,或者电子邮件要花很长时间才能发送出去。
那么你应该查看哪些东西呢?
#### 检查 1 - 当前都有谁在登录?
你首先要查看当前都有谁登录在服务器上。发现攻击者登录到服务器上进行操作并不复杂。
其对应的命令是 `w`。运行 `w` 会输出如下结果:
```
08:32:55 up 98 days, 5:43, 2 users, load average: 0.05, 0.03, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 113.174.161.1 08:26 0.00s 0.03s 0.02s ssh root@coopeaa12
root pts/1 78.31.109.1 08:26 0.00s 0.01s 0.00s w
```
第一个 IP 是英国 IP而第二个 IP 是越南 IP。这个不是个好兆头。
停下来做个深呼吸,不要在恐慌之下直接干掉他们的 SSH 连接。除非你能够防止他们再次进入服务器,否则他们会很快重新进来,并踢掉你以防你再次登录。
请参阅本文最后的“被入侵之后怎么办”这一章节来看找到了被入侵的证据后应该怎么办。
`whois` 命令可以接一个 IP 地址然后告诉你该 IP 所注册的组织的所有信息,当然就包括所在国家的信息。
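例如,查询上面输出中出现的第一个 IP仅作演示
```
whois 113.174.161.1
```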
#### 检查 2 - 谁曾经登录过?
Linux 服务器会记录下哪些用户,从哪个 IP在什么时候登录的以及登录了多长时间这些信息。使用 `last` 命令可以查看这些信息。
输出类似这样:
```
root pts/1 78.31.109.1 Thu Nov 30 08:26 still logged in
root pts/0 113.174.161.1 Thu Nov 30 08:26 still logged in
root pts/1 78.31.109.1 Thu Nov 30 08:24 - 08:26 (00:01)
root pts/0 113.174.161.1 Wed Nov 29 12:34 - 12:52 (00:18)
root pts/0 14.176.196.1 Mon Nov 27 13:32 - 13:53 (00:21)
```
这里可以看到英国 IP 和越南 IP 交替出现,而且最上面两个 IP 现在还处于登录状态。如果你看到任何未经授权的 IP那么请参阅最后章节。
登录后的历史记录会记录到二进制的 `/var/log/wtmp` 文件中LCTT 译注:这里作者应该写错了,根据实际情况修改),因此很容易被删除。通常攻击者会直接把这个文件删掉,以掩盖他们的攻击行为。 因此, 若你运行了 `last` 命令却只看得见你的当前登录,那么这就是个不妙的信号。
如果没有登录历史的话,请一定小心,继续留意入侵的其他线索。
#### 检查 3 - 回顾命令历史
这个层次的攻击者通常不会注意掩盖命令的历史记录,因此运行 `history` 命令会显示出他们曾经做过的所有事情。
一定留意有没有用 `wget``curl` 命令来下载类似垃圾邮件机器人或者挖矿程序之类的非常规软件。
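例如,可以用下面的简单命令在历史记录中搜索这类下载命令(仅是一个思路示例):
```
history | grep -E 'wget|curl'
```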
命令历史存储在 `~/.bash_history` 文件中,因此有些攻击者会删除该文件以掩盖他们的所作所为。跟登录历史一样,若你运行 `history` 命令却没有输出任何东西那就表示历史文件被删掉了。这也是个不妙的信号你需要很小心地检查一下服务器了。LCTT 译注,如果没有命令历史,也有可能是你的配置错误。)
#### 检查 4 - 哪些进程在消耗 CPU
你常遇到的这类攻击者通常不怎么会去掩盖他们做的事情。他们会运行一些特别消耗 CPU 的进程。这就很容易发现这些进程了。只需要运行 `top` 然后看最前的那几个进程就行了。
这也能显示出那些未登录进来的攻击者。比如,可能有人在用未受保护的邮件脚本来发送垃圾邮件。
如果你对最上面的进程不了解,那么你可以 Google 一下进程名称,或者通过 `lsof` 或 `strace` 来看看它做的事情是什么。
使用这些工具,第一步从 `top` 中拷贝出进程的 PID然后运行
```
strace -p PID
```
这会显示出该进程调用的所有系统调用。它产生的内容会很多,但这些信息能告诉你这个进程在做什么。
```
lsof -p PID
```
这个程序会列出该进程打开的文件。通过查看它访问的文件可以很好的理解它在做的事情。
#### 检查 5 - 检查所有的系统进程
消耗 CPU 不严重的未授权进程可能不会在 `top` 中显露出来,不过它依然可以通过 `ps` 列出来。命令 `ps auxf` 就能显示足够清晰的信息了。
你需要检查一下每个不认识的进程。经常运行 `ps` (这是个好习惯)能帮助你发现奇怪的进程。
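例如(`f` 选项会以树状结构显示进程的父子关系,便于发现可疑进程):
```
ps auxf | less
```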
#### 检查 6 - 检查进程的网络使用情况
`iftop` 的功能类似 `top`,它会排列显示收发网络数据的进程以及它们的源地址和目的地址。类似 DoS 攻击或垃圾机器人这样的进程很容易显示在列表的最顶端。
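例如(假设网卡名为 `eth0``-n` 用于禁用 DNS 反向解析,以便直接显示 IP
```
sudo iftop -n -i eth0
```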
#### 检查 7 - 哪些进程在监听网络连接?
通常攻击者会安装一个后门程序专门监听网络端口接受指令。该进程等待期间是不会消耗 CPU 和带宽的,因此也就不容易通过 `top` 之类的命令发现。
`lsof``netstat` 命令都会列出所有的联网进程。我通常会让它们带上下面这些参数:
```
lsof -i
```
```
netstat -plunt
```
你需要留意那些处于 `LISTEN``ESTABLISHED` 状态的进程这些进程要么正在等待连接LISTEN要么已经连接ESTABLISHED。如果遇到不认识的进程使用 `strace``lsof` 来看看它们在做什么东西。
### 被入侵之后该怎么办呢?
首先,不要紧张,尤其当攻击者正处于登录状态时更不能紧张。**你需要在攻击者警觉到你已经发现他之前夺回机器的控制权。**如果他发现你已经发觉到他了,那么他可能会锁死你不让你登录服务器,然后开始毁尸灭迹。
如果你技术不太好那么就直接关机吧。你可以在服务器上运行 `shutdown -h now` 或者 `systemctl poweroff` 这两条命令之一。也可以登录主机提供商的控制面板中关闭服务器。关机后,你就可以开始配置防火墙或者咨询一下供应商的意见。
如果你对自己颇有自信,而你的主机提供商也有提供上游防火墙,那么你只需要以此创建并启用下面两条规则就行了:
1. 只允许从你的 IP 地址登录 SSH。
2. 封禁除此之外的任何东西,不仅仅是 SSH还包括任何端口上的任何协议。
这样会立即关闭攻击者的 SSH 会话,而只留下你可以访问服务器。
如果你无法访问上游防火墙,那么你就需要在服务器本身创建并启用这些防火墙策略,然后在防火墙规则生效后使用 `kill` 命令关闭攻击者的 SSH 会话。LCTT 译注:本地防火墙规则有可能不会阻止已经建立的 SSH 会话,所以保险起见,你需要手工杀死该会话。)
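下面是用 iptables 在服务器本地实现上述两条规则的一个最简草稿(假设你的 IP 是 203.0.113.10该地址仅作示例请按实际情况调整
```
# 放行已建立连接的返回流量(攻击者的现有会话随后需用 kill 手工结束)
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# 只允许来自你自己 IP 的 SSH 连接203.0.113.10 仅为示例)
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT
# 默认丢弃其余所有入站流量
iptables -P INPUT DROP
```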
最后还有一种方法,如果支持的话,就是通过诸如串行控制台之类的带外连接登录服务器,然后通过 `systemctl stop network.service` 停止网络功能。这会关闭所有服务器上的网络连接,这样你就可以慢慢的配置那些防火墙规则了。
重夺服务器的控制权后,也不要以为就万事大吉了。
不要试着修复这台服务器,然后接着用。你永远不知道攻击者做过什么,因此你也永远无法保证这台服务器还是安全的。
最好的方法就是拷贝出所有的数据然后重装系统。LCTT 译注:你的程序这时已经不可信了,但是数据一般来说没问题。)
--------------------------------------------------------------------------------
via: https://bash-prompt.net/guides/server-hacked/
作者:[Elliot Cooper][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://bash-prompt.net

View File

@ -0,0 +1,48 @@
使用 DNSTrails 自动找出每个域名的拥有者
============================================================
今天,我们很高兴地宣布我们最近几周做的新功能。它是 Whois 聚合工具,现在可以在 [DNSTrails][1] 上获得。
在过去,查找一个域名的所有者会花费很多时间,因为大部分时间你都需要把域名翻译为一个 IP 地址,以便找到同一个人拥有的其他域名。
使用老的方法,在得到你想要的域名列表之前,你经常需要花费数个小时,在一个又一个工具之间反复研究并交叉比较结果。
感谢这个新工具和我们的智能 [WHOIS 数据库][2],现在你可以搜索任何域名,并获得组织或个人注册的域名的完整列表,并在几秒钟内获得准确的结果。
### 我如何使用 Whois 聚合功能?
第一步:打开 [DNSTrails.com][3]
第二步搜索任何域名比如godaddy.com。
第三步:在得到域名的结果后,如下所见,定位下面的 Whois 信息:
![Domain name search results](https://securitytrails.com/images/a/a/1/3/f/aa13fa3616b8dc313f925bdbf1da43a54856d463-image1.png)
第四步:你会看到其中有与该域名相关的电话和电子邮箱地址。
第五步:点击右边的链接,你会轻松地找到用相同电话和邮箱注册的域名。
![All domain names by the same owner](https://securitytrails.com/images/1/3/4/0/3/134037822d23db4907d421046b11f3cbb872f94f-image2.png)
如果你正在调查互联网上任何人的域名所有权,这意味着即使域名没有指向注册服务商的 IP只要他们使用了相同的电话和邮件地址我们仍然可以发现其拥有的其他域名。
想知道一个人拥有的其他域名么?亲自试试 [DNStrails][5] 的 [WHOIS 聚合功能][4]或者[使用我们的 API 访问][6]。
--------------------------------------------------------------------------------
via: https://securitytrails.com/blog/find-every-domain-someone-owns
作者:[SECURITYTRAILS TEAM][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://securitytrails.com/blog/find-every-domain-someone-owns
[1]:https://dnstrails.com/
[2]:https://securitytrails.com/forensics
[3]:https://dnstrails.com/
[4]:http://dnstrails.com/#/domain/domain/ueland.com
[5]:https://dnstrails.com/
[6]:https://securitytrails.com/contact

View File

@ -0,0 +1,97 @@
在命令行中使用 DuckDuckGo 搜索
=============
![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/duckduckgo.png)
此前我们介绍了[如何在命令行中使用 Google 搜索][3]。许多读者反馈说他们平时使用 [Duck Duck Go][4],这是一个功能强大而且保密性很强的搜索引擎。
正巧,最近出现了一款能够从命令行搜索 DuckDuckGo 的工具。它叫做 ddgr我把它读作 “dodger”非常好用。
像 [Googler][7] 一样ddgr 是一个完全开源而且非官方的工具。没错,它并不属于 DuckDuckGo。所以如果你发现它返回的结果有些奇怪请先询问这个工具的开发者而不是搜索引擎的开发者。
### DuckDuckGo 命令行应用
![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/ddgr-gif.gif)
[DuckDuckGo BangsDuckDuckGo 快捷搜索)][8] 可以帮助你轻易地在 DuckDuckGo 上找到想要的信息(甚至 _本网站 omgubuntu_ 都有快捷搜索。ddgr 非常忠实地呈现了这个功能。
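例如,下面是一个使用快捷搜索的假设性示例(感叹号在 shell 中需要引号保护;具体行为请以 ddgr 的文档为准):
```
ddgr '!w linux kernel'
```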
和网页版不同的是,你可以更改每页返回多少结果。这比起每次查询都要看三十多条结果要方便一些。默认界面经过了精心设计,在不影响可读性的情况下尽量减少了占用空间。
`ddgr` 有许多功能和亮点,包括:
* 更改搜索结果数
* 支持 Bash 自动补全
* 使用 DuckDuckGo Bangs
* 在浏览器中打开链接
* “手气不错”选项
* 基于时间、地区、文件类型等的筛选功能
* 极少的依赖项
你可以从 Github 的项目页面上下载支持各种系统的 `ddgr`
- [从 Github 下载 “ddgr”][9]
另外,在 Ubuntu 16.04 LTS 或更新版本中,你可以使用 PPA 安装 ddgr。这个仓库由 ddgr 的开发者维护。如果你想要保持在最新版本的话,推荐使用这种方式安装。
需要提醒的是,在本文创作时,这个 PPA 中的 ddgr _并不是_ 最新版本,而是一个稍旧的版本(缺少 `--num` 选项)。
使用以下命令添加 PPA 并安装:
```
sudo add-apt-repository ppa:twodopeshaggy/jarun
sudo apt-get update
sudo apt-get install ddgr
```
### 如何使用 ddgr 在命令行中搜索 DuckDuckGo
安装完毕后,你只需打开你的终端模拟器,并运行:
```
ddgr
```
然后输入查询内容:
```
search-term
```
你可以限制搜索结果数:
```
ddgr --num 5 search-term
```
或者自动在浏览器中打开第一条搜索结果:
```
ddgr -j search-term
```
你可以使用参数和选项来提高搜索精确度。使用以下命令来查看所有的参数:
```
ddgr -h
```
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app
作者:[JOEY SNEDDON][a]
译者:[yixunx](https://github.com/yixunx)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://plus.google.com/117485690627814051450/?rel=author
[2]:http://www.omgubuntu.co.uk/category/download
[3]:http://www.omgubuntu.co.uk/2017/08/search-google-from-the-command-line
[4]:http://duckduckgo.com/
[5]:http://www.omgubuntu.co.uk/2017/11/duck-duck-go-terminal-app
[6]:https://github.com/jarun/ddgr
[7]:https://github.com/jarun/googler
[8]:https://duckduckgo.com/bang
[9]:https://github.com/jarun/ddgr/releases/tag/v1.1

View File

@ -0,0 +1,398 @@
Translate Shell :一款在 Linux 命令行中使用谷歌翻译的工具
============================================================
我对 CLI 应用非常感兴趣,因此热衷于使用并分享 CLI 应用。我之所以更喜欢 CLI很大原因是因为我大多数时候使用的都是字符界面black screen已经习惯了使用 CLI 应用而不是 GUI 应用。
我写过很多关于 CLI 应用的文章。 最近我发现了一些谷歌的 CLI 工具,像 “Google Translator”、“Google Calendar” 和 “Google Contacts”。 在这里,我想给大家分享一下。
今天我们要介绍的是 “Google Translator” 工具。由于我的母语是泰米尔语,我一天之内要使用它很多次。
谷歌翻译为其它语系的人们所广泛使用。
### 什么是 Translate Shell
[Translate Shell][2] (之前叫做 Google Translate CLI) 是一款借助谷歌翻译(默认)、必应翻译、Yandex.Translate 以及 Apertium 来翻译的命令行翻译器。它让你可以在终端访问这些翻译引擎。 Translate Shell 在大多数 Linux 发行版中都能使用。
### 如何安装 Translate Shell
有三种方法安装 Translate Shell。
* 下载自包含的可执行文件
* 手工安装
* 通过包管理器安装
#### 方法 1 : 下载自包含的可执行文件
下载自包含的可执行文件,并将其放到 `/usr/bin` 目录中。
```
$ wget git.io/trans
$ chmod +x ./trans
$ sudo mv trans /usr/bin/
```
#### 方法 2 : 手工安装
克隆 Translate Shell 的 GitHub 仓库然后手工编译。
```
$ git clone https://github.com/soimort/translate-shell && cd translate-shell
$ make
$ sudo make install
```
#### 方法 3 : 通过包管理器
有些发行版的官方仓库中包含了 Translate Shell可以通过包管理器来安装。
对于 Debian/Ubuntu 使用 [APT-GET 命令][3] 或者 [APT 命令][4]来安装。
```
$ sudo apt-get install translate-shell
```
对于 Fedora, 使用 [DNF 命令][5] 来安装。
```
$ sudo dnf install translate-shell
```
对于基于 Arch Linux 的系统, 使用 [Yaourt 命令][6] 或 [Packer 命令][7] 来从 AUR 仓库中安装。
```
$ yaourt -S translate-shell
or
$ packer -S translate-shell
```
### 如何使用 Translate Shell
安装好后,打开终端并输入下面的命令。 谷歌翻译会自动探测源文本是哪种语言,并且在默认情况下将之翻译成你的 `locale` 所对应的语言。
```
$ trans [Words]
```
下面我将泰米尔语中的单词 “நன்றி” (Nanri) 翻译成英语。 这个单词的意思是感谢别人。
```
$ trans நன்றி
நன்றி
(Naṉṟi)
Thanks
Definitions of நன்றி
[ தமிழ் -> English ]
noun
gratitude
நன்றி
thanks
நன்றி
நன்றி
Thanks
```
使用下面命令也能将英语翻译成泰米尔语。
```
$ trans :ta thanks
thanks
/THaNGks/
நன்றி
(Naṉṟi)
Definitions of thanks
[ English -> தமிழ் ]
noun
நன்றி
gratitude, thanks
thanks
நன்றி
```
要将一个单词翻译到多个语种可以使用下面命令(本例中,我将单词翻译成泰米尔语以及印地语)。
```
$ trans :ta+hi thanks
thanks
/THaNGks/
நன்றி
(Naṉṟi)
Definitions of thanks
[ English -> தமிழ் ]
noun
நன்றி
gratitude, thanks
thanks
நன்றி
thanks
/THaNGks/
धन्यवाद
(dhanyavaad)
Definitions of thanks
[ English -> हिन्दी ]
noun
धन्यवाद
thanks, thank, gratitude, thankfulness, felicitation
thanks
धन्यवाद, शुक्रिया
```
使用下面命令可以将多个单词当成一个参数(句子)来进行翻译。(只需要把句子用引号引起来作为一个参数就行了)。
```
$ trans :ta "what is going on your life?"
what is going on your life?
உங்கள் வாழ்க்கையில் என்ன நடக்கிறது?
(Uṅkaḷ vāḻkkaiyil eṉṉa naṭakkiṟatu?)
Translations of what is going on your life?
[ English -> தமிழ் ]
what is going on your life?
உங்கள் வாழ்க்கையில் என்ன நடக்கிறது?
```
下面命令单独地翻译各个单词。
```
$ trans :ta curios happy
curios
ஆர்வம்
(Ārvam)
Translations of curios
[ Română -> தமிழ் ]
curios
ஆர்வம், அறிவாளிகள், ஆர்வமுள்ள, அறிய, ஆர்வமாக
happy
/ˈhapē/
சந்தோஷமாக
(Cantōṣamāka)
Definitions of happy
[ English -> தமிழ் ]
மகிழ்ச்சியான
happy, convivial, debonair, gay
திருப்தி உடைய
happy
adjective
இன்பமான
happy
happy
சந்தோஷமாக, மகிழ்ச்சி, இனிய, சந்தோஷமா
```
简洁模式默认情况下Translate Shell 会尽可能多地显示翻译信息。如果你希望只显示简要信息,只需要加上 `-b` 选项。
```
$ trans -b :ta thanks
நன்றி
```
字典模式:加上 `-d` 可以把 Translate Shell 当成字典来用。
```
$ trans -d :en thanks
thanks
/THaNGks/
Synonyms
noun
- gratitude, appreciation, acknowledgment, recognition, credit
exclamation
- thank you, many thanks, thanks very much, thanks a lot, thank you kindly, much obliged, much appreciated, bless you, thanks a million
Examples
- In short, thanks for everything that makes this city great this Thanksgiving.
- many thanks
- There were no thanks in the letter from him, just complaints and accusations.
- It is a joyful celebration in which Bolivians give thanks for their freedom as a nation.
- festivals were held to give thanks for the harvest
- The collection, as usual, received a great response and thanks is extended to all who subscribed.
- It would be easy to dwell on the animals that Tasmania has lost, but I prefer to give thanks for what remains.
- thanks for being so helpful
- It came back on about half an hour earlier than predicted, so I suppose I can give thanks for that.
- Many thanks for the reply but as much as I tried to follow your advice, it's been a bad week.
- To them and to those who have supported the office I extend my grateful thanks .
- We can give thanks and words of appreciation to others for their kind deeds done to us.
- Adam, thanks for taking time out of your very busy schedule to be with us tonight.
- a letter of thanks
- Thank you very much for wanting to go on reading, and thanks for your understanding.
- Gerry has received a letter of thanks from the charity for his part in helping to raise this much needed cash.
- So thanks for your reply to that guy who seemed to have a chip on his shoulder about it.
- Suzanne, thanks for being so supportive with your comments on my blog.
- She has never once acknowledged my thanks , or existence for that matter.
- My grateful thanks go to the funders who made it possible for me to travel.
- festivals were held to give thanks for the harvest
- All you secretaries who made it this far into the article… thanks for your patience.
- So, even though I don't think the photos are that good, thanks for the compliments!
- And thanks for warning us that your secret service requires a motorcade of more than 35 cars.
- Many thanks for your advice, which as you can see, I have passed on to our readers.
- Tom Ryan was given a bottle of wine as a thanks for his active involvement in the twinning project.
- Mr Hill insists he has received no recent complaints and has even been sent a letter of thanks from the forum.
- Hundreds turned out to pay tribute to a beloved former headteacher at a memorial service to give thanks for her life.
- Again, thanks for a well written and much deserved tribute to our good friend George.
- I appreciate your doing so, and thanks also for the compliments about the photos!
See also
Thanks!, thank, many thanks, thanks to, thanks to you, special thanks, give thanks, thousand thanks, Many thanks!, render thanks, heartfelt thanks, thanks to this
```
使用下面格式可以使用 Translate Shell 来翻译文件。
```
$ trans :ta file:///home/magi/gtrans.txt
உங்கள் வாழ்க்கையில் என்ன நடக்கிறது?
```
下面命令可以让 Translate Shell 进入交互模式。 在进入交互模式之前你需要明确指定源语言和目标语言。本例中,我将英文单词翻译成泰米尔语。
```
$ trans -shell en:ta thanks
Translate Shell
(:q to quit)
thanks
/THaNGks/
நன்றி
(Naṉṟi)
Definitions of thanks
[ English -> தமிழ் ]
noun
நன்றி
gratitude, thanks
thanks
நன்றி
```
想知道语言代码,可以执行下面命令。
```
$ trans -R
```
或者
```
$ trans -T
┌───────────────────────┬───────────────────────┬───────────────────────┐
│ Afrikaans - af │ Hindi - hi │ Punjabi - pa │
│ Albanian - sq │ Hmong - hmn │ Querétaro Otomi- otq │
│ Amharic - am │ Hmong Daw - mww │ Romanian - ro │
│ Arabic - ar │ Hungarian - hu │ Russian - ru │
│ Armenian - hy │ Icelandic - is │ Samoan - sm │
│ Azerbaijani - az │ Igbo - ig │ Scots Gaelic - gd │
│ Basque - eu │ Indonesian - id │ Serbian (Cyr...-sr-Cyrl
│ Belarusian - be │ Irish - ga │ Serbian (Latin)-sr-Latn
│ Bengali - bn │ Italian - it │ Sesotho - st │
│ Bosnian - bs │ Japanese - ja │ Shona - sn │
│ Bulgarian - bg │ Javanese - jv │ Sindhi - sd │
│ Cantonese - yue │ Kannada - kn │ Sinhala - si │
│ Catalan - ca │ Kazakh - kk │ Slovak - sk │
│ Cebuano - ceb │ Khmer - km │ Slovenian - sl │
│ Chichewa - ny │ Klingon - tlh │ Somali - so │
│ Chinese Simp...- zh-CN│ Klingon (pIqaD)tlh-Qaak Spanish - es │
│ Chinese Trad...- zh-TW│ Korean - ko │ Sundanese - su │
│ Corsican - co │ Kurdish - ku │ Swahili - sw │
│ Croatian - hr │ Kyrgyz - ky │ Swedish - sv │
│ Czech - cs │ Lao - lo │ Tahitian - ty │
│ Danish - da │ Latin - la │ Tajik - tg │
│ Dutch - nl │ Latvian - lv │ Tamil - ta │
│ English - en │ Lithuanian - lt │ Tatar - tt │
│ Esperanto - eo │ Luxembourgish - lb │ Telugu - te │
│ Estonian - et │ Macedonian - mk │ Thai - th │
│ Fijian - fj │ Malagasy - mg │ Tongan - to │
│ Filipino - tl │ Malay - ms │ Turkish - tr │
│ Finnish - fi │ Malayalam - ml │ Udmurt - udm │
│ French - fr │ Maltese - mt │ Ukrainian - uk │
│ Frisian - fy │ Maori - mi │ Urdu - ur │
│ Galician - gl │ Marathi - mr │ Uzbek - uz │
│ Georgian - ka │ Mongolian - mn │ Vietnamese - vi │
│ German - de │ Myanmar - my │ Welsh - cy │
│ Greek - el │ Nepali - ne │ Xhosa - xh │
│ Gujarati - gu │ Norwegian - no │ Yiddish - yi │
│ Haitian Creole - ht │ Pashto - ps │ Yoruba - yo │
│ Hausa - ha │ Persian - fa │ Yucatec Maya - yua │
│ Hawaiian - haw │ Polish - pl │ Zulu - zu │
│ Hebrew - he │ Portuguese - pt │ │
└───────────────────────┴───────────────────────┴───────────────────────┘
```
想了解更多选项的内容,可以查看其 man 手册。
```
$ man trans
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/translate-shell-a-tool-to-use-google-translate-from-command-line-in-linux/
作者:[Magesh Maruthamuthu][a]
译者:[lujun9972](https://github.com/lujun9972 )
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[2]:https://github.com/soimort/translate-shell
[3]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[6]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
[7]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/

View File

@ -0,0 +1,132 @@
如何自动唤醒和关闭 Linux
=====================
![timekeeper](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner.jpg?itok=zItspoSb)
> 了解如何通过配置 Linux 计算机来根据时间自动唤醒和关闭。
不要成为一个电能浪费者。如果你的电脑不需要开机就请把它们关机。出于方便和计算机宅的考虑,你可以通过配置你的 Linux 计算机实现自动唤醒和关闭。
### 宝贵的系统运行时间
有时候有些电脑需要一直处在开机状态,只要不超出实际需要,这是可以接受的。有些人为他们的计算机可以长时间的正常运行而感到自豪,且现在我们有内核热补丁,能够实现只有在硬件发生故障时才需要机器关机。我认为比较实际可行的是,像减少移动部件磨损一样节省电能,且在不需要机器运行的情况下将其关机。比如,你可以在规定的时间内唤醒备份服务器,执行备份,然后关闭它直到它要进行下一次备份。或者,你可以设置你的互联网网关只在特定的时间运行。任何不需要一直运行的东西都可以将其配置成在其需要工作的时候打开,待其完成工作后将其关闭。
### 系统休眠
对于不需要一直运行的电脑,使用 root 的 cron 定时任务(即 `/etc/crontab`)可以可靠地关闭电脑。这个例子创建一个 root 定时任务实现每天晚上 11 点 15 分定时关机。
```
# crontab -e -u root
# m h dom mon dow command
15 23 * * * /sbin/shutdown -h now
```
以下示例仅在周一至周五运行:
```
15 23 * * 1-5 /sbin/shutdown -h now
```
您可以为不同的日期和时间创建多个 cron 作业。 通过命令 `man 5 crontab` 可以了解所有时间和日期的字段。
一个快速、容易的方式是,使用 `/etc/crontab` 文件。但这样你必须指定用户:
```
15 23 * * 1-5 root shutdown -h now
```
### 自动唤醒
实现自动唤醒是一件很酷的事情;我大多数 SUSESUSE Linux的同事都在纽伦堡因此为了能跟同事有几小时一起工作的时间我不得不在凌晨五点起床。我的计算机早上 5 点半自动开始工作,而我只需要将自己和咖啡拖到我的桌子上就可以开始工作了。按下电源按钮看起来好像并不是什么大事,但是在每天的那个时候,每件小事都会变得很大。
唤醒 Linux 计算机可能不如关闭它可靠因此你可能需要尝试不同的办法。你可以使用远程唤醒Wake-On-LAN、RTC 唤醒或者个人电脑的 BIOS 设置预定的唤醒这些方式。这些方式可行的原因是,当你关闭电脑时,这并不是真正关闭了计算机;此时计算机处在极低功耗状态且还可以接受和响应信号。只有在你拔掉电源开关时其才彻底关闭。
### BIOS 唤醒
BIOS 唤醒是最可靠的。我的系统主板 BIOS 有一个易于使用的唤醒调度程序 (图 1)。对你来说也是一样的容易。
![wakeup](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-1_11.png?itok=8qAeqo1I)
*图 1我的系统 BIOS 有个易用的唤醒定时器。*
### 主机远程唤醒Wake-On-LAN
远程唤醒是仅次于 BIOS 唤醒的又一种可靠的唤醒方法。这需要你从第二台计算机发送信号到所要打开的计算机。可以使用 Arduino 或<ruby>树莓派<rt>Raspberry Pi</rt></ruby>发送给基于 Linux 的路由器或者任何 Linux 计算机的唤醒信号。首先,查看系统主板 BIOS 是否支持 Wake-On-LAN要是支持的话必须先启用它因为它默认是被禁用的。
然后,需要一个支持 Wake-On-LAN 的网卡;无线网卡并不支持。你需要运行 `ethtool` 命令查看网卡是否支持 Wake-On-LAN
```
# ethtool eth0 | grep -i wake-on
Supports Wake-on: pumbg
Wake-on: g
```
这条命令输出的 “Supports Wake-on” 字段会告诉你你的网卡现在开启了哪些功能:
* d -- 禁用
* p -- 物理活动唤醒
* u -- 单播消息唤醒
* m -- 多播(组播)消息唤醒
* b -- 广播消息唤醒
* a -- ARP 唤醒
* g -- <ruby>特定数据包<rt>magic packet</rt></ruby>唤醒
* s -- 设有密码的<ruby>特定数据包<rt>magic packet</rt></ruby>唤醒
`ethtool` 命令的 man 手册并没说清楚 `p` 选项的作用;这表明任何信号都会导致唤醒。然而,在我的测试中它并没有这么做。想要实现远程唤醒主机,必须支持的功能是 `g` —— <ruby>特定数据包<rt>magic packet</rt></ruby>唤醒而且下面的“Wake-on” 行显示这个功能已经在启用了。如果它没有被启用,你可以通过 `ethtool` 命令来启用它。
```
# ethtool -s eth0 wol g
```
这条命令可能会在重启后失效,所以为了确保万无一失,你可以创建个 root 用户的定时任务cron在每次重启的时候来执行这条命令。
```
@reboot /usr/bin/ethtool -s eth0 wol g
```
另一个选择是最近的<ruby>网络管理器<rt>Network Manager</rt></ruby>版本有一个很好的小复选框来启用 Wake-On-LAN图 2
![wakeonlan](https://www.linux.com/sites/lcom/files/styles/floated_images/public/fig-2_7.png?itok=XQAwmHoQ)
*图 2启用 Wake on LAN*
这里有一个可以用于设置密码的地方,但是如果你的网络接口不支持<ruby>安全开机<rt>Secure On</rt></ruby>密码,它就不起作用。
现在你需要配置第二台计算机来发送唤醒信号。你并不需要 root 权限,所以你可以为你的普通用户创建 cron 任务。你需要用到的是想要唤醒的机器的网络接口和 MAC 地址信息。
```
30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B
```
### RTC 唤醒
通过使用实时闹钟来唤醒计算机是最不可靠的方法。对于这个方法,可以参看 [Wake Up Linux With an RTC Alarm Clock][4] ;对于现在的大多数发行版来说这种方法已经有点过时了。
下周继续了解更多关于使用 RTC 唤醒的方法。
通过 Linux 基金会和 edX 可以学习更多关于 Linux 的免费 [Linux 入门][5]教程。
(题图:[The Observatory at Delhi][7]
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/11/wake-and-shut-down-linux-automatically
作者:[Carla Schroder][a]
译者:[HardworkFish](https://github.com/HardworkFish)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/files/images/bannerjpg
[2]:https://www.linux.com/files/images/fig-1png-11
[3]:https://www.linux.com/files/images/fig-2png-7
[4]:https://www.linux.com/learn/wake-linux-rtc-alarm-clock
[5]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[6]:https://www.linux.com/licenses/category/creative-commons-attribution
[7]:http://www.columbia.edu/itc/mealac/pritchett/00routesdata/1700_1799/jaipur/delhijantarearly/delhijantarearly.html
[8]:https://www.linux.com/licenses/category/used-permission
[9]:https://www.linux.com/licenses/category/used-permission

View File

@ -0,0 +1,163 @@
如何在 Linux 系统中通过用户组来管理用户
============================================================
![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/group-of-people-1645356_1920.jpg?itok=rJlAxBSV)
> 通过本教程可以了解如何使用用户组和访问控制表ACL来管理用户。
当你需要管理一台容纳多个用户的 Linux 机器时,比起一些基本的用户管理工具所提供的方法,有时候你需要对这些用户采取更多的用户权限管理方式。特别是当你要管理某些用户的权限时,这个想法尤为重要。比如说,你有一个目录,某个用户组中的用户可以通过读和写的权限访问这个目录,而其他用户组中的用户对这个目录只有读的权限。在 Linux 中这是完全可以实现的。但前提是你必须先了解如何通过用户组和访问控制表ACL来管理用户。
我们将从简单的用户开始逐渐深入到复杂的访问控制表ACL。你可以在你所选择的 Linux 发行版完成你所需要做的一切。本文的重点是用户组,所以不会涉及到关于用户的基础知识。
为了达到演示的目的,我将假设:
你需要用下面两个用户名新建两个用户:
* olivia
* nathan
你需要新建以下两个用户组:
* readers
* editors
olivia 属于 editors 用户组,而 nathan 属于 readers 用户组。readers 用户组对 `/DATA` 目录只有读的权限,而 editors 用户组则对 `/DATA` 目录同时有读和写的权限。当然,这是个非常小的任务,但它会给你基本的信息,你可以扩展这个任务以适应你其他更大的需求。
我将在 Ubuntu 16.04 Server 平台上进行演示。这些命令都是通用的,唯一不同的是,要是在你的发行版中不使用 `sudo` 命令,你必须切换到 root 用户来执行这些命令。
### 创建用户
我们需要做的第一件事是为我们的实验创建两个用户。可以用 `useradd` 命令来创建用户,我们不只是简单地创建一个用户,而需要同时创建用户和属于他们的家目录,然后给他们设置密码。
```
sudo useradd -m olivia
sudo useradd -m nathan
```
我们现在创建了两个用户,如果你看看 `/home` 目录,你可以发现他们的家目录(因为我们用了 `-m` 选项,可以在创建用户的同时创建他们的家目录)。
之后,我们可以用以下命令给他们设置密码:
```
sudo passwd olivia
sudo passwd nathan
```
就这样,我们创建了两个用户。
### 创建用户组并添加用户
现在我们将创建 readers 和 editors 用户组,然后给它们添加用户。创建用户组的命令是:
```
addgroup readers
addgroup editors
```
LCTT 译注:当你使用 CentOS 等一些 Linux 发行版时,可能系统没有 `addgroup` 这个命令,推荐使用 `groupadd` 命令来替换 `addgroup` 命令以达到同样的效果)
![groups](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/groups_1.jpg?itok=BKwL89BB)
*图一:我们可以使用刚创建的新用户组了。*
创建用户组后,我们需要添加我们的用户到这两个用户组。我们用以下命令来将 nathan 用户添加到 readers 用户组:
```
sudo usermod -a -G readers nathan
```
用以下命令将 olivia 添加到 editors 用户组:
```
sudo usermod -a -G editors olivia
```
现在我们可以通过用户组来管理用户了。
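可以用 `groups` 命令确认用户已经加入了相应的用户组:
```
groups nathan
groups olivia
```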
### 给用户组授予目录的权限
假设你有个目录 `/READERS` 且允许 readers 用户组的所有成员访问这个目录。首先,我们执行以下命令来更改目录所属用户组:
```
sudo chown -R :readers /READERS
```
接下来,执行以下命令收回目录所属用户组的写入权限:
```
sudo chmod -R g-w /READERS
```
然后我们执行下面的命令来收回其他用户对这个目录的访问权限(以防止任何不在 readers 组中的用户访问这个目录里的文件):
```
sudo chmod -R o-x /READERS
```
这时候只有目录的所有者root和 readers 用户组中的用户可以访问 `/READERS` 中的文件。
假设你有个目录 `/EDITORS` ,你需要给用户组 editors 里的成员这个目录的读和写的权限。为了达到这个目的,执行下面的这些命令是必要的:
```
sudo chown -R :editors /EDITORS
sudo chmod -R g+w /EDITORS
sudo chmod -R o-x /EDITORS
```
此时 editors 用户组的所有成员都可以访问和修改其中的文件。除此之外其他用户(除了 root 之外)无法访问 `/EDITORS` 中的任何文件。
使用这个方法的问题在于你一次只能操作一个组和一个目录而已。这时候访问控制表ACL就可以派得上用场了。
### 使用访问控制表ACL
现在,让我们把这个问题变得棘手一点。假设你有一个目录 `/DATA` 并且你想给 readers 用户组的成员读取权限,并同时给 editors 用户组的成员读和写的权限。为此,你必须要用到 `setfacl` 命令。`setfacl` 命令可以为文件或文件夹设置一个访问控制表ACL
这个命令的结构如下:
```
setfacl OPTION X:NAME:Y /DIRECTORY
```
其中 OPTION 是可选选项X 可以是 `u`(用户)或者是 `g` 用户组NAME 是用户或者用户组的名字,/DIRECTORY 是要用到的目录。我们将使用 `-m` 选项进行修改。因此,我们给 readers 用户组添加读取权限的命令是:
```
sudo setfacl -m g:readers:rx -R /DATA
```
现在 readers 用户组里面的每一个用户都可以读取 `/DATA` 目录里的文件了,但是他们不能修改里面的内容。
为了给 editors 用户组里面的用户读写权限,我们执行了以下命令:
```
sudo setfacl -m g:editors:rwx -R /DATA
```
上述命令将赋予 editors 用户组中的任何成员读取权限,同时保留 readers 用户组的只读权限。
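可以用 `getfacl` 命令查看 `/DATA` 目录当前的访问控制表,确认上述规则已经生效:
```
getfacl /DATA
```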
### 更多的权限控制
使用访问控制表ACL你可以实现你所需的权限控制。你可以添加用户到用户组并且灵活地控制这些用户组对每个目录的权限以达到你的需求。如果想了解上述工具的更多信息可以执行下列的命令
* `man useradd`
* `man addgroup`
* `man usermod`
* `man setfacl`
* `man chown`
* `man chmod`
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/12/how-manage-users-groups-linux
作者:[Jack Wallen]
译者:[imquanquan](https://github.com/imquanquan)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://www.linux.com/files/images/group-people-16453561920jpg
[2]:https://www.linux.com/files/images/groups1jpg
[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[4]:https://www.linux.com/licenses/category/creative-commons-zero
[5]:https://www.linux.com/licenses/category/used-permission

View File

@ -0,0 +1,34 @@
Linux Journal 停止发行
============================================================
EOF
伙计们,看起来我们要到终点了。如果不出意外的话,十一月份的 Linux Journal 将是我们的最后一期。
简单的事实是,我们的钱用完了,办法也用尽了。我们从来没有一个富有的母公司,自己也没有深厚的资金,从开始到结束,这使得我们成为一个反常的出版商。虽然我们维持运营了很长一段时间,但当天平最终不可挽回地向相反方向倾斜时,我们在十一月份失去了最后一点支持。
虽然我们认为出版业的未来应该像出版业的过去那样(广告商因为重视出版物的品牌和读者而赞助出版物),但如今的广告业宁愿追逐眼球,最好是在读者的浏览器中植入跟踪标记,并随时随地展示那些广告。但是,那样的未来还没有到来,而过去的已经过去了。
我们猜想还有一个希望,那就是救世主可能会出现。但这个人除了要看重我们的品牌、我们的归档、我们的域名、我们的用户和读者之外,还必须愿意承担我们的一部分债务。如果你认识任何能够提供认真报价的人,请告诉我们。不然,请继续关注 LinuxJournal.com并希望至少我们的遗留归档可以追溯到 Linux Journal 诞生的 1994 年 4 月,当时 Linux 刚刚发布 1.0 版本)不会消失。这里有很多很棒的东西,还有很多一旦失去我们会为世界感到痛惜的历史。
我们最大的遗憾是,我们甚至没有足够的钱回馈最看重我们的人:我们的订阅者。为此,我们致以最深切、最真诚的歉意。以下是我们能为订阅者提供的:
Linux Pro Magazine 为我们的订阅者提供了六期免费的杂志,我们在 Linux Journal 一直很欣赏这家杂志。在我们需要的时候,他们是第一批伸出援手的人,我们感谢他们的美意。我们今天刚刚完成了 2017 年的归档,其中包括我们曾经发行过的每一期,从第一期到最后一期。通常我们以 25 美元的价格出售它,但订阅者显然将免费获得。订阅者请留意包含这两项详细信息的电子邮件。
我们也希望你们知道,我们曾非常非常努力地想让 Linux Journal 继续办下去,这或许能带来一些安慰;而且很长一段时间以来,我们一直以尽可能精简的规模运营。我们是一个以志愿者为主的组织,有些员工已经几个月没有收到工资了。我们还欠着自由撰稿人的钱。一个出版商能把这种状况维持多久是有限度的,而现在这个限度已经到头了。
伙计们,这是一段伟大的历程。向每一个为我们的诞生、我们的成功和我们多年的坚持作出贡献的人致敬。我们本想列一份名单,但是名单太长了,而且漏掉重要人物的风险很高。你知道你是谁。我们再次感谢你们。
--------------------------------------------------------------------------------
via: https://www.linuxjournal.com/content/linux-journal-ceases-publication
作者:[ Carlie Fairchild][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxjournal.com/users/carlie-fairchild
[1]:https://www.linuxjournal.com/taxonomy/term/29
[2]:https://www.linuxjournal.com/users/carlie-fairchild

View File

@ -0,0 +1,298 @@
2017 年 30 款最好的支持 Linux 的 Steam 游戏
============================================================
说到游戏,人们一般都会推荐使用 Windows 系统。Windows 能提供更好的显卡支持和硬件兼容性,所以对于游戏爱好者来说的确是个更好的选择。但你是否想过[在 Linux 系统上玩游戏][9]?这的确是可以的,也许你以前还曾经考虑过。但在几年之前, [Steam for Linux][10] 上可玩的游戏并不是很吸引人。
但现在情况完全不一样了。Steam 商店里现在有许多支持 Linux 平台的游戏(包括很多主流大作)。我们在本文中将介绍 Steam 上最好的一些 Linux 游戏。
在进入正题之前,先介绍一个省钱小窍门。如果你是个狂热的游戏爱好者,在游戏上花费很多时间和金钱的话,我建议你订阅 [<ruby>Humble 每月包<rt>Humble Monthly</rt></ruby>][11]。这是个每月收费的订阅服务,每月只用 12 美元就能获得价值 100 美元的游戏。
这个游戏包中可能有些游戏不支持 Linux但除了 Steam 游戏之外,它还会让 [Humble Bundle 网站][12]上所有的游戏和书籍都打九折,所以这依然是个不错的优惠。
更棒的是,你在 Humble Bundle 上所有的消费都会捐出一部分给慈善机构。所以你在享受游戏的同时还在帮助改变世界。
### Steam 上最好的 Linux 游戏
以下排名无先后顺序。
额外提示:虽然在 Steam 上有很多支持 Linux 的游戏,但你在 Linux 上玩游戏时依然可能会遇到各种问题。你可以阅读我们之前的文章:[每个 Linux 游戏玩家都会遇到的烦人问题][14]
可以点击以下链接跳转到你喜欢的游戏类型:
* [动作类游戏][3]
* [角色扮演类游戏][4]
* [赛车/运动/模拟类游戏][5]
* [冒险类游戏][6]
* [独立游戏][7]
* [策略类游戏][8]
### Steam 上最佳 Linux 动作类游戏
#### 1、 《<ruby>反恐精英:全球攻势<rt>Counter-Strike: Global Offensive</rt></ruby>》(多人)
《CS:GO》毫无疑问是 Steam 上支持 Linux 的最好的 FPS 游戏之一。我觉得这款游戏无需介绍,但如果你没有听说过它,我要告诉你这将会是你玩过的最好玩的多人 FPS 游戏之一。《CS:GO》还是电子竞技中的一个主流项目。想要提升等级的话你需要在天梯上和其他玩家同台竞技。但你也可以选择更加轻松的休闲模式。
我本想写《彩虹六号:围攻行动》,但它目前还不支持 Linux 或 Steam OS。
- [购买《CS: GO》][15]
#### 2、 《<ruby>求生之路 2<rt>Left 4 Dead 2</rt></ruby>》(多人/单机)
这是最受欢迎的僵尸主题多人 FPS 游戏之一。在 Steam 优惠时,价格可以低至 1.3 美元。这是个有趣的游戏,能让你体会到你在僵尸游戏中期待的战栗和刺激。游戏中的环境包括了沼泽、城市、墓地等等,让游戏既有趣又吓人。游戏中的枪械并不是非常先进,但作为一个老游戏来说,它已经提供了足够真实的体验。
- [购买《求生之路 2》][16]
#### 3、 《<ruby>无主之地 2<rt>Borderlands 2</rt></ruby>》(单机/协作)
《无主之地 2》是个很有意思的 FPS 游戏。它和你以前玩过的游戏完全不同。画风看上去有些诡异和卡通化,如果你正在寻找一个第一视角的射击游戏,我可以保证,游戏体验可一点也不逊色!
如果你在寻找一个好玩而且有很多 DLC 的 Linux 游戏,《无主之地 2》绝对是个不错的选择。
- [购买《无主之地 2》][17]
#### 4、 《<ruby>叛乱<rt>Insurgency</rt></ruby>》(多人)
《叛乱》是 Steam 上又一款支持 Linux 的优秀的 FPS 游戏。它剑走偏锋,从屏幕上去掉了 HUD 和弹药数量指示。如同许多评论者所说,这是款注重武器和团队战术的纯粹的射击游戏。这也许不是最好的 FPS 游戏,但如果你想玩和《三角洲部队》类似的多人游戏的话,这绝对是最好的游戏之一。
- [购买《叛乱》][18]
#### 5、 《<ruby>生化奇兵:无限<rt>Bioshock: Infinite</rt></ruby>》(单机)
《生化奇兵:无限》毫无疑问将会作为 PC 平台最好的单机 FPS 游戏之一而载入史册。你可以利用很多强大的能力来杀死你的敌人。同时你的敌人也各个身怀绝技。游戏的剧情也非常丰富。你不容错过!
- [购买《生化奇兵:无限》][19]
#### 6、 《<ruby>杀手(年度版)<rt>HITMAN - Game of the Year Edition</rt></ruby>》(单机)
《杀手》系列无疑是 PC 游戏爱好者们的最爱之一。本系列的最新作开始按章节发布,让很多玩家觉得不满。但现在 Square Enix 已经撤出了开发,而最新的年度版带着新的内容重返舞台。在游戏中发挥你的想象力,暗杀你的目标吧,杀手 47
- [购买《杀手(年度版)》][20]
#### 7、 《<ruby>传送门 2<rt>Portal 2</rt></ruby>
《传送门 2》完美地结合了动作与冒险。这是款解谜类游戏你可以与其他玩家协作并开发有趣的谜题。协作模式提供了和单机模式截然不同的游戏内容。
- [购买《传送门 2》][21]
#### 8、 《<ruby>杀出重围:人类分裂<rt>Deux Ex: Mankind Divided</rt></ruby>
如果你在寻找隐蔽类的射击游戏,《杀出重围》是个填充你的 Steam 游戏库的完美选择。这是个非常华丽的游戏,有着最先进的武器和超乎寻常的战斗机制。
- [购买《杀出重围:人类分裂》][22]
#### 9、 《<ruby>地铁 2033 重置版<rt>Metro 2033 Redux</rt></ruby>》 / 《<ruby>地铁:最后曙光 重置版<rt>Metro Last Light Redux</rt></ruby>
《地铁 2033 重置版》和《地铁:最后曙光 重置版》是经典的《地铁 2033》和《地铁最后曙光》的最终版本。故事发生在世界末日之后。你需要消灭所有的变种人来保证人类的生存。剩下的就交给你自己去探索了
- [购买《地铁 2033 重置版》][23]
- [购买《地铁:最后曙光 重置版》][24]
#### 10、 《<ruby>坦能堡<rt>Tannenberg</rt></ruby>》(多人)
《坦能堡》是个全新的游戏 - 在本文发表一个月前刚刚发售。游戏背景是第一次世界大战的东线战场1914-1918。这款游戏只有多人模式。如果你想要在游戏中体验第一次世界大战不要错过这款游戏
- [购买《坦能堡》][25]
### Steam 上最佳 Linux 角色扮演类游戏
#### 11、 《<ruby>中土世界:暗影魔多<rt>Shadow of Mordor</rt></ruby>
《中土世界:暗影魔多》 是 Steam 上支持 Linux 的最好的开放式角色扮演类游戏之一。你将扮演一个游侠(塔里昂),和光明领主(凯勒布理鹏)并肩作战击败索隆的军队(并最终和他直接交手)。战斗机制非常出色。这是款不得不玩的游戏!
- [购买《中土世界:暗影魔多》][26]
#### 12、 《<ruby>神界:原罪加强版<rt>Divinity: Original Sin Enhanced Edition</rt></ruby>
《神界:原罪》是一款极其优秀的角色扮演类独立游戏。它非常独特而又引人入胜。这或许是评分最高的带有冒险和策略元素的角色扮演游戏。加强版添加了新的游戏模式,并且完全重做了配音、手柄支持、协作任务等等。
- [购买《神界:原罪加强版》][27]
#### 13、 《<ruby>废土 2导演剪辑版<rt>Wasteland 2: Directors Cut</rt></ruby>
《废土 2》是一款出色的 CRPG 游戏。如果《辐射 4》被移植成 CRPG 游戏,大概就是这种感觉。导演剪辑版完全重做了画面,并且增加了一百多名新人物。
- [购买《废土 2》][28]
#### 14、 《<ruby>阴暗森林<rt>Darkwood</rt></ruby>
一个充满恐怖的俯视角角色扮演类游戏。你将探索世界、搜集材料、制作武器来生存下去。
- [购买《阴暗森林》][29]
### 最佳赛车 / 运动 / 模拟类游戏
#### 15、 《<ruby>火箭联盟<rt>Rocket League</rt></ruby>
《火箭联盟》是一款充满刺激的足球游戏。游戏中你将驾驶用火箭助推的战斗赛车。你不仅是要驾车把球带进对方球门,你甚至还可以让你的对手化为灰烬!
这是款超棒的体育动作类游戏,每个游戏爱好者都值得拥有!
- [购买《火箭联盟》][30]
#### 16、 《<ruby>公路救赎<rt>Road Redemption</rt></ruby>
想念《暴力摩托》了?作为它精神上的续作,《公路救赎》可以缓解你的饥渴。当然,这并不是真正的《暴力摩托 2》但它一样有趣。如果你喜欢《暴力摩托》你也会喜欢这款游戏。
- [购买《公路救赎》][31]
#### 17、 《<ruby>尘埃拉力赛<rt>Dirt Rally</rt></ruby>
《尘埃拉力赛》是为想要体验公路和越野赛车的玩家准备的。画面非常有魄力,驾驶手感也近乎完美。
- [购买《尘埃拉力赛》][32]
#### 18、 《F1 2017》
《F1 2017》是另一款令人印象深刻的赛车游戏。由《尘埃拉力赛》的开发者 Codemasters & Feral Interactive 制作。游戏中包含了所有标志性的 F1 赛车,值得你去体验。
- [购买《F1 2017》][33]
#### 19、 《<ruby>超级房车赛:汽车运动<rt>GRID Autosport</rt></ruby>
《超级房车赛》是最被低估的赛车游戏之一。《超级房车赛:汽车运动》是《超级房车赛》的续作。这款游戏的可玩性令人惊艳。游戏中的赛车也比前作更好。推荐所有的 PC 游戏玩家尝试这款赛车游戏。游戏还支持多人模式,你可以和你的朋友组队参赛。
- [购买《超级房车赛:汽车运动》][34]
### 最好的冒险游戏
#### 20、 《<ruby>方舟:生存进化<rt>ARK: Survival Evolved</rt></ruby>
《方舟:生存进化》是一款不错的生存游戏,里面有着激动人心的冒险。你发现自己身处一个未知孤岛(方舟岛),为了生存下去并逃离这个孤岛,你必须去驯服恐龙、与其他玩家合作、猎杀其他人来抢夺资源、以及制作物品。
- [购买《方舟:生存进化》][35]
#### 21、 《<ruby>这是我的战争<rt>This War of Mine</rt></ruby>
一款独特的战争游戏。你不是扮演士兵,而是要作为一个平民来面对战争带来的艰难。你需要在身经百战的敌人手下逃生,并帮助其他的幸存者。
- [购买《这是我的战争》][36]
#### 22、 《<ruby>疯狂的麦克斯<rt>Mad Max</rt></ruby>
生存和暴力概括了《疯狂的麦克斯》的全部内容。游戏中有性能强大的汽车,开放性的世界,各种武器,以及徒手肉搏。你要不断地探索世界,并注意升级你的汽车来防患于未然。在做决定之前,你要仔细思考并设计好策略。
- [购买《疯狂的麦克斯》][37]
### 最佳独立游戏
#### 23、 《<ruby>泰拉瑞亚<rt>Terraria</rt></ruby>
这是款在 Steam 上广受好评的 2D 游戏。你在旅途中需要去挖掘、战斗、探索、建造。游戏地图是自动生成的,而不是固定不变的。也许你刚刚遇到的东西,你的朋友过一会儿才会遇到。你还将体验到富有新意的 2D 动作场景。
- [购买《泰拉瑞亚》][38]
#### 24、 《<ruby>王国与城堡<rt>Kingdoms and Castles</rt></ruby>
在《王国与城堡》中,你将建造你自己的王国。在管理你的王国的过程中,你需要收税、保护森林、规划城市,并且发展国防来防止别人入侵你的王国。
这是款比较新的游戏,但在独立游戏中已经相对获得了比较高的人气。
- [购买《王国与城堡》][39]
### Steam 上最佳 Linux 策略类游戏
#### 25、 《<ruby>文明 5<rt>Sid Meier's Civilization V</rt></ruby>
《文明 5》是 PC 上评价最高的策略游戏之一。如果你想的话,你可以去玩《文明 6》。但是依然有许多玩家喜欢《文明 5》觉得它更有独创性游戏细节也更富有创造力。
- [购买《文明 5》][40]
#### 26、 《<ruby>全面战争:战锤<rt>Total War: Warhammer</rt></ruby>
《全面战争:战锤》是 PC 平台上一款非常出色的回合制策略游戏。可惜的是,新作《战锤 2》依然不支持 Linux。但如果你喜欢使用飞龙和魔法来建造与毁灭帝国的话2016 年的《战锤》依然是个不错的选择。
- [购买《全面战争:战锤》][41]
#### 27、 《<ruby>轰炸小队<rt>Bomber Crew</rt></ruby>
想要一款充满乐趣的策略游戏?《轰炸小队》就是为你准备的。你需要选择合适的队员并且让你的队伍稳定运转来取得最终的胜利。
- [购买《轰炸小队》][42]
#### 28、 《<ruby>奇迹时代 3<rt>Age of Wonders III</rt></ruby>
非常流行的策略游戏,包含帝国建造、角色扮演、以及战争元素。这是款精致的回合制策略游戏,请一定要试试!
- [购买《奇迹时代 3》][43]
#### 29、 《<ruby>城市:天际线<rt>Cities: Skylines</rt></ruby>
一款非常简洁的策略游戏。你要从零开始建造一座城市,并且管理它的全部运作。你将体验建造和管理城市带来的愉悦与困难。我不觉得每个玩家都会喜欢这款游戏——它的用户群体非常明确。
- [购买《城市:天际线》][44]
#### 30、 《<ruby>幽浮 2<rt>XCOM 2</rt></ruby>
《幽浮 2》是 PC 上最好的回合制策略游戏之一。我在想如果《幽浮 2》能够被制作成 FPS 游戏的话该有多棒。不过它现在已经是一款好评如潮的杰作了。如果你有多余的预算能花在这款游戏上,建议你购买“<ruby>天选之战<rt>War of the Chosen</rt></ruby>“ DLC。
- [购买《幽浮 2》][45]
### 总结
我们从所有支持 Linux 的游戏中挑选了大部分的主流大作以及一些评价很高的新作。
你觉得我们遗漏了你最喜欢的支持 Linux 的 Steam 游戏么?另外,你还希望哪些 Steam 游戏开始支持 Linux 平台?
请在下面的回复中告诉我们你的想法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/best-linux-games-steam/
作者:[Ankush Das][a]
译者:[yixunx](https://github.com/yixunx)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/ankush/
[1]:https://itsfoss.com/author/ankush/
[2]:https://itsfoss.com/best-linux-games-steam/#comments
[3]:https://itsfoss.com/best-linux-games-steam/#action
[4]:https://itsfoss.com/best-linux-games-steam/#rpg
[5]:https://itsfoss.com/best-linux-games-steam/#racing
[6]:https://itsfoss.com/best-linux-games-steam/#adv
[7]:https://itsfoss.com/best-linux-games-steam/#indie
[8]:https://itsfoss.com/best-linux-games-steam/#strategy
[9]:https://linux.cn/article-7316-1.html
[10]:https://itsfoss.com/install-steam-ubuntu-linux/
[11]:https://www.humblebundle.com/?partner=itsfoss
[12]:https://www.humblebundle.com/store?partner=itsfoss
[13]:https://www.humblebundle.com/monthly?partner=itsfoss
[14]:https://itsfoss.com/linux-gaming-problems/
[15]:http://store.steampowered.com/app/730/CounterStrike_Global_Offensive/
[16]:http://store.steampowered.com/app/550/Left_4_Dead_2/
[17]:http://store.steampowered.com/app/49520/?snr=1_5_9__205
[18]:http://store.steampowered.com/app/222880/?snr=1_5_9__205
[19]:http://store.steampowered.com/agecheck/app/8870/
[20]:http://store.steampowered.com/app/236870/?snr=1_5_9__205
[21]:http://store.steampowered.com/app/620/?snr=1_5_9__205
[22]:http://store.steampowered.com/app/337000/?snr=1_5_9__205
[23]:http://store.steampowered.com/app/286690/?snr=1_5_9__205
[24]:http://store.steampowered.com/app/287390/?snr=1_5_9__205
[25]:http://store.steampowered.com/app/633460/?snr=1_5_9__205
[26]:http://store.steampowered.com/app/241930/?snr=1_5_9__205
[27]:http://store.steampowered.com/app/373420/?snr=1_5_9__205
[28]:http://store.steampowered.com/app/240760/?snr=1_5_9__205
[29]:http://store.steampowered.com/app/274520/?snr=1_5_9__205
[30]:http://store.steampowered.com/app/252950/?snr=1_5_9__205
[31]:http://store.steampowered.com/app/300380/?snr=1_5_9__205
[32]:http://store.steampowered.com/app/310560/?snr=1_5_9__205
[33]:http://store.steampowered.com/app/515220/?snr=1_5_9__205
[34]:http://store.steampowered.com/app/255220/?snr=1_5_9__205
[35]:http://store.steampowered.com/app/346110/?snr=1_5_9__205
[36]:http://store.steampowered.com/app/282070/?snr=1_5_9__205
[37]:http://store.steampowered.com/app/234140/?snr=1_5_9__205
[38]:http://store.steampowered.com/app/105600/?snr=1_5_9__205
[39]:http://store.steampowered.com/app/569480/?snr=1_5_9__205
[40]:http://store.steampowered.com/app/8930/?snr=1_5_9__205
[41]:http://store.steampowered.com/app/364360/?snr=1_5_9__205
[42]:http://store.steampowered.com/app/537800/?snr=1_5_9__205
[43]:http://store.steampowered.com/app/226840/?snr=1_5_9__205
[44]:http://store.steampowered.com/app/255710/?snr=1_5_9__205
[45]:http://store.steampowered.com/app/268500/?snr=1_5_9__205
[46]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Fbest-linux-games-steam%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[47]:https://twitter.com/share?original_referer=/&text=30+Best+Linux+Games+On+Steam+You+Should+Play+in+2017&url=https://itsfoss.com/best-linux-games-steam/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=ankushdas9
[48]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-linux-games-steam%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[49]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Fbest-linux-games-steam%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[50]:https://www.reddit.com/submit?url=https://itsfoss.com/best-linux-games-steam/&title=30+Best+Linux+Games+On+Steam+You+Should+Play+in+2017

View File

@ -0,0 +1,274 @@
如何在 Linux 上安装友好的交互式 shellFish
======
Fish<ruby>友好的交互式 shell<rt>Friendly Interactive SHell</rt></ruby> 的缩写,它是一个适用于类 Unix 系统的智能而用户友好的 shell。Fish 有着很多重要的功能,比如自动建议、语法高亮、可搜索的历史记录(像在 bash 中 `CTRL+r`)、智能搜索功能、极好的 VGA 颜色支持、基于 web 的设置方式、完善的手册页和许多开箱即用的功能。只管安装它并立即用起来吧。无需更多其他配置,你也不需要安装任何额外的附加组件/插件!
在这篇教程中,我们讨论如何在 Linux 中安装和使用 fish shell。
#### 安装 Fish
尽管 fish 是一个非常用户友好的并且功能丰富的 shell但并没有包括在大多数 Linux 发行版的默认仓库中。它只能在少数 Linux 发行版的官方仓库中找到,如 Arch Linux、Gentoo、NixOS 和 Ubuntu 等。然而,安装 fish 并不难。
在 Arch Linux 和它的衍生版上,运行以下命令来安装它。
```
sudo pacman -S fish
```
在 CentOS 7 上以 root 运行以下命令:
```
cd /etc/yum.repos.d/
wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_7/shells:fish:release:2.repo
yum install fish
```
在 CentOS 6 上以 root 运行以下命令:
```
cd /etc/yum.repos.d/
wget https://download.opensuse.org/repositories/shells:fish:release:2/CentOS_6/shells:fish:release:2.repo
yum install fish
```
在 Debian 9 上以 root 运行以下命令:
```
wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_9.0/Release.key -O Release.key
apt-key add - < Release.key
echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_9.0/ /' > /etc/apt/sources.list.d/fish.list
apt-get update
apt-get install fish
```
在 Debian 8 上以 root 运行以下命令:
```
wget -nv https://download.opensuse.org/repositories/shells:fish:release:2/Debian_8.0/Release.key -O Release.key
apt-key add - < Release.key
echo 'deb http://download.opensuse.org/repositories/shells:/fish:/release:/2/Debian_8.0/ /' > /etc/apt/sources.list.d/fish.list
apt-get update
apt-get install fish
```
在 Fedora 26 上以 root 运行以下命令:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_26/shells:fish:release:2.repo
dnf install fish
```
在 Fedora 25 上以 root 运行以下命令:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_25/shells:fish:release:2.repo
dnf install fish
```
在 Fedora 24 上以 root 运行以下命令:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_24/shells:fish:release:2.repo
dnf install fish
```
在 Fedora 23 上以 root 运行以下命令:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/shells:fish:release:2/Fedora_23/shells:fish:release:2.repo
dnf install fish
```
在 openSUSE 上以 root 运行以下命令:
```
zypper install fish
```
在 RHEL 7 上以 root 运行以下命令:
```
cd /etc/yum.repos.d/
wget https://download.opensuse.org/repositories/shells:fish:release:2/RHEL_7/shells:fish:release:2.repo
yum install fish
```
在 RHEL-6 上以 root 运行以下命令:
```
cd /etc/yum.repos.d/
wget https://download.opensuse.org/repositories/shells:fish:release:2/RedHat_RHEL-6/shells:fish:release:2.repo
yum install fish
```
在 Ubuntu 和它的衍生版上:
```
sudo apt-get update
sudo apt-get install fish
```
就这样了。是时候探索 fish shell 了。
### 用法
要从你默认的 shell 切换到 fish,请执行以下操作:
```
$ fish
Welcome to fish, the friendly interactive shell
```
你可以在 `~/.config/fish/config.fish` 上找到默认的 fish 配置(类似于 `.bashrc`)。如果它不存在,就创建它吧。
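如果该文件不存在,可以这样创建它(一个简单示例):
```
mkdir -p ~/.config/fish
touch ~/.config/fish/config.fish
```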
#### 自动建议
当我输入一个命令时,它会以浅灰色自动建议一个命令。所以,我只需要输入一个 Linux 命令的前几个字母,然后按下 `tab` 键来补全这个命令。
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png)][2]
如果有更多的可能性,它将会列出它们。你可以使用上/下箭头键从列表中选择列出的命令。在选择你想运行的命令后,只需按下右箭头键,然后按下 `ENTER` 运行它。
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png)][3]
无需 `CTRL+r` 了!正如你已知道的,我们通过按 `CTRL+r` 来反向搜索 Bash shell 中的历史命令。但在 fish shell 中是没有必要的。由于它有自动建议功能,只需输入命令的前几个字母,然后从历史记录中选择已经执行的命令。很酷,是吧。
#### 智能搜索
我们也可以使用智能搜索来查找一个特定的命令、文件或者目录。例如,我输入一个命令的一部分,然后按向下箭头键进行智能搜索,再次输入一个字母来从列表中选择所需的命令。
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png)][4]
#### 语法高亮
当你输入一个命令时,你会注意到语法高亮。请看下面的截图,对比我在 Bash shell 和 fish shell 中输入相同命令时的区别。
Bash
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png)][5]
Fish
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png)][6]
正如你所看到的,`sudo` 在 fish shell 中已经被高亮显示。此外,默认情况下它将以红色显示无效命令。
#### 基于 web 的配置方式
这是 fish shell 另一个很酷的功能。我们可以设置我们的颜色、更改 fish 提示符,并从网页上查看所有功能、变量、历史记录、键绑定。
要启动基于 web 的配置界面,只需输入:
```
fish_config
```
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png)][7]
#### 手册页补全
Bash 和其它 shell 支持可编程的补全,但只有 fish 可以通过解析已安装的手册页来自动生成补全。
为此,请运行:
```
fish_update_completions
```
示例输出将是:
```
Parsing man pages and writing completions to /home/sk/.local/share/fish/generated_completions/
3435 / 3435 : zramctl.8.gz
```
#### 禁用问候语
默认情况下fish 在启动时问候你“Welcome to fish, the friendly interactive shell”。如果你不想要这个问候消息可以禁用它。为此编辑 fish 配置文件:
```
vi ~/.config/fish/config.fish
```
添加以下行:
```
set -g -x fish_greeting ''
```
你也可以设置任意自定义的问候语,而不是禁用 fish 问候。
```
set -g -x fish_greeting 'Welcome to OSTechNix'
```
#### 获得帮助
这是另一个吸引我的令人印象深刻的功能。要在终端的默认 web 浏览器中打开 fish 文档页面,只需输入:
```
help
```
官方文档将会在你的默认浏览器中打开。另外,你可以使用手册页来显示任何命令的帮助部分。
```
man fish
```
#### 设置 fish 为默认 shell
非常喜欢它?太好了!设置它作为默认 shell 吧。为此,请使用命令 `chsh`
```
chsh -s /usr/bin/fish
```
在这里,`/usr/bin/fish` 是 fish shell 的路径。如果你不知道正确的路径,以下命令将会帮助你:
```
which fish
```
注销并且重新登录以使用新的默认 shell。
请记住,为 Bash 编写的许多 shell 脚本可能不完全兼容 fish。
要切换回 Bash只需运行
```
bash
```
如果你想 Bash 作为你的永久默认 shell运行
```
chsh -s /bin/bash
```
各位,这就是全部了。到这里,你应该已经对 fish shell 的使用有了一个基本的概念。如果你正在寻找一个 Bash 的替代品fish 可能是一个不错的选择。
Cheers!
资源:
* [fish shell 官网][1]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/install-fish-friendly-interactive-shell-linux/
作者:[SK][a]
译者:[kimii](https://github.com/kimii)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://fishshell.com/
[2]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-6.png
[5]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-3.png
[6]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-4.png
[7]:http://www.ostechnix.com/wp-content/uploads/2017/12/fish-5.png

View File

@ -1,130 +0,0 @@
Translating by chao-zhi
Be a force for good in your community
============================================================
>Find out how to give the gift of an out, learn about the power of positive intent, and more.
![Be a force for good in your community](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/people_remote_teams_world.png?itok=wI-GW8zX "Be a force for good in your community")
>Image by : opensource.com
Passionate debate is among the hallmark traits of open source communities and open organizations. On our best days, these debates are energetic and constructive. They are heated, yet moderated with humor and goodwill. All parties remain focused on facts, on the shared purpose of collaborative problem-solving, and driving continuous improvement. And for many of us, they're just plain fun.
On our worst days, these debates devolve into rehashing the same old arguments on the same old topics. Or we turn on one another, delivering insults—passive-aggressive or outright nasty, depending on our style—and eroding the passion, trust, and productivity of our communities.
We've all been there, watching and feeling helpless, as a community conversation begins to turn toxic. Yet, as [DeLisa Alexander recently shared][1], there are so many ways that each and every one of us can be a force for good in our communities.
In the first article of this "open culture" series, I will share a few strategies for how you can intervene, in that crucial moment, and steer everyone to a more positive and productive place.
### Don't call people out. Call them up.
Recently, I had lunch with my friend and colleague, [Mark Rumbles][2]. Over the years, we've collaborated on a number of projects that support open culture and leadership at Red Hat. On this day, Mark asked me how I was holding up, as he saw I'd recently intervened in a mailing list conversation when I saw the debate was getting ugly.
Fortunately, the dust had long since settled, and in fact I'd almost forgotten about the conversation. Nevertheless, it led us to talk about the challenges of open and frank debate in a community that has thousands of members.
>One of the biggest ways we can be a force for good in our communities is to respond to conflict in a way that compels everyone to elevate their behavior, rather than escalate it.
Mark said something that struck me as rather insightful. He said, "You know, as a community, we are really good at calling each other out. But what I'd like to see us do more of is calling each other _up_."
Mark is absolutely right. One of the biggest ways we can be a force for good in our communities is to respond to conflict in a way that compels everyone to elevate their behavior, rather than escalate it.
### Assume positive intent
We can start by making a simple assumption when we observe poor behavior in a heated conversation: It's entirely possible that there are positive intentions somewhere in the mix.
This is admittedly not an easy thing to do. When I see signs that a debate is turning nasty, I pause and ask myself what Steven Covey calls The Humanizing Question:
"Why would a reasonable, rational, and decent person do something like this?"
Now, if this is one of your "usual suspects"—a community member with a propensity toward negative behavior--perhaps your first thought is, "Um, what if this person _isn't_ reasonable, rational, or decent?"
Stay with me, now. I'm not suggesting that you engage in some touchy-feely form of self-delusion. It's called The Humanizing Question not only because asking it humanizes the other person, but also because it humanizes _you_.
And that, in turn, helps you respond or intervene from the most productive possible place.
### Seek to understand the reasons for community dissent
When I ask myself why a reasonable, rational, and decent person might do something like this, time and again, it comes down to the same few reasons:
* They don't feel heard.
* They don't feel respected.
* They don't feel understood.
One easy positive intention we can apply to almost any poor behavior, then, is that the person wants to be heard, respected, or understood. That's pretty reasonable, I suppose.
By standing in this more objective and compassionate place, we can see that their behavior is _almost certainly _**_not_**_ going to help them get what they want, _and that the community will suffer as a result . . . without our help.
For me, that inspires a desire to help everyone get "unstuck" from this ugly place we're in.
Before I intervene, though, I ask myself a follow-up question: _What other positive intentions might be driving this behavior?_
Examples that readily jump to mind include:
* They are worried that we're missing something important, or we're making a mistake, and no one else seems to see it.
* They want to feel valued for their contributions.
* They are burned out, because of overworking in the community or things happening in their personal life.
* They are tired of something being broken and frustrated that no one else seems to see the damage or inconvenience that creates.
* ...and so on and so forth.
With that, I have a rich supply of positive intent that I can ascribe to their behavior. I'm ready to reach out and offer them some help, in the form of an out.
### Give the gift of an out
What is an out? Think of it as an escape hatch. It's a way to exit the conversation, or abandon the poor behavior and resume behaving like a decent person, without losing face. It's calling someone up, rather than calling them out.
You've probably experienced this, as some point in your life, when _you_ were behaving poorly in a conversation, ranting and hollering and generally raising a fuss about something or another, and someone graciously offered _you_ a way out. Perhaps they chose not to "take the bait" by responding to your unkind choice of words, and instead, said something that demonstrated they believed you were a reasonable, rational, and decent human being with positive intentions, such as:
> _So, uh, what I'm hearing is that you're really worried about this, and you're frustrated because it seems like no one is listening. Or maybe you're concerned that we're missing the significance of it. Is that about right?_
And here's the thing: Even if that wasn't entirely true (perhaps you had less-than-noble intentions), in that moment, you probably grabbed ahold of that life preserver they handed you, and gladly accepted the opportunity to reframe your poor behavior. You almost certainly pivoted and moved to a more productive place, likely without even recognizing it.
Perhaps you said something like, "Well, it's not that exactly, but I just worry that we're headed down the wrong path here, and I get what you're saying that as community, we can't solve every problem at the same time, but if we don't solve this one soon, bad things are going to happen…"
In the end, the conversation almost certainly began to move to a more productive place, or you all agreed to disagree.
We all have the opportunity to offer an upset person a safe way out of that destructive place they're operating from. Here's how.
### Bad behavior or bad actor?
If the person is particularly agitated, they may not hear or accept the first out you hand them. That's okay. Most likely, their lizard brain--that prehistoric amygdala that was once critical for human survival—has taken over, and they need a few more moments to recognize you're not a threat. Just keep gently but firmly treating them as if they _were_ a rational, reasonable, decent human being, and watch what happens.
In my experience, these community interventions end in one of three ways:
Most often, the person actually _is_ a reasonable person, and soon enough, they gratefully and graciously accept the out. In the process, everyone breaks out of the black vs. white, "win or lose" mindset. People begin to think up creative alternatives and "win-win" outcomes that benefit everyone.
>Why would a reasonable, rational, and decent person do something like this?
Occasionally, the person is not particularly reasonable, rational, or decent by nature, but when treated with such consistent, tireless, patient generosity and kindness (by you), they are shamed into retreating from the conversation. This sounds like, "Well, I think I've said all I have to say. Thanks for hearing me out." Or, for less enlightened types, "Well, I'm tired of this conversation. Let's drop it." (Yes, please. Thank you.)
Less often, the person is what's known as a _bad actor_, or in community management circles, a pot-stirrer. These folks do exist, and they thrive on drama. Guess what? By consistently engaging in a kind, generous, community-calming way, and entirely ignoring all attempts to escalate the situation, you effectively shift the conversation into an area that holds little interest for them. They have no choice but to abandon it. Winners all around.
That's the power of assuming positive intent. By responding to angry and hostile words with grace and dignity, you can diffuse a flamewar, untangle and solve tricky problems, and quite possibly make a new friend or two in the process.
Am I successful every time I apply this principle? Heck, no. But I never regret the choice to assume positive intent. And I can vividly recall a few unfortunate occasions when I assumed negative intent and responded in a way that further contributed to the problem.
Now it's your turn. I'd love to hear about some strategies and principles you apply, to be a force for good when conversations get heated in your community. Share your thoughts in the comments below.
Next time, we'll explore more ways to be a force for good in your community, and I'll share some tips for handling "Mr. Grumpy."
--------------------------------------------------------------------------------
作者简介:
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot-square_0.jpg?itok=FS97b9YD)
Rebecca Fernandez is a Principal Employment Branding + Communications Specialist at Red Hat, a contributor to The Open Organization book, and the maintainer of the Open Decision Framework. She is interested in open source and the intersection of the open source way with business management models. Twitter: @ruhbehka
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/17/1/force-for-good-community
作者:[Rebecca Fernandez][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rebecca
[1]:https://opensource.com/business/15/5/5-ways-promote-inclusive-environment
[2]:https://twitter.com/leadership_365

View File

@ -1,290 +0,0 @@
GOOGLE CHROME: ONE YEAR IN
========================================
Four weeks ago, emailed notice of a free massage credit revealed that I've been at Google for a year. Time flies when you're [drinking from a firehose][3].

When I mentioned my anniversary, friends and colleagues from other companies asked what I've learned while working on Chrome over the last year. This rambling post is an attempt to answer that question.
### NON-MASKABLE INTERRUPTS
While I _started_ at Google just over a year ago, I haven't actually _worked_ there for a full year yet. My second son (Nate) was born a few weeks early, arriving ten workdays after my first day of work.

I took full advantage of Google's very generous twelve weeks of paternity leave, taking a few weeks after we brought Nate home, and the balance as spring turned to summer. In a year, we went from having an enormous infant to an enormous toddler who's taking his first steps and trying to emulate everything his 3-year-old brother (Noah) does.
![Baby at the hospital](https://textplain.files.wordpress.com/2017/01/image55.png?w=318&h=468 "New Release")
![First birthday cake](https://textplain.files.wordpress.com/2017/01/image56.png?w=484&h=466)
I mention this because it's had a huge impact on my work over the last year—_much_ more than I'd naively expected.

When Noah was born, I'd been at Telerik for [almost a year][4], and I'd been hacking on Fiddler alone for nearly a decade. I took a short paternity leave, and my coding hours shifted somewhat (I started writing code late at night between bottle feeds), but otherwise my work wasn't significantly impacted.

As I pondered joining Google Chrome's security team, I expected pretty much the same—a bit less sleep, a bit of scheduling awkwardness, but I figured things would fall into a good routine in a few months.
Things turned out somewhat differently.
Perhaps sensing that my life had become too easy, fate decided that 2016 was the year I'd get sick. _Constantly_. (Our theory is that Noah was bringing home germs from pre-school; he got sick a bunch too, but recovered quickly each time.) I was sick more days in 2016 than I was in the prior decade, including a month-long illness in the spring. _That_ ended with a bout of pneumonia that concluded with a doctor-mandated seven days away from the office. As I coughed my brains out on the sofa at home, I derived some consolation in thinking about Google's generous life insurance package. But for the most part, my illnesses were minor—enough to keep me awake at night and coughing all day, but otherwise able to work.

Mathematically, you might expect two kids to be twice as much work as one, but in our experience, it hasn't worked out that way. Instead, it varies from 80% (when the kids happily play together) to 400% (when they're colliding like atoms in a runaway nuclear reactor). Thanks to my wife's heroic efforts, we found a workable _daytime_ routine. The nights, however, have been unexpectedly difficult. Big brother Noah is at an age where he usually sleeps through the night, but he's sure to wake me up every morning at 6:30am sharp. Fortunately, Nate has been a pretty good sleeper, but even now, at just over a year old, he usually still wakes up and requires attention twice a night or so.

I can't _remember_ the last time I had eight hours of sleep in a row. And that's been _extremely_ challenging… because I can't remember _much else_ either. Learning new things when you don't remember them the next day is a brutal, frustrating process.

When Noah was a baby, I could simply sleep in after a long night. Even if I didn't get enough sleep, it wouldn't really matter—I'd been coding in C# on Fiddler for a decade, and deadlines were few and far between. If all else failed, I'd just avoid working on any especially gnarly code and spend the day handling support requests, updating graphics, or doing other simple and straightforward grunt work from my backlog.
Things are much different on Chrome.
### ROLES
When I first started talking to the Chrome Security team about coming aboard, it was for a role on the Developer Advocacy team. I'd be driving HTTPS adoption across the web and working with big sites to unblock their migrations in any way I could. I'd already been doing the first half of that for fun (delivering [talks][5] at conferences like Codemash and [Velocity][6]), and I'd previously spent eight years as a Security Program Manager for the Internet Explorer team. I had _tons_ of relevant experience. Easy peasy.
I interviewed for the Developer Advocate role. The hiring committee kicked back my packet and said I should interview as a Technical Program Manager instead.
I interviewed as a Technical Program Manager. The hiring committee kicked back my packet and said I should interview as a Developer Advocate instead.
The Chrome team resolved the deadlock by hiring me as a Senior Software Engineer (SWE).
I was initially _very_ nervous about this, having not written any significant C++ code in over a decade—except for one [in-place replacement][7] of IE9's caching logic which I'd coded as a PM because I couldn't find a developer to do the work. But eventually I started believing in my own pep talk: _"I mean, how hard could it be, right? I've been troubleshooting code in web browsers for almost two decades now. I'm not a complete dummy. I'll ramp up. It'll be rough, but it'll work out. Hell, I started writing Fiddler knowing neither C# nor HTTP, and _that_ turned out pretty good. I'll buy some books and get caught up. There's no way that Google would have just hired me as a C++ developer without asking me any C++ coding questions if it wasn't going to all be okay. Right? Right?!?"_
### THE FIREHOSE
I knew I had a lot to learn, and fast, but it took me a while to realize just how much else I didn't know.

Google's primary development platform is Linux, an OS that I would install every few years, play with for a day, then forget about. My new laptop was a Mac, a platform I'd used a bit more, but still one for which I was about a twentieth as proficient as I was on Windows. The Chrome Windows team made a half-hearted attempt to get me to join their merry band, but warned me honestly that some of the tooling wasn't quite as good as it was on Linux and it'd probably be harder for me to get help. So I tried to avoid Windows for the first few months, ordering a puny Windows machine that took around four times longer to build Chrome than my obscenely powerful Linux box (with its 48 logical cores). After a few months, I gave up on trying to avoid Windows and started using it as my primary platform. I was more productive, but incredibly slow builds remained a problem for a few months. Everyone told me to just order _another_ obscenely powerful box to put next to my Linux one, but it felt wrong to have hardware at my desk that collectively cost more than my first car—especially when, at Microsoft, I bought all my own hardware. I eventually mentioned my cost/productivity dilemma to a manager, who noted I was getting paid a Google engineer's salary and then politely asked me if I was just really terrible at math. I ordered a beastly Windows machine and now my builds scream. (To the extent that _any_ C++ builds can scream, of course. At Telerik, I was horrified when a full build of Fiddler slowed to a full 5 seconds on my puny Windows machine; my typical Chrome build today still takes about 15 minutes.)

Beyond learning different operating systems, I'd never used Google's apps before (Docs/Sheets/Slides); luckily, I found these easy to pick up, although I still haven't fully figured out how Google Drive file organization works. Google Docs, in particular, is so good that I've pretty much given up on Microsoft Word (which headed downhill after the 2010 version). Google Keep is a low-powered alternative to OneNote (which is, as far as I can tell, banned because it syncs to Microsoft servers) and I haven't managed to get it to work well for my needs. Google Plus still hasn't figured out how to support pasting of images via CTRL+V, a baffling limitation for something meant to compete in the space… hell, even _Microsoft Yammer_ supports that, for god's sake. The only real downside to the web apps is that tab/window management on modern browsers is still a very much unsolved problem (but more on that in a bit).

But these speedbumps all pale in comparison to Gmail. Oh, Gmail. As a program manager at Microsoft, pretty much your _entire life_ is in your inbox. After twelve years with Outlook and Exchange, switching to Gmail was a train wreck. "_What do you mean, there aren't folders? How do I mark this message as low priority? Where's the button to format text with strikethrough? What do you mean, I can't drag an email to my calendar? What the hell does this Archive thing do? Where's that message I was just looking at? Hell, where did my Gmail tab even go—it got lost in a pile of sixty other tabs across four top-level Chrome windows. WTH??? How does anyone get anything done?_"
### COMMUNICATION AND REMOTE WORK
While Telerik had an office in Austin, I didn't interact with other employees very often, and when I did they were usually in other offices. I thought I had a handle on remote work, but I really didn't. Working with a remote team on a daily basis is just _different_.

With communication happening over mail, IRC, Hangouts, bugs, document markup comments, GVC (video conferencing), G+, and discussion lists, it was often hard to [figure out which mechanisms to use][8], let alone which recipients to target. Undocumented pitfalls abounded (many discussion groups were essentially abandoned while others were unexpectedly broad; turning on chat history was deemed a "no-no" for document retention reasons).

It often took a bit of research to even understand who the various communication participants were and how they related to the projects at hand.

After years of email culture at Microsoft, I grew accustomed to a particular style of email, and Google's is just _different_. Mail threads were long, with frequent additions of new recipients and many terse remarks. Many times, I'd reply privately to someone on a side thread, with a clarifying question, or suggesting a counterpoint to something they said. The response was often "_Hey, this just went to me. Mind adding on the main thread?_"

I'm working remotely, with peers around the world, so real-time communication with my team is essential. Some Chrome subteams use Hangouts, but the Security team largely uses IRC.
[![XKCD comic on IRC](https://textplain.files.wordpress.com/2017/01/image30.png?w=1320&h=560 "https://xkcd.com/1782/")][9]
Now, I've been chatting with people online since BBSes were a thing (I've got a five-digit ICQ number somewhere), but my knowledge of IRC was limited to the fact that it was a common way of taking over suckers' machines with buffer overflows in the '90s. My new teammates tried to explain how to IRC repeatedly: "_Oh, it's easy, you just get this console IRC client. No, no, you don't run it on your own workstation, that'd be crazy. You wouldn't have history! You provision a persistent remote VM on a machine in Google's cloud, then SSH to that, then you run screen and then you run your IRC client in that. Easy peasy._"

Getting onto IRC remained on my "TODO" list for five months before I finally said "F- it", installed [HexChat][10] on my Windows box, disabled automatic sleep, and called it done. It's worked fairly well.
### GOOGLE DEVELOPER TOOLING
When an engineer first joins Google, they start with a week or two of technical training on the Google infrastructure. I've worked in software development for nearly two decades, and I've never even dreamed of the development environment Google engineers get to use. I felt like Charlie Bucket on his tour of Willy Wonka's Chocolate Factory—astonished by the amazing and unbelievable goodies available at any turn. The computing infrastructure was something out of Star Trek, the development tools were slick and amazing, the _process_ was jaw-dropping.

While I was doing a "hello world" coding exercise in Google's environment, a former colleague from the IE team pinged me on Hangouts chat, probably because he'd seen my tweets about feeling like an imposter as a SWE. He sent me a link to click, which I did. Code from Google's core advertising engine appeared in my browser. Google's engineers have access to nearly all of the code across the whole company. This alone was astonishing—in contrast, I'd initially joined the IE team so I could get access to the networking code to figure out why the Office Online team's website wasn't working. "Neat, I can see everything!" I typed back. "Push the Analyze button," he instructed. I did, and some sort of automated analyzer emitted a report identifying a few dozen performance bugs in the code. "Wow, that's amazing!" I gushed. "Now, push the Fix button," he instructed. "Uh, this isn't some sort of security red team exercise, right?" I asked. He assured me that it wasn't. I pushed the button. The code changed to fix some unnecessary object copies. "Amazing!" I effused. "Click Submit," he instructed. I did, and watched as the system compiled the code in the cloud, determined which tests to run, and ran them. Later that afternoon, an owner of the code in the affected folder typed LGTM (Googlers approve changes by typing the acronym for Looks Good To Me) on the change list I had submitted, and my change was live in production later that day. I was, in a word, gobsmacked. That night, I searched the entire codebase for [misuse][11] of an IE cache control token and proposed fixes for the instances I found. I also narcissistically searched for my own name and found a bunch of references to blog posts I'd written about assorted web development topics.

Unfortunately for Chrome Engineers, the introduction to Google's infrastructure is followed by a major letdown—because Chromium is open-source, the Chrome team itself doesn't get to take advantage of most of Google's internal goodies. Development of Chrome instead resembles C++ development at most major companies, albeit with an automatically deployed toolchain and enhancements like a web-based code review tool and some super-useful scripts. The most amazing of these is called [bisect-builds][12], and it allows a developer to very quickly discover what build of Chrome introduced a particular bug. You just give it a "known good" build number and a "known bad" build number and it automatically downloads and runs the minimal number of builds to perform a binary search for the build that introduced a given bug:
![Console showing bisect builds running](https://textplain.files.wordpress.com/2017/01/image31.png?w=1320&h=514 "Binary searching for regressions")
Firefox has [a similar system][13], but I'd've killed for something like this back when I was reproducing and reducing bugs in IE. While it's easy to understand how the system functions, it works so well that it feels like magic. Other useful scripts include the presubmit checks that run on each change list before you submit them for code review—they find and flag various style violations and other problems.
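To give a sense of it, a bisect session is kicked off with a single command along these lines (a sketch based on the public bisect-builds documentation; the revision numbers here are made up, and anything after `--` is passed to the launched Chrome):

```
python tools/bisect-builds.py -a linux64 -g 330000 -b 330200 -- http://example.com/repro.html
```

Each step downloads a prebuilt snapshot, launches it, and asks whether the bug reproduces, so a range of hundreds of builds collapses in about a dozen runs.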
Compilation itself typically uses a local compiler; on Windows, we use the MSVC command line compiler from Visual Studio 2015 Update 3, although work is underway to switch over to [Clang][14]. Compiling and linking all of Chrome takes quite some time, although on my new beastly dev boxes it's not _too_ bad. Googlers do have one special perk—we can use Goma (a distributed compiler system that runs on Google's amazing internal cloud) but I haven't taken advantage of that so far.

For bug tracking, Chrome recently moved to [Monorail][15], a straightforward web-based bug tracking system. It works fairly well, although it is somewhat more cumbersome than it needs to be and would be much improved with [a few tweaks][16]. Monorail is open-source, but I haven't committed to it myself yet.

For code review, Chrome presently uses [Rietveld][17], a web-based system, but this is slated to change in the near(ish) future. Like Monorail, it's pretty straightforward although it would benefit from some minor usability tweaks; I committed one trivial change myself, but the pending migration to a different system means that it isn't likely to see further improvements.

As an open-source project, Chromium has quite a bit of public [documentation for developers][18], including [Design Documents][19]. Unfortunately, Chrome moves so fast that many of the design documents are out-of-date, and it's not always obvious what's current and what was replaced long ago. The team does _value_ engineers' investment in the documents, however, and various efforts are underway to update the documents and reduce Chrome's overall architectural complexity. I expect these will be ongoing battles forever, just like in any significant active project.
### WHAT IVE DONE
"That's all well and good," my reader asks, "but _what have you done_ in the last year?"
### I WROTE SOME CODE
My first check-in to Chrome [landed][20] in February; it was a simple adjustment to limit Public-Key-Pins to 60 days. Assorted other checkins trickled in through the spring before I went on paternity leave. The most _fun_ fix I did cleaned up a tiny [UX glitch][21] that sat unnoticed in Chrome for almost a decade; it was mostly interesting because it was a minor thing that I'd tripped over for years, including back in IE. (The root cause was arguably that MSDN documentation about DWM lied; I fixed the bug in Chrome, sent the fix to IE, and asked MSDN to fix their docs.)

I fixed a number of [minor][22] [security][23] [bugs][24], and lately I've been working on [UX issues][25] related to Chrome's HTTPS user experience. Back in 2005, I wrote [a blog post][26] complaining about websites using HTTPS incorrectly, and now, just over a decade later, Chrome and Firefox are launching UI changes to warn users when a site is collecting sensitive information on pages which are Not Secure; I'm delighted to have a small part in those changes.

Having written a handful of Internet Explorer Extensions in the past, I was excited to discover the joy of writing Chrome extensions. Chrome extensions are fun, simple, and powerful, and there's none of the complexity and crashes of COM.
[![My 3 Chrome Extensions](https://textplain.files.wordpress.com/2017/01/image201.png?w=1288&h=650 "My 3 Chrome Extensions")][27]
My first and most significant extension is the moarTLS Analyzer; it's related to my HTTPS work at Google and it's proven very useful in discovering sites that could improve their security. I [blogged about it][28] and the process of [developing it][29] last year.

Because I run several different Chrome instances on my PC (and they update daily or weekly), I found myself constantly needing to look up the Chrome version number for bug reports and the like. I wrote a tiny extension that shows the version number in a button on the toolbar (so it's captured in screenshots too!):

![Show Chrome Version screenshot](https://textplain.files.wordpress.com/2017/02/image.png?w=886&h=326 "Show Chrome Version")

More than once, I spent an hour or so trying to reproduce and reduce a bug that had been filed against Chrome. When I found out the cause, I'd jubilantly add my notes to the issue in the Monorail bug tracker, click "Save changes," and discover that someone more familiar with the space had beaten me to the punch and figured it out while I'd had the bug open on my screen. Adding an "Issue has been updated" alert to the bug tracker itself seemed like the right way to go, but it would require some changes that I wasn't able to commit on my own. So, instead I built an extension that provides such alerts within the page until the [feature][30] can be added to the tracker itself.
Each of these extensions was a joy to write.
### I FILED SOME BUGS
I'm a diligent self-hoster, and I run Chrome Canary builds on all of my devices. I submit crash reports and [file bugs][31] with as much information as I can. My proudest moment was in helping narrow down a bizarre and intermittent problem users had with Chrome on Windows 10, where Chrome tabs would crash on every startup until you rebooted the OS. My [blog post][32] explains the full story, and encourages others to file bugs as they encounter them.
### I TRIAGED MORE BUGS
I've been developing software for Windows for just over two decades, and inevitably I've learned quite a bit about it, including the undocumented bits. That's given me a leg up in understanding bugs in the Windows code. Some of the most fun include issues in Drag and Drop, like this [gem][33] of a bug that means that you can't drop files from Chrome to most applications in Windows. More meaningful [bugs][34] [relate][35] [to][36] [problems][37] with Windows' Mark-of-the-Web security feature (something I've [blogged][38] [about][39] [several][40] times).
### I TOOK SHERIFF ROTATIONS
Google teams have the notion of sheriffs—a rotating assignment that ensures that important tasks (like triaging incoming security bugs) always have a defined owner, without overwhelming any single person. Each Sheriff has a term of ~1 week where they take on additional duties beyond their day-to-day coding, designing, testing, etc.

The Sheriff system has some real benefits—perhaps the most important of which is creating a broad swath of people experienced and qualified in making triage decisions around security vulnerabilities. The alternative is to leave such tasks to a single owner, rapidly increasing their [bus factor][41] and thus the risk to the project. (I know this from first-hand experience. After IE8 shipped, I was on my way out the door to join another team. Then IE's Security PM left, leaving a gaping hole that I felt obliged to stay around to fill. It worked out okay for me and the team, but it was tense all around.)

I'm on two sheriff rotations: [Enamel][42] (my subteam) and the broader Chrome Security Sheriff.

The Enamel rotation's tasks are akin to what I used to do as a Program Manager at Microsoft—triage incoming bugs, respond to questions in the [Help Forums][43], and generally act as a point of contact for my immediate team.

In contrast, the Security Sheriff rotation is more work, and somewhat more exciting. The Security Sheriff's [duties][44] include triaging all bugs of type "Security", assigning priority and severity, and finding an owner for each. Most security bugs are automatically reported by [our fuzzers][45] (a tireless robot army!), but we also get reports from the public and from Chrome team members and [Project Zero][46] too.

At Microsoft, incoming security bug reports were first received and evaluated by the Microsoft Security Response Center (MSRC); valid reports were passed along to the IE team after some level of analysis and reproduction was undertaken. In general, all communication was done through MSRC, and the turnaround cycle on bugs was _typically_ on the order of weeks to months.

In contrast, anyone can [file a security bug][47] against Chrome, and every week lots of people do. One reason for that is that Chrome has a [Vulnerability Rewards program][48] which pays out up to $100K for reports of vulnerabilities in Chrome and Chrome OS. Chrome paid out just under $1M USD in bounties [last year][49]. This is an _awesome_ incentive for researchers to responsibly disclose bugs directly to us, and the bounties are _much_ higher than those of nearly any other project.
In his "[Hacker Quantified Security][50]" talk at the O'Reilly Security conference, HackerOne CTO and Cofounder Alex Rice showed the following chart of bounty payout size for vulnerabilities when explaining why he was using a Chromebook. Apologies for the blurry photo, but the line at the top shows Chrome OS, with the 90th percentile line miles below as severity rises to Critical:
[![Vulnerability rewards by percentile. Chrome is WAY off the chart.](https://textplain.files.wordpress.com/2017/01/image_thumb6.png?w=962&h=622 "Chrome Vulnerability Rewards are Yuuuuge")][51]
With a top bounty of $100,000 for an exploit or exploit chain that fully compromises a Chromebook, researchers are much more likely to send their bugs to us than to try to find a buyer on the black market.

Bug bounties are great, except when they're not. Unfortunately, many filers don't bother to read the [Chrome Security FAQ][52], which explains what constitutes a security vulnerability and the great many things that do not. Nearly every week, we have at least one person (and often more) file a bug noting "_I can use the Developer Tools to read my own password out of a webpage. Can I have a bounty?_" or "_If I install malware on my PC, I can see what happens inside Chrome_" or variations of these.

Because we take security bug reports very seriously, we often spend a lot of time on what seem like garbage filings to verify that there's not just some sort of communication problem. This exposes one downside of the sheriff process—the lack of continuity from week to week.

In the fall, we had one bug reporter file a new issue every week that was just a collection of security-related terms (XSS! CSRF! UAF! EoP! Dangling Pointer! Script Injection!) lightly wrapped in prose, including screenshots, snippets from websites, console output from developer tools, and the like. Each week, the sheriff would investigate, ask for more information, and engage in a fruitless back-and-forth with the filer trying to figure out what claim was being made. Eventually I caught on to what was happening and started monitoring the sheriff's queue, triaging the new findings directly and sparing the sheriff of the week. But even today we still catch folks who look up old bug reports (usually Won't Fixed issues), copy/paste the content into new bugs, and file them into the queue. It's frustrating, but coming from a closed bug database, I'd choose the openness of the Chrome bug database every time.

Getting ready for my first Sheriff rotation, I started watching the incoming queue a few months earlier and felt ready for my first rotation in September. Day One was quiet, with a few small issues found by fuzzers and one or two junk reports from the public, which I triaged away with pointers to the "_Why isn't this a vulnerability_" entries in the Security FAQ. I spent the rest of the day writing a fix for a lower-priority security [bug][53] that had been filed a month before. A pretty successful day, I thought.

Day Two was more interesting. Scanning the queue, I saw a few more fuzzer issues and [one external report][54] whose text started with "Here is a Chrome OS exploit chain." The report was about two pages long, and had a forty-two-page PDF attachment explaining the four exploits the finder had used to take over a fully-patched Chromebook.
![Star Wars trench run photo](https://textplain.files.wordpress.com/2017/02/image1.png?w=478&h=244 "Defenses can't keep up!")
Watching Luke's X-wing take out the Death Star in Star Wars was no more exciting than reading the PDF's tale of how a single-byte memory overwrite in the DNS resolver code could weave its way through the many-layered security features of the Chromebook and achieve a full compromise. It was like the most amazing magic trick you've ever seen.

I hopped over to IRC. "So, do we see full compromises of Chrome OS every week?" I asked innocently.

"No. Why?" came the reply from several corners. I pasted in the bug link and a few moments later the replies started flowing in: "OMG. Amazing!" Even guys from Project Zero were impressed, and they're magicians who build exploits like this (usually for other products) all the time. The researcher had found one small bug and a variety of neglected components that were thought to be unreachable and put together a deadly chain.

The first patches were out for code review that evening, and by the next day, we'd reached out to the open-source owner of the DNS component with the 1-byte overwrite bug so he could release patches for the other projects using his code. Within a few days, fixes to other components landed and had been ported to all of the supported versions of Chrome OS. Two weeks later, the Chrome Vulnerability Rewards team added the [reward-100000][55] tag, the only bug so far to be so marked. Four weeks after that, I had to hold my tongue when Alex mentioned that "no one's ever claimed that $100,000 bounty" during his "Hacker Quantified Security" talk. Just under 90 days from filing, the bug was unrestricted and made available for public viewing.

The remainder of my first Sheriff rotation was considerably less exciting, although still interesting. I spent some time looking through the components the researcher had abused in his exploit chain and filed a few bugs. Ultimately, the most risky component he used was removed entirely.
### OUTREACH AND BLOGGING
Beyond working on the Enamel team (focused on Chrome's security UI surface), I also work on the "MoarTLS" project, designed to help encourage and assist the web as a whole in moving to HTTPS. This takes a number of forms—I help maintain the [HTTPS on Top Sites Report Card][56], and I do consultations and HTTPS audits with major sites as they enable HTTPS. I discover, reduce, and file bugs on Chrome's and other browsers' support of features like Upgrade-Insecure-Requests. I publish a [running list of articles][57] on why and how sites should enable TLS. I hassle teams all over Google (and the web in general) to enable HTTPS on every single hyperlink they emit. I responsibly disclosed security bugs in a number of products and sites, including [a vulnerability][58] in Hillary Clinton's fundraising emails. I worked to send a notification to many, many, many thousands of sites collecting user information non-securely, warning them of the [UI changes in Chrome 56][59].

When I applied to Google for the Developer Advocate role, I expected I'd be delivering public talks _constantly_, but as a SWE I've only given a few talks, including my [Migrating to HTTPS talk][60] at the first O'Reilly Security Conference. I had a lot of fun at that conference, catching up with old friends from the security community (mostly ex-Microsofties). I also went to my first [Chrome Dev Summit][61], where I didn't have a public talk (my colleagues did) but I did get to talk to some major companies about deploying HTTPS.

I also blogged [quite a bit][62]. At Microsoft, I started blogging because I got tired of repeating myself, and because our Exchange server and document retention policies had started making it hard or impossible to find old responses—I figured "Well, if I publish everything on the web, Google will find it, and Internet Archive will back it up."

I've kept blogging since leaving Microsoft, and I'm happy that I have, even though my reader counts are much lower than they were at Microsoft. I've managed to mostly avoid trouble, although my posts are not entirely uncontroversial. At Microsoft, they wouldn't let me publish [this post][63] (because it was too frank); in my first month at Google, I got a phone call at home (during the first portion of my paternity leave) from a Google Director complaining that I'd written [something][64] that was too harsh about a change Microsoft had made. But for the most part, my blogging seems not to ruffle too many feathers.
### TIDBITS
* Food at Google is generally _really_ good; I'm at a satellite office in Austin, so the selection is much smaller than on the main campuses, but the rotating menu is fairly broad and always has at least three major options. And the breakfasts! I gained about 15 pounds in my first few months, but my pneumonia took it off and I've restrained my intake since I came back.
* At Microsoft, I always sneered at companies offering free food ("I'm an adult professional. I can pay for my lunch."), but it's definitely convenient to not have to hassle with payments. And until the government closes the loophole, it's a way to increase employees' compensation without getting taxed.
* For the first three months, I was impressed and slightly annoyed that all of the snack options in Google's micro-kitchens are healthy (e.g. fruit)—probably a good thing, since I sit about twenty feet from one. Then I saw someone open a drawer and pull out some M&Ms, and I learned the secret—all of the junk food is in drawers. The selection is impressive and ranges from the popular to the high end.
* Google makes heavy use of the "open-office concept." I think this makes sense for some teams, but it's not at all awesome for me. I'd gladly take a 10% salary cut for a private office. I doubt I'm alone.
* Coworkers at Google range from very smart to insanely off-the-scales-smart. Yet, almost all of them are humble, approachable, and kind.
* Google, like Microsoft, offers gift matching for charities. This is an awesome perk, and one I aim to max out every year. I'm awed by people who go [far][1] beyond that.
* **Window Management.** I mentioned earlier that one downside of web-based tools is that it's hard to even _find_ the right tab when I've got dozens of open tabs that I'm flipping between. The [Quick Tabs extension][2] is one great mitigation; it shows your tabs in a searchable, most-recently-used list in a convenient dropdown:
[![QuickTabs Extension](https://textplain.files.wordpress.com/2017/01/image59.png?w=526&h=376 "A Searchable MRU of open tabs. Yes please!")][65]
Another trick that I learned just this month is that you can instruct Chrome to open a site in "App" mode, where it runs in its own top-level window (with no other tabs), showing the site's icon as the icon in the Windows taskbar. It's easy:

On Windows, run chrome.exe --app=https://mail.google.com

While on OS X, run open -n -b com.google.Chrome --args --app=[https://news.google.com][66]

_Tip: The easy way to create a shortcut to the current page in app mode is to click the Chrome Menu > More Tools > Add to {shelf/desktop} and tick the "Open as Window" checkbox._
I now have [SlickRun][67] MagicWords set up for **mail**, **calendar**, and my other critical applications.
--------------------------------------------------------------------------------
via: https://textslashplain.com/2017/02/01/google-chrome-one-year-in/
作者:[ericlaw][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://textslashplain.com/author/ericlaw1979/
[1]:https://www.jefftk.com/p/leaving-google-joining-wave
[2]:https://chrome.google.com/webstore/detail/quick-tabs/jnjfeinjfmenlddahdjdmgpbokiacbbb
[3]:https://textslashplain.com/2015/12/23/my-next-adventure/
[4]:http://sdtimes.com/telerik-acquires-fiddler-debugger-along-with-its-creator/
[5]:https://bayden.com/dl/Codemash2015-ericlaw-https-in-2015.pptx
[6]:https://conferences.oreilly.com/velocity/devops-web-performance-2015/public/content/2015/04/16-https-stands-for-user-experience
[7]:https://textslashplain.com/2015/04/09/on-appreciation/
[8]:https://xkcd.com/1254/
[9]:http://m.xkcd.com/1782/
[10]:https://hexchat.github.io/
[11]:https://blogs.msdn.microsoft.com/ieinternals/2009/07/20/internet-explorers-cache-control-extensions/
[12]:https://www.chromium.org/developers/bisect-builds-py
[13]:https://mozilla.github.io/mozregression/
[14]:https://chromium.googlesource.com/chromium/src/+/lkgr/docs/clang.md
[15]:https://bugs.chromium.org/p/monorail/adminIntro
[16]:https://bugs.chromium.org/p/monorail/issues/list?can=2&q=reporter%3Aelawrence
[17]:https://en.wikipedia.org/wiki/Rietveld_(software)
[18]:https://www.chromium.org/developers
[19]:https://www.chromium.org/developers/design-documents
[20]:https://codereview.chromium.org/1733973004/
[21]:https://codereview.chromium.org/2244263002/
[22]:https://codereview.chromium.org/2323273003/
[23]:https://codereview.chromium.org/2368593002/
[24]:https://codereview.chromium.org/2347923002/
[25]:https://codereview.chromium.org/search?closed=1&owner=elawrence&reviewer=&cc=&repo_guid=&base=&project=&private=1&commit=1&created_before=&created_after=&modified_before=&modified_after=&order=&format=html&keys_only=False&with_messages=False&cursor=&limit=30
[26]:https://blogs.msdn.microsoft.com/ie/2005/04/20/tls-and-ssl-in-the-real-world/
[27]:https://chrome.google.com/webstore/search/bayden?hl=en-US&_category=extensions
[28]:https://textslashplain.com/2016/03/17/seek-and-destroy-non-secure-references-using-the-moartls-analyzer/
[29]:https://textslashplain.com/2016/03/18/building-the-moartls-analyzer/
[30]:https://bugs.chromium.org/p/monorail/issues/detail?id=1739
[31]:https://bugs.chromium.org/p/chromium/issues/list?can=1&q=reporter%3Ame&colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&x=m&y=releaseblock&cells=ids
[32]:https://textslashplain.com/2016/08/18/file-the-bug/
[33]:https://bugs.chromium.org/p/chromium/issues/detail?id=540547
[34]:https://bugs.chromium.org/p/chromium/issues/detail?id=601538
[35]:https://bugs.chromium.org/p/chromium/issues/detail?id=595844#c6
[36]:https://bugs.chromium.org/p/chromium/issues/detail?id=629637
[37]:https://bugs.chromium.org/p/chromium/issues/detail?id=591343
[38]:https://textslashplain.com/2016/04/04/downloads-and-the-mark-of-the-web/
[39]:https://blogs.msdn.microsoft.com/ieinternals/2011/03/23/understanding-local-machine-zone-lockdown/
[40]:https://blogs.msdn.microsoft.com/ieinternals/2012/06/19/enhanced-protected-mode-and-local-files/
[41]:https://en.wikipedia.org/wiki/Bus_factor
[42]:https://www.chromium.org/Home/chromium-security/enamel
[43]:https://productforums.google.com/forum/#!forum/chrome
[44]:https://www.chromium.org/Home/chromium-security/security-sheriff
[45]:https://blog.chromium.org/2012/04/fuzzing-for-security.html
[46]:https://en.wikipedia.org/wiki/Project_Zero_(Google)
[47]:https://bugs.chromium.org/p/chromium/issues/entry?template=Security%20Bug
[48]:https://www.google.com/about/appsecurity/chrome-rewards/
[49]:https://security.googleblog.com/2017/01/vulnerability-rewards-program-2016-year.html
[50]:https://conferences.oreilly.com/security/network-data-security-ny/public/schedule/detail/53296
[51]:https://textplain.files.wordpress.com/2017/01/image58.png
[52]:https://dev.chromium.org/Home/chromium-security/security-faq
[53]:https://bugs.chromium.org/p/chromium/issues/detail?id=639126#c11
[54]:https://bugs.chromium.org/p/chromium/issues/detail?id=648971
[55]:https://bugs.chromium.org/p/chromium/issues/list?can=1&q=label%3Areward-100000&colspec=ID+Pri+M+Stars+ReleaseBlock+Component+Status+Owner+Summary+OS+Modified&x=m&y=releaseblock&cells=ids
[56]:https://www.google.com/transparencyreport/https/grid/?hl=en
[57]:https://whytls.com/
[58]:https://textslashplain.com/2016/09/22/use-https-for-all-inbound-links/
[59]:https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html
[60]:https://www.safaribooksonline.com/library/view/the-oreilly-security/9781491960035/video287622.html
[61]:https://developer.chrome.com/devsummit/
[62]:https://textslashplain.com/2016/
[63]:https://blogs.msdn.microsoft.com/ieinternals/2013/10/16/strict-p3p-validation/
[64]:https://textslashplain.com/2016/01/20/putting-users-first/
[65]:https://chrome.google.com/webstore/detail/quick-tabs/jnjfeinjfmenlddahdjdmgpbokiacbbb
[66]:https://news.google.com/
[67]:https://bayden.com/slickrun/

View File

@ -1,86 +0,0 @@
translating by hopefully2333
# [The One in Which I Call Out Hacker News][14]
> "Implementing caching would take thirty hours. Do you have thirty extra hours? No, you don't. I actually have no idea how long it would take. Maybe it would take five minutes. Do you have five minutes? No. Why? Because I'm lying. It would take much longer than five minutes. That's the eternal optimism of programmers."
>
> — Professor [Owen Astrachan][1] during 23 Feb 2004 lecture for [CPS 108][2]
[Accusing open-source software of being a royal pain to use][5] is not a new argument; it's been said before, by those much more eloquent than I, and even by some who are highly sympathetic to the open-source movement. Why go over it again?
On Hacker News on Monday, I was amused to read some people saying that [writing StackOverflow was hilariously easy][6]—and proceeding to back up their claim by [promising to clone it over July 4th weekend][7]. Others chimed in, pointing to [existing][8] [clones][9] as a good starting point.
Let's assume, for sake of argument, that you decide it's okay to write your StackOverflow clone in ASP.NET MVC, and that I, after being hypnotized with a pocket watch and a small club to the head, have decided to hand you the StackOverflow source code, page by page, so you can retype it verbatim. We'll also assume you type like me, at a cool 100 WPM ([a smidge over eight characters per second][10]), and unlike me, _you_ make zero mistakes. StackOverflow's *.cs, *.sql, *.css, *.js, and *.aspx files come to 2.3 MB. So merely typing the source code back into the computer will take you about eighty hours if you make zero mistakes (2.3 MB is roughly 2.4 million characters; at eight characters per second, that works out to some 290,000 seconds of typing).

Except, of course, you're not doing that; you're going to implement StackOverflow from scratch. So even assuming that it took you a mere ten times longer to design, type out, and debug your own implementation than it would take you to copy the real one, that already has you coding for several weeks straight—and I don't know about you, but I am okay admitting I write new code _considerably_ less than one tenth as fast as I copy existing code.
_Well, okay_ , I hear you relent. *So not the whole thing. But I can do **most** of it.*
Okay, so what's "most"? There's simply asking and responding to questions—that part's easy. Well, except you have to implement voting questions and answers up and down, and the questioner should be able to accept a single answer for each question. And you can't let people upvote or accept their own answers, so you need to block that. And you need to make sure that users don't upvote or downvote another user too many times in a certain amount of time, to prevent spambots. Probably going to have to implement a spam filter, too, come to think of it, even in the basic design, and you also need to support user icons, and you're going to have to find a sanitizing HTML library you really trust and that interfaces well with Markdown (provided you do want to reuse [that awesome editor][11] StackOverflow has, of course). You'll also need to purchase, design, or find widgets for all the controls, plus you need at least a basic administration interface so that moderators can moderate, and you'll need to implement that scaling karma thing so that you give users steadily increasing power to do things as they go.
But if you do  _all that_ , you  _will_  be done.
Except… except, of course, for the full-text search, especially its appearance in the search-as-you-ask feature, which is kind of indispensable. And user bios, and having comments on answers, and having a main page that shows you important questions but that bubbles down steadily à la reddit. Plus you'll totally need to implement bounties, and support multiple OpenID logins per user, and send out email notifications for pertinent events, and add a tagging system, and allow administrators to configure badges by a nice GUI. And you'll need to show users' karma history, upvotes, and downvotes. And the whole thing has to scale really well, since it could be slashdotted/reddited/StackOverflown at any moment.

But _then_! **Then** you're done!

…right after you implement upgrades, internationalization, karma caps, a CSS design that makes your site not look like ass, AJAX versions of most of the above, and G-d knows what else that's lurking just beneath the surface that you currently take for granted, but that will come to bite you when you start to do a real clone.
Tell me: which of those features do you feel you can cut and still have a compelling offering? Which ones go under “most” of the site, and which can you punt?
Developers think cloning a site like StackOverflow is easy for the same reason that open-source software remains such a horrible pain in the ass to use. When you put a developer in front of StackOverflow, they don't really _see_ StackOverflow. What they actually _see_ is this:
```
create table QUESTION (ID identity primary key,
TITLE varchar(255), --- why do I know you thought 255?
BODY text,
UPVOTES integer not null default 0,
DOWNVOTES integer not null default 0,
USER integer references USER(ID));
create table RESPONSE (ID identity primary key,
BODY text,
UPVOTES integer not null default 0,
DOWNVOTES integer not null default 0,
QUESTION integer references QUESTION(ID));
```
If you then tell a developer to replicate StackOverflow, what goes into his head are the above two SQL tables and enough HTML to display them without formatting, and that really _is_ completely doable in a weekend. The smarter ones will realize that they need to implement login and logout, and comments, and that the votes need to be tied to a user, but that's still totally doable in a weekend; it's just a couple more tables in a SQL back-end, and the HTML to show their contents. Use a framework like Django, and you even get basic users and comments for free.
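To make "just a couple more tables" concrete, the weekend-clone mental model adds something like this (my sketch, in the same deliberately naive spirit as the schema above, including the USER table it already references):

```
create table USER (ID identity primary key,
                   NAME varchar(40),
                   EMAIL varchar(255));
create table COMMENT (ID identity primary key,
                      BODY text,
                      USER integer references USER(ID),
                      RESPONSE integer references RESPONSE(ID));
create table VOTE (ID identity primary key,
                   VALUE integer not null, -- +1 or -1
                   USER integer references USER(ID),
                   QUESTION integer references QUESTION(ID));
```

Still a weekend of work, which is exactly the trap the next paragraph describes.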
But that's _not_ what StackOverflow is about. Regardless of what your feelings may be on StackOverflow in general, most visitors seem to agree that the user experience is smooth, from start to finish. They feel that they're interacting with a polished product. Even if I didn't know better, I would guess that very little of what actually makes StackOverflow a continuing success has to do with the database schema—and having had a chance to read through StackOverflow's source code, I know how little really does. There is a _tremendous_ amount of spit and polish that goes into making a major website highly usable. A developer, asked how hard something will be to clone, simply _does not think about the polish_, because _the polish is incidental to the implementation_.

That is why an open-source clone of StackOverflow will fail. Even if someone were to manage to implement most of StackOverflow "to spec," there are some key areas that would trip them up. Badges, for example, if you're targeting end-users, either need a GUI to configure rules, or smart developers to determine which badges are generic enough to go on all installs. What will actually happen is that the developers will bitch and moan about how you can't implement a really comprehensive GUI for something like badges, and then bikeshed any proposals for standard badges so far into the ground that they'll hit escape velocity coming out the other side. They'll ultimately come up with the same solution that bug trackers like Roundup use for their workflow: the developers implement a generic mechanism by which anyone, truly anyone at all, who feels totally comfortable working with the system API in Python or PHP or whatever, can easily add their own customizations. And when PHP and Python are so easy to learn and so much more flexible than a GUI could ever be, why bother with anything else?

Likewise, the moderation and administration interfaces can be punted. If you're an admin, you have access to the SQL server, so you can do anything really genuinely administrative-like that way. Moderators can get by with whatever django-admin and similar systems afford you, since, after all, few users are mods, and mods should understand how the site _works_, dammit. And, certainly, none of StackOverflow's interface failings will be rectified. Even if StackOverflow's stupid requirement that you have to have and know how to use an OpenID (its worst failing) eventually gets fixed, I'm sure any open-source clones will rabidly follow it—just as GNOME and KDE for years slavishly copied off Windows, instead of trying to fix its most obvious flaws.

Developers may not care about these parts of the application, but end-users do, and take it into consideration when trying to decide what application to use. Much as a good software company wants to minimize its support costs by ensuring that its products are top-notch before shipping, so, too, savvy consumers want to ensure products are good before they purchase them so that they won't _have_ to call support. Open-source products fail hard here. Proprietary solutions, as a rule, do better.

That's not to say that open-source doesn't have its place. This blog runs on Apache, [Django][12], [PostgreSQL][13], and Linux. But let me tell you, configuring that stack is _not_ for the faint of heart. PostgreSQL needs vacuuming configured on older versions, and, as of recent versions of Ubuntu and FreeBSD, still requires the user to set up the first database cluster. MS SQL requires neither of those things. Apache… dear heavens, don't even get me _started_ on trying to explain to a novice user how to get virtual hosting, MovableType, a couple Django apps, and WordPress all running comfortably under a single install. Hell, just trying to explain the forking vs. threading variants of Apache to a technically astute non-developer can be a nightmare. IIS 7 and Apache with OS X Server's very much closed-source GUI manager make setting up those same stacks vastly simpler. Django's a great product, but it's nothing _but_ infrastructure—exactly the thing that I happen to think open-source _does_ do well, _precisely_ because of the motivations that drive developers to contribute.

The next time you see an application you like, think very long and hard about all the user-oriented details that went into making it a pleasure to use, before decrying how you could trivially reimplement the entire damn thing in a weekend. Nine times out of ten, when you think an application was ridiculously easy to implement, you're completely missing the user side of the story.
--------------------------------------------------------------------------------
via: https://bitquabit.com/post/one-which-i-call-out-hacker-news/
作者:[Benjamin Pollack][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://bitquabit.com/meta/about/
[1]:http://www.cs.duke.edu/~ola/
[2]:http://www.cs.duke.edu/courses/cps108/spring04/
[3]:https://bitquabit.com/categories/programming
[4]:https://bitquabit.com/categories/technology
[5]:http://blog.bitquabit.com/2009/06/30/one-which-i-say-open-source-software-sucks/
[6]:http://news.ycombinator.com/item?id=678501
[7]:http://news.ycombinator.com/item?id=678704
[8]:http://code.google.com/p/cnprog/
[9]:http://code.google.com/p/soclone/
[10]:http://en.wikipedia.org/wiki/Words_per_minute
[11]:http://github.com/derobins/wmd/tree/master
[12]:http://www.djangoproject.com/
[13]:http://www.postgresql.org/
[14]:https://bitquabit.com/post/one-which-i-call-out-hacker-news/

View File

@ -0,0 +1,211 @@
# Dynamic linker tricks: Using LD_PRELOAD to cheat, inject features and investigate programs
**This post assumes some basic C skills.**
Linux puts you in full control. This is not always seen from everyone's perspective, but a power user loves to be in control. I'm going to show you a basic trick that lets you heavily influence the behavior of most applications, which is not only fun, but also, at times, useful.
#### A motivational example
Let us begin with a simple example. Fun first, science later.
random_num.c:
```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(){
srand(time(NULL));
int i = 10;
while(i--) printf("%d\n",rand()%100);
return 0;
}
```
Simple enough, I believe. I compiled it with no special flags, just
> ```
> gcc random_num.c -o random_num
> ```
I hope the resulting output is obvious: ten randomly selected numbers 0-99, hopefully different each time you run this program.

Now let's pretend we don't really have the source of this executable. Either delete the source file, or move it somewhere we won't need it. We will significantly modify this program's behavior, yet without touching its source code or recompiling it.

For this, let's create another simple C file:
unrandom.c:
```
int rand(){
return 42; //the most random number in the universe
}
```
We'll compile it into a shared library.
> ```
> gcc -shared -fPIC unrandom.c -o unrandom.so
> ```
So what we have now is an application that outputs some random data, and a custom library, which implements the rand() function as a constant value of 42. Now… just run _random_num_ this way, and watch the result:

> ```
> LD_PRELOAD=$PWD/unrandom.so ./random_num
> ```

If you are lazy and did not do it yourself (and somehow fail to guess what might have happened), I'll let you know: the output consists of ten 42s.

This may be even more impressive if you first:
> ```
> export LD_PRELOAD=$PWD/unrandom.so
> ```
and then run the program normally. An unchanged app run in an apparently usual manner seems to be affected by what we did in our tiny library…
###### **Wait, what? What just happened?**
Yup, you are right, our program failed to generate random numbers, because it did not use the “real” rand(), but the one we provided which returns 42 every time.
###### **But we *told* it to use the real one. We programmed it to use the real one. Besides, at the time we created that program, the fake rand() did not even exist!**
This is not entirely true. We did not choose which rand() we want our program to use. We told it just to use rand().
When our program is started, certain libraries (that provide functionality needed by the program) are loaded. We can learn which these are using _ldd_:

> ```
> $ ldd random_num
> linux-vdso.so.1 => (0x00007fff4bdfe000)
> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f48c03ec000)
> /lib64/ld-linux-x86-64.so.2 (0x00007f48c07e3000)
> ```

What you see as the output is the list of libs that are needed by _random_num_. This list is built into the executable, and is determined at compile time. The exact output might slightly differ on your machine, but a **libc.so** must be there; this is the file which provides core C functionality. That includes the "real" rand().
We can have a peek at what functions libc provides. I used the following to get a full list:
> ```
> nm -D /lib/libc.so.6
> ```
The  _nm_  command lists symbols found in a binary file. The -D flag tells it to look for dynamic symbols, which makes sense, as libc.so.6 is a dynamic library. The output is very long, but it indeed lists rand() among many other standard functions.
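If you only care about a single symbol, it is easier to filter that long listing, for example like this (assuming the multiarch libc path from the ldd output above; adjust it for your distribution):

> ```
> nm -D /lib/x86_64-linux-gnu/libc.so.6 | grep ' rand$'
> ```

This should print a single line ending in `T rand`, confirming that libc indeed exports the rand() we are about to override.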
Now what happens when we set up the environment variable LD_PRELOAD? This variable **forces some libraries to be loaded for a program**. In our case, it loads _unrandom.so_ for _random_num_, even though the program itself does not ask for it. The following command may be interesting:
> ```
> $ LD_PRELOAD=$PWD/unrandom.so ldd random_num
> linux-vdso.so.1 => (0x00007fff369dc000)
> /some/path/to/unrandom.so (0x00007f262b439000)
> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f262b044000)
> /lib64/ld-linux-x86-64.so.2 (0x00007f262b63d000)
> ```
Note that it lists our custom library. And indeed this is the reason why its code gets executed: _random_num_ calls rand(), but if _unrandom.so_ is loaded it is our library that provides the implementation of rand(). Neat, isn't it?
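One practical note: since we exported LD_PRELOAD above, it now affects every program started from that shell. When you are done experimenting, clear it again (unset is a standard shell builtin):

> ```
> unset LD_PRELOAD
> ```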
#### Being transparent
This is not enough. I'd like to be able to inject some code into an application in a similar manner, but in such a way that it will be able to function normally. It's clear that if we implemented open() with a simple "_return 0;_", the application we would like to hack would malfunction. The point is to be **transparent**, and to actually call the original open:
inspect_open.c:
```
int open(const char *pathname, int flags){
/* Some evil injected code goes here. */
return open(pathname,flags); // Here we call the "real" open function, that is provided to us by libc.so
}
```
Hm. Not really. This won't call the "original" open(…). Obviously, this is an endless recursive call.

How do we access the "real" open function? We need to use the programming interface to the dynamic linker. It's simpler than it sounds. Have a look at this complete example, and then I'll explain what happens there:
inspect_open.c:
```
#define _GNU_SOURCE
#include <dlfcn.h>

typedef int (*orig_open_f_type)(const char *pathname, int flags);

int open(const char *pathname, int flags, ...)
{
    /* Some evil injected code goes here. */

    orig_open_f_type orig_open;
    orig_open = (orig_open_f_type)dlsym(RTLD_NEXT, "open");
    return orig_open(pathname, flags);
}
```
The  _dlfcn.h_  is needed for the  _dlsym_  function we use later. That strange  _#define_  directive instructs the compiler to enable some non-standard stuff; we need it to enable  _RTLD_NEXT_  in  _dlfcn.h_ . That typedef just creates an alias to a complicated pointer-to-function type, with arguments just like the original open; the alias name is  _orig_open_f_type_ , which we'll use later.
The body of our custom open(…) consists of some custom code. The last part of it creates a new function pointer  _orig_open_  which will point to the original open(…) function. In order to get the address of that function, we ask  _dlsym_  to find for us the next "open" function on the dynamic libraries stack. Finally, we call that function (passing the same arguments as were passed to our fake "open"), and return its return value as ours.
As the “evil injected code” I simply used:
inspect_open.c (fragment):
```
printf("The victim used open(...) to access '%s'!!!\n",pathname); //remember to include stdio.h!
```
To compile it, I needed to slightly adjust compiler flags:
> ```
> gcc -shared -fPIC  inspect_open.c -o inspect_open.so -ldl
> ```
I had to append  _-ldl_ , so that this shared library is linked to  _libdl_ , which provides the  _dlsym_  function. (Nah, I am not going to create a fake version of  _dlsym_ , though this might be fun.)
So what do I have as a result? A shared library, which implements the open(…) function so that it behaves **exactly** like the real open(…)… except it has a side effect of  _printf_ ing the file path :-)
If you are not convinced this is a powerful trick, it's time you tried the following:
> ```
> LD_PRELOAD=$PWD/inspect_open.so gnome-calculator
> ```
I encourage you to see the result yourself, but basically it lists every file this application accesses. In real time.
I believe it's not that hard to imagine why this might be useful for debugging or investigating unknown applications. Please note, however, that this particular trick is not quite complete, because  _open()_  is not the only function that opens files… For example, there is also  _open64()_  in the standard library, and for a full investigation you would need to create a fake one too.
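For example, a fake  _open64()_  can follow exactly the same pattern as our inspect_open.c; this is a sketch I am adding here, not part of the original files:

```
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

typedef int (*orig_open64_f_type)(const char *pathname, int flags);

int open64(const char *pathname, int flags, ...)
{
    printf("The victim used open64(...) to access '%s'!!!\n", pathname);

    orig_open64_f_type orig_open64;
    orig_open64 = (orig_open64_f_type)dlsym(RTLD_NEXT, "open64");
    return orig_open64(pathname, flags);
}
```

Compile it into the same shared library as the open() wrapper and both entry points are covered.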
#### **Possible uses**
If you are still with me and enjoyed the above, let me suggest a bunch of ideas of what can be achieved using this trick. Keep in mind that you can do all of this without the source of the affected app!
1. ~~Gain root privileges.~~ Not really, don't even bother, you won't bypass any security this way. (A quick explanation for pros: no libraries will be preloaded this way if ruid != euid)
2. Cheat games: **Unrandomize.** This is what I did in the first example. For a fully working case you would also need to implement a custom  _random()_ ,  _rand_r()_ ,  _random_r()_ . Also some apps may be reading from  _/dev/urandom_  or so; you might redirect them to  _/dev/null_  by running the original  _open()_  with a modified file path. Furthermore, some apps may have their own random number generation algorithm, and there is little you can do about that (unless: point 10 below). But this looks like an easy exercise for beginners.
3. Cheat games: **Bullet time.** Implement all standard time-related functions and pretend the time flows two times slower. Or ten times slower. If you correctly calculate new values for time measurement, timed  _sleep_  functions, and others, the affected application will believe the time runs slower (or faster, if you wish), and you can experience awesome bullet-time action (see the sketch after this list).
Or go **even one step further** and let your shared library also be a DBus client, so that you can communicate with it in real time. Bind some shortcuts to custom commands, and with some additional calculations in your fake timing functions you will be able to enable and disable the slow-mo or fast-forward anytime you wish.
4. Investigate apps: **List accessed files.** That's what my second example does, but this could also be pushed further, by recording and monitoring all of an app's file I/O.
5. Investigate apps: **Monitor internet access.** You might do this with Wireshark or similar software, but with this trick you could actually gain control of what an app sends over the web, and not just look, but also affect the exchanged data. Lots of possibilities here, from detecting spyware, to cheating in multiplayer games, or analyzing & reverse-engineering protocols of closed-source applications.
6. Investigate apps: **Inspect GTK structures.** Why limit ourselves to the standard library? Let's inject code into all GTK calls, so that we can learn what widgets an app uses, and how they are structured. This might then be rendered either to an image or even to a gtkbuilder file! Super useful if you want to learn how some app manages its interface!
7. **Sandbox unsafe applications.** If you don't trust some app and are afraid that it may wish to  _rm -rf /_  or do some other unwanted file activities, you might potentially redirect all its file IO to e.g. /tmp by appropriately modifying the arguments it passes to all file-related functions (not just  _open_ , but also e.g. removing directories etc.). It's a more difficult trick than a chroot, but it gives you more control. It would be only as safe as your "wrapper" is complete, and unless you really know what you're doing, don't actually run any malicious software this way.
8. **Implement features.** [zlibc][1] is an actual library which is run this precise way; it uncompresses files on the go as they are accessed, so that any application can work on compressed data without even realizing it.
9. **Fix bugs.** Another real-life example: some time ago (I am not sure this is still the case) Skype, which is closed-source, had problems capturing video from certain webcams. Because the source could not be modified, as Skype is not free software, this was fixed by preloading a library that would correct these problems with video.
10. Manually **access an application's own memory**. Do note that you can access all app data this way. This may not be impressive if you are familiar with software like CheatEngine/scanmem/GameConqueror, but they all require root privileges to work; LD_PRELOAD does not. In fact, with a number of clever tricks your injected code can access all app memory, because, in fact, it gets executed by that application itself. You might modify everything this application can. You can probably imagine this allows a lot of low-level hacks… but I'll post an article about it another time.
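To make the bullet time idea concrete, here is a minimal sketch for a single function, `gettimeofday()`. It is my own illustration rather than part of the original examples, and it assumes the target app measures time exclusively through this call; a real slow-mo hack would have to wrap `time()`, `clock_gettime()`, the timed sleep functions and friends in the same manner:

```
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <sys/time.h>

int gettimeofday(struct timeval *tv, struct timezone *tz)
{
    typedef int (*orig_gtod_f_type)(struct timeval *, struct timezone *);
    orig_gtod_f_type orig_gtod = (orig_gtod_f_type)dlsym(RTLD_NEXT, "gettimeofday");

    int ret = orig_gtod(tv, tz);
    if (ret != 0 || tv == NULL)
        return ret;

    /* Remember the first real time we observed (not thread-safe, kept simple). */
    static long long start_us = 0;
    long long now_us = tv->tv_sec * 1000000LL + tv->tv_usec;
    if (start_us == 0)
        start_us = now_us;

    /* Report only half of the real time elapsed since the first call. */
    long long faked_us = start_us + (now_us - start_us) / 2;
    tv->tv_sec  = faked_us / 1000000LL;
    tv->tv_usec = faked_us % 1000000LL;
    return ret;
}
```

Build it like the earlier libraries (`gcc -shared -fPIC slowmo.c -o slowmo.so -ldl`) and preload it into some clock application to watch it tick at half speed.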
These are only the ideas I came up with. I bet you can find some too; if you do, share them by commenting!
--------------------------------------------------------------------------------
via: https://rafalcieslak.wordpress.com/2013/04/02/dynamic-linker-tricks-using-ld_preload-to-cheat-inject-features-and-investigate-programs/
作者:[Rafał Cieślak ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://rafalcieslak.wordpress.com/
[1]:http://www.zlibc.linux.lu/index.html


@ -1,55 +0,0 @@
Translating by Cwndmiao
When Does Your OS Run?
============================================================
Here's a question: in the time it takes you to read this sentence, has your OS been  _running_ ? Or was it only your browser? Or were they perhaps both idle, just waiting for you to  _do something already_ ?
These questions are simple but they cut through the essence of how software works. To answer them accurately we need a good mental model of OS behavior, which in turn informs performance, security, and troubleshooting decisions. We'll build such a model in this post series using Linux as the primary OS, with guest appearances by OS X and Windows. I'll link to the Linux kernel sources for those who want to delve deeper.
The fundamental axiom here is that  _at any given moment, exactly one task is active on a CPU_ . The task is normally a program, like your browser or music player, or it could be an operating system thread, but it is one task. Not two or more. Never zero, either. One. Always.
This sounds like trouble. For what if, say, your music player hogs the CPU and doesn't let any other tasks run? You would not be able to open a tool to kill it, and even mouse clicks would be futile as the OS wouldn't process them. You could be stuck blaring "What does the fox say?" and incite a workplace riot.
That's where interrupts come in. Much as the nervous system interrupts the brain to bring in external stimuli (a loud noise, a touch on the shoulder), the [chipset][1] in a computer's motherboard interrupts the CPU to deliver news of outside events: key presses, the arrival of network packets, the completion of a hard drive read, and so on. Hardware peripherals, the interrupt controller on the motherboard, and the CPU itself all work together to implement these interruptions, called interrupts for short.
Interrupts are also essential in tracking that which we hold dearest: time. During the [boot process][2] the kernel programs a hardware timer to issue timer interrupts at a periodic interval, for example every 10 milliseconds. When the timer goes off, the kernel gets a shot at the CPU to update system statistics and take stock of things: has the current program been running for too long? Has a TCP timeout expired? Interrupts give the kernel a chance to both ponder these questions and take appropriate actions. It's as if you set periodic alarms throughout the day and used them as checkpoints: should I be doing what I'm doing right now? Is there anything more pressing? One day you find ten years have got behind you.
These periodic hijackings of the CPU by the kernel are called ticks, so interrupts quite literally make your OS tick. But there's more: interrupts are also used to handle some software events like integer overflows and page faults, which involve no external hardware. Interrupts are the most frequent and crucial entry point into the OS kernel. They're not some oddity for the EE people to worry about, they're  _the_  mechanism whereby your OS runs.
Enough talk, let's see some action. Below is a network card interrupt in an Intel Core i5 system. The diagrams now have image maps, so you can click on juicy bits for more information. For example, each device links to its Linux driver.
![](http://duartes.org/gustavo/blog/img/os/hardware-interrupt.png)
(The clickable regions in this diagram link to the Linux sources: the [e1000e network driver](https://github.com/torvalds/linux/blob/v3.17/drivers/net/ethernet/intel/e1000e/netdev.c), the [usbhid keyboard driver](https://github.com/torvalds/linux/blob/v3.16/drivers/hid/usbhid/usbkbd.c), the [I/O APIC code](https://github.com/torvalds/linux/blob/v3.16/arch/x86/kernel/apic/io_apic.c), and the [HPET timer code](https://github.com/torvalds/linux/blob/v3.17/arch/x86/kernel/hpet.c).)
Let's take a look at this. First off, since there are many sources of interrupts, it wouldn't be very helpful if the hardware simply told the CPU "hey, something happened!" and left it at that. The suspense would be unbearable. So each device is assigned an interrupt request line, or IRQ, during power up. These IRQs are in turn mapped into interrupt vectors, a number between 0 and 255, by the interrupt controller. By the time an interrupt reaches the CPU it has a nice, well-defined number insulated from the vagaries of hardware.
The CPU in turn has a pointer to what's essentially an array of 255 functions, supplied by the kernel, where each function is the handler for that particular interrupt vector. We'll look at this array, the Interrupt Descriptor Table (IDT), in more detail later on.
Whenever an interrupt arrives, the CPU uses its vector as an index into the IDT and runs the appropriate handler. This happens as a special function call that takes place in the context of the currently running task, allowing the OS to respond to external events quickly and with minimal overhead. So web servers out there indirectly  _call a function in your CPU_  when they send you data, which is either pretty cool or terrifying. Below we show a situation where a CPU is busy running a Vim command when an interrupt arrives:
![](http://duartes.org/gustavo/blog/img/os/vim-interrupted.png)
Notice how the interrupt's arrival causes a switch to kernel mode and [ring zero][3] but it  _does not change the active task_ . It's as if Vim made a magic function call straight into the kernel, but Vim is  _still there_ , its [address space][4] intact, waiting for that call to return.
Exciting stuff! Alas, I need to keep this post-sized, so let's finish up for now. I understand we have not answered the opening question and have in fact opened up new questions, but you now suspect ticks were taking place while you read that sentence. We'll find the answers as we flesh out our model of dynamic OS behavior, and the browser scenario will become clear. If you have questions, especially as the posts come out, fire away and I'll try to answer them in the posts themselves or as comments. Next installment is tomorrow on [RSS][5] and [Twitter][6].
--------------------------------------------------------------------------------
via: http://duartes.org/gustavo/blog/post/when-does-your-os-run/
作者:[gustavo ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://duartes.org/gustavo/blog/about/
[1]:http://duartes.org/gustavo/blog/post/motherboard-chipsets-memory-map
[2]:http://duartes.org/gustavo/blog/post/kernel-boot-process
[3]:http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
[4]:http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
[5]:http://feeds.feedburner.com/GustavoDuarte
[6]:http://twitter.com/food4hackers


@ -0,0 +1,361 @@
How to turn any syscall into an event: Introducing eBPF Kernel probes
============================================================
TL;DR: Using eBPF in a recent (>= 4.4) Linux kernel, you can turn any kernel function call into a userland event with arbitrary data. This is made easy by bcc. The probe is written in C while the data is handled by Python.
If you are not familiar with eBPF or Linux tracing, you really should read the full post. It tries to progressively go through the pitfalls I stumbled upon while playing around with bcc / eBPF, while saving you a lot of the time I spent searching and digging.
### A note on push vs pull in a Linux world
When I started to work on containers, I was wondering how we could update a load balancer configuration dynamically based on actual system state. A common strategy, which works, is to let the container orchestrator trigger a load balancer configuration update whenever it starts a container and then let the load balancer poll the container until some health check passes. It may be a simple "SYN" test.
While this configuration works, it has the downside of making your load balancer wait for some system to be available while it should be… load balancing.
Can we do better?
When you want a program to react to some change in a system there are 2 possible strategies. The program may  _poll_  the system to detect changes or, if the system supports it, the system may  _push_ events and let the program react to them. Whether you want to use push or poll depends on the context. A good rule of thumb is to use push events when the event rate is low with respect to the processing time and switch to polling when the events come fast or the system may become unusable. For example, a typical network driver will wait for events from the network card while frameworks like dpdk will actively poll the card for events to achieve the highest throughput and lowest latency.
In an ideal world, we'd have some kernel interface telling us:
> * "Hey Mr. ContainerManager, I've just created a socket for the Nginx-ware of container  _servestaticfiles_ , maybe you want to update your state?"
>
> * “Sure Mr. OS, Thanks for letting me know”
While Linux has a wide range of interfaces to deal with events, up to 3 just for file events, there is no dedicated interface to get socket event notifications. You can get routing table events, neighbor table events, conntrack events, interface change events. Just not socket events. Or maybe there is one, hidden deep in a Netlink interface.
Ideally, we'd need a generic way to do it. How?
### Kernel tracing and eBPF, a bit of history
Until recently the only way was to patch the kernel or resort to SystemTap. [SystemTap][5] is a Linux tracing system. In a nutshell, it provides a DSL which is compiled into a kernel module which is then live-loaded into the running kernel. Except that some production systems disable dynamic module loading for security reasons, including the one I was working on at that time. The other way would be to patch the kernel to trigger some events, probably based on netlink. This is not really convenient. Kernel hacking comes with downsides including "interesting" new "features" and an increased maintenance burden.
Fortunately, starting with Linux 3.15 the ground was laid to safely transform any traceable kernel function into a userland event. "Safely" is a common computer science expression referring to "some virtual machine". This case is no exception. Linux has had one for years: since Linux 2.1.75, released in 1997, actually. It's called the Berkeley Packet Filter, or BPF for short. As its name suggests, it was originally developed for BSD firewalls. It had only 2 registers and only allowed forward jumps, meaning that you could not write loops with it (well, you can, if you know the maximum iterations and you manually unroll them). The point was to guarantee the program would always terminate and hence never hang the system. Still not sure if it has any use while you have iptables? It serves as the [foundation of CloudFlare's AntiDDos protection][6].
OK, so, with Linux 3.15, [BPF was extended][7] into eBPF, for "extended" BPF. It upgrades from two 32-bit registers to ten 64-bit registers and adds backward jumps among other things. It has then been [further extended in Linux 3.18][8], moving it out of the networking subsystem and adding tools like maps. To preserve the safety guarantees, it [introduces a checker][9] which validates all memory accesses and possible code paths. If the checker can't guarantee the code will terminate within fixed boundaries, it will deny the initial insertion of the program.
For more history, there is [an excellent Oracle presentation on eBPF][10].
Let's get started.
### Hello from `inet_listen`
As writing assembly is not the most convenient task, even for the best of us, we'll use [bcc][11]. bcc is a collection of tools based on LLVM and Python abstracting the underlying machinery. Probes are written in C and the results can be exploited from Python, making it easy to write non-trivial applications.
Start by installing bcc. For some of these examples, you may require a recent (read: >= 4.4) version of the kernel. If you are willing to actually try these examples, I highly recommend that you set up a VM,  _NOT_  a docker container; you can't change the kernel in a container. As this is a young and dynamic project, install instructions are highly platform/version dependent. You can find up-to-date instructions on [https://github.com/iovisor/bcc/blob/master/INSTALL.md][12]
So, we want to get an event whenever a program starts to listen on a TCP socket. When calling the `listen()` syscall on an `AF_INET` + `SOCK_STREAM` socket, the underlying kernel function is [`inet_listen`][13]. We'll start by hooking a "Hello World" `kprobe` on its entry point.
```
from bcc import BPF

# Hello BPF Program
bpf_text = """
#include <net/inet_sock.h>
#include <bcc/proto.h>

// 1. Attach kprobe to "inet_listen"
int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
{
    bpf_trace_printk("Hello World!\\n");
    return 0;
};
"""

# 2. Build and Inject program
b = BPF(text=bpf_text)

# 3. Print debug output
while True:
    print b.trace_readline()
```
This program does 3 things: 1. It attaches a kernel probe to "inet_listen" using a naming convention. If the function were called, say, "my_probe", it could be explicitly attached with `b.attach_kprobe("inet_listen", "my_probe")`. 2. It builds the program using LLVM's new BPF backend, injects the resulting bytecode using the (new) `bpf()` syscall and automatically attaches the probes matching the naming convention. 3. It reads the raw output from the kernel pipe.
Note: the eBPF backend of LLVM is still young. If you think you've hit a bug, you may want to upgrade.
Noticed the `bpf_trace_printk` call? This is a stripped-down version of the kernel's `printk()` debug function. When used, it produces tracing information to a special kernel pipe in `/sys/kernel/debug/tracing/trace_pipe`. As the name implies, this is a pipe. If multiple readers are consuming it, only one will get a given line. This makes it unsuitable for production.
Fortunately, Linux 3.19 introduced maps for message passing and Linux 4.4 brings arbitrary perf events support. I'll demo the perf event based approach later in this post.
```
# From a first console
ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
nc-4940 [000] d... 22666.991714: : Hello World!
# From a second console
ubuntu@bcc:~$ nc -l 0 4242
^C
```
Yay!
### Grab the backlog
Now, let's print some easily accessible data. Say, the "backlog". The backlog is the number of established TCP connections waiting to be `accept()`ed.
Just tweak the `bpf_trace_printk` call a bit:
```
bpf_trace_printk("Listening with with up to %d pending connections!\\n", backlog);
```
If you re-run the example with this world-changing improvement, you should see something like:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
nc-5020  [000] d... 25497.154070: : Listening with up to 1 pending connections!
```
`nc` is a single connection program, hence the backlog of 1. Nginx or Redis would output 128 here. But that's another story.
Easy, huh? Now let's get the port.
### Grab the port and IP
Studying `inet_listen` source from the kernel, we know that we need to get the `inet_sock` from the `socket` object. Just copy from the sources, and insert at the beginning of the tracer:
```
// cast types. Intermediate cast not needed, kept for readability
struct sock *sk = sock->sk;
struct inet_sock *inet = inet_sk(sk);
```
The port can now be accessed from `inet->inet_sport` in network byte order (aka: Big Endian). Easy! So, we could just replace the `bpf_trace_printk` with:
```
bpf_trace_printk("Listening on port %d!\\n", inet->inet_sport);
```
Then run:
```
ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
...
R1 invalid mem access 'inv'
...
Exception: Failed to load BPF program kprobe__inet_listen
```
Except that it's not (yet) so simple. bcc is improving a  _lot_  currently. While writing this post, a couple of pitfalls had already been addressed, but not yet all. This error means the in-kernel checker could not prove that the memory accesses in the program are correct. See the explicit cast? We need to help it a little by making the accesses more explicit. We'll use the `bpf_probe_read` trusted function to read an arbitrary memory location while guaranteeing all necessary checks are done, with something like:
```
// Explicit initialization. The "=0" part is needed to "give life" to the variable on the stack
u16 lport = 0;
// Explicit arbitrary memory access. Read it:
// Read into 'lport', 'sizeof(lport)' bytes from 'inet->inet_sport' memory location
bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));
```
Reading the bound address for IPv4 is basically the same, using `inet->inet_rcv_saddr`. If we put it all together, we should get the backlog, the port and the bound IP:
```
from bcc import BPF

# BPF Program
bpf_text = """
#include <net/sock.h>
#include <net/inet_sock.h>
#include <bcc/proto.h>

// Send an event for each IPv4 listen with PID, bound address and port
int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
{
    // Cast types. Intermediate cast not needed, kept for readability
    struct sock *sk = sock->sk;
    struct inet_sock *inet = inet_sk(sk);

    // Working values. You *need* to initialize them to give them "life" on the stack and use them afterward
    u32 laddr = 0;
    u16 lport = 0;

    // Pull in details. As 'inet_sk' is internally a type cast, we need to use 'bpf_probe_read'
    // read: load into 'laddr' 'sizeof(laddr)' bytes from address 'inet->inet_rcv_saddr'
    bpf_probe_read(&laddr, sizeof(laddr), &(inet->inet_rcv_saddr));
    bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));

    // Push event
    bpf_trace_printk("Listening on %x %d with %d pending connections\\n", ntohl(laddr), ntohs(lport), backlog);
    return 0;
};
"""

# Build and Inject BPF
b = BPF(text=bpf_text)

# Print debug output
while True:
    print b.trace_readline()
```
A test run should output something like:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
nc-5024 [000] d... 25821.166286: : Listening on 7f000001 4242 with 1 pending connections
```
Provided that you listen on localhost. The address is displayed as hex here to avoid dealing with IP pretty printing, but it's all wired up. And that's cool.
Note: you may wonder why `ntohs` and `ntohl` can be called from BPF while they are not trusted. This is because they are macros and inline functions from ".h" files and a small bug was [fixed][14] while writing this post.
All done; one more piece: we want to get the related container. In the context of networking, that means we want the network namespace. The network namespace is the building block of containers, allowing them to have isolated networks.
### Grab the network namespace: a forced introduction to perf events
On the userland side, the network namespace can be determined by checking the target of `/proc/PID/ns/net`. It should look like `net:[4026531957]`. The number between brackets is the inode number of the network namespace. This said, we could grab it by scraping /proc, but this is racy: we may be dealing with short-lived processes. And races are never good. We'll grab the inode number directly from the kernel. Fortunately, that's an easy one:
```
// Create and populate the variable
u32 netns = 0;
// Read the netns inode number, like /proc does
netns = sk->__sk_common.skc_net.net->ns.inum;
```
Easy. And it works.
But if you've read this far, you may guess there is something wrong somewhere. And there is:
```
bpf_trace_printk("Listening on %x %d with %d pending connections in container %d\\n", ntohl(laddr), ntohs(lport), backlog, netns);
```
If you try to run it, you'll get some cryptic error message:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
error: in function kprobe__inet_listen i32 (%struct.pt_regs*, %struct.socket*, i32)
too many args to 0x1ba9108: i64 = Constant<6>
```
What clang is trying to tell you is "Hey pal, `bpf_trace_printk` can only take 4 arguments, you've just used 5." I won't dive into the details here, but that's a BPF limitation. If you want to dig into it, [here is a good starting point][15].
The only way to fix it is to… stop debugging and make it production ready. So let's get started (and make sure you run at least Linux 4.4). We'll use perf events, which support passing arbitrary-sized structures to userland. Additionally, only our reader will get it, so that multiple unrelated eBPF programs can produce data concurrently without issues.
To use it, we need to:
1. define a structure
2. declare the event
3. push the event
4. re-declare the event on Pythons side (This step should go away in the future)
5. consume and format the event
This may seem like a lot, but it ain't. See:
```
// At the beginning of the C program, declare our event
struct listen_evt_t {
    u64 laddr;
    u64 lport;
    u64 netns;
    u64 backlog;
};
BPF_PERF_OUTPUT(listen_evt);

// In kprobe__inet_listen, replace the printk with
struct listen_evt_t evt = {
    .laddr = ntohl(laddr),
    .lport = ntohs(lport),
    .netns = netns,
    .backlog = backlog,
};
listen_evt.perf_submit(ctx, &evt, sizeof(evt));
```
The Python side will require a little more work, though:
```
# We need ctypes to parse the event structure
import ctypes

# Declare data format
class ListenEvt(ctypes.Structure):
    _fields_ = [
        ("laddr", ctypes.c_ulonglong),
        ("lport", ctypes.c_ulonglong),
        ("netns", ctypes.c_ulonglong),
        ("backlog", ctypes.c_ulonglong),
    ]

# Declare event printer
def print_event(cpu, data, size):
    event = ctypes.cast(data, ctypes.POINTER(ListenEvt)).contents
    print("Listening on %x %d with %d pending connections in container %d" % (
        event.laddr,
        event.lport,
        event.backlog,
        event.netns,
    ))

# Replace the event loop
b["listen_evt"].open_perf_buffer(print_event)
while True:
    b.kprobe_poll()
```
Give it a try. In this example, I have Redis running in a Docker container and nc on the host:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
Listening on 0 6379 with 128 pending connections in container 4026532165
Listening on 0 6379 with 128 pending connections in container 4026532165
Listening on 7f000001 6588 with 1 pending connections in container 4026531957
```
### Last word
Absolutely everything is now set up to trigger events from arbitrary function calls in the kernel using eBPF, and you should have seen most of the common pitfalls I hit while learning eBPF. If you want to see the full version of this tool, along with some more tricks like IPv6 support, have a look at [https://github.com/iovisor/bcc/blob/master/tools/solisten.py][16]. It's now an official tool, thanks to the support of the bcc team.
To go further, you may want to check out Brendan Gregg's blog, in particular [the post about eBPF maps and statistics][17]. He is one of the project's main contributors.
--------------------------------------------------------------------------------
via: https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-ebpf-kernel-probes/
作者:[Jean-Tiare Le Bigot ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.yadutaf.fr/about
[1]:https://blog.yadutaf.fr/tags/linux
[2]:https://blog.yadutaf.fr/tags/tracing
[3]:https://blog.yadutaf.fr/tags/ebpf
[4]:https://blog.yadutaf.fr/tags/bcc
[5]:https://en.wikipedia.org/wiki/SystemTap
[6]:https://blog.cloudflare.com/bpf-the-forgotten-bytecode/
[7]:https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-ebpf-kernel-probes/TODO
[8]:https://lwn.net/Articles/604043/
[9]:http://lxr.free-electrons.com/source/kernel/bpf/verifier.c#L21
[10]:http://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf
[11]:https://github.com/iovisor/bcc
[12]:https://github.com/iovisor/bcc/blob/master/INSTALL.md
[13]:http://lxr.free-electrons.com/source/net/ipv4/af_inet.c#L194
[14]:https://github.com/iovisor/bcc/pull/453
[15]:http://lxr.free-electrons.com/source/kernel/trace/bpf_trace.c#L86
[16]:https://github.com/iovisor/bcc/blob/master/tools/solisten.py
[17]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html


@ -0,0 +1,159 @@
Annoying Experiences Every Linux Gamer Never Wanted!
============================================================
[![Linux gamer's problem](https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg)][10]
[Gaming on Linux][12] has come a long way. There are dedicated [Linux gaming distributions][13] now. But this doesn't mean that the gaming experience on Linux is as smooth as on Windows.
What are the obstacles we should think about to ensure that we enjoy games as much as Windows users do?
[Wine][14], [PlayOnLinux][15] and other similar tools are not always able to play every popular Windows game. In this article, I would like to discuss various factors that must be dealt with in order to have the best possible Linux gaming experience.
### #1 SteamOS is Open Source, Steam for Linux is NOT
As stated on the [SteamOS page][16], even though SteamOS is open source, Steam for Linux continues to be proprietary. Had it also been open source, the amount of support from the open source community would have been tremendous! Since it is not, [the birth of Project Ascension was inevitable][17]:
[video](https://youtu.be/07UiS5iAknA)
Project Ascension is an open source game launcher designed to launch games that have been bought and downloaded from anywhere: they can be Steam games, [Origin games][18], Uplay games, games downloaded directly from game developer websites or from DVD/CD-ROMs.
Here is how it all began: [Sharing The Idea][19] resulted in a very interesting discussion, with readers from all over the gaming community pitching in their own opinions and suggestions.
### #2 Performance compared to Windows
Getting Windows games to run on Linux is not always an easy task. But thanks to a feature called [CSMT][20] (command stream multi-threading), PlayOnLinux is now better equipped to deal with these performance issues, though it still has a long way to go to achieve Windows-level results.
Native Linux support for games has also not been great in past releases.
Last year, it was reported that SteamOS performed [significantly worse][21] than Windows. Tomb Raider was released on SteamOS/Steam for Linux last year. However, benchmark results were [not at par][22] with performance on Windows.
[video](https://youtu.be/nkWUBRacBNE)
This was quite obviously due to the fact that the game had been developed with [DirectX][23] in mind and not [OpenGL][24].
Tomb Raider is the [first Linux game that uses TressFX][25]. This video includes TressFX comparisons:
[video](https://youtu.be/-IeY5ZS-LlA)
Here is another interesting comparison which shows Wine+CSMT performing much better than the native Linux version itself on Steam! This is the power of Open Source!
[video](https://youtu.be/sCJkC6oJ08A)
TressFX has been turned off in this case to avoid FPS loss.
Here is another Linux vs Windows comparison for the recently released “[Life is Strange][27]” on Linux:
[video](https://youtu.be/Vlflu-pIgIY)
It's good to know that  [_Steam for Linux_][28]  has begun to show better improvements in performance for this new Linux game.
Before launching any game for Linux, developers should consider optimizing them, especially if it's a DirectX game and requires OpenGL translation. We really do hope that [Deus Ex: Mankind Divided on Linux][29] gets benchmarked well upon release. As it's a DirectX game, we hope it's being ported well for Linux. Here's [what the Executive Game Director had to say][30].
### #3 Proprietary NVIDIA Drivers
[AMD's support for Open Source][31] is definitely commendable when compared to [NVIDIA][32]. Though [AMD][33] driver support is [pretty good on Linux][34] now due to its better open source driver, NVIDIA graphics card owners will still have to use the proprietary NVIDIA drivers because of the limited capabilities of the open-source version of NVIDIA's graphics driver called Nouveau.
In the past, the legendary Linus Torvalds also shared his thoughts on Linux support from NVIDIA, calling it totally unacceptable:
[video](https://youtu.be/O0r6Pr_mdio)
You can watch the complete talk [here][35]. Although NVIDIA responded with [a commitment for better linux support][36], the open source graphics driver still continues to be as weak as before.
### #4 Need for Uplay and Origin DRM support on Linux
[video](https://youtu.be/rc96NFwyxWU)
The above video describes how to install the [Uplay][37] DRM on Linux. The uploader also suggests that using Wine as the main tool for games and applications is not recommended on Linux; rather, native applications should be preferred instead.
The following video is a guide about installing the [Origin][38] DRM on Linux:
[video](https://youtu.be/ga2lNM72-Kw)
Digital Rights Management software adds another layer for game execution and hence adds to the already challenging task of making a Windows game run well on Linux. So in addition to making the game execute, Wine has to take care of running the DRM software such as Uplay or Origin as well. It would have been great if, like Steam, Linux could have got its own native versions of Uplay and Origin.
### #5 DirectX 11 support for Linux
Even though we have tools on Linux to run Windows applications, every game comes with its own set of tweak requirements for it to be playable on Linux. Though there was an announcement about [DirectX 11 support for Linux][40] last year via Code Weavers, it's still a long way to go to make playing newly launched titles on Linux a possibility.
Currently, you can [buy Crossover from Codeweavers][41] to get the best DirectX 11 support available. This [thread][42] on the Arch Linux forums clearly shows how much more effort is required to make this dream a possibility. Here is an interesting [find][43] from a [Reddit thread][44], which mentions Wine getting [DirectX 11 patches from Codeweavers][45]. Now that's definitely some good news.
### #6 100% of Steam games are not available for Linux
This is an important point to ponder as Linux gamers continue to miss out on every major game release since most of them land up on Windows. Here is a guide to [install Steam for Windows on Linux][46].
### #7 Better Support from video game publishers for OpenGL
Currently, developers and publishers focus primarily on DirectX for video game development rather than OpenGL. Now as Steam is officially here for Linux, developers should start considering development in OpenGL as well.
[Direct3D][47] is made solely for the Windows platform. The OpenGL API is an open standard, and implementations exist not only for Windows but for a wide variety of other platforms.
Though quite an old article, [this valuable resource][48] shares a lot of thoughtful information on the realities of OpenGL and DirectX. The points made are very sensible and enlighten the reader about the facts, based on actual chronological events.
Publishers who are launching their titles on Linux should definitely not leave out the fact that developing the game on OpenGL would be a much better deal than translating it from DirectX to OpenGL. If conversion has to be done, the translations must be well optimized and carefully looked into. There might be a delay in releasing the games but still it would definitely be worth the wait.
Have more annoyances to share? Do let us know in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-gaming-problems/
作者:[Avimanyu Bandyopadhyay ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/avimanyu/
[1]:https://itsfoss.com/author/avimanyu/
[2]:https://itsfoss.com/linux-gaming-problems/#comments
[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[4]:https://twitter.com/share?original_referer=/&text=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21&url=https://itsfoss.com/linux-gaming-problems/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=itsfoss2
[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Flinux-gaming-problems%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[7]:http://www.stumbleupon.com/submit?url=https://itsfoss.com/linux-gaming-problems/&title=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21
[8]:https://www.reddit.com/submit?url=https://itsfoss.com/linux-gaming-problems/&title=Annoying+Experiences+Every+Linux+Gamer+Never+Wanted%21
[9]:https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg
[10]:https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg
[11]:http://pinterest.com/pin/create/bookmarklet/?media=https://itsfoss.com/wp-content/uploads/2016/09/Linux-Gaming-Problems.jpg&url=https://itsfoss.com/linux-gaming-problems/&is_video=false&description=Linux%20gamer%27s%20problem
[12]:https://itsfoss.com/linux-gaming-guide/
[13]:https://itsfoss.com/linux-gaming-distributions/
[14]:https://itsfoss.com/use-windows-applications-linux/
[15]:https://www.playonlinux.com/en/
[16]:http://store.steampowered.com/steamos/
[17]:http://www.ibtimes.co.uk/reddit-users-want-replace-steam-open-source-game-launcher-project-ascension-1498999
[18]:https://www.origin.com/
[19]:https://www.reddit.com/r/pcmasterrace/comments/33xcvm/we_hate_valves_monopoly_over_pc_gaming_why/
[20]:https://github.com/wine-compholio/wine-staging/wiki/CSMT
[21]:http://arstechnica.com/gaming/2015/11/ars-benchmarks-show-significant-performance-hit-for-steamos-gaming/
[22]:https://www.gamingonlinux.com/articles/tomb-raider-benchmark-video-comparison-linux-vs-windows-10.7138
[23]:https://en.wikipedia.org/wiki/DirectX
[24]:https://en.wikipedia.org/wiki/OpenGL
[25]:https://www.gamingonlinux.com/articles/tomb-raider-released-for-linux-video-thoughts-port-report-included-the-first-linux-game-to-use-tresfx.7124
[26]:https://itsfoss.com/osu-new-linux/
[27]:http://lifeisstrange.com/
[28]:https://itsfoss.com/install-steam-ubuntu-linux/
[29]:https://itsfoss.com/deus-ex-mankind-divided-linux/
[30]:http://wccftech.com/deus-ex-mankind-divided-director-console-ports-on-pc-is-disrespectful/
[31]:http://developer.amd.com/tools-and-sdks/open-source/
[32]:http://nvidia.com/
[33]:http://amd.com/
[34]:http://www.makeuseof.com/tag/open-source-amd-graphics-now-awesome-heres-get/
[35]:https://youtu.be/MShbP3OpASA
[36]:https://itsfoss.com/nvidia-optimus-support-linux/
[37]:http://uplay.com/
[38]:http://origin.com/
[39]:https://itsfoss.com/linux-foundation-head-uses-macos/
[40]:http://www.pcworld.com/article/2940470/hey-gamers-directx-11-is-coming-to-linux-thanks-to-codeweavers-and-wine.html
[41]:https://itsfoss.com/deal-run-windows-software-and-games-on-linux-with-crossover-15-66-off/
[42]:https://bbs.archlinux.org/viewtopic.php?id=214771
[43]:https://ghostbin.com/paste/sy3e2
[44]:https://www.reddit.com/r/linux_gaming/comments/3ap3uu/directx_11_support_coming_to_codeweavers/
[45]:https://www.codeweavers.com/about/blogs/caron/2015/12/10/directx-11-really-james-didnt-lie
[46]:https://itsfoss.com/linux-gaming-guide/
[47]:https://en.wikipedia.org/wiki/Direct3D
[48]:http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-and-not-DirectX


@ -0,0 +1,68 @@
translating by zrszrszrs
GitHub Is Building a Coder's Paradise. It's Not Coming Cheap
============================================================
The VC-backed unicorn startup lost $66 million in nine months of 2016, financial documents show.
Though the name GitHub is practically unknown outside technology circles, coders around the world have embraced the software. The startup operates a sort of Google Docs for programmers, giving them a place to store, share and collaborate on their work. But GitHub Inc. is losing money through profligate spending and has stood by as new entrants emerged in a software category it essentially gave birth to, according to people familiar with the business and financial paperwork reviewed by Bloomberg.
The rise of GitHub has captivated venture capitalists. Sequoia Capital led a $250 million investment in mid-2015. But GitHub management may have been a little too eager to spend the new money. The company paid to send employees jetting across the globe to Amsterdam, London, New York and elsewhere. More costly, it doubled headcount to 600 over the course of about 18 months.
GitHub lost $27 million in the fiscal year that ended in January 2016, according to an income statement seen by Bloomberg. It generated $95 million in revenue during that period, the internal financial document says.
![Chris Wanstrath, co-founder and chief executive officer at GitHub Inc., speaks during the 2015 Bloomberg Technology Conference in San Francisco, California, U.S., on Tuesday, June 16, 2015. The conference gathers global business leaders, tech influencers, top investors and entrepreneurs to shine a spotlight on how coders and coding are transforming business and fueling disruption across all industries. Photographer: David Paul Morris/Bloomberg *** Local Caption *** Chris Wanstrath](https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iXpmtRL9Q0C4/v0/400x-1.jpg)
GitHub CEO Chris Wanstrath.Photographer: David Paul Morris/Bloomberg
Sitting in a conference room featuring an abstract art piece on the wall and a Mad Men-style rollaway bar cart in the corner, GitHub's Chris Wanstrath says the business is running more smoothly now and growing. "What happened to 2015?" says the 31-year-old co-founder and chief executive officer. "Nothing was getting done, maybe? I shouldn't say that. Strike that."
GitHub recently hired Mike Taylor, the former treasurer and vice president of finance at Tesla Motors Inc., to manage spending as chief financial officer. It also hopes to add a seasoned chief operating officer. GitHub has already surpassed last year's revenue in nine months this year, with $98 million, the financial document shows. "The whole product road map, we have all of our shit together in a way that we've never had together. I'm pretty elated right now with the way things are going," says Wanstrath. "We've had a lot of ups and downs, and right now we're definitely in an up."
Also up: expenses. The income statement shows a loss of $66 million in the first three quarters of this year. That's more than twice as much as was lost in any nine-month time frame by Twilio Inc., another maker of software tools founded the same year as GitHub. At least a dozen members of GitHub's leadership team have left since last year, several of whom expressed unhappiness with Wanstrath's management style. GitHub says the company has flourished under his direction but declined to comment on finances. Wanstrath says: "We raised $250 million last year, and we're putting it to use. We're not expecting to be profitable right now."
Wanstrath started GitHub with three friends during the recession of 2008 and bootstrapped the business for four years. They encouraged employees to [work remotely][1], which forced the team to adopt GitHub's tools for their own projects and had the added benefit of saving money on office space. GitHub quickly became essential to the code-writing process at technology companies of all sizes and gave birth to a new generation of programmers by hosting their open-source code for free.
Peter Levine, a partner at Andreessen Horowitz, courted the founders and eventually convinced them to take their first round of VC money in 2012. The firm led a $100 million cash infusion, and Levine joined the board. The next year, GitHub signed a seven-year lease worth about $35 million for a headquarters in San Francisco, says a person familiar with the project.
The new digs gave employees a reason to come into the office. Visitors would enter a lobby modeled after the White House's Oval Office before making their way to a replica of the Situation Room. The company also erected a statue of its mascot, a cartoon octopus-cat creature known as the Octocat. The 55,000-square-foot space is filled with wooden tables and modern art.
In GitHub's cultural hierarchy, the coder is at the top. The company has strived to create the best product possible for software developers and watch them flock to it. In addition to offering its base service for free, GitHub sells more advanced programming tools to companies big and small. But it found that some chief information officers want a human touch and began to consider building out a sales team.
The issue took on a new sense of urgency in 2014 with the formation of a rival startup with a similar name. GitLab Inc. went after large businesses from the start, offering them a cheaper alternative to GitHub. "The big differentiator for GitLab is that it was designed for the enterprise, and GitHub was not," says GitLab CEO Sid Sijbrandij. "One of the values is frugality, and this is something very close to our heart. We want to treat our team members really well, but we don't want to waste any money where it's not needed. So we don't have a big fancy office because we can be effective without it."
Y Combinator, a Silicon Valley business incubator, welcomed GitLab into the fold last year. GitLab says more than 110,000 organizations, including IBM and Macy's Inc., use its software. (IBM also uses GitHub.) Atlassian Corp. has taken a similar top-down approach with its own code repository Bitbucket.
Wanstrath says the competition has helped validate GitHub's business. "When we started, people made fun of us and said there is no money in developer tools," he says. "I've kind of been waiting for this for a long time—to be proven right, that this is a real market."
![GitHub_Office-03](https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iQB5sqXgihdQ/v0/400x-1.jpg)
Source: GitHub
It also spurred GitHub into action. With fresh capital last year valuing the company at $2 billion, it went on a hiring spree. It spent $71 million on salaries and benefits last fiscal year, according to the financial document seen by Bloomberg. This year, those costs rose to $108 million from February to October, with three months still to go in the fiscal year, the document shows. This was the startup's biggest expense by far.
The emphasis on sales seemed to be making an impact, but the team missed some of its targets, says a person familiar with the matter. In September 2014, subscription revenue on an annualized basis was about $25 million each from enterprise sales and organizations signing up through the site, according to another financial document. After GitHub staffed up, annual recurring revenue from large clients increased this year to $70 million while the self-service business saw healthy, if less dramatic, growth to $52 million.
But the uptick in revenue wasn't keeping pace with the aggressive hiring. GitHub cut about 20 employees in recent weeks. "The unicorn trap is that you've sold equity against a plan that you often can't hit; then what do you do?" says Nick Sturiale, a VC at Ignition Partners.
Such business shifts are risky, and stumbles aren't uncommon, says Jason Lemkin, a corporate software VC who's not an investor in GitHub. "That transition from a self-service product in its early days to being enterprise always has bumps," he says. GitHub says it has 18 million users, and its Enterprise service is used by half of the world's 10 highest-grossing companies, including Wal-Mart Stores Inc. and Ford Motor Co.
Some longtime GitHub fans weren't happy with the new direction, though. More than 1,800 developers signed an online petition, saying: "Those of us who run some of the most popular projects on GitHub feel completely ignored by you."
The backlash was a wake-up call, Wanstrath says. GitHub is now more focused on its original mission of catering to coders, he says. "I want us to be judged on, 'Are we making developers more productive?'" he says. At GitHub's developer conference in September, Wanstrath introduced several new features, including an updated process for reviewing code. He says 2016 was a "marquee year."
At least five senior staffers left in 2015, and turnover among leadership continued this year. Among them was co-founder and CIO Scott Chacon, who says he left to start a new venture. “GitHub was always very good to me, from the first day I started when it was just the four of us,” Chacon says. “They allowed me to travel the world representing them; they supported my teaching and evangelizing Git and remote work culture for a long time.”
The travel excursions are expected to continue at GitHub, and there's little evidence it can rein in spending any time soon. The company says about half its staff is remote and that the trips bring together GitHub's distributed workforce and encourage collaboration. Last week, at least 20 employees on GitHub's human-resources team convened in Rancho Mirage, California, for a retreat at the Ritz Carlton.
--------------------------------------------------------------------------------
via: https://www.bloomberg.com/news/articles/2016-12-15/github-is-building-a-coder-s-paradise-it-s-not-coming-cheap
作者:[Eric Newcomer ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.bloomberg.com/authors/ASFMS16EsvU/eric-newcomer
[1]:https://www.bloomberg.com/news/articles/2016-09-06/why-github-finally-abandoned-its-bossless-workplace


@ -1,334 +0,0 @@
Translating by kimii
# Kprobes Event Tracing on ARMv8
![core-dump](http://www.linaro.org/wp-content/uploads/2016/02/core-dump.png)
### Introduction
Kprobes is a kernel feature that allows instrumenting the kernel by setting arbitrary breakpoints that call out to developer-supplied routines before and after the breakpointed instruction is executed (or simulated). See the kprobes documentation[[1]][2] for more information. Basic kprobes functionality is selected with CONFIG_KPROBES. Kprobes support was added to mainline for arm64 in the v4.8 release.
In this article we describe the use of kprobes on arm64 using the debugfs event tracing interfaces from the command line to collect dynamic trace events. This feature has been available for some time on several architectures (including arm32), and is now available on arm64. The feature allows use of kprobes without having to write any code.
### Types of Probes
The kprobes subsystem provides three different types of dynamic probes described below.
### Kprobes
The basic probe is a software breakpoint kprobes inserts in place of the instruction you are probing, saving the original instruction for eventual single-stepping (or simulation) when the probe point is hit.
### Kretprobes
Kretprobes is a part of kprobes that allows intercepting a returning function instead of having to set a probe (or possibly several probes) at the return points. This feature is selected whenever kprobes is selected, for supported architectures (including ARMv8).
### Jprobes
Jprobes allows intercepting a call into a function by supplying an intermediary function with the same calling signature, which will be called first. Jprobes is a programming interface only and cannot be used through the debugfs event tracing subsystem. As such we will not be discussing jprobes further here. Consult the kprobes documentation if you wish to use jprobes.
### Invoking Kprobes
Kprobes provides a set of APIs which can be called from kernel code to set up probe points and register functions to be called when probe points are hit. Kprobes is also accessible without adding code to the kernel, by writing to specific event tracing debugfs files to set the probe address and information to be recorded in the trace log when the probe is hit. The latter is the focus of what this document will be talking about. Lastly kprobes can be accessed through the perf command.
### Kprobes API
The kernel developer can write functions in the kernel (often done in a dedicated debug module) to set probe points and take whatever action is desired right before and right after the probed instruction is executed. This is well documented in kprobes.txt.
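For a flavor of that API, here is a minimal module sketch (my own illustration, not from this article; the probed symbol do_sys_open is an assumption about your kernel, and the canonical examples live in samples/kprobes/ in the kernel tree):

```
#include <linux/module.h>
#include <linux/kprobes.h>

static struct kprobe kp = {
    .symbol_name = "do_sys_open",   /* assumed present and probeable */
};

/* Called right before the probed instruction is executed (or simulated). */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
    pr_info("%s hit, pc = 0x%lx\n", p->symbol_name, instruction_pointer(regs));
    return 0;
}

static int __init kprobe_demo_init(void)
{
    kp.pre_handler = handler_pre;
    return register_kprobe(&kp);
}

static void __exit kprobe_demo_exit(void)
{
    unregister_kprobe(&kp);
}

module_init(kprobe_demo_init);
module_exit(kprobe_demo_exit);
MODULE_LICENSE("GPL");
```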
### Event Tracing
The event tracing subsystem has its own documentation[[2]][3] which might be worth a read to understand the background of event tracing in general. The event tracing subsystem serves as a foundation for both tracepoints and kprobes event tracing. The event tracing documentation focuses on tracepoints, so bear that in mind when consulting that documentation. Kprobes differs from tracepoints in that there is no predefined list of tracepoints but instead arbitrary dynamically created probe points that trigger the collection of trace event information. The event tracing subsystem is controlled and monitored through a set of debugfs files. Event tracing (CONFIG_EVENT_TRACING) will be selected automatically when needed by something like the kprobe event tracing subsystem.
#### Kprobes Events
With the kprobes event tracing subsystem the user can specify information to be reported at arbitrary breakpoints in the kernel, determined simply by specifying the address of any existing probeable instruction along with formatting information. When that breakpoint is encountered during execution kprobes passes the requested information to the common parts of the event tracing subsystem which formats and appends the data to the trace log, much like how tracepoints work. Kprobes uses a similar but mostly separate collection of debugfs files to control and display trace event information. This feature is selected with CONFIG_KPROBE_EVENT. The kprobetrace documentation[[3]][4] provides the essential information on how to use kprobes event tracing and should be consulted to understand details about the examples presented below.
### Kprobes and Perf
The perf tools provide another command line interface to kprobes. In particular “perf probe” allows probe points to be specified by source file and line number, in addition to function name plus offset, and address. The perf interface is really a wrapper for using the debugfs interface for kprobes.
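For a flavor of that wrapper, here is a minimal hedged sketch (not from the original article) of setting and removing a function-entry probe with perf; `do_sys_open` is only an example symbol, and any probeable kernel function would work:

```
# Hedged sketch: a function-entry probe managed through perf rather than
# the raw debugfs files. do_sys_open is only an example symbol.
$ perf probe --add do_sys_open
$ perf record -e probe:do_sys_open -aR sleep 5   # record hits system-wide for 5s
$ perf report                                    # inspect the recorded events
$ perf probe --del do_sys_open                   # remove the probe again
```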
### Arm64 Kprobes
All of the above aspects of kprobes are now implemented for arm64; in practice, however, there are some differences from other architectures:
* Register name arguments are, of course, architecture specific and can be found in the ARM ARM.
* Not all instruction types can currently be probed. Currently unprobeable instructions include mrs/msr (except DAIF read), exception generation instructions, eret, and hint (except for the nop variant). In these cases it is simplest to just probe a nearby instruction instead. These instructions are blacklisted from probing because the changes they cause to processor state are unsafe to do during kprobe single-stepping or instruction simulation, because the single-stepping context kprobes constructs is inconsistent with what the instruction needs, or because the instruction can't tolerate the additional processing time and exception handling in kprobes (ldx/stx).
* An attempt is made to identify instructions within a ldx/stx sequence and prevent probing, however it is theoretically possible for this check to fail resulting in allowing a probed atomic sequence which can never succeed. Be careful when probing around atomic code sequences.
* Note that because of the details of Linux ARM64 calling conventions it is not possible to reliably duplicate the stack frame for the probed function and for that reason no attempt is made to do so with jprobes, unlike the majority of other architectures supporting jprobes. The reason for this is that there is insufficient information for the callee to know for certain the amount of the stack that is needed.
* Note that the stack pointer information recorded from a probe will reflect the particular stack pointer in use at the time the probe was hit, be it the kernel stack pointer or the interrupt stack pointer.
* There is a list of kernel functions which cannot be probed, usually because they are called as part of kprobes processing. Part of this list is architecture-specific and also includes things like exception entry code.
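As a quick hedged aside (not in the original article), both the blacklist of unprobeable functions and the set of currently registered probes can be inspected through debugfs, assuming it is mounted at the usual location:

```
# Assumes debugfs is mounted at /sys/kernel/debug.
$ cat /sys/kernel/debug/kprobes/blacklist   # functions kprobes refuses to probe
$ cat /sys/kernel/debug/kprobes/list        # probes currently registered
```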
### Using Kprobes Event Tracing
One common use case for kprobes is instrumenting function entry and/or exit. It is particularly easy to install probes for this since one can just use the function name for the probe address. Kprobes event tracing will look up the symbol name and determine the address. The ARMv8 calling standard defines where the function arguments and return values can be found, and these can be printed out as part of the kprobe event processing.
### Example: Function entry probing
Instrumenting a USB ethernet driver reset function:
```
$ pwd
/sys/kernel/debug/tracing
$ cat > kprobe_events <<EOF
p ax88772_reset %x0
EOF
$ echo 1 > events/kprobes/enable
```
At this point a trace event will be recorded every time the driver's _ax88772_reset()_ function is called. The event will display the pointer to the _usbnet_ structure passed in via X0 (as per the ARMv8 calling standard) as this function's only argument. After plugging in a USB dongle requiring this ethernet driver, we see the following trace information:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 1/1   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
      kworker/0:0-4     [000] d... 10972.102939: p_ax88772_reset_0: (ax88772_reset+0x0/0x230) arg1=0xffff800064824c80
```
Here we can see the value of the pointer argument passed in to our probed function. Since we did not use the optional labelling features of kprobes event tracing, the information we requested is automatically labeled _arg1_. Note that this refers to the first value in the list of values we requested that kprobes log for this probe, not the actual position of the argument to the function. In this case it also just happens to be the first argument to the function we've probed.
### Example: Function entry and return probing
The kretprobe feature is used specifically to probe a function return. At function entry the kprobes subsystem will be called and will set up a hook to be called at function return, where it will record the requested event information. For the most common case the return information, typically in the X0 register, is quite useful. The return value in %x0 can also be referred to as _$retval_. The following example also demonstrates how to provide a human-readable label to be displayed with the information of interest.
Example of instrumenting the kernel `_do_fork()` function to record arguments and results using a kprobe and a kretprobe:
```
$ cd /sys/kernel/debug/tracing
$ cat > kprobe_events <<EOF
p _do_fork %x0 %x1 %x2 %x3 %x4 %x5
r _do_fork pid=%x0
EOF
$ echo 1 > events/kprobes/enable
```
At this point every call to `_do_fork()` will produce two kprobe events recorded into the “_trace_” file, one reporting the calling argument values and one reporting the return value. The return value will be labeled “_pid_” in the trace file. Here are the contents of the trace file after three fork syscalls have been made:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 6/6   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
              bash-1671  [001] d...   204.946007: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
              bash-1671  [001] d..1   204.946391: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x724
              bash-1671  [001] d...   208.845749: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
              bash-1671  [001] d..1   208.846127: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x725
              bash-1671  [001] d...   214.401604: p__do_fork_0: (_do_fork+0x0/0x3e4) arg1=0x1200011 arg2=0x0 arg3=0x0 arg4=0x0 arg5=0xffff78b690d0 arg6=0x0
              bash-1671  [001] d..1   214.401975: r__do_fork_0: (SyS_clone+0x18/0x20 <- _do_fork) pid=0x726
```
### Example: Dereferencing pointer arguments
For pointer values the kprobe event processing subsystem also allows dereferencing and printing of desired memory contents, for various base data types. It is necessary to manually calculate the offset into structures in order to display a desired field.
Instrumenting the `_do_wait()` function:
```
$ cat > kprobe_events <<EOF
p:wait_p do_wait wo_type=+0(%x0):u32 wo_flags=+4(%x0):u32
r:wait_r do_wait $retval
EOF
$ echo 1 > events/kprobes/enable
```
Note that the argument labels used in the first probe are optional and can be used to more clearly identify the information recorded in the trace log. The signed offset and parentheses indicate that the register argument is a pointer to memory contents to be recorded in the trace log. The “_:u32_” indicates that the memory location contains an unsigned four-byte wide datum (an enum and an int in a locally defined structure in this case).
The probe labels (after the colon) are optional and will be used to identify the probe in the log. The label must be unique for each probe. If unspecified a useful label will be automatically generated from a nearby symbol name, as has been shown in earlier examples.
Also note the “_$retval_” argument could just be specified as “_%x0_”.
Here are the contents of the “_trace_” file after two fork syscalls have been made:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 4/4   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
             bash-1702  [001] d...   175.342074: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xe
             bash-1702  [002] d..1   175.347236: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0x757
             bash-1702  [002] d...   175.347337: wait_p: (do_wait+0x0/0x260) wo_type=0x3 wo_flags=0xf
             bash-1702  [002] d..1   175.347349: wait_r: (SyS_wait4+0x74/0xe4 <- do_wait) arg1=0xfffffffffffffff6
```
### Example: Probing arbitrary instruction addresses
In previous examples we have inserted probes for function entry and exit, however it is possible to probe an arbitrary instruction (with a few exceptions). If we are placing a probe inside a C function the first step is to look at the assembler version of the code to identify where we want to place the probe. One way to do this is to use gdb on the vmlinux file and display the instructions in the function where you wish to place the probe. An example of doing this for the _module_alloc_ function in arch/arm64/kernel/module.c follows. In this case, because gdb seems to prefer using the weak symbol definition and its associated stub code for this function, we get the symbol value from System.map instead:
```
$ grep module_alloc System.map
ffff2000080951c4 T module_alloc
ffff200008297770 T kasan_module_alloc
```
In this example we're using cross-development tools and we invoke gdb on our host system to examine the instructions comprising our function of interest:
```
$ ${CROSS_COMPILE}gdb vmlinux
(gdb) x/30i 0xffff2000080951c4
        0xffff2000080951c4 <module_alloc>:    sub    sp, sp, #0x30
        0xffff2000080951c8 <module_alloc+4>:    adrp    x3, 0xffff200008d70000
        0xffff2000080951cc <module_alloc+8>:    add    x3, x3, #0x0
        0xffff2000080951d0 <module_alloc+12>:    mov    x5, #0x713             // #1811
        0xffff2000080951d4 <module_alloc+16>:    mov    w4, #0xc0              // #192
        0xffff2000080951d8 <module_alloc+20>:    mov    x2, #0xfffffffff8000000    // #-134217728
        0xffff2000080951dc <module_alloc+24>:    stp    x29, x30, [sp,#16]
        0xffff2000080951e0 <module_alloc+28>:    add    x29, sp, #0x10
        0xffff2000080951e4 <module_alloc+32>:    movk    x5, #0xc8, lsl #48
        0xffff2000080951e8 <module_alloc+36>:    movk    w4, #0x240, lsl #16
        0xffff2000080951ec <module_alloc+40>:    str    x30, [sp]
        0xffff2000080951f0 <module_alloc+44>:    mov    w7, #0xffffffff        // #-1
        0xffff2000080951f4 <module_alloc+48>:    mov    x6, #0x0               // #0
        0xffff2000080951f8 <module_alloc+52>:    add    x2, x3, x2
        0xffff2000080951fc <module_alloc+56>:    mov    x1, #0x8000            // #32768
        0xffff200008095200 <module_alloc+60>:    stp    x19, x20, [sp,#32]
        0xffff200008095204 <module_alloc+64>:    mov    x20, x0
        0xffff200008095208 <module_alloc+68>:    bl    0xffff2000082737a8 <__vmalloc_node_range>
        0xffff20000809520c <module_alloc+72>:    mov    x19, x0
        0xffff200008095210 <module_alloc+76>:    cbz    x0, 0xffff200008095234 <module_alloc+112>
        0xffff200008095214 <module_alloc+80>:    mov    x1, x20
        0xffff200008095218 <module_alloc+84>:    bl    0xffff200008297770 <kasan_module_alloc>
        0xffff20000809521c <module_alloc+88>:    tbnz    w0, #31, 0xffff20000809524c <module_alloc+136>
        0xffff200008095220 <module_alloc+92>:    mov    sp, x29
        0xffff200008095224 <module_alloc+96>:    mov    x0, x19
        0xffff200008095228 <module_alloc+100>:    ldp    x19, x20, [sp,#16]
        0xffff20000809522c <module_alloc+104>:    ldp    x29, x30, [sp],#32
        0xffff200008095230 <module_alloc+108>:    ret
        0xffff200008095234 <module_alloc+112>:    mov    sp, x29
        0xffff200008095238 <module_alloc+116>:    mov    x19, #0x0               // #0
```
In this case we are going to display the result from the following source line in this function:
```
p = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START,
VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
NUMA_NO_NODE, __builtin_return_address(0));
```
…and also the return value from the function call in this line:
```
if (p && (kasan_module_alloc(p, size) < 0)) {
```
We can identify these in the assembler code from the call to the external functions. To display these values we will place probes at 0xffff20000809520c and 0xffff20000809521c on our target system:
```
$ cat > kprobe_events <<EOF
p 0xffff20000809520c %x0
p 0xffff20000809521c %x0
EOF
$ echo 1 > events/kprobes/enable
```
Now after plugging an ethernet adapter dongle into the USB port we see the following written into the trace log:
```
$ cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 12/12   #P:8
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
      systemd-udevd-2082  [000] d...    77.200991: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001188000
      systemd-udevd-2082  [000] d...    77.201059: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      systemd-udevd-2082  [000] d...    77.201115: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001198000
      systemd-udevd-2082  [000] d...    77.201157: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      systemd-udevd-2082  [000] d...    77.227456: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011a0000
      systemd-udevd-2082  [000] d...    77.227522: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
      systemd-udevd-2082  [000] d...    77.227579: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b0000
      systemd-udevd-2082  [000] d...    77.227635: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
           modprobe-2097  [002] d...    78.030643: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff2000011b8000
           modprobe-2097  [002] d...    78.030761: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
           modprobe-2097  [002] d...    78.031132: p_0xffff20000809520c: (module_alloc+0x48/0x98) arg1=0xffff200001270000
           modprobe-2097  [002] d...    78.031187: p_0xffff20000809521c: (module_alloc+0x58/0x98) arg1=0x0
```
One more feature of the kprobes event system is recording of statistics information, which can be found in kprobe_profile. After the above trace the contents of that file are:
```
$ cat kprobe_profile
  p_0xffff20000809520c                                    6            0
  p_0xffff20000809521c                                    6            0
```
This indicates that there have been a total of 6 hits on each of the two breakpoints we set, which of course is consistent with the trace log data. More kprobe_profile features are described in the kprobetrace documentation.
There is also the ability to further filter kprobes events.  The debugfs files used to control this are listed in the kprobetrace documentation while the details of their contents are (mostly) described in the trace events documentation.
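As a hedged illustration (not part of the original article), the generic event filter files can be applied to the probes defined earlier; the PID value here is made up, and the paths are relative to /sys/kernel/debug/tracing as in the examples above:

```
# Only log wait_p events for one particular process (the PID is hypothetical):
$ echo 'common_pid == 1702' > events/kprobes/wait_p/filter
# When done, disable all kprobe events and clear the probe definitions:
$ echo 0 > events/kprobes/enable
$ echo > kprobe_events
```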
### Conclusion
Linux on ARMv8 is now at parity with other architectures supporting the kprobes feature. Work is being done by others to also add uprobes and systemtap support. These features/tools and other already completed features (e.g., perf, coresight) allow the Linux ARMv8 user to debug and test performance as they would on other, older architectures.
* * *
Bibliography
[[1]][5] Keniston, Jim, Prasanna S. Panchamukhi, and Masami Hiramatsu. “Kernel Probes (Kprobes).” _GitHub_. GitHub, Inc., 15 Aug. 2016. Web. 13 Dec. 2016.
[[2]][6] Ts'o, Theodore, Li Zefan, and Tom Zanussi. “Event Tracing.” _GitHub_. GitHub, Inc., 3 Mar. 2016. Web. 13 Dec. 2016.
[[3]][7] Hiramatsu, Masami. “Kprobe-based Event Tracing.” _GitHub_. GitHub, Inc., 18 Aug. 2016. Web. 13 Dec. 2016.
----------------
作者简介 [David Long][8]David works as an engineer in the Linaro Kernel - Core Development team. Before coming to Linaro he spent several years in the commercial and defense industries doing both embedded realtime work, and software development tools for Unix. That was followed by a dozen years at Digital (aka Compaq) doing Unix standards, C compiler, and runtime library work. After that David went to a series of startups doing embedded Linux and Android, embedded custom OS's, and Xen virtualization. He has experience with MIPS, Alpha, and ARM platforms (amongst others). He has used most flavors of Unix starting in 1979 with Bell Labs V6, and has been a long-time Linux user and advocate. He has also occasionally been known to debug a device driver with a soldering iron and digital oscilloscope.
--------------------------------------------------------------------------------
via: http://www.linaro.org/blog/kprobes-event-tracing-armv8/
作者:[David Long][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linaro.org/author/david-long/
[1]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
[2]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
[3]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
[4]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
[5]:https://github.com/torvalds/linux/blob/master/Documentation/kprobes.txt
[6]:https://github.com/torvalds/linux/blob/master/Documentation/trace/events.txt
[7]:https://github.com/torvalds/linux/blob/master/Documentation/trace/kprobetrace.txt
[8]:http://www.linaro.org/author/david-long/
[9]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#comments
[10]:http://www.linaro.org/blog/kprobes-event-tracing-armv8/#
[11]:http://www.linaro.org/tag/arm64/
[12]:http://www.linaro.org/tag/armv8/
[13]:http://www.linaro.org/tag/jprobes/
[14]:http://www.linaro.org/tag/kernel/
[15]:http://www.linaro.org/tag/kprobes/
[16]:http://www.linaro.org/tag/kretprobes/
[17]:http://www.linaro.org/tag/perf/
[18]:http://www.linaro.org/tag/tracing/

View File

@ -0,0 +1,104 @@
New Year's resolution: Donate to 1 free software project every month
============================================================
### Donating just a little bit helps ensure the open source software I use remains alive
Free and open source software is an absolutely critical part of our world—and the future of technology and computing. One problem that consistently plagues many free software projects, though, is the challenge of funding ongoing development (and support and documentation). 
With that in mind, I have finally settled on a New Year's resolution for 2017: to donate to one free software project (or group) every month, for the whole year. After all, these projects are saving me a boatload of money because I don't need to buy expensive, proprietary packages to accomplish the same things.
I'm not setting some crazy goal here—not requiring that I donate beyond my means. Heck, some months I may be able to donate only a few bucks. But every little bit helps, right? 
To help me accomplish that goal, below is a list of free software projects with links to where I can donate to them. Organized by categories, just because. I'm scheduling a monthly calendar item to remind me to bring up this page and donate to one of these projects. 
This isn't a complete list—not by any measure—but it's a good starting point. Apologies to the (many) great projects out there that I missed.
#### Linux distributions 
[elementary OS][20] — In addition to the distribution itself (which is based, in part, on Ubuntu), this team also develops the Pantheon desktop environment. 
[Solus][21] — This is a “from scratch” distro using their own custom-developed desktop environment, “Budgie.” 
[Ubuntu MATE][22] — It's Ubuntu—with Unity ripped off and replaced with MATE. I like to think of this as “What Ubuntu was like back when I still used Ubuntu.” 
[Debian][23] — If you use Ubuntu or elementary or Mint, you are using a system based on Debian. Personally, I use Debian on my [PocketCHIP][24].
#### Linux components 
[PulseAudio][25] — PulseAudio is all over the place now. If it stopped being supported and maintained, that would be… highly inconvenient. 
#### Productivity/Creation 
[Gimp][26] — The GNU Image Manipulation Program is one of the most famous free software projects—and the standard for cross-platform raster design tools. 
[FreeCAD][27] — When people talk about difficulty in moving from Windows to Linux, the lack of CAD software often crops up. Supporting projects such as FreeCAD helps to remove that barrier. 
[OpenShot][28] — Video editing on Linux (and other free software desktops) has improved tremendously over the past few years. But there is still work to be done. 
[Blender][29] — What is Blender? A 3D modelling suite? A video editor? A game creation system? All three (and more)? Whatever you use Blender for, it's amazing. 
[Inkscape][30] — This is the most fantastic vector graphics editing suite on the planet (in my oh-so-humble opinion). 
[LibreOffice / The Document Foundation][31] — I am writing this very document in LibreOffice. Donating to their foundation to help further development seems to be in my best interests. 
#### Software development 
[Python Software Foundation][32] — Python is a great language and is used all over the place. 
#### Free and open source foundations 
[Free Software Foundation][33] — “The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user freedom. We defend the rights of all software users.” 
[Software Freedom Conservancy][34] — “Software Freedom Conservancy helps promote, improve, develop and defend Free, Libre and Open Source Software (FLOSS) projects.” 
Again—this is, by no means, a complete list. Not even close. Luckily many projects provide easy donation mechanisms on their websites.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3160174/linux/new-years-resolution-donate-to-1-free-software-project-every-month.html
作者:[Bryan Lunduke][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Bryan-Lunduke/
[1]:https://www.networkworld.com/article/3143583/linux/linux-y-things-i-am-thankful-for.html
[2]:https://www.networkworld.com/article/3152745/linux/5-rock-solid-linux-distros-for-developers.html
[3]:https://www.networkworld.com/article/3130760/open-source-tools/elementary-os-04-review-and-interview-with-the-founder.html
[4]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
[5]:https://twitter.com/intent/tweet?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&via=networkworld&text=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
[6]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
[7]:http://www.linkedin.com/shareArticle?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&title=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
[8]:https://plus.google.com/share?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
[9]:http://reddit.com/submit?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html&title=New+Year%E2%80%99s+resolution%3A+Donate+to+1+free+software+project+every+month
[10]:http://www.stumbleupon.com/submit?url=https%3A%2F%2Fwww.networkworld.com%2Farticle%2F3160174%2Flinux%2Fnew-years-resolution-donate-to-1-free-software-project-every-month.html
[11]:https://www.networkworld.com/article/3160174/linux/new-years-resolution-donate-to-1-free-software-project-every-month.html#email
[12]:https://www.networkworld.com/article/3143583/linux/linux-y-things-i-am-thankful-for.html
[13]:https://www.networkworld.com/article/3152745/linux/5-rock-solid-linux-distros-for-developers.html
[14]:https://www.networkworld.com/article/3130760/open-source-tools/elementary-os-04-review-and-interview-with-the-founder.html
[15]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
[16]:https://www.networkworld.com/video/51206/solo-drone-has-linux-smarts-gopro-mount
[17]:https://www.facebook.com/NetworkWorld/
[18]:https://www.linkedin.com/company/network-world
[19]:http://www.networkworld.com/article/3158685/open-source-tools/free-software-foundation-shakes-up-its-list-of-priority-projects.html
[20]:https://www.patreon.com/elementary
[21]:https://www.patreon.com/solus
[22]:https://www.patreon.com/ubuntu_mate
[23]:https://www.debian.org/donations
[24]:http://www.networkworld.com/article/3157210/linux/review-pocketchipsuper-cheap-linux-terminal-that-fits-in-your-pocket.html
[25]:https://www.patreon.com/tanuk
[26]:https://www.gimp.org/donating/
[27]:https://www.patreon.com/yorikvanhavre
[28]:https://www.patreon.com/openshot
[29]:https://www.blender.org/foundation/donation-payment/
[30]:https://inkscape.org/en/support-us/donate/
[31]:https://www.libreoffice.org/donate/
[32]:https://www.python.org/psf/donations/
[33]:http://www.fsf.org/associate/
[34]:https://sfconservancy.org/supporter/

View File

@ -0,0 +1,113 @@
translating by HardworkFish
INTRODUCING DOCKER SECRETS MANAGEMENT
============================================================
Containers are changing how we view apps and infrastructure. Whether the code inside containers is big or small, container architecture introduces a change to how that code behaves with hardware: it fundamentally abstracts it from the infrastructure. Docker believes that there are three key components to container security and together they result in inherently safer apps.
![Docker Security](https://i2.wp.com/blog.docker.com/wp-content/uploads/e12387a1-ab21-4942-8760-5b1677bc656d-1.jpg?w=1140&ssl=1)
A critical element of building safer apps is having a secure way of communicating with other apps and systems, something that often requires credentials, tokens, passwords and other types of confidential information—usually referred to as application secrets. We are excited to introduce Docker Secrets, a container native solution that strengthens the Trusted Delivery component of container security by integrating secret distribution directly into the container platform.
With containers, applications are now dynamic and portable across multiple environments. This made existing secrets-distribution solutions inadequate, because they were largely designed for static environments. Unfortunately, this led to an increase in the mismanagement of application secrets, making it common to find insecure, home-grown solutions, such as embedding secrets into version control systems like GitHub, or other equally bad, bolted-on point solutions added as an afterthought.
### Introducing Docker Secrets Management
We fundamentally believe that apps are safer if there is a standardized interface for accessing secrets. Any good solution will also have to follow security best practices, such as encrypting secrets while in transit; encrypting secrets at rest; preventing secrets from unintentionally leaking when consumed by the final application; and strictly adhering to the principle of least privilege, where an application only has access to the secrets that it needs—no more, no less.
By integrating secrets into Docker orchestration, we are able to deliver a solution for the secrets management problem that follows these exact principles.
The following diagram provides a high-level view of how the Docker swarm mode architecture is applied to securely deliver a new type of object to our containers: a secret object.
![Docker Secrets Management](https://i0.wp.com/blog.docker.com/wp-content/uploads/b69d2410-9e25-44d8-aa2d-f67b795ff5e3.jpg?w=1140&ssl=1)
In Docker, a secret is any blob of data, such as a password, SSH private key, TLS Certificate, or any other piece of data that is sensitive in nature. When you add a secret to the swarm (by running `docker secret create`), Docker sends the secret over to the swarm manager over a mutually authenticated TLS connection, making use of the [built-in Certificate Authority][17] that gets automatically created when bootstrapping a new swarm.
```
$ echo "This is a secret" | docker secret create my_secret_data -
```
Once the secret reaches a manager node, it gets saved to the internal Raft store, which uses NaCl's Salsa20-Poly1305 with a 256-bit key to ensure no data is ever written to disk unencrypted. Writing to the internal store gives secrets the same high-availability guarantees that the rest of the swarm management data gets.
When a swarm manager starts up, the encrypted Raft logs containing the secrets are decrypted using a data encryption key that is unique per node. This key, and the node's TLS credentials used to communicate with the rest of the cluster, can be encrypted with a cluster-wide key encryption key, called the unlock key, which is also propagated using Raft and will be required when the manager starts.
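As a hedged sketch (not from the original post), the unlock-key behavior described above maps onto the swarm autolock commands:

```
# Turn on autolock so the Raft logs and TLS keys are protected by an unlock key;
# the command prints the key, which should be stored safely out-of-band.
$ docker swarm update --autolock=true
# After a manager restart, supply the key before it rejoins the swarm:
$ docker swarm unlock
```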
When you grant a newly-created or running service access to a secret, one of the manager nodes (only managers have access to all the stored secrets) will send it over the already established TLS connection exclusively to the nodes that will be running that specific service. This means that nodes cannot request the secrets themselves, and will only gain access to the secrets when provided to them by a manager, strictly for the services that require them.
```
$ docker service create --name="redis" --secret="my_secret_data" redis:alpine
```
The unencrypted secret is mounted into the container in an in-memory filesystem at `/run/secrets/<secret_name>`.
```
$ docker exec $(docker ps --filter name=redis -q) ls -l /run/secrets
total 4
-r--r--r--    1 root     root            17 Dec 13 22:48 my_secret_data
```
If a service gets deleted, or rescheduled somewhere else, the manager will immediately notify all the nodes that no longer require access to that secret to erase it from memory, and the node will no longer have any access to that application secret.
```
$ docker service update --secret-rm="my_secret_data" redis
$ docker exec -it $(docker ps --filter name=redis -q) cat /run/secrets/my_secret_data
cat: can't open '/run/secrets/my_secret_data': No such file or directory
```
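For completeness, here is a hedged sketch (not from the original post) of a few related management commands from the same CLI:

```
$ docker secret ls                       # list the secrets known to the swarm
$ docker secret inspect my_secret_data   # shows metadata only, never the payload
# Grant a running service access to an additional, hypothetical secret:
$ docker service update --secret-add="another_secret" redis
```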
Check out the [Docker secrets docs][18] for more information and examples of how to create and manage your secrets. A special shout out to Laurens Van Houtven (https://www.lvh.io/), who worked in collaboration with the Docker security and core engineering team to help make this feature a reality.
### Safer Apps with Docker
Docker secrets is designed to be easily usable by developers and IT ops teams to build and run safer apps. It is a container-first architecture designed to keep secrets safe and to expose them only when needed, to the exact container that needs that secret to operate. From defining apps and secrets with Docker Compose through an IT admin deploying that Compose file directly in Docker Datacenter, the services, secrets, networks, and volumes will travel securely and safely with the application.
Resources to learn more:
* [Docker Datacenter on 1.13 with Secrets, Security Scanning, Content Cache and More][7]
* [Download Docker][8] and get started today
* [Try secrets in Docker Datacenter][9]
* [Read the Documentation][10]
* Attend an [upcoming webinar][11]
--------------------------------------------------------------------------------
via: https://blog.docker.com/2017/02/docker-secrets-management/
作者:[Ying Li][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.docker.com/author/yingli/
[1]:http://www.linkedin.com/shareArticle?mini=true&url=http://dockr.ly/2k6gnOB&title=Introducing%20Docker%20Secrets%20Management&summary=Containers%20are%20changing%20how%20we%20view%20apps%20and%20infrastructure.%20Whether%20the%20code%20inside%20containers%20is%20big%20or%20small,%20container%20architecture%20introduces%20a%20change%20to%20how%20that%20code%20behaves%20with%20hardware%20-%20it%20fundamentally%20abstracts%20it%20from%20the%20infrastructure.%20Docker%20believes%20that%20there%20are%20three%20key%20components%20to%20container%20security%20and%20...
[2]:http://www.reddit.com/submit?url=http://dockr.ly/2k6gnOB&title=Introducing%20Docker%20Secrets%20Management
[3]:https://plus.google.com/share?url=http://dockr.ly/2k6gnOB
[4]:http://news.ycombinator.com/submitlink?u=http://dockr.ly/2k6gnOB&t=Introducing%20Docker%20Secrets%20Management
[5]:https://twitter.com/share?text=Get+safer+apps+for+dev+and+ops+w%2F+new+%23Docker+secrets+management+&via=docker&related=docker&url=http://dockr.ly/2k6gnOB
[6]:https://twitter.com/share?text=Get+safer+apps+for+dev+and+ops+w%2F+new+%23Docker+secrets+management+&via=docker&related=docker&url=http://dockr.ly/2k6gnOB
[7]:http://dockr.ly/AppSecurity
[8]:https://www.docker.com/getdocker
[9]:http://www.docker.com/trial
[10]:https://docs.docker.com/engine/swarm/secrets/
[11]:http://www.docker.com/webinars
[12]:https://blog.docker.com/author/yingli/
[13]:https://blog.docker.com/tag/container-security/
[14]:https://blog.docker.com/tag/docker-security/
[15]:https://blog.docker.com/tag/secrets-management/
[16]:https://blog.docker.com/tag/security/
[17]:https://docs.docker.com/engine/swarm/how-swarm-mode-works/pki/
[18]:https://docs.docker.com/engine/swarm/secrets/
[19]:https://www.lvh.io/

View File

@ -1,331 +0,0 @@
zpl1025
How to take screenshots on Linux using Scrot
============================================================
### On this page
1. [About Scrot][12]
2. [Scrot Installation][13]
3. [Scrot Usage/Features][14]
1. [Get the application version][1]
2. [Capturing current window][2]
3. [Selecting a window][3]
4. [Include window border in screenshots][4]
5. [Delay in taking screenshots][5]
6. [Countdown before screenshot][6]
7. [Image quality][7]
8. [Generating thumbnails][8]
9. [Join multiple displays shots][9]
10. [Executing operations on saved images][10]
11. [Special strings][11]
4. [Conclusion][15]
Recently, we discussed the [gnome-screenshot][17] utility, which is a good screen-grabbing tool. But if you are looking for an even better command line utility for taking screenshots, then you must give Scrot a try. This tool has some extra features that are currently not available in gnome-screenshot. In this tutorial, we will explain Scrot using easy-to-understand examples.
Please note that all the examples mentioned in this tutorial have been tested on Ubuntu 16.04 LTS, and the scrot version we have used is 0.8.
### About Scrot
[Scrot][18] (**SCR**eensh**OT**) is a screenshot capturing utility that uses the imlib2 library to acquire and save images. Developed by Tom Gilbert, it's written in the C programming language and is licensed under the BSD License.
### Scrot Installation
The scrot tool may be pre-installed on your Ubuntu system, but if that's not the case, then you can install it using the following command:
sudo apt-get install scrot
Once the tool is installed, you can launch it by using the following command:
scrot [options] [filename]
**Note**: The parameters in [] are optional.
### Scrot Usage/Features
In this section, we will discuss how the Scrot tool can be used and what all features it provides.
When the tool is run without any command line options, it captures the whole screen.
[
![Using Scrot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/scrot.png)
][19]
By default, the captured file is saved with a date-stamped filename in the current directory, although you can also explicitly specify the name of the captured image when the command is run. For example:
scrot [image-name].png
### Get the application version
If you want, you can check the version of scrot using the -v command line option.
scrot -v
Here is an example:
[
![Get scrot version](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/version.png)
][20]
### Capturing current window
Using the utility, you can limit the screenshot to the currently focused window. This feature can be accessed using the -u command line option.
scrot -u
For example, here's my desktop when I executed the above command on the command line:
[
![capture window in scrot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/desktop.png)
][21]
And here's the screenshot captured by scrot: 
[
![Screenshot captured by scrot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/active.png)
][22]
### Selecting a window
The utility allows you to capture any window by clicking on it using the mouse. This feature can be accessed using the -s option.
scrot -s
For example, as you can see in the screenshot below, I have a screen with two terminal windows overlapping each other. On the top window, I run the aforementioned command.
[
![select window](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/select1.png)
][23]
Now suppose, I want to capture the bottom terminal window. For that, I will just click on that window once the command is executed - the command execution won't complete until you click somewhere on the screen.
Here's the screenshot captured after clicking on that terminal:
[
![window screenshot captured](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/select2.png)
][24]
**Note**: As you can see in the above snapshot, whatever area the bottom window is covering has been captured, even if that includes an overlapping portion of the top window.
### Include window border in screenshots
The -u command line option we discussed earlier doesn't include the window border in screenshots. However, you can include the border of the window if you want. This feature can be accessed using the -b option (in conjunction with the -u option of course).
scrot -ub
Here is an example screenshot:
[
![include window border in screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/border-new.png)
][25]
**Note**: Including window border also adds some of the background area to the screenshot.
### Delay in taking screenshots
You can introduce a time delay while taking screenshots. For this, you have to assign a numeric value to the --delay or -d command line option.
scrot --delay [NUM]
scrot --delay 5
Here is an example:
[
![delay taking screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/delay.png)
][26]
In this case, scrot will wait for 5 seconds and then take the screenshot.
### Countdown before screenshot
The tool also allows you to display countdown while using delay option. This feature can be accessed using the -c command line option.
scrot --delay [NUM] -c
scrot -d 5 -c
Here is an example screenshot:
[
![example delayed screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/countdown.png)
][27]
### Image quality
Using the tool, you can adjust the quality of the screenshot image on a scale of 1-100. A higher value means a larger file size and less compression. The default value is 75, although the effect differs depending on the file format chosen.
This feature can be accessed using --quality or -q option, but you have to assign a numeric value to this option ranging from 1-100.
scrot --quality [NUM]
scrot --quality 10
Here is an example snapshot:
[
![snapshot quality](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/img-quality.jpg)
][28]
So you can see that the quality of the image degrades a lot as the -q option is assigned a value closer to 1.
### Generating thumbnails
The scrot utility also allows you to generate a thumbnail of the screenshot. This feature can be accessed using the --thumb option. This option requires a NUM value, which is basically the percentage of the original screenshot size.
scrot --thumb NUM
scrot --thumb 50
**Note**: The --thumb option makes sure that the screenshot is captured and saved in original size as well.
For example, here is the original screenshot captured in my case:
[
![Original screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/orig.png)
][29]
And following is the thumbnail saved:
[
![thumbnail of the screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/thmb.png)
][30]
### Join multiple displays shots
In case your machine has multiple displays attached to it, scrot allows you to grab and join screenshots of these displays. This feature can be accessed using the -m command line option. 
scrot -m
Here is an example snapshot:
[
![Join screenshots](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/multiple.png)
][31]
### Executing operations on saved images
Using the tool, we can execute various operations on saved images - for example, open the screenshot in an image editor like gThumb. This feature can be accessed using the -e command line option. Here's an example:
scrot abc.png -e gthumb abc.png
Here, gthumb is an image editor which will automatically launch after we run the command.
Following is the snapshot of the command:
[
![Execute commands on screenshots](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec1.png)
][32]
And here is the output of the above command:
[
![esample screenshot](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec2.png)
][33]
So you can see that the scrot command grabbed the screenshot and then launched the gThumb image editor with the captured image as argument.
If you don't specify a filename for your screenshot, then the snapshot will be saved with a date-stamped filename in your current directory - this, as we've already mentioned in the beginning, is the default behaviour of scrot.
Here's an -e command line option example where scrot uses the default name for the screenshot: 
scrot -e gthumb $n
[
![scrot running gthumb](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/exec3.png)
][34]
It's worth mentioning that $n is a special string, which provides access to the screenshot name. For more details on special strings, head to the next section.
### Special strings
The -e (or --exec) and filename parameters can take format specifiers when used with scrot. There are two types of format specifiers: the first type is characters preceded by %, which are used for date and time formats, while the second type is internal to scrot and is prefixed by $.
Several specifiers which are recognised by the --exec and filename parameters are discussed below.
**$f**  provides access to the screenshot path (including the filename).
For example,
scrot ashu.jpg -e mv $f ~/Pictures/Scrot/ashish/
Here is an example snapshot:
[
![example](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/f.png)
][35]
If you do not specify a filename, scrot will save the snapshot using its default date-stamped filename format: %yy-%mm-%dd-%hhmmss_$wx$h_scrot.png.
**$n**  provides the snapshot name. Here is an example snapshot:
[
![scrot $n variable](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/n.png)
][36]
**$s**  gives access to the size of the screenshot. This feature, for example, can be accessed in the following way.
scrot abc.jpg -e echo $s
Here is an example snapshot
[
![scrot $s variable](https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/s.png)
][37]
Similarly, you can use the other special strings **$p**, **$w**, **$h**, **$t**, **$$** and **\n**, which provide access to the image pixel size, image width, image height, image format, a literal $ symbol, and a new line, respectively. You can, for example, use these strings in a way similar to the **$s** example we have discussed above.
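Putting the pieces together, here is a hedged example (not from the original tutorial) that combines a date/time-stamped filename with the internal strings; the single quotes keep the shell from expanding $f before scrot sees it:

scrot '%Y-%m-%d_$wx$h.png' -e 'mv $f ~/Pictures/'

This would save a screenshot named after the date and its dimensions, then move it into ~/Pictures/.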
### Conclusion
The utility is easy to install on Ubuntu systems, which is good for beginners. Scrot also provides some advanced features, such as special strings that can be used in scripting by professionals. Needless to say, there is a slight learning curve associated with them.
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/
作者:[Himanshu Arora][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/
[1]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#get-the-applicationnbspversion
[2]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#capturing-current-window
[3]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#selecting-a-window
[4]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#includenbspwindow-border-in-screenshots
[5]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#delay-in-taking-screenshots
[6]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#countdown-before-screenshot
[7]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#image-quality
[8]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#generating-thumbnails
[9]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#join-multiple-displays-shots
[10]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#executing-operations-on-saved-images
[11]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#special-strings
[12]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#about-scrot
[13]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#scrot-installation
[14]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#scrot-usagefeatures
[15]:https://www.howtoforge.com/tutorial/how-to-take-screenshots-in-linux-with-scrot/#conclusion
[16]:https://www.howtoforge.com/subscription/
[17]:https://www.howtoforge.com/tutorial/taking-screenshots-in-linux-using-gnome-screenshot/
[18]:https://en.wikipedia.org/wiki/Scrot
[19]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/scrot.png
[20]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/version.png
[21]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/desktop.png
[22]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/active.png
[23]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/select1.png
[24]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/select2.png
[25]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/border-new.png
[26]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/delay.png
[27]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/countdown.png
[28]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/img-quality.jpg
[29]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/orig.png
[30]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/thmb.png
[31]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/multiple.png
[32]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec1.png
[33]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec2.png
[34]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/exec3.png
[35]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/f.png
[36]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/n.png
[37]:https://www.howtoforge.com/images/how-to-take-screenshots-in-linux-with-scrot/big/s.png

View File

@ -0,0 +1,168 @@
Which Official Ubuntu Flavor Is Best for You?
============================================================
![Ubuntu Budgie](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_budgie.jpg?itok=xpo3Ujfw "Ubuntu Budgie")
Ubuntu Budgie is just one of the few officially recognized flavors of Ubuntu. Jack Wallen takes a look at some important differences between them.[Used with permission][7]
Ubuntu Linux comes in a few officially recognized flavors, as well as several derivative distributions. The recognized flavors are:
* [Kubuntu][9] - Ubuntu with the KDE desktop
* [Lubuntu][10] - Ubuntu with the LXDE desktop
* [Mythbuntu][11] - Ubuntu MythTV
* [Ubuntu Budgie][12] - Ubuntu with the Budgie desktop
* [Xubuntu][8] - Ubuntu with Xfce
Up until recently, the official Ubuntu Linux included the in-house Unity desktop and a sixth recognized flavor existed: Ubuntu GNOME -- Ubuntu with the GNOME desktop environment.
When Mark Shuttleworth decided to nix Unity, the choice was obvious to Canonical—make GNOME the official desktop of Ubuntu Linux. This begins with Ubuntu 18.04 (so April, 2018) and we'll be down to the official distribution and four recognized flavors.
For those already enmeshed in the Linux community, that's some seriously simple math to do—you know which Linux desktop you like, so making the choice between Ubuntu, Kubuntu, Lubuntu, Mythbuntu, Ubuntu Budgie, and Xubuntu couldn't be easier. Those that haven't already been indoctrinated into the way of Linux won't see that as such a cut-and-dried decision.
To that end, I thought it might be a good idea to help newer users decide which flavor is best for them. After all, choosing the wrong distribution out of the starting gate can make for a less-than-ideal experience.
And so, if you're considering a flavor of Ubuntu, and you want your experience to be as painless as possible, read on.
### Ubuntu
I'll begin with the official flavor of Ubuntu. I am also going to warp time a bit and skip Unity, to launch right into the upcoming GNOME-based distribution. Beyond GNOME being an incredibly stable and easy to use desktop environment, there is one very good reason to select the official flavor—support. The official flavor of Ubuntu is commercially supported by Canonical. For $150.00 per year, you can purchase [official support][20] for the Ubuntu desktop. There is, of course, a 50-desktop minimum for this level of support. For individuals, the best bet for support would be the [Ubuntu Forums][21], the [Ubuntu documentation][22], or the [Community help wiki][23].
Beyond the commercial support, the reason to choose the official Ubuntu flavor would be if you're looking for a modern, full-featured desktop that is incredibly reliable and easy to use. GNOME has been designed to serve as a platform perfectly suited for both desktops and laptops (Figure 1). Unlike its predecessor, Unity, GNOME can be far more easily customized to suit your needs—to a point. If you're not one to tinker with the desktop, fear not, GNOME just works. In fact, the out of the box experience with GNOME might well be one of the finest on the market—even rivaling (or besting) Mac OS X. If tinkering and tweaking is of primary interest, you will find GNOME somewhat limiting. The [GNOME Tweak Tool][24] and [GNOME Shell Extensions][25] will only take you so far, before you find yourself wanting more.
![GNOME desktop](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_a.jpg?itok=Ir6jBKbd "GNOME desktop")
Figure 1: The GNOME desktop with a Unity-like flavor might be what we see with Ubuntu 18.04.[Used with permission][1]
### Kubuntu
The [K Desktop Environment][26] (otherwise known as KDE) has been around as long as GNOME and has, at times, been maligned as a lesser desktop. With the release of KDE Plasma 5, that changed. KDE has become an incredibly powerful, efficient, and stable desktop that can stand toe to toe with the best of them. But why would you select Kubuntu over the official Ubuntu? The answer to that question is quite simple—you're used to the Windows XP/7 desktop metaphor. Start menu, taskbar, system tray, etc., KDE has those and more, all fashioned in such a way that will make you feel like you're using the best of the past and current technologies. In fact, if you're looking for one of the most Windows 7-like official Ubuntu flavors, you won't find one that better fits the bill.
One of the nice things about Kubuntu, is that you'll find it a bit more flexible than any Windows iteration you've ever used—and equally reliable/user-friendly. And don't think, because KDE opts to offer a desktop somewhat similar to Windows 7, that it doesn't have a modern flavor. In fact, Kubuntu takes what worked well with the Windows 7 interface and updates it to meet a more modern aesthetic (Figure 2).
![Kubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_b.jpg?itok=dGpebi4z "Kubuntu")
Figure 2: Kubuntu offers a modern take on an old UX.[Used with permission][2]
The official Ubuntu is not the only flavor to offer desktop support. Kubuntu users also can pay for [commercial support][27]. Be warned, it's not cheap: one hour of support time will cost you $103.88.
### Lubuntu
If you're looking for an easy-to-use desktop that is very fast (so that older hardware will feel like new) and far more flexible than just about any desktop you've ever used, Lubuntu is what you want. The only caveat to Lubuntu is that you're looking at a desktop that is a bit more bare-bones than you may be accustomed to. Lubuntu makes use of the [LXDE desktop][28] and includes a list of applications that continues the lightweight theme. So if you're looking for blazing fast speeds on the desktop, Lubuntu might be a good choice.
However, there is a caveat with Lubuntu and, for some users, this might be a deal breaker. Along with the small footprint of Lubuntu come pre-installed applications that might not stand up to task. For example, instead of the full-blown office suite, you'll find the [AbiWord word processor][29] and the [Gnumeric spreadsheet][30] tool. Don't get me wrong; both of these are fine tools. However, if you're looking for software that's business-ready, you will find them lacking. On the other hand, if you want to install more work-centric tools (e.g., LibreOffice), Lubuntu includes the Synaptic Package Manager to make installation of third-party software simple.
Even with the limited default software, Lubuntu offers a clean, easy-to-use desktop (Figure 3) that anyone can start using with little to no learning curve.
![Lubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_c.jpg?itok=nWsJr39r "Lubuntu")
Figure 3: What Lubuntu lacks in software, it makes up for in speed and simplicity. [Used with permission][3]
### Mythbuntu
Mythbuntu is a bit of an odd bird here, because it isn’t really a desktop variant. Instead, Mythbuntu is a special flavor of Ubuntu designed to be a multimedia powerhouse. Using Mythbuntu requires TV tuner and TV-out cards. And, during the installation, there are a number of additional steps that must be taken (choosing how to set up the frontend/backend as well as setting up your IR remotes).
If you do happen to have the hardware (and the desire to create your own Ubuntu-powered entertainment system), Mythbuntu is the distribution you want. Once youve installed Mythbuntu, you will then be prompted to walk through the setup of your Capture cards, recording profiles, video sources, and Input connections (Figure 4).
![Mythbuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_d.jpg?itok=Uk16xUIF "Mythbuntu")
Figure 4: Getting ready to set up Mythbuntu. [Used with permission][4]
### Ubuntu Budgie
Ubuntu Budgie is the new kid on the block on the official flavor list. Sporting the Budgie Desktop, this is a beautiful and modern take on Linux that will please just about any type of user. The goal of Ubuntu Budgie was to create an elegant and simple desktop interface. Mission accomplished. If you’re looking for a beautiful desktop to work in, on top of the remarkably stable Ubuntu Linux platform, look no further than Ubuntu Budgie.
Adding this particular spin on Ubuntu to the list of official variants was a smart move on the part of Canonical. With Unity going away, they needed a desktop that would offer the elegance found in Unity. Customization of Budgie is very easy, and the list of included software will get you working and browsing immediately.
And, unlike the learning curve many users encountered with Unity, the developers/designers of Ubuntu Budgie have done a remarkable job of keeping this take on Ubuntu familiar. Click on the “start” button to reveal a fairly standard menu of applications. Budgie also includes an easy-to-use Dock (Figure 5) that holds application launchers for quick access.
![Budgie](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ubuntu_flavor_e.jpg?itok=mwlo4xzm "Budgie")
Figure 5: This is one beautiful desktop. [Used with permission][5]
Another really nice feature of Ubuntu Budgie is a sidebar that can be quickly revealed and hidden. This sidebar holds applets and notifications. With it in play, your desktop can be incredibly useful while remaining clutter-free.
In the end, if you’re looking for something a bit different that also happens to be a very modern take on the desktop—with features and functions not found in other distributions—Ubuntu Budgie is what you want.
### Xubuntu
Another official flavor of Ubuntu that does a nice job of providing a small-footprint version of Linux is [Xubuntu][32]. The difference between Xubuntu and Lubuntu is that, where Lubuntu uses the LXDE desktop, Xubuntu makes use of [Xfce][33]. What you get with that difference is a lightweight desktop that is far more configurable (than Lubuntu) as well as one that includes the more business-ready LibreOffice office suite.
Xubuntu offers an out-of-the-box experience that anyone, regardless of experience level, can use. But don't think that immediate familiarity means this flavor of Ubuntu is locked out of making it your own. If you're looking for a take on Ubuntu that's somewhat old-school out of the box but can be heavily tweaked to better resemble a more modern desktop, Xubuntu is what you want.
One really handy addition to Xubuntu that I've always enjoyed (one that harks back to Enlightenment) is the ability to bring up the "start" menu by right-clicking anywhere on the desktop (Figure 6). This can make for very efficient usage.
![Xubuntu](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/xubuntu.jpg?itok=XL8_hLet "Xubuntu")
Figure 6: Xubuntu lets you bring up the "start" menu by right-clicking anywhere on the desktop. [Used with permission][6]
### The choice is yours
There is a flavor of Ubuntu to meet nearly any need—which one you choose is up to you. Ask yourself questions such as:
* What are your needs?
* What type of desktop do you prefer to interact with?
* Is your hardware aging?
* Do you prefer a Windows XP/7 feel?
* Are you wanting a multimedia system?
Your answers to the above questions will go a long way to determining which flavor of Ubuntu is right for you. The good news is that you cant really go wrong with any of the available options.
_Learn more about Linux through the free ["Introduction to Linux"][31] course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/5/which-official-ubuntu-flavor-best-you
作者:[ JACK WALLEN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://www.linux.com/licenses/category/used-permission
[4]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/licenses/category/used-permission
[6]:https://www.linux.com/licenses/category/used-permission
[7]:https://www.linux.com/licenses/category/used-permission
[8]:http://xubuntu.org/
[9]:http://www.kubuntu.org/
[10]:http://lubuntu.net/
[11]:http://www.mythbuntu.org/
[12]:https://ubuntubudgie.org/
[13]:https://www.linux.com/files/images/ubuntuflavorajpg
[14]:https://www.linux.com/files/images/ubuntuflavorbjpg
[15]:https://www.linux.com/files/images/ubuntuflavorcjpg
[16]:https://www.linux.com/files/images/ubuntuflavordjpg
[17]:https://www.linux.com/files/images/ubuntuflavorejpg
[18]:https://www.linux.com/files/images/xubuntujpg
[19]:https://www.linux.com/files/images/ubuntubudgiejpg
[20]:https://buy.ubuntu.com/collections/ubuntu-advantage-for-desktop
[21]:https://ubuntuforums.org/
[22]:https://help.ubuntu.com/?_ga=2.155705979.1922322560.1494162076-828730842.1481046109
[23]:https://help.ubuntu.com/community/CommunityHelpWiki?_ga=2.155705979.1922322560.1494162076-828730842.1481046109
[24]:https://apps.ubuntu.com/cat/applications/gnome-tweak-tool/
[25]:https://extensions.gnome.org/
[26]:https://www.kde.org/
[27]:https://kubuntu.emerge-open.com/buy
[28]:http://lxde.org/
[29]:https://www.abisource.com/
[30]:http://www.gnumeric.org/
[31]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[32]:https://xubuntu.org/
[33]:https://www.xfce.org/

@@ -1,108 +0,0 @@
Translating by aiwhj
# How to Improve a Legacy Codebase
It happens at least once in the lifetime of every programmer, project manager, or team leader. You get handed a steaming pile of manure (if you’re lucky, only a few million lines worth), the original programmers have long since left for sunnier places, and the documentation - if there is any to begin with - is hopelessly out of sync with what is presently keeping the company afloat.
Your job: get us out of this mess.
After your first instinctive response (run for the hills) has passed, you start on the project knowing full well that the eyes of the company’s senior leadership are on you. Failure is not an option. And yet, by the looks of what you’ve been given, failure is very much in the cards. So what to do?
I’ve been (un)fortunate enough to be in this situation several times, and a small band of friends and I have found that it is a lucrative business to take these steaming piles of misery and turn them into healthy, maintainable projects. Here are some of the tricks that we employ:
### Backup
Before you start to do anything at all, make a backup of  _everything_  that might be relevant. This is to make sure that no information is lost that might be of crucial importance somewhere down the line. Once a change has been made, all it takes is a silly question that you can’t answer to eat up a day or more. Configuration data especially is susceptible to this kind of problem: it is usually not versioned, and you’re lucky if it is taken along in the periodic back-up scheme. So better safe than sorry: copy everything to a very safe place and never touch it unless it is in read-only mode.
### Important prerequisite: make sure you have a build process and that it actually produces what runs in production
I totally missed this step on the assumption that it is obvious and likely already in place, but many HN commenters pointed it out, and they are absolutely right: step one is to make sure that you know what is running in production right now, and that means you need to be able to build a version of the software that is - if your platform works that way - byte-for-byte identical with the current production build. If you can’t find a way to achieve this, you will likely be in for some unpleasant surprises once you commit something to production. Test this to the best of your ability to make sure that you have all the pieces in place and then, after you’ve gained sufficient confidence that it will work, move it to production. Be prepared to switch back immediately to whatever was running before, and make sure that you log everything and anything that might come in handy during the - inevitable - post mortem.
### Freeze the DB
If at all possible, freeze the database schema until you are done with the first level of improvements. By the time you have a solid understanding of the codebase and the legacy code has been fully left behind, you are ready to modify the database schema. Change it any earlier than that and you may have a real problem on your hands: you’ll have lost the ability to run the old and new codebases side by side with the database as the steady foundation to build on. Keeping the DB totally unchanged allows you to compare the effect of your new business logic code with that of the old; if it all works as advertised, there should be no differences.
### Write your tests
Before you make any changes at all, write as many end-to-end and integration tests as you can. Make sure these tests produce the right output and test any and all assumptions that you can come up with about how you  _think_  the old stuff works (be prepared for surprises here). These tests will have two important functions: they will help to clear up any misconceptions at a very early stage, and they will function as guardrails once you start writing new code to replace old code.
Automate all your testing. If you’re already experienced with CI, use it, and make sure your tests run fast enough to run the full suite after every commit.
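If you want a concrete shape for such a guardrail, a characterization (or “golden master”) test is one cheap option: capture what the legacy system produces today and pin it down. A minimal sketch in Python with pytest follows; the `./legacy_report` command, its flags, and the golden file are hypothetical stand-ins for whatever entry point your system actually has.

```python
import subprocess

def test_monthly_report_is_unchanged():
    # Run the legacy entry point exactly as production would (placeholder command).
    result = subprocess.run(
        ["./legacy_report", "--month", "2017-01"],
        capture_output=True, text=True, check=True,
    )
    # Compare against output captured once from the known-good system.
    with open("golden/report-2017-01.txt") as f:
        assert result.stdout == f.read()
```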
### Instrumentation and logging
If the old platform is still available for development, add instrumentation. Do this in a completely new database table: add a simple counter for every event that you can think of, and add a single function to increment these counters based on the name of the event. That way you can implement a time-stamped event log with a few extra lines of code, and you’ll get a good idea of how many events of one kind lead to events of another kind. One example: the user opens the app, the user closes the app. If two events should result in some back-end calls, those two counters should remain at a constant difference over the long term; the difference is the number of apps currently open. If you see many more app opens than app closes, you know there has to be another way in which apps end (for instance, a crash). For each and every event you’ll find there is some kind of relationship to other events; usually you will strive for constant relationships unless there is an obvious error somewhere in the system. You’ll aim to reduce the counters that indicate errors, and you’ll aim to maximize the counters further down the chain to the level indicated by the counters at the beginning. (For instance: customers attempting to pay should result in an equal number of actual payments received.)
This very simple trick turns every backend application into a bookkeeping system of sorts, and just like in a real bookkeeping system, the numbers have to match; as long as they don’t, you have a problem somewhere.
This system will over time become invaluable in establishing the health of the system and will be a great companion next to the source code control system revision log where you can determine the point in time that a bug was introduced and what the effect was on the various counters.
I usually keep these counters at a 5-minute resolution (so 12 buckets per hour), but if your application generates far fewer or far more events, you might decide to change the interval at which new buckets are created. All counters share the same database table, and each counter is simply a column in that table.
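As a sketch of how little code this takes, here is one way to write that single increment function in Python with SQLite. Note one deliberate variation: the text above keeps one column per counter, while this sketch stores one row per bucket-and-event pair so that new events need no schema change; all table and event names are invented for illustration.

```python
import sqlite3
import time

BUCKET_SECONDS = 300  # 5-minute buckets, as suggested above

conn = sqlite3.connect("instrumentation.db")
conn.execute("""CREATE TABLE IF NOT EXISTS event_counters (
                  bucket INTEGER,  -- start of the 5-minute window (epoch seconds)
                  event  TEXT,     -- e.g. 'app_open' or 'payment_received'
                  count  INTEGER NOT NULL DEFAULT 0,
                  PRIMARY KEY (bucket, event))""")

def count_event(event_name):
    """The single increment-by-name function described above."""
    bucket = int(time.time()) // BUCKET_SECONDS * BUCKET_SECONDS
    conn.execute("""INSERT INTO event_counters (bucket, event, count)
                    VALUES (?, ?, 1)
                    ON CONFLICT (bucket, event)
                    DO UPDATE SET count = count + 1""",  # upsert needs SQLite >= 3.24
                 (bucket, event_name))
    conn.commit()

count_event("app_open")   # call wherever the corresponding event occurs
count_event("app_close")
```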
### Change only one thing at a time
Do not fall into the trap of improving the maintainability of the code or the platform it runs on at the same time as adding new features or fixing bugs. This will cause you huge headaches, because you now have to ask yourself at every step of the way what the desired outcome of an action is, and it will invalidate some of the tests you made earlier.
### Platform changes
If you’ve decided to migrate the application to another platform, then do this first,  _but keep everything else exactly the same_ . If you want, you can add more documentation or tests, but no more than that; all business logic and interdependencies should remain as before.
### Architecture changes
The next thing to tackle is to change the architecture of the application (if desired). At this point, you are free to change the higher-level structure of the code, usually by reducing the number of horizontal links between modules and thus reducing the scope of the code active during any one interaction with the end user. If the old code was monolithic in nature, now would be a good time to make it more modular: break up large functions into smaller ones, but leave the names of variables and data structures as they were.
HN user [mannykannot][1] rightly points out that this is not always an option; if you’re particularly unlucky, you may have to dig in deep before you are able to make any architecture changes. I agree, and I should have included it here, hence this little update. What I would further add is that if you do both high-level and low-level changes, at least try to limit them to one file or, worst case, one subsystem, so that you limit the scope of your changes as much as possible. Otherwise you might have a very hard time debugging the change you just made.
### Low level refactoring
By now you should have a very good understanding of what each module does, and you are ready for the real work: refactoring the code to improve maintainability and make the code ready for new functionality. This will likely be the part of the project that consumes the most time. Document as you go; do not make changes to a module until you have thoroughly documented it and feel you understand it. Feel free to rename variables, functions, and data structures to improve clarity and consistency, and add tests (unit tests too, if the situation warrants them).
### Fix bugs
Now you’re ready to take on actual end-user-visible changes. The first order of business will be the long list of bugs that have accumulated over the years in the ticket queue. As usual, first confirm that the problem still exists, write a test to that effect, and then fix the bug; your CI and the end-to-end tests you wrote should keep you safe from any mistakes you make due to a lack of understanding or some peripheral issue.
### Database Upgrade
If required, once all this is done and you are on a solid and maintainable codebase again, you have the option to change the database schema or to replace the database with a different make/model altogether, if that is what you had planned to do. All the work you’ve done up to this point will help you make that change in a responsible manner, without any surprises; you can completely test the new DB with the new code, with all the tests in place to make sure your migration goes off without a hitch.
### Execute on the roadmap
Congratulations, you are out of the woods and are now ready to implement new functionality.
### Do not ever even attempt a big-bang rewrite
A big-bang rewrite is the kind of project that is pretty much guaranteed to fail. For one, you are in uncharted territory to begin with, so how would you even know what to build? For another, you are pushing  _all_  the problems to the very last day, the day just before you go live with your new system. And that’s when you’ll fail, miserably. Business logic assumptions will turn out to be faulty, suddenly you’ll gain insight into why the old system did certain things the way it did, and in general you’ll end up realizing that the guys who put the old system together maybe weren’t idiots after all. If you really do want to wreck the company (and your own reputation to boot), by all means do a big-bang rewrite, but if you’re smart about it, this is not even on the table as an option.
### So, the alternative, work incrementally
To untangle one of these hairballs, the quickest path to safety is to take any element of the code that you do understand (it could be a peripheral bit, but it might also be some core module) and try to incrementally improve it, still within the old context. If the old build tools are no longer available, you will have to use some tricks (see below), but at least try to leave as much of what is known to work alive while you start with your changes. That way, as the codebase improves, so does your understanding of what it actually does. A typical commit should be at most a couple of lines.
### Release!
Every change along the way gets released into production. Even if the changes are not end-user visible, it is important to make the smallest possible steps, because as long as you lack understanding of the system, there is a fair chance that only the production environment will tell you there is a problem. If that problem arises right after you make a small change, you will gain several advantages:
* it will probably be trivial to figure out what went wrong
* you will be in an excellent position to improve the process
* and you should immediately update the documentation to show the new insights gained
### Use proxies to your advantage
If you are doing web development, praise the gods and insert a proxy between the end users and the old system. Now you have per-URL control over which requests go to the old system and which you will re-route to the new system, allowing much easier and more granular control over what is run and who gets to see it. If your proxy is clever enough, you could probably use it to send a percentage of the traffic to the new system for an individual URL until you are satisfied that things work the way they should. If your integration tests also connect to this interface, it is even better.
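To make the per-URL, percentage-based idea concrete, here is a minimal sketch of the decision function such a proxy needs; the backend addresses, the rollout table, and the user-id key are all invented for illustration. Hashing the user id keeps each user pinned to one side for a given URL, so sessions behave consistently during the rollout.

```python
import hashlib

OLD_BACKEND = "http://old.internal:8080"  # hypothetical addresses
NEW_BACKEND = "http://new.internal:8080"

# Fraction of traffic per URL prefix that goes to the new system.
ROLLOUT = {
    "/api/orders": 0.10,  # 10% of order traffic on the new code path
    "/api/login": 1.00,   # fully migrated
}

def pick_backend(path, user_id):
    """Deterministic per-URL routing: same user, same side, every time."""
    for prefix, fraction in ROLLOUT.items():
        if path.startswith(prefix):
            digest = hashlib.sha256(f"{prefix}:{user_id}".encode()).digest()
            if digest[0] / 256.0 < fraction:
                return NEW_BACKEND
            break  # matched a rule but fell outside the rollout fraction
    return OLD_BACKEND

print(pick_backend("/api/orders", "user-42"))  # stable old/new choice per user
```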
### Yes, but all this will take too much time!
Well, that depends on how you look at it. It’s true there is a bit of rework involved in following these steps. But it  _does_  work, and any kind of optimization of this process assumes that you know more about the system than you probably do. I’ve got a reputation to maintain, and I  _really_  do not like negative surprises during work like this. With some luck, the company is already on the skids, or maybe there is a real danger of messing things up for the customers. In a situation like that, I prefer total control and an ironclad process over saving a couple of days or weeks, if that imperils a good outcome. If you’re more into cowboy stuff - and your bosses agree - then maybe it would be acceptable to take more risk, but most companies would rather take the slightly slower but much surer road to victory.
--------------------------------------------------------------------------------
via: https://jacquesmattheij.com/improving-a-legacy-codebase
作者:[Jacques Mattheij ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jacquesmattheij.com/
[1]:https://news.ycombinator.com/item?id=14445661

@@ -0,0 +1,93 @@
Translating by XiatianSummer
Why Car Companies Are Hiring Computer Security Experts
============================================================
![](https://static01.nyt.com/images/2017/06/08/business/08BITS-GURUS1/08BITS-GURUS1-superJumbo.jpg)
The cybersecurity experts Marc Rogers, left, of CloudFlare and Kevin Mahaffey of Lookout were able to control various Tesla functions from their physically connected laptop. They pose in CloudFlare’s lobby in front of lava lamps used to generate numbers for encryption. Credit: Christie Hemm Klok for The New York Times
It started about seven years ago. Irans top nuclear scientists were being assassinated in a string of similar attacks: Assailants on motorcycles were pulling up to their moving cars, attaching magnetic bombs and detonating them after the motorcyclists had fled the scene.
In another seven years, security experts warn, assassins wont need motorcycles or magnetic bombs. All theyll need is a laptop and code to send driverless cars careering off a bridge, colliding with a driverless truck or coming to an unexpected stop in the middle of fast-moving traffic.
Automakers may call them self-driving cars. But hackers call them computers that travel over 100 miles an hour.
“These are no longer cars,” said Marc Rogers, the principal security researcher at the cybersecurity firm CloudFlare. “These are data centers on wheels. Any part of the car that talks to the outside world is a potential inroad for attackers.”
Those fears came into focus two years ago when two “white hat” hackers — researchers who look for computer vulnerabilities to spot problems and fix them, rather than to commit a crime or cause problems — successfully gained access to a Jeep Cherokee from their computer miles away. They rendered their crash-test dummy (in this case a nervous reporter) powerless over his vehicle and disabled his transmission in the middle of a highway.
The hackers, Chris Valasek and Charlie Miller (now security researchers respectively at Uber and Didi, an Uber competitor in China), discovered an [electronic route from the Jeeps entertainment system to its dashboard][10]. From there, they had control of the vehicles steering, brakes and transmission — everything they needed to paralyze their crash test dummy in the middle of a highway.
“Car hacking makes great headlines, but remember: No one has ever had their car hacked by a bad guy,” Mr. Miller wrote on Twitter last Sunday. “Its only ever been performed by researchers.”
Still, the research by Mr. Miller and Mr. Valasek came at a steep price for Jeeps manufacturer, Fiat Chrysler, which was forced to recall 1.4 million of its vehicles as a result of the hacking experiment.
It is no wonder that Mary Barra, the chief executive of General Motors, called cybersecurity her companys top priority last year. Now the skills of researchers and so-called white hat hackers are in high demand among automakers and tech companies pushing ahead with driverless car projects.
Uber, [Tesla][11], Apple and Didi in China have been actively recruiting white hat hackers like Mr. Miller and Mr. Valasek from one another as well as from traditional cybersecurity firms and academia.
Last year, Tesla poached Aaron Sigel, Apples manager of security for its iOS operating system. Uber poached Chris Gates, formerly a white hat hacker at Facebook. Didi poached Mr. Miller from Uber, where he had gone to work after the Jeep hack. And security firms have seen dozens of engineers leave their ranks for autonomous-car projects.
Mr. Miller said he left Uber for Didi, in part, because his new Chinese employer has given him more freedom to discuss his work.
“Carmakers seem to be taking the threat of cyberattack more seriously, but Id still like to see more transparency from them,” Mr. Miller wrote on Twitter on Saturday.
Like a number of big tech companies, Tesla and Fiat Chrysler started paying out rewards to hackers who turn over flaws the hackers discover in their systems. GM has done something similar, though critics say GMs program is limited when compared with the ones offered by tech companies, and so far no rewards have been paid out.
One year after the Jeep hack by Mr. Miller and Mr. Valasek, they demonstrated all the other ways they could mess with a Jeep driver, including hijacking the vehicles cruise control, swerving the steering wheel 180 degrees or slamming on the parking brake in high-speed traffic — all from a computer in the back of the car. (Those exploits ended with their test Jeep in a ditch and calls to a local tow company.)
Granted, they had to be in the Jeep to make all that happen. But it was evidence of what is possible.
The Jeep penetration was preceded by a [2011 hack by security researchers at the University of Washington][12] and the University of California, San Diego, who were the first to remotely hack a sedan and ultimately control its brakes via Bluetooth. The researchers warned car companies that the more connected cars become, the more likely they are to get hacked.
Security researchers have also had their way with Teslas software-heavy Model S car. In 2015, Mr. Rogers, together with Kevin Mahaffey, the chief technology officer of the cybersecurity company Lookout, found a way to control various Tesla functions from their physically connected laptop.
One year later, a team of Chinese researchers at Tencent took their research a step further, hacking a moving Tesla Model S and controlling its brakes from 12 miles away. Unlike Chrysler, Tesla was able to dispatch a remote patch to fix the security holes that made the hacks possible.
In all the cases, the car hacks were the work of well meaning, white hat security researchers. But the lesson for all automakers was clear.
The motivations to hack vehicles are limitless. When it learned of Mr. Rogerss and Mr. Mahaffeys investigation into Teslas Model S, a Chinese app-maker asked Mr. Rogers if he would be interested in sharing, or possibly selling, his discovery, he said. (The app maker was looking for a backdoor to secretly install its app on Teslas dashboard.)
Criminals have not yet shown they have found back doors into connected vehicles, though for years, they have been actively developing, trading and deploying tools that can intercept car key communications.
But as more driverless and semiautonomous cars hit the open roads, they will become a more worthy target. Security experts warn that driverless cars present a far more complex, intriguing and vulnerable “attack surface” for hackers. Each new “connected” car feature introduces greater complexity, and with complexity inevitably comes vulnerability.
Twenty years ago, cars had, on average, one million lines of code. The General Motors 2010 [Chevrolet Volt][13] had about 10 million lines of code — more than an [F-35 fighter jet][14].
Today, an average car has more than 100 million lines of code. Automakers predict it won’t be long before they have 200 million. When you stop to consider that, on average, there are 15 to 50 defects per 1,000 lines of software code (on the order of 1.5 million to 5 million latent defects in a 100-million-line car), the potentially exploitable weaknesses add up quickly.
The only difference between computer code and driverless car code is that, “Unlike data center enterprise security — where the biggest threat is loss of data — in automotive security, its loss of life,” said David Barzilai, a co-founder of Karamba Security, an Israeli start-up that is working on addressing automotive security.
To truly secure autonomous vehicles, security experts say, automakers will have to address the inevitable vulnerabilities that pop up in new sensors and car computers, address inherent vulnerabilities in the base car itself and, perhaps most challenging of all, bridge the cultural divide between automakers and software companies.
“The genie is out of the bottle, and to solve this problem will require a major cultural shift,” said Mr. Mahaffey of the cybersecurity company Lookout. “And an automaker that truly values cybersecurity will treat security vulnerabilities the same way they would an airbag recall. We have not seen that industrywide shift yet.”
There will be winners and losers, Mr. Mahaffey added: “Automakers that transform themselves into software companies will win. Others will get left behind.”
--------------------------------------------------------------------------------
via: https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html
作者:[NICOLE PERLROTH ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.nytimes.com/by/nicole-perlroth
[1]:https://www.nytimes.com/2016/06/09/technology/software-as-weaponry-in-a-computer-connected-world.html
[2]:https://www.nytimes.com/2015/08/29/technology/uber-hires-two-engineers-who-showed-cars-could-be-hacked.html
[3]:https://www.nytimes.com/2015/08/11/opinion/zeynep-tufekci-why-smart-objects-may-be-a-dumb-idea.html
[4]:https://www.nytimes.com/by/nicole-perlroth
[5]:https://www.nytimes.com/column/bits
[6]:https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#story-continues-1
[7]:http://www.nytimes.com/newsletters/sample/bits?pgtype=subscriptionspage&version=business&contentId=TU&eventName=sample&module=newsletter-sign-up
[8]:https://www.nytimes.com/privacy
[9]:https://www.nytimes.com/help/index.html
[10]:https://bits.blogs.nytimes.com/2015/07/21/security-researchers-find-a-way-to-hack-cars/
[11]:http://www.nytimes.com/topic/company/tesla-motors-inc?inline=nyt-org
[12]:http://www.autosec.org/pubs/cars-usenixsec2011.pdf
[13]:http://autos.nytimes.com/2011/Chevrolet/Volt/238/4117/329463/researchOverview.aspx?inline=nyt-classifier
[14]:http://topics.nytimes.com/top/reference/timestopics/subjects/m/military_aircraft/f35_airplane/index.html?inline=nyt-classifier
[15]:https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#story-continues-3

@@ -1,314 +0,0 @@
Translating by yongshouzhang
A user's guide to links in the Linux filesystem
============================================================
### Learn how to use links, which make tasks easier by providing access to files from multiple locations in the Linux filesystem directory tree.
![A user's guide to links in the Linux filesystem](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/links.png?itok=AumNmse7 "A user's guide to links in the Linux filesystem")
Image by : [Paul Lewin][8]. Modified by Opensource.com. [CC BY-SA 2.0][9]
In articles I have written about various aspects of Linux filesystems for Opensource.com, including [An introduction to Linux's EXT4 filesystem][10]; [Managing devices in Linux][11]; [An introduction to Linux filesystems][12]; and [A Linux user's guide to Logical Volume Management][13], I have briefly mentioned an interesting feature of Linux filesystems that can make some tasks easier by providing access to files from multiple locations in the filesystem directory tree.
There are two types of Linux filesystem links: hard and soft. The difference between the two types of links is significant, but both types are used to solve similar problems. They both provide multiple directory entries (or references) to a single file, but they do it quite differently. Links are powerful and add flexibility to Linux filesystems because [everything is a file][14].
More Linux resources
* [What is Linux?][1]
* [What are Linux containers?][2]
* [Download Now: Linux commands cheat sheet][3]
* [Advanced Linux commands cheat sheet][4]
* [Our latest Linux articles][5]
I have found, for instance, that some programs required a particular version of a library. When a library upgrade replaced the old version, the program would crash with an error specifying the name of the old, now-missing library. Usually, the only change in the library name was the version number. Acting on a hunch, I simply added a link to the new library but named the link after the old library name. I tried the program again and it worked perfectly. And, okay, the program was a game, and everyone knows the lengths that gamers will go to in order to keep their games running.
In fact, almost all applications are linked to libraries using a generic name with only a major version number in the link name, while the link points to the actual library file that also has a minor version number. In other instances, required files have been moved from one directory to another to comply with the Linux file specification, and there are links in the old directories for backwards compatibility with those programs that have not yet caught up with the new locations. If you do a long listing of the **/lib64** directory, you can find many examples of both.
```
lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.hwm -> ../../usr/share/cracklib/pw_dict.hwm
lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwd -> ../../usr/share/cracklib/pw_dict.pwd
lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwi -> ../../usr/share/cracklib/pw_dict.pwi
lrwxrwxrwx. 1 root root 27 Jun 9 2016 libaccountsservice.so.0 -> libaccountsservice.so.0.0.0
-rwxr-xr-x. 1 root root 288456 Jun 9 2016 libaccountsservice.so.0.0.0
lrwxrwxrwx 1 root root 15 May 17 11:47 libacl.so.1 -> libacl.so.1.1.0
-rwxr-xr-x 1 root root 36472 May 17 11:47 libacl.so.1.1.0
lrwxrwxrwx. 1 root root 15 Feb 4 2016 libaio.so.1 -> libaio.so.1.0.1
-rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.0
-rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.1
lrwxrwxrwx. 1 root root 30 Jan 16 16:39 libakonadi-calendar.so.4 -> libakonadi-calendar.so.4.14.26
-rwxr-xr-x. 1 root root 816160 Jan 16 16:39 libakonadi-calendar.so.4.14.26
lrwxrwxrwx. 1 root root 29 Jan 16 16:39 libakonadi-contact.so.4 -> libakonadi-contact.so.4.14.26
```
A few of the links in the **/lib64** directory
The long listing of the **/lib64** directory above shows that the first character in the filemode is the letter "l," which means that each is a soft or symbolic link.
### Hard links
In [An introduction to Linux's EXT4 filesystem][15], I discussed the fact that each file has one inode that contains information about that file, including the location of the data belonging to that file. [Figure 2][16] in that article shows a single directory entry that points to the inode. Every file must have at least one directory entry that points to the inode that describes the file. The directory entry is a hard link, thus every file has at least one hard link.
In Figure 1 below, multiple directory entries point to a single inode. These are all hard links. I have abbreviated the locations of three of the directory entries using the tilde (**~**) convention for the home directory, so that **~** is equivalent to **/home/user** in this example. Note that the fourth directory entry is in a completely different directory, **/home/shared**, which might be a location for sharing files between users of the computer.
![fig1directory_entries.png](https://opensource.com/sites/default/files/images/life/fig1directory_entries.png)
Figure 1
Hard links are limited to files contained within a single filesystem. "Filesystem" is used here in the sense of a partition or logical volume (LV) that is mounted on a specified mount point, in this case **/home**. This is because inode numbers are unique only within each filesystem, and a different filesystem, for example, **/var** or **/opt**, will have inodes with the same number as the inode for our file.
Because all the hard links point to the single inode that contains the metadata about the file, attributes such as ownership, permissions, and the total number of hard links to the inode are part of the file itself and cannot be different for each hard link. It is one file with one set of attributes. The only attribute that can differ is the file name, which is not contained in the inode. Hard links to a single **file/inode** located in the same directory must have different names, due to the fact that there can be no duplicate file names within a single directory.
The number of hard links for a file is displayed with the **ls -l** command. If you want to display the actual inode numbers, the command **ls -li** does that.
### Symbolic (soft) links
The difference between a hard link and a soft link, also known as a symbolic link (or symlink), is that, while hard links point directly to the inode belonging to the file, soft links point to a directory entry, i.e., one of the hard links. Because soft links point to a hard link for the file and not the inode, they are not dependent upon the inode number and can work across filesystems, spanning partitions and LVs.
The downside to this is: If the hard link to which the symlink points is deleted or renamed, the symlink is broken. The symlink is still there, but it points to a hard link that no longer exists. Fortunately, the **ls** command highlights broken links with flashing white text on a red background in a long listing.
### Lab project: experimenting with links
I think the easiest way to understand the use of and differences between hard and soft links is with a lab project that you can do. This project should be done in an empty directory as a  _non-root user_ . I created the **~/temp** directory for this project, and you should, too. It creates a safe place to do the project and provides a new, empty directory to work in so that only files associated with this project will be located there.
### **Initial setup**
First, create the temporary directory in which you will perform the tasks needed for this project. Ensure that the present working directory (PWD) is your home directory, then enter the following command.
```
mkdir temp
```
Change into **~/temp** to make it the PWD with this command.
```
cd temp
```
To get started, we need to create a file we can link to. The following command does that and provides some content as well.
```
du -h > main.file.txt
```
Use the **ls -l** long list to verify that the file was created correctly. It should look similar to my results. Note that the file size is only 7 bytes, but yours may vary by a byte or two.
```
[dboth@david temp]$ ls -l
total 4
-rw-rw-r-- 1 dboth dboth 7 Jun 13 07:34 main.file.txt
```
Notice the number "1" following the file mode in the listing. That number represents the number of hard links that exist for the file. For now, it should be 1 because we have not created any additional links to our test file.
### **Experimenting with hard links**
Hard links create a new directory entry pointing to the same inode, so when hard links are added to a file, you will see the number of links increase. Ensure that the PWD is still **~/temp**. Create a hard link to the file **main.file.txt**, then do another long list of the directory.
```
[dboth@david temp]$ ln main.file.txt link1.file.txt
[dboth@david temp]$ ls -l
total 8
-rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 link1.file.txt
-rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 main.file.txt
```
Notice that both files have two links and are exactly the same size. The date stamp is also the same. This is really one file with one inode and two links, i.e., directory entries to it. Create a second hard link to this file and list the directory contents. You can create the link to either of the existing ones: **link1.file.txt** or **main.file.txt**.
```
[dboth@david temp]$ ln link1.file.txt link2.file.txt ; ls -l
total 16
-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link1.file.txt
-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link2.file.txt
-rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 main.file.txt
```
Notice that each new hard link in this directory must have a different name because two files—really directory entries—cannot have the same name within the same directory. Try to create another link with a target name the same as one of the existing ones.
```
[dboth@david temp]$ ln main.file.txt link2.file.txt
ln: failed to create hard link 'link2.file.txt': File exists
```
Clearly that does not work, because **link2.file.txt** already exists. So far, we have created only hard links in the same directory. So, create a link in your home directory, the parent of the temp directory in which we have been working so far.
```
[dboth@david temp]$ ln main.file.txt ../main.file.txt ; ls -l ../main*
-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
```
The **ls** command in the above listing shows that the **main.file.txt** file does exist in the home directory with the same name as the file in the temp directory. Of course, these are not different files; they are the same file with multiple links—directory entries—to the same inode. To help illustrate the next point, add a file that is not a link.
```
[dboth@david temp]$ touch unlinked.file ; ls -l
total 12
-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
-rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
-rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
Look at the inode number of the hard links and that of the new file using the **-i** option to the **ls** command.
```
[dboth@david temp]$ ls -li
total 12
657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
Notice the number **657024** to the left of the file mode in the example above. That is the inode number, and all three file links point to the same inode. You can use the **-i** option to view the inode number for the link we created in the home directory as well, and that will also show the same value. The inode number of the file that has only one link is different from the others. Note that the inode numbers will be different on your system.
Let's change the size of one of the hard-linked files.
```
[dboth@david temp]$ df -h > link2.file.txt ; ls -li
total 12
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
The file size of all the hard-linked files is now larger than before. That is because there is really only one file that is linked to by multiple directory entries.
I know this next experiment will work on my computer because my **/tmp** directory is on a separate LV. If you have a separate LV or a filesystem on a different partition (if you're not using LVs), determine whether or not you have access to that LV or partition. If you don't, you can try to insert a USB memory stick and mount it. If one of those options works for you, you can do this experiment.
Try to create a link to one of the files in your **~/temp** directory in **/tmp** (or wherever your different filesystem directory is located).
```
[dboth@david temp]$ ln link2.file.txt /tmp/link3.file.txt
ln: failed to create hard link '/tmp/link3.file.txt' => 'link2.file.txt':
Invalid cross-device link
```
Why does this error occur? The reason is that each separate mountable filesystem has its own set of inode numbers. Simply referring to a file by an inode number across the entire Linux directory structure can result in confusion, because the same inode number can exist in each mounted filesystem.
There may be a time when you will want to locate all the hard links that belong to a single inode. You can find the inode number using the **ls -li** command. Then you can use the **find** command to locate all links with that inode number.
```
[dboth@david temp]$ find . -inum 657024
./main.file.txt
./link1.file.txt
./link2.file.txt
```
Note that the **find** command did not find all four of the hard links to this inode because we started at the current directory of **~/temp**. The **find** command only finds files in the PWD and its subdirectories. To find all the links, we can use the following command, which specifies your home directory as the starting place for the search.
```
[dboth@david temp]$ find ~ -samefile main.file.txt
/home/dboth/temp/main.file.txt
/home/dboth/temp/link1.file.txt
/home/dboth/temp/link2.file.txt
/home/dboth/main.file.txt
```
You may see error messages if you do not have permissions as a non-root user. This command also uses the **-samefile** option instead of specifying the inode number. This works the same as using the inode number and can be easier if you know the name of one of the hard links.
### **Experimenting with soft links**
As you have just seen, creating hard links is not possible across filesystem boundaries; that is, from a filesystem on one LV or partition to a filesystem on another. Soft links are a means to address that limitation of hard links. Although they can accomplish the same end, they are very different, and knowing these differences is important.
Let's start by creating a symlink in our **~/temp** directory to start our exploration.
```
[dboth@david temp]$ ln -s link2.file.txt link3.file.txt ; ls -li
total 12
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
link2.file.txt
657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
The hard links, those that have the inode number **657024**, are unchanged, and the number of hard links shown for each has not changed. The newly created symlink has a different inode, number **658270**. The soft link named **link3.file.txt** points to **link2.file.txt**. Use the **cat** command to display the contents of **link3.file.txt**. The file mode information for the symlink starts with the letter "**l**", which indicates that this file is actually a symbolic link.
The size of the symlink **link3.file.txt** is only 14 bytes in the example above. That is the length of its target's name, **link2.file.txt**, which is the actual content of the symlink. The directory entry **link3.file.txt** does not point to an inode; it points to another directory entry, which makes it useful for creating links that span filesystem boundaries. So, let's create that link we tried before from the **/tmp** directory.
```
[dboth@david temp]$ ln -s /home/dboth/temp/link2.file.txt
/tmp/link3.file.txt ; ls -l /tmp/link*
lrwxrwxrwx 1 dboth dboth 31 Jun 14 21:53 /tmp/link3.file.txt ->
/home/dboth/temp/link2.file.txt
```
### **Deleting links**
There are some other things that you should consider when you need to delete links or the files to which they point.
First, let's delete the link **main.file.txt**. Remember that every directory entry that points to an inode is simply a hard link.
```
[dboth@david temp]$ rm main.file.txt ; ls -li
total 8
657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt
657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link2.file.txt
658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
link2.file.txt
657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
The link **main.file.txt** was the first link created when the file was created. Deleting it now still leaves the original file and its data on the hard drive along with all the remaining hard links. To delete the file and its data, you would have to delete all the remaining hard links.
Now delete the **link2.file.txt** hard link.
```
[dboth@david temp]$ rm link2.file.txt ; ls -li
total 8
657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt
658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt ->
link2.file.txt
657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 main.file.txt
657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file
```
Notice what happens to the soft link. Deleting the hard link to which the soft link points leaves a broken link. On my system, the broken link is highlighted in colors and the target hard link is flashing. If the broken link needs to be fixed, you can create another hard link in the same directory with the same name as the old one, so long as not all the hard links have been deleted. You could also recreate the link itself, with the link maintaining the same name but pointing to one of the remaining hard links. Of course, if the soft link is no longer needed, it can be deleted with the **rm** command.
The **unlink** command can also be used to delete files and links. It is very simple and has none of the options that the **rm** command does. It does, however, more accurately reflect the underlying process of deletion, in that it removes the link—the directory entry—to the file being deleted.
### Final thoughts
I worked with both types of links for a long time before I began to understand their capabilities and idiosyncrasies. It took writing a lab project for a Linux class I taught to fully appreciate how links work. This article is a simplification of what I taught in that class, and I hope it speeds your learning curve.
--------------------------------------------------------------------------------
作者简介:
David Both - David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years.
---------------------------------
via: https://opensource.com/article/17/6/linking-linux-filesystem
作者:[David Both ][a]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/resources/what-is-linux?src=linux_resource_menu
[2]:https://opensource.com/resources/what-are-linux-containers?src=linux_resource_menu
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=7016000000127cYAAQ
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?src=linux_resource_menu&intcmp=7016000000127cYAAQ
[5]:https://opensource.com/tags/linux?src=linux_resource_menu
[6]:https://opensource.com/article/17/6/linking-linux-filesystem?rate=YebHxA-zgNopDQKKOyX3_r25hGvnZms_33sYBUq-SMM
[7]:https://opensource.com/user/14106/feed
[8]:https://www.flickr.com/photos/digypho/7905320090
[9]:https://creativecommons.org/licenses/by/2.0/
[10]:https://opensource.com/article/17/5/introduction-ext4-filesystem
[11]:https://opensource.com/article/16/11/managing-devices-linux
[12]:https://opensource.com/life/16/10/introduction-linux-filesystems
[13]:https://opensource.com/business/16/9/linux-users-guide-lvm
[14]:https://opensource.com/life/15/9/everything-is-a-file
[15]:https://opensource.com/article/17/5/introduction-ext4-filesystem
[16]:https://opensource.com/article/17/5/introduction-ext4-filesystem#fig2
[17]:https://opensource.com/users/dboth
[18]:https://opensource.com/article/17/6/linking-linux-filesystem#comments

@@ -1,213 +0,0 @@
Designing a Microservices Architecture for Failure
============================================================ 
A microservices architecture makes it possible to **isolate failures** through well-defined service boundaries. But like in every distributed system, there is a **higher chance** for network, hardware, or application-level issues. As a consequence of service dependencies, any component can be temporarily unavailable for its consumers. To minimize the impact of partial outages, we need to build fault-tolerant services that can **gracefully** respond to certain types of outages.
This article introduces the most common techniques and architecture patterns to build and operate a **highly available microservices** system based on [RisingStacks Node.js Consulting & Development experience][3].
_If you are not familiar with the patterns in this article, it doesnt necessarily mean that you do something wrong. Building a reliable system always comes with an extra cost._
### The Risk of the Microservices Architecture
The microservices architecture moves application logic to services and uses a network layer to communicate between them. Communicating over a network instead of in-memory calls brings extra latency and complexity to the system which requires cooperation between multiple physical and logical components. The increased complexity of the distributed system leads to a higher chance of particular **network failures**.
One of the biggest advantages of a microservices architecture over a monolithic one is that teams can independently design, develop, and deploy their services. They have full ownership over their service's lifecycle. It also means that teams have no control over their service dependencies, as those are most likely managed by a different team. With a microservices architecture, we need to keep in mind that provider **services can be temporarily unavailable** because of broken releases, configurations, and other changes, as they are controlled by someone else and components move independently from each other.
### Graceful Service Degradation
One of the best advantages of a microservices architecture is that you can isolate failures and achieve graceful service degradation as components fail separately. For example, during an outage, customers of a photo-sharing application may not be able to upload a new picture, but they can still browse, edit, and share their existing photos.
![Microservices fail separately in theory](https://blog-assets.risingstack.com/2017/08/microservices-fail-separately-in-theory.png)
_Microservices fail separately (in theory)_
In most cases, it's hard to implement this kind of graceful service degradation, as applications in a distributed system depend on each other, and you need to apply several failover logics  _(some of them will be covered by this article later)_  to prepare for temporary glitches and outages.
![Microservices Depend on Each Other](https://blog-assets.risingstack.com/2017/08/Microservices-depend-on-each-other.png)
_Services depend on each other and fail together without failover logics._
### Change management
Googles site reliability team has found that roughly **70% of the outages are caused by changes** in a live system. When you change something in your service - you deploy a new version of your code or change some configuration - there is always a chance for failure or the introduction of a new bug.
In a microservices architecture, services depend on each other. This is why you should minimize failures and limit their negative effect. To deal with issues from changes, you can implement change management strategies and **automatic rollouts**.
For example, when you deploy new code, or you change some configuration, you should apply these changes to a subset of your instances gradually, monitor them and even automatically revert the deployment if you see that it has a negative effect on your key metrics.
![Microservices Change Management](https://blog-assets.risingstack.com/2017/08/microservices-change-management.png)
_Change Management - Rolling Deployment_
Another solution could be that you run two production environments. You always deploy to only one of them, and you only point your load balancer to the new one after you verified that the new version works as it is expected. This is called blue-green, or red-black deployment.
**Reverting code is not a bad thing.** You shouldnt leave broken code in production and then think about what went wrong. Always revert your changes when its necessary. The sooner the better.
### Health-check and Load Balancing
Instances continuously start, restart and stop because of failures, deployments or autoscaling. This makes them temporarily or permanently unavailable. To avoid issues, your load balancer should **skip unhealthy instances** in its routing, as they cannot serve your customers' or sub-systems' needs.
Application instance health can be determined via external observation. You can do it by repeatedly calling a `GET /health` endpoint, or via self-reporting. Modern **service discovery** solutions continuously collect health information from instances and configure the load balancer to route traffic only to healthy components.
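As a sketch of the self-reported variant, a minimal health endpoint in a Node.js service could look like this (using Express; the route, port, and checks are illustrative assumptions, not something prescribed above):

```
import express from "express";

const app = express();

// A load balancer or service discovery agent polls this endpoint and
// routes traffic only to instances that answer 200.
app.get("/health", (_req, res) => {
  const healthy = true; // replace with real checks: DB ping, queue depth, ...
  res.status(healthy ? 200 : 503).json({ status: healthy ? "ok" : "unhealthy" });
});

app.listen(3000);
```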
### Self-healing
Self-healing can help to recover an application. We can talk about self-healing when an application can **take the necessary steps** to recover from a broken state. In most cases, it is implemented by an external system that watches the instances' health and restarts them when they are in a broken state for a longer period. Self-healing can be very useful in most cases; however, in certain situations it **can cause trouble** by continuously restarting the application. This might happen when your application cannot report a positive health status because it is overloaded or its database connection times out.
Implementing an advanced self-healing solution which is prepared for a delicate situation - like a lost database connection - can be tricky. In this case, you need to add extra logic to your application to handle edge cases and let the external system know that the instance does not need to be restarted immediately.
### Failover Caching
Services usually fail because of network issues and changes in our system. However, most of these outages are temporary thanks to self-healing and advanced load balancing, so we should find a solution to keep our service working during these glitches. This is where **failover caching** can help and provide the necessary data to our application.
Failover caches usually use **two different expiration dates**: a shorter one that tells how long you can use the cache in a normal situation, and a longer one that says how long you can use the cached data during a failure.
![Microservices Failover Caching](https://blog-assets.risingstack.com/2017/08/microservices-failover-caching.png)
_Failover Caching_
It's important to mention that you should only use failover caching when serving **the outdated data is better than serving nothing**.
To set up a cache and a failover cache, you can use standard HTTP response headers.

For example, with the `max-age` directive of the `Cache-Control` header, you can specify the maximum amount of time a resource will be considered fresh. With the `stale-if-error` directive, you can determine how long a resource may still be served from a cache when the backing service fails.
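As a minimal sketch (Express again; the endpoint and the exact values are illustrative), both directives can be combined in a single `Cache-Control` header:

```
import express from "express";

const app = express();

app.get("/products", (_req, res) => {
  // Fresh for 10 minutes; caches may keep serving it for up to a day
  // if the origin starts returning errors (stale-if-error, RFC 5861).
  res.set("Cache-Control", "max-age=600, stale-if-error=86400");
  res.json([{ id: 1, name: "espresso" }]);
});

app.listen(3000);
```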
Modern CDNs and load balancers provide various caching and failover behaviors, but you can also create a shared library for your company that contains standard reliability solutions.
### Retry Logic
There are certain situations when we cannot cache our data, or we want to make changes to it, but our operations eventually fail. In these cases, we can **retry our action**, as we can expect that the resource will recover after some time or that our load balancer will send our request to a healthy instance.
You should be careful with adding retry logic to your applications and clients, as a larger number of **retries can make things even worse**, or even prevent the application from recovering.
In a distributed system, a retry can trigger multiple other requests or retries and start a **cascading effect**. To minimize the impact of retries, you should limit their number and use an exponential backoff algorithm to continually increase the delay between retries until you reach the maximum limit.
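A minimal sketch of such a retry helper might look like this (the retry cap and base delay are illustrative; production code would typically also add jitter and retry only on retryable errors):

```
// Retry an async operation with exponential backoff: 100ms, 200ms, 400ms, ...
async function retryWithBackoff<T>(
  operation: () => Promise<T>,
  maxRetries = 4,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // maximum limit reached, give up
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```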
As a retry is initiated by the client _(browser, other microservices, etc.)_ and the client doesn't know whether the operation failed before or after the request was handled, you should prepare your application to handle **idempotency**. For example, when you retry a purchase operation, you shouldn't double-charge the customer. Using a unique **idempotency key** for each of your transactions can help to handle retries.
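On the client side, this could be sketched as follows, reusing the `retryWithBackoff` helper from the previous snippet (the endpoint and payload are made up, the `Idempotency-Key` header follows a common API convention and is an assumption here, and the built-in `fetch` requires Node.js 18+):

```
import { randomUUID } from "crypto";

async function purchase(amount: number) {
  // One key per logical purchase, reused on every retry, so the server
  // can recognize a resent request and avoid charging twice.
  const idempotencyKey = randomUUID();
  return retryWithBackoff(async () => {
    const res = await fetch("https://api.example.com/purchase", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Idempotency-Key": idempotencyKey,
      },
      body: JSON.stringify({ amount }),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`); // surface failures to the retry helper
    return res;
  });
}
```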
### Rate Limiters and Load Shedders
Rate limiting is the technique of defining how many requests can be received or processed by a particular customer or application during a given timeframe. With rate limiting, for example, you can filter out the customers and microservices responsible for **traffic peaks**, or you can ensure that your application doesn't get overloaded before autoscaling can come to the rescue.
You can also hold back lower-priority traffic to give enough resources to critical transactions.
![Microservices Rate Limiter](https://blog-assets.risingstack.com/2017/08/microservices-rate-limiter.png)
_A rate limiter can hold back traffic peaks_
A different type of rate limiter is called the _concurrent request limiter_. It can be useful when you have expensive endpoints that shouldn't be called concurrently more than a specified number of times, while you still want to serve traffic; a minimal sketch follows after the next paragraph.
A _fleet usage load shedder_ can ensure that there are always enough resources available to **serve critical transactions**. It keeps some resources for high-priority requests and doesn't allow low-priority transactions to use all of them. A load shedder makes its decisions based on the whole state of the system, rather than on a single user's request bucket size. Load shedders **help your system to recover**, since they keep the core functionalities working while you have an ongoing incident.
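The concurrent request limiter mentioned above can be sketched as a small Express middleware (the in-flight cap of 20 and the `503` response are illustrative choices):

```
import express from "express";

const app = express();
const MAX_IN_FLIGHT = 20;
let inFlight = 0;

app.use((_req, res, next) => {
  if (inFlight >= MAX_IN_FLIGHT) {
    // Shed the request instead of letting it queue up indefinitely.
    res.status(503).set("Retry-After", "1").send("Overloaded, try again later");
    return;
  }
  inFlight++;
  let released = false;
  const release = () => {
    if (!released) {
      released = true;
      inFlight--;
    }
  };
  res.on("finish", release); // response fully sent
  res.on("close", release);  // or the client went away early
  next();
});
```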
To read more about rate limiters and load shedders, I recommend checking out [Stripe's article][5].
### Fail Fast and Independently
In a microservices architecture, we want to prepare our services **to fail fast and separately**. To isolate issues at the service level, we can use the _bulkhead pattern_. You can read more about bulkheads later in this blog post.
We also want our components to **fail fast**, as we don't want to wait for broken instances until they time out. Nothing is more disappointing than a hanging request and an unresponsive UI. It's not just wasting resources but also ruining the user experience. Our services call each other in a chain, so we should pay extra attention to preventing hanging operations before these delays add up.
The first idea that comes to mind is applying fine-grained timeouts for each service call. The problem with this approach is that you cannot really know what a good timeout value is, as there are certain situations when network glitches and other issues happen that only affect one or two operations. In this case, you probably don't want to reject those requests if there are only a few timeouts among them.
We can say that achieving the fail-fast paradigm in microservices by **using timeouts is an anti-pattern** and you should avoid it. Instead of timeouts, you can apply the _circuit-breaker_ pattern, which depends on the success / failure statistics of operations.
### Bulkheads
A bulkhead is used in shipbuilding to **partition** a ship **into sections**, so that sections can be sealed off if there is a hull breach.
The concept of bulkheads can be applied in software development to **segregate resources**.
By applying the bulkhead pattern, we can **protect limited resources** from being exhausted. For example, if we have two kinds of operations that communicate with the same database instance, where we have a limited number of connections, we can use two connection pools instead of a shared one. As a result of this client - resource separation, the operation that times out or overuses the pool won't bring all of the other operations down.
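A sketch of that client - resource separation with two connection pools (using the `pg` client; the pool sizes are illustrative):

```
import { Pool } from "pg";

// Two bulkheads against the same database: a burst of slow reporting
// queries can exhaust its own small pool, but cannot starve checkout.
const checkoutPool = new Pool({ max: 20 }); // critical, user-facing queries
const reportingPool = new Pool({ max: 5 }); // batch / reporting queries

export const checkoutQuery = (sql: string, params: unknown[] = []) =>
  checkoutPool.query(sql, params);

export const reportingQuery = (sql: string, params: unknown[] = []) =>
  reportingPool.query(sql, params);
```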
One of the main reasons the Titanic sank was that its bulkheads had a design flaw: the water could pour over the top of the bulkheads via the deck above and flood the entire hull.
![Titanic Microservices Bulkheads](https://blog-assets.risingstack.com/2017/08/titanic-bulkhead-microservices.png)
_Bulkheads in Titanic (they didn't work)_
### Circuit Breakers
To limit the duration of operations, we can use timeouts. Timeouts can prevent hanging operations and keep the system responsive. However, using static, fine-tuned timeouts in microservices communication is an **anti-pattern**, as we're in a highly dynamic environment where it's almost impossible to come up with the right timing limitations that work well in every case.
Instead of using small, transaction-specific static timeouts, we can use circuit breakers to deal with errors. Circuit breakers are named after the real-world electrical component because their behavior is identical. You can **protect resources** and **help them to recover** with circuit breakers. They can be very useful in a distributed system where a repetitive failure can lead to a snowball effect and bring the whole system down.
A circuit breaker opens when a particular type of **error occurs multiple times** in a short period. An open circuit breaker prevents further requests from being made - just like the real one prevents current from flowing. Circuit breakers usually close after a certain amount of time, giving the underlying services enough room to recover.
Keep in mind that not all errors should trigger a circuit breaker. For example, you probably want to skip client-side issues like requests with `4xx` response codes, but include `5xx` server-side failures. Some circuit breakers can have a half-open state as well. In this state, the service sends the first request to check system availability, while letting the other requests fail. If this first request succeeds, it restores the circuit breaker to a closed state and lets the traffic flow. Otherwise, it keeps it open.
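Putting the closed / open / half-open behavior together, a minimal circuit breaker could be sketched like this (the failure threshold and reset timeout are illustrative):

```
type State = "CLOSED" | "OPEN" | "HALF_OPEN";

class CircuitBreaker {
  private state: State = "CLOSED";
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly failureThreshold = 5,
    private readonly resetTimeoutMs = 30_000,
  ) {}

  async call<T>(operation: () => Promise<T>): Promise<T> {
    if (this.state === "OPEN") {
      if (Date.now() - this.openedAt < this.resetTimeoutMs) {
        throw new Error("circuit open: request rejected"); // fail fast
      }
      this.state = "HALF_OPEN"; // let a probe request through
    }
    try {
      const result = await operation();
      this.failures = 0;
      this.state = "CLOSED"; // success (or a successful probe) closes the circuit
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.state === "HALF_OPEN" || this.failures >= this.failureThreshold) {
        this.state = "OPEN";
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}
```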
![Microservices Circuit Breakers](https://blog-assets.risingstack.com/2017/08/microservices-circuit-breakers.png)
_Circuit Breaker_
### Testing for Failures
You should continually **test your system against common issues** to make sure that your services can **survive various failures**. You should test for failures frequently to keep your team prepared for incidents.
For testing, you can use an external service that identifies groups of instances and randomly terminates one instance in a group. With this, you can prepare for single-instance failures, but you can even shut down entire regions to simulate a cloud provider outage.
One of the most popular testing solutions is the [ChaosMonkey][7] resiliency tool by Netflix.
### Outro
Implementing and running a reliable service is not easy. It takes a lot of effort on your side, and it also costs your company money.
Reliability has many levels and aspects, so it is important to find the best solution for your team. You should make reliability a factor in your business decision processes and allocate enough budget and time for it.
### Key Takeaways
* Dynamic environments and distributed systems - like microservices - lead to a higher chance of failures.
* Services should fail separately and achieve graceful degradation to improve the user experience.
* Roughly 70% of outages are caused by changes; reverting code is not a bad thing.
* Fail fast and independently. Teams have no control over their service dependencies.
* Architectural patterns and techniques like caching, bulkheads, circuit breakers and rate-limiters help to build reliable microservices.
To learn more about running a reliable service, check out our free [Node.js Monitoring, Alerting & Reliability 101 e-book][8]. In case you need help with implementing a microservices system, reach out to us at [@RisingStack][9] on Twitter, or enroll in our upcoming [Building Microservices with Node.js][10] training.
-------------
About the author
[Péter Márton][2]
CTO at RisingStack, microservices and brewing beer with Node.js
[https://twitter.com/slashdotpeter][1]
--------------------------------------------------------------------------------
via: https://blog.risingstack.com/designing-microservices-architecture-for-failure/
Author: [Péter Márton][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://blog.risingstack.com/author/peter-marton/
[1]:https://twitter.com/slashdotpeter
[2]:https://blog.risingstack.com/author/peter-marton/
[3]:https://risingstack.com/
[4]:https://blog.risingstack.com/training-building-microservices-node-js/?utm_source=rsblog&utm_medium=roadblock-new&utm_content=/designing-microservices-architecture-for-failure/
[5]:https://stripe.com/blog/rate-limiters
[6]:https://blog.risingstack.com/training-building-microservices-node-js/?utm_source=rsblog&utm_medium=roadblock-new
[7]:https://github.com/Netflix/chaosmonkey
[8]:https://trace.risingstack.com/monitoring-ebook
[9]:https://twitter.com/RisingStack
[10]:https://blog.risingstack.com/training-building-microservices-node-js/
[11]:https://blog.risingstack.com/author/peter-marton/

How to Install Software from Source Code… and Remove it Afterwards
============================================================
![How to install software from source code](https://itsfoss.com/wp-content/uploads/2017/10/install-software-from-source-code-linux-800x450.jpg)
_Brief: This detailed guide explains how to install a program from source code in Linux and how to remove the software installed from the source code._
One of the greatest strengths of your Linux distribution is its package manager and the associated software repository. With them, you have all the necessary tools and resources to download and install new software on your computer in a completely automated manner.
But despite all their efforts, the package maintainers cannot handle each and every use case. Nor can they package all the software available out there. So there are still situations where you will have to compile and install new software by yourself. As for myself, the most common reason, by far, that I have to compile some software is when I need to run a very specific version. Or because I want to modify the source code or use some fancy compilation options.
If your needs belong to that latter category, chances are you already know what you are doing. But for the vast majority of Linux users, compiling and installing software from the sources for the first time might look like an initiation ceremony: somewhat frightening, but with the promise of entering a new world of possibilities and of being part of a privileged community if you overcome it.
[Suggested read: How To Install And Remove Software In Ubuntu [Complete Guide]][8]
### A. Installing software from source code in Linux
And that's exactly what we will do here. For the purpose of this article, let's say I need to install [NodeJS][9] 8.1.1 on my system. That exact version. A version which is not available from the Debian repository:
```
sh$ apt-cache madison nodejs | grep amd64
nodejs | 6.11.1~dfsg-1 | http://deb.debian.org/debian experimental/main amd64 Packages
nodejs | 4.8.2~dfsg-1 | http://ftp.fr.debian.org/debian stretch/main amd64 Packages
nodejs | 4.8.2~dfsg-1~bpo8+1 | http://ftp.fr.debian.org/debian jessie-backports/main amd64 Packages
nodejs | 0.10.29~dfsg-2 | http://ftp.fr.debian.org/debian jessie/main amd64 Packages
nodejs | 0.10.29~dfsg-1~bpo70+1 | http://ftp.fr.debian.org/debian wheezy-backports/main amd64 Packages
```
### Step 1: Getting the source code from GitHub
Like many open-source projects, the sources of NodeJS can be found on GitHub: [https://github.com/nodejs/node][10]
So, lets go directly there.
![The NodeJS official GitHub repository](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-account.png)
If you're not familiar with [GitHub][11], [git][12] or any other [version control system][13] worth mentioning: the repository contains the current source for the software, as well as a history of all the modifications made through the years to that software, eventually going all the way back to the very first line written for that project. For the developers, keeping that history has many advantages. For us today, the main one is that we will be able to get the sources of the project as they were at any given point in time. More precisely, I will be able to get the sources as they were when the 8.1.1 version I want was released, even if there were many modifications since then.
![Choose the v8.1.1 tag in the NodeJS GitHub repository](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-choose-revision-tag.png)
On GitHub, you can use the “branch” button to navigate between different versions of the software. [“Branch” and “tags” are somewhat related concepts in Git][14]. Basically, developers create branches and tags to keep track of important events in the project history, like when they start working on a new feature or when they publish a release. I will not go into the details here; all you need to know is that I'm looking for the version _tagged_ “v8.1.1”.
![The NodeJS GitHub repository as it was at the time the v8.1.1 tag was created](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-revision-811.png)
After having chosen the “v8.1.1” tag, the page is refreshed; the most obvious change is that the tag now appears as part of the URL. In addition, you will notice that the file change dates are different too. The source tree you are now seeing is the one that existed at the time the v8.1.1 tag was created. In some sense, you can think of a version control tool like git as a time travel machine, allowing you to go back and forth in a project's history.
![NodeJS GitHub repository download as a ZIP button](https://itsfoss.com/wp-content/uploads/2017/07/nodejs-github-revision-download-zip.png)
At this point, we can download the sources of NodeJS 8.1.1. You can't miss the big blue button suggesting to download the ZIP archive of the project. As for myself, I will download and extract the ZIP from the command line for the sake of the explanation. But if you prefer using a [GUI][15] tool, don't hesitate to do that instead:
```
wget https://github.com/nodejs/node/archive/v8.1.1.zip
unzip v8.1.1.zip
cd node-8.1.1/
```
Downloading the ZIP archive works great. But if you want to do it “like a pro”, I would suggest using the `git` tool directly to download the sources. It is not complicated at all, and it will be a nice first contact with a tool you will often encounter:
```
# first ensure git is installed on your system
sh$ sudo apt-get install git
# Make a shallow clone the NodeJS repository at v8.1.1
sh$ git clone --depth 1 \
--branch v8.1.1 \
https://github.com/nodejs/node
sh$ cd node/
```
By the way, if you have any issues, just consider this first part of the article as a general introduction. Later, I have more detailed explanations for Debian- and RedHat-based distributions in order to help you troubleshoot common issues.
Anyway, whether you downloaded the source using `git` or as a ZIP archive, you should now have exactly the same source files in the current directory:
```
sh$ ls
android-configure BUILDING.md common.gypi doc Makefile src
AUTHORS CHANGELOG.md configure GOVERNANCE.md node.gyp test
benchmark CODE_OF_CONDUCT.md CONTRIBUTING.md lib node.gypi tools
BSDmakefile COLLABORATOR_GUIDE.md deps LICENSE README.md vcbuild.bat
```
### Step 2: Understanding the Build System of the program
We usually talk about “compiling the sources”, but compilation is only one of the phases required to produce working software from its source. A build system is a set of tools and practices used to automate and articulate those different tasks in order to build the software entirely just by issuing a few commands.
While the concept is simple, the reality is somewhat more complicated, because different projects or programming languages may have different requirements. Or because of the programmer's tastes. Or the supported platforms. Or for historical reasons. Or… or… there is an almost endless list of reasons to choose or create another build system. All that to say: there are many different solutions used out there.
NodeJS uses a [GNU-style build system][16]. This is a popular choice in the open source community. And once again, a good way to start your journey.
Writing and tuning a build system is a pretty complex task. But for the “end user”, GNU-style build systems boil down to using two tools: `configure` and `make`.
The `configure` file is a project-specific script that will check the destination system configuration and available features in order to ensure the project can be built, possibly dealing with the specificities of the current platform.
An important part of a typical `configure` job is to build the `Makefile`. That is the file containing the instructions required to effectively build the project.
The [`make` tool][17], on the other hand, is a POSIX tool available on any Unix-like system. It will read the project-specific `Makefile` and perform the required operations to build and install your program.
But, as always in the Linux world, you still have some latitude to customize the build for your specific needs.
```
./configure --help
```
The `./configure --help` command will show you all the available configuration options. Once again, this is very project-specific. And to be honest, it is sometimes necessary to dig into the project before fully understanding the meaning of each and every configure option.
But there is at least one standard GNU Autotools option that you must know: the `--prefix` option. This has to do with the file system hierarchy and the place your software will be installed.
[Suggested read: 8 Vim Tips And Tricks That Will Make You A Pro User][18]
### Step 3: The FHS
The Linux file system hierarchy on a typical distribution mostly complies with the [Filesystem Hierarchy Standard (FHS)][19].
That standard explains the purpose of the various directories of your system: `/usr`, `/tmp`, `/var` and so on.
When using the GNU Autotools (and most other build systems), the default installation location for your new software will be `/usr/local`. This is a good choice, as according to the FHS, _“The /usr/local hierarchy is for use by the system administrator when installing software locally. It needs to be safe from being overwritten when the system software is updated. It may be used for programs and data that are shareable amongst a group of hosts, but not found in /usr.”_
The `/usr/local` hierarchy somehow replicates the root directory, and you will find there `/usr/local/bin` for the executable programs, `/usr/local/lib` for the libraries, `/usr/local/share` for architecture independent files and so on.
The only issue when using the `/usr/local` tree for custom software installation is that the files for all your software will be mixed together there. Especially, after having installed a couple of pieces of software, it will be hard to track which files in `/usr/local/bin` and `/usr/local/lib` belong to which software. That will not cause any issue for the system, though. After all, `/usr/bin` is just about the same mess. But it will become an issue the day you want to remove manually installed software.
To solve that issue, I usually prefer installing custom software in the `/opt` sub-tree instead. Once again, to quote the FHS:
_”/opt is reserved for the installation of add-on application software packages.
A package to be installed in /opt must locate its static files in a separate /opt/<package> or /opt/<provider> directory tree, where <package> is a name that describes the software package and <provider> is the provider's LANANA registered name.”_
So we will create a sub-directory of `/opt` specifically for our custom NodeJS installation. And if someday I want to remove that software, I will simply have to remove that directory:
```
sh$ sudo mkdir /opt/node-v8.1.1
sh$ sudo ln -sT node-v8.1.1 /opt/node
# What is the purpose of the symbolic link above?
# Read the article till the end--then try to answer that
# question in the comment section!
sh$ ./configure --prefix=/opt/node-v8.1.1
sh$ make -j9 && echo ok
# -j9 means run up to 9 parallel tasks to build the software.
# As a rule of thumb, use -j(N+1), where N is the number of cores
# of your system. That will maximize the CPU usage (one task per
# CPU thread/core + a provision of one extra task when a process
# is blocked by an I/O operation).
```
Anything but “ok” after the `make` command has completed means there was an error during the build process. As we ran a parallel build (because of the `-j` option), it is not always easy to retrieve the error message given the large volume of output produced by the build system.
In case of an issue, just restart `make`, but without the `-j` option this time. The error should then appear near the end of the output:
```
sh$ make
```
Finally, once the compilation has run to completion, you can install your software in its location by running this command:
```
sh$ sudo make install
```
And test it:
```
sh$ /opt/node/bin/node --version
v8.1.1
```
### B. What if things go wrong while installing from source code?
What I've explained above is mostly what you can see on the “build instructions” page of a well-documented project. But given that this article's goal is to let you compile your first software from sources, it might be worth taking the time to investigate some common issues. So, I will do the whole procedure again, but this time from fresh, minimal Debian 9.0 and CentOS 7.0 systems, so you can see the errors I encountered and how I solved them.
### From Debian 9.0 “Stretch”
```
itsfoss@debian:~$ git clone --depth 1 \
--branch v8.1.1 \
https://github.com/nodejs/node
-bash: git: command not found
```
This problem is quite easy to diagnose and solve. Just install the `git` package:
```
itsfoss@debian:~$ sudo apt-get install git
```
```
itsfoss@debian:~$ git clone --depth 1 \
--branch v8.1.1 \
https://github.com/nodejs/node && echo ok
[...]
ok
```
```
itsfoss@debian:~/node$ sudo mkdir /opt/node-v8.1.1
itsfoss@debian:~/node$ sudo ln -sT node-v8.1.1 /opt/node
```
No problem here.
```
itsfoss@debian:~/node$ ./configure --prefix=/opt/node-v8.1.1/
WARNING: failed to autodetect C++ compiler version (CXX=g++)
WARNING: failed to autodetect C compiler version (CC=gcc)
Node.js configure error: No acceptable C compiler found!
Please make sure you have a C compiler installed on your system and/or
consider adjusting the CC environment variable if you installed
it in a non-standard prefix.
```
Obviously, to compile a project, you need a compiler. NodeJS being written in the [C++ language][20], we need a C++ [compiler][21]. Here, I will install `g++`, the GNU C++ compiler, for that purpose:
```
itsfoss@debian:~/node$ sudo apt-get install g++
itsfoss@debian:~/node$ ./configure --prefix=/opt/node-v8.1.1/ && echo ok
[...]
ok
```
```
itsfoss@debian:~/node$ make -j9 && echo ok
-bash: make: command not found
```
Another missing tool. Same symptoms. Same solution:
```
itsfoss@debian:~/node$ sudo apt-get install make
itsfoss@debian:~/node$ make -j9 && echo ok
[...]
ok
```
```
itsfoss@debian:~/node$ sudo make install
[...]
itsfoss@debian:~/node$ /opt/node/bin/node --version
v8.1.1
```
Success!
Please note: I've installed the various tools one by one to show how to diagnose compilation issues and the typical solution for those issues. But if you search more on that topic or read other tutorials, you will discover that most distributions have “meta-packages” acting as an umbrella to install some or all of the typical tools used for compiling software. On Debian-based systems, you will probably encounter the [build-essential][22] package for that purpose. And on Red-Hat-based distributions, that will be the _“Development Tools”_ group.
### From CentOS 7.0
```
[itsfoss@centos ~]$ git clone --depth 1 \
--branch v8.1.1 \
https://github.com/nodejs/node
-bash: git: command not found
```
Command not found? Just install it using the `yum` package manager:
```
[itsfoss@centos ~]$ sudo yum install git
```
```
[itsfoss@centos ~]$ git clone --depth 1 \
--branch v8.1.1 \
https://github.com/nodejs/node && echo ok
[...]
ok
```
```
[itsfoss@centos ~]$ sudo mkdir /opt/node-v8.1.1
[itsfoss@centos ~]$ sudo ln -sT node-v8.1.1 /opt/node
```
```
[itsfoss@centos ~]$ cd node
[itsfoss@centos node]$ ./configure --prefix=/opt/node-v8.1.1/
WARNING: failed to autodetect C++ compiler version (CXX=g++)
WARNING: failed to autodetect C compiler version (CC=gcc)
Node.js configure error: No acceptable C compiler found!
Please make sure you have a C compiler installed on your system and/or
consider adjusting the CC environment variable if you installed
it in a non-standard prefix.
```
You guessed it: NodeJS is written in the C++ language, but my system lacks the corresponding compiler. Yum to the rescue. As I'm not a regular CentOS user, I actually had to search the Internet for the exact name of the package containing the g++ compiler, which led me to this page: [https://superuser.com/questions/590808/yum-install-gcc-g-doesnt-work-anymore-in-centos-6-4][23]
```
[itsfoss@centos node]$ sudo yum install gcc-c++
[itsfoss@centos node]$ ./configure --prefix=/opt/node-v8.1.1/ && echo ok
[...]
ok
```
```
[itsfoss@centos node]$ make -j9 && echo ok
[...]
ok
```
```
[itsfoss@centos node]$ sudo make install && echo ok
[...]
ok
```
```
[itsfoss@centos node]$ /opt/node/bin/node --version
v8.1.1
```
Success. Again.
### C. Making changes to the software installed from source code
You may install software from source because you need a very specific version not available in your distribution's repository, or because you want to _modify_ the program, either to fix a bug or to add a feature. After all, open source is all about that. So I will take that opportunity to give you a taste of the power you have at hand now that you are able to compile your own software.
Here, we will make a minor change to the sources of NodeJS. And we will see if our change is incorporated into the compiled version of the software:
Open the file `node/src/node.cc` in your favorite [text editor][24] (vim, nano, gedit, … ). And try to locate that fragment of code:
```
if (debug_options.ParseOption(argv[0], arg)) {
// Done, consumed by DebugOptions::ParseOption().
} else if (strcmp(arg, "--version") == 0 || strcmp(arg, "-v") == 0) {
printf("%s\n", NODE_VERSION);
exit(0);
} else if (strcmp(arg, "--help") == 0 || strcmp(arg, "-h") == 0) {
PrintHelp();
exit(0);
}
```
It is around [line 3830 of the file][25]. Then modify the line containing `printf` to match that one instead:
```
printf("%s (compiled by myself)\n", NODE_VERSION);
```
Then head back to your terminal. Before going further (and to give you some more insight into the power behind git) you can check if you've modified the right file:
```
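# `git diff` shows the uncommitted changes in your working tree:
sh$ git diff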
diff --git a/src/node.cc b/src/node.cc
index bbce1022..a5618b57 100644
--- a/src/node.cc
+++ b/src/node.cc
@@ -3828,7 +3828,7 @@ static void ParseArgs(int* argc,
if (debug_options.ParseOption(argv[0], arg)) {
// Done, consumed by DebugOptions::ParseOption().
} else if (strcmp(arg, "--version") == 0 || strcmp(arg, "-v") == 0) {
- printf("%s\n", NODE_VERSION);
+ printf("%s (compiled by myself)\n", NODE_VERSION);
exit(0);
} else if (strcmp(arg, "--help") == 0 || strcmp(arg, "-h") == 0) {
PrintHelp();
```
You should see a “-” (minus sign) before the line as it was before your change, and a “+” (plus sign) before the line after your change.
It is now time to recompile and re-install your software:
```
make -j9 && sudo make install && echo ok
[...]
ok
```
This time, the only reason it might fail is a typo you made while changing the code. If this is the case, re-open the `node/src/node.cc` file in your text editor and fix the mistake.
Once youve managed to compile and install that new modified NodeJS version, you will be able to check if your modifications were actually incorporated into the software:
```
itsfoss@debian:~/node$ /opt/node/bin/node --version
v8.1.1 (compiled by myself)
```
Congratulations! Youve made your first change to an open-source program!
### D. Let the shell locate our custom-built software
You may have noticed that, until now, I have always launched my newly compiled NodeJS software by specifying the absolute path to the binary file:
```
/opt/node/bin/node
```
It works. But it is annoying, to say the least. There are actually two common ways of fixing that. But to understand them, you must first know that your shell locates executable files by looking for them only in the directories specified by the `PATH` [environment variable][26].
```
itsfoss@debian:~/node$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
```
Here, on this Debian system, if you do not explicitly specify a directory as part of a command name, the shell will first look for the executable program in `/usr/local/bin`; if it is not found there, in `/usr/bin`; then in `/bin`; then in `/usr/local/games`; then in `/usr/games`; and if it is still not found, the shell will report the error _“command not found”_.
Given that, we have two ways to make a command accessible to the shell: by adding it to one of the already configured `PATH` directories, or by adding the directory containing our executable file to the `PATH`.
### Adding a link from /usr/local/bin
Just _copying_ the node binary executable from `/opt/node/bin` to `/usr/local/bin` would be a bad idea, since by doing so, the executable program would no longer be able to locate the other required components belonging to `/opt/node/` (it's common practice for software to locate its resource files relative to its own location).
So, the traditional way of doing that is by using a symbolic link:
```
itsfoss@debian:~/node$ sudo ln -sT /opt/node/bin/node /usr/local/bin/node
itsfoss@debian:~/node$ which -a node || echo not found
/usr/local/bin/node
itsfoss@debian:~/node$ node --version
v8.1.1 (compiled by myself)
```
This is a simple and effective solution, especially if a software package is made of just a few well-known executable programs, since you have to create a symbolic link for each and every user-invokable command. For example, if you're familiar with NodeJS, you know the `npm` companion application, which I should symlink from `/usr/local/bin` too. But I leave that to you as an exercise.
### Modifying the PATH
First, if you tried the preceding solution, remove the node symbolic link created previously to start from a clean state:
```
itsfoss@debian:~/node$ sudo rm /usr/local/bin/node
itsfoss@debian:~/node$ which -a node || echo not found
not found
```
And now, here is the magic command to change your `PATH`:
```
itsfoss@debian:~/node$ export PATH="/opt/node/bin:${PATH}"
itsfoss@debian:~/node$ echo $PATH
/opt/node/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
```
Simply put, I replaced the content of the `PATH` environment variable with its previous content, prefixed by `/opt/node/bin`. So, as you can now imagine, the shell will look in the `/opt/node/bin` directory first for executable programs. We can confirm that using the `which` command:
```
itsfoss@debian:~/node$ which -a node || echo not found
/opt/node/bin/node
itsfoss@debian:~/node$ node --version
v8.1.1 (compiled by myself)
```
Whereas the “link” solution is permanent as soon as you've created the symbolic link in `/usr/local/bin`, the `PATH` change is effective only in the current shell. I will leave you to do some research on how to make changes to the `PATH` permanent. As a hint, it has to do with your “profile”. If you find the solution, don't hesitate to share it with the other readers by using the comment section below!
### E. How to remove that newly installed software from source code
Since our custom-compiled NodeJS software sits entirely in the `/opt/node-v8.1.1` directory, removing the software requires no more work than using the `rm` command to remove that directory:
```
sudo rm -rf /opt/node-v8.1.1
```
BEWARE: `sudo` and `rm -rf` are a dangerous cocktail! Always check your command twice before pressing the “enter” key. You won't get any confirmation message, and there is no undelete if you remove the wrong directory…
Then, if you've modified your `PATH`, you will have to revert those changes, which is not complicated at all.
And if you've created links from `/usr/local/bin`, you will have to remove them all:
```
itsfoss@debian:~/node$ sudo find /usr/local/bin \
-type l \
-ilname "/opt/node/*" \
-print -delete
/usr/local/bin/node
```
### Wait? Where was the Dependency Hell?
As a final comment, if you have read about compiling your own custom software, you might have heard about [dependency hell][27]. This is a nickname for the annoying situation where, before being able to successfully compile some software, you must first compile a prerequisite library, which in turn requires another library, which might in turn be incompatible with some other software you've already installed.
Part of the job of your distribution's package maintainers is to actually resolve that dependency hell and to ensure that the various software on your system uses compatible libraries, installed in the right order.
For this article, I chose on purpose to install NodeJS, as it has virtually no dependencies. I said “virtually” because, in fact, it _has_ dependencies. But the source code of those dependencies is present in the source repository of the project (in the `node/deps` subdirectory), so you don't have to download and install them manually beforehand.
But if you're interested in understanding more about that problem and learning how to deal with it, let me know by using the comment section below: that would be a great topic for a more advanced article!
--------------------------------------------------------------------------------
About the author:
Engineer by Passion, Teacher by Vocation. My goals: to share my enthusiasm for what I teach and to prepare my students to develop their skills by themselves. You can find me on my website as well.
--------------------
via: https://itsfoss.com/install-software-from-source-code/
Author: [Sylvain Leroux][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://itsfoss.com/author/sylvain/
[1]:https://itsfoss.com/author/sylvain/
[2]:https://itsfoss.com/install-software-from-source-code/#comments
[3]:https://www.facebook.com/share.php?u=https%3A%2F%2Fitsfoss.com%2Finstall-software-from-source-code%2F%3Futm_source%3Dfacebook%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[4]:https://twitter.com/share?original_referer=/&text=How+to+Install+Software+from+Source+Code%E2%80%A6+and+Remove+it+Afterwards&url=https://itsfoss.com/install-software-from-source-code/%3Futm_source%3Dtwitter%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare&via=Yes_I_Know_IT
[5]:https://plus.google.com/share?url=https%3A%2F%2Fitsfoss.com%2Finstall-software-from-source-code%2F%3Futm_source%3DgooglePlus%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[6]:https://www.linkedin.com/cws/share?url=https%3A%2F%2Fitsfoss.com%2Finstall-software-from-source-code%2F%3Futm_source%3DlinkedIn%26utm_medium%3Dsocial%26utm_campaign%3DSocialWarfare
[7]:https://www.reddit.com/submit?url=https://itsfoss.com/install-software-from-source-code/&title=How+to+Install+Software+from+Source+Code%E2%80%A6+and+Remove+it+Afterwards
[8]:https://itsfoss.com/remove-install-software-ubuntu/
[9]:https://nodejs.org/en/
[10]:https://github.com/nodejs/node
[11]:https://en.wikipedia.org/wiki/GitHub
[12]:https://en.wikipedia.org/wiki/Git
[13]:https://en.wikipedia.org/wiki/Version_control
[14]:https://stackoverflow.com/questions/1457103/how-is-a-tag-different-from-a-branch-which-should-i-use-here
[15]:https://en.wikipedia.org/wiki/Graphical_user_interface
[16]:https://en.wikipedia.org/wiki/GNU_Build_System
[17]:https://en.wikipedia.org/wiki/Make_%28software%29
[18]:https://itsfoss.com/pro-vim-tips/
[19]:http://www.pathname.com/fhs/
[20]:https://en.wikipedia.org/wiki/C%2B%2B
[21]:https://en.wikipedia.org/wiki/Compiler
[22]:https://packages.debian.org/sid/build-essential
[23]:https://superuser.com/questions/590808/yum-install-gcc-g-doesnt-work-anymore-in-centos-6-4
[24]:https://en.wikipedia.org/wiki/List_of_text_editors
[25]:https://github.com/nodejs/node/blob/v8.1.1/src/node.cc#L3830
[26]:https://en.wikipedia.org/wiki/Environment_variable
[27]:https://en.wikipedia.org/wiki/Dependency_hell
