Merge pull request #52 from LCTT/master

update
MjSeven 2018-07-16 15:27:05 +08:00 committed by GitHub
commit 405232247e
24 changed files with 1451 additions and 499 deletions

View File

@ -1,49 +1,45 @@
Kernel Tracing with ftrace
============================================================
Tags: [ftrace][8], [kernel][9], [kernel profiling][10], [kernel tracing][11], [linux][12], [tracepoints][13]
![](https://blog.selectel.com/wp-content/uploads/2017/04/PR-1801-2-2.png)
There are many tools for analyzing events at the kernel level: [SystemTap][14], [ktap][15], [Sysdig][16], [LTTNG][17], and so on, and you can find plenty of introductory articles and reference materials about them on the web.
Much less information can be found on Linux's native mechanism for tracing system events and retrieving/analyzing troubleshooting information. That mechanism is [ftrace][18], the first tracing tool added to the kernel. Let's take a look at what it can do, starting with some key terms.
### Kernel Tracing and Profiling
Kernel profiling detects performance "bottlenecks". Profiling helps us determine exactly where a program loses performance. Special programs generate a profile (a summary of events) that can be used to figure out which functions take up the most run time. These programs, however, can't identify why performance is lost.
Bottlenecks often occur under conditions that can't be identified through profiling. To infer why an event took place, the relevant context at the time it occurred has to be preserved, and that is what tracing is for.
Tracing can be understood as the process of collecting information on the activity of a working system. Special tools are used for this: much as a tape recorder records sound, they record various system events.
Tracing programs can simultaneously trace events at the application and OS level. The information they gather can be used to diagnose a variety of system problems.
Tracing is sometimes compared to logging. The two are indeed similar, but they also differ.
With tracing, the recorded information covers low-level events, which number in the hundreds or even thousands. With logging, the recorded information covers high-level events, which are usually far fewer: users logging into the system, application errors, database transactions, and so on.
Like logs, trace data can be read as-is, but it is more useful to extract the relevant information with special applications. All tracing programs are capable of this.
The Linux kernel has three primary mechanisms for kernel tracing and profiling:
* tracepoints: a mechanism that works over statically instrumented code
* kprobes: a dynamic tracing mechanism that can interrupt kernel code at an arbitrary point, call its own handler, and then return after performing the necessary operations
* perf_events: an interface for accessing the PMU (Performance Monitoring Unit)
I won't describe these mechanisms here; anyone interested in them can visit [Brendan Gregg's blog][19].
With ftrace, we can interact with these mechanisms and get debugging information directly from user space. We'll discuss this in detail below. All of the command-line examples were run on Ubuntu 14.04 with kernel version 3.13.0-24.
### Ftrace常用信息
### ftrace常用信息
ftrace is short for Function Trace, but it can do far more than that: it can track context switches, measure how long processes stay blocked, calculate the active time of high-priority tasks, and much more.
ftrace was developed by Steven Rostedt and has been included in the kernel since version 2.6.27, released in 2008. It is a framework that provides a debugging ring buffer for recording data, which is collected by tracing programs integrated into the kernel.
ftrace works on the debugfs file system, which is mounted by default in most modern Linux distributions. To start using ftrace, change to the `/sys/kernel/debug/tracing` directory (available only to the root user):
```
# cd /sys/kernel/debug/tracing
```
@ -70,16 +66,13 @@ kprobe_profile stack_max_size uprobe_profile
I won't describe all of these files and subdirectories; that has already been done in the [official documentation][20]. Instead, I'll just briefly describe the files relevant to this article:
* available_tracers: available tracers
* current_tracer: the tracer currently running
* tracing_on: the system file responsible for enabling or disabling data writes to the ring buffer (to enable it, the number 1 is added to the file; to disable it, the number 0)
* trace: the file that saves trace data in a human-readable format
### Available Tracers
We can view the list of available tracers with the following command:
```
root@andrei:/sys/kernel/debug/tracing# cat available_tracers
```
@ -89,18 +82,14 @@ blk mmiotrace function_graph wakeup_rt wakeup function nop
Let's take a quick look at the features of each tracer:
* function: a function call tracer, without arguments
* function_graph: a function call tracer that follows subcalls
* blk: a tracer of calls and events related to block I/O (it is what blktrace uses)
* mmiotrace: a memory-mapped I/O operation tracer
* nop: the simplest tracer, which, as the name suggests, doesn't do anything (although it may come in handy in some situations, which we'll describe later on)
### The function Tracer
Before introducing the function tracer, let's start with a test script:
```
#!/bin/sh
```
@ -117,7 +106,7 @@ less ${dir}/trace
This script is fairly simple, but there are a few things worth noting. The command `sysctl ftrace.enabled=1` enables the function tracer. We then enable the current tracer by writing its name to the `current_tracer` file.
Next, we write a `1` to `tracing_on`, which enables the ring buffer. The syntax requires a space on each side of the `1` and the `>` symbol; writing something like `echo 1> tracing_on` will not work. One line later we disable it (if `0` is written to `tracing_on`, the buffer won't be cleared and ftrace won't be disabled).
Why do we do this? Between the two `echo` commands, we see the command `sleep 1`. We enable the buffer, run this command, and then disable it. This lets the tracer gather information on all the system calls that occur while the command runs.
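A minimal sketch of such a script, reconstructed from the steps just described (the `dir` variable matches the `less ${dir}/trace` context visible in the hunk headers), might look like this:

```
#!/bin/sh

dir=/sys/kernel/debug/tracing

sysctl ftrace.enabled=1            # enable the function tracer
echo function > ${dir}/current_tracer

echo 1 > ${dir}/tracing_on         # start writing to the ring buffer
sleep 1                            # the workload we want to trace
echo 0 > ${dir}/tracing_on         # stop writing to the ring buffer

less ${dir}/trace                  # view the collected data
```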
@ -156,21 +145,18 @@ less ${dir}/trace
```
trace.sh-1295 [000] d... 90.502879: __acct_update_integrals <-acct_account_cputime
```
The output starts with the number of entries in the buffer and the total number of entries written. The difference between these two numbers is the number of events lost while the buffer was being filled (in our example, no events were lost).
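That header looks something like this (the values shown here are purely illustrative):

```
# tracer: function
#
# entries-in-buffer/entries-written: 33912/33912   #P:4
```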
Then comes a list of functions with the following information:
* the process identifier (PID)
* the CPU the process ran on (CPU#)
* the process start time (TIMESTAMP)
* the name of the traced function and the parent function that called it; for example, in the first line of our output, `rb_simple_write` was called by the `mutex-unlock` function.
### The function_graph Tracer
The function_graph tracer works just like the function tracer, but in more detail: it shows the entry and exit point of every function. With this tracer, we can trace a function's subcalls and measure the run time of each function.
Let's edit the script from our last example:
@ -215,11 +201,11 @@ less ${dir}/trace
```
0) ! 208.154 us | } /* ip_local_deliver_finish */
```
In this graph, `DURATION` shows the time spent running each function. Pay attention to the points marked with the `+` and `!` symbols. A plus sign (`+`) means the function took more than 10 microseconds; an exclamation point (`!`) means it took more than 100 microseconds (the `208.154 us` entry above, for instance, carries a `!`).
Under `FUNCTION_CALLS`, we see information on each function call.
As in C, each function's boundaries are marked with curly braces (`{`): one where the function starts and one where it ends; leaf functions, which don't call any other functions, are marked with a semicolon (`;`).
### Function Filters
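ftrace restricts tracing to particular functions through the `set_ftrace_filter` file, which accepts glob patterns; a minimal sketch, run from the tracing directory:

```
# Trace only functions whose names start with "ip_"
echo 'ip_*' > set_ftrace_filter
cat set_ftrace_filter       # show the currently active filter
echo > set_ftrace_filter    # clear the filter again
```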
@ -249,13 +235,13 @@ ftrace has many more filter options. For a more detailed look at them, you can
### Tracing Events
We mentioned the tracepoint mechanism above. Tracepoints are special code inserts that trigger system events. Tracepoints may be dynamic (meaning they have several checks attached to them) or static (meaning no checks are attached).
Static tracepoints don't affect the system in any way; they just add a few bytes for a function call at the end of the instrumented function, plus a data structure in a separate section.
Dynamic tracepoints call a trace function when the relevant code fragment is executed. The trace data is written to the ring buffer.
Tracepoints can be placed anywhere in code; in fact, they can already be found in many kernel functions. Let's take a look at the `kmem_cache_alloc` function (taken from [here][22]):
```
{
```
@ -294,7 +280,7 @@ fs kvm power scsi vfs
```
ftrace kvmmmu printk signal vmscan
```
All possible events are grouped in subdirectories by subsystem. Before we start tracing events, let's make sure writing to the ring buffer is enabled:
```
root@andrei:/sys/kernel/debug/tracing# cat tracing_on
@ -306,25 +292,25 @@ root@andrei:/sys/kernel/debug/tracing# cat tracing_on
root@andrei:/sys/kernel/debug/tracing# echo 1 > tracing_on
```
In our last article, we wrote about the `chroot()` system call, so let's trace accesses to it. For our tracer we'll use `nop`, because the function and function_graph tracers record too much information, including event information we're not interested in.
```
root@andrei:/sys/kernel/debug/tracing# echo nop > current_tracer
```
All system-call-related events are kept in the syscalls directory. Here we'll find a directory for entering and exiting each system call. We activate the tracepoint by writing `1` to the relevant file:
```
root@andrei:/sys/kernel/debug/tracing# echo 1 > events/syscalls/sys_enter_chroot/enable
```
Then we create an isolated file system using `chroot` (for more on this, see [this earlier article][23]). After we've executed the commands we want, we disable the tracer so that unneeded or excess information doesn't end up in the output:
```
root@andrei:/sys/kernel/debug/tracing# echo 0 > tracing_on
```
Then we look at the contents of the ring buffer. At the end of the output, we find information about our system call (here is a small excerpt):
```
root@andrei:/sys/kernel/debug/tracing# cat trace
```
@ -343,15 +329,10 @@ root@andrei:/sys/kernel/debug/tracing# cat trace
In this article, we've given an overview of ftrace's capabilities. Any comments or additions would be much appreciated. If you want to dive deeper into this subject, we recommend the following resources:
* [https://www.kernel.org/doc/Documentation/trace/tracepoints.txt][1]: a detailed description of the tracepoint mechanism
* [https://www.kernel.org/doc/Documentation/trace/events.txt][2]: a guide to tracing system events in Linux
* [https://www.kernel.org/doc/Documentation/trace/ftrace.txt][3]: the official ftrace documentation
* [https://lttng.org/files/thesis/desnoyers-dissertation-2009-12-v27.pdf][4]: the dissertation of Mathieu Desnoyers (creator of tracepoints and LTTNG) on kernel tracing and profiling
* [https://lwn.net/Articles/370423/][5]: Steven Rostedt's article on ftrace's capabilities
* [http://alex.dzyoba.com/linux/profiling-ftrace.html][6]: an overview of a practical case analyzed with ftrace
--------------------------------------------------------------------------------
@ -360,7 +341,7 @@ via:https://blog.selectel.com/kernel-tracing-ftrace/
Author: [Andrej Yemelianov][a]
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

View File

@ -1,9 +1,11 @@
You Don't Know Bash: An Introduction to Bash Arrays
======
> Enter the weird, wondrous world of Bash arrays.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S)
Although software engineers regularly use the command line for many aspects of development, arrays are likely one of the more obscure features of the command line (though not as obscure as the regex operator `=~`). But setting the obscurity and questionable syntax aside, [Bash][1] arrays can be very useful.
### Wait, but why?
@ -15,26 +17,21 @@
### The basics
Our first step is to define an array containing the values of the `--threads` parameter that we want to test:
```
allThreads=(1 2 4 8 16 32 64 128)
```
All the elements are numbers in this example, but it need not be the case; arrays in Bash can contain both numbers and strings, e.g., `myArray=(1 2 "three" 4 "five")` is a valid expression. And just as with any other Bash variable, make sure to leave no spaces around the equal sign. Otherwise, Bash will treat the variable name as a program to execute, and the `=` as its first parameter!
Now that we've initialized the array, let's retrieve a few of its elements. You'll notice that simply typing `echo $allThreads` will output only the first element.
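A quick check of that behavior, using the array defined above:

```
allThreads=(1 2 4 8 16 32 64 128)
echo $allThreads    # prints only "1", the first element
```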
To understand why that is, let's take a step back and revisit how we usually output variables in Bash. Consider the following scenario:
```
type="article"
echo "Found 42 $type"
```
Say the variable `$type` is given to us as a singular noun and we want to add an `s` at the end of the sentence. We can't simply add an `s` to `$type`, since that would turn it into a different variable, `$types`. And although we could resort to code contortions such as `echo "Found 42 "$type"s"`, the best way to solve this problem is to use curly braces: `echo "Found 42 ${type}s"`, which lets us tell Bash where the name of the variable starts and ends. (Interestingly, this is the same syntax used in JavaScript/ES6 to inject variables and expressions into [template literals][2].)
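Putting those two lines together, the fixed version of the scenario reads:

```
type="article"
echo "Found 42 ${type}s"    # prints "Found 42 articles"
```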
@ -49,37 +46,31 @@ echo "Found 42 $type"
#### Looping through array elements
With that in mind, let's loop through `$allThreads` and launch the pipeline for each value of `--threads`:
```
for t in ${allThreads[@]}; do
  ./pipeline --threads $t
done
```
#### Looping through array indices
Next, let's consider a slightly different approach. Rather than looping over array elements, we can loop over array indices:
```
for i in ${!allThreads[@]}; do
  ./pipeline --threads ${allThreads[$i]}
done
```
Let's break that down: As we saw above, `${allThreads[@]}` represents all the elements in our array. Adding an exclamation mark to make it `${!allThreads[@]}` returns the list of all array indices (in our case, 0 to 7). In other words, the `for` loop iterates over all indices `$i` and reads the `$i`-th element from `$allThreads` to set the value of the `--threads` parameter.
This is much harsher on the eyes, so you may be wondering why I'd bother introducing it in the first place. That's because there are times when you need to know both the index and the value within a loop, e.g., if you want to ignore the first element of an array, using indices saves you from creating an additional variable that you then increment inside the loop.
### Populating arrays
So far, we've been able to launch the pipeline for each `--threads` of interest. Now, let's assume the output of the pipeline is the runtime in seconds. We'd like to capture that output at each iteration and save it in another array so we can do various manipulations with it at the end.
#### Some useful syntax
@ -89,7 +80,6 @@ done
```
myArray+=( "newElement1" "newElement2" )
```
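For example, with a small hypothetical array, appending works like this:

```
myArray=(1 2 3)
myArray+=( "newElement1" "newElement2" )
echo "${myArray[@]}"    # prints "1 2 3 newElement1 newElement2"
```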
#### The parameter sweep
@ -98,24 +88,18 @@ myArray+=( "newElement1" "newElement2" )
```
allThreads=(1 2 4 8 16 32 64 128)
allRuntimes=()
for t in ${allThreads[@]}; do
    runtime=$(./pipeline --threads $t)
    allRuntimes+=( $runtime )
done
```
And voilà!
### What else you got?
In this article, we covered the scenario of using arrays for parameter sweeps. But I promise there are more reasons to use Bash arrays; here are two more examples.
#### Log alerting
@ -123,81 +107,49 @@ done
```
# List of logs and who should be notified of issues
logPaths=("api.log" "auth.log" "jenkins.log" "data.log")
logEmails=("jay@email" "emma@email" "jon@email" "sophia@email")

# Look for signs of trouble in each log
for i in ${!logPaths[@]};
do
    log=${logPaths[$i]}
    stakeholder=${logEmails[$i]}
    numErrors=$( tail -n 100 "$log" | grep "ERROR" | wc -l )

    # Warn stakeholders if recently there were more than 5 errors
    if [[ "$numErrors" -gt 5 ]];
    then
        emailRecipient="$stakeholder"
        emailSubject="WARNING: ${log} showing unusual levels of errors"
        emailBody="${numErrors} errors found in log ${log}"
        echo "$emailBody" | mailx -s "$emailSubject" "$emailRecipient"
    fi
done
```
#### API queries
Say you want to generate some analytics about which users comment the most on your Medium posts. Since we don't have direct database access, SQL is out of the question, but we can use APIs!
To avoid a long discussion about API authentication and tokens, we'll instead use [JSONPlaceholder][3], a public-facing API testing service, as our endpoint. Once we query each post and retrieve the emails of everyone who commented, we can append those emails to our results array:
```
endpoint="https://jsonplaceholder.typicode.com/comments"
allEmails=()

# Query the first 10 posts
for postId in {1..10};
do
    # Make the API call to fetch the emails of this post's commenters
    response=$(curl "${endpoint}?postId=${postId}")

    # Use jq to parse the JSON response into an array
    allEmails+=( $( jq '.[].email' <<< "$response" ) )
done
```
Note that here I'm using the [jq tool][4] to parse JSON from the command line. The syntax of `jq` is beyond the scope of this article, but I highly recommend you look into it.
As you might imagine, there are countless other scenarios in which using Bash arrays can help, and I hope the examples outlined in this article have given you some food for thought. If you have other examples to share from your own work, please leave a comment below.
@ -209,17 +161,15 @@ done
| Syntax | Result |
|:--|:--|
| `arr=()` | Create an empty array |
| `arr=(1 2 3)` | Initialize the array |
| `${arr[2]}` | Retrieve the third element |
| `${arr[@]}` | Retrieve all elements |
| `${!arr[@]}` | Retrieve the array indices |
| `${#arr[@]}` | Calculate the array size |
| `arr[0]=3` | Overwrite the first element |
| `arr+=(4)` | Append a value (or values) |
| `str=$(ls)` | Save the output of `ls` as a string |
| `arr=( $(ls) )` | Save the output of `ls` as an array of files |
| `${arr[@]:s:n}` | Retrieve `n` elements starting at index `s` |
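As a quick sanity check of the slicing syntax in the last row (reusing the array from earlier):

```
allThreads=(1 2 4 8 16 32 64 128)
echo "${allThreads[@]:1:3}"    # prints "2 4 8": 3 elements starting at index 1
```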
### One last thought
@ -236,37 +186,24 @@ done
```
import subprocess

all_threads = [1, 2, 4, 8, 16, 32, 64, 128]
all_runtimes = []

# Launch the pipeline on each number of threads
for t in all_threads:
    cmd = './pipeline --threads {}'.format(t)

    # Use the subprocess module to fetch the returned output
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
    output = p.communicate()[0]
    all_runtimes.append(output)
```
Since there's no way around using the command line in this example, Bash may well be the preferred choice.
#### Time for a shameless plug
If you enjoyed this article, there's a lot more where that came from! [Register here to attend OSCON][5], where I'll be presenting the live-coding workshop [You Don't Know Bash][6] on July 17, 2018. No slides, no clickers, just you and me typing away at the command line, exploring the wondrous world of Bash.
This article originally appeared on [Medium] and is republished with permission.
@ -277,7 +214,7 @@ via: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays
Author: [Robert Aboukhalil][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [BriFuture](https://github.com/BriFuture)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

View File

@ -1,33 +1,38 @@
4 Cool New Projects to Try in COPR for June 2018
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-1024x433.jpg)
COPR is a [collection][1] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging, or it may not meet other Fedora standards despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it is a neat way to try new or experimental software.
Here's a set of new and interesting projects in COPR.
### Ghostwriter
[Ghostwriter][2] is a text editor for [Markdown][3] formats with a minimal interface. It provides a preview of the document in HTML and syntax highlighting for Markdown. It offers the option to highlight only the paragraph or sentence currently being written. In addition, Ghostwriter can export documents to several formats, including PDF and HTML. Finally, it has the so-called "Hemingway" mode, in which erasing is disabled, forcing the user to write now and edit later.
![][4]
#### Installation instructions
The repo currently provides Ghostwriter for Fedora 26, 27, 28, and Rawhide, as well as EPEL 7. To install Ghostwriter, use these commands:
```
sudo dnf copr enable scx/ghostwriter
sudo dnf install ghostwriter
```
### Lector
[Lector][5] is a simple ebook reader application. Lector supports the most common ebook formats, such as EPUB, MOBI, and AZW, as well as the comic book formats CBZ and CBR. It's easy to set up: just specify the directory containing your ebooks. You can browse the books in Lector's library using a table or book covers. Lector's features include bookmarks, user-defined tags, and a built-in dictionary.
![][6]
#### Installation instructions
The repo currently provides Lector for Fedora 26, 27, 28, and Rawhide. To install Lector, use these commands:
```
sudo dnf copr enable bugzy/lector
sudo dnf install lector
```
### Ranger
@ -37,24 +42,25 @@ Ranger is a text-based file manager with Vim key bindings. It
#### Installation instructions
The repo currently provides Ranger for Fedora 27, 28, and Rawhide. To install Ranger, use these commands:
```
sudo dnf copr enable fszymanski/ranger
sudo dnf install ranger
```
### PrestoPalette
PrestoPalette is a tool to help create balanced color palettes. A nice feature of PrestoPalette is the ability to use lighting to affect both the lightness and saturation of the palette. You can export created palettes as either PNG or JSON.
![][8]
#### Installation instructions
The repo currently provides PrestoPalette for Fedora 26, 27, 28, and Rawhide, as well as EPEL 7. To install PrestoPalette, use these commands:
```
sudo dnf copr enable dagostinelli/prestopalette
sudo dnf install prestopalette
```
@ -65,7 +71,7 @@ via: https://fedoramagazine.org/4-try-copr-june-2018/
Author: [Dominik Turecek][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).

View File

@ -1,3 +1,5 @@
Translating by HardworkFish
Understanding Linux filesystems: ext4 and beyond
======

View File

@ -1,44 +0,0 @@
translating---geekpi
My first sysadmin mistake
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_mistakes.png?itok=dN0OoIl5)
If you work in IT, you know that things never go completely as you think they will. At some point, you'll hit an error or something will go wrong, and you'll end up having to fix things. That's the job of a systems administrator.
As humans, we all make mistakes. Sometimes, we are the error in the process, or we are what went wrong. As a result, we end up having to fix our own mistakes. That happens. We all make mistakes, typos, or errors.
As a young systems administrator, I learned this lesson the hard way. I made a huge blunder. But thanks to some coaching from my supervisor, I learned not to dwell on my errors, but to create a "mistake strategy" to set things right. Learn from your mistakes. Get over it, and move on.
My first job was a Unix systems administrator for a small company. Really, I was a junior sysadmin, but I worked alone most of the time. We were a small IT team, just the three of us. I was the only sysadmin for 20 or 30 Unix workstations and servers. The other two supported the Windows servers and desktops.
Any systems administrators reading this probably won't be surprised to know that, as an unseasoned, junior sysadmin, I eventually ran the `rm` command in the wrong directory. As root. I thought I was deleting some stale cache files for one of our programs. Instead, I wiped out all files in the `/etc` directory by mistake. Ouch.
My clue that I'd done something wrong was an error message that `rm` couldn't delete certain subdirectories. But the cache directory should contain only files! I immediately stopped the `rm` command and looked at what I'd done. And then I panicked. All at once, a million thoughts ran through my head. Did I just destroy an important server? What was going to happen to the system? Would I get fired?
Fortunately, I'd run `rm *` and not `rm -rf *` so I'd deleted only files. The subdirectories were still there. But that didn't make me feel any better.
Immediately, I went to my supervisor and told her what I'd done. She saw that I felt really dumb about my mistake, but I owned it. Despite the urgency, she took a few minutes to do some coaching with me. "You're not the first person to do this," she said. "What would someone else do in your situation?" That helped me calm down and focus. I started to think less about the stupid thing I had just done, and more about what I was going to do next.
I put together a simple strategy: Don't reboot the server. Use an identical system as a template, and re-create the `/etc` directory.
Once I had my plan of action, the rest was easy. It was just a matter of running the right commands to copy the `/etc` files from another server and edit the configuration so it matched the system. Thanks to my practice of documenting everything, I used my existing documentation to make any final adjustments. I avoided having to completely restore the server, which would have meant a huge disruption.
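A hedged sketch of that recovery approach (the tool choice and hostname here are illustrative, and the host-specific files vary by system):

```
# On the damaged server: pull /etc from an identical, healthy machine
rsync -av root@template-server:/etc/ /etc/

# Then hand-edit host-specific configuration to match this system,
# e.g. hostname, network settings, and fstab
```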
To be sure, I learned from that mistake. For the rest of my years as a systems administrator, I always confirmed what directory I was in before running any command.
I also learned the value of building a "mistake strategy." When things go wrong, it's natural to panic and think about all the bad things that might happen next. That's human nature. But creating a "mistake strategy" helps me stop worrying about what just went wrong and focus on making things better. I may still think about it, but knowing my next steps allows me to "get over it."
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/my-first-sysadmin-mistake
Author: [Jim Hall][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/jim-hall

View File

@ -0,0 +1,81 @@
What's the difference between a fork and a distribution?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/spoons_forks_520x292_jh.png?itok=DzEzZBuG)
If you've been around open source software for any length of time, you'll hear the terms fork and distribution thrown around casually in conversation. For many people, the distinction between the two isn't clear, so here I'll try to clear up the confusion.
### First, some definitions
Before explaining the nuances of a fork vs. a distribution and the pitfalls thereof, let's define key concepts.
**[Open source software][1]** is software that:
* Is freely available to distribute under certain [license][2] restraints
* Permits its source code to be viewable and modified under certain license restraints
Open source software can be **consumed** in the following ways:
* Downloaded in binary or source code format, often at no charge (e.g., the [Eclipse developer environment][3])
* As a distribution (product) by a vendor, sometimes at a cost to the user (e.g., [Red Hat products][4])
* Embedded into proprietary software solutions (e.g., some smartphones and browsers display fonts using the open source [freetype software][5])
**Free and open source (FOSS)** is not necessarily "free" as in "zero cost." Free and open source simply means the software is free to distribute, modify, study, and use, subject to the software's licensing. The software distributor may attach a purchase price to it. For example, Linux is available at no cost as Fedora, CentOS, Gentoo, etc. or as a paid distribution as Red Hat Enterprise Linux, SUSE, etc.
**Community** refers to the organizations and individuals that collaboratively work on an open source project. Any individual or organization can contribute to the project by writing or reviewing code, documentation, test suites, managing meetings, updating websites, etc., provided they abide by the license. For example, at [Openhub.net][6], we see government, nonprofit, commercial, and education organizations [contributing to some open source projects][7].
An open source **project** is the result of this collaborative development, documentation, and testing. Most projects have a central repository where code, documentation, testing, and so forth are developed.
A **distribution** is a copy, in binary or source code format, of an open source project. For example, CentOS, Fedora, Red Hat Enterprise Linux, SUSE, Ubuntu, and others are distributions of the Linux project. Tectonic, Google Kubernetes Engine, Amazon Container Service, and Red Hat OpenShift are distributions of the Kubernetes project.
Vendor distributions of open source projects are often called **products** , thus Red Hat OpenStack Platform is the Red Hat OpenStack product that is a distribution of the OpenStack upstream project—and it is still 100% open source.
The **trunk** is the main workstream in the community where the open source project is developed.
An open source **fork** is a version of the open source project that is developed along a separate workstream from the main trunk.
Thus, **a distribution is not the same as a fork**. A distribution is a packaging of the upstream project that is made available by vendors, often as products. However, the core code and documentation in the distribution adhere to the version in the upstream project. A fork—and any distribution based on the fork—results in a version of the code and documentation that are different from the upstream project. Users who have forked upstream open source code have to maintain it on their own, meaning they lose the benefit of the collaboration that takes place in the upstream community.
To further explain a software fork, let's use the analogy of migrating animals. Whales and sea lions migrate from the Arctic to California and Mexico; Monarch butterflies migrate from Alaska to Mexico; and (in the Northern Hemisphere) swallows and many other birds fly south for the winter. The key to a successful migration is that all animals in the group stick together, follow the leaders, find food and shelter, and don't get lost.
### Risks of going it on your own
A bird, butterfly, or whale that strays from the group loses the benefit of remaining with the group and knowing where to find food, shelter, and the desired destination.
Similarly, users or organizations that fork and modify an upstream project and maintain it on their own run the following risks:
1. **They cannot update their code based on the upstream because their code differs.** This is known as technical debt; the more changes made to forked code, the more it costs in time and money to rebase the fork to the upstream project.
2. **They potentially run less secure code.** If a vulnerability is found in open source code and fixed by the community in the upstream, a forked version of the code may not benefit from this fix because it is different from the upstream.
3. **They might not benefit from new features.** The upstream community, using input from many organizations and individuals, creates new features for the benefit of all users of the upstream project. If an organization forks the upstream, they potentially cannot incorporate the new features because their code differs.
4. **They might not integrate with other software packages.** Open source projects are rarely developed as single entities; rather they often are packaged together with other projects to create a solution. Forked code may not be able to be integrated with other projects because the developers of the forked code are not collaborating in the upstream with other participants.
5. **They might not certify on hardware platforms.** Software packages are often certified to run on hardware platforms so, if problems arise, the hardware and software vendors can collaborate to find the root cause or problem.
In summary, an open source distribution is simply a packaging of an upstream, multi-organizational, collaborative open source project sold and supported by a vendor. A fork is a separate development workstream of an open source project and risks not being able to benefit from the collaborative efforts of the upstream community.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/forks-vs-distributions
Author: [Jonathan Gershater][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/jgershat
[1]:https://opensource.com/resources/what-open-source
[2]:https://opensource.com/tags/licensing
[3]:https://www.eclipse.org/che/getting-started/download/
[4]:https://access.redhat.com/downloads
[5]:https://www.freetype.org/
[6]:http://openhub.net
[7]:https://www.openhub.net/explore/orgs

View File

@ -1,3 +1,4 @@
## sober-wang is translating
Linux Virtual Machines vs Linux Live Images
======
I'll be the first to admit that I tend to try out new [Linux distros][1] on a far too frequent basis. Yet the method I use to test them, does vary depending on my goals for each instance. In this article, we're going to look at both running Linux virtual machines and running Linux live images. There are advantages to each method, but there are some hurdles with each method as well.

View File

@ -1,98 +0,0 @@
Translating by qhwdw
Blockchain evolution: A quick guide and why open source is at the heart of it
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/block-quilt-chain.png?itok=mECoDbrc)
It isn't uncommon, when working on a new version of an open source project, to suffix it with "-ng", for "next generation." Fortunately, in their rapid evolution blockchains have so far avoided this naming pitfall. But in this evolutionary open source ecosystem, changes have been abundant, and good ideas have been picked up, remixed, and evolved between many different projects in a typical open source fashion.
In this article, I will look at the different generations of blockchains and what ideas have emerged to address the problems the ecosystem has encountered. Of course, any attempt at classifying an ecosystem will have limits—and objectors—but it should provide a rough guide to the jungle of blockchain projects.
### The beginning: Bitcoin
The first generation of blockchains stems from the [Bitcoin][1] blockchain, the ledger underpinning the decentralized, peer-to-peer cryptocurrency that has gone from [Slashdot][2] miscellanea to a mainstream topic.
This blockchain is a distributed ledger that keeps track of all users' transactions to prevent them from double-spending their coins (a task historically entrusted to third parties: banks). To prevent attackers from gaming the system, the ledger is replicated to every computer participating in the Bitcoin network and can be updated by only one computer in the network at a time. To decide which computer earns the right to update the ledger, the system organizes every 10 minutes a race between the computers, which costs them (a lot of) energy to enter. The winner wins the right to commit the last 10 minutes of transactions to the ledger (the "block" in blockchain) and some Bitcoin as a reward for their efforts. This setup is called a _proof of work_ consensus mechanism.
The goal of using a blockchain is to raise the level of trust participants have in the network.
This is where it gets interesting. Bitcoin was released as an [open source project][3] in January 2009. In 2010, realizing that quite a few of these elements can be tweaked, the community that had aggregated around Bitcoin, often on the [bitcointalk forums][4], started experimenting with them.
First, seeing that the Bitcoin blockchain is a form of a distributed database, the [Namecoin][5] project emerged, suggesting to store arbitrary data in its transaction database. If the blockchain can record the transfer of money, it could also record the transfer of other assets, such as domain names. This is exactly Namecoin's main use case, which went live in April 2011, two years after Bitcoin's introduction.
Where Namecoin tweaked the content of the blockchain, [Litecoin][6] tweaked two technical aspects: reducing the time between two blocks from 10 to 2.5 minutes and changing how the race is run (replacing the SHA-256 secure hashing algorithm with [scrypt][7]). This was possible because Bitcoin was released as open source software and Litecoin is essentially identical to Bitcoin in all other places. Litecoin was the first fork to modify the consensus mechanism, paving the way for many more.
Along the way, many more variations of the Bitcoin codebase have appeared. Some started as proposed extensions to Bitcoin, such as the [Zerocash][8] protocol, which aimed to provide transaction anonymity and fungibility but was eventually spun off into its own currency, [Zcash][9].
While Zcash has brought its own innovations, using recent cryptographic advances known as zero-knowledge proofs, it maintains compatibility with the vast majority of the Bitcoin code base, meaning it too can benefit from upstream Bitcoin innovations.
Another project, [CryptoNote][10], didn't use the same code base but sprouted from the same community, building on (and against) Bitcoin and again, on older ideas. Published in December 2012, it led to the creation of several cryptocurrencies, of which [Monero][11] (2014) is the best-known. Monero takes a different approach to Zcash but aims to solve the same issues: privacy and fungibility.
As is often the case in the open source world, there is more than one tool for the job.
### The next generations: "Blockchain-ng"
So far, however, all these variations have only really been about refining cryptocurrencies or extending them to support another type of transaction. This brings us to the second generation of blockchains.
Once the community started modifying what a blockchain could be used for and tweaking technical aspects, it didn't take long for some people to expand and rethink them further. A longtime follower of Bitcoin, [Vitalik Buterin][12] suggested in late 2013 that a blockchain's transactions could represent the change of states of a state machine, conceiving the blockchain as a distributed computer capable of running applications ("smart contracts"). The project, [Ethereum][13], went live in July 2015. It has seen fair success in running distributed apps, and the popularity of some of its better-known distributed apps ([CryptoKitties][14]) have even caused the Ethereum blockchain to slow down.
This demonstrates one of the big limitations of current blockchains: speed and capacity. (Speed is often measured in transactions per second, or TPS.) Several approaches have been suggested to solve this, from sharding to sidechains and so-called "second-layer" solutions. The need for more innovation here is strong.
With the words "smart contract" in the air and a proved—if still slow—technology to run them, another idea came to fruition: permissioned blockchains. So far, all the blockchain networks we've described have had two unsaid characteristics: They are public (anyone can see them function), and they are without permission (anyone can join them). These two aspects are both desirable and necessary to run a distributed, non-third-party-based currency.
As blockchains were being considered more and more separately from cryptocurrencies, it started to make sense to consider them in some private, permissioned settings. A consortium-type group of actors that have business relationships but don't necessarily trust each other fully can benefit from these types of blockchains—for example, actors along a logistics chain, financial or insurance institutions that regularly do bilateral settlements or use a clearinghouse, idem for healthcare institutions.
Once you change the setting from "anyone can join" to "invitation-only," further changes and tweaks to the blockchain building blocks become possible, yielding interesting results for some.
For a start, proof of work, designed to protect the network from malicious and spammy actors, can be replaced by something simpler and less resource-hungry, such as a [Raft][15]-based consensus protocol. A tradeoff appears between a high level of security or faster speed, embodied by the option of simpler consensus algorithms. This is highly desirable to many groups, as they can trade some cryptography-based assurance for assurance based on other means—legal relationships, for instance—and avoid the energy-hungry arms race that proof of work often leads to. This is another area where innovation is ongoing, with [Proof of Stake][16] a notable contender for the public network consensus mechanism of choice. It would likely also find its way to permissioned networks too.
Several projects make it simple to create permissioned blockchains, including [Quorum][17] (a fork of Ethereum) and [Hyperledger][18]'s [Fabric][19] and [Sawtooth][20], two open source projects based on new code.
Permissioned blockchains can avoid certain complexities that public, non-permissioned ones can't, but they still have their own set of issues. Proper management of participants is one: Who can join? How do they identify? How can they be removed from the network? Does one entity on the network manage a central public key infrastructure (PKI)?
The open nature of blockchains is seen as a form of governance.
### Open nature of blockchains
In all of the cases so far, one thing is clear: The goal of using a blockchain is to raise the level of trust participants have in the network and the data it produces—ideally, enough to be able to use it as is, without further work.
Reaching this level of trust is possible only if the software that powers the network is free and open source. Even a correctly distributed proprietary blockchain is essentially a collection of independent agents running the same third party's code. By nature, it's necessary—but not sufficient—for a blockchain's source code to be open source. This has both been a minimum guarantee and the source of further innovation as the ecosystem keeps growing.
Finally, it is worth mentioning that while the open nature of blockchains has been a source of innovation and variation, it has also been seen as a form of governance: governance by code, where users are expected to run whichever specific version of the code contains a function or approach they think the whole network should embrace. In this respect, one can say the open nature of some blockchains has also become a cop-out regarding governance. But this is being addressed.
### Third and fourth generations: governance
Next, I will look at what I am currently considering the third and fourth generations of blockchains: blockchains with built-in governance tools and projects to solve the tricky question of interconnecting the multitude of different blockchain projects to let them exchange information and value with each other.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/blockchain-guide-next-generation
Author: [Axel Simon][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/axel
[1]:https://bitcoin.org
[2]:https://slashdot.org/
[3]:https://github.com/bitcoin/bitcoin
[4]:https://bitcointalk.org/
[5]:https://www.namecoin.org/
[6]:https://litecoin.org/
[7]:https://en.wikipedia.org/wiki/Scrypt
[8]:http://zerocash-project.org/index
[9]:https://z.cash
[10]:https://cryptonote.org/
[11]:https://en.wikipedia.org/wiki/Monero_(cryptocurrency)
[12]:https://en.wikipedia.org/wiki/Vitalik_Buterin
[13]:https://ethereum.org
[14]:http://cryptokitties.co/
[15]:https://en.wikipedia.org/wiki/Raft_(computer_science)
[16]:https://www.investopedia.com/terms/p/proof-stake-pos.asp
[17]:https://www.jpmorgan.com/global/Quorum
[18]:https://hyperledger.org/
[19]:https://www.hyperledger.org/projects/fabric
[20]:https://www.hyperledger.org/projects/sawtooth

View File

@ -1,157 +0,0 @@
translated by hopefully2333
Install an NVIDIA GPU on almost any machine
======
![](https://fedoramagazine.org/wp-content/uploads/2018/06/nvidia-816x345.jpg)
Whether for research or recreation, installing a new GPU can bolster your computer's performance and enable new functionality across the board. This installation guide uses Fedora 28's brand-new third-party repositories to install NVIDIA drivers. It walks you through the installation of both software and hardware, and covers everything you need to get your NVIDIA card up and running. This process works for any UEFI-enabled computer, and any modern NVIDIA GPU.
### Preparation
This guide relies on the following materials:
* A machine that is [UEFI][1] capable. If you're uncertain whether your machine has this firmware, run `sudo dmidecode -t 0`. If "UEFI is supported" appears anywhere in the output, you are all set to continue. Otherwise, while it's technically possible to update some computers to support UEFI, the process is often finicky and generally not recommended.
* A modern, UEFI-enabled NVIDIA card
* A power source that meets the wattage and wiring requirements for your NVIDIA card (see the Hardware & Modifications section for details)
* Internet connection
* Fedora 28
### Example setup
This example installation uses:
* An Optiplex 9010 (a fairly old machine)
* NVIDIA [GeForce GTX 1050 Ti XLR8 Gaming Overclocked Edition 4GB GDDR5 PCI Express 3.0][2] graphics card
* In order to meet the power requirements of the new GPU, the power supply was upgraded to an [EVGA 80 PLUS 600W ATX 12V/EPS 12V][3]. This new PSU was 300W above the minimum recommendation, but simply meeting the minimum recommendation is sufficient in most cases.
* And, of course, Fedora 28.
### Hardware and modifications
#### PSU
Open up your desktop case and check the maximum power output printed on your power supply. Next, check the documentation on your NVIDIA GPU and determine the minimum recommended power (in watts). Further, take a look at your GPU and see if it requires additional wiring, such as a 6-pin connector. Most entry-level GPUs only draw power directly from the motherboard, but some require extra juice. You'll need to upgrade your PSU if:
1. Your power supply's max power output is below the GPU's suggested minimum power. **Note:** According to some NVIDIA card manufacturers, pre-built systems may require more or less power than recommended, depending on the system's configuration. Use your discretion to determine your requirements if you're using a particularly power-efficient or power-hungry setup.
2. Your power supply does not provide the necessary wiring to power your card.
PSUs are straightforward to replace, but make sure to take note of the wiring layout before detaching your current power supply. Additionally, make sure to select a PSU that fits your desktop case.
#### CPU
Although installing a high-quality NVIDIA GPU is possible in many old machines, a slow or damaged CPU can "bottleneck" the performance of the GPU. To calculate the impact of the bottlenecking effect for your machine, click [here][4]. It's important to know your CPU's performance to avoid pairing a high-powered GPU with a CPU that can't keep up. Upgrading your CPU is a potential consideration.
#### Motherboard
Before proceeding, ensure your motherboard is compatible with your GPU of choice. Your graphics card should be inserted into the PCI-E x16 slot closest to the heat-sink. Ensure that your setup contains enough space for the GPU. In addition, note that most GPUs today employ PCI-E 3.0 technology. Though these GPUs will run best if mounted on a PCI-E 3.0 x16 slot, performance should not suffer significantly with an older version slot.
### Installation
1\. First, make sure your existing system is up to date:
```
sudo dnf update
```
2\. Next, reboot with the simple command:
```
reboot
```
3\. After reboot, install the Fedora 28 workstation repositories:
```
sudo dnf install fedora-workstation-repositories
```
4\. Next, enable the NVIDIA driver repository:
```
sudo dnf config-manager --set-enabled rpmfusion-nonfree-nvidia-driver
```
5\. Then, reboot again.
6\. After the reboot, verify the addition of the repository via the following command:
```
sudo dnf repository-packages rpmfusion-nonfree-nvidia-driver info
```
If several NVIDIA tools and their respective specs are loaded, then proceed to the next step. If not, you may have encountered an error when adding the new repository and you should give it another shot.
7\. Login, connect to the internet, and open the software app. Click Add-ons > Hardware Drivers > NVIDIA Linux Graphics Driver > Install.
Then, reboot once again.
8\. After reboot, go to Show Applications on the side bar, and open up the newly added NVIDIA X Server Settings application. A GUI should open up, and a dialog box will appear with the following message:
![NVIDIA X Server Prompt][5]
Take the application's advice, but before doing so, ensure you have your NVIDIA GPU on hand and are ready to install. **Please note** that running `nvidia-xconfig` as root and powering off without installing your GPU immediately may cause drastic damage. Doing so may prevent your computer from booting and force you to repair the system through the reboot screen. A fresh install of Fedora may fix these issues, but the effects can be much worse.
If youre ready to proceed, enter the command:
```
sudo nvidia-xconfig
```
If the system prompts you to perform any downloads, accept them and proceed.
9\. Once this process is complete, close all applications and **shut down** the computer. Unplug the power supply to your machine. Then, press the power button once to drain any residual power to protect yourself from electric shock. If your PSU has a power switch, switch it off.
10\. Finally, install the graphics card. Remove the old GPU and insert your new NVIDIA graphics card into the proper PCI-E x16 slot, with the fans facing down. If there is no space for the fans to ventilate in this position, place the graphics card face up instead, if possible. When you have successfully installed the new GPU, close your case, plug in the PSU, and turn the computer on. It should successfully boot up.
**NOTE:** To disable the NVIDIA driver repository used in this installation, or to disable all fedora workstation repositories, consult [The Fedora Wiki Page][6].
### Verification
1\. If your newly installed NVIDIA graphics card is connected to your monitor and displaying correctly, then your NVIDIA driver has successfully established a connection to the GPU.
If you'd like to view your settings, or verify the driver is working (in the case that you have two GPUs installed on the motherboard), open up the NVIDIA X Server Settings app again. This time, you should not be prompted with an error message, and information on the X configuration file and your NVIDIA GPU should be available (see screenshot below).
![NVIDIA X Server Settings][7]
Through this app, you may alter your X configuration file should you please, and may monitor the GPU's performance, clock speed, and thermal information.
2\. To ensure the new card is working at capacity, a GPU performance test is needed. GL Mark 2, a benchmarking tool that provides information on buffering, building, lighting, texturing, etc., offers an excellent solution. GL Mark 2 records frame rates for a variety of different graphical tests, and outputs an overall performance score (called the glmark2 score).
**Note:** glxgears will only test the performance of your screen or monitor, not the graphics card itself. Use GL Mark 2 instead.
To run GLMark2:
1. Open up a terminal and close all other applications
2. `sudo dnf install glmark2`
3. `glmark2`
4. Allow the test to run to completion for best results. Check to see if the frame rates match your expectation for your NVIDIA card. If you'd like additional verification, consult the web to determine if a glmark2 benchmark has been previously conducted on your NVIDIA card model and published to the web. Compare scores to assess your GPU's performance.
5. If your framerates and/or glmark2 score are below expected, consider potential causes. CPU-induced bottlenecking? Other issues?
Assuming the diagnostics look good, enjoy using your new GPU.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/install-nvidia-gpu/
Author: [Justice del Castillo][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]:https://fedoramagazine.org/author/justice/
[1]:https://whatis.techtarget.com/definition/Unified-Extensible-Firmware-Interface-UEFI
[2]:https://www.cnet.com/products/pny-geforce-gtx-xlr8-gaming-1050-ti-overclocked-edition-graphics-card-gf-gtx-1050-ti-4-gb/specs/
[3]:https://www.evga.com/products/product.aspx?pn=100-B1-0600-KR
[4]:http://thebottlenecker.com (Home: The Bottle Necker)
[5]:https://bytebucket.org/kenneym/fedora-28-nvidia-gpu-installation/raw/7bee7dc6effe191f1f54b0589fa818960a8fa18b/nvidia_xserver_error.jpg?token=c6a7effe35f1c592a155a4a46a068a19fd060a91 (NVIDIA X Sever Prompt)
[6]:https://fedoraproject.org/wiki/Workstation/Third_Party_Software_Repositories
[7]:https://bytebucket.org/kenneym/fedora-28-nvidia-gpu-installation/raw/7bee7dc6effe191f1f54b0589fa818960a8fa18b/NVIDIA_XCONFIG.png?token=64e1a7be21e5e9ba157f029b65e24e4eef54d88f (NVIDIA X Server Settings)

View File

@ -1,4 +1,4 @@
What is the Difference Between the macOS and Linux Kernels
======
Some people might think that there are similarities between the macOS and the Linux kernel because they can handle similar commands and similar software. Some people even think that Apples macOS is based on Linux. The truth is that both kernels have very different histories and features. Today, we will take a look at the difference between macOS and Linux kernels.

View File

@ -0,0 +1,49 @@
5 Reasons Open Source Certification Matters More Than Ever
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/open-source-training_0.jpg?itok=lqkiM56e)
In today's technology landscape, open source is the new normal, with open source components and platforms driving mission-critical processes and everyday tasks at organizations of all sizes. As open source has become more pervasive, it has also profoundly impacted the job market. Across industries [the skills gap is widening][1], making it ever more difficult to hire people with much needed job skills. In response, the [demand for training and certification is growing][2].
In a recent webinar, Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation, discussed the growing need for certification and some of the benefits of obtaining open source credentials. “As open source has become the new normal in everything from startups to Fortune 2000 companies, it is important to start thinking about the career road map, the paths that you can take and how Linux and open source in general can help you reach your career goals,” Seepersad said.
With all this in mind, this is the first article in a weekly series that will cover: why it is important to obtain certification; what to expect from training options that lead to certification; and how to prepare for exams and understand what your options are if you don't initially pass them.
Seepersad pointed to these five reasons for pursuing certification:
* **Demand for Linux and open source talent.** “Year after year, we do the Linux jobs report, and year after year we see the same story, which is that the demand for Linux professionals exceeds the supply. This is true for the open source market in general,” Seepersad said. For example, certifications such as the [LFCE, LFCS,][3] and [OpenStack administrator exam][4] have made a difference for many people.
* **Getting the interview.** “One of the challenges that recruiters always reference, especially in the age of open source, is that it can be hard to decide who you want to have come in to the interview,” Seepersad said. “Not everybody has the time to do reference checks. One of the beautiful things about certification is that it independently verifies your skillset.”
* **Confirming your skills.** “Certification programs allow you to step back, look across what we call the domains and topics, and find those areas where you might be a little bit rusty,” Seepersad said. “Going through that process and then being able to demonstrate skills on the exam shows that you have a very broad skillset, not just a deep skillset in certain areas.”
* **Confidence.** “This is the beauty of performance-based exams,” Seepersad said. “You're working on our live system. You're being monitored and recorded. Your timer is counting down. This really puts you on the spot to demonstrate that you can troubleshoot.” The inevitable result of successfully navigating the process is confidence.
* **Making hiring decisions.** “As you become more senior in your career, you're going to find the tables turned and you are in the role of making a hiring decision,” Seepersad said. “You're going to want to have candidates who are certified, because you recognize what that means in terms of the skillsets.”
Although Linux has been around for more than 25 years, “it's really only in the past few years that certification has become a more prominent feature,” Seepersad noted. As a matter of fact, 87 percent of hiring managers surveyed for the [2018 Open Source Jobs Report][5] cite difficulty in finding the right open source skills and expertise. The Jobs Report also found that hiring open source talent is a priority for 83 percent of hiring managers, and half are looking for candidates holding certifications.
With certification playing a more important role in securing a rewarding long-term career, are you interested in learning about options for gaining credentials? If so, stay tuned for more information in this series.
[Learn more about Linux training and certification.][6]
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/sysadmin-cert/2018/7/5-reasons-open-source-certification-matters-more-ever
Author: [Sam Dean][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]:https://www.linux.com/users/sam-dean
[1]:https://www.linuxfoundation.org/blog/open-source-skills-soar-in-demand-according-to-2018-jobs-report/
[2]:https://www.linux.com/blog/os-jobs-report/2018/7/certification-plays-big-role-open-source-hiring
[3]:https://www.linux.com/learn/certification/2018/5/linux-foundation-lfcs-lfce-maja-kraljic
[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/openstack-administration-fundamentals
[5]:https://www.linuxfoundation.org/publications/open-source-jobs-report-2018/
[6]:https://training.linuxfoundation.org/certification

View File

@ -1,3 +1,5 @@
Translating by SunWave...
How to use dd in Linux without destroying your disk
======

View File

@ -1,3 +1,5 @@
translating---geekpi
Getting started with Perlbrew
======

View File

@ -0,0 +1,153 @@
Users, Groups, and Other Linux Beasts
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/flamingo-2458782_1920.jpg?itok=_gkzGGx5)
Having reached this stage, [after seeing how to manipulate folders/directories][1], but before flinging ourselves headlong into fiddling with files, we have to brush up on the matter of _permissions_, _users_ and _groups_. Luckily, [there is already an excellent and comprehensive tutorial on this site that covers permissions][2], so you should go and read that right now. In a nutshell: you use permissions to establish who can do stuff to files and directories and what they can do with each file and directory -- read from it, write to it, move it, erase it, etc.
To try everything this tutorial covers, you'll need to create a new user on your system. Let's be practical and make a user for anybody who needs to borrow your computer, that is, what we call a _guest account_.
**WARNING:** _Creating and especially deleting users, along with home directories, can seriously damage your system if, for example, you remove your own user and files by mistake. You may want to practice on another machine which is not your main work machine or on a virtual machine. Regardless of whether you want to play it safe, or not, it is always a good idea to back up your stuff frequently, check the backups have worked correctly, and save yourself a lot of gnashing of teeth later on._
### A New User
You can create a new user with the `useradd` command. Run `useradd` with superuser/root privileges; that is, using `sudo` or `su`, depending on your system, you can do:
```
sudo useradd -m guest
```
... and input your password. Or do:
```
su -c "useradd -m guest"
```
... and input the password of root/the superuser.
(_For the sake of brevity, we'll assume from now on that you get superuser/root privileges by using `sudo`._)
By including the `-m` argument, `useradd` will create a home directory for the new user. You can see its contents by listing _/home/guest_.
Next you can set up a password for the new user with
```
sudo passwd guest
```
Or you could also use `adduser`, which is interactive and asks you a bunch of questions, including what shell you want to assign the user (yes, there is more than one), where you want their home directory to be, what groups you want them to belong to (more about that in a second), and so on. At the end of running `adduser`, you get to set the password. Note that `adduser` is not installed by default on many distributions, while `useradd` is.
Incidentally, you can get rid of a user with `userdel`:
```
sudo userdel -r guest
```
With the `-r` option, `userdel` not only removes the _guest_ user, but also deletes their home directory and removes their entry in the mail spool, if they had one.
### Skeletons at Home
Talking of users' home directories, depending on what distro you're on, you may have noticed that when you use the `-m` option, `useradd` populates a user's directory with subdirectories for music, documents, and whatnot, as well as an assortment of hidden files. To see everything in your guest's home directory, run `sudo ls -la /home/guest`.
What goes into a new user's directory is determined by a skeleton directory which is usually _/etc/skel_. Sometimes it may be a different directory, though. To check which directory is being used, run:
```
useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=no
```
This gives you some extra interesting information, but what you're interested in right now is the `SKEL=/etc/skel` line. In this case, and as is customary, it is pointing to _/etc/skel/_.
As everything is customizable in Linux, you can, of course, change what gets put into a newly created user directory. Try this: Create a new directory in _/etc/skel/_ :
```
sudo mkdir /etc/skel/Documents
```
Create a file called _welcome.txt_ containing some welcome text and copy it over:
```
sudo cp welcome.txt /etc/skel/Documents
```
Now delete the guest account:
```
sudo userdel -r guest
```
And create it again:
```
sudo useradd -m guest
```
Hey presto! Your _Documents/_ directory and _welcome.txt_ file magically appear in the guest's home directory.
You can also modify other things when you create a user by editing _/etc/default/useradd_. Mine looks like this:
```
GROUP=users
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=no
```
Most of these options are self-explanatory, but let's take a closer look at the `GROUP` option.
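Incidentally, you don't have to edit that file by hand; `useradd -D` plus an option flag updates the defaults in place. A minimal sketch that changes the default shell (assuming your `useradd` is the usual shadow-utils version, which supports `-D -s`; run `useradd -D` again afterwards to confirm):

```
sudo useradd -D -s /bin/sh
```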
### Herd Mentality
Instead of assigning permissions and privileges to users one by one, Linux and other Unix-like operating systems rely on _groups_. A group is what you imagine it to be: a bunch of users that are related in some way. On your system you may have a group of users that are allowed to use the printer. They would belong to the _lp_ (for "_line printer_") group. The members of the _wheel_ group were traditionally the only ones who could become superuser/root by using _su_. The _network_ group of users can bring up and power down the network. And so on and so forth.
Different distributions have different groups, and groups with the same or similar names may have different privileges, depending on the distribution you are using. So don't be surprised if what you read in the prior paragraph doesn't match what is going on in your system.
Either way, to see which groups are on your system you can use:
```
getent group
```
The `getent` command lists the contents of some of the system's databases.
To find out which groups your current user belongs to, try:
```
groups
```
When you create a new user with `useradd`, unless you specify otherwise, the user will only belong to one group: their own. A _guest_ user will belong to a _guest_ group and the group gives the user the power to administer their own stuff and that is about it.
You can create new groups with the `groupadd` command and then add users to them at will:
```
sudo groupadd photos
```
will create the _photos_ group, for example. Next time, we'll use this to build a shared directory all members of the group can read from and write to, and we'll learn even more about permissions and privileges. Stay tuned!
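In the meantime, if you want to drop a user into the new group right away, `usermod` can do it. A minimal sketch, reusing the _guest_ user and _photos_ group from the examples above (`-aG` appends a supplementary group without removing the user's existing ones):

```
sudo usermod -aG photos guest
```

You can check the result with `groups guest`; note that the user may need to log out and back in for the new group membership to take effect.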
Learn more about Linux through the free ["Introduction to Linux"][3] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux
[2]:https://www.linux.com/learn/understanding-linux-file-permissions
[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,63 @@
4 add-ons to improve your privacy on Thunderbird
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/tb-privacy-addons-816x345.jpg)
Thunderbird is a popular free email client developed by [Mozilla][1]. Similar to Firefox, Thunderbird offers a large choice of add-ons for extra features and customization. This article focuses on four add-ons to improve your privacy.
### Enigmail
Encrypting emails using GPG (GNU Privacy Guard) is the best way to keep their contents private. If you aren't familiar with GPG, [check out our primer right here][2] on the Magazine.
[Enigmail][3] is the go-to add-on for using OpenPGP with Thunderbird. Indeed, Enigmail integrates well with Thunderbird, and lets you encrypt, decrypt, and digitally sign and verify emails.
### Paranoia
[Paranoia][4] gives you access to critical information about your incoming emails. An emoticon shows the encryption state of the connections between the servers an email traveled through before reaching your inbox.
A yellow, happy emoticon tells you all connections were encrypted. A blue, sad emoticon means one connection was not encrypted. Finally, a red, scared emoticon means the message wasn't encrypted over more than one connection.
More details about these connections are available, so you can check which servers were used to deliver the email.
### Sensitivity Header
[Sensitivity Header][5] is a simple add-on that lets you select the privacy level of an outgoing email. Using the option menu, you can select a sensitivity: Normal, Personal, Private and Confidential.
Adding this header doesn't add extra security to email. However, some email clients or mail transport/user agents (MTA/MUA) can use this header to process the message differently based on the sensitivity.
Note that this add-on is marked as experimental by its developers.
### TorBirdy
If you're really concerned about your privacy, [TorBirdy][6] is the add-on for you. It configures Thunderbird to use the [Tor][7] network.
TorBirdy offers less privacy on email accounts that have been used without Tor before, as noted in the [documentation][8].
> Please bear in mind that email accounts that have been used without Tor before offer **less** privacy/anonymity/weaker pseudonyms than email accounts that have always been accessed with Tor. But nevertheless, TorBirdy is still useful for existing accounts or real-name email addresses. For example, if you are looking for location anonymity — you travel a lot and don't want to disclose all your locations by sending emails — TorBirdy works wonderfully!
Note that to use this add-on, you must have Tor installed on your system.
Photo by [Braydon Anderson][9] on [Unsplash][10].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-addons-privacy-thunderbird/
作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:https://www.mozilla.org/en-US/
[2]:https://fedoramagazine.org/gnupg-a-fedora-primer/
[3]:https://addons.mozilla.org/en-US/thunderbird/addon/enigmail/
[4]:https://addons.mozilla.org/en-US/thunderbird/addon/paranoia/?src=cb-dl-users
[5]:https://addons.mozilla.org/en-US/thunderbird/addon/sensitivity-header/?src=cb-dl-users
[6]:https://addons.mozilla.org/en-US/thunderbird/addon/torbirdy/?src=cb-dl-users
[7]:https://www.torproject.org/
[8]:https://trac.torproject.org/projects/tor/wiki/torbirdy
[9]:https://unsplash.com/photos/wOHH-NUTvVc?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[10]:https://unsplash.com/search/photos/privacy?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,102 @@
5 open source racing and flying games for Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc)
Gaming has traditionally been one of Linux's weak points. That has changed somewhat in recent years thanks to Steam, GOG, and other efforts to bring commercial games to multiple operating systems, but those games often are not open source. Sure, the games can be played on an open source operating system, but that is not good enough for an open source purist.
So, can someone who uses only free and open source software find games that are polished enough to present a solid gaming experience without compromising their open source ideals? Absolutely. While open source games are unlikely to ever rival some of the AAA commercial games developed with massive budgets, there are plenty of open source games, in many genres, that are fun to play and can be installed from the repositories of most major Linux distributions. Even if a particular game is not packaged for a particular distribution, it is usually easy to download the game from the project's website to install and play it.
This article looks at racing and flying games. I have already written about [arcade-style games][1], [board and card games][2], and [puzzle games][3]. In future articles, I plan to cover role-playing games and strategy & simulation games.
### Extreme Tux Racer
![](https://opensource.com/sites/default/files/uploads/extreme_tux_racer.png)
Race down snow and ice-covered mountains as Tux or other characters in [Extreme Tux Racer][4]. In this racing game, the goal is to collect herrings and earn the best time. There are many different tracks to choose from, and tracks can be customized by altering the time of day, wind, and weather conditions. While the game has a few rough edges compared to modern, commercial racing games, it is still an enjoyable game to play. The controls and gameplay are straightforward and simple to learn, making this a great choice for kids.
To install Extreme Tux Racer, run the following command:
* On Fedora: `dnf install extremetuxracer`
* On Debian/Ubuntu: `apt install extremetuxracer`
### FlightGear
![](https://opensource.com/sites/default/files/uploads/flightgear.png)
[FlightGear][5] is a full-fledged, open source flight simulator. Multiple aircraft types are available, and 20,000 airports are included in the full world scenery set. That means the player can fly to most parts of the world and have realistic airports and scenery. The full world scenery data is large enough to fill three DVDs. Even the developers are jokingly not sure if that counts as "a feature or a problem," so be aware that a complete installation of FlightGear and all its scenery data is huge. While certainly not the right game for everyone, FlightGear provides a very complete and complex flight simulator experience for players looking to explore the skies on their own computer.
To install FlightGear, run the following command:
* On Fedora: `dnf install FlightGear`
* On Debian/Ubuntu: `apt install flightgear`
### SuperTuxKart
![](https://opensource.com/sites/default/files/uploads/supertuxkart.png)
[SuperTuxKart][6] takes the basic formula used by Nintendo in the Mario Kart series and applies it to open source mascots. Players race around a variety of tracks in go-karts driven by the mascots for a plethora of open source projects. Character choices include the mascots for open source operating systems and applications of varying familiarity, with options ranging from Tux and Beastie to Gavroche, the mascot for [GNU MediaGoblin][7]. There are several gameplay modes to choose from, including multi-player modes, but many of the tracks are unavailable until they are unlocked by playing the game's single-player story mode. SuperTuxKart's graphics settings can be tweaked to run on everything from older computers with built-in graphics to modern hardware with high-end graphics cards. There is also a version of [SuperTuxKart for Android][8] available. SuperTuxKart is a very good game and great for players of all ages.
To install SuperTuxKart, run the following command:
* On Fedora: `dnf install supertuxkart`
* On Debian/Ubuntu: `apt install supertuxkart`
### Torcs
![](https://opensource.com/sites/default/files/uploads/torcs.png)
[Torcs][9] is a fairly standard racing game with some extra features for the tech-savvy. Torcs can be played as just a standard racing game, where the player drives around a track trying to get the best time, but an alternative usage is as a platform to develop an artificial intelligence driver that can drive itself through Torcs' tracks. The cars and tracks included with the game vary in style, ranging from stock car racing to rally racing, but the gameplay is pretty typical for a racing game. Keyboard, mouse, joystick, and steering wheel input are all supported, but keyboard and mouse input modes are a little hard to get used to. Single-player races range from practice runs to championships, and there is a [split-screen multi-player mode][10] for up to four players.
To install Torcs, run the following command:
* On Fedora: `dnf install torcs`
* On Debian/Ubuntu: `apt install torcs`
### Trigger Rally
![](https://opensource.com/sites/default/files/uploads/trigger_rally.png)
[Trigger Rally][11] is an off-road, single-player rally racing game. The player needs to make it to each checkpoint in time to complete the race, which is standard racing game fare, but still enjoyable. The gameplay is more arcade-like than a strict racing simulator like Torcs but more realistic than cartoonish racing games like SuperTuxKart. The tracks are interesting and the controls are responsive, but a little too sensitive when playing with a keyboard. Joystick controls are available by changing an option in a configuration file. Unfortunately, development on the game is slow going, with the latest release in 2016, but the gameplay that is already there is fun.
To install Trigger Rally, run the following command:
* On Debian/Ubuntu: `apt install trigger-rally`
Unfortunately, Trigger Rally is not packaged for Fedora.
Did I miss one of your favorite open source racing or flying games? Share it in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/racing-flying-games-linux
作者:[Joshua Allen Holm][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/holmja
[1]:https://opensource.com/article/18/1/arcade-games-linux
[2]:https://opensource.com/article/18/3/card-board-games-linux
[3]:https://opensource.com/article/18/6/puzzle-games-linux
[4]:https://extremetuxracer.sourceforge.io/
[5]:http://home.flightgear.org/
[6]:https://supertuxkart.net/Main_Page
[7]:https://mediagoblin.org
[8]:https://play.google.com/store/apps/details?id=org.supertuxkart.stk
[9]:http://torcs.sourceforge.net/index.php
[10]:http://torcs.sourceforge.net/?name=Sections&op=viewarticle&artid=30#c4_4_4
[11]:http://trigger-rally.sf.net/

View File

@ -0,0 +1,175 @@
Install Microsoft Windows Fonts In Ubuntu 18.04 LTS
======
![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Microsoft-Windows-Fonts-in-Ubuntu-1-720x340.png)
Most educational institutions still use Microsoft fonts. I am not sure about other countries, but in Tamil Nadu (an Indian state), **Times New Roman** and **Arial** fonts are used for almost all sorts of documentation work, projects, and assignments in colleges and schools. Not only educational institutions, but also some small organizations, offices, and shops still use MS Windows fonts. Just in case you are in a situation where you need to use Microsoft fonts on your Ubuntu Linux desktop, here is how to do it.
**Disclaimer:** Microsoft has released its core fonts for free. However, **please be aware that the use of Microsoft fonts on other operating systems is restricted by the license**. Read the EULA carefully before installing MS fonts in any Linux operating system. We (OSTechNix) are not responsible for any kind of piracy.
### Install MS Fonts in Ubuntu 18.04 LTS desktop
Install MS TrueType Fonts as shown below:
```
$ sudo apt update
$ sudo apt install ttf-mscorefonts-installer
```
Microsoft's end user agreement wizard will appear. Click **OK** to continue.
![][2]
Click **Yes** to accept the Microsoft agreement:
![][3]
After installing the fonts, we need to update the font cache using the command:
```
$ sudo fc-cache -f -v
```
**Sample output:**
```
/usr/share/fonts: caching, new cache contents: 0 fonts, 6 dirs
/usr/share/fonts/X11: caching, new cache contents: 0 fonts, 4 dirs
/usr/share/fonts/X11/Type1: caching, new cache contents: 8 fonts, 0 dirs
/usr/share/fonts/X11/encodings: caching, new cache contents: 0 fonts, 1 dirs
/usr/share/fonts/X11/encodings/large: caching, new cache contents: 0 fonts, 0 dirs
/usr/share/fonts/X11/misc: caching, new cache contents: 89 fonts, 0 dirs
/usr/share/fonts/X11/util: caching, new cache contents: 0 fonts, 0 dirs
/usr/share/fonts/cMap: caching, new cache contents: 0 fonts, 0 dirs
/usr/share/fonts/cmap: caching, new cache contents: 0 fonts, 5 dirs
/usr/share/fonts/cmap/adobe-cns1: caching, new cache contents: 0 fonts, 0 dirs
/usr/share/fonts/cmap/adobe-gb1: caching, new cache contents: 0 fonts, 0 dirs
/usr/share/fonts/cmap/adobe-japan1: caching, new cache contents: 0 fonts, 0 dirs
/usr/share/fonts/cmap/adobe-japan2: caching, new cache contents: 0 fonts, 0 dirs
/usr/share/fonts/cmap/adobe-korea1: caching, new cache contents: 0 fonts, 0 dirs
/usr/share/fonts/opentype: caching, new cache contents: 0 fonts, 2 dirs
/usr/share/fonts/opentype/malayalam: caching, new cache contents: 3 fonts, 0 dirs
/usr/share/fonts/opentype/noto: caching, new cache contents: 24 fonts, 0 dirs
/usr/share/fonts/truetype: caching, new cache contents: 0 fonts, 46 dirs
/usr/share/fonts/truetype/Gargi: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/Gubbi: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/Nakula: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/Navilu: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/Sahadeva: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/Sarai: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/abyssinica: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/dejavu: caching, new cache contents: 6 fonts, 0 dirs
/usr/share/fonts/truetype/droid: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/fonts-beng-extra: caching, new cache contents: 6 fonts, 0 dirs
/usr/share/fonts/truetype/fonts-deva-extra: caching, new cache contents: 3 fonts, 0 dirs
/usr/share/fonts/truetype/fonts-gujr-extra: caching, new cache contents: 5 fonts, 0 dirs
/usr/share/fonts/truetype/fonts-guru-extra: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/fonts-kalapi: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/fonts-orya-extra: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/fonts-telu-extra: caching, new cache contents: 2 fonts, 0 dirs
/usr/share/fonts/truetype/freefont: caching, new cache contents: 12 fonts, 0 dirs
/usr/share/fonts/truetype/kacst: caching, new cache contents: 15 fonts, 0 dirs
/usr/share/fonts/truetype/kacst-one: caching, new cache contents: 2 fonts, 0 dirs
/usr/share/fonts/truetype/lao: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/liberation: caching, new cache contents: 16 fonts, 0 dirs
/usr/share/fonts/truetype/liberation2: caching, new cache contents: 12 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-assamese: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-bengali: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-devanagari: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-gujarati: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-kannada: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-malayalam: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-oriya: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-punjabi: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-tamil: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-tamil-classical: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/lohit-telugu: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/malayalam: caching, new cache contents: 11 fonts, 0 dirs
/usr/share/fonts/truetype/msttcorefonts: caching, new cache contents: 60 fonts, 0 dirs
/usr/share/fonts/truetype/noto: caching, new cache contents: 2 fonts, 0 dirs
/usr/share/fonts/truetype/openoffice: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/padauk: caching, new cache contents: 4 fonts, 0 dirs
/usr/share/fonts/truetype/pagul: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/samyak: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/samyak-fonts: caching, new cache contents: 3 fonts, 0 dirs
/usr/share/fonts/truetype/sinhala: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/tibetan-machine: caching, new cache contents: 1 fonts, 0 dirs
/usr/share/fonts/truetype/tlwg: caching, new cache contents: 58 fonts, 0 dirs
/usr/share/fonts/truetype/ttf-khmeros-core: caching, new cache contents: 2 fonts, 0 dirs
/usr/share/fonts/truetype/ubuntu: caching, new cache contents: 13 fonts, 0 dirs
/usr/share/fonts/type1: caching, new cache contents: 0 fonts, 1 dirs
/usr/share/fonts/type1/gsfonts: caching, new cache contents: 35 fonts, 0 dirs
/usr/local/share/fonts: caching, new cache contents: 0 fonts, 0 dirs
/home/sk/.local/share/fonts: skipping, no such directory
/home/sk/.fonts: skipping, no such directory
/var/cache/fontconfig: cleaning cache directory
/home/sk/.cache/fontconfig: cleaning cache directory
/home/sk/.fontconfig: not cleaning non-existent cache directory
fc-cache: succeeded
```
### Install MS Fonts in Dual boot with Linux and Windows
If you have a dual-boot system with Linux and Windows, you can easily install the MS fonts from the Windows C drive. All you have to do is mount the Windows partition (C:/Windows).
I assume you have mounted the Windows **C:** partition at the **/Windowsdrive** directory in Linux.
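If the partition is not mounted yet, something along these lines should work. This is only a sketch: **/dev/sda2** is a hypothetical device name for the Windows partition, so check yours first with `lsblk` or `sudo fdisk -l`.

```
sudo mkdir -p /Windowsdrive
sudo mount -t ntfs-3g -o ro /dev/sda2 /Windowsdrive
```

Mounting read-only (`-o ro`) is enough here, since we only need to read the font files.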
Now, link the fonts location to your Linux system's fonts folder as shown below.
```
ln -s /Windowsdrive/Windows/Fonts /usr/share/fonts/WindowsFonts
```
After linking the fonts folder, regenerate the fontconfig cache using the command:
```
fc-cache
```
Alternatively, copy all Windows fonts to the **/usr/share/fonts** directory and install the fonts using the following commands:
```
mkdir /usr/share/fonts/WindowsFonts
cp /Windowsdrive/Windows/Fonts/* /usr/share/fonts/WindowsFonts
chmod 755 /usr/share/fonts/WindowsFonts/*
```
Finally, regenerate the fontconfig cache using the command:
```
fc-cache
```
### Test Windows font
Open LibreOffice or GIMP after installing the MS fonts. Now you will see the Microsoft core fonts there.
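You can also verify the installation from the command line with `fc-list`, which ships with fontconfig. The pattern below is just an example; try any of the core font names:

```
fc-list | grep -i "times new roman"
```

If the fonts were installed correctly, this prints the matching font files and their style variants.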
![][4]
That's it. I hope this guide was useful. Again, I warn you: the use of MS fonts in other operating systems is prohibited. Please read the Microsoft license agreement before installing the MS fonts.
If you find our guides useful, please share them on your social and professional networks and support OSTechNix. More good stuff to come. Keep visiting!
Cheers!!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/install-microsoft-windows-fonts-ubuntu-16-04/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2016/07/ms-fonts-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2016/07/ms-fonts-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2016/07/ms-fonts-3.png

View File

@ -0,0 +1,69 @@
Open hardware meets open science in a multi-microphone hearing aid project
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_OpenInnovation.png?itok=l29msbql)
Since [Opensource.com][1] first published the story of the [GNU/Linux hearing aid][2] research platform in 2010, there has been an explosion in the availability of miniature system boards, including the original BeagleBone in 2011 and the Raspberry Pi in 2012. These ARM processor devices built from cellphone chips differ from the embedded system reference boards of the past—not only by being far less expensive and more widely available—but also because they are powerful enough to run familiar GNU/Linux distributions and desktop applications.
What took a laptop to accomplish in 2010 can now be achieved with a pocket-sized board costing a fraction as much. Because a hearing aid does not need a screen and a small ARM board's power consumption is far less than a typical laptop's, field trials can potentially run all day. Additionally, the system's lower weight makes it easier for the end user to wear.
The [openMHA project][3]—from the [Carl von Ossietzky Universität Oldenburg][4] in Germany, [BatAndCat Sound Labs][5] in Palo Alto, California, and [HörTech gGmbH][6]—is an open source platform for improving hearing aids using real-time audio signal processing. For the next iteration of the research platform, openMHA is using the US$ 55 [BeagleBone Black][7] board with its 1GHz Cortex A8 CPU.
The BeagleBone family of boards enjoys guaranteed long-term availability, thanks to its open hardware design that can be produced by anyone with the requisite knowledge. For example, BeagleBone hardware variations are available from community members including [SeeedStudio][8] and [SanCloud][9].
![BeagleBone Black][11]
The BeagleBone Black is open hardware finding its way into research labs.
Spatial filtering techniques, including [beamforming][12] and [directional microphone arrays][13], can suppress distracting noise, focusing audio amplification on the point in space where the hearing aid wearer is looking, rather than off to the side where a truck might be thundering past. These neat tricks can use two or three microphones per ear, yet typical sound cards for embedded devices support only one or two input channels in total.
Fortunately, the [McASP][14] communication peripheral in Texas Instruments chips offers multiple channels and support for the [I2S protocol][15], originally devised by Philips for short digital audio interconnects inside CD players. This means an add-on "cape" board can hook directly into the BeagleBone's audio system without using USB or other external interfaces. The direct approach helps reduce the signal processing delay into the range where it is undetectable by the hearing aid wearer.
The openMHA project uses an audio cape developed by the [Hearing4all][16] project, which combines three stereo codecs to provide up to six input channels. Like the BeagleBone, the Cape4all is open hardware with design files available on [GitHub][17].
The Cape4all, [presented recently][18] at the Linux Audio Conference in Berlin, Germany, runs at sample rates from 24kHz to 96kHz with as few as 12 samples per period, leading to internal latencies in the sub-millisecond range. With hearing enhancement algorithms running, the complete round-trip latency from a microphone to an earpiece has been measured at 3.6 milliseconds (at a 48kHz sample rate with 16 samples per period). Using the speed of sound for comparison, this latency is similar to listening to someone just over four feet away without a hearing aid.
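As a rough sanity check of that comparison, take the speed of sound in air to be about 1,125 feet per second and multiply by the measured latency:

```
$ echo "scale=2; 1125 * 0.0036" | bc
4.05
```

That is just over four feet, matching the figure above.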
![Cape4all ][20]
The Cape4all might be the first multi-microphone hearing aid on an open hardware platform.
The next step for the openMHA project is to develop a [Bluetooth Low Energy][21] module that will enable remote control of the research device from a smartphone and perhaps route phone calls and media playback to the hearing aid. Consumer hearing aids support Bluetooth, so the openMHA research platform must do so, too.
Also, instructions for running a [stereo hearing aid on the Raspberry Pi][22] were released by an openMHA user-project.
As evidenced by the openMHA project, open source innovation has transformed digital hearing aid research from an esoteric branch of audiology into an accessible open science.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/open-hearing-aid-platform
作者:[Daniel James,Christopher Obbard][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/daniel-james
[1]:http://Opensource.com
[2]:https://opensource.com/life/10/9/open-source-designing-next-generation-digital-hearing-aids
[3]:http://www.openmha.org/
[4]:https://www.uni-oldenburg.de/
[5]:http://batandcat.com/
[6]:http://www.hoertech.de/
[7]:https://beagleboard.org/black
[8]:https://www.seeedstudio.com/
[9]:http://www.sancloud.co.uk
[10]:/file/403046
[11]:https://opensource.com/sites/default/files/uploads/1-beagleboneblack-600.jpg (BeagleBone Black)
[12]:https://en.wikipedia.org/wiki/Beamforming
[13]:https://en.wikipedia.org/wiki/Microphone_array
[14]:https://en.wikipedia.org/wiki/McASP
[15]:https://en.wikipedia.org/wiki/I%C2%B2S
[16]:http://hearing4all.eu/EN/
[17]:https://github.com/HoerTech-gGmbH/Cape4all
[18]:https://lac.linuxaudio.org/2018/pages/event/35/
[19]:/file/403051
[20]:https://opensource.com/sites/default/files/uploads/2-beaglebone-wireless-with-cape4all-labelled-600.jpg (Cape4all )
[21]:https://en.wikipedia.org/wiki/Bluetooth_Low_Energy
[22]:http://www.openmha.org/userproject/2017/12/21/openMHA-on-raspberry-pi.html

View File

@ -0,0 +1,271 @@
A sysadmin's guide to SELinux: 42 answers to the big questions
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum)
> "It is an important and popular fact that things are not always what they seem…"
> ―Douglas Adams, The Hitchhiker's Guide to the Galaxy
Security. Hardening. Compliance. Policy. The Four Horsemen of the SysAdmin Apocalypse. In addition to our daily tasks—monitoring, backup, implementation, tuning, updating, and so forth—we are also in charge of securing our systems. Even those systems where the third-party provider tells us to disable the enhanced security. It seems like a job for Mission Impossible's [Ethan Hunt][1].
Faced with this dilemma, some sysadmins decide to [take the blue pill][2] because they think they will never know the answer to the big question of life, the universe, and everything else. And, as we all know, that answer is **[42][3]**.
In the spirit of The Hitchhiker's Guide to the Galaxy, here are the 42 answers to the big questions about managing and using [SELinux][4] with your systems.
1. SELinux is a LABELING system, which means every process has a LABEL. Every file, directory, and system object has a LABEL. Policy rules control access between labeled processes and labeled objects. The kernel enforces these rules.
2. The two most important concepts are: Labeling (files, process, ports, etc.) and Type enforcement (which isolates processes from each other based on types).
3. The correct label format is `user:role:type:level` (the level part is optional).
4. The purpose of Multi-Level Security (MLS) enforcement is to control processes (domains) based on the security level of the data they will be using. For example, a secret process cannot read top-secret data.
5. Multi-Category Security (MCS) enforcement protects similar processes from each other (like virtual machines, OpenShift gears, SELinux sandboxes, containers, etc.).
6. Kernel parameters for changing SELinux modes at boot:
* `autorelabel=1` → forces the system to relabel
* `selinux=0` → kernel doesn't load any part of the SELinux infrastructure
* `enforcing=0` → boot in permissive mode
7. If you need to relabel the entire system:
`# touch /.autorelabel`
`# reboot`
If the system labeling contains a large number of errors, you might need to boot in permissive mode in order for the autorelabel to succeed.
8. To check if SELinux is enabled: `# getenforce`
9. To temporarily enable/disable SELinux: `# setenforce [1|0]`
10. SELinux status tool: `# sestatus`
11. Configuration file: `/etc/selinux/config`
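On most systems, that file looks roughly like this (a sketch; the exact values vary by distribution):

```
SELINUX=enforcing     # enforcing | permissive | disabled
SELINUXTYPE=targeted  # targeted | minimum | mls
```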
12. How does SELinux work? Here's an example of labeling for an Apache Web Server:
* Binary: `/usr/sbin/httpd`→`httpd_exec_t`
* Configuration directory: `/etc/httpd`→`httpd_config_t`
* Logfile directory: `/var/log/httpd`→`httpd_log_t`
* Content directory: `/var/www/html`→`httpd_sys_content_t`
* Startup script: `/usr/lib/systemd/system/httpd.service`→`httpd_unit_file_t`
* Process: `/usr/sbin/httpd -DFOREGROUND`→`httpd_t`
* Ports: `80/tcp, 443/tcp`→`httpd_t, http_port_t`
A process running in the `httpd_t` context can interact with an object with the `httpd_something_t` label.
13. Many commands accept the argument `-Z` to view, create, and modify context:
* `ls -Z`
* `id -Z`
* `ps -Z`
* `netstat -Z`
* `cp -Z`
* `mkdir -Z`
Contexts are set when files are created based on their parent directory's context (with a few exceptions). RPMs can set contexts as part of installation.
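For example, checking the context on a file might look like this (the exact output format varies by distribution and version):

```
$ ls -Z /var/www/html/index.html
unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/index.html
```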
14. There are four key causes of SELinux errors, which are further explained in items 15-21 below:
* Labeling problems
* Something SELinux needs to know
* A bug in an SELinux policy/app
* Your information may be compromised
15. Labeling problem: If your files in `/srv/myweb` are not labeled correctly, access might be denied. Here are some ways to fix this:
* If you know the label:
`# semanage fcontext -a -t httpd_sys_content_t '/srv/myweb(/.*)?'`
* If you know the file with the equivalent labeling:
`# semanage fcontext -a -e /srv/myweb /var/www`
* Restore the context (for both cases):
`# restorecon -vR /srv/myweb`
16. Labeling problem: If you move a file instead of copying it, the file keeps its original context. To fix these issues:
* Change the context command with the label:
`# chcon -t httpd_sys_content_t /var/www/html/index.html`
* Change the context command with the reference label:
`# chcon --reference /var/www/html/ /var/www/html/index.html`
* Restore the context (for both cases): `# restorecon -vR /var/www/html/`
17. If SELinux needs to know HTTPD listens on port 8585, tell SELinux:
`# semanage port -a -t http_port_t -p tcp 8585`
18. SELinux needs to know about runtime tweaks: booleans allow parts of the SELinux policy to be changed at runtime without any knowledge of SELinux policy writing. For example, if you want httpd to send email, enter: `# setsebool -P httpd_can_sendmail 1`
19. Booleans are just off/on settings for SELinux:
* To see all booleans: `# getsebool -a`
* To see the description of each one: `# semanage boolean -l`
* To set a boolean execute: `# setsebool [_boolean_] [1|0]`
* To configure it permanently, add `-P`. For example:
`# setsebool -P httpd_enable_ftp_server 1`
20. SELinux policies/apps can have bugs, including:
* Unusual code paths
* Configurations
* Redirection of `stdout`
* Leaked file descriptors
* Executable memory
* Badly built libraries
Open a ticket (do not file a Bugzilla report; there are no SLAs with Bugzilla).
21. Your information may be compromised if you have confined domains trying to:
* Load kernel modules
* Turn off the enforcing mode of SELinux
* Write to `etc_t/shadow_t`
* Modify iptables rules
22. SELinux tools for the development of policy modules:
`# yum -y install setroubleshoot setroubleshoot-server`
Reboot or restart `auditd` after you install.
23. Use `journalctl` for listing all logs related to `setroubleshoot`:
`# journalctl -t setroubleshoot --since=14:20`
24. Use `journalctl` for listing all logs related to a particular SELinux label. For example:
`# journalctl _SELINUX_CONTEXT=system_u:system_r:policykit_t:s0`
25. `setroubleshoot` logs when an SELinux error occurs and suggests some possible solutions. For example, from `journalctl`:
```
Jun 14 19:41:07 web1 setroubleshoot: SELinux is preventing httpd from getattr access on the file /var/www/html/index.html. For complete message run: sealert -l 12fd8b04-0119-4077-a710-2d0e0ee5755e
# sealert -l 12fd8b04-0119-4077-a710-2d0e0ee5755e
SELinux is preventing httpd from getattr access on the file /var/www/html/index.html.
***** Plugin restorecon (99.5 confidence) suggests ************************
If you want to fix the label,
/var/www/html/index.html default label should be httpd_sys_content_t.
Then you can restorecon.
Do
# /sbin/restorecon -v /var/www/html/index.html
```
26. Logging: SELinux records information all over the place:
* `/var/log/messages`
* `/var/log/audit/audit.log`
* `/var/lib/setroubleshoot/setroubleshoot_database.xml`
27. Logging: Looking for SELinux errors in the audit log:
`# ausearch -m AVC,USER_AVC,SELINUX_ERR -ts today`
28. To search for SELinux Access Vector Cache (AVC) messages for a particular service:
`# ausearch -m avc -c httpd`
29. The `audit2allow` utility gathers information from logs of denied operations and then generates SELinux policy-allow rules. For example:
* To produce a human-readable description of why the access was denied: `# audit2allow -w -a`
* To view the type enforcement rule that allows the denied access: `# audit2allow -a`
* To create a custom module: `# audit2allow -a -M mypolicy`
The `-M` option creates a type enforcement file (.te) with the name specified and compiles the rule into a policy package (.pp): `mypolicy.pp mypolicy.te`
* To install the custom module: `# semodule -i mypolicy.pp`
30. To configure a single process (domain) to run permissive: `# semanage permissive -a httpd_t`
31. If you no longer want a domain to be permissive: `# semanage permissive -d httpd_t`
32. To disable all permissive domains: `# semodule -d permissivedomains`
33. Enabling SELinux MLS policy: `# yum install selinux-policy-mls`
In `/etc/selinux/config:`
`SELINUX=permissive`
`SELINUXTYPE=mls`
Make sure SELinux is running in permissive mode: `# setenforce 0`
Use the `fixfiles` script to ensure that files are relabeled upon the next reboot:
`# fixfiles -F onboot`
`# reboot`
34. Create a user with a specific MLS range: `# useradd -Z staff_u john`
Using the `useradd` command, map the new user to an existing SELinux user (in this case, `staff_u`).
35. To view the mapping between SELinux and Linux users: `# semanage login -l`
36. Define a specific range for a user: `# semanage login --modify --range s2:c100 john`
37. To correct the label on the user's home directory (if needed): `# chcon -R -l s2:c100 /home/john`
38. To list the current categories: `# chcat -L`
39. To modify the categories or to start creating your own, modify the file as follows:
`/etc/selinux/<selinuxtype>/setrans.conf`
40. To run a command or script with a specific type, role, and user context:
`# runcon -t initrc_t -r system_r -u user_u yourcommandhere`
* `-t` is the type context
* `-r` is the role context
* `-u` is the user context
41. Containers running with SELinux disabled:
* With Podman: `# podman run --security-opt label=disable`
* With Docker: `# docker run --security-opt label=disable`
42. If you need to give a container full access to the system:
* With Podman: `# podman run --privileged`
* With Docker: `# docker run --privileged`
And with this, you already know the answer. So please: **Don't panic, and turn on SELinux**.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/sysadmin-guide-selinux
作者:[Alex Callejas][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/darkaxl
[1]:https://en.wikipedia.org/wiki/Ethan_Hunt
[2]:https://en.wikipedia.org/wiki/Red_pill_and_blue_pill
[3]:https://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy#Answer_to_the_Ultimate_Question_of_Life,_the_Universe,_and_Everything_%2842%29
[4]:https://en.wikipedia.org/wiki/Security-Enhanced_Linux

View File

@ -0,0 +1,41 @@
My first sysadmin mistake
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_mistakes.png?itok=dN0OoIl5)
If you work in IT, you know that things never go quite as well as you think they will. At some point, you'll run into an error or something will break, and you'll end up having to fix it. That's the job of a systems administrator.
As humans, we all make mistakes. Sometimes we are the error in the process, or we are what went wrong. As a result, we end up having to fix our own mistakes. They happen. We all make mistakes, typos, or breakages.
As a young systems administrator, I learned this lesson the hard way. I made a huge mistake. But thanks to guidance from my manager, I learned not to dwell on my error, but to create a "mistake strategy" to set things right. Learn from your mistakes. Get over it, and move on.
My first job was as a Unix systems administrator at a small company. Really, I was a junior sysadmin, but I worked alone most of the time. We were a small IT team, just the three of us. I was the only sysadmin for 20 or 30 Unix workstations and servers. The other two supported the Windows servers and desktops.
Any systems administrator reading this won't be surprised to learn that, as an unseasoned junior sysadmin, I eventually ran the `rm` command in the wrong directory. As root. I thought I was deleting some stale cache files for one of our programs. Instead, I mistakenly wiped out all the files in the `/etc` directory. Ouch.
The way I realized my mistake was an error message telling me that `rm` couldn't delete certain subdirectories. But the cache directory was supposed to contain only files! I immediately stopped the `rm` command and looked at what I had done. Then I panicked. All at once, countless thoughts rushed through my head. Had I just destroyed an important server? What would happen to the system? Would I get fired?
Fortunately, I had run `rm *` and not `rm -rf *`, so I had deleted only the files. The subdirectories were still there. But that didn't make me feel any better.
I went to my manager straight away and told her what I had done. She could see that I felt stupid about my mistake, but I owned it. Despite the urgency, she took a few minutes to do some mentoring with me. "You're not the first person to do this," she said. "What would someone else do in your situation?" That helped me calm down and focus. I started thinking less about the stupid thing I had just done, and more about what I was going to do next.
I put together a simple strategy: don't reboot the server. Use an identical system as a template, and re-create the `/etc` directory.
With an action plan in place, the rest was easy. It was just a matter of running the right commands to copy the `/etc` files from another server and edit the configuration so that it matched the system. Thanks to my habit of documenting everything, I used my existing documentation to make the final adjustments. I avoided a full restore of the server, which would have meant a huge outage.
To be sure, I learned from this mistake. For the rest of my days as a systems administrator, I always confirmed what directory I was in before running any command.
I also learned the value of building a "mistake strategy." When things go wrong, it's natural to panic and think about all the bad things that might happen next. That's human nature. But creating a "mistake strategy" helps me stop worrying about what went wrong and focus on making things better. I may still think about it, but knowing my next steps allows me to "get over it."
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/my-first-sysadmin-mistake
作者:[Jim Hall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jim-hall

View File

@ -1,31 +1,30 @@
Translating by qhwdw
Running a Python application on Kubernetes
在 Kubernetes 上运行一个 Python 应用程序
============================================================
### This step-by-step tutorial takes you through the process of deploying a simple Python application on Kubernetes.
### 这个分步指导教程教你通过在 Kubernetes 上部署一个简单的 Python 应用程序来学习部署的流程。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag)
Image by : opensource.com
图片来源:opensource.com
Kubernetes is an open source platform that offers deployment, maintenance, and scaling features. It simplifies management of containerized Python applications while providing portability, extensibility, and self-healing capabilities.
Kubernetes 是一个具备部署、维护、和可伸缩特性的开源平台。它在提供可移植性、可扩展性、以及自我修复能力的同时,简化了容器化 Python 应用程序的管理。
Whether your Python applications are simple or more complex, Kubernetes lets you efficiently deploy and scale them, seamlessly rolling out new features while limiting resources to only those required.
不论你的 Python 应用程序是简单还是复杂Kubernetes 都可以帮你高效地部署和伸缩它们,在有限的资源范围内滚动升级新特性。
In this article, I will describe the process of deploying a simple Python application to Kubernetes, including:
在本文中,我将描述在 Kubernetes 上部署一个简单的 Python 应用程序的过程,它包括:
* Creating Python container images
* 创建 Python 容器镜像
* Publishing the container images to an image registry
* 发布容器镜像到镜像注册中心
* Working with persistent volume
* 使用持久卷
* Deploying the Python application to Kubernetes
* 在 Kubernetes 上部署 Python 应用程序
### Requirements
### 必需条件
You will need Docker, kubectl, and this [source code][10].
你需要 Docker、kubectl、以及这个 [源代码][10]。
Docker is an open platform to build and ship distributed applications. To install Docker, follow the [official documentation][11]. To verify that Docker runs your system:
Docker 是一个构建和承载已发布的应用程序的开源平台。可以参照 [官方文档][11] 去安装 Docker。运行如下的命令去验证你的系统上运行的 Docker
```
$ docker info
@ -41,25 +40,25 @@ WARNING: No memory limit support
WARNING: No swap limit support
```
kubectl is a command-line interface for executing commands against a Kubernetes cluster. Run the shell script below to install kubectl:
kubectl 是在 Kubernetes 集群上运行命令的一个命令行界面。运行下面的 shell 脚本去安装 kubectl
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
```
Deploying to Kubernetes requires a containerized application. Let's review containerizing Python applications.
部署到 Kubernetes 的应用要求必须是一个容器化的应用程序。我们来回顾一下 Python 应用程序的容器化过程。
### Containerization at a glance
### 一句话了解容器化
Containerization involves enclosing an application in a container with its own operating system. This full machine virtualization option has the advantage of being able to run an application on any machine without concerns about dependencies.
容器化是指将一个应用程序所需要的东西打包进一个自带操作系统的容器中。这种完整机器虚拟化的好处是,一个应用程序能够在任何机器上运行而无需考虑它的依赖项。
Roman Gaponov's [article][12] serves as a reference. Let's start by creating a container image for our Python code.
我们以 Roman Gaponov 的 [文章][12] 为参考,来为我们的 Python 代码创建一个容器。
### Create a Python container image
### 创建一个 Python 容器镜像
To create these images, we will use Docker, which enables us to deploy applications inside isolated Linux software containers. Docker is able to automatically build images using instructions from a Docker file.
为创建这些镜像,我们将使用 Docker它可以让我们在一个隔离的 Linux 软件容器中部署应用程序。Docker 可以使用来自一个 `Docker file` 中的指令来自动化构建镜像。
This is a Docker file for our Python application:
这是我们的 Python 应用程序的 `Docker file`
```
FROM python:3.6
@ -91,43 +90,43 @@ VOLUME ["/app-data"]
CMD ["python", "app.py"]
```
This Docker file contains instructions to run our sample Python code. It uses the Python 3.5 development environment.
这个 `Docker file` 包含运行我们的示例 Python 代码的指令。它使用的开发环境是 Python 3.5。
### Build a Python Docker image
### 构建一个 Python Docker 镜像
We can now build the Docker image from these instructions using this command:
现在,我们可以使用下面的这个命令按照那些指令来构建 Docker 镜像:
```
docker build -t k8s_python_sample_code .
```
This command creates a Docker image for our Python application.
这个命令为我们的 Python 应用程序创建了一个 Docker 镜像。
### Publish the container images
### 发布容器镜像
We can publish our Python container image to different private/public cloud repositories, like Docker Hub, AWS ECR, Google Container Registry, etc. For this tutorial, we'll use Docker Hub.
我们可以将我们的 Python 容器镜像发布到不同的私有/公共云仓库中,像 Docker Hub、AWS ECR、Google Container Registry 等等。本教程中我们将发布到 Docker Hub。
Before publishing the image, we need to tag it to a version:
在发布镜像之前,我们需要给它标记一个版本号:
```
docker tag k8s_python_sample_code:latest k8s_python_sample_code:0.1
```
### Push the image to a cloud repository
### 推送镜像到一个云仓库
Using a Docker registry other than Docker Hub to store images requires you to add that container registry to the local Docker daemon and Kubernetes Docker daemons. You can look up this information for the different cloud registries. We'll use Docker Hub in this example.
如果使用一个 Docker 注册中心而不是 Docker Hub 去保存镜像,那么你需要在你本地的 Docker 守护程序和 Kubernetes Docker 守护程序上添加一个容器注册中心。对于不同的云注册中心,你可以在它上面找到相关信息。我们在示例中使用的是 Docker Hub。
Execute this Docker command to push the image:
运行下面的 Docker 命令去推送镜像:
```
docker push k8s_python_sample_code
```
### Working with CephFS persistent storage
### 使用 CephFS 持久卷
Kubernetes supports many persistent storage providers, including AWS EBS, CephFS, GlusterFS, Azure Disk, NFS, etc. I will cover Kubernetes persistence storage with CephFS.
Kubernetes 支持许多的持久存储提供商,包括 AWS EBS、CephFS、GlusterFS、Azure Disk、NFS 等等。我在示例中使用 CephFS 做为 Kubernetes 的持久卷。
To use CephFS for persistent data to Kubernetes containers, we will create two files:
为使用 CephFS 存储 Kubernetes 的容器数据,我们将创建两个文件:
persistent-volume.yml
@ -167,20 +166,20 @@ spec:
    storage: 10Gi
```
We can now use kubectl to add the persistent volume and claim to the Kubernetes cluster:
现在,我们将使用 kubectl 去添加持久卷并声明到 Kubernetes 集群中:
```
$ kubectl create -f persistent-volume.yml
$ kubectl create -f persistent-volume-claim.yml
```
We are now ready to deploy to Kubernetes.
现在,我们准备去部署 Kubernetes。
### Deploy the application to Kubernetes
### 在 Kubernetes 上部署应用程序
To manage the last mile of deploying the application to Kubernetes, we will create two important files: a service file and a deployment file.
为管理部署应用程序到 Kubernetes 上的最后一步,我们将创建两个重要文件:一个服务文件和一个部署文件。
Create a file and name it `k8s_python_sample_code.service.yml` with the following content:
使用下列的内容创建服务文件,并将它命名为 `k8s_python_sample_code.service.yml`
```
apiVersion: v1
@ -198,7 +197,7 @@ spec:
  k8s-app: k8s_python_sample_code
```
Create a file and name it `k8s_python_sample_code.deployment.yml` with the following content:
使用下列的内容创建部署文件并将它命名为 `k8s_python_sample_code.deployment.yml`
```
apiVersion: extensions/v1beta1
@ -228,35 +227,35 @@ spec:
             claimName: appclaim1
```
Finally, use kubectl to deploy the application to Kubernetes:
最后,我们使用 kubectl 将应用程序部署到 Kubernetes
```
$ kubectl create -f k8s_python_sample_code.deployment.yml
$ kubectl create -f k8s_python_sample_code.service.yml
```
Your application was successfully deployed to Kubernetes.
现在,你的应用程序已经成功部署到 Kubernetes。
You can verify whether your application is running by inspecting the running services:
你可以通过检查运行的服务来验证你的应用程序是否在运行:
```
kubectl get services
```
May Kubernetes free you from future deployment hassles!
或许 Kubernetes 可以解决未来你部署应用程序的各种麻烦!
_Want to learn more about Python? Nanjekye's book, [Python 2 and 3 Compatibility][7]offers clean ways to write code that will run on both Python 2 and 3, including detailed examples of how to convert existing Python 2-compatible code to code that will run reliably on both Python 2 and 3._
_想学习更多关于 Python 的知识Nanjekye 的书,[和平共处的 Python 2 和 3][7] 提供了完整的方法,让你写的代码在 Python 2 和 3 上完美运行,包括如何转换已有的 Python 2 代码为能够可靠运行在 Python 2 和 3 上的代码的详细示例。_
### About the author
### 关于作者
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/joannah-nanjekye.jpg?itok=F4RqEjoA)][13] Joannah Nanjekye - Straight Outta 256 , I choose Results over Reasons, Passionate Aviator, Show me the code.[More about me][8]
[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/joannah-nanjekye.jpg?itok=F4RqEjoA)][13] Joannah Nanjekye - Straight Outta 256 , 只要结果不问原因,充满激情的飞行员,喜欢用代码说话。[关于我的更多信息][8]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/running-python-application-kubernetes
作者:[Joannah Nanjekye ][a]
译者:[译者ID](https://github.com/译者ID)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,66 @@
## Translating by sober-wang
Linux Virtual Machines vs Linux Live Images
======
I'll be the first to admit that I tend to try out new [Linux distros][1] on a far too frequent basis. Yet the method I use to test them varies, depending on my goals for each instance. In this article, we're going to look at both running Linux virtual machines and running Linux live images. There are advantages to each method, but there are some hurdles with each as well.
### Testing out a new Linux distro for the first time
When I test out a brand new Linux distro for the first time, the method I use depends heavily on the resources of the PC I'm currently on. If I have access to my desktop PC, I'm going to run the distro to be tested in a virtual machine. The reason for this approach is that I can download and test the distro not only in a live environment, but also as an installed product with persistent storage abilities.
On the other hand, if I am working with much less robust hardware on a PC, then testing out a distro with a virtual machine installation of Linux is counter-productive. I'd be pushing that PC to its limits and honestly would be better off using a live Linux image instead running from a flash drive.
### Touring software on a new Linux distro
If you're interested in checking out a distro's desktop environment or the available software, you can't go wrong with a live image of the distro. A live environment provides you with a birds eye view of what to expect in terms of overall layout, applications provided and how the user experience flows overall.
To be fair, you could do the same thing with a virtual machine installation, but it may be a bit overkill if you would rather avoid filling up hard drive space with yet more data. After all, this is a simple tour of the distro. Remember what I said in the first section: I like to run Linux in a virtual machine to test it. This means I'm going to see how it installs, what the partition options look like, and other elements you wouldn't see from using a live image of any given distro.
Touring usually indicates that you're only looking to take a quick look at a distro, so in this case the method that can be done with the least amount of resistance and time investment is a good course of action.
### Taking a Linux distro with you
While it's not as common as it was a few years ago, the ability to take a Linux distro with you may be a consideration for some users. Obviously, virtual machine installations don't necessarily lend themselves favorably to portability. However a live image of a Linux distro is actually quite portable. A live image can be written to a DVD or copied onto a flash drive for easy traveling.
Expanding on this concept of Linux portability, it's also beneficial to have a live image on a flash drive when showing off how Linux works on a friend's computer. This empowers you to demonstrate how Linux can enrich their life while not relying on running a virtual machine on their PC. It's a bit of a win-win in favor of using a live image.
### Alternative to dual-booting Linux
This next item is a huge one. Consider this: perhaps you're a Windows user. You like playing with Linux, but would rather not take the plunge. Dual-booting is out of the question in case something goes wrong, or perhaps you're not comfortable identifying individual partitions. Whatever the case may be, either using Linux in a virtual machine or running it from a live image might be a great option for you.
Now I'm going to take a rather odd stance on something. I think you'll get far more value in the long term running Linux from a flash drive using a live image than with a virtual machine. There are two reasons for this. First of all, you'll get used to truly running Linux, versus running it inside of a virtual machine on top of Windows. Second, you can set up your flash drive to contain user data with persistent storage.
I'll grant you the same could be said of a virtual machine running Linux; however, you will never have an update break anything using the live image approach. Why? Because you're not updating a host OS or the guest OS. Remember, there are entire distros that are designed to be nothing more than persistent-storage Linux distros. Puppy Linux is one great example. Not only can it run on PCs that would otherwise be recycled or thrown away, it allows you to never be bothered again with tedious system updates, thanks to the way the distro handles security. It's not a normal Linux distro, and it's walled off in such a way that the persistent live image is free from anything scary. A sketch of setting up such a persistent flash drive follows.
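If you want the persistent-storage setup described above, many distros ship tooling for it. Below is a minimal sketch using Fedora's `livecd-iso-to-disk` from the `livecd-tools` package; the ISO and device names are hypothetical, other distros have their own equivalents (such as `mkusb` on Ubuntu), and the exact flags may vary between versions.
```
# Install the tooling (Fedora).
sudo dnf install livecd-tools
# Write the live image with a 2 GB overlay that persists user data
# across reboots. /dev/sdX is a placeholder for your flash drive.
sudo livecd-iso-to-disk --format --reset-mbr --overlay-size-mb 2048 distro-live.iso /dev/sdX
```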
### When a Linux virtual machine is absolutely the best option
As I bring this article to a close, let me leave you with this. There is one instance where using a virtual machine such as VirtualBox is absolutely better than using a live image: recording the desktop environment of any Linux distro.
For example, I make videos that provide a tour and review of a variety of Linux distros. Doing this with live images would require me to capture the screen with a hardware device or install a software capture tool from the live image's repositories. Clearly, a virtual machine is better suited for this job than a live image of a Linux distro.
Once you toss audio capture into the mix, there is no question that if you're going to use software to capture your review, you really want a host OS that has all the basic needs covered for a reasonably decent capture environment. Again, you could do all of this with a hardware device... but that might be cost-prohibitive if you're only doing video/audio capturing as a part-time endeavor.
### A Linux virtual machine vs a Linux live image
What is your preferred method of trying out new distros? Perhaps you're someone who is fine with formatting their hard drive and throwing caution to the wind, thus making any of this unneeded?
Most people I've interacted with online tend to follow much of the methodology I've touched on above, but I'd love to hear what approach works best for you. Hit the comments and let me know which method you prefer when checking out the latest and greatest from the Linux distro world.
--------------------------------------------------------------------------------
via: https://www.datamation.com/open-source/linux-virtual-machines-vs-linux-live-images.html
Author: [Matt Hartley][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://www.datamation.com/author/Matt-Hartley-3080.html
[1]:https://www.datamation.com/open-source/best-linux-distro.html

View File

@ -0,0 +1,97 @@
The evolution of blockchain: A quick guide, and why open source is at its heart
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/block-quilt-chain.png?itok=mECoDbrc)
When working on a new version of an open source project, it's not uncommon to see the suffix "-ng", for "next generation". Fortunately, the rapidly evolving blockchain has so far managed to avoid this naming pitfall. But in the evolution of this open source ecosystem, change has been constant, and good ideas have been adopted, merged, and evolved into many different projects in typical open source fashion.
In this article, I will look at the different generations of blockchains and examine the ideas they adopted to address the problems the ecosystem encountered. Of course, any attempt to classify an ecosystem has its limits, and its detractors, but it should provide a rough guide through the chaotic world of blockchain projects.
### The one that started it all: Bitcoin
The first generation of blockchains stems from the [Bitcoin][1] blockchain, the ledger underpinning the decentralized, peer-to-peer cryptocurrency that went from [Slashdot][2] miscellanea to a mainstream topic.
This blockchain is a distributed ledger that keeps track of all users' transactions to prevent them from double-spending their coins (a task historically entrusted to third parties: banks). To prevent attackers from gaming the system, the ledger is replicated to every computer participating in the Bitcoin network, and only one computer at a time is allowed to update it. To decide which computer earns the right to update the ledger, the system arranges a race every 10 minutes between the computers on the Bitcoin network, which costs them (a lot of) energy. The winner earns the right to commit the previous 10 minutes of transactions to the ledger (the "block" in blockchain) and is rewarded with some bitcoin for its work. This setup is called a _proof of work_ consensus mechanism.
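To make the proof-of-work idea concrete, here is a toy sketch in shell, my own illustration rather than Bitcoin's actual algorithm (real mining hashes a binary block header against a 256-bit numeric target). The principle is the same: keep trying nonces until the block's hash satisfies an arbitrary difficulty condition.
```
#!/usr/bin/env bash
# Toy proof of work: find a nonce whose block hash starts with "000".
# Every extra required zero multiplies the expected work by 16.
block="previous-hash|txA,txB,txC"
target="000"
nonce=0
while true; do
    hash=$(printf '%s|%s' "$block" "$nonce" | sha256sum | cut -d' ' -f1)
    [[ $hash == "$target"* ]] && break
    nonce=$((nonce + 1))
done
echo "winning nonce: $nonce"
echo "block hash:    $hash"
```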
The goal of using a blockchain is to raise the level of trust among the participants in the network.
This is where things get interesting. Bitcoin was released as an [open source project][3] in January 2009. In 2010, realizing that many of these elements could be tweaked, a community, the [bitcointalk forums][4], gathered around Bitcoin and started experimenting.
At first, seeing the Bitcoin blockchain as a kind of distributed database, the [Namecoin][5] project emerged, proposing to store arbitrary data in its transaction database. If a blockchain could record transfers of money, it could also record transfers of other assets, such as domain names. This is exactly Namecoin's main use case; it went live in April 2011, two years after Bitcoin's appearance.
Where Namecoin tweaked the contents of the blockchain, [Litecoin][6] tweaked two technical aspects: reducing the time between two blocks from 10 to 2.5 minutes, and changing how the race is run (replacing the SHA-256 secure hashing algorithm with [scrypt][7]). This was possible because Bitcoin was released as open source software, and Litecoin is essentially identical to Bitcoin in all other respects. Litecoin was the first fork to modify Bitcoin's consensus mechanism, paving the way for many more "coins".
Along the way, more and more variants of the Bitcoin codebase appeared. Some extended Bitcoin's uses, such as the [Zerocash][8] protocol, which focused on providing transaction anonymity and fungibility but was eventually spun off into its own currency, [Zcash][9].
While Zcash brought its own innovations, using recent cryptographic advances known as "zero-knowledge proofs", it maintained compatibility with the vast majority of the Bitcoin codebase, meaning it could benefit from upstream Bitcoin innovations.
Another project, [CryptoNote][10], sprouted from the same community but did not use the same code; it was built with Bitcoin as a backdrop yet differed from it. Released in December 2012, its appearance led to the birth of several cryptocurrencies, the best known of which is [Monero][11] (2014). Monero takes a different approach from Zcash but addresses the same problems: privacy and fungibility.
As is often the case in the open source world, there is more than one tool available for the job.
### The next generations: "blockchain-ng"
So far, however, all these variants have only improved cryptocurrencies or extended them to support other types of transactions. This brings us to the second generation of blockchains.
Once the community began modifying what a blockchain could be used for and tweaking its technical aspects, it didn't take long for some people to expand and rethink its potential. A longtime follower of Bitcoin, [Vitalik Buterin][12], suggested in late 2013 that a blockchain's transactions could represent the state changes of a state machine, reconceiving the blockchain as a distributed computer capable of running applications ("smart contracts"). The project, [Ethereum][13], went live in April 2015. It has seen considerable success in running distributed applications, and the popularity of some of those applications ([CryptoKitties][14]) has even caused the Ethereum blockchain to slow down.
This demonstrates one of the major limitations of current blockchains: speed and capacity. (Speed is usually measured in transactions per second, or TPS.) Several proposals aim to solve this speed issue, from sharding to sidechains to the so-called "second-layer" solutions. More innovation is needed here.
As the term "smart contract" caught on, and with proof that they could run (albeit on still-slow technology), another idea soon came to fruition: permissioned blockchains. So far, all the blockchain networks we have covered share two unstated characteristics: they are public (anyone can see them function), and they are permissionless (anyone can join them). Both aspects are desirable and necessary for running a distributed, non-third-party-based currency.
As blockchains came to be seen as increasingly separable from cryptocurrencies, it started to make sense to consider them in settings with privacy and permissioning. Consortium-type groups of actors who have business relationships but don't fully trust each other could benefit from these types of blockchains; for example, participants along a logistics chain, or financial, insurance, or healthcare organizations that periodically perform bilateral settlements or rely on a clearinghouse.
Once the setting changes from "anyone can join" to "invitation only", further changes and tweaks to how the blockchain builds its blocks become possible, yielding results that are very interesting to some.
First, proof of work, which protects the network from malicious or spammy actors, can be replaced by something simpler and less resource-hungry, such as a [Raft][15]-based consensus protocol. Trading a higher level of security for faster speed, a simpler consensus algorithm is more desirable for many groups, because they can replace the cryptography-based guarantees with other, legal-relationship-based guarantees, avoiding, for instance, the massive energy consumption that the proof-of-work race entails. Another innovation here is [proof of stake][16], a heavyweight contender for public-network consensus that will likely find its own implementations in permissioned networks too; a toy sketch of the idea follows.
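For contrast with the proof-of-work race above, here is a toy sketch, again my own illustration rather than any project's real algorithm, of the core idea behind proof of stake: the chance of being chosen to write the next block is proportional to the stake a participant has locked up.
```
#!/usr/bin/env bash
# Toy proof of stake: pick the next block writer with probability
# proportional to each participant's stake (requires bash 4+).
declare -A stake=( [alice]=50 [bob]=30 [carol]=20 )
total=0
for v in "${!stake[@]}"; do
    total=$((total + stake[$v]))
done
pick=$((RANDOM % total))
acc=0
for v in "${!stake[@]}"; do
    acc=$((acc + stake[$v]))
    if ((pick < acc)); then
        echo "next block is written by: $v"
        break
    fi
done
```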
Several projects make it simpler to create permissioned blockchains, including [Quorum][17] (a fork of Ethereum) and [Hyperledger][18]'s [Fabric][19] and [Sawtooth][20], two open source projects based on new code.
Permissioned blockchains can sidestep certain complications of public, permissionless ones, but they have issues of their own. Properly managing participants is one of them: Who can join? How are they identified? How can they be removed from the network? Does a single entity on the network manage a central public key infrastructure (PKI)?
The open nature of blockchains is seen as a form of governance.
### The open nature of blockchains
In all the cases so far, one thing is clear: the goal of using a blockchain is to raise the level of trust that participants have in the network and in the data it produces, ideally enough to be able to use it as-is, without further work.
This level of trust can only be reached if the software powering the network is free and open source. Even a correctly distributed proprietary blockchain is, in essence, a collection of private agents running the same third-party code. By its nature, a blockchain's source code must be open source, but being open source alone is not enough. As the ecosystem keeps growing, this is both the minimum guarantee and the source of further innovation.
Finally, it is worth mentioning that while the open nature of blockchains has been a source of innovation and variation, it is also considered a form of governance: governance by code, where users are expected to run whichever specific version contains the functions and approaches they believe the whole network should embrace. In this respect, it must be said that the open nature of some blockchains has been souring, but the problem is being addressed.
### Third and fourth generations: governance
Next, I will look at the third and fourth generations of blockchains: blockchains with built-in governance tools, and projects tackling the thorny issue of interconnecting the multitude of different blockchain systems so that they can exchange information and value with one another.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/blockchain-guide-next-generation
Author: [Axel Simon][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/axel
[1]:https://bitcoin.org
[2]:https://slashdot.org/
[3]:https://github.com/bitcoin/bitcoin
[4]:https://bitcointalk.org/
[5]:https://www.namecoin.org/
[6]:https://litecoin.org/
[7]:https://en.wikipedia.org/wiki/Scrypt
[8]:http://zerocash-project.org/index
[9]:https://z.cash
[10]:https://cryptonote.org/
[11]:https://en.wikipedia.org/wiki/Monero_(cryptocurrency)
[12]:https://en.wikipedia.org/wiki/Vitalik_Buterin
[13]:https://ethereum.org
[14]:http://cryptokitties.co/
[15]:https://en.wikipedia.org/wiki/Raft_(computer_science)
[16]:https://www.investopedia.com/terms/p/proof-stake-pos.asp
[17]:https://www.jpmorgan.com/global/Quorum
[18]:https://hyperledger.org/
[19]:https://www.hyperledger.org/projects/fabric
[20]:https://www.hyperledger.org/projects/sawtooth

View File

@ -0,0 +1,154 @@
Install an NVIDIA GPU on almost any machine
======
![](https://fedoramagazine.org/wp-content/uploads/2018/06/nvidia-816x345.jpg)
Whether for research or recreation, installing a new GPU can bolster your computer's performance and enable new functionality across the board. This installation guide uses Fedora 28's new third-party repositories to install NVIDIA drivers. It walks you through the installation of both hardware and software, and covers everything you need to get your NVIDIA graphics card up and running. This process works for any UEFI-enabled computer and any modern NVIDIA GPU.
### Preparation
This guide relies on the following materials:
* A computer that uses [UEFI][1]; if you are not certain whether your machine has this firmware, run `sudo dmidecode -t 0` (a sketch of this check follows the list). If "UEFI is supported" appears in the output, you may continue. Otherwise, while it is technically possible to update some computers to support UEFI, the process is often demanding, and we generally do not recommend it.
* A modern, UEFI-enabled NVIDIA graphics card
* A power supply that meets the wattage and wiring requirements of your NVIDIA card (see the Hardware and modifications section for details)
* An internet connection
* A Fedora 28 system
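For reference, a quick sketch of the firmware check mentioned in the first item above; the `grep` filter is just one convenient way to narrow the output.
```
# Print the firmware (BIOS) information record and look for UEFI support.
sudo dmidecode -t 0 | grep -i uefi
# On a supported machine this prints a line such as:
#   UEFI is supported
```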
### Example installation
This example installation uses:
* An Optiplex 9010 (a fairly old machine)
* An [NVIDIA GeForce GTX 1050 Ti XLR8 Gaming Overclocked Edition 4 GB GDDR5 PCI Express 3.0][2] graphics card
* To meet the power requirements of the new GPU, the power supply was upgraded to an [EVGA 80 PLUS 600 W ATX 12V/EPS 12V][3]; this new PSU exceeds the recommended minimum wattage by 300 W, but in most cases meeting the recommended minimum is sufficient.
* And, of course, Fedora 28.
### Hardware and modifications
#### PSU
Open your desktop case and check the maximum output wattage printed on the power supply. Next, check the documentation for your NVIDIA GPU and determine the recommended minimum wattage. Beyond that, inspect the GPU to see whether it requires additional wiring, such as a 6-pin connector; most entry-level GPUs draw power only from the motherboard, but some require extra power. You will need to upgrade your PSU if:
1. Your power supply's maximum output wattage is lower than the GPU's recommended minimum. Note: according to some GPU manufacturers, pre-built systems may require more or less wattage than recommended, depending on the system's configuration. Use your own judgment to determine your power requirements if you are running a particularly power-hungry or power-efficient setup.
2. Your power supply does not provide the wiring needed to power your GPU.
PSUs are simple to replace, but be sure to take note of your cable layout before removing the existing power supply. In addition, make sure the PSU you choose fits your case.
#### CPU
Although installing a high-performance NVIDIA GPU is possible on most older machines, a slow or damaged CPU can throttle the GPU's performance. To calculate the impact of the bottleneck effect on your machine, use [this bottleneck calculator][4]. It is important to know your CPU's performance so you can avoid pairing a high-powered GPU with a CPU that cannot keep up. Upgrading your CPU is a potential consideration.
#### Motherboard
Before continuing, verify that your motherboard is compatible with the GPU you have chosen. Your graphics card should be inserted into the PCI-E x16 slot closest to the heat sink. Make sure your setup leaves enough room for the GPU. In addition, note that most GPUs today use PCI-E 3.0 technology. Although these GPUs run best in a PCI-E 3.0 slot, performance should not suffer significantly in an older-revision slot.
### Installation
1\. First, open the terminal and update your packages:
```
sudo dnf update
```
2\. Then, reboot with this simple command:
```
reboot
```
3\. After rebooting, install the Fedora 28 workstation repositories:
```
sudo dnf install fedora-workstation-repositories
```
4\. Next, enable the NVIDIA driver repository:
```
sudo dnf config-manager --set-enabled rpmfusion-nonfree-nvidia-driver
```
5\. Then, reboot once more.
6\. After this reboot, verify that the repository was added by running the following command:
```
sudo dnf repository-packages rpmfusion-nonfree-nvidia-driver info
```
If several NVIDIA tools and their respective specs load, proceed to the next step. If not, you may have encountered an error when adding the new repository, and you should give it another try.
7\. Log in, connect to the internet, and open the Software application. Click Add-ons > Hardware Drivers > NVIDIA Linux Graphics Driver > Install.
Then, reboot once again.
8\. After rebooting, go to 'Show Applications' on the side bar and open the newly added NVIDIA X Server Settings application. A GUI opens, and a dialog box appears with the following message:
![NVIDIA X Server Prompt][5]
Heed the application's advice, but before doing so, make sure you have your NVIDIA GPU on hand and are ready to install it. Please note that running `nvidia-xconfig` as root and then powering off without installing the GPU immediately can cause serious damage. Doing so may prevent your computer from booting and force you to repair the system through the reboot screen. A fresh reinstall of Fedora can fix these issues, but the effects can be worse still.
If you are ready to proceed, enter the command:
```
sudo nvidia-xconfig
```
If you are prompted to complete any downloads, accept them and proceed.
9\. Once this process is complete, close all applications and shut down the computer. Unplug the machine. Then, press the power button once to drain any residual power and protect yourself from electric shock. If your PSU has a power switch, switch it off.
10\. Finally, install the graphics card: remove the old card and insert the new one into the correct PCI-E x16 slot, with the fans facing down. If there is no room for the fans to ventilate in that position, place the card face up instead, if possible. Once the new card is successfully installed, close your case, plug in the PSU, and power on the computer. It should boot successfully.
**NOTE:** To disable the NVIDIA driver repository used in this installation, or to disable all of the Fedora workstation repositories, consult [this Fedora Wiki page][6].
### Verification
1\. If your newly installed graphics card is connected to your monitor and displaying correctly, your NVIDIA driver has successfully established a connection to the card.
If you would like to view your settings, or verify that the driver is working (in case the motherboard has two cards installed), open the NVIDIA X Server Settings application again. This time, you should not be prompted with an error message, and information on the X configuration file and your NVIDIA GPU should be displayed. (See the screenshot below.)
![NVIDIA X Server Settings][7]
Through this application, you may alter the X configuration file to your needs and monitor the GPU's performance, clock speed, and thermal information. A command-line check is sketched below as well.
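If you prefer the command line, `nvidia-smi` offers a quick sanity check too. Note an assumption here: on Fedora with RPM Fusion, the tool may ship in a separate subpackage (such as `xorg-x11-drv-nvidia-cuda`), so skip this if the command is not available on your system.
```
# Prints the driver version, GPU model, temperature, clock state and
# memory use; it only works if the NVIDIA kernel module is loaded.
nvidia-smi
```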
2\. To make sure the new card is working at full capacity, a GPU performance test is needed. GL Mark 2, a benchmarking tool that provides information on buffering, building, lighting, texturing, and more, offers an excellent solution. It records the frame rates of a variety of graphical tests and then outputs an overall performance score (called the glmark2 score).
**Note:** glxgears only tests the performance of your screen or monitor, not the graphics card itself; use GL Mark 2 instead.
To run GLMark2:
1. Open the terminal and close all other applications
2. Run `sudo dnf install glmark2`
3. Run `glmark2`
4. Allow the test to run to completion for best results. Check whether the frame rates match your expectations for the card. If you would like additional verification, search the web to see whether a glmark2 score has been published for your card; you can compare against it to assess your card's performance.
5. If your frame rates or glmark2 score are below expectations, consider the potential contributing factors. Is the CPU creating a bottleneck? Are other issues at play?
If the diagnostics look good, enjoy your new GPU.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/install-nvidia-gpu/
Author: [Justice del Castillo][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [hopefully2333](https://github.com/hopefully2333)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://fedoramagazine.org/author/justice/
[1]:https://whatis.techtarget.com/definition/Unified-Extensible-Firmware-Interface-UEFI
[2]:https://www.cnet.com/products/pny-geforce-gtx-xlr8-gaming-1050-ti-overclocked-edition-graphics-card-gf-gtx-1050-ti-4-gb/specs/
[3]:https://www.evga.com/products/product.aspx?pn=100-B1-0600-KR
[4]:http://thebottlenecker.com (Home: The Bottle Necker)
[5]:https://bytebucket.org/kenneym/fedora-28-nvidia-gpu-installation/raw/7bee7dc6effe191f1f54b0589fa818960a8fa18b/nvidia_xserver_error.jpg?token=c6a7effe35f1c592a155a4a46a068a19fd060a91 (NVIDIA X Sever Prompt)
[6]:https://fedoraproject.org/wiki/Workstation/Third_Party_Software_Repositories
[7]:https://bytebucket.org/kenneym/fedora-28-nvidia-gpu-installation/raw/7bee7dc6effe191f1f54b0589fa818960a8fa18b/NVIDIA_XCONFIG.png?token=64e1a7be21e5e9ba157f029b65e24e4eef54d88f (NVIDIA X Server Settings)