mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-03-21 02:10:11 +08:00
commit
01abf9281e
@ -1,26 +1,25 @@
|
||||
怎么去转换任何系统调用为一个事件:介绍 eBPF 内核探针
|
||||
怎么去转换任何系统调用为一个事件:对 eBPF 内核探针的介绍
|
||||
============================================================
|
||||
|
||||
长文预警:在最新的 Linux 内核(>=4.4)中使用 eBPF,你可以将任何内核函数调用转换为一个带有任意数据的用户空间的事件。这通过 bcc 来做很容易。这个探针是用 C 语言写的,而数据是由 Python 来处理的。
|
||||
|
||||
长文预警:在最新的 Linux 内核(>=4.4)中使用 eBPF,你可以使用任意数据将任何内核函数调用转换为一个用户空间事件。这通过 bcc 来做很容易。这个探针是用 C 语言写的,而数据是由 Python 来处理的。
|
||||
如果你对 eBPF 或者 Linux 跟踪不熟悉,那你应该好好阅读一下整篇文章。本文尝试逐步去解决我在使用 bcc/eBPF 时遇到的困难,以为您节省我在搜索和挖掘上花费的时间。
|
||||
|
||||
如果你对 eBPF 或者 Linux 跟踪不熟悉,那你应该好好阅读一下本文。它尝试逐步去克服我在使用 bcc/eBPF 时遇到的困难,同时可以为你节省我在搜索和挖掘上所花费的时间。
|
||||
### 在 Linux 的世界中关于推送与轮询的一个看法
|
||||
|
||||
### 在 Linux 的世界中关于 push vs pull 的一个提示
|
||||
|
||||
当我开始在容器上工作的时候,我想知道我们怎么基于一个真实的系统状态去动态更新一个负载均衡器的配置。一个通用的策略是这样做的,无论什么时候只要一个容器启动,容器编排器触发一个负载均衡配置更新动作,负载均衡器去轮询每个容器,直到它的健康状态检查结束。它只是简单进行 “SYN” 测试。
|
||||
当我开始在容器上工作的时候,我想知道我们怎么基于一个真实的系统状态去动态更新一个负载均衡器的配置。一个通用的策略是这样做的,无论什么时候只要一个容器启动,容器编排器触发一个负载均衡配置更新动作,负载均衡器去轮询每个容器,直到它的健康状态检查结束。它也许只是简单进行 “SYN” 测试。
|
||||
|
||||
虽然这种配置方式可以有效工作,但是它的缺点是:你的负载均衡器一直在等待系统变得可用,而它本来可以把这些时间用在 … 负载均衡上。
|
||||
|
||||
可以做得更好吗?
|
||||
|
||||
当你希望在一个系统中让一个程序对一些变化做出反应,这里有两种可能的策略。程序可以去 _轮询_ 系统去检测变化,或者,如果系统支持,系统可以 _推送_ 事件并且让程序对它作出反应。你希望去使用推送还是轮询取决于上下文环境。一个好的经验法则是,基于处理时间的考虑,如果事件发生的频率较低时使用推送,而当事件发生的较快或者让系统变得不可用时切换为轮询。例如,一般情况下,网络驱动将等待来自网卡的事件,但是,像 dpdk 这样的框架对事件将激活对网卡的轮询,以达到高吞吐低延迟的目的。
|
||||
当你希望在一个系统中让一个程序对一些变化做出反应时,有两种可能的策略:程序可以去 _轮询_ 系统以检测变化;或者,如果系统支持,系统可以 _推送_ 事件并让程序对它作出反应。使用推送还是轮询取决于上下文环境。一个好的经验法则是:相对于处理时间而言,如果事件发生的频率较低,就使用推送;而当事件来得很快、或者系统会因此变得不可用时,就切换为轮询。例如,一般情况下,网络驱动程序将等待来自网卡的事件,但是像 dpdk 这样的框架则会主动轮询网卡,以达到高吞吐低延迟的目的。
|
||||
|
||||
理想状态下,我们将有一些内核接口告诉我们:
|
||||
|
||||
> * “容器管理器,你好,我刚才为容器 _servestaticfiles_ 的 Nginx-ware 创建了一个套接字,或许你应该去更新你的状态?”
|
||||
>
|
||||
> * “好的,操作系统,感谢你告诉我这个事件“
|
||||
> * “好的,操作系统,感谢你告诉我这个事件”
|
||||
|
||||
虽然 Linux 有大量的接口去处理事件,光是文件事件的接口就多达 3 个,但是没有专门的接口去获取套接字事件的通知。你可以得到路由表事件、邻接表事件、连接跟踪事件、接口变化事件,唯独没有套接字事件。或者,也许它深深地隐藏在一个 Netlink 接口中。
|
||||
|
||||
@ -28,23 +27,23 @@
|
||||
|
||||
### 内核跟踪和 eBPF,一些它们的历史
|
||||
|
||||
直到最近,仅有的方式是去在内核上打一个补丁程序或者借助于 SystemTap。[SytemTap][5] 是一个 Linux 系统跟踪器。简单地说,它提供了一个 DSL,然后编译进内核模块,然后被内核加载运行。除了一些因安全原因禁用动态模块加载的生产系统之外,包括在那个时候我工作的那一个。另外的方式是为内核打一个补丁程序去触发一些事件,可能是基于 netlink。但是这很不方便。深入内核带来的缺点包括 “有趣的” 新 “特性” 和增加了维护负担。
|
||||
直到最近,内核跟踪的唯一方式是给内核打补丁,或者借助于 SystemTap。[SystemTap][5] 是一个 Linux 系统跟踪器。简单地说,它提供了一个 DSL,编译成内核模块,然后由内核加载运行。但出于安全原因,一些生产系统禁用了动态模块加载,因此无法使用它,包括在那个时候我工作用的那一台。另外的方式是为内核打一个补丁程序以触发一些事件,可能是基于 netlink。但是这很不方便。深入内核所带来的缺点包括 “有趣的” 新 “特性”,以及增加了维护负担。
|
||||
|
||||
从 Linux 3.15 开始给我们带来了希望,它支持任何可跟踪内核函数可安全转换为用户空间事件。在一般的计算机科学中,“安全” 是指 ”一些虚拟机”。这里说的情况不是这种意思。Linux 已经有多好年了。自从 Linux 2.1.75 在 1997 年正式发行以来。但是,对被称为伯克利包过滤器的 BPF 来说它的历史是很短的。正如它的名字所表达的那样,它最初是为 BSD 防火墙开发的。它仅有两个寄存器,并且它仅允许跳转,意味着你不能使用它写一个循环(好吧,如果你知道最大迭代次数并且去手工实现它,你也可以实现循环)。这一点保证了程序总会终止并且从来不会使系统处于 hang 的状态。还不确定它的作用吗?即便你用的是 iptables。它的作用正如 [CloudFlare 的 DDos 防护基础][6]。
|
||||
从 Linux 3.15 开始给我们带来了希望,它支持将任何可跟踪的内核函数安全地转换为用户空间事件。在一般的计算机科学中,“安全” 是指 “某些虚拟机”。在此也不例外。Linux 拥有这样的虚拟机已经很多年了,实际上自 1997 年的 Linux 2.1.75 正式发行时就有了。它被称为伯克利包过滤器(Berkeley Packet Filter),简称 BPF。正如它的名字所表达的那样,它最初是为 BSD 防火墙开发的。它仅有两个寄存器,并且仅允许向前跳转,这意味着你不能用它写一个循环(好吧,如果你知道最大迭代次数,并且手工展开它,也可以实现循环)。这一点保证了程序总会终止,而不会使系统处于挂起状态。还不确定它有什么用?其实你每次使用 iptables 时都在用它,它也是 [CloudFlare 的 DDos 防护的基础][6]。
|
||||
|
||||
好的,因此,随着 Linux 3.15,[BPF 被扩展了][7] 转变为 eBPF。对于 “扩展的” BPF。它从两个 32 位寄存器升级到 10 个 64 位寄存器,并且增加了它们之间向后跳转的特性。它因此将被 [在 Linux 3.18 中进一步扩展][8],并将被移到网络子系统中,并且增加了像映射(maps)这样的工具。为保证安全,它 [引进了一个检查器][9],它验证所有的内存访问和可能的代码路径。如果检查器不能保证代码在固定的边界内,代码将被终止,它拒绝程序的初始插入。
|
||||
好的,因此,随着 Linux 3.15,[BPF 被扩展][7] 成为了 eBPF,即 “扩展的(extended)” BPF。它从两个 32 位寄存器升级到 10 个 64 位寄存器,并且增加了它们之间向后跳转的特性。然后它 [在 Linux 3.18 中被进一步扩展][8],移出了网络子系统,并且增加了像映射(map)这样的工具。为保证安全,它 [引进了一个检查器][9],用来验证所有的内存访问和可能的代码路径。如果检查器不能保证代码会在固定的边界内终止,它一开始就会拒绝插入这个程序。
|
||||
|
||||
关于它的更多历史,可以看 [Oracle 的关于 eBPF 的一个很捧的演讲][10]。
|
||||
关于它的更多历史,可以看 [Oracle 的关于 eBPF 的一个很棒的演讲][10]。
|
||||
|
||||
让我们开始吧!
|
||||
|
||||
### 来自 `inet_listen` 的问候
|
||||
### 来自 inet_listen 的问候
|
||||
|
||||
因为写一个汇编程序并不是件容易的任务,甚至对于很优秀的我们来说,我将使用 [bcc][11]。bcc 是一个基于 LLVM 的采集工具,并且用 Python 抽象了底层机制。探针是用 C 写的,并且返回的结果可以被 Python 利用,来写一些非常简单的应用程序。
|
||||
因为写一个汇编程序并不是件十分容易的任务,甚至对于很优秀的我们来说,我将使用 [bcc][11]。bcc 是一个基于 LLVM 的工具集,并且用 Python 抽象了底层机制。探针是用 C 写的,并且返回的结果可以被 Python 利用,可以很容易地写一些不算简单的应用程序。
|
||||
|
||||
首先安装 bcc。对于一些示例,你可能会被要求使用一个最新的内核版本(>= 4.4)。如果你想亲自去尝试一下这些示例,我强烈推荐你安装一台虚拟机。 _而不是_ 一个 Docker 容器。你不能在一个容器中改变内核。作为一个非常新的很活跃的项目,安装教程高度依赖平台/版本的。你可以在 [https://github.com/iovisor/bcc/blob/master/INSTALL.md][12] 上找到最新的教程。
|
||||
首先安装 bcc。对于一些示例,你可能会需要使用一个最新的内核版本(>= 4.4)。如果你想亲自去尝试一下这些示例,我强烈推荐你安装一台虚拟机, _而不是_ 一个 Docker 容器。你不能在一个容器中改变内核。作为一个非常新的很活跃的项目,其安装教程高度依赖于平台/版本。你可以在 [https://github.com/iovisor/bcc/blob/master/INSTALL.md][12] 上找到最新的教程。
|
||||
|
||||
现在,我希望在 TCP 套接字上进行监听,不管什么时候,只要有任何程序启动我将得到一个事件。当我在一个 `AF_INET` + `SOCK_STREAM` 套接字上调用一个 `listen()` 系统调用时,底层的内核函数是 [`inet_listen`][13]。我将钩在 `kprobe` 的输入点上,它启动时输出一个 “Hello World”。
|
||||
现在,我希望不管在什么时候,只要有任何程序开始监听 TCP 套接字,我就能得到一个事件。当我在一个 `AF_INET` + `SOCK_STREAM` 套接字上调用 `listen()` 系统调用时,其底层的内核函数是 [`inet_listen`][13]。我将从在它的入口处钩上一个输出 “Hello World” 的 `kprobe` 开始。
|
||||
|
||||
```
|
||||
from bcc import BPF
|
||||
@ -54,7 +53,7 @@ bpf_text = """
|
||||
#include <net/inet_sock.h>
|
||||
#include <bcc/proto.h>
|
||||
|
||||
// 1\. Attach kprobe to "inet_listen"
|
||||
// 1. Attach kprobe to "inet_listen"
|
||||
int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
|
||||
{
|
||||
bpf_trace_printk("Hello World!\\n");
|
||||
@ -62,22 +61,25 @@ int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
|
||||
};
|
||||
"""
|
||||
|
||||
# 2\. Build and Inject program
|
||||
# 2. Build and Inject program
|
||||
b = BPF(text=bpf_text)
|
||||
|
||||
# 3\. Print debug output
|
||||
# 3. Print debug output
|
||||
while True:
|
||||
print b.trace_readline()
|
||||
|
||||
```
|
||||
|
||||
这个程序做了三件事件:1. 它使用一个命名惯例附加到一个内核探针上。如果函数被调用,输出 “my_probe”,它使用 `b.attach_kprobe("inet_listen", "my_probe")` 被显式地附加。2.它使用 LLVM 去 new 一个 BPF 后端来构建程序。使用 (new) `bpf()` 系统调用去注入结果字节码,并且按匹配的命名惯例自动附加探针。3. 从内核管道读取原生输出。
|
||||
这个程序做了三件事:
|
||||
|
||||
注意:eBPF 的后端 LLVM 还很新。如果你认为你发了一个 bug,你可以去升级它。
|
||||
1. 它通过命名惯例附加到一个内核探针上。如果这个函数被命名为比如说 `my_probe`,那就需要使用 `b.attach_kprobe("inet_listen", "my_probe")` 来显式地附加(参见列表后面的示例)。
|
||||
2. 它使用 LLVM 新的 BPF 后端来构建程序。使用(新的) `bpf()` 系统调用去注入结果字节码,并且按匹配的命名惯例自动附加探针。
|
||||
3. 从内核管道读取原生输出。
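下面是一个最小的示意性片段(探针名 `my_probe` 只是一个假设的例子,并非真实工具中的代码),演示当探针函数不符合 `kprobe__` 命名惯例时,如何用 bcc 的 `attach_kprobe()` 显式附加,并沿用同样的 `trace_readline()` 读取输出:

```
from bcc import BPF

bpf_text = """
#include <uapi/linux/ptrace.h>

// 函数名不符合 kprobe__ 命名惯例,因此需要在 Python 端显式附加
int my_probe(struct pt_regs *ctx)
{
    bpf_trace_printk("inet_listen called!\\n");
    return 0;
}
"""

b = BPF(text=bpf_text)
# 显式地把 my_probe 挂到内核函数 inet_listen 的入口上
b.attach_kprobe(event="inet_listen", fn_name="my_probe")

while True:
    print b.trace_readline()
```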
|
||||
|
||||
注意到 `bpf_trace_printk` 调用了吗?这是一个内核的 `printk()` 精简版的 debug 函数。使用时,它产生一个跟踪信息到 `/sys/kernel/debug/tracing/trace_pipe` 中的专门的内核管道。就像名字所暗示的那样,这是一个管道。如果多个读取者消费它,仅有一个将得到一个给定的行。对生产系统来说,这样是不合适的。
|
||||
注意:eBPF 的后端 LLVM 还很新。如果你认为你遇到了一个 bug,你也许应该去升级。
|
||||
|
||||
幸运的是,Linux 3.19 引入了对消息传递的映射以及 Linux 4.4 带来了任意 perf 事件支持。在这篇文章的后面部分,我将演示 perf 事件。
|
||||
注意到 `bpf_trace_printk` 调用了吗?这是一个内核的 `printk()` 精简版的调试函数。使用时,它产生跟踪信息到一个专门的内核管道 `/sys/kernel/debug/tracing/trace_pipe` 。就像名字所暗示的那样,这是一个管道。如果多个读取者在读取它,仅有一个将得到一个给定的行。对生产系统来说,这样是不合适的。
|
||||
|
||||
幸运的是,Linux 3.19 引入了对消息传递的映射,以及 Linux 4.4 带来了对任意 perf 事件的支持。在这篇文章的后面部分,我将演示基于 perf 事件的方式。
|
||||
|
||||
```
|
||||
# From a first console
|
||||
@ -90,11 +92,11 @@ ubuntu@bcc:~$ nc -l 0 4242
|
||||
|
||||
```
|
||||
|
||||
Yay!
|
||||
搞定!
|
||||
|
||||
### 抓取 backlog
|
||||
|
||||
现在,让我们输出一些很容易访问到的数据,叫做 “backlog”。backlog 是正在建立 TCP 连接的、即将成为 `accept()` 的数量。
|
||||
现在,让我们输出一些很容易访问到的数据,比如说 “backlog”。backlog 是已经建立、正在等待被 `accept()` 的 TCP 连接的数量。
|
||||
|
||||
只要稍微调整一下 `bpf_trace_printk`:
|
||||
|
||||
@ -111,26 +113,24 @@ bpf_trace_printk("Listening with with up to %d pending connections!\\n", backlog
|
||||
|
||||
```
|
||||
|
||||
`nc` 是一个单个的连接程序,因此,在 1\. Nginx 或者 Redis 上的 backlog 在这里将输出 128 。但是,那是另外一件事。
|
||||
`nc` 是个单连接程序,因此,其 backlog 是 1。而 Nginx 或者 Redis 上的 backlog 将在这里输出 128 。但是,那是另外一件事。
|
||||
|
||||
简单吧?现在让我们获取它的端口。
|
||||
|
||||
### 抓取端口和 IP
|
||||
|
||||
正在研究的 `inet_listen` 的信息来源于内核,我们知道它需要从 `socket` 对象中取得 `inet_sock`。就像从源头拷贝,然后插入到跟踪器的开始处:
|
||||
研究一下内核中 `inet_listen` 的源代码,我们知道它需要从 `socket` 对象中取得 `inet_sock`。只需要从内核源代码中把相应的代码拷贝过来,然后插入到跟踪器的开始处:
|
||||
|
||||
```
|
||||
// cast types. Intermediate cast not needed, kept for readability
|
||||
struct sock *sk = sock->sk;
|
||||
struct inet_sock *inet = inet_sk(sk);
|
||||
|
||||
```
|
||||
|
||||
端口现在可以在按网络字节顺序(就是“从小到大”的顺序)的 `inet->inet_sport` 访问到。很容易吧!因此,我们将替换为 `bpf_trace_printk`:
|
||||
端口现在可以按网络字节顺序(即大端序,高位字节在前)从 `inet->inet_sport` 访问到。很容易吧!因此,我们只需要把 `bpf_trace_printk` 替换为:
|
||||
|
||||
```
|
||||
bpf_trace_printk("Listening on port %d!\\n", inet->inet_sport);
|
||||
|
||||
```
|
||||
|
||||
然后运行:
|
||||
@ -141,10 +141,9 @@ ubuntu@bcc:~/dev/listen-evts$ sudo /python tcv4listen.py
|
||||
R1 invalid mem access 'inv'
|
||||
...
|
||||
Exception: Failed to load BPF program kprobe__inet_listen
|
||||
|
||||
```
|
||||
|
||||
除了它不再简单之外,Bcc 现在提升了 _许多_。直到写这篇文章的时候,几个问题已经被处理了,但是并没有全部处理完。这个错误意味着内核检查器可以证实程序中的内存访问是正确的。看显式的类型转换。我们需要一点帮助,以使访问更加明确。我们将使用 `bpf_probe_read` 可信任的函数去读取一个任意内存位置,虽然为了确保,要像如下的那样做一些必需的检查:
|
||||
只是事情并没有那么简单。bcc 最近一直在快速改进,在写这篇文章的时候,有几个问题已经被处理了,但是并没有全部处理完。这个错误意味着内核检查器无法证实程序中的内存访问是正确的。看到那个显式的类型转换了吗?我们还需要一点帮助,以使访问更加明确。我们将使用可信任的 `bpf_probe_read` 函数去读取任意内存位置,它会确保所有必要的检查都像下面这样完成:
|
||||
|
||||
```
|
||||
// Explicit initialization. The "=0" part is needed to "give life" to the variable on the stack
|
||||
@ -156,7 +155,7 @@ bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));
|
||||
|
||||
```
|
||||
|
||||
使用 `inet->inet_rcv_saddr` 读取 IPv4 边界地址,和它基本上是相同的。如果我把这些一起放上去,我们将得到 backlog,端口和边界 IP:
|
||||
读取 IPv4 的绑定地址和它基本上是相同的,使用 `inet->inet_rcv_saddr`。如果我把这些都放到一起,我们将得到 backlog、端口和绑定 IP:
|
||||
|
||||
```
|
||||
from bcc import BPF
|
||||
@ -198,7 +197,7 @@ while True:
|
||||
|
||||
```
|
||||
|
||||
运行一个测试,输出的内容像下面这样:
|
||||
测试运行输出的内容像下面这样:
|
||||
|
||||
```
|
||||
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
|
||||
@ -206,15 +205,15 @@ while True:
|
||||
|
||||
```
|
||||
|
||||
你的监听是在本地主机上提供的。因为没有处理为友好的输出,这里的地址以 16 进制的方式显示,并且那是有线的。并且它很酷。
|
||||
这证明你的监听确实是在本地主机上的。因为没有处理成友好的输出格式,这里的地址以十六进制的方式显示,不过这是符合预期的,而且它很酷。
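作为补充,下面是一个小的示意性片段(不是本文工具中的代码,沿用文中其余部分的 Python 2 风格),演示如何在用户空间把这种十六进制地址转换成友好的点分十进制格式:

```
import socket
import struct

# 探针里已经用 ntohl() 把地址转成了主机字节序的 32 位整数
# 例如 0x7f000001 对应 "127.0.0.1"
def friendly_addr(laddr):
    return socket.inet_ntoa(struct.pack("!I", laddr))

print friendly_addr(0x7f000001)
```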
|
||||
|
||||
注意:你可能想知道为什么 `ntohs` 和 `ntohl` 可以从 BPF 中被调用,即便它们并不是可信任的函数。这是因为它们是宏,以及来自 “.h” 文件的内联函数,而且在写这篇文章的时候,其中的一个小 bug 已经[修复了][14]。
|
||||
|
||||
全部完成之后,再来一个代码片断:我们希望获取相关的容器。在一个网络环境中,那意味着我们希望取得网络的命名空间。网络命名空间是一个容器的构建块,它允许它们拥有独立的网络。
|
||||
基本都完成了,只剩下一块:我们希望获取相关的容器。在一个网络环境中,那意味着我们希望取得网络命名空间。网络命名空间是容器的一个构建块,它让容器能够拥有隔离的网络。
|
||||
|
||||
### 抓取网络命名空间:被迫引入的 perf 事件
|
||||
|
||||
在用户空间中,网络命名空间可以通过检查 `/proc/PID/ns/net` 的目标来确定,它将看起来像 `net:[4026531957]` 这样。方括号中的数字是节点的网络空间编号。这就是说,我们可以通过 `/proc` 来取得,但是这并不是好的方式,我们或许可以临时处理时用一下。我们可以从内核中直接抓取节点编号。幸运的是,那样做很容易:
|
||||
在用户空间中,网络命名空间可以通过检查 `/proc/PID/ns/net` 的目标来确定,它看起来像 `net:[4026531957]` 这样。方括号中的数字是网络命名空间的 inode 编号。这就是说,我们可以通过 `/proc` 来取得它,但这并不是好的方式,或许只能临时用一下。更好的做法是直接从内核中抓取这个 inode 编号。幸运的是,那样做很容易:
|
||||
|
||||
```
|
||||
// Create and populate the variable
|
||||
@ -231,7 +230,6 @@ netns = sk->__sk_common.skc_net.net->ns.inum;
|
||||
|
||||
```
|
||||
bpf_trace_printk("Listening on %x %d with %d pending connections in container %d\\n", ntohl(laddr), ntohs(lport), backlog, netns);
|
||||
|
||||
```
|
||||
|
||||
如果你尝试去运行它,你将看到一些令人难解的错误信息:
|
||||
@ -240,24 +238,19 @@ bpf_trace_printk("Listening on %x %d with %d pending connections in container %d
|
||||
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
|
||||
error: in function kprobe__inet_listen i32 (%struct.pt_regs*, %struct.socket*, i32)
|
||||
too many args to 0x1ba9108: i64 = Constant<6>
|
||||
|
||||
```
|
||||
|
||||
clang 想尝试去告诉你的是 “Hey pal,`bpf_trace_printk` 只能带四个参数,你刚才给它传递了 5 个“。在这里我不打算继续追究细节了,但是,那是 BPF 的一个限制。如果你想继续去深入研究,[这里是一个很好的起点][15]。
|
||||
clang 想尝试去告诉你的是 “嗨,哥们,`bpf_trace_printk` 只能带四个参数,你刚才给它传递了 5 个”。在这里我不打算继续追究细节了,但是,那是 BPF 的一个限制。如果你想继续去深入研究,[这里是一个很好的起点][15]。
|
||||
|
||||
去修复它的唯一方式是去 … 停止调试并且准备投入使用。因此,让我们开始吧(确保运行在内核版本为 4.4 的 Linux 系统上)我将使用 perf 事件,它支持传递任意大小的结构体到用户空间。另外,只有我们的读者可以获得它,因此,多个没有关系的 eBPF 程序可以并发产生数据而不会出现问题。
|
||||
修复它的唯一方式是 … 停止调试,并且让它做好投入使用的准备。因此,让我们开始吧(确保运行在 4.4 或更新版本内核的 Linux 系统上)。我将使用 perf 事件,它支持传递任意大小的结构体到用户空间。另外,只有我们的读取程序可以获得它,因此,多个互不相关的 eBPF 程序可以并发产生数据而不会出现问题。
|
||||
|
||||
要使用它,我们需要:
|
||||
|
||||
1. 定义一个结构体
|
||||
|
||||
2. 声明事件
|
||||
|
||||
3. 推送事件
|
||||
|
||||
4. 在 Python 端重新声明事件(这一步以后将不再需要)
|
||||
|
||||
5. 消费和格式化事件
|
||||
5. 处理和格式化事件
|
||||
|
||||
这看起来似乎很多,其实并不多,看下面示例:
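在进入完整示例之前,这里先给出一个最小的示意性草图(并非完整的工具代码,结构体字段只是为演示而假设的;`event()` 和 `perf_buffer_poll()` 是较新版本 bcc 提供的辅助接口),大致展示上面这几步在 bcc 中的样子:C 端用 `BPF_PERF_OUTPUT` 声明事件并用 `perf_submit()` 推送,Python 端用 `open_perf_buffer()` 注册回调并轮询:

```
from bcc import BPF

bpf_text = """
#include <net/inet_sock.h>
#include <bcc/proto.h>

// 1. 定义一个结构体(字段仅为演示而假设)
struct listen_evt_t {
    u64 laddr;
    u64 lport;
    u64 netns;
    u64 backlog;
};

// 2. 声明事件
BPF_PERF_OUTPUT(listen_evt);

int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
{
    struct listen_evt_t evt = {0};
    evt.backlog = backlog;
    // 3. 推送事件
    listen_evt.perf_submit(ctx, &evt, sizeof(evt));
    return 0;
}
"""

b = BPF(text=bpf_text)

# 4. 较新版本的 bcc 会根据 C 结构体自动生成事件类型,不再需要在 Python 端重新声明
# 5. 处理和格式化事件
def print_event(cpu, data, size):
    event = b["listen_evt"].event(data)
    print "backlog=%d" % event.backlog

b["listen_evt"].open_perf_buffer(print_event)
while True:
    b.perf_buffer_poll()
```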
|
||||
|
||||
@ -314,7 +307,7 @@ while True:
|
||||
|
||||
```
|
||||
|
||||
来试一下吧。在这个示例中,我有一个 redis 运行在一个 Docker 容器中,并且 nc 在主机上:
|
||||
来试一下吧。在这个示例中,我有一个 redis 运行在一个 Docker 容器中,并且 `nc` 运行在主机上:
|
||||
|
||||
```
|
||||
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
|
||||
@ -326,7 +319,7 @@ Listening on 7f000001 6588 with 1 pending connections in container 4026531957
|
||||
|
||||
### 结束语
|
||||
|
||||
现在,所有事情都可以在内核中使用 eBPF 将任何函数的调用设置为触发事件,并且在学习 eBPF 时,你将看到了我所遇到的大多数的问题。如果你希望去看这个工具的所有版本,像 IPv6 支持这样的一些技巧,看一看 [https://github.com/iovisor/bcc/blob/master/tools/solisten.py][16]。它现在是一个官方的工具,感谢 bcc 团队的支持。
|
||||
就是这些了。现在,你可以使用 eBPF 将内核中任何函数的调用转换为事件,并且你也看到了我在学习 eBPF 时所遇到的大多数问题。如果你希望去看这个工具的完整版本,以及像 IPv6 支持这样的一些技巧,可以看一看 [https://github.com/iovisor/bcc/blob/master/tools/solisten.py][16]。它现在是一个官方的工具,这要感谢 bcc 团队的支持。
|
||||
|
||||
更进一步地去学习,你可能需要去关注 Brendan Gregg 的博客,尤其是 [关于 eBPF 映射和统计的文章][17]。他是这个项目的主要贡献人之一。
|
||||
|
||||
@ -337,7 +330,7 @@ via: https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-
|
||||
|
||||
作者:[Jean-Tiare Le Bigot][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,12 +1,13 @@
|
||||
# 将 DEB 软件包转换成 Arch Linux 软件包
|
||||
将 DEB 软件包转换成 Arch Linux 软件包
|
||||
============
|
||||
|
||||

|
||||
|
||||
我们已经学会了如何[**为多个平台构建包**][1],以及如何从[**源代码构建包**][2]。 今天,我们将学习如何将 DEB 包转换为 Arch Linux 包。 您可能会问,AUR 是这个星球上的大型软件存储库,几乎所有的软件都可以在其中使用。 为什么我需要将 DEB 软件包转换为 Arch Linux 软件包? 这的确没错! 但是,某些软件包无法编译(封闭源代码软件包),或者由于各种原因(如编译时出错或文件不可用)而无法从 AUR 生成。 或者,开发人员懒得在 AUR 中构建一个包,或者他/她不想创建 AUR 包。 在这种情况下,我们可以使用这种快速但有点粗糙的方法将 DEB 包转换成 Arch Linux 包。
|
||||
我们已经学会了如何[为多个平台构建包][1],以及如何从[源代码构建包][2]。今天,我们将学习如何将 DEB 包转换为 Arch Linux 包。您可能会问,AUR 是这个星球上的大型软件存储库,几乎所有的软件都可以在其中找到,为什么我还需要将 DEB 软件包转换为 Arch Linux 软件包?这的确没错!但是,某些软件包无法编译(闭源软件包),或者由于各种原因(如编译时出错或文件不可用)而无法从 AUR 构建。或者,开发人员懒得在 AUR 中构建一个包,或者他/她不想创建 AUR 包。在这种情况下,我们可以使用这种快速但有点粗糙的方法将 DEB 包转换成 Arch Linux 包。
|
||||
|
||||
### Debtap - 将 DEB 包转换成 Arch Linux 包
|
||||
|
||||
为此,我们将使用名为 “Debtap” 的实用程序。 它代表了 **DEB** **T** o **A** rch (Linux) **P** ackage。 Debtap 在 AUR 中可以使用,因此您可以使用 AUR 辅助工具(如 [Pacaur][3],[Packer][4] 或 [Yaourt][5] )来安装它。
|
||||
为此,我们将使用名为 “Debtap” 的实用程序。它代表了 **DEB** **T**o **A**rch (Linux) **P**ackage。Debtap 在 AUR 中可以使用,因此您可以使用 AUR 辅助工具(如 [Pacaur][3]、[Packer][4] 或 [Yaourt][5])来安装它。
|
||||
|
||||
使用 pacaur 安装 debtap 运行:
|
||||
|
||||
@ -26,7 +27,7 @@ packer -S debtap
|
||||
yaourt -S debtap
|
||||
```
|
||||
|
||||
同时,你的 Arch 系统也应该已经安装好了 **bash**, **binutils** ,**pkgfile** 和 **fakeroot** 包。
|
||||
同时,你的 Arch 系统也应该已经安装好了 `bash`、`binutils`、`pkgfile` 和 `fakeroot` 包。
|
||||
|
||||
在安装 Debtap 和所有上述依赖关系之后,运行以下命令来创建/更新 pkgfile 和 debtap 数据库。
|
||||
|
||||
@ -73,11 +74,11 @@ sudo debtap -u
|
||||
==> All steps successfully completed!
|
||||
```
|
||||
|
||||
你至少需要运行上述命令一次
|
||||
你至少需要运行上述命令一次。
|
||||
|
||||
现在是时候开始转换包了。
|
||||
|
||||
比如说要使用 debtap 转换包 **Quadrapassel**,你可以这样做:
|
||||
比如说要使用 debtap 转换包 Quadrapassel,你可以这样做:
|
||||
|
||||
```
|
||||
debtap quadrapassel_3.22.0-1.1_arm64.deb
|
||||
@ -95,17 +96,17 @@ debtap quadrapassel_3.22.0-1.1_arm64.deb
|
||||
==> Generating .PKGINFO file...
|
||||
|
||||
:: Enter Packager name:
|
||||
**quadrapassel**
|
||||
quadrapassel
|
||||
|
||||
:: Enter package license (you can enter multiple licenses comma separated):
|
||||
**GPL**
|
||||
GPL
|
||||
|
||||
*** Creation of .PKGINFO file in progress. It may take a few minutes, please wait...
|
||||
|
||||
Warning: These dependencies (depend = fields) could not be translated into Arch Linux packages names:
|
||||
gsettings-backend
|
||||
|
||||
== > Checking and generating .INSTALL file (if necessary)...
|
||||
==> Checking and generating .INSTALL file (if necessary)...
|
||||
|
||||
:: If you want to edit .PKGINFO and .INSTALL files (in this order), press (1) For vi (2) For nano (3) For default editor (4) For a custom editor or any other key to continue:
|
||||
|
||||
@ -118,25 +119,25 @@ gsettings-backend
|
||||
|
||||
**注**:Quadrapassel 在 Arch Linux 官方的软件库中早已可用,我只是用它来说明一下。
|
||||
|
||||
如果在包转化的过程中,你不想回答任何问题,使用 **-q** 略过除了编辑元数据的所有问题。
|
||||
如果在包转化的过程中,你不想回答任何问题,使用 `-q` 略过除了编辑元数据之外的所有问题。
|
||||
|
||||
```
|
||||
debtap -q quadrapassel_3.22.0-1.1_arm64.deb
|
||||
```
|
||||
|
||||
为了略过所有的问题(不推荐),使用 -Q。
|
||||
为了略过所有的问题(不推荐),使用 `-Q`。
|
||||
|
||||
```
|
||||
debtap -Q quadrapassel_3.22.0-1.1_arm64.deb
|
||||
```
|
||||
|
||||
转换完成后,您可以使用 “pacman” 在 Arch 系统中安装新转换的软件包,如下所示。
|
||||
转换完成后,您可以使用 `pacman` 在 Arch 系统中安装新转换的软件包,如下所示。
|
||||
|
||||
```
|
||||
sudo pacman -U <package-name>
|
||||
```
|
||||
|
||||
显示帮助文档,使用 -h:
|
||||
显示帮助文档,使用 `-h`:
|
||||
|
||||
```
|
||||
$ debtap -h
|
||||
@ -154,7 +155,7 @@ Options:
|
||||
-P --P -Pkgbuild --Pkgbuild Generate a PKGBUILD file only
|
||||
```
|
||||
|
||||
这就是现在要讲的。希望这个工具有所帮助。如果你发现我们的指南有用,请花一点时间在你的社交、专业网络分享并在 OSTechNix 支持我们!
|
||||
这就是现在要讲的。希望这个工具有所帮助。如果你发现我们的指南有用,请花一点时间在你的社交、专业网络分享并支持我们!
|
||||
|
||||
更多的好东西来了。请继续关注!
|
||||
|
||||
@ -168,7 +169,7 @@ via: https://www.ostechnix.com/convert-deb-packages-arch-linux-packages/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[amwps290](https://github.com/amwps290)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,60 +1,59 @@
|
||||
|
||||
为什么 Linux 比 Windows 和 macOS 的安全性好
|
||||
为什么 Linux 比 Windows 和 macOS 更安全?
|
||||
======
|
||||
|
||||
> 多年前做出的操作系统选型终将影响到如今的企业安全。在三大主流操作系统当中,有一个能被称作最安全的。
|
||||
|
||||

|
||||
|
||||
企业投入了大量时间、精力和金钱来保障系统的安全性。最强的安全意识可能就是有一个安全的网络运行中心,肯定用上了防火墙以及反病毒软件,可能花费大量时间监控他们的网络,寻找可能表明违规的异常信号,就像 IDS、SIEM 和 NGFW 一样,他们部署了一个名副其实的防御阵列。
|
||||
企业投入了大量时间、精力和金钱来保障系统的安全性。其中安全意识最强的,可能建有一个安全运行中心(SOC);他们肯定用上了防火墙以及反病毒软件,或许还花费大量时间去监控他们的网络,寻找可能表明入侵的异常信号;靠着 IDS、SIEM 和 NGFW 之类的东西,他们部署了一个名副其实的防御阵列。
|
||||
|
||||
然而又有多少人想过数字化操作的基础之一:部署在员工的个人电脑上的操作系统?当选择桌面操作系统时,安全性是一个考虑的因素吗?
|
||||
然而又有多少人想过数字化操作的基础之一——部署在员工的个人电脑上的操作系统呢?当选择桌面操作系统时,安全性是一个考虑的因素吗?
|
||||
|
||||
这就产生了一个 IT 人士都应该能回答的问题:一般部署哪种操作系统最安全呢?
|
||||
|
||||
我们问了一些专家他们对于以下三种选项的看法:Windows,最复杂的平台也是最受欢迎的桌面操作系统;macOS X,基于 FreeBSD 的 Unix 操作系统,驱动着苹果的 Macintosh 系统运行;还有 Linux,这里我们指的是所有的 Linux 发行版以及与基于 Unix 的操作系统相关的系统。
|
||||
我们问了一些专家他们对于以下三种选择的看法:Windows,最复杂的平台也是最受欢迎的桌面操作系统;macOS X,基于 FreeBSD 的 Unix 操作系统,驱动着苹果的 Macintosh 系统运行;还有 Linux,这里我们指的是所有的 Linux 发行版以及与基于 Unix 的操作系统相关的系统。
|
||||
|
||||
### 怎么会这样
|
||||
|
||||
企业可能没有评估他们部署到工作人员的操作系统的安全性的一个原因是,他们多年前就已经做出了选择。退一步讲,所有操作系统都还算安全,因为侵入它们,窃取数据或安装恶意软件的业务还处于起步阶段。而且一旦选择了操作系统,就很难再想改变。很少有 IT 组织希望将全球分散的员工队伍转移到全新的操作系统上。唉,他们已经受够了把用户搬到一个选好的新版本操作系统时的负面反响。
|
||||
企业可能没有评估他们部署给工作人员的操作系统的安全性的一个原因是,他们多年前就已经做出了选择。退一步讲,所有操作系统都还算安全,因为侵入它们、窃取数据或安装恶意软件的牟利方式还处于起步阶段。而且一旦选择了操作系统,就很难再改变。很少有 IT 组织想要面对将全球分散的员工队伍转移到全新的操作系统上的痛苦。唉,他们已经受够了把用户搬到一个现有的操作系统的新版本时的负面反响。
|
||||
|
||||
还有,重新考虑它是高明的吗?这三款领先的桌面操作系统在安全方面的差异是否足以值得我们去做出改变呢?
|
||||
还有,重新考虑操作系统是高明的吗?这三款领先的桌面操作系统在安全方面的差异是否足以值得我们去做出改变呢?
|
||||
|
||||
当然商业系统面临的威胁近几年已经改变了。攻击变得成熟多了。曾经支配了公众想象力的单枪匹马的青少年黑客已经被组织良好的犯罪分子网络以及具有庞大计算资源的政府资助组织的网络所取代。
|
||||
|
||||
像你们许多人一样,我有过很多那时的亲身经历:我曾经在许多 Windows 电脑上被恶意软件和病毒感染,我甚至被 宏病毒感染了 Mac 上的文件。最近,一个广泛传播的自动黑客绕开了网站的保护程序并用恶意软件感染了它。这种恶意软件的影响一开始是隐形的,甚至有些东西你没注意,直到恶意软件最终深深地植入系统以至于它的性能开始变差。一件有关病毒蔓延的震惊之事是不法之徒从来没有特定针对过我;当今世界,用僵尸网络攻击 100,000 台电脑容易得就像一次攻击几台电脑一样。
|
||||
像你们许多人一样,我有过很多那时的亲身经历:我曾经在许多 Windows 电脑上被恶意软件和病毒感染,我甚至被宏病毒感染了 Mac 上的文件。最近,一个广泛传播的自动黑客攻击绕开了网站的保护程序并用恶意软件感染了它。这种恶意软件的影响一开始是隐形的,甚至有些东西你没注意,直到恶意软件最终深深地植入系统以至于它的性能开始变差。一件有关病毒蔓延的震惊之事是不法之徒从来没有特定针对过我;当今世界,用僵尸网络攻击 100,000 台电脑容易得就像一次攻击几台电脑一样。
|
||||
|
||||
### 操作系统真的很重要吗?
|
||||
|
||||
给你的用户部署的那个操作系统确实对你的安全态度产生了影响,但那并不是一个可靠的安全措施。首先,现在的攻击很可能会发生,因为攻击者探测了你的用户,而不是你的系统。一项对参与过 DEFCON 会议黑客的[调查][1]表明“84%的人使用社交工程作为攻击策略的一部分。”部署安全操作系统只是一个重要的起点,但如果没有用户培训,强大的防火墙和持续的警惕性,即使是最安全的网络也会受到入侵。当然,用户下载的软件,扩展程序,实用程序,插件和其他看起来还好的软件总是有风险的,成为了恶意软件出现在系统上的一种途径.
|
||||
给你的用户部署哪种操作系统,确实会对你的安全态势产生影响,但那并不是一个可靠的安全措施。首先,现在的攻击很可能会发生,因为攻击者探测的是你的用户,而不是你的系统。一项对参加了 DEFCON 会议的黑客的[调查][1]表明,“84% 的人使用社交工程作为攻击策略的一部分。”部署安全的操作系统只是一个重要的起点,但如果没有用户培训、强大的防火墙和持续的警惕性,即使是最安全的网络也会受到入侵。当然,用户下载的软件、扩展程序、实用程序、插件和其他看起来还好的软件总是有风险的,它们成为了恶意软件进入系统的一种途径。
|
||||
|
||||
无论你选择哪种平台,保持你系统安全最好的方法之一就是确保立即应用了软件更新。一旦补丁正式发布,黑客就可以对其进行反向工程并找到一种新的漏洞,以便在下一波攻击中使用。
|
||||
|
||||
而且别忘了最基本的操作。别用 root 权限,别授权用户连接到网络中的老服务器上。教您的用户如何挑选一个真正的好密码并且使用例如 [1Password][2] 这样的工具,以便在每个他们使用的帐户和网站上拥有不同的密码
|
||||
而且别忘了最基本的操作。别用 root 权限运行,别授权访客连接到网络中的老服务器上。教您的用户如何挑选一个真正的好密码,并且使用例如 [1Password][2] 这样的工具,以便在他们使用的每个帐户和网站上拥有不同的密码。
|
||||
|
||||
因为底线是您对系统做出的每一个决定都会影响您的安全性,即使您的用户工作使用的操作系统也是如此。
|
||||
|
||||
### Windows,流行之选
|
||||
|
||||
若你是一个安全管理人员,很可能文章中提出的问题就会变成这样:是否我们远离微软的 Windows 会更安全呢?说 Windows 主导商业市场都是低估事实了。[NetMarketShare][4] 估计互联网上 88% 的电脑令人震惊地运行着 Windows 的某个版本。
|
||||
若你是一个安全管理人员,很可能文章中提出的问题就会变成这样:是否我们远离微软的 Windows 会更安全呢?说 Windows 主导企业市场都是低估事实了。[NetMarketShare][4] 估计互联网上 88% 的电脑令人震惊地运行着 Windows 的某个版本。
|
||||
|
||||
如果你的系统在这 88% 之中,你可能知道微软会继续加强 Windows 系统的安全性。不断重写其改进或者重新改写了其代码库,增加了它的反病毒软件系统,改进了防火墙以及实现了沙箱架构,这样在沙箱里的程序就不能访问系统的内存空间或者其他应用程序。
|
||||
如果你的系统在这 88% 之中,你可能知道微软会继续加强 Windows 系统的安全性。它不断重写或重构其代码库,增加了自己的反病毒软件系统,改进了防火墙,并实现了沙箱架构,这样沙箱里的程序就不能访问系统的内存空间或者其他应用程序。
|
||||
|
||||
但可能 Windows 的流行本身就是个问题,操作系统的安全性可能很大程度上依赖于装机用户量的规模。对于恶意软件作者来说,Windows 提供了大的施展平台。专注其中可以让他们的努力发挥最大作用。
|
||||
|
||||
像 Troy Wilkinson,Axiom Cyber Solutions 的 CEO 解释的那样,“Windows 总是因为很多原因而导致安全性保障来的最晚,主要是因为消费者的采用率。由于市场上大量基于 Windows 的个人电脑,黑客历来最有针对性地将这些系统作为目标。”
|
||||
就像 Axiom Cyber Solutions 的 CEO Troy Wilkinson 解释的那样,“Windows 在安全性上总是排在最后,这有很多原因,主要是因为消费者的采用率。由于市场上存在大量基于 Windows 的个人电脑,黑客历来都把这些系统作为首要目标。”
|
||||
|
||||
可以肯定地说,从梅丽莎(Melissa)病毒到 WannaCry 及其之后,世界上许多广为人知的恶意软件都是针对 Windows 系统的。
|
||||
|
||||
### macOS X 以及通过隐匿实现的安全
|
||||
|
||||
如果最流行的操作系统总是成为大目标,那么用一个不流行的操作系统能确保安全吗?这个主意是老法新用——而且是完全不可信的概念——“通过隐匿实现的安全”,这秉承了让软件内部运作保持专有,从而不为人知是抵御攻击的最好方法的理念。
|
||||
如果最流行的操作系统总是成为大目标,那么用一个不流行的操作系统能确保安全吗?这个主意是老法新用——而且是完全不可信的概念——“通过隐匿实现的安全”,这秉承了“让软件内部运作保持专有,从而不为人知是抵御攻击的最好方法”的理念。
|
||||
|
||||
Wilkinson 坦言,macOS X “比 Windows 更安全”,但他急于补充说,“macOS 曾被认为是一个安全漏洞很小的完全安全的操作系统,但近年来,我们看到黑客制造了攻击苹果系统的额外漏洞。”
|
||||
Wilkinson 坦言,macOS X “比 Windows 更安全”,但他马上补充说,“macOS 曾被认为是一个安全漏洞很小的完全安全的操作系统,但近年来,我们看到黑客制造了攻击苹果系统的额外漏洞。”
|
||||
|
||||
换句话说,攻击者会扩大活动范围而不会无视 Mac 领域。
|
||||
|
||||
Comparitech 的安全研究员 Lee Muson 说,在选择更安全的操作系统时,“macOS 很可能是被选中的目标”,但他提醒说,这一想法并不令人费解。它的优势在于“它仍然受益于通过隐匿实现的安全感和微软提供的更大的目标。”
|
||||
Comparitech 的安全研究员 Lee Muson 说,在选择更安全的操作系统时,“macOS 很可能是被选中的目标”,但他提醒说,这一想法并不令人费解。它的优势在于“它仍然受益于通过隐匿实现的安全感和微软提供的操作系统是个更大的攻击目标。”
|
||||
|
||||
Wolf Solutions 公司的 Joe Moore 给予了苹果更多的信任,称“现成的 macOS X 在安全方面有着良好的记录,部分原因是它不像 Windows 那么广泛,而且部分原因是苹果公司在安全问题上干得不错。”
|
||||
|
||||
@ -66,17 +65,17 @@ Wolf Solutions 公司的 Joe Moore 给予了苹果更多的信任,称“现成
|
||||
|
||||
像 Moore 解释的那样,“Linux 有可能是最安全的,但要求用户是资深用户。”所以,它不是针对所有人的。
|
||||
|
||||
将安全性作为主要功能的 Linux 发行版包括 Parrot Linux,这是一个基于 Debian 的发行版,Moore 说,它提供了许多与安全相关开箱即用的工具。
|
||||
将安全性作为主要功能的 Linux 发行版包括 [Parrot Linux][5],这是一个基于 Debian 的发行版,Moore 说,它提供了许多与安全相关开箱即用的工具。
|
||||
|
||||
当然,一个重要的区别是 Linux 是开源的。Simplex Solutions 的 CISO Igor Bidenko 说,编码人员可以阅读和评论彼此工作的现实看起来像是一场安全噩梦,但这确实是让 Linux 如此安全的重要原因。 “Linux 是最安全的操作系统,因为它的源代码是开放的。任何人都可以查看它,并确保没有错误或后门。”
|
||||
当然,一个重要的区别是 Linux 是开源的。Simplex Solutions 的 CISO Igor Bidenko 说,编码人员可以阅读和审查彼此工作的现实看起来像是一场安全噩梦,但这确实是让 Linux 如此安全的重要原因。 “Linux 是最安全的操作系统,因为它的源代码是开放的。任何人都可以查看它,并确保没有错误或后门。”
|
||||
|
||||
Wilkinson 阐述说:“Linux 和基于 Unix 的操作系统具有较少的信息安全领域已知的、可利用的安全缺陷。技术社区对 Linux 代码进行了审查,该代码有助于提高安全性:通过进行这么多的监督,易受攻击之处、漏洞和威胁就会减少。”
|
||||
Wilkinson 阐述说:“Linux 和基于 Unix 的操作系统具有较少的在信息安全领域已知的、可利用的安全缺陷。技术社区对 Linux 代码进行了审查,该代码有助于提高安全性:通过进行这么多的监督,易受攻击之处、漏洞和威胁就会减少。”
|
||||
|
||||
这是一个微妙的而违反直觉的解释,但是通过让数十人(有时甚至数百人)通读操作系统中的每一行代码,代码实际上更加健壮,并且发布漏洞错误的机会减少了。这与 PC World 为何出来说 Linux 更安全有很大关系。正如 Katherine Noyes 解释的那样,“微软可能吹捧它的大型付费开发者团队,但团队不太可能与基于全球的 Linux 用户开发者进行比较。 安全只能通过所有额外的关注获益。”
|
||||
这是一个微妙而违反直觉的解释,但是通过让数十人(有时甚至数百人)通读操作系统中的每一行代码,代码实际上更加健壮,并且发布漏洞错误的机会减少了。这与 《PC World》 为何出来说 Linux 更安全有很大关系。正如 Katherine Noyes [解释][6]的那样,“微软可能吹捧它的大型的付费开发者团队,但团队不太可能与基于全球的 Linux 用户开发者进行比较。 安全只能通过所有额外的关注获益。”
|
||||
|
||||
另一个被 《PC World》举例的原因是 Linux 更好的用户特权模式:Windows 用户“一般被默认授予管理员权限,那意味着他们几乎可以访问系统中的一切,”Noye 的文章讲到。Linux,反而很好地限制了“root”权限。
|
||||
另一个被 《PC World》举例的原因是 Linux 更好的用户特权模式:Noye 的文章讲到,Windows 用户“一般被默认授予管理员权限,那意味着他们几乎可以访问系统中的一切”。Linux,反而很好地限制了“root”权限。
|
||||
|
||||
Noyes 还指出,Linux 环境下的多样性可能比典型的 Windows 单一文化更好地对抗攻击:Linux 有很多不同的发行版。其中一些以其特别的安全关注点进行差异化。Comparitech 的安全研究员 Lee Muson 为 Linux 发行版提供了这样的建议:“Qubes OS 对于 Linux 来说是一个很好的出发点,现在你可以发现,爱德华·斯诺登的认可大大地掩盖了它自己极其不起眼的主张。”其他安全性专家指出了专门的安全 Linux 发行版,如 Tails Linux,它旨在直接从 USB 闪存驱动器或类似的外部设备安全地匿名运行。
|
||||
Noyes 还指出,Linux 环境下的多样性可能比典型的 Windows 单一文化更好地对抗攻击:Linux 有很多不同的发行版。其中一些以其特别的安全关注点而差异化。Comparitech 的安全研究员 Lee Muson 为 Linux 发行版提供了这样的建议:“[Qubes OS][7] 对于 Linux 来说是一个很好的出发点,现在你可以发现,[爱德华·斯诺登的认可][8]极大地超过了其谦逊的宣传。”其他安全性专家指出了专门的安全 Linux 发行版,如 [Tails Linux][9],它旨在直接从 USB 闪存驱动器或类似的外部设备安全地匿名运行。
|
||||
|
||||
### 构建安全趋势
|
||||
|
@ -1,58 +1,57 @@
|
||||
|
||||
|
||||
如何使用树莓派制作一个数字针孔摄像头
|
||||
======
|
||||
|
||||

|
||||
在 2015 年底的时候,树莓派基金会发布了一个非常小的 [Raspberry Pi Zero][1],这让大家感到很意外。更夸张的是,他们随 MagPi 杂志一起 [免费赠送][2]。我看到这个消息后立即冲出去到处找报刊亭,直到我在这一区域的某处找到最后两份。实际上我还没有想好如何去使用它们,但是我知道,因为它们非常小,所以,它们可以做很多全尺寸树莓派没法做的一些项目。
|
||||
> 学习如何使用一个树莓派 Zero、高清网络摄像头和一个空的粉盒来搭建一个简单的相机。
|
||||
|
||||

|
||||
|
||||
在 2015 年底的时候,树莓派基金会发布了一个让大家很惊艳的非常小的 [树莓派 Zero][1]。更夸张的是,他们随 MagPi 杂志一起 [免费赠送][2]。我看到这个消息后立即冲出去到处找报刊亭,直到我在这一地区的某处找到最后两份。实际上我还没有想好如何去使用它们,但是我知道,因为它们非常小,所以,它们可以做很多全尺寸树莓派没法做的一些项目。
|
||||
|
||||
![Raspberry Pi Zero][4]
|
||||
|
||||
从 MagPi 杂志上获得的树莓派 Zero。CC BY-SA.4.0。
|
||||
*从 MagPi 杂志上获得的树莓派 Zero。CC BY-SA.4.0。*
|
||||
|
||||
因为我对天文摄影非常感兴趣,我以前还改过一台微软出的 LifeCam Cinema 高清网络摄像头,拆掉了它的外壳、镜头、以及红外滤镜,露出了它的 [CCD 芯片][5]。我把它定制为我的 Celestron 天文望远镜的目镜。用它我捕获到了令人难以置信的木星照片、月球上的陨石坑、以及太阳黑子的特写镜头(使用了适当的 Baader 安全保护膜)。
|
||||
因为我对天文摄影非常感兴趣,我以前还改造过一台微软出的 LifeCam Cinema 高清网络摄像头,拆掉了它的外壳、镜头、以及红外滤镜,露出了它的 [CCD 芯片][5]。我把它定制为我的 Celestron 天文望远镜的目镜。用它我捕获到了令人难以置信的木星照片、月球上的陨石坑、以及太阳黑子的特写镜头(使用了适当的 Baader 安全保护膜)。
|
||||
|
||||
在那之前,我甚至还在我的使用胶片的 SLR 摄像机上,通过在镜头盖(这个盖子就是在摄像机上没有安装镜头时,用来保护摄像机的内部元件的那个盖子)上钻一个小孔来变成一个 [针孔摄像头][6],将这个钻了小孔的盖子,盖到一个汽水罐上切下来的小圆盘上,以提供一个针孔。碰巧有一天,在我的桌子上发现了一个可以用来做针孔体的盖子,随后我将它改成了用于天文摄像的网络摄像头。我很好奇这个网络摄像头是否有从针孔盖子后面捕获低照度图像的能力。我花了一些时间使用 [GNOME Cheese][7] 应用程序,去验证这个针孔摄像头确实是个可行的创意。
|
||||
在那之前,我甚至还在我的使用胶片的 SLR 摄像机上,通过在镜头盖(这个盖子就是在摄像机上没有安装镜头时,用来保护摄像机内部元件的那个盖子)上钻一个小孔,再盖上一个从汽水罐上切下来、钻有针孔的小圆盘,把它变成了一个 [针孔摄像机][6]。碰巧有一天,这个用作针孔的镜头盖和那个为天文摄像改装的网络摄像头同时放在了我的桌子上。我很好奇这个网络摄像头是否有能力从针孔盖子后面捕获低照度的图像。我花了一些时间使用 [GNOME Cheese][7] 应用程序,去验证这个针孔摄像头确实是个可行的创意。
|
||||
|
||||
自从有了这个想法,我就有了树莓派 Zero 的一个用法!针孔摄像机一般都非常小,除了曝光时间和胶片的 ISO 速率外,一般都不提供其它的控制选项。数字摄像机就不一样了,它至少有 20 多个按钮和成百上千的设置菜单。我的数字针孔摄像头的目标是真实反映天文摄像的传统风格,设计一个没有控制选项的极简设备,甚至连曝光时间控制也没有。
|
||||
|
||||
用树莓派 Zero、高清网络镜头和空粉盒设计的数字针孔摄像头,是我设计的 [一系列][9] 针孔摄像头的 [第一个项目][8]。现在,我们开始来制作它。
|
||||
用树莓派 Zero、高清网络镜头和空的粉盒设计的数字针孔摄像头,是我设计的 [一系列][9] 针孔摄像头的 [第一个项目][8]。现在,我们开始来制作它。
|
||||
|
||||
### 硬件
|
||||
|
||||
因为我手头已经有了一个树莓派 Zero,为完成这个项目我还需要一个网络摄像头。这个树莓派 Zero 在英国的零售价是 4 英镑,我希望这个项目其它部件的价格也差不多是这个水平。花费 30 英镑买来的摄像头安装在一个 4 英镑的计算机主板上,让我感觉有些不平衡。显而易见的答案是前往一个知名的拍卖网站,去找一些二手的网络摄像头。不久之后,我仅花费了 1 英镑再加一些运费,就获得了一个普通的高清摄像头。之后,在 Fedora 上做了一些测试,以确保它可以正常使用,然后我拆掉了它的外壳,以检查它的电子元件的大小是否适合我的项目。
|
||||
|
||||
|
||||
![Hercules DualPix HD webcam][11]
|
||||
|
||||
Hercules DualPix 高清网络摄像头,它将被解剖以提取它的电路板和 CCD 图像传感器。CC BY-SA 4.0.
|
||||
*Hercules DualPix 高清网络摄像头,它将被解剖以提取它的电路板和 CCD 图像传感器。CC BY-SA 4.0.*
|
||||
|
||||
接下来,我需要一个安放摄像头的外壳。树莓派 Zero 电路板大小仅仅为 65mm x 30mm x 5mm。虽然网络摄像头的 CCD 芯片周围有一个用来安装镜头的塑料支架,但是,实际上它的电路板尺寸更小。我在家里找了一圈,希望能够找到一个适合盛放这两个极小的电路板的容器。最后,我发现我妻子的粉盒足够去安放树莓派的电路板。稍微做一些小调整,似乎也可以将网络摄像头的电路板放进去。
|
||||
|
||||
![Powder compact][13]
|
||||
|
||||
变成我的针孔摄像头外壳的粉盒。CC BY-SA 4.0.
|
||||
*变成我的针孔摄像头外壳的粉盒。CC BY-SA 4.0.*
|
||||
|
||||
我拆掉了网络摄像头外壳上的一些螺丝,取出了它的内部元件。网络摄像头外壳的大小反映了它的电路板的大小或 CCD 的位置。我很幸运,这个网络摄像头很小而且它的电路板的布局也很方便。因为我要做一个针孔摄像头,我需要取掉镜像,露出 CCD 芯片。
|
||||
我拆掉了网络摄像头外壳上的一些螺丝,取出了它的内部元件。网络摄像头外壳的大小反映了它的电路板的大小或 CCD 的位置。我很幸运,这个网络摄像头很小而且它的电路板的布局也很方便。因为我要做一个针孔摄像头,我需要取掉镜头,露出 CCD 芯片。
|
||||
|
||||
它的塑料外壳大约有 1 厘米高,它太高了没有办法放进粉盒里。我拆掉了电路板后面的螺丝,将它完全拆开,我认为将它放在盒子里有助于阻挡从缝隙中来的光线,因此,我用一把工艺刀将它修改成 4 毫米高,然后将它重新安装。我折弯了 LED 的支脚以降低它的高度。最后,我切掉了安装麦克风的塑料管,因为我不想采集声音。
|
||||
|
||||
![Bare CCD chip][15]
|
||||
|
||||
取下镜头以后,就可以看到裸露的 CCD 芯片了。圆柱形的塑料柱将镜头固定在合适的位置上,并阻挡 LED 光进入镜头破坏图像。CC BY-SA 4.0
|
||||
*取下镜头以后,就可以看到裸露的 CCD 芯片了。圆柱形的塑料柱将镜头固定在合适的位置上,并阻挡 LED 光进入镜头破坏图像。CC BY-SA 4.0*
|
||||
|
||||
网络摄像头有一个很长的带全尺寸插头的 USB 线缆,而树莓派 Zero 使用的是一个 Micro-USB 插座,因此,我需要一个 USB-to-Micro-USB 适配器。但是,使用适配器插入,这个树莓派将放不进这个粉盒中,更不用说还有将近一米长的 USB 线缆。因此,我用刀将 Micro-USB 适配器削开,切掉了它的 USB 插座并去掉这些塑料,露出连接到 Micro-USB 插头上的金属材料。同时也切掉了网络摄像头上大约 6 厘米长的 USB 电缆,并剥掉裹在它外面的锡纸,露出它的四根电线。我把它们直接焊接到 Micro-USB 插头上。现在网络摄像头可以插入到树莓派 Zero 上了,并且电线也可以放到粉盒中了。
|
||||
网络摄像头有一个很长的带全尺寸插头的 USB 线缆,而树莓派 Zero 使用的是一个 Micro-USB 插座,因此,我需要一个 USB 转 Micro-USB 的适配器。但是,使用适配器插入,这个树莓派将放不进这个粉盒中,更不用说还有将近一米长的 USB 线缆。因此,我用刀将 Micro-USB 适配器削开,切掉了它的 USB 插座并去掉这些塑料,露出连接到 Micro-USB 插头上的金属材料。同时也把网络摄像头的 USB 电缆切到大约 6 厘米长,并剥掉裹在它外面的锡纸,露出它的四根电线。我把它们直接焊接到 Micro-USB 插头上。现在网络摄像头可以插入到树莓派 Zero 上了,并且电线也可以放到粉盒中了。
|
||||
|
||||
![Modified USB plugs][17]
|
||||
|
||||
网络摄像头使用的 Micro-USB 插头已经剥掉了线,并直接焊接到触点上。这个插头现在插入到树莓派 Zero 后大约仅高出树莓派 1 厘米。CC BY-SA 4.0
|
||||
*网络摄像头使用的 Micro-USB 插头已经剥掉了线,并直接焊接到触点上。这个插头现在插入到树莓派 Zero 后大约仅高出树莓派 1 厘米。CC BY-SA 4.0*
|
||||
|
||||
最初,我认为到此为止,已经全部完成了电子设计部分,但是在测试之后,我意识到,如果摄像头没有捕获图像或者甚至没有加电我都不知道。我决定使用树莓派的 GPIO 针脚去驱动 LED 指示灯。一个黄色的 LED 表示网络摄像头控制软件已经运行,而一个绿色的 LED 表示网络摄像头正在捕获图像。我在 BCM 的 17 号和 18 号针脚上各自串接一个 300 欧姆的电阻,并将它们各自连接到 LED 的正极上,然后将 LED 的负极连接到一起并接入到公共地针脚上。
|
||||
|
||||
![LEDs][19]
|
||||
|
||||
LED 使用一个 300 欧姆的电阻连接到 GPIO 的 BCM 17 号和 BCM 18 号针脚上,负极连接到公共地针脚。CC BY-SA 4.0.
|
||||
*LED 使用一个 300 欧姆的电阻连接到 GPIO 的 BCM 17 号和 BCM 18 号针脚上,负极连接到公共地针脚。CC BY-SA 4.0.*
|
||||
|
||||
接下来,该去修改粉盒了。首先,我去掉了卡在粉盒上的托盘以腾出更多的空间,并且用刀将连接处切开。我打算在一个便携式充电宝上运行树莓派 Zero,充电宝肯定是放不到这个盒子里面,因此,我挖了一个孔,这样就可以引出 USB 连接头。LED 的光需要能够从盒子外面看得见,因此,我在盖子上钻了两个 3 毫米的小孔。
|
||||
|
||||
@ -60,29 +59,29 @@ LED 使用一个 300 欧姆的电阻连接到 GPIO 的 BCM 17 号和 BCM 18 号
|
||||
|
||||
![Bottom of the case with the pinhole aperture][21]
|
||||
|
||||
带针孔的盒子底部。CC BY-SA 4.0
|
||||
*带针孔的盒子底部。CC BY-SA 4.0*
|
||||
|
||||
剩下的工作就是将这些已经改造完成的设备封装起来。首先我使用蓝色腻子将摄像头的电路板固定在盒子中合适的位置,这样针孔就直接处于 CCD 之上了。使用蓝色腻子的好处是,如果我需要清理污渍(或者如果放错了位置)时,就可以很容易地重新安装 CCD 了。将树莓派 Zero 直接放在摄像头电路板上面。为防止这两个电路板之间可能出现的短路情况,我在这两个电路板之间放了几层防静电袋。
|
||||
剩下的工作就是将这些已经改造完成的设备封装起来。首先我使用蓝色腻子将摄像头的电路板固定在盒子中合适的位置,这样针孔就直接处于 CCD 之上了。使用蓝色腻子的好处是,如果我需要清理污渍(或者如果放错了位置)时,就可以很容易地重新安装 CCD 了。将树莓派 Zero 直接放在摄像头电路板上面。为防止这两个电路板之间可能出现的短路情况,我在树莓派的背面贴了几层防静电胶带。
|
||||
|
||||
[树莓派 Zero][22] 非常适合放到这个粉盒中,并且不需要任何固定,而从小孔中穿出去连接充电宝的 USB 电缆需要将它粘住固定。最后,我将 LED 塞进了前面在盒子上钻的孔中,并用胶水将它们固定住。我在 LED 的针脚之中放了一些防静电袋,以防止盒子盖上时,它与树莓派电路板接触而发生短路。
|
||||
[树莓派 Zero][22] 非常适合放到这个粉盒中,并且不需要任何固定,而从小孔中穿出去连接充电宝的 USB 电缆需要将它粘住固定。最后,我将 LED 塞进了前面在盒子上钻的孔中,并用胶水将它们固定住。我在 LED 的针脚之中放了一些防静电胶带,以防止盒子盖上时,它与树莓派电路板接触而发生短路。
|
||||
|
||||
![Raspberry Pi Zero slotted into the case][24]
|
||||
|
||||
树莓派 Zero 塞入到这个盒子中后,周围的空隙不到 1mm。从盒子中引出的连接到网络摄像头上的 Micro-USB 插头,接下来需要将它连接到充电宝上。CC BY-SA 4.0
|
||||
*树莓派 Zero 塞入到这个盒子中后,周围的空隙不到 1mm。从盒子中引出的连接到网络摄像头上的 Micro-USB 插头,接下来需要将它连接到充电宝上。CC BY-SA 4.0*
|
||||
|
||||
### 软件
|
||||
|
||||
当然,计算机硬件离开控制它的软件是不能使用的。树莓派 Zero 同样可以运行全尺寸树莓派能够运行的软件,但是,因为树莓派 Zero 的 CPU 速度比较慢,让它去引导传统的 [Raspbian OS][25] 镜像非常耗时。打开摄像头都要花费差不多一分钟的时间,这样的速度在现实中是没有什么用处的。而且,一个完整的树莓派操作系统对我的这个摄像头项目来说也没有必要。甚至是,我禁用了引导时启动的所有可禁用的服务,启动仍然需要很长的时间。因此,我决定仅使用需要的软件,我将用一个 [U-Boot][26] 引导加载器和 Linux 内核。自定义 `init` 二进制文件从 microSD 卡上加载 root 文件系统,读入驱动网络摄像头所需要的内核模块,并将它放在 `/dev` 目录下,然后运行二进制的应用程序。
|
||||
当然,计算机硬件离开控制它的软件是不能使用的。树莓派 Zero 同样可以运行全尺寸树莓派能够运行的软件,但是,因为树莓派 Zero 的 CPU 速度比较慢,让它去引导传统的 [Raspbian OS][25] 镜像非常耗时。打开摄像头都要花费差不多一分钟的时间,这样的速度在现实中是没有什么用处的。而且,一个完整的树莓派操作系统对我的这个摄像头项目来说也没有必要。甚至是,我禁用了引导时启动的所有可禁用的服务,启动仍然需要很长的时间。因此,我决定仅使用需要的软件,我将用一个 [U-Boot][26] 引导加载器和 Linux 内核。自定义 `init` 二进制文件从 microSD 卡上加载 root 文件系统、读入驱动网络摄像头所需要的内核模块,并将它放在 `/dev` 目录下,然后运行二进制的应用程序。
|
||||
|
||||
这个二进制的应用程序是另一个定制的 C 程序,它做的核心工作就是管理摄像头。首先,它等待内核驱动程序去初始化网络摄像头、打开它、以及通过低级的 `v4l ioctl` 调用去初始化它。GPIO 针是通过 `/dev/mem` 寄存器被配置为驱动 LED。
|
||||
这个二进制的应用程序是另一个定制的 C 程序,它做的核心工作就是管理摄像头。首先,它等待内核驱动程序初始化网络摄像头,打开它,并通过低级的 `v4l ioctl` 调用对它进行初始化。GPIO 针脚则是通过 `/dev/mem` 映射寄存器来配置的,用于驱动 LED。
|
||||
|
||||
初始化完成之后,摄像头进入一个 loop 循环。每个图像捕获循环是摄像头使用缺省配置,以 JPEG 格式捕获一个单一的图像帧、保存这个图像帧到 SD 卡、然后休眠三秒。这个循环持续运行直到断电为止。这已经很完美地实现了我的最初目标,那就是用一个传统的模拟的针孔摄像头设计一个简单的数字摄像头。
|
||||
初始化完成之后,摄像头进入一个循环。每个图像捕获循环是摄像头使用缺省配置,以 JPEG 格式捕获一个单一的图像帧、保存这个图像帧到 SD 卡、然后休眠三秒。这个循环持续运行直到断电为止。这已经很完美地实现了我的最初目标,那就是用一个传统的模拟的针孔摄像头设计一个简单的数字摄像头。
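下面是一个示意性的草图(并非作者的 C 程序,只是用 Python 和 OpenCV 勾勒同样的循环逻辑;设备编号和输出目录都是假设的):

```
import time
import cv2

# 假设网络摄像头是系统中的第 0 号视频设备
cap = cv2.VideoCapture(0)

frame_id = 0
while True:
    # 用缺省配置捕获单一的一帧
    ok, frame = cap.read()
    if ok:
        # 以 JPEG 格式保存到(假设挂载在 /data 的)SD 卡上
        cv2.imwrite("/data/frame-%06d.jpg" % frame_id, frame)
        frame_id += 1
    # 休眠三秒,然后继续下一次循环,直到断电为止
    time.sleep(3)
```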
|
||||
|
||||
定制的用户空间 [代码][27] 在遵守 [GPLv3][28] 或者更新版许可下自由使用。树莓派 Zero 需要 ARMv6 的二进制文件,因此,我使用了 [QEMU ARM][29] 模拟器在一台 x86_64 主机上进行编译,它使用了 [Pignus][30] 发行版(一个针对 ARMv6 移植/重构建的 Fedora 23 版本)下的工具链,在 `chroot` 下进行编译。所有的二进制文件都静态链接了 [glibc][31],因此,它们都是自包含的。我构建了一个定制的 RAMDisk 去包含这些二进制文件和所需的内核模块,并将它拷贝到 SD 卡,这样引导加载器就可以找到它们了。
|
||||
定制的用户空间 [代码][27] 在遵守 [GPLv3][28] 或者更新版许可下自由使用。树莓派 Zero 需要 ARMv6 的二进制文件,因此,我使用了 [QEMU ARM][29] 模拟器在一台 x86_64 主机上进行编译,它使用了 [Pignus][30] 发行版(一个针对 ARMv6 移植/重构建的 Fedora 23 版本)下的工具链,在 `chroot` 环境下进行编译。所有的二进制文件都静态链接了 [glibc][31],因此,它们都是自包含的。我构建了一个定制的 RAMDisk 去包含这些二进制文件和所需的内核模块,并将它拷贝到 SD 卡,这样引导加载器就可以找到它们了。
|
||||
|
||||
![Completed camera][33]
|
||||
|
||||
最终完成的摄像机完全隐藏在这个粉盒中了。唯一露在外面的东西就是 USB 电缆。CC BY-SA 4.0
|
||||
*最终完成的摄像机完全隐藏在这个粉盒中了。唯一露在外面的东西就是 USB 电缆。CC BY-SA 4.0*
|
||||
|
||||
### 照像
|
||||
|
||||
@ -92,28 +91,28 @@ LED 使用一个 300 欧姆的电阻连接到 GPIO 的 BCM 17 号和 BCM 18 号
|
||||
|
||||
![Picture of houses taken with pinhole webcam][35]
|
||||
|
||||
在伦敦,大街上的屋顶。CC BY-SA 4.0
|
||||
*在伦敦,大街上的屋顶。CC BY-SA 4.0*
|
||||
|
||||
![Airport photo][37]
|
||||
|
||||
范堡罗机场的老航站楼。CC BY-SA 4.0
|
||||
*范堡罗机场的老航站楼。CC BY-SA 4.0*
|
||||
|
||||
最初,我只是想使用摄像头去捕获一些静态图像。后面,我降低了 loop 循环的延迟时间,从三秒改为一秒,然后用它捕获一段时间内的一系列图像。我使用 [GStreamer][38] 将这些图像做成了延时视频。
|
||||
最初,我只是想使用摄像头去捕获一些静态图像。后面,我降低了循环的延迟时间,从三秒改为一秒,然后用它捕获一段时间内的一系列图像。我使用 [GStreamer][38] 将这些图像做成了延时视频。
|
||||
|
||||
以下是我创建视频的过程:
|
||||
|
||||
[][38]
|
||||
|
||||
视频是我在某天下班之后,从银行沿着泰晤式河到滑铁卢的画面。以每分钟 40 帧捕获的 1200 帧图像被我制作成了每秒 20 帧的动画。
|
||||
*视频是我在某天下班之后,从银行沿着泰晤式河到滑铁卢的画面。以每分钟 40 帧捕获的 1200 帧图像被我制作成了每秒 20 帧的动画。*
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/3/how-build-digital-pinhole-camera-raspberry-pi
|
||||
|
||||
作者:[Daniel Berrange][a]
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -5,7 +5,7 @@ Oracle Linux 系统如何去注册使用坚不可摧 Linux 网络(ULN)
|
||||
|
||||
我以前也不知道它,最近才了解到有关它的信息,因此想把这些内容分享给其他人。于是写了这篇文章,它将指导你注册 Oracle Linux 系统以使用坚不可摧 Linux 网络(ULN)。
|
||||
|
||||
这将允许你去注册系统以获得软件更新和其它的 ASAP 补丁。
|
||||
这将允许你去注册系统以尽快获得软件更新和其它的补丁。
|
||||
|
||||
### 什么是坚不可摧 Linux 网络
|
||||
|
||||
|
@ -1,16 +1,18 @@
|
||||
8 个基本的 Docker 容器管理命令
|
||||
======
|
||||
利用这 8 个命令可以学习 Docker 容器的基本管理方式。这是一个为 Docker 初学者准备的,带有示范命令输出的指南。
|
||||
|
||||
> 利用这 8 个命令可以学习 Docker 容器的基本管理方式。这是一个为 Docker 初学者准备的,带有示范命令输出的指南。
|
||||
|
||||
![Docker 容器管理命令][1]
|
||||
|
||||
在这篇文章中,我们将带你学习 8 个基本的 Docker 容器命令,它们操控着 Docker 容器的基本活动,例如 <ruby>运行<rt>run</rt></ruby>, <ruby>列举<rt>list</rt></ruby>, <ruby>停止<rt>stop</rt></ruby>, <ruby>查看历史纪录<rt>view logs</rt></ruby>, <ruby>删除<rt>delete</rt></ruby>, 等等。如果你对 Docker 的概念很陌生,推荐你看看我们的 [介绍指南][2],来了解 Docker 的基本内容以及 [如何][3] 在 Linux 上安装 Docker. 现在让我们赶快进入要了解的命令:
|
||||
在这篇文章中,我们将带你学习 8 个基本的 Docker 容器命令,它们操控着 Docker 容器的基本活动,例如 <ruby>运行<rt>run</rt></ruby>、 <ruby>列举<rt>list</rt></ruby>、 <ruby>停止<rt>stop</rt></ruby>、 查看<ruby>历史纪录<rt>logs</rt></ruby>、 <ruby>删除<rt>delete</rt></ruby> 等等。如果你对 Docker 的概念很陌生,推荐你看看我们的 [介绍指南][2],来了解 Docker 的基本内容以及 [如何][3] 在 Linux 上安装 Docker。 现在让我们赶快进入要了解的命令:
|
||||
|
||||
### 如何运行 Docker 容器?
|
||||
|
||||
众所周知,Docker 容器只是一个运行于<ruby>宿主操作系统<rt>host OS</rt></ruby>上的应用进程,所以你需要一个镜像来运行它。Docker 镜像运行时的进程就叫做 Docker 容器。你可以加载本地 Docker 镜像,也可以从 Docker Hub 上下载。Docker Hub 是一个提供公有和私有镜像来进行<ruby>拉取<rt>pull</rt></ruby>操作的集中仓库。官方的 Docker Hub 位于 [hub.docker.com][4]. 当你指示 Docker 引擎运行容器时,它会首先搜索本地镜像,如果没有找到,它会从 Docker Hub 上拉取相应的镜像。
|
||||
众所周知,Docker 容器只是一个运行于<ruby>宿主操作系统<rt>host OS</rt></ruby>上的应用进程,所以你需要一个镜像来运行它。Docker 镜像以进程的方式运行时就叫做 Docker 容器。你可以加载本地 Docker 镜像,也可以从 Docker Hub 上下载。Docker Hub 是一个提供公有和私有镜像来进行<ruby>拉取<rt>pull</rt></ruby>操作的集中仓库。官方的 Docker Hub 位于 [hub.docker.com][4]。 当你指示 Docker 引擎运行容器时,它会首先搜索本地镜像,如果没有找到,它会从 Docker Hub 上拉取相应的镜像。
|
||||
|
||||
让我们运行一个 Apache web 服务器的 Docker 镜像,比如 httpd 进程。你需要运行 `docker container run` 命令。旧的命令为 `docker run`, 但后来 Docker 添加了子命令部分,所以新版本支持下列命令:
|
||||
|
||||
让我们运行一个 Apache web-server 的 Docker 镜像,比如 httpd 进程。你需要运行 `docker container run` 命令。旧的命令为 `docker run`, 但后来 Docker 添加了子命令部分,所以新版本支持<ruby>附属命令<rt>below command</rt></ruby> -
|
||||
|
||||
```
|
||||
root@kerneltalks # docker container run -d -p 80:80 httpd
|
||||
@ -28,18 +30,16 @@ Status: Downloaded newer image for httpd:latest
|
||||
c46f2e9e4690f5c28ee7ad508559ceee0160ac3e2b1688a61561ce9f7d99d682
|
||||
```
|
||||
|
||||
Docker 的 `run` 命令将镜像名作为强制参数,另外还有很多可选参数。常用的参数有 -
|
||||
Docker 的 `run` 命令将镜像名作为强制参数,另外还有很多可选参数。常用的参数有:
|
||||
|
||||
* `-d` : Detach container from current shell
|
||||
* `-p X:Y` : Bind container port Y with host’s port X
|
||||
* `--name` : Name your container. If not used, it will be assigned randomly generated name
|
||||
* `-e` : Pass environmental variables and their values while starting container
|
||||
* `-d`:从当前 shell 脱离容器
|
||||
* `-p X:Y`:绑定容器的端口 Y 到宿主机的端口 X
|
||||
* `--name`:命名你的容器。如果未指定,它将被赋予随机生成的名字
|
||||
* `-e`:当启动容器时传递环境编辑及其值
|
||||
|
||||
通过以上输出你可以看到,我们将 `httpd` 作为镜像名来运行容器。接着,本地镜像没有找到,Docker 引擎从 Docker Hub 拉取了它。注意,它下载了镜像 `httpd:latest`, 其中 `:` 后面跟着版本号。如果你需要运行特定版本的容器,你可以在镜像名后面注明版本名。如果不提供版本名,Docker 引擎会自动拉取最新的版本。
|
||||
|
||||
|
||||
通过以上输出你可以看到,我们将 `httpd` 作为镜像名来运行容器。接着,本地镜像没有找到,Docker 引擎从 Docker Hub 拉取了它。注意,它下载了镜像 **httpd:latest**, 其中 : 后面跟着版本号。如果你需要运行特定版本的容器,你可以在镜像名后面注明版本名。如果不提供版本名,Docker 引擎会自动拉取最新的版本。
|
||||
|
||||
输出的最后一行显示了你新运行的 httpd 容器的特有 ID。
|
||||
输出的最后一行显示了你新运行的 httpd 容器的唯一 ID。
|
||||
|
||||
### 如何列出所有运行中的 Docker 容器?
|
||||
|
||||
@ -51,9 +51,9 @@ CONTAINER ID IMAGE COMMAND CREATED
|
||||
c46f2e9e4690 httpd "httpd-foreground" 11 minutes ago Up 11 minutes 0.0.0.0:80->80/tcp cranky_cori
|
||||
```
|
||||
|
||||
列出的结果是按列显示的。每一列的值分别为 -
|
||||
列出的结果是按列显示的。每一列的值分别为:
|
||||
|
||||
1. Container ID :一开始的几个字符对应你特有的容器 ID
|
||||
1. Container ID :一开始的几个字符对应你的容器的唯一 ID
|
||||
2. Image :你运行容器的镜像名
|
||||
3. Command :容器启动后运行的命令
|
||||
4. Created :创建时间
|
||||
@ -61,11 +61,9 @@ c46f2e9e4690 httpd "httpd-foreground" 11 minutes ago
|
||||
6. Ports :与宿主端口相连接的端口信息
|
||||
7. Names :容器名(如果你没有命名你的容器,那么会随机创建)
|
||||
|
||||
|
||||
|
||||
### 如何查看 Docker 容器的历史纪录?
|
||||
|
||||
在第一步我们使用了 -d 参数来将容器,在它一开始运行的时候,就从当前的 shell 中分离出来。在这种情况下,我们不知道容器里面发生了什么。所以为了查看容器的历史纪录,Docker 提供了 `logs` 命令。它采用容器名称或 ID 作为参数。
|
||||
在第一步中,我们使用了 `-d` 参数,让容器在开始运行时就从当前的 shell 中脱离出来。在这种情况下,我们不知道容器里面发生了什么。所以为了查看容器的历史纪录,Docker 提供了 `logs` 命令。它采用容器名称或 ID 作为参数。
|
||||
|
||||
```
|
||||
root@kerneltalks # docker container logs cranky_cori
|
||||
@ -99,7 +97,7 @@ bin 15731 15702 0 18:35 ? 00:00:00 httpd -DFOREGROUND
|
||||
root 15993 15957 0 18:59 pts/0 00:00:00 grep --color=auto -i 15702
|
||||
```
|
||||
|
||||
在第一个输出中,列出了容器产生的进程的列表。它包含了所有细节,包括用途,<ruby>进程号<rt>pid</rt></ruby>,<ruby>父进程号<rt>ppid</rt></ruby>,开始时间,命令,等等。这里所有的进程号你都可以在宿主的进程表里搜索到。这就是我们在第二个命令里做得。这证明了容器确实是宿主系统中的进程。
|
||||
在第一个输出中,列出了容器内产生的进程的列表。它包含了所有细节,包括<ruby>用户号<rt>uid</rt></ruby>、<ruby>进程号<rt>pid</rt></ruby>、<ruby>父进程号<rt>ppid</rt></ruby>、开始时间、命令,等等。这里所有的进程号你都可以在宿主机的进程表里搜索到。这就是我们在第二个命令里所做的。这证明了容器确实是宿主系统中的进程。
|
||||
|
||||
### 如何停止 Docker 容器?
|
||||
|
||||
@ -128,7 +126,7 @@ CONTAINER ID IMAGE COMMAND CREATED
|
||||
c46f2e9e4690 httpd "httpd-foreground" 33 minutes ago Exited (0) 2 minutes ago cranky_cori
|
||||
```
|
||||
|
||||
有了 `-a` 参数,现在我们可以查看已停止的容器。注意这些容器的状态被标注为 <ru by>已退出<rt>exited</rt></ruby>。既然容器只是一个进程,那么用“退出”比“停止”更合适!
|
||||
有了 `-a` 参数,现在我们可以查看已停止的容器。注意这些容器的状态被标注为 <ruby>已退出<rt>exited</rt></ruby>。既然容器只是一个进程,那么用“退出”比“停止”更合适!
|
||||
|
||||
### 如何(重新)启动 Docker 容器?
|
||||
|
||||
@ -145,7 +143,7 @@ c46f2e9e4690 httpd "httpd-foreground" 35 minutes ago
|
||||
|
||||
### 如何移除 Docker 容器?
|
||||
|
||||
我们使用 `rm` 命令来移处容器。你不可以移除运行中的容器。移除之前需要先停止容器。你可以使用 `-f` 参数搭配 `rm` 命令来强制移除容器,但并不推荐这么做。
|
||||
我们使用 `rm` 命令来移除容器。你不可以移除运行中的容器。移除之前需要先停止容器。你可以使用 `-f` 参数搭配 `rm` 命令来强制移除容器,但并不推荐这么做。
|
||||
|
||||
```
|
||||
root@kerneltalks # docker container rm cranky_cori
|
||||
@ -162,8 +160,8 @@ via: https://kerneltalks.com/virtualization/8-basic-docker-container-management-
|
||||
|
||||
作者:[Shrikant Lavhate][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[lonaparte](https://github.com/译者ID/lonaparte)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[lonaparte](https://github.com/lonaparte)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,8 +1,10 @@
|
||||
Linux 中一种友好的 find 替代工具
|
||||
======
|
||||
> fd 命令提供了一种简单直白的搜索 Linux 文件系统的方式。
|
||||
|
||||

|
||||
|
||||
[fd][1] 是一个超快的,基于 [Rust][2] 的 Unix/Linux `find` 命令的替代。它不提供所有 `find` 的强大功能。但是,它确实提供了足够的功能来覆盖可能遇到的 80% 的情况。诸如良好的规划和方便的语法、彩色输出、智能大小写、正则表达式以及并行命令执行等特性使 `fd` 成为一个非常有能力的后继者。
|
||||
[fd][1] 是一个超快的,基于 [Rust][2] 的 Unix/Linux `find` 命令的替代品。它不提供所有 `find` 的强大功能。但是,它确实提供了足够的功能来覆盖你可能遇到的 80% 的情况。诸如良好的规划和方便的语法、彩色输出、智能大小写、正则表达式以及并行命令执行等特性使 `fd` 成为一个非常有能力的后继者。
|
||||
|
||||
### 安装
|
||||
|
||||
@ -10,80 +12,68 @@ Linux 中一种友好的 find 替代工具
|
||||
|
||||
### 简单搜索
|
||||
|
||||
`fd` 旨在帮助你轻松找到文件系统中的文件和文件夹。你可以用 `fd` 带上一个参数执行最简单的搜索,该参数就是你要搜索的任何东西。例如,假设你想要找一个 Markdown 文档,其中包含单词 `services` 作为文件名的一部分:
|
||||
`fd` 旨在帮助你轻松找到文件系统中的文件和文件夹。你可以用 `fd` 带上一个参数执行最简单的搜索,该参数就是你要搜索的任何东西。例如,假设你想要找一个 Markdown 文档,其中包含单词 `services` 作为文件名的一部分:
|
||||
|
||||
```
|
||||
$ fd services
|
||||
|
||||
downloads/services.md
|
||||
|
||||
```
|
||||
|
||||
如果仅带一个参数调用,那么 `fd` 会递归地搜索当前目录,以查找与你的参数匹配的任何文件和/或目录。使用内置的 `find` 命令的等效搜索如下所示:
|
||||
|
||||
```
|
||||
$ find . -name 'services'
|
||||
|
||||
downloads/services.md
|
||||
|
||||
```
|
||||
|
||||
如你所见,`fd` 要简单得多,并需要更少的输入。在我心中用更少的输入做更多的事情总是胜利的。
|
||||
如你所见,`fd` 要简单得多,并需要更少的输入。在我心中用更少的输入做更多的事情总是对的。
|
||||
|
||||
### 文件和文件夹
|
||||
|
||||
您可以使用 `-t` 参数将搜索范围限制为文件或目录,后面跟着代表你要搜索的内容的字母。例如,要查找当前目录中文件名中包含 `services` 的所有文件,可以使用:
|
||||
|
||||
```
|
||||
$ fd -tf services
|
||||
|
||||
downloads/services.md
|
||||
|
||||
```
|
||||
|
||||
并找到当前目录中文件名中包含 `services` 的所有目录:
|
||||
以及,找到当前目录中文件名中包含 `services` 的所有目录:
|
||||
|
||||
```
|
||||
$ fd -td services
|
||||
|
||||
applications/services
|
||||
|
||||
library/services
|
||||
|
||||
```
|
||||
|
||||
如何在当前文件夹中列出所有带 `.md` 扩展名的文档?
|
||||
|
||||
```
|
||||
$ fd .md
|
||||
|
||||
administration/administration.md
|
||||
|
||||
development/elixir/elixir_install.md
|
||||
|
||||
readme.md
|
||||
|
||||
sidebar.md
|
||||
|
||||
linux.md
|
||||
|
||||
```
|
||||
|
||||
从输出中可以看到,`fd` 不仅可以找到并列出当前文件夹中的文件,还可以在子文件夹中找到文件。很简单。你甚至可以使用 `-H` 参数来搜索隐藏文件:
|
||||
从输出中可以看到,`fd` 不仅可以找到并列出当前文件夹中的文件,还可以在子文件夹中找到文件。很简单。
|
||||
|
||||
你甚至可以使用 `-H` 参数来搜索隐藏文件:
|
||||
|
||||
```
|
||||
fd -H sessions .
|
||||
|
||||
.bash_sessions
|
||||
|
||||
```
|
||||
|
||||
### 指定目录
|
||||
|
||||
如果你想搜索一个特定的目录,这个目录的名字可以作为第二个参数传给 `fd`
|
||||
如果你想搜索一个特定的目录,这个目录的名字可以作为第二个参数传给 `fd`:
|
||||
|
||||
```
|
||||
$ fd passwd /etc
|
||||
|
||||
/etc/default/passwd
|
||||
|
||||
/etc/pam.d/passwd
|
||||
|
||||
/etc/passwd
|
||||
|
||||
```
|
||||
|
||||
在这个例子中,我们告诉 `fd` 我们要在 `etc` 目录中搜索 `passwd` 这个单词的所有实例。
|
||||
@ -91,11 +81,10 @@ $ fd passwd /etc
|
||||
### 全局搜索
|
||||
|
||||
如果你知道文件名的一部分,但不知道文件夹怎么办?假设你下载了一本关于 Linux 网络管理的书,但你不知道它的保存位置。没有问题:
|
||||
|
||||
```
|
||||
fd Administration /
|
||||
|
||||
/Users/pmullins/Documents/Books/Linux/Mastering Linux Network Administration.epub
|
||||
|
||||
```
|
||||
|
||||
### 总结
|
||||
@ -109,7 +98,7 @@ via: https://opensource.com/article/18/6/friendly-alternative-find
|
||||
作者:[Patrick H. Mullins][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,133 @@
|
||||
A summer reading list for open organization enthusiasts
|
||||
======
|
||||
|
||||

|
||||
|
||||
The books on this year's open organization reading list crystallize so much of what makes "open" work: Honesty, authenticity, trust, and the courage to question those status quo arrangements that prevent us from achieving our potential by working powerfully together.
|
||||
|
||||
These nine books—each one a recommendation from a member of our community—represent merely the beginning of an important journey toward greater and better openness.
|
||||
|
||||
But they sure are a great place to start.
|
||||
|
||||
### Radical Candor
|
||||
|
||||
**by Kim Scott** (recommended by [Angela Roberstson][1])
|
||||
|
||||
Do you avoid conflict? Love it? Or are you somewhere in between?
|
||||
|
||||
Wherever you are on the spectrum, Kim Scott gives you a set of tools for improving your ability to speak your truth in the workplace.
|
||||
|
||||
The book is divided into two parts: Part 1 is Scott's perspective on giving feedback, including handling the conflict that might be associated with it. Part 2 focuses on tools and techniques that she recommends.
|
||||
|
||||
Radical candor is most impactful for managers when it comes to evaluating and communicating feedback about employee performance. In Chapter 3, "Understand what motivates each person on your team," Scott explains how we can employ radical candor when assessing employees. Included is an explanation of how to have constructive conversations about our assessments.
|
||||
|
||||
I also appreciate that Scott spends a few pages sharing her perspective on how gender politics can impact work. With all the emphasis on diversity and inclusion, especially in the tech sector, including this topic in the book is another reason to read.
|
||||
|
||||
### Powerful
|
||||
**by Patty McCord** (recommended by [Jeff Mackanic][2])
|
||||
|
||||
Powerful is an inspiring leadership book by Patty McCord, the former chief talent officer at Netflix. It's a fast-paced book with many great examples drawn from the author's career at Netflix.
|
||||
|
||||
One of the key characteristics of an open organization is collaboration, and readers will learn a good deal from McCord as she explains a few of Netflix's core practices that can help any company be more collaborative.
|
||||
|
||||
For McCord, collaboration clearly begins with honesty. For example, she writes, "We wanted people to practice radical honesty: telling one another, and us, the truth in a timely fashion and ideally face to face." She also explains how, at Netflix, "We wanted people to have strong, fact-based opinions and to debate them avidly and test them rigorously."
|
||||
|
||||
This is a wonderful book that will inspire the reader to look at leadership through a new lens.
|
||||
|
||||
### The Code of Trust
|
||||
**by Robin Dreeke** (recommended by [Ron McFarland][3])
|
||||
|
||||
Author Robin Dreeke was an FBI agent, which gave him experience getting information from total strangers. To do that, he had to get people to be open to him.
|
||||
|
||||
His experience led to this book, which offers five rules he calls "The Code of Trust." Put simply, the rules are: 1) Suspend your ego or pride when you meet someone for the first time, 2) Avoid being judgmental about that person, 3) Validate the person's position and feelings, 4) Honor basic reason, and 5) Be generous and encourage building the value of the relationship.
|
||||
|
||||
Dreeke argues that you can achieve the above by 1) Aligning your goals with others' after learning what their goals are, 2) Understanding the power of context and their situations, 3) Crafting the meeting to get them to open up to you, and 4) Connecting with deep communication (something over and above language that includes feelings as well).
|
||||
|
||||
The book teaches how to do the above, so I learned a great deal. Overall, though, it makes some important points for anyone interested in open organizations. If people are cooperative, engaged, interactive, and open, an organization with many outside contributors can be very successful. But if people are uninterested, non-cooperative, protective, reluctant to interact, and closed, an organization will suffer.
|
||||
|
||||
### Team of Teams
|
||||
|
||||
**by Gen. Stanley McChrystal, Chris Fussell, and Tantum Collins** (recommended by [Matt Micene][4])
|
||||
|
||||
Does the highly specialized and hierarchical United States military strike you as a source for advice on building agile, integrated, highly disparate teams? This book traces General McChrystal's experiences transforming a team "moving from playing football to basketball, and finding that habits and preconceptions had to be discarded along with pads and cleats."
|
||||
|
||||
With lives literally on the line, circumstances forced McChrystal's Joint Special Operations Task Force to walk through some radical changes. But as much as this book takes place during a war, it's not a story about a war. It's a story that traces Frederick Winslow Taylor's legacy and impact on the way we think about operational efficiency. It's about the radical restructuring of communications in a siloed organization. It distinguishes the "complex" and the "complicated," and explains the different forces those two concepts exert on organizations. Readers will note many themes that resonate with open organization thinking—like resilience thinking, the OODA loop, systems thinking, and empowered execution in leadership.
|
||||
|
||||
Perhaps most importantly, you'll see more than discourse and discussion on these topics. You'll get to see an example of a highly siloed organization successfully changing its culture and embracing a more transparent and "organic" system of organization that fostered success.
|
||||
|
||||
### Liminal Thinking
|
||||
|
||||
**by Dave Gray** (recommended by [Angela Roberstson][1])
|
||||
|
||||
When I read this book's title, the word "liminal" throws me every time. I think "limit." But as Dave Gray patiently explains, "The word liminal comes from the Latin root limen, which means threshold." Gray shares his perspective on ways that readers can push past the boundaries of our thinking to become more creative, impactful leaders.
|
||||
|
||||
I love how Gray quickly explains how beliefs impact our lives. We can reframe beliefs, he says, if we're willing to stop clinging to them. The concise text means that you can read and reread this book as you work to implement the practices for enacting change that Gray provides.
|
||||
|
||||
The book is divided into two parts: Principles and Practices. After describing each of the six principles and nine practices, Gray offers a short exercise you can complete. Throughout the book are also great visuals and quotes to ensure you're staying engaged.
|
||||
|
||||
Read this book if you're looking for fresh ideas about how to manage change.
|
||||
|
||||
### Orbiting the Giant Hairball
|
||||
|
||||
**by Gordon MacKenzie** (recommended by [Allison Matlack][5])
|
||||
|
||||
Sometimes—even in open organizations—we can struggle to maintain our creativity and authenticity in the face of the bureaucratic processes that live at the heart of every company of certain size. Gordon MacKenzie offers a refreshing alternative to corporate normalcy in this charming book that has been something of a cult classic since it was self-published in the 1980s.
|
||||
|
||||
There's a masterpiece in each of us, MacKenzie posits—one that is singular and unique. We can choose to paint by the corporate numbers, or we can find real joy in using bold strokes to create an original work of art.
|
||||
|
||||
### Tribal Leadership
|
||||
|
||||
**by Dave Logan, John King, and Halee Fischer-Wright** (recommended by [Chris Baynham-Hughes][6])
|
||||
|
||||
Too often, technology rather than culture is an organization's starting point for transformation, innovation, and speed to market. I've lost count of the times I've used this book to frame conversations around company culture and challenge leaders on what they are doing to foster innovation and loyalty, and to create a workplace in which people can thrive. It's been a game-changer for me.
|
||||
|
||||
Tribal Leadership is essential reading for anybody interested in workplace culture or a leadership role—especially those wanting to develop open, innovative, and collaborative cultures. It provides an evidence-based approach to developing corporate culture detailing: 1) five distinct stages of tribal culture, 2) a framework to develop yourself and others as tribal leaders, and 3) characteristics and coaching tips to ensure practitioners can identify the levels each individual is at and nudge them to the next level. Each chapter presents a case study narrative before identifying coaching tips and summarizing key points. I found it enjoyable to read and easy to remember.
|
||||
|
||||
### Wikipedia and the Politics of Openness
|
||||
|
||||
**by Nathaniel Tkacz** (recommended by [Bryan Behrenshausen][7])
|
||||
|
||||
This thing we call "open" isn't something natural or eternal—some kind of fixed and immutable essence or quality that somehow exists outside time. It's flexible, contingent, context-specific, and the site of so much negotiation and contestation. What does "open" mean to and for the parties most invested in the term? And what happens when we organize groups and institutions around those very definitions? What (and who) do they enable? And what (and who) do they preclude?
|
||||
|
||||
Tkacz explores these questions with historical richness and critical flair by examining one of the world's largest and most notable open organizations: Wikipedia, that paragon of ostensibly participatory and collaborative behavior. Tkacz is perhaps less sanguine: "While the force of the open must be acknowledged, the real energy of the people who rally behind it, the way innumerable projects have been transformed in its name, the new projects and positive outcomes it has produced—I suggest that the concept itself has some crucial problems," he writes. Read on to see if you agree.
|
||||
|
||||
### WTF? What's the Future and Why It's Up to Us
|
||||
|
||||
**by Tim O'Reilly** (recommended by [Jason Hibbets][8])
|
||||
|
||||
Since I first saw Tim O'Reilly speak at a conference many years ago, I've always felt he had a good grasp of what's happening not only in open source but also in the broader space of digital technology. O'Reilly possesses the great ability to read the tea leaves, to make connections, and (based on those observations), to "predict" potential outcomes. In the book, he calls this map making.
|
||||
|
||||
While this book is about what the future could hold (with a particular filter on the impacts of artificial intelligence), it really boils down to the fact that humans are shaping the future. The book opens with a pretty extensive history of free and open source software, which I think many in the community will enjoy. Then it dives directly into the race for automated vehicles—and why Uber, Lyft, Tesla, and Google are all pushing to win.
|
||||
|
||||
And closely related to open organizations, the book description posted on [Harper Collins][9] poses the following questions:
|
||||
|
||||
* What will happen to business when technology-enabled networks and marketplaces are better at deploying talent than traditional companies?
|
||||
* How should companies organize themselves to take advantage of these new tools?
|
||||
|
||||
|
||||
|
||||
As many of our readers know, the future will be based on open source. O'Reilly provides you with some thought-provoking ideas on how AI and automation are closer than you might think.
|
||||
|
||||
Do yourself a favor. Turn to your favorite AI-driven home automation unit and say: "Order Tim O'Reilly 'What's the Future.'"
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/18/6/summer-reading-2018
|
||||
|
||||
作者:[Bryan Behrenshausen][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/remyd
|
||||
[1]:https://opensource.com/users/arobertson98
|
||||
[2]:https://opensource.com/users/mackanic
|
||||
[3]:https://opensource.com/users/ron-mcfarland
|
||||
[4]:https://opensource.com/users/matt-micene
|
||||
[5]:https://opensource.com/users/amatlack
|
||||
[6]:https://opensource.com/users/onlychrisbh
|
||||
[7]:https://opensource.com/users/bbehrens
|
||||
[8]:https://opensource.com/users/jhibbets
|
||||
[9]:https://www.harpercollins.com/9780062565716/wtf/
|
@ -0,0 +1,90 @@
|
||||
How To Record Everything You Do In Terminal
|
||||
======
|
||||

|
||||
|
||||
A few days ago, we published a guide that explained how to [**save commands in terminal itself and use them on demand**][1]. It is very useful for those who don’t want to memorize lengthy Linux commands. Today, in this guide, we are going to see how to record everything you do in the Terminal using the **‘script’** command. You might have run a command, created a directory, or installed an application in the Terminal. The script command simply saves whatever you do in the Terminal, so you can look back later if you want to know what you did a few hours or a few days ago. I know, I know, we can use the UP/DOWN arrow keys or the history command to view previously run commands. However, you can’t view the output of those commands. The script command, by contrast, records and displays the complete terminal session activity.
|
||||
|
||||
The script command creates a typescript of everything you do in the Terminal. It doesn’t matter whether you install an application, create a directory or file, or remove a folder. Everything will be recorded, including the commands and their respective outputs. This command will be helpful for anyone who wants a hard-copy record of an interactive session, for example as proof of an assignment. Whether you’re a student or a tutor, you can make a copy of everything you do in the Terminal along with all of the output.
|
||||
|
||||
### Record Everything You Do In Terminal using script command in Linux
|
||||
|
||||
The script command comes pre-installed on most modern Linux operating systems. So, let us not bother about the installation.
|
||||
|
||||
Let us go ahead and see how to use it in real time.
|
||||
|
||||
Run the following command to start the Terminal session recording.
|
||||
```
|
||||
$ script -a my_terminal_activities
|
||||
|
||||
```
|
||||
|
||||
Here, the **-a** flag is used to append the output to the file (the typescript), retaining any prior contents. The above command records everything you do in the Terminal, appends the output to a file called **‘my_terminal_activities’**, and saves that file in your current working directory.
|
||||
|
||||
Sample output would be:
|
||||
```
|
||||
Script started, file is my_terminal_activities
|
||||
|
||||
```
|
||||
|
||||
Now, run some random Linux commands in your Terminal.
|
||||
```
|
||||
$ mkdir ostechnix
|
||||
|
||||
$ cd ostechnix/
|
||||
|
||||
$ touch hello_world.txt
|
||||
|
||||
$ cd ..
|
||||
|
||||
$ uname -r
|
||||
|
||||
```
|
||||
|
||||
After running your commands, end the ‘script’ session using the command:
|
||||
```
|
||||
$ exit
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
exit
|
||||
Script done, file is my_terminal_activities
|
||||
|
||||
```
|
||||
|
||||
As you can see, the Terminal activities have been stored in a file called **‘my_terminal_activities’** in the current working directory.
|
||||
|
||||
To view your Terminal activities, just open this file in any editor or simply display it using the ‘cat’ command.
|
||||
```
|
||||
$ cat my_terminal_activities
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
As you can see in the above output, the script command has recorded all of my Terminal activities, including the start and end time of the session. Awesome, isn’t it? The reason to use the script command is that it records not just the commands, but also their output. To put it simply, the script command will record everything you do in the Terminal.
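If you also want to replay a recorded session later with its original timing, script can write timing data to a separate file that the scriptreplay utility understands. The file names below are just examples, and depending on your util-linux version the newer --log-timing (-T) option may be preferred over -t, so treat this as a rough sketch:
```
# Record the session and send the timing data (written to stderr) to timing.log
$ script -t 2> timing.log session.log

# ...run your commands, then type 'exit' to stop recording...

# Replay the recorded session at roughly its original speed
$ scriptreplay timing.log session.log
```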
|
||||
|
||||
### Conclusion
|
||||
|
||||
As I said, the script command is useful for students, teachers, and any Linux user who wants to keep a record of their Terminal activities. Even though there are many CLI and GUI tools to do this, the script command is one of the easiest and quickest ways to record Terminal session activity.
|
||||
|
||||
And, that’s all. Hope this helps. If you find this guide useful, please share it on your social and professional networks and **support OSTechNix**.
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/record-everything-terminal/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/save-commands-terminal-use-demand/
|
@ -1,100 +0,0 @@
|
||||
translating by sunxi
|
||||
How debuggers really work
|
||||
======
|
||||
|
||||

|
||||
|
||||
Image by : opensource.com
|
||||
|
||||
A debugger is one of those pieces of software that most, if not every, developer uses at least once during their software engineering career, but how many of you know how they actually work? During my talk at [linux.conf.au 2018][1] in Sydney, I will be talking about writing a debugger from scratch... in [Rust][2]!
|
||||
|
||||
In this article, the terms "debugger" and "tracer" are used interchangeably. "Tracee" refers to the process being traced by the tracer.
|
||||
|
||||
### The ptrace system call
|
||||
|
||||
Most debuggers heavily rely on a system call known as `ptrace(2)`, which has the prototype:
|
||||
```
|
||||
|
||||
|
||||
long ptrace(enum __ptrace_request request, pid_t pid, void *addr, void *data);
|
||||
```
|
||||
|
||||
This is a system call that can manipulate almost all aspects of a process; however, before the debugger can attach to a process, the "tracee" has to call `ptrace` with the request `PTRACE_TRACEME`. This tells Linux that it is legitimate for the parent to attach via `ptrace` to this process. But... how do we coerce a process into calling `ptrace`? Easy-peasy! `fork/execve` provides an easy way of calling `ptrace` after `fork` but before the tracee really starts using `execve`. Conveniently, `fork` will also return the `pid` of the tracee, which is required for using `ptrace` later.
|
||||
|
||||
Now that the tracee can be traced by the debugger, important changes take place:
|
||||
|
||||
* Every time a signal is delivered to the tracee, it stops and a wait-event is delivered to the tracer that can be captured by the `wait` family of system calls.
|
||||
* Each `execve` system call will cause a `SIGTRAP` to be delivered to the tracee. (Combined with the previous item, this means the tracee is stopped before an `execve` can fully take place.)
|
||||
|
||||
|
||||
|
||||
This means that, once we issue the `PTRACE_TRACEME` request and call the `execve` system call to actually start the program in the tracee, the tracee will immediately stop, since `execve` delivers a `SIGTRAP`, and that is caught by a wait-event in the tracer. How do we continue? As one would expect, `ptrace` has a number of requests that can be used for telling the tracee it's fine to continue:
|
||||
|
||||
* `PTRACE_CONT`: This is the simplest. The tracee runs until it receives a signal, at which point a wait-event is delivered to the tracer. This is most commonly used to implement "continue-until-breakpoint" and "continue-forever" options of a real-world debugger. Breakpoints will be covered below.
|
||||
* `PTRACE_SYSCALL`: Very similar to `PTRACE_CONT`, but stops before a system call is entered and also before a system call returns to userspace. It can be used in combination with other requests (which we will cover later in this article) to monitor and modify a system call's arguments or return value. `strace`, the system call tracer, uses this request heavily to figure out what system calls are made by a process. (See the short example after this list.)
|
||||
* `PTRACE_SINGLESTEP`: This one is pretty self-explanatory. If you have used a debugger before, you will recognize it: this request executes the next instruction, then stops immediately after.
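To get a feel for what `PTRACE_SYSCALL`-based tracing looks like without writing any code, you can run `strace` on a trivial command. The output below is abridged and will vary between systems, so take it as an illustration only:
```
# Show only the write system calls made by echo (strace output goes to stderr)
$ strace -e trace=write echo hello
write(1, "hello\n", 6)                  = 6
+++ exited with 0 +++
```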
|
||||
|
||||
|
||||
|
||||
We can stop the process with a variety of requests, but how do we get the state of the tracee? The state of a process is mostly captured by its registers, so of course `ptrace` has a request to get (or modify!) the registers:
|
||||
|
||||
* `PTRACE_GETREGS`: This request will give the registers' state as it was when a tracee was stopped.
|
||||
* `PTRACE_SETREGS`: If the tracer has the values of registers from a previous call to `PTRACE_GETREGS`, it can modify the values in that structure and set the registers to the new values via this request.
|
||||
* `PTRACE_PEEKUSER` and `PTRACE_POKEUSER`: These allow reading from the tracee's `USER` area, which holds the registers and other useful information. This can be used to modify a single register, without the more heavyweight `PTRACE_{GET,SET}REGS`.
|
||||
|
||||
|
||||
|
||||
Modifying the registers isn't always sufficient in a debugger. A debugger will sometimes need to read some parts of the memory or even modify it. The GNU Project Debugger (GDB) can use `print` to get the value of a memory location or a variable. `ptrace` has the functionality to implement this:
|
||||
|
||||
* `PTRACE_PEEKTEXT` and `PTRACE_POKETEXT`: These allow reading and writing a word in the address space of the tracee. Of course, the tracee has to be stopped for this to work.
|
||||
|
||||
|
||||
|
||||
Real-world debuggers also have features like breakpoints and watchpoints. In the next section, I'll dive into the architectural details of debugging support. For the purposes of clarity and conciseness, this article will consider x86 only.
|
||||
|
||||
### Architectural support
|
||||
|
||||
`ptrace` is all cool, but how does it work? In the previous section, we've seen that `ptrace` has quite a bit to do with signals: `SIGTRAP` can be delivered during single-stepping, before `execve` and before or after system calls. Signals can be generated a number of ways, but we will look at two specific examples that can be used by debuggers to stop a program (effectively creating a breakpoint!) at a given location:
|
||||
|
||||
* **Undefined instructions:** When a process tries to execute an undefined instruction, an exception is raised by the CPU. This exception is handled via a CPU interrupt, and a handler corresponding to the interrupt in the kernel is called. This will result in a `SIGILL` being sent to the process. This, in turn, causes the process to stop, and the tracer is notified via a wait-event. It can then decide what to do. On x86, the `ud2` instruction is guaranteed to always be undefined.
|
||||
|
||||
* **Debugging interrupt:** The problem with the previous approach is that the `ud2` instruction takes two bytes of machine code. A special instruction exists that takes one byte and raises an interrupt. It's `int $3` and the machine code is `0xCC`. When this interrupt is raised, the kernel sends a `SIGTRAP` to the process and, just as before, the tracer is notified.
|
||||
|
||||
|
||||
|
||||
|
||||
This is fine, but how do we coerce the tracee to execute these instructions? Easy: `ptrace` has `PTRACE_POKETEXT`, which can overwrite a word at a memory location. A debugger would read the original word at the location using `PTRACE_PEEKTEXT` and replace it with `0xCC`, remembering the original byte and the fact that it is a breakpoint in its internal state. The next time the tracee executes at the location, it is automatically stopped by virtue of a `SIGTRAP`. The debugger's end user can then decide how to continue (for instance, inspect the registers).
|
||||
|
||||
Okay, we've covered breakpoints, but what about watchpoints? How does a debugger stop a program when a certain memory location is read or written? Surely you wouldn't just overwrite every instruction with `int $3` that could read or write some memory location. Meet debug registers, a set of registers designed to fulfill this goal more efficiently:
|
||||
|
||||
* `DR0` to `DR3`: Each of these registers contains an address (a memory location), where the debugger wants the tracee to stop for some reason. The reason is specified as a bitmask in `DR7`.
|
||||
* `DR4` and `DR5`: These are obsolete aliases for `DR6` and `DR7`, respectively.
|
||||
* `DR6`: Debug status. Contains information about which `DR0` to `DR3` caused the debugging exception to be raised. This is used by Linux to figure out the information passed along with the `SIGTRAP` to the tracee.
|
||||
* `DR7`: Debug control. Using the bits in these registers, the debugger can control how the addresses specified in `DR0` to `DR3` are interpreted. A bitmask controls the size of the watchpoint (whether 1, 2, 4, or 8 bytes are monitored) and whether to raise an exception on execution, reading, writing, or either of reading and writing.
|
||||
|
||||
|
||||
|
||||
Because the debug registers form part of the `USER` area of a process, the debugger can use `PTRACE_POKEUSER` to write values into the debug registers. The debug registers are only relevant to a specific process and are thus restored to the value at preemption before the process regains control of the CPU.
|
||||
|
||||
### Tip of the iceberg
|
||||
|
||||
We've only glanced at the tip of the iceberg that is a debugger: we covered `ptrace`, went over some of its functionality, and then looked at how `ptrace` is implemented. Some parts of `ptrace` can be implemented in software, but other parts have to be implemented in hardware; otherwise they'd be very expensive or even impossible.
|
||||
|
||||
There's plenty that we didn't cover, of course. Questions like "how does a debugger know where a variable is in memory?" remain open due to space and time constraints, but I hope you've learned something from this article; if it piqued your interest, there are plenty of resources available online to learn more.
|
||||
|
||||
For more, attend Levente Kurusa's talk, [Let's Write a Debugger!][3], at [linux.conf.au][1], which will be held January 22-26 in Sydney.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/how-debuggers-really-work
|
||||
|
||||
作者:[Levente Kurusa][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/lkurusa
|
||||
[1]:https://linux.conf.au/index.html
|
||||
[2]:https://www.rust-lang.org
|
||||
[3]:https://rego.linux.conf.au/schedule/presentation/91/
|
@ -1,109 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
How To Run A Command For A Specific Time In Linux
|
||||
======
|
||||

|
||||
|
||||
The other day I was transferring a large file using rsync to another system on my local area network. Since it was a very big file, it took around 20 minutes to complete. I didn’t want to wait that long, and I didn’t want to terminate the process by pressing CTRL+C either. I was just wondering if there could be an easy way to run a command for a specific time and kill it automatically once the time is up on Unix-like operating systems – hence this post. Read on.
|
||||
|
||||
### Run A Command For A Specific Time In Linux
|
||||
|
||||
We can do this in two ways.
|
||||
|
||||
#### Method 1 – Using “timeout” command
|
||||
|
||||
The most common method is using **timeout** command. For those who don’t know, the timeout command will effectively limit the absolute execution time of a process. The timeout command is part of the GNU coreutils package, so it comes pre-installed in all GNU/Linux systems.
|
||||
|
||||
Let us say you want to run a command for only a certain number of seconds and then kill it. To do so, we use:
|
||||
```
|
||||
$ timeout <time-limit-interval> <command>
|
||||
|
||||
```
|
||||
|
||||
For example, the following command will terminate after 10 seconds.
|
||||
```
|
||||
$ timeout 10s tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
You also don’t have to specify the suffix “s” for seconds. The following command is the same as the one above.
|
||||
```
|
||||
$ timeout 10 tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
The other available suffixes are:
|
||||
|
||||
* ‘m’ for minutes,
|
||||
* ‘h’ for hours
|
||||
* ‘d’ for days.
|
||||
|
||||
|
||||
|
||||
If you run the **tail -f /var/log/pacman.log** command on its own, it will keep running until you manually end it by pressing CTRL+C. However, if you run it with the **timeout** command, it will be killed automatically after the given time interval. If the command is still running after the timeout, you can send a **kill** signal as shown below.
|
||||
```
|
||||
$ timeout -k 20 10 tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
In this case, if the tail command is still running after the initial 10 seconds, timeout first sends it a TERM signal; if it is still running 20 seconds after that signal, timeout sends a KILL signal to end it.
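When you use timeout inside a script, it helps to know whether the command finished on its own or was cut off: GNU timeout exits with status 124 when it had to stop the command. The host and file names in the second command are placeholders, so treat this as a small sketch:
```
# sleep 5 is cut off after 2 seconds, so timeout reports exit status 124
$ timeout 2 sleep 5; echo "exit status: $?"
exit status: 124

# A script can use that status to tell a timeout apart from a normal failure
$ timeout 20m rsync -a big.file user@host:/backup/; [ $? -eq 124 ] && echo "rsync was stopped after 20 minutes"
```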
|
||||
|
||||
For more details, check the man pages.
|
||||
```
|
||||
$ man timeout
|
||||
|
||||
```
|
||||
|
||||
Sometimes, a particular program might take a long time to complete and end up freezing your system. In such cases, you can use this trick to end the process automatically after a particular time.
|
||||
|
||||
Also, consider using **Cpulimit**, a simple application to limit the CPU usage of a process. For more details, check the following link.
|
||||
|
||||
#### Method 2 – Using “Timelimit” program
|
||||
|
||||
The Timelimit utility executes a given command with the supplied arguments and terminates the spawned process after a given time with a given signal. First, it sends a warning signal, and then, after the timeout, it sends the **kill** signal.
|
||||
|
||||
Unlike the timeout utility, Timelimit has more options. You can pass a number of arguments such as killsig, warnsig, killtime, and warntime. It is available in the default repositories of Debian-based systems, so you can install it using the command:
|
||||
```
|
||||
$ sudo apt-get install timelimit
|
||||
|
||||
```
|
||||
|
||||
For Arch-based systems, it is available in the AUR. So, you can install it using any AUR helper programs such as [**Pacaur**][3], [**Packer**][4], [**Yay**][5], [**Yaourt**][6] etc.
|
||||
|
||||
For other distributions, download the source [**from here**][7] and install it manually. After installing the Timelimit program, run the following command to run a command for a specific time, for example 10 seconds:
|
||||
```
|
||||
$ timelimit -t10 tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
If you run timelimit without any arguments, it will use the default values: warntime=3600 seconds, warnsig=15, killtime=120, killsig=9. For more details, refer to the man pages and the project’s website given at the end of this guide.
|
||||
```
|
||||
$ man timelimit
|
||||
|
||||
```
|
||||
|
||||
And, that’s all for today. I hope this was useful. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/run-command-specific-time-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/02/Timeout.gif
|
||||
[3]:https://www.ostechnix.com/install-pacaur-arch-linux/
|
||||
[4]:https://www.ostechnix.com/install-packer-arch-linux-2/
|
||||
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
|
||||
[7]:http://devel.ringlet.net/sysutils/timelimit/#download
|
@ -1,135 +0,0 @@
|
||||
Translating
|
||||
Make your first contribution to an open source project
|
||||
============================================================
|
||||
|
||||
> There's a first time for everything.
|
||||
|
||||

|
||||
Image by : [WOCinTech Chat][16]. Modified by Opensource.com. [CC BY-SA 4.0][17]
|
||||
|
||||
It's a common misconception that contributing to open source is difficult. You might think, "Sometimes I can't even understand my own code; how am I supposed to understand someone else's?"
|
||||
|
||||
Relax. Until last year, I thought the same thing. Reading and understanding someone else's code and writing your own on top of that can be a daunting task, but with the right resources, it isn't as hard as you might think.
|
||||
|
||||
The first step is to choose a project. This decision can be instrumental in turning a newbie programmer into a seasoned open sourcer.
|
||||
|
||||
Many amateur programmers interested in open source are advised to check out [Git][18], but that is not the best way to start. Git is maintained by uber-geeks with years of software development experience. It is a good place to find an open source project to contribute to, but it's not beginner-friendly. Most devs contributing to Git have enough experience that they do not need resources or detailed documentation. In this article, I'll provide a checklist of beginner-friendly features and some tips to make your first open source contribution easy.
|
||||
|
||||
### Understand the product
|
||||
|
||||
Before contributing to a project, you should understand how it works. To understand it, you need to try it for yourself. If you find the product interesting and useful, it is worth contributing to.
|
||||
|
||||
Too often, beginners try to contribute to a project without first using the software. They then get frustrated and give up. If you don't use the software, you can't understand how it works. If you don't know how it works, how can you fix a bug or write a new feature?
|
||||
|
||||
Remember: Try it, then hack it.
|
||||
|
||||
### Check the project's status
|
||||
|
||||
How active is the project?
|
||||
|
||||
If you send a pull request to an unmaintained or dormant project, your pull request (or PR) may never be reviewed or merged. Look for projects with lots of activity; that way you will get immediate feedback on your code and your contributions will not go to waste.
|
||||
|
||||
Here's how to tell if a project is active:
|
||||
|
||||
* **Number of contributors:** A growing number of contributors indicates the developer community is interested and willing to accept new contributors.
|
||||
|
||||
* **Frequency of commits:** Check the most recent commit date. If it was within the last week, or even month or two, the project is being maintained.
|
||||
|
||||
* **Number of maintainers:** A higher number of maintainers means more potential mentors to guide you.
|
||||
|
||||
* **Activity level in the chat room/IRC:** A busy chat room means quick replies to your queries.
|
||||
|
||||
### Resources for beginners
|
||||
|
||||
Coala is an example of an open source project that has its own resources for tutorials and documentation, where you can also access its API (every class and method). The site also features an attractive UI that makes you want to read more.
|
||||
|
||||
**Documentation:** Developers of all levels need reliable, well-maintained documentation to understand the details of a project. Look for projects that offer solid documentation on [GitHub][19] (or wherever it is hosted) and on a separate site like [Read the Docs][20], with lots of examples that will help you dive deep into the code.
|
||||
|
||||
### [coala-newcomers_guide.png][2]
|
||||
|
||||

|
||||
|
||||
**Tutorials:** Tutorials that explain how to add features to a project are helpful for beginners (however, you may not find them for all projects). For example, coala offers [tutorials for writing _bears_][21] (Python wrappers for linting tools to perform code analysis).
|
||||
|
||||
### [coala_ui.png][3]
|
||||
|
||||

|
||||
|
||||
**Labeled issues:** For beginners who are just figuring out how to choose their first project, selecting an issue can be an even tougher task. Issues labeled "difficulty/low," "difficulty/newcomer," "good first issue," and "low-hanging fruit" can be perfect for newbies.
|
||||
|
||||
### [coala_labeled_issues.png][4]
|
||||
|
||||

|
||||
|
||||
### Miscellaneous factors
|
||||
|
||||
### [ci_logs.png][5]
|
||||
|
||||

|
||||
|
||||
* **Maintainers' attitudes toward new contributors:** In my experience, most open sourcers are eager to help newcomers onboard their projects. However, you may also encounter some who are less welcoming (maybe even a bit rude) when you ask for help. Don't let them discourage you. Just because someone has more experience doesn't give them the right to be rude. There are plenty of others out there who want to help.
|
||||
|
||||
* **Review process/structure:** Your PR will go through a number of reviews and changes by experienced developers and your peers—that's how you learn the most about software development. A project with a stringent review process enables you to grow as a developer by writing production-grade code.
|
||||
|
||||
* **A robust CI pipeline:** Open source projects introduce beginners to continuous integration and deployment services. A robust CI pipeline will help you learn how to read and make sense of CI logs. It will also give you experience dealing with failing test cases and code coverage issues.
|
||||
|
||||
* **Participation in code programs (Ex. [Google Summer Of Code][1]): **Participating organizations demonstrate a willingness to commit to the long-term development of a project. They also provide an opportunity for newcomers to gain real-world development experience and get paid for it. Most of the organizations that participate in such programs welcome newbies.
|
||||
|
||||
### 7 beginner-friendly organizations
|
||||
|
||||
* [coala (Python)][7]
|
||||
|
||||
* [oppia (Python, Django)][8]
|
||||
|
||||
* [DuckDuckGo (Perl, JavaScript)][9]
|
||||
|
||||
* [OpenGenus (JavaScript)][10]
|
||||
|
||||
* [Kinto (Python, JavaScript)][11]
|
||||
|
||||
* [FOSSASIA (Python, JavaScript)][12]
|
||||
|
||||
* [Kubernetes (Go)][13]
|
||||
|
||||
|
||||
### About the author
|
||||
|
||||
[][22] Palash Nigam - I'm a computer science undergrad from India who loves to hack on open source software and spend most of my time on GitHub. My current interests include backend web development, blockchains, and all things python.[More about me][14]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/4/get-started-open-source-project
|
||||
|
||||
作者:[ Palash Nigam ][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/palash25
|
||||
[1]:https://en.wikipedia.org/wiki/Google_Summer_of_Code
|
||||
[2]:https://opensource.com/file/391211
|
||||
[3]:https://opensource.com/file/391216
|
||||
[4]:https://opensource.com/file/391226
|
||||
[5]:https://opensource.com/file/391221
|
||||
[6]:https://opensource.com/article/18/4/get-started-open-source-project?rate=i_d2neWpbOIJIAEjQKFExhe0U_sC6SiQgkm3c7ck8IM
|
||||
[7]:https://github.com/coala/coala
|
||||
[8]:https://github.com/oppia/oppia
|
||||
[9]:https://github.com/duckduckgo/
|
||||
[10]:https://github.com/OpenGenus/
|
||||
[11]:https://github.com/kinto
|
||||
[12]:https://github.com/fossasia/
|
||||
[13]:https://github.com/kubernetes
|
||||
[14]:https://opensource.com/users/palash25
|
||||
[15]:https://opensource.com/user/212436/feed
|
||||
[16]:https://www.flickr.com/photos/wocintechchat/25171528213/
|
||||
[17]:https://creativecommons.org/licenses/by/4.0/
|
||||
[18]:https://git-scm.com/
|
||||
[19]:https://github.com/
|
||||
[20]:https://readthedocs.org/
|
||||
[21]:http://api.coala.io/en/latest/Developers/Writing_Linter_Bears.html
|
||||
[22]:https://opensource.com/users/palash25
|
||||
[23]:https://opensource.com/users/palash25
|
||||
[24]:https://opensource.com/users/palash25
|
||||
[25]:https://opensource.com/article/18/4/get-started-open-source-project#comments
|
||||
[26]:https://opensource.com/tags/web-development
|
@ -1,3 +1,4 @@
|
||||
translating by stenphenxs
|
||||
How to get a core dump for a segfault on Linux
|
||||
============================================================
|
||||
|
||||
|
@ -1,108 +0,0 @@
|
||||
pinewall translating
|
||||
|
||||
An introduction to cryptography and public key infrastructure
|
||||
======
|
||||

|
||||
|
||||
Secure communication is quickly becoming the norm for today's web. In July 2018, Google Chrome plans to [start showing "not secure" notifications][1] for **all** sites transmitted over HTTP (instead of HTTPS). Mozilla has a [similar plan][2]. While cryptography is becoming more commonplace, it has not become easier to understand. [Let's Encrypt][3] designed and built a wonderful solution to provide and periodically renew free security certificates, but if you don't understand the underlying concepts and pitfalls, you're just another member of a large group of [cargo cult][4] programmers.
|
||||
|
||||
### Attributes of secure communication
|
||||
|
||||
The intuitively obvious purpose of cryptography is confidentiality: a message can be transmitted without prying eyes learning its contents. For confidentiality, we encrypt a message: given a message, we pair it with a key and produce a meaningless jumble that can only be made useful again by reversing the process using the same key (thereby decrypting it). Suppose we have two friends, [Alice and Bob][5], and their nosy neighbor, Eve. Alice can encrypt a message like "Eve is annoying", send it to Bob, and never have to worry about Eve snooping on her.
|
||||
|
||||
For truly secure communication, we need more than confidentiality. Suppose Eve gathered enough of Alice and Bob's messages to figure out that the word "Eve" is encrypted as "Xyzzy". Furthermore, Eve knows Alice and Bob are planning a party and Alice will be sending Bob the guest list. If Eve intercepts the message and adds "Xyzzy" to the end of the list, she's managed to crash the party. Therefore, Alice and Bob need their communication to provide integrity: a message should be immune to tampering.
|
||||
|
||||
We have another problem though. Suppose Eve watches Bob open an envelope marked "From Alice" with a message inside from Alice reading "Buy another gallon of ice cream." Eve sees Bob go out and come back with ice cream, so she has a general idea of the message's contents even if the exact wording is unknown to her. Bob throws the message away, Eve recovers it, and then every day for the next week drops an envelope marked "From Alice" with a copy of the message in Bob's mailbox. Now the party has too much ice cream and Eve goes home with free ice cream when Bob gives it away at the end of the night. The extra messages are confidential, and their integrity is intact, but Bob has been misled as to the true identity of the sender. Authentication is the property of knowing that the person you are communicating with is in fact who they claim to be.
|
||||
|
||||
Information security has [other attributes][6], but confidentiality, integrity, and authentication are the three traits you must know.
|
||||
|
||||
### Encryption and ciphers
|
||||
|
||||
What are the components of encryption? We need a message which we'll call the plaintext. We may need to do some initial formatting to the message to make it suitable for the encryption process (padding it to a certain length if we're using a block cipher, for example). Then we take a secret sequence of bits called the key. A cipher then takes the key and transforms the plaintext into ciphertext. The ciphertext should look like random noise and only by using the same cipher and the same key (or as we will see later in the case of asymmetric ciphers, a mathematically related key) can the plaintext be restored.
|
||||
|
||||
The cipher transforms the plaintext's bits using the key's bits. Since we want to be able to decrypt the ciphertext, our cipher needs to be reversible too. We can use [XOR][7] as a simple example. It is reversible and is [its own inverse][8] (P ^ K = C; C ^ K = P) so it can both encrypt plaintext and decrypt ciphertext. A trivial use of an XOR can be used for encryption in a one-time pad, but it is generally not [practical][9]. However, it is possible to combine XOR with a function that generates an arbitrary stream of random data from a single key. Modern ciphers like AES and Chacha20 do exactly that.
|
||||
|
||||
We call any cipher that uses the same key to both encrypt and decrypt a symmetric cipher. Symmetric ciphers are divided into stream ciphers and block ciphers. A stream cipher runs through the message one bit or byte at a time. Our XOR cipher is a stream cipher, for example. Stream ciphers are useful if the length of the plaintext is unknown (such as data coming in from a pipe or socket). [RC4][10] is the best-known stream cipher but it is vulnerable to several different attacks, and the newest version (1.3) of the TLS protocol (the "S" in "HTTPS") does not even support it. [Efforts][11] are underway to create new stream ciphers with some candidates like [ChaCha20][12] already supported in TLS.
|
||||
|
||||
A block cipher takes a fixed-size block and encrypts it with a fixed-size key. The current king of the hill in the block cipher world is the [Advanced Encryption Standard][13] (AES), and it has a block size of 128 bits. That's not very much data, so block ciphers have a [mode][14] that describes how to apply the cipher's block operation across a message of arbitrary size. The simplest mode is [Electronic Code Book][15] (ECB) which takes the message, splits it into blocks (padding the message's final block if necessary), and then encrypts each block with the key independently.
|
||||
|
||||

|
||||
|
||||
You may spot a problem here: if the same block appears multiple times in the message (a phrase like "GET / HTTP/1.1" in web traffic, for example) and we encrypt it using the same key, we'll get the same result. The appearance of a pattern in our encrypted communication makes it vulnerable to attack.
|
||||
|
||||
Thus there are more advanced modes such as [Cipher Block Chaining][16] (CBC) where the result of each block's encryption is XORed with the next block's plaintext. The very first block's plaintext is XORed with an initialization vector of random numbers. There are many other modes each with different advantages and disadvantages in security and speed. There are even modes, such as Counter (CTR), that can turn a block cipher into a stream cipher.
|
||||
|
||||

|
||||
|
||||
In contrast to symmetric ciphers, there are asymmetric ciphers (also called public-key cryptography). These ciphers use two keys: a public key and a private key. The keys are mathematically related but still distinct. Anything encrypted with the public key can only be decrypted with the private key and data encrypted with the private key can be decrypted with the public key. The public key is widely distributed while the private key is kept secret. If you want to communicate with a given person, you use their public key to encrypt your message and only their private key can decrypt it. [RSA][17] is the current heavyweight champion of asymmetric ciphers.
|
||||
|
||||
A major downside to asymmetric ciphers is that they are computationally expensive. Can we get authentication with symmetric ciphers to speed things up? If you only share a key with one other person, yes. But that breaks down quickly. Suppose a group of people want to communicate with one another using a symmetric cipher. The group members could establish keys for each unique pairing of members and encrypt messages based on the recipient, but a group of 20 people works out to 190 pairs of members total and 19 keys for each individual to manage and secure. By using an asymmetric cipher, each person only needs to guard their own private key and have access to a listing of public keys.
|
||||
|
||||
Asymmetric ciphers are also limited in the [amount of data][18] they can encrypt. Like block ciphers, you have to split a longer message into pieces. In practice then, asymmetric ciphers are often used to establish a confidential, authenticated channel which is then used to exchange a shared key for a symmetric cipher. The symmetric cipher is used for subsequent communications since it is much faster. TLS can operate in exactly this fashion.
|
||||
|
||||
### At the foundation
|
||||
|
||||
At the heart of secure communication are random numbers. Random numbers are used to generate keys and to provide unpredictability for otherwise deterministic processes. If the keys we use are predictable, then we're susceptible to attack right from the very start. Random numbers are difficult to generate on a computer which is meant to behave in a consistent manner. Computers can gather random data from things like mouse movement or keyboard timings. But gathering that randomness (called entropy) takes significant time and involves additional processing to ensure uniform distributions. It can even involve the use of dedicated hardware (such as [a wall of lava lamps][19]). Generally, once we have a truly random value, we use that as a seed to put into a [cryptographically secure pseudorandom number generator][20]. Beginning with the same seed will always lead to the same stream of numbers, but what's important is that the stream of numbers descended from the seed doesn't exhibit any pattern. In the Linux kernel, [/dev/random and /dev/urandom][21] operate in this fashion: they gather entropy from multiple sources, process it to remove biases, create a seed, and can then provide the random numbers used to generate an RSA key for example.
|
||||
|
||||
### Other cryptographic building blocks
|
||||
|
||||
We've covered confidentiality, but I haven't mentioned integrity or authentication yet. For that, we'll need some new tools in our toolbox.
|
||||
|
||||
The first is the cryptographic hash function. A cryptographic hash function is meant to take an input of arbitrary size and produce a fixed size output (often called a digest). If we can find any two messages that create the same digest, that's a collision and makes the hash function unsuitable for cryptography. Note the emphasis on "find"; if we have an infinite world of messages and a fixed sized output, there are bound to be collisions, but if we can find any two messages that collide without a monumental investment of computational resources, that's a deal-breaker. Worse still would be if we could take a specific message and could then find another message that results in a collision.
|
||||
|
||||
As well, the hash function should be one-way: given a digest, it should be computationally infeasible to determine what the message is. Respectively, these [requirements][22] are called collision resistance, second preimage resistance, and preimage resistance. If we meet these requirements, our digest acts as a kind of fingerprint for a message. No two people ([in theory][23]) have the same fingerprints, and you can't take a fingerprint and turn it back into a person.
|
||||
|
||||
If we send a message and a digest, the recipient can use the same hash function to generate an independent digest. If the two digests match, they know the message hasn't been altered. [SHA-256][24] is the most popular cryptographic hash function currently since [SHA-1][25] is starting to [show its age][26].
|
||||
|
||||
Hashes sound great, but what good is sending a digest with a message if someone can tamper with your message and then tamper with the digest too? We need to mix hashing in with the ciphers we have. For symmetric ciphers, we have message authentication codes (MACs). MACs come in different forms, but an HMAC is based on hashing. An [HMAC][27] takes the key K and the message M and blends them together using a hashing function H with the formula H(K + H(K + M)) where "+" is concatenation. Why this formula specifically? That's beyond this article, but it has to do with protecting the integrity of the HMAC itself. The MAC is sent along with an encrypted message. Eve could blindly manipulate the message, but as soon as Bob independently calculates the MAC and compares it to the MAC he received, he'll realize the message has been tampered with.
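If you want to see digests and HMACs from the command line, the openssl tool that ships with most Linux distributions can compute both. The message is borrowed from the example above and the key is a made-up placeholder, so treat this as an illustrative sketch rather than part of the protocol discussion:
```
# A SHA-256 digest acts as a fingerprint of the message (the output is 64 hex characters)
$ echo -n "Buy another gallon of ice cream" | openssl dgst -sha256

# An HMAC mixes a secret key into the digest, so someone without the key cannot recompute it
$ echo -n "Buy another gallon of ice cream" | openssl dgst -sha256 -hmac "alice-and-bob-secret"
```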
|
||||
|
||||
For asymmetric ciphers, we have digital signatures. In RSA, encryption with a public key makes something only the private key can decrypt, but the inverse is true as well and can create a type of signature. If only I have the private key and encrypt a document, then only my public key will decrypt the document, and others can implicitly trust that I wrote it: authentication. In fact, we don't even need to encrypt the entire document. If we create a digest of the document, we can then encrypt just the fingerprint. Signing the digest instead of the whole document is faster and solves some problems around the size of a message that can be encrypted using asymmetric encryption. Recipients decrypt the digest, independently calculate the digest for the message, and then compare the two to ensure integrity. The method for digital signatures varies for other asymmetric ciphers, but the concept of using the public key to verify a signature remains.
|
||||
|
||||
### Putting it all together
|
||||
|
||||
Now that we have all the major pieces, we can implement a [system][28] that has all three of the attributes we're looking for. Alice picks a secret symmetric key and encrypts it with Bob's public key. Then she hashes the resulting ciphertext and uses her private key to sign the digest. Bob receives the ciphertext and the signature, computes the ciphertext's digest and compares it to the digest in the signature he verified using Alice's public key. If the two digests are identical, he knows the symmetric key has integrity and is authenticated. He decrypts the ciphertext with his private key and uses the symmetric key Alice sent him to communicate with her confidentially using HMACs with each message to ensure integrity. There's no protection here against a message being replayed (as seen in the ice cream disaster Eve caused). To handle that issue, we would need some sort of "handshake" that could be used to establish a random, short-lived session identifier.
|
||||
|
||||
The cryptographic world is vast and complex, but I hope this article gives you a basic mental model of the core goals and components it uses. With a solid foundation in the concepts, you'll be able to continue learning more.
|
||||
|
||||
Thank you to Hubert Kario, Florian Weimer, and Mike Bursell for their help with this article.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/cryptography-pki
|
||||
|
||||
作者:[Alex Wood][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/awood
|
||||
[1]:https://security.googleblog.com/2018/02/a-secure-web-is-here-to-stay.html
|
||||
[2]:https://blog.mozilla.org/security/2017/01/20/communicating-the-dangers-of-non-secure-http/
|
||||
[3]:https://letsencrypt.org/
|
||||
[4]:https://en.wikipedia.org/wiki/Cargo_cult_programming
|
||||
[5]:https://en.wikipedia.org/wiki/Alice_and_Bob
|
||||
[6]:https://en.wikipedia.org/wiki/Information_security#Availability
|
||||
[7]:https://en.wikipedia.org/wiki/XOR_cipher
|
||||
[8]:https://en.wikipedia.org/wiki/Involution_(mathematics)#Computer_science
|
||||
[9]:https://en.wikipedia.org/wiki/One-time_pad#Problems
|
||||
[10]:https://en.wikipedia.org/wiki/RC4
|
||||
[11]:https://en.wikipedia.org/wiki/ESTREAM
|
||||
[12]:https://en.wikipedia.org/wiki/Salsa20
|
||||
[13]:https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
|
||||
[14]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
|
||||
[15]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#/media/File:ECB_encryption.svg
|
||||
[16]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#/media/File:CBC_encryption.svg
|
||||
[17]:https://en.wikipedia.org/wiki/RSA_(cryptosystem)
|
||||
[18]:https://security.stackexchange.com/questions/33434/rsa-maximum-bytes-to-encrypt-comparison-to-aes-in-terms-of-security
|
||||
[19]:https://www.youtube.com/watch?v=1cUUfMeOijg
|
||||
[20]:https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
|
||||
[21]:https://www.2uo.de/myths-about-urandom/
|
||||
[22]:https://crypto.stackexchange.com/a/1174
|
||||
[23]:https://www.telegraph.co.uk/science/2016/03/14/why-your-fingerprints-may-not-be-unique/
|
||||
[24]:https://en.wikipedia.org/wiki/SHA-2
|
||||
[25]:https://en.wikipedia.org/wiki/SHA-1
|
||||
[26]:https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
|
||||
[27]:https://en.wikipedia.org/wiki/HMAC
|
||||
[28]:https://en.wikipedia.org/wiki/Hybrid_cryptosystem
|
@ -1,3 +1,5 @@
|
||||
translated by hopefully2333
|
||||
|
||||
5 trending open source machine learning JavaScript frameworks
|
||||
======
|
||||

|
||||
|
@ -1,132 +0,0 @@
|
||||
translating-----geekpi
|
||||
|
||||
How to use the history command in Linux
|
||||
======
|
||||
|
||||

|
||||
|
||||
As I spend more and more time in terminal sessions, it feels like I'm continually finding new commands that make my daily tasks more efficient. The GNU `history` command is one that really changed my work day.
|
||||
|
||||
The GNU `history` command keeps a list of all the other commands that have been run from that terminal session, then allows you to replay or reuse those commands instead of retyping them. If you are an old greybeard, you know about the power of `history`, but for us dabblers or new sysadmin folks, `history` is an immediate productivity gain.
|
||||
|
||||
### History 101
|
||||
|
||||
To see `history` in action, open a terminal program on your Linux installation and type:
|
||||
```
|
||||
$ history
|
||||
|
||||
```
|
||||
|
||||
Here's the response I got:
|
||||
```
|
||||
1 clear
|
||||
|
||||
|
||||
|
||||
2 ls -al
|
||||
|
||||
|
||||
|
||||
3 sudo dnf update -y
|
||||
|
||||
|
||||
|
||||
4 history
|
||||
|
||||
```
|
||||
|
||||
The `history` command shows a list of the commands entered since you started the session. The joy of `history` is that now you can replay any of them by using a command such as:
|
||||
```
|
||||
$ !3
|
||||
|
||||
```
|
||||
|
||||
The `!3` command at the prompt tells the shell to rerun the command on line 3 of the history list. I could also access that command by entering:
|
||||
```
|
||||
linuser@my_linux_box: !sudo dnf
|
||||
|
||||
```
|
||||
|
||||
`history` will search for the last command that matches the pattern you provided and run it.
|
||||
|
||||
### Searching history
|
||||
|
||||
You can also use `history` to rerun the last command you entered by typing `!!`. And, by pairing it with `grep`, you can search for commands that match a text pattern or, by using it with `tail`, you can find the last few commands you executed. For example:
|
||||
```
|
||||
$ history | grep dnf
|
||||
|
||||
|
||||
|
||||
3 sudo dnf update -y
|
||||
|
||||
|
||||
|
||||
5 history | grep dnf
|
||||
|
||||
|
||||
|
||||
$ history | tail -n 3
|
||||
|
||||
|
||||
|
||||
4 history
|
||||
|
||||
|
||||
|
||||
5 history | grep dnf
|
||||
|
||||
|
||||
|
||||
6 history | tail -n 3
|
||||
|
||||
```
|
||||
|
||||
Another way to get to this search functionality is by typing `Ctrl-R` to invoke a reverse incremental search of your command history. After typing this, the prompt changes to:
|
||||
```
|
||||
(reverse-i-search)`':
|
||||
|
||||
```
|
||||
|
||||
Now you can start typing a command, and matching commands will be displayed for you to execute by pressing Return or Enter.
|
||||
|
||||
### Changing an executed command
|
||||
|
||||
`history` also allows you to rerun a command with different syntax. For example, if I wanted to change my previous command `history | grep dnf` to `history | grep ssh`, I can execute the following at the prompt:
|
||||
```
|
||||
$ ^dnf^ssh^
|
||||
|
||||
```
|
||||
|
||||
`history` will rerun the command, but replace `dnf` with `ssh`, and execute it.
|
||||
|
||||
### Removing history
|
||||
|
||||
There may come a time that you want to remove some or all the commands in your history file. If you want to delete a particular command, enter `history -d <line number>`. To clear the entire contents of the history file, execute `history -c`.
|
||||
|
||||
The history itself is stored in a file that you can modify as well. Bash shell users will find it in their home directory as `.bash_history`.
|
||||
|
||||
### Next steps
|
||||
|
||||
There are a number of other things that you can do with `history` (a short sketch of how to set them up follows the list below):
|
||||
|
||||
* Set the size of your history buffer to a certain number of commands
|
||||
* Record the date and time for each line in history
|
||||
* Prevent certain commands from being recorded in history
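These tweaks are controlled through shell variables, typically set in `~/.bashrc`. The exact values below are only examples for Bash, so adjust them to taste; a minimal sketch:
```
# Keep up to 10000 commands in memory and in ~/.bash_history
export HISTSIZE=10000
export HISTFILESIZE=10000

# Prefix each history entry with the date and time it was run
export HISTTIMEFORMAT="%F %T "

# Skip duplicate commands and commands that start with a space
export HISTCONTROL=ignoredups:ignorespace

# Never record these particular commands
export HISTIGNORE="ls:history:clear"
```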
|
||||
|
||||
|
||||
|
||||
For more information about the `history` command and other interesting things you can do with it, take a look at the [GNU Bash Manual][1].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/6/history-command
|
||||
|
||||
作者:[Steve Morris][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/smorris12
|
||||
[1]:https://www.gnu.org/software/bash/manual/
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
BLUI: An easy way to create game UI
|
||||
======
|
||||
|
||||
|
@ -1,3 +1,4 @@
|
||||
translating by amwps290
|
||||
Complete Sed Command Guide [Explained with Practical Examples]
|
||||
======
|
||||
In a previous article, I showed the [basic usage of Sed][1], the stream editor, on a practical use case. Today, be prepared to gain more insight about Sed as we will take an in-depth tour of the sed execution model. This will be also an opportunity to make an exhaustive review of all Sed commands and to dive into their details and subtleties. So, if you are ready, launch a terminal, [download the test files][2] and sit comfortably before your keyboard: we will start our exploration right now!
|
||||
|
@ -1,3 +1,5 @@
|
||||
translating----geekpi
|
||||
|
||||
How to Mount and Use an exFAT Drive on Ubuntu Linux
|
||||
======
|
||||
**Brief: This quick tutorial shows you how to enable exFAT file system support on Ubuntu and other Ubuntu-based Linux distributions. This way you won’t see any error while mounting exFAT drives on your system.**
|
||||
|
@ -0,0 +1,159 @@
|
||||
How To Check Which Groups A User Belongs To On Linux
|
||||
======
|
||||
Adding a user to an existing group is one of the regular activities for a Linux admin. It is a daily activity for some administrators who work in big environments.
|
||||
|
||||
I even perform such an activity daily in my environment due to business requirements. The commands below are important tools that help you identify the existing groups in your environment.
|
||||
|
||||
These commands also help you identify which groups a user belongs to. All the users are listed in the `/etc/passwd` file and the groups are listed in `/etc/group`.
|
||||
|
||||
Whichever command we use, it will fetch the information from these files. Also, each command has its own unique features that help the user get just the required information.
|
||||
|
||||
### What Is /etc/passwd?
|
||||
|
||||
`/etc/passwd` is a text file that contains the information about each user that is necessary to log in to a Linux system. It maintains useful information about users such as username, password, user ID, group ID, user ID info, home directory, and shell. The passwd file contains each user's details as a single line with the seven fields described above.
|
||||
```
|
||||
$ grep "daygeek" /etc/passwd
|
||||
daygeek:x:1000:1000:daygeek,,,:/home/daygeek:/bin/bash
|
||||
|
||||
```
|
||||
|
||||
### What Is /etc/group?
|
||||
|
||||
`/etc/group` is a text file that defines which groups each user belongs to. We can add multiple users to a single group. It allows users to access other users' files and folders, as Linux permissions are organized into three classes: user, group, and others. It maintains useful information about groups such as the group name, group password, group ID (GID), and member list, with each group on a separate line. The group file contains each group's details as a single line with the four fields described above.
|
||||
|
||||
This check can be performed using the methods below.
|
||||
|
||||
* `groups`: Prints the primary and any supplementary groups for a user.
|
||||
* `id`: Prints user and group information for the specified username.
|
||||
* `lid`: Displays a user's groups or a group's users.
|
||||
* `getent`: Gets entries from the Name Service Switch libraries.
|
||||
* `grep`: Stands for “global regular expression print”; it prints lines matching a pattern.
|
||||
|
||||
|
||||
|
||||
### What Is groups Command?
|
||||
|
||||
The groups command prints the names of the primary and any supplementary groups for each given username.
|
||||
```
|
||||
$ groups daygeek
|
||||
daygeek : daygeek adm cdrom sudo dip plugdev lpadmin sambashare
|
||||
|
||||
```
|
||||
|
||||
If you would like to check the list of groups associated with the current user, just run the **“groups”** command alone, without any username.
|
||||
```
|
||||
$ groups
|
||||
daygeek adm cdrom sudo dip plugdev lpadmin sambashare
|
||||
|
||||
```
|
||||
|
||||
### What Is id Command?
|
||||
|
||||
id stands for identity. It prints the real and effective user and group IDs: the user and group information for the specified user or, if no username is given, for the current user.
|
||||
```
|
||||
$ id daygeek
|
||||
uid=1000(daygeek) gid=1000(daygeek) groups=1000(daygeek),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),118(lpadmin),128(sambashare)
|
||||
|
||||
```
|
||||
|
||||
If you would like to check the list of groups associated with the current user, just run the **“id”** command alone, without any username.
|
||||
```
|
||||
$ id
|
||||
uid=1000(daygeek) gid=1000(daygeek) groups=1000(daygeek),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),118(lpadmin),128(sambashare)
|
||||
|
||||
```
|
||||
|
||||
### What Is lid Command?
|
||||
|
||||
It displays a user's groups or a group's users: information about the groups containing a given username, or the users contained in a given group name. This command requires elevated privileges to run.
|
||||
```
|
||||
$ sudo lid daygeek
|
||||
adm(gid=4)
|
||||
cdrom(gid=24)
|
||||
sudo(gid=27)
|
||||
dip(gid=30)
|
||||
plugdev(gid=46)
|
||||
lpadmin(gid=108)
|
||||
daygeek(gid=1000)
|
||||
sambashare(gid=124)
|
||||
|
||||
```
|
||||
|
||||
### What Is getent Command?
|
||||
|
||||
The getent command displays entries from databases supported by the Name Service Switch libraries, which are configured in /etc/nsswitch.conf.
|
||||
```
|
||||
$ getent group | grep daygeek
|
||||
adm:x:4:syslog,daygeek
|
||||
cdrom:x:24:daygeek
|
||||
sudo:x:27:daygeek
|
||||
dip:x:30:daygeek
|
||||
plugdev:x:46:daygeek
|
||||
lpadmin:x:118:daygeek
|
||||
daygeek:x:1000:
|
||||
sambashare:x:128:daygeek
|
||||
|
||||
```
|
||||
|
||||
If you would like to print only the names of the associated groups, pipe the above command through **“awk”**:
|
||||
```
|
||||
$ getent group | grep daygeek | awk -F: '{print $1}'
|
||||
adm
|
||||
cdrom
|
||||
sudo
|
||||
dip
|
||||
plugdev
|
||||
lpadmin
|
||||
daygeek
|
||||
sambashare
|
||||
|
||||
```
|
||||
|
||||
Run the command below to print only the primary group information.
|
||||
```
|
||||
$ getent group daygeek
|
||||
daygeek:x:1000:
|
||||
|
||||
```
|
||||
|
||||
### What Is grep Command?
|
||||
|
||||
grep stands for “global regular expression print”; it prints lines matching a pattern from a file.
|
||||
```
|
||||
$ grep "daygeek" /etc/group
|
||||
adm:x:4:syslog,daygeek
|
||||
cdrom:x:24:daygeek
|
||||
sudo:x:27:daygeek
|
||||
dip:x:30:daygeek
|
||||
plugdev:x:46:daygeek
|
||||
lpadmin:x:118:daygeek
|
||||
daygeek:x:1000:
|
||||
sambashare:x:128:daygeek
|
||||
|
||||
```
|
||||
|
||||
If you would like to print only the names of the associated groups, pipe the above command through **“awk”**:
|
||||
```
|
||||
$ grep "daygeek" /etc/group | awk -F: '{print $1}'
|
||||
adm
|
||||
cdrom
|
||||
sudo
|
||||
dip
|
||||
plugdev
|
||||
lpadmin
|
||||
daygeek
|
||||
sambashare
|
||||
|
||||
```
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-check-which-groups-a-user-belongs-to-on-linux/
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/prakash/
|
@ -0,0 +1,106 @@
|
||||
translating---geekpi
|
||||
|
||||
How to disable iptables firewall temporarily
|
||||
======
|
||||
|
||||
Learn how to disable the iptables firewall in Linux temporarily for troubleshooting purposes. Also learn how to save the policies and how to restore them when you enable the firewall again.
|
||||
|
||||
![How to disable iptables firewall temporarily][1]
|
||||
|
||||
Sometimes you need to turn off the iptables firewall to do some connectivity troubleshooting and then turn it back on. While doing so, you also want to save all your [firewall policies][2]. In this article, we will walk you through how to save the firewall policies and how to disable/enable the iptables firewall. For more details about the iptables firewall and policies, [read our article][3] on it.
|
||||
|
||||
### Save iptables policies
|
||||
|
||||
The first step in disabling the iptables firewall temporarily is to save the existing firewall rules/policies. The `iptables-save` command lists all your existing policies, which you can then save to a file on your server.
|
||||
|
||||
```
|
||||
root@kerneltalks # iptables-save
|
||||
# Generated by iptables-save v1.4.21 on Tue Jun 19 09:54:36 2018
|
||||
*nat
|
||||
:PREROUTING ACCEPT [1:52]
|
||||
:INPUT ACCEPT [1:52]
|
||||
:OUTPUT ACCEPT [15:1140]
|
||||
:POSTROUTING ACCEPT [15:1140]
|
||||
:DOCKER - [0:0]
|
||||
---- output truncated ----
|
||||
|
||||
root@kerneltalks # iptables-save > /root/firewall_rules.backup
|
||||
```
|
||||
|
||||
So `iptables-save` is the command with which you can take a backup of your iptables policies.
|
||||
|
||||
### Stop/disable iptables firewall
|
||||
|
||||
On older Linux systems you have the option of stopping the iptables service with `service iptables stop`, but if you are on a newer system, you just need to flush all the policies and allow all traffic through the firewall. This is as good as stopping the firewall.
|
||||
|
||||
Use the following list of commands to do that.
|
||||
```
|
||||
root@kerneltalks # iptables -F
|
||||
root@kerneltalks # iptables -X
|
||||
root@kerneltalks # iptables -P INPUT ACCEPT
|
||||
root@kerneltalks # iptables -P OUTPUT ACCEPT
|
||||
root@kerneltalks # iptables -P FORWARD ACCEPT
|
||||
```
|
||||
|
||||
Where –
|
||||
|
||||
* -F : Flush all policy chains
|
||||
* -X : Delete user defined chains
|
||||
* -P INPUT/OUTPUT/FORWARD : Set the default policy of the specified chain to ACCEPT (i.e. accept all traffic)
|
||||
|
||||
|
||||
|
||||
Once done, check the current firewall policies. The output should look like the one below, which means everything is accepted (as good as your firewall being disabled/stopped).
|
||||
|
||||
```
|
||||
# iptables -L
|
||||
Chain INPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
|
||||
Chain FORWARD (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
|
||||
Chain OUTPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
```
|
||||
|
||||
### Restore firewall policies
|
||||
|
||||
Once you are done with troubleshooting and want to turn iptables back on with all its configuration, you first need to restore the policies from the backup we took in the first step.
|
||||
|
||||
```
|
||||
root@kerneltalks # iptables-restore </root/firewall_rules.backup
|
||||
```
|
||||
### Start iptables firewall
|
||||
|
||||
Then start the iptables service using `service iptables start`, in case you stopped it in the previous step. If you haven't stopped the service, then restoring the policies alone will do. Check if all the policies are back in the iptables firewall configuration:
|
||||
|
||||
```
|
||||
# iptables -L
|
||||
Chain INPUT (policy ACCEPT)
|
||||
target prot opt source destination
|
||||
|
||||
Chain FORWARD (policy DROP)
|
||||
target prot opt source destination
|
||||
DOCKER-USER all -- anywhere anywhere
|
||||
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
|
||||
-----output truncated-----
|
||||
```
|
||||
|
||||
That’s it! You have successfully disabled and enabled the firewall without losing your policy rules.
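If you do this often, a small wrapper script like the sketch below can automate the save/flush/restore cycle. It is only an illustration: the backup path matches the one used above, and the `disable`/`enable` sub-command names are arbitrary choices, not part of any standard tool.

```
#!/bin/bash
# Sketch of a helper to temporarily disable and later re-enable iptables.
# Usage: ./fw-toggle.sh disable   (saves rules, then opens the firewall)
#        ./fw-toggle.sh enable    (restores the saved rules)

BACKUP=/root/firewall_rules.backup

case "$1" in
  disable)
    iptables-save > "$BACKUP"          # keep a copy of the current policies
    iptables -F                        # flush all rules
    iptables -X                        # delete user-defined chains
    iptables -P INPUT ACCEPT           # accept everything by default
    iptables -P OUTPUT ACCEPT
    iptables -P FORWARD ACCEPT
    ;;
  enable)
    iptables-restore < "$BACKUP"       # bring the saved policies back
    ;;
  *)
    echo "Usage: $0 {disable|enable}" >&2
    exit 1
    ;;
esac
```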
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://kerneltalks.com/howto/how-to-disable-iptables-firewall-temporarily/
|
||||
|
||||
作者:[kerneltalks][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://kerneltalks.com
|
||||
[1]:https://a2.kerneltalks.com/wp-content/uploads/2018/06/How-to-disable-iptables-firewall-temporarily.png
|
||||
[2]:https://kerneltalks.com/networking/configuration-of-iptables-policies/
|
||||
[3]:https://kerneltalks.com/networking/basics-of-iptables-linux-firewall/
|
153
sources/tech/20180620 Anatomy of a perfect pull request.md
Normal file
@ -0,0 +1,153 @@
|
||||
Anatomy of a perfect pull request
|
||||
======
|
||||

|
||||
|
||||
Writing clean code is just one of many factors you should care about when creating a pull request.
|
||||
|
||||
Large pull requests cause a big overhead during code review and make it easier for bugs to slip into the codebase.
|
||||
|
||||
That's why you need to care about the pull request itself. It should be short, have a clear title and description, and do only one thing.
|
||||
|
||||
### Why should you care?
|
||||
|
||||
* A good pull request will be reviewed quickly
|
||||
* It reduces the introduction of bugs into the codebase
|
||||
* It facilitates the onboarding of new developers
|
||||
* It does not block other developers
|
||||
* It speeds up the code review process and consequently, product development
|
||||
|
||||
|
||||
|
||||
### The size of the pull request
|
||||
|
||||

|
||||
|
||||
The first step to identifying problematic pull requests is to look for big diffs.
|
||||
|
||||
Several studies show that it is harder to find bugs when reviewing a lot of code.
|
||||
|
||||
In addition, large pull requests will block other developers who may be depending on the code.
|
||||
|
||||
#### How can we determine the perfect pull request size?
|
||||
|
||||
A [study of a Cisco Systems programming team][1] revealed that a review of 200-400 LOC over 60 to 90 minutes should yield 70-90% defect discovery.
|
||||
|
||||
With this number in mind, a good pull request should not have more than 250 lines of code changed.
|
||||
|
||||

|
||||
|
||||
Image from [small business programming][2].
|
||||
|
||||
As shown in the chart above, pull requests with more than 250 lines of changes usually take more than one hour to review.
|
||||
|
||||
### Break down large pull requests into smaller ones
|
||||
|
||||
Feature breakdown is an art. The more you do it, the easier it gets.
|
||||
|
||||
What do I mean by feature breakdown?
|
||||
|
||||
Feature breakdown is understanding a big feature and breaking it into small pieces that make sense and that can be merged into the codebase piece by piece without breaking anything.
|
||||
|
||||
#### Learning by doing
|
||||
|
||||
Let’s say that you need to create a subscribe feature on your app. It's just a form that accepts an email address and saves it.
|
||||
|
||||
Without knowing how your app works, I can already break it into eight pull requests:
|
||||
|
||||
* Create a model to save emails
|
||||
* Create a route to receive requests
|
||||
* Create a controller
|
||||
* Create a service to save it in the database (business logic)
|
||||
* Create a policy to handle access control
|
||||
* Create a subscribe component (frontend)
|
||||
* Create a button to call the subscribe component
|
||||
* Add the subscribe button in the interface
|
||||
|
||||
|
||||
|
||||
As you can see, I broke this feature into many parts, most of which can be done simultaneously by different developers.
|
||||
|
||||
### Single responsibility principle
|
||||
|
||||
The single responsibility principle (SRP) is a computer programming principle that states that every [module][3] or [class][4] should have responsibility for a single part of the [functionality][5] provided by the [software][6], and that responsibility should be entirely [encapsulated][7] by the class.
|
||||
|
||||
Just like classes and modules, pull requests should do only one thing.
|
||||
|
||||
Following the SRP reduces the overhead caused by revising code that attempts to solve several problems.
|
||||
|
||||
Before submitting a PR for review, try applying the single responsibility principle. If the code does more than one thing, break it into other pull requests.
|
||||
|
||||
### Title and description matter
|
||||
|
||||
When creating a pull request, you should care about the title and the description.
|
||||
|
||||
Imagine that the code reviewer is joining your team today without knowing what is going on. He should be able to understand the changes.
|
||||
|
||||
![good_title_and_description.png][9]
|
||||
|
||||
What a good title and description look like
|
||||
|
||||
The image above shows [what a good title and description look like][10].
|
||||
|
||||
### The title of the pull request should be self-explanatory
|
||||
|
||||
The title should make clear what is being changed.
|
||||
|
||||
Here are some examples:
|
||||
|
||||
#### Make a useful description
|
||||
|
||||
* Describe what was changed in the pull request
|
||||
* Explain why this PR exists
|
||||
* Make it clear how it does what it sets out to do. For example, does it change a column in the database? How is this done? What happens to the old data?
|
||||
* Use screenshots to demonstrate what has changed.
|
||||
|
||||
|
||||
|
||||
### Recap
|
||||
|
||||
#### Pull request size
|
||||
|
||||
The pull request must have a maximum of 250 lines of change.
|
||||
|
||||
#### Feature breakdown
|
||||
|
||||
Whenever possible, break pull requests into smaller ones.
|
||||
|
||||
#### Single Responsibility Principle
|
||||
|
||||
The pull request should do only one thing.
|
||||
|
||||
#### Title
|
||||
|
||||
Create a self-explanatory title that describes what the pull request does.
|
||||
|
||||
#### Description
|
||||
|
||||
Detail what was changed, why it was changed, and how it was changed.
|
||||
|
||||
_This article was originally posted at[Medium][11]. Reposted with permission._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/6/anatomy-perfect-pull-request
|
||||
|
||||
作者:[Hugo Dias][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/hugodias
|
||||
[1]:https://smartbear.com/learn/code-review/best-practices-for-peer-code-review/
|
||||
[2]:https://smallbusinessprogramming.com/optimal-pull-request-size/
|
||||
[3]:https://en.wikipedia.org/wiki/Modular_programming
|
||||
[4]:https://en.wikipedia.org/wiki/Class_%28computer_programming%29
|
||||
[5]:https://en.wikipedia.org/wiki/Software_feature
|
||||
[6]:https://en.wikipedia.org/wiki/Software
|
||||
[7]:https://en.wikipedia.org/wiki/Encapsulation_(computer_programming)
|
||||
[8]:/file/400671
|
||||
[9]:https://opensource.com/sites/default/files/uploads/good_title_and_description.png (good_title_and_description.png)
|
||||
[10]:https://github.com/rails/rails/pull/32865
|
||||
[11]:https://medium.com/@hugooodias/the-anatomy-of-a-perfect-pull-request-567382bb6067
|
@ -0,0 +1,85 @@
|
||||
Stop merging your pull requests manually
|
||||
======
|
||||
|
||||

|
||||
|
||||
If there's something that I hate, it's doing things manually when I know I could automate them. Am I alone in this situation? I doubt so.
|
||||
|
||||
Nevertheless, every day, there are thousands of developers using [GitHub][1] who are doing the same thing over and over again: they click on this button:
|
||||
|
||||
![Screen-Shot-2018-06-19-at-18.12.39][2]
|
||||
|
||||
This does not make any sense.
|
||||
|
||||
Don't get me wrong. It makes sense to merge pull requests. It just does not make sense that someone has to push this damn button every time.
|
||||
|
||||
It does not make any sense because every development team in the world has a known list of prerequisites before they merge a pull request. Those requirements are almost always the same, something along these lines:
|
||||
|
||||
* Is the test suite passing?
|
||||
* Is the documentation up to date?
|
||||
* Does this follow our code style guideline?
|
||||
* Have N developers reviewed this?
|
||||
|
||||
|
||||
|
||||
As this list gets longer, the merging process becomes more error-prone. "Oops, John just clicked on the merge button while there were not enough developers who had reviewed the patch." Ring a bell?
|
||||
|
||||
In my team, we're like every team out there. We know what our criteria for merging some code into our repository are. That's why we set up a continuous integration system that runs our test suite each time somebody creates a pull request. We also require the code to be reviewed by 2 members of the team before it's approved.
|
||||
|
||||
When those conditions are all set, I want the code to be merged.
|
||||
|
||||
Without clicking a single button.
|
||||
|
||||
That's exactly how [Mergify][3] started.
|
||||
|
||||
![github-branching-1][4]
|
||||
|
||||
[Mergify][3] is a service that pushes that merge button for you. You define rules in the `.mergify.yml` file of your repository, and when the rules are satisfied, Mergify merges the pull request.
|
||||
|
||||
No need to press any button.
|
||||
|
||||
Take a random pull request, like this one:
|
||||
|
||||
![Screen-Shot-2018-06-20-at-17.12.11][5]
|
||||
|
||||
This comes from a small project that does not have a lot of continuous integration services set up, just Travis. In this pull request, everything's green: one of the owners reviewed the code, and the tests are passing. Therefore, the code should be already merged: but it's there, hanging, chilling, waiting for someone to push that merge button. Someday.
|
||||
|
||||
With [Mergify][3] enabled, you'd just have to put this `.mergify.yml` at the root of the repository:
|
||||
```
|
||||
rules:
  default:
    protection:
      required_status_checks:
        contexts:
          - continuous-integration/travis-ci
      required_pull_request_reviews:
        required_approving_review_count: 1
|
||||
|
||||
```
|
||||
|
||||
With such a configuration, [Mergify][3] enforces the desired restrictions, i.e., Travis passes and at least one project member has reviewed the code. As soon as those conditions are met, the pull request is automatically merged.
|
||||
|
||||
We built [Mergify][3] as a **free service for open-source projects**. The [engine powering the service][6] is also open-source.
|
||||
|
||||
Now go [check it out][3] and stop letting those pull requests hang out one second more. Merge them!
|
||||
|
||||
If you have any questions, feel free to ask us or write a comment below! And stay tuned, as Mergify offers a few other features that I can't wait to talk about!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://julien.danjou.info/stop-merging-your-pull-request-manually/
|
||||
|
||||
作者:[Julien Danjou][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://julien.danjou.info/author/jd/
|
||||
[1]:https://github.com
|
||||
[2]:https://julien.danjou.info/content/images/2018/06/Screen-Shot-2018-06-19-at-18.12.39.png
|
||||
[3]:https://mergify.io
|
||||
[4]:https://julien.danjou.info/content/images/2018/06/github-branching-1.png
|
||||
[5]:https://julien.danjou.info/content/images/2018/06/Screen-Shot-2018-06-20-at-17.12.11.png
|
||||
[6]:https://github.com/mergifyio/mergify-engine
|
98
translated/tech/20180115 How debuggers really work.md
Normal file
@ -0,0 +1,98 @@
|
||||
调试器到底怎样工作
|
||||
======
|
||||
|
||||

|
||||
|
||||
供图:opensource.com
|
||||
|
||||
调试器是那些大多数(即使不是每个)开发人员在软件工程职业生涯中至少使用过一次的软件之一,但是你们中有多少人知道它们到底是如何工作的?我在悉尼 [linux.conf.au 2018][1] 的演讲中,将讨论从头开始编写调试器...使用 [Rust][2]!
|
||||
|
||||
在本文中,术语“调试器”和“跟踪器”可以互换使用。“被跟踪者”是指正在被跟踪器跟踪的进程。
|
||||
|
||||
### ptrace 系统调用
|
||||
|
||||
大多数调试器严重依赖称为 `ptrace(2)` 的系统调用,其原型如下:
|
||||
|
||||
```
|
||||
long ptrace(enum __ptrace_request request, pid_t pid, void *addr, void *data);
|
||||
```
|
||||
|
||||
这是一个可以操纵进程几乎所有方面的系统调用;但是,在调试器可以连接到一个进程之前,“被跟踪者”必须以请求 `PTRACE_TRACEME` 调用 `ptrace`。这告诉 Linux,父进程通过 `ptrace` 连接到这个进程是合法的。但是......我们如何强制一个进程调用 `ptrace`?很简单!`fork/execve` 提供了在 `fork` 之后但在被跟踪者真正开始使用 `execve` 之前调用 `ptrace` 的简单方法。很方便地,`fork` 还会返回被跟踪者的 `pid`,这是后面使用 `ptrace` 所必需的。
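作为补充,下面给出一个极简的示意性 C 程序(并非原文内容;目标程序 `ls` 只是随意选择的例子),演示上面描述的流程:子进程先发出 `PTRACE_TRACEME` 请求,再调用 `execve`(这里用 `execlp` 封装),父进程作为跟踪器等待它停止:

```
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        /* 子进程(被跟踪者):声明允许父进程跟踪自己,然后执行目标程序 */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp");   /* 只有 execlp 失败时才会执行到这里 */
        exit(1);
    }

    /* 父进程(跟踪器):等待子进程因 execve 触发的 SIGTRAP 而停止 */
    int status;
    waitpid(child, &status, 0);
    printf("tracee stopped at execve (status=%d)\n", status);

    /* 让被跟踪者继续运行(PTRACE_CONT 在下文介绍),然后等待其退出 */
    ptrace(PTRACE_CONT, child, NULL, NULL);
    waitpid(child, &status, 0);
    return 0;
}
```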
|
||||
|
||||
现在被跟踪者可以被调试器追踪,重要的变化发生了:
|
||||
|
||||
* 每当一个信号被传送到被跟踪者时,它就会停止,并且一个可以被 `wait` 系列系统调用捕获的等待事件会被传送给跟踪器。
|
||||
* 每个 `execve` 系统调用都会导致 `SIGTRAP` 被传递给被跟踪者。(与之前的项目相结合,这意味着被跟踪者在一个 `execve` 完全发生之前停止。)
|
||||
|
||||
这意味着,一旦我们发出 `PTRACE_TRACEME` 请求并调用 `execve` 系统调用来实际在被跟踪者(进程上下文)中启动程序时,被跟踪者将立即停止,因为 `execve` 会传递一个 `SIGTRAP`,并且会被跟踪器中的等待事件捕获。我们如何继续?正如人们所期望的那样,`ptrace` 有大量的请求可以用来告诉被跟踪者可以继续:
|
||||
|
||||
|
||||
* `PTRACE_CONT`:这是最简单的。 被跟踪者运行,直到它接收到一个信号,此时等待事件被传递给跟踪器。这是最常见的实现真实世界调试器的“继续直至断点”和“永远继续”选项的方式。断点将在下面介绍。
|
||||
* `PTRACE_SYSCALL`:与 `PTRACE_CONT` 非常相似,但在进入系统调用之前以及在系统调用返回到用户空间之前停止。它可以与其他请求(我们将在本文后面介绍)结合使用来监视和修改系统调用的参数或返回值。系统调用追踪程序 `strace` 很大程度上使用这个请求来获知进程发起了哪些系统调用。
|
||||
* `PTRACE_SINGLESTEP`:这个很好理解。如果您之前使用过调试器(你会知道),此请求会执行下一条指令,然后立即停止。
|
||||
|
||||
|
||||
|
||||
我们可以通过各种各样的请求停止进程,但我们如何获得被调试者的状态?进程的状态大多是通过其寄存器捕获的,所以当然 `ptrace` 有一个请求来获得(或修改)寄存器:
|
||||
|
||||
* `PTRACE_GETREGS`:这个请求将给出被跟踪者刚刚被停止时的寄存器的状态。
|
||||
* `PTRACE_SETREGS`:如果跟踪器之前通过调用 `PTRACE_GETREGS` 得到了寄存器的值,它可以在参数结构中修改相应寄存器的值并使用 `PTRACE_SETREGS` 将寄存器设为新值。
|
||||
* `PTRACE_PEEKUSER` 和 `PTRACE_POKEUSER`:这些允许从被跟踪者的 `USER` 区读取信息,这里保存了寄存器和其他有用的信息。 这可以用来修改单一寄存器,而避免使用更重的 `PTRACE_{GET,SET}REGS` 请求。
|
||||
|
||||
|
||||
|
||||
对调试器来说,仅仅能修改寄存器是不够的。调试器有时需要读取一部分内存,甚至对其进行修改。GDB 可以使用 `print` 得到一个内存位置或变量的值。`ptrace` 通过下面的方法实现这个功能:
|
||||
|
||||
* `PTRACE_PEEKTEXT` 和 `PTRACE_POKETEXT`:这些允许读取和写入被跟踪者地址空间中的一个字。当然,使用这个功能时被跟踪者要被暂停。
|
||||
|
||||
|
||||
|
||||
真实世界的调试器也有类似断点和观察点的功能。在接下来的部分中,我将深入介绍体系结构对调试器支持的细节。为了清晰和简洁,本文将只考虑 x86。
|
||||
|
||||
### 体系结构的支持
|
||||
|
||||
`ptrace` 很酷,但它是如何工作? 在前面的部分中,我们已经看到 `ptrace` 跟信号有很大关系:`SIGTRAP` 可以在单步跟踪、`execve` 之前以及系统调用前后被传送。信号可以通过一些方式产生,但我们将研究两个具体的例子,以展示信号可以被调试器用来在给定的位置停止程序(有效地创建一个断点!):
|
||||
|
||||
|
||||
* **未定义的指令**:当一个进程尝试执行一个未定义的指令,CPU 将产生一个异常。此异常通过 CPU 中断处理,内核中相应的中断处理程序被调用。这将导致一个 `SIGILL` 信号被发送给进程。 这依次导致进程被停止,跟踪器通过一个等待事件被通知,然后它可以决定后面做什么。在 x86 上,指令 `ud2` 被确保始终是未定义的。
|
||||
|
||||
* **调试中断**:前面的方法的问题是,`ud2` 指令需要占用两个字节的机器码。存在一条特殊的单字节指令能够触发一个中断,它是 `int $3`,机器码是 `0xCC`。 当该中断发出时,内核向进程发送一个 `SIGTRAP`,如前所述,跟踪器被通知。
|
||||
|
||||
|
||||
|
||||
这很好,但我们如何让被跟踪者去执行这些指令呢?这很简单:利用 `ptrace` 的 `PTRACE_POKETEXT` 请求,它可以覆盖内存中的一个字。调试器将使用 `PTRACE_PEEKTEXT` 读取该位置原来的值并将其替换为 `0xCC`,然后在其内部状态中记录该处原来的值,以及它是一个断点的事实。下次被跟踪者执行到该位置时,它将被 `SIGTRAP` 信号自动停止。然后调试器的最终用户可以决定如何继续(例如,检查寄存器)。
|
||||
|
||||
好吧,我们已经讲过了断点,那观察点呢?当一个特定的内存位置被读或写时,调试器如何停止程序?当然,你不可能为了监视内存的读写而把每一条指令都覆盖为 `int $3`。有一组调试寄存器就是为了更有效地满足这个目的而被设计出来的:
|
||||
|
||||
|
||||
* `DR0` 到 `DR3`:这些寄存器中的每个都包含一个地址(内存位置),调试器因为某种原因希望被跟踪者在那些地址那里停止。 其原因以掩码方式被设定在 `DR7` 寄存器中。
|
||||
* `DR4` 和 `DR5`:这些分别是 `DR6` 和 `DR7` 的过时别名。
|
||||
* `DR6`:调试状态。包含有关 `DR0` 到 `DR3` 中的哪个寄存器导致调试异常被引发的信息。这被 Linux 用来计算与 `SIGTRAP` 信号一起传递给被跟踪者的信息。
|
||||
* `DR7`:调试控制。通过使用这些寄存器中的位,调试器可以控制如何解释 `DR0` 至 `DR3` 中指定的地址。位掩码控制监视点的尺寸(监视 1、2、4 或 8 个字节)以及是否在执行、读取、写入时引发异常,或在读取或写入时引发异常。
|
||||
|
||||
|
||||
由于调试寄存器是进程的 `USER` 区域的一部分,调试器可以使用 `PTRACE_POKEUSER` 将值写入调试寄存器。调试寄存器只与特定进程相关,因此在进程抢占并重新获得 CPU 控制权之前,调试寄存器会被恢复。
|
||||
|
||||
### 冰山一角
|
||||
|
||||
我们已经浏览了一个调试器的“冰山”:我们已经介绍了 `ptrace`,了解了它的一些功能,然后我们看到了 `ptrace` 是如何实现的。 `ptrace` 的某些部分可以用软件实现,但其它部分必须用硬件来实现,否则实现代价会非常高甚至无法实现。
|
||||
|
||||
当然有很多我们没有涉及。例如“调试器如何知道变量在内存中的位置?”等问题由于空间和时间限制而尚未解答,但我希望你从本文中学到了一些东西;如果它激起你的兴趣,网上有足够的资源可以了解更多。
|
||||
|
||||
想要了解更多,请查看 [linux.conf.au][1] 中 Levente Kurusa 的演讲 [Let's Write a Debugger!][3],于一月 22-26 日在悉尼举办。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/how-debuggers-really-work
|
||||
|
||||
作者:[Levente Kurusa][a]
|
||||
译者:[stephenxs](https://github.com/stephenxs)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/lkurusa
|
||||
[1]:https://linux.conf.au/index.html
|
||||
[2]:https://www.rust-lang.org
|
||||
[3]:https://rego.linux.conf.au/schedule/presentation/91/
|
@ -0,0 +1,107 @@
|
||||
如何在 Linux 中的特定时间运行命令
|
||||
======
|
||||

|
||||
|
||||
有一天,我使用 rsync 将大文件传输到局域网上的另一个系统。由于它是非常大的文件,大约需要 20 分钟才能完成。我不想再等了,我也不想按 CTRL+C 来终止这个过程。我只是想知道在类 Unix 操作系统中是否有简单的方法可以在特定的时间运行一个命令,并且一旦超时就自动杀死它 - 因此有了这篇文章。请继续阅读。
|
||||
|
||||
### 在 Linux 中在特定时间运行命令
|
||||
|
||||
我们可以用两种方法做到这一点。
|
||||
|
||||
#### 方法 1 - 使用 “timeout” 命令
|
||||
|
||||
最常用的方法是使用 **timeout** 命令。对于那些不知道的人来说,timeout 命令会有效地限制一个进程的绝对执行时间。timeout 命令是 GNU coreutils 包的一部分,因此它预装在所有 GNU/Linux 系统中。
|
||||
|
||||
假设你只想运行一个命令 5 秒钟,然后杀死它。为此,我们使用:
|
||||
```
|
||||
$ timeout <time-limit-interval> <command>
|
||||
|
||||
```
|
||||
|
||||
例如,以下命令将在 10 秒后终止。
|
||||
```
|
||||
$ timeout 10s tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
你也可以不用在秒数后加后缀 “s”。以下命令与上面的相同。
|
||||
```
|
||||
$ timeout 10 tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
其他可用的后缀有:
|
||||
|
||||
* `m` 代表分钟。
|
||||
* `h` 代表小时。
|
||||
* `d` 代表天。
|
||||
|
||||
|
||||
|
||||
如果你运行这个 **tail -f /var/log/pacman.log** 命令,它将继续运行,直到你按 CTRL+C 手动结束它。但是,如果你使用 **timeout** 命令运行它,它将在给定的时间间隔后自动终止。如果该命令在超时后仍在运行,则可以发送 **kill** 信号,如下所示。
|
||||
```
|
||||
$ timeout -k 20 10 tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
在这种情况下,如果 tail 命令在 10 秒超时后仍在运行,timeout 命令会在再等 20 秒后向它发送 kill 信号并将其结束。
|
||||
|
||||
有关更多详细信息,请查看手册页。
|
||||
```
|
||||
$ man timeout
|
||||
|
||||
```
|
||||
|
||||
有时,某个特定程序可能需要很长时间才能完成并最终冻结你的系统。在这种情况下,你可以使用此技巧在特定时间后自动结束该进程。
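补充一个小例子(并非原文内容,依据的是 GNU coreutils 对 timeout 的文档说明):当命令因超时被结束时,timeout 以退出码 124 退出,因此在脚本里可以据此区分“命令正常结束”和“命令被超时杀掉”:

```
$ timeout 2 sleep 10
$ echo $?
124
```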
|
||||
|
||||
另外,可以考虑使用 **cpulimit**,一个简单的限制进程的 CPU 使用率的程序。有关更多详细信息,请查看下面的链接。
|
||||
|
||||
#### 方法 2 - 使用 “Timelimit” 程序
|
||||
|
||||
Timelimit 使用提供的参数执行给定的命令,并在给定的时间后使用给定的信号终止进程。首先,它会发送警告信号,然后在超时后发送 **kill** 信号。
|
||||
|
||||
与 timeout 不同,Timelimit 有更多选项。你可以传递一些参数,如 killsig、warnsig、killtime、warntime 等。它存在于基于 Debian 的系统的默认仓库中。所以,你可以使用如下命令来安装它:
|
||||
```
|
||||
$ sudo apt-get install timelimit
|
||||
|
||||
```
|
||||
|
||||
对于基于 Arch 的系统,它在 AUR 中存在。因此,你可以使用任何 AUR 助手进行安装,例如 [**Pacaur**][3]、[**Packer**][4]、[**Yay**][5]、[**Yaourt**][6] 等。
|
||||
|
||||
对于其他发行版,请[**在这里**][7]下载源码并手动安装。安装 Timelimit 后,运行下面的命令,让某个命令只运行一段特定的时间,例如 10 秒钟:
|
||||
```
|
||||
$ timelimit -t10 tail -f /var/log/pacman.log
|
||||
|
||||
```
|
||||
|
||||
如果不带任何参数运行 timelimit,它将使用默认值:warntime=3600 秒、warnsig=15、killtime=120、killsig=9。有关更多详细信息,请参阅本指南最后给出的手册页和项目网站。
|
||||
```
|
||||
$ man timelimit
|
||||
|
||||
```
|
||||
|
||||
今天就讲这些。我希望这篇文章对你有用。后面还会有更多好东西,敬请关注!
|
||||
|
||||
干杯!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/run-command-specific-time-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2018/02/Timeout.gif
|
||||
[3]:https://www.ostechnix.com/install-pacaur-arch-linux/
|
||||
[4]:https://www.ostechnix.com/install-packer-arch-linux-2/
|
||||
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
|
||||
[7]:http://devel.ringlet.net/sysutils/timelimit/#download
|
@ -0,0 +1,134 @@
|
||||
在开源项目中做出你的第一个贡献
|
||||
============================================================
|
||||
|
||||
> 这是许多事情的第一步
|
||||
|
||||

|
||||
图片提供 : [WOCinTech Chat][16]. 图片修改 : Opensource.com. [CC BY-SA 4.0][17]
|
||||
|
||||
有一个普遍的误解,那就是对开源做出贡献是一件很难的事。你可能会想,“有时我甚至不能理解我自己的代码;那我怎么可能理解别人的?”
|
||||
|
||||
放轻松。直到去年,我也是这么认为的。阅读和理解他人的代码,然后在其基础上写出你自己的代码,这是一件令人气馁的任务;但如果有合适的资源,这并不像你想象的那么糟。
|
||||
|
||||
第一步要做的是选择一个项目。这个决定可能是一个菜鸟转变成一个老练的开源贡献者的关键一步。
|
||||
|
||||
许多对开源感兴趣的业余程序员都被建议从 [Git][18] 入手,但这并不是最好的开始方式。Git 是由许多有着多年软件开发经验的超级极客维护的。它是寻找可以做贡献的开源项目的好地方,但对新手并不友好。大多数对 Git 做出贡献的开发者都有足够的经验,他们不需要参考各类资源或文档。在这篇文章里,我将提供一个对新手友好的特性的列表,并且给出一些建议,希望可以使你更轻松地对开源做出贡献。
|
||||
|
||||
### 理解产品
|
||||
|
||||
在开始贡献之前,你需要理解项目是怎么工作的。为了理解这一点,你需要自己来尝试。如果你发现这个产品很有趣并且有用,它就值得你来做贡献。
|
||||
|
||||
初学者常常选择参与贡献那些他们没有使用过的软件。他们会失望,并且最终放弃贡献。如果你没有用过这个软件,你不会理解它是怎么工作的。如果你不理解它是怎么工作的,你怎么能解决 bug 或添加新特性呢?
|
||||
|
||||
要记住:尝试它,才能改变它。
|
||||
|
||||
### 确认产品的状况
|
||||
|
||||
这个项目有多活跃?
|
||||
|
||||
如果你向一个暂停维护的项目提交一个<ruby>拉取请求<rt>pull request</rt></ruby>,你的请求可能永远不会被讨论或合并。找找那些活跃的项目,这样你的代码可以得到即时的反馈,你的贡献也就不会被浪费。
|
||||
|
||||
这里介绍了怎么确认一个项目是否还是活跃的:
|
||||
|
||||
* **贡献者数量:** 不断增加的贡献者数量表明开发者社区乐于接受新的贡献者。
|
||||
|
||||
* **<ruby>提交<rt>commit</rt></ruby>频率:** 查看最近的提交时间。如果是一周之内,甚至是一两个月内,这个项目应该是定期维护的。
|
||||
|
||||
* **维护者数量:** 维护者的数量越多,你越可能得到指导。
|
||||
|
||||
* **聊天室活动等级:** 一个繁忙的聊天室意味着你的问题可以更快得到回复。
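如果项目托管在 GitHub 或其他 git 平台上,除了查看网页上的统计信息,你也可以在克隆仓库之后用几条普通的 git 命令粗略地检查这些指标(以下命令只是示例,并非本文原有内容):

```
# 最近一个月的提交数量
$ git log --since="1 month ago" --oneline | wc -l

# 最近一次提交的时间
$ git log -1 --format=%cd

# 按提交数排序列出贡献者,粗略了解社区规模
$ git shortlog -sn | head
```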
|
||||
|
||||
### 新手资源
|
||||
|
||||
Coala 是一个开源项目的例子。它有自己的教程和文档,让你可以使用它(每一个类和方法)的 API。这个网站还设计了一个有吸引力的界面,让你有阅读的兴趣。
|
||||
|
||||
**文档:** 所有水平的开发者都需要可靠的、被很好地维护的文档,来理解项目的细节。找找那些在 [GitHub][19](或者项目托管的其他任何位置)上,或者在类似 [Read the Docs][20] 这样的独立页面上提供完善文档的项目,这样可以帮助你深入了解代码。
|
||||
|
||||
### [Coala 新手指南.png][2]
|
||||
|
||||

|
||||
|
||||
**教程:** 教程会给新手解释如何在项目里添加特性(不过,并不是每个项目都提供教程)。例如,Coala 提供了 [tutorials for writing _bears_][21](执行代码分析的<ruby>格式化代码<rt>linting</rt></ruby>工具的 Python 包装器)。
|
||||
|
||||
### [Coala 界面.png][3]
|
||||
|
||||

|
||||
|
||||
**添加了标签的<ruby>讨论点<rt>issue</rt></ruby>:** 对刚刚想明白如何选择第一个项目的初学者来说,挑选一个讨论点是一个更加困难的任务。被标记为“难度/低”、“难度/新手”、“利于初学者”以及“low-hanging fruit”的讨论点,都对新手友好。
|
||||
|
||||
### [Coala 讨论点标签.png][4]
|
||||
|
||||

|
||||
|
||||
### 其他因素
|
||||
|
||||
### [ci_历史纪录.png][5]
|
||||
|
||||

|
||||
|
||||
* **维护者对新的贡献者的态度:** 从我的经验来看,大部分开源贡献者都很乐于帮助他们项目里的新手。然而,当你问问题时,你也有可能遇到一些不太友好的人(甚至可能有点粗鲁)。不要因为这些人失去信心,他们只是把无法向比自己更有经验的人发泄的情绪撒在了新人身上而已。还有很多其他人愿意提供帮助。
|
||||
|
||||
* **审阅过程/结构:** 你的拉取请求将被你的同事和有经验的开发者查看和修改很多次——这就是你学习软件开发最主要的方式。一个具有严格审阅过程的项目,能让你通过编写生产级代码成长为更好的开发者。
|
||||
|
||||
* **一个稳健的<ruby>持续集成<rt>continuous integration</rt></ruby>管道:** 开源项目会向新手们介绍持续集成和部署服务。一个稳健的 CI 管道将帮助你学习阅读和理解 CI 日志,它也将带给你处理失败的测试用例和代码覆盖率问题的经验。
|
||||
|
||||
* **参加编程项目(例如 [Google Summer Of Code][1]):** 参与这类计划表明你乐于对一个项目的长期发展做出贡献。它们也会给新手提供一个获得现实世界开发经验并得到报酬的机会。大多数参加这些计划的组织都欢迎新人加入。
|
||||
|
||||
### 7 个对新手友好的组织
|
||||
|
||||
* [coala (Python)][7]
|
||||
|
||||
* [oppia (Python, Django)][8]
|
||||
|
||||
* [DuckDuckGo (Perl, JavaScript)][9]
|
||||
|
||||
* [OpenGenus (JavaScript)][10]
|
||||
|
||||
* [Kinto (Python, JavaScript)][11]
|
||||
|
||||
* [FOSSASIA (Python, JavaScript)][12]
|
||||
|
||||
* [Kubernetes (Go)][13]
|
||||
|
||||
|
||||
### 关于作者
|
||||
|
||||
[][22] Palash Nigam - 我是一个印度计算机科学专业本科生,十分乐于参与开源软件的开发,我在 GitHub 上花费了大部分的时间。我现在的兴趣包括 web 后端开发、区块链,以及一切与 Python 相关的东西。[更多关于我][14]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/4/get-started-open-source-project
|
||||
|
||||
作者:[ Palash Nigam ][a]
|
||||
译者:[lonaparte](https://github.com/lonaparte)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/palash25
|
||||
[1]:https://en.wikipedia.org/wiki/Google_Summer_of_Code
|
||||
[2]:https://opensource.com/file/391211
|
||||
[3]:https://opensource.com/file/391216
|
||||
[4]:https://opensource.com/file/391226
|
||||
[5]:https://opensource.com/file/391221
|
||||
[6]:https://opensource.com/article/18/4/get-started-open-source-project?rate=i_d2neWpbOIJIAEjQKFExhe0U_sC6SiQgkm3c7ck8IM
|
||||
[7]:https://github.com/coala/coala
|
||||
[8]:https://github.com/oppia/oppia
|
||||
[9]:https://github.com/duckduckgo/
|
||||
[10]:https://github.com/OpenGenus/
|
||||
[11]:https://github.com/kinto
|
||||
[12]:https://github.com/fossasia/
|
||||
[13]:https://github.com/kubernetes
|
||||
[14]:https://opensource.com/users/palash25
|
||||
[15]:https://opensource.com/user/212436/feed
|
||||
[16]:https://www.flickr.com/photos/wocintechchat/25171528213/
|
||||
[17]:https://creativecommons.org/licenses/by/4.0/
|
||||
[18]:https://git-scm.com/
|
||||
[19]:https://github.com/
|
||||
[20]:https://readthedocs.org/
|
||||
[21]:http://api.coala.io/en/latest/Developers/Writing_Linter_Bears.html
|
||||
[22]:https://opensource.com/users/palash25
|
||||
[23]:https://opensource.com/users/palash25
|
||||
[24]:https://opensource.com/users/palash25
|
||||
[25]:https://opensource.com/article/18/4/get-started-open-source-project#comments
|
||||
[26]:https://opensource.com/tags/web-development
|
@ -0,0 +1,108 @@
|
||||
密码学及公钥基础设施入门
|
||||
======
|
||||

|
||||
|
||||
安全通信正快速成为当今互联网的规范。从 2018 年 7 月起,Google Chrome 将对**全部**使用 HTTP 传输(而不是 HTTPS 传输)的站点[开始显示“不安全”警告][1]。虽然密码学已经逐渐广为人知,但其本身并没有变得更容易理解。[Let's Encrypt][3] 设计并实现了一套令人惊叹的解决方案,可以提供免费安全证书和周期性续签;但如果不了解底层概念和缺陷,你也不过是加入了类似“[<ruby>船货崇拜<rt>cargo cult</rt></ruby>][4]”的技术崇拜的程序员大军。
|
||||
|
||||
### 安全通信的特性
|
||||
|
||||
密码学最直观明显的目标是<ruby>保密性<rt>confidentiality</rt></ruby>:<ruby>消息<rt>message</rt></ruby>传输过程中不会被窥探内容。为了保密性,我们对消息进行加密:对于给定消息,我们结合一个<ruby>密钥<rt>key</rt></ruby>生成一个无意义的乱码,只有通过相同的密钥逆转加密过程(即解密过程)才能将其转换为可读的消息。假设我们有两个朋友 [Alice 和 Bob][5],以及他们的<ruby>八卦<rt>nosy</rt></ruby>邻居 Eve。Alice 加密类似 "Eve 很讨厌" 的消息,将其发送给 Bob,期间不用担心 Eve 会窥探到这条消息的内容。
|
||||
|
||||
对于真正的安全通信,保密性是不够的。假如 Eve 收集了足够多 Alice 和 Bob 之间的消息,发现单词 "Eve" 被加密为 "Xyzzy"。除此之外,Eve 还知道 Alice 和 Bob 正在准备一个派对,Alice 会将访客名单发送给 Bob。如果 Eve 拦截了消息并将 "Xyzzy" 加到访客列表的末尾,那么她已经成功的破坏了这个派对。因此,Alice 和 Bob 需要他们之间的通信可以提供<ruby>完整性<rt>integrity</rt></ruby>:消息应该不会被篡改。
|
||||
|
||||
而且我们还有一个问题有待解决。假如 Eve 观察到 Bob 打开了标记为“来自 Alice”的信封,信封中包含一条来自 Alice 的消息“再买一加仑冰淇淋”。Eve 看到 Bob 外出,回家时带着冰淇淋,这样虽然 Eve 并不知道消息的完整内容,但她对消息有了大致的了解。Bob 将上述消息丢弃,但 Eve 找出了它并在下一周中的每一天都向 Bob 的邮箱中投递一封标记为“来自 Alice”的信封,内容拷贝自之前 Bob 丢弃的那封信。这样到了派对的时候,冰淇淋严重超量;派对当晚结束后,Bob 分发剩余的冰淇淋,Eve 带着免费的冰淇淋回到家。消息是加密的,完整性也没问题,但 Bob 被误导了,没有认出发信人的真实身份。<ruby>身份认证<rt>Authentication</rt></ruby>这个特性用于保证你正在通信的人的确是其声称的那样。
|
||||
|
||||
信息安全还有[其它特性][6],但保密性、完整性和身份验证是你必须了解的三大特性。
|
||||
|
||||
### 加密和加密算法
|
||||
|
||||
加密都包含哪些部分呢?首先,需要一条消息,我们称之为<ruby>明文<rt>plaintext</rt></ruby>。接着,需要对明文做一些格式上的初始化,以便用于后续的加密过程(例如,假如我们使用<ruby>分组加密算法<rt>block cipher</rt></ruby>,需要在明文尾部填充使其达到特定长度)。下一步,需要一个保密的比特序列,我们称之为<ruby>密钥<rt>key</rt></ruby>。然后,基于该密钥,使用一种加密算法将明文转换为<ruby>密文<rt>ciphertext</rt></ruby>。密文看上去像是随机噪声,只有通过相同的加密算法和相同的密钥(在后面提到的非对称加密算法情况下,是另一个数学上相关的密钥)才能恢复为明文。
|
||||
|
||||
(LCTT 译注:cipher 一般被翻译为密码,但其具体表达的意思是加密算法,这里采用加密算法的翻译)
|
||||
|
||||
加密算法使用密钥加密明文。考虑到希望能够解密密文,我们用到的加密算法也必须是<ruby>可逆的<rt>reversible</rt></ruby>。作为简单示例,我们可以使用 [XOR][7]。该算子可逆,而且逆算子就是本身(P ^ K = C; C ^ K = P),故可同时用于加密和解密。该算子的平凡应用可以是<ruby>一次性密码本<rt>one-time pad</rt></ruby>,但一般而言并不[可行][9]。但可以将 XOR 与一个基于单个密钥生成<ruby>任意随机数据流<rt>arbitrary stream of random data</rt></ruby>的函数结合起来。现代加密算法 AES 和 Chacha20 就是这么设计的。
|
||||
|
||||
我们把加密和解密使用同一个密钥的加密算法称为<ruby>对称加密算法<rt>symmetric cipher</rt></ruby>。对称加密算法分为<ruby>流加密算法<rt>stream ciphers</rt></ruby>和分组加密算法两类。流加密算法依次对明文中的每个比特或字节进行加密。例如,我们上面提到的 XOR 加密算法就是一个流加密算法。流加密算法适用于明文长度未知的情形,例如数据从管道或 socket 传入。[RC4][10] 是最为人知的流加密算法,但在多种不同的攻击面前比较脆弱,以至于最新版本(1.3)的 TLS("HTTPS" 中的 "S")已经不再支持该加密算法。人们[正在努力][11]创建新的流加密算法,候选算法 [ChaCha20][12] 已经被 TLS 支持。
|
||||
|
||||
分组加密算法对固定长度的分组,使用固定长度的密钥加密。在分组加密算法领域,排行第一的是 [<ruby>先进加密标准<rt>Advanced Encryption Standard, AES</rt></ruby>][13],使用的分组长度为 128 比特。分组包含的数据并不多,因而分组加密算法包含一个[工作模式][14],用于描述如何对任意长度的明文执行分组加密。最简单的工作模式是 [<ruby>电子密码本<rt>Electronic Code Book, ECB</rt></ruby>][15],将明文按分组大小划分成多个分组(在必要情况下,填充最后一个分组),使用密钥独立的加密各个分组。
|
||||
|
||||

|
||||
|
||||
这里我们留意到一个问题:如果相同的分组在明文中出现多次(例如互联网流量中的 "GET / HTTP/1.1" 词组),由于我们使用相同的密钥加密分组,我们会得到相同的加密结果。我们的安全通信中会出现一种<ruby>模式规律<rt>pattern</rt></ruby>,容易受到攻击。
|
||||
|
||||
因此还有很多高级的工作模式,例如 [<ruby>密码分组链接<rt>Cipher Block Chaining, CBC</rt></ruby>][16],其中每个分组的明文在加密前会与前一个分组的密文进行 XOR 操作,而第一个分组的明文与一个随机数构成的初始化向量进行 XOR 操作。还有其它一些工作模式,在安全性和执行速度方面各有优缺点。甚至还有 Counter (CTR) 这种工作模式,可以将分组加密算法转换为流加密算法。
|
||||
|
||||

|
||||
|
||||
除了对称加密算法,还有<ruby>非对称加密算法<rt>asymmetric ciphers</rt></ruby>,也被称为<ruby>公钥密码学<rt>public-key cryptography</rt></ruby>。这类加密算法使用两个密钥:一个<ruby>公钥<rt>public key</rt></ruby>,一个<ruby>私钥<rt>private key</rt></ruby>。公钥和私钥在数学上有一定关联,但可以区分二者。经过公钥加密的密文只能通过私钥解密,经过私钥加密的密文可以通过公钥解密。公钥可以大范围分发出去,但私钥必须对外不可见。如果你希望和一个给定的人通信,你可以使用对方的公钥加密消息,这样只有他们的私钥可以解密出消息。在非对称加密算法领域,目前 [RSA][17] 最具有影响力。
|
||||
|
||||
非对称加密算法最主要的缺陷是,它们是<ruby>计算密集型<rt>computationally expensive</rt></ruby>的。那么使用对称加密算法可以让身份验证更快吗?如果你只与一个人共享密钥,答案是肯定的。但这种方式很快就会失效。假如一群人希望使用对称加密算法进行两两通信,如果对每对成员通信都采用单独的密钥,一个 20 人的群体将有 190 对成员通信,即每个成员要维护 19 个密钥并确认其安全性。如果使用非对称加密算法,每个成员仅需确保自己的私钥安全并维护一个公钥列表即可。
|
||||
|
||||
非对称加密算法也有加密[数据长度][18]限制。类似于分组加密算法,你需要将长消息进行划分。但实际应用中,非对称加密算法通常用于建立<ruby>机密<rt>confidential</rt></ruby>、<ruby>已认证<rt>authenticated</rt></ruby>的<ruby>通道<rt>channel</rt></ruby>,利用该通道交换对称加密算法的共享密钥。考虑到速度优势,对称加密算法用于后续的通信。TLS 就是严格按照这种方式运行的。
|
||||
|
||||
### 基础
|
||||
|
||||
安全通信的核心在于随机数。随机数用于生成密钥并为<ruby>确定性过程<rt>deterministic processes</rt></ruby>提供不可预测性。如果我们使用的密钥是可预测的,那我们从一开始就可能受到攻击。计算机被设计成按固定规则操作,因此生成随机数是比较困难的。计算机可以收集鼠标移动或<ruby>键盘计时<rt>keyboard timings</rt></ruby>这类随机数据。但收集随机性(也叫<ruby>信息熵<rt>entropy</rt></ruby>)需要花费不少时间,而且涉及额外处理以确保<ruby>均匀分布<rt>uniform distribution</rt></ruby>。甚至可以使用专用硬件,例如[<ruby>熔岩灯<rt>lava lamps</rt></ruby>墙][19]等。一般而言,一旦有了一个真正的随机数值,我们可以将其用作<ruby>种子<rt>seed</rt></ruby>,使用<ruby>密码安全的伪随机数生成器<rt>cryptographically secure pseudorandom number generator</rt></ruby>生成随机数。使用相同的种子,同一个随机数生成器生成的随机数序列保持不变,但重要的是随机数序列是无规律的。在 Linux 内核中,[/dev/random 和 /dev/urandom][21] 工作方式如下:从多个来源收集信息熵,进行<ruby>无偏处理<rt>remove biases</rt></ruby>,生成种子,然后生成随机数,该随机数可用于 RSA 密钥生成等。
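在 Linux 上可以直接观察这一点(以下命令只是演示,并非原文内容):

```
# 从内核随机数设备读取 16 个字节并以十六进制显示
$ head -c 16 /dev/urandom | od -An -tx1

# 或者让 OpenSSL 生成 16 个随机字节
$ openssl rand -hex 16
```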
|
||||
|
||||
### 其它密码学组件
|
||||
|
||||
我们已经实现了保密性,但还没有考虑完整性和身份验证。对于后两者,我们需要使用一些额外的技术。
|
||||
|
||||
首先是<ruby>密码散列函数<rt>crytographic hash function</rt></ruby>,该函数接受任意长度的输入并给出固定长度的输出(一般称为<ruby>摘要<rt>digest</rt></ruby>)。如果我们找到两条消息,其摘要相同,我们称之为<ruby>碰撞<rt>collision</rt></ruby>,对应的散列函数就不适合用于密码学。这里需要强调一下“找到”:考虑到消息的条数是无限的而摘要的长度是固定的,那么总是会存在碰撞;但如果无需海量的计算资源,我们总是能找到发生碰撞的消息对,那就令人比较担心了。更严重的情况是,对于每一个给定的消息,都能找到与之碰撞的另一条消息。
|
||||
|
||||
另外,哈希函数必须是<ruby>单向的<rt>one-way</rt></ruby>:给定一个摘要,反向计算对应的消息在计算上不可行。相应的,这类[条件][22]被称为<ruby>碰撞阻力<rt>collision resistance</rt></ruby>、<ruby>第二原象抗性<rt>second preimage resistance</rt></ruby>和<ruby>原象抗性<rt>preimage resistance</rt></ruby>。如果满足这些条件,摘要可以用作消息的指纹。[理论上][23]不存在具有相同指纹的两个人,而且你无法使用指纹反向找到其对应的人。
|
||||
|
||||
如果我们同时发送消息及其摘要,接收者可以使用相同的哈希函数独立计算摘要。如果两个摘要相同,可以认为消息没有被篡改。考虑到 [SHA-1][25] 已经变得[有些过时][26],目前最流行的密码散列函数是 [SHA-256][24]。
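你可以用 coreutils 自带的 `sha256sum` 直观地感受一下“指纹”的效果(示例命令并非原文内容):即使消息只改动一个字符,得到的摘要也会完全不同,而摘要长度始终固定:

```
$ echo -n "Eve is nosy" | sha256sum
$ echo -n "Eve is nosy!" | sha256sum
```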
|
||||
|
||||
散列函数看起来不错,但如果有人可以同时篡改消息及其摘要,那么消息发送仍然是不安全的。我们需要将哈希与加密算法结合起来。在对称加密算法领域,我们有<ruby>消息认证码<rt>message authentication codes, MACs</rt></ruby>技术。MACs 有多种形式,但<ruby>哈希消息认证码<rt>hash message authentication codes, HMAC</rt></ruby> 这类是基于哈希的。[HMAC][27] 使用哈希函数 H 处理密钥 K、消息 M,公式为 H(K + H(K + M)),其中 "+" 代表<ruby>连接<rt>concatenation</rt></ruby>。公式的独特之处并不在本文讨论范围内,大致来说与保护 HMAC 自身的完整性有关。发送加密消息的同时也发送 MAC。Eve 可以任意篡改消息,但一旦 Bob 独立计算 MAC 并与接收到的 MAC 做比较,就会发现消息已经被篡改。
|
||||
|
||||
在非对称加密算法领域,我们有<ruby>数字签名<rt>digital signatures</rt></ruby>技术。如果使用 RSA,使用公钥加密的内容只能通过私钥解密,反过来也是如此;这种机制可用于创建一种签名。如果只有我持有私钥并用其加密文档,那么只有我的公钥可以用于解密,那么大家潜在的承认文档是我写的:这是一种身份验证。事实上,我们无需加密整个文档。如果生成文档的摘要,只要对这个指纹加密即可。对摘要签名比对整个文档签名要快得多,而且可以解决非对称加密存在的消息长度限制问题。接收者解密出摘要信息,独立计算消息的摘要并进行比对,可以确保消息的完整性。对于不同的非对称加密算法,数字签名的方法也各不相同;但核心都是使用公钥来检验已有签名。
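下面用 OpenSSL 命令行勾勒一下“用私钥对摘要签名、用公钥验证”的流程(仅为示意,并非原文内容,文件名均为任意选择):

```
# 生成 RSA 私钥,并导出对应的公钥
$ openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
$ openssl pkey -in private.pem -pubout -out public.pem

# 用私钥对消息文件的 SHA-256 摘要签名
$ openssl dgst -sha256 -sign private.pem -out msg.sig msg.txt

# 接收方用公钥验证签名(成功时输出 Verified OK)
$ openssl dgst -sha256 -verify public.pem -signature msg.sig msg.txt
```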
|
||||
|
||||
### 汇总
|
||||
|
||||
现在,我们已经有了全部的主体组件,可以用其实现一个我们期待的、具有全部三个特性的[<ruby>体系<rt>system</rt></ruby>][28]。Alice 选取一个保密的对称加密密钥并使用 Bob 的公钥进行加密。接着,她对得到的密文进行哈希并使用其私钥对摘要进行签名。Bob 接收到密文和签名,一方面独立计算密文的摘要,另一方面使用 Alice 的公钥解密签名中的摘要;如果两个摘要相同,他可以确信对称加密密钥没有被篡改且通过了身份验证。Bob 使用私钥解密密文得到对称加密密钥,接着使用该密钥及 HMAC 与 Alice 进行保密通信,这样每一条消息的完整性都得到保障。但该体系没有办法抵御消息重放攻击(我们在 Eve 造成的冰淇淋灾难中见过这种攻击)。要解决重放攻击,我们需要使用某种类型的“<ruby>握手<rt>handshake</rt></ruby>”建立随机、短期的<ruby>会话标识符<rt>session identifier</rt></ruby>。
|
||||
|
||||
密码学的世界博大精深,我希望这篇文章能让你对密码学的核心目标及其组件有一个大致的了解。这些概念为你打下坚实的基础,让你可以继续深入学习。
|
||||
|
||||
感谢 Hubert Kario,Florian Weimer 和 Mike Bursell 在本文写作过程中提供的帮助。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/5/cryptography-pki
|
||||
|
||||
作者:[Alex Wood][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[pinewall](https://github.com/pinewall)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/awood
|
||||
[1]:https://security.googleblog.com/2018/02/a-secure-web-is-here-to-stay.html
|
||||
[2]:https://blog.mozilla.org/security/2017/01/20/communicating-the-dangers-of-non-secure-http/
|
||||
[3]:https://letsencrypt.org/
|
||||
[4]:https://en.wikipedia.org/wiki/Cargo_cult_programming
|
||||
[5]:https://en.wikipedia.org/wiki/Alice_and_Bob
|
||||
[6]:https://en.wikipedia.org/wiki/Information_security#Availability
|
||||
[7]:https://en.wikipedia.org/wiki/XOR_cipher
|
||||
[8]:https://en.wikipedia.org/wiki/Involution_(mathematics)#Computer_science
|
||||
[9]:https://en.wikipedia.org/wiki/One-time_pad#Problems
|
||||
[10]:https://en.wikipedia.org/wiki/RC4
|
||||
[11]:https://en.wikipedia.org/wiki/ESTREAM
|
||||
[12]:https://en.wikipedia.org/wiki/Salsa20
|
||||
[13]:https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
|
||||
[14]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
|
||||
[15]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#/media/File:ECB_encryption.svg
|
||||
[16]:https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#/media/File:CBC_encryption.svg
|
||||
[17]:https://en.wikipedia.org/wiki/RSA_(cryptosystem)
|
||||
[18]:https://security.stackexchange.com/questions/33434/rsa-maximum-bytes-to-encrypt-comparison-to-aes-in-terms-of-security
|
||||
[19]:https://www.youtube.com/watch?v=1cUUfMeOijg
|
||||
[20]:https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
|
||||
[21]:https://www.2uo.de/myths-about-urandom/
|
||||
[22]:https://crypto.stackexchange.com/a/1174
|
||||
[23]:https://www.telegraph.co.uk/science/2016/03/14/why-your-fingerprints-may-not-be-unique/
|
||||
[24]:https://en.wikipedia.org/wiki/SHA-2
|
||||
[25]:https://en.wikipedia.org/wiki/SHA-1
|
||||
[26]:https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
|
||||
[27]:https://en.wikipedia.org/wiki/HMAC
|
||||
[28]:https://en.wikipedia.org/wiki/Hybrid_cryptosystem
|
@ -0,0 +1,130 @@
|
||||
如何在 Linux 中使用 history 命令
|
||||
======
|
||||
|
||||

|
||||
|
||||
随着我在终端中花费的时间越来越多,我感觉自己在不断地发现新的命令,让我的日常任务更加高效。GNU 的 `history` 命令就是一个真正改变我日常工作的命令。
|
||||
|
||||
GNU `history` 命令保存了在该终端会话中运行过的所有命令的列表,然后允许你重放或者重用这些命令,而不用重新输入它们。如果你是一个老玩家,你肯定知道 `history` 的力量;但对于我们这些半吊子或新手系统管理员来说,`history` 能立竿见影地提高生产力。
|
||||
|
||||
### History 101
|
||||
|
||||
要查看 `history`,请在 Linux 中打开终端程序,然后输入:
|
||||
```
|
||||
$ history
|
||||
|
||||
```
|
||||
|
||||
这是我得到的响应:
|
||||
```
|
||||
1 clear
|
||||
|
||||
|
||||
|
||||
2 ls -al
|
||||
|
||||
|
||||
|
||||
3 sudo dnf update -y
|
||||
|
||||
|
||||
|
||||
4 history
|
||||
|
||||
```
|
||||
|
||||
`history` 命令显示自开始会话后输入的命令列表。 `history` 有趣的地方是你可以使用以下命令重放任意一个命令:
|
||||
```
|
||||
$ !3
|
||||
|
||||
```
|
||||
|
||||
提示符中的 `!3` 告诉 shell 重新运行历史列表中的第 3 个命令。我还可以通过输入以下命令来使用这个功能:
|
||||
```
|
||||
linuser@my_linux_box: !sudo dnf
|
||||
|
||||
```
|
||||
|
||||
`history` 将搜索与你提供的模式相匹配的最后一个命令并运行它。
|
||||
|
||||
### 搜索历史
|
||||
|
||||
你还可以输入 `!!` 重新运行 `history` 中的最后一条命令。而且,通过与 `grep` 配对,你可以搜索与文本模式相匹配的命令;或者通过与 `tail` 一起使用,你可以找到你最后执行的几条命令。例如:
|
||||
```
|
||||
$ history | grep dnf
|
||||
|
||||
|
||||
|
||||
3 sudo dnf update -y
|
||||
|
||||
|
||||
|
||||
5 history | grep dnf
|
||||
|
||||
|
||||
|
||||
$ history | tail -n 3
|
||||
|
||||
|
||||
|
||||
4 history
|
||||
|
||||
|
||||
|
||||
5 history | grep dnf
|
||||
|
||||
|
||||
|
||||
6 history | tail -n 3
|
||||
|
||||
```
|
||||
|
||||
另一种实现这个功能的方法是输入 `Ctrl-R` 来调用你的命令历史记录的递归搜索。输入后,提示变为:
|
||||
```
|
||||
(reverse-i-search)`':
|
||||
|
||||
```
|
||||
|
||||
现在你可以开始输入一个命令,并且会显示匹配的命令,按回车键执行。
|
||||
|
||||
### 更改已执行的命令
|
||||
|
||||
`history` 还允许你使用不同的语法重新运行命令。例如,如果我想改变我以前的命令 `history | grep dnf` 成 `history | grep ssh`,我可以在提示符下执行以下命令:
|
||||
```
|
||||
$ ^dnf^ssh^
|
||||
|
||||
```
|
||||
|
||||
`history` 将重新运行该命令,但用 `ssh` 替换 `dnf`,并执行它。
|
||||
|
||||
### 删除历史
|
||||
|
||||
有时你想要删除一些或全部的历史记录。如果要删除特定命令,请输入 `history -d <行号>`。要清空历史记录,请执行 `history -c`。
|
||||
|
||||
历史记录存储在一个你可以修改的文件中。bash shell 用户可以在他们的家目录下找到 `.bash_history`。
|
||||
|
||||
### 下一步
|
||||
|
||||
你可以使用 `history` 做许多其他事情:
|
||||
|
||||
* 将历史缓冲区设置为一定数量
|
||||
* 记录历史中每行的日期和时间
|
||||
* 防止某些命令被记录在历史记录中
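例如,下面是一个可以放进 `~/.bashrc` 的配置草案(假设使用 bash;这些变量在 GNU Bash 手册中均有说明):

```
# 增大内存和历史文件中保存的条目数量
export HISTSIZE=10000
export HISTFILESIZE=20000

# 为每条历史记录加上日期和时间
export HISTTIMEFORMAT="%F %T "

# 忽略重复命令以及以空格开头的命令
export HISTCONTROL=ignoreboth

# 不把这些命令记入历史
export HISTIGNORE="ls:history:clear"
```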
|
||||
|
||||
|
||||
|
||||
有关 `history` 命令的更多信息和其他有趣的事情,请参考 [GNU Bash 手册][1]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/6/history-command
|
||||
|
||||
作者:[Steve Morris][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/smorris12
|
||||
[1]:https://www.gnu.org/software/bash/manual/
|