翻译完成

This commit is contained in:
liuyakun 2017-12-13 23:24:11 +08:00
commit 8fcc27d79c
49 changed files with 4621 additions and 2570 deletions

View File

@ -0,0 +1,122 @@
Docker使用多阶段构建镜像
============================================================
多阶段构建是 Docker 17.05 及更高版本提供的新功能。对于致力于优化 Dockerfile 的人来说,它使 Dockerfile 易于阅读和维护。
> 致谢: 特别感谢 [Alex Ellis][1] 授权使用他的关于 Docker 多阶段构建的博客文章 [Builder pattern vs. Multi-stage builds in Docker][2] 作为以下示例的基础。
### 在多阶段构建之前
关于构建镜像最具挑战性的事情之一是保持镜像体积小巧。Dockerfile 中的每条指令都会在镜像中增加一层,并且在进入下一层之前,需要记得清除不再需要的构件。要编写一个非常高效的 Dockerfile你通常需要使用 shell 技巧和其它方式来尽可能地减少层数,并确保每一层都只带有来自上一层的所需构件,而没有任何其它东西。
实际上最常见的是,有一个 Dockerfile 用于开发(其中包含构建应用程序所需的所有内容),而另一个裁剪过的用于生产环境,它只包含您的应用程序以及运行它所需的内容。这被称为“构建器模式”。但是维护两个 Dockerfile 并不理想。
下面分别是一个 `Dockerfile.build` 和遵循上面的构建器模式的 `Dockerfile` 的例子:
`Dockerfile.build`
```
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN go get -d -v golang.org/x/net/html \
&& CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
```
注意这个例子还使用 Bash 的 `&&` 运算符人为地将两个 `RUN` 命令压缩在一起,以避免在镜像中创建额外的层。这种写法容易出错,且难以维护。例如,插入另一条命令时,很容易忘记继续使用 `\` 字符。
`Dockerfile`
```
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY app .
CMD ["./app"]
```
`build.sh`
```
#!/bin/sh
echo Building alexellis2/href-counter:build
docker build --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \
-t alexellis2/href-counter:build . -f Dockerfile.build
docker create --name extract alexellis2/href-counter:build
docker cp extract:/go/src/github.com/alexellis/href-counter/app ./app
docker rm -f extract
echo Building alexellis2/href-counter:latest
docker build --no-cache -t alexellis2/href-counter:latest .
rm ./app
```
当您运行 `build.sh` 脚本时,它会构建第一个镜像,并从中创建一个容器,以便将构件复制出来,然后再构建第二个镜像。这两个镜像都会占用您系统的空间,而且您的本地磁盘上还会留有一个 `app` 构件。
多阶段构建大大简化了这种情况!
### 使用多阶段构建
在多阶段构建中,您需要在 Dockerfile 中多次使用 `FROM` 声明。每次 `FROM` 指令可以使用不同的基础镜像,并且每次 `FROM` 指令都会开始新阶段的构建。您可以有选择地将构件从一个阶段复制到另一个阶段,把所有不需要的内容都留在之前的阶段,而不带入最终镜像。为了演示这是如何工作的,让我们调整前一节中的 Dockerfile 以使用多阶段构建。
`Dockerfile`
```
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
```
您只需要一个 Dockerfile不需要另外的构建脚本只需运行 `docker build` 即可。
```
$ docker build -t alexellis2/href-counter:latest .
```
最终的结果是和以前体积一样小的生产镜像,复杂性显著降低。您不需要创建任何中间镜像,也不需要将任何构件提取到本地系统。
它是如何工作的呢?第二条 `FROM` 指令以 `alpine:latest` 镜像作为基础开始一个新的构建阶段。`COPY --from=0` 这一行将前一个阶段产生的构件复制到这个新阶段。Go SDK 和所有中间构件都被留在了前一阶段,而不会保存到最终的镜像中。
### 命名您的构建阶段
默认情况下,这些阶段没有命名,您可以通过它们的整数来引用它们,从第一个 `FROM` 指令的 0 开始。但是,你可以通过在 `FROM` 指令中使用 `as <NAME>` 来为阶段命名。以下示例通过命名阶段并在 `COPY` 指令中使用名称来改进前一个示例。这意味着,即使您的 `Dockerfile` 中的指令稍后重新排序,`COPY` 也不会出问题。
```
FROM golang:1.7.3 as builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
```
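补充一个原文之外的小技巧:给阶段命名之后,还可以在构建时使用 `--target` 参数,让构建在指定的阶段停止。比如只构建 `builder` 阶段,用来调试编译环境:

```
$ docker build --target builder -t alexellis2/href-counter:builder .
```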
--------------------------------------------------------------------------------
via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/
作者:[docker][a]
译者:[iron0x](https://github.com/iron0x)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/
[1]:https://twitter.com/alexellisuk
[2]:http://blog.alexellis.io/mutli-stage-docker-builds/

View File

@ -0,0 +1,219 @@
在红帽企业版 Linux 中将系统服务容器化(一)
====================
在 2017 年红帽峰会上,有几个人问我“我们通常用完整的虚拟机来隔离如 DNS 和 DHCP 等网络服务,那我们可以用容器来取而代之吗?”答案是可以的,下面是在当前红帽企业版 Linux 7 系统上创建一个系统容器的例子。
### 我们的目的
**创建一个可以独立于任何其它系统服务而更新的网络服务,并且可以从主机端容易地管理和更新。**
让我们来探究一下在容器中建立一个运行在 systemd 之下的 BIND 服务器。在这一部分,我们将了解到如何建立自己的容器以及管理 BIND 配置和数据文件。
在本系列的第二部分,我们将看到如何整合主机中的 systemd 和容器中的 systemd。我们将探究如何管理容器中的服务并且使它作为一种主机中的服务。
### 创建 BIND 容器
为了使 systemd 在一个容器中轻松运行,我们首先需要在主机中增加两个包:`oci-register-machine` 和 `oci-systemd-hook`。`oci-systemd-hook` 这个钩子允许我们在一个容器中运行 systemd而不需要使用特权容器或者手工配置 tmpfs 和 cgroups。`oci-register-machine` 这个钩子允许我们使用 systemd 工具(如 `systemctl` 和 `machinectl`)来跟踪容器。
```
[root@rhel7-host ~]# yum install oci-register-machine oci-systemd-hook
```
回到创建我们的 BIND 容器上。[红帽企业版 Linux 7 基础镜像][6]包含了 systemd 作为其初始化系统。我们可以如我们在典型的系统中做的那样安装并激活 BIND。你可以从 [git 仓库中下载这份 Dockerfile][8]。
```
[root@rhel7-host bind]# vi Dockerfile
# Dockerfile for BIND
FROM registry.access.redhat.com/rhel7/rhel
ENV container docker
RUN yum -y install bind && \
yum clean all && \
systemctl enable named
STOPSIGNAL SIGRTMIN+3
EXPOSE 53
EXPOSE 53/udp
CMD [ "/sbin/init" ]
```
因为我们以 PID 1 来启动一个初始化系统,当我们告诉容器停止时,需要改变 docker CLI 发送的信号。参见 `kill` 系统调用手册(`man 2 kill`
> 唯一可以发送给 PID 1 进程(即 init 进程)的信号,是那些初始化系统明确安装了<ruby>信号处理器<rt>signal handler</rt></ruby>的信号。这是为了避免系统被意外破坏。
对于 systemd 信号处理器,`SIGRTMIN+3` 是对应于 `systemd start halt.target` 的信号。我们也需要为 BIND 暴露 TCP 和 UDP 端口号,因为这两种协议可能都要使用。
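作为补充(非原文内容)的一个小示例:`docker stop` 会向容器的 PID 1 发送 Dockerfile 中 `STOPSIGNAL` 声明的信号;也可以用 `docker kill` 手动发送同一信号,验证容器中的 systemd 能够干净地关闭(这里假设容器名为后文创建的 `named-container`;若你的 docker 版本不支持该信号名,可改用对应的信号编号):

```
[root@rhel7-host ~]# docker stop named-container
[root@rhel7-host ~]# docker kill --signal=RTMIN+3 named-container
```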
### 管理数据
有了一个可以工作的 BIND 服务,我们还需要一种管理配置文件和区域文件的方法。目前这些都放在容器里面,所以我们任何时候都可以进入容器去更新配置或者改变一个区域文件。从管理的角度来说,这并不是很理想。当要更新 BIND 时,我们将需要重建这个容器,所以镜像中的改变将会丢失。任何时候我们需要更新一个文件或者重启服务时,都需要进入这个容器,而这增加了步骤和时间。
相反的,我们将从这个容器中提取出配置文件和数据文件,把它们拷贝到主机上,然后在运行的时候挂载它们。用这种方式我们可以很容易地重启或者重建容器,而不会丢失所做出的更改。我们也可以使用容器外的编辑器来更改配置和区域文件。因为这个容器的数据看起来像“该系统所提供服务的特定站点数据”,让我们遵循 Linux <ruby>文件系统层次标准<rt>File System Hierarchy</rt></ruby>,并在当前主机上创建 `/srv/named` 目录来保持管理权分离。
```
[root@rhel7-host ~]# mkdir -p /srv/named/etc
[root@rhel7-host ~]# mkdir -p /srv/named/var/named
```
*提示:如果你正在迁移一个已有的配置文件,你可以跳过下面的步骤并且将它直接拷贝到 `/srv/named` 目录下。你也许仍然要用一个临时容器来检查一下分配给这个容器的 GID。*
让我们建立并运行一个临时容器来检查 BIND。在将 init 进程以 PID 1 运行时,我们不能交互地运行这个容器来获取一个 shell。我们会在容器启动后执行 shell并且使用 `rpm` 命令来检查重要文件。
```
[root@rhel7-host ~]# docker build -t named .
[root@rhel7-host ~]# docker exec -it $( docker run -d named ) /bin/bash
[root@0e77ce00405e /]# rpm -ql bind
```
对于这个例子来说,我们将需要 `/etc/named.conf``/var/named/` 目录下的任何文件。我们可以使用 `machinectl` 命令来提取它们。如果注册了一个以上的容器,我们可以在任一机器上使用 `machinectl status` 命令来查看运行的是什么。一旦有了这些配置,我们就可以终止这个临时容器了。
*如果你喜欢,资源库中也有一个[样例 `named.conf` 和针对 `example.com` 的区域文件][8]。*
```
[root@rhel7-host bind]# machinectl list
MACHINE CLASS SERVICE
8824c90294d5a36d396c8ab35167937f container docker
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /etc/named.conf /srv/named/etc/named.conf
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /var/named /srv/named/var/named
[root@rhel7-host ~]# docker stop infallible_wescoff
```
### 最终的创建
为了创建和运行最终的容器,添加卷选项以挂载:
- 将文件 `/srv/named/etc/named.conf` 映射为 `/etc/named.conf`
- 将目录 `/srv/named/var/named` 映射为 `/var/named`
因为这是我们最终的容器,我们将提供一个有意义的名字,以供我们以后引用。
```
[root@rhel7-host ~]# docker run -d -p 53:53 -p 53:53/udp -v /srv/named/etc/named.conf:/etc/named.conf:Z -v /srv/named/var/named:/var/named:Z --name named-container named
```
在最终容器运行时,我们可以更改本机配置来改变这个容器中 BIND 的行为。这个 BIND 服务器将需要在这个容器分配的任何 IP 上监听。请确保任何新文件的 GID 与来自这个容器中的其余的 BIND 文件相匹配。
```
[root@rhel7-host bind]# cp named.conf /srv/named/etc/named.conf
[root@rhel7-host ~]# cp example.com.zone /srv/named/var/named/example.com.zone
[root@rhel7-host ~]# cp example.com.rr.zone /srv/named/var/named/example.com.rr.zone
```
> 很好奇为什么我不需要在主机目录中改变 SELinux 上下文?^注1
我们将运行这个容器提供的 `rndc` 二进制文件重新加载配置。我们可以使用 `journald` 以同样的方式检查 BIND 日志。如果运行出现错误,你可以在主机中编辑该文件,并且重新加载配置。在主机中使用 `host``dig`,我们可以检查来自该容器化服务的 example.com 的响应。
```
[root@rhel7-host ~]# docker exec -it named-container rndc reload
server reload successful
[root@rhel7-host ~]# docker exec -it named-container journalctl -u named -n
-- Logs begin at Fri 2017-05-12 19:15:18 UTC, end at Fri 2017-05-12 19:29:17 UTC. --
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 9.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: A.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: B.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 8.B.D.0.1.0.0.2.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: reloading configuration succeeded
May 12 19:29:17 ac1752c314a7 named[27]: reloading zones succeeded
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: loaded serial 2001062601
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: sending notifies (serial 2001062601)
May 12 19:29:17 ac1752c314a7 named[27]: all zones loaded
May 12 19:29:17 ac1752c314a7 named[27]: running
[root@rhel7-host bind]# host www.example.com localhost
Using domain server:
Name: localhost
Address: ::1#53
Aliases:
www.example.com is an alias for server1.example.com.
server1.example.com is an alias for mail
```
> 你的区域文件没有更新吗?可能是因为你的编辑器,而不是序列号。^注2
### 终点线
我们已经达成了我们打算完成的目标:从容器中为 DNS 请求和区域文件提供服务。我们已经得到一个持久化的位置来管理更新和配置,并且该配置可以在更新后保持不变。
在这个系列的第二部分,我们将看到怎样将一个容器看作为主机中的一个普通服务来运行。
---
[关注 RHEL 博客](http://redhatstackblog.wordpress.com/feed/),通过电子邮件来获得本系列第二部分和其它新文章的更新。
---
### 附加资源
- **所附带文件的 Github 仓库:** [https://github.com/nzwulfin/named-container](https://github.com/nzwulfin/named-container)
- **注1** **通过容器访问本地文件的 SELinux 上下文**
你可能已经注意到,当我从容器向本地主机拷贝文件时,我没有运行 `chcon` 将主机中的文件类型改变为 `svirt_sandbox_file_t`。为什么没有出错?将一个文件拷贝到 `/srv` 会将这个文件标记为类型 `var_t`。难道是我运行了 `setenforce 0`(关闭了 SELinux
当然没有,这将让 [Dan Walsh 大哭](https://stopdisablingselinux.com/)LCTT 译注RedHat 的 SELinux 团队负责人,倡议不要禁用 SELinux。是的`machinectl` 确实将文件标记类型设置为期望的那样,可以看一下:
启动一个容器之前:
```
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
-rw-r-----. unconfined_u:object_r:var_t:s0 /srv/named/etc/named.conf
```
不过,运行时我使用了一个可以使 Dan Walsh 先生高兴起来的卷选项:`:Z`。`-v /srv/named/etc/named.conf:/etc/named.conf:Z` 命令的这部分做了两件事情:首先它表示需要使用一个私有卷的 SELinux 标记来重新标记;其次它表明以读写方式挂载。
启动容器之后:
```
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
-rw-r-----. root 25 system_u:object_r:svirt_sandbox_file_t:s0:c821,c956 /srv/named/etc/named.conf
```
- **注2** **VIM 备份行为能改变 inode**
如果你在本地主机中使用 `vim` 来编辑配置文件,却没有看到容器中发生改变,那你可能在不经意间创建了容器感知不到的新文件。在编辑时,有三种 `vim` 设定影响备份副本:`backup`、`writebackup` 和 `backupcopy`。
我摘录了 RHEL 7 中的来自官方 VIM [backup_table][9] 中的默认配置。
```
backup writebackup
off on backup current file, deleted afterwards (default)
```
所以,我们不会留下残留的 `~` 副本,但确实会创建备份。另外一个相关设定是 `backupcopy`,其默认值是 `auto`
```
"yes" make a copy of the file and overwrite the original one
"no" rename the file and write a new one
"auto" one of the previous, what works best
```
这种组合设定意味着当你编辑一个文件时,除非 `vim` 有理由(请查看文档了解其逻辑),你将会得到一个包含你编辑内容的新文件,当你保存时它会重命名为原先的文件。这意味着这个文件获得了新的 inode。对于大多数情况这不是问题但是这里容器的<ruby>绑定挂载<rt>bind mount</rt></ruby>对 inode 的改变很敏感。为了解决这个问题,你需要改变 `backupcopy` 的行为。
不管是在 `vim` 会话中还是在你的 `.vimrc`中,请添加 `set backupcopy=yes`。这将确保原先的文件被清空并覆写,维持了 inode 不变并且将该改变传递到了容器中。
------------
via: http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
作者:[Matt Micene][a]
译者:[liuxinyu123](https://github.com/liuxinyu123)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
[1]:http://rhelblog.redhat.com/author/mmicenerht/
[2]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo
[3]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_1
[4]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_2
[5]:http://redhatstackblog.wordpress.com/feed/
[6]:https://access.redhat.com/containers
[7]:http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo
[8]:https://github.com/nzwulfin/named-container
[9]:http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table

View File

@ -1,43 +1,50 @@
translated by smartgrids
Eclipse 如何助力 IoT 发展
============================================================
### 开源组织的模块发开发方式非常适合物联网。
> 开源组织的模块化开发方式非常适合物联网。
![How Eclipse is advancing IoT development](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_ArchitectureOfParticipation_520x292.png?itok=FA0Uuwzv "How Eclipse is advancing IoT development")
图片来源: opensource.com
[Eclipse][3] 可能不是第一个去研究物联网的开源组织。但是,远在 IoT 家喻户晓之前,该基金会在 2001 年左右就开始支持开源软件发展商业化。九月 Eclipse 物联网日和 RedMonk 的 [ThingMonk 2017][4] 一块举行,着重强调了 Eclipse 在 [物联网发展][5] 中的重要作用。它现在已经包含了 28 个项目,覆盖了大部分物联网项目需求。会议过程中,我和负责 Eclipse 市场化运作的 [Ian Skerritt][6] 讨论了 Eclipse 的物联网项目以及如何拓展它。
[Eclipse][3] 可能不是第一个去研究物联网的开源组织。但是,远在 IoT 家喻户晓之前,该基金会在 2001 年左右就开始支持开源软件发展商业化。
九月份的 Eclipse 物联网日和 RedMonk 的 [ThingMonk 2017][4] 一块举行,着重强调了 Eclipse 在 [物联网发展][5] 中的重要作用。它现在已经包含了 28 个项目,覆盖了大部分物联网项目需求。会议过程中,我和负责 Eclipse 市场化运作的 [Ian Skerritt][6] 讨论了 Eclipse 的物联网项目以及如何拓展它。
### 物联网的最新进展?
###物联网的最新进展?
我问 Ian 物联网同传统工业自动化,也就是前几十年通过传感器和相应工具来实现工厂互联的方式有什么不同。 Ian 指出很多工厂是还没有互联的。
另外,他说“ SCADA[监控和数据分析] 系统以及工厂底层技术都是私有、独立性的。我们很难去改变它,也很难去适配它们…… 现在,如果你想运行一套生产系统,你需要设计成百上千的单元。生产线想要的是满足用户需求,使制造过程更灵活,从而可以不断产出。” 这也就是物联网会带给制造业的一个很大的帮助。
另外,他说 “SCADA [<ruby>监控和数据分析<rt>supervisory control and data analysis</rt></ruby>] 系统以及工厂底层技术都是非常私有的、独立性的。我们很难去改变它,也很难去适配它们 …… 现在,如果你想运行一套生产系统,你需要设计成百上千的单元。生产线想要的是满足用户需求,使制造过程更灵活,从而可以不断产出。” 这也就是物联网会带给制造业的一个很大的帮助。
###Eclipse 物联网方面的研究
Ian 对于 Eclipse 在物联网的研究是这样描述的:“满足任何物联网解决方案的核心基础技术” ,通过使用开源技术,“每个人都可以使用从而可以获得更好的适配性。” 他说Eclipse 将物联网视为包括三层互联的软件栈。从更高的层面上看,这些软件栈(按照大家常见的说法)将物联网描述为跨越三个层面的网络。特定的观念可能认为含有更多的层面,但是他们一直符合这个三层模型的功能的:
### Eclipse 物联网方面的研究
Ian 对于 Eclipse 在物联网的研究是这样描述的:“满足任何物联网解决方案的核心基础技术” ,通过使用开源技术,“每个人都可以使用,从而可以获得更好的适配性。” 他说Eclipse 将物联网视为包括三层互联的软件栈。从更高的层面上看,这些软件栈(按照大家常见的说法)将物联网描述为跨越三个层面的网络。特定的实现方式可能含有更多的层,但是它们一般都可以映射到这个三层模型的功能上:
* 一种可以装载设备(例如设备、终端、微控制器、传感器)用软件的堆栈。
* 将不同的传感器采集到的数据信息聚合起来并传输到网上的一类网关。这一层也可能会针对传感器数据检测做出实时反
* 将不同的传感器采集到的数据信息聚合起来并传输到网上的一类网关。这一层也可能会针对传感器数据检测做出实时反
* 物联网平台后端的一个软件栈。这个后端云负责存储数据,并能基于采集到的数据(比如历史趋势、预测分析)提供服务。
这三个软件栈在 Eclipse 的白皮书 “ [The Three Software Stacks Required for IoT Architectures][7] ”中有更详细的描述。
这三个软件栈在 Eclipse 的白皮书 “[The Three Software Stacks Required for IoT Architectures][7] ”中有更详细的描述。
Ian 说在这些架构中开发一种解决方案时,“需要开发一些特殊的东西,但是很多底层的技术是可以借用的,像通信协议、网关服务。需要一种模块化的方式来满足不的需求场合。” Eclipse 关于物联网方面的研究可以概括为:开发模块化开源组件从而可以被用于开发大量的特定性商业服务和解决方案。
Ian 说在这些架构中开发一种解决方案时,“需要开发一些特殊的东西,但是很多底层的技术是可以借用的,像通信协议、网关服务。需要一种模块化的方式来满足不同的需求场合。” Eclipse 关于物联网方面的研究可以概括为:开发模块化开源组件,从而可以被用于开发大量的特定性商业服务和解决方案。
###Eclipse 的物联网项目
### Eclipse 的物联网项目
在众多一杯应用的 Eclipse 物联网应用中, Ian 举了两个和 [MQTT][8] 有关联的突出应用一个设备与设备互联M2M的物联网协议。 Ian 把它描述成“一个专为重视电源管理工作的油气传输线监控系统的信息发布/订阅协议。MQTT 已经是众多物联网广泛应用标准中很成功的一个。” [Eclipse Mosquitto][9] 是 MQTT 的代理,[Eclipse Paho][10] 是他的客户端。
[Eclipse Kura][11] 是一个物联网网关,引用 Ian 的话“它连接了很多不同的协议间的联系”包括蓝牙、Modbus、CANbus 和 OPC 统一架构协议,以及一直在不断添加的协议。一个优势就是,他说,取代了你自己写你自己的协议, Kura 提供了这个功能并将你通过卫星、网络或其他设备连接到网络。”另外它也提供了防火墙配置、网络延时以及其它功能。Ian 也指出“如果网络不通时,它会存储信息直到网络恢复。”
在众多已被应用的 Eclipse 物联网应用中, Ian 举了两个和 [MQTT][8] 有关联的突出应用一种设备对设备互联M2M的物联网协议。 Ian 把它描述成“一个专为重视电源管理工作的油气传输线监控系统设计的信息发布/订阅协议。MQTT 已经是众多物联网广泛应用标准中很成功的一个。” [Eclipse Mosquitto][9] 是 MQTT 的代理,[Eclipse Paho][10] 是它的客户端。
[Eclipse Kura][11] 是一个物联网网关,引用 Ian 的话,“它连接了很多不同的协议”包括蓝牙、Modbus、CANbus 和 OPC 统一架构协议,以及一直在不断添加的各种协议。他说,一个优势就是,“与其自己编写协议对接代码,不如让 Kura 提供这个功能,并把你通过卫星、网络或其他连接方式接入网络。”另外它也提供了防火墙配置、网络延时以及其它功能。Ian 也指出:“如果网络不通时,它会存储信息直到网络恢复。”
最新的一个项目中,[Eclipse Kapua][12] 正尝试通过微服务来为物联网云平台提供不同的服务。比如它集成了通信、汇聚、管理、存储和分析功能。Ian 说“它正在不断前进,虽然还没被完全开发出来,但是 Eurotech 和 RedHat 在这个项目上非常积极。”
Ian 说 [Eclipse hawkBit][13] ,软件更新管理的软件,是一项“非常有趣的项目。从安全的角度说,如果你不能更新你的设备,你将会面临巨大的安全漏洞。”很多物联网安全事故都和无法更新的设备有关,他说,“ HawkBit 可以基本负责通过物联网系统来完成扩展性更新的后端管理。”
物联网设备软件升级的难度一直被看作是难度最高的安全挑战之一。物联网设备不是一直连接的,而且数目众多,再加上首先设备的更新程序很难完全正常。正因为这个原因,关于无赖女王软件升级的项目一直是被当作重要内容往前推进。
Ian 说 [Eclipse hawkBit][13] 一个软件更新管理的软件是一项“非常有趣的项目。从安全的角度说如果你不能更新你的设备你将会面临巨大的安全漏洞。”很多物联网安全事故都和无法更新的设备有关他说“HawkBit 可以基本负责通过物联网系统来完成扩展性更新的后端管理。”
###为什么物联网这么适合 Eclipse
物联网设备软件升级的难度一直被看作是难度最高的安全挑战之一。物联网设备不是一直连接的,而且数目众多,再加上首先设备的更新程序很难完全正常。正因为这个原因,关于 IoT 软件升级的项目一直是被当作重要内容往前推进。
在物联网发展趋势中的一个方面就是关于构建模块来解决商业问题,而不是宽约工业和公司的大物联网平台。 Eclipse 关于物联网的研究放在一系列模块栈、提供特定和大众化需求功能的项目,还有就是指定目标所需的可捆绑式中间件、网关和协议组件上。
### 为什么物联网这么适合 Eclipse
在物联网发展趋势中的一个方面就是关于构建模块来解决商业问题,而不是跨越行业和公司的大物联网平台。 Eclipse 关于物联网的研究放在一系列模块栈、提供特定和大众化需求功能的项目上,还有就是指定目标所需的可捆绑式中间件、网关和协议组件上。
--------------------------------------------------------------------------------
@ -46,15 +53,15 @@ Ian 说 [Eclipse hawkBit][13] ,软件更新管理的软件,是一项“非
作者简介:
Gordon Haff - Gordon Haff 是红帽公司的云营销员,经常在消费者和工业会议上讲话,并且帮助发展红帽全办公云解决方案。他是 计算机前言:云如何如何打开众多出版社未来之门 的作者。在红帽之前, Gordon 写了成百上千的研究报告,经常被引用到公众刊物上,像纽约时报关于 IT 的议题和产品建议等……
Gordon Haff - Gordon Haff 是红帽公司的云专家,经常在消费者和行业会议上讲话,并且帮助发展红帽全面云化解决方案。他是《计算机前沿:云如何打开众多出版社未来之门》的作者。在红帽之前, Gordon 写了成百上千的研究报告,经常被引用到公众刊物上,像纽约时报关于 IT 的议题和产品建议等……
--------------------------------------------------------------------------------
转自 https://opensource.com/article/17/10/eclipse-and-iot
via https://opensource.com/article/17/10/eclipse-and-iot
作者:[Gordon Haff ][a]
作者:[Gordon Haff][a]
译者:[smartgrids](https://github.com/smartgrids)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,20 +1,19 @@
归档仓库
如何归档 GitHub 仓库
====================
因为仓库不再活跃开发或者你不想接受额外的贡献并不意味着你想要删除它。现在在 Github 上归档仓库让它变成只读。
仓库不再活跃开发,或者你不想再接受额外的贡献,都并不意味着你想要删除它。现在,可以在 Github 上归档仓库,让它变成只读。
[![archived repository banner](https://user-images.githubusercontent.com/7321362/32558403-450458dc-c46a-11e7-96f9-af31d2206acb.png)][1]
归档一个仓库让它对所有人只读(包括仓库拥有者)。这包括编辑仓库、问题、合并请求、标记、里程碑、维基、发布、提交、标签、分支、反馈和评论。没有人可以在一个归档的仓库上创建新的问题、合并请求或者评论,但是你仍可以 fork 仓库-允许归档的仓库在其他地方继续开发。
归档一个仓库让它对所有人只读(包括仓库拥有者)。这包括对仓库的编辑、<ruby>问题<rt>issue</rt></ruby><ruby>合并请求<rt>pull request</rt></ruby>PR、标记、里程碑、项目、维基、发布、提交、标签、分支、反馈和评论。谁都不可以在一个归档的仓库上创建新的问题、合并请求或者评论,但是你仍可以 fork 仓库——以允许归档的仓库在其它地方继续开发。
要归档一个仓库,进入仓库设置页面并点在这个仓库上点击归档。
要归档一个仓库,进入仓库的设置页面,并点击“<ruby>归档该仓库<rt>Archive this repository</rt></ruby>”。
[![archive repository button](https://user-images.githubusercontent.com/125011/32273119-0fc5571e-bef9-11e7-9909-d137268a1d6d.png)][2]
在归档你的仓库前,确保你已经更改了它的设置并考虑关闭所有的开放问题和合并请求。你还应该更新你的 README 和描述来让它让访问者了解他不再能够贡献。
在归档你的仓库前,确保你已经更改了它的设置,并考虑关闭所有的开放问题和合并请求。你还应该更新你的 README 和描述,让访问者了解已不再能够对之贡献。
如果你改变了主意想要解除归档你的仓库,在相同的地方点击解除归档。请注意大多数归档仓库的设置是隐藏的,并且你需要解除归档来改变它们。
如果你改变了主意想要解除归档你的仓库,在相同的地方点击<ruby>解除归档该仓库<rt>Unarchive this repository</rt></ruby>”。请注意归档仓库的大多数设置是隐藏的,你需要解除归档才能改变它们。
[![archived labelled repository](https://user-images.githubusercontent.com/125011/32541128-9d67a064-c466-11e7-857e-3834054ba3c9.png)][3]
@ -24,9 +23,9 @@
via: https://github.com/blog/2460-archiving-repositories
作者:[MikeMcQuaid ][a]
作者:[MikeMcQuaid][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,19 +1,18 @@
Glitch立即写出有趣的小型网站项目
Glitch可以让你立即写出有趣的小型网站
============================================================
我刚写了一篇关于 Jupyter Notebooks 是一个有趣的交互式写 Python 代码的方式。这让我想起我最近学习了 Glitch这个我同样喜爱我构建了一个小的程序来用于[关闭转发 twitter][2]。因此有了这篇文章!
我刚写了一篇关于 Jupyter Notebooks 的文章,它是一个有趣的交互式写 Python 代码的方式。这让我想起我最近学习了 Glitch对它我同样喜爱我构建了一个小的程序来用于[关闭 twitter 的转推][2]。因此有了这篇文章!
[Glitch][3] 是一个简单的构建 Javascript web 程序的方式javascript 后端、javascript 前端)
[Glitch][3] 是一个简单的构建 Javascript web 程序的方式javascript 后端、javascript 前端)
关于 glitch 有趣的有:
关于 glitch 有趣的地方有:
1. 你在他们的网站输入 Javascript 代码
2. 只要输入了任何代码,它会自动用你的新代码重载你的网站。你甚至不必保存!它会自动保存。
所以这就像 Heroku但更神奇像这样的编码你输入代码代码立即在公共网络上运行对我而言感觉很**有趣**。
这有点像 ssh 登录服务器,编辑服务器上的 PHP/HTML 代码,并让它立即可用,这也是我所喜爱的。现在我们有了“更好的部署实践”,而不是“编辑代码,它立即出现在互联网上”,但我们并不是在谈论严肃的开发实践,而是在讨论编写微型程序的乐趣。
这有点像 ssh 登录服务器,编辑服务器上的 PHP/HTML 代码,它立即可用这也是我所喜爱的方式虽然现在我们有了“更好的部署实践”,而不是“编辑代码,它立即出现在互联网上”,但我们并不是在谈论严肃的开发实践,而是在讨论编写微型程序的乐趣。
### Glitch 有很棒的示例应用程序
@ -22,18 +21,16 @@ Glitch 似乎是学习编程的好方式!
比如,这有一个太空侵略者游戏(由 [Mary Rose Cook][4] 编写):[https://space-invaders.glitch.me/][5]。我喜欢的是我只需要点击几下。
1. 点击 “remix this”
2. 开始编辑代码使箱子变成橘色而不是黑色
3. 制作我自己太空侵略者游戏!我的在这:[http://julias-space-invaders.glitch.me/][1]。(我只做了很小的更改使其变成橘色,没什么神奇的)
他们有大量的示例程序,你可以从中启动 - 例如[机器人][6]、[游戏][7]等等。
### 实际有用的非常好的程序tweetstorms
我学习 Glitch 的方式是从这个程序:[https://tweetstorms.glitch.me/][8],它会向你展示给定用户的 tweetstorm
我学习 Glitch 的方式是从这个程序开始的:[https://tweetstorms.glitch.me/][8]它会向你展示给定用户的推特风暴tweetstorm
比如,你可以在 [https://tweetstorms.glitch.me/sarahmei][10] 看到 [@sarahmei][9] 的 tweetstorm(她发布了很多好的 tweetstorm
比如,你可以在 [https://tweetstorms.glitch.me/sarahmei][10] 看到 [@sarahmei][9] 的推特风暴(她发布了很多好的 tweetstorm
### 我的 Glitch 程序: 关闭转推
@ -41,11 +38,11 @@ Glitch 似乎是学习编程的好方式!
我喜欢我不必设置一个本地开发环境,我可以直接开始输入然后开始!
Glitch 只支持 Javascript我不非常了解 Javascript我之前从没写过一个 Node 程序),所以代码不是很好。但是编写它很愉快 - 能够输入并立即看到我的代码运行是令人愉快的。这是我的项目:[https://turn-off-retweets.glitch.me/][11]。
Glitch 只支持 Javascript我不非常了解 Javascript我之前从没写过一个 Node 程序),所以代码不是很好。但是编写它很愉快 - 能够输入并立即看到我的代码运行是令人愉快的。这是我的项目:[https://turn-off-retweets.glitch.me/][11]。
### 就是这些!
使用 Glitch 感觉真的很有趣和民主。通常情况下,如果我想 fork 某人的 Web 项目,并做出更改,我不会这样做 - 我必须 fork找一个托管设置本地开发环境或者 Heroku 或其他,安装依赖项等。我认为像安装 node.js 依赖关系这样的任务过去很有趣,就像“我正在学习新东西很酷”,现在我觉得它们很乏味。
使用 Glitch 感觉真的很有趣和民主。通常情况下,如果我想 fork 某人的 Web 项目,并做出更改,我不会这样做 - 我必须 fork找一个托管设置本地开发环境或者 Heroku 或其他,安装依赖项等。我认为像安装 node.js 依赖关系这样的任务过去很有趣,就像“我正在学习新东西很酷”,现在我觉得它们很乏味。
所以我喜欢只需点击 “remix this!” 并立即在互联网上能有我的版本。
@ -53,9 +50,9 @@ Glitch 只支持 Javascript我不非常了解 Javascript我之前从没写
via: https://jvns.ca/blog/2017/11/13/glitch--write-small-web-projects-easily/
作者:[Julia Evans ][a]
作者:[Julia Evans][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,27 +1,22 @@
# LibreOffice 现在在 Flatpak 的 Flathub 应用商店提供
LibreOffice 上架 Flathub 应用商店
===============
![LibreOffice on Flathub](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/libroffice-on-flathub-750x250.jpeg)
LibreOffice 现在可以从集中化的 Flatpak 应用商店 [Flathub][3] 进行安装。
> LibreOffice 现在可以从集中化的 Flatpak 应用商店 [Flathub][3] 进行安装。
它的到来使任何运行现代 Linux 发行版的人都能只点击一两次安装 LibreOffice 的最新稳定版本,而无需搜索 PPA纠缠 tar 包或等待发行商将其打包。
它的到来使任何运行现代 Linux 发行版的人都能只点击一两次即可安装 LibreOffice 的最新稳定版本,而无需搜索 PPA纠缠于 tar 包或等待发行版将其打包。
自去年 8 月份以来,[LibreOffice Flatpak][5] 已经可供用户下载和安装 [LibreOffice 5.2][6]
自去年 8 月份 [LibreOffice 5.2][6] 发布以来,[LibreOffice Flatpak][5] 已经可供用户下载和安装。
这里“新”的是发行方法。文档基金会选择使用 Flathub 而不是专门的服务器来发布更新。
这里“新”的是发行方法。<ruby>文档基金会<rt>Document Foundation</rt></ruby>选择使用 Flathub 而不是专门的服务器来发布更新。
这对于终端用户来说是一个_很好_的消息因为这意味着不需要在新安装时担心仓库但对于 Flatpak 的倡议者来说也是一个好消息LibreOffice 是开源软件最流行的生产力套件。它对格式和应用商店的支持肯定会受到热烈的欢迎。
这对于终端用户来说是一个_很好_的消息因为这意味着不需要在新安装时担心仓库但对于 Flatpak 的倡议者来说也是一个好消息LibreOffice 是开源软件最流行的生产力套件。它对格式和应用商店的支持肯定会受到热烈的欢迎。
在撰写本文时,你可以从 Flathub 安装 LibreOffice 5.4.2。新的稳定版本将在发布时添加。
### 在 Ubuntu 上启用 Flathub
![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/flathub-750x495.png)
Fedora、Arch 和 Linux Mint 18.3 用户已经安装了 Flatpak随时可以开箱即用。Mint 甚至预启用了 Flathub remote。
[从 Flathub 安装 LibreOffice][7]
要在 Ubuntu 上启动并运行 Flatpak首先必须安装它
```
@ -34,17 +29,25 @@ sudo apt install flatpak gnome-software-plugin-flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
这就行了。只需注销并返回(以便 Ubuntu Software 刷新其缓存),之后你应该能够通过 Ubuntu Software 看到 Flathub 上的任何 Flatpak 程序了。
这就行了。只需注销并重新登录(以便 Ubuntu Software 刷新其缓存),之后你应该能够通过 Ubuntu Software 看到 Flathub 上的任何 Flatpak 程序了。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2017/11/flathub-750x495.png)
*Fedora、Arch 和 Linux Mint 18.3 用户已经安装了 Flatpak随时可以开箱即用。Mint 甚至预启用了 Flathub remote。*
在本例中,搜索 “LibreOffice” 并在结果中找到下面有 Flathub 提示的那一项。请记住Ubuntu 已经调整了客户端,将 Snap 程序显示在最上面,所以你可能需要向下滚动列表才能看到它)。
### 从 Flathub 安装 LibreOffice
- [从 Flathub 安装 LibreOffice][7]
从 flatpakref 中[安装 Flatpak 程序有一个 bug][8],所以如果上面的方法不起作用,你也可以使用命令行从 Flathub 中安装 Flathub 程序。
Flathub 网站列出了安装每个程序所需的命令。切换到“命令行”选项卡来查看它们。
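例如(这是补充的示例LibreOffice 在 Flathub 上的应用 ID 是 `org.libreoffice.LibreOffice`,具体命令请以 Flathub 页面上列出的为准:

```
flatpak install flathub org.libreoffice.LibreOffice
flatpak run org.libreoffice.LibreOffice
```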
#### Flathub 上更多的应用
### Flathub 上更多的应用
如果你经常看这个网站,你就会知道我喜欢 Flathub。这是我最喜欢的一些应用Corebird、Parlatype、GNOME MPV、Peek、Audacity、GIMP 等)的家园。我无需折衷就能获得这些应用程序的最新,稳定版本(加上它们需要的所有依赖)。
如果你经常看这个网站,你就会知道我喜欢 Flathub。这是我最喜欢的一些应用Corebird、Parlatype、GNOME MPV、Peek、Audacity、GIMP 等)的家园。我无需等待就能获得这些应用程序的最新、稳定版本(加上它们需要的所有依赖)。
而且,在我于 twitter 上发布消息一周左右后,大多数 Flatpak 应用现在看起来都有了很棒的 GTK 主题,不再需要[临时方案][9]了!
@ -52,9 +55,9 @@ Flathub 网站列出了安装每个程序所需的命令。切换到“命令行
via: http://www.omgubuntu.co.uk/2017/11/libreoffice-now-available-flathub-flatpak-app-store
作者:[ JOEY SNEDDON ][a]
作者:[JOEY SNEDDON][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,27 +3,27 @@ AWS 帮助构建 ONNX 开源 AI 平台
![onnx-open-source-ai-platform](https://www.linuxinsider.com/article_images/story_graphics_xlarge/xl-2017-onnx-1.jpg)
AWS 已经成为最近加入深度学习社区的开放神经网络交换ONNX协作的最新技术公司最近在无摩擦和可互操作的环境中推出了高级人工智能。由 Facebook 和微软领头。
AWS 最近成为了加入深度学习社区的<ruby>开放神经网络交换<rt>Open Neural Network Exchange</rt></ruby>ONNX协作的技术公司最近在<ruby>无障碍和可互操作<rt>frictionless and interoperable</rt></ruby>的环境中推出了高级人工智能。由 Facebook 和微软领头了该协作
作为该合作的一部分AWS 将其开源 Python 软件包 ONNX-MxNet 作为一个深度学习框架提供,该框架提供跨多种语言的编程接口,包括 Python、Scala 和开源统计软件 R。
作为该合作的一部分AWS 开源其深度学习框架 Python 软件包 ONNX-MXNet该框架提供了跨多种语言的编程接口API,包括 Python、Scala 和开源统计软件 R。
AWS 深度学习工程经理 Hagay Lupesko 和软件开发人员 Roshani Nagmote 上周在一篇帖子中写道ONNX 格式将帮助开发人员构建和训练其他框架的模型,包括 PyTorch、Microsoft Cognitive Toolkit 或 Caffe2。它可以让开发人员将这些模型导入 MXNet并运行它们进行推理。
AWS 深度学习工程经理 Hagay Lupesko 和软件开发人员 Roshani Nagmote 上周在一篇帖子中写道ONNX 格式将帮助开发人员构建和训练其它框架的模型,包括 PyTorch、Microsoft Cognitive Toolkit 或 Caffe2。它可以让开发人员将这些模型导入 MXNet并运行它们进行推理。
### 对开发者的帮助
今年夏天Facebook 和微软推出了 ONNX以支持共享模式的互操作性来促进 AI 的发展。微软提交了其 Cognitive Toolkit、Caffe2 和 PyTorch 来支持 ONNX。
微软表示Cognitive Toolkit 和其他框架使开发人员更容易构建和运行代表神经网络的计算图
微软表示Cognitive Toolkit 和其他框架使开发人员更容易构建和运行计算图以表达神经网络
Github 上提供了[ ONNX 代码和文档][4]的初始版本。
[ONNX 代码和文档][4]的初始版本已经放到了 Github
AWS 和微软上个月宣布了在 Apache MXNet 上的一个新 Gluon 接口计划,该计划允许开发人员构建和训练深度学习模型。
[Tractica][5] 的研究总监 Aditya Kaul 观察到“Gluon 是他们与 Google 的 Tensorflow 竞争的合作伙伴关系的延伸”。
[Tractica][5] 的研究总监 Aditya Kaul 观察到“Gluon 是他们试图与 Google 的 Tensorflow 竞争的合作伙伴关系的延伸”。
他告诉 LinuxInsider“谷歌在这点上的疏忽是非常明显的但也说明了他们在市场上的主导地位。
他告诉 LinuxInsider“谷歌在这点上的疏忽是非常明显的但也说明了他们在市场上的主导地位。
Kaul 说:“甚至 Tensorflow 是开源的,所以开源在这里并不是什么大事,但这归结到底是其他生态系统联手与谷歌竞争。”
Kaul 说:“甚至 Tensorflow 是开源的,所以开源在这里并不是什么大事,但这归结到底是其他生态系统联手与谷歌竞争。”
根据 AWS 的说法本月早些时候Apache MXNet 社区推出了 MXNet 的 0.12 版本,它扩展了 Gluon 的功能,以便进行新的尖端研究。它的新功能之一是变分 dropout它允许开发人员使用 dropout 技术来缓解递归神经网络中的过拟合。
@ -52,15 +52,15 @@ Tractica 的 Kaul 指出:“框架互操作性是一件好事,这会帮助
越来越多的大型科技公司已经宣布使用开源技术来加快 AI 协作开发的计划,以便创建更加统一的开发和研究平台。
AT&T 几周前宣布了与 TechMahindra 和 Linux 基金会合作[推出 Acumos 项目][8]的计划。该平台旨在开拓电信、媒体和技术方面的合作。
--------------------------------------------------------------------------------
via: https://www.linuxinsider.com/story/AWS-to-Help-Build-ONNX-Open-Source-AI-Platform-84971.html
作者:[ David Jones ][a]
作者:[David Jones][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,127 @@
Suplemon带有多光标支持的现代 CLI 文本编辑器
======
Suplemon 是一个现代的 CLI 文本编辑器,它模拟 [Sublime Text][1] 的多光标行为和其它特性。它是轻量级的,非常易于使用,就像 Nano 一样。
使用 CLI 编辑器的好处之一是,无论你使用的 Linux 发行版是否有 GUI你都可以使用它。这种文本编辑器也很简单、快速和强大。
你可以在其[官方仓库][2]中找到有用的信息和源代码。
### 功能
下面是它的一些有趣的功能:
* 多光标支持
* 撤销/重做
* 复制和粘贴,带有多行支持
* 鼠标支持
* 扩展
* 查找、查找所有、查找下一个
* 语法高亮
* 自动完成
* 自定义键盘快捷键
### 安装
首先,确保安装了最新版本的 python3 和 pip3。
然后在终端输入:
```
$ sudo pip3 install suplemon
```
### 使用
#### 在当前目录中创建一个新文件
打开一个终端并输入:
```
$ suplemon
```
你将看到如下:
![suplemon new file](https://linoxide.com/wp-content/uploads/2017/11/suplemon-new-file.png)
#### 打开一个或多个文件
打开一个终端并输入:
```
$ suplemon <filename1> <filename2> ... <filenameN>
```
例如:
```
$ suplemon example1.c example2.c
```
### 主要配置
你可以在 `~/.config/suplemon/suplemon-config.json` 找到配置文件。
编辑这个文件很简单,你只需要进入命令模式(进入 suplemon 后)并运行 `config` 命令。你可以通过运行 `config defaults` 来查看默认配置。
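例如(补充示例),你也可以直接用 suplemon 本身打开这个配置文件来编辑:

```
$ suplemon ~/.config/suplemon/suplemon-config.json
```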
#### 键盘映射配置
我会展示 suplemon 的默认键映射。如果你想编辑它们,只需运行 `keymap` 命令。运行 `keymap default` 来查看默认的键盘映射文件。
| 操作 | 快捷键 |
| ---- | ---- |
| 退出| `Ctrl + Q`|
| 复制行到缓冲区|`Ctrl + C`|
| 剪切行到缓冲区| `Ctrl + X`|
| 插入缓冲区内容| `Ctrl + V`|
| 复制行| `Ctrl + K`|
| 跳转| `Ctrl + G`。 你可以跳转到一行或一个文件(只需键入一个文件名的开头)。另外,可以输入类似于 `exam:50` 跳转到 `example.c``50` 行。|
| 用字符串或正则表达式搜索| `Ctrl + F`|
| 搜索下一个| `Ctrl + D`|
| 去除空格| `Ctrl + T`|
| 在箭头方向添加新的光标| `Alt + 方向键`|
| 跳转到上一个或下一个单词或行| `Ctrl + 左/右`|
| 恢复到单光标/取消输入提示| `Esc`|
| 向上/向下移动行| `Page Up` / `Page Down`|
| 保存文件|`Ctrl + S`|
| 用新名称保存文件|`F1`|
| 重新载入当前文件|`F2`|
| 打开文件|`Ctrl + O`|
| 关闭文件|`Ctrl + W`|
| 切换到下一个/上一个文件|`Ctrl + Page Up` / `Ctrl + Page Down`|
| 运行一个命令|`Ctrl + E`|
| 撤消|`Ctrl + Z`|
| 重做|`Ctrl + Y`|
| 切换空白字符的显示|`F7`|
| 切换鼠标模式|`F8`|
| 显示行号|`F9`|
| 切换全屏|`F11`|
#### 鼠标快捷键
* 将光标置于指针位置:左键单击
* 在指针位置添加一个光标:右键单击
* 垂直滚动:向上/向下滚动滚轮
### 总结
在尝试 Suplemon 一段时间后,我改变了对 CLI 文本编辑器的看法。我以前曾经尝试过 Nano是的我喜欢它的简单性但是它缺乏现代特性这使它在日常使用中不够实用。
这个工具兼具 CLI 和 GUI 世界两者的优点……既简单又功能丰富!所以我建议你试试看,并在评论中写下你的想法 :-)
--------------------------------------------------------------------------------
via: https://linoxide.com/tools/suplemon-cli-text-editor-multi-cursor/
作者:[Ivo Ursino][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://linoxide.com/author/ursinov/
[1]:https://linoxide.com/tools/install-sublime-text-editor-linux/
[2]:https://github.com/richrd/suplemon/

View File

@ -0,0 +1,161 @@
在 Ubuntu 16.04 下随机化你的 WiFi MAC 地址
============================================================
> 你的设备的 MAC 地址可以在不同的 WiFi 网络中记录你的活动。这些信息能被共享后出售,用于识别特定的个体。但可以用随机生成的伪 MAC 地址来阻止这一行为。
![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif)
_Image courtesy of [Cloudessa][4]_
每一个诸如 WiFi 或者以太网卡这样的网络设备,都有一个叫做 MAC 地址的唯一标识符,如:`b4:b6:76:31:8c:ff`。这就是你能上网的原因:每当你连上 WiFi路由器就会用这一地址来向你发送和接收数据并且用它来区别你和这一网络的其它设备。
这一设计的缺陷在于唯一性,不变的 MAC 地址正好可以用来追踪你。连上了星巴克的 WiFi? 好,注意到了。在伦敦的地铁上? 也记录下来。
如果你曾经在某一个 WiFi 验证页面上输入过你的真实姓名,你就已经把自己和这一 MAC 地址建立了联系。没有仔细阅读过许可服务条款?那你可以认为,机场的免费 WiFi 正是通过出售所谓的“顾客分析数据”(你的个人信息)获利的。出售的对象包括酒店、餐饮业,和任何想要了解你的人。
我不想信息被记录,再出售给多家公司,所以我花了几个小时想出了一个解决方案。
### MAC 地址不一定总是不变的
幸运的是,在不断开网络的情况下,是可以随机生成一个伪 MAC 地址的。
我想随机生成我的 MAC 地址,但是有三个要求:
1. MAC 地址在不同网络中是不相同的。这意味着,我在星巴克和在伦敦地铁网络中的 MAC 地址是不相同的,这样不同的服务提供商就无法将我的活动联系起来。
2. MAC 地址需要经常更换,这样在网络上就没人知道我就是去年在这儿经过了 75 次的那个人。
3. MAC 地址一天之内应该保持不变。当 MAC 地址更改时,大多数网络都会与你断开连接,然后必须得进入验证页面再次登录,这很烦人。
### 操作<ruby>网络管理器<rt>NetworkManager</rt></ruby>
我第一次尝试用一个叫做 `macchanger` 的工具,但是失败了。因为<ruby>网络管理器<rt>NetworkManager</rt></ruby>会根据它自己的设置恢复默认的 MAC 地址。
我了解到,网络管理器 1.4.1 以上版本可以自动生成随机的 MAC 地址。如果你在使用 Ubuntu 17.04 版本,你可以根据[这一配置文件][7]实现这一目的。但这并不能完全符合我的三个要求(你必须在<ruby>随机<rt>random</rt></ruby><ruby>稳定<rt>stable</rt></ruby>这两个选项之中选择一个,但没有一天之内保持不变这一选项)
因为我使用的是 Ubuntu 16.04,网络管理器版本为 1.2,不能直接使用高版本的这一新功能。网络管理器 1.2 可能对随机化有某些支持,但我没能成功使用。所以我编了一个脚本来实现这一目标。
幸运的是,网络管理器 1.2 允许模拟 MAC 地址。你在已连接的网络中可以看见 ‘编辑连接’ 这一选项:
![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png)
网络管理器也支持钩子处理 —— 任何位于 `/etc/NetworkManager/dispatcher.d/pre-up.d/` 的脚本在建立网络连接之前都会被执行。
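下面是一种合理的安装方式补充内容原文未给出具体步骤假设脚本文件名为后文的“randomize-mac-addresses”NetworkManager 通常要求这类脚本属于 root 且具有可执行权限:

```
sudo install -o root -g root -m 755 randomize-mac-addresses \
 /etc/NetworkManager/dispatcher.d/pre-up.d/
```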
### 分配随机生成的伪 MAC 地址
我想根据网络 ID 和日期来生成新的随机 MAC 地址。 我们可以使用网络管理器的命令行工具 nmcli 来显示所有可用网络:
```
> nmcli connection
NAME UUID TYPE DEVICE
Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0
DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless --
PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless --
virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless --
```
因为每个网络都有一个唯一标识符UUID为了实现我的计划我将 UUID 和日期拼接在一起,然后使用 MD5 生成 hash 值:
```
# eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03
> echo -n "${UUID}-$(date +%F)" | md5sum
53594de990e92f9b914a723208f22b3f -
```
生成的结果可以用作 MAC 地址除首字节之外的其余五个字节(即十六进制的前十个字符)。
值得注意的是,最开始的字节 `02` 代表这个地址是[自行指定][8]的。实际上,真实 MAC 地址的前三个字节是由制造商决定的,例如 `b4:b6:76` 就代表 Intel。
有可能某些路由器会拒绝自己指定的 MAC 地址,但是我还没有遇到过这种情况。
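用一个小演示(补充内容,使用的是上文的示例数值)来说明哈希是如何变成 MAC 地址的:取哈希的前十个十六进制字符,每两个字符一组用冒号隔开,再加上 `02:` 前缀(这里的 `sed` 写法与后文脚本等价):

```
> echo -n "618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03" | md5sum | cut -c1-10
53594de990
> echo "02:$(echo -n 53594de990 | sed 's/../&:/g;s/:$//')"
02:53:59:4d:e9:90
```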
每次连接到一个网络,这一脚本都会用 `nmcli` 来指定一个随机生成的伪 MAC 地址:
![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png)
最后,我查看了 `ifconfig` 的输出结果,我发现 MAC 地址 `HWaddr` 已经变成了随机生成的地址(模拟 Intel 的),而不是我真实的 MAC 地址。
```
> ifconfig
wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d
inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0
TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB)
```
### 脚本
完整的脚本也可以[在 Github 上查看][9]。
```
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
# Configure every saved WiFi connection in NetworkManager with a spoofed MAC
# address, seeded from the UUID of the connection and the date eg:
# 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20'
# This makes your MAC impossible(?) to track across WiFi providers, and
# for one provider to track across days.
# For craptive portals that authenticate based on MAC, you might want to
# automate logging in :)
# Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar
# automatically.
export PATH=$PATH:/usr/bin:/bin
LOG_FILE=/var/log/randomize-mac-addresses
echo "$(date): $*" > ${LOG_FILE}
WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3)
for UUID in ${WIFI_UUIDS}
do
UUID_DAILY_HASH=$(echo "${UUID}-$(date +%F)" | md5sum)
RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')"
CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}"
echo "$CMD" >> ${LOG_FILE}
$CMD &
done
wait
```
_更新[使用自己指定的 MAC 地址][5]可以避免和真正的 intel 地址冲突。感谢 [@_fink][6]_
---------------------------------------------------------------------------------
via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/
作者:[Paul M Furley][a]
译者:[wenwensnow](https://github.com/wenwensnow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.paulfurley.com/
[1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abcarandomize-mac-addresses
[2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses
[3]:https://github.com/
[4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/
[5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef
[6]:https://twitter.com/fink_/status/937305600005943296
[7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/
[8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local
[9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f

View File

@ -1,21 +1,23 @@
如何获知一个命令或程序在执行前将会做什么
如何在执行一个命令或程序之前就了解它会做什么
======
有没有想过一个 Unix 命令在执行前将会干些什么呢?并不是每个人都会知道一个特定的命令或者程序将会做什么。当然,你可以用 [Explainshell][2] 来查看它。你可以在 Explainshell 网站中粘贴你的命令,然后它可以让你了解命令的每个部分做了什么。但是,这是没有必要的。现在,我们从终端就可以轻易地知道一个命令或者程序在执行前会做什么。 `maybe` ,一个简单的工具,它允许你运行一条命令并可以查看此命令对你的文件系统做了什么而实际上这条命令却并未执行!在查看 `maybe` 的输出列表后,你可以决定是否真的想要运行这条命令。
有没有想过在执行一个 Unix 命令前就知道它干些什么呢?并不是每个人都会知道一个特定的命令或者程序将会做什么。当然,你可以用 [Explainshell][2] 来查看它。你可以在 Explainshell 网站中粘贴你的命令,然后它可以让你了解命令的每个部分做了什么。但是,这是没有必要的。现在,我们从终端就可以轻易地在执行一个命令或者程序前就知道它会做什么。 `maybe` ,一个简单的工具,它允许你运行一条命令并可以查看此命令对你的文件做了什么而实际上这条命令却并未执行!在查看 `maybe` 的输出列表后,你可以决定是否真的想要运行这条命令。
#### “maybe”是如何工作的
![](https://www.ostechnix.com/wp-content/uploads/2017/12/maybe-2-720x340.png)
根据开发者的介绍
### `maybe` 是如何工作的
> `maybe` 利用 `python-ptrace` 库运行了一个在 `ptrace` 控制下的进程。当它截取到一个即将更改文件系统的系统调用时,它会记录该调用,然后修改 CPU 寄存器,将这个调用重定向到一个无效的系统调用 ID将其变成一个无效操作no-op并将这个无效操作no-op的返回值设置为有效操作的返回值。结果这个进程认为它所做的一切都发生了实际上什么都没有改变。
根据开发者的介绍:
警告: 在生产环境或者任何你所关心的系统里面使用这个工具时都应该小心。它仍然可能造成严重的损失,因为它只能阻止少数系统调用
> `maybe` 利用 `python-ptrace` 库在 `ptrace` 控制下运行了一个进程。当它截取到一个即将更改文件系统的系统调用时,它会记录该调用,然后修改 CPU 寄存器,将这个调用重定向到一个无效的系统调用 ID效果上将其变成一个无效操作no-op并将这个无效操作no-op的返回值设置为有效操作的返回值。结果这个进程认为它所做的一切都发生了实际上什么都没有改变
#### 安装 “maybe”
警告:在生产环境或者任何你所关心的系统里面使用这个工具时都应该小心。它仍然可能造成严重的损失,因为它只能阻止少数系统调用。
#### 安装 `maybe`
确保你已经在你的 Linux 系统中已经安装了 `pip` 。如果没有,可以根据您使用的发行版,按照如下指示进行安装。
在 Arch Linux 及其衍生产品(如 AntergosManjaro Linux使用以下命令安装 `pip`
在 Arch Linux 及其衍生产品(如 Antergos、Manjaro Linux使用以下命令安装 `pip`
```
sudo pacman -S python-pip
@ -25,8 +27,6 @@ sudo pacman -S python-pip
```
sudo yum install epel-release
```
```
sudo yum install python-pip
```
@ -34,8 +34,6 @@ sudo yum install python-pip
```
sudo dnf install epel-release
```
```
sudo dnf install python-pip
```
@ -45,19 +43,19 @@ sudo dnf install python-pip
sudo apt-get install python-pip
```
在 SUSE, openSUSE 上:
在 SUSE、 openSUSE 上:
```
sudo zypper install python-pip
```
安装 `pip` 后,运行以下命令安装 `maybe`
安装 `pip` 后,运行以下命令安装 `maybe`
```
sudo pip install maybe
```
#### 了解一个命令或程序在执行前会做什么
### 了解一个命令或程序在执行前会做什么
用法是非常简单的!只要在要执行的命令前加上 `maybe` 即可。
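例如(补充示例,也就是下文输出所对应的命令),在一个测试目录上预演删除操作:

```
$ maybe rm -r ostechnix/
```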
@ -83,8 +81,7 @@ Do you want to rerun rm -r ostechnix/ and permit these operations? [y/N] y
[![](http://www.ostechnix.com/wp-content/uploads/2017/12/maybe-1.png)][3]
`maybe` 执行 5 个文件系统操作并向我显示该命令rm -r ostechnix /)究竟会做什么。现在我可以决定是否应该执行这个操作。是不是很酷呢?确实很酷!
`maybe` 执行了 5 个文件系统操作,并向我显示该命令(`rm -r ostechnix/`)究竟会做什么。现在我可以决定是否应该执行这个操作。是不是很酷呢?确实很酷!
这是另一个例子。我要为 Gmail 安装 Inboxer 桌面客户端。这是我得到的输出:
@ -122,9 +119,9 @@ maybe has not detected any file system operations from sudo pacman -Syu.
Cheers!
资源:
资源
* [“maybe” GitHub page][1]
* [`maybe` GitHub 主页][1]
--------------------------------------------------------------------------------
@ -132,7 +129,7 @@ via: https://www.ostechnix.com/know-command-program-will-exactly-executing/
作者:[SK][a]
译者:[imquanquan](https://github.com/imquanquan)
校对:[校对ID](https://github.com/校对ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,26 +1,25 @@
NETSTAT 命令: 通过案例学习使用 netstate
通过示例学习使用 netstat
======
Netstat 是一个告诉我们系统中所有 tcp/udp/unix socket 连接状态的命令行工具。它会列出所有已经连接或者等待连接状态的连接。 该工具在识别某个应用监听哪个端口时特别有用,我们也能用它来判断某个应用是否正常的在监听某个端口。
Netstat 命令还能显示其他各种各样的网络相关信息,例如路由表, 网卡统计信息, 虚假连接以及多播成员等
netstat 是一个告诉我们系统中所有 tcp/udp/unix socket 连接状态的命令行工具。它会列出所有已经连接或者处于等待连接状态的连接。 该工具在识别某个应用监听哪个端口时特别有用,我们也能用它来判断某个应用是否正常地在监听某个端口。
本文中,我们会通过几个例子来学习 Netstat
netstat 命令还能显示其它各种各样的网络相关信息,例如路由表、网卡统计信息、伪装连接以及多播成员等。
(推荐阅读: [Learn to use CURL command with examples][1] )
本文中,我们会通过几个例子来学习 netstat。
Netstat with examples
============================================================
(推荐阅读: [通过示例学习使用 CURL 命令][1] )
### 1- 检查所有的连接
### 1 - 检查所有的连接
使用 `a` 选项可以列出系统中的所有连接,
```shell
$ netstat -a
```
这会显示系统所有的 tcpudp 以及 unix 连接。
这会显示系统所有的 tcpudp 以及 unix 连接。
### 2- 检查所有的 tcp/udp/unix socket 连接
### 2 - 检查所有的 tcp/udp/unix socket 连接
使用 `t` 选项只列出 tcp 连接,
@ -28,19 +27,19 @@ $ netstat -a
$ netstat -at
```
类似的,使用 `u` 选项只列出 udp 连接 to list out only the udp connections on our system we can use u option with netstat
类似的,使用 `u` 选项只列出 udp 连接,
```shell
$ netstat -au
```
使用 `x` 选项只列出 Unix socket 连接,we can use x options
使用 `x` 选项只列出 Unix socket 连接,
```shell
$ netstat -ax
```
### 3- 同时列出进程 ID/进程名称
### 3 - 同时列出进程 ID/进程名称
使用 `p` 选项可以在列出连接的同时也显示 PID 或者进程名称,而且它还能与其他选项连用,
@ -48,15 +47,15 @@ $ netstat -ax
$ netstat -ap
```
### 4- 列出端口号而不是服务名
### 4 - 列出端口号而不是服务名
使用 `n` 选项可以加快输出,它不会执行任何反向查询(译者注:这里原文说的是 "it will perform any reverse lookup",应该是写错了),而是直接输出数字。 由于无需查询,因此结果输出会快很多。
使用 `n` 选项可以加快输出,它不会执行任何反向查询(LCTT 译注:这里原文有误),而是直接输出数字。 由于无需查询,因此结果输出会快很多。
```shell
$ netstat -an
```
### 5- 只输出监听端口
### 5 - 只输出监听端口
使用 `l` 选项只输出监听端口。它不能与 `a` 选项连用,因为 `a` 会输出所有端口,
@ -64,15 +63,15 @@ $ netstat -an
$ netstat -l
```
### 6- 输出网络状态
### 6 - 输出网络状态
使用 `s` 选项输出每个协议的统计信息,包括接收/发送的包数量
使用 `s` 选项输出每个协议的统计信息,包括接收/发送的包数量
```shell
$ netstat -s
```
### 7- 输出网卡状态
### 7 - 输出网卡状态
使用 `I` 选项只显示网卡的统计信息,
@ -80,7 +79,7 @@ $ netstat -s
$ netstat -i
```
### 8- 显示多播组(multicast group)信息
### 8 - 显示<ruby>多播组<rt>multicast group</rt></ruby>信息
使用 `g` 选项输出 IPV4 以及 IPV6 的多播组信息,
@ -88,7 +87,7 @@ $ netstat -i
$ netstat -g
```
### 9- 显示网络路由信息
### 9 - 显示网络路由信息
使用 `r` 输出网络路由信息,
@ -96,7 +95,7 @@ $ netstat -g
$ netstat -r
```
### 10- 持续输出
### 10 - 持续输出
使用 `c` 选项持续输出结果
@ -104,7 +103,7 @@ $ netstat -r
$ netstat -c
```
### 11- 过滤出某个端口
### 11 - 过滤出某个端口
`grep` 连用来过滤出某个端口的连接,
@ -112,17 +111,17 @@ $ netstat -c
$ netstat -anp | grep 3306
```
### 12- 统计连接个数
### 12 - 统计连接个数
通过与 wc 和 grep 命令连用,可以统计指定端口的连接数量
通过与 `wc``grep` 命令连用,可以统计指定端口的连接数量
```shell
$ netstat -anp | grep 3306 | wc -l
```
输出 mysql 服务端口(即 3306的连接数。
输出 mysql 服务端口(即 3306的连接数。
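类似地(补充示例),还可以统计各个 TCP 状态的连接数(跳过头两行表头,取第 6 列的状态字段):

```shell
$ netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c
```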
这就是我们间断的案例指南了,希望它带给你的信息量足够。 有任何疑问欢迎提出。
这就是我们简短的案例指南了,希望它提供了足够的信息。有任何疑问欢迎提出。
--------------------------------------------------------------------------------
@ -130,7 +129,7 @@ via: http://linuxtechlab.com/learn-use-netstat-with-examples/
作者:[Shusain][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,6 +1,7 @@
如何在 Bash 中抽取子字符串
======
子字符串不是别的,就是出现在其他字符串内的字符串。 比如 “3382” 就是 “this is a 3382 test” 的子字符串。 我们有多种方法可以从中把数字或指定部分字符串抽取出来。
所谓“子字符串”就是出现在其它字符串内的字符串。 比如 “3382” 就是 “this is a 3382 test” 的子字符串。 我们有多种方法可以从中把数字或指定部分字符串抽取出来。
[![How to Extract substring in Bash Shell on Linux or Unix](https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg)][2]
@ -8,15 +9,17 @@
### 在 Bash 中抽取子字符串
其语法为:
```shell
## syntax ##
${parameter:offset:length}
```
子字符串扩展是 bash 的一项功能。它会扩展成 parameter 值中以 offset 为开始,长为 length 个字符的字符串。 假设, $u 定义如下:
其语法为:
```shell
## define var named u ##
## 格式 ##
${parameter:offset:length}
```
子字符串扩展是 bash 的一项功能。它会扩展成 `parameter` 值中以 `offset` 为开始,长为 `length` 个字符的字符串。 假设, `$u` 定义如下:
```shell
## 定义变量 u ##
u="this is a test"
```
@ -34,6 +37,7 @@ test
```
其中这些参数分别表示:
+ 10 : 偏移位置
+ 4 : 长度
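补充一个原文之外的小技巧bash 还支持负的偏移量,用于从字符串末尾开始取子串。注意冒号后要留一个空格或使用括号,以免与 `${var:-default}` 这种默认值语法混淆:

```shell
u="this is a test"
echo "${u: -4}"   ## 输出 test
echo "${u:(-4)}"  ## 同样输出 test
```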
@ -41,9 +45,9 @@ test
根据 bash 的 man 页说明:
> The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command。The default value is<space><tab><newline>
> [IFS (内部字段分隔符)][3]用于在扩展后进行单词分割,并用内建的 read 命令将行分割为词。默认值是<space><tab><newline>
另一种 POSIX 就绪POSIX ready) 的方案如下:
另一种 <ruby>POSIX 就绪<rt>POSIX ready</rt></ruby>的方案如下:
```shell
u="this is a test"
@ -54,7 +58,7 @@ echo "$3"
echo "$4"
```
输出为:
输出为
```shell
this
@ -63,7 +67,7 @@ a
test
```
下面是一段 bash 代码,用来从 Cloudflare cache 中去除带主页的 url
下面是一段 bash 代码,用来清除 Cloudflare 缓存中指定的 url 以及主页:
```shell
#/bin/bash
@ -113,14 +117,15 @@ done
echo
```
它的使用方法为:
它的使用方法为:
```shell
~/bin/cf.clear.cache https://www.cyberciti.biz/faq/bash-for-loop/ https://www.cyberciti.biz/tips/linux-security.html
```
### 借助 cut 命令
可以使用 cut 命令来将文件中每一行或者变量中的一部分删掉。它的语法为:
可以使用 `cut` 命令来将文件中每一行或者变量中的一部分删掉。它的语法为
```shell
u="this is a test"
@ -135,13 +140,14 @@ var="$(cut -d' ' -f 4 <<< $u)"
echo "${var}"
```
想了解更多请阅读 bash 的 man 页:
想了解更多请阅读 bash 的 man 页:
```shell
man bash
man cut
```
另请参见: [Bash String Comparison: Find Out IF a Variable Contains a Substring][1]
另请参见 [Bash String Comparison: Find Out IF a Variable Contains a Substring][1]
--------------------------------------------------------------------------------
@ -149,10 +155,11 @@ via: https://www.cyberciti.biz/faq/how-to-extract-substring-in-bash/
作者:[Vivek Gite][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.cyberciti.biz
[1]:https://www.cyberciti.biz/faq/bash-find-out-if-variable-contains-substring/
[2]:https://www.cyberciti.biz/media/new/faq/2017/12/How-to-Extract-substring-in-Bash-Shell-on-Linux-or-Unix.jpg
[3]:https://bash.cyberciti.biz/guide/$IFS

View File

@ -0,0 +1,401 @@
7 个使用 bcc/BPF 的性能分析神器
============================================================
> 使用<ruby>伯克利包过滤器<rt>Berkeley Packet Filter</rt></ruby>BPF<ruby>编译器集合<rt>Compiler Collection</rt></ruby>BCC工具深度探查你的 linux 代码。
![7 superpowers for Fedora bcc/BPF performance analysis](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/penguins%20in%20space_0.jpg?itok=umpCTAul)
在 Linux 中出现的一种新技术能够为系统管理员和开发者提供大量用于性能分析和故障排除的新工具和仪表盘。它被称为<ruby>增强的伯克利数据包过滤器<rt>enhanced Berkeley Packet Filter</rt></ruby>eBPF或 BPF。虽然这些改进并不是在伯克利开发的而且它们处理的不仅仅是数据包所做的也远不止过滤。我将讨论在 Fedora 和 Red Hat Linux 发行版中使用 BPF 的一种方法,并在 Fedora 26 上演示。
BPF 可以在内核中运行由用户定义的沙盒程序,可以立即添加新的自定义功能。这就像按需给 Linux 系统添加超能力一般。 你可以使用它的例子包括如下:
* **高级性能跟踪工具**对文件系统操作、TCP 事件、用户级事件等的可编程的低开销检测。
* **网络性能** 尽早丢弃数据包以提高对 DDoS 的恢复能力,或者在内核中重定向数据包以提高性能。
* **安全监控** 7x24 小时的自定义检测和记录内核空间与用户空间内的可疑事件。
在可能的情况下BPF 程序必须通过一个内核验证机制来保证它们的安全运行,这比写自定义的内核模块更安全。我在此假设大多数人并不编写自己的 BPF 程序,而是使用别人写好的。在 GitHub 上的 [BPF Compiler Collection (bcc)][12] 项目中我已发布许多开源代码。bcc 为 BPF 开发提供了不同的前端支持,包括 Python 和 Lua并且是目前最活跃的 BPF 工具项目。
### 7 个有用的 bcc/BPF 新工具
为了了解 bcc/BPF 工具和它们的检测内容,我创建了下面的图表并添加到 bcc 项目中。
![Linux bcc/BPF 跟踪工具图](https://opensource.com/sites/default/files/u128651/bcc_tracing_tools.png)
这些是命令行界面工具,你可以通过 SSH 使用它们。目前大多数分析,包括我的老板,都是用 GUI 和仪表盘进行的。SSH 是最后的手段。但这些命令行工具仍然是预览 BPF 能力的好方法,即使你最终打算通过一个可用的 GUI 使用它。我已着手向一个开源 GUI 添加 BPF 功能,但那是另一篇文章的主题。现在我想向你分享今天就可以使用的 CLI 工具。
#### 1、 execsnoop
从哪儿开始呢?不妨先来看看新进程。那些会消耗系统资源、但很短暂的进程,它们甚至不会出现在 `top(1)` 命令等工具的显示之中。这些新进程可以使用 [execsnoop][15] 进行检测(或者用行业术语说,可以<ruby>被追踪<rt>traced</rt></ruby>)。在追踪时,我将在另一个窗口中通过 SSH 登录:
```
# /usr/share/bcc/tools/execsnoop
PCOMM PID PPID RET ARGS
sshd 12234 727 0 /usr/sbin/sshd -D -R
unix_chkpwd 12236 12234 0 /usr/sbin/unix_chkpwd root nonull
unix_chkpwd 12237 12234 0 /usr/sbin/unix_chkpwd root chkexpiry
bash 12239 12238 0 /bin/bash
id 12241 12240 0 /usr/bin/id -un
hostname 12243 12242 0 /usr/bin/hostname
pkg-config 12245 12244 0 /usr/bin/pkg-config --variable=completionsdir bash-completion
grepconf.sh 12246 12239 0 /usr/libexec/grepconf.sh -c
grep 12247 12246 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
tty 12249 12248 0 /usr/bin/tty -s
tput 12250 12248 0 /usr/bin/tput colors
dircolors 12252 12251 0 /usr/bin/dircolors --sh /etc/DIR_COLORS
grep 12253 12239 0 /usr/bin/grep -qi ^COLOR.*none /etc/DIR_COLORS
grepconf.sh 12254 12239 0 /usr/libexec/grepconf.sh -c
grep 12255 12254 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
grepconf.sh 12256 12239 0 /usr/libexec/grepconf.sh -c
grep 12257 12256 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
```
哇哦。那是什么?什么是 `grepconf.sh`?什么是 `/etc/GREP_COLORS``grep` 在读取它自己的配置文件……还是由 `grep` 运行的?这究竟是怎么工作的?
欢迎来到有趣的系统追踪世界。 你可以学到很多关于系统是如何工作的(或者根本不工作,在有些情况下),并且发现一些简单的优化方法。 `execsnoop` 通过跟踪 `exec()` 系统调用来工作,`exec()` 通常用于在新进程中加载不同的程序代码。
#### 2、 opensnoop
接着上面继续,所以,`grepconf.sh` 可能是一个 shell 脚本,对吧? 我将运行 `file(1)` 来检查它,并使用[opensnoop][16] bcc 工具来查看打开的文件:
```
# /usr/share/bcc/tools/opensnoop
PID COMM FD ERR PATH
12420 file 3 0 /etc/ld.so.cache
12420 file 3 0 /lib64/libmagic.so.1
12420 file 3 0 /lib64/libz.so.1
12420 file 3 0 /lib64/libc.so.6
12420 file 3 0 /usr/lib/locale/locale-archive
12420 file -1 2 /etc/magic.mgc
12420 file 3 0 /etc/magic
12420 file 3 0 /usr/share/misc/magic.mgc
12420 file 3 0 /usr/lib64/gconv/gconv-modules.cache
12420 file 3 0 /usr/libexec/grepconf.sh
1 systemd 16 0 /proc/565/cgroup
1 systemd 16 0 /proc/536/cgroup
```
`execsnoop``opensnoop` 这样的工具会将每个事件打印一行。上图显示 `file(1)` 命令当前打开或尝试打开的文件返回的文件描述符“FD” 列)对于 `/etc/magic.mgc` 是 -1而 “ERR” 列指示它是“文件未找到”。我不知道该文件,也不知道 `file(1)` 正在读取的 `/usr/share/misc/magic.mgc` 文件是什么。我不应该感到惊讶,但是 `file(1)` 在识别文件类型时没有问题:
```
# file /usr/share/misc/magic.mgc /etc/magic
/usr/share/misc/magic.mgc: magic binary file for file(1) cmd (version 14) (little endian)
/etc/magic: magic text file for file(1) cmd, ASCII text
```
`opensnoop` 通过跟踪 `open()` 系统调用来工作。为什么不使用 `strace -feopen file` 命令呢? 在这种情况下是可以的。然而,`opensnoop` 的一些优点在于它能在系统范围内工作,并且跟踪所有进程的 `open()` 系统调用。注意上例的输出中包括了从 systemd 打开的文件。`opensnoop` 应该系统开销更低BPF 跟踪已经被优化过,而当前版本的 `strace(1)` 仍然使用较老和较慢的 `ptrace(2)` 接口。
#### 3、 xfsslower
bcc/BPF 不仅仅可以分析系统调用。[xfsslower][17] 工具可以跟踪大于 1 毫秒(参数)延迟的常见 XFS 文件系统操作。
```
# /usr/share/bcc/tools/xfsslower 1
Tracing XFS operations slower than 1 ms
TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME
14:17:34 systemd-journa 530 S 0 0 1.69 system.journal
14:17:35 auditd 651 S 0 0 2.43 audit.log
14:17:42 cksum 4167 R 52976 0 1.04 at
14:17:45 cksum 4168 R 53264 0 1.62 [
14:17:45 cksum 4168 R 65536 0 1.01 certutil
14:17:45 cksum 4168 R 65536 0 1.01 dir
14:17:45 cksum 4168 R 65536 0 1.17 dirmngr-client
14:17:46 cksum 4168 R 65536 0 1.06 grub2-file
14:17:46 cksum 4168 R 65536 128 1.01 grub2-fstest
[...]
```
在上图输出中,我捕获到了多个延迟超过 1 毫秒 的 `cksum(1)` 读取操作(字段 “T” 等于 “R”。这是在 `xfsslower` 工具运行的时候,通过在 XFS 中动态地检测内核函数实现的,并当它结束的时候解除该检测。这个 bcc 工具也有其它文件系统的版本:`ext4slower`、`btrfsslower`、`zfsslower` 和 `nfsslower`
这是个有用的工具,也是 BPF 追踪的重要例子。对文件系统性能的传统分析主要集中在块 I/O 统计信息 —— 通常你看到的是由 `iostat(1)` 工具输出,并由许多性能监视 GUI 绘制的图表。这些统计数据显示的是磁盘如何执行,而不是真正的文件系统如何执行。通常比起磁盘来说,你更关心的是文件系统的性能,因为应用程序是在文件系统中发起请求和等待。并且,文件系统的性能可能与磁盘的性能大为不同!文件系统可以完全从内存缓存中读取数据,也可以通过预读算法和回写缓存来填充缓存。`xfsslower` 显示了文件系统的性能 —— 这是应用程序直接体验到的性能。通常这对于排除整个存储子系统的问题是有用的;如果确实没有文件系统延迟,那么性能问题很可能是在别处。
#### 4、 biolatency
虽然文件系统性能对于理解应用程序性能非常重要,但研究磁盘性能也是有好处的。当各种缓存技巧都无法挽救其延迟时,磁盘的低性能终会影响应用程序。 磁盘性能也是容量规划研究的目标。
`iostat(1)` 工具显示了平均磁盘 I/O 延迟,但平均值可能会引起误解。以直方图的形式研究 I/O 延迟的分布是有用的,这可以通过使用 [biolatency][18] 来实现:
```
# /usr/share/bcc/tools/biolatency
Tracing block device I/O... Hit Ctrl-C to end.
^C
usecs : count distribution
0 -> 1 : 0 | |
2 -> 3 : 0 | |
4 -> 7 : 0 | |
8 -> 15 : 0 | |
16 -> 31 : 0 | |
32 -> 63 : 1 | |
64 -> 127 : 63 |**** |
128 -> 255 : 121 |********* |
256 -> 511 : 483 |************************************ |
512 -> 1023 : 532 |****************************************|
1024 -> 2047 : 117 |******** |
2048 -> 4095 : 8 | |
```
这是另一个有用的工具和例子;它使用一个名为 maps 的 BPF 特性,它可以用来实现高效的内核摘要统计。从内核层传输到用户层的数据仅仅是“计数”这一列,其余部分由用户级程序生成。
值得注意的是,这种工具大多支持 CLI 选项和参数,如其使用信息所示:
```
# /usr/share/bcc/tools/biolatency -h
usage: biolatency [-h] [-T] [-Q] [-m] [-D] [interval] [count]
Summarize block device I/O latency as a histogram
positional arguments:
interval output interval, in seconds
count number of outputs
optional arguments:
-h, --help show this help message and exit
-T, --timestamp include timestamp on output
-Q, --queued include OS queued time in I/O time
-m, --milliseconds millisecond histogram
-D, --disks print a histogram per disk device
examples:
./biolatency # summarize block I/O latency as a histogram
./biolatency 1 10 # print 1 second summaries, 10 times
./biolatency -mT 1 # 1s summaries, milliseconds, and timestamps
./biolatency -Q # include OS queued time in I/O time
./biolatency -D # show each disk device separately
```
它们的行为就像其它 Unix 工具一样,这样的设计是为了便于人们采用。
#### 5、 tcplife
另一个有用的工具是 [tcplife][19] ,该例显示 TCP 会话的生命周期和吞吐量统计。
```
# /usr/share/bcc/tools/tcplife
PID COMM LADDR LPORT RADDR RPORT TX_KB RX_KB MS
12759 sshd 192.168.56.101 22 192.168.56.1 60639 2 3 1863.82
12783 sshd 192.168.56.101 22 192.168.56.1 60640 3 3 9174.53
12844 wget 10.0.2.15 34250 54.204.39.132 443 11 1870 5712.26
12851 curl 10.0.2.15 34252 54.204.39.132 443 0 74 505.90
```
在你说 “我难道不能只通过 `tcpdump(8)` 就输出这个吗?” 之前请注意,运行 `tcpdump(8)` 或任何数据包嗅探器,在高数据包速率的系统上的开销会很大,即使 `tcpdump(8)` 的用户层和内核层机制已经过多年优化(不优化的话开销会更大)。`tcplife` 不会检测每个数据包;它只会有效地监视 TCP 会话状态的变化,并由此得到该会话的持续时间。它还使用已经跟踪了吞吐量的内核计数器,以及进程和命令信息(“PID” 和 “COMM” 列),这些是 `tcpdump(8)` 等线上嗅探工具做不到的。
#### 6、 gethostlatency
之前的每个例子都涉及到内核跟踪,所以我至少需要一个用户级跟踪的例子。 这就是 [gethostlatency][20],它检测用于名称解析的 `gethostbyname(3)` 和相关的库调用:
```
# /usr/share/bcc/tools/gethostlatency
TIME PID COMM LATms HOST
06:43:33 12903 curl 188.98 opensource.com
06:43:36 12905 curl 8.45 opensource.com
06:43:40 12907 curl 6.55 opensource.com
06:43:44 12911 curl 9.67 opensource.com
06:45:02 12948 curl 19.66 opensource.cats
06:45:06 12950 curl 18.37 opensource.cats
06:45:07 12952 curl 13.64 opensource.cats
06:45:19 13139 curl 13.10 opensource.cats
```
是的,总是有 DNS 请求,所以有一个工具来监视系统范围内的 DNS 请求会很方便(这只有在应用程序使用标准系统库时才有效)。看看我如何跟踪多个对 “opensource.com” 的查找?第一个是 188.98 毫秒,然后更快,不到 10 毫秒,毫无疑问,这是缓存的作用。它还追踪了多个对 “opensource.cats” 的查找,这是一个不存在的可怜主机名,但我们仍然可以检查第一个和后续查找的延迟。(第二次查找之后是否有一些<ruby>负缓存<rt>negative caching</rt></ruby>的影响?)
#### 7、 trace
好的,再举一个例子。 [trace][21] 工具由 Sasha Goldshtein 提供,并提供了一些基本的 `printf(1)` 功能和自定义探针。 例如:
```
# /usr/share/bcc/tools/trace 'pam:pam_start "%s: %s", arg1, arg2'
PID TID COMM FUNC -
13266 13266 sshd pam_start sshd: root
```
在这里,我正在跟踪 `libpam` 及其 `pam_start(3)` 函数,并将其两个参数都打印为字符串。 `libpam` 用于插入式身份验证模块系统,该输出显示 sshd 为 “root” 用户调用了 `pam_start()`(我登录了)。 其使用信息中有更多的例子(`trace -h`),而且所有这些工具在 bcc 版本库中都有手册页和示例文件。 例如 `trace_example.txt``trace.8`
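再补充一个典型用法(取自 bcc 文档的示例,非原文内容):跟踪内核函数 `do_sys_open`,并把第二个参数(文件名)作为字符串打印出来:

```
# /usr/share/bcc/tools/trace 'do_sys_open "%s", arg2'
```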
### 通过包安装 bcc
安装 bcc 最佳的方法是从 iovisor 软件仓库中安装,按照 bcc 的 [INSTALL.md][22] 进行即可。[IO Visor][23] 是包含了 bcc 的 Linux 基金会项目。这些工具所使用的 BPF 增强功能是在 4.x 系列 Linux 内核中逐步加入的,到 4.9 版本才算完整。这意味着拥有 4.8 内核的 Fedora 25 可以运行这些工具中的大部分,而使用 4.11 内核的 Fedora 26 可以全部运行它们(至少在目前是这样)。
如果你使用的是 Fedora 25或者 Fedora 26而且这个帖子已经在很多个月前发布了 —— 你好,来自遥远的过去!),那么这个通过包安装的方式是可以工作的。 如果您使用的是 Fedora 26那么请跳至“通过源代码安装”部分它避免了一个[已修复的][26]的[已知][25]错误。 这个错误修复目前还没有进入 Fedora 26 软件包的依赖关系。 我使用的系统是:
```
# uname -a
Linux localhost.localdomain 4.11.8-300.fc26.x86_64 #1 SMP Thu Jun 29 20:09:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/fedora-release
Fedora release 26 (Twenty Six)
```
以下是我所遵循的安装步骤,但请参阅 INSTALL.md 获取更新的版本:
```
# echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f25/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo
# dnf install bcc-tools
[...]
Total download size: 37 M
Installed size: 143 M
Is this ok [y/N]: y
```
安装完成后,您可以在 `/usr/share` 中看到新的工具:
```
# ls /usr/share/bcc/tools/
argdist dcsnoop killsnoop softirqs trace
bashreadline dcstat llcstat solisten ttysnoop
[...]
```
试着运行其中一个:
```
# /usr/share/bcc/tools/opensnoop
chdir(/lib/modules/4.11.8-300.fc26.x86_64/build): No such file or directory
Traceback (most recent call last):
File "/usr/share/bcc/tools/opensnoop", line 126, in
b = BPF(text=bpf_text)
File "/usr/lib/python3.6/site-packages/bcc/__init__.py", line 284, in __init__
raise Exception("Failed to compile BPF module %s" % src_file)
Exception: Failed to compile BPF module
```
运行失败,提示 `/lib/modules/4.11.8-300.fc26.x86_64/build` 丢失。 如果你也遇到这个问题,那只是因为系统缺少内核头文件。 如果你看看这个文件指向什么(这是一个符号链接),然后使用 `dnf whatprovides` 来搜索它,它会告诉你接下来需要安装的包。 对于这个系统,它是:
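比如像这样(补充示例,路径和输出会因内核版本而异):

```
# ls -l /lib/modules/$(uname -r)/build
# dnf whatprovides /usr/src/kernels/$(uname -r)
```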
```
# dnf install kernel-devel-4.11.8-300.fc26.x86_64
[...]
Total download size: 20 M
Installed size: 63 M
Is this ok [y/N]: y
[...]
```
现在:
```
# /usr/share/bcc/tools/opensnoop
PID COMM FD ERR PATH
11792 ls 3 0 /etc/ld.so.cache
11792 ls 3 0 /lib64/libselinux.so.1
11792 ls 3 0 /lib64/libcap.so.2
11792 ls 3 0 /lib64/libc.so.6
[...]
```
运行起来了。 这是捕获自另一个窗口中的 ls 命令活动。 请参阅前面的部分以使用其它有用的命令。
### 通过源码安装
如果您需要从源代码安装,您还可以在 [INSTALL.md][27] 中找到文档和更新说明。 我在 Fedora 26 上做了如下的事情:
```
sudo dnf install -y bison cmake ethtool flex git iperf libstdc++-static \
python-netaddr python-pip gcc gcc-c++ make zlib-devel \
elfutils-libelf-devel
sudo dnf install -y luajit luajit-devel # for Lua support
sudo dnf install -y \
http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm
sudo pip install pyroute2
sudo dnf install -y clang clang-devel llvm llvm-devel llvm-static ncurses-devel
```
`netperf` 外一切妥当,其中有以下错误:
```
Curl error (28): Timeout was reached for http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm [Connection timed out after 120002 milliseconds]
```
不必理会,`netperf` 是可选的,它只是用于测试,而 bcc 没有它也会编译成功。
以下是余下的 bcc 编译和安装步骤:
```
git clone https://github.com/iovisor/bcc.git
mkdir bcc/build; cd bcc/build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr
make
sudo make install
```
现在,命令应该可以工作了:
```
# /usr/share/bcc/tools/opensnoop
PID COMM FD ERR PATH
4131 date 3 0 /etc/ld.so.cache
4131 date 3 0 /lib64/libc.so.6
4131 date 3 0 /usr/lib/locale/locale-archive
4131 date 3 0 /etc/localtime
[...]
```
### 写在最后和其他的前端
这是一个可以在 Fedora 和 Red Hat 系列操作系统上使用的新 BPF 性能分析强大功能的快速浏览。我演示了 BPF 的流行前端 [bcc][28] ,并包括了其在 Fedora 上的安装说明。bcc 附带了 60 多个用于性能分析的新工具,这将帮助您充分利用 Linux 系统。也许你会直接通过 SSH 使用这些工具,或者一旦 GUI 监控程序支持 BPF 的话,你也可以通过它们来使用相同的功能。
此外bcc 并不是正在开发的唯一前端。[ply][29] 和 [bpftrace][30] 旨在为快速编写自定义工具提供更高级的语言支持。此外,[SystemTap][31] 刚刚发布了[版本 3.2][32],包括一个早期的实验性 eBPF 后端。如果它能继续开发,它将为运行多年来开发的许多 SystemTap 脚本和 tapset提供一个安全和高效的生产级引擎。随同 eBPF 使用 SystemTap 将是另一篇文章的主题。)
如果您需要开发自定义工具,那么也可以使用 bcc 来实现,尽管其语言比 SystemTap、ply 或 bpftrace 要冗长得多。我的 bcc 工具可以作为代码示例,另外我还贡献了用 Python 开发 bcc 工具的[教程][33]。我建议先学习 bcc 的多用途工具multi-tools因为在需要编写新工具之前你可能会从中获得很多经验。您可以研究 bcc 存储库中 [funccount][34]、[funclatency][35]、[funcslower][36]、[stackcount][37]、[trace][38] 和 [argdist][39] 的示例文件。
感谢 [Opensource.com][40] 进行编辑。
### 关于作者
[![Brendan Gregg](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/brendan_face2017_620d.jpg?itok=LIwTJjL9)][43]
Brendan Gregg 是 Netflix 的一名高级性能架构师,在那里他进行大规模的计算机性能设计、分析和调优。
题图opensource.com
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/bccbpf-performance
作者:[Brendan Gregg][a]
译者:[yongshouzhang](https://github.com/yongshouzhang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/brendang
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://opensource.com/participate
[7]:https://opensource.com/users/brendang
[8]:https://opensource.com/users/brendang
[9]:https://opensource.com/user/77626/feed
[10]:https://opensource.com/article/17/11/bccbpf-performance?rate=r9hnbg3mvjFUC9FiBk9eL_ZLkioSC21SvICoaoJjaSM
[11]:https://opensource.com/article/17/11/bccbpf-performance#comments
[12]:https://github.com/iovisor/bcc
[13]:https://opensource.com/file/376856
[14]:https://opensource.com/usr/share/bcc/tools/trace
[15]:https://github.com/brendangregg/perf-tools/blob/master/execsnoop
[16]:https://github.com/brendangregg/perf-tools/blob/master/opensnoop
[17]:https://github.com/iovisor/bcc/blob/master/tools/xfsslower.py
[18]:https://github.com/iovisor/bcc/blob/master/tools/biolatency.py
[19]:https://github.com/iovisor/bcc/blob/master/tools/tcplife.py
[20]:https://github.com/iovisor/bcc/blob/master/tools/gethostlatency.py
[21]:https://github.com/iovisor/bcc/blob/master/tools/trace.py
[22]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---binary
[23]:https://www.iovisor.org/
[24]:https://opensource.com/article/17/11/bccbpf-performance#InstallViaSource
[25]:https://github.com/iovisor/bcc/issues/1221
[26]:https://reviews.llvm.org/rL302055
[27]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---source
[28]:https://github.com/iovisor/bcc
[29]:https://github.com/iovisor/ply
[30]:https://github.com/ajor/bpftrace
[31]:https://sourceware.org/systemtap/
[32]:https://sourceware.org/ml/systemtap/2017-q4/msg00096.html
[33]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md
[34]:https://github.com/iovisor/bcc/blob/master/tools/funccount_example.txt
[35]:https://github.com/iovisor/bcc/blob/master/tools/funclatency_example.txt
[36]:https://github.com/iovisor/bcc/blob/master/tools/funcslower_example.txt
[37]:https://github.com/iovisor/bcc/blob/master/tools/stackcount_example.txt
[38]:https://github.com/iovisor/bcc/blob/master/tools/trace_example.txt
[39]:https://github.com/iovisor/bcc/blob/master/tools/argdist_example.txt
[40]:http://opensource.com/
[41]:https://opensource.com/tags/linux
[42]:https://opensource.com/tags/sysadmin
[43]:https://opensource.com/users/brendang
[44]:https://opensource.com/users/brendang
Book review: Ours to Hack and to Own
============================================================
![Book review: Ours to Hack and to Own](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDUCATION_colorbooks.png?itok=liB3FyjP "Book review: Ours to Hack and to Own")
Image by : opensource.com
It seems like the age of ownership is over, and I'm not just talking about the devices and software that many of us bring into our homes and our lives. I'm also talking about the platforms and services on which those devices and apps rely.
While many of the services that we use are free, we don't have any control over them. The firms that do, in essence, control what we see, what we hear, and what we read. Not only that, but many of them are also changing the nature of work. They're using closed platforms to power a shift away from full-time work to the [gig economy][2], one that offers little in the way of security or certainty.
This move has wide-ranging implications for the Internet and for everyone who uses and relies on it. The vision of the open Internet from just 20-odd years ago is fading and is rapidly being replaced by an impenetrable curtain.
One remedy that's becoming popular is building [platform cooperatives][3], which are digital platforms that their users own. The idea behind platform cooperatives has many of the same roots as open source, as the book "[Ours to Hack and to Own][4]" explains.
Scholar Trebor Scholz and writer Nathan Schneider have collected 40 essays discussing the rise of, and the need for, platform cooperatives as tools ordinary people can use to promote openness, and to counter the opaqueness and the restrictions of closed systems.
### Where open source fits in
At or near the core of any platform cooperative lies open source; not necessarily open source technologies, but the principles and the ethos that underlie open source—openness, transparency, cooperation, collaboration, and sharing.
In his introduction to the book, Trebor Scholz points out that:
> In opposition to the black-box systems of the Snowden-era Internet, these platforms need to distinguish themselves by making their data flows transparent. They need to show where the data about customers and workers are stored, to whom they are sold, and for what purpose.
It's that transparency, so essential to open source, which helps make platform cooperatives so appealing and a refreshing change from much of what exists now.
Open source software can definitely play a part in the vision of platform cooperatives that "Ours to Hack and to Own" shares. Open source software can provide a fast, inexpensive way for groups to build the technical infrastructure that can power their cooperatives.
Mickey Metts illustrates this in the essay, "Meet Your Friendly Neighborhood Tech Co-Op." Metts works for a firm called Agaric, which uses Drupal to build for groups and small businesses what they otherwise couldn't do for themselves. On top of that, Metts encourages anyone wanting to build and run their own business or co-op to embrace free and open source software. Why? It's high quality, it's inexpensive, you can customize it, and you can connect with large communities of helpful, passionate people.
### Not always about open source, but open source is always there
Not all of the essays in this book focus or touch on open source; however, the key elements of the open source way—cooperation, community, open governance, and digital freedom—are always on or just below the surface.
In fact, as many of the essays in "Ours to Hack and to Own" argue, platform cooperatives can be important building blocks of a more open, commons-based economy and society. That can be, in Douglas Rushkoff's words, organizations like Creative Commons compensating "for the privatization of shared intellectual resources." It can also be what Francesca Bria, Barcelona's CTO, describes as cities running their own "distributed common data infrastructures with systems that ensure the security and privacy and sovereignty of citizens' data."
### Final thought
If you're looking for a blueprint for changing the Internet and the way we work, "Ours to Hack and to Own" isn't it. The book is more a manifesto than user guide. Having said that, "Ours to Hack and to Own" offers a glimpse at what we can do if we apply the principles of the open source way to society and to the wider world.
--------------------------------------------------------------------------------
About the author:
Scott Nesbitt - Writer. Editor. Soldier of fortune. Ocelot wrangler. Husband and father. Blogger. Collector of pottery. Scott is a few of these things. He's also a long-time user of free/open source software who extensively writes and blogs about it. You can find Scott on Twitter, GitHub
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/review-book-ours-to-hack-and-own
Author: [Scott Nesbitt][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[a]:https://opensource.com/users/scottnesbitt
[1]:https://opensource.com/article/17/1/review-book-ours-to-hack-and-own?rate=dgkFEuCLLeutLMH2N_4TmUupAJDjgNvFpqWqYCbQb-8
[2]:https://en.wikipedia.org/wiki/Access_economy
[3]:https://en.wikipedia.org/wiki/Platform_cooperative
[4]:http://www.orbooks.com/catalog/ours-to-hack-and-to-own/
[5]:https://opensource.com/user/14925/feed
[6]:https://opensource.com/users/scottnesbitt
How to turn any syscall into an event: Introducing eBPF Kernel probes
============================================================
TL;DR: Using eBPF in recent (>=4.4) Linux kernel, you can turn any kernel function call into a user land event with arbitrary data. This is made easy by bcc. The probe is written in C while the data is handled by python.
If you are not familiar with eBPF or Linux tracing, you really should read the full post. It progressively goes through the pitfalls I stumbled upon while playing around with bcc / eBPF, and will save you a lot of the time I spent searching and digging.
### A note on push vs pull in a Linux world
When I started to work on containers, I was wondering how we could update a load balancer configuration dynamically based on actual system state. A common strategy, which works, is to let the container orchestrator trigger a load balancer configuration update whenever it starts a container, and then let the load balancer poll the container until some health check passes. It may be a simple “SYN” test.
While this configuration works, it has the downside of making your load balancer wait for some system to be available while it should be… load balancing.
Can we do better?
When you want a program to react to some change in a system, there are two possible strategies. The program may  _poll_  the system to detect changes or, if the system supports it, the system may  _push_ events and let the program react to them. Whether you want to use push or poll depends on the context. A good rule of thumb is to use push events when the event rate is low with respect to the processing time, and to switch to polling when the events come fast or the system may become unusable. For example, a typical network driver will wait for events from the network card, while frameworks like dpdk will actively poll the card for events to achieve the highest throughput and lowest latency.
In an ideal world, we'd have some kernel interface telling us:
> * “Hey Mr. ContainerManager, I've just created a socket for the Nginx-ware of container  _servestaticfiles_ , maybe you want to update your state?”
>
> * “Sure Mr. OS, Thanks for letting me know”
While Linux has a wide range of interfaces to deal with events, up to 3 for file events, there is no dedicated interface to get socket event notifications. You can get routing table events, neighbor table events, conntrack events, interface change events. Just, not socket events. Or maybe there is, deep hidden in a Netlink interface.
Ideally, we'd need a generic way to do it. How?
### Kernel tracing and eBPF, a bit of history
Until recently the only way was to patch the kernel or resort to SystemTap. [SystemTap][5] is a Linux tracing system. In a nutshell, it provides a DSL which is compiled into a kernel module that is then live-loaded into the running kernel. Except that some production systems disable dynamic module loading for security reasons, including the one I was working on at that time. The other way would be to patch the kernel to trigger some events, probably based on netlink. This is not really convenient. Kernel hacking comes with downsides, including “interesting” new “features” and an increased maintenance burden.
Fortunately, starting with Linux 3.15 the ground was laid to safely transform any traceable kernel function into a userland event. “Safely” is a common computer science expression referring to “some virtual machine”. This case is no exception. Linux has had one for years, actually since Linux 2.1.75, released in 1997. It's called the Berkeley Packet Filter, or BPF for short. As its name suggests, it was originally developed for the BSD firewalls. It had only two registers and only allowed forward jumps, meaning that you could not write loops with it (well, you can, if you know the maximum iterations and manually unroll them). The point was to guarantee that the program would always terminate and hence never hang the system. Still not sure if it has any use while you have iptables? It serves as the [foundation of CloudFlare's AntiDDos protection][6].
OK, so, with Linux 3.15, [BPF was extended][7] into eBPF, for “extended” BPF. It upgrades from two 32-bit registers to ten 64-bit registers and adds backward jumps, among other things. It was then [further extended in Linux 3.18][8], moving it out of the networking subsystem and adding tools like maps. To preserve the safety guarantees, it [introduces a checker][9] which validates all memory accesses and possible code paths. If the checker can't guarantee that the code will terminate within fixed boundaries, it will deny the initial insertion of the program.
For more history, there is [an excellent Oracle presentation on eBPF][10].
Let's get started.
### Hello from `inet_listen`
As writing assembly is not the most convenient task, even for the best of us, we'll use [bcc][11]. bcc is a collection of tools based on LLVM and Python, abstracting the underlying machinery. Probes are written in C and the results can be exploited from Python, making it easy to write non-trivial applications.
Start by installing bcc. For some of these examples, you may require a recent (read >= 4.4) version of the kernel. If you are willing to actually try these examples, I highly recommend that you set up a VM.  _NOT_  a docker container. You can't change the kernel in a container. As bcc is a young and dynamic project, install instructions are highly platform/version dependent. You can find up-to-date instructions on [https://github.com/iovisor/bcc/blob/master/INSTALL.md][12]
So, we want to get an event whenever a program starts to listen on a TCP socket. When calling the `listen()` syscall on an `AF_INET` + `SOCK_STREAM` socket, the underlying kernel function is [`inet_listen`][13]. We'll start by hooking a “Hello World” `kprobe` on its entry point.
```
from bcc import BPF

# Hello BPF Program
bpf_text = """
#include <net/inet_sock.h>
#include <bcc/proto.h>

// 1. Attach kprobe to "inet_listen"
int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
{
    bpf_trace_printk("Hello World!\\n");
    return 0;
};
"""

# 2. Build and Inject program
b = BPF(text=bpf_text)

# 3. Print debug output
while True:
    print b.trace_readline()
```
This program does 3 things: 1. It attaches a kernel probe to “inet_listen” using a naming convention. If the function were called, say, “my_probe”, it could be explicitly attached with `b.attach_kprobe("inet_listen", "my_probe")`. 2. It builds the program using LLVM's new BPF backend, injects the resulting bytecode using the (new) `bpf()` syscall, and automatically attaches the probes matching the naming convention. 3. It reads the raw output from the kernel pipe.
Note: the eBPF backend of LLVM is still young. If you think you've hit a bug, you may want to upgrade.
Noticed the `bpf_trace_printk` call? This is a stripped-down version of the kernel's `printk()` debug function. When used, it produces tracing information in a special kernel pipe in `/sys/kernel/debug/tracing/trace_pipe`. As the name implies, this is a pipe. If multiple readers are consuming it, only one will get a given line. This makes it unsuitable for production.
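If you are curious, that pipe can also be watched directly from a shell (as root), which is a handy sanity check when a probe seems silent:

```
# cat /sys/kernel/debug/tracing/trace_pipe
```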
Fortunately, Linux 3.19 introduced maps for message passing, and Linux 4.4 brought arbitrary perf events support. I'll demo the perf event based approach later in this post.
```
# From a first console
ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
nc-4940 [000] d... 22666.991714: : Hello World!
# From a second console
ubuntu@bcc:~$ nc -l 0 4242
^C
```
Yay!
### Grab the backlog
Now, let's print some easily accessible data. Say the “backlog”. The backlog is the number of pending established TCP connections, waiting to be `accept()`ed.
Just tweak the `bpf_trace_printk` a bit:
```
bpf_trace_printk("Listening with with up to %d pending connections!\\n", backlog);
```
If you re-run the example with this world-changing improvement, you should see something like:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
nc-5020 [000] d... 25497.154070: : Listening with up to 1 pending connections!
```
`nc` is a single connection program, hence the backlog of 1. Nginx or Redis would output 128 here. But that's another story.
Easy, huh? Now let's get the port.
### Grab the port and IP
Studying `inet_listen` source from the kernel, we know that we need to get the `inet_sock` from the `socket` object. Just copy from the sources, and insert at the beginning of the tracer:
```
// cast types. Intermediate cast not needed, kept for readability
struct sock *sk = sock->sk;
struct inet_sock *inet = inet_sk(sk);
```
The port can now be accessed from `inet->inet_sport` in network byte order (aka: Big Endian). Easy! So, we could just replace the `bpf_trace_printk` with:
```
bpf_trace_printk("Listening on port %d!\\n", inet->inet_sport);
```
Then run:
```
ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
...
R1 invalid mem access 'inv'
...
Exception: Failed to load BPF program kprobe__inet_listen
```
Except that it's not (yet) so simple. bcc is improving a  _lot_  currently. While writing this post, a couple of pitfalls had already been addressed, but not yet all. This error means the in-kernel checker could not prove that the memory accesses in the program are correct. See the explicit cast? We need to help it a little by making the accesses more explicit. We'll use the trusted `bpf_probe_read` function to read an arbitrary memory location while guaranteeing all necessary checks are done, with something like:
```
// Explicit initialization. The "=0" part is needed to "give life" to the variable on the stack
u16 lport = 0;
// Explicit arbitrary memory access. Read it:
// Read into 'lport', 'sizeof(lport)' bytes from 'inet->inet_sport' memory location
bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));
```
Reading the bound address for IPv4 is basically the same, using `inet->inet_rcv_saddr`. If we put it all together, we should get the backlog, the port and the bound IP:
```
from bcc import BPF

# BPF Program
bpf_text = """
#include <net/sock.h>
#include <net/inet_sock.h>
#include <bcc/proto.h>

// Send an event for each IPv4 listen with PID, bound address and port
int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
{
    // Cast types. Intermediate cast not needed, kept for readability
    struct sock *sk = sock->sk;
    struct inet_sock *inet = inet_sk(sk);

    // Working values. You *need* to initialize them to give them "life" on the stack and use them afterward
    u32 laddr = 0;
    u16 lport = 0;

    // Pull in details. As 'inet_sk' is internally a type cast, we need to use 'bpf_probe_read'
    // read: load into 'laddr' 'sizeof(laddr)' bytes from address 'inet->inet_rcv_saddr'
    bpf_probe_read(&laddr, sizeof(laddr), &(inet->inet_rcv_saddr));
    bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));

    // Push event
    bpf_trace_printk("Listening on %x %d with %d pending connections\\n", ntohl(laddr), ntohs(lport), backlog);
    return 0;
};
"""

# Build and Inject BPF
b = BPF(text=bpf_text)

# Print debug output
while True:
    print b.trace_readline()
```
A test run should output something like:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
nc-5024 [000] d... 25821.166286: : Listening on 7f000001 4242 with 1 pending connections
```
Provided that you listen on localhost. The address is displayed as hex here to avoid dealing with IP pretty printing, but that's all wired. And that's cool.
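If you would rather see a dotted quad, a tiny helper on the Python side can format the value. This is an illustrative addition of mine, not part of the original script:

```
import socket
import struct

def format_ipv4(addr):
    # 'addr' is the host-order integer printed above, e.g. 0x7f000001
    return socket.inet_ntoa(struct.pack("!I", addr))

print(format_ipv4(0x7f000001))  # -> 127.0.0.1
```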
Note: you may wonder why `ntohs` and `ntohl` can be called from BPF while they are not trusted. This is because they are macros and inline functions from “.h” files, and a small bug was [fixed][14] while writing this post.
All done, one more piece: we want to get the related container. In the context of networking, that means we want the network namespace. The network namespace is the building block of containers, allowing them to have isolated networks.
### Grab the network namespace: a forced introduction to perf events
On the userland side, the network namespace can be determined by checking the target of `/proc/PID/ns/net`. It should look like `net:[4026531957]`. The number between brackets is the inode number of the network namespace. This said, we could grab it by scraping `/proc`, but this is racy; we may be dealing with short-lived processes. And races are never good. We'll grab the inode number directly from the kernel. Fortunately, that's an easy one:
```
// Create and populate the variable
u32 netns = 0;
// Read the netns inode number, like /proc does
netns = sk->__sk_common.skc_net.net->ns.inum;
```
Easy. And it works.
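For comparison, the userland view of the same number can be checked from a shell (illustrative output; the inode number will differ on your system):

```
$ readlink /proc/self/ns/net
net:[4026531957]
```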
But if you've read this far, you may guess there is something wrong somewhere. And there is:
```
bpf_trace_printk("Listening on %x %d with %d pending connections in container %d\\n", ntohl(laddr), ntohs(lport), backlog, netns);
```
If you try to run it, you'll get some cryptic error message:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
error: in function kprobe__inet_listen i32 (%struct.pt_regs*, %struct.socket*, i32)
too many args to 0x1ba9108: i64 = Constant<6>
```
What clang is trying to tell you is: “Hey pal, `bpf_trace_printk` can only take 4 arguments, you've just used 5.” I won't dive into the details here, but that's a BPF limitation. If you want to dig into it, [here is a good starting point][15].
The only way to fix it is to… stop debugging and make it production ready. So let's get started (and make sure you run at least Linux 4.4). We'll use perf events, which support passing arbitrarily sized structures to userland. Additionally, only our reader will get them, so that multiple unrelated eBPF programs can produce data concurrently without issues.
To use it, we need to:
1. define a structure
2. declare the event
3. push the event
4. re-declare the event on Python's side (This step should go away in the future)
5. consume and format the event
This may seem like a lot, but it ain't. See:
```
// At the beginning of the C program, declare our event
struct listen_evt_t {
    u64 laddr;
    u64 lport;
    u64 netns;
    u64 backlog;
};
BPF_PERF_OUTPUT(listen_evt);

// In kprobe__inet_listen, replace the printk with
struct listen_evt_t evt = {
    .laddr = ntohl(laddr),
    .lport = ntohs(lport),
    .netns = netns,
    .backlog = backlog,
};
listen_evt.perf_submit(ctx, &evt, sizeof(evt));
```
Python side will require a little more work, though:
```
# We need ctypes to parse the event structure
import ctypes

# Declare data format
class ListenEvt(ctypes.Structure):
    _fields_ = [
        ("laddr", ctypes.c_ulonglong),
        ("lport", ctypes.c_ulonglong),
        ("netns", ctypes.c_ulonglong),
        ("backlog", ctypes.c_ulonglong),
    ]

# Declare event printer
def print_event(cpu, data, size):
    event = ctypes.cast(data, ctypes.POINTER(ListenEvt)).contents
    print("Listening on %x %d with %d pending connections in container %d" % (
        event.laddr,
        event.lport,
        event.backlog,
        event.netns,
    ))

# Replace the event loop
b["listen_evt"].open_perf_buffer(print_event)
while True:
    b.kprobe_poll()
```
Give it a try. In this example, I have a redis running in a docker container and nc on the host:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
Listening on 0 6379 with 128 pending connections in container 4026532165
Listening on 0 6379 with 128 pending connections in container 4026532165
Listening on 7f000001 6588 with 1 pending connections in container 4026531957
```
### Last word
Absolutely everything is now set up to trigger events from arbitrary function calls in the kernel using eBPF, and you should have seen most of the common pitfalls I hit while learning eBPF. If you want to see the full version of this tool, along with some more tricks like IPv6 support, have a look at [https://github.com/iovisor/bcc/blob/master/tools/solisten.py][16]. It's now an official tool, thanks to the support of the bcc team.
To go further, you may want to check out Brendan Gregg's blog, in particular [the post about eBPF maps and statistics][17]. He is one of the project's main contributors.
--------------------------------------------------------------------------------
via: https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-ebpf-kernel-probes/
Author: [Jean-Tiare Le Bigot][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[a]:https://blog.yadutaf.fr/about
[1]:https://blog.yadutaf.fr/tags/linux
[2]:https://blog.yadutaf.fr/tags/tracing
[3]:https://blog.yadutaf.fr/tags/ebpf
[4]:https://blog.yadutaf.fr/tags/bcc
[5]:https://en.wikipedia.org/wiki/SystemTap
[6]:https://blog.cloudflare.com/bpf-the-forgotten-bytecode/
[7]:https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-ebpf-kernel-probes/TODO
[8]:https://lwn.net/Articles/604043/
[9]:http://lxr.free-electrons.com/source/kernel/bpf/verifier.c#L21
[10]:http://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf
[11]:https://github.com/iovisor/bcc
[12]:https://github.com/iovisor/bcc/blob/master/INSTALL.md
[13]:http://lxr.free-electrons.com/source/net/ipv4/af_inet.c#L194
[14]:https://github.com/iovisor/bcc/pull/453
[15]:http://lxr.free-electrons.com/source/kernel/trace/bpf_trace.c#L86
[16]:https://github.com/iovisor/bcc/blob/master/tools/solisten.py
[17]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html
Useful Linux Commands that you should know
======
If you are a Linux system administrator or just a Linux enthusiast, then you love and use the command line, aka the CLI. Until a few years ago, the majority of Linux work was accomplished using the CLI only, and even now GUIs have their limitations. Although there are plenty of Linux distributions that can complete tasks with a GUI, learning the CLI is still a major part of mastering Linux.
To this end, we present you a list of useful Linux commands that you should know.
**Note:-** There is no particular order to these commands; all of them are equally important to learn and master in order to excel in Linux administration. One more thing: we have only shown some of the options of each command as examples; you can refer to the man pages for the complete list of options for each command.
### 1- top command
The 'top' command displays a real-time summary of our system. It also displays the processes and all the threads that are running and being managed by the system kernel.
Information provided by the top command includes uptime, number of users, load average, running/sleeping/zombie processes, CPU usage percentage by user/system, free and used system memory, swap memory, etc.
To use the top command, open a terminal and execute:
**$ top**
To exit, press 'q' or 'ctrl+c'.
### 2- free command
The 'free' command is used specifically to get information about system memory, or RAM. With this command we can get information regarding physical memory and swap memory, as well as system buffers. It provides the amount of total, free, and used memory available on the system.
To use this utility, execute the following command in a terminal:
**$ free**
It presents the data in kilobytes by default; use the '-m' option for megabytes and '-g' for gigabytes.
### 3- cp command
The 'cp' or copy command is used to copy files between folders. The syntax for using the 'cp' command is:
**$ cp source destination**
### 4- cd command
The 'cd' command is used for changing directories. We can switch between directories using the cd command.
To use it, execute:
**$ cd directory_location**
### 5- ifconfig
'ifconfig' is a very important utility for viewing and configuring network information on a Linux machine.
To use it, execute:
**$ ifconfig**
This will present the network information of all the networking devices on the system. There are a number of options that can be used with 'ifconfig' for configuration; in fact, there are so many options that we have created a separate article about them ( **Read it here || [IFCONFIG command : Learn with some examples][1]** ).
### 6- crontab command
'crontab' is another important utility, used to schedule jobs on a Linux system. With crontab, we can make sure that a command or a script is executed at a predefined time. To create a cron job, run:
**$ crontab -e**
To display all the created jobs, run:
**$ crontab -l**
You can read our detailed article regarding crontab ( **Read it here || [Scheduling Important Jobs with Crontab][2]** ). A sample entry is shown below.
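As an illustration, a crontab entry has five time fields (minute, hour, day of month, month, day of week) followed by the command to run; the entry below, with a hypothetical script path, would run a backup script every day at 2 AM:

**0 2 * * * /home/user/scripts/backup.sh**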
### 7- cat command
The 'cat' command has many uses; the most common is to display the contents of a file:
**$ cat file.txt**
But it can also be used to merge two or more files, using the syntax below:
**$ cat file1 file2 file3 file4 > file_new**
We can also use 'cat' command to clone a whole disk ( **Read it here ||
[Cloning Disks using dd & cat commands for Linux systems][3]** )
### 8- df command
The 'df' command is used to show the disk utilization of our whole Linux file system. Simply run:
**$ df**
and we will be presented with the complete disk utilization of all the partitions on our Linux machine.
### 9- du command
The 'du' command shows the amount of disk space being used by files and directories on our Linux machine. To run it, type:
**$ du /directory**
( **Recommended Read :[Use of du & df commands with examples][4]** )
### 10- mv command
The 'mv' command is used to move files or folders from one location to another. The command syntax for moving files/folders is:
**$ mv /source/filename /destination**
We can also use the 'mv' command to rename a file/folder. The syntax for renaming is:
**$ mv file_oldname file_newname**
### 11- rm command
The 'rm' command is used to remove files or folders from a Linux system. To use it, run:
**$ rm filename**
We can also use the '-rf' option with the 'rm' command to forcibly remove a file or folder and its contents from the system, but we must use it with caution.
### 12- vi/vim command
VI or VIM is one of the most famous and widely used CLI-based text editors for Linux. It takes some time to master, but its great number of utilities makes it a favorite among Linux users.
For detailed knowledge of VIM, kindly refer to the articles [**Beginner's Guide to LVM (Logical Volume Management)** & **Working with Vi/Vim Editor : Advanced concepts.**][5]
### 13- ssh command
The SSH utility is used to remotely access another machine from the current Linux machine. To access a machine, execute:
**$ ssh user@machine_ip** (or the machine name)
Once we have remote access to the machine, we can work on its CLI as if we were working on the local machine.
### 14- tar command
The 'tar' command is used to create compressed archives of files and folders, and to extract them. To compress files/folders using tar, execute:
**$ tar -cvf file.tar file_name**
where 'file.tar' is the name of the archive to create and 'file_name' is the name of the source file or folder. To extract an archive:
**$ tar -xvf file.tar**
For more details on 'tar' command, read [**Tar command : Compress & Decompress
the files\directories**][7]
### 15- locate command
The 'locate' command is used to find files and folders on your Linux machine. To use it, run:
**$ locate file_name**
### 16- grep command
The 'grep' command is another very important command that a Linux administrator should know. It comes in especially handy when we want to search for a keyword (or several) in a file. The syntax for using it is:
**$ grep 'pattern' file.txt**
It will search for 'pattern' in the file 'file.txt' and print the matching lines on the screen. We can also redirect the output to another file:
**$ grep 'pattern' file.txt > newfile.txt**
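Two 'grep' options worth knowing are '-i' for case-insensitive matching and '-r' for searching directories recursively; for example (an illustrative command):
**$ grep -ir 'pattern' /var/log/**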
### 17- ps command
The 'ps' command is used to get information about running processes, in particular their process IDs. To get information about all processes, run:
**$ ps -ef**
To get information regarding a single process, execute:
**$ ps -ef | grep java**
### 18- kill command
The 'kill' command is used to kill a running process. To kill a process we need its process ID, which we can get using the 'ps' command above. To kill a process, run:
**$ kill -9 process_id**
### 19- ls command
The 'ls' command is used to list all the files in a directory. To use it, execute:
**$ ls**
### 20- mkdir command
To create a directory on a Linux machine, we use the 'mkdir' command. The syntax for using 'mkdir' is:
**$ mkdir new_dir**
These were some of the useful Linux commands that every system admin should know. We will soon be sharing another list of more important commands that you should know as a Linux lover. You can also leave your suggestions and queries in the comment box below.
--------------------------------------------------------------------------------
via: http://linuxtechlab.com/useful-linux-commands-you-should-know/
Author: [][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[a]:http://linuxtechlab.com
[1]:http://linuxtechlab.com/ifconfig-command-learn-examples/
[2]:http://linuxtechlab.com/scheduling-important-jobs-crontab/
[3]:http://linuxtechlab.com/linux-disk-cloning-using-dd-cat-commands/
[4]:http://linuxtechlab.com/du-df-commands-examples/
[5]:http://linuxtechlab.com/working-vivim-editor-advanced-concepts/
[7]:http://linuxtechlab.com/tar-command-compress-decompress-files
Dive into BPF: a list of reading material
============================================================
* [What is BPF?][143]
* [Dive into the bytecode][144]
* [Resources][145]
* [Generic presentations][23]
* [About BPF][1]
* [About XDP][2]
* [About other components related or based on eBPF][3]
* [Documentation][24]
* [About BPF][4]
* [About tc][5]
* [About XDP][6]
* [About P4 and BPF][7]
* [Tutorials][25]
* [Examples][26]
* [From the kernel][8]
* [From package iproute2][9]
* [From bcc set of tools][10]
* [Manual pages][11]
* [The code][27]
* [BPF code in the kernel][12]
* [XDP hooks code][13]
* [BPF logic in bcc][14]
* [Code to manage BPF with tc][15]
* [BPF utilities][16]
* [Other interesting chunks][17]
* [LLVM backend][18]
* [Running in userspace][19]
* [Commit logs][20]
* [Troubleshooting][28]
* [Errors at compilation time][21]
* [Errors at load and run time][22]
* [And still more!][29]
_~ [Updated][146] 2017-11-02 ~_
# What is BPF?
BPF, as in **B**erkeley **P**acket **F**ilter, was initially conceived in 1992 so as to provide a way to filter packets and to avoid useless packet copies from kernel to userspace. It initially consisted of a simple bytecode that is injected from userspace into the kernel, where it is checked by a verifier (to prevent kernel crashes or security issues) and attached to a socket, then run on each received packet. It was ported to Linux a couple of years later and used for a small number of applications (tcpdump, for example). The simplicity of the language as well as the existence of an in-kernel Just-In-Time (JIT) compiling machine for BPF were factors for the excellent performance of this tool.
Then in 2013, Alexei Starovoitov completely reshaped it, and started to add new functionalities and to improve the performance of BPF. This new version is designated as eBPF (for “extended BPF”), while the former becomes cBPF (“classic” BPF). New features such as maps and tail calls appeared. The JIT machines were rewritten. The new language is even closer to native machine language than cBPF was. Also, new attach points in the kernel have been created.
Thanks to those new hooks, eBPF programs can be designed for a variety of use cases, which fall into two fields of application. One of them is the domain of kernel tracing and event monitoring. BPF programs can be attached to kprobes, and they compare well with other tracing methods, with many advantages (and sometimes some drawbacks).
The other application domain remains network programming. In addition to socket filter, eBPF programs can be attached to tc (Linux traffic control tool) ingress or egress interfaces and perform a variety of packet processing tasks, in an efficient way. This opens new perspectives in the domain.
And eBPF performances are further leveraged through the technologies developed for the IO Visor project: new hooks have also been added for XDP (“eXpress Data Path”), a new fast path recently added to the kernel. XDP works in conjunction with the Linux stack, and relies on BPF to perform very fast packet processing.
Even some projects such as P4 and Open vSwitch [consider][155] BPF or have started to approach it. Some others, such as CETH and Cilium, are entirely based on it. BPF is buzzing, so we can expect a lot of tools and projects to orbit around it soon…
# Dive into the bytecode
As for me: some of my work (including for [BEBA][156]) is closely related to eBPF, and several future articles on this site will focus on this topic. Logically, I wanted to somehow introduce BPF on this blog before going down into the details; I mean, a real introduction, more developed regarding BPF functionalities than the brief abstract provided in the first section: What are BPF maps? Tail calls? What do the internals look like? And so on. But there are a lot of presentations on this topic available on the web already, and I do not wish to create “yet another BPF introduction” that would come as a duplicate of existing documents.
So instead, here is what we will do. After all, I spent some time reading and learning about BPF, and while doing so, I gathered a fair amount of material about BPF: introductions, documentation, but also tutorials or examples. There is a lot to read, but in order to read it, one has to  _find_  it first. Therefore, as an attempt to help people who wish to learn and use BPF, the present article introduces a list of resources. These are various kinds of readings, that hopefully will help you dive into the mechanics of this kernel bytecode.
# Resources
![](https://qmonnet.github.io/whirl-offload/img/icons/pic.svg)
### Generic presentations
The documents linked below provide a generic overview of BPF, or of some closely related topics. If you are very new to BPF, you can try picking a couple of presentations among the first ones and reading the ones you like most. If you know eBPF already, you probably want to target specific topics instead, lower down in the list.
### About BPF
Generic presentations about eBPF:
* [_Making the Kernels Networking Data Path Programmable with BPF and XDP_][53]  (Daniel Borkmann, OSSNA17, Los Angeles, September 2017):
One of the best sets of slides available to quickly understand all the basics about eBPF and XDP (mostly for network processing).
* [The BSD Packet Filter][54] (Suchakra Sharma, June 2017): 
A very nice introduction, mostly about the tracing aspects.
* [_BPF: tracing and more_][55]  (Brendan Gregg, January 2017):
Mostly about the tracing use cases.
* [_Linux BPF Superpowers_][56]  (Brendan Gregg, March 2016):
With a first part on the use of **flame graphs**.
* [_IO Visor_][57]  (Brenden Blanco, SCaLE 14x, January 2016):
Also introduces **IO Visor project**.
* [_eBPF on the Mainframe_][58]  (Michael Holzheu, LinuxCon, Dublin, October 2015)
* [_New (and Exciting!) Developments in Linux Tracing_][59]  (Elena Zannoni, LinuxCon, Japan, 2015)
* [_BPF — in-kernel virtual machine_][60]  (Alexei Starovoitov, February 2015):
Presentation by the author of eBPF.
* [_Extending extended BPF_][61]  (Jonathan Corbet, July 2014)
**BPF internals**:
* Daniel Borkmann has been doing amazing work to present **the internals** of eBPF, in particular about **its use with tc**, through several talks and papers.
* [_Advanced programmability and recent updates with tc's cls_bpf_][30]  (netdev 1.2, Tokyo, October 2016):
Daniel provides details on eBPF, its use for tunneling and encapsulation, direct packet access, and other features.
* [_cls_bpf/eBPF updates since netdev 1.1_][31]  (netdev 1.2, Tokyo, October 2016, part of [this tc workshop][32])
* [_On getting tc classifier fully programmable with cls_bpf_][33]  (netdev 1.1, Sevilla, February 2016):
After introducing eBPF, this presentation provides insights on many internal BPF mechanisms (map management, tail calls, verifier). A must-read! For the most ambitious, [the full paper is available here][34].
* [_Linux tc and eBPF_][35]  (fosdem16, Brussels, Belgium, January 2016)
* [_eBPF and XDP walkthrough and recent updates_][36]  (fosdem17, Brussels, Belgium, February 2017)
These presentations are probably one of the best sources of documentation to understand the design and implementation of internal mechanisms of eBPF.
The [**IO Visor blog**][157] has some interesting technical articles about BPF. Some of them contain a bit of marketing talk.
**Kernel tracing**: summing up all existing methods, including BPF:
* [_Meet-cute between eBPF and Kernel Tracing_][62]  (Viller Hsiao, July 2016):
Kprobes, uprobes, ftrace
* [_Linux Kernel Tracing_][63]  (Viller Hsiao, July 2016):
Systemtap, Kernelshark, trace-cmd, LTTng, perf-tool, ftrace, hist-trigger, perf, function tracer, tracepoint, kprobe/uprobe…
Regarding **event tracing and monitoring**, Brendan Gregg uses eBPF a lot and does an excellent job at documenting some of his use cases. If you are into kernel tracing, you should read his blog articles related to eBPF or to flame graphs. Most of them are accessible [from this article][158] or by browsing his blog.
Introducing BPF, but also presenting **generic concepts of Linux networking**:
* [_Linux Networking Explained_][64]  (Thomas Graf, LinuxCon, Toronto, August 2016)
* [_Kernel Networking Walkthrough_][65]  (Thomas Graf, LinuxCon, Seattle, August 2015)
**Hardware offload**:
* eBPF with tc or XDP supports hardware offload, starting with Linux kernel version 4.9 and introduced by Netronome. Here is a presentation about this feature:
[eBPF/XDP hardware offload to SmartNICs][147] (Jakub Kicinski and Nic Viljoen, netdev 1.2, Tokyo, October 2016)
About **cBPF**:
* [_The BSD Packet Filter: A New Architecture for User-level Packet Capture_][66]  (Steven McCanne and Van Jacobson, 1992):
The original paper about (classic) BPF.
* [The FreeBSD manual page about BPF][67] is a useful resource to understand cBPF programs.
* Daniel Borkmann realized at least two presentations on cBPF, [one in 2013 on mmap, BPF and Netsniff-NG][68], and [a very complete one in 2014 on tc and cls_bpf][69].
* On Cloudflare's blog, Marek Majkowski presented his [use of BPF bytecode with the `xt_bpf` module for **iptables**][70]. It is worth mentioning that eBPF is also supported by this module, starting with Linux kernel 4.10 (I do not know of any talk or article about this, though).
* [Libpcap filters syntax][71]
### About XDP
* [XDP overview][72] on the IO Visor website.
* [_eXpress Data Path (XDP)_][73]  (Tom Herbert, Alexei Starovoitov, March 2016):
The first presentation about XDP.
* [_BoF - What Can BPF Do For You?_][74]  (Brenden Blanco, LinuxCon, Toronto, August 2016).
* [_eXpress Data Path_][148]  (Brenden Blanco, Linux Meetup at Santa Clara, July 2016):
Contains some (somewhat marketing?) **benchmark results**! With a single core:
* ip routing drop: ~3.6 million packets per second (Mpps)
* tc (with clsact qdisc) drop using BPF: ~4.2 Mpps
* XDP drop using BPF: 20 Mpps (<10 % CPU utilization)
* XDP forward (on port on which the packet was received) with rewrite: 10 Mpps
(Tests performed with the mlx4 driver).
* Jesper Dangaard Brouer has several excellent sets of slides, that are essential to fully understand the internals of XDP.
* [_XDP eXpress Data Path, Intro and future use-cases_][37]  (September 2016):
_“Linux Kernel's fight against DPDK”_ . **Future plans** (as of this writing) for XDP and comparison with DPDK.
* [_Network Performance Workshop_][38]  (netdev 1.2, Tokyo, October 2016):
Additional hints about XDP internals and expected evolution.
* [_XDP eXpress Data Path, Used for DDoS protection_][39]  (OpenSourceDays, March 2017):
Contains details and use cases about XDP, with **benchmark results**, and **code snippets** for **benchmarking** as well as for **basic DDoS protection** with eBPF/XDP (based on an IP blacklisting scheme).
* [_Memory vs. Networking, Provoking and fixing memory bottlenecks_][40]  (LSF Memory Management Summit, March 2017):
Provides a lot of details about current **memory issues** faced by XDP developers. Do not start with this one, but if you already know XDP and want to see how it really works on the page allocation side, this is a very helpful resource.
* [_XDP for the Rest of Us_][41]  (netdev 2.1, Montreal, April 2017), with Andy Gospodarek:
How to get started with eBPF and XDP for normal humans. This presentation was also summarized by Julia Evans on [her blog][42].
(Jesper also created and tries to extend some documentation about eBPF and XDP, see [related section][75].)
* [_XDP workshop — Introduction, experience, and future development_][76]  (Tom Herbert, netdev 1.2, Tokyo, October 2016). As of this writing, only the video is available; I don't know if the slides will be added.
* [_High Speed Packet Filtering on Linux_][149]  (Gilberto Bertin, DEF CON 25, Las Vegas, July 2017) — an excellent introduction to state-of-the-art packet filtering on Linux, oriented towards DDoS protection, talking about packet processing in the kernel, kernel bypass, XDP and eBPF.
### About other components related or based on eBPF
* [_P4 on the Edge_][77]  (John Fastabend, May 2016):
Presents the use of **P4**, a description language for packet processing, with BPF to create high-performance programmable switches.
* If you like audio presentations, there is an associated [OvS Orbit episode (#11), called  _**P4** on the Edge_][78] , dating from August 2016. OvS Orbit are interviews conducted by Ben Pfaff, one of the core maintainers of Open vSwitch. In this case, John Fastabend is interviewed.
* [_P4, EBPF and Linux TC Offload_][79]  (Dinan Gunawardena and Jakub Kicinski, August 2016):
Another presentation on **P4**, with some elements related to eBPF hardware offload on Netronome's **NFP** (Network Flow Processor) architecture.
* **Cilium** is a technology initiated by Cisco and relying on BPF and XDP to provide “fast in-kernel networking and security policy enforcement for containers based on eBPF programs generated on the fly”. [The code of this project][150] is available on GitHub. Thomas Graf has been performing a number of presentations of this topic:
* [_Cilium: Networking & Security for Containers with BPF & XDP_][43] , also featuring a load balancer use case (Linux Plumbers conference, Santa Fe, November 2016)
* [_Cilium: Networking & Security for Containers with BPF & XDP_][44]  (Docker Distributed Systems Summit, October 2016 — [video][45])
* [_Cilium: Fast IPv6 container Networking with BPF and XDP_][46]  (LinuxCon, Toronto, August 2016)
* [_Cilium: BPF & XDP for containers_][47]  (fosdem17, Brussels, Belgium, February 2017)
A good deal of content is repeated between the different presentations; if in doubt, just pick the most recent one. Daniel Borkmann has also written [a generic introduction to Cilium][80] as a guest author on the Google Open Source blog.
* There are also podcasts about **Cilium**: an [OvS Orbit episode (#4)][81], in which Ben Pfaff interviews Thomas Graf (May 2016), and [another podcast by Ivan Pepelnjak][82], still with Thomas Graf about eBPF, P4, XDP and Cilium (October 2016).
* **Open vSwitch** (OvS), and its related project **Open Virtual Network** (OVN, an open source network virtualization solution) are considering to use eBPF at various level, with several proof-of-concept prototypes already implemented:
* [Offloading OVS Flow Processing using eBPF][48] (William (Cheng-Chun) Tu, OvS conference, San Jose, November 2016)
* [Coupling the Flexibility of OVN with the Efficiency of IOVisor][49] (Fulvio Risso, Matteo Bertrone and Mauricio Vasquez Bernal, OvS conference, San Jose, November 2016)
These use cases for eBPF seem to be only at the stage of proposals (nothing merged to the OvS main branch) as far as I know, but it will be very interesting to see what comes out of it.
* XDP is envisioned to be of great help for protection against Distributed Denial-of-Service (DDoS) attacks. More and more presentations focus on this. For example, the talks from people from Cloudflare ( [_XDP in practice: integrating XDP in our DDoS mitigation pipeline_][83] ) or from Facebook ( [_Droplet: DDoS countermeasures powered by BPF + XDP_][84] ) at the netdev 2.1 conference in Montreal, Canada, in April 2017, present such use cases.
* [_CETH for XDP_][85]  (Yan Chan and Yunsong Lu, Linux Meetup, Santa Clara, July 2016):
**CETH** stands for Common Ethernet Driver Framework for faster network I/O, a technology initiated by Mellanox.
* [**The VALE switch**][86], another virtual switch that can be used in conjunction with the netmap framework, has [a BPF extension module][87].
* **Suricata**, an open source intrusion detection system, [seems to rely on eBPF components][88] for its “capture bypass” features:
[_The adventures of a Suricate in eBPF land_][89]  (Éric Leblond, netdev 1.2, Tokyo, October 2016)
[_eBPF and XDP seen from the eyes of a meerkat_][90]  (Éric Leblond, Kernel Recipes, Paris, September 2017)
* [InKeV: In-Kernel Distributed Network Virtualization for DCN][91] (Z. Ahmed, M. H. Alizai and A. A. Syed, SIGCOMM, August 2016):
**InKeV** is an eBPF-based data path architecture for virtual networks, targeting data center networks. It was initiated by PLUMgrid, and claims to achieve better performance than OvS-based OpenStack solutions.
* [_**gobpf** - utilizing eBPF from Go_][92]  (Michael Schubert, fosdem17, Brussels, Belgium, February 2017):
A “library to create, load and use eBPF programs from Go”
* [**ply**][93] is a small but flexible open source dynamic **tracer** for Linux, with some features similar to the bcc tools, but with a simpler language inspired by awk and dtrace, written by Tobias Waldekranz.
* If you read my previous article, you might be interested in this talk I gave about [implementing the OpenState interface with eBPF][151], for stateful packet processing, at fosdem17.
![](https://qmonnet.github.io/whirl-offload/img/icons/book.svg)
### Documentation
Once you managed to get a broad idea of what BPF is, you can put aside generic presentations and start diving into the documentation. Below are the most complete documents about BPF specifications and functioning. Pick the one you need and read them carefully!
### About BPF
* The **specification of BPF** (both classic and extended versions) can be found within the documentation of the Linux kernel, and in particular in file[linux/Documentation/networking/filter.txt][94]. The use of BPF as well as its internals are documented there. Also, this is where you can find **information about errors thrown by the verifier** when loading BPF code fails. Can be helpful to troubleshoot obscure error messages.
* Also in the kernel tree, there is a document about **frequent Questions & Answers** on eBPF design in file [linux/Documentation/bpf/bpf_design_QA.txt][95].
* … But the kernel documentation is dense and not especially easy to read. If you look for a simple description of eBPF language, head for [its **summarized description**][96] on the IO Visor GitHub repository instead.
* By the way, the IO Visor project gathered a lot of **resources about BPF**. Mostly, it is split between[the documentation directory][97] of its bcc repository, and the whole content of [the bpf-docs repository][98], both on GitHub. Note the existence of this excellent [BPF **reference guide**][99] containing a detailed description of BPF C and bcc Python helpers.
* To hack with BPF, there are some essential **Linux manual pages**. The first one is [the `bpf(2)` man page][100] about the `bpf()` **system call**, which is used to manage BPF programs and maps from userspace. It also contains a description of BPF advanced features (program types, maps and so on). The second one is mostly addressed to people wanting to attach BPF programs to tc interface: it is [the `tc-bpf(8)` man page][101], which is a reference for **using BPF with tc**, and includes some example commands and samples of code.
* Jesper Dangaard Brouer initiated an attempt to **update eBPF Linux documentation**, including **the different kinds of maps**. [He has a draft][102] to which contributions are welcome. Once ready, this document should be merged into the man pages and into kernel documentation.
* The Cilium project also has an excellent [**BPF and XDP Reference Guide**][103], written by core eBPF developers, that should prove immensely useful to any eBPF developer.
* David Miller has sent several enlightening emails about eBPF/XDP internals on the [xdp-newbies][152]mailing list. I could not find a link that gathers them at a single place, so here is a list:
* [bpf.h and you…][50]
* [Contextually speaking…][51]
* [BPF Verifier Overview][52]
The last one is possibly the best existing summary about the verifier at this date.
* Ferris Ellis started [a **blog post series about eBPF**][104]. As I write this paragraph, the first article is out, with some historical background and future expectations for eBPF. Next posts should be more technical, and look promising.
* [A **list of BPF features per kernel version**][153] is available in the bcc repository. Useful if you want to know the minimal kernel version required to run a given feature. I contributed and added the links to the commits that introduced each feature, so you can also easily access the commit logs from there.
### About tc
When using BPF for networking purposes in conjunction with tc, the Linux tool for **t**raffic **c**ontrol, one may wish to gather information about tc's generic functioning. Here are a couple of resources about it.
* It is difficult to find simple tutorials about **QoS on Linux**. The two links I have are long and quite dense, but if you can find the time to read them you will learn nearly everything there is to know about tc (nothing about BPF, though). Here they are:  [_Traffic Control HOWTO_  (Martin A. Brown, 2006)][105], and the  [_Linux Advanced Routing & Traffic Control HOWTO_  (“LARTC”) (Bert Hubert & al., 2002)][106].
* **tc manual pages** may not be up-to-date on your system, since several of them have been added lately. If you cannot find the documentation for a particular queuing discipline (qdisc), class or filter, it may be worth checking the latest [manual pages for tc components][107].
* Some additional material can be found within the files of the iproute2 package itself: the package contains [some documentation][108], including some files that helped me better understand [the functioning of **tc's actions**][109].
**Edit:** While still available from the Git history, these files have been deleted from iproute2 in October 2017.
* Not exactly documentation: there was [a workshop about several tc features][110] (including filtering, BPF, tc offload, …) organized by Jamal Hadi Salim during the netdev 1.2 conference (October 2016).
* Bonus information—If you use `tc` a lot, here are some good news: I [wrote a bash completion function][111] for this tool, and it should be shipped with package iproute2 coming with kernel version 4.6 and higher!
### About XDP
* Some [work-in-progress documentation (including specifications)][112] for XDP started by Jesper Dangaard Brouer, but meant to be a collaborative work. Under progress (September 2016): you should expect it to change, and maybe to be moved at some point (Jesper [called for contribution][113], if you feel like improving it).
* The [BPF and XDP Reference Guide][114] from the Cilium project… Well, the name says it all.
### About P4 and BPF
[P4][159] is a language used to specify the behavior of a switch. It can be compiled for a number of hardware or software targets. As you may have guessed, one of these targets is BPF… The support is only partial: some P4 features cannot be translated to BPF, and in a similar way there are things that BPF can do that are not possible to express with P4. Anyway, the documentation related to **P4 use with BPF** [used to be hidden in the bcc repository][160]. This changed with the P4_16 version: the p4c reference compiler now includes [a backend for eBPF][161].
![](https://qmonnet.github.io/whirl-offload/img/icons/flask.svg)
### Tutorials
Brendan Gregg has produced excellent **tutorials** intended for people who want to **use bcc tools** for tracing and monitoring events in the kernel. [The first tutorial about using bcc itself][162] comes with eleven steps (as of today) to understand how to use the existing tools, while [the one **intended for Python developers**][163] focuses on developing new tools, across seventeen “lessons”.
Sasha Goldshtein also has some [_**Linux Tracing Workshops Materials**_][164] involving the use of several BPF tools for tracing.
Another post by Jean-Tiare Le Bigot provides a detailed (and instructive!) example of [using perf and eBPF to set up a low-level tracer][165] for ping requests and replies.
Few tutorials exist for network-related eBPF use cases. There are some interesting documents, including an _eBPF Offload Starting Guide_, on the [Open NFP][166] platform operated by Netronome. Other than these, the talk from Jesper, [_XDP for the Rest of Us_][167], is probably one of the best ways to get started with XDP.
![](https://qmonnet.github.io/whirl-offload/img/icons/gears.svg)
### Examples
It is always nice to have examples, to see how things really work. But BPF program samples are scattered across several projects, so I listed all the ones I know of. The examples do not always use the same helpers (for instance, tc and bcc both have their own set of helpers to make it easier to write BPF programs in C).
### From the kernel
The kernel contains examples for most types of program: filters to bind to sockets or to tc interfaces, event tracing/monitoring, and even XDP. You can find these examples under the [linux/samples/bpf/][168] directory.
Also do not forget to have a look at the logs related to the (git) commits that introduced a particular feature; they may contain detailed examples of the feature.
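If you want to build and run the samples, a session could look like the following sketch; it assumes clang and llc are installed, and the exact invocation may vary between kernel versions:
```
$ cd linux                     # top of the kernel source tree
$ make headers_install         # export fresh kernel headers for the samples
$ make samples/bpf/            # build the BPF samples with clang/llc
$ sudo ./samples/bpf/sockex1   # run one of the built samples, e.g. sockex1
```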
### From package iproute2
The iproute2 package provides several examples as well. They are obviously oriented towards network programming, since the programs are to be attached to tc ingress or egress interfaces. The examples dwell under the [iproute2/examples/bpf/][169] directory.
### From bcc set of tools
Many examples are [provided with bcc][170]:
* Some are networking example programs, under the associated directory. They include socket filters, tc filters, and an XDP program.
* The `tracing` directory includes a lot of example **tracing programs**. The tutorials mentioned earlier are based on these. These programs cover a wide range of event monitoring functions, and some of them are production-oriented. Note that on certain Linux distributions (at least for Debian, Ubuntu, Fedora, Arch Linux), these programs have been [packaged][115] and can be “easily” installed by typing e.g. `# apt install bcc-tools`, but as of this writing (and except for Arch Linux), this first requires setting up IO Visor's own package repository (see the short example after this list).
* There are also some examples **using Lua** as a different BPF back-end (that is, BPF programs are written with Lua instead of a subset of C, allowing you to use the same language for front-end and back-end), in the third directory.
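As a quick taste of the packaged tracing tools mentioned above, here is a sketch of a session; the installation path is an assumption, and varies with the distribution and packaging:
```
$ sudo /usr/share/bcc/tools/execsnoop   # trace new processes system-wide
$ sudo /usr/share/bcc/tools/opensnoop   # trace open() syscalls as they happen
```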
### Manual pages
While bcc is generally the easiest way to inject and run a BPF program in the kernel, attaching programs to tc interfaces can also be performed with the `tc` tool itself. So if you intend to **use BPF with tc**, you can find some example invocations in the [`tc-bpf(8)` manual page][171].
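To give an idea of what such invocations look like, here is a minimal sketch that compiles a program and attaches it on ingress; the interface, file and section names are assumptions:
```
$ clang -O2 -target bpf -c bpf_prog.c -o bpf_prog.o   # compile the C program to a BPF object file
$ sudo tc qdisc add dev eth0 clsact                   # clsact provides ingress/egress hooks
$ sudo tc filter add dev eth0 ingress bpf da obj bpf_prog.o sec ingress
$ sudo tc filter show dev eth0 ingress                # check that the filter is attached
```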
![](https://qmonnet.github.io/whirl-offload/img/icons/srcfile.svg)
### The code
Sometimes, BPF documentation or examples are not enough, and you may have no other solution than to display the code in your favorite text editor (which should be Vim, of course) and to read it. Or you may want to hack into the code so as to patch it or add features to the machine. So here are a few pointers to the relevant files; finding the functions you want is up to you!
### BPF code in the kernel
* The file [linux/include/linux/bpf.h][116] and its counterpart [linux/include/uapi/linux/bpf.h][117] contain **definitions** related to eBPF, to be used respectively in the kernel and to interface with userspace programs.
* On the same pattern, files [linux/include/linux/filter.h][118] and [linux/include/uapi/linux/filter.h][119] contain information used to **run the BPF programs**.
* The **main pieces of code** related to BPF are under [linux/kernel/bpf/][120] directory. **The different operations permitted by the system call**, such as program loading or map management, are implemented in file `syscall.c`, while `core.c` contains the **interpreter**. The other files have self-explanatory names: `verifier.c` contains the **verifier** (no kidding), `arraymap.c` the code used to interact with **maps** of type array, and so on.
* The **helpers**, as well as several functions related to networking (with tc, XDP…) and available to the user, are implemented in [linux/net/core/filter.c][121]. It also contains the code to migrate cBPF bytecode to eBPF (since all cBPF programs are now translated to eBPF in the kernel before being run).
* The **JIT compilers** are under the directory of their respective architectures, such as file [linux/arch/x86/net/bpf_jit_comp.c][122] for x86.
* You will find the code related to **the BPF components of tc** in the [linux/net/sched/][123] directory, and in particular in files `act_bpf.c` (action) and `cls_bpf.c` (filter).
* I have not hacked with **event tracing** in BPF, so I do not really know about the hooks for such programs. There is some stuff in [linux/kernel/trace/bpf_trace.c][124]. If you are interested in this and want to know more, you may dig on the side of Brendan Gregg's presentations or blog posts.
* Nor have I used **seccomp-BPF**. But the code is in [linux/kernel/seccomp.c][125], and some example use cases can be found in [linux/tools/testing/selftests/seccomp/seccomp_bpf.c][126].
### XDP hooks code
Once loaded into the in-kernel BPF virtual machine, **XDP** programs are hooked from userspace into the kernel network path thanks to a Netlink command. On reception, the function `dev_change_xdp_fd()` in file [linux/net/core/dev.c][172] is called and sets an XDP hook. Such hooks are located in the drivers of supported NICs. For example, the mlx4 driver used for some Mellanox hardware has hooks implemented in files under the [drivers/net/ethernet/mellanox/mlx4/][173] directory. File en_netdev.c receives Netlink commands and calls `mlx4_xdp_set()`, which in turn calls, for instance, `mlx4_en_process_rx_cq()` (for the RX side) implemented in file en_rx.c.
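From the user's side, this Netlink command is typically issued through the `ip` tool shipped with a recent iproute2; a sketch, where the interface and object file names are assumptions and the program is expected in the object's default section:
```
$ sudo ip link set dev eth0 xdp obj xdp_prog.o   # attach the XDP program to eth0
$ sudo ip link set dev eth0 xdp off              # detach it again
```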
### BPF logic in bcc
One can find the code for the **bcc** set of tools [on the bcc GitHub repository][174]. The **Python code**, including the `BPF` class, is initiated in file [bcc/src/python/bcc/__init__.py][175]. But most of the interesting stuff, in my opinion, such as loading the BPF program into the kernel, happens [in the libbcc **C library**][176].
### Code to manage BPF with tc
The code related to BPF **in tc** comes with the iproute2 package, of course. Some of it is under the [iproute2/tc/][177] directory. The files f_bpf.c and m_bpf.c (and e_bpf.c) are used respectively to handle BPF filters and actions (and the tc `exec` command, whatever this may be). File q_clsact.c defines the `clsact` qdisc, especially created for BPF. But **most of the BPF userspace logic** is implemented in the [iproute2/lib/bpf.c][178] library, so this is probably where you should head if you want to mess with BPF and tc (it was moved from file iproute2/tc/tc_bpf.c, where you may find the same code in older versions of the package).
### BPF utilities
The kernel also ships the sources of three tools (`bpf_asm.c`, `bpf_dbg.c`, `bpf_jit_disasm.c`) related to BPF, under the [linux/tools/net/][179] or [linux/tools/bpf/][180] directory depending on your version:
* `bpf_asm` is a minimal cBPF assembler.
* `bpf_dbg` is a small debugger for cBPF programs.
* `bpf_jit_disasm` is generic for both BPF flavors and could be highly useful for JIT debugging.
* `bpftool` is a generic utility written by Jakub Kicinski that can be used to interact with eBPF programs and maps from userspace, for example to show, dump or pin programs, or to show, create, pin, update or delete maps.
Read the comments at the top of the source files to get an overview of their usage.
### Other interesting chunks
If you are interested in the use of less common languages with BPF, bcc contains [a **P4 compiler** for BPF targets][181] as well as [a **Lua front-end**][182] that can be used as alternatives to the C subset and (in the case of Lua) to the Python tools.
### LLVM backend
The BPF backend used by clang / LLVM for compiling C into eBPF was added to the LLVM sources in [this commit][183] (and can also be accessed on [the GitHub mirror][184]).
### Running in userspace
As far as I know there are at least two eBPF userspace implementations. The first one, [uBPF][185], is written in C. It contains an interpreter, a JIT compiler for the x86_64 architecture, an assembler and a disassembler.
The code of uBPF seems to have been reused to produce a [generic implementation][186] that claims to support the FreeBSD kernel, FreeBSD userspace, the Linux kernel, Linux userspace and MacOSX userspace. It is used for the [BPF extension module for the VALE switch][187].
The other userspace implementation is my own work: [rbpf][188], based on uBPF, but written in Rust. The interpreter and the JIT compiler work (both under Linux; only the interpreter for MacOSX and Windows), and there may be more in the future.
### Commit logs
As stated earlier, do not hesitate to have a look at the commit log that introduced a particular BPF feature if you want more information about it. You can search the logs in many places, such as on [git.kernel.org][189], [on GitHub][190], or on your local repository if you have cloned it. If you are not familiar with git, try things like `git blame <file>` to see what commit introduced a particular line of code, then `git show <commit>` to have details (or search by keyword in `git log` results, but this may be tedious). See also [the list of eBPF features per kernel version][191] in the bcc repository, which links to the relevant commits.
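In practice, a search session could look like this; the file is just an example, and `<commit>` is a placeholder for a real hash:
```
$ git log --oneline --grep="bpf" -- kernel/bpf/   # search commit messages for a keyword in a subtree
$ git blame kernel/bpf/verifier.c                 # see which commit introduced each line
$ git show <commit>                               # display the full commit message and diff
```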
![](https://qmonnet.github.io/whirl-offload/img/icons/wand.svg)
### Troubleshooting
The enthusiasm about eBPF is quite recent, and so far I have not found a lot of resources intended to help with troubleshooting. So here are the few I have, augmented with my own recollection of pitfalls encountered while working with BPF.
### Errors at compilation time
* Make sure you have a recent enough version of the Linux kernel (see also [this document][127]).
* If you compiled the kernel yourself: make sure you correctly installed all components, including the kernel image, headers and libc.
* When using the `bcc` shell function provided by the `tc-bpf` man page (to compile C code into BPF): I once had to add extra include paths to the clang call:
```
__bcc() {
        # Compile the C file to LLVM bitcode, then let llc emit a BPF object file
        clang -O2 -I "/usr/src/linux-headers-$(uname -r)/include/" \
              -I "/usr/src/linux-headers-$(uname -r)/arch/x86/include/" \
              -emit-llvm -c "$1" -o - | \
              llc -march=bpf -filetype=obj -o "$(basename "$1" .c).o"
    }
```
(This seems to be fixed as of today.)
* For other problems with `bcc`, do not forget to have a look at [the FAQ][128] of the tool set.
* If you downloaded the examples from the iproute2 package in a version that does not exactly match your kernel, some errors can be triggered by the headers included in the files. The example snippets indeed assume that the same version of iproute2 package and kernel headers are installed on the system. If this is not the case, download the correct version of iproute2, or edit the path of included files in the examples to point to the headers included in iproute2 (some problems may or may not occur at runtime, depending on the features in use).
### Errors at load and run time
* To load a program with tc, make sure you use a tc binary coming from an iproute2 version equivalent to the kernel in use.
* To load a program with bcc, make sure you have bcc installed on the system (just downloading the sources to run the Python script is not enough).
* With tc, if the BPF program does not return the expected values, check that you called it in the correct fashion: filter, or action, or filter with “direct-action” mode.
* With tc still, note that actions cannot be attached directly to qdiscs or interfaces without the use of a filter.
* The errors thrown by the in-kernel verifier may be hard to interpret. [The kernel documentation][129] may help, so may [the reference guide][130] or, as a last resort, the source code (see above) (good luck!). For this kind of error it is also important to keep in mind that the verifier  _does not run_  the program. If you get an error about an invalid memory access or about uninitialized data, it does not mean that these problems actually occurred (or sometimes, that they could possibly occur at all). It means that your program is written in such a way that the verifier estimates that such errors could happen, and therefore it rejects the program.
* Note that the `tc` tool has a verbose mode, and that it works well with BPF: try appending `verbose` at the end of your command line.
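For example (the device and file names are assumptions):
```
$ sudo tc filter add dev eth0 ingress bpf da obj bpf_prog.o sec ingress verbose
```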
* bcc also has verbose options: the `BPF` class has a `debug` argument that can take any combination of the three flags `DEBUG_LLVM_IR`, `DEBUG_BPF` and `DEBUG_PREPROCESSOR` (see details in [the source file][131]). It even embeds [some facilities to print output messages][132] for debugging the code.
* LLVM v4.0+ [embeds a disassembler][133] for eBPF programs. So if you compile your program with clang, adding the `-g` flag when compiling enables you to later dump your program in the rather human-friendly format used by the kernel verifier. To dump the program, use:
```
$ llvm-objdump -S -no-show-raw-insn bpf_program.o
```
* Working with maps? You want to have a look at [bpf-map][134], a very useful tool in Go created for the Cilium project, which can be used to dump the contents of kernel eBPF maps. There also exists [a clone][135] in Rust.
* There is an old [`bpf` tag on **StackOverflow**][136], but as of this writing it has hardly ever been used (and there is nearly nothing related to the new eBPF version). If you are a reader from the Future, though, you may want to check whether there has been more activity on this side.
![](https://qmonnet.github.io/whirl-offload/img/icons/zoomin.svg)
### And still more!
* In case you would like to easily **test XDP**, there is [a Vagrant setup][137] available. You can also **test bcc** [in a Docker container][138].
* Wondering where the **development and activities** around BPF occur? Well, the kernel patches always end up [on the netdev mailing list][139] (related to the Linux kernel networking stack development): search for the “BPF” or “XDP” keywords. Since April 2017, there is also [a mailing list specially dedicated to XDP programming][140] (both for architecture discussions and for asking for help). Many discussions and debates also occur [on the IO Visor mailing list][141], since BPF is at the heart of the project. If you only want to keep informed from time to time, there is also an [@IOVisor Twitter account][142].
And come back to this blog from time to time to see if there are new articles [about BPF][192]!
_Special thanks to Daniel Borkmann for the numerous [additional documents][154] he pointed out to me so that I could complete this collection._
--------------------------------------------------------------------------------
via: https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/
作者:[Quentin Monnet ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://qmonnet.github.io/whirl-offload/about/
[1]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf
[2]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp
[3]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-other-components-related-or-based-on-ebpf
[4]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf-1
[5]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-tc
[6]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1
[7]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-p4-and-bpf
[8]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-the-kernel
[9]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-package-iproute2
[10]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-bcc-set-of-tools
[11]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#manual-pages
[12]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-code-in-the-kernel
[13]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#xdp-hooks-code
[14]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-logic-in-bcc
[15]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#code-to-manage-bpf-with-tc
[16]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-utilities
[17]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#other-interesting-chunks
[18]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#llvm-backend
[19]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#running-in-userspace
[20]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#commit-logs
[21]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-compilation-time
[22]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-load-and-run-time
[23]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#generic-presentations
[24]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#documentation
[25]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#tutorials
[26]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#examples
[27]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#the-code
[28]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#troubleshooting
[29]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#and-still-more
[30]:http://netdevconf.org/1.2/session.html?daniel-borkmann
[31]:http://netdevconf.org/1.2/slides/oct5/07_tcws_daniel_borkmann_2016_tcws.pdf
[32]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop
[33]:http://www.netdevconf.org/1.1/proceedings/slides/borkmann-tc-classifier-cls-bpf.pdf
[34]:http://www.netdevconf.org/1.1/proceedings/papers/On-getting-tc-classifier-fully-programmable-with-cls-bpf.pdf
[35]:https://archive.fosdem.org/2016/schedule/event/ebpf/attachments/slides/1159/export/events/attachments/ebpf/slides/1159/ebpf.pdf
[36]:https://fosdem.org/2017/schedule/event/ebpf_xdp/
[37]:http://people.netfilter.org/hawk/presentations/xdp2016/xdp_intro_and_use_cases_sep2016.pdf
[38]:http://netdevconf.org/1.2/session.html?jesper-performance-workshop
[39]:http://people.netfilter.org/hawk/presentations/OpenSourceDays2017/XDP_DDoS_protecting_osd2017.pdf
[40]:http://people.netfilter.org/hawk/presentations/MM-summit2017/MM-summit2017-JesperBrouer.pdf
[41]:http://netdevconf.org/2.1/session.html?gospodarek
[42]:http://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/
[43]:http://www.slideshare.net/ThomasGraf5/clium-container-networking-with-bpf-xdp
[44]:http://www.slideshare.net/Docker/cilium-bpf-xdp-for-containers-66969823
[45]:https://www.youtube.com/watch?v=TnJF7ht3ZYc&list=PLkA60AVN3hh8oPas3cq2VA9xB7WazcIgs
[46]:http://www.slideshare.net/ThomasGraf5/cilium-fast-ipv6-container-networking-with-bpf-and-xdp
[47]:https://fosdem.org/2017/schedule/event/cilium/
[48]:http://openvswitch.org/support/ovscon2016/7/1120-tu.pdf
[49]:http://openvswitch.org/support/ovscon2016/7/1245-bertrone.pdf
[50]:https://www.spinics.net/lists/xdp-newbies/msg00179.html
[51]:https://www.spinics.net/lists/xdp-newbies/msg00181.html
[52]:https://www.spinics.net/lists/xdp-newbies/msg00185.html
[53]:http://schd.ws/hosted_files/ossna2017/da/BPFandXDP.pdf
[54]:https://speakerdeck.com/tuxology/the-bsd-packet-filter
[55]:http://www.slideshare.net/brendangregg/bpf-tracing-and-more
[56]:http://fr.slideshare.net/brendangregg/linux-bpf-superpowers
[57]:https://www.socallinuxexpo.org/sites/default/files/presentations/Room%20211%20-%20IOVisor%20-%20SCaLE%2014x.pdf
[58]:https://events.linuxfoundation.org/sites/events/files/slides/ebpf_on_the_mainframe_lcon_2015.pdf
[59]:https://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf
[60]:https://events.linuxfoundation.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf
[61]:https://lwn.net/Articles/603983/
[62]:http://www.slideshare.net/vh21/meet-cutebetweenebpfandtracing
[63]:http://www.slideshare.net/vh21/linux-kernel-tracing
[64]:http://www.slideshare.net/ThomasGraf5/linux-networking-explained
[65]:http://www.slideshare.net/ThomasGraf5/linuxcon-2015-linux-kernel-networking-walkthrough
[66]:http://www.tcpdump.org/papers/bpf-usenix93.pdf
[67]:http://www.gsp.com/cgi-bin/man.cgi?topic=bpf
[68]:http://borkmann.ch/talks/2013_devconf.pdf
[69]:http://borkmann.ch/talks/2014_devconf.pdf
[70]:https://blog.cloudflare.com/introducing-the-bpf-tools/
[71]:http://biot.com/capstats/bpf.html
[72]:https://www.iovisor.org/technology/xdp
[73]:https://github.com/iovisor/bpf-docs/raw/master/Express_Data_Path.pdf
[74]:https://events.linuxfoundation.org/sites/events/files/slides/iovisor-lc-bof-2016.pdf
[75]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1
[76]:http://netdevconf.org/1.2/session.html?herbert-xdp-workshop
[77]:https://schd.ws/hosted_files/2016p4workshop/1d/Intel%20Fastabend-P4%20on%20the%20Edge.pdf
[78]:https://ovsorbit.benpfaff.org/#e11
[79]:http://open-nfp.org/media/pdfs/Open_NFP_P4_EBPF_Linux_TC_Offload_FINAL.pdf
[80]:https://opensource.googleblog.com/2016/11/cilium-networking-and-security.html
[81]:https://ovsorbit.benpfaff.org/
[82]:http://blog.ipspace.net/2016/10/fast-linux-packet-forwarding-with.html
[83]:http://netdevconf.org/2.1/session.html?bertin
[84]:http://netdevconf.org/2.1/session.html?zhou
[85]:http://www.slideshare.net/IOVisor/ceth-for-xdp-linux-meetup-santa-clara-july-2016
[86]:http://info.iet.unipi.it/~luigi/vale/
[87]:https://github.com/YutaroHayakawa/vale-bpf
[88]:https://www.stamus-networks.com/2016/09/28/suricata-bypass-feature/
[89]:http://netdevconf.org/1.2/slides/oct6/10_suricata_ebpf.pdf
[90]:https://www.slideshare.net/ennael/kernel-recipes-2017-ebpf-and-xdp-eric-leblond
[91]:https://github.com/iovisor/bpf-docs/blob/master/university/sigcomm-ccr-InKev-2016.pdf
[92]:https://fosdem.org/2017/schedule/event/go_bpf/
[93]:https://wkz.github.io/ply/
[94]:https://www.kernel.org/doc/Documentation/networking/filter.txt
[95]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/Documentation/bpf/bpf_design_QA.txt?id=2e39748a4231a893f057567e9b880ab34ea47aef
[96]:https://github.com/iovisor/bpf-docs/blob/master/eBPF.md
[97]:https://github.com/iovisor/bcc/tree/master/docs
[98]:https://github.com/iovisor/bpf-docs/
[99]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[100]:http://man7.org/linux/man-pages/man2/bpf.2.html
[101]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html
[102]:https://prototype-kernel.readthedocs.io/en/latest/bpf/index.html
[103]:http://docs.cilium.io/en/latest/bpf/
[104]:https://ferrisellis.com/tags/ebpf/
[105]:http://linux-ip.net/articles/Traffic-Control-HOWTO/
[106]:http://lartc.org/lartc.html
[107]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/man/man8
[108]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc?h=v4.13.0
[109]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc/actions?h=v4.13.0
[110]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop
[111]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/commit/bash-completion/tc?id=27d44f3a8a4708bcc99995a4d9b6fe6f81e3e15b
[112]:https://prototype-kernel.readthedocs.io/en/latest/networking/XDP/index.html
[113]:https://marc.info/?l=linux-netdev&m=147436253625672
[114]:http://docs.cilium.io/en/latest/bpf/
[115]:https://github.com/iovisor/bcc/blob/master/INSTALL.md
[116]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/bpf.h
[117]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/bpf.h
[118]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/filter.h
[119]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/filter.h
[120]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/bpf
[121]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/filter.c
[122]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/x86/net/bpf_jit_comp.c
[123]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/sched
[124]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/trace/bpf_trace.c
[125]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/seccomp.c
[126]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/seccomp/seccomp_bpf.c
[127]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[128]:https://github.com/iovisor/bcc/blob/master/FAQ.txt
[129]:https://www.kernel.org/doc/Documentation/networking/filter.txt
[130]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[131]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py
[132]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md#output
[133]:https://www.spinics.net/lists/netdev/msg406926.html
[134]:https://github.com/cilium/bpf-map
[135]:https://github.com/badboy/bpf-map
[136]:https://stackoverflow.com/questions/tagged/bpf
[137]:https://github.com/iovisor/xdp-vagrant
[138]:https://github.com/zlim/bcc-docker
[139]:http://lists.openwall.net/netdev/
[140]:http://vger.kernel.org/vger-lists.html#xdp-newbies
[141]:http://lists.iovisor.org/pipermail/iovisor-dev/
[142]:https://twitter.com/IOVisor
[143]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#what-is-bpf
[144]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#dive-into-the-bytecode
[145]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#resources
[146]:https://github.com/qmonnet/whirl-offload/commits/gh-pages/_posts/2016-09-01-dive-into-bpf.md
[147]:http://netdevconf.org/1.2/session.html?jakub-kicinski
[148]:http://www.slideshare.net/IOVisor/express-data-path-linux-meetup-santa-clara-july-2016
[149]:https://cdn.shopify.com/s/files/1/0177/9886/files/phv2017-gbertin.pdf
[150]:https://github.com/cilium/cilium
[151]:https://fosdem.org/2017/schedule/event/stateful_ebpf/
[152]:http://vger.kernel.org/vger-lists.html#xdp-newbies
[153]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[154]:https://github.com/qmonnet/whirl-offload/commit/d694f8081ba00e686e34f86d5ee76abeb4d0e429
[155]:http://openvswitch.org/pipermail/dev/2014-October/047421.html
[156]:https://qmonnet.github.io/whirl-offload/2016/07/15/beba-research-project/
[157]:https://www.iovisor.org/resources/blog
[158]:http://www.brendangregg.com/blog/2016-03-05/linux-bpf-superpowers.html
[159]:http://p4.org/
[160]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4
[161]:https://github.com/p4lang/p4c/blob/master/backends/ebpf/README.md
[162]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[163]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md
[164]:https://github.com/goldshtn/linux-tracing-workshop
[165]:https://blog.yadutaf.fr/2017/07/28/tracing-a-packet-journey-using-linux-tracepoints-perf-ebpf/
[166]:https://open-nfp.org/dataplanes-ebpf/technical-papers/
[167]:http://netdevconf.org/2.1/session.html?gospodarek
[168]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/samples/bpf
[169]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/examples/bpf
[170]:https://github.com/iovisor/bcc/tree/master/examples
[171]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html
[172]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/dev.c
[173]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/mellanox/mlx4/
[174]:https://github.com/iovisor/bcc/
[175]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py
[176]:https://github.com/iovisor/bcc/blob/master/src/cc/libbpf.c
[177]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/tc
[178]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/lib/bpf.c
[179]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/net
[180]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/tools/bpf
[181]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4/compiler
[182]:https://github.com/iovisor/bcc/tree/master/src/lua
[183]:https://reviews.llvm.org/D6494
[184]:https://github.com/llvm-mirror/llvm/commit/4fe85c75482f9d11c5a1f92a1863ce30afad8d0d
[185]:https://github.com/iovisor/ubpf/
[186]:https://github.com/YutaroHayakawa/generic-ebpf
[187]:https://github.com/YutaroHayakawa/vale-bpf
[188]:https://github.com/qmonnet/rbpf
[189]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
[190]:https://github.com/torvalds/linux
[191]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[192]:https://qmonnet.github.io/whirl-offload/categories/#BPF


@@ -1,95 +0,0 @@
translating---geekpi
GitHub welcomes all CI tools
====================
[![GitHub and all CI tools](https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png)][11]
Continuous Integration ([CI][12]) tools help you stick to your team's quality standards by running tests every time you push a new commit and [reporting the results][13] to a pull request. Combined with continuous delivery ([CD][14]) tools, you can also test your code on multiple configurations, run additional performance tests, and automate every step [until production][15].
There are several CI and CD tools that [integrate with GitHub][16], some of which you can install in a few clicks from [GitHub Marketplace][17]. With so many options, you can pick the best tool for the job—even if it's not the one that comes pre-integrated with your system.
The tools that will work best for you depend on many factors, including:
* Programming language and application architecture
* Operating system and browsers you plan to support
* Your team's experience and skills
* Scaling capabilities and plans for growth
* Geographic distribution of dependent systems and the people who use them
* Packaging and delivery goals
Of course, it isn't possible to optimize your CI tool for all of these scenarios. The people who build them have to choose which use cases to serve best—and when to prioritize complexity over simplicity. For example, if you like to test small applications written in a particular programming language for one platform, you won't need the complexity of a tool that tests embedded software controllers on dozens of platforms with a broad mix of programming languages and frameworks.
If you need a little inspiration for which CI tool might work best, take a look at [popular GitHub projects][18]. Many show the status of their integrated CI/CD tools as badges in their README.md. We've also analyzed the use of CI tools across more than 50 million repositories in the GitHub community, and found a lot of variety. The following diagram shows the relative percentage of the top 10 CI tools used with GitHub.com, based on the most commonly used [commit status contexts][19] within our pull requests.
_Our analysis also showed that many teams use more than one CI tool in their projects, allowing them to emphasize what each tool does best._
[![Top 10 CI systems used with GitHub.com based on most used commit status contexts](https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png)][20]
If you'd like to check them out, here are the top 10 tools teams use:
* [Travis CI][1]
* [Circle CI][2]
* [Jenkins][3]
* [AppVeyor][4]
* [CodeShip][5]
* [Drone][6]
* [Semaphore CI][7]
* [Buildkite][8]
* [Wercker][9]
* [TeamCity][10]
It's tempting to just pick the default, pre-integrated tool without taking the time to research and choose the best one for the job, but there are plenty of [excellent choices][21] built for your specific use cases. And if you change your mind later, no problem. When you choose the best tool for a specific situation, you're guaranteeing tailored performance and the freedom of interchangeability when it no longer fits.
Ready to see how CI tools can fit into your workflow?
[Browse GitHub Marketplace][22]
--------------------------------------------------------------------------------
via: https://github.com/blog/2463-github-welcomes-all-ci-tools
作者:[jonico ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://github.com/jonico
[1]:https://travis-ci.org/
[2]:https://circleci.com/
[3]:https://jenkins.io/
[4]:https://www.appveyor.com/
[5]:https://codeship.com/
[6]:http://try.drone.io/
[7]:https://semaphoreci.com/
[8]:https://buildkite.com/
[9]:http://www.wercker.com/
[10]:https://www.jetbrains.com/teamcity/
[11]:https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png
[12]:https://en.wikipedia.org/wiki/Continuous_integration
[13]:https://github.com/blog/2051-protected-branches-and-required-status-checks
[14]:https://en.wikipedia.org/wiki/Continuous_delivery
[15]:https://developer.github.com/changes/2014-01-09-preview-the-new-deployments-api/
[16]:https://github.com/works-with/category/continuous-integration
[17]:https://github.com/marketplace/category/continuous-integration
[18]:https://github.com/explore?trending=repositories#trending
[19]:https://developer.github.com/v3/repos/statuses/
[20]:https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png
[21]:https://github.com/works-with/category/continuous-integration
[22]:https://github.com/marketplace/category/continuous-integration


@@ -1,3 +1,5 @@
translating---geekpi
Security Jobs Are Hot: Get Trained and Get Noticed
============================================================


@@ -0,0 +1,264 @@
10 Best LaTeX Editors For Linux
======
**Brief: Once you get over the learning curve, there is nothing like LaTeX. Here are the best LaTeX editors for Linux and other systems.**
## What is LaTeX?
[LaTeX][1] is a document preparation system. Unlike a plain text editor, you can't just write plain text using LaTeX editors. Here, you will have to use LaTeX commands in order to manage the content of the document.
![LaTeX Sample][3]
LaTeX editors are generally used to publish scientific research documents or books for academic purposes. Most importantly, LaTeX editors come in handy when dealing with a document containing complex mathematical notation. Surely, LaTeX editors are fun to use. But they are not that useful unless you have specific needs for a document.
## Why should you use LaTex?
Well, just like I previously mentioned, LaTeX editors are meant for specific purposes. You do not need to be a geek to figure out how to use LaTeX editors, but they are not a productive solution for users who only need a basic text editor.
If you are looking to craft a document but are not interested in spending time formatting the text, then a LaTeX editor is what you should go for. With LaTeX editors, you just have to specify the type of document, and the text fonts and sizes will be taken care of accordingly. No wonder it is considered one of the [best open source tools for writers][4].
Do note that it isn't automated; you will first have to learn LaTeX commands to let the editor handle the text formatting with precision.
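To give you a taste, here is a minimal LaTeX document that any of the editors below can compile; the content, including the bit of mathematics, is just placeholder material:
```
\documentclass{article}
% A tiny document: compile it with, e.g., pdflatex sample.tex
\begin{document}

LaTeX shines with mathematical notation. The roots of
$ax^2 + bx + c = 0$ are given by:
\[
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\]

\end{document}
```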
## 10 Of The Best LaTeX Editors For Linux
Just for information, the list is not in any specific order. Editor at number
three is not better than the editor at number seven.
### 1\. LyX
![][5]
LyX is an open source LaTeX editor and one of the best document processors available. LyX helps you focus on the structure of the write-up, just as every LaTeX editor should, and lets you forget about word formatting: LyX manages it for you, depending on the type of document specified. You also get to control a lot of details once you have it installed: margins, headers/footers, spacing/indents, tables, and so on.
If you are into crafting scientific documents, research theses, or similar, you will be delighted by LyX's formula editor, which is a charm to use. LyX also includes a set of tutorials to get started without much of a hassle.
[Lyx][6]
### 2\. Texmaker
![][7]
Texmaker is considered to be one of the best LaTeX editors for the GNOME desktop environment. It presents a great user interface, which results in a good user experience. It is also crowned as one of the most useful LaTeX editors out there. If you perform PDF conversions often, you will find Texmaker to be relatively faster than other LaTeX editors. You can take a look at a preview of what the final document will look like while you write. Also, the symbols are easy to reach when needed.
Texmaker also offers extensive support for hotkey configuration. Why not give it a try?
[Texmaker][8]
### 3\. TeXstudio
![][9]
If you want a LaTeX editor that offers a decent level of customizability along with an easy-to-use interface, then TeXstudio would be the perfect one to have installed. The UI is surely very simple, but not clumsy. TeXstudio provides syntax highlighting, comes with an integrated viewer, lets you check references, and also bundles some other assistant tools.
It also supports some cool features like auto-completion, link overlay, bookmarks, multi-cursors, and so on, which make writing a LaTeX document easier than ever before.
TeXstudio is actively maintained, which makes it a compelling choice for both
novice users and advanced writers.
[TeXstudio][10]
### 4\. Gummi
![][11]
Gummi is a very simple LaTeX editor based on the GTK+ toolkit. Well, you may not find a lot of fancy options here, but if you are just starting out, Gummi is our recommendation. It supports exporting documents to PDF format, offers syntax highlighting, and helps you with some basic error-checking functionality. Though Gummi isn't actively maintained on GitHub, it works just fine.
[Gummi][12]
### 5\. TeXpen
![][13]
TeXpen is yet another simple tool to go with. You get auto-completion with this LaTeX editor. However, you may not find the user interface impressive. If you do not mind the UI but want a super easy LaTeX editor, TeXpen could fulfill that wish for you. TeXpen also lets you correct/improve the English grammar and expressions used in the document.
[TeXpen][14]
### 6\. ShareLaTeX
![][15]
ShareLaTeX is an online LaTeX editor. If you want someone (or a group of people) to collaborate on documents you are working on, this is what you need. It offers a free plan along with several paid packages. Even students of Harvard University and Oxford University use it for their projects. With the free plan, you get the ability to add one collaborator.
The paid packages let you sync the documents on GitHub and Dropbox along with
the ability to record the full document history. You can choose to have
multiple collaborators as per your plan. For students, there's a separate
pricing plan available.
[ShareLaTeX][16]
### 7\. Overleaf
![][17]
Overleaf is yet another online LaTeX editor. Similar to ShareLaTeX, it offers
separate pricing plans for professionals and students. It also includes a free
plan where you can sync with GitHub, check your revision history, and add
multiple collaborators.
There's a limit on the number of files you can create per project, so it could bother you if you are a professional working with LaTeX documents most of the time.
[Overleaf][18]
### 8\. Authorea
![][19]
Authorea is a wonderful online LaTeX editor. However, it is not the best out there when considering the pricing plans. For free, it offers just a 100 MB data upload limit and one private document at a time. The paid plans offer you more perks, but they may not be the cheapest of the lot. The only reason you should choose Authorea is the user interface. If you love to work with a tool offering an impressive user interface, there's no looking back.
[Authorea][20]
### 9\. Papeeria
![][21]
Papeeria is the cheapest LaTeX editor you can find on the Internet, considering it is as reliable as the others. You do not get private projects if you want to use it for free. But if you prefer public projects, it lets you work on an unlimited number of projects with numerous collaborators. It features a pretty simple plot builder and includes Git sync at no additional cost. If you opt for the paid plan, it will empower you with the ability to work on 10 private projects.
[Papeeria][22]
### 10\. Kile
![Kile LaTeX editor][23]
The last entry in our list of the best LaTeX editors is Kile. Some people swear by Kile, primarily because of the features it provides.
Kile is more than just an editor. It is an IDE, like Eclipse, that provides a complete environment to work on documents and projects. Apart from quick compilation and preview, you get features like auto-completion of commands, inserting citations, organizing the document into chapters, etc. You really have to use Kile to realize its true potential.
Kile is available for Linux and Windows.
[Kile][24]
### Wrapping Up
So, there go our recommendations for the LaTeX editors you should use on Ubuntu/Linux.
Chances are that we might have missed some interesting LaTeX editors available for Linux. If you happen to know about any, let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/latex-editors-linux/
作者:[Ankush Das][a]
译者:[翻译者ID](https://github.com/翻译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/ankush/
[1]:https://www.latex-project.org/
[3]:https://itsfoss.com/wp-content/uploads/2017/11/latex-sample-example.jpeg
[4]:https://itsfoss.com/open-source-tools-writers/
[5]:https://itsfoss.com/wp-content/uploads/2017/10/lyx_latex_editor.jpg
[6]:https://www.lyx.org/
[7]:https://itsfoss.com/wp-content/uploads/2017/10/texmaker_latex_editor.jpg
[8]:http://www.xm1math.net/texmaker/
[9]:https://itsfoss.com/wp-content/uploads/2017/10/tex_studio_latex_editor.jpg
[10]:https://www.texstudio.org/
[11]:https://itsfoss.com/wp-content/uploads/2017/10/gummi_latex_editor.jpg
[12]:https://github.com/alexandervdm/gummi
[13]:https://itsfoss.com/wp-content/uploads/2017/10/texpen_latex_editor.jpg
[14]:https://sourceforge.net/projects/texpen/
[15]:https://itsfoss.com/wp-content/uploads/2017/10/sharelatex.jpg
[16]:https://www.sharelatex.com/
[17]:https://itsfoss.com/wp-content/uploads/2017/10/overleaf.jpg
[18]:https://www.overleaf.com/
[19]:https://itsfoss.com/wp-content/uploads/2017/10/authorea.jpg
[20]:https://www.authorea.com/
[21]:https://itsfoss.com/wp-content/uploads/2017/10/papeeria_latex_editor.jpg
[22]:https://www.papeeria.com/
[23]:https://itsfoss.com/wp-content/uploads/2017/11/kile-latex-800x621.png
[24]:https://kile.sourceforge.io/


@@ -0,0 +1,77 @@
Useful GNOME Shell Keyboard Shortcuts You Might Not Know About
======
As Ubuntu has moved to GNOME Shell in its 17.10 release, many users may be interested in discovering some of the most useful shortcuts in GNOME, as well as how to create their own. This article will explain both.
If you expect GNOME to ship with hundreds or thousands of shell shortcuts, you will be disappointed to learn this isn't the case. The list of shortcuts isn't miles long, and not all of them will be useful to you, but there are still many keyboard shortcuts you can take advantage of.
![gnome-shortcuts-01-settings][1]
To access the list of shortcuts, go to "Settings -> Devices -> Keyboard." Here are some less popular, yet useful shortcuts.
* Ctrl + Alt + T - this combination launches the terminal; you can use this from anywhere within GNOME
Two shortcuts I personally use quite frequently are:
* Alt + F4 - close the window on focus
* Alt + F8 - resize the window
Most of you know how to switch between open applications (Alt + Tab), but you may not know you can use Alt + Shift + Tab to cycle through applications in the reverse direction.
Another useful combination for switching within the windows of an application is Alt + (key above Tab) (example: Alt + ` on a US keyboard).
If you want to show the Activities overview, use Alt + F1.
There are quite a lot of shortcuts related to workspaces. If you are like me and don't use multiple workspaces frequently, these shortcuts are useless to you. Still, some of the ones worth noting are the following:
* Super + PageUp (or PageDown) moves to the workspace above or below
* Ctrl + Alt + Left (or Right) moves to the workspace on the left/right
If you add Shift to these commands, e.g. Shift + Ctrl + Alt + Left, you move the window one workspace up, down, to the left, or to the right.
Another favorite keyboard shortcut of mine is in the Accessibility section - Increase/Decrease Text Size. You can use Ctrl + + (and Ctrl + -) to zoom text size quickly. In some cases, this may be disabled by default, so do check it out before you try it.
The above-mentioned shortcuts are lesser known, yet useful keyboard shortcuts. If you are curious to see what else is available, you can check [the official GNOME shell cheat sheet][2].
If the default shortcuts are not to your liking, you can change them or create new ones. You do this from the same "Settings -> Devices -> Keyboard" dialog. Just select the entry you want to change, and the following dialog will pop up.
![gnome-shortcuts-02-change-shortcut][3]
Enter the keyboard combination you want.
![gnome-shortcuts-03-set-shortcut][4]
If it is already in use you will get a message. If not, just click Set, and you are done.
If you want to add new shortcuts rather than change existing ones, scroll down until you see the "Plus" sign, click it, and in the dialog that appears, enter the name and keys of your new keyboard shortcut.
![gnome-shortcuts-04-add-custom-shortcut][5]
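If you prefer doing this from a terminal, the same custom shortcut can usually be created with `gsettings`. This is only a sketch, assuming GNOME's usual schema paths; the shortcut name, command, and key binding are example values:
```
# Register one custom keybinding slot (this replaces any existing list!),
# then fill in its name, command and keys via the relocatable schema
gsettings set org.gnome.settings-daemon.plugins.media-keys custom-keybindings \
  "['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']"
KEY=org.gnome.settings-daemon.plugins.media-keys.custom-keybinding
SLOT=/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/
gsettings set $KEY:$SLOT name 'Open file manager'
gsettings set $KEY:$SLOT command 'nautilus'
gsettings set $KEY:$SLOT binding '<Super>e'
```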
GNOME doesn't come with tons of shell shortcuts by default, and the ones listed above are some of the more useful ones. If these shortcuts are not enough for you, you can always create your own. Let us know if this is helpful to you.
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/gnome-shell-keyboard-shortcuts/
作者:[Ada Ivanova][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/adaivanoff/
[1]:https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-01-settings.jpg (gnome-shortcuts-01-settings)
[2]:https://wiki.gnome.org/Projects/GnomeShell/CheatSheet
[3]:https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-02-change-shortcut.png (gnome-shortcuts-02-change-shortcut)
[4]:https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-03-set-shortcut.png (gnome-shortcuts-03-set-shortcut)
[5]:https://www.maketecheasier.com/assets/uploads/2017/10/gnome-shortcuts-04-add-custom-shortcut.png (gnome-shortcuts-04-add-custom-shortcut)


@@ -1,43 +0,0 @@
translating---geekpi
Someone Tries to Bring Back Ubuntu's Unity from the Dead as an Official Spin
============================================================
> The Ubuntu Unity remix would be supported for nine months
Canonical's sudden decision to kill its Unity user interface after seven years affected many Ubuntu users, and it looks like someone is now trying to bring it back from the dead as an unofficial spin.
Long-time [Ubuntu][1] member Dale Beaudoin [ran a poll][2] last week on the official Ubuntu forums to take the pulse of the community and see if they are interested in an Ubuntu Unity Remix that would be released alongside Ubuntu 18.04 LTS (Bionic Beaver) next year and be supported for nine months or five years.
Thirty people voted in the poll, with 67 percent of them opting for an LTS (Long Term Support) release of the so-called Ubuntu Unity Remix, while 33 percent voted for the 9-month supported release. It also looks like this upcoming Ubuntu Unity Spin [aims to become an official flavor][3], yet this means commitment from those developing it.
"A recent poll voted 2/3rds in favor of Ubuntu Unity to become an LTS distribution. We should try to work this cycle assuming that it will be LTS and an official flavor," said Dale Beaudoin. "We will try and release an updated ISO once every week or 10 days using the current 18.04 daily builds of default Ubuntu Bionic Beaver as a platform."
### Is Ubuntu Unity making a comeback?
The last Ubuntu version to ship with Unity by default was Ubuntu 17.04 (Zesty Zapus), which will reach end of life in January 2018. Ubuntu 17.10 (Artful Aardvark), the current stable release of the popular operating system, is the first to use the GNOME desktop environment by default for the main Desktop edition, as Canonical's CEO [announced][4] earlier this year that Unity would no longer be developed.
However, Canonical is still offering the Unity desktop environment from the official software repositories, so if someone wants to install it, it's one click away. But the bad news is that it will be supported only until the release of Ubuntu 18.04 LTS (Bionic Beaver) in April 2018, so the developers of the Ubuntu Unity Remix would have to keep it on life support in a separate repository.
On the other hand, we don't believe Canonical will change their mind and accept this Ubuntu Unity Spin as an official flavor, as that would mean admitting they failed to continue development of Unity while a handful of people can do it. Most probably, if interest in this Ubuntu Unity Remix doesn't fade away soon, it will be an unofficial spin supported by the nostalgic community.
Question is, would you be interested in an Ubuntu Unity spin, official or not?
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/someone-tries-to-bring-back-ubuntu-s-unity-from-the-dead-as-an-unofficial-spin-518778.shtml
作者:[Marius Nestor ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]:http://linux.softpedia.com/downloadTag/Ubuntu
[2]:https://community.ubuntu.com/t/poll-unity-7-distro-9-month-spin-or-lts-for-18-04/2066
[3]:https://community.ubuntu.com/t/unity-maintenance-roadmap/2223
[4]:http://news.softpedia.com/news/canonical-to-stop-developing-unity-8-ubuntu-18-04-lts-ships-with-gnome-desktop-514604.shtml
[5]:http://news.softpedia.com/editors/browse/marius-nestor


@@ -1,153 +0,0 @@
translating---geekpi
Suplemon - Modern CLI Text Editor with Multi Cursor Support
======
Suplemon is a modern text editor for CLI that emulates the multi cursor behavior and other features of [Sublime Text][1]. It's lightweight and really easy to use, just as Nano is.
One of the benefits of using a CLI editor is that you can use it whether the Linux distribution you're using has a GUI or not. This type of text editor also stands out as being simple, fast, and powerful.
You can find useful information and the source code in the [official repository][2].
### Features
These are some of its interesting features:
* Multi cursor support
* Undo / Redo
* Copy and Paste, with multi line support
* Mouse support
* Extensions
* Find, find all, find next
* Syntax highlighting
* Autocomplete
* Custom keyboard shortcuts
### Installation
First, make sure you have the latest version of python3 and pip3 installed.
Then type in a terminal:
```
$ sudo pip3 install suplemon
```
Create a new file in the current directory
Open a terminal and type:
```
$ suplemon
```
![suplemon new file](https://linoxide.com/wp-content/uploads/2017/11/suplemon-new-file.png)
Open one or multiple files
Open a terminal and type:
```
$ suplemon ...
$ suplemon example1.c example2.c
```
Main configuration
You can find the configuration file at ~/.config/suplemon/suplemon-config.json.
Editing this file is easy: you just have to enter command mode (once you are inside suplemon) and run the `config` command. You can view the default configuration by running `config defaults`.
Keymap configuration
I'll show you the default key mappings for suplemon. If you want to edit them, just run the `keymap` command. Run `keymap default` to view the default keymap file.
* Exit: Ctrl + Q
* Copy line(s) to buffer: Ctrl + C
* Cut line(s) to buffer: Ctrl + X
* Insert buffer: Ctrl + V
* Duplicate line: Ctrl + K
* Goto: Ctrl + G. You can go to a line or to a file (just type the beginning of a file name). Also, it is possible to type something like 'exam:50' to go to line 50 of the file example.c.
* Search for string or regular expression: Ctrl + F
* Search next: Ctrl + D
* Trim whitespace: Ctrl + T
* Add new cursor in arrow direction: Alt + Arrow key
* Jump to previous or next word or line: Ctrl + Left / Right
* Revert to single cursor / Cancel input prompt: Esc
* Move line(s) up / down: Page Up / Page Down
* Save file: Ctrl + S
* Save file with new name: F1
* Reload current file: F2
* Open file: Ctrl + O
* Close file: Ctrl + W
* Switch to next/previous file: Ctrl + Page Up / Ctrl + Page Down
* Run a command: Ctrl + E
* Undo: Ctrl + Z
* Redo: Ctrl + Y
* Toggle visible whitespace: F7
* Toggle mouse mode: F8
* Toggle line numbers: F9
* Toggle Full screen: F11
Mouse shortcuts
* Set cursor at pointer position: Left Click
* Add a cursor at pointer position: Right Click
* Scroll vertically: Scroll Wheel Up / Down
### Wrapping up
After trying Suplemon for some time, I have changed my opinion about CLI text editors. I had tried Nano before, and yes, I liked its simplicity, but its lack of modern features made it impractical for my everyday use.
This tool has the best of both CLI and GUI worlds... Simplicity and feature-richness! So I suggest you give it a try, and write your thoughts in the comments :-)
--------------------------------------------------------------------------------
via: https://linoxide.com/tools/suplemon-cli-text-editor-multi-cursor/
作者:[Ivo Ursino][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://linoxide.com/author/ursinov/
[1]:https://linoxide.com/tools/install-sublime-text-editor-linux/
[2]:https://github.com/richrd/suplemon/


@@ -1,3 +1,5 @@
translating---geekpi
FreeCAD A 3D Modeling and Design Software for Linux
============================================================
![FreeCAD 3D Modeling Software](https://www.fossmint.com/wp-content/uploads/2017/12/FreeCAD-3D-Modeling-Software.png)

View File

@ -1,3 +1,5 @@
translating---geekpi
# GNOME Boxes Makes It Easier to Test Drive Linux Distros
![GNOME Boxes Distribution Selection](http://www.omgubuntu.co.uk/wp-content/uploads/2017/12/GNOME-Boxes-INstall-Distros-750x475.jpg)

View File

@ -0,0 +1,133 @@
translating by lujun9972
7 rules for avoiding documentation pitfalls
======
English serves as _lingua franca_ in the open source community. To reduce
translation costs, many teams have switched to English as the source language
for their documentation. But surprisingly, writing in English for an
international audience does not necessarily put native English speakers in a
better position. On the contrary, they tend to forget that the document's
language might not be the audience's first language.
Let's have a look at the following simple sentence as an example: "Encrypt the
password using the `foo bar` command." Grammatically, the sentence is correct.
Given that '-ing' forms (gerunds) are frequently used in the English language
and most native speakers consider them an elegant way of expressing things,
native speakers usually do not hesitate to phrase a sentence like this. On
closer inspection, the sentence is ambiguous because "using" may refer either
to the object ("the password") or to the verb ("encrypt"). Thus, the sentence
can be interpreted in two different ways:
* "Encrypt the password that uses the `foo bar` command."
* "Encrypt the password by using the `foo bar` command."
As long as you have previous knowledge about the topic (password encryption or
the `foo bar` command), you can resolve this ambiguity and correctly decide
that the second version is the intended meaning of this sentence. But what if
you lack in-depth knowledge of the topic? What if you are not an expert, but a
translator with only general knowledge of the subject? Or if you are a non-
native speaker of English, unfamiliar with advanced grammatical forms like
gerunds?
Even native English speakers need training to write clear and straightforward
technical documentation. The first step is to raise awareness about the
usability of texts and potential problems, so let's look at seven rules that
can help avoid common pitfalls.
### 1. Know your target audience and step into their shoes.
If you are a developer writing for end users, view the product from their
perspective. Does the structure reflect the users' goals? The [persona
technique][1] can help you to focus on the target audience and provide the
right level of detail for your readers.
### 2. Follow the KISS principle--keep it short and simple.
The principle can be applied on several levels, such as grammar, sentences, or
words. Here are examples:
* Use the simplest tense that is appropriate. For example, use present tense when mentioning the result of an action:
* " ~~Click 'OK.' The 'Printer Options' dialog will appear.~~" -> "Click 'OK.' The 'Printer Options' dialog appears."
* As a rule of thumb, present one idea in one sentence; however, short sentences are not automatically easy to understand (especially if they are an accumulation of nouns). Sometimes, trimming down sentences to a certain word count can introduce ambiguities. In turn, this makes the sentences more difficult to understand.
* Uncommon and long words slow reading and might be obstacles for non-native speakers. Use simpler alternatives:
* " ~~utilize~~ " -> "use"
* " ~~indicate~~ " -> "show," "tell," or "say"
* " ~~prerequisite~~ " -> "requirement"
### 3. Avoid disturbing the reading flow.
Move particles or longer parentheses to the beginning or end of a sentence:
* " ~~They are not, however, marked as installed.~~ " -> "However, they are not marked as installed."
Place long commands at the end of a sentence. This also results in better
segmentation of sentences for automatic or semi-automatic translations.
### 4. Discriminate between two basic information types.
Discriminating between _descriptive information_ and _task-based information_
is useful. Typical examples for descriptive information are command-line
references, whereas how-tos are task-based information; however, both
information types are needed in technical writing. On closer inspection, many
texts contain a mixture of both information types. Clearly separating the
information types is helpful. For better orientation, label them accordingly.
Titles should reflect a section's content and information type. Use noun-based
titles for descriptive sections ("Types of Frobnicators") and verbally phrased
titles for task-based sections ("Installing Frobnicators"). This helps readers
quickly identify the sections they are interested in and allows them to skip
the ones they don't need at the moment.
### 5. Consider different reading situations and modes of text consumption.
Some of your readers are already frustrated when they turn to the product
documentation because they could not achieve a certain goal on their own. They
might also work in a noisy environment that makes it hard to focus on reading.
Also, do not expect your audience to read cover to cover, as many people skim
or browse texts for keywords or look up topics by using tables, indexes, or
full-text search. With that in mind, look at your text from different
perspectives. Often, compromises are needed to find a text structure that
works well for multiple situations.
### 6. Break down complex information into smaller chunks.
This makes it easier for the audience to remember and process the information.
For example, procedures should not exceed seven to 10 steps (according to
[Miller's Law][2] in cognitive psychology). If more steps are required, split
the task into separate procedures.
### 7. Form follows function.
Examine your text according to the question: What is the _purpose_ (function)
of a certain sentence, a paragraph, or a section? For example, is it an
instruction? A result? A warning? For instructions, use active voice:
"Configure the system." Passive voice may be appropriate for descriptions:
"The system is configured automatically." Add warnings _before_ the step or
action where danger arises. Focusing on the purpose also helps detect
redundant content to help eliminate fillers like "basically" or "easily,"
unnecessary modifications like "~~already~~ existing" or "~~completely~~
new," or any content that is not relevant for your target audience.
As you might have guessed by now, writing is re-writing. Good writing requires
effort and practice. Even if you write only occasionally, you can
significantly improve your texts by focusing on the target audience and
following the rules above. The better the readability of a text, the easier it
is to process, even for audiences with varying language skills. Especially
when it comes to localization, good quality of the source text is important:
"Garbage in, garbage out." If the original text has deficiencies, translating
the text takes longer, resulting in higher costs. In the worst case, flaws are
multiplied during translation and must be corrected in various languages.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/12/7-rules
作者:[Tanja Roth][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com
[1]:https://en.wikipedia.org/wiki/Persona_(user_experience)
[2]:https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two

View File

@ -1,402 +0,0 @@
translating by yongshouzhang
7 tools for analyzing performance in Linux with bcc/BPF
============================================================
### Look deeply into your Linux code with these Berkeley Packet Filter (BPF) Compiler Collection (bcc) tools.
![7 superpowers for Fedora bcc/BPF performance analysis](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/penguins%20in%20space_0.jpg?itok=umpCTAul)
Image by: opensource.com
A new technology has arrived in Linux that can provide sysadmins and developers with a large number of new tools and dashboards for performance analysis and troubleshooting. It's called the enhanced Berkeley Packet Filter (eBPF, or just BPF), although these enhancements weren't developed in Berkeley, operate on much more than just packets, and do much more than just filtering. I'll discuss one way to use BPF on the Fedora and Red Hat family of Linux distributions, demonstrating on Fedora 26.
BPF can run user-defined sandboxed programs in the kernel to add new custom capabilities instantly. It's like adding superpowers to Linux, on demand. Examples of what you can use it for include:
* Advanced performance tracing tools: programmatic low-overhead instrumentation of filesystem operations, TCP events, user-level events, etc.
* Network performance: dropping packets early on to improve DDOS resilience, or redirecting packets in-kernel to improve performance
* Security monitoring: 24x7 custom monitoring and logging of suspicious kernel and userspace events
BPF programs must pass an in-kernel verifier to ensure they are safe to run, making it a safer option, where possible, than writing custom kernel modules. I suspect most people won't write BPF programs themselves, but will use other people's. I've published many on GitHub as open source in the [BPF Compiler Collection (bcc)][12] project. bcc provides different frontends for BPF development, including Python and Lua, and is currently the most active project for BPF tooling.
### 7 useful new bcc/BPF tools
To understand the bcc/BPF tools and what they instrument, I created the following diagram and added it to the bcc project:
![Linux bcc/BPF tracing tools diagram](https://opensource.com/sites/default/files/u128651/bcc_tracing_tools.png)
Brendan Gregg, [CC BY-SA 4.0][14]
These are command-line interface (CLI) tools you can use over SSH (secure shell). Much analysis nowadays, including at my employer, is conducted using GUIs and dashboards. SSH is a last resort. But these CLI tools are still a good way to preview BPF capabilities, even if you ultimately intend to use them only through a GUI when available. I've begun adding BPF capabilities to an open source GUI, but that's a topic for another article. Right now I'd like to share the CLI tools, which you can use today.
### 1\. execsnoop
Where to start? How about watching new processes. These can consume system resources, but be so short-lived they don't show up in top(1) or other tools. They can be instrumented (or, using the industry jargon for this, they can be traced) using [execsnoop][15]. While tracing, I'll log in over SSH in another window:
```
# /usr/share/bcc/tools/execsnoop
PCOMM PID PPID RET ARGS
sshd 12234 727 0 /usr/sbin/sshd -D -R
unix_chkpwd 12236 12234 0 /usr/sbin/unix_chkpwd root nonull
unix_chkpwd 12237 12234 0 /usr/sbin/unix_chkpwd root chkexpiry
bash 12239 12238 0 /bin/bash
id 12241 12240 0 /usr/bin/id -un
hostname 12243 12242 0 /usr/bin/hostname
pkg-config 12245 12244 0 /usr/bin/pkg-config --variable=completionsdir bash-completion
grepconf.sh 12246 12239 0 /usr/libexec/grepconf.sh -c
grep 12247 12246 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
tty 12249 12248 0 /usr/bin/tty -s
tput 12250 12248 0 /usr/bin/tput colors
dircolors 12252 12251 0 /usr/bin/dircolors --sh /etc/DIR_COLORS
grep 12253 12239 0 /usr/bin/grep -qi ^COLOR.*none /etc/DIR_COLORS
grepconf.sh 12254 12239 0 /usr/libexec/grepconf.sh -c
grep 12255 12254 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
grepconf.sh 12256 12239 0 /usr/libexec/grepconf.sh -c
grep 12257 12256 0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
```
Welcome to the fun of system tracing. You can learn a lot about how the system is really working (or not working, as the case may be) and discover some easy optimizations along the way. execsnoop works by tracing the exec() system call, which is usually used to load different program code in new processes.
### 2\. opensnoop
Continuing from above: so, grepconf.sh is likely a shell script, right? I'll run file(1) to check, and also use the [opensnoop][16] bcc tool to see which files the file command itself opens:
```
# /usr/share/bcc/tools/opensnoop
PID COMM FD ERR PATH
12420 file 3 0 /etc/ld.so.cache
12420 file 3 0 /lib64/libmagic.so.1
12420 file 3 0 /lib64/libz.so.1
12420 file 3 0 /lib64/libc.so.6
12420 file 3 0 /usr/lib/locale/locale-archive
12420 file -1 2 /etc/magic.mgc
12420 file 3 0 /etc/magic
12420 file 3 0 /usr/share/misc/magic.mgc
12420 file 3 0 /usr/lib64/gconv/gconv-modules.cache
12420 file 3 0 /usr/libexec/grepconf.sh
1 systemd 16 0 /proc/565/cgroup
1 systemd 16 0 /proc/536/cgroup
```
And the magic files it opened can themselves be examined with file(1):
```
# file /usr/share/misc/magic.mgc /etc/magic
/usr/share/misc/magic.mgc: magic binary file for file(1) cmd (version 14) (little endian)
/etc/magic: magic text file for file(1) cmd, ASCII text
```
### 3\. xfsslower
bcc/BPF can analyze much more than just syscalls. The [xfsslower][17] tool traces common XFS filesystem operations that have a latency of greater than 1 millisecond (the argument):
```
# /usr/share/bcc/tools/xfsslower 1
Tracing XFS operations slower than 1 ms
TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME
14:17:34 systemd-journa 530 S 0 0 1.69 system.journal
14:17:35 auditd 651 S 0 0 2.43 audit.log
14:17:42 cksum 4167 R 52976 0 1.04 at
14:17:45 cksum 4168 R 53264 0 1.62 [
14:17:45 cksum 4168 R 65536 0 1.01 certutil
14:17:45 cksum 4168 R 65536 0 1.01 dir
14:17:45 cksum 4168 R 65536 0 1.17 dirmngr-client
14:17:46 cksum 4168 R 65536 0 1.06 grub2-file
14:17:46 cksum 4168 R 65536 128 1.01 grub2-fstest
[...]
```
This is a useful tool and an important example of BPF tracing. Traditional analysis of filesystem performance focuses on block I/O statistics—what you commonly see printed by the iostat(1) tool and plotted by many performance-monitoring GUIs. Those statistics show how the disks are performing, but not really the filesystem. Often you care more about the filesystem's performance than the disks, since it's the filesystem that applications make requests to and wait for. And the performance of filesystems can be quite different from that of disks! Filesystems may serve reads entirely from memory cache and also populate that cache via a read-ahead algorithm and for write-back caching. xfsslower shows filesystem performance—what the applications directly experience. This is often useful for exonerating the entire storage subsystem; if there is really no filesystem latency, then performance issues are likely to be elsewhere.
### 4\. biolatency
Although filesystem performance is important to study for understanding application performance, studying disk performance has merit as well. Poor disk performance will affect the application eventually, when various caching tricks can no longer hide its latency. Disk performance is also a target of study for capacity planning.
The iostat(1) tool shows the average disk I/O latency, but averages can be misleading. It can be useful to study the distribution of I/O latency as a histogram, which can be done using [biolatency][18]:
```
# /usr/share/bcc/tools/biolatency
Tracing block device I/O... Hit Ctrl-C to end.
^C
usecs : count distribution
0 -> 1 : 0 | |
2 -> 3 : 0 | |
4 -> 7 : 0 | |
8 -> 15 : 0 | |
16 -> 31 : 0 | |
32 -> 63 : 1 | |
64 -> 127 : 63 |**** |
128 -> 255 : 121 |********* |
256 -> 511 : 483 |************************************ |
512 -> 1023 : 532 |****************************************|
1024 -> 2047 : 117 |******** |
2048 -> 4095 : 8 | |
```
It's worth noting that many of these tools support CLI options and arguments as shown by their USAGE message:
```
# /usr/share/bcc/tools/biolatency -h
usage: biolatency [-h] [-T] [-Q] [-m] [-D] [interval] [count]
Summarize block device I/O latency as a histogram
positional arguments:
interval output interval, in seconds
count number of outputs
optional arguments:
-h, --help show this help message and exit
-T, --timestamp include timestamp on output
-Q, --queued include OS queued time in I/O time
-m, --milliseconds millisecond histogram
-D, --disks print a histogram per disk device
examples:
./biolatency # summarize block I/O latency as a histogram
./biolatency 1 10 # print 1 second summaries, 10 times
./biolatency -mT 1 # 1s summaries, milliseconds, and timestamps
./biolatency -Q # include OS queued time in I/O time
./biolatency -D # show each disk device separately
```
### 5\. tcplife
Another useful tool and example, this time showing lifespan and throughput statistics of TCP sessions, is [tcplife][19]:
```
# /usr/share/bcc/tools/tcplife
PID COMM LADDR LPORT RADDR RPORT TX_KB RX_KB MS
12759 sshd 192.168.56.101 22 192.168.56.1 60639 2 3 1863.82
12783 sshd 192.168.56.101 22 192.168.56.1 60640 3 3 9174.53
12844 wget 10.0.2.15 34250 54.204.39.132 443 11 1870 5712.26
12851 curl 10.0.2.15 34252 54.204.39.132 443 0 74 505.90
```
### 6\. gethostlatency
Every previous example involves kernel tracing, so I need at least one user-level tracing example. Here is [gethostlatency][20], which instruments gethostbyname(3) and related library calls for name resolution:
```
# /usr/share/bcc/tools/gethostlatency
TIME PID COMM LATms HOST
06:43:33 12903 curl 188.98 opensource.com
06:43:36 12905 curl 8.45 opensource.com
06:43:40 12907 curl 6.55 opensource.com
06:43:44 12911 curl 9.67 opensource.com
06:45:02 12948 curl 19.66 opensource.cats
06:45:06 12950 curl 18.37 opensource.cats
06:45:07 12952 curl 13.64 opensource.cats
06:45:19 13139 curl 13.10 opensource.cats
```
### 7\. trace
Okay, one more example. The [trace][21] tool was contributed by Sasha Goldshtein and provides some basic printf(1) functionality with custom probes. For example:
```
# /usr/share/bcc/tools/trace 'pam:pam_start "%s: %s", arg1, arg2'
PID TID COMM FUNC -
13266 13266 sshd pam_start sshd: root
```
### Install bcc via packages
The best way to install bcc is from an iovisor repository, following the instructions from the bcc [INSTALL.md][22]. [IO Visor][23] is the Linux Foundation project that includes bcc. The BPF enhancements these tools use were added in the 4.x series Linux kernels, up to 4.9. This means that Fedora 25, with its 4.8 kernel, can run most of these tools; and Fedora 26, with its 4.11 kernel, can run them all (at least currently).
If you are on Fedora 25 (or Fedora 26, and this post was published many months ago—hello from the distant past!), then this package approach should just work. If you are on Fedora 26, then skip to the [Install via Source][24] section, which avoids a [known][25] and [fixed][26] bug. That bug fix hasn't made its way into the Fedora 26 package dependencies at the moment. The system I'm using is:
```
# uname -a
Linux localhost.localdomain 4.11.8-300.fc26.x86_64 #1 SMP Thu Jun 29 20:09:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/fedora-release
Fedora release 26 (Twenty Six)
```
First, add the iovisor nightly repository and install the bcc-tools package:
```
# echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f25/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo
# dnf install bcc-tools
[...]
Total download size: 37 M
Installed size: 143 M
Is this ok [y/N]: y
```
The tools are installed under /usr/share/bcc/tools:
```
# ls /usr/share/bcc/tools/
argdist dcsnoop killsnoop softirqs trace
bashreadline dcstat llcstat solisten ttysnoop
[...]
```
Running one at this point failed for me, because the kernel headers were missing:
```
# /usr/share/bcc/tools/opensnoop
chdir(/lib/modules/4.11.8-300.fc26.x86_64/build): No such file or directory
Traceback (most recent call last):
File "/usr/share/bcc/tools/opensnoop", line 126, in
b = BPF(text=bpf_text)
File "/usr/lib/python3.6/site-packages/bcc/__init__.py", line 284, in __init__
raise Exception("Failed to compile BPF module %s" % src_file)
Exception: Failed to compile BPF module
```
Installing the matching kernel-devel package fixes that:
```
# dnf install kernel-devel-4.11.8-300.fc26.x86_64
[...]
Total download size: 20 M
Installed size: 63 M
Is this ok [y/N]: y
[...]
```
Now opensnoop works:
```
# /usr/share/bcc/tools/opensnoop
PID COMM FD ERR PATH
11792 ls 3 0 /etc/ld.so.cache
11792 ls 3 0 /lib64/libselinux.so.1
11792 ls 3 0 /lib64/libcap.so.2
11792 ls 3 0 /lib64/libc.so.6
[...]
```
### Install via source
If you need to install from source, you can also find documentation and updated instructions in [INSTALL.md][27]. I did the following on Fedora 26:
```
sudo dnf install -y bison cmake ethtool flex git iperf libstdc++-static \
python-netaddr python-pip gcc gcc-c++ make zlib-devel \
elfutils-libelf-devel
sudo dnf install -y luajit luajit-devel # for Lua support
sudo dnf install -y \
http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm
sudo pip install pyroute2
sudo dnf install -y clang clang-devel llvm llvm-devel llvm-static ncurses-devel
```
One of those downloads, netperf, timed out for me:
```
Curl error (28): Timeout was reached for http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm [Connection timed out after 120002 milliseconds]
```
Here are the remaining bcc compilation and install steps:
```
git clone https://github.com/iovisor/bcc.git
mkdir bcc/build; cd bcc/build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr
make
sudo make install
```
Once installed, the tools should work:
```
# /usr/share/bcc/tools/opensnoop
PID COMM FD ERR PATH
4131 date 3 0 /etc/ld.so.cache
4131 date 3 0 /lib64/libc.so.6
4131 date 3 0 /usr/lib/locale/locale-archive
4131 date 3 0 /etc/localtime
[...]
```
This was a quick tour of the new BPF performance analysis superpowers that you can use on the Fedora and Red Hat family of operating systems. I demonstrated the popular [bcc][28] frontend to BPF and included install instructions for Fedora. bcc comes with more than 60 new tools for performance analysis, which will help you get the most out of your Linux systems. Perhaps you will use these tools directly over SSH, or perhaps you will use the same functionality via monitoring GUIs once they support BPF.
Also, bcc is not the only frontend in development. There are [ply][29] and [bpftrace][30], which aim to provide a higher-level language for quickly writing custom tools. In addition, [SystemTap][31] just released [version 3.2][32], including an early, experimental eBPF backend. Should this continue to be developed, it will provide a production-safe and efficient engine for running the many SystemTap scripts and tapsets (libraries) that have been developed over the years. (Using SystemTap with eBPF would be a good topic for another post.)
If you need to develop custom tools, you can do that with bcc as well, although the language is currently much more verbose than SystemTap, ply, or bpftrace. My bcc tools can serve as code examples, plus I contributed a [tutorial][33] for developing bcc tools in Python. I'd recommend learning the bcc multi-tools first, as you may get a lot of mileage from them before needing to write new tools. You can study the multi-tools from their example files in the bcc repository: [funccount][34], [funclatency][35], [funcslower][36], [stackcount][37], [trace][38], and [argdist][39].
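As a small, hedged sketch of that mileage (the `vfs_*` pattern here is just an example, and the `-d` duration option may vary between bcc versions, so check the tool's `-h` output first), funccount can count calls to any kernel functions matching a wildcard:
```
# count kernel function calls matching "vfs_*" for 5 seconds, then print totals
# /usr/share/bcc/tools/funccount -d 5 'vfs_*'
```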
Thanks to [Opensource.com][40] for edits.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/bccbpf-performance
作者:[Brendan Gregg][a]
译者:[yongshouzhang](https://github.com/yongshouzhang)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/brendang
[1]:https://opensource.com/resources/what-is-linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[2]:https://opensource.com/resources/what-are-linux-containers?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[3]:https://developers.redhat.com/promotions/linux-cheatsheet/?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[4]:https://developers.redhat.com/cheat-sheet/advanced-linux-commands-cheatsheet?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[5]:https://opensource.com/tags/linux?intcmp=70160000000h1jYAAQ&utm_source=intcallout&utm_campaign=linuxcontent
[6]:https://opensource.com/participate
[7]:https://opensource.com/users/brendang
[8]:https://opensource.com/users/brendang
[9]:https://opensource.com/user/77626/feed
[10]:https://opensource.com/article/17/11/bccbpf-performance?rate=r9hnbg3mvjFUC9FiBk9eL_ZLkioSC21SvICoaoJjaSM
[11]:https://opensource.com/article/17/11/bccbpf-performance#comments
[12]:https://github.com/iovisor/bcc
[13]:https://opensource.com/file/376856
[14]:https://creativecommons.org/licenses/by-sa/4.0/
[15]:https://github.com/brendangregg/perf-tools/blob/master/execsnoop
[16]:https://github.com/brendangregg/perf-tools/blob/master/opensnoop
[17]:https://github.com/iovisor/bcc/blob/master/tools/xfsslower.py
[18]:https://github.com/iovisor/bcc/blob/master/tools/biolatency.py
[19]:https://github.com/iovisor/bcc/blob/master/tools/tcplife.py
[20]:https://github.com/iovisor/bcc/blob/master/tools/gethostlatency.py
[21]:https://github.com/iovisor/bcc/blob/master/tools/trace.py
[22]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---binary
[23]:https://www.iovisor.org/
[24]:https://opensource.com/article/17/11/bccbpf-performance#InstallViaSource
[25]:https://github.com/iovisor/bcc/issues/1221
[26]:https://reviews.llvm.org/rL302055
[27]:https://github.com/iovisor/bcc/blob/master/INSTALL.md#fedora---source
[28]:https://github.com/iovisor/bcc
[29]:https://github.com/iovisor/ply
[30]:https://github.com/ajor/bpftrace
[31]:https://sourceware.org/systemtap/
[32]:https://sourceware.org/ml/systemtap/2017-q4/msg00096.html
[33]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md
[34]:https://github.com/iovisor/bcc/blob/master/tools/funccount_example.txt
[35]:https://github.com/iovisor/bcc/blob/master/tools/funclatency_example.txt
[36]:https://github.com/iovisor/bcc/blob/master/tools/funcslower_example.txt
[37]:https://github.com/iovisor/bcc/blob/master/tools/stackcount_example.txt
[38]:https://github.com/iovisor/bcc/blob/master/tools/trace_example.txt
[39]:https://github.com/iovisor/bcc/blob/master/tools/argdist_example.txt
[40]:http://opensource.com/
[41]:https://opensource.com/tags/linux
[42]:https://opensource.com/tags/sysadmin
[43]:https://opensource.com/users/brendang
[44]:https://opensource.com/users/brendang

View File

@ -0,0 +1,195 @@
Cheat – A Collection Of Practical Linux Command Examples
======
Many of us check **[Man Pages][1]** very often to learn about command switches
(options); they show you the details about command syntax, description,
and available switches, but they don't contain any practical examples.
Hence, we often face some trouble forming the exact command format that we need.
Are you really facing this trouble and want a better solution? I would
advise you to check out the cheat utility.
#### What Is Cheat
[Cheat][2] allows you to create and view interactive cheatsheets on the
command-line. It was designed to help remind *nix system administrators of
options for commands that they use frequently, but not frequently enough to
remember.
#### How to Install Cheat
The cheat package was developed using Python, so install the pip package to install
cheat on your system.
For **`Debian/Ubuntu`**, use [apt-get command][3] or [apt command][4] to
install pip.
```
[For Python2]
$ sudo apt install python-pip python-setuptools
[For Python3]
$ sudo apt install python3-pip
```
pip isn't shipped in the official **`RHEL/CentOS`** system repositories, so
enable the [EPEL Repository][5] and use [YUM command][6] to install pip.
```
$ sudo yum install python-pip python-devel python-setuptools
```
For **`Fedora`** system, use [dnf Command][7] to install pip.
```
[For Python2]
$ sudo dnf install python-pip
[For Python3]
$ sudo dnf install python3
```
For **`Arch Linux`** based systems, use [Pacman Command][8] to install pip.
```
[For Python2]
$ sudo pacman -S python2-pip python-setuptools
[For Python3]
$ sudo pacman -S python-pip python3-setuptools
```
For **`openSUSE`** system, use [Zypper Command][9] to install pip.
```
[For Python2]
$ sudo zypper install python-pip
[For Python3]
$ sudo zypper install python3-pip
```
pip is a Python module bundled with setuptools, and it's one of the recommended
tools for installing Python packages on Linux. Use it to install cheat:
```
$ sudo pip install cheat
```
#### How to Use Cheat
Run `cheat` followed by the corresponding command name to view its cheatsheet. For
demonstration purposes, we are going to check the `tar` command examples.
```
$ cheat tar
# To extract an uncompressed archive:
tar -xvf /path/to/foo.tar
# To create an uncompressed archive:
tar -cvf /path/to/foo.tar /path/to/foo/
# To extract a .gz archive:
tar -xzvf /path/to/foo.tgz
# To create a .gz archive:
tar -czvf /path/to/foo.tgz /path/to/foo/
# To list the content of an .gz archive:
tar -ztvf /path/to/foo.tgz
# To extract a .bz2 archive:
tar -xjvf /path/to/foo.tgz
# To create a .bz2 archive:
tar -cjvf /path/to/foo.tgz /path/to/foo/
# To extract a .tar in specified Directory:
tar -xvf /path/to/foo.tar -C /path/to/destination/
# To list the content of an .bz2 archive:
tar -jtvf /path/to/foo.tgz
# To create a .gz archive and exclude all jpg,gif,... from the tgz
tar czvf /path/to/foo.tgz --exclude=\*.{jpg,gif,png,wmv,flv,tar.gz,zip} /path/to/foo/
# To use parallel (multi-threaded) implementation of compression algorithms:
tar -z ... -> tar -Ipigz ...
tar -j ... -> tar -Ipbzip2 ...
tar -J ... -> tar -Ipixz ...
```
Run the following command to see what cheatsheets are available.
```
$ cheat -l
```
Check the help page for more details.
```
$ cheat -h
```
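Cheat is also meant for creating your own cheatsheets. As a hedged sketch (the `-e` option comes from the project documentation; confirm with `cheat -h` if your version differs), the following opens a sheet named `rsync` in your default editor, creating it if it doesn't exist:
```
$ export EDITOR=vim   # cheat opens sheets in the editor set here
$ cheat -e rsync
```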
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/cheat-a-collection-of-practical-linux-command-examples/
作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com
[1]:https://www.2daygeek.com/linux-color-man-pages-configuration-less-most-command/
[2]:https://github.com/chrisallenlane/cheat
[3]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]:https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
[6]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[7]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[8]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[9]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/

View File

@ -0,0 +1,77 @@
OnionShare - Share Files Anonymously
======
In this digital world, we share our media, documents, and important files via the Internet using different cloud storage services like Dropbox, Mega, Google Drive, and many more. But every cloud storage service comes with two major problems: one is size and the other is security. After getting used to BitTorrent, size is not a problem anymore, but security is.
Even though you send your files through secure cloud services, they will be noted by the company; if the files are confidential, even the government can get them. To overcome these problems we use OnionShare, which, as its name suggests, uses the onion network, i.e. Tor, to share files anonymously with anyone.
### How to Use **OnionShare**?
* First, download [OnionShare][1] and [Tor Browser][2]. After downloading, install both of them.
[![install onionshare and tor browser][3]][3]
* Now open OnionShare from the start menu
[![onionshare share files anonymously][4]][4]
* Click on Add and add a File/Folder to share.
* Click start sharing. It produces a .onion URL, which you can share with your recipient.
[![share file with onionshare anonymously][5]][5]
* To download the file from the URL, copy the URL, open Tor Browser, and paste it in. Open the URL and download the files/folder.
[![receive file with onionshare anonymously][6]][6]
### How **OnionShare** Started
A few years back, Glenn Greenwald found that some of the NSA documents he had received from Edward Snowden had been corrupted. He still needed the documents, and decided to get the files by using a USB drive. That attempt was not successful.
After reading the book written by Greenwald, Micah Lee, a crypto expert at The Intercept, released OnionShare – simple, free software to share files anonymously and securely. He created the program to share big data dumps via a direct channel, encrypted and protected by the anonymity software Tor, making it hard for eavesdroppers to get the files.
### How Does **OnionShare** Work?
OnionShare starts a web server at 127.0.0.1, on a random port, for sharing the file. It chooses two random words from a wordlist of 6800 words to form a so-called slug. It makes the server available as a Tor onion service to send the file. The final URL looks like this:
`http://qx2d7lctsnqwfdxh.onion/subside-durable`
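The recipient normally just opens this URL in Tor Browser. As a rough sketch of what happens underneath (assuming a local Tor client is running with its default SOCKS proxy on port 9050; the onion address is the made-up example above), the same share could be fetched from the command line through Tor's proxy:
```
# --socks5-hostname makes curl resolve the .onion name via the Tor proxy
$ curl --socks5-hostname 127.0.0.1:9050 \
       -o shared-file \
       http://qx2d7lctsnqwfdxh.onion/subside-durable
```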
OnionShare shuts down automatically after the file has been downloaded, making the file unavailable on the Internet from then on. There is also an option to allow the files to be downloaded multiple times.
### Advantages of using **OnionShare**
No other website or application has access to your files: the file the sender shares using OnionShare is not stored on any server. It is hosted directly on the sender's system.
No one can spy on the shared files: the connection between the users is encrypted by the onion service and Tor Browser, which makes the connection secure and hard for eavesdroppers to intercept.
Both users are anonymous: OnionShare and Tor Browser make both the sender and the recipient anonymous.
### Conclusion
In this article, I have explained how to **share your documents and files anonymously**. I also explained how it works. Hope you have understood how OnionShare works, and if you still have a doubt regarding anything, just drop in a comment.
--------------------------------------------------------------------------------
via: https://www.theitstuff.com/onionshare-share-files-anonymously-2
作者:[Anirudh Rayapeddi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.theitstuff.com
[1]:https://onionshare.org/
[2]:https://www.torproject.org/projects/torbrowser.html.en
[3]http://www.theitstuff.com/wp-content/uploads/2017/12/Icons.png
[4]http://www.theitstuff.com/wp-content/uploads/2017/12/Onion-Share.png
[5]http://www.theitstuff.com/wp-content/uploads/2017/12/With-Link.png
[6]http://www.theitstuff.com/wp-content/uploads/2017/12/Tor.png

View File

@ -1,73 +0,0 @@
translating by lujun9972
Sessions And Cookies – How Does User-Login Work?
======
Facebook, Gmail, Twitter – we all use these websites every day. One common thing among them is that they all require you to log in to do stuff. You cannot tweet on Twitter, comment on Facebook or email on Gmail unless you are authenticated and logged in to the service.
[![gmail, facebook login page](http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg)][1]
So how does it work? How does the website authenticate us? How does it know which user is logged in and from where? Let us answer each of these questions below.
### How User-Login works?
Whenever you enter your username and password in the login page of a site, the information you enter is sent to the server. The server then validates your password against the one stored on the server. If it doesn't match, you get an error of incorrect password. But if it matches, you get logged in.
### What happens when I get logged in?
When you get logged in, the web server initiates a session and sets a cookie variable in your browser. The cookie variable then acts as a reference to the session created. Confused? Let us simplify this.
### How does Session work?
When the username and password are right, the server initiates a session. Sessions have a really complicated definition, so I like to call them the beginning of a relationship.
[![session beginning of a relationship or partnership](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png)][2]
When the credentials are right, the server begins a relationship with you. Since the server cannot see like us humans, it sets a cookie in our browsers to identify our unique relationship from all the other relationships that other people have with the server.
### What is a Cookie?
A cookie is a small amount of data that the websites can store in your browser. You must have seen them here.
[![theitstuff official facebook page cookies](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png)][3]
So when you log in and the server has created a relationship or session with you, it takes the session id, which is the unique identifier of that session, and stores it in your browser in the form of a cookie.
### What's the Point?
The reason all of this is needed is to verify that it's you, so that when you comment or tweet, the server knows who made that tweet or that comment.
As soon as youre logged in, a cookie is set which contains the session id. Now, this session id is granted to the person who enters the correct username and password combination.
[![facebook cookies in web browser](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png)][4]
So the session id is granted to the person who owns that account. Now whenever an activity is performed on that website, the server knows who it was by their session id.
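You can watch this whole mechanism with curl. This is only a sketch (the host, endpoints, and field names are made up), but the flow is exactly the one described above: the login response sets a session id in a cookie, and later requests present that cookie back to the server:
```
# Log in: the server's Set-Cookie header (the session id) is saved to cookies.txt
$ curl -i -c cookies.txt -d 'username=alice&password=secret' https://example.com/login

# Later requests send the cookie back, so the server knows which session this is
$ curl -b cookies.txt https://example.com/dashboard
```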
### Keep me logged in?
Sessions have a time limit. Unlike the real world, where relationships can last even without seeing the person for long periods of time, sessions do not. You have to keep telling the server that you are online by performing actions every so often. If that doesn't happen, the server will close the session and you will be logged out.
[![websites keep me logged in option](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png)][5]
But when we use the Keep me logged in feature on some websites, we allow them to store another unique variable in the form of a cookie in our browsers. This unique variable is used to automatically log us in by checking it against the one on the server. When someone steals this unique identifier, it is called cookie stealing. They then get access to your account.
### Conclusion
We discussed how Login Systems work and how we are authenticated on a website. We also learned about what sessions and cookies are and how they are implemented in login mechanism.
I hope you have grasped how user login works, and if you still have a doubt regarding anything, just drop in a comment and I'll be there for you.
--------------------------------------------------------------------------------
via: http://www.theitstuff.com/sessions-cookies-user-login-work
作者:[Rishabh Kandari][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.theitstuff.com/author/reevkandari
[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg
[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png
[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png
[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png
[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png

View File

@ -0,0 +1,93 @@
The Biggest Problems With UC Browser
======
Before we even begin talking about the cons, I want to establish the fact that
I have been a devoted UC Browser user for the past 3 years. I really love the
download speeds I get, the ultra-sleek user interface and the eye-catching icons
used for tools. I was a Chrome for Android user in the beginning, but I
migrated to UC on a friend's recommendation. But in the past year or so, I
have seen some changes that have made me rethink my choice, and now I feel
like migrating back to Chrome again.
### The Unwanted **Notifications**
I am sure I am not the only one who gets these unwanted notifications every
few hours. These clickbait articles are a real pain, and the worst part is
just how frequently you get them.
[![uc browser's annoying ads notifications][1]][1]
I tried closing them down from the notification settings, but they still kept
appearing, albeit less frequently.
### The **News Homepage**
Another unwanted section that is completely useless. We completely understand
that UC Browser is free to download and may require funding, but this is not
the way to do it. The homepage features news articles that are extremely
distracting and unwanted. Sometimes, when you are in a professional or family
environment, some of these clickbaits might even cause awkwardness.
[![uc browser's embarrassing news homepage][2]][2]
And they even have a setting for that: to turn the **UC News Display ON/OFF**.
And guess what, I tried that too. In the image below, you can see my efforts
on the left-hand side and the result on the right-hand side.[![uc
browser homepage settings][3]][3]
As if the clickbait news weren't enough, they have started adding some unnecessary
features as well. So let's go through them too.
### UC **Music**
UC Browser integrated a **music player** into the browser to play music. It's
just something that works, nothing too fancy. So why even have it? What's the
point? Who needs a music player in their browser?
[![uc browser adds uc music player][4]][4]
It's not even like it will play audio from the web directly via that player in
the background. Instead, it is a music player that plays offline music. So why
have it? I mean, it is not even good enough to be used as a primary music
player. And even if it were, it doesn't run independently of UC Browser. So why
would someone keep their browser running just to use its music player?
### The **Quick** Access Bar
I have seen 9 out of 10 average users with this bar hanging around in their
notification area, because it comes by default with the installation and they
don't know how to get rid of it. The settings on the right get the job done.
[![uc browser annoying quick access bar][5]][5]
But I still want to ask: "Why does it come by default?" It's a headache for
most users. If we want it, we will enable it. Why force it on the users?
### Conclusion
UC Browser is still one of the top players in the game. It provides one of the
best experiences; however, I am not sure what UC is trying to prove by packing
more and more unwanted features into their browser and forcing users to use
them.
I have loved UC for its speed and design, but recent experiences have led me
to have second thoughts about my primary browser.
--------------------------------------------------------------------------------
via: https://www.theitstuff.com/biggest-problems-uc-browser
作者:[Rishabh Kandari][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.theitstuff.com/author/reevkandari
[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-6.png
[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1-1.png
[3]:http://www.theitstuff.com/wp-content/uploads/2017/12/uceffort.png
[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-3-1.png
[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-4-1.png

View File

@ -0,0 +1,178 @@
Translating by qhwdw
Internet protocols are changing
============================================================
![](https://blog.apnic.net/wp-content/uploads/2017/12/evolution-555x202.png)
When the Internet started to become widely used in the 1990s, most traffic used just a few protocols: IPv4 routed packets, TCP turned those packets into connections, SSL (later TLS) encrypted those connections, DNS named hosts to connect to, and HTTP was often the application protocol using it all.
For many years, there were negligible changes to these core Internet protocols; HTTP added a few new headers and methods, TLS slowly went through minor revisions, TCP adapted congestion control, and DNS introduced features like DNSSEC. The protocols themselves looked about the same on the wire for a very long time (excepting IPv6, which already gets its fair amount of attention in the network operator community.)
As a result, network operators, vendors, and policymakers that want to understand (and sometimes, control) the Internet have adopted a number of practices based upon these protocols' wire footprint — whether intended to debug issues, improve quality of service, or impose policy.
Now, significant changes to the core Internet protocols are underway. While they are intended to be compatible with the Internet at large (since they won't get adoption otherwise), they might be disruptive to those who have taken liberties with undocumented aspects of protocols or made an assumption that things won't change.
#### Why we need to change the Internet
There are a number of factors driving these changes.
First, the limits of the core Internet protocols have become apparent, especially regarding performance. Because of structural problems in the application and transport protocols, the network was not being used as efficiently as it could be, hurting end-user perceived performance (in particular, latency).
This translates into a strong motivation to evolve or replace those protocols because there is a [large body of experience showing the impact of even small performance gains][14].
Second, the ability to evolve Internet protocols — at any layer — has become more difficult over time, largely thanks to the unintended uses by networks discussed above. For example, HTTP proxies that tried to compress responses made it more difficult to deploy new compression techniques; TCP optimization in middleboxes made it more difficult to deploy improvements to TCP.
Finally, [we are in the midst of a shift towards more use of encryption on the Internet][15], first spurred by Edward Snowden's revelations in 2013. That's really a separate discussion, but it is relevant here in that encryption is one of the best tools we have to ensure that protocols can evolve.
Let's have a look at what's happened, what's coming next, how it might impact networks, and how networks impact protocol design.
#### HTTP/2
[HTTP/2][16] (based on Googles SPDY) was the first notable change — standardized in 2015, it multiplexes multiple requests onto one TCP connection, thereby avoiding the need to queue requests on the client without blocking each other. It is now widely deployed, and supported by all major browsers and web servers.
From a network's viewpoint, HTTP/2 made a few notable changes. First, it's a binary protocol, so any device that assumes it's HTTP/1.1 is going to break.
That breakage was one of the primary reasons for another big change in HTTP/2; it effectively requires encryption. This gives it a better chance of avoiding interference from intermediaries that assume it's HTTP/1.1, or do more subtle things like strip headers or block new protocol extensions — both things that had been seen by some of the engineers working on the protocol, causing significant support problems for them.
[HTTP/2 also requires TLS/1.2 to be used when it is encrypted][17], and [blacklists][18] cipher suites that were judged to be insecure — with the effect of only allowing ephemeral keys. See the TLS 1.3 section for potential impacts here.
Finally, HTTP/2 allows more than one host's requests to be [coalesced onto a connection][19], to improve performance by reducing the number of connections (and thereby, congestion control contexts) used for a page load.
For example, you could have a connection for `www.example.com`, but also use it for requests for `images.example.com`. [Future protocol extensions might also allow additional hosts to be added to the connection][20], even if they weren't listed in the original TLS certificate used for it. As a result, assuming that the traffic on a connection is limited to the purpose it was initiated for isn't going to apply.
Despite these changes, it's worth noting that HTTP/2 doesn't appear to suffer from significant interoperability problems or interference from networks.
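If you want to check HTTP/2 support yourself, a quick sketch (assuming a curl build with HTTP/2 support, which most recent distributions ship) is to ask a major site for its status line:
```
# "HTTP/2 200" in the first line of output confirms that HTTP/2 was negotiated
$ curl --http2 -sI https://www.google.com/ | head -n 1
```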
#### TLS 1.3
[TLS 1.3][21] is just going through the final processes of standardization and is already supported by some implementations.
Don't be fooled by its incremental name; this is effectively a new version of TLS, with a much-revamped handshake that allows application data to flow from the start (often called 0RTT). The new design relies upon ephemeral key exchange, thereby ruling out static keys.
This has caused concern from some network operators and vendors — in particular those who need visibility into what's happening inside those connections.
For example, consider the datacentre for a bank that has regulatory requirements for visibility. By sniffing traffic in the network and decrypting it with the static keys of their servers, they can log legitimate traffic and identify harmful traffic, whether it be attackers from the outside or employees trying to leak data from the inside.
TLS 1.3 doesn't support that particular technique for intercepting traffic, since it's also [a form of attack that ephemeral keys protect against][22]. However, since they have regulatory requirements to both use modern encryption protocols and to monitor their networks, this puts those network operators in an awkward spot.
There's been much debate about whether regulations require static keys, whether alternative approaches could be just as effective, and whether weakening security for the entire Internet for the benefit of relatively few networks is the right solution. Indeed, it's still possible to decrypt traffic in TLS 1.3, but you need access to the ephemeral keys to do so, and by design, they aren't long-lived.
At this point it doesn't look like TLS 1.3 will change to accommodate these networks, but there are rumblings about creating another protocol that allows a third party to observe what's going on — and perhaps more — for these use cases. Whether that gets traction remains to be seen.
#### QUIC
During work on HTTP/2, it became evident that TCP has similar inefficiencies. Because TCP is an in-order delivery protocol, the loss of one packet can prevent those in the buffers behind it from being delivered to the application. For a multiplexed protocol, this can make a big difference in performance.
[QUIC][23] is an attempt to address that by effectively rebuilding TCP semantics (along with some of HTTP/2's stream model) on top of UDP. Like HTTP/2, it started as a Google effort and is now in the IETF, with an initial use case of HTTP-over-UDP and a goal of becoming a standard in late 2018. However, since Google has already deployed QUIC in the Chrome browser and on its sites, it already accounts for more than 7% of Internet traffic.
Read [Your questions answered about QUIC][24]
Besides the shift from TCP to UDP for such a sizable amount of traffic (and all of the adjustments in networks that might imply), both Google QUIC (gQUIC) and IETF QUIC (iQUIC) require encryption to operate at all; there is no unencrypted QUIC.
iQUIC uses TLS 1.3 to establish keys for a session and then uses them to encrypt each packet. However, since it's UDP-based, a lot of the session information and metadata that's exposed in TCP gets encrypted in QUIC.
In fact, iQUIC's current [short header][25] — used for all packets except the handshake — only exposes a packet number, an optional connection identifier, and a byte of state for things like the encryption key rotation schedule and the packet type (which might end up encrypted as well).
Everything else is encrypted — including ACKs, to raise the bar for [traffic analysis][26] attacks.
However, this means that passively estimating RTT and packet loss by observing connections is no longer possible; there isn't enough information. This lack of observability has caused a significant amount of concern by some in the operator community, who say that passive measurements like this are critical for debugging and understanding their networks.
One proposal to meet this need is the [Spin Bit][27] — a bit in the header that flips once per round trip, so that observers can estimate RTT. Since it's decoupled from the application's state, it doesn't appear to leak any information about the endpoints, beyond a rough estimate of location on the network.
#### DOH
The newest change on the horizon is DOH — [DNS over HTTP][28]. A [significant amount of research has shown that networks commonly use DNS as a means of imposing policy][29] (whether on behalf of the network operator or a greater authority).
Circumventing this kind of control with encryption has been [discussed for a while][30], but it has a disadvantage (at least from some standpoints) — it is possible to discriminate it from other traffic; for example, by using its port number to block access.
DOH addresses that by piggybacking DNS traffic onto an existing HTTP connection, thereby removing any discriminators. A network that wishes to block access to that DNS resolver can only do so by blocking access to the website as well.
For example, if Google was to deploy its [public DNS service over DOH][31] on `www.google.com` and a user configures their browser to use it, a network that wants (or is required) to stop it would have to effectively block all of Google (thanks to how they host their services).
DOH has just started its work, but there's already a fair amount of interest in it, and some rumblings of deployment. How the networks (and governments) that use DNS to impose policy will react remains to be seen.
Read [IETF 100, Singapore: DNS over HTTP (DOH!)][1]
#### Ossification and grease
To return to motivations, one theme throughout this work is how protocol designers are increasingly encountering problems where networks make assumptions about traffic.
For example, TLS 1.3 has had a number of last-minute issues with middleboxes that assume it's an older version of the protocol. gQUIC blacklists several networks that throttle UDP traffic, because they think that it's harmful or low-priority traffic.
When a protocol can't evolve because deployments freeze its extensibility points, we say it has _ossified_. TCP itself is a severe example of ossification; so many middleboxes do so many things to TCP — whether it's blocking packets with TCP options that aren't recognized, or optimizing congestion control.
It's necessary to prevent ossification, to ensure that protocols can evolve to meet the needs of the Internet in the future; otherwise, it would be a tragedy of the commons where the actions of some individual networks — although well-intended — would affect the health of the Internet overall.
There are many ways to prevent ossification; if the data in question is encrypted, it cannot be accessed by any party but those that hold the keys, preventing interference. If an extension point is unencrypted but commonly used in a way that would break applications visibly (for example, HTTP headers), it's less likely to be interfered with.
Where protocol designers can't use encryption and an extension point isn't used often, artificially exercising the extension point can help; we call this _greasing_ it.
For example, QUIC encourages endpoints to use a range of decoy values in its [version negotiation][32], to avoid implementations assuming that it will never change (as was often encountered in TLS implementations, leading to significant problems).
#### The network and the user
Beyond the desire to avoid ossification, these changes also reflect the evolving relationship between networks and their users. While for a long time people assumed that networks were always benevolent — or at least disinterested — parties, this is no longer the case, thanks not only to [pervasive monitoring][33] but also attacks like [Firesheep][34].
As a result, there is growing tension between the needs of Internet users overall and those of the networks who want to have access to some amount of the data flowing over them. Particularly affected will be networks that want to impose policy upon those users; for example, enterprise networks.
In some cases, they might be able to meet their goals by installing software (or a CA certificate, or a browser extension) on their users' machines. However, this isn't as easy in cases where the network doesn't own or have access to the computer; for example, BYOD has become common, and IoT devices seldom have the appropriate control interfaces.
As a result, a lot of discussion surrounding protocol development in the IETF is touching on the sometimes competing needs of enterprises and other leaf networks and the good of the Internet overall.
#### Get involved
For the Internet to work well in the long run, it needs to provide value to end users, avoid ossification, and allow networks to operate. The changes taking place now need to meet all three goals, but we need more input from network operators.
If these changes affect your network — or won't — please leave comments below, or better yet, get involved in the [IETF][35] by attending a meeting, joining a mailing list, or providing feedback on a draft.
Thanks to Martin Thomson and Brian Trammell for their review.
_Mark Nottingham is a member of the Internet Architecture Board and co-chairs the IETF's HTTP and QUIC Working Groups._
--------------------------------------------------------------------------------
via: https://blog.apnic.net/2017/12/12/internet-protocols-changing/
作者:[Mark Nottingham][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.apnic.net/author/mark-nottingham/
[1]:https://blog.apnic.net/2017/11/17/ietf-100-singapore-dns-http-doh/
[2]:https://blog.apnic.net/author/mark-nottingham/
[3]:https://blog.apnic.net/category/tech-matters/
[4]:https://blog.apnic.net/tag/dns/
[5]:https://blog.apnic.net/tag/doh/
[6]:https://blog.apnic.net/tag/guest-post/
[7]:https://blog.apnic.net/tag/http/
[8]:https://blog.apnic.net/tag/ietf/
[9]:https://blog.apnic.net/tag/quic/
[10]:https://blog.apnic.net/tag/tls/
[11]:https://blog.apnic.net/tag/protocol/
[12]:https://blog.apnic.net/2017/12/12/internet-protocols-changing/#comments
[13]:https://blog.apnic.net/
[14]:https://www.smashingmagazine.com/2015/09/why-performance-matters-the-perception-of-time/
[15]:https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46197.pdf
[16]:https://http2.github.io/
[17]:http://httpwg.org/specs/rfc7540.html#TLSUsage
[18]:http://httpwg.org/specs/rfc7540.html#BadCipherSuites
[19]:http://httpwg.org/specs/rfc7540.html#reuse
[20]:https://tools.ietf.org/html/draft-bishop-httpbis-http2-additional-certs
[21]:https://datatracker.ietf.org/doc/draft-ietf-tls-tls13/
[22]:https://en.wikipedia.org/wiki/Forward_secrecy
[23]:https://quicwg.github.io/
[24]:https://blog.apnic.net/2016/08/30/questions-answered-quic/
[25]:https://quicwg.github.io/base-drafts/draft-ietf-quic-transport.html#short-header
[26]:https://www.mjkranch.com/docs/CODASPY17_Kranch_Reed_IdentifyingHTTPSNetflix.pdf
[27]:https://tools.ietf.org/html/draft-trammell-quic-spin
[28]:https://datatracker.ietf.org/wg/doh/about/
[29]:https://datatracker.ietf.org/meeting/99/materials/slides-99-maprg-fingerprint-based-detection-of-dns-hijacks-using-ripe-atlas/
[30]:https://datatracker.ietf.org/wg/dprive/about/
[31]:https://developers.google.com/speed/public-dns/
[32]:https://quicwg.github.io/base-drafts/draft-ietf-quic-transport.html#rfc.section.3.7
[33]:https://tools.ietf.org/html/rfc7258
[34]:http://codebutler.com/firesheep
[35]:https://www.ietf.org/

View File

@ -0,0 +1,282 @@
Toplip - A Very Strong File Encryption And Decryption CLI Utility
======
There are numerous file encryption tools available on the market to protect
your files. We have already reviewed some encryption tools such as
[**Cryptomator**][1], [**Cryptkeeper**][2], [**CryptGo**][3], [**Cryptr**][4],
[**Tomb**][5], and [**GnuPG**][6]. Today, we will be discussing yet
another file encryption and decryption command line utility named
**"Toplip"**. It is a free and open source encryption utility that uses a very
strong encryption method called **[AES256][7]**, along with an **XTS-AES**
design, to safeguard your confidential data. It also uses [**Scrypt**][8], a
password-based key derivation function, to protect your passphrases against
brute-force attacks.
### Prominent features
Compared to other file encryption tools, toplip ships with the following
unique and prominent features.
* Very strong XTS-AES256 based encryption method.
* Plausible deniability.
* Encrypt files inside images (PNG/JPG).
* Multiple passphrase protection.
* Simplified brute force recovery protection.
* No identifiable output markers.
* Open source/GPLv3.
### Installing Toplip
There is no installation required. Toplip is a standalone executable binary
file. All you have to do is download the latest toplip from the [**official
products page**][9] and make it executable. To do so, just run:
```
chmod +x toplip
```
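Optionally, if you want to invoke it from anywhere, you can also move the binary somewhere in your `$PATH` (assuming it was downloaded to the current directory):
```
sudo mv toplip /usr/local/bin/
```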
### Usage
If you run toplip without any arguments, you will see the help section.
```
./toplip
```
[![][10]][11]
Allow me to show you some examples.
For the purpose of this guide, I have created two files, namely **file1** and
**file2**. I also have an image file, which we will use to hide the files
inside. And finally, there is the **toplip** executable binary itself. I have
kept them all in a directory called **test**.
[![][12]][13]
**Encrypt/decrypt a single file**
Now, let us encrypt **file1**. To do so, run:
```
./toplip file1 > file1.encrypted
```
This command will prompt you to enter a passphrase. Once you have given the
passphrase, it will encrypt the contents of **file1** and save them in a file
called **file1.encrypted** in your current working directory.
Sample output of the above command would be:
```
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip file1 Passphrase #1: generating keys...Done
Encrypting...Done
```
To verify that the file is really encrypted, try to open it and you will see
some random characters.
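For a quick sanity check, you can also inspect the output with standard tools (these are not part of toplip); since toplip produces no identifiable output markers, `file` should report nothing more specific than plain data:
```
file file1.encrypted # should report something like: file1.encrypted: data
head -c 64 file1.encrypted | xxd # the first bytes should look like random noise
```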
To decrypt the encrypted file, use the **-d** flag as shown below:
```
./toplip -d file1.encrypted
```
This command will decrypt the given file and display the contents in the
Terminal window.
To restore the file instead of writing to stdout, do:
```
./toplip -d file1.encrypted > file1.decrypted
```
Enter the correct passphrase to decrypt the file. All contents of **file1.encrypted** will be restored in a file called **file1.decrypted**.
Please don't follow this naming convention in practice; I used it only for the sake of clarity. Use names that are hard to predict.
**Encrypt/decrypt multiple files**
Now we will encrypt two files with two separate passphrases for each one.
```
./toplip -alt file1 file2 > file3.encrypted
```
You will be asked to enter a passphrase for each file. Use different
passphrases.
Sample output of the above command will be:
```
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
**file2 Passphrase #1** : generating keys...Done
**file1 Passphrase #1** : generating keys...Done
Encrypting...Done
```
What the above command does is encrypt the contents of the two files and save
them in a single file called **file3.encrypted**. While restoring, just give
the respective passphrase. For example, if you give the passphrase of file1,
toplip will restore file1. If you enter the passphrase of file2, toplip will
restore file2.
Each **toplip** encrypted output may contain up to four wholly independent
files, each created with its own separate and unique passphrase. Due to
the way the encrypted output is put together, there is no way to easily
determine whether or not multiple files actually exist in the first place. By
default, even if only one file is encrypted using toplip, random data is added
automatically. If more than one file is specified, each with its own
passphrase, then you can selectively extract each file independently and thus
deny the existence of the other files altogether. This effectively allows a
user to open an encrypted bundle with controlled exposure risk, and gives an
adversary no computationally inexpensive way to conclusively identify that
additional confidential data exists. This is called **plausible deniability**,
one of the notable features of toplip.
To decrypt **file1** from **file3.encrypted**, just enter:
```
./toplip -d file3.encrypted > file1.decrypted
```
You will be prompted to enter the correct passphrase of file1.
To decrypt **file2** from **file3.encrypted**, enter:
```
./toplip -d file3.encrypted > file2.decrypted
```
Do not forget to enter the correct passphrase of file2.
**Use multiple passphrase protection**
This is another cool feature that I admire. We can provide multiple
passphrases for a single file when encrypting it, which protects the
encrypted file against brute-force attempts.
```
./toplip -c 2 file1 > file1.encrypted
```
Here, **-c 2** represents two different passphrases. Sample output of the
above command would be:
```
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
**file1 Passphrase #1:** generating keys...Done
**file1 Passphrase #2:** generating keys...Done
Encrypting...Done
```
As you can see in the above example, toplip prompted me to enter two
passphrases. Please note that you must **provide two different passphrases**,
not a single passphrase twice.
To decrypt this file, do:
```
$ ./toplip -c 2 -d file1.encrypted > file1.decrypted
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
**file1.encrypted Passphrase #1:** generating keys...Done
**file1.encrypted Passphrase #2:** generating keys...Done
Decrypting...Done
```
**Hide files inside an image**
The practice of concealing a file, message, image, or video within another
file is called **steganography**. Fortunately, this feature exists in toplip
by default.
To hide one or more files inside an image, use the **-m** flag as shown below.
```
$ ./toplip -m image.png file1 > image1.png
This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
file1 Passphrase #1: generating keys...Done
Encrypting...Done
```
This command conceals the contents of file1 inside an image named image1.png.
To decrypt it, run:
```
$ ./toplip -d image1.png > file1.decrypted This is toplip v1.20 (C) 2015, 2016 2 Ton Digital. Author: Jeff Marrison A showcase piece for the HeavyThing library. Commercial support available Proudly made in Cooroy, Australia. More info: https://2ton.com.au/toplip
image1.png Passphrase #1: generating keys...Done
Decrypting...Done
```
**Increase password complexity**
To make things even harder to break, we can increase the password complexity
as shown below.
```
./toplip -c 5 -i 0x8000 -alt file1 -c 10 -i 10 file2 > file3.encrypted
```
The above command will prompt you to enter 5 passphrases for file1 and 10
passphrases for file2, and encrypt both of them into a single file called
"file3.encrypted". As you may have noticed, we used one additional flag,
**-i**, in this example. It is used to specify the key derivation iterations.
This option overrides the default iteration count of 1 for scrypt's initial
and final PBKDF2 stages. Hexadecimal or decimal values are permitted, e.g.
**0x8000**, **10**, etc. Please note that this can dramatically increase the
calculation times.
To decrypt file1, use:
```
./toplip -c 5 -i 0x8000 -d file3.encrypted > file1.decrypted
```
To decrypt file2, use:
```
./toplip -c 10 -i 10 -d file3.encrypted > file2.decrypted
```
To know more about the underlying technical information and crypto methods
used in toplip, refer to its official website given at the end.
My personal recommendation to all those who want to protect their data: don't
rely on a single method. Always use more than one tool/method to encrypt
files. Do not write passphrases/passwords down on paper, and do not save them
in your local or cloud storage. Just memorize them and destroy the notes. If
you're poor at remembering passwords, consider using a trustworthy password
manager.
And, that's all. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/
[2]:https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/
[3]:https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/
[4]:https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/
[5]:https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/
[6]:https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/
[7]:http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
[8]:http://en.wikipedia.org/wiki/Scrypt
[9]:https://2ton.com.au/Products/
[10]:https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2.png
[11]:http://www.ostechnix.com/wp-content/uploads/2017/12/toplip-2.png
[12]:https://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1.png
[13]:http://www.ostechnix.com/wp-content/uploads/2017/12/toplip-1.png

View File

@ -0,0 +1,186 @@
Creating a blog with pelican and Github pages
======
Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and creating a Python virtual environment for development. If you aren't, I recommend you learn with the [Django Girls tutorial][2], which covers that and more.
This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).
The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.
![Screenshot of Github, the menu to create a new repository is open and a new repo is being created with the name 'rsip22.github.io'][3]
I recommend that you initialize your repository with a README, with a .gitignore for Python and with a [free software license][4]. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.
Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:
```
$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git
```
And change to the new directory:
```
$ cd YOUR_USERNAME.github.io
```
Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. To do that, you must create a new branch called "source":
```
$ git checkout -b source
```
Create the virtualenv with the Python3 version installed on your system.
On GNU/Linux systems, the command might go as:
```
$ python3 -m venv venv
```
or as
```
$ virtualenv --python=python3.5 venv
```
And activate it:
```
$ source venv/bin/activate
```
Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to Github) and Markdown (for writing your posts using markdown). It goes like this:
```
(venv)$ pip install pelican markdown ghp-import
```
Once that is done, you can start creating your blog using pelican-quickstart:
```
(venv)$ pelican-quickstart
```
This will prompt us with a series of questions. Before answering them, take a look at my answers below:
```
> Where do you want to create your new web site? [.] ./
> What will be the title of this web site? Renata's blog
> Who will be the author of this web site? Renata
> What will be the default language of this web site? [pt] en
> Do you want to specify a URL prefix? e.g., http://example.com (Y/n) n
> Do you want to enable article pagination? (Y/n) y
> How many articles per page do you want? [10] 10
> What is your time zone? [Europe/Paris] America/Sao_Paulo
> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
> Do you want to upload your website using FTP? (y/N) n
> Do you want to upload your website using SSH? (y/N) n
> Do you want to upload your website using Dropbox? (y/N) n
> Do you want to upload your website using S3? (y/N) n
> Do you want to upload your website using Rackspace Cloud Files? (y/N) n
> Do you want to upload your website using GitHub Pages? (y/N) y
> Is this your personal page (username.github.io)? (y/N) y
Done. Your new project is available at /home/username/YOUR_USERNAME.github.io
```
About the time zone, it should be specified as TZ Time zone (full list here: [List of tz database time zones][5]).
Now, go ahead and create your first blog post! You might want to open the project folder on your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata which identifies the Title, Date, Category and more from the post before you start with the content, like this:
```
.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes
Title: My first post
Date: 2017-11-26 10:01
Modified: 2017-11-27 12:30
Category: misc
Tags: first, misc
Slug: My-first-post
Authors: Your name
Summary: What does your post talk about? Write here.
This is the *first post* from my Pelican blog. **YAY!**
```
Let's see how it looks.
Go to the terminal, generate the static files and start the server. To do that, use the following command:
```
(venv)$ make html && make serve
```
While this command is running, you should be able to visit it on your favorite web browser by typing localhost:8000 on the address bar.
![Screenshot of the blog home. It has a header with the title Renata\\'s blog, the first post on the left, info about the post on the right, links and social on the bottom.][6]
Pretty neat, right?
Now, what if you want to put an image in a post, how do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:
```
.lang="python" # DON'T COPY this line, it exists just for highlighting purposes
STATIC_PATHS = ['images']
```
Save it. Go to your post and add the image this way:
```
.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes
![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)
```
You can interrupt the server at any time by pressing CTRL+C in the terminal. But you should start it again and check if the image is correct. Can you remember how?
```
(venv)$ make html && make serve
```
One last step before your coding is "done": you should make sure anyone can read your posts using Atom or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:
```
.lang="python" # DON'T COPY this line, it exists just for highlighting purposes
FEED_ALL_ATOM = 'feeds/all.atom.xml'
FEED_ALL_RSS = 'feeds/all.rss.xml'
AUTHOR_FEED_RSS = 'feeds/%s.rss.xml'
RSS_FEED_SUMMARY_ONLY = False
```
Save everything so you can send the code to Github. You can do that by adding all files, committing them with a message ('first commit') and using git push. You will be asked for your Github login and password.
```
$ git add -A && git commit -a -m 'first commit' && git push --all
```
And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:
```
$ make github
```
You will be asked for your Github login and password again. And... voila! Your new blog should be live on https://YOUR_USERNAME.github.io.
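Under the hood, `make github` is roughly equivalent to the following sequence (a sketch based on the defaults that pelican-quickstart writes into the Makefile; the output directory and settings file may differ on your setup):
```
pelican content -o output -s publishconf.py # generate the production build of the site
ghp-import -b master output # commit the generated files to the local master branch
git push origin master # publish them to Github Pages
```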
If you had an error in any step of the way, please reread this tutorial, try and see if you can detect in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, a wrong indentation, can give us trouble. Shout out and ask for help online or in your community.
For tips on how to write your posts using Markdown, you should read the [Daring Fireball Markdown guide][7].
To get other themes, I recommend you visit [Pelican Themes][8].
This post was adapted from [Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme][9]. I hope it was somewhat useful for you.
--------------------------------------------------------------------------------
via: https://rsip22.github.io/blog/create-a-blog-with-pelican-and-github-pages.html
作者:[][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://rsip22.github.io
[1]:https://rsip22.github.io/blog/category/blog.html
[2]:https://tutorial.djangogirls.org
[3]:https://rsip22.github.io/blog/img/create_github_repository.png
[4]:https://www.gnu.org/licenses/license-list.html
[5]:https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
[6]:https://rsip22.github.io/blog/img/blog_screenshot.png
[7]:https://daringfireball.net/projects/markdown/syntax
[8]:http://www.pelicanthemes.com/
[9]:https://a-slide.github.io/blog/github-pelican

View File

@ -0,0 +1,63 @@
书评《Ours to Hack and to Own》
============================================================
![书评: Ours to Hack and to Own](https://opensource.com/sites/default/files/styles/image-full-size/public/images/education/EDUCATION_colorbooks.png?itok=liB3FyjP "Book review: Ours to Hack and to Own")
Image by : opensource.com
私有制的时代看起来似乎结束了,我将不仅仅讨论那些由我们中的许多人引入到我们的家庭与生活的设备和软件。我也将讨论这些设备与应用依赖的平台与服务。
尽管我们使用的许多服务是免费的,我们对它们并没有任何控制。本质上讲,这些企业确实控制着我们所看到的,听到的以及阅读到的内容。不仅如此,许多企业还在改变工作的性质。他们正使用封闭的平台来助长由全职工作到[零工经济][2]的转变方式,这种方式提供极少的安全性与确定性。
这项行动对于网络以及每一个使用与依赖网络的人产生了广泛的影响。仅仅二十多年前的开放网络的想象正在逐渐消逝并迅速地被一块难以穿透的幕帘所取代。
一种变得流行的补救办法就是建立[平台合作][3], 由他们的用户所拥有的电子化平台。正如这本书所阐述的,平台合作社背后的观点与开源有许多相同的根源。
学者Trebor Scholz和作家Nathan Schneider已经收集了40篇探讨平台合作社作为普通人可使用以提升开放性并对闭源系统的不透明性及各种限制予以还击的工具的增长及需求的论文。
### 哪里适合开源
任何平台合作社核心及接近核心的部分依赖与开源;不仅开源技术是必要的,构成开源开放性,透明性,协同合作以及共享的准则与理念同样不可或缺。
在这本书的介绍中, Trebor Scholz指出
> 与网络的黑盒子系统相反,这些平台需要使它们的数据流透明来辨别自身。他们需要展示客户与员工的数据在哪里存储,数据出售给了谁以及数据为了何种目的。
正是对开源如此重要的透明性,促使平台合作社如此吸引人并在目前大量已存平台之中成为令人耳目一新的变化。
开源软件在《Ours to Hack and to Own》所分享的平台合作社的构想中必然充当着重要角色。开源软件能够为群体建立助推合作社的技术型公共建设提供快速不算昂贵的途径。
Mickey Metts在论文中这样形容 "与你的友好的社区型技术合作社相遇。原文Meet Your Friendly Neighborhood Tech Co-Op." Metts为一家名为Agaric的企业工作这家企业使用Drupal为团体及小型企业建立他们不能独自完成的产品。除此以外, Metts还鼓励任何想要建立并运营自己的企业的公司或合作社的人接受免费且开源的软件。为什么呢因为它是高质量的不算昂贵的可定制的并且你能够与由乐于助人而又热情的人们组成的大型社区产生联系。
### 不总是开源的,但开源总在
这本书里不是所有的论文都聚焦或提及开源的;但是,开源方式的关键元素-合作,社区,开放管理以及电子自由化-总是在其表面若隐若现。
事实上正如《Ours to Hack and to Own》中许多论文所讨论的建立一个更加开放、基于平常人的经济与社会区块平台合作社会变得非常重要。用 Douglas Rushkoff 的话讲,那会是类似 Creative Commons 的组织,是 “对共享知识资源的私有化” 的补偿。它们也如巴塞罗那的首席技术官CTOFrancesca Bria 所描述的那样,是 “通过确保市民数据的安全性、隐私性和权利的系统” 来运营他们自己的 “分布式通用数据基础架构” 的城市。
### 最后的思考
如果你在寻找改变互联网的蓝图以及我们工作的方式《Ours to Hack and to Own》并不是你要寻找的。这本书与其说是用户指南不如说是一种宣言。如书中所说《Ours to Hack and to Own》让我们略微了解如果我们将开源方式准则应用于社会及更加广泛的世界我们能够做的事。
--------------------------------------------------------------------------------
作者简介:
Scott Nesbitt -作家编辑雇佣兵虎猫牛仔原文Ocelot wrangle丈夫与父亲博客写手陶器收藏家。Scott正是做这样的一些事情。他还是大量写关于开源软件文章与博客的长期开源用户。你可以在Twitter,Github上找到他。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/1/review-book-ours-to-hack-and-own
作者:[Scott Nesbitt][a]
译者:[darsh8](https://github.com/darsh8)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/scottnesbitt
[1]:https://opensource.com/article/17/1/review-book-ours-to-hack-and-own?rate=dgkFEuCLLeutLMH2N_4TmUupAJDjgNvFpqWqYCbQb-8
[2]:https://en.wikipedia.org/wiki/Access_economy
[3]:https://en.wikipedia.org/wiki/Platform_cooperative
[4]:http://www.orbooks.com/catalog/ours-to-hack-and-to-own/
[5]:https://opensource.com/user/14925/feed
[6]:https://opensource.com/users/scottnesbitt

View File

@ -0,0 +1,363 @@
怎么去转换任何系统调用为一个事件:介绍 eBPF 内核探针
============================================================
长文预警:在最新的 Linux 内核(>=4.4)中使用 eBPF你可以使用任意数据将任何内核函数调用转换为一个用户空间事件。这通过 bcc 来做很容易。这个探针是用 C 语言写的,而数据是由 Python 来处理的。
如果你对 eBPF 或者 Linux 跟踪不熟悉,那你应该好好阅读一下本文。它尝试逐步去克服我在使用 bcc/eBPF 时遇到的困难,同时也节省了我在搜索和挖掘上花费的时间。
### 在 Linux 的世界中关于 push vs pull 的一个提示
当我开始在容器上工作的时候,我想知道我们怎么才能基于真实的系统状态去动态更新负载均衡器的配置。一个通用的策略是这样的:无论什么时候,只要一个容器启动,容器编排器就触发一个负载均衡器配置更新动作,负载均衡器去轮询每个容器,直到它的健康检查通过为止。健康检查可能只是简单地进行一次 “SYN” 测试。
虽然这种配置方式可以有效工作,但是它的缺点是你的负载均衡器为了让一些系统变得可用需要等待,而不是 … 让负载去均衡。
可以做的更好吗?
当你希望一个程序对系统中的一些变化做出反应时,有两种可能的策略:程序可以去 _轮询_ 系统去检测变化;或者,如果系统支持,系统可以 _推送_ 事件并让程序对其作出反应。使用推送还是轮询取决于上下文环境。一个好的经验法则是:当事件发生的频率相对于其处理时间较低时使用推送,而当事件发生得太快、推送会拖垮系统时切换为轮询。例如,一般情况下,网络驱动将等待来自网卡的事件,而像 dpdk 这样的框架则会主动轮询网卡,以达到高吞吐、低延迟的目的。
理想状态下,我们将有一些内核接口告诉我们:
> * “容器管理器,你好,我刚才为容器 _servestaticfiles_ 的 Nginx-ware 创建了一个套接字,或许你应该去更新你的状态?
>
> * “好的,操作系统,感谢你告诉我这个事件“
虽然 Linux 有大量的接口去处理事件,对于文件事件高达 3 个,但是没有专门的接口去得到套接字事件提示。你可以得到路由表事件、邻接表事件、连接跟踪事件、接口变化事件。唯独没有套接字事件。或者,也许它深深地隐藏在一个 Netlink 接口中。
理想情况下,我们需要一个做这件事的通用方法,怎么办呢?
### 内核跟踪和 eBPF一些它们的历史
直到最近,仅有的方式是在内核上打一个补丁程序,或者借助于 SystemTap。[SystemTap][5] 是一个 Linux 系统跟踪器。简单地说,它提供了一个 DSL编译成内核模块然后加载到内核中运行。但是一些生产系统出于安全的原因禁用了动态模块加载其中就包括我当时工作用的那一台。另外的方式是为内核打一个补丁程序去触发一些事件可能是基于 netlink 的。但是这很不方便。深入内核还有一些缺点:“有趣的” 新 “特性” 和增加的维护负担。
从 Linux 3.15 开始,我们有了希望:任何可跟踪的内核函数都可以被安全地转换为用户空间事件。在一般的计算机科学语境中,“安全” 意味着 “某种虚拟机”。这里也不例外。其实Linux 很早就有这样一个虚拟机了:自 1997 年的 Linux 2.1.75 起就正式引入了。它被称为伯克利包过滤器BPF。正如它的名字所表达的那样它最初是为 BSD 防火墙开发的。它仅有两个寄存器,并且仅允许向前跳转,这意味着你不能使用它写一个循环(好吧,如果你知道最大迭代次数,你也可以手工展开来实现循环)。这一点保证了程序总会终止,并且从来不会使系统处于挂起的状态。还不确定它有什么用吗?即便你用的是 iptables它的作用也正如 [CloudFlare 的 DDoS 防护基础][6]。
好的,因此,随着 Linux 3.15[BPF 被扩展][7] 成了 eBPF意思是 “扩展的extendedBPF”。它从两个 32 位寄存器升级到 10 个 64 位寄存器,并且增加了它们之间向后跳转的能力。它接着 [在 Linux 3.18 中被进一步扩展][8],移出了网络子系统,并且增加了像映射map这样的工具。为保证安全[引进了一个校验器][9],它验证所有的内存访问和可能的代码路径。如果校验器不能证明代码会在固定的边界内终止,它就会在一开始拒绝程序的插入。
关于它的更多历史,可以看 [Oracle 的关于 eBPF 的一个很捧的演讲][10]。
让我们开始吧!
### 来自 `inet_listen` 的问候
因为写一个汇编程序并不是件容易的任务,甚至对于很优秀的我们来说,我将使用 [bcc][11]。bcc 是一个基于 LLVM 的采集工具,并且用 Python 抽象了底层机制。探针是用 C 写的,并且返回的结果可以被 Python 利用,来写一些非常简单的应用程序。
首先安装 bcc。对于一些示例你可能会被要求使用一个最新的内核版本>= 4.4)。如果你想亲自去尝试一下这些示例,我强烈推荐你安装一台虚拟机。 _而不是_ 一个 Docker 容器。你不能在一个容器中改变内核。作为一个非常新的很活跃的项目,安装教程高度依赖平台/版本的。你可以在 [https://github.com/iovisor/bcc/blob/master/INSTALL.md][12] 上找到最新的教程。
现在,我希望在 TCP 套接字上进行监听,不管什么时候,只要有任何程序启动我将得到一个事件。当我在一个 `AF_INET` + `SOCK_STREAM` 套接字上调用一个 `listen()` 系统调用时,底层的内核函数是 [`inet_listen`][13]。我将钩在 `kprobe` 的输入点上,它启动时输出一个 “Hello World”。
```
from bcc import BPF
# Hello BPF Program
bpf_text = """
#include <net/inet_sock.h>
#include <bcc/proto.h>
// 1\. Attach kprobe to "inet_listen"
int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
{
bpf_trace_printk("Hello World!\\n");
return 0;
};
"""
# 2\. Build and Inject program
b = BPF(text=bpf_text)
# 3\. Print debug output
while True:
print b.trace_readline()
```
这个程序做了三件事1. 它按照命名惯例附加到一个内核探针上。如果函数被命名为比如说 “my_probe”那就需要用 `b.attach_kprobe("inet_listen", "my_probe")` 显式地附加。2. 它使用 LLVM 的新 BPF 后端构建程序,使用(新的)`bpf()` 系统调用注入生成的字节码,并按照命名惯例自动附加探针。3. 从内核管道读取原生输出。
注意eBPF 的后端 LLVM 还很新。如果你认为你发了一个 bug你可以去升级它。
注意到 `bpf_trace_printk` 调用了吗?这是一个内核的 `printk()` 精简版的 debug 函数。使用时,它产生一个跟踪信息到 `/sys/kernel/debug/tracing/trace_pipe` 中的专门的内核管道。就像名字所暗示的那样,这是一个管道。如果多个读取者消费它,仅有一个将得到一个给定的行。对生产系统来说,这样是不合适的。
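作为示意,你也可以直接读取这个管道来查看输出(假设有 root 权限;记住,多个读取者会互相抢行):
```
# 直接消费内核跟踪管道,按 Ctrl+C 退出
sudo cat /sys/kernel/debug/tracing/trace_pipe
```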
幸运的是Linux 3.19 引入了对消息传递的映射以及 Linux 4.4 带来了任意 perf 事件支持。在这篇文章的后面部分,我将演示 perf 事件。
```
# From a first console
ubuntu@bcc:~/dev/listen-evts$ sudo /python tcv4listen.py
nc-4940 [000] d... 22666.991714: : Hello World!
# From a second console
ubuntu@bcc:~$ nc -l 0 4242
^C
```
Yay!
### 抓取 backlog
现在,让我们输出一些很容易访问到的数据“backlog”。backlog 是挂起的、尚未被 `accept()` 的 TCP 连接的数量。
只要稍微调整一下 `bpf_trace_printk`
```
bpf_trace_printk("Listening with with up to %d pending connections!\\n", backlog);
```
如果你用这个 “革命性” 的改善重新运行这个示例,你将看到如下的内容:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
nc-5020 [000] d... 25497.154070: : Listening with with up to 1 pending connections!
```
`nc` 是一个单连接程序,因此 backlog 是 1。Nginx 或者 Redis 在这里则会输出 128。但是那是另外一件事了。
简单吧?现在让我们获取它的端口。
### 抓取端口和 IP
深入研究内核中 `inet_listen` 的源代码,我们知道需要从 `socket` 对象中取得 `inet_sock`。直接从内核源码中拷贝这两行,然后插入到跟踪器的开始处:
```
// cast types. Intermediate cast not needed, kept for readability
struct sock *sk = sock->sk;
struct inet_sock *inet = inet_sk(sk);
```
端口号现在可以从 `inet->inet_sport` 中读取,它是网络字节序(即大端序)的。很容易吧!因此,我们将替换 `bpf_trace_printk`
```
bpf_trace_printk("Listening on port %d!\\n", inet->inet_sport);
```
然后运行:
```
ubuntu@bcc:~/dev/listen-evts$ sudo /python tcv4listen.py
...
R1 invalid mem access 'inv'
...
Exception: Failed to load BPF program kprobe__inet_listen
```
好吧,事情没有那么简单。在写这篇文章的时候bcc 还在 _快速_ 改进之中,有几个问题已经被解决了,但是并没有全部解决。这个错误意味着内核校验器无法证明程序中的内存访问是正确的。注意前面那个显式的类型转换。我们需要一点帮助,以使访问更加明确。我们将使用可信任的 `bpf_probe_read` 函数去读取任意内存位置,它会像下面这样做一些必需的检查来确保安全:
```
// Explicit initialization. The "=0" part is needed to "give life" to the variable on the stack
u16 lport = 0;
// Explicit arbitrary memory access. Read it:
// Read into 'lport', 'sizeof(lport)' bytes from 'inet->inet_sport' memory location
bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));
```
读取 `inet->inet_rcv_saddr` 中的 IPv4 绑定地址的做法基本相同。如果我把这些放到一起,我们将得到 backlog、端口和绑定 IP
```
from bcc import BPF
# BPF Program
bpf_text = """
#include <net/sock.h>
#include <net/inet_sock.h>
#include <bcc/proto.h>
// Send an event for each IPv4 listen with PID, bound address and port
int kprobe__inet_listen(struct pt_regs *ctx, struct socket *sock, int backlog)
{
// Cast types. Intermediate cast not needed, kept for readability
struct sock *sk = sock->sk;
struct inet_sock *inet = inet_sk(sk);
// Working values. You *need* to initialize them to give them "life" on the stack and use them afterward
u32 laddr = 0;
u16 lport = 0;
// Pull in details. As 'inet_sk' is internally a type cast, we need to use 'bpf_probe_read'
// read: load into 'laddr' 'sizeof(laddr)' bytes from address 'inet->inet_rcv_saddr'
bpf_probe_read(&laddr, sizeof(laddr), &(inet->inet_rcv_saddr));
bpf_probe_read(&lport, sizeof(lport), &(inet->inet_sport));
// Push event
bpf_trace_printk("Listening on %x %d with %d pending connections\\n", ntohl(laddr), ntohs(lport), backlog);
return 0;
};
"""
# Build and Inject BPF
b = BPF(text=bpf_text)
# Print debug output
while True:
print b.trace_readline()
```
运行一个测试,输出的内容像下面这样:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
nc-5024 [000] d... 25821.166286: : Listening on 7f000001 4242 with 1 pending connections
```
这是在本地主机上的监听。由于没有做友好化的输出处理这里的地址以十六进制显示7f000001 即 127.0.0.1。这很酷。
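如果想自己验证,可以用一行命令把这个十六进制地址转换为熟悉的点分十进制形式(一个与跟踪器本身无关的简单示意):
```
# 0x7f000001 -> 127.0.0.1(先按网络字节序打包,再转换)
python -c 'import socket, struct; print(socket.inet_ntoa(struct.pack("!I", 0x7f000001)))'
```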
注意:你可能想知道为什么 `ntohs` 和 `ntohl` 可以从 BPF 中被调用,即便它们并不可信。这是因为它们是宏,并且是从 “.h” 文件中来的内联函数,并且,在写这篇文章的时候一个小的 bug 已经 [修复了][14]。
全部完成之后,再来一个代码片断:我们希望获取相关的容器。在一个网络环境中,那意味着我们希望取得网络的命名空间。网络命名空间是一个容器的构建块,它允许它们拥有独立的网络。
### 抓取网络命名空间:被迫引入的 perf 事件
在用户空间中,网络命名空间可以通过检查 `/proc/PID/ns/net` 的目标来确定,它看起来像 `net:[4026531957]` 这样。方括号中的数字是该网络命名空间的 inode 编号。也就是说,我们可以从 `/proc` 中获取它,但这并不是好的方式,顶多临时用一下。我们可以从内核中直接抓取这个 inode 编号。幸运的是,那样做很容易:
```
// Create an populate the variable
u32 netns = 0;
// Read the netns inode number, like /proc does
netns = sk->__sk_common.skc_net.net->ns.inum;
```
很容易!而且它做到了。
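作为对照,下面是在用户空间中查看一个进程(这里以当前 shell 为例)的网络命名空间的简单示意:
```
# 输出形如 net:[4026531957],方括号中即该命名空间的 inode 编号
readlink /proc/$$/ns/net
```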
但是,如果你看到这里,你可能猜到那里有一些错误。它在:
```
bpf_trace_printk("Listening on %x %d with %d pending connections in container %d\\n", ntohl(laddr), ntohs(lport), backlog, netns);
```
如果你尝试去运行它,你将看到一些令人难解的错误信息:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
error: in function kprobe__inet_listen i32 (%struct.pt_regs*, %struct.socket*, i32)
too many args to 0x1ba9108: i64 = Constant<6>
```
clang 想尝试去告诉你的是 “Hey pal`bpf_trace_printk` 只能带四个参数,你刚才给它传递了 5 个“。在这里我不打算继续追究细节了,但是,那是 BPF 的一个限制。如果你想继续去深入研究,[这里是一个很好的起点][15]。
去修复它的唯一方式是去 … 停止调试并且准备投入使用。因此,让我们开始吧(确保运行在内核版本为 4.4 的 Linux 系统上)我将使用 perf 事件,它支持传递任意大小的结构体到用户空间。另外,只有我们的读者可以获得它,因此,多个没有关系的 eBPF 程序可以并发产生数据而不会出现问题。
去使用它吧,我们需要:
1. 定义一个结构体
2. 声明事件
3. 推送事件
4. 在 Python 端重新声明事件(这一步以后将不再需要)
5. 消费和格式化事件
这看起来似乎很多,其实并不多,看下面的示例:
```
// At the beginning of the C program, declare our event
struct listen_evt_t {
u64 laddr;
u64 lport;
u64 netns;
u64 backlog;
};
BPF_PERF_OUTPUT(listen_evt);
// In kprobe__inet_listen, replace the printk with
struct listen_evt_t evt = {
.laddr = ntohl(laddr),
.lport = ntohs(lport),
.netns = netns,
.backlog = backlog,
};
listen_evt.perf_submit(ctx, &evt, sizeof(evt));
```
Python 端将需要一点更多的工作:
```
# We need ctypes to parse the event structure
import ctypes
# Declare data format
class ListenEvt(ctypes.Structure):
_fields_ = [
("laddr", ctypes.c_ulonglong),
("lport", ctypes.c_ulonglong),
("netns", ctypes.c_ulonglong),
("backlog", ctypes.c_ulonglong),
]
# Declare event printer
def print_event(cpu, data, size):
event = ctypes.cast(data, ctypes.POINTER(ListenEvt)).contents
print("Listening on %x %d with %d pending connections in container %d" % (
event.laddr,
event.lport,
event.backlog,
event.netns,
))
# Replace the event loop
b["listen_evt"].open_perf_buffer(print_event)
while True:
b.kprobe_poll()
```
来试一下吧。在这个示例中,我有一个 redis 运行在一个 Docker 容器中,并且 nc 在主机上:
```
(bcc)ubuntu@bcc:~/dev/listen-evts$ sudo python tcv4listen.py
Listening on 0 6379 with 128 pending connections in container 4026532165
Listening on 0 6379 with 128 pending connections in container 4026532165
Listening on 7f000001 6588 with 1 pending connections in container 4026531957
```
### 结束语
就这些了。现在,你可以在内核中使用 eBPF 将任何函数的调用转换为事件,并且你也看到了我在学习 eBPF 时遇到的大部分问题。如果你希望去看这个工具的完整版本,以及像 IPv6 支持这样的一些技巧,看一看 [https://github.com/iovisor/bcc/blob/master/tools/solisten.py][16]。它现在是一个官方工具,这要感谢 bcc 团队的支持。
更进一步地去学习,你可能需要去关注 Brendan Gregg 的博客,尤其是 [关于 eBPF 映射和统计的文章][17]。他是这个项目的主要贡献人之一。
--------------------------------------------------------------------------------
via: https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-ebpf-kernel-probes/
作者:[Jean-Tiare Le Bigot][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blog.yadutaf.fr/about
[1]:https://blog.yadutaf.fr/tags/linux
[2]:https://blog.yadutaf.fr/tags/tracing
[3]:https://blog.yadutaf.fr/tags/ebpf
[4]:https://blog.yadutaf.fr/tags/bcc
[5]:https://en.wikipedia.org/wiki/SystemTap
[6]:https://blog.cloudflare.com/bpf-the-forgotten-bytecode/
[7]:https://blog.yadutaf.fr/2016/03/30/turn-any-syscall-into-event-introducing-ebpf-kernel-probes/TODO
[8]:https://lwn.net/Articles/604043/
[9]:http://lxr.free-electrons.com/source/kernel/bpf/verifier.c#L21
[10]:http://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf
[11]:https://github.com/iovisor/bcc
[12]:https://github.com/iovisor/bcc/blob/master/INSTALL.md
[13]:http://lxr.free-electrons.com/source/net/ipv4/af_inet.c#L194
[14]:https://github.com/iovisor/bcc/pull/453
[15]:http://lxr.free-electrons.com/source/kernel/trace/bpf_trace.c#L86
[16]:https://github.com/iovisor/bcc/blob/master/tools/solisten.py
[17]:http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html

View File

@ -1,202 +0,0 @@
# 红帽企业版 Linux 包含的系统服务 - 第一部分
在 2017 年红帽峰会上,有几个人问我“我们通常用完整的虚拟机来隔离如 DNS 和 DHCP 等网络服务,那我们可以用容器来取而代之吗?”。答案是可以的,下面是在当前红帽企业版 Linux 7 系统上创建一个系统容器的例子。
## **我们的目的**
### *创建一个可以独立于任何其他系统服务来进行更新的网络服务,并且可以从主机端容易管理和更新。*
让我们来探究在一个容器中建立一个运行在 systemd 之下的 BIND 服务器。在这一部分,我们将看到建立自己的容器以及管理 BIND 配置和数据文件。
在第二部分,我们将看到主机中的 systemd 怎样和容器中的 systmed 整合。我们将探究管理容器中的服务,并且使它作为一种主机中的服务。
## **创建 BIND 容器**
为了使 systemd 在一个容器中容易运行,我们首先需要在主机中增加两个包:`oci-register-machine` 和 `oci-systemd-hook`。`oci-systemd-hook` 这个钩子允许我们在一个容器中运行 systemd而不需要使用特权容器或者手工配置 tmpfs 和 cgroups。`oci-register-machine` 这个钩子允许我们使用 systemd 工具如 `systemctl``machinectl` 来跟踪容器。
```
[root@rhel7-host ~]# yum install oci-register-machine oci-systemd-hook
```
回到创建我们的 BIND 容器上。[红帽企业版 Linux 7 基础映像](https://access.redhat.com/containers)包含 systemd 作为一个 init 系统。我们安装并激活 BIND 正如我们在典型系统中做的那样。你可以从资源库中的 [git 仓库中下载这份 Dockerfile](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo)。
```
[root@rhel7-host bind]# vi Dockerfile
# Dockerfile for BIND
FROM registry.access.redhat.com/rhel7/rhel
ENV container docker
RUN yum -y install bind && \
yum clean all && \
systemctl enable named
STOPSIGNAL SIGRTMIN+3
EXPOSE 53
EXPOSE 53/udp
CMD [ "/sbin/init" ]
```
因为我们以 PID 1 来启动一个 init 系统,当我们告诉容器停止时,需要改变 docker CLI 发送的信号。从 `kill` 系统调用手册中(man 2 kill):
```
The only signals that can be sent to process ID 1, the init
process, are those for which init has explicitly installed
signal handlers. This is done to assure the system is not
brought down accidentally.
```
对于 systemd 的信号处理函数来说,`SIGRTMIN+3` 是对应于 `systemd start halt.target` 的信号。我们也应该为 BIND 暴露 TCP 和 UDP 端口,因为这两种协议都可能被用到。
## **管理数据**
有了一个实用的 BIND 服务,我们需要一种管理配置和区域文件的方法。目前这些都在容器里面,所以我们任何时候都可以进入容器去更新配置或者改变一个区域文件。从管理者角度来说,这并不是很理想。当要更新 BIND 时,我们将需要重建这个容器,所以映像中的改变将会丢失。任何时候我们需要更新一个文件或者重启服务时,都需要进入这个容器,而这增加了步骤和时间。
相反的,我们将从这个容器中提取配置和数据文件,把它们拷贝到主机,然后在运行的时候挂载它们。这种方式我们可以很容易地重启或者重建容器,而不会丢失做出的更改。我们也可以使用容器外的编辑器来更改配置和区域文件。因为这个容器的数据看起来像“该系统服务的特定站点数据”,让我们遵循文件系统层次并在当前主机上创建 `/srv/named` 目录来保持管理权分离。
```
[root@rhel7-host ~]# mkdir -p /srv/named/etc
[root@rhel7-host ~]# mkdir -p /srv/named/var/named
```
***提示:如果你正在迁移一个已有的配置文件,你可以跳过下面的步骤,并且将它直接拷贝到 `/srv/named` 目录下。你可能仍然想用一个临时容器去检查分配给这个容器的 GID。***
让我们建立并运行一个临时容器来检查 BIND。在将 init 进程作为 PID 1 运行时,我们不能交互地运行这个容器来获取一个 shell。我们会在容器启动后执行 shell并且使用 `rpm` 命令来检查重要文件。
```
[root@rhel7-host ~]# docker build -t named .
[root@rhel7-host ~]# docker exec -it $( docker run -d named ) /bin/bash
[root@0e77ce00405e /]# rpm -ql bind
```
对于这个例子来说,我们将需要 `/etc/named.conf``/var/named/` 目录下的任何文件。我们可以使用 `machinectl` 命令来提取它们。如果有一个以上的容器注册了,我们可以使用 `machinectl status` 命令来查看任一机器上运行的是什么。一旦有了这个配置我们就可以终止这个临时容器了。
*如果你喜欢,资源库中也有一个[样例 `named.conf` 和针对 `example.com` 的区域文件](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#repo)*
```
[root@rhel7-host bind]# machinectl list
MACHINE CLASS SERVICE
8824c90294d5a36d396c8ab35167937f container docker
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /etc/named.conf /srv/named/etc/named.conf
[root@rhel7-host ~]# machinectl copy-from 8824c90294d5a36d396c8ab35167937f /var/named /srv/named/var/named
[root@rhel7-host ~]# docker stop infallible_wescoff
```
## **最终的创建**
为了创建和运行最终的容器,添加卷选项到挂载:
- 将文件 `/srv/named/etc/named.conf` 映射为 `/etc/named.conf`
- 将目录 `/srv/named/var/named` 映射为 `/var/named`
因为这是我们最终的容器,我们将提供一个有意义的名字,以供我们以后引用。
```
[root@rhel7-host ~]# docker run -d -p 53:53 -p 53:53/udp -v /srv/named/etc/named.conf:/etc/named.conf:Z -v /srv/named/var/named:/var/named:Z --name named-container named
```
在最终容器运行时,我们可以更改本机配置来改变这个容器中 BIND 的行为。这个 BIND 服务器将需要在这个容器分配的任何 IP 上监听。确保任何新文件的 GID 与来自这个容器中的剩余 BIND 文件相匹配。
```
[root@rhel7-host bind]# cp named.conf /srv/named/etc/named.conf
[root@rhel7-host ~]# cp example.com.zone /srv/named/var/named/example.com.zone
[root@rhel7-host ~]# cp example.com.rr.zone /srv/named/var/named/example.com.rr.zone
```
> [很好奇为什么我不需要在主机目录中改变 SELinux 上下文?](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_1)
我们将运行这个容器提供的 `rndc` 二进制文件重新加载配置。我们可以使用 `journald` 以同样的方式检查 BIND 日志。如果运行出现错误,你可以在主机中编辑这个文件,并且重新加载配置。在主机中使用 `host` 或 `dig`,我们可以检查容器中的 DNS 服务对 example.com 请求的响应。
```
[root@rhel7-host ~]# docker exec -it named-container rndc reload
server reload successful
[root@rhel7-host ~]# docker exec -it named-container journalctl -u named -n
-- Logs begin at Fri 2017-05-12 19:15:18 UTC, end at Fri 2017-05-12 19:29:17 UTC. --
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 9.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: A.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: B.E.F.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: automatic empty zone: 8.B.D.0.1.0.0.2.IP6.ARPA
May 12 19:29:17 ac1752c314a7 named[27]: reloading configuration succeeded
May 12 19:29:17 ac1752c314a7 named[27]: reloading zones succeeded
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: loaded serial 2001062601
May 12 19:29:17 ac1752c314a7 named[27]: zone 1.0.10.in-addr.arpa/IN: sending notifies (serial 2001062601)
May 12 19:29:17 ac1752c314a7 named[27]: all zones loaded
May 12 19:29:17 ac1752c314a7 named[27]: running
[root@rhel7-host bind]# host www.example.com localhost
Using domain server:
Name: localhost
Address: ::1#53
Aliases:
www.example.com is an alias for server1.example.com.
server1.example.com is an alias for mail
```
> [你的区域文件没有更新吗?可能是因为你的编辑器,而不是序列号。](http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/#sidebar_2)
## **终点线**
我们已经知道我们打算完成什么。从容器中为 DNS 请求和区域文件提供服务。更新之后,我们已经得到一个持久化的位置来管理更新和配置。
在这个系列的第二部分,我们将看到怎样将一个容器看作为主机中的一个普通服务。
---
[跟随 RHEL 博客](http://redhatstackblog.wordpress.com/feed/)通过电子邮件来获得本系列第二部分和其它新文章的更新。
---
## **额外的资源**
**附带文件的 Github 仓库:**[**https://github.com/nzwulfin/named-container**](https://github.com/nzwulfin/named-container)
**侧边栏 1:** **通过容器访问本地文件的 SELinux 上下文**
你可能已经注意到当我从容器向本地主机拷贝文件时,我没有运行 `chcon` 将主机中的文件类型改变为 `svirt_sandbox_file_t`。为什么它没有终止?将一个文件拷贝到 `/srv` 本应该将这个文件标记为类型 `var_t`。我 `setenforce 0` 了吗?
当然没有,这会让 Dan Walsh 哭出来的译注Dan Walsh 是 SELinux 的主要开发者之一)。是的,`machinectl` 确实将文件标记为期望的类型,可以看一下:
启动一个容器之前:
```
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
-rw-r-----. unconfined_u:object_r:var_t:s0 /srv/named/etc/named.conf
```
不,我在运行时使用了一个让 Dan Walsh 高兴的卷选项:`:Z`。`-v /srv/named/etc/named.conf:/etc/named.conf:Z` 命令的这部分做了两件事情:首先,它表示这个卷需要使用一个私有的 SELinux 标记来重新标记;其次,它表示以读写方式挂载。
启动容器之后:
```
[root@rhel7-host ~]# ls -Z /srv/named/etc/named.conf
-rw-r-----. root 25 system_u:object_r:svirt_sandbox_file_t:s0:c821,c956 /srv/named/etc/named.conf
```
**侧边栏 2:** **VIM 备份行为改变 inode**
如果你在本地主机中使用 `vim` 来编辑配置文件,并且你没有看到容器中发生的改变,你可能是不经意间创建了容器感知不到的新文件。在编辑时,有三种 `vim` 设定影响备份副本的创建backup、writebackup 和 backupcopy。
我从官方 VIM backup_table 中剪下了应用到 RHEL 7 中的默认配置
[[http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table](http://vimdoc.sourceforge.net/htmldoc/editing.html#backup-table)]
```
backup writebackup
off on backup current file, deleted afterwards (default)
```
所以,我们不会留下一直存在的波浪号(~备份副本但确实会创建备份。另一个设定是 backupcopy`auto` 是出厂的默认值:
```
"yes" make a copy of the file and overwrite the original one
"no" rename the file and write a new one
"auto" one of the previous, what works best
```
这种组合设定意味着当你编辑一个文件时,除非 `vim` 有理由不去(检查文件逻辑),你将会得到包含你编辑之后的新文件,当你保存时它会重命名原先的文件。这意味着这个文件获得了新的 inode。对于大多数情况这不是问题但是这里绑定挂载到一个容器对 inode 的改变很敏感。为了解决这个问题,你需要改变 backupcopy 的行为。
不管是在 `vim` 会话中还是在你的 `.vimrc` 中,添加 `set backupcopy=yes`。这将确保原先的文件被截断并覆写,从而维持了 inode使得改变能够传播到容器中。
------------
via: http://rhelblog.redhat.com/2017/07/19/containing-system-services-in-red-hat-enterprise-linux-part-1/
作者:[Matt Micene ][a]
译者:[liuxinyu123](https://github.com/liuxinyu123)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,713 @@
深入理解 BPF一个阅读清单
============================================================
* [什么是 BPF?][143]
* [深入理解字节码bytecode][144]
* [资源][145]
* [简介][23]
* [关于 BPF][1]
* [关于 XDP][2]
* [关于 基于 eBPF 或者 eBPF 相关的其它组件][3]
* [文档][24]
* [关于 BPF][4]
* [关于 tc][5]
* [关于 XDP][6]
* [关于 P4 和 BPF][7]
* [教程][25]
* [示例][26]
* [来自内核的示例][8]
* [来自包 iproute2 的示例][9]
* [来自 bcc 工具集的示例][10]
* [手册页面][11]
* [代码][27]
* [内核中的 BPF 代码][12]
* [XDP 钩子hook代码][13]
* [bcc 中的 BPF 逻辑][14]
* [通过 tc 使用代码去管理 BPF][15]
* [BPF 实用工具][16]
* [其它感兴趣的 chunks][17]
* [LLVM 后端][18]
* [在用户空间中运行][19]
* [提交日志][20]
* [排错][28]
* [编译时的错误][21]
* [加载和运行时的错误][22]
* [更多][29]
_~ [更新于][146] 2017-11-02 ~_
# 什么是 BPF?
BPF是伯克利包过滤器**B**erkeley **P**acket **F**ilter的第一个字母的组合最初构想于 1992 年,是为了提供一种过滤包的方法,以避免从内核到用户空间的无用的数据包副本。它最初是由从用户空间注入到内核的一个简单的字节码构成,在内核中它通过一个校验器进行检查 — 以避免内核崩溃或者安全问题 — 并附加到一个套接字上,然后运行在每个接收到的包上。几年后它被移植到了 Linux 上并且应用于一小部分应用程序上例如tcpdump。简化的语言以及存在于内核中的即时编译器JIT使 BPF 成为一个性能卓越的工具。
然后,在 2013 年Alexei Starovoitov 对 BPF 进行彻底地改造,并增加了新的功能,改善了它的性能。这个新版本被命名为 eBPF (意思是 “extended BPF”同时将以前的变成 cBPF意思是 “classic” BPF。出现了如映射maps和 tail 调用calls。JIT 编译器被重写了。新的语言比 cBPF 更接近于原生机器语言。并且,在内核中创建了新的附加点。
感谢那些新的钩子eBPF 程序才可以被设计用于各种各样的使用案例它分为两个应用领域。其中一个应用领域是内核跟踪和事件监控。BPF 程序可以被附加到 kprobes并且它与其它跟踪模式相比有很多的优点有时也有一些缺点
其它的应用领域是网络程序。除了套接字过滤器eBPF 程序可以附加到 tcLinux 流量控制工具) ingress 或者 egress 接口,并且用一种高效的方式去执行各种包处理任务。它在这个领域打开了一个新的思路。
并且 eBPF 的性能通过为 IO Visor 项目开发的技术进一步得到提升:也为 XDP“eXpress Data Path”增加了新的钩子它是不久前增加到内核中的一种新的快速路径。XDP 与 Linux 栈一起工作,并且依赖 BPF 去执行更快的包处理。
甚至一些项目,如 P4、Open vSwitch也在 [考虑][155] 或者开始接近approachBPF。其它的一些如 CETH、Cilium则是完全基于它的。BPF 是如此流行,因此,我们可以预计,不久之后将围绕它出现很多的工具和项目 …
# 深入理解字节码
就像我一样:我的一些工作(包括 [BEBA][156])是非常依赖 eBPF 的,并且在这个网站上以后的几篇文章将关注于这个主题。从逻辑上说,在这篇文章中我在深入到细节之前,希望以某种方式去介绍 BPF — 我的意思是,一个真正的介绍,更多的在 BPF 上开发的功能,它在开始节已经提供了一些简短的摘要:什么是 BPF 映射? Tail 调用?内部结构是什么样子?等等。但是,在这个网站上已经有很多这个主题的介绍了,并且,我也不希望去创建 “另一个 BPF 介绍” 的重复的文章。
毕竟,我花费了很多的时间去阅读和学习关于 BPF 的知识,并且,也是这么做的,因此,在这里我们将要做的是,我收集了非常多的关于 BPF 的阅读材料:介绍、文档、也有教程或者示例。这里有很多的材料可以去阅读,但是,为了去阅读它,首先要去 _找到_ 它。因此,为了能够帮助更多想去学习和使用 BPF 的人,现在的这篇文章是介绍了一个资源列表。这里有各种阅读材料,它可以帮你深入理解内核字节码的机制。
# 资源
![](https://qmonnet.github.io/whirl-offload/img/icons/pic.svg)
### 简介
这篇文章中下面的链接提供了一个 BPF 的基本的概述,或者,一些与它密切相关的一些主题。如果你对 BPF 非常陌生,你可以在这些介绍文章中挑选出一篇你喜欢的文章去阅读。如果你已经理解了 BPF你可以针对特定的主题去阅读下面是阅读清单。
### 关于 BPF
关于 eBPF 的简介:
* [_利用 BPF 和 XDP 实现可编程的内核网络数据路径_][53]  (Daniel Borkmann, OSSNA17, Los Angeles, September 2017):
快速理解所有的关于 eBPF 和 XDP 的基础概念的许多文章中的一篇(大多数是关于网络处理的)
* [BSD 包过滤器][54] (Suchakra Sharma, June 2017): 
一篇非常好的介绍文章,大多数是关于跟踪方面的。
* [_BPF跟踪及更多_][55]  (Brendan Gregg, January 2017):
大多数内容是跟踪使用案例相关的。
* [_Linux BPF 的超强功能_][56]  (Brendan Gregg, March 2016):
第一部分是关于 **火焰图flame graphs** 的使用。
* [_IO Visor_][57]  (Brenden Blanco, SCaLE 14x, January 2016):
介绍了 **IO Visor 项目**
* [_大型机上的 eBPF_][58]  (Michael Holzheu, LinuxCon, Dubin, October 2015)
* [_在 Linux 上新的令人激动的跟踪新产品_][59]  (Elena Zannoni, LinuxCon, Japan, 2015)
* [_BPF — 内核中的虚拟机_][60]  (Alexei Starovoitov, February 2015):
eBPF 的作者写的一篇介绍文章。
* [_扩展 extended BPF_][61]  (Jonathan Corbet, July 2014)
**BPF 内部结构**
* Daniel Borkmann 正在做一项令人称奇的工作,它用于去展现 eBPF 的 **内部结构**,特别是通过几次关于 **eBPF 用于 tc** 的演讲和论文。
* [_使用 tc 的 cls_bpf 的高级可编程和它的最新更新_][30]  (netdev 1.2, Tokyo, October 2016):
Daniel 介绍了 eBPF 的细节,它使用了隧道和封装、直接包访问、和其它特性。
* [_自 netdev 1.1 以来的 cls_bpf/eBPF 更新_][31]  (netdev 1.2, Tokyo, October 2016, part of [this tc workshop][32])
* [_使用 cls_bpf 实现完全可编程的 tc 分类器_][33]  (netdev 1.1, Sevilla, February 2016)
介绍 eBPF 之后,它提供了许多 BPF 内部机制映射管理、tail 调用、校验器)的见解。对于大多数有志于 BPF 的人来说,这是必读的![全文在这里][34]。
* [_Linux tc 和 eBPF_][35]  (fosdem16, Brussels, Belgium, January 2016)
* [_eBPF 和 XDP 攻略和最新更新_][36]  (fosdem17, Brussels, Belgium, February 2017)
这些介绍可能是理解 eBPF 内部机制设计与实现的最佳文档资源之一。
[**IO Visor 博客**][157] 有一些关于 BPF 感兴趣的技术文章。它们中的一些包含了许多营销讨论。
**内核跟踪**:总结了所有的已有的方法,包括 BPF
* [_邂逅 eBPF 和内核跟踪_][62]  (Viller Hsiao, July 2016):
Kprobes、uprobes、ftrace
* [_Linux 内核跟踪_][63]  (Viller Hsiao, July 2016):
Systemtap、Kernelshark、trace-cmd、LTTng、perf-tool、ftrace、hist-trigger、perf、function tracer、tracepoint、kprobe/uprobe …
关于 **事件跟踪和监视**Brendan Gregg 使用 eBPF 的一些心得,它使用 eBPFR 的一些案例,他做的非常出色。如果你正在做一些内核跟踪方面的工作,你应该去看一下他的关于 eBPF 和火焰图相关的博客文章。其中的大多数都可以 [从这篇文章中][158] 访问,或者浏览他的博客。
介绍 BPF也介绍 **Linux 网络的一般概念**
* [_Linux 网络详解_][64]  (Thomas Graf, LinuxCon, Toronto, August 2016)
* [_内核网络攻略_][65]  (Thomas Graf, LinuxCon, Seattle, August 2015)
**硬件 offload**译者注offload 是指原本由软件来处理的一些操作交由硬件来完成,以提升吞吐量,降低 CPU 负荷。):
* eBPF 与 tc 或者 XDP 一起支持硬件 offload开始于 Linux 内核版本 4.9,并且由 Netronome 提出的。这里是关于这个特性的介绍:[eBPF/XDP hardware offload to SmartNICs][147] (Jakub Kicinski 和 Nic Viljoen, netdev 1.2, Tokyo, October 2016)
关于 **cBPF**
* [_BSD 包过滤器一个用户级包捕获的新架构_][66] (Steven McCanne 和 Van Jacobson, 1992)
它是关于classicBPF 的最早的论文。
* [关于 BPF 的 FreeBSD 手册][67] 是理解 cBPF 程序的可用资源。
* 关于 cBPF,Daniel Borkmann 实现的至少两个演示,[一是,在 2013 年 mmap 中BPF 和 Netsniff-NG][68],以及 [在 2014 中关于 tc 和 cls_bpf 的的一个非常完整的演示][69]。
* 在 Cloudflare 的博客上Marek Majkowski 提出的它的 [BPF 字节码与 **iptables** 的 `xt_bpf` 模块一起的应用][70]。值得一提的是,从 Linux 内核 4.10 开始eBPF 也是通过这个模块支持的。(虽然,我并不知道关于这件事的任何讨论或者文章)
* [Libpcap 过滤器语法][71]
### 关于 XDP
* 在 IO Visor 网站上的 [XDP 概述][72]。
* [_eXpress Data Path (XDP)_][73]  (Tom Herbert, Alexei Starovoitov, March 2016):
这是第一个关于 XDP 的演示。
* [_BoF - BPF 能为你做什么_][74]  (Brenden Blanco, LinuxCon, Toronto, August 2016)。
* [_eXpress Data Path_][148]  (Brenden Blanco, Linux Meetup at Santa Clara, July 2016)
包含一些(有点营销的意思?)**benchmark 结果**!使用一个单核心:
* ip 路由丢弃: ~3.6 百万包每秒Mpps
* 使用 BPFtc使用 clsact qdisc丢弃 ~4.2 Mpps
* 使用 BPFXDP 丢弃20 Mpps CPU 利用率 < 10%
* XDP 重写转发在端口上它接收到的包10 Mpps
(测试是用 mlx4 驱动执行的)。
* Jesper Dangaard Brouer 有几个非常好的幻灯片,它可以从本质上去理解 XDP 的内部结构。
* [_XDP eXpress Data Path, Intro and future use-cases_][37]  (September 2016):
_“Linux 内核与 DPDK 的斗争”_ 。**未来的计划**(在写这篇文章时)它用 XDP 和 DPDK 进行比较。
* [_网络性能研讨_][38]  (netdev 1.2, Tokyo, October 2016):
关于 XDP 内部结构和预期演化的附加提示。
* [_XDP eXpress Data Path, 可用于 DDoS 防护_][39]  (OpenSourceDays, March 2017):
包含了关于 XDP 的详细情况和使用案例、用于 **benchmarking****benchmark 结果**、和 **代码片断**,以及使用 eBPF/XDP基于一个 IP 黑名单模式)的用于 **基本的 DDoS 防护**
* [_内存 vs. 网络激发和修复内存瓶颈_][40]  (LSF Memory Management Summit, March 2017):
面对 XDP 开发者提出关于当前 **内存问题** 的许多细节。不要从这一个开始,如果你已经理解了 XDP并且想去了解它在页面分配方面的真实工作方式这是一个非常有用的资源。
* [_XDP 能为其它人做什么_][41]netdev 2.1, Montreal, April 2017with Andy Gospodarek
对于普通人使用 eBPF 和 XDP 怎么去开始。这个演示也由 Julia Evans 在 [他的博客][42] 上做了总结。
Jesper 也创建了并且尝试去扩展了有关 eBPF 和 XDP 的一些文档,查看 [相关节][75]。)
* [_XDP 研讨 — 介绍、体验、和未来发展_][76]Tom Herbert, netdev 1.2, Tokyo, October 2016) — 在这篇文章中,只有视频可用,我不知道是否有幻灯片。
* [_在 Linux 上进行高速包过滤_][149]  (Gilberto Bertin, DEF CON 25, Las Vegas, July 2017) — 在 Linux 上的最先进的包过滤的介绍,面向 DDoS 的保护、讨论了关于在内核中进行包处理、内核旁通、XDP 和 eBPF。
### 关于 基于 eBPF 或者 eBPF 相关的其它组件
* [_在边缘上的 P4_][77]  (John Fastabend, May 2016):
提出了使用 **P4**,一个包处理的描述语言,使用 BPF 去创建一个高性能的可编程交换机。
* 如果你喜欢音频的介绍,这里有一个相关的 [OvS Orbit 片断(#11),叫做 _在边缘上的 **P4**_][78],日期是 2016 年 8 月。OvS Orbit 是对 Ben Pfaff 的访谈,它是 Open vSwitch 的其中一个核心维护者。在这个场景中John Fastabend 是被访谈者。
* [_P4, EBPF 和 Linux TC Offload_][79]  (Dinan Gunawardena and Jakub Kicinski, August 2016):
另一个演示 **P4** 的,使用一些相关的元素在 Netronome 的 **NFP**(网络流处理器)架构上去实现 eBPF 硬件 offload。
* **Cilium** 是一个由 Cisco 最先发起的技术,它依赖 BPF 和 XDP 去提供 “在容器中基于 eBPF 程序,在运行中生成的强制实施的快速的内核中的网络和安全策略”。[这个项目的代码][150] 在 GitHub 上可以访问到。Thomas Graf 对这个主题做了很多的演示:
* [_Cilium: 对容器利用 BPF & XDP 实现网络 & 安全_][43]也特别展示了一个负载均衡的使用案例Linux Plumbers conference, Santa Fe, November 2016
* [_Cilium: 对容器利用 BPF & XDP 实现网络 & 安全_][44] Docker Distributed Systems Summit, October 2016 — [video][45]
* [_Cilium: 使用 BPF 和 XDP 的快速 IPv6 容器网络_][46] LinuxCon, Toronto, August 2016
* [_Cilium: 为窗口使用 BPF & XDP_][47] fosdem17, Brussels, Belgium, February 2017
在不同的演示中重复了大量的内容如果有疑问就选最近的一个。Daniel Borkmann 作为 Google 开源博客的特邀作者,也写了 [Cilium 简介][80]。
* 这里也有一个关于 **Cilium** 的播客节目:一个 [OvS Orbit episode (#4)][81],它是 Ben Pfaff 访谈 Thomas Graf 2016 年 5 月),和 [另外一个 Ivan Pepelnjak 的播客][82],仍然是 Thomas Graf 的与 eBPF、P4、XDP 和 Cilium 2016 年 10 月)。
* **Open vSwitch** (OvS),它是 **Open Virtual Network**OVN一个开源的网络虚拟化解决方案相关的项目正在考虑在不同的层次上使用 eBPF它已经实现了几个概念验证原型
* [使用 eBPF 的 Offloading OVS 流处理器][48] (William (Cheng-Chun) Tu, OvS conference, San Jose, November 2016)
* [将 OVN 的灵活性与 IOVisor 的高效率相结合][49] (Fulvio Risso, Matteo Bertrone and Mauricio Vasquez Bernal, OvS conference, San Jose, November 2016)
据我所知,这些 eBPF 的使用案例看上去仅处于提议阶段(并没有合并到 OvS 的主分支中),但是,看它带来了什么将是非常有趣的事情。
* XDP 的设计对分布式拒绝访问DDoS攻击是非常有用的。越来越多的演示都关注于它。例如从 Cloudflare 中的人们的讨论([_XDP in practice: integrating XDP in our DDoS mitigation pipeline_][83])或者从 Facebook 上([_Droplet: DDoS countermeasures powered by BPF + XDP_][84])在 netdev 2.1 会议上,在 Montreal、Canada、在 2017 年 4 月,都存在这样的很多使用案例。
* [_CETH for XDP_][85] Yan Chan 和 Yunsong Lu、Linux Meetup、Santa Clara、July 2016
**CETH**,是由 Mellanox 发起的为实现更快的网络 I/O 而主张的通用以太网驱动程序架构。
* [**VALE 交换机**][86],另一个虚拟交换机,它可以与 netmap 框架结合,有 [一个 BPF 扩展模块][87]。
* **Suricata**,一个开源的入侵检测系统,它的捕获旁通特性 [似乎是依赖于 eBPF 组件][88]
[_The adventures of a Suricate in eBPF land_][89]  (Éric Leblond, netdev 1.2, Tokyo, October 2016)
[_eBPF and XDP seen from the eyes of a meerkat_][90]  (Éric Leblond, Kernel Recipes, Paris, September 2017)
* [InKeV: 对于 DCN 的内核中分布式网络虚拟化][91] (Z. Ahmed, M. H. Alizai and A. A. Syed, SIGCOMM, August 2016):
**InKeV** 是一个基于 eBPF 的虚拟网络、目标数据中心网络的数据路径架构。它最初由 PLUMgrid 提出,并且声称相比基于 OvS 的 OpenStack 解决方案可以获得更好的性能。
* [_**gobpf** - 从 Go 中利用 eBPF_][92] Michael Schubert, fosdem17, Brussels, Belgium, February 2017
“一个从 Go 中的库,可以去创建、加载和使用 eBPF 程序”
* [**ply**][93] 是为 Linux 实现的一个小的但是非常灵活的开源动态 **跟踪器**,它的一些特性非常类似于 bcc 工具,是受 awk 和 dtrace 启发,但使用一个更简单的语言。它是由 Tobias Waldekranz 写的。
* 如果你读过我以前的文章,你可能对我在这篇文章中的讨论感兴趣,[使用 eBPF 实现 OpenState 接口][151],关于包状态处理,在 fosdem17 中。
![](https://qmonnet.github.io/whirl-offload/img/icons/book.svg)
### 文档
一旦你对 BPF 是做什么的有一个大体的理解。你可以抛开一般的演示而深入到文档中了。下面是 BPF 的规范和功能的最全面的文档,按你的需要挑一个开始阅读吧!
### 关于 BPF
* **BPF 的规范**(包含 classic 和 extended 版本)可以在 Linux 内核的文档中,和特定的文件 [linux/Documentation/networking/filter.txt][94] 中找到。BPF 使用以及它的内部结构也被记录在那里。此外,当加载 BPF 代码失败时,在这里可以找到 **被校验器抛出的错误信息**,这有助于你排除不明确的错误信息。
* 此外,在内核树中,在 eBPF 那里有一个关于 **常见的问 & 答** 的文档,它在文件 [linux/Documentation/bpf/bpf_design_QA.txt][95] 中。
* … 但是,内核文档是非常难懂的,并且非常不容易阅读。如果你只是去查找一个简单的 eBPF 语言的描述,可以去 IO Visor 的 GitHub 仓库,那儿有 [它的 **概括性描述**][96]。
* 顺便说一下IO Visor 项目收集了许多 **关于 BPF 的资源**。大部分,分别在 bcc 仓库的 [文档目录][97] 中,和 [bpf-docs 仓库][98] 的整个内容中,它们都在 GitHub 上。注意,这个非常好的 [BPF **参考指南**][99] 包含一个详细的 BPF C 和 bcc Python 的 helper 的描述。
* 想深入到 BPF那里有一些必要的 **Linux 手册页**。第一个是 [`bpf(2)` man page][100] 关于 `bpf()` **系统调用**,它用于从用户空间去管理 BPF 程序和映射。它也包含一个 BPF 高级特性的描述(程序类型、映射、等等)。第二个是主要去处理希望去附加到 tc 接口的 BPF 程序:它是 [`tc-bpf(8)` man page][101],它是 **使用 BPF 和 tc** 的一个参考,并且包含一些示例命令和参考代码。
* Jesper Dangaard Brouer 发起了一个 **更新 eBPF Linux 文档** 的尝试,包含 **不同的映射**。[他有一个草案][102],欢迎去贡献。一旦完成,这个文档将被合并进 man 页面并且进入到内核文档。
* Cilium 项目也有一个非常好的 [**BPF 和 XDP 参考指南**][103],它是由核心的 eBPF 开发者写的,它被证明对于 eBPF 开发者是极其有用的。
* David Miller 在 [xdp-newbies][152] 邮件列表中发了几封关于 eBPF/XDP 内部结构的富有启发性的电子邮件。我找不到一个单独的地方收集它们的链接,因此,这里是一个列表:
* [bpf.h 和你 …][50]
* [Contextually speaking…][51]
* [BPF 校验器概述][52]
最后一个可能是目前来说关于校验器的最佳的总结。
* Ferris Ellis 发布的 [一个关于 **eBPF 的系列博客文章**][104]。作为我写的这个短文,第一篇文章是关于 eBPF 的历史背景和未来期望。接下来的文章将更多的是技术方面,和前景展望。
* [一个 **每个内核版本的 BPF 特性列表**][153] 在 bcc 仓库中可以找到。如果你想去知道运行一个给定的特性所要求的最小的内核版本,它是非常有用的。我贡献和添加了链接到提交中,它介绍了每个特性,因此,你也可以从那里很容易地去访问提交历史。
### 关于 tc
当为了网络目的使用 BPF 与 tc 进行结合时Linux 流量控制(**t**raffic **c**ontrol工具它可用于去采集关于 tc 的可用功能的信息。这里有几个关于它的资源。
* 找到关于 **Linux 上 QoS** 的简单教程是很困难的。这里有两个链接,它们很长而且很难懂,但是,如果你可以抽时间去阅读它,你将学习到几乎关于 tc 的任何东西(虽然,关于 BPF 它什么也没有)。它们在这里:[_怎么去实现流量控制_  (Martin A. Brown, 2006)][105],和 [_怎么去实现 Linux 的高级路由 & 流量控制_  (“LARTC”) (Bert Hubert & al., 2002)][106]。
* 你的系统上的 **tc 手册页** 可能不是最新的,因为其中有几个组件是最近才增加的。如果你没有找到关于特定的队列规则、分类器或者过滤器的文档,它可能在最新的 [tc 组件的手册页][107] 中。
* 一些额外的材料可以在 iproute2 包自已的文件中找到:这个包中有 [一些文档][108],包括一些文件,它可以帮你去理解 [**tc 的 action** 的功能][109]。
**注意:** 这些文件已于 2017 年 10 月从 iproute2 中删除,不过仍然可以从 Git 历史中找到。
* 并非严格意义上的文档:这里有 [一个关于 tc 的几个特性的研讨会][110]包含过滤、BPF、tc offload 等),由 Jamal Hadi Salim 在 netdev 1.2 会议上组织2016 年 10 月)。
* 额外信息:如果你经常使用 `tc`,这里有一个好消息:我为这个工具 [写了一个 bash 补全功能][111],并且它从 4.6 版本开始随 iproute2 包一起发布!
### 关于 XDP
* 对于 XDP有一些 [进展中的文档(包括规范)][112],它由 Jesper Dangaard Brouer 发起,并预期成为一项协作性的工作。它仍在推进中2016 年 9 月:内容可能会变化,将来或许会挪到别处)。如果你想去改善它Jesper [欢迎大家贡献][113]。
* 来自 Cilium 项目的 [BPF 和 XDP 参考指南][114] … 好吧,这个名字已经说明了一切。
### 关于 P4 和 BPF
[P4][159] 是一种用于指定交换机行为的语言。它可以被编译为许多种目标硬件或软件。因此,你可能猜到了,这些目标之一就是 BPF … 不过只是部分支持:一些 P4 特性不能被转化到 BPF 中同样地BPF 能做的一些事情也无法用 P4 表达。不过,关于 **P4 与 BPF 结合使用** 的文档 [曾经隐藏在 bcc 仓库中][160]。随着 P4_16 版本的发布p4c 参考编译器包含了 [一个 eBPF 后端][161],调用方式见下面的草图。
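下面是一个使用 p4c 的 eBPF 后端的示意性调用。注意:这里的命令名和参数只是根据印象给出的假设,实际用法请以 p4c 仓库中 backends/ebpf 目录下的 README 为准:

```
# 将 P4 程序编译成 C 代码,之后可再用 clang/LLVM 编译成 BPF 字节码
p4c-ebpf -o my_prog.c my_prog.p4
```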
![](https://qmonnet.github.io/whirl-offload/img/icons/flask.svg)
### 教程
Brendan Gregg 为想 **使用 bcc 工具** 跟踪和监视内核事件的人制作了一个非常好的 **教程**。[第一个教程是关于如何使用 bcc 工具][162],它总共有十一步,教你理解怎么使用已有的工具;而 [**针对 Python 开发者** 的另一个教程][163] 专注于开发新工具,总共有十七节“课程”。
Sasha Goldshtein 也有一些 [_**Linux 跟踪研究材料**_][164],其中有几处涉及使用 eBPF 进行跟踪。
Jean-Tiare Le Bigot 的文章提供了一个详细的(且有指导意义的)示例:[使用 perf 和 eBPF 为 ping 请求和回复设置一个低级的跟踪器][165]。
对于网络相关的 eBPF 使用案例也有几个教程。Netronome 的 [Open NFP][166] 平台上有一些有趣的文档,其中包含一个 _eBPF Offload 入门指南_。另外Jesper 的演讲 [_XDP 能为其它人做什么_][167],可能是 XDP 入门的最好方法之一。
![](https://qmonnet.github.io/whirl-offload/img/icons/gears.svg)
### 示例
有示例是非常好的,可以看看它们是如何工作的。但是 BPF 程序示例分散在几个项目中,因此,我列出了我所知道的所有示例。注意,这些示例并不都使用相同的 helper例如tc 和 bcc 都有一套自己的 helper使得可以用 C 语言写 BPF 程序)。
### 来自内核的示例
主要的程序类型在内核的示例中都有涉及:绑定到套接字或者 tc 接口的过滤器、事件跟踪/监视、甚至是 XDP。你可以在 [linux/samples/bpf/][168] 目录中找到这些示例,构建方法见下面的草图。
也不要忘记查看引入了特定特性的 git 提交历史,其中也可能包含该特性的详细示例。
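下面是构建这些内核示例的一个大致流程草图(细节随内核版本不同而变化,请以 samples/bpf/ 目录下的 README 为准):

```
# 在内核源码树的根目录下执行(需要较新的 clang/LLVM 工具链)
make headers_install    # 先安装内核头文件
make samples/bpf/       # 构建 BPF 示例及其用户空间加载器

# 构建产物会出现在 samples/bpf/ 目录下
ls samples/bpf/
```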
### 来自包 iproute2 的示例
iproute2 包也提供了几个示例。它们明显偏向网络编程:这些程序都是要附加到 tc 的 ingress 或者 egress 接口上的。这些示例在 [iproute2/examples/bpf/][169] 目录中。
### 来自 bcc 工具集的示例
许多示例都 [与 bcc 一起提供][170]
* 一些网络编程的示例在相应的目录下面。它们包括套接字过滤器、tc 过滤器和一个 XDP 程序。
* `tracing` 目录包含许多 **跟踪编程** 的示例。前面教程中提到的都在那里。这些程序涵盖了事件跟踪的很大范围,并且其中一些是面向生产系统的。注意,在某些 Linux 发行版(至少是 Debian、Ubuntu、Fedora、Arch Linux这些程序已经被 [打包了][115],可以很“容易地”通过比如 `# apt install bcc-tools` 进行安装。但是在写这篇文章的时候(除了 Arch Linux首先需要安装 IO Visor 的包仓库。
* 也有一些 **使用 Lua** 作为另一种 BPF 后端的示例即BPF 程序用 Lua 写,而不是用 C 的一个子集,这样前端和后端可以使用同一种语言),它们在第三个目录中。
### 手册页面
虽然 bcc 通常是在内核中注入和运行 BPF 程序的最容易的方式,但通过 `tc` 工具将程序附加到 tc 接口也是可以做到的。因此,如果你打算 **将 BPF 与 tc 一起使用**,你可以在 [`tc-bpf(8)` 手册页][171] 中找到一些调用示例,下面也给出一个简单的草图。
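作为补充,下面是把一个编译好的 BPF 对象文件附加到 tc 接口的最小草图。这里假设网卡是 `eth0`、对象文件是 `bpf_prog.o`、其中的 ELF 段名为 `ingress`(段名取决于你编译对象文件时的写法),具体语法请以 `tc-bpf(8)` 手册页为准:

```
# 创建 clsact qdisc它为 BPF 程序提供 ingress/egress 钩子(较新内核,约 4.5+
tc qdisc add dev eth0 clsact

# 以“直接动作”da, direct-action模式将 BPF 程序作为过滤器附加到入方向
tc filter add dev eth0 ingress bpf da obj bpf_prog.o sec ingress

# 查看已附加的过滤器
tc filter show dev eth0 ingress
```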
![](https://qmonnet.github.io/whirl-offload/img/icons/srcfile.svg)
### 代码
有时候BPF 的文档或者示例并不够用,你可能只好在你喜欢的文本编辑器(它当然应该是 Vim中打开代码去阅读它。或者你想深入代码中去做一个补丁或者为 BPF 虚拟机增加一些新特性。因此,这里给出几个指向相关文件的建议,至于找到你想要的功能,就要靠你自己了!
### 在内核中的 BPF 代码
* 文件 [linux/include/linux/bpf.h][116] 和与其对应的 [linux/include/uapi/bpf.h][117] 包含关于 eBPF 的 **定义**,分别用于内核以及用户空间程序的接口。
* 同样地,文件 [linux/include/linux/filter.h][118] 和 [linux/include/uapi/filter.h][119] 包含 **运行 BPF 程序** 所使用的信息。
* BPF 相关的 **主要代码片断** 在 [linux/kernel/bpf/][120] 目录下面。**系统调用所允许的不同操作**,比如程序加载或者映射管理,是在文件 `syscall.c` 中实现的,而 `core.c` 包含 **解释器**。其它的文件有明显的命名:`verifier.c` 包含 **校验器**(不是开玩笑的),`arraymap.c` 包含与数组类型的 **映射** 交互的代码,等等。
* **helper 函数**,以及几个与网络相关(与 tc、XDP 一起使用)且对用户可用的功能,实现在 [linux/net/core/filter.c][121] 中。该文件也包含将 cBPF 字节码转换成 eBPF 的代码(因为在运行之前,内核会将所有 cBPF 程序转换成 eBPF
* **JIT 编译器** 在它们各自的架构目录下面比如x86 架构的在 [linux/arch/x86/net/bpf_jit_comp.c][122] 中。
* 在 [linux/net/sched/][123] 目录下,你可以找到 **tc 的 BPF 组件** 相关的代码,尤其是在文件 `act_bpf.c` action `cls_bpf.c`filter中。
* 我并没有深入研究 BPF 在 **事件跟踪** 中的使用,因此,我并不真正了解这些程序的钩子。在 [linux/kernel/trace/bpf_trace.c][124] 里有一些东西。如果你对它感兴趣,并且想去了解更多,你可以在 Brendan Gregg 的演示或者博客文章的基础上深入挖掘。
* 我也没有使用过 **seccomp-BPF**。但它的代码在 [linux/kernel/seccomp.c][125],并且可以在 [linux/tools/testing/selftests/seccomp/seccomp_bpf.c][126] 中找到一些它的使用示例。
### XDP 钩子代码
一旦 BPF 虚拟机加载进内核,**XDP** 程序就可以由一个 Netlink 命令从用户空间钩入到内核网络路径中。接收该命令的是 [linux/net/core/dev.c][172] 文件中的 `dev_change_xdp_fd()` 函数,由它来设置 XDP 钩子。钩子位于支持它的 NIC 的驱动中。例如,用于某些 Mellanox 硬件的 mlx4 驱动的钩子实现在 [drivers/net/ethernet/mellanox/mlx4/][173] 目录下的文件中。文件 en_netdev.c 接收 Netlink 命令并调用 `mlx4_xdp_set()`;随后,程序会在例如文件 en_rx.c 中实现的 `mlx4_en_process_rx_cq()`(对于 RX 侧)中被运行。下面给出一个从用户空间附加 XDP 程序的简单示例。
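作为参考,较新版本的 iproute2 可以用 `ip link` 从用户空间发出这个 Netlink 命令。下面是一个简单的草图(假设对象文件 `xdp_prog.o` 中有一个名为 `xdp` 的 ELF 段;段名取决于你的程序如何编译):

```
# 将 XDP 程序附加到 eth0若驱动不支持原生 XDP较新内核约 4.12+)可能回退到通用模式
ip link set dev eth0 xdp obj xdp_prog.o sec xdp

# 卸载该 XDP 程序
ip link set dev eth0 xdp off
```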
### 在 bcc 中的 BPF 逻辑
**bcc** 工具集的代码可以在 [bcc 的 GitHub 仓库][174] 中找到。其 **Python 代码**,包括 `BPF` 类,最初在文件 [bcc/src/python/bcc/__init__.py][175] 中。但是许多有意思的东西(在我看来),比如把 BPF 程序加载到内核中,是在 [libbcc **C 库**][176] 中实现的。
### 使用 tc 去管理 BPF 的代码
与 **tc** 相关的 BPF 代码随 iproute2 包一起提供。其中一部分在 [iproute2/tc/][177] 目录中。文件 f_bpf.c 和 m_bpf.c以及 e_bpf.c分别用于处理 BPF 过滤器和动作(以及 tc 的 `exec` 命令)。文件 q_clsact.c 定义了专为 BPF 创建的 qdisc `clsact`。但是,**大多数的 BPF 用户空间逻辑** 是在 [iproute2/lib/bpf.c][178] 库中实现的,因此,如果你想使用 BPF 和 tc这里可能是你要仔细研究的地方它是从文件 iproute2/tc/tc_bpf.c 中移动过来的,你也可以在旧版本的包中找到相同的代码)。
### BPF 实用工具
内核源码中也带有几个 BPF 相关工具的源代码(`bpf_asm.c`、`bpf_dbg.c`、`bpf_jit_disasm.c`)。根据你的内核版本不同,它们位于 [linux/tools/net/][179] 或者 [linux/tools/bpf/][180] 目录下:
* `bpf_asm` 是一个极小的汇编程序。
* `bpf_dbg` 是一个很小的 cBPF 程序调试器。
* `bpf_jit_disasm` 对于两种 BPF 都是通用的,并且对于 JIT 调试来说非常有用。
* `bpftool` 是由 Jakub Kicinski 写的通用工具,并且它可以与 eBPF 程序和来自用户空间的映射进行交互例如去展示、转储、pin 程序、或者去展示、创建、pin、更新、删除映射。
阅读源文件顶部的注释可以得到它们使用方法的概述bpftool 的基本用法也可以参考下面的示例。
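例如,下面几条命令展示了 bpftool 的基本用法(其中的 ID 只是示意,实际取决于你的系统上加载了哪些程序和映射):

```
# 列出当前加载的所有 eBPF 程序
bpftool prog show

# 以翻译后translated的指令形式转储 ID 为 42 的程序
bpftool prog dump xlated id 42

# 列出映射,并转储 ID 为 7 的映射内容
bpftool map show
bpftool map dump id 7
```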
### 其它感兴趣的代码片断
如果你对用不那么常见的语言来使用 BPF 感兴趣bcc 包含 [一个以 BPF 为目标的 **P4 编译器**][181],以及 [一个 **Lua 前端**][182]Lua 可以用来代替 C 的子集,并且(在 Lua 的案例中)也可以代替 Python 工具部分。
### LLVM 后端
在 [这个提交][183] clang / LLVM 用于将 C 编译成 BPF 后端,将它添加到 LLVM 源(也可以在 [the GitHub mirror][184] 上访问)。
### 在用户空间中运行
到目前为止,我知道至少有两种 eBPF 的用户空间实现。第一个是 [uBPF][185],它是用 C 写的。它包含一个解释器、一个 x86_64 架构的 JIT 编译器、一个汇编器和一个反汇编器。
uBPF 的代码似乎被重用来产生了一个 [通用实现][186],它声称支持 FreeBSD 内核、FreeBSD 用户空间、Linux 内核、Linux 用户空间和 Mac OSX 用户空间。它被 [VALE 交换机的 BPF 扩展模块][187] 使用。
另一个用户空间的实现是我做的:[rbpf][188],它基于 uBPF但是用 Rust 写的。它包含解释器和 JIT 编译器Linux 下两者都有Mac OSX 和 Windows 下仅有解释器),以后可能会支持更多平台。
### 提交日志
正如前面所说的,如果你希望得到关于某个特定的 BPF 特性的更多信息,不要犹豫,去看引入它的提交日志。你可以在许多地方搜索日志,比如 [git.kernel.org][189]、[GitHub][190],或者如果你克隆过仓库,也可以在你的本地仓库中搜索。如果你不熟悉 git你可以尝试这样做`git blame <file>` 查看是哪个提交引入了特定的代码行,然后 `git show <commit>` 查看详细情况(或者在 `git log` 的结果中按关键字搜索,但是这样做通常比较单调乏味),参见下面的示例。也可以看 bcc 仓库中的 [按内核版本区分的 eBPF 特性列表][191],它链接到了相关的提交上。
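举个例子,下面的命令序列演示了这种“提交考古”的做法(文件路径以内核源码树为例,`<commit>` 是占位符):

```
# 查看校验器相关文件的最近提交
git log --oneline -- kernel/bpf/verifier.c

# 找出引入某一行代码的提交
git blame kernel/bpf/syscall.c | less

# 查看某个提交的完整说明与改动
git show <commit>
```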
![](https://qmonnet.github.io/whirl-offload/img/icons/wand.svg)
### 排错
人们对 eBPF 的狂热是最近才兴起的,因此,到目前为止我还找不到许多关于怎么去排错的资源。所以这里只列出几条,是我在使用 BPF 时对自己遇到的问题所做的记录。
### 编译时的错误
* 确保你有一个最新的 Linux 内核版本(也可以看 [这个文档][127])。
* 如果你自己编译内核:确保你安装了所有正确的组件,包括内核镜像、头文件和 libc。
* 当使用 `tc-bpf` 的 man 页面提供的、用于将 C 代码编译成 BPF 的 `bcc` shell 函数时:我曾不得不在 clang 调用中添加 include 头文件路径:
```
__bcc() {
clang -O2 -I "/usr/src/linux-headers-$(uname -r)/include/" \
-I "/usr/src/linux-headers-$(uname -r)/arch/x86/include/" \
-emit-llvm -c $1 -o - | \
llc -march=bpf -filetype=obj -o "`basename $1 .c`.o"
}
```
(现在看来这个问题已经被修复了)。
* 对于使用 `bcc` 的其它问题,不要忘了去看一看这个工具集的 [答疑][128]。
* 如果你从一个与你的内核版本并不精确匹配的 iproute2 包中下载了示例,可能会因文件中包含的头文件而触发一些错误。这些示例片断都假设安装在你系统中的内核头文件与 iproute2 包版本相同。如果不是这种情况,请下载正确的 iproute2 版本,或者编辑示例中包含文件的路径,使其指向 iproute2 自带的头文件(运行时是否会出现问题,取决于你使用的特性)。
### 在加载和运行时的错误
* 使用 tc 去加载一个程序,确保你使用了一个与使用中的内核版本等价的 iproute2 中的 tc 二进制文件。
* 使用 bcc 去加载一个程序,确保在你的系统上安装了 bcc仅下载源代码去运行 Python 脚本是不够的)。
* 使用 tc如果 BPF 程序不能返回一个预期值检查调用它的方式作为过滤器还是作为动作还是作为使用“直接动作direct-action”模式的过滤器。
* 还是关于 tc注意如果没有过滤器动作不会直接附加到 qdisc 或者接口上。
* 内核校验器抛出的错误可能很难解读。[内核文档][129]或许可以提供帮助,还有 [参考指南][130],或者,万不得已的情况下,可以去看源代码(祝你好运!)。记住,校验器 _不运行_ 程序,对于理解这类错误,这一点非常重要。如果你得到一个关于无效内存访问或者关于未初始化数据的错误,它并不意味着那些问题真的发生了(有时候,这些问题甚至根本不可能发生)。它意味着你的程序是以校验器认为可能发生错误的方式写的,因此程序被拒绝了。
* 注意 `tc` 工具有一个 `verbose` 模式,它与 BPF 一起工作得很好:在你的命令行尾部尝试追加一个 `verbose`。
* bcc 也有一个 verbose 选项:`BPF` 类有一个 `debug` 参数,它可以带 `DEBUG_LLVM_IR`、`DEBUG_BPF` 和 `DEBUG_PREPROCESSOR` 三个标志中任何组合(详细情况在 [源文件][131]中)。 为调试代码,它甚至嵌入了 [一些条件去打印输出代码][132]。
* LLVM v4.0+ 为 eBPF 程序 [嵌入了一个反汇编器][133]。因此,如果你用 clang 编译你的程序,在编译时添加 `-g` 标志,你就可以在之后把你的程序以内核校验器所使用的那种人类可读的指令格式转储出来。处理转储文件,使用:
```
$ llvm-objdump -S -no-show-raw-insn bpf_program.o
```
* 在使用映射吗?你可能想看看 [bpf-map][134],这是 Cilium 项目用 Go 写的一个非常有用的工具,它可以用于转储内核中 eBPF 映射的内容。也有一个用 Rust 写的 [克隆版本][135]。
* 在 **StackOverflow** 上有一个旧的 [`bpf` 标签][136],但是,在写这篇文章时它几乎没有被使用过(并且那里几乎没有与新的 eBPF 相关的东西)。如果你是一位来自未来的读者,你可能想去看看在这方面是否有更多的活动。
![](https://qmonnet.github.io/whirl-offload/img/icons/zoomin.svg)
### 更多!
* 如果你想很容易地去 **测试 XDP**,有 [一个 Vagrant 设置][137] 可以使用。你也可以在 [一个 Docker 容器中][138] **测试 bcc**。
* 想知道围绕 BPF 的 **开发和活动** 在哪里进行吗?好吧,内核补丁最终都会出现在 [netdev 邮件列表][139]上(与 Linux 内核网络栈的开发相关):以关键字 “BPF” 或者 “XDP” 来搜索。自 2017 年 4 月开始,也有 [一个专门用于 XDP 编程的邮件列表][140](讨论架构问题或者寻求帮助)。[IO Visor 的邮件列表上][141]也有许多的讨论和辩论,因为 BPF 是其中的一个重要项目。如果你只是想随时了解情况,也有一个 [@IOVisor Twitter 帐户][142]。
我经常会回到这篇博客中,来看一看 [关于 BPF][192] 有没有新的文章!
_特别感谢 Daniel Borkmann 指引我找到了许多的 [附加的文档][154]因此我才完成了这个合集。_
--------------------------------------------------------------------------------
via: https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/
作者:[Quentin Monnet ][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://qmonnet.github.io/whirl-offload/about/
[1]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf
[2]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp
[3]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-other-components-related-or-based-on-ebpf
[4]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf-1
[5]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-tc
[6]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1
[7]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-p4-and-bpf
[8]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-the-kernel
[9]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-package-iproute2
[10]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-bcc-set-of-tools
[11]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#manual-pages
[12]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-code-in-the-kernel
[13]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#xdp-·s-code
[14]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-logic-in-bcc
[15]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#code-to-manage-bpf-with-tc
[16]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-utilities
[17]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#other-interesting-chunks
[18]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#llvm-backend
[19]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#running-in-userspace
[20]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#commit-logs
[21]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-compilation-time
[22]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-load-and-run-time
[23]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#generic-presentations
[24]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#documentation
[25]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#tutorials
[26]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#examples
[27]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#the-code
[28]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#troubleshooting
[29]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#and-still-more
[30]:http://netdevconf.org/1.2/session.html?daniel-borkmann
[31]:http://netdevconf.org/1.2/slides/oct5/07_tcws_daniel_borkmann_2016_tcws.pdf
[32]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop
[33]:http://www.netdevconf.org/1.1/proceedings/slides/borkmann-tc-classifier-cls-bpf.pdf
[34]:http://www.netdevconf.org/1.1/proceedings/papers/On-getting-tc-classifier-fully-programmable-with-cls-bpf.pdf
[35]:https://archive.fosdem.org/2016/schedule/event/ebpf/attachments/slides/1159/export/events/attachments/ebpf/slides/1159/ebpf.pdf
[36]:https://fosdem.org/2017/schedule/event/ebpf_xdp/
[37]:http://people.netfilter.org/hawk/presentations/xdp2016/xdp_intro_and_use_cases_sep2016.pdf
[38]:http://netdevconf.org/1.2/session.html?jesper-performance-workshop
[39]:http://people.netfilter.org/hawk/presentations/OpenSourceDays2017/XDP_DDoS_protecting_osd2017.pdf
[40]:http://people.netfilter.org/hawk/presentations/MM-summit2017/MM-summit2017-JesperBrouer.pdf
[41]:http://netdevconf.org/2.1/session.html?gospodarek
[42]:http://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/
[43]:http://www.slideshare.net/ThomasGraf5/clium-container-networking-with-bpf-xdp
[44]:http://www.slideshare.net/Docker/cilium-bpf-xdp-for-containers-66969823
[45]:https://www.youtube.com/watch?v=TnJF7ht3ZYc&list=PLkA60AVN3hh8oPas3cq2VA9xB7WazcIgs
[46]:http://www.slideshare.net/ThomasGraf5/cilium-fast-ipv6-container-networking-with-bpf-and-xdp
[47]:https://fosdem.org/2017/schedule/event/cilium/
[48]:http://openvswitch.org/support/ovscon2016/7/1120-tu.pdf
[49]:http://openvswitch.org/support/ovscon2016/7/1245-bertrone.pdf
[50]:https://www.spinics.net/lists/xdp-newbies/msg00179.html
[51]:https://www.spinics.net/lists/xdp-newbies/msg00181.html
[52]:https://www.spinics.net/lists/xdp-newbies/msg00185.html
[53]:http://schd.ws/hosted_files/ossna2017/da/BPFandXDP.pdf
[54]:https://speakerdeck.com/tuxology/the-bsd-packet-filter
[55]:http://www.slideshare.net/brendangregg/bpf-tracing-and-more
[56]:http://fr.slideshare.net/brendangregg/linux-bpf-superpowers
[57]:https://www.socallinuxexpo.org/sites/default/files/presentations/Room%20211%20-%20IOVisor%20-%20SCaLE%2014x.pdf
[58]:https://events.linuxfoundation.org/sites/events/files/slides/ebpf_on_the_mainframe_lcon_2015.pdf
[59]:https://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf
[60]:https://events.linuxfoundation.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf
[61]:https://lwn.net/Articles/603983/
[62]:http://www.slideshare.net/vh21/meet-cutebetweenebpfandtracing
[63]:http://www.slideshare.net/vh21/linux-kernel-tracing
[64]:http://www.slideshare.net/ThomasGraf5/linux-networking-explained
[65]:http://www.slideshare.net/ThomasGraf5/linuxcon-2015-linux-kernel-networking-walkthrough
[66]:http://www.tcpdump.org/papers/bpf-usenix93.pdf
[67]:http://www.gsp.com/cgi-bin/man.cgi?topic=bpf
[68]:http://borkmann.ch/talks/2013_devconf.pdf
[69]:http://borkmann.ch/talks/2014_devconf.pdf
[70]:https://blog.cloudflare.com/introducing-the-bpf-tools/
[71]:http://biot.com/capstats/bpf.html
[72]:https://www.iovisor.org/technology/xdp
[73]:https://github.com/iovisor/bpf-docs/raw/master/Express_Data_Path.pdf
[74]:https://events.linuxfoundation.org/sites/events/files/slides/iovisor-lc-bof-2016.pdf
[75]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1
[76]:http://netdevconf.org/1.2/session.html?herbert-xdp-workshop
[77]:https://schd.ws/hosted_files/2016p4workshop/1d/Intel%20Fastabend-P4%20on%20the%20Edge.pdf
[78]:https://ovsorbit.benpfaff.org/#e11
[79]:http://open-nfp.org/media/pdfs/Open_NFP_P4_EBPF_Linux_TC_Offload_FINAL.pdf
[80]:https://opensource.googleblog.com/2016/11/cilium-networking-and-security.html
[81]:https://ovsorbit.benpfaff.org/
[82]:http://blog.ipspace.net/2016/10/fast-linux-packet-forwarding-with.html
[83]:http://netdevconf.org/2.1/session.html?bertin
[84]:http://netdevconf.org/2.1/session.html?zhou
[85]:http://www.slideshare.net/IOVisor/ceth-for-xdp-linux-meetup-santa-clara-july-2016
[86]:http://info.iet.unipi.it/~luigi/vale/
[87]:https://github.com/YutaroHayakawa/vale-bpf
[88]:https://www.stamus-networks.com/2016/09/28/suricata-bypass-feature/
[89]:http://netdevconf.org/1.2/slides/oct6/10_suricata_ebpf.pdf
[90]:https://www.slideshare.net/ennael/kernel-recipes-2017-ebpf-and-xdp-eric-leblond
[91]:https://github.com/iovisor/bpf-docs/blob/master/university/sigcomm-ccr-InKev-2016.pdf
[92]:https://fosdem.org/2017/schedule/event/go_bpf/
[93]:https://wkz.github.io/ply/
[94]:https://www.kernel.org/doc/Documentation/networking/filter.txt
[95]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/Documentation/bpf/bpf_design_QA.txt?id=2e39748a4231a893f057567e9b880ab34ea47aef
[96]:https://github.com/iovisor/bpf-docs/blob/master/eBPF.md
[97]:https://github.com/iovisor/bcc/tree/master/docs
[98]:https://github.com/iovisor/bpf-docs/
[99]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[100]:http://man7.org/linux/man-pages/man2/bpf.2.html
[101]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html
[102]:https://prototype-kernel.readthedocs.io/en/latest/bpf/index.html
[103]:http://docs.cilium.io/en/latest/bpf/
[104]:https://ferrisellis.com/tags/ebpf/
[105]:http://linux-ip.net/articles/Traffic-Control-HOWTO/
[106]:http://lartc.org/lartc.html
[107]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/man/man8
[108]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc?h=v4.13.0
[109]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc/actions?h=v4.13.0
[110]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop
[111]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/commit/bash-completion/tc?id=27d44f3a8a4708bcc99995a4d9b6fe6f81e3e15b
[112]:https://prototype-kernel.readthedocs.io/en/latest/networking/XDP/index.html
[113]:https://marc.info/?l=linux-netdev&m=147436253625672
[114]:http://docs.cilium.io/en/latest/bpf/
[115]:https://github.com/iovisor/bcc/blob/master/INSTALL.md
[116]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/bpf.h
[117]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/bpf.h
[118]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/filter.h
[119]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/filter.h
[120]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/bpf
[121]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/filter.c
[122]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/x86/net/bpf_jit_comp.c
[123]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/sched
[124]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/trace/bpf_trace.c
[125]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/seccomp.c
[126]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/seccomp/seccomp_bpf.c
[127]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[128]:https://github.com/iovisor/bcc/blob/master/FAQ.txt
[129]:https://www.kernel.org/doc/Documentation/networking/filter.txt
[130]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[131]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py
[132]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md#output
[133]:https://www.spinics.net/lists/netdev/msg406926.html
[134]:https://github.com/cilium/bpf-map
[135]:https://github.com/badboy/bpf-map
[136]:https://stackoverflow.com/questions/tagged/bpf
[137]:https://github.com/iovisor/xdp-vagrant
[138]:https://github.com/zlim/bcc-docker
[139]:http://lists.openwall.net/netdev/
[140]:http://vger.kernel.org/vger-lists.html#xdp-newbies
[141]:http://lists.iovisor.org/pipermail/iovisor-dev/
[142]:https://twitter.com/IOVisor
[143]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#what-is-bpf
[144]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#dive-into-the-bytecode
[145]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#resources
[146]:https://github.com/qmonnet/whirl-offload/commits/gh-pages/_posts/2016-09-01-dive-into-bpf.md
[147]:http://netdevconf.org/1.2/session.html?jakub-kicinski
[148]:http://www.slideshare.net/IOVisor/express-data-path-linux-meetup-santa-clara-july-2016
[149]:https://cdn.shopify.com/s/files/1/0177/9886/files/phv2017-gbertin.pdf
[150]:https://github.com/cilium/cilium
[151]:https://fosdem.org/2017/schedule/event/stateful_ebpf/
[152]:http://vger.kernel.org/vger-lists.html#xdp-newbies
[153]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[154]:https://github.com/qmonnet/whirl-offload/commit/d694f8081ba00e686e34f86d5ee76abeb4d0e429
[155]:http://openvswitch.org/pipermail/dev/2014-October/047421.html
[156]:https://qmonnet.github.io/whirl-offload/2016/07/15/beba-research-project/
[157]:https://www.iovisor.org/resources/blog
[158]:http://www.brendangregg.com/blog/2016-03-05/linux-bpf-superpowers.html
[159]:http://p4.org/
[160]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4
[161]:https://github.com/p4lang/p4c/blob/master/backends/ebpf/README.md
[162]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[163]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md
[164]:https://github.com/goldshtn/linux-tracing-workshop
[165]:https://blog.yadutaf.fr/2017/07/28/tracing-a-packet-journey-using-linux-tracepoints-perf-ebpf/
[166]:https://open-nfp.org/dataplanes-ebpf/technical-papers/
[167]:http://netdevconf.org/2.1/session.html?gospodarek
[168]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/samples/bpf
[169]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/examples/bpf
[170]:https://github.com/iovisor/bcc/tree/master/examples
[171]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html
[172]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/dev.c
[173]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/mellanox/mlx4/
[174]:https://github.com/iovisor/bcc/
[175]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py
[176]:https://github.com/iovisor/bcc/blob/master/src/cc/libbpf.c
[177]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/tc
[178]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/lib/bpf.c
[179]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/net
[180]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/tools/bpf
[181]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4/compiler
[182]:https://github.com/iovisor/bcc/tree/master/src/lua
[183]:https://reviews.llvm.org/D6494
[184]:https://github.com/llvm-mirror/llvm/commit/4fe85c75482f9d11c5a1f92a1863ce30afad8d0d
[185]:https://github.com/iovisor/ubpf/
[186]:https://github.com/YutaroHayakawa/generic-ebpf
[187]:https://github.com/YutaroHayakawa/vale-bpf
[188]:https://github.com/qmonnet/rbpf
[189]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
[190]:https://github.com/torvalds/linux
[191]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[192]:https://qmonnet.github.io/whirl-offload/categories/#BPF

View File

@ -0,0 +1,93 @@
GitHub 欢迎所有 CI 工具
====================
[![GitHub and all CI tools](https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png)][11]
持续集成([CI][12])工具可以帮助你在每次提交时执行测试,并将[报告结果][13]提交到合并请求,从而帮助维持团队的质量标准。结合持续交付([CD][14])工具,你还可以在多种配置上测试你的代码,运行额外的性能测试,并自动执行每个步骤[直到投入生产][15]。
有几个[与 GitHub 集成][16]的 CI 和 CD 工具,其中一些可以在 [GitHub Marketplace][17] 中点击几下安装。有了这么多的选择,你可以选择最好的工具 - 即使它不是与你的系统预集成的工具。
最适合你的工具取决于许多因素,其中包括:
* 编程语言和程序架构
* 你计划支持的操作系统和浏览器
* 你团队的经验和技能
* 扩展能力和增长计划
* 依赖系统及其使用者的地理分布
* 打包和交付目标
当然,没有一个 CI 工具能对所有这些情况都做到最优。构建它们的人需要选择更好地服务哪些情况,以及何时优先考虑复杂性而不是简单性。例如,如果你想测试针对某一个平台的、用特定语言编写的小程序,那么你就不需要那种能在数十个平台上测试、支持许多编程语言和框架、还能测试嵌入式软件控制器的复杂工具。
如果你需要一些灵感来挑选最好使用哪个 CI 工具,那么可以看一下 [GitHub 上的流行项目][18]。许多项目在它们的 README.md 中以徽章的形式显示它们所集成的 CI/CD 工具的状态。我们还分析了 GitHub 社区中超过 5000 万个仓库里 CI 工具的使用情况,发现大家的选择五花八门。下图根据我们的 pull request 中使用最多的[提交状态上下文][19],显示了 GitHub.com 上使用的前 10 名 CI 工具的相对百分比(图后附有一个查询这些状态上下文的 API 调用草图)。
_我们的分析还显示许多团队在他们的项目中使用多个 CI 工具使他们能够发挥它们最擅长的。_
[![Top 10 CI systems used with GitHub.com based on most used commit status contexts](https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png)][20]
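上面提到的“提交状态上下文”可以通过 GitHub 的状态 API 查看。下面是一个示意性的调用(仓库和提交引用均为占位符,需要换成实际值):

```
# 列出某个提交上的所有状态,每个 CI 工具会写入自己的 context 字段
curl -s https://api.github.com/repos/<owner>/<repo>/commits/<ref>/statuses
```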
如果你想查看,下面是团队中使用最多的 10 个工具:
* [Travis CI][1]
* [Circle CI][2]
* [Jenkins][3]
* [AppVeyor][4]
* [CodeShip][5]
* [Drone][6]
* [Semaphore CI][7]
* [Buildkite][8]
* [Wercker][9]
* [TeamCity][10]
只选择默认的、预先集成的工具,而不花时间根据任务去研究和选择最好的工具,这种做法很有诱惑力,但是对于你的特定情况会有很多[很好的选择][21]。如果你以后改变主意,没问题。当你为特定情况选择了最佳工具时,你得到的是量身定制的性能,以及在它不再适合时进行更换的自由。
准备好了解 CI 工具如何适应你的工作流程了么?
[浏览 GitHub Marketplace][22]
--------------------------------------------------------------------------------
via: https://github.com/blog/2463-github-welcomes-all-ci-tools
作者:[jonico ][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://github.com/jonico
[1]:https://travis-ci.org/
[2]:https://circleci.com/
[3]:https://jenkins.io/
[4]:https://www.appveyor.com/
[5]:https://codeship.com/
[6]:http://try.drone.io/
[7]:https://semaphoreci.com/
[8]:https://buildkite.com/
[9]:http://www.wercker.com/
[10]:https://www.jetbrains.com/teamcity/
[11]:https://user-images.githubusercontent.com/29592817/32509084-2d52c56c-c3a1-11e7-8c49-901f0f601faf.png
[12]:https://en.wikipedia.org/wiki/Continuous_integration
[13]:https://github.com/blog/2051-protected-branches-and-required-status-checks
[14]:https://en.wikipedia.org/wiki/Continuous_delivery
[15]:https://developer.github.com/changes/2014-01-09-preview-the-new-deployments-api/
[16]:https://github.com/works-with/category/continuous-integration
[17]:https://github.com/marketplace/category/continuous-integration
[18]:https://github.com/explore?trending=repositories#trending
[19]:https://developer.github.com/v3/repos/statuses/
[20]:https://user-images.githubusercontent.com/7321362/32575895-ea563032-c49a-11e7-9581-e05ec882658b.png
[21]:https://github.com/works-with/category/continuous-integration
[22]:https://github.com/marketplace/category/continuous-integration

View File

@ -0,0 +1,45 @@
Linux 长期支持版关于未来的声明
===============================
Linux 4.4 长期支持版将获得 6 年的维护期,但这并不意味着其它长期支持版的维护期也将持续这么久。
[视频](http://www.zdnet.com/video/video-torvalds-surprised-by-resilience-of-2-6-kernel-1/)
_视频Torvalds 对 2.6 版内核的持久生命力感到惊讶_
在 2017 年 10 月,[Linux 内核小组同意将 Linux 长期支持版LTS的下一个版本的生命期从两年延长至六年][5],而 LTS 的下一个版本正是 [Linux 4.14][6]。这对于 [Android][7]、嵌入式 Linux 和 Linux 物联网IoT的开发者们是一个利好。但是这个变动并不意味着将来所有的 Linux LTS 版本都将有 6 年的维护期。
正如 [Linux 基金会][8]的 IT 基础设施安全主管 Konstantin Ryabitsev 在 Google+ 上发文解释的那样:“尽管各种新闻网站可能已经这样告知你们,但是[4.14 版内核并不计划提供 6 年的 LTS 支持][9]。仅仅因为 Greg Kroah-Hartman 正在为 LTS 4.4 版本做这项工作,并不表示从现在开始所有的 LTS 内核都会维持那么久。”
所以简而言之Linux 4.14 将被支持到 2020 年 1 月,而 2016 年 1 月 20 日问世的 Linux 4.4 内核将被支持到 2022 年。因此,如果你正在编写一个打算长期运行的 Linux 发行版,那你需要基于 [Linux 4.4 版本][10]。
[Linux LTS 版本][11]包含了向后移植到旧内核树的漏洞修复。不是所有的漏洞修复都会被移植进来,只有重要的漏洞修复才会用于这些内核。它们发布得不会非常频繁,对那些更旧版本的内核树来说尤其如此。
Linux 的其它版本有尝鲜版或发布候选版RC、主线版、稳定版和 LTS 版。
RC 版必须从源代码编译,并且通常包含漏洞修复和新特性。它们是由 Linus Torvalds 维护和发布的。他也维护主线版本树(这是所有新特性被引入的地方)。新的主线内核每几个月发布一次。当主线版本树发布以供一般使用时,它被称为“稳定版”。稳定版内核的漏洞修复是从主线版本树向后移植的,并且这些修复是由指定的稳定版内核维护者来合入的。在下一个主线内核可用之前,通常也会有一些修复漏洞的内核版本发布。
对于最新的 LTS 版本Linux 4.14Ryabitsev 说,"Greg 已经担负起了 4.14 版本的维护者责任(过去发生过多次),其他人想成为该版本的维护者也是有可能的,但是你应该断然不要去计划这件事。"
对于 Ryabitsev 的文章Kroah-Hartman 的回应只有简单的一句:“[他说得对][12]。”
-------------------
via: http://www.zdnet.com/article/long-term-linux-support-future-clarified/
作者:[Steven J. Vaughan-Nichols ][a]
译者:[liuxinyu123](https://github.com/liuxinyu123)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[1]:http://www.zdnet.com/article/long-term-linux-support-future-clarified/#comments-eb4f0633-955f-4fec-9e56-734c34ee2bf2
[2]:http://www.zdnet.com/article/the-tension-between-iot-and-erp/
[3]:http://www.zdnet.com/article/the-tension-between-iot-and-erp/
[4]:http://www.zdnet.com/article/the-tension-between-iot-and-erp/
[5]:http://www.zdnet.com/article/long-term-support-linux-gets-a-longer-lease-on-life/
[6]:http://www.zdnet.com/article/the-new-long-term-linux-kernel-linux-4-14-has-arrived/
[7]:https://www.android.com/
[8]:https://www.linuxfoundation.org/
[9]:https://plus.google.com/u/0/+KonstantinRyabitsev/posts/Lq97ZtL8Xw9
[10]:http://www.zdnet.com/article/whats-new-and-nifty-in-linux-4-4/
[11]:https://www.kernel.org/releases.html
[12]:https://plus.google.com/u/0/+gregkroahhartman/posts/ZUcSz3Sn1Hc
[13]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[14]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/
[15]:http://www.zdnet.com/blog/open-source/
[16]:http://www.zdnet.com/topic/enterprise-software/

View File

@ -0,0 +1,41 @@
有人试图将 Ubuntu Unity 非正式地从死亡带回来
============================================================
> Ubuntu Unity Remix 将支持九个月
Canonical 在七年之后突然决定抛弃它的 Unity 用户界面,这影响了许多 Ubuntu 用户,而现在,看起来有人正试图非正式地把它从死亡中带回来。
长期的 [Ubuntu][1] 成员 Dale Beaudoin 上周在官方的 Ubuntu 论坛上[发起了一项调查][2]来征询社区的意见,看看他们是否对明年与 Ubuntu 18.04 LTSBionic Beaver同期发布的 Ubuntu Unity Remix 感兴趣,它将被支持 9 个月或 5 年。
有 30 人参与了投票,其中 67% 的人选择了所谓的 Ubuntu Unity Remix 的 LTS长期支持版本33% 的人投票支持 9 个月支持期的版本。看起来,即将到来的 Ubuntu Unity Spin [有成为官方风味版本的可能][3],但这并不意味着已经有了开发它的承诺。
Dale Beaudoin 表示“最近的一项民意调查显示2/3 的人支持 Ubuntu Unity 成为 LTS 发行版,我们应该尝试这个循环,假设它将是 LTS 和官方的风格。“我们将尝试使用当前默认的 Ubuntu Bionic Beaver 18.04 的每日版本作为平台每周或每 10 天发布一次更新的 ISO。”
### Ubuntu Unity 是否会卷土重来?
默认搭载 Unity 的最后一个 Ubuntu 版本是 Ubuntu 17.04Zesty Zapus它将在 2018 年 1 月终止支持。在 Canonical 的 CEO 今年早些时候[宣布][4] Unity 将不再开发之后,这个流行操作系统的当前稳定版本 Ubuntu 17.10Artful Aardvark成为第一个默认使用 GNOME 桌面环境的版本。
然而Canonical 仍然在官方软件仓库中提供 Unity 桌面环境,所以如果有人想要安装它,只需点击一下即可。但坏消息是,官方对它的支持只持续到 2018 年 4 月 Ubuntu 18.04 LTSBionic Beaver发布之前所以之后 Ubuntu Unity Remix 的开发者们将不得不在独立的仓库中继续维护它。
另一方面,我们不相信 Canonical 会改变主意,接受这个 Ubuntu Unity Spin 成为官方风味版本,这意味着他们不会继续开发 Unity而现在只有一小部分人有能力做这件事。最有可能的是如果对 Ubuntu Unity Remix 的兴趣没有很快消退,那么它可能会成为一个由怀旧社区支持的非官方版本。
问题是,你会对 Ubuntu Unity Spin 感兴趣么,不管它是官方的还是非官方的?
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/someone-tries-to-bring-back-ubuntu-s-unity-from-the-dead-as-an-unofficial-spin-518778.shtml
作者:[Marius Nestor ][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/marius-nestor
[1]:http://linux.softpedia.com/downloadTag/Ubuntu
[2]:https://community.ubuntu.com/t/poll-unity-7-distro-9-month-spin-or-lts-for-18-04/2066
[3]:https://community.ubuntu.com/t/unity-maintenance-roadmap/2223
[4]:http://news.softpedia.com/news/canonical-to-stop-developing-unity-8-ubuntu-18-04-lts-ships-with-gnome-desktop-514604.shtml
[5]:http://news.softpedia.com/editors/browse/marius-nestor

View File

@ -1,180 +0,0 @@
在Ubuntu 16.04下随机生成你的WiFi MAC地址
============================================================
你设备的MAC地址可以在不同的WiFi网络中记录你的活动。这些信息能被共享后出售用于识别特定的个体。但可以用随机生成的伪MAC地址来阻止这一行为。
![A captive portal screen for a hotel allowing you to log in with social media for an hour of free WiFi](https://www.paulfurley.com/img/captive-portal-our-hotel.gif)
_Image courtesy of [Cloudessa][4]_
每一个诸如WiFi或者以太网卡这样的网络设备都有一个叫做MAC地址的唯一标识符`b4:b6:76:31:8c:ff`。这就是你能上网的原因每当你连接上WiFi路由器就会用这一地址来向你接受和发送数据并且用它来区别你和这一网络的其他设备。
这一设计的缺陷在于唯一性不变的MAC地址正好可以用来追踪你。连上了星巴克的WiFi? 好,注意到了。在伦敦的地铁上? 也记录下来。
如果你曾经在某一个WiFi验证页面上输入过你的真实姓名你就已经把自己和这一MAC地址建立了联系。没有仔细阅读许可服务条款? 你可以认为机场的免费WiFi正通过出售所谓的 ‘顾客分析数据’(你的个人信息)获利。出售的对象包括酒店,餐饮业,和任何想要了解你的人。
我不想信息被记录,再出售给多家公司,所以我花了几个小时想出了一个解决方案。
### MAC 地址不一定总是不变的
幸运的是在不断开网络的情况下是可以随机生成一个伪MAC地址的。
我想随机生成我的MAC地址但是有三个要求
1. MAC 地址在不同网络中是不相同的。这意味着我在星巴克和在伦敦地铁网络中的 MAC 地址是不相同的,这样在不同的服务提供商中就无法将我的活动联系起来。
2. MAC 地址需要经常更换,这样在网络上就没人知道我就是去年在这儿经过了 75 次的那个人。
3. MAC 地址一天之内应该保持不变。当 MAC 地址更改时,大多数网络都会与你断开连接,然后必须得进入验证页面再次登录,这很烦人。
### 操作网络管理器
我第一次尝试用一个叫做 `macchanger`的工具但是失败了。网络管理器会根据它自己的设置恢复默认的MAC地址。
我了解到网络管理器1.4.1以上版本可以自动生成随机的MAC地址。如果你在使用Ubuntu 17.04 版本,你可以根据这一配置文件实现这一目的。但这并不能完全符合我的三个要求 (你必须在随机和稳定这两个选项之中选择一个,但没有一天之内保持不变这一选项)
因为我使用的是Ubuntu 16.04网络管理器版本为1.2,不能直接使用高版本这一新功能。可能网络管理器有一些随机化方法支持,但我没能成功。所以我编了一个脚本来实现这一目标。
幸运的是网络管理器1.2 允许生成随机MAC地址。你在已连接的网络中可以看见 ‘编辑连接’这一选项:
![Screenshot of NetworkManager's edit connection dialog, showing a text entry for a cloned mac address](https://www.paulfurley.com/img/network-manager-cloned-mac-address.png)
网络管理器也支持消息处理 - 任何位于 `/etc/NetworkManager/dispatcher.d/pre-up.d/` 的脚本在建立网络连接之前都会被执行。
### 分配随机生成的伪MAC地址
我想根据网络ID和日期来生成新的随机MAC地址。 我们可以使用网络管理器的命令行工具nmcli,来显示所有可用网络:
```
> nmcli connection
NAME UUID TYPE DEVICE
Gladstone Guest 618545ca-d81a-11e7-a2a4-271245e11a45 802-11-wireless wlp1s0
DoESDinky 6e47c080-d81a-11e7-9921-87bc56777256 802-11-wireless --
PublicWiFi 79282c10-d81a-11e7-87cb-6341829c2a54 802-11-wireless --
virgintrainswifi 7d0c57de-d81a-11e7-9bae-5be89b161d22 802-11-wireless --
```
因为每个网络都有一个唯一标识符为了实现我的计划我将UUID和日期拼接在一起然后使用MD5生成hash值:
```
# eg 618545ca-d81a-11e7-a2a4-271245e11a45-2017-12-03
> echo -n "${UUID}-$(date +%F)" | md5sum
53594de990e92f9b914a723208f22b3f -
```
生成的结果的前十个十六进制字符可以用来构成伪 MAC 地址的最后五个字节。
值得注意的是,最开始的字节 `02` 代表这个地址是自行指定的。实际上真实MAC地址的前三个字节是由制造商决定的例如 `b4:b6:76` 就代表Intel。
有可能某些路由器会拒绝自己指定的MAC地址但是我还没有遇到过这种情况。
每次连接到一个网络,这一脚本都会用`nmcli` 来指定一个随机生成的伪MAC地址
![A terminal window show a number of nmcli command line calls](https://www.paulfurley.com/img/terminal-window-nmcli-commands.png)
最后,我查看了 `ifconfig`的输出结果我发现端口MAC地址已经变成了随机生成的地址而不是我真实的MAC地址。
```
> ifconfig
wlp1s0 Link encap:Ethernet HWaddr b4:b6:76:45:64:4d
inet addr:192.168.0.86 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::648c:aff2:9a9d:764/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:12107812 errors:0 dropped:2 overruns:0 frame:0
TX packets:18332141 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:11627977017 (11.6 GB) TX bytes:20700627733 (20.7 GB)
```
完整的脚本可以在Github上查看。
```
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/pre-up.d/randomize-mac-addresses
# Configure every saved WiFi connection in NetworkManager with a spoofed MAC
# address, seeded from the UUID of the connection and the date eg:
# 'c31bbcc4-d6ad-11e7-9a5a-e7e1491a7e20-2017-11-20'
# This makes your MAC impossible(?) to track across WiFi providers, and
# for one provider to track across days.
# For craptive portals that authenticate based on MAC, you might want to
# automate logging in :)
# Note that NetworkManager >= 1.4.1 (Ubuntu 17.04+) can do something similar
# automatically.
export PATH=$PATH:/usr/bin:/bin
LOG_FILE=/var/log/randomize-mac-addresses
echo "$(date): $*" > ${LOG_FILE}
WIFI_UUIDS=$(nmcli --fields type,uuid connection show |grep 802-11-wireless |cut '-d ' -f3)
for UUID in ${WIFI_UUIDS}
do
UUID_DAILY_HASH=$(echo "${UUID}-$(date +%F)" | md5sum)
RANDOM_MAC="02:$(echo -n ${UUID_DAILY_HASH} | sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5/')"
CMD="nmcli connection modify ${UUID} wifi.cloned-mac-address ${RANDOM_MAC}"
echo "$CMD" >> ${LOG_FILE}
$CMD &
done
wait
```
_更新:使用自行指定的 MAC 地址可以避免和真正的 Intel 地址冲突。感谢 [@_fink][6]_
---------------------------------------------------------------------------------
via: https://www.paulfurley.com/randomize-your-wifi-mac-address-on-ubuntu-1604-xenial/
作者:[Paul M Furley ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.paulfurley.com/
[1]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/raw/5f02fc8f6ff7fca5bca6ee4913c63bf6de15abca/randomize-mac-addresses
[2]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f#file-randomize-mac-addresses
[3]:https://github.com/
[4]:http://cloudessa.com/products/cloudessa-aaa-and-captive-portal-cloud-service/
[5]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f/revisions#diff-824d510864d58c07df01102a8f53faef
[6]:https://twitter.com/fink_/status/937305600005943296
[7]:https://gist.github.com/paulfurley/978d4e2e0cceb41d67d017a668106c53/
[8]:https://en.wikipedia.org/wiki/MAC_address#Universal_vs._local
[9]:https://gist.github.com/paulfurley/46e0547ce5c5ea7eabeaef50dbacef3f

View File

@ -1,128 +0,0 @@
使用多阶段构建
============================================================
多阶段构建是 `Docker 17.05` 或更高版本提供的新功能。这对致力于优化 `Dockerfile` 的人来说,使得 `Dockerfile` 易于阅读和维护。
> 致谢: 特别感谢 [Alex Ellis][1] 授权使用他的关于 `Docker` 多阶段构建的博客文章 [Builder pattern vs. Multi-stage builds in Docker][2] 作为以下示例的基础.
### 在多阶段构建之前
关于构建镜像最具挑战性的事情之一是保持镜像体积小巧. `Dockerfile` 中的每条指令都会在镜像中增加一层, 并且在移动到下一层之前, 需要记住清除不需要的构件. 要编写一个非常高效的 `Dockerfile`, 传统上您需要使用 `shell` 技巧和其他逻辑来尽可能地减少层数, 并确保每一层都具有上一层所需的构件, 而不是其他任何东西.
实际上, 有一个 `Dockerfile` 用于开发(其中包含构建应用程序所需的所有内容), 以及另一个用于生产的瘦客户端, 它只包含您的应用程序以及运行它所需的内容. 这被称为"建造者模式". 维护两个 `Dockerfile` 并不理想.
下面分别是一个 `Dockerfile.build` 和 遵循上面的构建器模式的 `Dockerfile` 的例子:
`Dockerfile.build`:
```
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN go get -d -v golang.org/x/net/html \
&& CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
```
注意这个例子还使用 Bash && 运算符人为地将两个 `RUN` 命令压缩在一起, 以避免在镜像中创建额外的层. 这很容易失败, 很难维护. 例如, 插入另一个命令时, 很容易忘记继续使用 `\` 字符.
`Dockerfile`:
```
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY app .
CMD ["./app"]
```
`build.sh`:
```
#!/bin/sh
echo Building alexellis2/href-counter:build
docker build --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \
-t alexellis2/href-counter:build . -f Dockerfile.build
docker create --name extract alexellis2/href-counter:build
docker cp extract:/go/src/github.com/alexellis/href-counter/app ./app
docker rm -f extract
echo Building alexellis2/href-counter:latest
docker build --no-cache -t alexellis2/href-counter:latest .
rm ./app
```
当您运行 `build.sh` 脚本时, 它会构建第一个镜像, 从中创建一个容器, 以便将该构件复制出来, 然后构建第二个镜像. 这两个镜像和应用构件会占用您的系统的空间.
多阶段构建大大简化了这种情况!
### 使用多阶段构建
在多阶段构建中, 您需要在 `Dockerfile` 中多次使用 `FROM` 声明. 每次 `FROM` 指令可以使用不同的基础镜像, 并且每次 `FROM` 指令都会开始新阶段的构建. 您可以选择将构件从一个阶段复制到另一个阶段, 在最终镜像中, 不会留下您不需要的所有内容. 为了演示这是如何工作的, 让我们调整前一节中的 `Dockerfile` 以使用多阶段构建。
`Dockerfile`:
```
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
```
您只需要单一个 `Dockerfile`. 不需要分隔构建脚本. 只需运行 `docker build` .
```
$ docker build -t alexellis2/href-counter:latest .
```
最终的结果是和以前体积一样小的生产镜像, 复杂性显着降低. 您不需要创建任何中间镜像, 也不需要将任何构件提取到本地系统.
它是如何工作的呢? 第二条 `FROM` 指令以 `alpine:latest` 镜像作为基础开始新的建造阶段. `COPY --from=0` 这一行将刚才前一个阶段产生的构件复制到这个新阶段. Go SDK和任何中间构件都被保留下来, 而不是只保存在最终的镜像中.
### 命名您的构建阶段
默认情况下, 这些阶段没有命名, 您可以通过它们的整数来引用它们, 从第一个 `FROM` 指令的 0 开始. 但是, 您可以通过在 `FROM` 指令中使用 `as <NAME>`
来为阶段命名. 以下示例通过命名阶段并在 `COPY` 指令中使用名称来改进前一个示例. 这意味着, 即使您的 `Dockerfile` 中的指令稍后重新排序, `COPY` 也不会中断。
```
FROM golang:1.7.3 as builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
```
> 译者话: 1.此文章系译者第一次翻译英文文档,有描述不清楚或错误的地方,请读者给予反馈(2727586680@qq.com),不胜感激。
> 译者话: 2.本文只是简单介绍多阶段构建,不够深入,如果读者需要深入了解,请自行查阅相关资料。
--------------------------------------------------------------------------------
via: https://docs.docker.com/engine/userguide/eng-image/multistage-build/#name-your-build-stages
作者:[docker docs ][a]
译者:[iron0x](https://github.com/iron0x)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://docs.docker.com/engine/userguide/eng-image/multistage-build/
[1]:https://twitter.com/alexellisuk
[2]:http://blog.alexellis.io/mutli-stage-docker-builds/

View File

@ -0,0 +1,199 @@
10 个例子教你学会 ncat (nc) 命令
======
[![nc-ncat-command-examples-Linux-Systems](https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg)][1]
ncat 或者说 nc 是一款功能类似 cat 的网络工具。它是一款拥有多种功能的 CLI 工具,可以用来在网络上读、写以及重定向数据。它被设计成可以被脚本或其他程序调用的可靠后端工具。同时由于它能创建任意所需的连接,因此也是一个很好的网络调试工具。
ncat/nc 既是一个端口扫描工具,也是一款安全工具,还是一款监控工具,甚至可以作为一个简单的 TCP 代理。由于有这么多的功能,它被誉为网络界的瑞士军刀。它是每个系统管理员都应该知道并且掌握的工具。
在大多数 Debian 发行版中,`nc` 是默认可用的,它会在安装系统的过程中自动被安装。 但是在 CentOS 7 / RHEL 7 的最小化安装中,`nc` 并不会默认被安装。 你需要用下列命令手工安装。
```
[root@linuxtechi ~]# yum install nmap-ncat -y
```
系统管理员可以用它来审计系统安全,用它来找出开放的端口然后保护这些端口。 管理员还能用它作为客户端来审计 Web 服务器, 远程登陆服务器, 邮件服务器等, 通过 `nc` 我们可以控制发送的每个字符,也可以查看对方的回应。
我们还可以用它捕获客户端发送的数据,以了解这些客户端是做什么的。
在本文中,我们会通过 10 个例子来学习如何使用 `nc` 命令。
### 例子: 1) 监听入站连接
通过 `-l` 选项ncat 可以进入监听模式,使我们可以在指定端口监听入站连接。完整的命令是这样的:

```
$ ncat -l port_number
```
比如,
```
$ ncat -l 8080
```
服务器就会开始在 8080 端口监听入站连接。
### 例子: 2) 连接远程系统
使用下面的命令可以用 nc 来连接远程系统:

```
$ ncat IP_address port_number
```
让我们来看个例子,
```
$ ncat 192.168.1.100 80
```
这会创建一个连接,连接到 IP 为 192.168.1.100 的服务器的 80 端口,然后我们就可以向服务器发送指令了。比如我们可以输入下面的内容来获取完整的网页内容:

```
GET / HTTP/1.1
```

或者获取页面名称:

```
GET / HTTP/1.1
```

或者我们可以通过以下方式获得操作系统指纹标识:

```
HEAD / HTTP/1.1
```
这会告诉我们使用的是什么软件来运行这个 web 服务器的。
### 例子: 3) 连接 UDP 端口
默认情况下nc 创建连接时只会连接 TCP 端口。不过我们可以使用 `-u` 选项来连接到 UDP 端口:
```
$ ncat -l -u 1234
```
现在我们的系统会开始监听 UDP 的 1234 端口,我们可以使用下面的 netstat 命令来验证这一点:
```
$ netstat -tunlp | grep 1234
udp 0 0 0.0.0.0:1234 0.0.0.0:* 17341/nc
udp6 0 0 :::1234 :::* 17341/nc
```
假设我们想测试某个远程主机 UDP 端口的连通性,可以使用下面的命令:

```
$ ncat -v -u {host-ip} {udp-port}
```
比如:
```
[root@localhost ~]# ncat -v -u 192.168.105.150 53
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 192.168.105.150:53.
```
### 例子: 4) 将 NC 作为聊天工具
NC 也可以作为聊天工具来用,我们可以配置服务器监听某个端口,然后从远程主机上连接到服务器的这个端口,就可以开始发送消息了。 在服务器这端运行:
```
$ ncat -l 8080
```
在远程客户端主机上运行:
```
$ ncat 192.168.1.100 8080
```
之后开始发送消息,这些消息会在服务器终端上显示出来。
### 例子: 5) 将 NC 作为代理
NC 也可以用来做代理。比如下面这个例子,
```
$ ncat -l 8080 | ncat 192.168.1.200 80
```
所有发往我们服务器 8080 端口的连接都会自动转发到 192.168.1.200 上的 80 端口。不过由于我们使用了管道,数据只能被单向传输。要同时能够接收返回的数据,我们需要创建一个双向管道。使用下面的命令可以做到这点:
```
$ mkfifo 2way
$ ncat -l 8080 0<2way | ncat 192.168.1.200 80 1>2way
```
现在你可以通过 nc 代理来收发数据了。
### 例子: 6) 使用 nc/ncat 拷贝文件
NC 还能用来在系统间拷贝文件,虽然这么做并不推荐,因为绝大多数系统默认都安装了 ssh/scp。不过如果你恰好遇见一个没有 ssh/scp 的系统的话,你可以用 nc 来作最后的努力。
在要接受数据的机器上启动 nc 并让它进入监听模式:
```
$ ncat -l 8080 > file.txt
```
现在去要被拷贝数据的机器上运行下面命令:
```
$ ncat 192.168.1.100 8080 --send-only < data.txt
```
这里data.txt 是要发送的文件。`--send-only` 选项会在文件拷贝完成后立即关闭连接。如果不加该选项,我们需要手工按下 ctrl+c 来关闭连接。
我们也可以用这种方法拷贝整个磁盘分区,不过请一定要小心。
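拷贝完成后,最好校验一下文件的完整性。一个简单的做法是在两台机器上分别计算校验和并进行比较:

```
# 在发送端
$ md5sum data.txt

# 在接收端(两边的输出应该一致)
$ md5sum file.txt
```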
### 例子: 7) 通过 nc/ncat 创建后门
NC 命令还可以用来在系统中创建后门,并且这种技术也确实被黑客大量使用。 为了保护我们的系统,我们需要知道它是怎么做的。 创建后门的命令为:
```
$ ncat -l 10000 -e /bin/bash
```
`-e` 标志将一个 bash 与端口 10000 相连。现在客户端只要连接到服务器上的 10000 端口,就能通过 bash 获取我们系统的完整访问权限:
```
$ ncat 192.168.1.100 10000
```
### 例子: 8) 通过 nc/ncat 进行端口转发
我们可以通过 `-c` 选项来用 NC 进行端口转发,实现端口转发的语法为:
```
$ ncat -u -l 80 -c 'ncat -u -l 8080'
```
这样,所有连接到 80 端口的连接都会转发到 8080 端口。
### 例子: 9) 设置连接超时
ncat 的监听模式会一直运行,直到手工终止。不过我们可以通过 `-w` 选项设置超时时间:
```
$ ncat -w 10 192.168.1.100 8080
```
这会使连接在 10 秒后终止,不过这个选项只能用于客户端而不是服务端。
### 例子: 10) 使用 -k 选项强制 ncat 待命
当客户端从服务端断开连接后,过一段时间服务端也会停止监听。但通过 `-k` 选项,我们可以强制服务器保持连接并继续监听端口。命令如下:
```
$ ncat -l -k 8080
```
现在即使来自客户端的连接断了也依然会处于待命状态。
至此我们的教程就结束了,如有疑问,请在下方留言。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/nc-ncat-command-examples-linux-systems/
作者:[Pradeep Kumar][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxtechi.com/author/pradeep/
[1]:https://www.linuxtechi.com/wp-content/uploads/2017/12/nc-ncat-command-examples-Linux-Systems.jpg

View File

@ -0,0 +1,72 @@
Sessions 与 Cookies 用户登录的原理是什么?
======
Facebook、Gmail、Twitter 是我们每天都会用的网站。它们的共同点在于都需要你登录进去后才能做进一步的操作。只有你通过认证并登录后,才能在 Twitter 上发推,在 Facebook 上评论,以及在 Gmail 上处理电子邮件。
[![gmail, facebook login page](http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg)][1]
那么登录的原理是什么?网站是如何认证的?它怎么知道是哪个用户从哪儿登录进来的?下面我们来对这些问题进行一一解答。
### 用户登录的原理是什么?
每次你在网站的登录页面中输入用户名和密码时,这些信息都会发送到服务器。服务器随后会将你输入的密码与保存的密码进行比对。如果两者不匹配,你会得到一个密码错误的提示;如果两者匹配,则登录成功。
### 登录时发生了什么?
登录后web 服务器会初始化一个 session并在你的浏览器中设置一个 cookie 变量。该 cookie 变量用于作为新建 session 的一个引用。搞晕了?让我们说得再简单一点。
### 会话的原理是什么?
服务器在用户名和密码都正确的情况下会初始化一个 session。session 的定义很复杂,你可以把它理解为 `关系的开始`。
[![session beginning of a relationship or partnership](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png)][2]
认证通过后,服务器就开始跟你展开一段关系了。由于服务器不能像我们人类一样看东西,它会在我们的浏览器中设置一个 cookie将我们与服务器的关系同其他人与服务器的关系区分开来。
### 什么是 Cookie?
cookie 是网站在你的浏览器中存储的一小段数据。你应该已经见过它们了。
[![theitstuff official facebook page cookies](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png)][3]
当你登录后,服务器为你创建一段关系,或者说一个 session然后将唯一标识这个 session 的 session id 以 cookie 的形式存储在你的浏览器中。
### 什么意思?
所有这些东西存在的原因在于识别出你来,这样当你写评论或者发推时,服务器能知道是谁在发评论、是谁在发推。
当你登录后,会产生一个包含 session id 的 cookie。这样这个 session id 就被赋予了那个输入正确用户名和密码的人。
[![facebook cookies in web browser](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png)][4]
也就是说session id 被赋予给了拥有这个账户的人。之后,网站上产生的所有行为,服务器都能通过 session id 来判断是由谁发起的。
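如果你想亲眼看看这一机制,可以用 curl 观察服务器下发和回传 cookie 的过程(下面的 URL 只是占位符,换成任何会设置 cookie 的网站即可):

```
# -c 把服务器通过 Set-Cookie 响应头下发的 cookie 保存到文件里,-i 显示响应头
curl -c cookies.txt -i https://example.com/login

# -b 在后续请求中带上这些 cookie服务器据此识别出同一个 session
curl -b cookies.txt https://example.com/profile
```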
### 如何让我保持登录状态?
session 有一定的时间限制。这一点与现实生活不一样:现实生活中的关系可以在不见面的情况下持续很长一段时间,而 session 有时间限制。你必须不断地通过一些动作来告诉服务器你还在线,否则的话,服务器会关掉这个 session而你会被登出。
[![websites keep me logged in option](http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png)][5]
不过在某些网站上可以启用 `保持登录Keep me logged in` 功能,这样服务器会将另一个唯一变量以 cookie 的形式保存到我们的浏览器中。这个唯一变量会通过与服务器上的变量进行对比来实现自动登录。若有人盗取了这个唯一标识我们称之为“cookie 窃取cookie stealing他们就能访问你的账户了。
### 结论
我们讨论了登录系统的工作原理,以及网站是如何进行认证的。我们还学到了什么是 session 和 cookie以及它们在登录机制中的作用。
我们希望你们已经理解了用户登录的工作原理,如有疑问,欢迎提问。
--------------------------------------------------------------------------------
via: http://www.theitstuff.com/sessions-cookies-user-login-work
作者:[Rishabh Kandari][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.theitstuff.com/author/reevkandari
[1]:http://www.theitstuff.com/wp-content/uploads/2017/10/Untitled-design-1.jpg
[2]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-9.png
[3]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-1-4.png
[4]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-2-3-e1508926255472.png
[5]:http://www.theitstuff.com/wp-content/uploads/2017/10/pasted-image-0-3-3-e1508926314117.png

View File

@ -0,0 +1,78 @@
什么是僵尸进程以及如何找到并杀掉僵尸进程?
======
[![What Are Zombie Processes And How To Find & Kill Zombie Processes](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/what-are-the-zombie-processes_orig.jpg)][1]
如果你经常使用 Linux你应该遇到过 `僵尸进程` 这个术语。那么什么是僵尸进程?它们是怎么产生的?它们是否对系统有害?我要怎样杀掉这些进程?下面将会回答这些问题。
### 什么是僵尸进程?
我们都知道进程的工作原理。我们启动一个程序,开始我们的任务,然后等任务结束了,我们就停止这个进程。进程停止后,它就会从进程表中移除。
你可以通过 `System-Monitor` 查看当前进程。
[![](http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-check-zombie-processes_orig.jpg)][2]
但是,有时候有些程序即使执行完了也依然留在进程表中。
那么,这些完成了生命周期但却依然留在进程表中的进程,我们称之为 `僵尸进程`。
### 他们是如何产生的?
当你运行一个程序时,它会产生一个父进程以及很多子进程。 所有这些子进程都会消耗内核分配给他们的内存和 CPU 资源。
这些子进程执行完成后会发送一个 Exit 信号,然后死掉。这个 Exit 信号需要被父进程读取,父进程随后需要调用 `wait` 命令来读取子进程的退出状态,并将子进程从进程表中移除。
若父进程正确地读取了子进程的 Exit 信号,则子进程会从进程表中删掉。
但若父进程未能读取到子进程的 Exit 信号,则这个子进程虽然完成执行处于死亡的状态,但也不会从进程表中删掉。
### 僵尸进程对系统有害吗?
**不会**。由于僵尸进程并不做任何事情,不会使用任何资源,也不会影响其它进程,因此存在僵尸进程也没什么坏处。不过由于进程的退出状态以及其它一些进程信息是存储在进程表(也就是内存)中的,因此存在太多僵尸进程有时也会引发一些问题。
**你可以想象成这样:**
“你是一家建筑公司的老板。你每天根据工人们的工作量来支付工资。有一个工人每天来到施工现场,只是坐在那里,你不用付他钱,他也不做任何工作。他只是每天都来,然后呆坐在那,仅此而已!”
这个工人就是僵尸进程的一个活生生的例子。**但是** 如果你有很多僵尸工人, 你的建设工地就会很拥堵从而让那些正常的工人难以工作。
### 那么如何找出僵尸进程呢?
打开终端并输入下面命令:
```
ps aux | grep Z
```
上面的命令会列出进程表中所有僵尸进程的详细内容。
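由于上面的 grep 也会匹配到命令行里恰好含有字母 Z 的进程,下面这个稍精确一点的写法只检查进程状态列,并顺便列出父进程的 PID之后杀僵尸进程时会用到它

```
# STAT 列以 Z 开头的即为僵尸进程PPID 列是其父进程的进程号
ps -eo pid,ppid,stat,cmd | awk '$3 ~ /^Z/'
```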
### 如何杀掉僵尸进程?
正常情况下我们可以用 `SIGKILL` 信号来杀死进程,但是僵尸进程已经死了,你不能杀死已经死掉的东西。因此你需要输入的命令应该是:
```
kill -s SIGCHLD pid
```
将这里的 pid 替换成父进程的进程 id这样父进程就会删除所有已经完成并死掉的子进程了。
**你可以把它想象成:**
"你在道路中间发现一具尸体,于是你联系了死者的家属,随后他们就会将尸体带离道路了。"
不过许多程序写得不是那么好,无法删掉这些僵尸子进程(否则你一开始也见不到这些僵尸了)。因此,确保删除僵尸子进程的唯一方法就是杀掉它们的父进程。
--------------------------------------------------------------------------------
via: http://www.linuxandubuntu.com/home/what-are-zombie-processes-and-how-to-find-kill-zombie-processes
作者:[linuxandubuntu][a]
译者:[lujun9972](https://github.com/lujun9972)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxandubuntu.com
[1]:http://www.linuxandubuntu.com/home/what-are-zombie-processes-and-how-to-find-kill-zombie-processes
[2]:http://www.linuxandubuntu.com/uploads/2/1/1/5/21152474/linux-check-zombie-processes_orig.jpg