This commit is contained in:
geekpi 2018-04-04 08:50:18 +08:00
commit 1d56957d26
16 changed files with 2229 additions and 1113 deletions


10 个增加 UNIX/Linux Shell 脚本趣味的工具
======
有些误解认为 shell 脚本仅用于 CLI 环境。实际上在 KDE 或 Gnome 桌面下,你可以有效地使用各种工具编写 GUI 或者网络(socket)脚本。shell 脚本可以使用一些 GUI 组件(菜单、警告框、进度条等),你可以控制终端输出、光标位置以及各种输出效果等等。利用下面的工具,你可以构建强壮的、可交互的、对用户友好的 UNIX/Linux bash 脚本。
制作 GUI 应用不是一项困难的任务但需要时间和耐心。幸运的是UNIX 和 Linux 都带有大量编写漂亮 GUI 脚本的工具。以下工具是基于 FreeBSD 和 Linux 操作系统做的测试,而且也适用于其他类 UNIX 操作系统。
### 1notify-send 命令
`notify-send` 命令允许你借助通知守护进程发送桌面通知给用户。这种避免打扰用户的方式,对于通知桌面用户一个事件或显示一些信息是有用的。在 Debian 或 Ubuntu 上,你需要使用 [apt 命令][1] 或 [apt-get 命令][2] 安装的包:
```bash
sudo apt-get install libnotify-bin
```
Fedora Linux 用户使用下面的 dnf 命令:
```bash
sudo dnf install libnotify
```
在下面的例子中,我们从命令行发送一条简单的桌面通知:
```bash
### 发送一些通知 ###
notify-send "rsnapshot done :)"
```
下面是另一个附加选项的代码:
```bash
...
alert=18000
live=$(lynx --dump http://money.rediff.com/ | grep 'BSE LIVE' | awk '{ print $5}' | sed 's/,//g;s/\.[0-9]*//g')
[ $notify_counter -eq 0 ] && [ $live -ge $alert ] && { notify-send -t 5000 -u low -i gtk-dialog-info "BSE Sensex touched 18k"; notify_counter=1; }
```
这里:
* `-t 5000`指定超时时间(毫秒) 5000 毫秒 = 5 秒)
* `-u low` 设置紧急等级 (如:低、普通、紧急)
* `-i gtk-dialog-info` 设置要显示的图标名称或者指定的图标(你可以设置路径为:`-i /path/to/your-icon.png`
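把上面几个选项组合起来,可以封装成一个小的通知函数。下面是一个假设性的示意(函数名 `notify_user` 为本文虚构;当没有图形会话或未安装 `notify-send` 时回退到终端输出):

```bash
#!/bin/bash
# notify_user:优先发送桌面通知;无图形会话时回退到 echo(函数名为示意)
notify_user() {
    local msg="$1"
    if [ -n "$DISPLAY" ] && command -v notify-send >/dev/null 2>&1; then
        notify-send -t 5000 -u low -i gtk-dialog-info "$msg"
    else
        echo "NOTIFY: $msg"
    fi
}

notify_user "rsnapshot done :)"
```

这样同一个脚本既可以在桌面环境下弹出通知,也可以在纯终端(例如 SSH 会话)里正常工作。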
关于更多使用 `notify-send` 功能的信息,请参考 man 手册。在命令行下输入 `man notify-send` 即可看见:
```bash
man notify-send
```
### 2、tput 命令
`tput` 命令用于设置终端特性。通过 `tput` 你可以设置:
* 在屏幕上移动光标。
* 获取终端信息。
* 设置颜色(背景和前景)。
* 设置加粗模式。
* 设置反显模式等等。
原文此处有一段示例代码(从略),其效果见下图:
![Fig.03: tput in action][6]
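作为补充,下面是一个利用 `tput` 设置文本属性的精简示意(假设 `TERM` 指向可用的 terminfo 项;否则各能力字符串退化为空,输出就是无格式文本。`emphasize` 是本文虚构的函数名):

```bash
#!/bin/bash
# 读取终端能力;tput 失败(例如没有终端)时退化为空字符串
bold=$(tput bold 2>/dev/null || true)
red=$(tput setaf 1 2>/dev/null || true)
reset=$(tput sgr0 2>/dev/null || true)

# emphasize:用加粗红色包裹一段文字,结尾恢复默认属性
emphasize() {
    printf '%s%s%s\n' "${bold}${red}" "$1" "${reset}"
}

emphasize "Error: disk almost full"
```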
关于 `tput` 命令的详细信息,参见手册:
```bash
man 5 terminfo
man tput
```
### 3、setleds 命令
`setleds` 命令允许你设置键盘灯。下面是打开数字键灯的示例:
```bash
setleds -D +num
```
下面是关闭数字键灯的示例:
```bash
setleds -D -num
```
* `-caps`:关闭大小写锁定
* `+caps`:打开大小写锁定
* `-scroll`:关闭滚动锁定
* `+scroll`:打开滚动锁定
查看 `setleds` 手册可看见更多信息和选项:`man setleds`。
### 4、zenity 命令
[zenity 命令显示 GTK+ 对话框][7],并且返回用户输入。它允许你使用各种 Shell 脚本向用户展示或请求信息。下面是一个 `whois` 指定域名目录服务的 GUI 客户端示例。
```bash
#!/bin/bash
# ……(示例脚本其余部分从略)
```
![Fig.04: zenity in Action][8]
参见手册获取更多 `zenity` 信息以及其他支持 GTK+ 的组件:
```bash
zenity --help
man zenity
```
### 5、kdialog 命令
`kdialog` 命令与 `zenity` 类似,但它是为 KDE 桌面和 QT 应用设计。你可以使用 `kdialog` 展示对话框。下面示例将在屏幕上显示信息:
```bash
kdialog --dontagain myscript:nofilemsg --msgbox "File: '~/.backup/config' not found."
```
参见 《[KDE 对话框 Shell 脚本编程][10]》 教程获取更多信息。
### 6、Dialog
[Dialog 是一个使用 Shell 脚本的应用][11],显示用户界面组件的文本。它使用 curses 或者 ncurses 库。下面是一个示例代码:
```bash
# ……(示例脚本其余部分从略)
case $response in
esac
```
参见 `dialog` 手册获取详细信息:`man dialog`。
#### 关于其他用户界面工具的注意事项
UNIX、Linux 提供了大量其他工具来显示和控制命令行中的应用程序shell 脚本可以使用一些 KDE、Gnome、X 组件集:
* `gmessage` - 基于 GTK xmessage 的克隆
* `xmessage` - 在窗口中显示或询问消息(基于 X 的 /bin/echo
* `whiptail` - 显示来自 shell 脚本的对话框
* `python-dialog` - 用于制作简单文本或控制台模式用户界面的 Python 模块
### 7、logger 命令
`logger` 命令将信息写到系统日志文件,如:`/var/log/messages`。它为系统日志模块 syslog 提供了一个 shell 命令行接口:
```bash
logger "MySQL database backup failed."
```
在 /var/log/messages 中可以看到类似的输出:
```bash
Apr 20 00:11:45 vivek-desktop kernel: [38600.515354] CPU0: Temperature/speed normal
Apr 20 00:12:20 vivek-desktop mysqld: Database Server failed
```
参见《[如何写消息到 syslog 或日志文件][12]》获得更多信息。此外,你也可以查看 logger 手册获取详细信息:`man logger`。
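在脚本里,通常会给 `logger` 加上标签(`-t`)和优先级(`-p`),便于之后在日志中过滤。下面是一个假设性的小封装(`log_backup_result` 为本文虚构的函数名):

```bash
#!/bin/bash
# log_backup_result:把备份结果写入 syslog,并在终端回显一行
log_backup_result() {
    local status="$1"
    # -t 指定标签,-p 指定 facility.level;找不到 logger 时跳过
    command -v logger >/dev/null 2>&1 && \
        logger -t mybackup -p user.notice "Backup finished with status: $status"
    echo "logged: $status"
}

log_backup_result OK
```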
### 8、setterm 命令
`setterm` 命令可设置不同的终端属性。下面的示例代码会强制屏幕在 15 分钟后变黑,监视器则 60 分钟后待机。
```bash
setterm -blank 15 -powersave powerdown -powerdown 60
```
下面的例子将 xterm 窗口中的文本以下划线展示:
```bash
setterm -underline on;
echo "Add Your Important Message Here"
setterm -underline off
```
另一个有用的选项是打开或关闭光标显示
```bash
setterm -cursor off
```
再打开光标显示:
```bash
setterm -cursor on
```
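使用 `setterm -cursor off` 的脚本要小心:如果脚本中途被中断,光标会一直处于隐藏状态。一个常见做法是用 `trap` 在退出时恢复光标。下面是一个示意(两个函数名为本文虚构;在没有控制台的环境里,`setterm` 调用会被静默忽略):

```bash
#!/bin/bash
# 隐藏/恢复光标;在非终端环境下静默失败而不是报错
hide_cursor() { setterm -cursor off 2>/dev/null || true; }
show_cursor() { setterm -cursor on 2>/dev/null || true; }

hide_cursor
trap show_cursor EXIT   # 无论正常结束还是被 Ctrl+C 中断,都恢复光标

echo "working..."
# 脚本退出时 trap 会自动调用 show_cursor
```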
参见 setterm 命令手册获取详细信息:`man setterm`。
### 9、smbclient给 MS-Windows 工作站发送消息
`smbclient` 命令可以与 SMB/CIFS 服务器通讯。它可以向 MS-Windows 系统上选定或全部用户发送消息。
```bash
smbclient -M WinXPPro <<eof
……(消息内容从略)
EOF
echo "${Message}" | smbclient -M salesguy2
```
参见 `smbclient` 手册(`man smbclient`)或者阅读我们之前发布的文章:《[给 Windows 工作站发送消息][13]》。
### 10、Bash 套接字编程
在 bash 下,你可以打开一个套接字并通过它发送数据。你不必使用 `curl` 或者 `lynx` 命令抓取远程服务器的数据。bash 和两个特殊的设备文件可用于打开网络套接字。以下选自 bash 手册:
1. `/dev/tcp/host/port` - 如果 `host` 是一个有效的主机名或者网络地址而且端口是一个整数或者服务名bash 会尝试打开一个相应的 TCP 连接套接字。
2. `/dev/udp/host/port` - 如果 `host` 是一个有效的主机名或者网络地址而且端口是一个整数或者服务名bash 会尝试打开一个相应的 UDP 连接套接字。
你可以使用这项技术来确定本地或远程服务器端口是打开或者关闭状态,而无需使用 `nmap` 或者其它的端口扫描器。
```bash
# find out if TCP port 25 open or not
# ……(示例脚本其余部分从略)
```
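把上面的思路封装成函数会更方便复用。下面是一个假设性的示意(`port_open` 为本文虚构的函数名;它利用 bash 的 `/dev/tcp` 伪设备,连接失败即视为端口关闭):

```bash
#!/bin/bash
# port_open:尝试对 host:port 建立 TCP 连接,输出 open 或 closed
port_open() {
    local host="$1" port="$2"
    # 在子 shell 中打开连接,成功与否由退出码决定;描述符随子 shell 自动关闭
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

port_open 127.0.0.1 1    # 端口 1 几乎总是关闭的
```

注意 `/dev/tcp` 是 bash 的内建特性,在 dash 等其它 shell 下不可用。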
参见 bash 手册获取更多信息:`man bash`。
### 关于 GUI 工具和 cron 任务的注意事项
如果你 [使用 crontab][15] 来启动你的脚本,你需要使用 `export DISPLAY=[用户机器]:0` 命令请求本地显示或输出服务。举个例子,使用 `zenity` 工具调用 `/home/vivek/scripts/monitor.stock.sh`
```
@hourly DISPLAY=:0.0 /home/vivek/scripts/monitor.stock.sh
```
你有喜欢的可以增加 shell 脚本趣味的 UNIX 工具么?请在下面的评论区分享它吧。
via: https://www.cyberciti.biz/tips/spice-up-your-unix-linux-shell-scripts.html
作者:[Vivek Gite][a]
译者:[pygmalion666](https://github.com/pygmalion666)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


在 Ubuntu 16.04 上配置 msmtp
======
这篇文章是在我之前的博客中发表过的在 Ubuntu 16.04 上配置 MSMTP 的一个副本。我再次发表是为了后续,我并不知道它是否能在更高版本上工作。由于我没有再托管自己的 Ubuntu/MSMTP 服务器了,所以我现在看不到有需要更新的地方,但是如果我需要重新设置,我会创建一个更新的帖子!无论如何,这是我现有的。
我之前写了一篇在 Ubuntu 12.04 上配置 msmtp 的文章,但是正如我在之前的文章中暗示的那样,当我升级到 Ubuntu 16.04 后出现了一些问题。接下来的内容基本上是一样的,但 16.04 有一些小的更新。和以前一样,这里假定你使用 Apache 作为 Web 服务器,但是我相信如果你选择其他的 Web 服务器,也应该相差不多。
我使用 [msmtp][1] 发送来自这个博客的邮件,来通知我评论和更新等。这里我会记录如何配置它通过 Google Apps 帐户发送电子邮件,虽然这应该与标准的 Google 帐户一样。
首先,我们需要安装 3 个软件包:
```
sudo apt-get install msmtp msmtp-mta ca-certificates
```
安装完成后就需要一个默认配置。默认情况下msmtp 会在 `/etc/msmtprc` 中查找,所以我使用 `vim` 创建了这个文件,尽管任何文本编辑器都可以做到这一点。这个文件看起来像这样:
```
# Set defaults.
defaults
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account GMAIL
host smtp.gmail.com
port 587
auth login
user YOUR USERNAME
password YOUR PASSWORD
from FROM@ADDRESS
logfile /var/log/msmtp/msmtp.log
account default :
```
任何大写项都是需要替换为你特定的配置。日志文件是一个例外,当然你也可以将活动/警告/错误放在任何你想要的地方。
文件保存后,我们需要更新上述配置文件的权限(如果该文件的权限过于开放,msmtp 将不会运行),并且创建日志文件所在的目录。
```
sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc
```
接下来,我选择为 msmtp 日志配置 logrotate,以确保日志文件不会太大,并让日志目录更加整洁。为此,我们创建 `/etc/logrotate.d/msmtp` 并按以下内容配置。请注意,这是可选的,你可以选择不这样做,或者你可以选择以不同方式配置日志。
```
/var/log/msmtp/*.log {
rotate 12
monthly
compress
missingok
notifempty
}
```
现在配置了日志,我们需要通过编辑 `/etc/php/7.0/apache2/php.ini` 告诉 PHP 使用 msmtp并将 sendmail 路径从
```
sendmail_path =
```
变成
```
sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a -t"
```
这里我遇到了一个问题,即使我指定了帐户名称,但是当我测试它时,它并没有正确发送电子邮件。这就是为什么 `account default : ` 这行被放在 msmtp 配置文件的末尾。要测试配置,请确保 PHP 文件已保存并运行 `sudo service apache2 restart`,然后运行 `php -a` 并执行以下命令
```
mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();
```
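除了通过 PHP,也可以在 shell 里直接给 msmtp 喂一封最简单的邮件来验证配置。下面是一个示意(`build_message` 是本文虚构的辅助函数,收件地址沿用前文的占位写法):

```bash
#!/bin/bash
# build_message:拼出一封最简单的邮件(Subject 行、空行、正文)
build_message() {
    printf 'Subject: %s\n\n%s\n' "$1" "$2"
}

# 实际发送时(需已配置好 /etc/msmtprc):
# build_message "Test Subject" "Test body text" | msmtp -a GMAIL personal@email.com

build_message "Test Subject" "Test body text"
```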
此时发生的任何错误都将显示在输出中,因此错误诊断会相对容易。如果一切顺利,你现在应该可以在 Ubuntu 服务器上通过 PHP 的 sendmail(至少 WordPress 可以)用 Gmail(或 Google Apps)发送电子邮件了。
via: https://codingproductivity.wordpress.com/2018/01/18/configuring-msmtp-on-ub
作者:[JOE][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


cTop:用于容器监控的命令行工具
======
最近 Linux 容器很火,我们中的大多数人甚至已经在使用它,同时一些人也开始学习它。
我们已经介绍了有名的 GUI(图形用户界面)工具,如 Portainer 和 Rancher。它们有助于我们通过 GUI 管理容器。
这篇指南将会通过 cTop 命令帮助我们理解和监控 Linux 容器。它是一个类似 `top` 命令的命令行工具。
### 什么是 cTop
[ctop][1] 为多个容器提供了一个简洁凝练的实时指标概览。它是一个类 `top` 的针对容器指标的界面。
它展示了容器指标,比如 CPU 利用率、内存利用率、磁盘 I/O 读写、进程 ID(PID)和网络发送(TX,从此服务器发送)以及接收(RX,此服务器接收)。
`ctop` 带有对 Docker 和 runc 的内建支持;对其他容器和集群系统的连接计划在未来版本中推出。
它不需要任何参数并且默认使用 Docker 主机变量。
**建议阅读:**
- [Portainer 一个简单的 Docker 图形管理界面][2]
- [Rancher 一个完整的生产环境容器管理平台][3]
### 如何安装 cTop
开发者提供了一个简单的 shell 脚本来帮助我们直接使用 `ctop`。我们要做的,只是在 `/bin` 目录下下载 `ctop` shell 文件来保证全局访问。最后给予 `ctop` 脚本文件执行权限。
`/usr/local/bin` 目录下下载 ctop shell 脚本。
```
$ sudo wget https://github.com/bcicen/ctop/releases/download/v0.7/ctop-0.7-linux-amd64 -O /usr/local/bin/ctop
```
`ctop` shell 脚本设置执行权限。
```
$ sudo chmod +x /usr/local/bin/ctop
```
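如果把下载和加权限这两步写进脚本,最好先检查目标目录是否可写,避免 `wget` 因权限问题静默失败。下面是一个假设性的封装(`install_ctop` 为本文虚构的函数名,URL 取自上文):

```bash
#!/bin/bash
# install_ctop:下载 ctop 到指定路径并加可执行权限;目录不可写时报错返回
install_ctop() {
    local dest="$1"
    if [ ! -w "$(dirname "$dest")" ]; then
        echo "need sudo for $dest"
        return 1
    fi
    wget -q https://github.com/bcicen/ctop/releases/download/v0.7/ctop-0.7-linux-amd64 \
        -O "$dest" && chmod +x "$dest"
}

# 用法:install_ctop /usr/local/bin/ctop(通常需要以 root 运行)
```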
另外你可以通过 docker 来安装和运行 `ctop`。在此之前先确保你已经安装过 docker。为了安装 docker参考以下链接。
**建议阅读:**
- [如何在 Linux 上安装 Docker][4]
- [如何在 Linux 上玩转 Docker 镜像][5]
- [如何在 Linux 上玩转 Docker 容器][6]
- [如何在 Docker 容器中安装,运行应用][7]
```
$ docker run --rm -ti \
--name=ctop \
-v /var/run/docker.sock:/var/run/docker.sock \
quay.io/vektorlab/ctop:latest
```
### 如何使用 cTop
直接启动 `ctop` 程序而不用任何参数。默认它绑定的 `a` 键用来展示所有容器(运行的和没运行的)。
`ctop` 头部显示你的系统时间和容器的总数。
```
$ ctop
```
你可能得到以下类似输出。
![][9]
### 如何管理容器
你可以使用 `ctop` 来管理容器。选择一个你想要管理的容器然后按下回车键,选择所需选项如 `start`、`stop`、`remove` 等。
![][10]
### 如何给容器排序
默认 `ctop` 使用 `state` 字段来给容器排序。按下 `s` 键来按不同的方面给容器排序。
![][11]
### 如何查看容器指标
如果你想要查看关于容器的更多细节和指标,只需选择你想要查看的相应容器,然后按 `o` 键。
![][12]
### 如何查看容器日志
选择你想要查看日志的相应容器然后按 `l` 键。
![][13]
### 仅显示活动容器
使用 `-a` 选项运行 `ctop` 命令来仅显示活动容器
![][14]
### 打开帮助对话框
运行 `ctop`,只需按 `h` 键来打开帮助部分。
![][15]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/ctop-a-command-line-tool-for-container-monitoring-
作者:[2DAYGEEK][a]
译者:[kimii](https://github.com/kimii)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


深入理解 BPF一个阅读清单
============================================================
_~ [更新于][146] 2017-11-02 ~_
## 什么是 BPF?
BPF<ruby>伯克利包过滤器<rt>**B**erkeley **P**acket **F**ilter</rt></ruby>,最初构想提出于 1992 年,其目的是为了提供一种过滤包的方法,并且要避免从内核空间到用户空间的无用的数据包复制行为。它最初是由从用户空间注入到内核的一个简单的字节码构成,它在那个位置利用一个校验器进行检查 —— 以避免内核崩溃或者安全问题 —— 并附着到一个套接字上,接着在每个接收到的包上运行。几年后它被移植到 Linux 上,并且应用于一小部分应用程序上(例如,`tcpdump`。其简化的语言以及存在于内核中的即时编译器JIT使 BPF 成为一个性能卓越的工具。
然后,在 2013 年Alexei Starovoitov 对 BPF 进行彻底地改造,并增加了新的功能,改善了它的性能。这个新版本被命名为 eBPF (意思是 “extended BPF”与此同时将以前的 BPF 变成 cBPF意思是 “classic” BPF。新版本出现了如映射和<ruby>尾调用<rt>tail call</rt></ruby>这样的新特性,并且 JIT 编译器也被重写了。新的语言比 cBPF 更接近于原生机器语言。并且,在内核中创建了新的附着点。
感谢那些新的钩子eBPF 程序才可以被设计用于各种各样的情形下其分为两个应用领域。其中一个应用领域是内核跟踪和事件监控。BPF 程序可以被附着到探针kprobe而且它与其它跟踪模式相比有很多的优点有时也有一些缺点
另外一个应用领域是网络编程。除了套接字过滤器外eBPF 程序还可以附加到 tcLinux 流量控制工具)的入站或者出站接口上,以一种很高效的方式去执行各种包处理任务。这种使用方式在这个领域开创了一个新的天地。
并且 eBPF 通过使用为 IO Visor 项目开发的技术,使它的性能进一步得到提升:也为 XDP“eXpress Data Path”添加了新的钩子XDP 是不久前添加到内核中的一种新式快速路径。XDP 与 Linux 栈组合,然后使用 BPF ,使包处理的速度更快。
甚至一些项目,如 P4、Open vSwitch[考虑][155] 或者开始去接洽使用 BPF。其它的一些如 CETH、Cilium则是完全基于它的。BPF 是如此流行,因此,我们可以预计,不久之后,将围绕它有更多工具和项目出现 …
## 深入理解字节码
就像我一样:我的一些工作(包括 [BEBA][156])是非常依赖 eBPF 的,并且在这个网站上以后的几篇文章将关注于这个主题。按理说,在深入到细节之前,我应该以某种方式去介绍 BPF —— 我的意思是,真正的介绍,在第一节所提供的简要介绍上更多地介绍在 BPF 上开发的新功能:什么是 BPF 映射?尾调用?内部结构是什么样子?等等。但是,在这个网站上已经有很多这个主题的介绍了,而且,我也不希望去写另一篇 “BPF 介绍” 的重复文章。
毕竟,我花费了很多的时间去阅读和学习关于 BPF 的知识。因此,在这里,我把收集到的非常多的关于 BPF 的阅读材料整理了出来:介绍、文档,也有教程或者示例。这里有很多的材料可以去阅读,但是,为了去阅读它,首先要去 _找到_ 它。因此,为了能够帮助更多想去学习和使用 BPF 的人,现在的这篇文章给出了一个资源清单。这里有各种阅读材料,它可以帮你深入理解内核字节码的机制。
## 资源
![](https://qmonnet.github.io/whirl-offload/img/icons/pic.svg)
### 简介
这篇文章中下面的链接提供了 BPF 的基本概述,或者,一些与它密切相关的一些主题。如果你对 BPF 非常陌生,你可以在这些介绍文章中挑选出一篇你喜欢的文章去阅读。如果你已经理解了 BPF你可以针对特定的主题去阅读下面是阅读清单。
#### 关于 BPF
**关于 eBPF 的常规介绍**
* [全面介绍 eBPF][193]Matt Flemmingon LWN.netDecember 2017
一篇写的很好的,并且易于理解的,介绍 eBPF 子系统组件的概述文章。
* [利用 BPF 和 XDP 实现可编程的内核网络数据路径][53]  (Daniel Borkmann, OSSNA17, Los Angeles, September 2017)
快速理解所有的关于 eBPF 和 XDP 的基础概念的最好讲稿中的一篇(主要是关于网络处理的)
* [BSD 包过滤器][54] Suchakra Sharma, June 2017 
一篇非常好的介绍文章,主要是关于跟踪方面的。
* [BPF跟踪及更多][55]Brendan Gregg, January 2017
主要内容是跟踪使用案例相关的。
* [Linux BPF 的超强功能][56] Brendan Gregg, March 2016
第一部分是关于<ruby>火焰图<rt>flame graph</rt></ruby>的使用。
* [IO Visor][57]Brenden Blanco, SCaLE 14x, January 2016
介绍了 **IO Visor 项目**
* [大型机上的 eBPF][58]Michael Holzheu, LinuxCon, Dubin, October 2015
* [在 Linux 上新的(令人激动的)跟踪新产品][59]Elena Zannoni, LinuxCon, Japan, 2015
* [BPF — 内核中的虚拟机][60]Alexei Starovoitov, February 2015
eBPF 的作者写的一篇讲稿。
* [扩展 extended BPF][61] Jonathan Corbet, July 2014
**BPF 内部结构**
* Daniel Borkmann 正在做的一项令人称奇的工作,它用于去展现 eBPF 的 **内部结构**,尤其是,它的关于 **随同 tc 使用** 的几次演讲和论文。
* [使用 tc 的 cls_bpf 的高级可编程和它的最新更新][30]netdev 1.2, Tokyo, October 2016
Daniel 介绍了 eBPF 的细节,及其用于隧道和封装、直接包访问和其它特性。
* [自 netdev 1.1 以来的 cls_bpf/eBPF 更新][31]  netdev 1.2, Tokyo, October 2016, part of [this tc workshop][32]
* [使用 cls_bpf 实现完全可编程的 tc 分类器][33]  netdev 1.1, Sevilla, February 2016
介绍 eBPF 之后,它提供了许多 BPF 内部机制(映射管理、尾调用、校验器)的见解。对于大多数有志于 BPF 的人来说,这是必读的![全文在这里][34]。
* [Linux tc 和 eBPF][35] fosdem16, Brussels, Belgium, January 2016
* [eBPF 和 XDP 攻略和最新更新][36] fosdem17, Brussels, Belgium, February 2017
这些介绍可能是理解 eBPF 内部机制设计与实现的最佳文档资源之一。
[IO Visor 博客][157] 有一些关于 BPF 的值得关注的技术文章。它们中的一些包含了一点营销讨论。
**内核跟踪**:总结了所有的已有的方法,包括 BPF
* [邂逅 eBPF 和内核跟踪][62] Viller Hsiao, July 2016
Kprobes、uprobes、ftrace
* [Linux 内核跟踪][63]Viller Hsiao, July 2016
Systemtap、Kernelshark、trace-cmd、LTTng、perf-tool、ftrace、hist-trigger、perf、function tracer、tracepoint、kprobe/uprobe …
关于 **事件跟踪和监视**,Brendan Gregg 大量使用了 eBPF,并且就其使用 eBPF 的一些案例写了极好的文档。如果你正在做一些内核跟踪方面的工作,你应该去看一下他的关于 eBPF 和火焰图相关的博客文章。其中的大多数都可以 [从这篇文章中][158] 访问,或者浏览他的博客。
介绍 BPF也介绍 **Linux 网络的一般概念**
* [Linux 网络详解][64] Thomas Graf, LinuxCon, Toronto, August 2016
* [内核网络攻略][65]  (Thomas Graf, LinuxCon, Seattle, August 2015)
**硬件<ruby>卸载<rt>offload</rt></ruby>**LCTT 译注:“卸载”是指原本由软件来处理的一些操作交由硬件来完成,以提升吞吐量,降低 CPU 负荷。):
* eBPF 与 tc 或者 XDP 一起支持硬件卸载,开始于 Linux 内核版本 4.9,是由 Netronome 提出的。这里是关于这个特性的介绍:[eBPF/XDP 硬件卸载到 SmartNICs][147]Jakub Kicinski 和 Nic Viljoen, netdev 1.2, Tokyo, October 2016
* 一年后出现的更新版:
[综合 XDP 卸载——处理边界案例][194]Jakub Kicinski 和 Nic Viljoennetdev 2.2 SeoulNovember 2017
* 我现在有一个简短的,但是在 2018 年的 FOSDEM 上有一个更新版:
[XDP 硬件卸载的挑战][195]Quentin MonnetFOSDEM 2018BrusselsFebruary 2018
关于 **cBPF**
* [BSD 包过滤器:一个用户级包捕获的新架构][66] Steven McCanne 和 Van Jacobson, 1992
它是关于经典 BPF 的最早的论文。
* [BPF 的 FreeBSD 手册][67] 是理解 cBPF 程序有用的资源。
* 关于 cBPFDaniel Borkmann 做至少两个演讲,[一是,在 2013 年 mmap 中BPF 和 Netsniff-NG][68],以及 [在 2014 中关于 tc 和 cls_bpf 的的一个非常完整的演讲][69]。
在 Cloudflare 的博客上,Marek Majkowski 介绍了他的 [与 iptables 的 `xt_bpf` 模块一起使用 BPF 字节码][70] 的方法。值得一提的是,从 Linux 内核 4.10 开始,eBPF 也是通过这个模块支持的。(虽然,我并不知道关于这件事的任何讨论或者文章)
* [Libpcap 过滤器语法][71]
#### 关于 XDP
* 在 IO Visor 网站上的 [XDP 概述][72]。
* [eXpress Data Path (XDP)][73]  Tom Herbert, Alexei Starovoitov, March 2016
这是第一个关于 XDP 的演讲。
* [BoF - BPF 能为你做什么?][74]  Brenden Blanco, LinuxCon, Toronto, August 2016
* [eXpress Data Path][148] Brenden Blanco, Linux Meetup at Santa Clara, July 2016
包含一些(有点营销的意思?)**基准测试结果**!使用单一核心:
* ip 路由丢弃: ~3.6 百万包每秒Mpps
* 使用 BPFtc使用 clsact qdisc丢弃 ~4.2 Mpps
* 使用 BPFXDP 丢弃20 Mpps CPU 利用率 < 10%
* XDP 重写转发在端口上它接收到的包10 Mpps
(测试是用 mlx4 驱动程序执行的)。
* Jesper Dangaard Brouer 有几个非常好的幻灯片,它可以从本质上去理解 XDP 的内部结构。
* [XDP eXpress Data Path介绍及将来的用法][37] September 2016
“Linux 内核与 DPDK 的斗争” 。**未来的计划**(在写这篇文章时)它用 XDP 和 DPDK 进行比较。
* [网络性能研讨][38]  netdev 1.2, Tokyo, October 2016
关于 XDP 内部结构和预期演化的附加提示。
* [XDP eXpress Data Path, 可用于 DDoS 防护][39] OpenSourceDays, March 2017
包含了关于 XDP 的详细情况和使用案例,以及 **性能测试****性能测试结果** 和 **代码片断**,以及使用 eBPF/XDP基于一个 IP 黑名单模式)的用于 **基本的 DDoS 防护**
* [内存 vs. 网络,激发和修复内存瓶颈][40] LSF Memory Management Summit, March 2017
提供了许多 XDP 开发者当前所面对 **内存问题** 的许多细节。不要从这一个开始,但如果你已经理解了 XDP并且想去了解它在页面分配方面的真实工作方式这是一个非常有用的资源。
* [XDP 能为其它人做什么][41]netdev 2.1, Montreal, April 2017及 Andy Gospodarek
普通人怎么开始使用 eBPF 和 XDP。这个演讲也由 Julia Evans 在 [她的博客][42] 上做了总结。
* [XDP 能为其它人做什么][205]第二版netdev 2.2, Seoul, November 2017同一个作者
该演讲的修订版本,包含了新的内容。
Jesper 也创建了并且尝试去扩展了有关 eBPF 和 XDP 的一些文档,查看 [相关节][75]。)
* [XDP 研讨 — 介绍、体验和未来发展][76]Tom Herbert, netdev 1.2, Tokyo, October 2016
在这篇文章中,只有视频可用,我不知道是否有幻灯片。
* [在 Linux 上进行高速包过滤][149] Gilberto Bertin, DEF CON 25, Las Vegas, July 2017
在 Linux 上的最先进的包过滤的介绍,面向 DDoS 的保护、讨论了关于在内核中进行包处理、内核旁通、XDP 和 eBPF。
#### 关于 基于 eBPF 或者 eBPF 相关的其它组件
* [在边界上的 P4][77] John Fastabend, May 2016
提出了使用 **P4**,一个包处理的描述语言,使用 BPF 去创建一个高性能的可编程交换机。
* 如果你喜欢音频的演讲,这里有一个相关的 [OvS Orbit 片断(#11),叫做 在边界上的 P4][78],日期是 2016 年 8 月。OvS Orbit 是对 Ben Pfaff 的访谈,它是 Open vSwitch 的其中一个核心维护者。在这个场景中John Fastabend 是被访谈者。
* [P4, EBPF 和 Linux TC 卸载][79] Dinan Gunawardena 和 Jakub Kicinski, August 2016
另一个 **P4** 的演讲,一些有关于 Netronome 的 **NFP**(网络流处理器)架构上的 eBPF 硬件卸载的因素。
* **Cilium** 是一个由 Cisco 最先发起的技术,它依赖 BPF 和 XDP 去提供 “基于 eBPF 程序即时生成的,用于容器的快速内核强制的网络和安全策略”。[这个项目的代码][150] 在 GitHub 上可以访问到。Thomas Graf 对这个主题做了很多的演讲:
* [Cilium对容器利用 BPF & XDP 实现网络 & 安全][43]也特别展示了一个负载均衡的使用案例Linux Plumbers conference, Santa Fe, November 2016
* [Cilium对容器利用 BPF & XDP 实现网络 & 安全][44] Docker Distributed Systems Summit, October 2016 — [video][45]
* [Cilium使用 BPF 和 XDP 的快速 IPv6 容器网络][46] LinuxCon, Toronto, August 2016
* [Cilium 用于容器的 BPF & XDP][47] fosdem17, Brussels, Belgium, February 2017
在上述不同的演讲中重复了大量的内容嫌麻烦就选最近的一个。Daniel Borkmann 作为 Google 开源博客的特邀作者,也写了 [Cilium 简介][80]。
* 这里也有一个关于 **Cilium** 的播客节目:一个是 [OvS Orbit episode (#4)][81],它是 Ben Pfaff 访谈 Thomas Graf 2016 年 5 月),和 [另外一个 Ivan Pepelnjak 的播客][82],仍然是 Thomas Graf 关于 eBPF、P4、XDP 和 Cilium 方面的2016 年 10 月)。
* **Open vSwitch** (OvS),它是 **Open Virtual Network**OVN一个开源的网络虚拟化解决方案相关的项目正在考虑在不同的层次上使用 eBPF它已经实现了几个概念验证原型
* [使用 eBPF 卸载 OVS 流处理器][48] William (Cheng-Chun) Tu, OvS conference, San Jose, November 2016
* [将 OVN 的灵活性与 IOVisor 的高效率相结合][49] Fulvio Risso, Matteo Bertrone 和 Mauricio Vasquez Bernal, OvS conference, San Jose, November 2016
据我所知,这些 eBPF 的使用案例看上去仅处于提议阶段(并没有合并到 OvS 的主分支中),但是,看它带来了什么将是非常值得关注的事情。
* XDP 的设计对分布式拒绝访问DDoS攻击是非常有用的。越来越多的演讲都关注于它。例如在 2017 年 4 月加拿大蒙特利尔举办的 netdev 2.1 会议上,来自 Cloudflare 的人们的讲话([XDP 实践:将 XDP 集成到我们的 DDoS 缓解管道][83])或者来自 Facebook 的([Droplet由 BPF + XDP 驱动的 DDoS 对策][84])都存在这样的很多使用案例。
* Kubernetes 可以用很多种方式与 eBPF 交互。这里有一篇关于 [在 Kubernetes 中使用 eBPF][196] 的文章它解释了现有的产品Cilium、Weave Scope如何支持 eBPF 与 Kubernetes 一起工作并且进一步描述了在容器部署环境中eBPF 感兴趣的交互内容是什么。
* [CETH for XDP][85] Yan Chan 和 Yunsong Lu、Linux Meetup、Santa Clara、July 2016
**CETH**,是由 Mellanox 发起的为实现更快的网络 I/O 而主张的通用以太网驱动程序架构。
* [**VALE 交换机**][86],另一个虚拟交换机,它可以与 netmap 框架结合,有 [一个 BPF 扩展模块][87]。
* **Suricata**,一个开源的入侵检测系统,它的旁路捕获特性依赖于 XDP。有一些关于它的资源:
* [Suricate 文档的 eBPF 和 XDP 部分][197]
* [SEPTun-Mark-II][198] Suricata Extreme 性能调优指南 — Mark II Michal Purzynski 和 Peter Manev 发布于 2018 年 3 月。
* [介绍这个特性的博客文章][199] Éric Leblond 发布于 2016 年 9 月。
* [Suricate 的 eBPF 历险记][89]  Éric Leblond, netdev 1.2, Tokyo, October 2016
* [eBPF 和 XDP 一窥][90]  Éric Leblond, Kernel Recipes, Paris, September 2017
这个项目声称,当使用原生驱动的 XDP 时可以达到非常高的性能。
* [InKeV对于 DCN 的内核中分布式网络虚拟化][91] Z. Ahmed, M. H. Alizai 和 A. A. Syed, SIGCOMM, August 2016
**InKeV** 是一个基于 eBPF 的虚拟网络、目标数据中心网络的数据路径架构。它最初由 PLUMgrid 提出,并且声称相比基于 OvS 的 OpenStack 解决方案可以获得更好的性能。
* [gobpf - 在 Go 中使用 eBPF][92] Michael Schubert, fosdem17, Brussels, Belgium, February 2017
“一个来自 Go 库,可以去创建、加载和使用 eBPF 程序”
* [ply][93] 是为 Linux 实现的一个小而灵活的开源动态 **跟踪器**,它的一些特性非常类似于 bcc 工具,是受 awk 和 dtrace 启发,但使用一个更简单的语言。它是由 Tobias Waldekranz 写的。
* 如果你读过我以前的文章,你可能对我在这篇文章中的讨论感兴趣,[使用 eBPF 实现 OpenState 接口][151],关于包状态处理,在 fosdem17 中。
![](https://qmonnet.github.io/whirl-offload/img/icons/book.svg)
### 文档
一旦你对 BPF 是做什么的有一个大体的理解。你可以抛开一般的演讲而深入到文档中了。下面是 BPF 的规范和功能的最全面的文档,按你的需要挑一个开始阅读吧!
#### 关于 BPF
* **BPF 的规范**(包含 classic 和 extended 版本)可以在 Linux 内核的文档中,和特定的文件 [linux/Documentation/networking/filter.txt][94] 中找到。BPF 使用以及它的内部结构也被记录在那里。此外,当加载 BPF 代码失败时,在这里可以找到 **被校验器抛出的错误信息**,这有助于你排除不明确的错误信息。
* 此外,在内核树中,在 eBPF 那里有一个关于 **常见问答** 的文档,它在文件 [linux/Documentation/bpf/bpf\_design\_QA.txt][95] 中。
* … 但是,内核文档是非常难懂的,并且非常不容易阅读。如果你只是去查找一个简单的 eBPF 语言的描述,可以去 IO Visor 的 GitHub 仓库,那儿有 [它的概括性描述][96]。
* 顺便说一下IO Visor 项目收集了许多 **关于 BPF 的资源**。大部分分别在 bcc 仓库的 [文档目录][97] 中,和 [bpf-docs 仓库][98] 的整个内容中,它们都在 GitHub 上。注意,这个非常好的 [BPF 参考指南][99] 包含一个详细的 BPF C 和 bcc Python 的 helper 的描述。
* 想深入到 BPF那里有一些必要的 **Linux 手册页**。第一个是 [bpf(2) man 页面][100] 关于 `bpf()` **系统调用**,它用于从用户空间去管理 BPF 程序和映射。它也包含一个 BPF 高级特性的描述(程序类型、映射等等)。第二个是主要是处理希望附加到 tc 接口的 BPF 程序:它是 [tc-bpf(8) man 页面][101],是 **使用 BPF 和 tc** 的一个参考,并且包含一些示例命令和参考代码。
* Jesper Dangaard Brouer 发起了一个 **更新 eBPF Linux 文档** 的尝试,包含 **不同的映射**。[他有一个草案][102],欢迎去贡献。一旦完成,这个文档将被合并进 man 页面并且进入到内核文档。
* Cilium 项目也有一个非常好的 [BPF 和 XDP 参考指南][103],它是由核心的 eBPF 开发者写的,它被证明对于 eBPF 开发者是极其有用的。
* David Miller 在 [xdp-newbies][152] 邮件列表中发了几封关于 eBPF/XDP 内部结构的富有启发性的电子邮件。我找不到一个单独的地方收集它们的链接,因此,这里是一个列表:
* [bpf.h 和你 …][50]
* [从语境上讲…][51]
* [BPF 校验器概述][52]
最后一个可能是目前来说关于校验器的最佳的总结。
* Ferris Ellis 发布的 [一个关于 eBPF 的系列博客文章][104]。作为我写的这个短文,第一篇文章是关于 eBPF 的历史背景和未来期望。接下来的文章将更多的是技术方面,和前景展望。
* [每个内核版本的 BPF 特性列表][153] 在 bcc 仓库中可以找到。如果你想去知道运行一个给定的特性所要求的最小的内核版本,它是非常有用的。我贡献和添加了链接到提交中,它介绍了每个特性,因此,你也可以从那里很容易地去访问提交历史。
#### 关于 tc
当为了网络目的将 BPF 与 tc(Linux <ruby>流量控制<rt>**t**raffic **c**ontrol</rt></ruby>工具)结合使用时,了解一些 tc 的常规功能会很有用。这里有几个关于它的资源。
* 找到关于 **Linux 上 QoS** 的简单教程是很困难的。这里有两个链接,它们很长而且很难懂,但是,如果你可以抽时间去阅读它,你将学习到几乎关于 tc 的任何东西(虽然,没有什么关于 BPF 的)。它们在这里:[怎么去实现流量控制  Martin A. Brown, 2006][105],和 [怎么去实现 Linux 的高级路由 & 流量控制 LARTC Bert Hubert & al., 2002][106]。
* 在你的系统上的 **tc 手册页面** 并不是最新的,因为它们中的几个最近已经增加了内容。如果你没有找到关于特定的队列规则、分类或者过滤器的文档,它可能在最新的 [tc 组件的手册页面][107] 中。
* 一些额外的材料可以在 iproute2 包自已的文件中找到:这个包中有 [一些文档][108],包括一些文件,它可以帮你去理解 [tc 的 action 的功能][109]。
**注意:** 这些文件在 2017 年 10 月 已经从 iproute2 中删除,然而,从 Git 历史中却一直可用。
* 不完全是文档:[有一个关于 tc 的几个特性的研讨会][110]包含过滤、BPF、tc 卸载、…) 由 Jamal Hadi Salim 在 netdev 1.2 会议上组织的October 2016
* 额外信息 — 如果你使用 `tc` 较多,这里有一些好消息:我用这个工具 [写了一个 bash 补完功能][111],并且它被包 iproute2 带到内核版本 4.6 和更高版中!
#### 关于 XDP
* 对于 XDP 的一些 [进展中的文档(包括规范)][112] 已经由 Jesper Dangaard Brouer 启动,并且意味着将成为一个协作工作。它仍在推进中(2016 年 9 月),预期其内容还会修改,或许还会挪到别处(Jesper [呼吁贡献][113],如果你想去改善它)。
* 来自 Cilium 项目的 [BPF 和 XDP 参考指南][114] … 好吧,这个名字已经说明了一切。
#### 关于 P4 和 BPF
[P4][159] 是一个用于指定交换机行为的语言。它可以为多种目标硬件或软件编译。因此,你可能猜到了,这些目标中的一个就是 BPF … 不过支持是部分的:一些 P4 特性并不能被转化到 BPF 中;反过来,BPF 可以做的一些事情用 P4 也无法表达。不过,**P4 与 BPF 使用** 的相关文档,[被隐藏在 bcc 仓库中][160]。这在 P4_16 版本中有所改变,p4c 参考编译器包含了 [一个 eBPF 后端][161]。
![](https://qmonnet.github.io/whirl-offload/img/icons/flask.svg)
### 教程
Brendan Gregg 为想要 **使用 bcc 工具** 跟踪和监视内核中的事件的人制作了一个非常好的 **教程**。[第一个教程是关于如何使用 bcc 工具][162],它有许多章节,可以教你去理解怎么去使用已有的工具,而 [针对 Python 开发者的一篇][163] 专注于开发新工具,它总共有十七节 “课程”。
Sasha Goldshtein 也有一些 [Linux 跟踪研究材料][164] 涉及到使用几个 BPF 工具进行跟踪。
Jean-Tiare Le Bigot 的另一篇文章提供了一个详细的(和有指导意义的)[使用 perf 和 eBPF 去设置一个低级的跟踪器][165] 的示例。
对于网络相关的 eBPF 使用案例也有几个教程。有一些值得关注的文档,包括一篇 _eBPF 卸载入门指南_,是关于在 [Open NFP][166] 平台上用 Netronome 操作的。其它的那些,来自 Jesper 的演讲,[XDP 能为其它人做什么][167](及其[第二版][205]),可能是 XDP 入门的最好的方法之一。
![](https://qmonnet.github.io/whirl-offload/img/icons/gears.svg)
### 示例
有示例是非常好的。看看它们是如何工作的。但是 BPF 程序示例是分散在几个项目中的,因此,我列出了我所知道的所有的示例。示例并不是总是使用相同的 helper例如tc 和 bcc 都有一套它们自己的 helper使它可以很容易地去用 C 语言写 BPF 程序)
#### 来自内核的示例
内核中包含了大多数类型的程序:过滤器绑定到套接字或者 tc 接口、事件跟踪/监视、甚至是 XDP。你可以在 [linux/samples/bpf/][168] 目录中找到这些示例。
现在,更多的示例已经作为单元测试被添加到 [linux/tools/testing/selftests/bpf][200] 目录下,这里面包含对硬件卸载的测试或者对于 libbpf 的测试。
Jesper 的 Dangaard Brouer 在他的 [prototype-kernel][201] 仓库中也维护了一套专门的示例。 这些示例与那些内核中提供的示例非常类似但是它们可以脱离内核架构Makefile 和头文件)编译。
也不要忘记去看一下 git 相关的提交历史,它们介绍了一些特定的特性,也许包含了一些特性的详细示例。
#### 来自包 iproute2 的示例
iproute2 包也提供了几个示例。它们都很明显地偏向网络编程,因此,这个程序是附着到 tc 入站或者出站接口上。这些示例在 [iproute2/examples/bpf/][169] 目录中。
#### 来自 bcc 工具集的示例
许多示例都是 [与 bcc 一起提供的][170]
* 一些网络的示例放在相关的目录下面。它们包括套接字过滤器、tc 过滤器、和一个 XDP 程序。
* `tracing` 目录包含许多 **跟踪编程** 的示例。前面的教程中提到的都在那里。那些程序涉及了很大部分的事件监视功能,并且,它们中的一些是面向生产系统的。注意,某些 Linux 发行版(至少是 Debian、Ubuntu、Fedora、Arch Linux、这些程序已经被 [打包了][115] 并且可以很 “容易地” 通过比如 `# apt install bcc-tools` 进行安装。但是在写这篇文章的时候(除了 Arch Linux首先要求安装 IO Visor 的包仓库。
* 也有一些 **使用 Lua** 作为一个不同的 BPF 后端(那是因为 BPF 程序是用 Lua 写的,它是 C 语言的一个子集,它允许为前端和后端使用相同的语言)的一些示例,它在第三个目录中。
* 当然,[bcc 工具][202] 自身就是 eBPF 程序使用案例的值得关注示例。
#### 手册页面
虽然 bcc 一般很容易在内核中去注入和运行一个 BPF 程序,将程序附着到 tc 接口也能通过 `tc` 工具自己完成。因此,如果你打算将 **BPF 与 tc 一起使用**,你可以在 [`tc-bpf(8)` 手册页面][171] 中找到一些调用示例。
![](https://qmonnet.github.io/whirl-offload/img/icons/srcfile.svg)
### 代码
有时候,BPF 文档或者示例并不够,而且你只想在你喜欢的文本编辑器(它当然应该是 Vim)中去显示代码并去阅读它。或者,你可能想深入到代码中去做一个补丁程序或者为机器增加一些新特性。因此,这里给出几个有关文件的建议;至于找到你想要的函数,就只能靠你自己了!
#### 在内核中的 BPF 代码
* 文件 [linux/include/linux/bpf.h][116] 及其相对的 [linux/include/uapi/bpf.h][117] 包含有关 eBPF 的 **定义**,它们分别用在内核中和用户空间程序的接口。
* 相同的方式,文件 [linux/include/linux/filter.h][118] 和 [linux/include/uapi/filter.h][119] 包含了用于 **运行 BPF 程序** 的信息。
* BPF 相关的 **主要的代码片断** 在 [linux/kernel/bpf/][120] 目录下面。**系统调用的不同操作许可**,比如,程序加载或者映射管理是在文件 `syscall.c` 中实现,而 `core.c` 包含了 **解析器**。其它文件的命名显而易见:`verifier.c` 包含 **校验器**(不是开玩笑的),`arraymap.c` 的代码用于与数组类型的 **映射** 交互,等等。
* 有几个与网络(及 tc、XDP )相关的函数和 **helpers** 是用户可用,其实现在 [linux/net/core/filter.c][121] 中。它也包含了移植 cBPF 字节码到 eBPF 的代码(因为在运行之前,内核中的所有的 cBPF 程序被转换成 eBPF
* 相关于 **事件跟踪** 的函数和 **helpers** 都在 [linux/kernel/trace/bpf_trace.c][203] 中。
* **JIT 编译器** 在它们各自的架构目录下面比如x86 架构的在 [linux/arch/x86/net/bpf_jit_comp.c][122] 中。例外是用于硬件卸载的 JIT 编译器,它们放在它们的驱动程序下,例如 Netronome NFP 网卡的就放在 [linux/drivers/net/ethernet/netronome/nfp/bpf/jit.c][206] 。
* 在 [linux/net/sched/][123] 目录下,你可以找到 **tc 的 BPF 组件** 相关的代码,尤其是在文件 `act_bpf.c` action `cls_bpf.c`filter中。
* 我并没有在 BPF 上深入到 **事件跟踪** 中,因此,我并不真正了解这些程序的钩子。在 [linux/kernel/trace/bpf_trace.c][124] 那里有一些东西。如果你对它感兴趣,并且想去了解更多,你可以在 Brendan Gregg 的演示或者博客文章上去深入挖掘。
* 我也没有使用过 **seccomp-BPF**,不过你能在 [linux/kernel/seccomp.c][125] 找到它的代码,并且可以在 [linux/tools/testing/selftests/seccomp/seccomp_bpf.c][126] 中找到一些它的使用示例。
#### XDP 钩子代码
一旦装载进内核的 BPF 虚拟机,由一个 Netlink 命令将 **XDP** 程序从用户空间钩入到内核网络路径中。接收它的是在 [linux/net/core/dev.c][172] 文件中的 `dev_change_xdp_fd()` 函数,它被调用并设置一个 XDP 钩子。钩子被放在支持的网卡的驱动程序中。例如,用于 Netronome 硬件钩子的 ntp 驱动程序实现放在 [drivers/net/ethernet/netronome/nfp/][207] 中。文件 `nfp_net_common.c` 接受 Netlink 命令,并调用 `nfp_net_xdp_setup()`,它会转而调用 `nfp_net_xdp_setup_drv()` 实例来安装该程序。
#### 在 bcc 中的 BPF 逻辑
[在 bcc 的 GitHub 仓库][174] 能找到的 **bcc** 工具集的代码。其 **Python 代码**,包含在 `BPF` 类中,最初它在文件 [bcc/src/python/bcc/\_\_init\_\_.py][175] 中。但是许多我觉得有意思的东西,比如,加载 BPF 程序到内核中,出现在 [libbcc 的 C 库][176]中。
#### 使用 tc 去管理 BPF 的代码
当然,这些代码与 iproute2 包中的 **tc 中的** BPF 相关。其中的一些在 [iproute2/tc/][177] 目录中。文件 `f_bpf.c``m_bpf.c`(和 `e_bpf.c`)各自用于处理 BPF 的过滤器和动作的(和 tc `exec` 命令,等等)。文件 `q_clsact.c` 定义了为 BPF 特别创建的 `clsact` qdisc。但是**大多数的 BPF 用户空间逻辑** 是在 [iproute2/lib/bpf.c][178] 库中实现的,因此,如果你想去使用 BPF 和 tc这里可能是会将你搞混乱的地方它是从文件 iproute2/tc/tc_bpf.c 中移动而来的,你也可以在旧版本的包中找到相同的代码)。
#### BPF 实用工具
内核中也带有 BPF 相关的三个工具的源代码(`bpf_asm.c`、 `bpf_dbg.c`、 `bpf_jit_disasm.c`),根据你的版本不同,在 [linux/tools/net/][179] (直到 Linux 4.14)或者 [linux/tools/bpf/][180] 目录下面:
* `bpf_asm` 是一个极小的 cBPF 汇编程序。
* `bpf_dbg` 是一个很小的 cBPF 程序调试器。
* `bpf_jit_disasm` 对于两种 BPF 都是通用的,并且对于 JIT 调试来说非常有用。
* `bpftool` 是由 Jakub Kicinski 写的通用工具,它可以与 eBPF 程序交互并从用户空间的映射例如去展示、转储、pin 程序、或者去展示、创建、pin、更新、删除映射。
阅读在源文件顶部的注释可以得到一个它们使用方法的概述。
与 eBPF 一起工作的其它必需的文件是来自内核树的两个**用户空间库**,它们可以用于管理 eBPF 程序或者映射来自外部的程序。这个函数可以通过 [linux/tools/lib/bpf/][204] 目录中的头文件 `bpf.h``libbpf.h`(更高层面封装)来访问。比如,工具 `bpftool` 主要依赖这些库。
#### 其它值得关注的部分
如果你对关于 BPF 的不常见的语言的使用感兴趣bcc 包含 [一个 BPF 目标的 **P4 编译器**][181]以及 [**一个 Lua 前端**][182],它可以被用以代替 C 的一个子集,并且(用 Lua )替代 Python 工具。
#### LLVM 后端
这个 BPF 后端用于 clang / LLVM 将 C 编译到 eBPF ,是在 [这个提交][183] 中添加到 LLVM 源代码的(也可以在 [这个 GitHub 镜像][184] 上访问)。
#### 在用户空间中运行
到目前为止,我知道那里有至少两种 eBPF 用户空间实现。第一个是 [uBPF][185],它是用 C 写的。它包含一个解析器、一个 x86_64 架构的 JIT 编译器、一个汇编器和一个反汇编器。
uBPF 的代码似乎被重用来产生了一个 [通用实现][186],其声称支持 FreeBSD 内核、FreeBSD 用户空间、Linux 内核、Linux 用户空间和 Mac OSX 用户空间。它被 [VALE 交换机的 BPF 扩展模块][187]使用。
其它用户空间的实现是我做的:[rbpf][188],基于 uBPF但是用 Rust 写的。写了解析器和 JIT 编译器 Linux 下两个都有Mac OSX 和 Windows 下仅有解析器),以后可能会有更多。
#### 提交日志
正如前面所说的,如果你希望得到更多的关于一些特定的 BPF 特性的信息,不要犹豫,去看一些提交日志。你可以在许多地方搜索日志,比如,在 [git.kernel.org][189]、[在 GitHub 上][190]、或者如果你克隆过它还有你的本地仓库中。如果你不熟悉 git你可以尝试像这些去做 `git blame <file>` 去看看介绍特定代码行的提交内容,然后,`git show <commit>` 去看详细情况(或者在 `git log` 的结果中按关键字搜索,但是这样做通常比较单调乏味)也可以看在 bcc 仓库中的 [按内核版本区分的 eBPF 特性列表][191],它链接到相关的提交上。
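上面的流程可以用一个很小的例子来体会。下面在一个临时仓库里演示 `git blame` 与 `git show` 的配合(仓库内容与提交信息均为虚构):

```bash
#!/bin/bash
set -e
# 在临时目录里建一个小仓库,用两个提交模拟一段历史
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "you"

echo "// bpf.h" > bpf.h
git add bpf.h && git commit -qm "initial import"

echo "BPF_MAP_TYPE_ARRAY" >> bpf.h
git commit -qam "bpf: add array map type"

# -L 2,2 只看第 2 行;-l 输出完整提交哈希
hash=$(git blame -l -L 2,2 bpf.h | cut -d' ' -f1)
git show -s --format=%s "$hash"   # 打印引入该行的提交标题
```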
![](https://qmonnet.github.io/whirl-offload/img/icons/wand.svg)
### 排错
对 eBPF 的追捧是最近的事情,因此,到目前为止我还找不到许多关于怎么去排错的资源。所以这里只有几个,是我在使用 BPF 进行工作的时候,对自己遇到的问题进行的记录。
#### 编译时的错误
* 确保你有一个最新的 Linux 内核版本(也可以看 [这个文档][127])。
* 如果你自己编译内核:确保你安装了所有正确的组件,包括内核镜像、头文件和 libc。
* 当使用 `tc-bpf`(用于去编译 C 代码到 BPF 中)的 man 页面提供的 `bcc` shell 函数时:我曾经必须添加包含 clang 调用的头文件:
```
__bcc() {
clang -O2 -I "/usr/src/linux-headers-$(uname -r)/include/" \
-I "/usr/src/linux-headers-$(uname -r)/arch/x86/include/" \
-emit-llvm -c $1 -o - | \
llc -march=bpf -filetype=obj -o "`basename $1 .c`.o"
}
```
(现在似乎修复了)。
* 对于使用 `bcc` 的其它问题,不要忘了去看一看这个工具集的 [答疑][128]。
* 如果你下载的示例来自与你的内核版本不完全匹配的 iproute2 包,文件中包含的头文件可能会触发一些错误。这些示例片段都假设你系统中安装的内核头文件与 iproute2 包版本相同。如果不是这种情况,请下载正确版本的 iproute2,或者修改示例中头文件的包含路径,使其指向 iproute2 自带的头文件(运行时是否会出现问题,取决于你使用的特性)。
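排查这类版本不匹配的问题时,可以先比对内核与 iproute2 的版本(输出因系统而异,仅作示意):

```
uname -r                              # 正在运行的内核版本
tc -V 2>/dev/null || echo "未安装 tc"  # iproute2 自带的 tc 版本
```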
#### 在加载和运行时的错误
* 使用 `tc` 去加载一个程序时,确保你使用的 `tc` 二进制文件来自与正在运行的内核版本相匹配的 iproute2。
* 使用 `bcc` 去加载一个程序,确保在你的系统上安装了 bcc仅下载源代码去运行 Python 脚本是不够的)。
* 使用 `tc` 时,如果 BPF 程序没有返回预期的值,请检查它是以哪种方式被调用的:作为过滤器、作为动作,还是作为使用“直传”(direct-action)模式的过滤器。
* 还是 `tc`,注意:动作不能直接附着到 qdisc 或接口上,必须通过过滤器来使用。
* 内核校验器抛出的错误可能很难理解。[内核文档][129] 或许可以提供帮助,[参考指南][130] 也可以;万不得已的情况下,可以去看源代码(祝你好运!)。记住,校验器 _不运行_ 程序,对于这类错误,记住这一点非常重要。如果你得到一个关于无效内存访问或者未初始化数据的错误,并不意味着那些问题真的发生了(不过有时候确实可能发生)。它意味着你的程序的写法让校验器认为可能会发生这类错误,因此它拒绝了这个程序。
* 注意 `tc` 工具有一个 `verbose` 模式,它与 BPF 一起工作得很好:在你的命令行尾部尝试追加一个 `verbose`。
* `bcc` 也有一个 `verbose` 选项:`BPF` 类有一个 `debug` 参数,它可以带 `DEBUG_LLVM_IR`、`DEBUG_BPF` 和 `DEBUG_PREPROCESSOR` 三个标志中任何组合(详细情况在 [源文件][131]中)。 为调试该代码,它甚至嵌入了 [一些条件去打印输出代码][132]。
* LLVM v4.0+ 为 eBPF 程序 [嵌入了一个反汇编器][133]。因此,如果你用 clang 编译你的程序,在编译时添加 `-g` 标志,之后就可以以人类可读的格式转储你的程序。处理转储文件时,使用:
```
$ llvm-objdump -S -no-show-raw-insn bpf_program.o
```
* 使用映射?你应该去看看 [bpf-map][134],这是一个为 Cilium 项目而用 Go 创建的非常有用的工具,它可以用于去转储内核中 eBPF 映射的内容。也有一个用 Rust 开发的 [克隆][135]。
* [在 **StackOverflow** 上有个旧的 bpf 标签][136],但是,在写这篇文章时它还没怎么被用过(并且那里几乎没有与新版本的 eBPF 相关的东西)。如果你是一位来自未来的读者,你可能想去看看在这方面是否有更多的活动(LCTT 译注:意即只有旧东西)。
![](https://qmonnet.github.io/whirl-offload/img/icons/zoomin.svg)
### 更多!
* 如果你想轻松地 **测试 XDP**,有 [一个配置好的 Vagrant 环境][137] 可以使用。你也可以 [在 Docker 容器中][138] **测试 bcc**。
* 想知道 BPF 的 **开发和活动** 在哪里吗?好吧,内核补丁总是出自 [netdev 上的邮件列表][139](相关 Linux 内核的网络栈开发):以关键字 “BPF” 或者 “XDP” 来搜索。自 2017 年 4 月开始,也有 [一个专门用于 XDP 编程的邮件列表][140](用于讨论架构或寻求帮助)。[IO Visor 的邮件列表上][141] 也有许多的讨论和辩论,因为 BPF 是一个重要的项目。如果你只是想随时了解情况,还有一个 [@IOVisor Twitter 帐户][142]。
请经常回到 [这篇博客][0] 来,看一看 [关于 BPF][192] 有没有新的文章!
_特别感谢 Daniel Borkmann 指引我找到了 [更多的文档][154]因此我才完成了这个合集。_
--------------------------------------------------------------------------------
via: https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/
作者:[Quentin Monnet][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://qmonnet.github.io/whirl-offload/about/
[0]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/
[1]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf
[2]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp
[3]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-other-components-related-or-based-on-ebpf
[4]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf-1
[5]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-tc
[6]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1
[7]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-p4-and-bpf
[8]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-the-kernel
[9]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-package-iproute2
[10]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-bcc-set-of-tools
[11]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#manual-pages
[12]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-code-in-the-kernel
[13]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#xdp-·s-code
[14]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-logic-in-bcc
[15]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#code-to-manage-bpf-with-tc
[16]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-utilities
[17]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#other-interesting-chunks
[18]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#llvm-backend
[19]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#running-in-userspace
[20]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#commit-logs
[21]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-compilation-time
[22]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-load-and-run-time
[23]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#generic-presentations
[24]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#documentation
[25]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#tutorials
[26]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#examples
[27]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#the-code
[28]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#troubleshooting
[29]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#and-still-more
[30]:http://netdevconf.org/1.2/session.html?daniel-borkmann
[31]:http://netdevconf.org/1.2/slides/oct5/07_tcws_daniel_borkmann_2016_tcws.pdf
[32]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop
[33]:http://www.netdevconf.org/1.1/proceedings/slides/borkmann-tc-classifier-cls-bpf.pdf
[34]:http://www.netdevconf.org/1.1/proceedings/papers/On-getting-tc-classifier-fully-programmable-with-cls-bpf.pdf
[35]:https://archive.fosdem.org/2016/schedule/event/ebpf/attachments/slides/1159/export/events/attachments/ebpf/slides/1159/ebpf.pdf
[36]:https://fosdem.org/2017/schedule/event/ebpf_xdp/
[37]:http://people.netfilter.org/hawk/presentations/xdp2016/xdp_intro_and_use_cases_sep2016.pdf
[38]:http://netdevconf.org/1.2/session.html?jesper-performance-workshop
[39]:http://people.netfilter.org/hawk/presentations/OpenSourceDays2017/XDP_DDoS_protecting_osd2017.pdf
[40]:http://people.netfilter.org/hawk/presentations/MM-summit2017/MM-summit2017-JesperBrouer.pdf
[41]:http://netdevconf.org/2.1/session.html?gospodarek
[42]:http://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/
[43]:http://www.slideshare.net/ThomasGraf5/clium-container-networking-with-bpf-xdp
[44]:http://www.slideshare.net/Docker/cilium-bpf-xdp-for-containers-66969823
[45]:https://www.youtube.com/watch?v=TnJF7ht3ZYc&list=PLkA60AVN3hh8oPas3cq2VA9xB7WazcIgs
[46]:http://www.slideshare.net/ThomasGraf5/cilium-fast-ipv6-container-networking-with-bpf-and-xdp
[47]:https://fosdem.org/2017/schedule/event/cilium/
[48]:http://openvswitch.org/support/ovscon2016/7/1120-tu.pdf
[49]:http://openvswitch.org/support/ovscon2016/7/1245-bertrone.pdf
[50]:https://www.spinics.net/lists/xdp-newbies/msg00179.html
[51]:https://www.spinics.net/lists/xdp-newbies/msg00181.html
[52]:https://www.spinics.net/lists/xdp-newbies/msg00185.html
[53]:http://schd.ws/hosted_files/ossna2017/da/BPFandXDP.pdf
[54]:https://speakerdeck.com/tuxology/the-bsd-packet-filter
[55]:http://www.slideshare.net/brendangregg/bpf-tracing-and-more
[56]:http://fr.slideshare.net/brendangregg/linux-bpf-superpowers
[57]:https://www.socallinuxexpo.org/sites/default/files/presentations/Room%20211%20-%20IOVisor%20-%20SCaLE%2014x.pdf
[58]:https://events.linuxfoundation.org/sites/events/files/slides/ebpf_on_the_mainframe_lcon_2015.pdf
[59]:https://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf
[60]:https://events.linuxfoundation.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf
[61]:https://lwn.net/Articles/603983/
[62]:http://www.slideshare.net/vh21/meet-cutebetweenebpfandtracing
[63]:http://www.slideshare.net/vh21/linux-kernel-tracing
[64]:http://www.slideshare.net/ThomasGraf5/linux-networking-explained
[65]:http://www.slideshare.net/ThomasGraf5/linuxcon-2015-linux-kernel-networking-walkthrough
[66]:http://www.tcpdump.org/papers/bpf-usenix93.pdf
[67]:http://www.gsp.com/cgi-bin/man.cgi?topic=bpf
[68]:http://borkmann.ch/talks/2013_devconf.pdf
[69]:http://borkmann.ch/talks/2014_devconf.pdf
[70]:https://blog.cloudflare.com/introducing-the-bpf-tools/
[71]:http://biot.com/capstats/bpf.html
[72]:https://www.iovisor.org/technology/xdp
[73]:https://github.com/iovisor/bpf-docs/raw/master/Express_Data_Path.pdf
[74]:https://events.linuxfoundation.org/sites/events/files/slides/iovisor-lc-bof-2016.pdf
[75]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1
[76]:http://netdevconf.org/1.2/session.html?herbert-xdp-workshop
[77]:https://schd.ws/hosted_files/2016p4workshop/1d/Intel%20Fastabend-P4%20on%20the%20Edge.pdf
[78]:https://ovsorbit.benpfaff.org/#e11
[79]:http://open-nfp.org/media/pdfs/Open_NFP_P4_EBPF_Linux_TC_Offload_FINAL.pdf
[80]:https://opensource.googleblog.com/2016/11/cilium-networking-and-security.html
[81]:https://ovsorbit.benpfaff.org/
[82]:http://blog.ipspace.net/2016/10/fast-linux-packet-forwarding-with.html
[83]:http://netdevconf.org/2.1/session.html?bertin
[84]:http://netdevconf.org/2.1/session.html?zhou
[85]:http://www.slideshare.net/IOVisor/ceth-for-xdp-linux-meetup-santa-clara-july-2016
[86]:http://info.iet.unipi.it/~luigi/vale/
[87]:https://github.com/YutaroHayakawa/vale-bpf
[88]:https://www.stamus-networks.com/2016/09/28/suricata-bypass-feature/
[89]:http://netdevconf.org/1.2/slides/oct6/10_suricata_ebpf.pdf
[90]:https://www.slideshare.net/ennael/kernel-recipes-2017-ebpf-and-xdp-eric-leblond
[91]:https://github.com/iovisor/bpf-docs/blob/master/university/sigcomm-ccr-InKev-2016.pdf
[92]:https://fosdem.org/2017/schedule/event/go_bpf/
[93]:https://wkz.github.io/ply/
[94]:https://www.kernel.org/doc/Documentation/networking/filter.txt
[95]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/Documentation/bpf/bpf_design_QA.txt?id=2e39748a4231a893f057567e9b880ab34ea47aef
[96]:https://github.com/iovisor/bpf-docs/blob/master/eBPF.md
[97]:https://github.com/iovisor/bcc/tree/master/docs
[98]:https://github.com/iovisor/bpf-docs/
[99]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[100]:http://man7.org/linux/man-pages/man2/bpf.2.html
[101]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html
[102]:https://prototype-kernel.readthedocs.io/en/latest/bpf/index.html
[103]:http://docs.cilium.io/en/latest/bpf/
[104]:https://ferrisellis.com/tags/ebpf/
[105]:http://linux-ip.net/articles/Traffic-Control-HOWTO/
[106]:http://lartc.org/lartc.html
[107]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/man/man8
[108]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc?h=v4.13.0
[109]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc/actions?h=v4.13.0
[110]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop
[111]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/commit/bash-completion/tc?id=27d44f3a8a4708bcc99995a4d9b6fe6f81e3e15b
[112]:https://prototype-kernel.readthedocs.io/en/latest/networking/XDP/index.html
[113]:https://marc.info/?l=linux-netdev&m=147436253625672
[114]:http://docs.cilium.io/en/latest/bpf/
[115]:https://github.com/iovisor/bcc/blob/master/INSTALL.md
[116]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/bpf.h
[117]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/bpf.h
[118]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/filter.h
[119]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/filter.h
[120]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/bpf
[121]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/filter.c
[122]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/x86/net/bpf_jit_comp.c
[123]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/sched
[124]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/trace/bpf_trace.c
[125]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/seccomp.c
[126]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/seccomp/seccomp_bpf.c
[127]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[128]:https://github.com/iovisor/bcc/blob/master/FAQ.txt
[129]:https://www.kernel.org/doc/Documentation/networking/filter.txt
[130]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[131]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py
[132]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md#output
[133]:https://www.spinics.net/lists/netdev/msg406926.html
[134]:https://github.com/cilium/bpf-map
[135]:https://github.com/badboy/bpf-map
[136]:https://stackoverflow.com/questions/tagged/bpf
[137]:https://github.com/iovisor/xdp-vagrant
[138]:https://github.com/zlim/bcc-docker
[139]:http://lists.openwall.net/netdev/
[140]:http://vger.kernel.org/vger-lists.html#xdp-newbies
[141]:http://lists.iovisor.org/pipermail/iovisor-dev/
[142]:https://twitter.com/IOVisor
[143]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#what-is-bpf
[144]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#dive-into-the-bytecode
[145]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#resources
[146]:https://github.com/qmonnet/whirl-offload/commits/gh-pages/_posts/2016-09-01-dive-into-bpf.md
[147]:http://netdevconf.org/1.2/session.html?jakub-kicinski
[148]:http://www.slideshare.net/IOVisor/express-data-path-linux-meetup-santa-clara-july-2016
[149]:https://cdn.shopify.com/s/files/1/0177/9886/files/phv2017-gbertin.pdf
[150]:https://github.com/cilium/cilium
[151]:https://fosdem.org/2017/schedule/event/stateful_ebpf/
[152]:http://vger.kernel.org/vger-lists.html#xdp-newbies
[153]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[154]:https://github.com/qmonnet/whirl-offload/commit/d694f8081ba00e686e34f86d5ee76abeb4d0e429
[155]:http://openvswitch.org/pipermail/dev/2014-October/047421.html
[156]:https://qmonnet.github.io/whirl-offload/2016/07/15/beba-research-project/
[157]:https://www.iovisor.org/resources/blog
[158]:http://www.brendangregg.com/blog/2016-03-05/linux-bpf-superpowers.html
[159]:http://p4.org/
[160]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4
[161]:https://github.com/p4lang/p4c/blob/master/backends/ebpf/README.md
[162]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[163]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md
[164]:https://github.com/goldshtn/linux-tracing-workshop
[165]:https://blog.yadutaf.fr/2017/07/28/tracing-a-packet-journey-using-linux-tracepoints-perf-ebpf/
[166]:https://open-nfp.org/dataplanes-ebpf/technical-papers/
[167]:http://netdevconf.org/2.1/session.html?gospodarek
[168]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/samples/bpf
[169]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/examples/bpf
[170]:https://github.com/iovisor/bcc/tree/master/examples
[171]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html
[172]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/dev.c
[173]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/mellanox/mlx4/
[174]:https://github.com/iovisor/bcc/
[175]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py
[176]:https://github.com/iovisor/bcc/blob/master/src/cc/libbpf.c
[177]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/tc
[178]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/lib/bpf.c
[179]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/net
[180]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/tools/bpf
[181]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4/compiler
[182]:https://github.com/iovisor/bcc/tree/master/src/lua
[183]:https://reviews.llvm.org/D6494
[184]:https://github.com/llvm-mirror/llvm/commit/4fe85c75482f9d11c5a1f92a1863ce30afad8d0d
[185]:https://github.com/iovisor/ubpf/
[186]:https://github.com/YutaroHayakawa/generic-ebpf
[187]:https://github.com/YutaroHayakawa/vale-bpf
[188]:https://github.com/qmonnet/rbpf
[189]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
[190]:https://github.com/torvalds/linux
[191]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[192]:https://qmonnet.github.io/whirl-offload/categories/#BPF
[193]:https://lwn.net/Articles/740157/
[194]:https://www.netdevconf.org/2.2/session.html?viljoen-xdpoffload-talk
[195]:https://fosdem.org/2018/schedule/event/xdp/
[196]:http://blog.kubernetes.io/2017/12/using-ebpf-in-kubernetes.html
[197]:http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html?highlight=XDP#ebpf-and-xdp
[198]:https://github.com/pevma/SEPTun-Mark-II
[199]:https://www.stamus-networks.com/2016/09/28/suricata-bypass-feature/
[200]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/tools/testing/selftests/bpf
[201]:https://github.com/netoptimizer/prototype-kernel/tree/master/kernel/samples/bpf
[202]:https://github.com/iovisor/bcc/tree/master/tools
[203]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/trace/bpf_trace.c
[204]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/tools/lib/bpf
[205]:https://www.netdevconf.org/2.2/session.html?gospodarek-xdp-workshop
[206]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/netronome/nfp/bpf/jit.c
[207]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/netronome/nfp/

Meet OpenAuto, an Android Auto emulator for Raspberry Pi
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb_computer_person_general_.png?itok=BRGJXU7e)
In 2015, Google introduced [Android Auto][1], a system that allows users to project certain apps from their Android smartphones onto a car's infotainment display. Android Auto's driver-friendly interface, with larger touchscreen buttons and voice commands, aims to make it easier and safer for drivers to control navigation, music, podcasts, radio, phone calls, and more while keeping their eyes on the road. Android Auto can also run as an app on an Android smartphone, enabling owners of older-model vehicles without modern head unit displays to take advantage of these features.
While there are many [apps][2] available for Android Auto, developers are working to add to its catalog. A new, open source tool named [OpenAuto][3] is hoping to make that easier by giving developers a way to emulate Android Auto on a Raspberry Pi. With OpenAuto, developers can test their applications in conditions similar to how they'll work on an actual car head unit.
OpenAuto's creator, Michal Szwaj, answered some questions about his project for Opensource.com. Some responses have been edited for conciseness and clarity.
### What is OpenAuto?
In a nutshell, OpenAuto is an emulator for the Android Auto head unit. It emulates the head unit software and allows you to use Android Auto on your PC or on any other embedded platform like Raspberry Pi 3.
Head unit software is a frontend for the Android Auto projection. All magic related to the Android Auto, like navigation, Google Voice Assistant, or music playback, is done on the Android device. Projection of Android Auto on the head unit is accomplished using the [H.264][4] codec for video and [PCM][5] codec for audio streaming. This is what the head unit software mostly does—it decodes the H.264 video stream and PCM audio streams and plays them back together. Another function of the head unit is providing user inputs. OpenAuto supports both touch events and hard keys.
### What platforms does OpenAuto run on?
My target platform for deployment of the OpenAuto is Raspberry Pi 3 computer. For successful deployment, I needed to implement support of video hardware acceleration using the Raspberry Pi 3 GPU (VideoCore 4). Thanks to this, Android Auto projection on the Raspberry Pi 3 computer can be handled even using 1080p@60 fps resolution. I used [OpenMAX IL][6] and IL client libraries delivered together with the Raspberry Pi firmware to implement video hardware acceleration.
Taking advantage of the fact that the Raspberry Pi operating system is Raspbian based on Debian Linux, OpenAuto can be also built for any other Linux-based platform that provides support for hardware video decoding. Most of the Linux-based platforms provide support for hardware video decoding directly in GStreamer. Thanks to highly portable libraries like Boost and [Qt][7], OpenAuto can be built and run on the Windows platform. Support of MacOS is being implemented by the community and should be available soon.
[Video demo of OpenAuto](https://www.youtube.com/embed/k9tKRqIkQs8)
### What software libraries does the project use?
The core of the OpenAuto is the [aasdk][8] library, which provides support for all Android Auto features. aasdk library is built on top of the Boost, libusb, and OpenSSL libraries. [libusb][9] implements communication between the head unit and an Android device (via USB bus). [Boost][10] provides support for the asynchronous mechanisms for communication. It is required for high efficiency and scalability of the head unit software. [OpenSSL][11] is used for encrypting communication.
The aasdk library is designed to be fully reusable for any purposes related to implementation of the head unit software. You can use it to build your own head unit software for your desired platform.
Another very important library used in OpenAuto is Qt. It provides support for OpenAuto's multimedia, user input, and graphical interface. OpenAuto uses [CMake][12] as its build system.
Note: The Android Auto protocol is taken from another great Android Auto head unit project called [HeadUnit][13]. The people working on this project did an amazing job in reverse engineering the AndroidAuto protocol and creating the protocol buffers that structurize all messages.
### What equipment do you need to run OpenAuto on Raspberry Pi?
In addition to a Raspberry Pi 3 computer and an Android device, you need:
* **USB sound card:** The Raspberry Pi 3 doesn't have a microphone input, which is required to use Google Voice Assistant
* **Video output device:** You can use either a touchscreen or any other video output device connected to HDMI or composite output (RCA)
* **Input device:** For example, a touchscreen or a USB keyboard
### What else do you need to get started?
In order to use OpenAuto, you must build it first. On OpenAuto's wiki page you can find [detailed instructions][14] for how to build it for the Raspberry Pi 3 platform. On other Linux-based platforms, the build process will look very similar.
On the wiki page you can also find other useful instructions, such as how to configure the Bluetooth Hands-Free Profile (HFP) and Advanced Audio Distribution Profile (A2DP) and PulseAudio.
### What else should we know about OpenAuto?
OpenAuto allows anyone to create a head unit based on the Raspberry Pi 3 hardware. Nevertheless, you should always be careful about safety and keep in mind that OpenAuto is just an emulator. It was not certified by any authority and was not tested in a driving environment, so using it in a car is not recommended.
OpenAuto is licensed under GPLv3. For more information, visit the [project's GitHub page][3], where you can find its source code and other information.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/openauto-emulator-Raspberry-Pi
作者:[Michal Szwaj][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/michalszwaj
[1]:https://www.android.com/auto/faq/
[2]:https://play.google.com/store/apps/collection/promotion_3001303_android_auto_all
[3]:https://github.com/f1xpl/openauto
[4]:https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC
[5]:https://en.wikipedia.org/wiki/Pulse-code_modulation
[6]:https://www.khronos.org/openmaxil
[7]:https://www.qt.io/
[8]:https://github.com/f1xpl/aasdk
[9]:http://libusb.info/
[10]:http://www.boost.org/
[11]:https://www.openssl.org/
[12]:https://cmake.org/
[13]:https://github.com/gartnera/headunit
[14]:https://github.com/f1xpl/

Understanding Linux filesystems: ext4 and beyond
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux11x_cc.png?itok=XMDOouJR)
The majority of modern Linux distributions default to the ext4 filesystem, just as previous Linux distributions defaulted to ext3, ext2, and—if you go back far enough—ext.
If you're new to Linux—or to filesystems—you might wonder what ext4 brings to the table that ext3 didn't. You might also wonder whether ext4 is still in active development at all, given the flurries of news coverage of alternate filesystems such as btrfs, xfs, and zfs.
We can't cover everything about filesystems in a single article, but we'll try to bring you up to speed on the history of Linux's default filesystem, where it stands, and what to look forward to.
I drew heavily on Wikipedia's various ext filesystem articles, kernel.org's wiki entries on ext4, and my own experiences while preparing this overview.
### A brief history of ext
#### MINIX filesystem
Before there was ext, there was the MINIX filesystem. If you're not up on your Linux history, MINIX was a very small Unix-like operating system for IBM PC/AT microcomputers. Andrew Tanenbaum developed it for teaching purposes and released its source code (in print form!) in 1987.
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/ibm_pc_at.jpg?itok=Tfk3hQYB)
Although you could peruse MINIX's source, it was not actually free and open source software (FOSS). The publishers of Tanenbaum's book required a $69 license fee to operate MINIX, which was included in the cost of the book. Still, this was incredibly inexpensive for the time, and MINIX adoption took off rapidly, soon exceeding Tanenbaum's original intent of using it simply to teach the coding of operating systems. By and throughout the 1990s, you could find MINIX installations thriving in universities worldwide—and a young Linus Torvalds used MINIX to develop the original Linux kernel, first announced in 1991, and released under the GPL in December 1992.
But wait, this is a filesystem article, right? Yes, and MINIX had its own filesystem, which early versions of Linux also relied on. Like MINIX, it could uncharitably be described as a "toy" example of its kind—the MINIX filesystem could handle filenames only up to 14 characters and address only 64MB of storage. In 1991, the typical hard drive was already 40-140MB in size. Linux clearly needed a better filesystem!
#### ext
While Linus hacked away on the fledgling Linux kernel, Rémy Card worked on the first ext filesystem. First implemented in 1992—only a year after the initial announcement of Linux itself!—ext solved the worst of the MINIX filesystem's problems.
1992's ext used the new virtual filesystem (VFS) abstraction layer in the Linux kernel. Unlike the MINIX filesystem before it, ext could address up to 2GB of storage and handle 255-character filenames.
But ext didn't have a long reign, largely due to its primitive timestamping (only one timestamp per file, rather than the three separate stamps for inode creation, file access, and file modification we're familiar with today). A mere year later, ext2 ate its lunch.
#### ext2
Rémy clearly realized ext's limitations pretty quickly, since he designed ext2 as its replacement a year later. While ext still had its roots in "toy" operating systems, ext2 was designed from the start as a commercial-grade filesystem, along the same principles as BSD's Berkeley Fast File System.
Ext2 offered maximum filesizes in the gigabytes and filesystem sizes in the terabytes, placing it firmly in the big leagues for the 1990s. It was quickly and widely adopted, both in the Linux kernel and eventually in MINIX, as well as by third-party modules making it available for MacOS and Windows.
There were still problems to solve, though: ext2 filesystems, like most filesystems of the 1990s, were prone to catastrophic corruption if the system crashed or lost power while data was being written to disk. They also suffered from significant performance losses due to fragmentation (the storage of a single file in multiple places, physically scattered around a rotating disk) as time went on.
Despite these problems, ext2 is still used in some isolated cases today—most commonly, as a format for portable USB thumb drives.
#### ext3
In 1998, six years after ext2's adoption, Stephen Tweedie announced he was working on significantly improving it. This became ext3, which was adopted into mainline Linux with kernel version 2.4.15, in November 2001.
![Packard Bell computer][2]
Mid-1990s Packard Bell computer, [Spacekid][3], [CC0][4]
Ext2 had done very well by Linux distributions for the most part, but—like FAT, FAT32, HFS, and other filesystems of the time—it was prone to catastrophic corruption during power loss. If you lose power while writing data to the filesystem, it can be left in what's called an inconsistent state—one in which things have been left half-done and half-undone. This can result in loss or corruption of vast swaths of files unrelated to the one being saved or even unmountability of the entire filesystem.
Ext3, like other filesystems of the late 1990s such as Microsoft's NTFS, uses journaling to solve this problem. The journal is a special allocation on disk where writes are stored in transactions; if the transaction finishes writing to disk, its data in the journal is committed to the filesystem itself. If the system crashes before that operation is committed, the newly rebooted system recognizes it as an incomplete transaction and rolls it back as though it had never taken place. This means that the file being worked on may still be lost, but the filesystem itself remains consistent, and all other data is safe. Three levels of journaling are available in the Linux kernel implementation of ext3: **journal**, **ordered**, and **writeback**.
* **Journal** is the lowest risk mode, writing both data and metadata to the journal before committing it to the filesystem. This ensures consistency of the file being written to, as well as the filesystem as a whole, but can significantly decrease performance.
* **Ordered** is the default mode in most Linux distributions; ordered mode writes metadata to the journal but commits data directly to the filesystem. As the name implies, the order of operations here is rigid: First, metadata is committed to the journal; second, data is written to the filesystem, and only then is the associated metadata in the journal flushed to the filesystem itself. This ensures that, in the event of a crash, the metadata associated with incomplete writes is still in the journal, and the filesystem can sanitize those incomplete writes while rolling back the journal. In ordered mode, a crash may result in corruption of the file or files being actively written to during the crash, but the filesystem itself—and files not actively being written to—are guaranteed safe.
* **Writeback** is the third—and least safe—journaling mode. In writeback mode, like ordered mode, metadata is journaled, but data is not. Unlike ordered mode, metadata and data alike may be written in whatever order makes sense for best performance. This can offer significant increases in performance, but it's much less safe. Although writeback mode still offers a guarantee of safety to the filesystem itself, files that were written to during or before the crash are vulnerable to loss or corruption.
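In the Linux kernel, the journaling mode is chosen at mount time via the `data=` mount option, either on the command line or in `/etc/fstab` (a hypothetical entry; the device and mount point are placeholders):

```
# /etc/fstab — pick one data= mode per ext3/ext4 volume
/dev/sdb1  /data  ext3  defaults,data=journal    0 2   # safest, slowest
#/dev/sdb1 /data  ext3  defaults,data=ordered    0 2   # the default mode
#/dev/sdb1 /data  ext3  defaults,data=writeback  0 2   # fastest, least safe
```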
Like ext2 before it, ext3 uses 32-bit internal addressing. This means that with a blocksize of 4K, the largest filesize it can handle is 2 TiB in a maximum filesystem size of 16 TiB.
#### ext4
Theodore Ts'o (who by then was ext3's principal developer) announced ext4 in 2006, and it was added to mainline Linux two years later, in kernel version 2.6.28. Ts'o describes ext4 as a stopgap technology which significantly extends ext3 but is still reliant on old technology. He expects it to be supplanted eventually by a true next-generation filesystem.
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/dell_precision_380_workstation.jpeg?itok=3EjYXY2i)
Ext4 is functionally very similar to ext3, but brings large filesystem support, improved resistance to fragmentation, higher performance, and improved timestamps.
### Ext4 vs ext3
Ext3 and ext4 have some very specific differences, which I'll focus on here.
#### Backwards compatibility
Ext4 was specifically designed to be as backward-compatible as possible with ext3. This not only allows ext3 filesystems to be upgraded in place to ext4; it also permits the ext4 driver to automatically mount ext3 filesystems in ext3 mode, making it unnecessary to maintain the two codebases separately.
#### Large filesystems
Ext3 filesystems used 32-bit addressing, limiting them to 2 TiB files and 16 TiB filesystems (assuming a 4 KiB blocksize; some ext3 filesystems use smaller blocksizes and are thus limited even further).
Ext4 uses 48-bit internal addressing, making it theoretically possible to allocate files up to 16 TiB on filesystems up to 1,000,000 TiB (1 EiB). Early implementations of ext4 were still limited to 16 TiB filesystems by some userland utilities, but as of 2011, e2fsprogs has directly supported the creation of >16TiB ext4 filesystems. As one example, Red Hat Enterprise Linux contractually supports ext4 filesystems only up to 50 TiB and recommends ext4 volumes no larger than 100 TiB.
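These limits follow directly from the address width and block size; a quick back-of-the-envelope check (using the common 4 KiB block size) confirms the numbers above:

```python
BLOCK = 4096  # 4 KiB block size, in bytes

# ext4: 48-bit block addresses
ext4_max = 2**48 * BLOCK
print(ext4_max == 2**60)      # True: exactly 1 EiB
print(ext4_max // 2**40)      # 1048576 TiB, i.e. ~1,000,000 TiB

# ext3: 32-bit block addresses
ext3_max = 2**32 * BLOCK
print(ext3_max // 2**40)      # 16 TiB
```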
#### Allocation improvements
Ext4 introduces a lot of improvements in the ways storage blocks are allocated before writing them to disk, which can significantly increase both read and write performance.
##### Extents
An extent is a range of contiguous physical blocks (up to 128 MiB, assuming a 4 KiB block size) that can be reserved and addressed at once. Using extents decreases the amount of block-mapping metadata needed for a given file, significantly decreases fragmentation, and increases performance when writing large files.
##### Multiblock allocation
Ext3 called its block allocator once for each new block allocated. This could easily result in heavy fragmentation when multiple writers are open concurrently. However, ext4 uses delayed allocation, which allows it to coalesce writes and make better decisions about how to allocate blocks for the writes it has not yet committed.
##### Persistent pre-allocation
When pre-allocating disk space for a file, most file systems must write zeroes to the blocks for that file on creation. Ext4 allows the use of `fallocate()` instead, which guarantees the availability of the space (and attempts to find contiguous space for it) without first needing to write to it. This significantly increases performance in both writes and future reads of the written data for streaming and database applications.
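As an illustration, here is a minimal sketch of persistent preallocation from Python via `os.posix_fallocate()`, which wraps the same underlying mechanism. The file name and size are arbitrary examples; on platforms without `posix_fallocate`, the sketch falls back to a plain truncate, which merely creates a sparse file and does not actually reserve blocks.

```python
import os

path = "preallocated.bin"
size = 1024 * 1024  # reserve 1 MiB

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
try:
    if hasattr(os, "posix_fallocate"):
        os.posix_fallocate(fd, 0, size)  # reserve blocks without writing zeroes
    else:
        os.ftruncate(fd, size)           # sparse fallback: space NOT reserved
finally:
    os.close(fd)

print(os.path.getsize(path))  # 1048576
os.remove(path)
```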
##### Delayed allocation
This is a chewy—and contentious—feature. Delayed allocation allows ext4 to wait to allocate the actual blocks it will write data to until it's ready to commit that data to disk. (By contrast, ext3 would allocate blocks immediately, even while the data was still flowing into a write cache.)
Delaying allocation of blocks as data accumulates in cache allows the filesystem to make saner choices about how to allocate those blocks, reducing fragmentation (write and, later, read) and increasing performance significantly. Unfortunately, it increases the potential for data loss in programs that have not been specifically written to call `fsync()` when the programmer wants to ensure data has been flushed entirely to disk.
Let's say a program rewrites a file entirely:
`fd=open("file", O_TRUNC); write(fd, data); close(fd);`
With legacy filesystems, `close(fd);` is sufficient to guarantee that the contents of `file` will be flushed to disk. Even though the write is not, strictly speaking, transactional, there's very little risk of losing the data if a crash occurs after the file is closed.
If the write does not succeed (due to errors in the program, errors on the disk, power loss, etc.), both the original version and the newer version of the file may be lost or corrupted. If other processes access the file as it is being written, they will see a corrupted version. And if other processes have the file open and do not expect its contents to change—e.g., a shared library mapped into multiple running programs—they may crash.
To avoid these issues, some programmers avoid using `O_TRUNC` at all. Instead, they might write to a new file, close it, then rename it over the old one:
`fd=open("newfile"); write(fd, data); close(fd); rename("newfile", "file");`
Under filesystems without delayed allocation, this is sufficient to avoid the potential corruption and crash problems outlined above: Since `rename()` is an atomic operation, it won't be interrupted by a crash; and running programs will continue to reference the old, now unlinked version of `file` for as long as they have an open filehandle to it. But because ext4's delayed allocation can cause writes to be delayed and re-ordered, the `rename("newfile","file")` may be carried out before the contents of `newfile` are actually written to disk, which opens the problem of parallel processes getting bad versions of `file` all over again.
To mitigate this, the Linux kernel (since version 2.6.30) attempts to detect these common code cases and force the files in question to be allocated immediately. This reduces, but does not prevent, the potential for data loss—and it doesn't help at all with new files. If you're a developer, please take note: The only way to guarantee data is written to disk immediately is to call `fsync()` appropriately.
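To illustrate that advice, here is one sketch (in Python rather than C, and certainly not the only correct pattern) of the write-new-then-rename idiom with an explicit `fsync()` before the rename:

```python
import os

def atomic_write(path, data):
    """Write data to path via a temp file, fsync, then atomic rename."""
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)       # force the file contents to disk *before* the rename
    finally:
        os.close(fd)
    os.replace(tmp, path)  # atomic: readers see either the old or the new file

atomic_write("config.txt", b"key=value\n")
with open("config.txt", "rb") as f:
    print(f.read())        # b'key=value\n'
os.remove("config.txt")
```

A further refinement, omitted here for brevity, is to fsync the containing directory after the rename so the new directory entry itself is durable.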
#### Unlimited subdirectories
Ext3 was limited to a total of 32,000 subdirectories; ext4 allows an unlimited number. Beginning with kernel 2.6.23, ext4 uses HTree indices to mitigate performance loss with huge numbers of subdirectories.
#### Journal checksumming
Ext3 did not checksum its journals, which presented problems for disk or controller devices with caches of their own, outside the kernel's direct control. If a controller or a disk with its own cache did writes out of order, it could break ext3's journaling transaction order, potentially corrupting files being written to during (or for some time preceding) a crash.
In theory, this problem is resolved by the use of write barriers—when mounting the filesystem, you set `barrier=1` in the mount options, and the device will then honor `fsync()` calls all the way down to the metal. In practice, it's been discovered that storage devices and controllers frequently do not honor write barriers—improving performance (and benchmarks, where they're compared to their competitors) but opening up the possibility of data corruption that should have been prevented.
Checksumming the journal allows the filesystem to realize that some of its entries are invalid or out-of-order on the first mount after a crash. This thereby avoids the mistake of rolling back partial or out-of-order journal entries and further damaging the filesystem—even if the storage devices lie and don't honor barriers.
#### Fast filesystem checks
Under ext3, the entire filesystem—including deleted and empty files—required checking when `fsck` was invoked. By contrast, ext4 marks unallocated blocks and sections of the inode table as such, allowing `fsck` to skip them entirely. This greatly reduces the time to run `fsck` on most filesystems and has been implemented since kernel 2.6.24.
#### Improved timestamps
Ext3 offered timestamps granular to one second. While sufficient for most uses, mission-critical applications are frequently looking for much, much tighter time control. Ext4 makes itself available to those enterprise, scientific, and mission-critical applications by offering timestamps in the nanoseconds.
Ext3 filesystems also did not provide sufficient bits to store dates beyond January 18, 2038. Ext4 adds an additional two bits here, extending [the Unix epoch][5] another 408 years. If you're reading this in 2446 AD, you have hopefully already moved onto a better filesystem—but it'll make me posthumously very, very happy if you're still measuring the time since UTC 00:00, January 1, 1970.
#### Online defragmentation
Neither ext2 nor ext3 directly supported online defragmentation—that is, defragging the filesystem while mounted. Ext2 had an included utility, **e2defrag**, that did what the name implies—but it needed to be run offline while the filesystem was not mounted. (This is, obviously, especially problematic for a root filesystem.) The situation was even worse in ext3—although ext3 was much less likely to suffer from severe fragmentation than ext2 was, running **e2defrag** against an ext3 filesystem could result in catastrophic corruption and data loss.
Although ext3 was originally deemed "unaffected by fragmentation," processes that employ massively parallel write processes to the same file (e.g., BitTorrent) made it clear that this wasn't entirely the case. Several userspace hacks and workarounds, such as [Shake][6], addressed this in one way or another—but they were slower and in various ways less satisfactory than a true, filesystem-aware, kernel-level defrag process.
Ext4 addresses this problem head on with **e4defrag**, an online, kernel-mode, filesystem-aware, block-and-extent-level defragmentation utility.
### Ongoing ext4 development
Ext4 is, as the Monty Python plague victim once said, "not quite dead yet!" Although [its principal developer regards it][7] as a mere stopgap along the way to a truly [next-generation filesystem][8], none of the likely candidates will be ready (due to either technical or licensing problems) for deployment as a root filesystem for some time yet.
There are still a few key features being developed into future versions of ext4, including metadata checksumming, first-class quota support, and large allocation blocks.
#### Metadata checksumming
Since ext4 has redundant superblocks, checksumming the metadata within them offers the filesystem a way to figure out for itself whether the primary superblock is corrupt and needs to use an alternate. It is possible to recover from a corrupt superblock without checksumming—but the user would first need to realize that it was corrupt, and then try manually mounting the filesystem using an alternate. Since mounting a filesystem read-write with a corrupt primary superblock can, in some cases, cause further damage, this isn't a sufficient solution, even with a sufficiently experienced user!
Compared to the extremely robust per-block checksumming offered by next-gen filesystems such as btrfs or zfs, ext4's metadata checksumming is a pretty weak feature. But it's much better than nothing.
Although it sounds like a no-brainer—yes, checksum ALL THE THINGS!—there are some significant challenges to bolting checksums into a filesystem after the fact; see [the design document][9] for the gritty details.
#### First-class quota support
Wait, quotas?! We've had those since the ext2 days! Yes, but they've always been an afterthought, and they've always kinda sucked. It's probably not worth going into the hairy details here, but the [design document][10] lays out the ways quotas will be moved from userspace into the kernel and more correctly and performantly enforced.
#### Large allocation blocks
As time goes by, those pesky storage systems keep getting bigger and bigger. With some solid-state drives already using 8K hardware blocksizes, ext4's current limitation to 4K blocks gets more and more limiting. Larger storage blocks can decrease fragmentation and increase performance significantly, at the cost of increased "slack" space (the space left over when you only need part of a block to store a file or the last piece of a file).
You can view the hairy details in the [design document][11].
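The "slack" trade-off mentioned above is easy to quantify. This small sketch (with arbitrary example sizes) shows how the wasted tail of a file grows with the block size:

```python
def slack(file_size, block_size):
    """Bytes wasted in the last, partially filled block of a file."""
    remainder = file_size % block_size
    return 0 if remainder == 0 else block_size - remainder

# The same 10,000-byte file wastes far more space with larger blocks:
print(slack(10_000, 4 * 1024))    # 2288 bytes of slack with 4 KiB blocks
print(slack(10_000, 64 * 1024))   # 55536 bytes of slack with 64 KiB blocks
```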
### Practical limitations of ext4
Ext4 is a robust, stable filesystem, and it's what most people should probably be using as a root filesystem in 2018. But it can't handle everything. Let's talk briefly about some of the things you shouldn't expect from ext4—now or probably in the future.
Although ext4 can address up to 1 EiB—equivalent to 1,000,000 TiB—of data, you really, really shouldn't try to do so. There are problems of scale above and beyond merely being able to remember the addresses of a lot more blocks, and ext4 does not now (and likely will not ever) scale very well beyond 50-100 TiB of data.
Ext4 also doesn't do enough to guarantee the integrity of your data. As big an advancement as journaling was back in the ext3 days, it does not cover a lot of the common causes of data corruption. If data is [corrupted][12] while already on disk—by faulty hardware, impact of cosmic rays (yes, really), or simple degradation of data over time—ext4 has no way of either detecting or repairing such corruption.
Building on the last two items, ext4 is only a pure filesystem, and not a storage volume manager. This means that even if you've got multiple disks—and therefore parity or redundancy, which you could theoretically recover corrupt data from—ext4 has no way of knowing that or using it to your benefit. While it's theoretically possible to separate a filesystem and storage volume management system in discrete layers without losing automatic corruption detection and repair features, that isn't how current storage systems are designed, and it would present significant challenges to new designs.
### Alternate filesystems
Before we get started, a word of warning: Be very careful with any alternate filesystem which isn't built into and directly supported as a part of your distribution's mainline kernel!
Even if a filesystem is safe, using it as the root filesystem can be absolutely terrifying if something hiccups during a kernel upgrade. If you aren't extremely comfortable with the idea of booting from alternate media and poking manually and patiently at kernel modules, grub configs, and DKMS from a chroot... don't go off the reservation with the root filesystem on a system that matters to you.
There may well be good reasons to use a filesystem your distro doesn't directly support—but if you do, I strongly recommend you mount it after the system is up and usable. (For example, you might have an ext4 root filesystem, but store most of your data on a zfs or btrfs pool.)
#### XFS
XFS is about as mainline as a non-ext filesystem gets under Linux. It's a 64-bit, journaling filesystem that has been built into the Linux kernel since 2001 and offers high performance for large filesystems and high degrees of concurrency (i.e., a really large number of processes all writing to the filesystem at once).
XFS became the default filesystem for Red Hat Enterprise Linux, as of RHEL 7. It still has a few disadvantages for home or small business users—most notably, it's a real pain to resize an existing XFS filesystem, to the point it usually makes more sense to create another one and copy your data over.
While XFS is stable and performant, there's not enough of a concrete end-use difference between it and ext4 to recommend its use anywhere that it isn't the default (e.g., RHEL7) unless it addresses a specific problem you're having with ext4, such as >50 TiB capacity filesystems.
XFS is not in any way a "next-generation" filesystem in the ways that ZFS, btrfs, or even WAFL (a proprietary SAN filesystem) are. Like ext4, it should most likely be considered a stopgap along the way towards [something better][8].
#### ZFS
ZFS was developed by Sun Microsystems and named after the zettabyte—equivalent to 1 trillion gigabytes—as it could theoretically address storage systems that large.
A true next-generation filesystem, ZFS offers volume management (the ability to address multiple individual storage devices in a single filesystem), block-level cryptographic checksumming (allowing detection of data corruption with an extremely high accuracy rate), [automatic corruption repair][12] (where redundant or parity storage is available), rapid [asynchronous incremental replication][13], inline compression, and more. [A lot more][14].
The biggest problem with ZFS, from a Linux user's perspective, is the licensing. ZFS was licensed CDDL, which is a semi-permissive license that conflicts with the GPL. There is a lot of controversy over the implications of using ZFS with the Linux kernel, with opinions ranging from "it's a GPL violation" to "it's a CDDL violation" to "it's perfectly fine, it just hasn't been tested in court." Most notably, Canonical has included ZFS code inline in its default kernels since 2016 without legal challenge so far.
At this time, even as a very avid ZFS user myself, I would not recommend ZFS as a root Linux filesystem. If you want to leverage the benefits of ZFS on Linux, set up a small root filesystem on ext4, then put ZFS on your remaining storage, and put data, applications, whatever you like on it—but keep root on ext4, until your distribution explicitly supports a zfs root.
#### btrfs
Btrfs—short for B-Tree Filesystem, and usually pronounced "butter"—was announced by Chris Mason in 2007 during his tenure at Oracle. Btrfs aims at most of the same goals as ZFS, offering multiple device management, per-block checksumming, asynchronous replication, inline compression, and [more][8].
As of 2018, btrfs is reasonably stable and usable as a standard single-disk filesystem but should probably not be relied on as a volume manager. It suffers from significant performance problems compared to ext4, XFS, or ZFS in many common use cases, and its next-generation features—replication, multiple-disk topologies, and snapshot management—can be pretty buggy, with results ranging from catastrophically reduced performance to actual data loss.
The ongoing status of btrfs is controversial; SUSE Enterprise Linux adopted it as its default filesystem in 2015, whereas Red Hat announced it would no longer support btrfs beginning with RHEL 7.4 in 2017. It is probably worth noting that production, supported deployments of btrfs use it as a single-disk filesystem, not as a multiple-disk volume manager a la ZFS—even Synology, which uses btrfs on its storage appliances, layers it atop conventional Linux kernel RAID (mdraid) to manage the disks.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/ext4-filesystem
作者:[Jim Salter][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jim-salter
[1]:https://opensource.com/file/391546
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/packard_bell_pc.jpg?itok=VI8dzcwp (Packard Bell computer)
[3]:https://commons.wikimedia.org/wiki/File:Old_packard_bell_pc.jpg
[4]:https://creativecommons.org/publicdomain/zero/1.0/deed.en
[5]:https://en.wikipedia.org/wiki/Unix_time
[6]:https://vleu.net/shake/
[7]:http://www.linux-mag.com/id/7272/
[8]:https://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/
[9]:https://ext4.wiki.kernel.org/index.php/Ext4_Metadata_Checksums
[10]:https://ext4.wiki.kernel.org/index.php/Design_For_1st_Class_Quota_in_Ext4
[11]:https://ext4.wiki.kernel.org/index.php/Design_for_Large_Allocation_Blocks
[12]:https://en.wikipedia.org/wiki/Data_degradation#Visual_example_of_data_degradation
[13]:https://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/
[14]:https://arstechnica.com/information-technology/2014/02/ars-walkthrough-using-the-zfs-next-gen-filesystem-on-linux/

ch-cn translating
5 SSH alias examples in Linux
======
[![][1]][1]
As a Linux user, we use the [ssh command][2] to log in to remote machines. The more you use the ssh command, the more time you waste typing long, repetitive commands. We can use either an [alias defined in your .bashrc file][3] or shell functions to minimize the time spent on the CLI, but there is a better solution: **SSH aliases** in the ssh config file.
Here are a couple of examples where SSH aliases improve the commands we use.

Connecting over ssh to an AWS instance is a pain: typing out the full user, hostname, and key options every time is a complete waste of your time. An alias can shorten all of that to:
```
ssh aws1
```
The same goes for connecting to a system while debugging; an alias can reduce the command to:
```
ssh xyz
```
In this post, we will see how to shorten your ssh commands without using bash aliases or functions. The main advantage of SSH aliases is that all your ssh shortcuts are stored in a single file that is easy to maintain. The other advantage is that we can use the same alias **for both ssh and scp commands alike**.

Before we jump into actual configurations, we should know the difference between the /etc/ssh/ssh_config, /etc/ssh/sshd_config, and ~/.ssh/config files. Below is an explanation of these files.
## Difference between /etc/ssh/ssh_config and ~/.ssh/config
System-level SSH client configuration is stored in /etc/ssh/ssh_config, whereas user-level ssh configuration is stored in the ~/.ssh/config file.
## Difference between /etc/ssh/ssh_config and /etc/ssh/sshd_config
System-level SSH client configuration is stored in /etc/ssh/ssh_config, whereas system-level SSH server (sshd) configuration is stored in the /etc/ssh/sshd_config file.
## Syntax for configuration in the ~/.ssh/config file
Syntax for ~/.ssh/config file content.
```
config val
config val1 val2
```
**Example 1:** Create an SSH alias for a host (www.linuxnix.com).
Edit file ~/.ssh/config with following content
```
Host tlj
User root
HostName 18.197.176.13
port 22
```
Save the file
The above ssh alias uses
1. **tlj as an alias name**
2. **root as a user who will log in**
3. **18.197.176.13 as hostname IP address**
4. **22 as a port to access SSH service.**
Output:
```
sanne@Surendras-MacBook-Pro:~ > ssh tlj
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Sat Oct 14 01:00:43 2017 from 20.244.25.231
root@linuxnix:~# exit
logout
Connection to 18.197.176.13 closed.
```
**Example 2:** Use an SSH key (specified with **IdentityFile**) to log in to the system without a password.
Example:
```
Host aws
User ec2-user
HostName ec2-54-200-184-202.us-west-2.compute.amazonaws.com
IdentityFile ~/Downloads/surendra.pem
port 22
```
**Example 3:** Use different aliases for the same host. In the example below, we use **tlj, linuxnix, linuxnix.com** for the same IP/hostname 18.197.176.13.
~/.ssh/config file content
```
Host tlj linuxnix linuxnix.com
User root
HostName 18.197.176.13
port 22
```
**Output:**
```
sanne@Surendras-MacBook-Pro:~ > ssh tlj
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Sat Oct 14 01:00:43 2017 from 220.244.205.231
root@linuxnix:~# exit
logout
Connection to 18.197.176.13 closed.
sanne@Surendras-MacBook-Pro:~ > ssh linuxnix.com
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Sun Oct 15 20:31:08 2017 from 1.129.110.13
root@linuxnix:~# exit
logout
Connection to 138.197.176.103 closed.
[6571] sanne@Surendras-MacBook-Pro:~ > ssh linuxnix
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Sun Oct 15 20:31:20 2017 from 1.129.110.13
root@linuxnix:~# exit
logout
Connection to 18.197.176.13 closed.
```
**Example 4:** Copy a file to a remote system using the same SSH alias.
Syntax:
```
scp <filename> <ssh_alias>:<location>
```
Example:
```
sanne@Surendras-MacBook-Pro:~ > scp abc.txt tlj:/tmp
abc.txt 100% 12KB 11.7KB/s 00:01
sanne@Surendras-MacBook-Pro:~ >
```
As we have already set up the ssh host as an alias, using scp to copy a file from the local machine to the remote one is a breeze: both ssh and scp use almost the same syntax and options.
**Example 5:** Resolve SSH timeout issues in Linux. By default, your ssh session times out if you don't actively use the terminal.
[SSH timeouts][5] are one more pain point, forcing you to log in to a remote machine again after a certain time. We can set the SSH timeout right inside your ~/.ssh/config file to keep your session alive for as long as you want. To achieve this, we will use two SSH options: ServerAliveInterval, which sends a keepalive message to the server every given number of seconds, and ServerAliveCountMax, which sets how many of those messages may go unanswered before the connection is closed.
```
ServerAliveInterval A
ServerAliveCountMax B
```
**Example:**
```
Host tlj linuxnix linuxnix.com
User root
HostName 18.197.176.13
port 22
ServerAliveInterval 60
ServerAliveCountMax 30
```
We will see some other exciting howtos in our next posts. Keep visiting linuxnix.com.
--------------------------------------------------------------------------------
via: https://www.linuxnix.com/5-ssh-alias-examples-using-ssh-config-file/
作者:[Surendra Anne;Max Ntshinga;Otto Adelfang;Uchechukwu Okeke][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxnix.com
[1]:https://www.linuxnix.com/wp-content/uploads/2017/10/SSH-alias-1.png
[2]:https://www.linuxnix.com/ssh-access-remote-linux-server/
[3]:https://www.linuxnix.com/linux-alias-command-explained-with-examples/
[4]:/cdn-cgi/l/email-protection
[5]:https://www.linuxnix.com/how-to-auto-logout/

Could we run Python 2 and Python 3 code in the same VM with no code changes?
======
Theoretically, yes. Zed Shaw famously jested that if this is impossible then Python 3 must not be Turing-complete. But in practice, this is unrealistic and I'll share why by giving you a few examples.
### What does it mean to be a dict?
Let's imagine a Python 6 VM. It can read `module3.py`, which was written for Python 3.6, but in this module it can import `module2.py`, which was written for Python 2.7, and successfully use it with no issues. This is obviously toy code, but let's say that `module2.py` includes functions like:
```
def update_config_from_dict(config_dict):
items = config_dict.items()
while items:
k, v = items.pop()
memcache.set(k, v)
def config_to_dict():
result = {}
for k, v in memcache.getall():
result[k] = v
return result
def update_in_place(config_dict):
for k, v in config_dict.items():
new_value = memcache.get(k)
if new_value is None:
del config_dict[k]
elif new_value != v:
config_dict[k] = v
```
Now, when we want to use those functions from `module3`, we are faced with a problem: the dict type in Python 3.6 is different from the dict type in Python 2.7. In Python 2, dicts were unordered and their `.keys()`, `.values()`, `.items()` methods returned proper lists. That meant calling `.items()` created a copy of the state in the dictionary. In Python 3 those methods return dynamic views on the current state of the dictionary.
This means if `module3` called `module2.update_config_from_dict(some_dictionary)`, it would fail to run because the value returned by `dict.items()` in Python 3 isn't a list and doesn't have a `.pop()` method. The reverse is also true. If `module3` called `module2.config_to_dict()`, it would presumably return a Python 2 dictionary. Now calling `.items()` suddenly returns a list, so the following code, which works fine with Python 3 dictionaries, would not work correctly:
```
def main(cmdline_options):
d = module2.config_to_dict()
items = d.items()
for k, v in items:
print(f'Config from memcache: {k}={v}')
for k, v in cmdline_options:
d[k] = v
for k, v in items:
print(f'Config with cmdline overrides: {k}={v}')
```
Finally, using `module2.update_in_place()` would fail because the value of `.items()` in Python 3 now doesn't allow the dictionary to change during iteration.
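The Python 3 half of these differences is easy to demonstrate on a modern interpreter (the Python 2 half, of course, no longer runs there):

```python
d = {"a": 1}
items = d.items()              # a dynamic view, not a list copy

d["b"] = 2
print(len(items))              # 2: the view tracks the dict's current state
print(hasattr(items, "pop"))   # False: views have no list methods like .pop()

# Resizing the dict while iterating over a view is an error in Python 3:
try:
    for k, v in d.items():
        del d[k]
except RuntimeError as e:
    print(type(e).__name__)    # RuntimeError
```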
There are more issues with this dictionary situation. Should a Python 2 dictionary respond `True` to `isinstance(d, dict)` on Python 3? If it did, it'd be a lie. If it didn't, code would break anyway.
### Python should magically know types and translate!
Why can't our Python 6 VM recognize that in Python 3 code we mean something else when calling `some_dict.keys()` than in Python 2 code? Well, Python doesn't know what the author of the code thought `some_dict` should be when she was writing that code. There is nothing in the code that signifies whether it's a dictionary at all. Type annotations weren't there in Python 2 and, since they're optional, even in Python 3 most code doesn't use them yet.
At runtime, when you call `some_dict.keys()`, Python simply looks up an attribute on the object that happens to hide under the `some_dict` name and tries to run `__call__()` on that attribute. There are some technicalities with method binding, descriptors, slots, etc., but this is the gist of it. We call this behavior "duck typing".
Because of duck typing, the Python 6 VM would not be able to make compile-time decisions to translate calls and attribute lookups correctly.
### OK, so let's make this decision at runtime instead
The Python 6 VM could implement this by tagging every attribute lookup with the information "call comes from py2" or "call comes from py3" and making the object on the other side dispatch the right attribute. That would slow things down a lot and use more memory, too. It would require us to keep both versions of the given type in memory, with a proxy used by user code. We would need to sync the state of those objects behind the user's back, doubling the work. After all, the memory representation of the new dictionary is different from that in Python 2.
If your head spun thinking about the problems with dictionaries, think about all the issues with Unicode strings in Python 3 and the do-it-all byte strings in Python 2.
### Is everything lost? Can't Python 3 run old code ever?
Everything is not lost. Projects get ported to Python 3 every day. The recommended way to port Python 2 code to work on both versions of Python is to run [Python-Modernize][1] on your code. It will catch code that wouldn't work on Python 3 and translate it to use the [six][2] library instead, so it runs on both Python 2 and Python 3 afterwards. It's an adaptation of `2to3`, which produced Python 3-only code. `Modernize` is preferred since it provides a more incremental migration route. All this is outlined very well in the [Porting Python 2 Code to Python 3][3] document in the Python documentation.
But wait, didn't you say a Python 6 VM couldn't do this automatically? Right. `Modernize` looks at your code and tries to guess what's going to be safe. It will make some changes that are unnecessary and miss others that are necessary. Famously, it won't help you with strings. That transformation is not trivial if your code didn't keep the boundaries between "binary data from outside" and "text data within the process".
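The kind of mechanical transformation these tools apply can be illustrated with the dictionary example from earlier: materializing views into lists makes the code behave the same on both interpreters. (This is a hand-written sketch of the pattern, not actual Modernize output, and the memcache calls are replaced by a plain `sink` dict to keep it self-contained.)

```python
def update_config_from_dict(config_dict, sink):
    # Py2: .items() already returned a list, so list() is a harmless copy.
    # Py3: list() materializes the view, restoring the snapshot-plus-.pop()
    #      semantics the original Python 2 code relied on.
    items = list(config_dict.items())
    while items:
        k, v = items.pop()
        sink[k] = v

store = {}
update_config_from_dict({"host": "web1", "port": 8080}, store)
print(sorted(store))   # ['host', 'port']
```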
So, migrating big projects cannot be done automatically and involves humans running tests, finding problems and fixing them. Does it work? Yes, I helped [moving a million lines of code to Python 3][4] and the switch caused no incidents. This particular move regained 1/3 of memory on our servers and made the code run 12% faster. That was on Python 3.5. But Python 3.6 got quite a bit faster and depending on your workload you could maybe even achieve [a 4X speedup][5].
### Dear Zed
Hi, man. I've followed your story for over 10 years now. I was watching when you were upset about getting no credit for Mongrel even though the Rails ecosystem pretty much all ran on it. I was there when you reimagined it and started the Mongrel 2 project. I was following your surprising move to use Fossil for it. I've seen you abruptly depart from the Ruby community with your “Rails is a Ghetto” post. I was thrilled when you started working on “Learn Python The Hard Way” and have been recommending it ever since. I met you in 2013 at [DjangoCon Europe][6] and we talked quite a bit about painting, singing and burnout. [This photo of you][7] is one of my first posts on Instagram.
You almost pulled another “Is a Ghetto” move with your [“The Case Against Python 3”][8] post. I think your heart is in the right place, but that post caused a lot of confusion, including many people seriously thinking you believe Python 3 is not Turing-complete. I spent quite a few hours convincing people that you said so in jest. But given your very valuable contribution of “Learn Python The Hard Way”, I think it was worth doing. Especially since you did update your book for Python 3. Thank you for doing that work. If there really are people in our community who called for blacklisting you and your book on the grounds of your post alone, call them out. It's a lose-lose situation and it's wrong.
For the record, no core Python dev thinks that the Python 2 -> Python 3 transition was smooth and well planned, [including Guido van Rossum][9]. Seriously, watch that video. Hindsight is 20/20, of course. In this sense we are in fact aggressively agreeing with each other. If we went to do it all over again, it would look different. But at this point, [on January 1st 2020 Python 2 will reach End Of Life][10]. Most third-party libraries already support Python 3 and even started releasing Python 3-only versions (see [Django][11] or the [Python 3 Statement of the scientific projects][12]).
We are also aggressively agreeing on another thing. Just like you with Mongrel, Python core devs are volunteers who aren't compensated for their work. Most of us invested a lot of time and effort in this project, and so [we are naturally sensitive][13] to dismissive and aggressive comments against our contribution. Especially if the message is both attacking the current state of affairs and calling for more free labor.
I hoped that by 2018 you'd let your 2016 post go. There were a bunch of good rebuttals. [I especially like eevee's][14]. It specifically addresses the “run Python 2 alongside Python 3” scenario as not realistic, just like running Ruby 1.8 and Ruby 2.x in the same VM, or running Lua 5.1 alongside 5.3. You can't even run C binaries compiled against libc.so.5 with libc.so.6. What I find most surprising, though, is that you claimed that Python core is “purposefully” creating broken tools like 2to3, which was created by Guido, in whose best interest it is for everybody to migrate as smoothly and quickly as possible. I'm glad that you backed out of that claim in your post later, but you have to realize you antagonized people who read the original version. Accusations of deliberate harm had better be backed by strong evidence.
But it seems like you still do that. [Just today][15] you said that Python core “ignores” attempts to solve the API problem, specifically `six`. As I wrote above, `six` is covered by the official porting guide in the Python documentation. More importantly, `six` was written by Benjamin Peterson, the release manager of Python 2.7. A lot of people learned to program thanks to you, and you have a large following online. People will read a tweet like this and take it at face value. This is harmful.
I have a suggestion. Let's put this “Python 3 was poorly managed” dispute behind us. Python 2 is dying, we are slow to kill it, and the process was ugly and bloody, but it's a one-way street. Arguing about it is not actionable anymore. Instead, let's focus on what we can do now to make Python 3.8 better than any other Python release. Maybe you prefer the role of an outsider looking in, but you would be much more impactful as a member of this community. Saying “we” instead of “they”.
--------------------------------------------------------------------------------
via: http://lukasz.langa.pl/13/could-we-run-python-2-and-python-3-code-same-vm/
作者:[Łukasz Langa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://lukasz.langa.pl
[1]:https://python-modernize.readthedocs.io/
[2]:http://pypi.python.org/pypi/six
[3]:https://docs.python.org/3/howto/pyporting.html
[4]:https://www.youtube.com/watch?v=66XoCk79kjM
[5]:https://twitter.com/llanga/status/963834977745022976
[6]:https://www.instagram.com/p/ZVC9CwH7G1/
[7]:https://www.instagram.com/p/ZXtdtUn7Gk/
[8]:https://learnpythonthehardway.org/book/nopython3.html
[9]:https://www.youtube.com/watch?v=Oiw23yfqQy8
[10]:https://mail.python.org/pipermail/python-dev/2018-March/152348.html
[11]:https://pypi.python.org/pypi/Django/2.0.3
[12]:http://python3statement.org/
[13]:https://www.youtube.com/watch?v=-Nk-8fSJM6I
[14]:https://eev.ee/blog/2016/11/23/a-rebuttal-for-python-3/
[15]:https://twitter.com/zedshaw/status/977909970795745281

Build a baby monitor with a Raspberry Pi
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/baby-chick-egg.png?itok=RcFfqdbA)
Hong Kong can be hot and humid, even at night, and many people use air conditioning to make their homes more bearable. When my oldest son was a baby, the air conditioning unit in his bedroom had manual controls and no thermostat functionality. It was either on or off, and allowing it to run continuously overnight caused the room to get cold and wasted energy and money.
I decided to fix this problem with an [Internet of Things][1] solution based on a [Raspberry Pi][2]. Later I took it a step further with a [baby monitor][3] add-on. In this article, I'll explain how I did it, and the code is [available on my GitHub][4] page.
### Setting up the air conditioner controller
I solved the first part of my problem with an Orvibo S20 [WiFi-connected smart plug][5] and smartphone application. Although this allowed me to control the air conditioning unit remotely, it was still a manual process, and I wanted to try and automate it. I found a project on [Instructables][6] that seemed to match my requirements: It used a Raspberry Pi to measure local temperature and humidity readings from an [AM2302 sensor][7] and record them to a MySQL database.
Using crimp terminal contacts with crimp housings made it a cinch to connect the temperature/humidity sensor to the correct GPIO pins on the Raspberry Pi. Fortunately, the AM2302 sensor has [open source software][8] for taking readings, with helpful [Python][9] examples.
The software for [interfacing with the AM2302 sensor][10] has been updated since I put my project together, and the original code I used is now considered legacy and unmaintained. The code is made up of a small binary object to connect to the sensor and some Python scripts to interpret the readings and return the correct values.
![Raspberry Pi, sensor, and Python code][12]
Raspberry Pi, sensor, and Python code used to build the temperature/humidity monitor.
With the sensor connected to the Raspberry Pi, the Python code can correctly return temperature and humidity readings. Connecting Python to a MySQL database is straightforward, and there are plenty of code examples that use the `python-mysql` bindings. Because I needed to monitor the temperature and humidity continuously, I wrote software to do this.
In fact, I ended up with two solutions: one that would run continuously as a process and periodically poll the sensor (typically at one-minute intervals), and another Python script that ran once and exited. I decided to use the run-once-and-exit approach coupled with cron to call this script every minute. The main reason was that the continuous (looped) script occasionally would not return a reading, which could lead to a buildup of processes trying to read the sensor, and that would eventually cause the system to hang due to lack of available resources.
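A cron entry for the run-once-and-exit approach could look like this (the script path and log location are illustrative, not taken from the project):

```
# m h dom mon dow command -- poll the AM2302 sensor once every minute
* * * * * /usr/bin/python /home/pi/rpi-temp-humid-monitor/read_sensor.py >> /var/log/sensor.log 2>&1
```

Because each invocation exits cleanly, a single failed reading can never pile up into a queue of stuck processes.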
I also found a convenient [Perl script][13] to programmatically control my smart plug. This was an essential piece of the jigsaw, as it meant I could trigger the Perl script if certain temperature and/or humidity conditions were met. After some testing, I decided to create a separate `checking` script that would pull the latest values from the MySQL database and set the smart plug on or off depending upon the values returned. Separating the logic to run the plug control script from the sensor-reading script also meant that it operated independently and would continue to run, even if the sensor-reading script developed problems.
It made sense to make the temperature at which the air conditioner would switch on/off configurable, so I moved these values to a configuration file that the control script reads. I also found that, although the sensor was generally accurate, occasionally it would return incorrect readings. The sensor script was modified so that temperature or humidity values significantly different from the previous values were not written to the MySQL database. Likewise, the allowed variance of temperature or humidity between consecutive readings was set in a general configuration file, and if a reading fell outside these limits, the values would not be committed to the database.
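The variance check described above might be sketched like this (the function name and thresholds are hypothetical, not from the project's code):

```python
def is_plausible(new_value, previous_value, max_delta):
    """Reject a reading that jumps more than max_delta from the last stored value."""
    if previous_value is None:        # first reading: nothing to compare against
        return True
    return abs(new_value - previous_value) <= max_delta

# With a configured variance of 2.0 degrees between one-minute readings:
assert is_plausible(25.3, previous_value=25.1, max_delta=2.0)       # small drift: commit
assert not is_plausible(41.7, previous_value=25.1, max_delta=2.0)   # sensor glitch: discard
```

Only readings that pass the check would be committed to the database; everything else is silently dropped until the next cron run.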
Although this seemed like quite a lot of effort to make a thermostat, recording the data to a MySQL database meant it was available for further analysis to identify usage patterns. There are many graphing options available to present data from a MySQL database, and I decided to use [Google Chart][14] to display the data on a web page.
![Temperature and humidity chart][16]
Temperature and humidity measured over the previous six hours.
### Adding a baby monitor camera
The open nature of the Raspberry Pi meant I could continue to add functionality—and I had plenty of open GPIO pins available. My next idea was to add a camera module to set it up as a baby monitor, given that the device was already in the baby's bedroom.
I needed a camera that works in the dark, and the [Pi Noir][17] camera module is perfect for this. The Pi Noir is the same as the Raspberry Pi's regular camera module, except it doesn't have an infrared (IR) filter. This means daytime images may have a slightly purple tint, but it will display images lit with IR light in the dark.
Now I needed a source of IR light. Due to the Pi's popularity and low barrier of entry, there are a huge number of peripherals and add-ons for it. Of the many IR sources available, the one that caught my attention was the [Bright Pi][18]. It draws power from the Raspberry Pi and fits around the camera Pi module to provide a source of IR and normal light. The only drawback was I needed to dust off my rusty soldering skills.
It might have taken me longer than most, but my soldering skills were up to it, and I was able to successfully attach all the IR LEDs to the housing and connect the IR light source to the Pi's GPIO pins. This also meant that the Pi could programmatically control when the IR LEDs were lit, as well as their light intensity.
It also made sense to have the video capture exposed via a web stream so I could watch it from the web page with the temperature and humidity readings chart. After further research, I chose to use a [streaming software][19] that used M-JPEG captures. Exposing the JPG source via the web page also allowed me to connect camera viewer applications on my smartphone to view the camera output there, as well.
### Putting on the finishing touches
No Raspberry Pi project is complete without selecting an appropriate case for the Pi and its various components. After a lot of searching and comparing, there was one clear [winner][20]: SmartPi's Lego-style case. The Lego compatibility allowed me to build mounts for the temperature/humidity sensor and camera. Here's the final outcome:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pibabymonitor_case.png?itok=_ofyN73a)
Since then, I've made other changes and updates to my setup:
* I upgraded from a Raspberry Pi 2 Model B to a [Raspberry Pi 3][21], which meant I could do away with the USB WiFi module.
* I replaced the Orvibo S20 with a [TP-Link HS110][22] smart plug.
* I also plugged the Pi into a smart plug so I can do remote reboots/resets.
* I migrated the MySQL database off the Raspberry Pi, and it now runs in a container on a NAS device.
* I added a [flexible tripod][23] to allow for the best camera angle.
* I recompiled the USB WiFi module to disable the onboard flashing LED, which was one of the main advantages to upgrading to a Raspberry Pi 3.
* I've since built another monitor for my second child.
* I bought a bespoke night camera for my third child … due to lack of time.
Want to learn more? All the code is [available on my GitHub][4] page.
Do you have a Raspberry Pi project to share? [Send us your story idea][24].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/build-baby-monitor-raspberry-pi
作者:[Jonathan Ervine][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jervine
[1]:https://opensource.com/tags/internet-things
[2]:https://opensource.com/tags/raspberry-pi
[3]:https://opensource.com/article/17/9/gonimo
[4]:https://github.com/jervine/rpi-temp-humid-monitor
[5]:https://www.amazon.co.uk/marsboy-S20-Automation-Control-Smartphone/dp/B01LXKPUDK/ref=sr_1_1/258-6082934-2585109?ie=UTF8&qid=1520578769&sr=8-1&keywords=orvibo+s20
[6]:http://www.instructables.com/id/Raspberry-Pi-Temperature-Humidity-Network-Monitor/
[7]:https://www.adafruit.com/product/393
[8]:https://github.com/adafruit/Adafruit_Python_DHT
[9]:https://opensource.com/tags/python
[10]:https://github.com/adafruit/Adafruit-Raspberry-Pi-Python-Code/tree/legacy/Adafruit_DHT_Driver_Python
[11]:/file/390916
[12]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pibabymonitor_materials.png?itok=2w03CdKM (Raspberry Pi, sensor, and Python code)
[13]:https://github.com/franc-carter/bauhn-wifi
[14]:https://developers.google.com/chart/
[15]:/file/390876
[16]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pibabymonitor_temp-humidity.png?itok=2jqtQU0x (Temperature and humidity chart)
[17]:https://www.raspberrypi.org/products/pi-noir-camera-v2/
[18]:https://www.pi-supply.com/product/bright-pi-bright-white-ir-camera-light-raspberry-pi/
[19]:https://elinux.org/RPi-Cam-Web-Interface
[20]:https://smarticase.com/collections/all/products/smartipi-kit-3
[21]:https://opensource.com/article/18/3/raspberry-pi-3b-model-news
[22]:https://www.tp-link.com/uk/products/details/cat-5258_HS110.html
[23]:https://www.amazon.com/Flexpod-Flexible-Tripod-Discontinued-Manufacturer/dp/B000JC8WYA
[24]:http://opensource.com/story

Translating by MjSeven
Getting started with Jupyter Notebooks
======

How To Create/Extend Swap Partition In Linux Using LVM
======
We use LVM for flexible volume management, so why can't we use LVM for swap space too?
This allows users to increase the swap space whenever needed.
If you have upgraded the RAM in your system, you may need to add more swap space.
This helps you manage systems that run applications requiring a large amount of memory.
Swap space can be created in three ways:
* Create a new swap partition
* Create a new swap file
* Extend swap on an existing logical volume (LVM)
It's recommended to create a dedicated swap partition instead of a swap file.
**Suggested Read :**
**(#)** [3 Easy Ways To Create Or Extend Swap Space In Linux][1]
**(#)** [Automatically Create/Remove And Mount Swap File In Linux Using Shell Script][2]
What is the recommended swap size in Linux?
### What Is Swap Space
Swap space in Linux is used when the amount of physical memory (RAM) is full. When physical RAM is full, inactive pages in memory are moved to the swap space.
This helps the system keep applications running, but it's not considered a replacement for more RAM.
Swap space is located on hard drives, so it cannot serve requests as fast as physical RAM.
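Before creating or resizing anything, you can inspect the swap your system currently has:

```
# List active swap areas and the kernel's view of swap capacity
cat /proc/swaps
grep '^Swap' /proc/meminfo
```

`SwapTotal` and `SwapFree` in `/proc/meminfo` give you the same totals that `free` reports.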
### How To Create A Swap Partition Using LVM
Since we already know how to create a logical volume, creating one for swap works the same way. Just follow the procedure below.
Create a logical volume of the size you require. In my case, I'm going to create a `5GB` swap volume.
```
$ sudo lvcreate -L 5G -n LogVol_swap1 vg00
Logical volume "LogVol_swap1" created.
```
Format the new swap space.
```
$ sudo mkswap /dev/vg00/LogVol_swap1
Setting up swapspace version 1, size = 5 GiB (5368705024 bytes)
no label, UUID=d278e9d6-4c37-4cb0-83e5-2745ca708582
```
Add the following entry to the `/etc/fstab` file.
```
# vi /etc/fstab
/dev/mapper/vg00-LogVol_swap1 swap swap defaults 0 0
```
Enable swap on the new logical volume.
```
$ sudo swapon -va
swapon: /swapfile: already active -- ignored
swapon: /dev/mapper/vg00-LogVol_swap1: found signature [pagesize=4096, signature=swap]
swapon: /dev/mapper/vg00-LogVol_swap1: pagesize=4096, swapsize=5368709120, devsize=5368709120
swapon /dev/mapper/vg00-LogVol_swap1
```
Test that the swap space has been added properly.
```
$ cat /proc/swaps
Filename Type Size Used Priority
/swapfile file 1459804 526336 -1
/dev/dm-0 partition 5242876 0 -2
$ free -g
total used free shared buff/cache available
Mem: 1 1 0 0 0 0
Swap: 6 0 6
```
### How To Expand A Swap Partition Using LVM
Follow the procedure below to extend an LVM swap logical volume.
Disable swapping for the associated logical volume.
```
$ sudo swapoff -v /dev/vg00/LogVol_swap1
swapoff /dev/vg00/LogVol_swap1
```
Resize the logical volume. I'm going to increase the swap volume from `5GB` to `11GB`.
```
$ sudo lvresize /dev/vg00/LogVol_swap1 -L +6G
Size of logical volume vg00/LogVol_swap1 changed from 5.00 GiB (1280 extents) to 11.00 GiB (2816 extents).
Logical volume vg00/LogVol_swap1 successfully resized.
```
Format the new swap space.
```
$ sudo mkswap /dev/vg00/LogVol_swap1
mkswap: /dev/vg00/LogVol_swap1: warning: wiping old swap signature.
Setting up swapspace version 1, size = 11 GiB (11811155968 bytes)
no label, UUID=2e3b2ee0-ad0b-402c-bd12-5a9431b73623
```
Enable the extended logical volume.
```
$ sudo swapon -va
swapon: /swapfile: already active -- ignored
swapon: /dev/mapper/vg00-LogVol_swap1: found signature [pagesize=4096, signature=swap]
swapon: /dev/mapper/vg00-LogVol_swap1: pagesize=4096, swapsize=11811160064, devsize=11811160064
swapon /dev/mapper/vg00-LogVol_swap1
```
Test that the logical volume has been extended properly.
```
$ free -g
total used free shared buff/cache available
Mem: 1 1 0 0 0 0
Swap: 12 0 12
$ cat /proc/swaps
Filename Type Size Used Priority
/swapfile file 1459804 237024 -1
/dev/dm-0 partition 11534332 0 -2
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-create-extend-swap-partition-in-linux-using-lvm/
作者:[Ramya Nuvvula][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/ramya/
[1]:https://www.2daygeek.com/add-extend-increase-swap-space-memory-file-partition-linux/
[2]:https://www.2daygeek.com/shell-script-create-add-extend-swap-space-linux/

How to configure multiple websites with Apache web server
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/apache-feathers.jpg?itok=fnrpsu3G)
In my [last post][1], I explained how to configure an Apache web server for a single website. It turned out to be very easy. In this post, I will show you how to serve multiple websites using a single instance of Apache.
Note: I wrote this article on a virtual machine using Fedora 27 with Apache 2.4.29. If you have another distribution or release of Fedora, the commands you will use and the locations and content of the configuration files may be different.
As my previous article mentioned, all of the configuration files for Apache are located in `/etc/httpd/conf` and `/etc/httpd/conf.d`. The data for the websites is located in `/var/www` by default. With multiple websites, you will need to provide multiple locations, one for each site you host.
### Name-based virtual hosting
With name-based virtual hosting, you can use a single IP address for multiple websites. Modern web servers, including Apache, use the `hostname` portion of the specified URL to determine which virtual web host responds to the page request. This requires only a little more configuration than for a single site.
Even if you are starting with only a single website, I recommend that you set it up as a virtual host, which will make it easier to add more sites later. In this article, I'll pick up where we left off in the previous article, so you'll need to set up the original website as a name-based virtual host.
### Preparing the original website
Before you set up a second website, you need to get name-based virtual hosting working for the existing site. If you do not have an existing website, [go back and create one now][1].
Once you have your site, add the following stanza to the bottom of its `/etc/httpd/conf/httpd.conf` configuration file (adding this stanza is the only change you need to make to the `httpd.conf` file):
```
<VirtualHost 127.0.0.1:80>
    DocumentRoot /var/www/html
    ServerName www.site1.org
</VirtualHost>
```
This will be the first virtual host stanza, and it should remain first, to make it the default definition. That means that HTTP access to the server by IP address, or by another name that resolves to this IP address but that does not have a specific named host configuration stanza, will be directed to this virtual host. All other virtual host configuration stanzas should follow this one.
You also need to set up your websites with entries in `/etc/hosts` to provide name resolution. Last time, we just used the IP address for `localhost`. Normally, this would be done using whichever name service you use; for example, Google or Godaddy. For your test website, do this by adding a new name to the `localhost` line in `/etc/hosts`. Add the entries for both websites so you don't need to edit this file again later. The result looks like this:
```
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 www.site1.org www.site2.org
```
Let's also change the `/var/www/html/index.html` file to be a little more explicit. It should look like this (with some additional text to identify this as website number 1):
```
<h1>Hello World</h1>
Web site 1.
```
Restart the HTTPD server to enable the changes to the `httpd` configuration. You can then look at the website using the Lynx text mode browser from the command line.
```
[root@testvm1 ~]# systemctl restart httpd
[root@testvm1 ~]# lynx www.site1.org
                                              Hello World
  Web site 1.
<snip>
Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back.
Arrow keys: Up and Down to move.  Right to follow a link; Left to go back.
H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list
```
You can see that the revised content for the original website is displayed and that there are no obvious errors. Press the “Q” key, followed by “Y” to exit the Lynx web browser.
### Configuring the second website
Now you are ready to set up the second website. Create a new website directory structure with the following command:
```
[root@testvm1 html]# mkdir -p /var/www/html2
```
Notice that the second website is simply a second `html` directory in the same `/var/www` directory as the first site.
Now create a new index file, `/var/www/html2/index.html`, with the following content (this index file is a bit different, to distinguish it from the one for the original website):
```
<h1>Hello World -- Again</h1>
Web site 2.
```
Create a new configuration stanza in `httpd.conf` for the second website and place it below the previous virtual host stanza (the two should look very similar). This stanza tells the web server where to find the HTML files for the second site.
```
<VirtualHost 127.0.0.1:80>
    DocumentRoot /var/www/html2
    ServerName www.site2.org
</VirtualHost>
```
Restart HTTPD again and use Lynx to view the results.
```
[root@testvm1 httpd]# systemctl restart httpd
[root@testvm1 httpd]# lynx www.site2.org
                                    Hello World -- Again
   Web site 2.
<snip>
Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back.
Arrow keys: Up and Down to move.  Right to follow a link; Left to go back.
H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list
```
Here I have compressed the resulting output to fit this space. The difference in the page indicates that this is the second website. To show both websites at the same time, open another terminal session and use the Lynx web browser to view the other site.
### Other considerations
This simple example shows how to serve up two websites with a single instance of the Apache HTTPD server. Configuring the virtual hosts becomes a bit more complex when other factors are considered.
For example, you may want to use some CGI scripts for one or both of these websites. To do this, you would create directories for the CGI programs in `/var/www`: `/var/www/cgi-bin` and `/var/www/cgi-bin2`, to be consistent with the HTML directory naming. You would then need to add configuration directives to the virtual host stanzas to specify the directory location for the CGI scripts. Each website could also have directories from which files could be downloaded; this would also require entries in the appropriate virtual host stanza.
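As a sketch, the CGI additions to the second site's stanza might look like the following (directive values are illustrative and would need adjusting for your setup):

```
<VirtualHost 127.0.0.1:80>
    DocumentRoot /var/www/html2
    ServerName www.site2.org
    ScriptAlias /cgi-bin/ "/var/www/cgi-bin2/"
    <Directory "/var/www/cgi-bin2">
        Options +ExecCGI
        Require all granted
    </Directory>
</VirtualHost>
```

The `ScriptAlias` directive maps URL paths under `/cgi-bin/` to the site's own script directory, keeping each virtual host's CGI programs separate.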
The [Apache website][2] describes other methods for managing multiple websites, as well as configuration options from performance tuning to security.
Apache is a powerful web server that can be used to manage websites ranging from simple to highly complex. Although its overall share is shrinking, Apache remains the single most commonly used HTTPD server on the Internet.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/configuring-multiple-web-sites-apache
作者:[David Both][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dboth
[1]:https://opensource.com/article/18/2/how-configure-apache-web-server
[2]:https://httpd.apache.org/docs/2.4/

Python ChatOps libraries: Opsdroid and Errbot
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd)
This article was co-written with [Lacey Williams Henschel][1].
ChatOps is conversation-driven development. The idea is you can write code that is executed in response to something typed in a chat window. As a developer, you could use ChatOps to merge pull requests from Slack, automatically assign a support ticket to someone from a received Facebook message, or check the status of a deployment through IRC.
In the Python world, the most widely used ChatOps libraries are Opsdroid and Errbot. In this month's Python column, let's chat about what it's like to use them, what each does well, and how to get started with them.
### Opsdroid
[Opsdroid][2] is a relatively young (since 2016) open source chatbot library written in Python. It has good documentation and a great tutorial, and it includes plugins to help you connect to popular chat services.
#### What's built in
The library itself doesn't ship with everything you need to get started, but this is by design. The lightweight framework encourages you to enable its existing connectors (what Opsdroid calls the plugins that help you connect to chat services) or write your own, but it doesn't weigh itself down by shipping with connectors you may not need. You can easily enable existing Opsdroid connectors for:
+ The command line
+ Cisco Spark
+ Facebook
+ GitHub
+ Matrix
+ Slack
+ Telegram
+ Twitter
+ Websockets
Opsdroid calls the functions the chatbot performs "skills." Skills are `async` Python functions and use Opsdroid's matching decorators, called "matchers." You can configure your Opsdroid project to use skills from the same codebase your configuration file is in or import skills from outside public or private repositories.
You can enable some existing Opsdroid skills as well, including [seen][3], which tells you when a specific user was last seen by the bot, and [weather][4], which will report the weather to the user.
Finally, Opsdroid allows you to configure databases using its existing database modules. Current databases with Opsdroid support include:
+ Mongo
+ Redis
+ SQLite
You configure databases, skills, and connectors in the `configuration.yaml` file in your Opsdroid project.
#### Opsdroid pros
**Docker support:** Opsdroid is meant to work well in Docker from the get-go. Docker instructions are part of its [installation documentation][5]. Using Opsdroid with Docker Compose is also simple: Set up Opsdroid as a service and when you run `docker-compose up`, your Opsdroid service will start and your chatbot will be ready to chat.
```
version: "3"
services:
  opsdroid:
    container_name: opsdroid
    build:
      context: .
      dockerfile: Dockerfile
```
**Lots of connectors:** Opsdroid supports nine connectors to services like Slack and GitHub out of the box; all you need to do is enable those connectors in your configuration file and pass necessary tokens or API keys. For example, to enable Opsdroid to post in a Slack channel named `#updates`, add this to the `connectors` section of your configuration file:
```
- name: slack
    api-token: "this-is-my-token"
    default-room: "#updates"
```
You will have to [add a bot user][6] to your Slack workspace before configuring Opsdroid to connect to Slack.
If you need to connect to a service that Opsdroid does not support, there are instructions for adding your own connectors in the [docs][7].
**Pretty good docs.** Especially for a young-ish library in active development, Opsdroid's docs are very helpful. The docs include a [tutorial][8] that leads you through creating a couple of different basic skills. The Opsdroid documentation on [skills][9], [connectors][7], [databases][10], and [matchers][11] is also clear.
The repositories for its supported skills and connectors provide helpful example code for when you start writing your own custom skills and connectors.
**Natural language processing:** Opsdroid supports regular expressions for its skills, but also several NLP APIs, including [Dialogflow][12], [luis.ai][13], [Recast.AI][14], and [wit.ai][15].
#### Possible Opsdroid concern
Opsdroid doesn't yet enable the full features of some of its connectors. For example, the Slack API allows you to add color bars, images, and other "attachments" to your message. The Opsdroid Slack connector doesn't enable the "attachments" feature, so you would need to write a custom Slack connector if those features were important to you. If a connector is missing a feature you need, though, Opsdroid would welcome your [contribution][16]. The docs could use some more examples, especially of expected use cases.
#### Example usage
`hello/__init__.py`
```
from opsdroid.matchers import match_regex
import random
@match_regex(r'hi|hello|hey|hallo')
async def hello(opsdroid, config, message):
    text = random.choice(["Hi {}", "Hello {}", "Hey {}"]).format(message.user)
    await message.respond(text)
```
`configuration.yaml`
```
connectors:
  - name: websocket
skills:
  - name: hello
    repo: "https://github.com/<user_id>/hello-skill"
```
### Errbot
[Errbot][17] is a batteries-included open source chatbot. Errbot was released in 2012 and has everything anyone would expect from a mature project, including good documentation, a great tutorial, and plenty of plugins to help you connect to existing popular chat services.
#### What's built in
Unlike Opsdroid, which takes a more lightweight approach, Errbot ships with everything you need to build a customized bot safely.
Errbot includes support for XMPP, IRC, Slack, Hipchat, and Telegram services natively. It lists support for 10 other services through community-supplied backends.
#### Errbot pros
**Good docs:** Errbot's docs are mature and easy to use.
**Dynamic plugin architecture:** Errbot allows you to securely install, uninstall, update, enable, and disable plugins by chatting with the bot. This makes development and adding features easy. For the security conscious, this can all be locked down thanks to Errbot's granular permission system.
Errbot uses your plugin docstrings to generate documentation for available commands when someone types `!help`, which makes it easier to know what each command does.
**Built-in administration and security:** Errbot allows you to restrict lists of users who have administrative rights and even has fine-grained access controls. For example, you can restrict which commands may be called by specific users and/or specific rooms.
**Extensive plugin framework:** Errbot supports hooks, callbacks, subcommands, webhooks, polling, and many [more features][18]. If those aren't enough, you can even write [Dynamic plugins][19]. This feature is useful if you want to enable chat commands based on what commands are available on a remote server.
**Ships with a testing framework:** Errbot supports [pytest][20] and ships with some useful utilities that make testing your plugins straightforward. Its "[testing your plugins][21]" docs are well thought out and provide enough to get started.
#### Possible Errbot concerns
**Initial !:** By default, Errbot commands are issued starting with an exclamation mark (`!help` and `!hello`). Some people may like this, but others may find it annoying. Thankfully, this is easy to turn off.
**Plugin metadata:** At first, Errbot's [Hello World][22] plugin example seems easy to use. However, I couldn't get my plugin to load until I read further into the tutorial and discovered that I also needed a `.plug` file, a file Errbot uses to load plugins. This is a pretty minor nitpick, but it wasn't obvious to me until I dug further into the docs.
#### Example usage
`hello.py`
```
import random
from errbot import BotPlugin, botcmd
class Hello(BotPlugin):
    @botcmd
    def hello(self, msg, args):
        text = random.choice(["Hi {}", "Hello {}", "Hey {}"]).format(msg.frm)
        return text
```
`hello.plug`
```
[Core]
Name = Hello
Module = hello
[Python]
Version = 2+
[Documentation]
Description = Example "Hello" plugin
```
Have you used Errbot or Opsdroid? If so, please leave a comment with your impressions on these tools.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/python-chatops-libraries-opsdroid-and-errbot
作者:[Jeff Triplett][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/laceynwilliams
[1]:https://opensource.com/users/laceynwilliams
[2]:https://opsdroid.github.io/
[3]:https://github.com/opsdroid/skill-seen
[4]:https://github.com/opsdroid/skill-weather
[5]:https://opsdroid.readthedocs.io/en/stable/#docker
[6]:https://api.slack.com/bot-users
[7]:https://opsdroid.readthedocs.io/en/stable/extending/connectors/
[8]:https://opsdroid.readthedocs.io/en/stable/tutorials/introduction/
[9]:https://opsdroid.readthedocs.io/en/stable/extending/skills/
[10]:https://opsdroid.readthedocs.io/en/stable/extending/databases/
[11]:https://opsdroid.readthedocs.io/en/stable/matchers/overview/
[12]:https://opsdroid.readthedocs.io/en/stable/matchers/dialogflow/
[13]:https://opsdroid.readthedocs.io/en/stable/matchers/luis.ai/
[14]:https://opsdroid.readthedocs.io/en/stable/matchers/recast.ai/
[15]:https://opsdroid.readthedocs.io/en/stable/matchers/wit.ai/
[16]:https://opsdroid.readthedocs.io/en/stable/contributing/
[17]:http://errbot.io/en/latest/
[18]:http://errbot.io/en/latest/features.html#extensive-plugin-framework
[19]:http://errbot.io/en/latest/user_guide/plugin_development/dynaplugs.html
[20]:http://pytest.org/
[21]:http://errbot.io/en/latest/user_guide/plugin_development/testing.html
[22]:http://errbot.io/en/latest/index.html#simple-to-build-upon

View File

@ -0,0 +1,200 @@
Linux 中的 5 个 SSH 别名例子
======
[![][1]][1]
作为一个 Linux 用户,我们常用 [ssh 命令][2] 来登入远程机器。`ssh` 命令你用得越多,花在键入那些重要命令上的时间也就越多。我们可以用[定义在你的 .bashrc 文件里的别名][3]或 shell 函数来大幅缩减花在命令行界面CLI上的时间但这不是最佳解决之道最佳办法是在 ssh 配置文件中使用 **SSH 别名**。
这里是我们能把 ssh 命令用得更好的几个例子。
通过 ssh 连接到 AWS译注Amazon Web Services亚马逊公司旗下云计算服务平台实例是一件痛苦的事。每次都完整键入以下命令纯粹是浪费时间。
```
ssh -p 3000 -i /home/surendra/mysshkey.pem ec2-user@ec2-54-20-184-202.us-west-2.compute.amazonaws.com
```
缩短到
```
ssh aws1
```
调试时连接到系统。
```
ssh -vvv the_good_user@red1.taggle.abc.com.au
```
缩短到
```
ssh xyz
```
在本篇中,我们将看到如何不使用 bash 别名或函数来缩短 ssh 命令。ssh 别名的主要优点是所有的 ssh 命令快捷方式都存储在同一个文件中,易于维护。另一个优点是,**ssh 和 scp 这类命令** 可以使用相同的别名。
在我们进入实际配置之前,我们应该知道 /etc/ssh/ssh_config、/etc/ssh/sshd_config 和 ~/.ssh/config 文件三者的区别。以下是对这些文件的解释。
## /etc/ssh/ssh_config 和 ~/.ssh/config 间的区别
系统级别的 SSH 配置项存放在 /etc/ssh/ssh_config而用户级别的 ssh 配置项存放在 ~/.ssh/config 文件中。
## /etc/ssh/ssh_config 和 /etc/ssh/sshd_config 间的区别
系统级别的 SSH 配置项是在 /etc/ssh/ssh_config 文件中,而系统级别的 SSH 服务端配置项存放在 /etc/ssh/sshd_config 文件。
## 在 ~/.ssh/config 文件里配置项的语法
~/.ssh/config 文件内容的语法。
```
配置项 值
配置项 值1 值2
```
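这种语法可以用一小段脚本来演示:下面的示例把一个 Host 配置块写入一个临时文件(而不是真实的 `~/.ssh/config`,以免破坏你现有的配置);其中的别名、用户和 IP 都只是示意值。

```shell
# 把一个 Host 块追加到一个临时配置文件中(仅作演示)
conf=$(mktemp)
cat >> "$conf" <<'EOF'
Host tlj
    User root
    HostName 18.197.176.13
    Port 22
EOF

# 确认配置块已经写入grep 应当恰好匹配到一行
count=$(grep -c '^Host tlj$' "$conf")
echo "$count"
```

实际使用时,把 `$conf` 换成 `~/.ssh/config` 即可;该文件如果不存在可以直接创建,权限建议设为 600。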
**例 1** 创建主机 www.linuxnix.com 的 SSH 别名
编辑 ~/.ssh/config 文件写入以下内容
```
Host tlj
User root
HostName 18.197.176.13
port 22
```
保存此文件
以上 ssh 别名用了
1. **tlj 作为一个别名的名称**
2. **root 作为将要登入的用户**
3. **18.197.176.13 作为主机的 IP 地址**
4. **22 作为访问 SSH 服务的端口。**
输出:
```
sanne@Surendras-MacBook-Pro:~ > ssh tlj
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Sat Oct 14 01:00:43 2017 from 20.244.25.231
root@linuxnix:~# exit
logout
Connection to 18.197.176.13 closed.
```
**例 2** 不用密码用 ssh 密钥登到系统要用 **IdentityFile**
例:
```
Host aws
User ec2-user
HostName ec2-54-200-184-202.us-west-2.compute.amazonaws.com
IdentityFile ~/Downloads/surendra.pem
port 22
```
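顺带一提ssh 对私钥文件的权限要求很严格:如果密钥文件对其他用户可读,客户端会拒绝使用它。下面的小例子演示如何把权限收紧到 600仅作演示这里用临时文件代替真实密钥实际应对 `~/Downloads/surendra.pem` 这类文件执行;`stat -c` 是 GNU 版本的用法):

```shell
# 用一个临时文件模拟私钥,演示收紧权限
key=$(mktemp)
chmod 600 "$key"
perm=$(stat -c '%a' "$key")   # GNU stat在 macOS 上可改用 stat -f '%Lp'
echo "$perm"                  # 600
```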
**例 3** 对同一主机使用不同的别名。在下例中,我们对同一 IP/主机 18.197.176.13 用了 **tlj, linuxnix, linuxnix.com** 三个别名。
~/.ssh/config 文件内容
```
Host tlj linuxnix linuxnix.com
User root
HostName 18.197.176.13
port 22
```
**输出:**
```
sanne@Surendras-MacBook-Pro:~ > ssh tlj
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Sat Oct 14 01:00:43 2017 from 220.244.205.231
root@linuxnix:~# exit
logout
Connection to 18.197.176.13 closed.
sanne@Surendras-MacBook-Pro:~ > ssh linuxnix.com
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
```
```
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Sun Oct 15 20:31:08 2017 from 1.129.110.13
root@linuxnix:~# exit
logout
Connection to 138.197.176.103 closed.
[6571] sanne@Surendras-MacBook-Pro:~ > ssh linuxnix
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-93-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Sun Oct 15 20:31:20 2017 from 1.129.110.13
root@linuxnix:~# exit
logout
Connection to 18.197.176.13 closed.
```
**例 4** 用相同的 SSH 别名复制文件到远程系统
语法:
```
scp <文件名> <ssh_别名>:<位置>
```
例子:
```
sanne@Surendras-MacBook-Pro:~ > scp abc.txt tlj:/tmp
abc.txt 100% 12KB 11.7KB/s 00:01
sanne@Surendras-MacBook-Pro:~ >
```
若我们已经为某台 ssh 主机设置好了别名,由于 ssh 和 scp 两者的语法和选项几乎相同scp 用起来也轻而易举。
请在下面尝试从本机 scp 一个文件到远程机器。
**例 5** 解决 Linux 中的 SSH 超时问题。默认情况下,如果你不积极使用终端,你的 ssh 登入就会超时。
[SSH 超时问题][5] 是一个更大的痛点,它意味着你在一段时间后不得不重新登入远程机器。我们可以在 ~/.ssh/config 文件里恰当地设置 SSH 超时时间让你的会话始终保持激活。我们将用两个保持会话存活的 SSH 选项来实现这一目的:一个是 ServerAliveInterval客户端向服务器发送保活探测的间隔秒数另一个是 ServerAliveCountMax保活探测在得不到服务器响应时的最大发送次数超过该次数客户端将断开连接。
```
ServerAliveInterval A
ServerAliveCountMax B
```
**例:**
```
Host tlj linuxnix linuxnix.com
User root
HostName 18.197.176.13
port 22
ServerAliveInterval 60
ServerAliveCountMax 30
```
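上面两个参数组合起来的效果是:客户端每隔 ServerAliveInterval 秒发送一次保活探测,连续 ServerAliveCountMax 次得不到响应才断开。因此在服务器完全无响应的情况下,连接最多还能维持两者的乘积那么多秒,可以简单验证一下:

```shell
# ServerAliveInterval 60ServerAliveCountMax 30
interval=60
count=30
total=$((interval * count))
echo "服务器无响应后,连接最多还会维持 ${total} 秒"   # 1800 秒,即 30 分钟
```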
在下篇中,我们将会看到其他一些有趣的用法。请继续关注 linuxnix.com。
--------------------------------------------------------------------------------
via: https://www.linuxnix.com/5-ssh-alias-examples-using-ssh-config-file/
作者:[Surendra Anne;Max Ntshinga;Otto Adelfang;Uchechukwu Okeke][a]
译者:[ch-cn](https://github.com/ch-cn)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxnix.com
[1]:https://www.linuxnix.com/wp-content/uploads/2017/10/SSH-alias-1.png
[2]:https://www.linuxnix.com/ssh-access-remote-linux-server/
[3]:https://www.linuxnix.com/linux-alias-command-explained-with-examples/
[5]:https://www.linuxnix.com/how-to-auto-logout/

View File

@ -1,15 +1,15 @@
How to use KVM cloud images on Ubuntu Linux
======
如何在 Ubuntu Linux 上使用 KVM 云镜像
=====
Kernel-based Virtual Machine (KVM) is a virtualization module for the Linux kernel that turns it into a hypervisor. You can create an Ubuntu cloud image with KVM from the command line using Ubuntu virtualisation front-end for libvirt and KVM.
基于内核的虚拟机KVM Linux 内核的虚拟化模块可将内核转变为虚拟机管理程序hypervisor。你可以借助 Ubuntu 为 libvirt 和 KVM 提供的虚拟化前端,在命令行中用 KVM 创建 Ubuntu 云镜像。
How do I download and use a cloud image with kvm running on an Ubuntu Linux server? How do I create create a virtual machine without the need of a complete installation on an Ubuntu Linux 16.04 LTS server?Kernel-based Virtual Machine (KVM) is a virtualization module for the Linux kernel that turns it into a hypervisor. You can create an Ubuntu cloud image with KVM from the command line using Ubuntu virtualisation front-end for libvirt and KVM.
如何下载并在运行于 Ubuntu Linux 服务器上的 KVM 中使用云镜像?如何在 Ubuntu Linux 16.04 LTS 服务器上不经过完整安装就创建一台虚拟机基于内核的虚拟机KVM Linux 内核的虚拟化模块,可将内核转变为虚拟机管理程序。你可以借助 Ubuntu 为 libvirt 和 KVM 提供的虚拟化前端,在命令行中用 KVM 创建 Ubuntu 云镜像。
This quick tutorial shows to install and use uvtool that provides a unified and integrated VM front-end to Ubuntu cloud image downloads, libvirt, and cloud-init.
这个快速教程展示了如何安装和使用 uvtool它为 Ubuntu 云镜像下载、libvirt 和 cloud-init 提供了统一集成的虚拟机前端。
### Step 1 - Install KVM
### 步骤 1 - 安装 KVM
You must have kvm installed and configured. Use the [apt command][1]/[apt-get command][2] as follows:
你必须安装并配置 KVM。使用 [apt 命令][1]/[apt-get 命令][2],如下所示:
```
$ sudo apt install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker
$ kvm-ok
@ -18,15 +18,17 @@ $ sudo vi /etc/network/interfaces
$ sudo systemctl restart networking
$ sudo brctl show
```
See "[How to install KVM on Ubuntu 16.04 LTS Headless Server][4]" for more info.
### Step 2 - Install uvtool
参阅[如何在 Ubuntu 16.04 LTS Headless 服务器上安装 KVM][4] 以获得更多信息。译注Headless 服务器是指没有本地接口的计算设备,专用于向其他计算机及其用户提供服务。)
Type the following [apt command][1]/[apt-get command][2]:
### 步骤 2 - 安装 uvtool
键入以下 [apt 命令][1]/[apt-get 命令][2]
```
$ sudo apt install uvtool
```
Sample outputs:
示例输出:
```
[sudo] password for vivek:
Reading package lists... Done
@ -96,47 +98,47 @@ Processing triggers for man-db (2.7.6.1-2) ...
Setting up uvtool-libvirt (0~git122-0ubuntu1) ...
```
### 步骤 3 - 下载 Ubuntu 云镜像
### Step 3 - Download the Ubuntu Cloud image
You need to use the uvt-simplestreams-libvirt command. It maintains a libvirt volume storage pool as a local mirror of a subset of images available from a simplestreams source, such as Ubuntu cloud images. To update uvtool's libvirt volume storage pool with all current amd64 images, run:
你需要使用 uvt-simplestreams-libvirt 命令。它维护一个 libvirt 卷存储池,作为 simplestreams 源(例如 Ubuntu 云镜像)中部分可用镜像的本地镜像。要用当前所有 amd64 镜像更新 uvtool 的 libvirt 卷存储池,运行:
`$ uvt-simplestreams-libvirt sync arch=amd64`
To just update/grab Ubuntu 16.04 LTS (xenial/amd64) image run:
要更新/获取 Ubuntu 16.04 LTS (xenial/amd64) 镜像,运行:
`$ uvt-simplestreams-libvirt --verbose sync release=xenial arch=amd64`
Sample outputs:
示例输出:
```
Adding: com.ubuntu.cloud:server:16.04:amd64 20171121.1
```
Pass the query option to queries the local mirror:
通过 query 选项查询本地镜像:
`$ uvt-simplestreams-libvirt query`
Sample outputs:
示例输出:
```
release=xenial arch=amd64 label=release (20171121.1)
```
Now, I have an image for Ubuntu xenial and I create the VM.
现在,我有了一个 Ubuntu xenial 的镜像,接下来我会创建虚拟机。
### Step 4 - Create the SSH keys
### 步骤 4 - 创建 SSH 密钥
You need ssh keys for login into KVM VMs. Use the ssh-keygen command to create a new one if you do not have any keys at all.
你需要使用 SSH 密钥才能登录到 KVM 虚拟机。如果你根本没有任何密钥,请使用 ssh-keygen 命令创建一个新的密钥。
`$ ssh-keygen`
See "[How To Setup SSH Keys on a Linux / Unix System][5]" and "[Linux / UNIX: Generate SSH Keys][6]" for more info.
参阅“[如何在 Linux / Unix 系统上设置 SSH 密钥][5]” 和 “[Linux / UNIX: 生成 SSH 密钥][6]” 以获取更多信息。
### Step 5 - Create the VM
### 步骤 5 - 创建 VM
It is time to create the VM named vm1 i.e. create an Ubuntu Linux 16.04 LTS VM:
是时候创建虚拟机了,它叫 vm1即创建一个 Ubuntu Linux 16.04 LTS 虚拟机:
`$ uvt-kvm create vm1`
By default vm1 created using the following characteristics:
默认情况下 vm1 使用以下配置创建:
1. 内存512M
2. 磁盘大小8GiB
3. CPU1 个 vCPU 核心
To control ram, disk, cpu, and other characteristics use the following syntax:
要控制内存、磁盘、CPU 和其他配置,使用以下语法:
```
$ uvt-kvm create vm1 \
--memory MEMORY \
--cpu CPU \
--disk DISK \
@ -145,70 +147,72 @@ To control ram, disk, cpu, and other characteristics use the following syntax:
--packages PACKAGES1, PACKAGES2, .. \
--run-script-once RUN_SCRIPT_ONCE \
--password PASSWORD
```
1. **\--password PASSWORD** : Set the password for the ubuntu user and allow login using the ubuntu user (not recommended use ssh keys).
2. **\--run-script-once RUN_SCRIPT_ONCE** : Run RUN_SCRIPT_ONCE script as root on the VM the first time it is booted, but never again. Give full path here. This is useful to run custom task on VM such as setting up security or other stuff.
3. **\--packages PACKAGES1, PACKAGES2, ..** : Install the comma-separated packages on first boot.
其中
1. **\--password PASSWORD** : 设置 ubuntu 用户的密码,并允许使用 ubuntu 用户登录(不推荐,建议使用 ssh 密钥)。
2. **\--run-script-once RUN_SCRIPT_ONCE** : 第一次启动时,在虚拟机上以 root 身份运行 RUN_SCRIPT_ONCE 脚本,且只运行这一次。这里需给出完整路径。这对于在虚拟机上运行自定义任务(例如设置安全性或其他内容)非常有用。
3. **\--packages PACKAGES1, PACKAGES2, ..** : 在第一次启动时安装逗号分隔的软件包。
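在脚本里调用 uvt-kvm 时,可以先把这些选项拼成一个命令字符串,便于检查与复用。下面是一个最小示意(只是把命令打印出来,并不会真正创建虚拟机;内存、CPU、磁盘的数值均为假设值

```shell
# 把 uvt-kvm create 的参数拼装成命令字符串(仅演示,不实际执行)
MEMORY=1024   # 单位MiB
CPU=2
DISK=20       # 单位GiB
CMD="uvt-kvm create vm1 --memory $MEMORY --cpu $CPU --disk $DISK"
echo "$CMD"
```

确认打印出来的命令无误后,去掉 echo 直接执行即可。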
To get help, run:
要获取帮助,运行:
```
$ uvt-kvm -h
$ uvt-kvm create -h
```
#### How do I delete my VM?
#### 如何删除虚拟机?
To destroy/delete your VM named vm1, run (please use the following command with care as there would be no confirmation box):
要销毁/删除名为 vm1 的虚拟机,运行(请小心使用以下命令,因为没有确认框):
`$ uvt-kvm destroy vm1`
#### To find out the IP address of the vm1, run:
#### 获取 vm1 的 IP 地址,运行:
`$ uvt-kvm ip vm1`
192.168.122.52
#### To list all VMs run
#### 列出所有虚拟机
`$ uvt-kvm list`
Sample outputs:
示例输出:
```
vm1
freebsd11.1
```
### Step 6 - How to login to the vm named vm1
### 步骤 6 - 如何登录 vm1
The syntax is:
语法是:
`$ uvt-kvm ssh vm1`
Sample outputs:
示例输出:
```
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-101-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
Last login: Thu Dec 7 09:55:06 2017 from 192.168.122.1
```
Another option is to use the regular ssh command from macOS/Linux/Unix/Windows client:
Sample outputs:
另一个选择是从 macOS/Linux/Unix/Windows 客户端使用常规的 ssh 命令:
```
$ ssh ubuntu@192.168.122.52
$ ssh -i ~/.ssh/id_rsa ubuntu@192.168.122.52
```
示例输出:
[![Connect to the running VM using ssh][8]][8]
Once the VM is created you can use the virsh command as usual:
一旦创建了虚拟机,你可以照常使用 virsh 命令:
`$ virsh list`
@ -217,7 +221,7 @@ Once vim created you can use the virsh command as usual:
via: https://www.cyberciti.biz/faq/how-to-use-kvm-cloud-images-on-ubuntu-linux/
作者:[Vivek Gite][a]
译者:[译者ID](https://github.com/译者ID)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,748 +0,0 @@
深入理解 BPF一个阅读清单
============================================================
* [什么是 BPF?][143]
* [深入理解字节码bytecode][144]
* [资源][145]
* [简介][23]
* [关于 BPF][1]
* [关于 XDP][2]
* [关于基于 eBPF 或与 eBPF 相关的其它组件][3]
* [文档][24]
* [关于 BPF][4]
* [关于 tc][5]
* [关于 XDP][6]
* [关于 P4 和 BPF][7]
* [教程][25]
* [示例][26]
* [来自内核的示例][8]
* [来自包 iproute2 的示例][9]
* [来自 bcc 工具集的示例][10]
* [手册页面][11]
* [代码][27]
* [内核中的 BPF 代码][12]
* [XDP 钩子hook代码][13]
* [bcc 中的 BPF 逻辑][14]
* [通过 tc 使用代码去管理 BPF][15]
* [BPF 实用工具][16]
* [其它感兴趣的 chunks][17]
* [LLVM 后端][18]
* [在用户空间中运行][19]
* [提交日志][20]
* [排错][28]
* [编译时的错误][21]
* [加载和运行时的错误][22]
* [更多][29]
_~ [更新于][146] 2017-11-02 ~_
# 什么是 BPF?
BPF是伯克利包过滤器**B**erkeley **P**acket **F**ilter的第一个字母的组合在 1992 年提出最初的设想,它的目的是为了提供一种过滤包的方法,并且要避免这种从内核空间到用户空间的没用的数据包复制行为。它最初是由从用户空间注入到内核的一个简单的字节码构成,它在那个位置利用一个校验器进行检查 —— 以避免内核崩溃或者安全问题 —— 并附加到一个套接字上,接着在每个接收到的包上运行。几年后它被移植到 Linux 上并且应用于一小部分应用程序上例如tcpdump。简化的语言以及存在于内核中的即时编译器JIT使 BPF 成为一个性能卓越的工具。
然后,在 2013 年Alexei Starovoitov 对 BPF 进行彻底地改造,并增加了新的功能,改善了它的性能。这个新版本被命名为 eBPF (意思是 “extended BPF”与此同时将以前的 BPF 变成 cBPF意思是 “classic” BPF。新版本出现了如映射和尾调用这样的新特性并且 JIT 编译器也被重写了。新的语言比 cBPF 更接近于原生机器语言。并且,在内核中创建了新的附加点。
感谢那些新的钩子eBPF 程序才可以被设计用于各种各样的使用案例它分为两个应用领域。其中一个应用领域是内核跟踪和事件监控。BPF 程序可以被附加到探针,而且它与其它跟踪模式相比,有很多的优点(有时也有一些缺点)。
另外一个应用领域是网络编程。除了套接字过滤器外eBPF 程序还可以附加到 tcLinux 流量控制工具)的入站或者出站接口上,以一种很高效的方式去执行各种包处理任务。这种使用方式在这个领域开创了一个新的天地。
并且 eBPF 通过使用为 IO Visor 项目开发的技术,使它的性能进一步得到提升:也为 XDP“eXpress Data Path”添加了新的钩子XDP 是不久前添加到内核中的一种新式快速路径。XDP 与 Linux 栈组合,然后使用 BPF ,使包处理的速度更快。
甚至一些项目,如 P4、Open vSwitch[考虑][155] 或者开始去接洽使用 BPF。其它的一些如 CETH、Cilium则是完全基于它的。BPF 是如此流行,因此,我们可以预计,不久之后,将围绕它有更多工具和项目出现 …
# 深入理解字节码
就像我一样:我的一些工作(包括 [BEBA][156])非常依赖 eBPF并且本站后续的几篇文章也将关注这个主题。这篇文章的基本逻辑是在深入细节之前我希望先以某种方式介绍 BPF。我的意思是比开头一节那些简短概述更进一步的介绍什么是 BPF 映射?什么是尾调用?内部结构是什么样子?等等。但是,网上已经有很多这个主题的介绍了,我也不希望再写一篇重复的 “BPF 介绍”。
毕竟,我花费了很多的时间去阅读和学习关于 BPF 的知识,因此,在这里我们将要做什么呢,我收集了非常多的关于 BPF 的阅读材料:介绍、文档、也有教程或者示例。这里有很多的材料可以去阅读,但是,为了去阅读它,首先要去 _找到_ 它。因此,为了能够帮助更多想去学习和使用 BPF 的人,现在的这篇文章是介绍了一个资源清单。这里有各种阅读材料,它可以帮你深入理解内核字节码的机制。
# 资源
![](https://qmonnet.github.io/whirl-offload/img/icons/pic.svg)
### 简介
这篇文章中下面的链接提供了一个 BPF 的基本的概述,或者,一些与它密切相关的一些主题。如果你对 BPF 非常陌生,你可以在这些介绍文章中挑选出一篇你喜欢的文章去阅读。如果你已经理解了 BPF你可以针对特定的主题去阅读下面是阅读清单。
### 关于 BPF
关于 eBPF 的简介:
* [*全面介绍 eBPF*][193]Matt Flemingon LWN.netDecember 2017
一篇写的很好的,并且易于理解的,介绍 eBPF 子系统组件的概述文章。
* [_利用 BPF 和 XDP 实现可编程的内核网络数据路径_][53]  (Daniel Borkmann, OSSNA17, Los Angeles, September 2017):
快速理解所有的关于 eBPF 和 XDP 的基础概念的许多文章中的一篇(大多数是关于网络处理的)
* [*BSD 包过滤器*][54] (Suchakra Sharma, June 2017): 
一篇非常好的介绍文章,大多数是关于跟踪方面的。
* [_BPF跟踪及更多_][55]  (Brendan Gregg, January 2017):
大多数内容是跟踪使用案例相关的。
* [_Linux BPF 的超强功能_][56]  (Brendan Gregg, March 2016):
第一部分是关于 **火焰图flame graphs** 的使用。
* [_IO Visor_][57]  (Brenden Blanco, SCaLE 14x, January 2016):
介绍了 **IO Visor 项目**
* [_大型机上的 eBPF_][58]  (Michael Holzheu, LinuxCon, Dubin, October 2015)
* [_在 Linux 上新的令人激动的跟踪新产品_][59]  (Elena Zannoni, LinuxCon, Japan, 2015)
* [_BPF — 内核中的虚拟机_][60]  (Alexei Starovoitov, February 2015):
eBPF 的作者写的一篇介绍文章。
* [_扩展 extended BPF_][61]  (Jonathan Corbet, July 2014)
**BPF 内部结构**
* Daniel Borkmann 正在做的一项令人称奇的工作,它用于去展现 eBPF 的 **内部结构**,尤其是他关于 **eBPF 用于 tc** 的几次演讲和论文。
* [_使用 tc 的 cls_bpf 的高级可编程和它的最新更新_][30]  (netdev 1.2, Tokyo, October 2016):
Daniel 介绍了 eBPF 的细节,它使用了隧道和封装、直接包访问、和其它特性。
* [_自 netdev 1.1 以来的 cls_bpf/eBPF 更新_][31]  (netdev 1.2, Tokyo, October 2016, part of [this tc workshop][32])
* [_使用 cls_bpf 实现完全可编程的 tc 分类器_][33]  (netdev 1.1, Sevilla, February 2016)
介绍 eBPF 之后,它提供了许多 BPF 内部机制映射管理、tail 调用、校验器)的见解。对于大多数有志于 BPF 的人来说,这是必读的![全文在这里][34]。
* [_Linux tc 和 eBPF_][35]  (fosdem16, Brussels, Belgium, January 2016)
* [_eBPF 和 XDP 攻略和最新更新_][36]  (fosdem17, Brussels, Belgium, February 2017)
这些介绍可能是理解 eBPF 内部机制设计与实现的最佳文档资源之一。
[***IO Visor 博客***][157] 有一些关于 BPF 感兴趣的技术文章。它们中的一些包含了许多营销讨论。
**内核跟踪**:总结了所有的已有的方法,包括 BPF
* [_邂逅 eBPF 和内核跟踪_][62]  (Viller Hsiao, July 2016):
Kprobes、uprobes、ftrace
* [_Linux 内核跟踪_][63]  (Viller Hsiao, July 2016):
Systemtap、Kernelshark、trace-cmd、LTTng、perf-tool、ftrace、hist-trigger、perf、function tracer、tracepoint、kprobe/uprobe …
关于 **事件跟踪和监视**Brendan Gregg 大量使用了 eBPF并对它的一些使用案例做了非常出色的讲解。如果你正在做一些内核跟踪方面的工作你应该去看一下他的关于 eBPF 和火焰图相关的博客文章。其中的大多数都可以 *[从这篇文章中][158]* 访问,或者浏览他的博客。
介绍 BPF也介绍 **Linux 网络的一般概念**
* [_Linux 网络详解_][64]  (Thomas Graf, LinuxCon, Toronto, August 2016)
* [_内核网络攻略_][65]  (Thomas Graf, LinuxCon, Seattle, August 2015)
**硬件 offload**译者注offload 是指原本由软件来处理的一些操作交由硬件来完成,以提升吞吐量,降低 CPU 负荷。):
* eBPF 与 tc 或者 XDP 一起支持硬件 offload开始于 Linux 内核版本 4.9,并且由 Netronome 提出的。这里是关于这个特性的介绍:[*eBPF/XDP hardware offload to SmartNICs*][147] (Jakub Kicinski 和 Nic Viljoen, netdev 1.2, Tokyo, October 2016)
* 年后出现的更新版:
[*综合的关于 XDP offload 处理边界的案例*][194](Jakub Kicinski 和 Nic Viljoennetdev 2.2 SeoulNovember 2017)
* 我现在有一个简短的,但是在 2018 年的 FOSDEM 上有一个更新版:
[*XDP 硬件 Offload 的挑战*][195](Quentin MonnetFOSDEM 2018BrusselsFebruary 2018)
关于 **cBPF**
* [_BSD 包过滤器一个用户级包捕获的新架构_][66] (Steven McCanne 和 Van Jacobson, 1992)
它是关于经典classicBPF 的最早的论文。
* *[关于 BPF 的 FreeBSD 手册][67]* 是理解 cBPF 程序的可用资源。
* 关于 cBPFDaniel Borkmann 实现的至少两个演示,[*一是,在 2013 年 mmap 中BPF 和 Netsniff-NG*][68],以及 *[在 2014 中关于 tc 和 cls_bpf 的的一个非常完整的演示][69]*
* 在 Cloudflare 的博客上Marek Majkowski 介绍了他的 *[BPF 字节码与 **iptables** 的 `xt_bpf` 模块一起的应用][70]*。值得一提的是,从 Linux 内核 4.10 开始eBPF 也是通过这个模块支持的。(虽然,我并不知道关于这件事的任何讨论或者文章)
* [*Libpcap 过滤器语法*][71]
### 关于 XDP
* 在 IO Visor 网站上的 *[XDP 概述][72]*
* [_eXpress Data Path (XDP)_][73]  (Tom Herbert, Alexei Starovoitov, March 2016):
这是第一个关于 XDP 的演示。
* [_BoF - BPF 能为你做什么_][74]  (Brenden Blanco, LinuxCon, Toronto, August 2016)。
* [_eXpress Data Path_][148]  (Brenden Blanco, Linux Meetup at Santa Clara, July 2016)
包含一些(有点营销的意思?)**benchmark 结果**!使用一个单核心:
* ip 路由丢弃: ~3.6 百万包每秒Mpps
* 使用 BPFtc使用 clsact qdisc丢弃 ~4.2 Mpps
* 使用 BPFXDP 丢弃20 Mpps CPU 利用率 < 10%
* XDP 重写转发在端口上它接收到的包10 Mpps
(测试是用 mlx4 驱动执行的)。
* Jesper Dangaard Brouer 有几个非常好的幻灯片,它可以从本质上去理解 XDP 的内部结构。
* [_XDP eXpress Data Path, Intro and future use-cases_][37]  (September 2016):
_“Linux 内核与 DPDK 的斗争”_ 。**未来的计划**(在写这篇文章时)它用 XDP 和 DPDK 进行比较。
* [_网络性能研讨_][38]  (netdev 1.2, Tokyo, October 2016):
关于 XDP 内部结构和预期演化的附加提示。
* [_XDP eXpress Data Path, 可用于 DDoS 防护_][39]  (OpenSourceDays, March 2017):
包含了关于 XDP 的详细情况和使用案例、用于 **benchmarking****benchmark 结果**、和 **代码片断**,以及使用 eBPF/XDP基于一个 IP 黑名单模式)的用于 **基本的 DDoS 防护**
* [_内存 vs. 网络激发和修复内存瓶颈_][40]  (LSF Memory Management Summit, March 2017):
面对 XDP 开发者提出关于当前 **内存问题** 的许多细节。不要从这一个开始,如果你已经理解了 XDP并且想去了解它在页面分配方面的真实工作方式这是一个非常有用的资源。
* [_XDP 能为其它人做什么_][41]netdev 2.1, Montreal, April 2017with Andy Gospodarek
对于普通人,怎么去开始使用 eBPF 和 XDP。这个演示也由 Julia Evans 在 [她的博客][42] 上做了总结。
Jesper 也创建了并且尝试去扩展有关 eBPF 和 XDP 的一些文档,查看 [相关节][75]。
* [_XDP 研讨 — 介绍、体验、和未来发展_][76]Tom Herbert, netdev 1.2, Tokyo, October 2016) — 在这篇文章中,只有视频可用,我不知道是否有幻灯片。
* [_在 Linux 上进行高速包过滤_][149]  (Gilberto Bertin, DEF CON 25, Las Vegas, July 2017) — 在 Linux 上的最先进的包过滤的介绍,面向 DDoS 的保护、讨论了关于在内核中进行包处理、内核旁通、XDP 和 eBPF。
### 关于基于 eBPF 或与 eBPF 相关的其它组件
* [_在边界上的 P4_][77]  (John Fastabend, May 2016):
提出了使用 **P4**,一个包处理的描述语言,使用 BPF 去创建一个高性能的可编程交换机。
* 如果你喜欢音频的介绍,这里有一个相关的 [OvS Orbit 片断(#11),叫做 _在边缘上的 **P4**_][78],日期是 2016 年 8 月。OvS Orbit 是对 Ben Pfaff 的访谈,它是 Open vSwitch 的其中一个核心维护者。在这个场景中John Fastabend 是被访谈者。
* [_P4, EBPF 和 Linux TC Offload_][79]  (Dinan Gunawardena and Jakub Kicinski, August 2016):
另一个演示 **P4** 的,使用一些相关的元素在 Netronome 的 **NFP**(网络流处理器)架构上去实现 eBPF 硬件 offload。
* **Cilium** 是一个由 Cisco 最先发起的技术,它依赖 BPF 和 XDP 去提供 “在容器中基于 eBPF 程序,在运行中生成的强制实施的快速的内核中的网络和安全策略”。[这个项目的代码][150] 在 GitHub 上可以访问到。Thomas Graf 对这个主题做了很多的演示:
* [_Cilium: 对容器利用 BPF & XDP 实现网络 & 安全_][43]也特别展示了一个负载均衡的使用案例Linux Plumbers conference, Santa Fe, November 2016
* [_Cilium: 对容器利用 BPF & XDP 实现网络 & 安全_][44] Docker Distributed Systems Summit, October 2016 — [video][45]
* [_Cilium: 使用 BPF 和 XDP 的快速 IPv6 容器网络_][46] LinuxCon, Toronto, August 2016
* [_Cilium: 为窗口使用 BPF & XDP_][47] fosdem17, Brussels, Belgium, February 2017
在不同的演示中重复了大量的内容如果有疑问就选最近的一个。Daniel Borkmann 作为 Google 开源博客的特邀作者,也写了 [Cilium 简介][80]。
* 这里也有一个关于 **Cilium** 的播客节目:一个 *[OvS Orbit episode (#4)][81]*,它是 Ben Pfaff 访谈 Thomas Graf 2016 年 5 月),和 *[另外一个 Ivan Pepelnjak 的播客][82]*,仍然是 Thomas Graf 的与 eBPF、P4、XDP 和 Cilium 2016 年 10 月)。
* **Open vSwitch** (OvS),它是 **Open Virtual Network**OVN一个开源的网络虚拟化解决方案相关的项目正在考虑在不同的层次上使用 eBPF它已经实现了几个概念验证原型
* [*使用 eBPF 的 Offloading OVS 流处理器*][48] (William (Cheng-Chun) Tu, OvS conference, San Jose, November 2016)
* *[将 OVN 的灵活性与 IOVisor 的高效率相结合][49]* (Fulvio Risso, Matteo Bertrone and Mauricio Vasquez Bernal, OvS conference, San Jose, November 2016)
据我所知,这些 eBPF 的使用案例看上去仅处于提议阶段(并没有合并到 OvS 的主分支中),但是,看它带来了什么将是非常有趣的事情。
* XDP 的设计对分布式拒绝访问DDoS攻击是非常有用的。越来越多的演示都关注于它。例如从 Cloudflare 中的人们的讨论([_XDP in practice: integrating XDP in our DDoS mitigation pipeline_][83])或者从 Facebook 上([_Droplet: DDoS countermeasures powered by BPF + XDP_][84])在 netdev 2.1 会议上,在 Montreal、Canada、在 2017 年 4 月,都存在这样的很多使用案例。
* Kubernetes 可以用很多种方式与 eBPF 交互。这里有一篇关于 *[在 Kubernetes 中使用 eBPF][196]* 的文章,它解释了现有的产品 (CiliumWeave Scope) 如何支持 eBPF 与 Kubernetes 一起工作并且进一步描述了在容器部署环境中eBPF 感兴趣的交互内容是什么。
* [_CETH for XDP_][85] Yan Chan 和 Yunsong Lu、Linux Meetup、Santa Clara、July 2016
**CETH**,是由 Mellanox 发起的为实现更快的网络 I/O 而主张的通用以太网驱动程序架构。
* [***VALE 交换机***][86],另一个虚拟交换机,它可以与 netmap 框架结合,有 *[一个 BPF 扩展模块][87]*
* **Suricata**,一个开源的入侵检测系统,它的捕获旁通特性 [*似乎是依赖于 eBPF 组件*][88]
[*Suricate 文档的 eBPF 和 XDP 部分*][197]
[*SEPTun-Mark-II*][198] (Suricata Extreme 性能调优指南 — Mark II)Published by Michal Purzynski 和 Peter Manev in March 2018
[*介绍这个特性的博客文章*][199] Published by Éric Leblond in September 2016
[_The adventures of a Suricate in eBPF land_][89]  (Éric Leblond, netdev 1.2, Tokyo, October 2016)
[_eBPF and XDP seen from the eyes of a meerkat_][90]  (Éric Leblond, Kernel Recipes, Paris, September 2017)
当使用原生驱动的 XDP 时,这个项目要求实现非常高的性能。
* [*InKeV: 对于 DCN 的内核中分布式网络虚拟化*][91] (Z. Ahmed, M. H. Alizai and A. A. Syed, SIGCOMM, August 2016):
**InKeV** 是一个基于 eBPF 的虚拟网络、目标数据中心网络的数据路径架构。它最初由 PLUMgrid 提出,并且声称相比基于 OvS 的 OpenStack 解决方案可以获得更好的性能。
* [_**gobpf** - 从 Go 中利用 eBPF_][92] Michael Schubert, fosdem17, Brussels, Belgium, February 2017
“一个从 Go 中的库,可以去创建、加载和使用 eBPF 程序”
* [***ply***][93] 是为 Linux 实现的一个小的但是非常灵活的开源动态 **跟踪器**,它的一些特性非常类似于 bcc 工具,是受 awk 和 dtrace 启发,但使用一个更简单的语言。它是由 Tobias Waldekranz 写的。
* 如果你读过我以前的文章,你可能对我在这篇文章中的讨论感兴趣,[使用 eBPF 实现 OpenState 接口][151],关于包状态处理,在 fosdem17 中。
![](https://qmonnet.github.io/whirl-offload/img/icons/book.svg)
### 文档
一旦你对 BPF 是做什么的有一个大体的理解。你可以抛开一般的演示而深入到文档中了。下面是 BPF 的规范和功能的最全面的文档,按你的需要挑一个开始阅读吧!
### 关于 BPF
* **BPF 的规范**(包含 classic 和 extended 版本)可以在 Linux 内核的文档中,和特定的文件 *[linux/Documentation/networking/filter.txt][94]* 中找到。BPF 使用以及它的内部结构也被记录在那里。此外,当加载 BPF 代码失败时,在这里可以找到 **被校验器抛出的错误信息**,这有助于你排除不明确的错误信息。
* 此外,在内核树中,在 eBPF 那里有一个关于 **常见的问 & 答** 的文档,它在文件 [*linux/Documentation/bpf/bpf_design_QA.txt*][95] 中。
* … 但是,内核文档是非常难懂的,并且非常不容易阅读。如果你只是去查找一个简单的 eBPF 语言的描述,可以去 IO Visor 的 GitHub 仓库,那儿有 [***它的概括性描述***][96]。
* 顺便说一下IO Visor 项目收集了许多 **关于 BPF 的资源**。大部分,分别在 bcc 仓库的 *[文档目录][97]* 中,和 *[bpf-docs 仓库][98]* 的整个内容中,它们都在 GitHub 上。注意,这个非常好的 *[BPF **参考指南**][99]* 包含一个详细的 BPF C 和 bcc Python 的 helper 的描述。
* 想深入到 BPF那里有一些必要的 **Linux 手册页**。第一个是 [*`bpf(2)` man 页面*][100] 关于 `bpf()` **系统调用**,它用于从用户空间去管理 BPF 程序和映射。它也包含一个 BPF 高级特性的描述(程序类型、映射、等等)。第二个是主要去处理希望去附加到 tc 接口的 BPF 程序:它是 [*`tc-bpf(8)` man 页面*][101],它是 **使用 BPF 和 tc** 的一个参考,并且包含一些示例命令和参考代码。
* Jesper Dangaard Brouer 发起了一个 **更新 eBPF Linux 文档** 的尝试,包含 **不同的映射**。[*他有一个草案*][102],欢迎去贡献。一旦完成,这个文档将被合并进 man 页面并且进入到内核文档。
* Cilium 项目也有一个非常好的 [***BPF 和 XDP 参考指南***][103],它是由核心的 eBPF 开发者写的,它被证明对于 eBPF 开发者是极其有用的。
* David Miller 在 *[xdp-newbies][152]* 邮件列表中发了几封关于 eBPF/XDP 内部结构的富有启发性的电子邮件。我找不到一个单独的地方收集它们的链接,因此,这里是一个列表:
* [*bpf.h 和你 …*][50]
* [*Contextually speaking…*][51]
* [*BPF 校验器概述*][52]
最后一个可能是目前来说关于校验器的最佳的总结。
* Ferris Ellis 发布的 *[一个关于 **eBPF 的系列博客文章**][104]*。作为我写的这个短文,第一篇文章是关于 eBPF 的历史背景和未来期望。接下来的文章将更多的是技术方面,和前景展望。
* [***一个每个内核版本的 BPF 特性列表***][153] 在 bcc 仓库中可以找到。如果你想去知道运行一个给定的特性所要求的最小的内核版本,它是非常有用的。我贡献和添加了链接到提交中,它介绍了每个特性,因此,你也可以从那里很容易地去访问提交历史。
### 关于 tc
当为了网络目的使用 BPF 与 tc 进行结合时Linux 流量控制(**t**raffic **c**ontrol工具它可用于去采集关于 tc 的可用功能的信息。这里有几个关于它的资源。
* 找到关于 **Linux 上 QoS** 的简单教程是很困难的。这里有两个链接,它们很长而且很难懂,但是,如果你可以抽时间去阅读它,你将学习到几乎关于 tc 的任何东西(虽然,关于 BPF 它什么也没有)。它们在这里:[_怎么去实现流量控制_  (Martin A. Brown, 2006)][105],和 [_怎么去实现 Linux 的高级路由 & 流量控制_  (“LARTC”) (Bert Hubert & al., 2002)][106]。
* 在你的系统上的 **tc 手册页面** 并不是最新日期的,因为它们中的几个最近已经增加了。如果你没有找到关于特定的队列规则、分类或者过滤器的文档,它可能在最新的 *[tc 组件的手册页面][107]* 中。
* 一些额外的材料可以在 iproute2 包自已的文件中找到:这个包中有 [*一些文档*][108],包括一些文件,它可以帮你去理解 *[**tc 的 action** 的功能][109]*。
**注意:** 这些文件在 2017 年 10 月 已经从 iproute2 中删除,然而,从 Git 历史中却一直可用。
* 非精确资料:这里是 [*一个关于 tc 的几个特性的研讨会*][110]包含过滤、BPF、tc offload、… 由 Jamal Hadi Salim 在 netdev 1.2 会议上组织的October 2016
* 额外信息:如果你使用 `tc` 较多,这里有一些好消息:我为这个工具 *[写了一个 bash 补全功能][111]*,并且它已经随 iproute2 包进入内核版本 4.6 和更高版本中!
### 关于 XDP
* 对于 XDP 的一些 *[进展中的文档(包含规范)][112]* 已经由 Jesper Dangaard Brouer 发起并且计划成为一项协作的工作。它仍在推进中2016 年 9 月因此预计它还会变化或许还会换个位置Jesper *[欢迎大家去贡献][113]*,如果你想去改善它)。
* 自来 Cilium 项目的 *[BPF 和 XDP 参考指南][114]* … 好吧,这个名字已经说明了一切。
### 关于 P4 和 BPF
*[P4][159]* 是一种用于指定交换机行为的语言。它可以被编译到多种目标硬件或软件上。因此,你可能已经猜到,这些目标之一就是 BPF …… 不过只是部分支持:一些 P4 特性无法转化到 BPF 中;类似地,也有一些 BPF 能做的事情无法用 P4 表达。不过,**P4 与 BPF 结合使用** 的相关文档,[*就藏在 bcc 仓库中*][160]。这在 P4_16 版本中有所改变p4c 参考编译器包含了 *[一个 eBPF 后端][161]*。
![](https://qmonnet.github.io/whirl-offload/img/icons/flask.svg)
### 教程
Brendan Gregg 为想 **使用 bcc 工具** 去跟踪和监视内核事件的人制作了一个非常好的 **教程**。[*第一个教程是关于如何使用 bcc 工具*][162],它总共有十一步,教你理解怎么去使用已有的工具;而 [*针对 **Python 开发者** 的另一个教程*][163] 专注于开发新工具,总共有十七节 “课程”。
Sasha Goldshtein 也有一些 [_**Linux 跟踪研究材料**_][164] 涉及到使用几个 BPF 去进行跟踪。
Jean-Tiare Le Bigot 的文章提供了一个详细的(且有指导意义的)示例:[*使用 perf 和 eBPF 为 ping 请求和回复设置一个低级的跟踪器*][165]。
对于网络相关的 eBPF 使用案例也有几个教程。有一些有趣的文档,包括一个 _eBPF Offload 入门指南_,发布在 Netronome 运营的 *[Open NFP][166]* 平台上。另外,Jesper 的演讲 [_XDP 能为其它人做什么_][167],可能是 XDP 入门的最好方法之一。
![](https://qmonnet.github.io/whirl-offload/img/icons/gears.svg)
### 示例
示例是非常有用的,可以看看它们实际是如何工作的。但是 BPF 程序示例分散在几个项目中,因此,我列出了我所知道的所有示例。这些示例并不都使用相同的 helper(比如,tc 和 bcc 各自有一套 helper,以方便用 C 语言写 BPF 程序)。
### 来自内核的示例
主要的程序类型都包含在内核的示例中:过滤器绑定到套接字或者到 tc 接口、事件跟踪/监视、甚至是 XDP。你可以在 *[linux/samples/bpf/][168]* 目录中找到这些示例。
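这些内核示例中的 XDP 程序,其主体大致是下面这种形状。这是一个假设性的极简草图,仅为示意:真实的 XDP 程序还需要 `SEC("xdp")` 之类的节标注,并用 `clang -target bpf` 编译成 BPF 字节码,而不是当作普通 C 代码编译:

```c
/* 极简 XDP 程序草图:放行所有数据包(假设性示意代码)。 */
#include <linux/bpf.h>   /* 提供 struct xdp_md 和 enum xdp_action */

int xdp_pass_all(struct xdp_md *ctx)
{
    (void)ctx;           /* 这里不检查数据包内容 */
    return XDP_PASS;     /* 告诉内核:继续正常的网络栈处理 */
}
```

返回值(`XDP_PASS`、`XDP_DROP`、`XDP_TX` 等)决定了数据包在驱动层的命运,这正是 XDP 能在网络路径极早期做出决策的原因。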
现在,更多的示例已经作为单元测试被添加到 *[linux/tools/testing/selftests/bpf][200]* 目录下,这里面包含对硬件 offload 的测试或者对于 libbpf 的测试。
Jesper Dangaard Brouer 在他的 *[prototype-kernel][201]* 仓库中也维护了一套专门的示例。这些示例与内核中提供的那些非常类似,但是它们可以在内核基础设施(Makefile 和头文件)之外编译。
也不要忘记去看一看相关的 git 提交历史:介绍某个特性的提交中,往往也包含该特性的详细示例。
### 来自包 iproute2 的示例
iproute2 包也提供了几个示例。它们都很明显地偏向网络编程,因此,这个程序是附加到 tc ingress 或者 egress 接口上。这些示例在 *[iproute2/examples/bpf/][169]* 目录中。
### 来自 bcc 工具集的示例
许多示例都 *[与 bcc 一起提供][170]*
* 一些网络编程的示例在关联的目录下面。它们包括套接字过滤器、tc 过滤器、和一个 XDP 程序。
* `tracing` 目录包含许多 **跟踪编程** 的示例。前面教程中提到的都在那里。这些程序覆盖了很大范围的事件跟踪场景,其中一些是面向生产系统的。注意,在某些 Linux 发行版上(至少是 Debian、Ubuntu、Fedora、Arch Linux),这些程序已经被 *[打包了][115]*,可以通过比如 `# apt install bcc-tools` 这样的命令很 “容易地” 安装。但是在写这篇文章的时候(Arch Linux 除外),要这样做,首先需要添加 IO Visor 的软件包仓库。
* 第三个目录中也有一些 **使用 Lua** 作为另一种 BPF 后端的示例(也就是说,BPF 程序用 Lua 写,而不是用 C 的一个子集,从而前端和后端可以使用相同的语言)。
* 当然,[*bcc 工具*][202] 自身就是 eBPF 程序使用案例的有趣示例。
### 手册页面
虽然使用 bcc 通常是在内核中注入和运行 BPF 程序的最容易的方式,但通过 `tc` 工具把程序附加到 tc 接口也是可以的。因此,如果你打算 **将 BPF 与 tc 一起使用**,你可以在 *[`tc-bpf(8)` 手册页面][171]* 中找到一些调用示例。
![](https://qmonnet.github.io/whirl-offload/img/icons/srcfile.svg)
### 代码
有时候BPF 文档或者示例并不足够多,而且你可能没有其它的方式在你喜欢的文本编辑器(它当然应该是 Vim中去显示代码并去阅读它。或者你可能深入到代码中想去做一个补丁程序或者为机器增加一些新特性。因此这里对有关的文件的几个建议找到你想要的功能取决于你自己
### 在内核中的 BPF 代码
* 文件 *[linux/include/linux/bpf.h][116]* 和与之对应的 *[linux/include/uapi/bpf.h][117]* 包含有关 eBPF 的 **定义**,分别供内核使用和用于与用户空间程序交互的接口。
* 以相同的方式,文件 *[linux/include/linux/filter.h][118]* 和 *[linux/include/uapi/filter.h][119]* 包含与 **运行 BPF 程序** 相关的内容。
* BPF 相关的 **主要代码片段** 在 *[linux/kernel/bpf/][120]* 目录下面。**系统调用所允许的不同操作**,比如程序加载或者映射管理,是在文件 `syscall.c` 中实现的,而 `core.c` 包含 **解释器**。其它文件的命名都很直观:`verifier.c` 包含 **校验器**(不是开玩笑的),`arraymap.c` 包含用于与数组类型的 **映射** 交互的代码,等等。
* 一些 **helper 函数**,以及几个与网络相关(用于 tc、XDP)且用户可用的功能,实现在 [*linux/net/core/filter.c*][121] 中。它也包含将 cBPF 字节码转换为 eBPF 的代码(因为在运行之前,内核中所有的 cBPF 程序都会被转换成 eBPF)。
* 功能和 **helpers** 相关的 **事件跟踪** 都在 *[linux/kernel/trace/bpf_trace.c][203]* 中。
* **JIT 编译器** 在它们各自的架构目录下面比如x86 架构的在 *[linux/arch/x86/net/bpf_jit_comp.c][122]* 中。
* 在 *[linux/net/sched/][123]* 目录下,你可以找到 **tc 的 BPF 组件** 相关的代码,尤其是在文件 `act_bpf.c`(action)和 `cls_bpf.c`(filter)中。
* 我并没有深入研究 BPF 的 **事件跟踪**,因此我并不真正了解这类程序的钩子。在 *[linux/kernel/trace/bpf_trace.c][124]* 中有一些相关内容。如果你对它感兴趣,并且想了解更多,你可以在 Brendan Gregg 的演示或者博客文章上去深入挖掘。
* 我也没有使用过 **seccomp-BPF**。但它的代码在 *[linux/kernel/seccomp.c][125]*,并且可以在 [*linux/tools/testing/selftests/seccomp/seccomp_bpf.c*][126] 中找到一些它的使用示例。
### XDP 钩子代码
**XDP** 程序一旦加载进内核的 BPF 虚拟机,就会由一个 Netlink 命令从用户空间钩入到内核网络路径中。接收这个命令的是文件 *[linux/net/core/dev.c][172]* 中的 `dev_change_xdp_fd()` 函数,由它设置 XDP 钩子。钩子位于支持它的网卡驱动中。例如,用于某些 Mellanox 硬件的 mlx4 驱动的钩子,实现在 *[drivers/net/ethernet/mellanox/mlx4/][173]* 目录下的文件中。文件 `en_netdev.c` 接收 Netlink 命令并调用 `mlx4_xdp_set()`,后者再被文件 `en_rx.c` 中实现的 `mlx4_en_process_rx_cq()` 调用(对于 RX 侧)。
### 在 bcc 中的 BPF 逻辑
**bcc** 工具集的代码可以在 [*bcc 的 GitHub 仓库*][174] 中找到。其 **Python 代码**,包括 `BPF` 类,起始于文件 *[bcc/src/python/bcc/__init__.py][175]*。但是许多有趣的东西(依我看),比如把 BPF 程序加载进内核,发生在 [*libbcc 的 **C 库**][176] 中。
### 使用 tc 去管理 BPF 的代码
**tc 中** 与 BPF 相关的代码自然随 iproute2 包一起发布。其中一部分在 *[iproute2/tc/][177]* 目录中。文件 `f_bpf.c` 和 `m_bpf.c`(以及 `e_bpf.c`)分别用于处理 BPF 过滤器和动作(以及 tc 的 `exec` 命令,不管它是做什么用的)。文件 `q_clsact.c` 定义了专为 BPF 创建的 `clsact` qdisc。但是,**大多数 BPF 用户空间逻辑** 是在 *[iproute2/lib/bpf.c][178]* 库中实现的,因此,如果你想折腾 BPF 和 tc,这里可能是你应该去的地方(它是从文件 iproute2/tc/tc_bpf.c 移动过来的,在旧版本的包中你可以找到相同的代码)。
### BPF 实用工具
内核中也带有与 BPF 相关的三个工具的源代码(`bpf_asm.c`、`bpf_dbg.c`、`bpf_jit_disasm.c`),根据你的版本不同,在 *[linux/tools/net/][179]* 或者 *[linux/tools/bpf/][180]* 目录下面:
* `bpf_asm` 是一个极小的汇编程序。
* `bpf_dbg` 是一个很小的 cBPF 程序调试器。
* `bpf_jit_disasm` 对于两种 BPF 都是通用的,并且对于 JIT 调试来说非常有用。
* `bpftool` 是 Jakub Kicinski 写的通用工具,它可以从用户空间与 eBPF 程序和映射进行交互,例如展示、转储、pin 程序,或者展示、创建、pin、更新、删除映射。
阅读在源文件顶部的注释可以得到一个它们使用方法的概述。
与 eBPF 一起工作的其它必备文件是来自内核树的两个 **用户空间库**,它们可以用于从外部程序管理 eBPF 程序或者映射。相关函数可以通过 *[linux/tools/lib/bpf/][204]* 目录中的头文件 `bpf.h` 和 `libbpf.h`(更高层)访问。比如,工具 `bpftool` 主要就依赖这些库。
### 其它值得关注的代码片段
如果你对 BPF 的一些不那么常见的语言用法感兴趣:bcc 包含 *[一个面向 BPF 目标的 **P4 编译器**][181]* 以及 [***一个 Lua 前端***][182],它们可以分别用来替代 C 语言子集以及(对于 Lua 来说)Python 工具。
### LLVM 后端
clang / LLVM 用于将 C 编译成 BPF 的后端,是在 *[这个提交][183]* 中被添加到 LLVM 源码中的(也可以在 [GitHub 镜像][184] 上访问)。
### 在用户空间中运行
据我所知,到目前为止至少有两种 eBPF 的用户空间实现。第一个是 *[uBPF][185]*,它是用 C 写的。它包含一个解释器、一个 x86_64 架构的 JIT 编译器、一个汇编器和一个反汇编器。
uBPF 的代码似乎被复用,产生了一个 [*通用实现*][186],宣称支持 FreeBSD 内核、FreeBSD 用户空间、Linux 内核、Linux 用户空间以及 Mac OSX 用户空间。它被 [*VALE 交换机的 BPF 扩展模块*][187] 使用。
另一个用户空间实现是我自己做的:[*rbpf*][188],基于 uBPF,但是用 Rust 写的。它带有一个解释器和一个 JIT 编译器(Linux 下两者都可用,Mac OSX 和 Windows 下仅有解释器),以后可能还会支持更多。
### 提交日志
正如前面所说的,如果你希望得到关于某个特定 BPF 特性的更多信息,不要犹豫,去看看介绍该特性的提交日志。你可以在许多地方搜索日志,比如在 *[git.kernel.org][189]* 上、[在 GitHub 上][190],或者如果你克隆过仓库,在你的本地仓库中。如果你不熟悉 git,可以尝试用 `git blame <file>` 查看引入特定代码行的提交,然后用 `git show <commit>` 查看详细情况(或者在 `git log` 的结果中按关键字搜索,但是这样做通常比较单调乏味)。也可以看 bcc 仓库中 *[按内核版本区分的 eBPF 特性列表][191]*,它链接到相关的提交上。
![](https://qmonnet.github.io/whirl-offload/img/icons/wand.svg)
### 排错
对 eBPF 的热情是最近才兴起的,因此,到目前为止我还找不到许多关于怎么排错的资源。所以这里只有几条,是我在使用 BPF 时遇到并记录下来的问题。
### 编译时的错误
* 确保你有一个最新的 Linux 内核版本(也可以看 *[这个文档][127]*)。
* 如果你自己编译内核:确保你安装了所有正确的组件,包括内核镜像、头文件和 libc。
* 当使用 `tc-bpf` man 页面提供的 `bcc` shell 函数(用于把 C 代码编译成 BPF)时:我必须为 clang 调用添加头文件包含路径:
```
__bcc() {
clang -O2 -I "/usr/src/linux-headers-$(uname -r)/include/" \
-I "/usr/src/linux-headers-$(uname -r)/arch/x86/include/" \
-emit-llvm -c $1 -o - | \
llc -march=bpf -filetype=obj -o "`basename $1 .c`.o"
}
```
(现在似乎已经修复了)。
* 对于使用 `bcc` 的其它问题,不要忘了去看一看这个工具集的 *[答疑][128]*
* 如果你下载的示例来自与你的内核版本不完全匹配的 iproute2 包,文件中包含的头文件可能会触发一些错误。这些示例片段都假设你系统上安装的内核头文件与 iproute2 包版本相同。如果不是这种情况,下载正确版本的 iproute2,或者修改示例中包含文件的路径,使其指向 iproute2 附带的头文件(取决于你使用的特性,运行时可能会、也可能不会出现问题)。
### 在加载和运行时的错误
* 使用 tc 去加载一个程序时,确保你使用的 tc 二进制文件来自与当前内核版本相匹配的 iproute2。
* 使用 bcc 去加载一个程序,确保在你的系统上安装了 bcc仅下载源代码去运行 Python 脚本是不够的)。
* 使用 tc如果 BPF 程序不能返回一个预期值,检查调用它的方式:过滤器、或者动作、或者使用 “直接传动” 模式的过滤器。
* 还是关于 tc:注意,如果没有过滤器,动作不会直接附加到 qdisc 或者接口上。
* 内核校验器抛出的错误可能很难解读。[*内核文档*][129] 或许可以提供帮助,*[参考指南][130]* 也是,万不得已的情况下还可以去看源代码(祝你好运!)。对于这类错误,记住校验器 _并不运行_ 程序,这一点非常重要。如果你得到一个关于无效内存访问或者未初始化数据的错误,它并不意味着那些问题真实发生了(有时候,甚至完全不可能发生)。它意味着你的程序是以校验器认为可能出错的方式编写的,因此程序被拒绝。
* 注意 `tc` 工具有一个 `verbose` 模式,它与 BPF 一起工作得很好:在你的命令行尾部尝试追加一个 `verbose`。
* bcc 也有 verbose 选项:`BPF` 类有一个 `debug` 参数,它可以带 `DEBUG_LLVM_IR`、`DEBUG_BPF` 和 `DEBUG_PREPROCESSOR` 三个标志的任意组合(详细情况在 *[源文件][131]* 中)。为了调试代码,它甚至嵌入了 [*一些打印输出信息的设施*][132]。
* LLVM v4.0+ 为 eBPF 程序 *[嵌入了一个反汇编器][133]*。因此,如果你用 clang 编译程序,在编译时添加 `-g` 标志,就可以在之后以内核校验器所用的、对人类友好的格式转储你的程序。要处理目标文件,使用:
```
$ llvm-objdump -S -no-show-raw-insn bpf_program.o
```
* 在使用映射?你会想看看 [*bpf-map*][134],这是为 Cilium 项目用 Go 写的一个非常有用的工具,它可以转储内核中 eBPF 映射的内容。也有一个用 Rust 写的 [*克隆*][135]。
* **StackOverflow** 上有一个老的 *[`bpf` 标签][136]*,但是在写作本文时它很少被使用(并且几乎没有与新的 eBPF 相关的内容)。如果你是来自未来的读者,你可能想看看这方面是否有了更多的活动。
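针对上面提到的校验器对内存访问的检查,下面用普通的用户空间 C 模拟一下校验器要求的写法(假设性示意,并非真正的 BPF 程序):在解引用数据包指针之前,必须先和 `data_end` 做比较,否则程序会被以无效内存访问为由拒绝:

```c
/* 假设性示意:校验器要求的数据包边界检查模式,在普通 C 中模拟。
   返回以太网帧的 EtherType 字段;数据不足时返回 -1。 */
#include <stddef.h>
#include <stdint.h>

int read_ethertype(const uint8_t *data, const uint8_t *data_end)
{
    const size_t eth_hlen = 14;        /* 以太网头长度 */
    if (data + eth_hlen > data_end)    /* 校验器坚持要求的边界检查 */
        return -1;                     /* 对应 BPF 程序中的提前返回 */
    return (data[12] << 8) | data[13]; /* 大端存放的 h_proto 字段 */
}
```

在真正的 BPF 程序中,`data` 和 `data_end` 来自上下文(比如 `struct xdp_md`),校验器会静态跟踪这个比较,只有比较之后的访问才被认为是安全的。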
![](https://qmonnet.github.io/whirl-offload/img/icons/zoomin.svg)
### 更多!
* 如果你想很容易地 **测试 XDP**,有 *[一个 Vagrant 配置][137]* 可以使用。你也可以在 *[一个 Docker 容器中][138]* **测试 bcc**。
* 想知道围绕 BPF 的 **开发和活动** 在哪里吗?好吧,内核补丁总是会出现在 *[netdev 邮件列表][139]* 上(与 Linux 内核网络栈开发相关):以关键字 “BPF” 或者 “XDP” 来搜索。自 2017 年 4 月开始,也有 *[一个专门用于 XDP 编程的邮件列表][140]*(既讨论架构也用于寻求帮助)。因为 BPF 是一个重要的项目,[*在 IO Visor 的邮件列表上*][141] 也有许多的讨论和辩论。如果你只是想随时了解情况,还有一个 [*@IOVisor Twitter 帐户*][142]。
我经常会回到这篇博客中,来看一看 *[关于 BPF][192]* 有没有新的文章!
_特别感谢 Daniel Borkmann 指引我找到了许多的 [附加的文档][154]因此我才完成了这个合集。_
--------------------------------------------------------------------------------
via: https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/
作者:[Quentin Monnet][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://qmonnet.github.io/whirl-offload/about/
[1]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf
[2]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp
[3]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-other-components-related-or-based-on-ebpf
[4]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-bpf-1
[5]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-tc
[6]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1
[7]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-p4-and-bpf
[8]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-the-kernel
[9]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-package-iproute2
[10]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#from-bcc-set-of-tools
[11]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#manual-pages
[12]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-code-in-the-kernel
[13]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#xdp-·s-code
[14]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-logic-in-bcc
[15]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#code-to-manage-bpf-with-tc
[16]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#bpf-utilities
[17]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#other-interesting-chunks
[18]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#llvm-backend
[19]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#running-in-userspace
[20]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#commit-logs
[21]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-compilation-time
[22]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#errors-at-load-and-run-time
[23]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#generic-presentations
[24]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#documentation
[25]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#tutorials
[26]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#examples
[27]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#the-code
[28]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#troubleshooting
[29]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#and-still-more
[30]:http://netdevconf.org/1.2/session.html?daniel-borkmann
[31]:http://netdevconf.org/1.2/slides/oct5/07_tcws_daniel_borkmann_2016_tcws.pdf
[32]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop
[33]:http://www.netdevconf.org/1.1/proceedings/slides/borkmann-tc-classifier-cls-bpf.pdf
[34]:http://www.netdevconf.org/1.1/proceedings/papers/On-getting-tc-classifier-fully-programmable-with-cls-bpf.pdf
[35]:https://archive.fosdem.org/2016/schedule/event/ebpf/attachments/slides/1159/export/events/attachments/ebpf/slides/1159/ebpf.pdf
[36]:https://fosdem.org/2017/schedule/event/ebpf_xdp/
[37]:http://people.netfilter.org/hawk/presentations/xdp2016/xdp_intro_and_use_cases_sep2016.pdf
[38]:http://netdevconf.org/1.2/session.html?jesper-performance-workshop
[39]:http://people.netfilter.org/hawk/presentations/OpenSourceDays2017/XDP_DDoS_protecting_osd2017.pdf
[40]:http://people.netfilter.org/hawk/presentations/MM-summit2017/MM-summit2017-JesperBrouer.pdf
[41]:http://netdevconf.org/2.1/session.html?gospodarek
[42]:http://jvns.ca/blog/2017/04/07/xdp-bpf-tutorial/
[43]:http://www.slideshare.net/ThomasGraf5/clium-container-networking-with-bpf-xdp
[44]:http://www.slideshare.net/Docker/cilium-bpf-xdp-for-containers-66969823
[45]:https://www.youtube.com/watch?v=TnJF7ht3ZYc&amp;amp;amp;list=PLkA60AVN3hh8oPas3cq2VA9xB7WazcIgs
[46]:http://www.slideshare.net/ThomasGraf5/cilium-fast-ipv6-container-networking-with-bpf-and-xdp
[47]:https://fosdem.org/2017/schedule/event/cilium/
[48]:http://openvswitch.org/support/ovscon2016/7/1120-tu.pdf
[49]:http://openvswitch.org/support/ovscon2016/7/1245-bertrone.pdf
[50]:https://www.spinics.net/lists/xdp-newbies/msg00179.html
[51]:https://www.spinics.net/lists/xdp-newbies/msg00181.html
[52]:https://www.spinics.net/lists/xdp-newbies/msg00185.html
[53]:http://schd.ws/hosted_files/ossna2017/da/BPFandXDP.pdf
[54]:https://speakerdeck.com/tuxology/the-bsd-packet-filter
[55]:http://www.slideshare.net/brendangregg/bpf-tracing-and-more
[56]:http://fr.slideshare.net/brendangregg/linux-bpf-superpowers
[57]:https://www.socallinuxexpo.org/sites/default/files/presentations/Room%20211%20-%20IOVisor%20-%20SCaLE%2014x.pdf
[58]:https://events.linuxfoundation.org/sites/events/files/slides/ebpf_on_the_mainframe_lcon_2015.pdf
[59]:https://events.linuxfoundation.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf
[60]:https://events.linuxfoundation.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf
[61]:https://lwn.net/Articles/603983/
[62]:http://www.slideshare.net/vh21/meet-cutebetweenebpfandtracing
[63]:http://www.slideshare.net/vh21/linux-kernel-tracing
[64]:http://www.slideshare.net/ThomasGraf5/linux-networking-explained
[65]:http://www.slideshare.net/ThomasGraf5/linuxcon-2015-linux-kernel-networking-walkthrough
[66]:http://www.tcpdump.org/papers/bpf-usenix93.pdf
[67]:http://www.gsp.com/cgi-bin/man.cgi?topic=bpf
[68]:http://borkmann.ch/talks/2013_devconf.pdf
[69]:http://borkmann.ch/talks/2014_devconf.pdf
[70]:https://blog.cloudflare.com/introducing-the-bpf-tools/
[71]:http://biot.com/capstats/bpf.html
[72]:https://www.iovisor.org/technology/xdp
[73]:https://github.com/iovisor/bpf-docs/raw/master/Express_Data_Path.pdf
[74]:https://events.linuxfoundation.org/sites/events/files/slides/iovisor-lc-bof-2016.pdf
[75]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#about-xdp-1
[76]:http://netdevconf.org/1.2/session.html?herbert-xdp-workshop
[77]:https://schd.ws/hosted_files/2016p4workshop/1d/Intel%20Fastabend-P4%20on%20the%20Edge.pdf
[78]:https://ovsorbit.benpfaff.org/#e11
[79]:http://open-nfp.org/media/pdfs/Open_NFP_P4_EBPF_Linux_TC_Offload_FINAL.pdf
[80]:https://opensource.googleblog.com/2016/11/cilium-networking-and-security.html
[81]:https://ovsorbit.benpfaff.org/
[82]:http://blog.ipspace.net/2016/10/fast-linux-packet-forwarding-with.html
[83]:http://netdevconf.org/2.1/session.html?bertin
[84]:http://netdevconf.org/2.1/session.html?zhou
[85]:http://www.slideshare.net/IOVisor/ceth-for-xdp-linux-meetup-santa-clara-july-2016
[86]:http://info.iet.unipi.it/~luigi/vale/
[87]:https://github.com/YutaroHayakawa/vale-bpf
[88]:https://www.stamus-networks.com/2016/09/28/suricata-bypass-feature/
[89]:http://netdevconf.org/1.2/slides/oct6/10_suricata_ebpf.pdf
[90]:https://www.slideshare.net/ennael/kernel-recipes-2017-ebpf-and-xdp-eric-leblond
[91]:https://github.com/iovisor/bpf-docs/blob/master/university/sigcomm-ccr-InKev-2016.pdf
[92]:https://fosdem.org/2017/schedule/event/go_bpf/
[93]:https://wkz.github.io/ply/
[94]:https://www.kernel.org/doc/Documentation/networking/filter.txt
[95]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/Documentation/bpf/bpf_design_QA.txt?id=2e39748a4231a893f057567e9b880ab34ea47aef
[96]:https://github.com/iovisor/bpf-docs/blob/master/eBPF.md
[97]:https://github.com/iovisor/bcc/tree/master/docs
[98]:https://github.com/iovisor/bpf-docs/
[99]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[100]:http://man7.org/linux/man-pages/man2/bpf.2.html
[101]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html
[102]:https://prototype-kernel.readthedocs.io/en/latest/bpf/index.html
[103]:http://docs.cilium.io/en/latest/bpf/
[104]:https://ferrisellis.com/tags/ebpf/
[105]:http://linux-ip.net/articles/Traffic-Control-HOWTO/
[106]:http://lartc.org/lartc.html
[107]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/man/man8
[108]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc?h=v4.13.0
[109]:https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/tree/doc/actions?h=v4.13.0
[110]:http://netdevconf.org/1.2/session.html?jamal-tc-workshop
[111]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/commit/bash-completion/tc?id=27d44f3a8a4708bcc99995a4d9b6fe6f81e3e15b
[112]:https://prototype-kernel.readthedocs.io/en/latest/networking/XDP/index.html
[113]:https://marc.info/?l=linux-netdev&amp;amp;amp;m=147436253625672
[114]:http://docs.cilium.io/en/latest/bpf/
[115]:https://github.com/iovisor/bcc/blob/master/INSTALL.md
[116]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/bpf.h
[117]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/bpf.h
[118]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/filter.h
[119]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/filter.h
[120]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/bpf
[121]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/filter.c
[122]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/x86/net/bpf_jit_comp.c
[123]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/sched
[124]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/trace/bpf_trace.c
[125]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/seccomp.c
[126]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/seccomp/seccomp_bpf.c
[127]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[128]:https://github.com/iovisor/bcc/blob/master/FAQ.txt
[129]:https://www.kernel.org/doc/Documentation/networking/filter.txt
[130]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[131]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py
[132]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md#output
[133]:https://www.spinics.net/lists/netdev/msg406926.html
[134]:https://github.com/cilium/bpf-map
[135]:https://github.com/badboy/bpf-map
[136]:https://stackoverflow.com/questions/tagged/bpf
[137]:https://github.com/iovisor/xdp-vagrant
[138]:https://github.com/zlim/bcc-docker
[139]:http://lists.openwall.net/netdev/
[140]:http://vger.kernel.org/vger-lists.html#xdp-newbies
[141]:http://lists.iovisor.org/pipermail/iovisor-dev/
[142]:https://twitter.com/IOVisor
[143]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#what-is-bpf
[144]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#dive-into-the-bytecode
[145]:https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/#resources
[146]:https://github.com/qmonnet/whirl-offload/commits/gh-pages/_posts/2016-09-01-dive-into-bpf.md
[147]:http://netdevconf.org/1.2/session.html?jakub-kicinski
[148]:http://www.slideshare.net/IOVisor/express-data-path-linux-meetup-santa-clara-july-2016
[149]:https://cdn.shopify.com/s/files/1/0177/9886/files/phv2017-gbertin.pdf
[150]:https://github.com/cilium/cilium
[151]:https://fosdem.org/2017/schedule/event/stateful_ebpf/
[152]:http://vger.kernel.org/vger-lists.html#xdp-newbies
[153]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[154]:https://github.com/qmonnet/whirl-offload/commit/d694f8081ba00e686e34f86d5ee76abeb4d0e429
[155]:http://openvswitch.org/pipermail/dev/2014-October/047421.html
[156]:https://qmonnet.github.io/whirl-offload/2016/07/15/beba-research-project/
[157]:https://www.iovisor.org/resources/blog
[158]:http://www.brendangregg.com/blog/2016-03-05/linux-bpf-superpowers.html
[159]:http://p4.org/
[160]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4
[161]:https://github.com/p4lang/p4c/blob/master/backends/ebpf/README.md
[162]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[163]:https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md
[164]:https://github.com/goldshtn/linux-tracing-workshop
[165]:https://blog.yadutaf.fr/2017/07/28/tracing-a-packet-journey-using-linux-tracepoints-perf-ebpf/
[166]:https://open-nfp.org/dataplanes-ebpf/technical-papers/
[167]:http://netdevconf.org/2.1/session.html?gospodarek
[168]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/samples/bpf
[169]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/examples/bpf
[170]:https://github.com/iovisor/bcc/tree/master/examples
[171]:http://man7.org/linux/man-pages/man8/tc-bpf.8.html
[172]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/core/dev.c
[173]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/mellanox/mlx4/
[174]:https://github.com/iovisor/bcc/
[175]:https://github.com/iovisor/bcc/blob/master/src/python/bcc/__init__.py
[176]:https://github.com/iovisor/bcc/blob/master/src/cc/libbpf.c
[177]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/tc
[178]:https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/lib/bpf.c
[179]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/net
[180]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/tools/bpf
[181]:https://github.com/iovisor/bcc/tree/master/src/cc/frontends/p4/compiler
[182]:https://github.com/iovisor/bcc/tree/master/src/lua
[183]:https://reviews.llvm.org/D6494
[184]:https://github.com/llvm-mirror/llvm/commit/4fe85c75482f9d11c5a1f92a1863ce30afad8d0d
[185]:https://github.com/iovisor/ubpf/
[186]:https://github.com/YutaroHayakawa/generic-ebpf
[187]:https://github.com/YutaroHayakawa/vale-bpf
[188]:https://github.com/qmonnet/rbpf
[189]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
[190]:https://github.com/torvalds/linux
[191]:https://github.com/iovisor/bcc/blob/master/docs/kernel-versions.md
[192]:https://qmonnet.github.io/whirl-offload/categories/#BPF
[193]:https://lwn.net/Articles/740157/
[194]:https://www.netdevconf.org/2.2/session.html?viljoen-xdpoffload-talk
[195]:https://fosdem.org/2018/schedule/event/xdp/
[196]:http://blog.kubernetes.io/2017/12/using-ebpf-in-kubernetes.html
[197]:http://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html?highlight=XDP#ebpf-and-xdp
[198]:https://github.com/pevma/SEPTun-Mark-II
[199]:https://www.stamus-networks.com/2016/09/28/suricata-bypass-feature/
[200]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/tools/testing/selftests/bpf
[201]:https://github.com/netoptimizer/prototype-kernel/tree/master/kernel/samples/bpf
[202]:https://github.com/iovisor/bcc/tree/master/tools
[203]:https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/trace/bpf_trace.c
[204]:https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/tools/lib/bpf