Merge pull request #4 from LCTT/master

Sync with upstream
jx.zeng 2020-07-01 09:34:28 +08:00 committed by GitHub
commit f311d12772
61 changed files with 2024 additions and 218 deletions


@ -1,43 +1,41 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: (Yufei-Yan)
[#]: publisher: ( )
[#]: url: ( )
[#]: translator: (Yufei-Yan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12367-1.html)
[#]: subject: (How IoT will rescue aviation)
[#]: via: (https://www.networkworld.com/article/3543318/how-iot-will-rescue-aviation.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
物联网IoT将如何拯救航空业
物联网将如何拯救航空业
======
为防止感染新冠病毒的乘客登机欧洲飞机制造商空中客车Airbus公司正在研究能够识别病毒的物联网IoT传感器。
[Stéphan Valentin][1] [(CC0)][2]
> 为防止感染新冠病毒的乘客登机,欧洲飞机制造商,空中客车公司正在研究能够识别病毒的物联网传感器。
一家开发用于探测飞机上爆炸物和其他化学物品的生化公司正在和空中客车Airbus合作一同开发一种可以检测已经感染新冠病毒乘客的传感器。
![Stéphan Valentin][1]
Koniku 的创始人兼首席执行管 Osh Agabi 在[一篇博文][3]中说,总部位于加州的 Konibu 公司和空中客车Airbus公司从 2017 年就开始合作共同开发能够探测出不同化学物质的非接触式设备
一家开发传感器来探测飞机上和机场中的爆炸物和其他化学物品的生物技术公司正在和空中客车合作,一同开发一种可以检测已经感染新冠病毒乘客的传感器。
[[订阅网络世界Network World简报定期获得新的视野]][4]
Koniku 的创始人兼首席执行管 Osh Agabi 在[一篇博文][3]中说,总部位于加州的 Konibu 公司和空中客车Airbus公司从 2017 年就开始合作共同开发能够探测出不同化学物质的非接触式设备。
他们希望通过识别从呼吸或者汗液中的气味来判断是否感染新冠病毒,因为这些气味可能是新冠病毒中化学物质的标记。“大多数感染和疾病都会或多或少的改变我们呼吸和汗液里的化学成分,也就会制造出不同的气味,” Agabi 写道。“如果我们检测到这些气味,我们就可以检测是否存在感染。”
他们希望通过识别从呼吸或者汗液中的气味来判断是否感染新冠病毒,因为这些气味可能是新冠病毒中化学物质的标记。“大多数感染和疾病都会或多或少的改变我们呼吸和汗液里的化学成分,也就会产生不同的气味,” Agabi 写道。“如果我们检测到这些气味,我们就可以检测是否存在感染。”
这两家公司希望能够识别这种新冠病毒的特标记并且能找到一种可以检测这些标记的物联网IoT传感器这些传感器配备有通过基因工程改造过的受体从而对病毒进行探测。“那些受体会过滤空气中的分子并且当他们和危险的分子化合物接触或者遇到已经提前编辑好的可能的威胁的时候,就会产生一个信号,”他写道。
这两家公司希望能够识别这种新冠病毒的特异性标记并且能找到一种可以检测这些标记的物联网IoT传感器这些传感器配备有通过基因工程改造过的受体从而对病毒进行探测。“那些受体会过滤空气中的分子并且当它们接触到已经提前被编程检测的存在威胁或危险的分子化合物的时候,就会产生一个信号,”他写道。
他说,当乘客经过一个装有传感器的密闭通道的时候,他们会受到检测。“通过对构成这些受体的细胞中的 DNA 进行编程,就可以让它对出现在感染者呼吸或者汗液中的化合物作出反应,我们相信,我们将能够迅速且可靠地筛查 COVID-19,并且确定一个人是否已经被感染,”他写道。
他说,乘客将通过走过一个装有传感器的封闭通道来进行筛选。“通过对构成这些受体细胞中的 DNA 进行编程,使其对出现在感染者呼吸或者汗液中的化合物作出反应,我们相信,我们将能够迅速且可靠地筛查新冠病毒,并且确定一个人是否已经被感染,”他写道。
其他类型的无接触检测器已经在使用中了包括高温皮肤elevated-skin-temperatureEST摄像头。
其他类型的非接触检测器已经在使用中了,包括<ruby>皮肤温度升高<rt>elevated-skin-temperature</rt></ruby>EST摄像头。
意大利的最主要的机场莱昂纳德·达·芬奇Leonardo da Vinci为了发现发烧的人他们购买了三个热成像头盔。机场已经配备了固定的热感应扫描仪,并且订购了更多的这种设备。[根据当地媒体 Fiumicino 网上的报道][15],被检测出可能发烧的乘客,被会要求做进一步的医学检查.
意大利的最主要的机场 Leonardo da Vinci 购置了三个热成像头盔来发现发烧的人。机场已经配备了固定的热感应扫描仪,并且订购了更多的这种设备。[根据当地媒体 Fiumicino Online 的报道][5],被发现潜在发烧的乘客被会要求做进一步的医学检查。
位于中国深圳制造这种头盔的 KC Wearable 公司表示,这种头盔可以由员工佩戴,并且可以与乘客保持一定的距离。
制造热感应摄像头的 FLIR Systems 公司在其本月的[财报][6]中表示,对 EST 系统的需求正在持续增加。
“尽管这些热感应摄像头不能检测或者诊断任何医疗状况,这些摄像头可以作为识别高温皮肤的有效工具。”报告说。
“尽管这些热感应摄像头不能检测或者诊断任何医疗状况,这些摄像头可以作为识别皮肤温度升高的有效工具。”报告说。
FLIR 公司 CEO Jim Cannon 在本月的收入电话会议上表示,“许多公司都在寻求在他们的设施中安装这种技术,以便解除现在挤压的订单”。[根据路透社报道][8]通用汽车General Motors就是其中之一。
FLIR 公司 CEO Jim Cannon 在本月的收入电话会议上表示,“许多公司都在寻求在他们的设施中安装这种技术,以便解除<ruby>就地避难<rt>shelter-in-place</rt></ruby>法令”。[根据路透社报道][8],通用汽车就是其中之一。
在 Facebook[9] 和[领英][10]上加入网络世界Network World社区对重要话题发表评论。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3543318/how-iot-will-rescue-aviation.html
@ -45,13 +43,13 @@ via: https://www.networkworld.com/article/3543318/how-iot-will-rescue-aviation.h
Author: [Patrick Nelson][a]
Selected by: [lujun9972][b]
Translated by: [Yufei-Yan](https://github.com/Yufei-Yan)
Proofread by: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/s7NGQU2Nt8k
[1]: https://images.idgesg.net/images/article/2018/08/passenger-view_of_airplane_wing_above_clouds_travel_journey_transportation_by_stephan_valentin_cc0_via_unsplash_1200x800-100766542-large.jpg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.linkedin.com/pulse/what-happens-when-airports-open-back-up-osh-agabi/?src=aff-lilpar&veh=aff_src.aff-lilpar_c.partners_pkw.10078_plc.Skimbit%20Ltd._pcrid.449670_learning&trk=aff_src.aff-lilpar_c.partners_pkw.10078_plc.Skimbit%20Ltd._pcrid.449670_learning&clickid=WNmzMlyalxyOUI7wUx0Mo34HUkiwwpy%3APQ3X1Y0&irgwc=1
[4]: https://www.networkworld.com/newsletters/signup.html


@ -1,22 +1,24 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12364-1.html)
[#]: subject: (Style your data plots in Python with Pygal)
[#]: via: (https://opensource.com/article/20/6/pygal-python)
[#]: author: (Shaun Taylor-Morgan https://opensource.com/users/shaun-taylor-morgan)
Style your data plots in Python with Pygal
======
> An introduction to a more stylish Python plotting library.
![](https://img.linux.net.cn/data/attachment/album/202006/30/120650hlf8lm0em3l1m8zd.jpg)
[Python][2] has many libraries that can visualise data. One of the more interactive options is Pygal, which I think is the library for people who like things that look good. It generates beautiful SVG (Scalable Vector Graphics) files that users can interact with. SVG is a standard format for interactive graphics, and it can deliver a rich user experience with just a few lines of Python.
### Stylish Python plotting with Pygal
In this article, we are going to recreate a multi-bar chart representing the results of the UK general elections from 1966 to 2020:
![Pygal plot][3]
@ -25,18 +27,14 @@
  * Run the latest version of Python (instructions for [Linux][4], [Mac][5], and [Windows][6])
  * Verify you are running a version of Python that works with these libraries
The data is available online and can be imported using pandas:
```
import pandas as pd
df = pd.read_csv('https://anvil.works/blog/img/plotting-in-python/uk-election-results.csv')
```
Now we are ready to go. The data looks like this:
```
        year  conservative  labour  liberal  others
@ -49,33 +47,29 @@ df = pd.read_csv('<https://anvil.works/blog/img/plotting-in-python/uk-election-r
14      2019           365     202       11      72
```
 
Plotting in Pygal is built up in a nicely readable way. First, we define a style object in a way that simplifies the definition of the bar chart. Then we pass the custom style, along with other metadata, to a `Bar` object:
```
import pygal
from pygal.style import Style

custom_style = Style(
    colors=('#0343df', '#e50000', '#ffff14', '#929591'),
    font_family='Roboto,Helvetica,Arial,sans-serif',
    background='transparent',
    label_font_size=14,
)

c = pygal.Bar(
    title="UK Election Results",
    style=custom_style,
    y_title='Seats',
    width=1200,
    x_label_rotation=270,
)
```
Then we add our data to the `Bar` object:
```
c.add('Conservative', df['conservative'])
@ -88,9 +82,8 @@ c.x_labels = df['year']
Finally, we save the chart as an SVG file:
```
c.render_to_file('pygal.svg')
```
The result is an interactive SVG chart that you can see in this gif:
@ -101,12 +94,7 @@ c.x_labels = df['year']
### Summary
Some plotting tools in Python require you to construct every object in great detail, whereas Pygal gives you that out of the box. If you have data to hand and want to make a clean, beautiful and simple interactive chart, give Pygal a try.
--------------------------------------------------------------------------------
@ -115,7 +103,7 @@ via: https://opensource.com/article/20/6/pygal-python
Author: [Shaun Taylor-Morgan][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)


@ -0,0 +1,107 @@
[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12365-1.html)
[#]: subject: (Customize your Raspberry Pi operating system for everyday use)
[#]: via: (https://opensource.com/article/20/6/custom-raspberry-pi)
[#]: author: (Patrick H. Mullins https://opensource.com/users/pmullins)
Customize your Raspberry Pi operating system for everyday use
======
> Install the Lite version of Raspberry Pi OS to make your Raspberry Pi leaner.
![](https://img.linux.net.cn/data/attachment/album/202006/30/133836pfm4u3sq073ffmbf.jpg)
You have a Raspberry Pi running [Raspberry Pi OS][2] (formerly known as Raspbian), and you surely know it is a fantastic little computer with a great operating system that, for beginners, includes just about everything you could want. However, once you are familiar with it and want to use it for something else, you may not want an operating system that bundles all of those components by default.
In that case, you have two choices: you can rack your brain stripping out everything you do not need, or you can use Raspberry Pi OS Lite to build a lightweight operating system tailored to your needs. My advice is to take the second route; it is simpler and saves time.
### Raspberry Pi OS Lite
The "Lite" version of Raspberry Pi OS is nothing special: it is a minimal image based on the latest version of [Debian][3]. It includes only the core of the operating system and boots to a command line instead of a graphical desktop. Think of it as the foundation of your custom Raspberry Pi OS; everything that follows is built on top of this core.
Head over to the Raspberry Pi Foundation's website to [download][2] the Lite image. Once the download is complete, check out the detailed [installation guide][4], which explains how to burn the Raspberry Pi OS SD card on Linux, Windows, or macOS.
If you plan to use the Pi as a minimal system for running some scripts and services, that is pretty much all you need. If you want to do more, keep reading.
### X Window
First, it is good to have a window system installed for those occasions when you need to connect to your Raspberry Pi with a graphical user interface (GUI).
The [X Window System][5], sometimes called X11, is the basic windowing system commonly found on Unix operating systems. X11 provides the basic framework for a GUI desktop environment. It lets you interact with the computer through windows, a mouse, and a keyboard.
#### Installing X Window
The following line installs the minimum set of packages needed to get X11 running:
```
sudo apt install -y --no-install-recommends xserver-xorg-core xserver-xorg xfonts-base xinit
```
With `--no-install-recommends`, only the main dependencies (the packages listed in the `Depends` field) are installed. This saves a lot of space, because packages that are merely recommended, and not strictly required, are skipped.
### Going further: the Xfce desktop environment
If you want, you can stop here and just use X Window as your desktop. I do not recommend it, though. The minimal window manager that ships with X Window is a bare-bones, minimalist affair that feels somewhat dated. Instead, I recommend installing a modern desktop environment such as Xfce, GNOME, or KDE. When it comes to micro-computers, I prefer [Xfce][6] over the others because it was designed for systems with limited resources, and you can customise it with themes, icons, and more.
#### Installing Xfce
Installing the Xfce desktop environment is pretty simple. Just run:
```
sudo apt install -y --no-install-recommends xfce4 desktop-base lightdm
```
That is it. You now have X Window (X11) and Xfce installed. Now it is time to customise the environment and install some core applications.
### Core applications
So far, you have installed X Window (X11), the Xfce desktop environment, and LightDM (a display manager that comes with Xfce). At this point, you have a complete, lightweight system that boots and is ready to use. Before calling it done, though, I like to install a few core applications.
The following commands install a terminal emulator, the [Audacious][7] audio player, the [Ristretto][8] image viewer, the [Mousepad][9] text editor, the [File Roller][10] archive manager, and the [Thunar][11] volume manager:
```
sudo apt install -y --no-install-recommends xfce4-terminal audacious ristretto
sudo apt install -y --no-install-recommends mousepad file-roller thunar-volman
```
#### Other optional extras
Other things you may want to install include a good network manager, a task manager, a PDF reader, a notification tool, a desktop background manager, a screenshot tool, and some new icon and cursor themes. In short, if the Raspberry Pi is going to be your daily system, these round out the everyday experience:
```
sudo apt install -y --no-install-recommends network-manager xfce4-taskmanager xfce4-notifyd
sudo apt install -y --no-install-recommends xpdf gnome-icon-theme dmz-cursor-theme
```
### What's next?
If everything worked, you now have a Raspberry Pi running a super-lightweight operating system based on Xfce and Debian Lite. I suggest heading over to the Xfce website to check out all the other cool goodies you can install and use. What you do next is entirely up to you!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/custom-raspberry-pi
Author: [Patrick H. Mullins][a]
Selected by: [lujun9972][b]
Translated by: [Yufei-Yan](https://github.com/Yufei-Yan)
Proofread by: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/pmullins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi_cartoon.png?itok=m3TcBONJ (Cartoon graphic of Raspberry Pi board)
[2]: https://www.raspberrypi.org/downloads/raspberry-pi-os/
[3]: https://www.debian.org/
[4]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[5]: https://en.wikipedia.org/wiki/X_Window_System
[6]: http://xfce.org
[7]: https://audacious-media-player.org/
[8]: https://docs.xfce.org/apps/ristretto/start
[9]: https://github.com/codebrainz/mousepad
[10]: https://gitlab.gnome.org/GNOME/file-roller
[11]: https://docs.xfce.org/xfce/thunar/thunar-volman
[12]: https://goodies.xfce.org/


@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NIST aims to make frequency sharing more efficient for wireless networks)
[#]: via: (https://www.networkworld.com/article/3561618/nist-aims-to-make-frequency-sharing-more-efficient-for-wireless-networks.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
NIST aims to make frequency sharing more efficient for wireless networks
======
Machine-learning formula will help different radio protocols, such as Wi-Fi and LTE, work together more efficiently in the same wireless spectrum.
Martyn Williams/IDG
A machine-learning formula developed by the National Institute of Standards and Technology ([NIST][1]) has the potential to significantly improve how [5G][2] and other wireless networks select and share communications frequencies. Compared to trial-and-error methods, NIST's formula could make the process of sharing communications frequencies as much as 5,000 times more efficient, researchers claim.
The NIST system is based on the idea that radio equipment can learn its network environment from experience rather than, as is done now, simply selecting frequency channels by trial and error.
"The algorithm learns which channel provides the best outcome" under specific environmental conditions, NIST says in an [article on its website][3].
**READ MORE:** [How beamforming makes wireless communication faster][4]
"The formula could be programmed into software on transmitters in many [different] types of real-world networks," the team says.
Essentially, the computer-modeled algorithm is a formula that maps prior experience in environmental RF conditions. Those conditions can include the number of transmitters operating within a channel (set of adjacent frequencies), for example.
"… if a transmitter selects a channel that is not occupied, then the probability of a successful transmission rises, leading to a higher data rate," the article says. Likewise, when a transmitter selects a channel that doesn't have much interference on it, the signal is stronger, and you get a better data rate. The transmitter remembers which channel provides the best outcome and learns to choose that spot on the dial when it next needs a clear signal.
That's different from the way things generally work today. That is, a radio simply tries to find an open frequency and then communicates with like-protocol radios. In sophisticated cases, like Wi-Fi, for example, frequency hopping and [beamforming][4] are used to optimize channels.
Where NIST's machine-learning technique shines is in the case of shared spectrum, such as sharing Wi-Fi with License Assisted Access (LAA), the researchers explain. LAA is LTE in unlicensed spectrum, known as LTE-U, at 5 GHz. In that combination of Wi-Fi with LAA, on the same frequencies, the protocols are disparate: the radios don't communicate with each other to function in harmony, and chaos could occur the busier the band got—transmissions would bump into other transmissions. But, if all the radios were better at choosing their slot, by learning what works and what doesn't, then things would be better.
"This could potentially make communications in the unlicensed bands much more efficient," says Jason Coder, a NIST engineer, in the article.
Indeed, it "could help 5G and other wireless networks select and share communications frequencies about 5,000 times more efficiently than trial-and-error methods," NIST claims.
The key word here is "share," because in order to increase communications in limited spectrum, more sharing must take place—the users, such as IoT or media streaming, are all competing for the same metaphorical real estate. Combining unlicensed and licensed bands, as is the case in LAA, will likely become more common as IoT and digital continue to expand. (Unlicensed bands are those not assigned to a specific user, like a mobile network operator; licensed bands are won in auctions and allocated.)
In the NIST scenario, the competing transmitters "each learn to maximize the total network data rate without communicating with each other." Therefore, multiple protocols and data types, like video or sensor data, or Wi-Fi and mobile networks, can function alongside each other.
NIST's formula significantly simplifies the process of assigning optimum channels to transmitters, according to the article: "The study found that an exhaustive effort [using trial and error] to identify the best solution would require about 45,600 trials, whereas the formula could select a similar solution by trying only 10 channels, just 0.02 percent of the effort."
The NIST researchers recently presented their research at [IEEE's 91st Vehicular Technology Conference][5], held virtually this year.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3561618/nist-aims-to-make-frequency-sharing-more-efficient-for-wireless-networks.html
Author: [Patrick Nelson][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.nist.gov/
[2]: https://www.networkworld.com/article/3330603/5g-versus-4g-how-speed-latency-and-application-support-differ.html
[3]: https://www.nist.gov/news-events/news/2020/05/nist-formula-may-help-5g-wireless-networks-efficiently-share-communications
[4]: https://www.networkworld.com/article/3445039/beamforming-explained-how-it-makes-wireless-communication-faster.html
[5]: https://events.vtsociety.org/vtc2020-spring/conference-sessions/program/
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,50 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An open source browser extension to zoom in on images)
[#]: via: (https://opensource.com/article/20/6/hoverzoom)
[#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas)
An open source browser extension to zoom in on images
======
Taking a closer look at web images isn't always straightforward, but Hover Zoom+ makes it as easy as a slide of your mouse.
![Digital images of a computer desktop][1]
Have you ever visited a website and wanted to see the images displayed larger? That happens to me all the time, and it isn't always easy to make that happen.
On occasion, I sift through the source code, use **Ctrl + F** to search for the image, copy the image source address and paste it into a new window in order to see the image at its full-size glory. Or, the other option is to right-click, copy the image address, and paste into a new tab.
![Example image source code ][2]
Hover Zoom+ makes this a much simpler process. Issued under the MIT license, Hover Zoom+ is a simple Chrome extension available in the Chrome Web Store, and the source code is available on [GitHub][3]. It is also available for Firefox.
This app makes that process much easier. By simply hovering over an image, you will see a pop-up show the image in its entirety, fit to your browser window, whether it is cropped or not (or if the image was placed, sized to fit, it will look the same). This can be interesting, as sometimes, the original image may have been cropped, either to fit the space or to focus on a specific part of the image. You cannot right-click and save the image directly from the pop-up, however.
According to San Jose, California-based developer Oleg Anashkin, "This is an open source version of the original HoverZoom extension, which is now overrun by malware and deleted from the store. In this version, all spyware has been removed, many bugs were fixed, and new features were added. It doesn't collect any statistics by default."
I installed the extension in Chrome on my Windows 10 laptop and took it for a spin. With the extension installed, I simply hovered over an image, and it displayed larger-than-life in a pop-up window.
However, Hover Zoom+ does not work on all websites or for all images. It works great for Facebook and Twitter, but not for sponsored content on those sites. The user can easily toggle the app to enable or disable it for specific sites. It was easy to see the entire Instagram screenshot of this cute kitten by using Hover Zoom+ without having to actually read the post (convenient!):
![Zoomed-in image of a kitten from Facebook][4]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/hoverzoom
Author: [Jeff Macharyas][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/jeffmacharyas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_browser_web_desktop.png?itok=Bw8ykZMA (Digital images of a computer desktop)
[2]: https://opensource.com/sites/default/files/uploads/source.jpg (Example image source code)
[3]: https://github.com/extesy/hoverzoom/
[4]: https://opensource.com/sites/default/files/uploads/fb-cat.jpg (HoverZoom+ enlarged image)


@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A minimalist Mac terminal for Linux fans)
[#]: via: (https://opensource.com/article/20/6/iterm2-mac)
[#]: author: (Lisa Seelye https://opensource.com/users/lisa)
A minimalist Mac terminal for Linux fans
======
Here is how I keep a terminal simple and my dotfiles secure through a lot of subtle complexity.
![Coffee and laptop][1]
I have a confession to make: I have been a Mac user for more than 10 years now. At first, I felt a little shame, given my strong Linux background, but the Mac gives me a Unix-like shell and a great window manager. Because of that history, I have a mix of features that will run on macOS but feel familiar to Linux users. There's no reason it can't port over to Linux (and it has!).
### Using iTerm2 on a Mac
For a long time, my preferred terminal was the basic built-in Terminal.app, but I recently switched to [iTerm2][2] because it has much better customization and profile support. One of its key wins for me is that it's easy to transplant settings from Mac to Mac. For daily use, I prefer the Solarized Dark theme, but for presentations, I have a separate profile that enlarges the text and uses a plain black background with more vibrant colors.
The first thing I do to make iTerm2 usable is to configure the **Ctrl+Left** and **Ctrl+Right** arrows to respect the classic terminal behavior of jumping to the start and end of a word boundary. To do so, navigate to Preferences > Profiles > Your Profile > Keys and enter the following.
* Keyboard Shortcut: ^←
* Action: Send Escape Sequence
* Esc+: b
Then the other: 
* Keyboard Shortcut: ^→
* Action: Send Escape Sequence
* Esc+: f
Learn more about what you can do with [iTerm2][3] and enjoy the custom experience.
### A simple command prompt
I am one of those boring terminal prompt users. I don't include Git directory or exit code, and I only use a single line. The only fancy component I use is [kubectx][4], which includes the current Kubernetes context. As an [OpenShift Dedicated][5] Site Reliability Engineer (SRE), I have to run commands with the appropriate context, and `kubectx` makes it easy to know where I am when I'm typing. So, my Bash PS1 is the boring `username@host cwd $`, save for the Kubernetes context prefix.
There is no doubt that I'm on the minimalist side, compared to some fancy terminals I've seen. Some people enjoy transparency, and others prefer a lot of information on their prompts—from the time to the exit code and everything else. I find it distracting in my terminals, so I enjoy those setups from afar.
### Beautifully complex dotfiles
Compared to my minimalist terminal, it's easy to see where I put my maximalist efforts: deploying my [dotfiles][6], including my `.bash_profile` and my overall Mac setup.
I use a [series of Makefiles][7], hosted through GitHub, to manage my Mac setup. This pulls in my [dotfile-specific deployment mechanism][8], which is also in GitHub. Why all the tooling around security you ask? IT professionals and hobbyists alike need a robust way to put secure pieces of data on new systems. Maybe you prefer your SSH config to be hidden, or maybe you're deploying credentials through a third-party system. I find it useful to keep my secure data with everything else, and I solved this problem with [Ansible Vault][9]. All my secrets are stored in Git, encrypted with Ansible Vault. Decryption is handled with Makefiles.
Whether I'm installing for the first time or updating existing dotfiles, I (of course) must have Ansible Vault, and to avoid having to install that everywhere, I put it in a container that I run with Docker, which I do have installed everywhere. I put the decryption passphrase into a file, `run make`, and clean up everything with `make clean`. (You can learn more by [exploring the dotfiles][8].)
I will say that this management scheme may be over the top, but some folks like complicated terminal prompts. So perhaps in the balance, it all evens out.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/iterm2-mac
Author: [Lisa Seelye][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/lisa
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: https://www.iterm2.com/
[3]: https://www.iterm2.com/documentation.html
[4]: https://github.com/ahmetb/kubectx
[5]: https://www.openshift.com/products/dedicated/
[6]: https://opensource.com/article/19/3/move-your-dotfiles-version-control
[7]: https://github.com/lisa/mac-setup
[8]: https://github.com/lisa/dotrc
[9]: https://docs.ansible.com/ansible/latest/user_guide/vault.html


@ -0,0 +1,416 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (LaTeX typesetting part 2 (tables))
[#]: via: (https://fedoramagazine.org/latex-typesetting-part-2-tables/)
[#]: author: (Earl Ramirez https://fedoramagazine.org/author/earlramirez/)
LaTeX typesetting part 2 (tables)
======
![][1]
LaTeX offers a number of tools to create and customise tables. In this series, we will be using the tabular and tabularx environments to create and customise tables.
### Basic table
To create a table, you simply specify the environment \begin{tabular}{columns}:
```
\begin{tabular}{c|c}
    Release &Codename \\ \hline
    Fedora Core 1 &Yarrow \\
    Fedora Core 2 &Tettnang \\
    Fedora Core 3 &Heidelberg \\
    Fedora Core 4 &Stentz \\
\end{tabular}
```
![Basic Table][2]
In the above example, "{c|c}" in the curly brackets refers to the position of the text in the columns. The table below summarises the positional arguments together with their descriptions.
Position | Description
---|---
c | Position text in the centre
l | Position text left-justified
r | Position text right-justified
p{width} | Align the text at the top of the cell
m{width} | Align the text in the middle of the cell
b{width} | Align the text at the bottom of the cell
> Both m{width} and b{width} require the array package to be specified in the preamble.
Using the example above, let us break down the important points used and describe a few more options that you will see in this series.
Option | Description
---|---
& | Defines each cell; the ampersand is only used from the second column
\\ | Terminates the row and starts a new row
\hline | Specifies a horizontal line (optional)
*{num}{form} | Handy when you have many columns; an efficient way of limiting repetition
### Customising our table
Now that we have seen some of the options available, let us create a table using the options described in the previous section.
```
\begin{tabular}{*{3}{|l|}}
\hline
    \textbf{Version} &\textbf{Code name} &\textbf{Year released} \\
\hline
    Fedora 6 &Zod &2006 \\ \hline
    Fedora 7 &Moonshine &2007 \\ \hline
    Fedora 8 &Werewolf &2007 \\
\hline
\end{tabular}
```
![Customise Table][3]
### Managing long text
With LaTeX, if a column contains a lot of text, it will not be formatted well and will not look presentable.
The example below shows how long text is formatted by default; we will use "blindtext" in the preamble so that we can produce sample text.
```
\begin{tabular}{|l|l|}\hline
    Summary &Description \\ \hline
    Test &\blindtext \\
\end{tabular}
```
![Default Formatting][4]
As you can see, the text exceeds the page width; however, there are a couple of options to overcome this challenge.
  * Specify the column width, for example, m{5cm}
  * Utilise the tabularx environment, which requires the tabularx package in the preamble
### Managing long text with column width
By specifying the column width, the text will be wrapped to that width, as shown in the example below.
```
\begin{tabular}{|l|m{14cm}|} \hline
    Summary &Description \\ \hline
    Test &\blindtext \\ \hline
\end{tabular}\vspace{3mm}
```
![Column width][5]
### Managing long text with tabularx
Before we can leverage tabularx, we need to add it to the preamble. Tabularx takes the following form:
**\begin{tabularx}{width}{columns}**
```
\begin{tabularx}{\textwidth}{|l|X|} \hline
Summary & Tabularx Description\\ \hline
Text &\blindtext \\ \hline
\end{tabularx}
```
![Tabularx][6]
Notice that the column where we want the long text to be wrapped has a capital "X" specified.
### Multirow and multicolumn
There are times when you will need to merge rows and/or columns. This section describes how that is accomplished. To use multirow and multicolumn, add multirow to the preamble.
### Multirow
Multirow takes the following argument: _\multirow{number_of_rows}{width}{text}_. Let us look at the example below.
```
\begin{tabular}{|l|l|}\hline
    Release &Codename \\ \hline
    Fedora Core 4 &Stentz \\ \hline
    \multirow{2}{*}{MultiRow} &Fedora 8 \\
    &Werewolf \\ \hline
\end{tabular}
```
![MultiRow][7]
In the above example, two rows were specified; the * tells LaTeX to automatically manage the size of the cell.
### Multicolumn
The multicolumn argument is _\multicolumn{number_of_columns}{cell_position}{text}_. The example below demonstrates multicolumn.
```
\begin{tabular}{|l|l|l|}\hline
    Release &Codename &Date \\ \hline
    Fedora Core 4 &Stentz &2005 \\ \hline
    \multicolumn{3}{|c|}{Multi-Column} \\ \hline
\end{tabular}
```
![Multi-Column][8]
### Working with colours
Colours can be assigned to the text, an individual cell or the entire row. Additionally, we can configure alternating colours for each row.
Before we can add colour to our tables, we need to include _\usepackage[table]{xcolor}_ in the preamble. We can also define colours using the following colour reference, [LaTeX Colour][9], or by adding an exclamation mark after the colour followed by the shade from 0 to 100. For example, _gray!30_.
```
\definecolor{darkblue}{rgb}{0.0, 0.0, 0.55}
\definecolor{darkgray}{rgb}{0.66, 0.66, 0.66}
```
The example below demonstrates a table with alternating colours; \rowcolors takes the following options: _\rowcolors{row_start_colour}{even_row_colour}{odd_row_colour}_.
```
\rowcolors{2}{darkgray}{gray!20}
\begin{tabular}{c|c}
    Release &Codename \\ \hline
    Fedora Core 1 &Yarrow \\
    Fedora Core 2 &Tettnang \\
    Fedora Core 3 &Heidelberg \\
    Fedora Core 4 &Stentz \\
\end{tabular}
```
![Alt colour table][10]
In addition to the above example, \rowcolor can be used to specify the colour of each row; this method works best when there are multi-rows. The following examples show the impact of using \rowcolors with multi-row and how to work around it.
![Impact on multi-row][11]
As you can see, the _multi-row_ is visible in the first row; to fix this we have to do the following.
```
\begin{tabular}{|l|l|}\hline
    \rowcolor{darkblue}\textsc{\color{white}Release}  &\textsc{\color{white}Codename} \\ \hline
    \rowcolor{gray!10}Fedora Core 4 &Stentz \\ \hline
    \rowcolor{gray!40}&Fedora 8 \\
    \rowcolor{gray!40}\multirow{-2}{*}{Multi-Row} &Werewolf \\ \hline
\end{tabular}
```
![Multi-row][12]
Let us discuss the changes that were implemented to resolve the multi-row with the alternating colour issue.
  * The first row started above the multi-row
  * The number of rows was changed from 2 to -2, which means to read from the line above
  * \rowcolor was specified for each row; more importantly, the multi-rows must have the same colour so that you get the desired result
One last note on colour: to change the colour of a column, you need to create a new column type and define its colour. The example below illustrates how to define the new column colour.
```
\newcolumntype{g}{>{\columncolor{darkblue}}l}
```
Let's break it down:
  * \newcolumntype{g}: defines the letter _g_ as the new column
  * {>{\columncolor{darkblue}}l}: here we select our desired colour, and _l_ tells the column to be left-justified; this can be substituted with _c_ or _r_
```
\begin{tabular}{g|l}
    \textsc{Release}  &\textsc{Codename} \\ \hline
    Fedora Core 4 &Stentz \\
    &Fedora 8 \\
    \multirow{-2}{*}{Multi-Row} &Werewolf \\
\end{tabular}
```
![Column Colour][13]
### Landscape table
There may be times when your table has many columns and will not fit elegantly in portrait. With the _rotating_ package in the preamble, you will be able to create a sideways table. The example below demonstrates this.
For the landscape table, we will use the _sidewaystable_ environment and add the tabular environment within it; we also specify some additional options.
* \centering to position the table in the centre of the page
* \caption{} to give our table a name
* \label{} this enables us to reference the table in our document
```
\begin{sidewaystable}
\centering
\caption{Sideways Table}
\label{sidetable}
\begin{tabular}{ll}
    \rowcolor{darkblue}\textsc{\color{white}Release}  &\textsc{\color{white}Codename} \\
    \rowcolor{gray!10}Fedora Core 4 &Stentz \\
    \rowcolor{gray!40} &Fedora 8 \\
    \rowcolor{gray!40}\multirow{-2}{*}{Multi-Row} &Werewolf \\
\end{tabular}\vspace{3mm}
\end{sidewaystable}
```
![Sideways Table][14]
### List and tables
To include a list in a table, you can use tabularx and include the list in the column where the _X_ is specified. Another option is to use tabular, but you must specify the column width.
### List in tabularx
```
\begin{tabularx}{\textwidth}{|l|X|} \hline
    Fedora Version &Editions \\ \hline
    Fedora 32 &\begin{itemize}[noitemsep]
        \item CoreOS
        \item Silverblue
        \item IoT
    \end{itemize} \\ \hline
\end{tabularx}\vspace{3mm}
```
![List in tabularx][15]
### List in tabular
```
\begin{tabular}{|l|m{6cm}|}\hline
    Fedora Version &Editions \\ \hline
    Fedora 32 &\begin{itemize}[noitemsep]
        \item CoreOS
        \item Silverblue
        \item IoT
    \end{itemize} \\ \hline
\end{tabular}
```
![List in tabular][16]
### Conclusion
LaTeX offers many ways to customise your tables with tabular and tabularx. You can also add both tabular and tabularx within the table environment (\begin{table}) to add the table name and to position the table.
### LaTeX packages
The packages used in this series are:
```
\usepackage{fullpage}
\usepackage{blindtext}  % add demo text
\usepackage{array} % used for column positions
\usepackage{tabularx} % adds tabularx which is used for text wrapping
\usepackage{multirow} % multi-row and multi-column support
\usepackage[table]{xcolor} % add colour to the columns
\usepackage{rotating} % for landscape/sideways tables
```
### Additional Reading
This was an intermediate lesson on tables; for more advanced information about tables and LaTeX in general, you can go to the [LaTeX Wiki][17].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/latex-typesetting-part-2-tables/
Author: [Earl Ramirez][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://fedoramagazine.org/author/earlramirez/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/06/latex-series-816x345.png
[2]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-13.png
[3]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-23.png
[4]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-10.png
[5]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-11.png
[6]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-12.png
[7]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-15.png
[8]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-16.png
[9]: https://latexcolor.com
[10]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-17.png
[11]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-18.png
[12]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-19.png
[13]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-24.png
[14]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-20.png
[15]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-21.png
[16]: https://fedoramagazine.org/wp-content/uploads/2020/06/image-22.png
[17]: https://en.wikibooks.org/wiki/LaTeX/Tables


@ -0,0 +1,336 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Scaling a GraphQL Website)
[#]: via: (https://theartofmachinery.com/2020/06/29/scaling_a_graphql_site.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Scaling a GraphQL Website
======
For obvious reasons, I normally write abstractly about work I've done for other people, but I've been given permission to write about a website, [Vocal][1], that I did some SRE work on last year. I actually gave [a presentation at GraphQL Sydney back in February][2], but for various reasons it's taken me this long to get it into a blog post.
Vocal is a GraphQL-based website that got traction and hit scaling problems that I got called in to fix. Here's what I did. Obviously, you'll find this post useful if you're scaling another GraphQL website, but most of it's representative of what you have to deal with when a site gets enough traffic to cause technical problems. If website scalability is a key interest of yours, you might want to read [my recent post about scalability][3] first.
### Vocal
![][4]
Vocal is a blogging platform publishing everything from diaries to movie reviews to opinion pieces to recipes to professional and amateur photography to beauty and lifestyle tips and poetry. Of course, there's no shortage of proud pet owners with cute cat and dog pictures.
![][5]
One thing that's a bit different about Vocal is that it lets everyday people get paid for producing works that viewers find interesting. Authors get a small amount of money per page view, and can also receive donations from other users. There are professionals using the platform to show off their work, but for most users it's just a fun hobby that happens to make some extra pocket money as a bonus.
Vocal is the product of [Jerrick Media][6], a New Jersey startup. Development started in 2015 in collaboration with [Thinkmill][7], a medium-sized Sydney software development consultancy that specialises in all things JavaScript, React and GraphQL.
### Some spoilers for the rest of this post
I was told that unfortunately I can't give hard traffic numbers for legal reasons, but publicly available information can give an idea. Alexa ranks all websites it knows of by traffic level. Here's a plot of Alexa rank I showed in my talk, showing growth from November 2019 up to getting ranked number 5,567 in the world by February.
![Vocal global Alexa rank rising from #9,574 in November 2019 to #5,567 in February 2020.][8]
It's normal for the curve to slow down because it requires more and more traffic to win each position. Vocal is now at around #4,900. Obviously there's a long way to go, but that's not shabby at all for a startup. Most startups would gladly swap their Alexa rank with Vocal.
Shortly after the site was upgraded, Jerrick Media ran a marketing campaign that doubled traffic. All we had to do on the technical side was watch numbers go up in the dashboards. In the past 9 months since launch, there have only been two platform issues needing staff intervention: [the once-in-five-years AWS RDS certificate rotation that landed in March][9], and an app rollout hitting a Terraform bug. I've been very happy with how little platform busywork is needed to keep Vocal running.
Here's an overview of the technical stuff I'll talk about in this post:
* Technical and historical background
* Database migration from MongoDB to Postgres
* Deployment infrastructure revamp
* Making the app compatible with scaling
* Making HTTP caching work
  * Miscellaneous performance tweaks
### Some background
Thinkmill built a website using [Next.js][10] (a React-based web framework), talking to a GraphQL API provided by [Keystone][11] in front of MongoDB. Keystone is a GraphQL-based headless CMS library: you define a schema in JavaScript, hook it up to some data storage, and get an automatically generated GraphQL API for data access. It's a free and open-source software project that's commercially backed by Thinkmill.
#### Vocal V2
Version 1 of Vocal got traction. It found a userbase that liked the product, and it grew, and eventually Jerrick Media asked Thinkmill to help develop a version 2, which was successfully launched in September last year. The Jerrick Media folk avoided the [second system effect][12] by generally basing changes on user feedback, so they were [mostly UI and feature changes that I won't go into][13]. Instead, I'll talk about the stuff I was brought in for: making the new site more robust and scalable.
For the record, I'm thankful that I got to work with Jerrick Media and Thinkmill on Vocal, and that they let me present this story, but [I'm still an independent consultant][14]. I wasn't paid or even asked to write this post, and this is still my own personal blog.
### The database migration
Thinkmill suffered several scalability problems with using MongoDB for Vocal, and decided to upgrade Keystone to version 5 to take advantage of its new Postgres support.
If you've been in tech long enough to remember the “NoSQL” marketing from the end of the 00s, that might surprise you. The message was that relational (SQL) databases like Postgres aren't as scalable as “webscale” NoSQL databases like MongoDB. It's technically true, but the scalability of NoSQL databases comes from compromises in the variety of queries that can be efficiently handled. Simple, non-relational databases (like document and key-value databases) have their places, but when used as a general-purpose backend for an app, the app often outgrows the querying limitations of the database before it outgrows the theoretical scaling limit a relational database would have. Most of Vocal's DB queries worked just fine with MongoDB, but over time more and more queries needed hacks to work at all.
In terms of technical requirements, Vocal is very similar to Wikipedia, one of the biggest sites in the world. Wikipedia runs on MySQL (or rather, its fork, MariaDB). Sure, some significant engineering is needed to make that work, but I don't see relational databases being a serious threat to Vocal's scaling in the foreseeable future.
At one point I checked, the managed AWS RDS Postgres instances cost less than a fifth of the old MongoDB instances, yet CPU usage of the Postgres instances was still under 10%, despite serving more traffic than the old site. That's mostly because of a few important queries that just never were efficient under the document database architecture.
The migration could be a blog post of its own, but basically a Thinkmill dev built an [ETL pipeline][15] using [MoSQL][16] to do the heavy lifting. Thanks to Keystone being a FOSS project, I was also able to contribute some performance improvements to its GraphQL to SQL mapping. For that kind of stuff, I always recommend Markus Winand's SQL blogs: [Use the Index Luke][17] and [Modern SQL][18]. His writing is friendly and accessible to non-experts, yet has most of the theory you need for writing fast and effective SQL. A good, DB-specific book on performance gives you the rest.
### The platform
#### The architecture
V1 was a couple of Node.js apps running on a single virtual private server (VPS) behind Cloudflare as a CDN. I'm a fan of avoiding overengineering as a high priority, so that gets a thumbs up from me. However, by the time V2 development started, it was obvious that Vocal had outgrown that simple architecture. It didn't give Thinkmillers many options when handling big traffic spikes, and it made updates hard to deploy safely and without downtime.
Here's the new architecture for V2:
![Architecture of Vocal V2. Requests come through a CDN to a load balancer in AWS. The load balancer distributes traffic to two apps, "Platform" and "Website". "Platform" is a Keystone app storing data in Redis and Postgres.][19]
Basically, the two Node.js apps have been replicated and put behind a load balancer. Yes, that's it. In my SRE work, I often meet engineers who expect a scalable architecture to be more complicated than that, but I've worked on sites that are orders of magnitude bigger than Vocal but are still just replicated services behind load balancers, with DB backends. If you think about it, if the platform architecture needs to keep getting significantly more complicated as the site grows, it's not really very scalable. Website scalability is mostly about fixing the many little implementation details that prevent scaling.
Vocal's architecture might need a few additions if traffic grows enough, but the main reason it would get more complicated is new features. For example, if (for some reason) Vocal needed to handle real-time geospatial data in future, that would be a very different technical beast from blog posts, so I'd expect architectural changes for it. Most of the complexity in big site architecture is because of feature complexity.
If you don't know how to make your architecture scalable, I always recommend keeping it as simple as you can. Fixing an architecture that's too simple is easier and cheaper than fixing an architecture that's too complex. Also, an unnecessarily complex architecture is more likely to have mistakes, and those mistakes will be harder to debug.
By the way, Vocal happened to be split into two apps, but that's not important. A common scaling mistake is to prematurely split an app into smaller services in the name of scalability, but split the app in the wrong place and cause more scalability problems overall. Vocal could have scaled okay as a monolithic app, but the split is also in a good place.
#### The infrastructure
Thinkmill has a few people who have experience working with AWS, but it's primarily a dev shop and needed something more “hands off” than the old Vocal deployment. I ended up deploying the new Vocal on [AWS Fargate][20], which is a relatively new backend to Elastic Container Service (ECS). In the old days, many people wanted ECS to be a simple “run my Docker container as a managed service” product, and were disappointed that they still had to build and manage their own server cluster. With ECS Fargate, AWS manages the cluster. It supports running Docker containers with the basic nice things like replication, health checking, rolling updates, autoscaling and simple alerting.
A good alternative would have been a managed Platform-as-a-Service (PaaS) like App Engine or Heroku. Thinkmill was already using them for simple projects, but often needed more flexibility with other projects. There are much bigger sites running on PaaSes, but Vocal is at a scale where a custom cloud deployment can make sense economically.
Another obvious alternative would have been Kubernetes. Kubernetes has a lot more features than ECS Fargate, but it's a lot more expensive — both in resource overhead, and the staffing needed for maintenance (such as regular node upgrades). As a rule, I don't recommend Kubernetes to any place that doesn't have dedicated DevOps staff. Fargate has the features Vocal needs, and has let Thinkmill and Jerrick Media focus on website improvements, not infrastructure busywork.
Yet another option was “Serverless” function products like AWS Lambda or Google Cloud Functions. They're great for handling services with very low or highly irregular traffic, but (as I'll explain) ECS Fargate's autoscaling is enough for Vocal's backend. Another plus of these products is that they allow developers to deploy things in cloud environments without needing to learn a lot about cloud environments. The tradeoff is that the Serverless product becomes tightly coupled to the development process, and to the testing and debugging processes. Thinkmill already had enough AWS expertise in-house to manage a Fargate deployment, and any dev who knows how to make a Node.js Express Hello World app can work on Vocal without learning anything about either Serverless functions or Fargate.
An obvious downside of ECS Fargate is vendor lock-in. However, avoiding vendor lock-in is a tradeoff like avoiding downtime. If you're worried about migrating, it doesn't make sense to spend more on platform independence than you would on a migration. The total amount of Fargate-specific code in Vocal is <500 lines of [Terraform][21]. The most important thing is that the Vocal app code itself is platform agnostic. It can run on normal developer machines, and then be packaged up into a Docker container that can run practically anywhere a Docker container can, including ECS Fargate.
Another downside of Fargate is that it's not trivial to set up. Like most things in AWS, it's in a world of VPCs, subnets, IAM policies, etc. Fortunately, that kind of stuff is quite static (unlike a server cluster that requires maintenance).
### Making a scaling-ready app
There's a bunch of stuff to get right if you want to run an app painlessly at scale. You're doing well if you follow [the Twelve-Factor App design][22], so I won't repeat it here.
There's no point building a “scalable” system if staff can't operate it at scale — that's like putting a jet engine on a unicycle. An important part of making Vocal scalable was setting up stuff like CI/CD and [infrastructure as code][23]. Similarly, some deployment ideas aren't worth it because they make production too different from the development environment (see also [point #10 of the Twelve-Factor App][24]). Every difference between production and development slows app development and can be expected to lead to a bug eventually.
### Caching
Caching is a really big topic — I once gave [a presentation on just HTTP caching][25], and that still wasn't enough. I'll stick to the essentials for GraphQL here.
First, an important warning: Whenever you have performance problems, you might wonder, “Can I make this faster by putting this value into a cache for future reuse?” **Microbenchmarks will practically _always_ tell you the answer is “yes”.** However, putting caches everywhere will tend to make your overall system **slower**, thanks to problems like cache coherency. Here's my mental checklist for caching:
1. Ask if the performance problem needs to be solved with caching
2. Really ask (non-caching performance wins tend to be more robust)
3. Ask if the problem can be solved by improving existing caches
4. If all else fails, maybe add a new cache
One cache system you'll always have is the HTTP caching system, so a corollary is that it's a good idea to use HTTP caching effectively before trying to add extra caches. I'll focus on that in this post.
Another very common trap is using a hash map or something inside the app for caching. [It works great in local development but performs badly when scaled.][26] The best thing is to use an explicit caching library that supports pluggable backends like Redis or Memcached.
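As a concrete sketch of what that looks like in Node.js (assuming the `ioredis` client; the key names and `fetchUserProfile()` are placeholders of my own), the cached values live in Redis rather than in process memory, so every replica behind the load balancer sees the same cache:

```
const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL); // shared backend, not an in-process Map

// Read-through cache helper: try Redis first, otherwise compute and store with a TTL.
async function cached(key, ttlSeconds, compute) {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit);

  const value = await compute();
  await redis.set(key, JSON.stringify(value), 'EX', ttlSeconds);
  return value;
}

// Usage (fetchUserProfile stands in for some expensive DB or API call):
// const profile = await cached(`profile:${userId}`, 300, () => fetchUserProfile(userId));
```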
#### The basics
There are two types of caches in the HTTP spec: private and public. Private caches are caches that don't share data with multiple users — in practice, the user's browser cache. Public caches are all the rest. They include ones under your control (such as CDNs or servers like Varnish or Nginx) and ones that aren't (proxies). Proxy caches are rarer in today's HTTPS world, but some corporate networks have them.
![][27]
Caching lookup keys are normally based on URLs, so caching is less painful if you stick to a “same content, same URL; different content, different URL” rule. I.e., give each page a canonical URL, and avoid “clever” tricks returning varying content from one URL. Obviously, this has implications for GraphQL API endpoints (that I'll discuss later).
Your servers can take custom configuration, but the primary way to configure HTTP caching is through HTTP headers you set on web responses. The most important header is `cache-control`. The following says that all caches down the line may cache the page for up to 3600 seconds (one hour):
```
cache-control: max-age=3600, public
```
For user-specific pages (such as user settings pages), it's important to use `private` instead of `public` to tell public caches not to store the response and serve it to other users.
Another common header is `vary`. This tells caches that the response varies based on some things other than the URL. (Effectively it adds HTTP headers to the cache key, alongside the URL.) It's a very blunt tool, which is why I recommend using a good URL structure instead if possible, but an important use case is telling browsers that the response depends on the login cookie, so that they update pages on login/logout.
```
vary: cookie
```
If a page can vary based on login status, you need `cache-control: private` (and `vary: cookie`) even on the public, logged out version, to make sure responses don't get mixed up.
Other useful headers include `etag` and `last-modified`, but I won't cover them here. You might still see some old headers like `expires` and `pragma: cache`. They were made obsolete by HTTP/1.1 back in 1997, so I only use them if I want to disable caching and I'm feeling paranoid.
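As a minimal sketch of how those headers get set in practice (using Node.js/Express purely for illustration; Vocal's responses actually come from Next.js and Apollo, and the routes and render functions below are made up):

```
const express = require('express');
const app = express();

const renderStory = (slug) => `<h1>${slug}</h1>`;         // placeholder renderer
const renderSettings = (req) => '<h1>Your settings</h1>'; // placeholder renderer

// A public page: any cache (browser, CDN, proxy) may keep it for an hour.
app.get('/stories/:slug', (req, res) => {
  res.set('Cache-Control', 'public, max-age=3600');
  res.set('Vary', 'Cookie'); // logged-in and logged-out versions differ
  res.send(renderStory(req.params.slug));
});

// A user-specific page: only the user's own browser may cache it, and only briefly.
app.get('/settings', (req, res) => {
  res.set('Cache-Control', 'private, max-age=60');
  res.set('Vary', 'Cookie');
  res.send(renderSettings(req));
});

app.listen(3000);
```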
#### Clientside headers
Less well known is that the HTTP spec allows `cache-control` headers to be used in client requests to reduce the cache time and get a fresher response. Unfortunately `max-age` greater than 0 doesn't seem to be widely supported by browsers, but `no-cache` can be useful if you sometimes need a fresh response after an update.
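For example, from browser-side JavaScript a client can ask for a revalidated response like this (a minimal sketch using the Fetch API; the URL is a placeholder):

```
// Ask caches along the way to revalidate with the origin instead of serving a stale copy.
async function fetchFresh(url) {
  const response = await fetch(url, {
    headers: { 'Cache-Control': 'no-cache' },
  });
  return response.json();
}
```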
#### HTTP caching and GraphQL
As above, the normal cache key is the URL. But GraphQL APIs often use just one endpoint (let's call it `/api/`). If you want a GraphQL query to be cachable, you need the query and its parameters to appear in the URL path, like `/api/?query={user{id}}&variables={"x":99}` (ignoring URL escaping). The trick is to configure your GraphQL client to use HTTP GET requests for queries (e.g., [set `useGETForQueries` for `apollo-link-http`][28]).
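With Apollo Client and `apollo-link-http`, that is a one-line option (the endpoint URL below is a placeholder):

```
import ApolloClient from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { createHttpLink } from 'apollo-link-http';

// Queries go out as GET requests, so they get cache-friendly URLs;
// mutations still go out as POST requests.
const link = createHttpLink({
  uri: 'https://api.example.com/api/', // placeholder endpoint
  useGETForQueries: true,
});

const client = new ApolloClient({ link, cache: new InMemoryCache() });
```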
Mutations mustnt be cached, so they still need to use HTTP POST requests. With POST requests, caches will only see `/api/` as the URL path, but caches will refuse to cache POST requests outright. Remember: GET for non-mutating queries, POST for mutations. Theres a case where you might want to avoid GET for a query: if the query variables contain sensitive information. URLs have a habit of appearing in log files, browser history and chat channels, so sensitive information in URLs is usually a bad idea. Things like authentication should be done as non-cachable mutations, anyway, so this is a rare case, but one worth remembering.
Unfortunately, theres a problem: GraphQL queries tend to be much larger than REST API URLs. If you simply switch on GET-based queries, youll get some pretty big URLs, easily bigger than the ~2000 byte limit before some popular browsers and servers just wont accept them. A solution is to send some kind of query ID, instead of sending the whole query. (I.e., something like `/api/?queryId=42&variables={"x":99}`.) Apollo GraphQL server supports two ways of doing this.
One way is to [extract all the GraphQL queries from the code and build a lookup table thats shared serverside and clientside][29]. One downside is that it makes the build process more complicated. Another downside is that it couples the client project to the server project, which goes against a selling point of GraphQL. Yet another downside is that version X of your code might recognise a different set of queries from version Y of your code. This is a problem because 1) your replicated app will serve multiple versions during an update rollout, or rollback, and 2) clients might use cached JavaScript, even as you upgrade or downgrade the server.
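In case the mechanism isnt obvious, the lookup table boils down to something like this sketch (the IDs and queries are made up; real tooling generates the table at build time for both client and server):
```
# Illustrative persisted-query manifest shared between client and server builds.
PERSISTED_QUERIES = {
    "42": "query ($x: Int!) { user(id: $x) { id name } }",
    "43": "query { articles { id title } }",
}

def resolve_query(query_id: str) -> str:
    """Server side: map a short query ID back to the full query text."""
    try:
        return PERSISTED_QUERIES[query_id]
    except KeyError:
        # Version skew: a client (or cached JS bundle) knows a query that
        # this server build doesn't, which is the coupling problem above.
        raise ValueError(f"unknown persisted query ID: {query_id}")

print(resolve_query("42"))
```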
Another way is what Apollo GraphQL calls [Automatic Persisted Queries (APQs)][30]. With APQs, the query ID is a hash of the query. The client optimistically makes a request to the server, referring to the query by hash. If the server doesnt recognise the query, the client sends the full query in a POST request. The server stores that query by hash so that it can be recognised in future.
![][31]
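Heres a hedged sketch of that optimistic handshake; it follows Apollos persisted-query extension as I understand it, and the exact error string may differ between server versions:
```
# Sketch of the APQ handshake: try the hash first, fall back to the full query.
import hashlib
import json
import requests

endpoint = "https://api.example.com/api/"  # hypothetical GraphQL endpoint
query = "query { articles { id title } }"
extensions = {
    "persistedQuery": {
        "version": 1,
        "sha256Hash": hashlib.sha256(query.encode("utf-8")).hexdigest(),
    }
}

# 1. Optimistic GET with only the hash: small URL, cachable response.
resp = requests.get(endpoint, params={"extensions": json.dumps(extensions)})
errors = resp.json().get("errors", [])

# 2. If the server doesn't recognise the hash yet, register the full query.
if any(e.get("message") == "PersistedQueryNotFound" for e in errors):
    resp = requests.post(endpoint, json={"query": query, "extensions": extensions})

print(resp.status_code)
```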
#### HTTP caching and Keystone 5
As above, Vocal uses Keystone 5 for generating its GraphQL API, and Keystone 5 works with Apollo GraphQL server. How do we actually set the caching headers?
Apollo supports cache hints on GraphQL schemas. The neat thing is that Apollo gathers all the hints for everything thats touched by a query, and then it automatically calculates the appropriate overall cache header values. For example, take this query:
```
query userAvatarUrl {
authenticatedUser {
name
avatar_url
}
}
```
If `name` has a max age of one day, and the `avatar_url` has a max age of one hour, the overall cache max age would be the minimum, one hour. `authenticatedUser` depends on the login cookie, so it needs a `private` hint, which overrides the `public` on the other fields, so the resulting header would be `cache-control: max-age=3600, private`.
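The combining rule itself is simple enough to sketch in a few lines (this is a toy illustration of the behaviour described above, not Apollos implementation):
```
# Toy illustration: per-field cache hints collapse into one response header.
hints = [
    {"field": "name",              "maxAge": 86400, "scope": "PUBLIC"},
    {"field": "avatar_url",        "maxAge": 3600,  "scope": "PUBLIC"},
    {"field": "authenticatedUser", "maxAge": 86400, "scope": "PRIVATE"},
]

max_age = min(h["maxAge"] for h in hints)  # the most restrictive age wins
scope = "private" if any(h["scope"] == "PRIVATE" for h in hints) else "public"

print(f"cache-control: max-age={max_age}, {scope}")
# cache-control: max-age=3600, private
```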
I added [cache hint support to Keystone lists and fields][32]. Heres a simple example of adding a cache hint to a field in the to-do list demo from the docs:
```
const keystone = new Keystone({
name: 'Keystone To-Do List',
adapter: new MongooseAdapter(),
});
keystone.createList('Todo', {
schemaDoc: 'A list of things which need to be done',
fields: {
name: {
type: Text,
schemaDoc: 'This is the thing you need to do',
isRequired: true,
cacheHint: {
scope: 'PUBLIC',
maxAge: 3600,
},
},
},
});
```
#### One more problem: CORS
Cross-Origin Resource Sharing (CORS) rules create a frustrating conflict with caching in an API-based website.
Before getting stuck into the problem details, let me jump to the easiest solution: putting the main site and API onto one domain. If your site and API are served from one domain, you wont have to worry about CORS rules (but you might want to consider [restricting cookies][33]). If your API is specifically for the website, this is the cleanest solution, and you can happily skip this section.
In Vocal V1, the Website (Next.js) and Platform (Keystone GraphQL) apps were on different domains (`vocal.media` and `api.vocal.media`). To protect users from malicious websites, modern browsers dont just let one website interact with another. So, before allowing `vocal.media` to make requests to `api.vocal.media`, the browser would make a “pre-flight” check to `api.vocal.media`. This is an HTTP request using the `OPTIONS` method that essentially asks if the cross-origin sharing of resources is okay. After getting the okay from the pre-flight check, the browser makes the normal request that was originally intended.
The frustrating thing about pre-flight checks is that they are per-URL. The browser makes a new `OPTIONS` request for each URL, and the server response applies to that URL. [The server cant say that `vocal.media` is a trusted origin for all `api.vocal.media` requests][34]. This wasnt a serious problem when everything was a POST request to the one api endpoint, but after giving every query its own GET-able URL, every query got delayed by a pre-flight check. For extra frustration, the HTTP spec says `OPTIONS` requests cant be cached, so you can find that all your GraphQL data is beautifully cached in a CDN right next to the user, but browsers still have to make pre-flight requests all the way to the origin server every time they use it.
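If you want to see a pre-flight check outside a browser, you can replay one by hand; this sketch uses placeholder domains and simply mimics the headers a browser would send:
```
# Sketch: replaying a browser's CORS pre-flight check by hand.
import requests

api_url = "https://api.example.com/api/?query={user{id}}"  # placeholder API URL

preflight = requests.options(api_url, headers={
    "Origin": "https://www.example.com",      # the site making the call
    "Access-Control-Request-Method": "GET",
})

# The grant (or denial) applies to this URL only, which is why every
# GET-able query URL pays for its own pre-flight round trip.
print(preflight.status_code)
print(preflight.headers.get("Access-Control-Allow-Origin"))
```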
There are a few solutions (if you cant just use a shared domain).
If your API is simple enough, you might be able to exploit the [exceptions to the CORS rules][35].
Some cache servers can be configured to ignore the HTTP spec and cache `OPTIONS` requests anyway (e.g., Varnish-based caches and AWS CloudFront). This isnt as efficient as avoiding the pre-flight requests completely, but its better than the default.
Another (really hacky) option is [JSONP][36]. Beware: you can create security bugs if you dont get this right.
#### Making Vocal more cachable
After making HTTP caching work at the low level, I needed to make the app take better advantage of it.
A limitation of HTTP caching is that its all-or-nothing at the response level. Most of a response can be cachable, but if a single byte isnt, all bets are off. As a blogging platform, most Vocal data is highly cachable, but in the old site almost no _pages_ were cachable at all because of a menu bar in the top right corner. For an anonymous user, the menu bar would show links inviting the user to log in or create an account. That bar would change to a user avatar and profile menu for signed-in users. Because the page varied based on user login status, it wasnt possible to cache any of it in CDNs.
![A typical page from Vocal. Most of the page is highly cachable content, but in the old site none of it was actually cachable because of a little menu in the top right corner.][37]
These pages are generated by Server-Side Rendering (SSR) of React components. The fix was to take all the React components that depended on the login cookie, and force them to be [lazily rendered clientside only][38]. Now the server returns completely generic pages with placeholders for personalised components like the login menu bar. When a page loads in the users browser, these placeholders are filled in clientside by making calls to the GraphQL API. The generic pages can be safely cached in CDNs.
Not only does this trick improve cache hit ratios, it helps improve perceived page load time thanks to human psychology. Blank screens and even spinner animations make us impatient, but once the first content appears, it distracts us for several hundred milliseconds. If people click a Vocal post link from social media and the main content appears immediately from a CDN, very few will ever notice that some components arent fully interactive until a few hundred milliseconds later.
By the way, another trick for getting the first content in front of the user faster is to [stream render the SSR response as its generated][39], instead of waiting for the whole page to be rendered before sending it. Unfortunately, [Next.js doesnt support that yet][40].
The idea of splitting responses for improved cachability also applies to GraphQL. The ability to query multiple pieces of data with one request is normally an advantage of GraphQL, but if the different parts of the response have very different cachability, it can be better overall to split them. As a simple example, Vocals pagination component needs to know the number of pages plus the content for the current page. Originally the component fetched both in one query, but because the total number of pages is a constant across all pages, I made it a separate query so it can be cached.
#### Benefits of caching
The obvious benefit of caching is that it reduces the load on Vocals backend servers. Thats good, but its dangerous to rely on caching for capacity, because you still need a backup plan for when you inevitably drop the cache one day.
The improved responsiveness is a better reason for caching.
A couple of other benefits might be less obvious. Traffic spikes tend to be highly localised. If someone with a lot of social media followers shares a link to a page, Vocal will get a big surge of traffic, but mostly to that one page and its assets. Thats why caches are good at absorbing the worst traffic spikes, making the backend traffic patterns relatively smoother and easier for autoscaling to handle.
Another benefit is graceful degradation. Even if the backends are in serious trouble for some reason, the most popular parts of the site will still be served from the CDN cache.
### Other performance tweaks
As I always say, the secret to scaling isnt making things complicated. Its making things no more complicated than needed, and then thoroughly fixing all the things that prevent scaling. Scaling Vocal involved a lot of little things that wont fit in this post.
Heres one tip: for the difficult debugging problems in distributed systems, the hardest part is usually getting the right data to see whats going on. I can think of plenty of times that Ive got stuck and tried to just “wing it” by guessing instead of figuring out how to find the right data. Sometimes that works, but not for the hard problems.
A related tip is that you can learn a lot by getting real-time data (even just log files under [`tail -F`][41]) on each component in a system, displaying it in various windows in one monitor, and just clicking around the site in another. Im talking about things like, “Hey, why does toggling this one checkbox generate dozens of DB queries in the backend?”
Heres an example of one fix. Some pages were taking more than a couple of seconds to render, but only in the deployment environment, and only with SSR. The monitoring dashboards didnt show any CPU usage spikes, and the apps werent using disk, so it suggested that maybe the app was waiting on network requests, probably to a backend. In a dev environment I could watch how the app worked using [the sysstat tools][42] to record CPU/RAM/disk usage, along with Postgres statement logging and the usual app logs. [Node.js supports probes for tracing HTTP requests][43] using something like [bpftrace][44], but boring reasons meant they didnt work in the dev environment, so instead I found the probes in the source code and made a custom Node.js build with request logging. I used [tcpdump][45] to record network data. That let me find the problem: for every API request made by Website, a new network connection was being created to Platform. (If that hadnt worked, I guess I would have added request tracing to the apps.)
Network connections are fast on a local machine, but take non-negligible time on a real network. Setting up an encrypted connection (like in the production environment) takes even longer. If youre making lots of requests to one server (like an API), its important to keep the connection open and reuse it. Browsers do that automatically, but Node.js doesnt by default because it cant know if youre making more requests. Thats why the problem only appeared with SSR. Like many long debugging sessions, the fix was very simple: just configure SSR to [keep connections alive][46]. The rendering time of the slower pages dropped dramatically.
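The linked `agentkeepalive` package is the Node.js fix; as a rough analogy in Python, its the difference between separate `requests.get()` calls and one reused `requests.Session`, which keeps the underlying connection open:
```
# Analogy only: connection reuse versus a fresh connection per request.
import requests

api_url = "https://api.example.com/api/"  # placeholder backend endpoint

# Without a session, each call may pay for a new TCP (and TLS) handshake.
for _ in range(3):
    requests.get(api_url, params={"query": "{ articles { id } }"})

# With a session, the connection is set up once and reused for every call.
with requests.Session() as session:
    for _ in range(3):
        session.get(api_url, params={"query": "{ articles { id } }"})
```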
If you want to know more about this kind of stuff, I highly recommend reading [the High Performance Browser Networking book][47] (free to read online) and following up with [guides Brendan Gregg has published][48].
### What about your site?
Theres actually a lot more stuff we could have done to improve Vocal, but we didnt do it all. Thats a big difference between doing SRE work for a startup and doing it for a big company as a permanent employee. We had goals, a budget and a launch date, and now Vocal V2 has been running for 9 months with a healthy growth rate.
Similarly, your site will have its own requirements, and is likely quite different from Vocal. However, I hope this post and its links give you at least some useful ideas to make something better for users.
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2020/06/29/scaling_a_graphql_site.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://vocal.media
[2]: https://www.meetup.com/en-AU/GraphQL-Sydney/events/267681845/
[3]: https://theartofmachinery.com/2020/04/21/what_is_high_traffic.html
[4]: https://theartofmachinery.com/images/scaling_a_graphql_site/vocal1.png
[5]: https://theartofmachinery.com/images/scaling_a_graphql_site/vocal2.png
[6]: https://jerrick.media
[7]: https://www.thinkmill.com.au/
[8]: https://theartofmachinery.com/images/scaling_a_graphql_site/alexa.png
[9]: https://aws.amazon.com/blogs/database/amazon-rds-customers-update-your-ssl-tls-certificates-by-february-5-2020/
[10]: https://github.com/vercel/next.js
[11]: https://www.keystonejs.com/
[12]: https://wiki.c2.com/?SecondSystemEffect
[13]: https://vocal.media/resources/vocal-2-0
[14]: https://theartofmachinery.com/about.html
[15]: https://en.wikipedia.org/wiki/Extract,_transform,_load
[16]: https://github.com/stripe/mosql
[17]: https://use-the-index-luke.com/
[18]: https://modern-sql.com/
[19]: https://theartofmachinery.com/images/scaling_a_graphql_site/architecture.svg
[20]: https://aws.amazon.com/fargate/
[21]: https://www.terraform.io/docs/providers/aws/r/ecs_task_definition.html
[22]: https://12factor.net/
[23]: https://theartofmachinery.com/2019/02/16/talks.html
[24]: https://12factor.net/dev-prod-parity
[25]: https://www.meetup.com/en-AU/Port80-Sydney/events/lwcdjlyvjblb/
[26]: https://theartofmachinery.com/2016/07/30/server_caching_architectures.html
[27]: https://theartofmachinery.com/images/scaling_a_graphql_site/http_caches.svg
[28]: https://www.apollographql.com/docs/link/links/http/#options
[29]: https://www.apollographql.com/blog/persisted-graphql-queries-with-apollo-client-119fd7e6bba5
[30]: https://www.apollographql.com/blog/improve-graphql-performance-with-automatic-persisted-queries-c31d27b8e6ea
[31]: https://theartofmachinery.com/images/scaling_a_graphql_site/apq.png
[32]: https://www.keystonejs.com/api/create-list/#cachehint
[33]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#Define_where_cookies_are_sent
[34]: https://lists.w3.org/Archives/Public/public-webapps/2012AprJun/0236.html
[35]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simple_requests
[36]: https://en.wikipedia.org/wiki/JSONP
[37]: https://theartofmachinery.com/images/scaling_a_graphql_site/cachablepage.png
[38]: https://nextjs.org/docs/advanced-features/dynamic-import#with-no-ssr
[39]: https://medium.com/the-thinkmill/progressive-rendering-the-key-to-faster-web-ebfbbece41a4
[40]: https://github.com/vercel/next.js/issues/1209
[41]: https://linux.die.net/man/1/tail
[42]: https://github.com/sysstat/sysstat/
[43]: http://www.brendangregg.com/blog/2016-10-12/linux-bcc-nodejs-usdt.html
[44]: https://theartofmachinery.com/2019/04/26/bpftrace_d_gc.html
[45]: https://danielmiessler.com/study/tcpdump/
[46]: https://www.npmjs.com/package/agentkeepalive
[47]: https://hpbn.co/
[48]: http://www.brendangregg.com/

View File

@ -0,0 +1,600 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use intent parsers for your open source home automation project)
[#]: via: (https://opensource.com/article/20/6/mycroft-intent-parsers)
[#]: author: (Steve Ovens https://opensource.com/users/stratusss)
Use intent parsers for your open source home automation project
======
Use Padatious, Adapt, conversational context, and dialogs to start
programming your first voice AI.
![Working from home at a laptop][1]
In [part 1][2] and [part 2][3] of this series on the [Mycroft][4] open source voice assistant, I laid the groundwork for learning how to create a skill. In [part 3][5], I walked through creating an outline for a skill and recommended creating the skill in pure Python first to ensure the methods work as intended. That way, when things go wrong, you know it is related to how your Mycroft skill is constructed and not the code itself.
In this article, you will enhance the outline from part 3 by adding:
* Mycroft entities
* Padatious intents
* Adapt intents
* Dialogs
* Conversational contexts
The code for this project can be found in [my GitLab][6] repository.
Let's get to it!
### Improve your skill
As a reminder, the purpose of this project is to use Mycroft to add items to shopping lists in the [OurGroceries][7] app. However, this tutorial can be applied to a variety of home automation applications like turning on your lights, getting the morning weather report, or controlling your entertainment systems.
Here's what the skill's outline looks like so far:
```
from mycroft import intent_file_handler, MycroftSkill, intent_handler
from mycroft.skills.context import adds_context, removes_context
class OurGroceriesSkill(MycroftSkill):
    def __init__(self):
        MycroftSkill.__init__(self)
    # Mycroft should call this function directly when the user
    # asks to create a new item
    def create_item_on_list(self, message):
        pass
    # Mycroft should also call this function directly
    def create_shopping_list(self, message):
        pass
    # This is not called directly, but instead should be triggered
    # as part of context aware decisions
    def handle_dont_create_anyways_context(self):
        pass
    # This function is also part of the context aware decision tree
    def handle_create_anyways_context(self):
        pass
    def stop(self):
        pass
def create_skill():
    return OurGroceriesSkill()
```
As it stands, Mycroft will load the skill successfully, but it won't do anything because all the methods have the command `pass` in them. For now, ignore the `__init__(self)` method and start working with the `create_item_on_list` method. From the comments, you can see the intent is for Mycroft to call this method directly. That means that you need to declare an _intent_. How do you do that?
### Working with intents
You may have noticed that the [outline][8] `mycroft-msk` created (in the third article) has what looks like a weird function, `@intent_file_handler('ourgroceries.intent')`, above the `handle_test` method. These are special notations in Python called decorators (if you want, head over to Real Python for a [primer on Python decorators][9]). For this tutorial, it is sufficient to know that a decorator is a way to pass your function into a prebuilt function developed by Mycroft. This saves a lot of work and boilerplate code.
Recall from part three of this series that this project uses two intent parsers: Padatious and Adapt, which I described in the second article.
#### Padatious intents
So how do you know which decorator to use and where to use it? Great questions! I'll start with Padatious, which is the easier of the two to understand. If you recall from the second article, [Padatious][10] skills are _trained_ based on phrases that the skill's developer decides are relevant for the skill. As Mycroft may install many skills using the Padatious intent engine, each intent is given a score by the neural network module that Mycroft employs. Mycroft then chooses the highest-scoring intent and executes its functions.
The phrases which Mycroft will use to train the intent are placed inside a file with a `.intent` file extension. You can have multiple `.intent` files, but you must reference each file explicitly. That means if you have `create.item.intent` and `create.category.intent`, there will be no confusion about which file your variables are populated from, as you must call them by file name. As you can see in the output from `mycroft-msk`, the decorator is intuitively named `@intent_file_handler()`. Simply use the name of the file as the argument for the decorator, such as `@intent_file_handler("create.item.intent")`.
Think about what phrases someone might use to add an item to a shopping list. Since the motivating factor for this skill was using Mycroft to create a grocery list, the example code uses food-related terms, but you could use generic terms. With that said, here are some phrases you might say to add an item to your grocery list:
* Add tomatoes to my shopping list
* Add tomatoes to the grocery list
* Add tomatoes to Costco list
You may choose to have some grammatically incorrect phrases as well, to account for Mycroft misunderstanding the user's voice. From the list above, what pieces of information are programmatically relevant? `tomatoes`, `shopping list`, `grocery list`, and `Costco list`. The official documentation refers to this type of object as an _entity_. You can think of an entity as a variable, if that makes more sense to you. This will become clearer later when you create an intent file. While the `mycroft-msk` command will put intents in `locale/en-us` by default, I put mine under `vocab/en-us/`. Why? Well, that's because the Adapt intent parser stores its files in `vocab`, and I prefer to keep all my intent files in the same location. My file `vocab/en-us/create.item.intent` starts with:
```
add {Food} to my {ShoppingList}
```
This defines the _entities_ `Food` and `ShoppingList`.
**IMPORTANT NOTE**:  Padatious entities are **not** case-sensitive, and Padatious interprets everything in lower-case. For example, `ShoppingList` will be `shoppinglist`.
Now that you have an intent, have Mycroft say a phrase containing your _entities_. Don't forget to add the intent decorator! Your new function will look like this:
```
    @intent_file_handler("create.item.intent")
    def create_item_on_list(self, message):
        """
        This function adds an item to the specified list
        :param message:
        :return: Nothing
        """
        item_to_add = message.data.get('food')
        list_name = message.data.get('shoppinglist')
        self.speak("Adding %s to %s" % (item_to_add, list_name))
```
The graphic below uses three phrases:
* Add tomatoes to my shopping list
* Add nails to my hardware list
* Add buns to groceries list
Mycroft will not be able to figure out the intent behind one of these phrases. Can you guess which one and why?
![Mycroft processing intent][11]
(Steve Ovens, [CC BY-SA 4.0][12])
In case the video was a little too fast for you, here's the answer: Mycroft can't process the phrase `add buns to groceries list` because it is missing the keyword `my`. The intent explicitly says `add {Food} to my {ShoppingList}`. Without the word `my` as input from the user, the Padatious intent for the skill scores low and, therefore, Mycroft doesn't choose this skill to handle the request. The simplest solution is to add a new line to your intent file, like this:
```
add {Food} to {ShoppingList}
```
Mycroft can often reload skills when it detects a change, but I prefer to restart Mycroft's skills section to make sure. I also clear the logs a lot during testing, so I run the following command to do everything in one line:
```
./stop-mycroft.sh skills;sudo rm -f /var/log/mycroft/skills.log; ./start-mycroft.sh skills; mycroft-cli-client
```
Testing the skill after Mycroft restarts yields the following results:
```
add buns to groceries list                                                                
 >> Adding buns to groceries list
```
In case it is not clear, any response Mycroft makes in the `mycroft-cli-client` is prefixed by `>>` to indicate its response. Now that you have a basic intent, go back and review the objectives for this skill from part 3 of this series:
1. Login/authenticate
2. Get a list of the current grocery lists
3. Add item to a specific grocery list
4. Add item to a category under a specific list
5. Be able to add a category (since OurGroceries allows items to be placed in categories)
Ignore the first two items for now—those deal with the online portion of the project, and you need to complete the other objectives first. For the third item, you have a basic intent that, in theory, should be able to take the _entities_ Mycroft detects and turn them into variables in the Python code. For the fourth item on the list, add two new lines to your intent:
```
add {Food} to my {ShoppingList} under {Category}
add {Food} to {ShoppingList} under {Category}
```
You also need to alter your function slightly. When using the Padatious intent parser, _entities_ are returned via the `message.data.get()` function. This function will return `None` if the entity is undefined. In other words, if Mycroft cannot parse `{Category}` from an `utterance` the user makes, `message.data.get()` will return `None`. With this in mind, here is some quick test code:
```
    @intent_file_handler("create.item.intent")
    def create_item_on_list(self, message):
        """
        This function adds an item to the specified list
        :param message:
        :return: Nothing
        """
        item_to_add = message.data.get('food')
        list_name = message.data.get('shoppinglist')
        category_name = message.data.get('category')
        if category_name is None:
            self.speak("Adding %s to %s" % (item_to_add, list_name))
        else:
            self.speak("Adding %s to %s under the category %s" % (item_to_add, list_name, category_name))
```
Here is an example that tests these code changes:
![Mycroft testing code changes][13]
(Steve Ovens, [CC BY-SA 4.0][12])
In the example, Mycroft responds with `>> Adding nails to my hardware list under`, yet the only time you told Mycroft to say the word `under` is when `category_name` has a value other than `None`. This is because the intent parser is interpreting the word `under` as a part of the entity `ShoppingList`. Because the utterance had the word `my` in it, the sentences that matched the utterances could have been either:
1. `add {Food} to my {ShoppingList}`
or
2. `add {Food} to my {ShoppingList} under {Category}`
Since the user did not state the `{Category}`, Mycroft selected the first statement as the most correct. This means that anything _after_ the word `my` would be cast into the entity `{ShoppingList}`. And since `{Category}` is therefore `None`, Mycroft speaks, "Adding nails to my hardware list under" instead of "Adding nails to my hardware list under None."
Padatious may seem a bit simplistic at first. For each phrase you need Mycroft to match, simply add a line to the intent file. However, with a complex intent, you could have several dozen lines attempting to cover all the different utterances you want to handle.
There is another option that may be worth considering. Padatious intents support _parentheses expansion_. This means you can use a form of _OR_ statements to cut down on the number of lines in an intent. Going back, the example is trying to account for three cases:
```
add {Food} to my {ShoppingList}
add {Food} to my {ShoppingList} under {Category}
add {Food} to the {ShoppingList}
add {Food} to the {ShoppingList} under {Category}
add {Food} to {ShoppingList}
add {Food} to {ShoppingList} under {Category}
```
If you want to rewrite this using an _OR_ statement to combine the `my` and `the` keywords, you could write:
```
add {Food} to ( my | the ) {ShoppingList}
add {Food} to ( my | the ) {ShoppingList} under {Category}
add {Food} to {ShoppingList}
add {Food} to {ShoppingList} under {Category}
```
This removes two lines from the intent. Parentheses expansion also supports making something optional. So, if you want to make `the` and `my` optional and thus allow for the phrase `add {Food} to {ShoppingList}`, it would look like:
```
add {Food} to ( | my | the ) {ShoppingList}
add {Food} to ( | my | the ) {ShoppingList} under {Category}
```
This simple change covers all three scenarios (once you restart the Mycroft skills subsystem). You can go a step further and condense this into a single line if you wish:
```
add {Food} to ( | my | the ) {ShoppingList} ( | under {Category})
```
**Note:** For easier readability, use white spaces in your intent parentheses expansion.
To summarize the important points about Padatious intent parsing:
* You have to give several examples of phrases for Mycroft to come up with the correct match.
* Padatious intents use entities such as `{Food}` to identify object values that can be retrieved from your Python code.
* Entities are _always lower-case_ regardless of how you declare them in the intent file.
* If an entity cannot be parsed from an utterance, its value is `None`.
* The decorator for Padatious intents is `@intent_file_handler('my.intent.file.intent')`.
#### Adapt intents
Unlike Padatious intents, where you specify entities in the intent file, the Adapt intent parser works with a series of keywords that work in combination with regular expression (regex) files to attempt to capture an entity. You would use Adapt over Padatious when you:
1. Expect the utterance to be complex and the more robust parsing of regex is required
2. Want or need Mycroft to be context-aware
3. Need intents to be as lightweight as possible
That said, the `voc` files Adapt uses are quite flexible. They can include a single word (as shown in the [official documentation][14]), or they can include the start of a sentence you want to react to.
As one goal of this project is to have Mycroft create a new shopping list in the OurGroceries app, I wanted to add in some rudimentary checking so that the user is informed if a list with a similar name exists and asked if they still want to create a new list. This should cut down on list duplication and misplacement of items.
Mock up some code, and then you can deal with the vocab and regex files. While you could use Pytest or similar unit tests to assert specific values, for the sake of simplicity, you will create a list called "shopping list." The Python mock function will look like this:
```
    def create_shopping_list(self, message):
        fake_list = ["shopping list"]
        self.new_shopping_list_name = message.data['ListName'].lower()
        for current_shopping_list in fake_list:
            try:
                if self.new_shopping_list_name in current_shopping_list:
                    if self.new_shopping_list_name == current_shopping_list:
                        self.speak("The shopping list %s already exists" % self.new_shopping_list_name )
                        break
                    else:
                        self.speak("I found a similar naming list called %s" % current_shopping_list)
                        # This hands off to either handle_dont_create_anyways_context or handle_create_anyways_context
                        # to make a context aware decision
                        self.speak("Would you like me to add your new list anyways?", expect_response=True)
                        break
                else:
                    self.speak("Ok creating a new list called %s" % self.new_shopping_list_name)
            except Exception as ex:
                print(ex)
                pass
```
Notice I am using a `for` loop to iterate over `fake_list`. That is because, in theory, multiple lists will be returned from the OurGroceries app. Also note the `try/except` block; I have given a general pass on the exception because, right now, I don't know what kind of exceptions I may run into. As you use and debug your code, you can tighten this up a bit.
Another line to note is:
```
`self.speak("Would you like me to add your new list anyways?", expect_response=True) `
```
This bit of code will have Mycroft prompt the user for a response and store the result. I'll talk more about this bit of code in the conversational contexts section.
#### Regular expressions, entities, and Adapt intents
Now you have some pseudo code, but you need to add the decorator for Mycroft to action your code. You need to create three files for this to work: two vocab files and one regex file. The regex file, which I will name `add.shopping.list.rx`, looks like this:
```
start a new list called (?P<ListName>.*)
create a new list called (?P<ListName>.*)
add a new list called (?P<ListName>.*)
```
You could make this a one-liner, but for simplicity's sake, keep it as three lines. Note this strange-looking notation: `(?P<ListName>.*)`. This is the part of the code that captures and creates the entity. The entity, in this case, is called `ListName`. For checking your syntax, I recommend [Pythex][15]. It is very helpful when I am debugging my regex (I'm pretty terrible at regex).
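If the `(?P<ListName>.*)` syntax is unfamiliar, here's a quick stand-alone check (plain Python, independent of Mycroft) showing how the named group captures the entity:
```
# Stand-alone check of the named capture group used in add.shopping.list.rx.
import re

pattern = re.compile(r"create a new list called (?P<ListName>.*)")
match = pattern.search("create a new list called hardware")

if match:
    print(match.group("ListName"))  # prints: hardware
```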
**IMPORTANT NOTE:** Adapt intents are case sensitive.
#### Adapt and vocab files
Now that your regex includes the full sentences you expect, create your two vocab files. The first file is called `CreateKeyword.voc`. As you can surmise from the file name, all the words you want to associate with the `create` action should reside here. This file is very simple:
```
start a new
create a new
add a new
```
In the documentation, you will often see only a single word per line. However, due to some Mycroft default skills using `start` and `create`, I need to add words so that Mycroft will pick my skill appropriately.
The second file is even easier. It's called `ListKeyword.voc` and has a single word in it:
```
list
```
With these files defined, you can now construct your decorator:
```
@intent_handler(IntentBuilder('CreateShoppingIntent').require('CreateKeyword').require('ListKeyword').require("ListName"))
```
The first argument in the `IntentBuilder` is `'CreateShoppingIntent'`; this is the name of the intent and is completely optional. If you want to leave this blank, you can. The `require` section is a bit confusing. When it comes to keywords, the argument for `require` is the name of the file without the file extension. In this case, one of the files is called `ListKeyword.voc`, so the argument being passed into `require` is just `'ListKeyword'`.
While you can name your vocab files anything you want, I highly recommend using the word `Keyword` in the file so that when you are building your `intent_handler` decorator, it is clear what you are requiring.
If `require` is actually an entity from a regex file, the argument for `require` is the name of the entity as you defined it in the regex. If your regex was `start a new list called (?P<NewList>.*)`, then you would write `require('NewList')`.
Restart the Mycroft skills subsection and try it out. You should see this in the Mycroft command-line interface:
```
 add a new list called hardware
 >> Ok creating a new list called hardware
 
 create a new list called hardware
 >> Ok creating a new list called hardware
 
 start a new list called hardware
 >> Ok creating a new list called hardware
```
#### Conversational contexts
Great, it works! Now add the following decorator to your function:
```
@adds_context("CreateAnywaysContext")
```
This decorator is tied to the [conversational context][16] that Mycroft supports. Conversational contexts are essentially where you can speak normally to Mycroft and it will understand your meaning. For example, you could ask: "Who was John Quincy Adams?" After Mycroft responds, saying something like "John Quincy Adams was the sixth president of the United States," you could ask: "How old was he when he became president?" If you ask the second question first, Mycroft has no way to know who the pronoun _he_ refers to. However, in the context of this conversation, Mycroft understands that _he_ refers to John Quincy Adams.
Getting back to creating a conversational context, the argument for its decorator is the name of the context. This example calls the context `CreateAnywaysContext` and hence, the full decorator is `@adds_context("CreateAnywaysContext")`. This mock method is now complete. However, you now need to add two simple methods to handle the user's feedback. You can simplify the grocery list skill by requiring either a yes or a no answer. Create a `YesKeyword.voc` and a `NoKeyword.voc`, and place the words `yes` and `no` in them, respectively.
Now create two more methods in your Python:
```
@intent_handler(IntentBuilder('DoNotAddIntent').require("NoKeyword").require('CreateAnywaysContext').build())
@removes_context("CreateAnywayscontext")
def handle_dont_create_anyways_context(self):
    """
    Does nothing but acknowledges the user does not wish to proceed
    Uses dont.add.response.dialog
    :return:
    """
    self.speak_dialog('dont.add.response')
@intent_handler(IntentBuilder('AddAnywaysIntent').require("YesKeyword").require('CreateAnywaysContext').build())
@removes_context("CreateAnywaysContext")
def handle_create_anyways_context(self):
    """
    If the user wants to create a similarly named list, it is handled here
    Uses do.add.response.dialog
    :return:
    """
    self.speak_dialog('do.add.response')
```
There are two things here you have not seen so far:
  1. `@removes_context`
2. `self.speak_dialog`
If a method that requires `CreateAnywaysContext` is called, the decorator `@removes_context` gets rid of the context so that Mycroft does not accidentally action a context more than once. While multiple contexts can be applied to a method, this project will not use them.
#### Dialogs
Dialogs are files that have several prebuilt responses that Mycroft can pick from. These dialogs are stored in `dialog/{language tag}/`, and the language tag is based on the IETF standard. Examples can be found on [Venea.net][17] in the IETF LanguageTag column.
Mycroft picks randomly from the list of sentences in a specified dialog file. Why would you use a dialog file instead of implementing `self.speak` in Python? The answer is simple: When you create and use a dialog file, you do not have to change the Python code to support other languages.
For example, if the dialog file called `dont.add.response.dialog` exists under `en-us` with the following content:
```
Ok... exiting
Gotcha I won't add it
Ok I'll disregard it
Make up your mind!
```
You could also create `de-de/dont.add.response.dialog` with the following content:
```
Ok... Beenden
Erwischt Ich werde es nicht hinzufügen
Ok, ich werde es ignorieren.
Entscheiden Sie sich!
```
In your Python code, you would use `self.speak_dialog('dont.add.response')` to randomly select one of the answers for Mycroft to use. If a user's Mycroft language is set to German, Mycroft will automatically select the correct dialog and play the dialog in German instead of English.
To wrap up this section, create two files under `dialog/en-us`. For `dont.add.response.dialog`, use the same content as in the above example. For `do.add.response.dialog`, use:
```
Ok adding it now
Sure thing
Yup yup yup
```
At this point in this project, your tree should look something like this:
```
├── dialog
│   └── en-us
│       ├── do.add.response.dialog
│       └── dont.add.response.dialog
├── __init__.py
├── regex
│   └── en-us
│       └── add.shopping.list.rx
└── vocab
    └── en-us
        ├── create.item.intent
        ├── CreateKeyword.voc
        └── ListKeyword.voc
```
Note that I created the files by hand. If you used the `mycroft-msk create` method, you might have `locale` directories, `settingsmeta.yaml`, or other artifacts.
#### Wrapping up
We've covered a lot so far. You have implemented the Padatious intent parser to, theoretically, add a new item to a list, whether or not you put it under a category. You have also used the Adapt intent parser to add a new category. You used conversational context to prompt the user for confirmation if a similar list already exists. Finally, you learned the concept of dialogs as a way for Mycroft to provide varied confirmation responses to the user.
Currently, the code looks like:
```
from mycroft import intent_file_handler, MycroftSkill, intent_handler
from mycroft.skills.context import adds_context, removes_context
from adapt.intent import IntentBuilder
class OurGroceriesSkill(MycroftSkill):
    def __init__(self):
        MycroftSkill.__init__(self)
    # Mycroft should call this function directly when the user
    # asks to create a new item
    @intent_file_handler("create.item.intent")
    def create_item_on_list(self, message):
        """
        This function adds an item to the specified list
        :param message:
        :return: Nothing
        """
        item_to_add = message.data.get('food')
        list_name = message.data.get('shoppinglist')
        category_name = message.data.get('category')
        if category_name is None:
            self.speak("Adding %s to %s" % (item_to_add, list_name))
        else:
            self.speak("Adding %s to %s under the category %s" % (item_to_add, list_name, category_name))
    # Mycroft should also call this function directly
    @intent_handler(IntentBuilder('CreateShoppingIntent').require('CreateKeyword').require('ListKeyword').require("ListName"))
    def create_shopping_list(self, message):
        fake_list = ["shopping list"]
        self.new_shopping_list_name = message.data['ListName'].lower()
        for current_shopping_list in fake_list:
            try:
                if self.new_shopping_list_name in current_shopping_list:
                    if self.new_shopping_list_name == current_shopping_list:
                        self.speak("The shopping list %s already exists" % self.new_shopping_list_name )
                        break
                    else:
                        self.speak("I found a similar naming list called %s" % current_shopping_list)
                        # This hands off to either handle_dont_create_anyways_context or handle_create_anyways_context
                        # to make a context aware decision
                        self.speak("Would you like me to add your new list anyways?", expect_response=True)
                        break
                else:
                    self.speak("Ok creating a new list called %s" % self.new_shopping_list_name)
            except AttributeError:
                pass
    # This is not called directly, but instead should be triggered
    # as part of context aware decisions
    @intent_handler(IntentBuilder('DoNotAddIntent').require("NoKeyword").require('CreateAnywaysContext').build())
    @removes_context("CreateAnywayscontext")
    def handle_dont_create_anyways_context(self):
        """
        Does nothing but acknowledges the user does not wish to proceed
        Uses dont.add.response.dialog
        :return:
        """
        self.speak_dialog('dont.add.response')
    # This function is also part of the context aware decision tree
    @intent_handler(IntentBuilder('AddAnywaysIntent').require("YesKeyword").require('CreateAnywaysContext').build())
    @removes_context("CreateAnywayscontext")
    def handle_create_anyways_context(self):
        """
        If the user wants to create a similarly named list, it is handled here
        Uses do.add.response.dialog
        :return:
        """
        self.speak_dialog('do.add.response')
    def stop(self):
        pass
def create_skill():
    return OurGroceriesSkill()
```
 
In the next article, I will go into logging, getting settings from the web UI, and continuing to fill out the skill into something more useful.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/mycroft-intent-parsers
作者:[Steve Ovens][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/stratusss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
[2]: https://opensource.com/article/20/6/open-source-voice-assistant
[3]: https://opensource.com/article/20/6/mycroft
[4]: https://mycroft.ai/
[5]: https://opensource.com/article/20/6/outline-mycroft-voice-assistant-skill
[6]: https://gitlab.com/stratus-ss/mycroft-ourgroceries-skill
[7]: https://www.ourgroceries.com/overview
[8]: https://opensource.com/article/20/6/outline-mycroft-voice-assistant-skill#decorator
[9]: https://realpython.com/primer-on-python-decorators/
[10]: https://mycroft-ai.gitbook.io/docs/skill-development/user-interaction/intents/padatious-intents
[11]: https://opensource.com/sites/default/files/uploads/mycroft-padatious-first-intent.gif (Mycroft processing intent)
[12]: https://creativecommons.org/licenses/by-sa/4.0/
[13]: https://opensource.com/sites/default/files/uploads/mycroft-padatious-category1.gif (Mycroft testing code changes)
[14]: https://mycroft-ai.gitbook.io/docs/skill-development/user-interaction/intents/adapt-intents
[15]: https://pythex.org/
[16]: https://mycroft-ai.gitbook.io/docs/skill-development/user-interaction/conversational-context
[17]: https://www.venea.net/web/culture_code

View File

@ -0,0 +1,147 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Bash traps in your scripts)
[#]: via: (https://opensource.com/article/20/6/bash-trap)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Using Bash traps in your scripts
======
Traps help your scripts end cleanly, whether they run successfully or
not.
![Hands programming][1]
It's easy to detect when a shell script starts, but it's not always easy to know when it stops. A script might end normally, just as its author intends it to end, but it could also fail due to an unexpected fatal error. Sometimes it's beneficial to preserve the remnants of whatever was in progress when a script failed, and other times it's inconvenient. Either way, detecting the end of a script and reacting to it in some pre-calculated manner is why the [Bash][2] `trap` directive exists.
### Responding to failure
Here's an example of how one failure in a script can lead to future failures. Say you have written a program that creates a temporary directory in `/tmp` so that it can unarchive and process files before bundling them back together in a different format:
```
#!/usr/bin/env bash
CWD=`pwd`
TMP=${TMP:-/tmp/tmpdir}
## create tmp dir
mkdir $TMP
## extract files to tmp
tar xf "${1}" --directory $TMP
## move to tmpdir and run commands
pushd $TMP
for IMG in *.jpg; do
  mogrify -verbose -flip -flop $IMG
done
tar --create --file "${1%.*}".tar *.jpg
## move back to origin
popd
## bundle with bzip2
bzip2 --compress $TMP/"${1%.*}".tar \
      --stdout > "${1%.*}".tbz
## clean up
/usr/bin/rm -r /tmp/tmpdir
```
Most of the time, the script works as expected. However, if you accidentally run it on an archive filled with PNG files instead of the expected JPEG files, it fails halfway through. One failure leads to another, and eventually, the script exits without reaching its final directive to remove the temporary directory. As long as you manually remove the directory, you can recover quickly, but if you aren't around to do that, then the next time the script runs, it has to deal with an existing temporary directory full of unpredictable leftover files.
One way to combat this is to reverse and double-up on the logic by adding a precautionary removal to the start of the script. While valid, that relies on brute force instead of structure. A more elegant solution is `trap`.
### Catching signals with trap
The `trap` keyword catches _signals_ that may happen during execution. You've used one of these signals if you've ever used the `kill` or `killall` commands, which call `SIGTERM` by default. There are many other signals that shells respond to, and you can see most of them with `trap -l` (as in "list"):
```
$ trap --list
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX
```
Any of these signals may be anticipated with `trap`. In addition to these, `trap` recognizes:
* `EXIT`: Occurs when a process exits
  * `ERR`: Occurs when a command exits with a non-zero status
  * `DEBUG`: Occurs before every command, which is useful for debugging
To set a trap in Bash, use `trap` followed by a list of commands you want to be executed, followed by a list of signals to trigger it.
For instance, this trap detects a `SIGINT`, the signal sent when a user presses **Ctrl+C** while a process is running:
```
`trap "{ echo 'Terminated with Ctrl+C'; }" SIGINT`
```
The example script with temporary directory problems can be fixed with a trap detecting `SIGINT`, errors, and successful exits:
```
#!/usr/bin/env bash
CWD=`pwd`
TMP=${TMP:-/tmp/tmpdir}
trap \
 "{ /usr/bin/rm -r $TMP ; exit 255; }" \
 SIGINT SIGTERM ERR EXIT
## create tmp dir
mkdir $TMP
tar xf "${1}" --directory $TMP
## move to tmp and run commands
pushd $TMP
for IMG in *.jpg; do
  mogrify -verbose -flip -flop $IMG
done
tar --create --file "${1%.*}".tar *.jpg
## move back to origin
popd
## zip tar
bzip2 --compress $TMP/"${1%.*}".tar \
      --stdout > "${1%.*}".tbz
```
For complex actions, you can simplify `trap` statements with [Bash functions][3].
### Traps in Bash
Traps are useful to ensure that your scripts end cleanly, whether they run successfully or not. It's never safe to rely completely on automated garbage collection, so this is a good habit to get into in general. Try using them in your scripts, and see what they can do!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/bash-trap
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S (Hands programming)
[2]: https://opensource.com/resources/what-bash
[3]: https://opensource.com/article/20/6/how-write-functions-bash

View File

@ -0,0 +1,182 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is End of Life in Ubuntu? Everything You Should Know About it)
[#]: via: (https://itsfoss.com/end-of-life-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
What is End of Life in Ubuntu? Everything You Should Know About it
======
If you have been following Its FOSS for some time, you might have noticed that I publish news articles like “Ubuntu XYZ version has reached end of life (EoL)”.
This end of life is one of those essential concepts that every Ubuntu user should be aware of.
This is why I decided to write this detailed guide to explain what an Ubuntu release reaching end of life means, why it matters to you, and how to check when your Ubuntu install reaches end of life.
### What is end of life in Ubuntu?
![][1]
First things first: end of life is not really an Ubuntu-specific concept. It is a generic term widely used in the software industry.
The end of life of a piece of software means it has reached the end of its predefined support period. Beyond this date, the software wont get any feature, maintenance or security updates.
You may continue using the software past its end of life date, but at your own risk. If a security vulnerability is discovered, your system and data will be at risk.
Compare it to the use-by date or the expiry date on a food item. You may consume the yogurt one day after its use-by date, but can you eat it after a week or a month?
#### Why end of life?
Software is not a living being, so why does it have an end of life? Why doesnt Ubuntu just keep supporting one version forever?
It is to maintain a balance between stability and features. You want new features in your system, but you dont want them to break it. Software compatibility is complex, and testing takes time.
So what Ubuntu does is give you a release and take responsibility for maintaining it by providing security and other updates for a certain time period.
The Ubuntu team and volunteers also work on the next release in parallel, to add new features to future versions.
#### Support life cycle of Ubuntu releases
Ubuntu has two new version releases every year. These releases can be categorized into:
* Regular release with 9 months of support period
* Long-term support (LTS) release with 5 years of support period
A new LTS version is released every two years while the regular releases come every six months.
This table should give you a better understanding:
Ubuntu Version | Release | End of Life
---|---|---
Ubuntu 18.04 (LTS) | April, 2018 | April, 2023 (5 years)
Ubuntu 18.10 | October, 2018 | July, 2019 (9 months)
Ubuntu 19.04 | April, 2019 | January, 2020 (9 months)
Ubuntu 19.10 | October, 2019 | July, 2020 (9 months)
Ubuntu 20.04 (LTS) | April, 2020 | April, 2025 (5 years)
Ubuntu 20.10 | October, 2020 | July, 2021 (9 months)
The long-term support releases focus on providing stability for a longer period. You probably know that Linux distributions like Ubuntu are also responsible for providing applications to you. These distributions have thousands of applications/packages in their repositories.
The LTS versions often hold on to older software versions, as they cannot test every new version of so much software during the five-year support period.
When Ubuntu releases a new LTS version, it also updates a number of software packages to newer versions. For example, Ubuntu 18.04 LTS has PHP 7.2 whereas Ubuntu 20.04 LTS has PHP 7.4 available.
The regular releases are short-lived, but they bring new features (like newer versions of software such as file managers and desktop environments, newer kernels, etc.).
Personally, I think of these regular releases as stepping stones to the next LTS release. For example, the features introduced in Ubuntu 18.10, 19.04 and 19.10 will eventually be added to Ubuntu 20.04 (but not to 18.04).
#### How to check how long your Ubuntu system will be supported?
The simplest way to check the end of life support in Ubuntu is using this command in the terminal:
```
hwe-support-status --verbose
```
It will show an output that mentions the support period of your Ubuntu version.
```
You are not running a system with a Hardware Enablement Stack. Your system is supported until April 2025.
```
The [Hardware Enablement Stack in Ubuntu][2] allows you to receive the latest generic Linux kernel supported by Ubuntu. The important part is the support status date.
If you want a detailed overview of how many packages you have installed and how long those packages will be supported, you can use the ubuntu-security-status command:
```
ubuntu-security-status
```
**In older versions of Ubuntu, the same command is known as ubuntu-support-status.** For both commands, the output is nearly identical:
```
$ ubuntu-security-status
2242 packages installed, of which:
1695 receive package updates with LTS until 4/2025
510 could receive security updates with ESM Apps until 4/2030
30 packages are from third parties
7 packages are no longer available for download
Packages from third parties are not provided by the official Ubuntu
archive, for example packages from Personal Package Archives in
Launchpad.
For more information on the packages, run 'ubuntu-security-status
--thirdparty'.
Packages that are not available for download may be left over from a
previous release of Ubuntu, may have been installed directly from a
.deb file, or are from a source which has been disabled.
For more information on the packages, run 'ubuntu-security-status
--unavailable'.
Enable Extended Security Maintenance (ESM Apps) to get 0 security
updates (so far) and enable coverage of 510 packages.
This machine is not attached to an Ubuntu Advantage subscription.
See https://ubuntu.com/advantage
```
As you can see in the above output, my system is, for the most part, supported until April 2025. Ubuntu can provide maintenance support for 510 packages until April 2030, but you'll have to purchase ESM for that.
ESM is more useful for mission-critical business infrastructure, where upgrading to a newer version of the OS would impact the business. For desktop users, upgrading to a newer version is an easier and more sensible thing to do.
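If you do decide that you need ESM, the coverage is enabled by attaching the machine to an Ubuntu Advantage subscription. A rough sketch, assuming you have already obtained a token from the Ubuntu Advantage page and that your release ships the ua client from the ubuntu-advantage-tools package (the available service names can differ between releases):

```
sudo ua attach <YOUR_TOKEN>   # <YOUR_TOKEN> is a placeholder for your own subscription token
sudo ua enable esm-apps       # assumes the esm-apps service is offered on this release
ua status                     # shows which services are now active
```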
#### What happens when your Ubuntu install reaches end of life? What if you continue using Ubuntu even after its end of life?
When your Ubuntu install reaches end of life, it stops getting system updates, including security updates. There won't be updates for installed software either.
Without security updates, your system becomes vulnerable to attacks (if it is connected to the internet). Suppose a vulnerability is discovered in one of the applications you use, or even in the Linux kernel. You don't get the update, so the vulnerability is never patched, and a malicious attacker could take advantage of it to steal your data.
Eventually, you won't be able to use the Ubuntu repositories. If you try to install a new application, you'll see an [unable to locate package error][3].
So, basically, you won't be able to install new software, and your system will be at risk. Not a pretty scenario.
The worst part is that if you wait way too long, you won't be able to upgrade to a newer version at all. For example, a system running 17.04 can no longer upgrade to 17.10, because even 17.10 is not supported anymore. A [fresh Ubuntu installation][4] is the only suggested option in such a case.
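If you are already stuck on an EOL release and just need apt to keep working long enough to back things up before that fresh installation, Ubuntu moves the archives of dead releases to old-releases.ubuntu.com. A hedged sketch of the usual stopgap (adjust the hostnames to whatever your sources.list actually contains, and treat this only as a temporary measure, not a fix):

```
sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup   # keep a copy of the original
sudo sed -i -r 's/([a-z]{2}\.)?archive\.ubuntu\.com|security\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
sudo apt update
```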
#### What should you do when your Ubuntu install reaches end of life?
Ubuntu doesn't just abandon you after your system reaches end of life. It notifies you, either in the terminal or on the desktop, that your system is no longer supported.
![Ubuntu No Longer Supported][5]
It even provides a mechanism to [upgrade your current Ubuntu version to the newer version][6]. Most of the software you currently have installed, as well as your pictures, videos and other documents, remain as they are. Making a backup of your important data on an external disk is still recommended, though.
![Upgrade to Ubuntu 20.04 From 18.04][7]
The rule of thumb is:
* if you are using an LTS release, you should upgrade when the next LTS version is available.
* if you are using a regular release, you should upgrade whenever the next version is available.
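On the command line, the same upgrade is normally driven by the do-release-upgrade tool. A minimal sketch, assuming the ubuntu-release-upgrader-core/update-manager-core packages are installed (they are on a default install):

```
sudo apt update && sudo apt full-upgrade   # bring the current release fully up to date first
sudo do-release-upgrade                    # offers the next supported release, if one is available
```

On an LTS system, the Prompt setting in /etc/update-manager/release-upgrades controls whether you are offered only the next LTS (Prompt=lts) or every new release (Prompt=normal).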
#### Still confused?
I wrote this article because this is one of the most common points of confusion for It's FOSS readers. I hope it clears the air and gives you a better understanding of the Ubuntu release cycle.
If you still have doubts, please feel free to ask your question in the comment section. Ill be happy to answer your queries.
--------------------------------------------------------------------------------
via: https://itsfoss.com/end-of-life-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/end-of-life-ubuntu.png?ssl=1
[2]: https://itsfoss.com/ubuntu-hwe-kernel/
[3]: https://itsfoss.com/unable-to-locate-package-error-ubuntu/
[4]: https://itsfoss.com/install-ubuntu/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/ubuntu-no-longer-supported.png?ssl=1
[6]: https://itsfoss.com/upgrade-ubuntu-version/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/03/upgrade-ubuntu-20-04-from-18-04.jpg?ssl=1

View File

@ -1,110 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Customize your Raspberry Pi operating system for everyday use)
[#]: via: (https://opensource.com/article/20/6/custom-raspberry-pi)
[#]: author: (Patrick H. Mullins https://opensource.com/users/pmullins)
Customize your Raspberry Pi operating system for everyday use
======
Installing Raspberry Pi OS Lite gives you a more lightweight Raspberry Pi.
![Cartoon graphic of Raspberry Pi board][1]
If you have a Raspberry Pi running the [Raspberry Pi OS][2] (formerly known as Raspbian), you know how great this little computer is: it runs an operating system that offers beginners everything they could want. However, once you get familiar with it and want to use it for other things, you may no longer want an operating system that includes everything by default.
When that happens, you have two choices: either strip out everything you don't need as thoroughly as you can, or use Raspberry Pi OS Lite to build a lightweight operating system tailored to your needs. My suggestion is to take the second approach; it is simpler and saves time.
### Raspberry Pi OS Lite
There is nothing special about the "Lite" version of Raspberry Pi OS; it is just a minimal image based on the latest [Debian][3] release. The image contains only the core of the operating system and boots to the command line instead of a graphical desktop. Consider this the base of your custom Raspberry Pi OS; everything that follows is built on top of this core.
Go to the Raspberry Pi Foundation's website and [download][2] the Lite image. Once the download is complete, check out the detailed [installation guide][4], which explains how to write a Raspberry Pi OS SD card on Linux, Windows, or macOS.
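If you prefer the command line to a graphical imaging tool, the image can also be written with dd on Linux. A hedged sketch, in which the image filename and /dev/sdX are placeholders you must replace with your actual download and SD card device (double-check the device name, because dd will overwrite whatever you point it at):

```
lsblk    # identify the SD card device first, e.g. /dev/sdX
sudo dd if=raspios-lite.img of=/dev/sdX bs=4M status=progress conv=fsync
```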
If you plan to use the Raspberry Pi as a minimal system that runs a few scripts and services, that is about all you need. If you want to do more, keep reading.
### X Windows
首先为了偶尔需要通过用户界面GUI连接到你的树莓派Raspberry Pi安装一个可视化的系统还是不错的。
[X Window 系统][5],有时候称为 X11是 Unix 操作系统上一个很常见且基础的可视化系统。X11提供了一套 GUI 桌面环境下的基础框架。它可以让你通过窗口、鼠标和键盘与计算机交互。
#### Install X Windows
The following line installs the minimum set of packages needed to run X11:
```
sudo apt install -y --no-install-recommends xserver-xorg-core xserver-xorg xfonts-base xinit
```
With `--no-install-recommends`, only the main dependencies (the packages listed in the Depends field) are installed. This saves a lot of space, because the recommended-but-not-required packages are not installed.
### Going further: use the Xfce desktop environment
If you like, you can stop here and start using X Windows as your desktop. However, I don't recommend it. The minimal window manager that comes with X Windows feels somewhat dated. Instead, I recommend installing a modern desktop environment such as Xfce, GNOME, or KDE. When it comes to tiny computers, I prefer [Xfce][6] over the others, because it is designed for systems with limited resources, and you can customize it with themes, icons, and more.
#### Install Xfce
Installing the Xfce desktop environment is fairly easy. Just run:
```
sudo apt install -y --no-install-recommends xfce4 desktop-base lightdm
```
That's it. You now have X Windows (X11) and Xfce installed. Now it's time to customize the environment and install some core applications.
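One practical note before moving on: if the Pi still boots to a plain console after these packages are installed, you can point systemd at the graphical target so that LightDM starts on boot, or just launch a one-off session with startx. A small sketch, assuming lightdm was installed as shown above:

```
# Option 1: boot straight into the LightDM login screen from now on
sudo systemctl set-default graphical.target
sudo systemctl enable --now lightdm

# Option 2: stay on the console and start a single X session only when needed
startx
```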
### Core applications
So far, you have installed X Windows (X11), the Xfce desktop environment, and LightDM (a display manager that comes with Xfce). At this point you have a complete, lightweight system that boots up and is ready to use. However, before calling it done, I like to install a few core applications.
The following command installs a terminal app, the [Audacious][7] audio player, the [Ristretto][8] image viewer, the [Mousepad][9] text editor, the [File Roller][10] archive manager, and the [Thunar][11] volume manager:
```
sudo apt install -y --no-install-recommends xfce4-terminal audacious ristretto
sudo apt install -y --no-install-recommends mousepad file-roller thunar-volman
```
#### Other optional items
Other things you may want to install include a good network manager, a task manager, a PDF reader, and a notification tool, along with desktop wallpaper managers, screenshot tools, and some new icon and cursor themes. In short, if the Raspberry Pi is your go-to system, these are handy additions for everyday work.
```
sudo apt install -y --no-install-recommends network-manager xfce4-taskmanager xfce4-notifyd
sudo apt install -y --no-install-recommends xpdf gnome-icon-theme dmz-cursor-theme
```
### What to do next?
If everything is working, you now have a Raspberry Pi running a super-lightweight operating system based on Xfce and Debian Lite. I suggest heading over to the [Xfce][12] website now to check out all the other nice tools you can install and use. Where you go from there is entirely up to you!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/custom-raspberry-pi
作者:[Patrick H. Mullins][a]
选题:[lujun9972][b]
译者:[Yufei-Yan](https://github.com/Yufei-Yan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/pmullins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi_cartoon.png?itok=m3TcBONJ (Cartoon graphic of Raspberry Pi board)
[2]: https://www.raspberrypi.org/downloads/raspberry-pi-os/
[3]: https://www.debian.org/
[4]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[5]: https://en.wikipedia.org/wiki/X_Window_System
[6]: http://xfce.org
[7]: https://audacious-media-player.org/
[8]: https://docs.xfce.org/apps/ristretto/start
[9]: https://github.com/codebrainz/mousepad
[10]: https://gitlab.gnome.org/GNOME/file-roller
[11]: https://docs.xfce.org/xfce/thunar/thunar-volman
[12]: https://goodies.xfce.org/

View File

@ -0,0 +1,49 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An open source browser extension to zoom in on images)
[#]: via: (https://opensource.com/article/20/6/hoverzoom)
[#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas)
An open source browser extension to zoom in on images
======
Taking a close look at images on the web isn't always easy, but Hover Zoom+ makes it as simple as a mouse hover.
![Digital images of a computer desktop][1]
Have you ever visited a website and wished you could see a larger version of an image? That happens to me all the time, and it's not always easy to do.
Sometimes I sift through the source code, use **Ctrl + F** to search for the image, copy the image source URL, and paste it into a new window to view the image at full size. Alternatively, another option is to right-click, copy the image address, and paste it into a new tab.
![Example image source code ][2]
Hover Zoom+ makes this process much simpler. Hover Zoom+ is a simple Chrome extension, released under the MIT License, that you can find in the Chrome Web Store; its source code is available on [GitHub][3]. It is also available for Firefox.
The app makes the whole thing much easier: by simply hovering your mouse over an image, you get a popup showing the full image, sized to fit your browser window, whether or not the image on the page was cropped (if the image already fits, it will look the same). This can be interesting, because the original image is sometimes cropped to fit the available space or to focus on a specific part of it. However, you cannot right-click and save the image directly from the popup.
According to Oleg Anashkin, the developer, who is based in San Jose, California: "This is an open source version of the original HoverZoom extension, which became overrun by malware and was removed from the store. In this version, all the spyware has been removed, many bugs were fixed, and new features were added. It doesn't collect any statistics by default."
I installed the extension in Chrome on my Windows 10 laptop and gave it a try. Once the extension was installed, I hovered my mouse over an image, and it displayed a larger version of that image in a popup.
However, Hover Zoom+ does not work on every website or with every image. It works on Facebook and Twitter, but not on sponsored content on those sites. Users can easily toggle the app on or off for a specific site. With Hover Zoom+, it's easy to see this cute kitten on Instagram without actually having to read the post (handy!):
![Zoomed-in image of a kitten from Facebook][4]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/hoverzoom
作者:[Jeff Macharyas][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jeffmacharyas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_browser_web_desktop.png?itok=Bw8ykZMA (Digital images of a computer desktop)
[2]: https://opensource.com/sites/default/files/uploads/source.jpg (Example image source code)
[3]: https://github.com/extesy/hoverzoom/
[4]: https://opensource.com/sites/default/files/uploads/fb-cat.jpg (HoverZoom+ enlarged image)