Merge pull request #9 from LCTT/master

合并主分支
This commit is contained in:
warmfrog 2019-05-13 14:11:00 +08:00 committed by GitHub
commit 79a0bb0e01
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
49 changed files with 4499 additions and 3552 deletions


@ -0,0 +1,130 @@
DomTerm一款为 Linux 打造的终端模拟器
======
> 了解一下 DomTerm这是一款终端模拟器和复用器带有 HTML 图形和其它不多见的功能。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah)
[DomTerm][1] 是一款现代化的终端模拟器,它使用浏览器引擎作为 “GUI 工具包”。这带来了一些相关的特性例如可嵌入图像和链接、HTML 富文本以及可折叠(显示/隐藏)命令。除此以外,它看起来感觉就像一个功能完整、独立的终端模拟器,有着出色的 xterm 兼容性(包括鼠标处理和 24 位色)和恰当的“装饰”(菜单)。另外它内置支持了会话管理和子窗口(如同 `tmux``GNU Screen` 中一样)、基本输入编辑(如在 `readline` 中)以及分页(如在 `less` 中)。
![](https://opensource.com/sites/default/files/u128651/domterm1.png)
*图 1: DomTerminal 终端模拟器。*
在以下部分我们将看一看这些特性。我们将假设你已经安装好了 `domterm` (如果你需要获取并构建 DomTerm请跳到本文最后。开始之前先让我们概览一下这项技术。
### 前端 vs. 后端
DomTerm 大部分是用 JavaScript 写的,它运行在一个浏览器引擎中。它可以是例如 Chrome 或者 Firefox 这样的桌面浏览器(见图 3也可以是一个内嵌的浏览器。使用通用的网页浏览器没有问题但是用户体验不够好因为菜单是为通用的网页浏览而不是为终端模拟器打造的并且其安全模型也会妨碍使用。因此使用内嵌的浏览器更好一些。
目前以下这些是支持的:
* qdomterm使用了 Qt 工具包 和 QtWebEngine
* 一个内嵌的 [Electron][2](见图 1
* atom-domterm 以 [Atom 文本编辑器][3](同样基于 Electron包的形式运行 DomTerm并和 Atom 面板系统集成在一起(见图 2
* 一个为 JavaFX 的 WebEngine 包装器,这对 Java 编程十分有用(见图 4
* 之前的首选前端基于 [Firefox-XUL][4],但是 Mozilla 已经终止了 XUL
![在 Atom 编辑器中的 DomTerm 终端面板][6]
*图 2在 Atom 编辑器中的 DomTerm 终端面板。*
目前Electron 前端可能是最佳选择,紧随其后的是 Qt 前端。如果你使用 Atomatom-domterm 也工作得相当不错。
后端服务器是用 C 写的。它管理着伪终端PTY和会话它同样也是一个为前端提供 JavaScript 和其它文件的 HTTP 服务器。`domterm` 命令启动终端任务和执行其它请求,如果没有服务器在运行,`domterm` 就会自己来提供服务。前端与服务器之间的通讯通常是用 WebSockets在服务器端用的是 [libwebsockets][8]完成的。然而JavaFX 的嵌入方式既不用 WebSockets 也不用 DomTerm 服务器,相反 Java 应用直接通过 Java-JavaScript 桥接进行通讯。
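例如,最基本的用法就是直接运行上文提到的 `domterm` 命令(若后端服务器尚未运行,它会自行提供服务):

```
$ domterm          # 打开一个新的 DomTerm 终端窗口
$ domterm help     # 查看帮助信息,在 DomTerm 下会显示为富文本(见图 1 下方面板)
```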
### 一个稳健的可兼容 xterm 的终端模拟器
DomTerm 看上去感觉就像一个现代的终端模拟器。它能处理鼠标事件、24 位色、Unicode、倍宽字符CJK以及输入法。DomTerm 在 [vttest 测试套件][9] 上工作得十分出色。
其不同寻常的特性包括:
**展示/隐藏按钮(“折叠”):** 小三角(如上图 2 所示)是隐藏/展示相应输出的按钮。仅需在[提示符][11]中添加特定的[转义序列][10]就可以创建按钮。
**对于 readline 和类似输入编辑器的鼠标点击支持:** 如果你点击输入区域黄色部分DomTerm 会向应用发送正确的方向键按键序列。(可以通过提示符中的转义序列启用这一特性,你也可以通过 `Alt+点击` 强制使用。)
**用 CSS 样式化终端:** 这通常是在 `~/.domterm/settings.ini` 里完成的,保存时会自动重载。例如在图 2 中,设置了终端专用的背景色。
### 一个更好的 REPL 控制台
一个经典的终端模拟器是基于长方形的字符单元格工作的。这对 REPL命令行来说没问题但并不理想。下面是一些对 REPL 很有用、而在终端模拟器中通常见不到的 DomTerm 特性:
**一个能“打印”图片、图形、数学公式或者一组可点击的链接的命令:** 应用可以发送包含几乎任何 HTML 的转义序列。HTML 会经过“净化”,以移除 JavaScript 和其它危险特性。)
图 3 显示了来自 [gnuplot][12] 会话的一个片段。Gnuplot2.1 或更高版本)支持 DomTerm 作为终端类型。图形输出被转换成 [SVG 图片][13],然后被打印到终端。我的博客帖子《[在 DomTerm 上展示 Gnuplot][14]》在这方面提供了更多信息。
![](https://opensource.com/sites/default/files/dt-gnuplot.png)
*图 3Gnuplot 截图。*
[Kawa][15] 语言有一个创建并转换[几何图像值][16]的库。如果你将这样的图片值打印到 DomTerm 终端,图片就会被转换成 SVG 形式并嵌入进输出中。
![](https://opensource.com/sites/default/files/dt-kawa1.png)
*图 4Kawa 中可计算的几何形状。*
**富文本输出:** 有着 HTML 样式的帮助信息更加便于阅读,看上去也更漂亮。图 1 的下方面板展示了 `domterm help` 的输出。(如果没在 DomTerm 下运行的话,输出的是普通文本。)注意自带的分页器中的 `PAUSED` 消息。
**包括可点击链接的错误消息:** DomTerm 可以识别 `filename:line:column` 这种语法,并将其转化成一个链接,点击后可在可定制的文本编辑器中打开该文件并定位到相应行。(如果你使用 `PROMPT_COMMAND` 或类似方式跟踪目录,这也适用于相对路径的文件名。)
编译器可以侦测到它在 DomTerm 下运行,并直接用转义序列发出文件链接。这比依赖 DomTerm 的模式匹配要稳健得多,因为它可以处理空格和其他字符,并且无需依赖目录跟踪。在图 4 中,你可以看到来自 [Kawa 编译器][15] 的错误消息。悬停在文件位置上会使其出现下划线,`file:` URL 会出现在 `atom-domterm` 的消息栏(窗口底部)中。(当不使用 atom-domterm 时,这样的消息会显示在一个浮动的框中,如图 1 中的 `PAUSED` 消息所示。)
点击链接时的动作是可以配置的。默认对于带有 `#position` 后缀的 `file:` 链接的动作是在文本编辑器中打开那个文件。
**结构化内部表示:**以下内容均以内部节点结构表示:命令、提示符、输入行、正常和错误输出、标签,如果“另存为 HTML”则保留结构。HTML 文件与 XML 兼容,因此你可以使用 XML 工具搜索或转换输出。命令 `domterm view-saved` 会以一种启用命令折叠(显示/隐藏按钮处于活动状态)和重新调整窗口大小的方式打开保存的 HTML 文件。
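例如,重新打开一个之前“另存为 HTML”的会话文件名仅为示意

```
$ domterm view-saved session.html
```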
**内建的 Lisp 样式优美打印:** 你可以在输出中包括优美打印指令比如grouping这样断行会根据窗口大小调整而重新计算。查看我的文章 [DomTerm 中的动态优美打印][17]以更深入探讨。
**基本的内建行编辑**带有历史记录像 GNU readline 一样):它使用浏览器自带的编辑器,因此有着优秀的鼠标和选择处理机制。你可以在字符模式(大多数输入的字符被直接发送给进程)和行模式(普通字符直接插入,控制字符触发编辑操作,回车键把编辑好的行发送给进程)之间切换。默认是自动模式DomTerm 会根据 PTY 处于原始模式还是规范模式,在字符模式与行模式之间自动切换。
**自带的分页器**(类似简化版的 `less`):用键盘快捷键控制滚动。在“页模式”中,输出在每满一屏后暂停(如果你想一行行地前进,也可以是每一行);页模式对用户输入足够智能,因此(如果你想的话)你可以一直开着它,而不会妨碍交互式程序的运行。
### 多路复用和会话
**标签和平铺:** 你不仅可以创建多个终端标签,也可以平铺它们。你可以使用鼠标或键盘快捷键来创建或者切换面板和标签,它们还可以用鼠标重新排列并调整大小。这是通过 [GoldenLayout][18] JavaScript 库实现的。图 1 展示了一个有着两个面板的窗口:上面的有两个标签,其中一个运行着 [Midnight Commander][20];底下的面板以 HTML 形式展示了 `domterm help` 的输出。而在 Atom 中,我们则使用其自带的可拖拽的面板和标签,如图 2 所示。
**分离或重连会话:** 与 `tmux` 和 GNU `screen` 类似DomTerm 支持会话的分离与重连,你甚至可以给同一个会话接上多个窗口或面板。这支持多用户会话共享和远程连接。(为了安全,同一个服务器的所有会话都需要能够读取 Unix 域套接字和一个包含随机密钥的本地文件。当我们有了良好、安全的远程连接后,这个限制将会有所放松。)
**domterm 命令** 类似于 `tmux` 和 GNU `screen`,它有多个选项可以用于控制或者打开支撑单个或多个会话的服务器。主要的差别在于,如果它没在 DomTerm 下运行,`domterm` 命令会创建一个新的顶层窗口,而不是在现有的终端中运行。
`tmux``git` 类似,`dormterm` 命令有许多子命令。一些子命令创建窗口或者会话。另一些(例如“打印”一张图片)仅在现有的 DormTerm 会话下起作用。
命令 `domterm browse` 打开一个窗口或者面板以浏览一个指定的 URL例如浏览文档的时候。
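例如URL 取自本文链接,仅为示意):

```
$ domterm browse http://domterm.org/
```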
### 获取并安装 DomTerm
DomTerm 可以从其 [GitHub 仓库][21]获取。目前没有预构建好的软件包,但是有[详细的指导][22]。所有的前提条件在 Fedora 27 上都有,这使得在其上构建特别容易。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/introduction-domterm-terminal-emulator
作者:[Per Bothner][a]
译者:[tomjlw](https://github.com/tomjlw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/perbothner
[1]:http://domterm.org/
[2]:https://electronjs.org/
[3]:https://atom.io/
[4]:https://en.wikipedia.org/wiki/XUL
[5]:/file/385346
[6]:https://opensource.com/sites/default/files/images/dt-atom1.png (DomTerm terminal panes in Atom editor)
[7]:https://opensource.com/sites/default/files/images/dt-atom1.png
[8]:https://libwebsockets.org/
[9]:http://invisible-island.net/vttest/
[10]:http://domterm.org/Wire-byte-protocol.html
[11]:http://domterm.org/Shell-prompts.html
[12]:http://www.gnuplot.info/
[13]:https://developer.mozilla.org/en-US/docs/Web/SVG
[14]:http://per.bothner.com/blog/2016/gnuplot-in-domterm/
[15]:https://www.gnu.org/software/kawa/
[16]:https://www.gnu.org/software/kawa/Composable-pictures.html
[17]:http://per.bothner.com/blog/2017/dynamic-prettyprinting/
[18]:https://golden-layout.com/
[19]:https://opensource.com/sites/default/files/u128651/domterm1.png
[20]:https://midnight-commander.org/
[21]:https://github.com/PerBothner/DomTerm
[22]:http://domterm.org/Downloading-and-building.html


@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10830-1.html)
[#]: subject: (How to use autofs to mount NFS shares)
[#]: via: (https://opensource.com/article/18/6/using-autofs-mount-nfs-shares)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
@ -10,9 +10,11 @@
如何使用 autofs 挂载 NFS 共享
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
> 给你的网络文件系统NFS配置一个基本的自动挂载功能。
大多数 Linux 文件系统在引导时挂载,并在系统运行时保持挂载状态。对于已在 `fstab` 中配置的任何远程文件系统也是如此。但是,有时你可能希望仅按需挂载远程文件系统 - 例如,通过减少网络带宽使用来提高性能,或出于安全原因隐藏或混淆某些目录。[autofs][1] 软件包提供此功能。在本文中,我将介绍如何配置基本的自动挂载。
![](https://img.linux.net.cn/data/attachment/album/201905/08/115328rva7kqw9wqh2qees.jpg)
大多数 Linux 文件系统在引导时挂载,并在系统运行时保持挂载状态。对于已在 `fstab` 中配置的任何远程文件系统也是如此。但是,有时你可能希望仅按需挂载远程文件系统。例如,通过减少网络带宽使用来提高性能,或出于安全原因隐藏或混淆某些目录。[autofs][1] 软件包提供此功能。在本文中,我将介绍如何配置基本的自动挂载。
首先做点假设:假设有台 NFS 服务器 `tree.mydatacenter.net` 已经启动并运行。另外假设一个名为 `ourfiles` 的数据目录还有供 Carl 和 Sarah 使用的用户目录,它们都由服务器共享。
@ -20,106 +22,88 @@
```
alan@workstation1:~$ sudo getent passwd carl sarah
[sudo] password for alan:
carl:x:1020:1020:Carl,,,:/home/carl:/bin/bash
sarah:x:1021:1021:Sarah,,,:/home/sarah:/bin/bash
alan@workstation1:~$ sudo getent hosts
127.0.0.1       localhost
127.0.1.1       workstation1.mydatacenter.net workstation1
10.10.1.5       tree.mydatacenter.net tree
127.0.0.1 localhost
127.0.1.1 workstation1.mydatacenter.net workstation1
10.10.1.5 tree.mydatacenter.net tree
```
如你所见,客户端工作站和 NFS 服务器都在 `hosts` 中配置。我假设一个基本的家庭甚至小型办公室网络,可能缺乏适合的内部域名服务(即 DNS
如你所见,客户端工作站和 NFS 服务器都在 `hosts` 文件中配置。我假设这是一个基本的家庭甚至小型办公室网络,可能缺乏适合的内部域名服务(即 DNS
### 安装软件包
你只需要安装两个软件包:用于 NFS 客户端的 `nfs-common` 和提供自动挂载的 `autofs`
```
alan@workstation1:~$ sudo apt-get install nfs-common autofs
```
你可以验证 autofs 是否已放在 `etc` 目录中:
你可以验证 autofs 相关的文件是否已放在 `/etc` 目录中:
```
alan@workstation1:~$ cd /etc; ll auto*
-rw-r--r-- 1 root root 12596 Nov 19  2015 autofs.conf
-rw-r--r-- 1 root root   857 Mar 10  2017 auto.master
-rw-r--r-- 1 root root   708 Jul  6  2017 auto.misc
-rwxr-xr-x 1 root root  1039 Nov 19  2015 auto.net*
-rwxr-xr-x 1 root root  2191 Nov 19  2015 auto.smb*
-rw-r--r-- 1 root root 12596 Nov 19 2015 autofs.conf
-rw-r--r-- 1 root root 857 Mar 10 2017 auto.master
-rw-r--r-- 1 root root 708 Jul 6 2017 auto.misc
-rwxr-xr-x 1 root root 1039 Nov 19 2015 auto.net*
-rwxr-xr-x 1 root root 2191 Nov 19 2015 auto.smb*
alan@workstation1:/etc$
```
### 配置 autofs
现在你需要编辑其中几个文件并添加 `auto.home` 文件。首先,将以下两行添加到文件 `auto.master` 中:
```
/mnt/tree  /etc/auto.misc
/home/tree  /etc/auto.home
```
每行以挂载 NFS 共享的目录开头。继续创建这些目录:
```
alan@workstation1:/etc$ sudo mkdir /mnt/tree /home/tree
```
接下来,将以下行添加到文件 `auto.misc`
```
ourfiles        -fstype=nfs     tree:/share/ourfiles
```
该行表示 autofs 将挂载 `auto.master` 文件中匹配 `auto.misc``ourfiles` 共享。如上所示,这些文件将在 `/mnt/tree/ourfiles` 目录中。
第三步,使用以下行创建文件 `auto.home`
```
*               -fstype=nfs     tree:/home/&
```
该行表示 autofs 将挂载 `auto.master` 文件中匹配 `auto.home` 的用户共享。在这种情况下Carl 和 Sarah 的文件将分别在目录 `/home/tree/carl``/home/tree/sarah`中。星号(称为通配符)使每个用户的共享可以在登录时自动挂载。& 符号也可以作为表示服务器端用户目录的通配符。它们的主目录会相应地根据 `passwd` 文件映射。如果你更喜欢本地主目录,则无需执行此操作。相反,用户可以将其用作特定文件的简单远程存储。
该行表示 autofs 将挂载 `auto.master` 文件中匹配 `auto.home` 的用户共享。在这种情况下Carl 和 Sarah 的文件将分别在目录 `/home/tree/carl``/home/tree/sarah` 中。星号 `*`(称为通配符)使每个用户的共享可以在登录时自动挂载。`&` 符号也可以作为表示服务器端用户目录的通配符。它们的主目录会相应地根据 `passwd` 文件映射。如果你更喜欢本地主目录,则无需执行此操作;相反,用户可以将其用作特定文件的简单远程存储。
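下面用注释示意这种通配符映射关系(主机名与用户名沿用上文的假设):

```
# 客户端访问的路径           实际自动挂载的 NFS 共享
# /home/tree/carl   <-->  tree:/home/carl
# /home/tree/sarah  <-->  tree:/home/sarah
```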
最后,重启 `autofs` 守护进程,以便识别并加载这些配置的更改。
```
alan@workstation1:/etc$ sudo service autofs restart
```
### 测试 autofs
如果更改文件 `auto.master` 中的列出目录并运行 `ls` 命令,那么不会立即看到任何内容。例如,`(cd)` 到目录 `/mnt/tree`。首先,`ls` 的输出不会显示任何内容,但在运行 `cd ourfiles` 之后,将自动挂载 `ourfiles` 共享目录。 `cd` 命令也将被执行,你将进入新挂载的目录中。
如果更改文件 `auto.master` 中的列出目录,并运行 `ls` 命令,那么不会立即看到任何内容。例如,切换到目录 `/mnt/tree`。首先,`ls` 的输出不会显示任何内容,但在运行 `cd ourfiles` 之后,将自动挂载 `ourfiles` 共享目录。 `cd` 命令也将被执行,你将进入新挂载的目录中。
```
carl@workstation1:~$ cd /mnt/tree
carl@workstation1:/mnt/tree$ ls
carl@workstation1:/mnt/tree$ cd ourfiles
carl@workstation1:/mnt/tree/ourfiles$
```
为了进一步确认正常工作,`mount` 命令会显示已挂载共享的细节
为了进一步确认正常工作,`mount` 命令会显示已挂载共享的细节。
```
carl@workstation1:~$ mount
@ -127,7 +111,7 @@ tree:/mnt/share/ourfiles on /mnt/tree/ourfiles type nfs4 (rw,relatime,vers=4.0,r
```
对于Carl和Sarah`/home/tree` 目录工作方式相同。
对于 Carl Sarah`/home/tree` 目录工作方式相同。
我发现在我的文件管理器中添加这些目录的书签很有用,可以用来快速访问。
@ -138,7 +122,7 @@ via: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares
作者:[Alan Formy-Duval][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,594 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10848-1.html)
[#]: subject: (TLP An Advanced Power Management Tool That Improve Battery Life On Linux Laptop)
[#]: via: (https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
TLP一个可以延长 Linux 笔记本电池寿命的高级电源管理工具
======
![](https://img.linux.net.cn/data/attachment/album/201905/13/094413iu77i8w75t80tq7h.jpg)
笔记本电脑的电池管理是针对 Windows 操作系统高度优化的,当我在笔记本电脑上使用 Windows 时就已经意识到了这一点,但对 Linux 来说却并非如此。
多年来Linux 在电池优化方面取得了很大进步,但我们仍然需要做一些必要的事情来改善 Linux 中笔记本电脑的电池寿命。
当我考虑延长电池寿命时,我没有多少选择,但我觉得 TLP 对我来说是一个更好的解决方案,所以我会继续使用它。
在本教程中,我们将详细讨论 TLP 以延长电池寿命。
我们之前在网站上写过三篇关于 Linux 笔记本电池的文章:[笔记本电池节电工具][1]、[PowerTOP][2] 和 [电池充电状态][3]。
### TLP
[TLP][4] 是一款自由开源的高级电源管理工具,可在不进行任何配置更改的情况下延长电池寿命。
由于它的默认配置已针对电池寿命进行了优化,因此你可能只需要安装,然后就忘记它吧。
此外它可以高度定制化以满足你的特定要求。TLP 是一个具有自动后台任务的纯命令行工具,它不包含 GUI。
TLP 适用于各种品牌的笔记本电脑。设置电池充电阈值仅适用于 IBM/Lenovo ThinkPad。
所有 TLP 设置都存储在 `/etc/default/tlp` 中。其默认配置提供了开箱即用的优化的节能设置。
以下 TLP 设置可用于自定义,如果需要,你可以相应地进行必要的更改。
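例如,下面是 `/etc/default/tlp` 中几个设置的示意(键名取自下文 `tlp-stat` 输出所显示的默认配置,具体取值请按你的需求调整):

```
TLP_ENABLE=1                   # 启用 TLP
TLP_DEFAULT_MODE=AC            # 无法检测电源状态时使用的默认模式
MAX_LOST_WORK_SECS_ON_BAT=60   # 电池供电时脏数据回写的超时(秒)
```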
### TLP 功能
* 内核笔记本电脑模式和脏缓冲区超时
* 处理器频率调整,包括 “turbo boost”/“turbo core”
* 限制最大/最小的 P 状态以控制 CPU 的功耗
* HWP 能源性能提示
* 用于多核/超线程的功率感知进程调度程序
* 处理器性能与节能策略(`x86_energy_perf_policy`
* 硬盘高级电源管理级别APM和降速超时按磁盘
* AHCI 链路电源管理ALPM与设备黑名单
* PCIe 活动状态电源管理PCIe ASPM
* PCI(e) 总线设备的运行时电源管理
* Radeon 图形电源管理KMS 和 DPM
* Wifi 省电模式
* 关闭驱动器托架中的光盘驱动器
* 音频省电模式
* I/O 调度程序(按磁盘)
* USB 自动暂停,支持设备黑名单/白名单(输入设备自动排除)
* 在系统启动和关闭时启用或禁用集成的 wifi、蓝牙或 wwan 设备
* 在系统启动时恢复无线电设备状态(从之前的关机时的状态)
* 无线电设备向导:在网络连接/断开和停靠/取消停靠时切换无线电
* 禁用 LAN 唤醒
* 挂起/休眠后恢复集成的 WWAN 和蓝牙状态
* 英特尔处理器的动态电源降低 —— 需要内核和 PHC-Patch 支持
* 电池充电阈值 —— 仅限 ThinkPad
* 重新校准电池 —— 仅限 ThinkPad
### 如何在 Linux 上安装 TLP
TLP 包在大多数发行版官方存储库中都可用,因此,使用发行版的 [包管理器][5] 来安装它。
对于 Fedora 系统,使用 [DNF 命令][6] 安装 TLP。
```
$ sudo dnf install tlp tlp-rdw
```
ThinkPad 需要一些附加软件包。
```
$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf install http://repo.linrunner.de/fedora/tlp/repos/releases/tlp-release.fc$(rpm -E %fedora).noarch.rpm
$ sudo dnf install akmod-tp_smapi akmod-acpi_call kernel-devel
```
安装 smartmontools 以显示 `tlp-stat` 中的 S.M.A.R.T. 数据。
```
$ sudo dnf install smartmontools
```
对于 Debian/Ubuntu 系统,使用 [APT-GET 命令][7] 或 [APT 命令][8] 安装 TLP。
```
$ sudo apt install tlp tlp-rdw
```
ThinkPad 需要一些附加软件包。
```
$ sudo apt-get install tp-smapi-dkms acpi-call-dkms
```
安装 smartmontools 以显示 `tlp-stat` 中的 S.M.A.R.T. 数据。
```
$ sudo apt-get install smartmontools
```
如果基于 Ubuntu 的系统的官方软件包过时了,可以使用提供最新版本的 PPA 存储库。运行以下命令以通过 PPA 安装 TLP添加 PPA 源的命令见代码块后的示意)。
```
$ sudo apt-get install tlp tlp-rdw
```
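(原文此处没有给出添加 PPA 的命令;下面是一个示意,假设使用 TLP 项目常用的 `ppa:linrunner/tlp` 源,请以官方文档为准。)

```
$ sudo add-apt-repository ppa:linrunner/tlp
$ sudo apt-get update
$ sudo apt-get install tlp tlp-rdw
```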
对于基于 Arch Linux 的系统,使用 [Pacman 命令][9] 安装 TLP。
```
$ sudo pacman -S tlp tlp-rdw
```
ThinkPad 需要一些附加软件包。
```
$ pacman -S tp_smapi acpi_call
```
安装 smartmontools 以显示 `tlp-stat` 中的 S.M.A.R.T. 数据。
```
$ sudo pacman -S smartmontools
```
对于基于 Arch Linux 的系统,在启动时启用 TLP 和 TLP-Sleep 服务。
```
$ sudo systemctl enable tlp.service
$ sudo systemctl enable tlp-sleep.service
```
对于基于 Arch Linux 的系统,你还应该屏蔽以下服务以避免冲突,并确保 TLP 的无线电设备切换选项的正确操作。
```
$ sudo systemctl mask systemd-rfkill.service
$ sudo systemctl mask systemd-rfkill.socket
```
对于 RHEL/CentOS 系统,使用 [YUM 命令][10] 安装 TLP。
```
$ sudo yum install tlp tlp-rdw
```
安装 smartmontools 以显示 `tlp-stat` 中的 S.M.A.R.T. 数据。
```
$ sudo yum install smartmontools
```
对于 openSUSE Leap 系统,使用 [Zypper 命令][11] 安装 TLP。
```
$ sudo zypper install TLP
```
安装 smartmontools 以显示 `tlp-stat` 中的 S.M.A.R.T. 数据。
```
$ sudo zypper install smartmontools
```
成功安装 TLP 后,使用以下命令启动服务。
```
$ systemctl start tlp.service
```
### 使用方法
#### 显示电池信息
```
$ sudo tlp-stat -b
$ sudo tlp-stat --battery
```
```
--- TLP 1.1 --------------------------------------------
+++ Battery Status
/sys/class/power_supply/BAT0/manufacturer = SMP
/sys/class/power_supply/BAT0/model_name = L14M4P23
/sys/class/power_supply/BAT0/cycle_count = (not supported)
/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh]
/sys/class/power_supply/BAT0/energy_full = 48850 [mWh]
/sys/class/power_supply/BAT0/energy_now = 48850 [mWh]
/sys/class/power_supply/BAT0/power_now = 0 [mW]
/sys/class/power_supply/BAT0/status = Full
Charge = 100.0 [%]
Capacity = 81.4 [%]
```
#### 显示磁盘信息
```
$ sudo tlp-stat -d
$ sudo tlp-stat --disk
```
```
--- TLP 1.1 --------------------------------------------
+++ Storage Devices
/dev/sda:
Model = WDC WD10SPCX-24HWST1
Firmware = 02.01A02
APM Level = 128
Status = active/idle
Scheduler = mq-deadline
Runtime PM: control = on, autosuspend_delay = (not available)
SMART info:
4 Start_Stop_Count = 18787
5 Reallocated_Sector_Ct = 0
9 Power_On_Hours = 606 [h]
12 Power_Cycle_Count = 1792
193 Load_Cycle_Count = 25775
194 Temperature_Celsius = 31 [°C]
+++ AHCI Link Power Management (ALPM)
/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm
+++ AHCI Host Controller Runtime Power Management
/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on
```
#### 显示 PCI 设备信息
```
$ sudo tlp-stat -e
$ sudo tlp-stat --pcie
```
```
--- TLP 1.1 --------------------------------------------
+++ Runtime Power Management
Device blacklist = (not configured)
Driver blacklist = amdgpu nouveau nvidia radeon pcieport
/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore)
/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915)
/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd)
......
```
#### 显示图形卡信息
```
$ sudo tlp-stat -g
$ sudo tlp-stat --graphics
```
```
--- TLP 1.1 --------------------------------------------
+++ Intel Graphics
/sys/module/i915/parameters/enable_dc = -1 (use per-chip default)
/sys/module/i915/parameters/enable_fbc = 1 (enabled)
/sys/module/i915/parameters/enable_psr = 0 (disabled)
/sys/module/i915/parameters/modeset = -1 (use per-chip default)
```
#### 显示处理器信息
```
$ sudo tlp-stat -p
$ sudo tlp-stat --processor
```
```
--- TLP 1.1 --------------------------------------------
+++ Processor
CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
......
/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%]
/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%]
/sys/devices/system/cpu/intel_pstate/no_turbo = 0
/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%]
/sys/devices/system/cpu/intel_pstate/num_pstates = 28
x86_energy_perf_policy: program not installed.
/sys/module/workqueue/parameters/power_efficient = Y
/proc/sys/kernel/nmi_watchdog = 0
+++ Undervolting
PHC kernel not available.
```
#### 显示系统数据信息
```
$ sudo tlp-stat -s
$ sudo tlp-stat --system
```
```
--- TLP 1.1 --------------------------------------------
+++ System Info
System = LENOVO Lenovo ideapad Y700-15ISK 80NV
BIOS = CDCN35WW
Release = "Manjaro Linux"
Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64
/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b
Init system = systemd
Boot mode = BIOS (CSM, Legacy)
+++ TLP Status
State = enabled
Last run = 11:04:00 IST, 596 sec(s) ago
Mode = battery
Power source = battery
```
#### 显示温度和风扇速度信息
```
$ sudo tlp-stat -t
$ sudo tlp-stat --temp
```
```
--- TLP 1.1 --------------------------------------------
+++ Temperatures
CPU temp = 36 [°C]
Fan speed = (not available)
```
#### 显示 USB 设备数据信息
```
$ sudo tlp-stat -u
$ sudo tlp-stat --usb
```
```
--- TLP 1.1 --------------------------------------------
+++ USB
Autosuspend = disabled
Device whitelist = (not configured)
Device blacklist = (not configured)
Bluetooth blacklist = disabled
Phone blacklist = disabled
WWAN blacklist = enabled
Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub)
Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo)
......
```
#### 显示警告信息
```
$ sudo tlp-stat -w
$ sudo tlp-stat --warn
```
```
--- TLP 1.1 --------------------------------------------
No warnings detected.
```
#### 状态报告及配置和所有活动的设置
```
$ sudo tlp-stat
```
```
--- TLP 1.1 --------------------------------------------
+++ Configured Settings: /etc/default/tlp
TLP_ENABLE=1
TLP_DEFAULT_MODE=AC
TLP_PERSISTENT_DEFAULT=0
DISK_IDLE_SECS_ON_AC=0
DISK_IDLE_SECS_ON_BAT=2
MAX_LOST_WORK_SECS_ON_AC=15
MAX_LOST_WORK_SECS_ON_BAT=60
......
+++ System Info
System = LENOVO Lenovo ideapad Y700-15ISK 80NV
BIOS = CDCN35WW
Release = "Manjaro Linux"
Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64
/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b
Init system = systemd
Boot mode = BIOS (CSM, Legacy)
+++ TLP Status
State = enabled
Last run = 11:04:00 IST, 684 sec(s) ago
Mode = battery
Power source = battery
+++ Processor
CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave
......
/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%]
/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%]
/sys/devices/system/cpu/intel_pstate/no_turbo = 0
/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%]
/sys/devices/system/cpu/intel_pstate/num_pstates = 28
x86_energy_perf_policy: program not installed.
/sys/module/workqueue/parameters/power_efficient = Y
/proc/sys/kernel/nmi_watchdog = 0
+++ Undervolting
PHC kernel not available.
+++ Temperatures
CPU temp = 42 [°C]
Fan speed = (not available)
+++ File System
/proc/sys/vm/laptop_mode = 2
/proc/sys/vm/dirty_writeback_centisecs = 6000
/proc/sys/vm/dirty_expire_centisecs = 6000
/proc/sys/vm/dirty_ratio = 20
/proc/sys/vm/dirty_background_ratio = 10
+++ Storage Devices
/dev/sda:
Model = WDC WD10SPCX-24HWST1
Firmware = 02.01A02
APM Level = 128
Status = active/idle
Scheduler = mq-deadline
Runtime PM: control = on, autosuspend_delay = (not available)
SMART info:
4 Start_Stop_Count = 18787
5 Reallocated_Sector_Ct = 0
9 Power_On_Hours = 606 [h]
12 Power_Cycle_Count = 1792
193 Load_Cycle_Count = 25777
194 Temperature_Celsius = 31 [°C]
+++ AHCI Link Power Management (ALPM)
/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm
+++ AHCI Host Controller Runtime Power Management
/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on
+++ PCIe Active State Power Management
/sys/module/pcie_aspm/parameters/policy = powersave
+++ Intel Graphics
/sys/module/i915/parameters/enable_dc = -1 (use per-chip default)
/sys/module/i915/parameters/enable_fbc = 1 (enabled)
/sys/module/i915/parameters/enable_psr = 0 (disabled)
/sys/module/i915/parameters/modeset = -1 (use per-chip default)
+++ Wireless
bluetooth = on
wifi = on
wwan = none (no device)
hci0(btusb) : bluetooth, not connected
wlp8s0(iwlwifi) : wifi, connected, power management = on
+++ Audio
/sys/module/snd_hda_intel/parameters/power_save = 1
/sys/module/snd_hda_intel/parameters/power_save_controller = Y
+++ Runtime Power Management
Device blacklist = (not configured)
Driver blacklist = amdgpu nouveau nvidia radeon pcieport
/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore)
/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915)
......
+++ USB
Autosuspend = disabled
Device whitelist = (not configured)
Device blacklist = (not configured)
Bluetooth blacklist = disabled
Phone blacklist = disabled
WWAN blacklist = enabled
Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub)
Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo)
Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid)
Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. (btusb)
Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub)
+++ Battery Status
/sys/class/power_supply/BAT0/manufacturer = SMP
/sys/class/power_supply/BAT0/model_name = L14M4P23
/sys/class/power_supply/BAT0/cycle_count = (not supported)
/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh]
/sys/class/power_supply/BAT0/energy_full = 51690 [mWh]
/sys/class/power_supply/BAT0/energy_now = 50140 [mWh]
/sys/class/power_supply/BAT0/power_now = 12185 [mW]
/sys/class/power_supply/BAT0/status = Discharging
Charge = 97.0 [%]
Capacity = 86.2 [%]
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/check-laptop-battery-status-and-charging-state-in-linux-terminal/
[2]: https://www.2daygeek.com/powertop-monitors-laptop-battery-usage-linux/
[3]: https://www.2daygeek.com/monitor-laptop-battery-charging-state-linux/
[4]: https://linrunner.de/en/tlp/docs/tlp-linux-advanced-power-management.html
[5]: https://www.2daygeek.com/category/package-management/
[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/


@ -1,48 +1,47 @@
[#]: collector: "lujun9972"
[#]: translator: "zgj1024 "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: translator: "zgj1024"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-10834-1.html"
[#]: subject: "Why DevOps is the most important tech strategy today"
[#]: via: "https://opensource.com/article/19/3/devops-most-important-tech-strategy"
[#]: author: "Kelly AlbrechtWilly-Peter Schaub https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht"
[#]: author: "Kelly Albrecht https://opensource.com/users/ksalbrecht"
为何 DevOps 是如今最重要的技术策略
======
消除一些关于 DevOps 的疑惑
> 消除一些关于 DevOps 的疑惑。
![CICD with gears][1]
很多人初学 [DevOps][2] 时,看到它的某个成果就问这是如何实现的。其实,理解 DevOps 的某项具体实现并不重要,重要的是理解采用 DevOps 策略的原因:这是做一个行业的领导者还是追随者的差别。
你可能会听过些 Devops 的难以置信的成果,例如生产环境非常有弹性,“混世猴子”([Chaos Monkey][3])程序运行时,将周围的连接随机切断,每天仍可以处理数千个版本。这是令人印象深刻的,但就其本身而言,这是一个 DevOps 的无力案例,本质上会被一个[反例][4]困扰DevOps 环境有弹性是因为没有观察到严重的故障。。。还没有。
你可能会听过些 Devops 的难以置信的成果,例如生产环境非常有弹性,就算是有个“<ruby>[癫狂的猴子][3]<rt>Chaos Monkey</rt></ruby>)跳来跳去将不知道哪个插头随便拔下,每天仍可以处理数千个发布。这是令人印象深刻的,但就其本身而言,这是一个 DevOps 的证据不足的案例,其本质上会被一个[反例][4]困扰DevOps 环境有弹性是因为严重的故障还没有被观测到
有很多关于 DevOps 的疑惑,并且许多人还在尝试弄清楚它的意义。下面是来自我 LinkedIn Feed 中的某个人的一个案例:
> 最近我参加一些 #DevOps 的交流会,那里一些演讲人好像在倡导 #敏捷开发是 DevOps 的子集。不知为何,我的理解洽洽相反。
> 最近我参加一些 #DevOps 的交流会,那里一些演讲人好像在倡导 #敏捷开发是 DevOps 的子集。不知为何,我的理解恰恰相反。
>
> 能听一下你们的想法吗?你认为敏捷开发和 DevOps 之间是什么关系呢?
>
> 1. DevOps 是敏捷开发的子集
> 2. 敏捷开发 是 DevOps 的子集
> 2. 敏捷开发是 DevOps 的子集
> 3. DevOps 是敏捷开发的扩展,从敏捷开发结束的地方开始
> 4. DevOps 是敏捷开发的新版本
>
科技行业的专业人士在那篇 LinkedIn 的帖子上达标各样的答案,你会怎样回复呢?
科技行业的专业人士在那篇 LinkedIn 的帖子上表达了各种各样的答案,你会怎样回复呢?
### DevOps源于精益和敏捷
### DevOps 源于精益和敏捷
如果我们从亨利福特的战略和丰田生产系统对福特车型的改进(的历史)开始, DevOps 就更有意义了。精益制造就诞生在那段历史中人们对精益制作进行了良好的研究。James P. Womack 和 Daniel T. Jones 将精益思维([Lean Thinking][5])提炼为五个原则:
如果我们从亨利·福特的战略和丰田生产系统对福特车型的改进(的历史)开始DevOps 就更有意义了。精益制造就诞生在那段历史中人们对精益制造进行了深入的研究。James P. Womack 和 Daniel T. Jones 将精益思维([Lean Thinking][5])提炼为五个原则:
1. 指明客户所需的价值
2. 确定提供该价值的每个产品的价值流,并对当前提供该价值所需的所有浪费步骤提起挑战
3. 使产品通过剩余的增值步骤持续流动
4. 在可以连续流动的所有步骤之间引入拉力
5. 管理要尽善尽美,以便为客户服务所需的步骤数量和时间以及信息量持续下降
Lean seeks to continuously remove waste and increase the flow of value to the customer. This is easily recognizable and understood through a core tenet of lean: single piece flow. We can do a number of activities to learn why moving single pieces at a time is magnitudes faster than batches of many pieces; the [Penny Game][6] and the [Airplane Game][7] are two of them. In the Penny Game, if a batch of 20 pennies takes two minutes to get to the customer, they get the whole batch after waiting two minutes. If you move one penny at a time, the customer gets the first penny in about five seconds and continues getting pennies until the 20th penny arrives approximately 25 seconds later.
精益致力于持续消除浪费并增加客户的价值流动。这很容易识别并明白精益的核心原则:单一流。我们可以做一些游戏去了解为何同一时间移动单个比批量移动要快得多。其中的两个游戏是[硬币游戏][6]和[飞机游戏][7]。在硬币游戏中,如果一批 20 个硬币到顾客手中要用 2 分钟,顾客等 2 分钟后能拿到整批硬币。如果一次只移动一个硬币,顾客会在 5 秒内得到第一枚硬币,并会持续获得硬币,直到在大约 25 秒后第 20 个硬币到达。(译者注:有相关的视频的)
精益致力于持续消除浪费并增加客户的价值流动。这很容易识别并明白精益的核心原则:单一流。我们可以做一些游戏去了解为何同一时间移动单个比批量移动要快得多。其中的两个游戏是[硬币游戏][6]和[飞机游戏][7]。在硬币游戏中,如果一批 20 个硬币到顾客手中要用 2 分钟,顾客等 2 分钟后能拿到整批硬币。如果一次只移动一个硬币,顾客会在 5 秒内得到第一枚硬币,并会持续获得硬币,直到在大约 25 秒后第 20 个硬币到达。LCTT 译注:有相关的视频的)
这是巨大的不同,但是不是生活中的所有事都像硬币游戏那样简单并可预测的。这就是敏捷的出现的原因。我们当然看到了高效绩敏捷团队的精益原则,但这些团队需要的不仅仅是精益去做他们要做的事。
@ -52,13 +51,13 @@ Lean seeks to continuously remove waste and increase the flow of value to the cu
### 最佳批量大小
要了解 DevOps 在软件开发中的强大功能这会帮助我们理解批处理大小的经济学。请考虑以下来自Donald Reinertsen 的[产品开发流程原则][8]的U曲线优化示例
要了解 DevOps 在软件开发中的强大功能,先理解批量大小的经济学会很有帮助。请考虑以下来自 Donald Reinertsen 的《[产品开发流程原则][8]》的 U 曲线优化示例:
![U-curve optimization illustration of optimal batch size][9]
这可以类比杂货店购物来解释。假设你需要买一些鸡蛋,而你住的地方离商店只有 30 分的路程。买一个鸡蛋(图种最左边)意味着每次要花 30 分钟的路程这就是你的_交易成本_。_持有成本_可能是鸡蛋变质和在你的冰箱中持续地占用空间。_总成本_是_交易成本_加上你的_持有成本_。这 U 型曲线解释了为什么对大部分来说一次买一打鸡蛋是他们的_最佳批量大小_。如果你就住在商店的旁边,步行到那里不会花费你任何的时候,你可能每次只会买一小盒鸡蛋,以此来节省冰箱的空间并享受新鲜的鸡蛋。
这可以类比杂货店购物来解释。假设你需要买一些鸡蛋,而你住的地方离商店只有 30 分钟的路程。买一个鸡蛋(图中最左边)意味着每次要花 30 分钟的路程,这就是你的*交易成本*。*持有成本*可能是鸡蛋变质和在你的冰箱中持续地占用空间。*总成本*是*交易成本*加上你的*持有成本*。这个 U 型曲线解释了为什么对大部分人来说,一次买一打鸡蛋是他们的*最佳批量大小*。如果你就住在商店的旁边,步行到那里不会花费你任何时间,你可能每次只会买一小盒鸡蛋,以此来节省冰箱的空间并享受新鲜的鸡蛋。
这 U 型优化曲线可以说明为什么在成功敏捷转换中生产力会显著提高。考虑敏捷转换对组织决策的影响。在传统的分级组织中,决策权是集中的。这会导致较少的人做更大的决策。敏捷方法论会有效地降低组织决策中的交易成本,方法是将决策分散到最被人熟知的认识和信息的位置:跨越高度信任,自组织的敏捷团队。
这个 U 型优化曲线可以说明为什么在成功的敏捷转换中生产力会显著提高。考虑敏捷转换对组织决策的影响:在传统的分级组织中,决策权是集中的,这导致较少的人做更大的决策。敏捷方法论通过把决策分散到知识和信息最充分的地方(高度信任、自组织的敏捷团队),有效地降低了组织决策中的交易成本。
下面的动画演示了降低事务成本后,最佳批量大小是如何向左移动。在更频繁地做出更快的决策方面,你不能低估组织的价值。
@ -66,22 +65,21 @@ Lean seeks to continuously remove waste and increase the flow of value to the cu
### DevOps 适合哪些地方
自动化是 DevOps 最知名的事情之一。前面的插图非常详细地展示了自动化的价值。通过自动化,我们将交易成本降低到接近零,实质上是免费进行测试和部署。这使我们可以利用越来越小的批量工作。较小批量的工作更容易理解、提交、测试、审查和知道何时能完成。这些较小的批量大小也包含较少的差异和风险,使其更易于部署,如果出现问题,可以进行故障排除和恢复。通过自动化与扎实的敏捷实践相结合,我们可以使我们的功能开发非常接近单件流程,从而快速持续地为客户提供价值。
自动化是 DevOps 最知名的事情之一。前面的插图非常详细地展示了自动化的价值。通过自动化,我们将交易成本降低到接近零,实质上是可以免费进行测试和部署。这使我们可以利用越来越小的批量工作。较小批量的工作更容易理解、提交、测试、审查和知道何时能完成。这些较小的批量大小也包含较少的差异和风险,使其更易于部署,如果出现问题,可以进行故障排除和恢复。通过自动化与扎实的敏捷实践相结合,我们可以使我们的功能开发非常接近单件流程,从而快速持续地为客户提供价值。
更传统地说DevOps 被理解为一种打破开发团队和运营团队之间混乱局面的方法。在这个模型中开发团队开发新的功能而运营团队则保持系统的稳定和平稳运行。摩擦的发生是因为开发过程中的新功能将更改引入到系统中从而增加了停机的风险运营团队并不认为要对此负责但无论如何都必须处理这一问题。DevOps 不仅仅尝试让人们一起工作,更重要的是尝试在复杂的环境中安全地进行更频繁的更改。
我们可以向 [Ron Westrum][11] 寻求有关在复杂组织中实现安全性的研究。在研究为什么有些组织比其他组织更安全时,他发现组织的文化可以预测其安全性。他确定了三种文化:病态,官僚主义的和生产式的。他发现病理的可以预测安全性较低,而生产式文化被预测为更安全(例如,在他的主要研究领域中,飞机坠毁或意外住院死亡的数量要少得多)。
我们可以看看 [Ron Westrum][11] 在有关复杂组织中实现安全性的研究。在研究为什么有些组织比其他组织更安全时,他发现组织的文化可以预测其安全性。他确定了三种文化:病态的、官僚主义的和生产式的。他发现病态的可以预测其安全性较低,而生产式文化被预测为更安全(例如,在他的主要研究领域中,飞机坠毁或意外住院死亡的数量要少得多)。
![Three types of culture identified by Ron Westrum][12]
高效的 DevOps 团队通过精益和敏捷的实践实现了一种生成性文化这表明速度和安全性是互补的或者说是同一个问题的两个方面。通过将决策和功能的最佳批量大小减少到非常小DevOps 实现了更快的信息流和价值,同时消除了浪费并降低了风险。
与 Westrum的研究一致在提高安全性和可靠性的同时变化也很容易发生。当一个敏捷的 DevOps 团队被信任做出自己的决定时,我们将获得 DevOps 目前最为人所知的工具和技术:自动化和持续交付。通过这种自动化,交易成本比以往任何时候都进一步降低,并且实现了近乎单一的精益流程,每天创造数千个决策和发布的潜力,正如我们在高效绩的 DevOps 组织中看到的那样
与 Westrum 的研究一致,在提高安全性和可靠性的同时,变化也很容易发生。当一个敏捷的 DevOps 团队被信任做出自己的决定时,我们将获得 DevOps 目前最为人所知的工具和技术:自动化和持续交付。通过这种自动化,交易成本比以往任何时候都进一步降低,并且实现了近乎单一的精益流程,每天创造数千个决策和发布的潜力,正如我们在高效绩的 DevOps 组织中看到的那样
### 流动、反馈、学习
DevOps 并不止于此。我们主要讨论了 DevOps 实现了革命性的流程,但通过类似的努力可以进一步放大精益和敏捷实践,从而实现更快的反馈循环和更快的学习。在[_DevOps手册_][13] 中,作者除了详细解释快速流程外, DevOps 如何在整个价值流中实现遥测,从而获得快速且持续的反馈。此外,利用[精益求精的突破][14]和scrum 的[回顾][15],高效的 DevOps 团队将不断推动学习和持续改进深入到他们的组织的基础,实现软件产品开发行业的精益制造革命。
DevOps 并不止于此。我们主要讨论了 DevOps 实现的革命性的流程,但通过类似的努力还可以进一步放大精益和敏捷实践,从而实现更快的反馈循环和更快的学习。在《[DevOps 手册][13]》中,作者除了详细解释快速流程外,还解释了 DevOps 如何在整个价值流中实现遥测,从而获得快速且持续的反馈。此外,利用[精益求精的突破][14]和 Scrum 的[回顾][15],高效的 DevOps 团队将不断推动学习和持续改进深入到他们组织的基础,实现软件产品开发行业的精益制造革命。
### 从 DevOps 评估开始
@ -91,18 +89,14 @@ DevOps 并不止于此。我们主要讨论了 DevOps 实现了革命性的流
在本文的[第二部分][16]中,我们将查看 Drupal 社区中 DevOps 调查的结果,并了解最有可能找到快速获胜的位置。
* * *
_Rob_ _Bayliss and Kelly Albrecht will present[DevOps: Why, How, and What][17] and host a follow-up [Birds of a][18]_ [_Feather_][18] _[discussion][18] at [DrupalCon 2019][19] in Seattle, April 8-12._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/devops-most-important-tech-strategy
作者:[Kelly AlbrechtWilly-Peter Schaub][a]
作者:[Kelly Albrecht][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/zgj1024)
校对:[校对者ID](https://github.com/校对者ID)
译者:[zgj1024](https://github.com/zgj1024)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10833-1.html)
[#]: subject: (Getting started with Python's cryptography library)
[#]: via: (https://opensource.com/article/19/4/cryptography-python)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Python 的加密库入门
======
> 加密你的数据并使其免受攻击者的攻击。
![lock on world map][1]
密码学俱乐部的第一条规则是:永远不要自己*发明*密码系统。密码学俱乐部的第二条规则是:永远不要自己*实现*密码系统:在现实世界中,在*实现*以及设计密码系统阶段都找到过许多漏洞。
Python 中的一个有用的基本加密库就叫做 [cryptography][2]。它既提供“安全”层面的基础组件,也提供一个“危险”层。“危险”层需要更加小心和相关的知识,并且使用它很容易出现安全漏洞。在这篇介绍性文章中,我们不会涵盖“危险”层中的任何内容!
cryptography 库中最有用的高级安全功能是一种 Fernet 实现。Fernet 是一种遵循最佳实践的加密缓冲区的标准。它不适用于非常大的文件,如千兆字节以上的文件,因为它要求你一次加载要加密或解密的内容到内存缓冲区中。
Fernet 支持<ruby>对称<rt>symmetric</rt></ruby>(即<ruby>密钥<rt>secret key</rt></ruby>)加密方式:加密和解密使用相同的密钥,因此必须保持安全。
生成密钥很简单:
```
>>> from cryptography import fernet   # 先从 cryptography 库导入 fernet 模块
>>> k = fernet.Fernet.generate_key()
>>> type(k)
<class 'bytes'>
```
这些字节可以写入有适当权限的文件,最好是在安全的机器上。
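例如,下面是把密钥写入文件再读回来的一个简单示意(文件名仅为示例):

```
>>> with open("fernet.key", "wb") as f:
...     f.write(k)
...
>>> with open("fernet.key", "rb") as f:
...     k = f.read()
```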
有了密钥后,加密也很容易:
```
>>> frn = fernet.Fernet(k)
>>> encrypted = frn.encrypt(b"x marks the spot")
>>> encrypted[:10]
b'gAAAAABb1'
```
如果在你的机器上加密,你会看到略微不同的值。不仅因为(我希望)你生成了和我不同的密钥,而且因为 Fernet 将要加密的值与一些随机生成的缓冲区连接起来。这是我之前提到的“最佳实践”之一:它将阻止对手分辨哪些加密值是相同的,这有时是攻击的重要部分。
解密同样简单:
```
>>> frn = fernet.Fernet(k)
>>> frn.decrypt(encrypted)
b'x marks the spot'
```
请注意,这仅加密和解密*字节串*。为了加密和解密*文本串*,通常需要对它们使用 [UTF-8][3] 进行编码和解码。
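例如(沿用上文的 `frn`

```
>>> token = frn.encrypt("x 标记了地点".encode("utf-8"))
>>> frn.decrypt(token).decode("utf-8")
'x 标记了地点'
```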
20 世纪中期密码学最有趣的进展之一是<ruby>公钥<rt>public key</rt></ruby>加密。它可以在发布加密密钥的同时而让*解密密钥*保持保密。例如,它可用于保存服务器使用的 API 密钥:服务器是唯一可以访问解密密钥的一方,但是任何人都可以保存公共加密密钥。
虽然 cryptography 没有任何支持公钥加密的*安全*功能,但 [PyNaCl][4] 库有。PyNaCl 封装并提供了一些很好的方法来使用 Daniel J. Bernstein 发明的 [NaCl][5] 加密系统。
NaCl 始终同时<ruby>加密<rt>encrypt</rt></ruby><ruby>签名<rt>sign</rt></ruby>或者同时<ruby>解密<rt>decrypt</rt></ruby><ruby>验证签名<rt>verify signature</rt></ruby>。这是一种防止<ruby>基于可延展性<rt>malleability-based</rt></ruby>的攻击的方法,这类攻击中攻击者会修改加密值。
加密是使用公钥完成的,而签名是使用密钥完成的:
```
>>> from nacl.public import PrivateKey, PublicKey, Box
>>> source = PrivateKey.generate()
>>> with open("target.pubkey", "rb") as fpin:
... target_public_key = PublicKey(fpin.read())
>>> enc_box = Box(source, target_public_key)
>>> result = enc_box.encrypt(b"x marks the spot")
>>> result[:4]
b'\xe2\x1c0\xa4'
```
解密颠倒了角色:它需要私钥进行解密,需要公钥验证签名:
```
>>> from nacl.public import PrivateKey, PublicKey, Box
>>> with open("source.pubkey", "rb") as fpin:
... source_public_key = PublicKey(fpin.read())
>>> with open("target.private_key", "rb") as fpin:
... target = PrivateKey(fpin.read())
>>> dec_box = Box(target, source_public_key)
>>> dec_box.decrypt(result)
b'x marks the spot'
```
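上面的例子假设双方的密钥对已经生成并保存到文件中;下面是生成并保存接收方密钥对的一个示意(文件名与上文示例对应):

```
>>> from nacl.public import PrivateKey
>>> target = PrivateKey.generate()
>>> with open("target.private_key", "wb") as f:
...     f.write(bytes(target))
...
>>> with open("target.pubkey", "wb") as f:
...     f.write(bytes(target.public_key))
```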
最后,[PocketProtector][6] 库构建在 PyNaCl 之上,包含完整的密钥管理方案。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/cryptography-python
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-cloud-safe.png?itok=yj2TFPzq (lock on world map)
[2]: https://cryptography.io/en/latest/
[3]: https://en.wikipedia.org/wiki/UTF-8
[4]: https://pynacl.readthedocs.io/en/stable/
[5]: https://nacl.cr.yp.to/
[6]: https://github.com/SimpleLegal/pocket_protector/blob/master/USER_GUIDE.md

View File

@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10822-1.html)
[#]: subject: (How to quickly deploy, run Linux applications as unikernels)
[#]: via: (https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
如何快速部署并作为 unikernel 运行 Linux 应用
======
unikernel 是一种用于在云基础架构上部署应用程序的更小、更快、更安全的方式。使用 NanoVMs OPS任何人都可以将 Linux 应用程序作为 unikernel 运行而无需额外编码。
![Marcho Verch \(CC BY 2.0\)][1]
随着 unikernel 的出现,构建和部署轻量级应用变得更容易、更可靠。虽然功能有限,但 unikernel 在速度和安全性方面有许多优势。
### 什么是 unikernel?
unikernel 是一种非常特殊的<ruby>单一地址空间<rt>single-address-space</rt></ruby>的机器镜像,类似于那些已经主导了互联网大部分领域的云应用,但它们相当小并且是单一用途的。它们很轻,只提供所需的资源。它们加载速度非常快,而且安全性更高 —— 攻击面非常有限。单个可执行文件中包含所需的所有驱动、I/O 例程和支持库。最终生成的虚拟镜像无需其它组件就可以引导和运行。它们通常比容器快 10 到 20 倍。
潜在的攻击者无法进入 shell 并获得控制权,因为它没有 shell。他们无法获取系统的 `/etc/passwd``/etc/shadow` 文件,因为这些文件不存在。创建一个 unikernel 就像应用将自己变成操作系统。使用 unikernel应用和操作系统将成为一个单一的实体。你忽略了不需要的东西从而消除了漏洞并大幅提高性能。
简而言之unikernel
* 提供更高的安全性例如shell 破解代码无用武之地)
* 比标准云应用占用更小空间
* 经过高度优化
* 启动非常快
### unikernel 有什么缺点吗?
unikernel 的唯一严重缺点是你必须自己构建它们。对于许多开发人员来说,这一直是个巨大的门槛。由于应用的底层特性,将应用精简为仅需的内容、再生成紧凑且平稳运行的镜像可能很复杂。在过去,你几乎必须是系统开发人员或底层程序员才能做到。
### 这是怎么改变的?
最近2019 年 3 月 24 日)[NanoVMs][3] 宣布了一个将任何 Linux 应用加载为 unikernel 的工具。使用 NanoVMs OPS任何人都可以将 Linux 应用作为 unikernel 运行而无需额外编码。该应用还可以更快、更安全地运行,并且成本和开销更低。
### 什么是 NanoVMs OPS
NanoVMs 是给开发人员的 unikernel 工具。它能让你运行各种企业级软件,但仍然可以非常严格地控制它的运行。
使用 OPS 的其他好处包括:
* 无需经验或知识,开发人员就可以构建 unikernel。
* 该工具可在笔记本电脑上本地构建和运行 unikernel。
* 无需创建帐户,只需下载 OPS用一个命令即可运行见下面的示意。
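下面是一个使用 OPS 的示意(`hello` 是一个假设的本地 Linux 可执行文件,安装方式和具体参数请以官方入门文档为准):

```
$ ops run hello    # 将 hello 打包为 unikernel 并在本地虚拟机中运行
```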
NanoVMs 的介绍可以在 [Youtube 上的 NanoVMs 视频][5] 上找到。你还可以查看该公司的 [LinkedIn 页面][6]并在[此处][7]阅读有关 NanoVMs 安全性的信息。
还有有关如何[入门][8]的一些信息。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/corn-kernels-100792925-large.jpg
[3]: https://nanovms.com/
[5]: https://www.youtube.com/watch?v=VHWDGhuxHPM
[6]: https://www.linkedin.com/company/nanovms/
[7]: https://nanovms.com/security
[8]: https://nanovms.gitbook.io/ops/getting_started
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world


@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10843-1.html)
[#]: subject: (Anbox Easy Way To Run Android Apps On Linux)
[#]: via: (https://www.2daygeek.com/anbox-best-android-emulator-for-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Anbox在 Linux 上运行 Android 应用程序的简单方式
======
Android 模拟器允许我们直接在 Linux 系统上运行我们最喜欢的 Android 应用程序或游戏。对于 Linux 来说,有很多这样的 Android 模拟器,在过去我们介绍过几个此类应用程序。
你可以通过导航到下面的网址回顾它们。
* [如何在 Linux 上安装官方 Android 模拟器 (SDK)][1]
* [如何在 Linux 上安装 GenyMotion (Android 模拟器)][2]
今天我们将讨论 Anbox Android 模拟器。
### Anbox 是什么?
Anbox 是 “Android in a box” 的缩写。Anbox 是一个基于容器的方法,可以在普通的 GNU/Linux 系统上启动完整的 Android 系统。
它是现代化的新模拟器之一。
Anbox 可以让你在 Linux 系统上运行 Android而没有虚拟化带来的迟缓因为核心的 Android 操作系统已经使用 Linux 命名空间LXC放置到容器中了。
Android 容器不能直接访问到任何硬件,所有硬件的访问都是通过在主机上的守护进程进行的。
每个应用程序将在一个单独窗口打开,就像其它本地系统应用程序一样,并且它可以显示在启动器中。
### 如何在 Linux 中安装 Anbox
Anbox 也可作为 snap 软件包安装,请确保你已经在你的系统上启用了 snap 支持。
Anbox 软件包最近被添加到 Ubuntu 18.10 (Cosmic) 和 Debian 10 (Buster) 软件仓库。如果你正在运行这些版本,那么你可以轻松地在官方发行版的软件包管理器的帮助下安装。否则可以用 snap 软件包安装。
为使 Anbox 工作,确保需要的内核模块已经安装在你的系统中。对于基于 Ubuntu 的用户,使用下面的 PPA 来安装它。
```
$ sudo add-apt-repository ppa:morphis/anbox-support
$ sudo apt update
$ sudo apt install linux-headers-generic anbox-modules-dkms
```
在你安装 `anbox-modules-dkms` 软件包后,你必须手动重新加载内核模块,或者重新启动系统。
```
$ sudo modprobe ashmem_linux
$ sudo modprobe binder_linux
```
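可以用下面的命令确认这两个内核模块已经加载:

```
$ lsmod | grep -E 'ashmem_linux|binder_linux'
```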
对于 Debian/Ubuntu 系统,使用 [APT-GET 命令][3] 或 [APT 命令][4] 来安装 anbox。
```
$ sudo apt install anbox
```
对于基于 Arch Linux 的系统,我们总是习惯从 AUR 储存库中获取软件包。所以,使用任一个的 [AUR 助手][5] 来安装它。我喜欢使用 [Yay 工具][6]。
```
$ yay -S anbox-git
```
否则,你可以通过导航到下面的文章来 [在 Linux 中安装和配置 snap][7]。如果你已经在你的系统上安装 snap其它的步骤可以忽略。
```
$ sudo snap install --devmode --beta anbox
```
### Anbox 的必要条件
默认情况下Anbox 并没有带有 Google Play Store。因此我们需要手动下载每个应用程序的 APK并使用 Android 调试桥ADB来安装它。
ADB 工具在大多数发行版的软件仓库中都能轻易获得,我们可以很容易地安装它。
对于 Debian/Ubuntu 系统,使用 [APT-GET 命令][3] 或 [APT 命令][4] 来安装 ADB。
```
$ sudo apt install android-tools-adb
```
对于 Fedora 系统,使用 [DNF 命令][8] 来安装 ADB。
```
$ sudo dnf install android-tools
```
对于基于 Arch Linux 的系统,使用 [Pacman 命令][9] 来安装 ADB。
```
$ sudo pacman -S android-tools
```
对于 openSUSE Leap 系统,使用 [Zypper 命令][10] 来安装 ADB。
```
$ sudo zypper install android-tools
```
### 在哪里下载 Android 应用程序?
既然我们不能使用 Play Store你就得从像 [APKMirror][11] 这样信得过的网站下载 APK 软件包,然后手动安装它。
### 如何启动 Anbox?
Anbox 可以从 Dash 启动。这是 Anbox 的默认外观。
![][13]
### 如何把应用程序推到 Anbox
像我先前所说,我们需要手动安装它。为测试目的,我们将安装 YouTube 和 Firefox 应用程序。
首先,你需要启动 ADB 服务。为此,运行下面的命令。
```
$ adb devices
```
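如果 Anbox 会话正在运行,输出中会列出一个模拟器设备,类似下面这样(设备名因环境而异,仅为示意):

```
List of devices attached
emulator-5558   device
```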
我们已经下载了 YouTube 和 Firefox 应用程序,现在我们来安装它们。
语法格式:
```
$ adb install Name-Of-Your-Application.apk
```
安装 YouTube 和 Firefox 应用程序:
```
$ adb install 'com.google.android.youtube_14.13.54-1413542800_minAPI19(x86_64)(nodpi)_apkmirror.com.apk'
Success
$ adb install 'org.mozilla.focus_9.0-330191219_minAPI21(x86)(nodpi)_apkmirror.com.apk'
Success
```
我已经在我的 Anbox 中安装了 YouTube 和 Firefox。查看下面的截图。
![][14]
如本文开头所说,它会以单独的窗口打开任何应用程序。在这里,我们将打开 Firefox并访问 [2daygeek.com][15] 网站。
![][16]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/anbox-best-android-emulator-for-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/install-configure-sdk-android-emulator-on-linux/
[2]: https://www.2daygeek.com/install-genymotion-android-emulator-on-ubuntu-debian-fedora-arch-linux/
[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]: https://www.2daygeek.com/category/aur-helper/
[6]: https://www.2daygeek.com/install-yay-yet-another-yogurt-aur-helper-on-arch-linux/
[7]: https://www.2daygeek.com/linux-snap-package-manager-ubuntu/
[8]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[11]: https://www.apkmirror.com/
[13]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-1.jpg
[14]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-2.jpg
[15]: https://www.2daygeek.com/
[16]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-3.jpg


@ -1,69 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10820-1.html)
[#]: subject: (How To Install And Configure Chrony As NTP Client?)
[#]: via: (https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
如何正确安装和配置Chrony作为NTP客户端
如何安装和配置 Chrony 作为 NTP 客户端?
======
NTP服务器和NTP客户端运行我们通过网络来同步时钟
NTP 服务器和 NTP 客户端可以让我们通过网络来同步时钟。之前,我们已经撰写了一篇关于 [NTP 服务器和 NTP 客户端的安装与配置][1] 的文章
在过去,我们已经撰写了一篇关于 **[NTP服务器和NTP客户端的安装与配置][1]** 的文章
如果你想看这些内容,点击上述的 URL 访问
如果你想看这些内容点击上述的URL访问。
### Chrony 客户端
### 什么是Chrony客户端?
Chrony 是 NTP 客户端的替代品。它能以更精确的时间和更快的速度同步时钟,并且它对于那些不是全天候在线的系统非常有用。
Chrony是NTP客户端的替代品。
它能以更精确的时间和更快的速度同步时钟,并且它对于那些不是全天候在线的系统非常有用。
chronyd更小、更省电它占用更少的内存且仅当需要时它才唤醒CPU。
即使网络拥塞较长时间,它也能很好地运行。
它支持Linux上的硬件时间戳允许在本地网络进行极其准确的同步。
chronyd 更小、更节能,它占用更少的内存且仅当需要时它才唤醒 CPU。即使网络拥塞较长时间它也能很好地运行。它支持 Linux 上的硬件时间戳,允许在本地网络进行极其准确的同步。
它提供下列两个服务。
* **`chronyc:`** Chrony的命令行接口。
* **`chronyd:`** Chrony守护进程服务。
* `chronyc`Chrony 的命令行接口。
* `chronyd`Chrony 守护进程服务。
### 如何在Linux上安装和配置Chrony
### 如何在 Linux 上安装和配置 Chrony
由于安装包在大多数发行版的官方仓库中可用,因此直接使用包管理器去安装它。
对于 **`Fedora`** 系统, 使用 **[DNF 命令][2]** 去安装chrony.
对于 Fedora 系统,使用 [DNF 命令][2] 去安装 chrony。
```
$ sudo dnf install chrony
```
对于 **`Debian/Ubuntu`** 系统, 使用 **[APT-GET 命令][3]** 或者 **[APT 命令][4]** 去安装chrony.
对于 Debian/Ubuntu 系统,使用 [APT-GET 命令][3] 或者 [APT 命令][4] 去安装 chrony。
```
$ sudo apt install chrony
```
对基于 **`Arch Linux`** 的系统, 使用 **[Pacman 命令][5]** 去安装chrony.
对基于 Arch Linux 的系统,使用 [Pacman 命令][5] 去安装 chrony。
```
$ sudo pacman -S chrony
```
对于 **`RHEL/CentOS`** 系统, 使用 **[YUM 命令][6]** 去安装chrony.
对于 RHEL/CentOS 系统,使用 [YUM 命令][6] 去安装 chrony。
```
$ sudo yum install chrony
```
对于**`openSUSE Leap`** 系统, 使用 **[Zypper 命令][7]** 去安装chrony.
对于 openSUSE Leap 系统,使用 [Zypper 命令][7] 去安装 chrony。
```
$ sudo zypper install chrony
@ -71,20 +61,18 @@ $ sudo zypper install chrony
在这篇文章中,我们将使用下列设置去测试。
* **`NTP服务器:`** 主机名: CentOS7.2daygeek.com, IP:192.168.1.5, OS:CentOS 7
* **`Chrony客户端:`** 主机名: Ubuntu18.2daygeek.com, IP:192.168.1.3, OS:Ubuntu 18.04
* NTP 服务器主机名CentOS7.2daygeek.comIP192.168.1.5OSCentOS 7
* Chrony 客户端主机名Ubuntu18.2daygeek.comIP192.168.1.3OSUbuntu 18.04
服务器的安装请访问 [在 Linux 上安装和配置 NTP 服务器][1] 的 URL。
导航到 **[在Linux上安装和配置NTP服务器][1]** 的URL
我已经在 CentOS7.2daygeek.com 这台主机上安装和配置了 NTP 服务器,因此,将其附加到所有的客户端机器上。此外,还包括其他所需信息
`chrony.conf` 文件的位置根据你的发行版不同而不同。
我已经在`CentOS7.2daygeek.com`这台主机上安装和配置了NTP服务器因此将其附加到所有的客户端机器上。此外还包括其他所需信息
对基于 RHEL 的系统,它位于 `/etc/chrony.conf`
`chrony.conf`文件的位置根据你的发行版不同而不同。
对基于RHEL的系统它位于`/etc/chrony.conf`。
对基于Debian的系统它位于`/etc/chrony/chrony.conf`。
对基于 Debian 的系统,它位于 `/etc/chrony/chrony.conf`
```
# vi /etc/chrony/chrony.conf
@ -98,28 +86,25 @@ makestep 1 3
cmdallow 192.168.1.0/24
```
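一个最小化的客户端配置示意(`server` 行指向上文假设的 NTP 服务器;`makestep 1 3` 取自上面的配置片段,`driftfile` 为常见的默认路径,请按需调整):

```
server CentOS7.2daygeek.com iburst
driftfile /var/lib/chrony/drift
makestep 1 3
```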
更新配置后需要重启Chrony服务。
更新配置后需要重启 Chrony 服务。
对于sysvinit系统。基于RHEL的系统需要去运行`chronyd`而不是chrony。
对于 sysvinit 系统。基于 RHEL 的系统需要去运行 `chronyd` 而不是 `chrony`
```
# service chronyd restart
# chkconfig chronyd on
```
对于systemctl系统。 基于RHEL的系统需要去运行`chronyd`而不是chrony。
对于 systemctl 系统。 基于 RHEL 的系统需要去运行 `chronyd` 而不是 `chrony`
```
# systemctl restart chronyd
# systemctl enable chronyd
```
使用像tackingsources和sourcestats这样的命令去检查chrony的同步细节。
去检查chrony的跟踪状态。
使用像 `tracking`、`sources` 和 `sourcestats` 这样的子命令去检查 chrony 的同步细节。
去检查 chrony 的追踪状态。
```
# chronyc tracking
@ -138,7 +123,7 @@ Update interval : 2.0 seconds
Leap status : Normal
```
运行sources命令去显示当前时间源的信息。
运行 `sources` 命令去显示当前时间源的信息。
```
# chronyc sources
@ -148,7 +133,7 @@ MS Name/IP address Stratum Poll Reach LastRx Last sample
^* CentOS7.2daygeek.com 2 6 17 62 +36us[+1230us] +/- 1111ms
```
sourcestats命令显示有关chronyd当前正在检查的每个源的漂移率和偏移估计过程的信息。
`sourcestats` 命令显示有关 chronyd 当前正在检查的每个源的漂移率和偏移估计过程的信息。
```
# chronyc sourcestats
@ -158,7 +143,7 @@ Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
CentOS7.2daygeek.com 5 3 71 -97.314 78.754 -469us 441us
```
chronyd配置为NTP客户端或对等端时你就能通过chronyc ntpdata命令向每一个NTP源发送和接收时间戳模式和交错模式报告。
chronyd 配置为 NTP 客户端或对等端时,你就能通过 `chronyc ntpdata` 命令向每一个 NTP 源发送/接收时间戳模式和交错模式的报告。
```
# chronyc ntpdata
@ -191,15 +176,14 @@ Total RX : 46
Total valid RX : 46
```
最后运行`date`命令。
最后运行 `date` 命令。
```
# date
Thu Mar 28 03:08:11 CDT 2019
```
为了立即切换系统时钟通过转换绕过任何正在进行的调整请以root身份发出以下命令手动调整系统时钟
To step the system clock immediately, bypassing any adjustments in progress by slewing, issue the following command as root (To adjust the system clock manually).
为了立即校正系统时钟,绕过任何正在进行的缓步调整,请以 root 身份运行以下命令(以手动调整系统时钟)。
```
# chronyc makestep
@ -212,13 +196,13 @@ via: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[arrowfeng](https://github.com/arrowfeng)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/
[1]: https://linux.cn/article-10811-1.html
[2]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/


@ -1,38 +1,38 @@
[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10823-1.html)
[#]: subject: (12 Single Board Computers: Alternative to Raspberry Pi)
[#]: via: (https://itsfoss.com/raspberry-pi-alternatives/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
12 个可替换 Raspberry Pi 的单片
12 个可替代树莓派的单板
================================
_**简介: 正在寻找 Raspberry Pi 的替代品? 这里有一些单片机可以满足你的 DIY 渴求**_
> 正在寻找树莓派的替代品?这里有一些单板机可以满足你的 DIY 渴求。
Raspberry Pi 是当前最流行的单片机。你可以在你的 DIY 项目中使用它,或者用它作为一个成本效益高的系统来学习编代码,或者为了你的便利,利用一个[流媒体软件][1]运行在上面作为流媒体设备。
树莓派是当前最流行的单板机。你可以在你的 DIY 项目中使用它,或者用它作为一个成本效益高的系统来学习编代码,或者为了你的便利,利用一个[流媒体软件][1]运行在上面作为流媒体设备。
你可以使用 Raspberry Pi 做很多事,但它不是各种极客的最终解决方案。一些人可能在寻找更便宜的开发板,一些可能在寻找更强大的。
你可以使用树莓派做很多事,但它不是各种极客的最终解决方案。一些人可能在寻找更便宜的开发板,一些可能在寻找更强大的。
无论是哪种情况,我们都有很多原因需要 Raspberry Pi 的替代品。因此,在这片文章里,我们将讨论最好的十个我们认为能够替代 Raspberry Pi 的单片机。
无论是哪种情况,我们都有很多原因需要树莓派的替代品。因此,在这篇文章里,我们将讨论 12 个我们认为能够替代树莓派的最好的单板机。
![][2]
### 满足你 DIY 渴望的 Raspberry Pi 替代品
### 满足你 DIY 渴望的树莓派替代品
这个列表没有特定的顺序排名。链接的一部分是附属链接。请阅读我们的[附属政策][3].
这个列表没有特定的顺序排名。链接的一部分是赞助链接。请阅读我们的[赞助政策][3]。
#### 1\. Onion Omega2+
#### 1Onion Omega2+
![][4]
只要 **$13**Omega2+ 是这里你可以找到的最便宜的 IoT 单片机设备。它运行 LEDELinux 嵌入式开发环境Linux 系统 - 一个基于 [OpenWRT][5] 的分发版。
只要 $13Omega2+ 是这里你可以找到的最便宜的 IoT 单板机设备。它运行 LEDELinux 嵌入式开发环境Linux 系统 —— 这是一个基于 [OpenWRT][5] 的发行版。
由于运行一个自定义 Linux 系统,它的组成因素,花费,和灵活性使它完美适合几乎所有类型的 IoT 应用。
由于运行一个自定义 Linux 系统,它的组成元件、花费和灵活性使它完美适合几乎所有类型的 IoT 应用。
你可以在[亚马逊商城的 Onion Omega 装备][6]或者从他们的网站下单,可能会收取额外的邮费。
你可以在[亚马逊商城的 Onion Omega 套件][6]或者从他们的网站下单,可能会收取额外的邮费。
**关键参数:**
@ -44,15 +44,15 @@ Raspberry Pi 是当前最流行的单片机。你可以在你的 DIY 项目中
* USB 2.0
* 12 GPIO Pins
[查看官网][7]
[查看官网][7]
#### 2\. NVIDIA Jetson Nano Developer Kit
#### 2NVIDIA Jetson Nano Developer Kit
这是来自 NVIDIA 的只要 **$99** 的非常独特和有趣的 Raspberry Pi 替代品。是的,它不是每个人都能充分利用的设备 - 只为特定的一组极客或者开发者
这是来自 NVIDIA 的只要 **$99** 的非常独特和有趣的树莓派替代品。是的,它不是每个人都能充分利用的设备 —— 只为特定的一组极客或者开发者而生
NVIDIA 使用下面的用例解释它:
> NVIDIA® Jetson Nano™ Developer Kit 是一个小的,强大的让你并行运行多个神经网络的应用像图像分类,对象侦察,分段,语音处理。全部在一个易于使用的运行功率只有 5 瓦特平台
> NVIDIA® Jetson Nano™ Developer Kit 是一个小的、强大的计算机,可以让你并行运行多个神经网络的应用像图像分类、对象侦察、图像分段、语音处理。全部在一个易于使用的、运行功率只有 5 瓦特的平台上
>
> nvidia
@ -66,20 +66,17 @@ NVIDIA 使用下面的用例解释它:
* Display: HDMI 2.0
* 4 x USB 3.0 and eDP 1.4
[查看官网][9]
[查看官网
][9]
#### 3\. ASUS Tinker Board S
#### 3、ASUS Tinker Board S
![][10]
ASUS Tinker Board S 不是大多数可负担得起的可替代 Raspberry Pi 的替换设备 **$82**, [亚马逊商城][11]),但是它是一个强大的替代品。它的特点是有你通常可以发现与标准 Raspberry Pi 3 Model 一样的 40 针脚的连接器,但是提供了强大的处理器和 GPU。同样的Tinker Board S 的大小恰巧和标准的 Raspberry Pi 3 一样大。
ASUS Tinker Board S**$82**[亚马逊商城][11])并不是最实惠的树莓派替代品,但它是一个强大的替代品。它的特点是配备了和标准树莓派 3 相同的 40 针脚连接器,但提供了更强大的处理器和 GPU。同样Tinker Board S 的大小恰巧和标准的树莓派 3 一样大。
这个板子的主要亮点是带有 16 GB 的 [eMMC][12](用外行话说,它板载了一个类似 SSD 的存储单元,使它运行得更快)。
**关键参数**
**关键参数**
* Rockchip Quad-Core RK3288 processor
* 2 GB DDR3 RAM
@ -92,20 +89,17 @@ ASUS Tinker Board S 不是大多数可负担得起的可替代 Raspberry Pi 的
* 28 GPIO pins
* HDMI Interface
[查看网站][13]
[查看网站
][13]
#### 4\. ClockworkPi
#### 4、ClockworkPi
![][14]
如果你在想方设法组装一个模块化的复古的游戏控制台Clockwork Pi 通常是 [GameShell Kit][15] 的一部分。然而,你可以 使用 $49 单独购买板子。
如果你正在想方设法组装一个模块化的复古游戏控制台Clockwork Pi 可能就是你需要的,它通常是 [GameShell Kit][15] 的一部分。然而,你可以用 $49 单独购买板子。
它紧凑的大小WiFi 连接性,和 micro HDMI 端口的存在使它成为很多事物的选择。
它紧凑的大小、WiFi 连接性和 micro HDMI 端口的存在使它成为许多方面的选择。
**关键参数**
**关键参数**
* Allwinner R16-J Quad-core Cortex-A7 CPU @1.2GHz
* Mali-400 MP2 GPU
@ -114,41 +108,33 @@ ASUS Tinker Board S 不是大多数可负担得起的可替代 Raspberry Pi 的
* Micro HDMI output
* MicroSD Card Slot
[查看官网][16]
[查看官网
][16]
#### 5\. Arduino Mega 2560
#### 5、Arduino Mega 2560
![][17]
如果你正在研究机器人项目或者你想要一个 3D 打印机 - Arduino Mega 2560 将是 Raspberry Pi 的便利的替代品。不像 Raspberry Pi,它是基于微控制器而不是微处理器的。
如果你正在研究机器人项目或者你想要一个 3D 打印机 —— Arduino Mega 2560 将是树莓派的便利的替代品。不像树莓派,它是基于微控制器而不是微处理器的。
在他们的[官网][18]它会花费你 $38.50 或者在[在亚马逊商城 $33][19]。
在他们的[官网][18]你需要花费 $38.50,或者在[在亚马逊商城是 $33][19]。
**关键参数:**
**Key Specifications:**
* Microcontroller: ATmega2560
* Clock Speed: 16 MHz
* Digital I/O Pins: 54
* Analog Input Pins: 16
* Flash Memory: 256 KB of which 8 KB used by bootloader
[查看官网][18]
[查看官网
][18]
#### 6\. Rock64 Media Board
#### 6、Rock64 Media Board
![][20]
对于与你可能想要 Raspberry Pi 3 B+ 相同的投资,你将在 Rock64 Media Board 上获得更快的处理器和双倍的内存。除此之外,如果你想要 1 GB RAM 版的,它提供了一个 Raspberry Pi 的 更便宜的替代,花费更少,只要 $10 。
用与你可能想要的树莓派 3 B+ 相同的价格,你将在 Rock64 Media Board 上获得更快的处理器和双倍的内存。除此之外,如果你想要 1 GB RAM 版的,它提供了一个比树莓派更便宜的替代品,花费更少,只要 $10 。
不像 Raspberry Pi这里没有无线连接支持,但是 USB 3.0 和 HDMI 2.0 的存在使它与众不同,如果它对你很重要的话。
不像树莓派,它没有无线连接支持,但是 USB 3.0 和 HDMI 2.0 的存在使它与众不同,如果它对你很重要的话。
**关键参数:**
@ -159,20 +145,18 @@ ASUS Tinker Board S 不是大多数可负担得起的可替代 Raspberry Pi 的
* USB 3.0
* HDMI 2.0
[查看官网][21]
[查看官网
][21]
#### 7\. Odroid-XU4
#### 7、Odroid-XU4
![][22]
Odroid-XU4 是一个完美的 Raspberry Pi 的替代,如果你有能够稍微提高预算的空间($80-$100 甚至更低,取决于存储的容量)。
Odroid-XU4 是一个完美的树莓派的替代品,如果你有能够稍微提高预算的空间($80-$100 甚至更低,取决于存储的容量)。
它确实是一个强大的替代并且体积更小。 支持 eMMC 和 USB 3.0 使它工作起来更快。
它确实是一个强大的替代并且体积更小。支持 eMMC 和 USB 3.0 使它工作起来更快。
**关键参数:**
* Samsung Exynos 5422 Octa ARM Cortex™-A15 Quad 2Ghz and Cortex™-A7 Quad 1.3GHz CPUs
* 2Gbyte LPDDR3 RAM
* GPU: Mali-T628 MP6
@ -181,16 +165,13 @@ Odroid-XU4 是一个完美的 Raspberry Pi 的替代,如果你有能够稍微提
* eMMC 5.0 module socket
* MicroSD Card Slot
[查看官网][23]
[查看官网
][23]
#### 8\. **PocketBeagle**
#### 8、PocketBeagle
![][24]
它是一个难以置信的小的单片机 - 几乎和 Raspberry Pi Zero 相似。然而它会花费完全大小的 Raspberry Pi 3 相同的价格。主要的亮点是你可以用它作为一个 USB 便携式信息终端 并且进入 Linux 命令行工作。
它是一个难以置信的小的单板机 —— 几乎和树莓派 Zero 一样小。然而,它的价格与完整大小的树莓派 3 相当。主要的亮点是你可以用它作为一个 USB 便携式信息终端,并且进入 Linux 命令行工作。
**关键参数:**
@ -200,22 +181,18 @@ Odroid-XU4 是一个完美的 Raspberry Pi 的替代,如果你有能够稍微提
* microUSB
* USB 2.0
[查看官网][25]
[查看官网
][25]
#### 9\. Le Potato
#### 9、Le Potato
![][26]
由 [Libre Computer][27] 出品的 Le Potato同样被它的型号 AML-S905X-CC 标识。它花费你 [$45][28]。
由 [Libre Computer][27] 出品的 Le Potato其型号是 AML-S905X-CC。它需要花费你 [$45][28]。
如果你花费的比 Raspberry Pi 更多的钱,你就能得到想要双倍内存和 HDMI 2.0 接口,这可能是一个完美的选择。尽管,你还是不能发现嵌入的无线连接。
如果你花费的比树莓派更多的钱,你就能得到双倍内存和 HDMI 2.0 接口,这可能是一个完美的选择。尽管,你还是不能找到嵌入的无线连接。
**关键参数:**
* Amlogic S905X SoC
* 2GB DDR3 SDRAM
* USB 2.0
@ -224,18 +201,15 @@ Odroid-XU4 是一个完美的 Raspberry Pi 的替代,如果你有能够稍微提
* MicroSD Card Slot
* eMMC Interface
[查看官网][29]
[查看官网
][29]
#### 10\. Banana Pi M64
#### 10、Banana Pi M64
![][30]
它自带了 8 Gigs 的 eMMC - 是替代 Raspberry Pi 的主要亮点。由于相同的原因,它花费 $60。
它自带了 8G 的 eMMC —— 这是替代树莓派的主要亮点。因此,它需要花费 $60。
HDMI 接口的存在使它胜任 4K。除此之外Banana Pi 提供了更多种类的开源单片机作为 Raspberry Pi 的替代。
HDMI 接口的存在使它胜任 4K。除此之外Banana Pi 提供了更多种类的开源单板机作为树莓派的替代。
**关键参数:**
@ -246,18 +220,15 @@ HDMI 接口的存在使它胜任 4K。除此之外Banana Pi 提供了更多
* USB 2.0
* HDMI
[查看官网][31]
[查看官网
][31]
#### 11\. Orange Pi Zero
#### 11、Orange Pi Zero
![][32]
Orange Pi Zero 相对于 Raspberry Pi 难以置信的便宜。你可以在 Aliexpress 或者亚马逊上以最多 $10 就能够获得。如果[稍微投资多点,你能够获得 512 MB RAM][33]。
Orange Pi Zero 相对于树莓派来说便宜得难以置信。你在 Aliexpress 或者亚马逊上最多花 $10 就能买到。如果[稍微多花一点,你还能买到 512 MB RAM 的版本][33]。
如果这还不够充分,你可以花费大概 $25 获得更好的配置 Orange Pi 3。
如果这还不够,你可以花费大概 $25 获得更好的配置,比如 Orange Pi 3。
**关键参数:**
@ -268,18 +239,15 @@ Orange Pi Zero 相对于 Raspberry Pi 难以置信的便宜。你可以在 Aliex
* WiFi
* USB 2.0
[查看官网][34]
[查看官网
][34]
#### 12\. VIM 2 SBC by Khadas
#### 12、VIM 2 SBC by Khadas
![][35]
由 Khadas 出品的 VIM 2 是最新的单片机,因此你能够在板上获取到蓝牙 5.0。[从 $99 的基础款到上限 $140][36].
由 Khadas 出品的 VIM 2 是最新的单板机,因此你能够在板上得到蓝牙 5.0 支持。它的价格范围[从 $99 的基础款到上限 $140][36]。
基础款包含 2 GB RAM16 GB eMMC 和蓝牙 4.1。然而Pro/Max 版包含蓝牙 5.0,更多的内存,更多的 eMMC 存储。
基础款包含 2 GB RAM16 GB eMMC 和蓝牙 4.1。然而Pro/Max 版包含蓝牙 5.0,更多的内存,更多的 eMMC 存储。
**关键参数:**
@ -292,13 +260,11 @@ Orange Pi Zero 相对于 Raspberry Pi 难以置信的便宜。你可以在 Aliex
* HDMI 2.0a
* WiFi
### 总结
我们知道有很多不同种类的单板机电脑。有些性能比树莓派更好,有些小规格的版本则比树莓派更便宜。同样,像 Jetson Nano 这样的单板机是为特定用途定制的。因此,你应该根据自己的需求来检查单板机的配置。
**总结**
我们知道有很多不同种类的单片机电脑。一些比 Raspberry Pi 更好 - 它的一些小规格的版本有更便宜的价格。同样的,单片机像 Jetson Nano 已经被裁剪用于特定用途。因此,取决于你需要什么 - 你应该验证单片机的配置。
如果你认为你知道比上述提到的更好的东西,请随意在下方评论来让我们知道。
如果你知道比上述提到的更好的东西,请随意在下方评论来让我们知道。
--------------------------------------------------------------------------------
@ -307,7 +273,7 @@ via: https://itsfoss.com/raspberry-pi-alternatives/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[warmfrog](https://github.com/warmfrog)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,38 +1,30 @@
[#]: collector: (lujun9972)
[#]: translator: (bodhix)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10844-1.html)
[#]: subject: (How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux?)
[#]: via: (https://www.2daygeek.com/enable-disable-up-down-nic-network-interface-port-linux-using-ifconfig-ifdown-ifup-ip-nmcli-nmtui/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux?
Linux 中如何启用和禁用网卡?
======
You may need to run these commands based on your requirements.
你可能会根据你的需要执行以下命令。我会在这里列举一些你会用到这些命令的例子。
I can tell you few examples, where you would be needed this.
当你添加一个网卡或者从一个物理网卡创建出一个虚拟网卡的时候,你可能需要使用这些命令将新网卡启用起来。另外,如果你对网卡做了某些修改或者网卡本身没有启用,那么你也需要使用以下的某个命令将网卡启用起来。
When you add a new network interface or when you create a new virtual network interface from the original physical interface.
启用、禁用网卡有很多种方法。在这篇文章里,我们会介绍我们使用过的最好的 5 种方法。
you may need to bounce these commands to bring up the new interface.
启用禁用网卡可以使用以下 5 个方法来完成:
Also, if you made any changes or if its down then you need to run one of the below commands to bring them up.
* `ifconfig` 命令:用于配置网卡。它可以提供网卡的很多信息。
* `ifdown/up` 命令:`ifdown` 命令用于禁用网卡,`ifup` 命令用于启用网卡。
* `ip` 命令:用于管理网卡,用于替代老旧的、不推荐使用的 `ifconfig` 命令。它和 `ifconfig` 命令很相似,但是提供了很多 `ifconfig` 命令所不具有的强大的特性。
* `nmcli` 命令:是一个控制 NetworkManager 并报告网络状态的命令行工具。
* `nmtui` 命令:是一个与 NetworkManager 交互的、基于 curses 图形库的终端 UI 应用。
It can be done on many ways and we would like to add best five method which we used in the article.
It can be done using the below five methods.
* **`ifconfig Command:`** The ifconfig command is used configure a network interface. It provides so many information about NIC.
* **`ifdown/up Command:`** The ifdown command take a network interface down and the ifup command bring a network interface up.
* **`ip Command:`** ip command is used to manage NIC. Its replacement of old and deprecated ifconfig command. Its similar to ifconfig command but has many powerful features which isnt available in ifconfig command.
* **`nmcli Command:`** nmcli is a command-line tool for controlling NetworkManager and reporting network status.
* **`nmtui Command:`** nmtui is a cursesbased TUI application for interacting with NetworkManager.
The below output shows the available network interface card (NIC) information in my Linux system.
以下显示的是我的 Linux 系统中可用网卡的信息。
```
# ip a
@ -56,25 +48,25 @@ The below output shows the available network interface card (NIC) information in
valid_lft forever preferred_lft forever
```
### 1) How To Bring UP And Bring Down A Network Interface In Linux Using ifconfig Command?
### 1、如何使用 ifconfig 命令启用禁用网卡?
The ifconfig command is used configure a network interface.
`ifconfig` 命令用于配置网卡。
It is used at boot time to set up interfaces as necessary. It provides so many information about NIC. We can use ifconfig command when we need to make any changes on NIC.
在系统启动过程中如果需要启用网卡,调用的命令就是 `ifconfig`。`ifconfig` 可以提供很多网卡的信息。不管我们想修改网卡的什么配置,都可以使用该命令。
Common Syntax for ifconfig:
`ifconfig` 的常用语法:
```
# ifconfig [NIC_NAME] Down/Up
```
Run the following command to bring down the `enp0s3` interface in Linux. Make a note, you have to input your interface name instead of us.
执行以下命令禁用 `enp0s3` 网卡。注意,这里你需要输入你自己的网卡名字。
```
# ifconfig enp0s3 down
```
Yes, the given interface is down now as per the following output.
从以下输出结果可以看到网卡已经被禁用了。
```
# ip a | grep -A 1 "enp0s3:"
@ -82,13 +74,13 @@ Yes, the given interface is down now as per the following output.
link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
```
Run the following command to bring down the `enp0s3` interface in Linux.
执行以下命令启用 `enp0s3` 网卡。
```
# ifconfig enp0s3 up
```
Yes, the given interface is up now as per the following output.
从以下输出结果可以看到网卡已经启用了。
```
# ip a | grep -A 5 "enp0s3:"
@ -100,27 +92,26 @@ Yes, the given interface is up now as per the following output.
valid_lft forever preferred_lft forever
```
### 2) How To Enable And Disable A Network Interface In Linux Using ifdown/up Command?
### 2、如何使用 ifdown/up 命令启用禁用网卡?
The ifdown command take a network interface down and the ifup command bring a network interface up.
`ifdown` 命令用于禁用网卡,`ifup` 命令用于启用网卡。
**Note:**It doesnt work on new interface device name like `enpXXX`
注意:这两个命令不支持以 `enpXXX` 命名的新的网络设备。
Common Syntax for ifdown/ifup:
`ifdown`/`ifup` 的常用语法:
```
# ifdown [NIC_NAME]
# ifup [NIC_NAME]
```
Run the following command to bring down the `eth1` interface in Linux.
执行以下命令禁用 `eth1` 网卡。
```
# ifdown eth0
# ifdown eth1
```
Run the following command to bring down the `eth1` interface in Linux.
从以下输出结果可以看到网卡已经被禁用了。
```
# ip a | grep -A 3 "eth1:"
@ -128,13 +119,13 @@ Run the following command to bring down the `eth1` interface in Linux.
link/ether 08:00:27:d5:a0:18 brd ff:ff:ff:ff:ff:ff
```
Run the following command to bring down the `eth1` interface in Linux.
执行以下命令启用 `eth1` 网卡。
```
# ifup eth0
# ifup eth1
```
Yes, the given interface is up now as per the following output.
从以下输出结果可以看到网卡已经启用了。
```
# ip a | grep -A 5 "eth1:"
@ -145,32 +136,32 @@ Yes, the given interface is up now as per the following output.
valid_lft forever preferred_lft forever
```
ifup and ifdown doesnt supporting the latest interface device `enpXXX` names. I got the below message when i ran the command.
`ifup``ifdown` 不支持以 `enpXXX` 命名的网卡。当执行该命令时得到的结果如下:
```
# ifdown enp0s8
Unknown interface enp0s8
```
### 3) How To Bring UP/Bring Down A Network Interface In Linux Using ip Command?
### 3、如何使用 ip 命令启用禁用网卡?
ip command is used to manage Network Interface Card (NIC). Its replacement of old and deprecated ifconfig command on modern Linux systems.
`ip` 命令用于管理网卡,用于替代老旧的、不推荐使用的 `ifconfig` 命令。
Its similar to ifconfig command but has many powerful features which isnt available in ifconfig command.
它和 `ifconfig` 命令很相似,但是提供了很多 `ifconfig` 命令不具有的强大的特性。
Common Syntax for ip:
`ip` 的常用语法:
```
# ip link set Down/Up
```
Run the following command to bring down the `enp0s3` interface in Linux.
执行以下命令禁用 `enp0s3` 网卡。
```
# ip link set enp0s3 down
```
Yes, the given interface is down now as per the following output.
从以下输出结果可以看到网卡已经被禁用了。
```
# ip a | grep -A 1 "enp0s3:"
@ -178,13 +169,13 @@ Yes, the given interface is down now as per the following output.
link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
```
Run the following command to bring down the `enp0s3` interface in Linux.
执行以下命令启用 `enp0s3` 网卡。
```
# ip link set enp0s3 up
```
Yes, the given interface is up now as per the following output.
从以下输出结果可以看到网卡已经启用了。
```
# ip a | grep -A 5 "enp0s3:"
@ -196,15 +187,13 @@ Yes, the given interface is up now as per the following output.
valid_lft forever preferred_lft forever
```
### 4) How To Enable And Disable A Network Interface In Linux Using nmcli Command?
### 4、如何使用 nmcli 命令启用禁用网卡?
nmcli is a command-line tool for controlling NetworkManager and reporting network status.
`nmcli` 是一个控制 NetworkManager 并报告网络状态的命令行工具。
It can be utilized as a replacement for nm-applet or other graphical clients. nmcli is used to create, display, edit, delete, activate, and deactivate network
`nmcli` 可以用做 nm-applet 或者其他图形化客户端的替代品。它可以用于展示、创建、修改、删除、启用和停用网络连接。除此之后,它还可以用来管理和展示网络设备状态。
connections, as well as control and display network device status.
Run the following command to identify the interface name because nmcli command is perform most of the task using `profile name` instead of `device name`.
`nmcli` 命令大部分情况下都是使用“配置名称”而不是“设备名称”来工作的。所以,执行以下命令获取网卡对应的配置名称。LCTT 译注:在使用 `nmtui` 或者 `nmcli` 管理网络连接的时候,可以为网络连接配置一个名称,就是这里提到的<ruby>配置名称<rt>Profile name</rt></ruby>。)
```
# nmcli con show
@ -213,20 +202,20 @@ Wired connection 1 3d5afa0a-419a-3d1a-93e6-889ce9c6a18c ethernet enp0s3
Wired connection 2 a22154b7-4cc4-3756-9d8d-da5a4318e146 ethernet enp0s8
```
Common Syntax for ip:
`nmcli` 的常用语法:
```
# nmcli con Down/Up
```
Run the following command to bring down the `enp0s3` interface in Linux. You have to give `profile name` instead of `device name` to bring down it.
执行以下命令禁用 `enp0s3` 网卡。在禁用网卡的时候,你需要使用配置名称而不是设备名称。
```
# nmcli con down 'Wired connection 1'
Connection 'Wired connection 1' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
```
Yes, the given interface is down now as per the following output.
从以下输出结果可以看到网卡已经禁用了。
```
# nmcli dev status
@ -236,14 +225,14 @@ enp0s3 ethernet disconnected --
lo loopback unmanaged --
```
Run the following command to bring down the `enp0s3` interface in Linux. You have to give `profile name` instead of `device name` to bring down it.
执行以下命令启用 `enp0s3` 网卡。同样的,这里你需要使用配置名称而不是设备名称。
```
# nmcli con up 'Wired connection 1'
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
```
Yes, the given interface is up now as per the following output.
从以下输出结果可以看到网卡已经启用了。
```
# nmcli dev status
@ -253,25 +242,27 @@ enp0s3 ethernet connected Wired connection 1
lo loopback unmanaged --
```
### 5) How To Bring UP/Bring Down A Network Interface In Linux Using nmtui Command?
### 5、如何使用 nmtui 命令启用禁用网卡?
nmtui is a curses based TUI application for interacting with NetworkManager.
`nmtui` 是一个与 NetworkManager 交互的、基于 curses 图形库的终端 UI 应用。
When starting nmtui, the user is prompted to choose the activity to perform unless it was specified as the first argument.
在启用 `nmtui` 的时候,如果第一个参数没有特别指定,它会引导用户选择对应的操作去执行。
Run the following command launch the nmtui interface. Select “Active a connection” and hit “OK”
执行以下命令打开 `nmtui` 界面。选择 “Activate a connection” 然后点击 “OK”。
```
# nmtui
```
[![][1]![][1]][2]
![][2]
Select the interface which you want to bring down then hit “Deactivate” button.
[![][1]![][1]][3]
选择你要禁用的网卡,然后点击 “Deactivate” 按钮,就可以将网卡禁用。
For activation do the same above procedure.
[![][1]![][1]][4]
![][3]
如果要启用网卡,使用上述同样的步骤即可。
![][4]
--------------------------------------------------------------------------------
@ -279,8 +270,8 @@ via: https://www.2daygeek.com/enable-disable-up-down-nic-network-interface-port-
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[bodhix](https://github.com/bodhix)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,444 @@
[#]: collector: (lujun9972)
[#]: translator: (FSSlc)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10826-1.html)
[#]: subject: (Inter-process communication in Linux: Shared storage)
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-storage)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
Linux 下的进程间通信:共享存储
======
> 学习在 Linux 中进程是如何与其他进程进行同步的。
![Filing papers and documents][1]
本篇是 Linux 下[进程间通信][2]IPC系列的第一篇文章。这个系列将使用 C 语言代码示例来阐明以下 IPC 机制:
* 共享文件
* 共享内存(使用信号量)
* 管道(命名的或非命名的管道)
* 消息队列
* 套接字
* 信号
在聚焦上面提到的共享文件和共享内存这两个机制之前,这篇文章将带你回顾一些核心的概念。
### 核心概念
*进程*是运行着的程序,每个进程都有着它自己的地址空间,这些空间由进程被允许访问的内存地址组成。进程有一个或多个执行*线程*,而线程是一系列执行指令的集合:*单线程*进程就只有一个线程,而*多线程*的进程则有多个线程。一个进程中的线程共享各种资源,特别是地址空间。另外,一个进程中的线程可以直接通过共享内存来进行通信,尽管某些现代语言(例如 Go鼓励一种更有序的方式例如使用线程安全的通道。当然对于不同的进程默认情况下它们**不**能共享内存。
有多种方法启动之后要进行通信的进程,下面所举的例子中主要使用了下面的两种方法:
* 一个终端被用来启动一个进程,另外一个不同的终端被用来启动另一个。
* 在一个进程(父进程)中调用系统函数 `fork`,以此生发另一个进程(子进程)。
第一个例子采用了上面使用终端的方法。这些[代码示例][3]的 ZIP 压缩包可以从我的网站下载到。
### 共享文件
程序员对文件访问应该都已经很熟悉了,包括许多坑(不存在的文件、错误的文件权限等等),这些问题困扰着程序对文件的使用。尽管如此,共享文件可能是最为基础的 IPC 机制了。考虑一下下面这样一个相对简单的例子,其中一个进程(生产者 `producer`)创建和写入一个文件,然后另一个进程(消费者 `consumer`)从这个相同的文件中进行读取:
```
writes +-----------+ reads
producer-------->| disk file |<-------consumer
+-----------+
```
在使用这个 IPC 机制时最明显的挑战是*竞争条件*可能会发生:生产者和消费者可能恰好在同一时间访问该文件,从而使得输出结果不确定。为了避免竞争条件的发生,该文件在处于*读*或*写*状态时必须以某种方式处于被锁状态,从而阻止在*写*操作执行时和其他操作的冲突。在标准系统库中与锁相关的 API 可以被总结如下:
* 生产者应该在写入文件时获得一个文件的排斥锁。一个*排斥*锁最多被一个进程所拥有。这样就可以排除掉竞争条件的发生,因为在锁被释放之前没有其他的进程可以访问这个文件。
* 消费者应该在从文件中读取内容时得到至少一个共享锁。多个*读取者*可以同时保有一个*共享*锁,但是没有*写入者*可以获取到文件内容,甚至在当只有一个*读取者*保有一个共享锁时。
共享锁可以提升效率。假如一个进程只是读入一个文件的内容,而不去改变它的内容,就没有什么原因阻止其他进程来做同样的事。但如果需要写入内容,则很显然需要文件有排斥锁。
标准的 I/O 库中包含一个名为 `fcntl` 的实用函数,它可以被用来检查或者操作一个文件上的排斥锁和共享锁。该函数通过一个*文件描述符*(一个在进程中的非负整数值)来标记一个文件(在不同的进程中不同的文件描述符可能标记同一个物理文件)。对于文件的锁定, Linux 提供了名为 `flock` 的库函数,它是 `fcntl` 的一个精简包装。第一个例子中使用 `fcntl` 函数来暴露这些 API 细节。
#### 示例 1. 生产者程序
```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

#define FileName "data.dat"
#define DataString "Now is the winter of our discontent\nMade glorious summer by this sun of York\n"

void report_and_exit(const char* msg) {
  perror(msg);
  exit(-1); /* EXIT_FAILURE */
}

int main() {
  struct flock lock;
  lock.l_type = F_WRLCK;    /* read/write (exclusive) lock */
  lock.l_whence = SEEK_SET; /* base for seek offsets */
  lock.l_start = 0;         /* 1st byte in file */
  lock.l_len = 0;           /* 0 here means 'until EOF' */
  lock.l_pid = getpid();    /* process id */

  int fd; /* file descriptor to identify a file within a process */
  if ((fd = open(FileName, O_RDWR | O_CREAT, 0666)) < 0) /* -1 signals an error */
    report_and_exit("open failed...");

  if (fcntl(fd, F_SETLK, &lock) < 0) /* F_SETLK doesn't block, F_SETLKW does */
    report_and_exit("fcntl failed to get lock...");
  else {
    write(fd, DataString, strlen(DataString)); /* populate data file */
    fprintf(stderr, "Process %d has written to data file...\n", lock.l_pid);
  }

  /* Now release the lock explicitly. */
  lock.l_type = F_UNLCK;
  if (fcntl(fd, F_SETLK, &lock) < 0)
    report_and_exit("explicit unlocking failed...");

  close(fd); /* close the file: would unlock if needed */
  return 0;  /* terminating the process would unlock as well */
}
```
上面生产者程序的主要步骤可以总结如下:
* 这个程序首先声明了一个类型为 `struct flock` 的变量,它代表一个锁,并对它的 5 个域做了初始化。第一个初始化
```c
lock.l_type = F_WRLCK; /* exclusive lock */
```
使得这个锁为排斥锁read-write而不是一个共享锁read-only。假如生产者获得了这个锁则其他的进程将不能够对文件做读或者写操作直到生产者释放了这个锁或者显式地调用 `fcntl`,又或者隐式地关闭这个文件。(当进程终止时,所有被它打开的文件都会被自动关闭,从而释放了锁)
* 上面的程序接着初始化其他的域。主要的效果是*整个*文件都将被锁上。但是,有关锁的 API 允许特别指定的字节被上锁。例如,假如文件包含多个文本记录,则单个记录(或者甚至一个记录的一部分)可以被锁,而其余部分不被锁。
* 第一次调用 `fcntl`
```c
if (fcntl(fd, F_SETLK, &lock) < 0)
```
尝试排斥性地将文件锁住,并检查调用是否成功。一般来说,`fcntl` 函数返回 `-1`(因此小于 0意味着失败。第二个参数 `F_SETLK` 意味着 `fcntl` 的调用*不是*阻塞的:函数立即返回,要么获得锁,要么表示失败。假如改用 `F_SETLKW`(末尾的 `W` 代指*等待*),那么对 `fcntl` 的调用将是阻塞的,直到能够获得锁为止。在调用 `fcntl` 函数时,它的第一个参数 `fd` 指的是文件描述符,第二个参数指定了将要采取的动作(在这个例子中,`F_SETLK` 指代设置锁),第三个参数为锁结构的地址(在本例中,指的是 `&lock`)。
* 假如生产者获得了锁,这个程序将向文件写入两个文本记录。
* 在向文件写入内容后,生产者改变锁结构中的 `l_type` 域为 `unlock` 值:
```c
lock.l_type = F_UNLCK;
```
并调用 `fcntl` 来执行解锁操作。最后程序关闭了文件并退出。
#### 示例 2. 消费者程序
```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#define FileName "data.dat"
void report_and_exit(const char* msg) {
perror(msg);
exit(-1); /* EXIT_FAILURE */
}
int main() {
struct flock lock;
lock.l_type = F_WRLCK; /* read/write (exclusive) lock */
lock.l_whence = SEEK_SET; /* base for seek offsets */
lock.l_start = 0; /* 1st byte in file */
lock.l_len = 0; /* 0 here means 'until EOF' */
lock.l_pid = getpid(); /* process id */
int fd; /* file descriptor to identify a file within a process */
if ((fd = open(FileName, O_RDONLY)) < 0) /* -1 signals an error */
report_and_exit("open to read failed...");
/* If the file is write-locked, we can't continue. */
fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
if (lock.l_type != F_UNLCK)
report_and_exit("file is still write locked...");
lock.l_type = F_RDLCK; /* prevents any writing during the reading */
if (fcntl(fd, F_SETLK, &lock) < 0)
report_and_exit("can't get a read-only lock...");
/* Read the bytes (they happen to be ASCII codes) one at a time. */
int c; /* buffer for read bytes */
while (read(fd, &c, 1) > 0) /* 0 signals EOF */
write(STDOUT_FILENO, &c, 1); /* write one byte to the standard output */
/* Release the lock explicitly. */
lock.l_type = F_UNLCK;
if (fcntl(fd, F_SETLK, &lock) < 0)
report_and_exit("explicit unlocking failed...");
close(fd);
return 0;
}
```
相比于锁的 API消费者程序会相对复杂一点儿。特别地消费者程序首先检查文件是否被排斥性地锁住然后才尝试去获得一个共享锁。相关的代码为
```
lock.l_type = F_WRLCK;
...
fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
if (lock.l_type != F_UNLCK)
report_and_exit("file is still write locked...");
```
`fcntl` 调用中的 `F_GETLK` 操作指定检查一个锁,在本例中,上面代码的声明中给了一个 `F_WRLCK` 的排斥锁。假如特指的锁不存在,那么 `fcntl` 调用将会自动地改变锁类型域为 `F_UNLCK` 以此来显示当前的状态。假如文件是排斥性地被锁,那么消费者将会终止。(一个更健壮的程序版本或许应该让消费者*睡*会儿,然后再尝试几次。)
假如当前文件没有被锁那么消费者将尝试获取一个共享read-only`F_RDLCK`)。为了缩短程序,`fcntl` 中的 `F_GETLK` 调用可以丢弃,因为假如其他进程已经保有一个读写锁,`F_RDLCK` 的调用就会失败。回想一下,一个只读锁能够阻止其他进程向文件进行写操作,但允许其他进程对文件进行读取。简而言之,共享锁可以被多个进程所保有。在获取了一个共享锁后,消费者程序将立即从文件中读取字节数据,然后在标准输出中打印这些字节的内容,接着释放锁,关闭文件并终止。
下面的 `%` 为命令行提示符,下面展示的是从相同终端开启这两个程序的输出:
```
% ./producer
Process 29255 has written to data file...
% ./consumer
Now is the winter of our discontent
Made glorious summer by this sun of York
```
在本次的代码示例中,通过 IPC 传输的数据是文本:它们来自莎士比亚的戏剧《理查三世》中的两行台词。然而,共享文件的内容还可以是纷繁复杂的,任意的字节数据(例如一个电影)都可以,这使得文件共享变成了一个非常灵活的 IPC 机制。但它的缺点是文件获取速度较慢,因为文件的获取涉及到读或者写。同往常一样,编程总是伴随着折中。下面的例子将通过共享内存来做 IPC而不是通过共享文件在性能上相应的有极大的提升。
### 共享内存
对于共享内存Linux 系统提供了两类不同的 API传统的 System V API 和更新一点的 POSIX API。在单个应用中这些 API 不能混用。但是POSIX 方式的一个坏处是它的特性仍在发展中并且依赖于安装的内核版本这非常影响代码的可移植性。例如默认情况下POSIX API 用*内存映射文件*来实现共享内存:对于一个共享的内存段,系统为相应的内容维护一个*备份文件*。在 POSIX 规范下共享内存可以被配置为不需要备份文件,但这可能会影响可移植性。我的例子中使用的是带有备份文件的 POSIX API这既结合了内存获取的速度优势又获得了文件存储的持久性。
下面的共享内存例子中包含两个程序,分别名为 `memwriter``memreader`,并使用*信号量*来调整它们对共享内存的获取。在任何时候当共享内存进入一个*写入者*场景时,无论是多进程还是多线程,都有遇到基于内存的竞争条件的风险,所以,需要引入信号量来协调(同步)对共享内存的获取。
`memwriter` 程序应当在它自己所处的终端首先启动,然后 `memreader` 程序才可以在它自己所处的终端启动(在接着的十几秒内)。`memreader` 的输出如下:
```
This is the way the world ends...
```
在每个源程序的最上方注释部分都解释了在编译它们时需要添加的链接参数。
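文章中没有给出这两个程序共用的头文件 `shmem.h`。根据两个程序所引用的常数(备份文件 `/dev/shm/shMemEx`、512 字节的段大小、0644 的访问权限,以及 `memreader` 打印出的那行文本),它大致如下面的草图所示;其中 `SemaphoreName` 的具体取值只是一个假设,任何唯一的非空名称都可以:

```c
/* shmem.h 的一个可能形式(草图,并非原文给出的代码) */
#define BackingFile "/shMemEx"      /* 备份文件将出现在 /dev/shm/shMemEx */
#define ByteSize 512                /* 共享内存段的字节数 */
#define AccessPerms 0644            /* 访问权限 */
#define SemaphoreName "mysemaphore" /* 任意一个唯一的非空名称(此处为假设值) */
#define MemContents "This is the way the world ends...\n" /* 要写入的 ASCII 字节 */
```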
首先让我们复习一下信号量是如何作为一个同步机制工作的。一般的信号量也被叫做一个*计数信号量*,因为带有一个可以增加的值(通常初始化为 0。考虑一家租用自行车的商店在它的库存中有 100 辆自行车,还有一个供职员用于租赁的程序。每当一辆自行车被租出去,信号量就增加 1当一辆自行车被还回来信号量就减 1。在信号量的值为 100 之前都还可以进行租赁业务,但如果等于 100 时,就必须停止业务,直到至少有一辆自行车被还回来,从而信号量减为 99。
*二元信号量*是一个特例它只有两个值0 和 1。在这种情况下信号量的表现为*互斥量*(一个互斥的构造)。下面的共享内存示例将把信号量用作互斥量。当信号量的值为 0 时,只有 `memwriter` 可以获取共享内存,在写操作完成后,这个进程将增加信号量的值,从而允许 `memreader` 来读取共享内存。
#### 示例 3. memwriter 进程的源程序
```c
/** Compilation: gcc -o memwriter memwriter.c -lrt -lpthread **/
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <semaphore.h>
#include <string.h>
#include "shmem.h"
void report_and_exit(const char* msg) {
perror(msg);
exit(-1);
}
int main() {
int fd = shm_open(BackingFile, /* name from smem.h */
O_RDWR | O_CREAT, /* read/write, create if needed */
AccessPerms); /* access permissions (0644) */
if (fd < 0) report_and_exit("Can't open shared mem segment...");
ftruncate(fd, ByteSize); /* get the bytes */
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
ByteSize, /* how many bytes */
PROT_READ | PROT_WRITE, /* access protections */
MAP_SHARED, /* mapping visible to other processes */
fd, /* file descriptor */
0); /* offset: start at 1st byte */
if ((caddr_t) -1 == memptr) report_and_exit("Can't get segment...");
fprintf(stderr, "shared mem address: %p [0..%d]\n", memptr, ByteSize - 1);
fprintf(stderr, "backing file: /dev/shm%s\n", BackingFile );
/* semaphore code to lock the shared mem */
sem_t* semptr = sem_open(SemaphoreName, /* name */
O_CREAT, /* create the semaphore */
AccessPerms, /* protection perms */
0); /* initial value */
if (semptr == (void*) -1) report_and_exit("sem_open");
strcpy(memptr, MemContents); /* copy some ASCII bytes to the segment */
/* increment the semaphore so that memreader can read */
if (sem_post(semptr) < 0) report_and_exit("sem_post");
sleep(12); /* give reader a chance */
/* clean up */
munmap(memptr, ByteSize); /* unmap the storage */
close(fd);
sem_close(semptr);
shm_unlink(BackingFile); /* unlink from the backing file */
return 0;
}
```
下面是 `memwriter``memreader` 程序如何通过共享内存来通信的一个总结:
* 上面展示的 `memwriter` 程序调用 `shm_open` 函数来得到作为系统协调共享内存的备份文件的文件描述符。此时,并没有内存被分配。接下来调用的是令人误解的名为 `ftruncate` 的函数
```c
ftruncate(fd, ByteSize); /* get the bytes */
```
它将分配 `ByteSize` 字节的内存,在该情况下,一般为大小适中的 512 字节。`memwriter` 和 `memreader` 程序都只从共享内存中获取数据,而不是从备份文件。系统将负责共享内存和备份文件之间数据的同步。
* 接着 `memwriter` 调用 `mmap` 函数:
```c
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
ByteSize, /* how many bytes */
PROT_READ | PROT_WRITE, /* access protections */
MAP_SHARED, /* mapping visible to other processes */
fd, /* file descriptor */
0); /* offset: start at 1st byte */
```
来获得共享内存的指针。(`memreader` 也做一次类似的调用。)指针类型 `caddr_t` 以 `c` 开头,它代表 `calloc`,而 `calloc` 是一个动态地把分配的内存初始化为 0 的系统函数。`memwriter` 在后续的*写*操作中使用这个 `memptr`,即通过库函数 `strcpy`(字符串复制)来写入。
* 到现在为止,`memwriter` 已经准备好进行写操作了,但首先它要创建一个信号量来确保共享内存的排斥性。假如 `memwriter` 正在执行写操作而同时 `memreader` 在执行读操作,则有可能出现竞争条件。假如调用 `sem_open` 成功了:
```c
sem_t* semptr = sem_open(SemaphoreName, /* name */
O_CREAT, /* create the semaphore */
AccessPerms, /* protection perms */
0); /* initial value */
```
那么,接着写操作便可以执行。上面的 `SemaphoreName`(任意一个唯一的非空名称)用来在 `memwriter` 和 `memreader` 中识别信号量。初始值 0 给了信号量的创建者(在这个例子中是 `memwriter`)先行执行*写*操作的权利。
* 在写操作完成后,`memwriter` 通过调用 `sem_post` 函数将信号量的值增加到 1
```c
if (sem_post(semptr) < 0) ..
```
增加信号量将释放互斥锁,使得 `memreader` 可以执行它的*读*操作。为了稳妥起见,`memwriter` 还将把这段存储从它自己的地址空间中取消映射:
```c
munmap(memptr, ByteSize); /* unmap the storage *
```
这将使得 `memwriter` 不能进一步地访问共享内存。
#### 示例 4. memreader 进程的源代码
```c
/** Compilation: gcc -o memreader memreader.c -lrt -lpthread **/
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <semaphore.h>
#include <string.h>
#include "shmem.h"
void report_and_exit(const char* msg) {
perror(msg);
exit(-1);
}
int main() {
int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* empty to begin */
if (fd < 0) report_and_exit("Can't get file descriptor...");
/* get a pointer to memory */
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
ByteSize, /* how many bytes */
PROT_READ | PROT_WRITE, /* access protections */
MAP_SHARED, /* mapping visible to other processes */
fd, /* file descriptor */
0); /* offset: start at 1st byte */
if ((caddr_t) -1 == memptr) report_and_exit("Can't access segment...");
/* create a semaphore for mutual exclusion */
sem_t* semptr = sem_open(SemaphoreName, /* name */
O_CREAT, /* create the semaphore */
AccessPerms, /* protection perms */
0); /* initial value */
if (semptr == (void*) -1) report_and_exit("sem_open");
/* use semaphore as a mutex (lock) by waiting for writer to increment it */
if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
int i;
for (i = 0; i < strlen(MemContents); i++)
write(STDOUT_FILENO, memptr + i, 1); /* one byte at a time */
sem_post(semptr);
}
/* cleanup */
munmap(memptr, ByteSize);
close(fd);
sem_close(semptr);
unlink(BackingFile);
return 0;
}
```
`memwriter``memreader` 程序中,共享内存的主要着重点都在 `shm_open``mmap` 函数上:在成功时,第一个调用返回一个备份文件的文件描述符,而第二个调用则使用这个文件描述符从共享内存段中获取一个指针。它们对 `shm_open` 的调用都很相似,除了 `memwriter` 程序创建共享内存,而 `memreader 只获取这个已经创建的内存:
```c
int fd = shm_open(BackingFile, O_RDWR | O_CREAT, AccessPerms); /* memwriter */
int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* memreader */
```
有了文件描述符,接着对 `mmap` 的调用就是类似的了:
```c
caddr_t memptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
```
`mmap` 的第一个参数为 `NULL`,这意味着让系统自己决定在虚拟内存地址的哪个地方分配内存,当然也可以指定一个地址(但很有技巧性)。`MAP_SHARED` 标志着被分配的内存在进程间是共享的,最后一个参数(在这个例子中为 0意味着共享内存的偏移量应该为第一个字节。`size` 参数特别指定了将要分配的字节数目(在这个例子中是 512另外的保护参数`AccessPerms`)暗示着共享内存是可读可写的。
`memwriter` 程序执行成功后,系统将创建并维护备份文件,在我的系统中,该文件为 `/dev/shm/shMemEx`,其中的 `shMemEx` 是我为共享存储命名的(在头文件 `shmem.h` 中给定)。在当前版本的 `memwriter``memreader` 程序中,下面的语句
```c
shm_unlink(BackingFile); /* removes backing file */
```
将会移除备份文件。假如没有 `unlink` 这个语句,则备份文件在程序终止后仍然持久地保存着。
`memreader``memwriter` 一样,在调用 `sem_open` 函数时,通过信号量的名字来获取信号量。但 `memreader` 随后将进入等待状态,直到 `memwriter` 将初始值为 0 的信号量的值增加。
```c
if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
```
一旦等待结束,`memreader` 将从共享内存中读取 ASCII 数据,然后做些清理工作并终止。
共享内存 API 包括显式地同步共享内存段和备份文件。在这次的示例中,这些操作都被省略了,以免文章显得杂乱,好让我们专注于内存共享和信号量的代码。
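如果需要这种显式的同步,相应的调用是 `msync`。下面是一个示意性的片段(并非原文代码),沿用上文 `memwriter` 中的变量名:

```c
/* 示意片段:把对共享内存段的修改同步地刷写到备份文件 */
if (msync(memptr, ByteSize, MS_SYNC) < 0) /* MS_SYNC等待写入完成后才返回 */
  report_and_exit("msync");
```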
即便在信号量代码被移除的情况下,`memwriter` 和 `memreader` 程序很大几率也能够正常执行而不会引入竞争条件:`memwriter` 创建了共享内存段,然后立即向它写入;`memreader` 不能访问共享内存,直到共享内存段被创建好。然而,当一个*写操作*处于混合状态时,最佳实践需要共享内存被同步。信号量 API 足够重要,值得在代码示例中着重强调。
### 总结
上面共享文件和共享内存的例子展示了进程是怎样通过*共享存储*来进行通信的,前者通过文件而后者通过内存块。这两种方法的 API 相对来说都很直接。这两种方法有什么共同的缺点吗?现代的应用经常需要处理流数据,而且是非常大规模的数据流。共享文件或者共享内存的方法都不能很好地处理大规模的流数据。按照类型使用管道会更加合适一些。所以这个系列的第二部分将会介绍管道和消息队列,同样的,我们将使用 C 语言写的代码示例来辅助讲解。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/interprocess-communication-linux-storage
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
[2]: https://en.wikipedia.org/wiki/Inter-process_communication
[3]: http://condor.depaul.edu/mkalin
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html

View File

@ -0,0 +1,347 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10840-1.html)
[#]: subject: (Building a DNS-as-a-service with OpenStack Designate)
[#]: via: (https://opensource.com/article/19/4/getting-started-openstack-designate)
[#]: author: (Amjad Yaseen https://opensource.com/users/ayaseen)
用 OpenStack Designate 构建一个 DNS 即服务DNSaaS
======
> 学习如何安装和配置 Designate这是一个 OpenStack 的多租户 DNS 即服务DNSaaS
![Command line prompt](https://img.linux.net.cn/data/attachment/album/201905/11/110822rjub9wtwtwtmccet.jpg)
[Designate][2] 是一个多租户的 DNS 即服务,它包括一个用于域名和记录管理的 REST API 和集成了 [Neutron][3] 的框架,并支持 Bind9。
DNSaaS 可以提供:
* 一个管理区域和记录的干净利落的 REST API
* 自动生成记录(集成 OpenStack
* 支持多个授权名字服务器
* 可以托管多个项目/组织
![Designate's architecture][4]
这篇文章解释了如何在 CentOS 和 RHEL 上手动安装和配置 Designate 的最新版本,但是同样的配置也可以用在其它发行版上。
### 在 OpenStack 上安装 Designate
在我的 [GitHub 仓库][5]里,我已经放了 Ansible 的 bind 和 Designate 角色的示范设置。
这个设置假定 bind 服务安装在 OpenStack 控制器节点之外(即便你也可以在本地安装 bind。
1、在 OpenStack 控制节点上安装 Designate 和 bind 软件包:
```
# yum install openstack-designate-* bind bind-utils -y
```
2、创建 Designate 数据库和用户:
```
MariaDB [(none)]> CREATE DATABASE designate CHARACTER SET utf8 COLLATE utf8_general_ci;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO \
'designate'@'localhost' IDENTIFIED BY 'rhlab123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'%' \
IDENTIFIED BY 'rhlab123';
```
注意bind 包必须安装在控制节点之外才能使<ruby>远程名字服务控制<rt>Remote Name Daemon Control</rt></ruby>RNDC功能正常。
### 配置 bindDNS 服务器)
1、生成 RNDC 文件:
```
rndc-confgen -a -k designate -c /etc/rndc.key -r /dev/urandom
cat <<EOF> /etc/rndc.conf
include "/etc/rndc.key";
options {
default-key "designate";
default-server {{ DNS_SERVER_IP }};
default-port 953;
};
EOF
```
2、将下列配置添加到 `named.conf`
```
include "/etc/rndc.key";
controls {
inet {{ DNS_SERVER_IP }} allow { localhost;{{ CONTROLLER_SERVER_IP }}; } keys { "designate"; };
};
```
`option` 节中,添加:
```
options {
...
allow-new-zones yes;
request-ixfr no;
listen-on port 53 { any; };
recursion no;
allow-query { 127.0.0.1; {{ CONTROLLER_SERVER_IP }}; };
};
```
添加正确的权限:
```
chown named:named /etc/rndc.key
chown named:named /etc/rndc.conf
chmod 600 /etc/rndc.key
chown -v root:named /etc/named.conf
chmod g+w /var/named
# systemctl restart named
# setsebool named_write_master_zones 1
```
3、把 `rndc.key``rndc.conf` 推入 OpenStack 控制节点:
```
# scp -r /etc/rndc* {{ CONTROLLER_SERVER_IP }}:/etc/
```
### 创建 OpenStack Designate 服务和端点
输入:
```
# openstack user create --domain default --password-prompt designate
# openstack role add --project services --user designate admin
# openstack service create --name designate --description "DNS" dns
# openstack endpoint create --region RegionOne dns public http://{{ CONTROLLER_SERVER_IP }}:9001/
# openstack endpoint create --region RegionOne dns internal http://{{ CONTROLLER_SERVER_IP }}:9001/
# openstack endpoint create --region RegionOne dns admin http://{{ CONTROLLER_SERVER_IP }}:9001/
```
### 配置 Designate 服务
1、编辑 `/etc/designate/designate.conf`
`[service:api]` 节配置 `auth_strategy`
```
[service:api]
listen = 0.0.0.0:9001
auth_strategy = keystone
api_base_uri = http://{{ CONTROLLER_SERVER_IP }}:9001/
enable_api_v2 = True
enabled_extensions_v2 = quotas, reports
```
`[keystone_authtoken]` 节配置下列选项:
```
[keystone_authtoken]
auth_type = password
username = designate
password = rhlab123
project_name = service
project_domain_name = Default
user_domain_name = Default
www_authenticate_uri = http://{{ CONTROLLER_SERVER_IP }}:5000/
auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000/
```
`[service:worker]` 节,启用 worker 模型:
```
enabled = True
notify = True
```
`[storage:sqlalchemy]` 节,配置数据库访问:
```
[storage:sqlalchemy]
connection = mysql+pymysql://designate:rhlab123@{{ CONTROLLER_SERVER_IP }}/designate
```
填充 Designate 数据库:
```
# su -s /bin/sh -c "designate-manage database sync" designate
```
2、 创建 Designate 的 `pools.yaml` 文件(包含 target 和 bind 细节):
编辑 `/etc/designate/pools.yaml`
```
- name: default
# The name is immutable. There will be no option to change the name after
# creation and the only way will to change it will be to delete it
# (and all zones associated with it) and recreate it.
description: Default Pool
attributes: {}
# List out the NS records for zones hosted within this pool
# This should be a record that is created outside of designate, that
# points to the public IP of the controller node.
ns_records:
- hostname: {{Controller_FQDN}}. # This is mDNS
priority: 1
# List out the nameservers for this pool. These are the actual BIND servers.
# We use these to verify changes have propagated to all nameservers.
nameservers:
- host: {{ DNS_SERVER_IP }}
port: 53
# List out the targets for this pool. For BIND there will be one
# entry for each BIND server, as we have to run rndc command on each server
targets:
- type: bind9
description: BIND9 Server 1
# List out the designate-mdns servers from which BIND servers should
# request zone transfers (AXFRs) from.
# This should be the IP of the controller node.
# If you have multiple controllers you can add multiple masters
# by running designate-mdns on them, and adding them here.
masters:
- host: {{ CONTROLLER_SERVER_IP }}
port: 5354
# BIND Configuration options
options:
host: {{ DNS_SERVER_IP }}
port: 53
rndc_host: {{ DNS_SERVER_IP }}
rndc_port: 953
rndc_key_file: /etc/rndc.key
rndc_config_file: /etc/rndc.conf
```
填充 Designate 池:
```
su -s /bin/sh -c "designate-manage pool update" designate
```
3、启动 Designate 中心和 API 服务:
```
systemctl enable --now designate-central designate-api
```
4、验证 Designate 服务运行:
```
# openstack dns service list
+--------------+--------+-------+--------------+
| service_name | status | stats | capabilities |
+--------------+--------+-------+--------------+
| central | UP | - | - |
| api | UP | - | - |
| mdns | UP | - | - |
| worker | UP | - | - |
| producer | UP | - | - |
+--------------+--------+-------+--------------+
```
### 用外部 DNS 配置 OpenStack Neutron
1、为 Designate 服务配置 iptables
```
# iptables -I INPUT -p tcp -m multiport --dports 9001 -m comment --comment "designate incoming" -j ACCEPT
# iptables -I INPUT -p tcp -m multiport --dports 5354 -m comment --comment "Designate mdns incoming" -j ACCEPT
# iptables -I INPUT -p tcp -m multiport --dports 53 -m comment --comment "bind incoming" -j ACCEPT
# iptables -I INPUT -p udp -m multiport --dports 53 -m comment --comment "bind/powerdns incoming" -j ACCEPT
# iptables -I INPUT -p tcp -m multiport --dports 953 -m comment --comment "rndc incoming - bind only" -j ACCEPT
# service iptables save; service iptables restart
# setsebool named_write_master_zones 1
```
2、 编辑 `/etc/neutron/neutron.conf``[default]` 节:
```
external_dns_driver = designate
```
3、 在 `/etc/neutron/neutron.conf` 中添加 `[designate]` 节:
```
[designate]
url = http://{{ CONTROLLER_SERVER_IP }}:9001/v2 ## This end point of designate
auth_type = password
auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000
username = designate
password = rhlab123
project_name = services
project_domain_name = Default
user_domain_name = Default
allow_reverse_dns_lookup = True
ipv4_ptr_zone_prefix_size = 24
ipv6_ptr_zone_prefix_size = 116
```
4、编辑 `neutron.conf``dns_domain`
```
dns_domain = rhlab.dev.
```
重启:
```
# systemctl restart neutron-*
```
5、在 `/etc/neutron/plugins/ml2/ml2_conf.ini` 中的<ruby>模块化二层<rt>Modular Layer 2</rt></ruby>ML2配置中添加 `dns`
```
extension_drivers=port_security,qos,dns
```
6、在 Designate 中添加区域:
```
# openstack zone create email=admin@rhlab.dev rhlab.dev.
```
`rhlab.dev` 区域中添加记录:
```
# openstack recordset create --record '192.168.1.230' --type A rhlab.dev. Test
```
Designate 现在就安装和配置好了。
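作为验证,你可以直接向 bind 服务器查询这条刚创建的记录(下面的命令只是一个示意,其中的 IP 和记录名沿用上文假设的环境):

```
# dig @{{ DNS_SERVER_IP }} Test.rhlab.dev. A +short
192.168.1.230
```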
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/getting-started-openstack-designate
作者:[Amjad Yaseen][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ayaseen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://docs.openstack.org/designate/latest/
[3]: /article/19/3/openstack-neutron
[4]: https://opensource.com/sites/default/files/uploads/openstack_designate_architecture.png (Designate's architecture)
[5]: https://github.com/ayaseen/designate

View File

@ -1,46 +1,49 @@
[#]: collector: (lujun9972)
[#]: translator: (FSSlc)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10845-1.html)
[#]: subject: (Inter-process communication in Linux: Using pipes and message queues)
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-channels)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
Linux 下的进程间通信:使用管道和消息队列
======
学习在 Linux 中进程是如何与其他进程进行同步的。
> 学习在 Linux 中进程是如何与其他进程进行同步的。
![Chat bubbles][1]
本篇是 Linux 下[进程间通信][2]IPC系列的第二篇文章。[第一篇文章][3] 聚焦于通过共享文件和共享内存段这样的共享存储来进行 IPC。这篇文件的重点将转向管道它是连接需要通信的进程之间的通道。管道拥有一个 _写端_ 用于写入字节数据,还有一个 _读端_ 用于按照先入先出的顺序读入这些字节数据。而这些字节数据可能代表任何东西:数字、数字电影等等。
本篇是 Linux 下[进程间通信][2]IPC系列的第二篇文章。[第一篇文章][3] 聚焦于通过共享文件和共享内存段这样的共享存储来进行 IPC。这篇文件的重点将转向管道它是连接需要通信的进程之间的通道。管道拥有一个*写端*用于写入字节数据,还有一个*读端*用于按照先入先出的顺序读入这些字节数据。而这些字节数据可能代表任何东西:数字、员工记录、数字电影等等。
管道有两种类型,命名管道和无名管道,都可以交互式的在命令行或程序中使用它们;相关的例子在下面展示。这篇文章也将介绍内存队列,尽管它们有些过时了,但它们不应该受这样的待遇。
在本系列的第一篇文章中的示例代码承认了在 IPC 中可能受到竞争条件(不管是基于文件的还是基于内存的)的威胁。自然地我们也会考虑基于管道的 IPC 的安全并发问题,这个也将在本文中提及。针对管道和内存队列的例子将会使用 POSIX 推荐使用的 API POSIX 的一个核心目标就是线程安全。
在本系列的第一篇文章中的示例代码承认了在 IPC 中可能受到竞争条件(不管是基于文件的还是基于内存的)的威胁。自然地我们也会考虑基于管道的 IPC 的安全并发问题,这个也将在本文中提及。针对管道和内存队列的例子将会使用 POSIX 推荐使用的 APIPOSIX 的一个核心目标就是线程安全。
考虑查看 [**mq_open** 函数的 man 页][4],这个函数属于内存队列的 API。这个 man 页中有一个章节是有关 [特性][5] 的小表格:
请查看一些 [mq_open 函数的 man 页][4],这个函数属于内存队列的 API。这个 man 页中有关 [特性][5] 的章节带有一个小表格:
Interface | Attribute | Value
接口 | 特性 | 值
---|---|---
mq_open() | Thread safety | MT-Safe
`mq_open()` | 线程安全 | MT-Safe
上面的 **MT-Safe****MT** 指的是 multi-threaded多线程意味着 **mq_open** 函数是线程安全的,进而暗示是进程安全的:一个进程的执行和它的一个线程执行的过程类似,假如竞争条件不会发生在处于 _相同_ 进程的线程中,那么这样的条件也不会发生在处于不同进程的线程中。**MT-Safe** 特性保证了调用 **mq_open** 时不会出现竞争条件。一般来说,基于通道的 IPC 是并发安全的,尽管在下面例子中会出现一个有关警告的注意事项。
上面的 MT-SafeMT 指的是<ruby>多线程<rt>multi-threaded</rt></ruby>)意味着 `mq_open` 函数是线程安全的,进而暗示是进程安全的:一个进程的执行和它的一个线程执行的过程类似,假如竞争条件不会发生在处于*相同*进程的线程中那么这样的条件也不会发生在处于不同进程的线程中。MT-Safe 特性保证了调用 `mq_open` 时不会出现竞争条件。一般来说,基于通道的 IPC 是并发安全的,尽管在下面例子中会出现一个有关警告的注意事项。
### 无名管道
首先让我们通过一个特意构造的命令行例子来展示无名管道是如何工作的。在所有的现代系统中,符号 **|** 在命令行中都代表一个无名管道。假设我们的命令行提示符为 **%**,接下来考虑下面的命令:
首先让我们通过一个特意构造的命令行例子来展示无名管道是如何工作的。在所有的现代系统中,符号 `|` 在命令行中都代表一个无名管道。假设我们的命令行提示符为 `%`,接下来考虑下面的命令:
```shell
% sleep 5 | echo "Hello, world!" ## writer to the left of |, reader to the right
## 写入方在 | 左边,读取方在右边
% sleep 5 | echo "Hello, world!"
```
_sleep_ 和 _echo_ 程序以不同的进程执行,无名管道允许它们进行通信。但是上面的例子被特意设计为没有通信发生。问候语 _Hello, world!_ 出现在屏幕中,然后过了 5 秒后,命令行返回,暗示 _sleep__echo_ 进程都已经结束了。这期间发生了什么呢?
`sleep``echo` 程序以不同的进程执行,无名管道允许它们进行通信。但是上面的例子被特意设计为没有通信发生。问候语 “Hello, world!” 出现在屏幕中,然后过了 5 秒后,命令行返回,暗示 `sleep``echo` 进程都已经结束了。这期间发生了什么呢?
在命令行中的竖线 **|** 的语法中左边的进程_sleep_是写入方右边的进程_echo_为读取方。默认情况下读入方将会堵塞直到从通道中能够读取字节数据而写入方在写完它的字节数据后将发送 流已终止 的标志。(即便写入方永久地停止了,一个流已终止的标志还是会发给读取方。)无名管道将保持到写入方和读取方都停止的那个时刻。
在命令行中的竖线 `|` 的语法中,左边的进程(`sleep`)是写入方,右边的进程(`echo`)为读取方。默认情况下,读取方将会阻塞,直到从通道中能够读取到字节数据,而写入方在写完它的字节数据后,将发送 <ruby>流已终止<rt>end-of-stream</rt></ruby>的标志。(即便写入方过早终止了,一个流已终止的标志还是会发给读取方。)无名管道将保持到写入方和读取方都停止的那个时刻。
在上面的例子中,_sleep_ 进程并没有向通道写入任何的字节数据,但在 5 秒后就停止了这时将向通道发送一个流已终止的标志。与此同时_echo_ 进程立即向标准输出(屏幕)写入问候语,因为这个进程并不从通道中读入任何字节,所以它并没有等待。一旦 _sleep__echo_ 进程都终止了,不会再用作通信的无名管道将会消失然后返回命令行提示符。
在上面的例子中,`sleep` 进程并没有向通道写入任何的字节数据,但在 5 秒后就终止了,这时将向通道发送一个流已终止的标志。与此同时,`echo` 进程立即向标准输出(屏幕)写入问候语,因为这个进程并不从通道中读入任何字节,所以它并没有等待。一旦 `sleep``echo` 进程都终止了,不会再用作通信的无名管道将会消失然后返回命令行提示符。
下面这个更加实用示例将使用两个无名管道。我们假定文件 _test.dat_ 的内容如下:
下面这个更加实用示例将使用两个无名管道。我们假定文件 `test.dat` 的内容如下:
```
this
@ -52,13 +55,13 @@ world
ends
```
下面的命令
下面的命令
```
% cat test.dat | sort | uniq
```
会将 _cat_concatenate 的缩写)进程的输出通过管道传给 _sort_ 进程以生成排序后的输出,然后将排序后的输出通过管道传给 _uniq_ 进程以消除重复的记录(在本例中,会将两次出现的 **the** 缩减为一个):
会将 `cat`<ruby>连接<rt>concatenate</rt></ruby>的缩写)进程的输出通过管道传给 `sort` 进程以生成排序后的输出,然后将排序后的输出通过管道传给 `uniq` 进程以消除重复的记录(在本例中,会将两次出现的 “the” 缩减为一个):
```
ends
@ -73,7 +76,6 @@ world
#### 示例 1. 两个进程通过一个无名管道来进行通信
```c
#include <sys/wait.h> /* wait */
#include <stdio.h>
@ -120,21 +122,23 @@ int main() {
}
```
上面名为 _pipeUN_ 的程序使用系统函数 **fork** 来创建一个进程。尽管这个程序只有一个单一的源文件,在它正确执行的情况下将会发生多进程的情况。下面的内容是对库函数 **fork** 如何工作的一个简要回顾:
上面名为 `pipeUN` 的程序使用系统函数 `fork` 来创建一个进程。尽管这个程序只有一个单一的源文件,在它正确执行的情况下将会发生多进程的情况。
* **fork** 函数由 _父_ 进程调用,在失败时返回 **-1** 给父进程。在 _pipeUN_ 这个例子中,相应的调用是
> 下面的内容是对库函数 `fork` 如何工作的一个简要回顾:
```c
> * `fork` 函数由*父*进程调用,在失败时返回 `-1` 给父进程。在 `pipeUN` 这个例子中,相应的调用是:
> ```c
pid_t cpid = fork(); /* called in parent */
```
函数调用后的返回值也被保存下来了。在这个例子中,保存在整数类型 **pid_t** 的变量 **cpid** 中。(每个进程有它自己的 _进程 ID_一个非负的整数,用来标记进程)。复刻一个新的进程可能会因为多种原因而失败。最终它们将会被包括进一个完整的 _进程表_,这个结构由系统维持,以此来追踪进程状态。明确地说,僵尸进程假如没有被处理掉,将可能引起一个进程表被填满。
* 假如 **fork** 调用成功,则它将创建一个新的子进程,向父进程返回一个值,向子进程返回另外的一个值。在调用 **fork** 后父进程和子进程都将执行相同的代码。(子进程继承了到此为止父进程中声明的所有变量的拷贝),特别地,一次成功的 **fork** 调用将返回如下的东西:
* 向子进程返回 0
* 向父进程返回子进程的进程 ID
* 在依次成功的 **fork** 调用后,一个 _if/else_ 或等价的结构将会被用来隔离针对父进程和子进程的代码。在这个例子中,相应的声明为:
> 函数调用后的返回值也被保存下来了。在这个例子中,保存在整数类型 `pid_t` 的变量 `cpid` 中。(每个进程有它自己的*进程 ID*,这是一个非负的整数,用来标记进程)。复刻一个新的进程可能会因为多种原因而失败,包括*进程表*满了的原因,这个结构由系统维持,以此来追踪进程状态。明确地说,僵尸进程假如没有被处理掉,将可能引起进程表被填满的错误
> * 假如 `fork` 调用成功,则它将创建一个新的子进程,向父进程返回一个值,向子进程返回另外的一个值。在调用 `fork` 后父进程和子进程都将执行相同的代码。(子进程继承了到此为止父进程中声明的所有变量的拷贝),特别地,一次成功的 `fork` 调用将返回如下的东西:
> * 向子进程返回 `0`
> * 向父进程返回子进程的进程 ID
> * 在一次成功的 `fork` 调用后,一个 `if`/`else` 或等价的结构将会被用来隔离针对父进程和子进程的代码。在这个例子中,相应的声明为:
```c
> ```c
if (0 == cpid) { /*** child ***/
...
}
@ -143,25 +147,25 @@ else { /*** parent ***/
}
```
假如成功地复刻出了一个子进程,_pipeUN_ 程序将像下面这样去执行。存在一个整数的数列
假如成功地复刻出了一个子进程,`pipeUN` 程序将像下面这样继续执行。它用一个整型数组:
```c
int pipeFDs[2]; /* two file descriptors */
```
来保存两个文件描述符,一个用来向管道中写入,另一个从管道中写入。(数组元素 **pipeFDs[0]** 是读端的文件描述符,元素 **pipeFDs[1]** 是写端的文件描述符。)在调用 **fork** 之前,对系统 **pipe** 函数的成功调用,将立刻使得这个数组获得两个文件描述符:
来保存两个文件描述符,一个用来向管道中写入,另一个用来从管道中读取。(数组元素 `pipeFDs[0]` 是读端的文件描述符,元素 `pipeFDs[1]` 是写端的文件描述符。)在调用 `fork` 之前,对系统 `pipe` 函数的成功调用,将立刻使得这个数组获得两个文件描述符:
```c
if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
```
父进程和子进程现在都有了文件描述符的副本。但 _分离关注点_ 模式意味着每个进程恰好只需要一个描述符。在这个例子中,父进程负责写入,而子进程负责读取,尽管这样的角色分配可以反过来。在 _if_ 子句中的第一个语句将用于关闭管道的读端:
父进程和子进程现在都有了文件描述符的副本。但*分离关注点*模式意味着每个进程恰好只需要一个描述符。在这个例子中,父进程负责写入,而子进程负责读取,尽管这样的角色分配也可以反过来。在 `if` 子句(即子进程)中的第一个语句将用于关闭管道的写端:
```c
close(pipeFDs[WriteEnd]); /* called in child code */
```
在父进程中的 _else_ 子句将会关闭管道的读端:
在父进程中的 `else` 子句将会关闭管道的读端:
```c
close(pipeFDs[ReadEnd]); /* called in parent code */
@ -169,23 +173,23 @@ close(pipeFDs[ReadEnd]); /* called in parent code */
然后父进程将向无名管道中写入某些字节数据ASCII 代码),子进程读取这些数据,然后向标准输出中回放它们。
在这个程序中还需要澄清的一点是在父进程代码中的 **wait** 函数。一旦被创建后,子进程很大程度上独立于它的父进程,正如简短的 _pipeUN_ 程序所展示的那样。子进程可以执行任意的代码,而它们可能与父进程完全没有关系。但是,假如当子进程终止时,系统将会通过一个信号来通知父进程。
在这个程序中还需要澄清的一点是在父进程代码中的 `wait` 函数。一旦被创建后,子进程很大程度上独立于它的父进程,正如简短的 `pipeUN` 程序所展示的那样。子进程可以执行任意的代码,而它们可能与父进程完全没有关系。但是,假如当子进程终止时,系统将会通过一个信号来通知父进程。
要是父进程在子进程之前终止又该如何呢?在这种情形下,除非采取了预防措施,子进程将会变成在进程表中的一个 _僵尸_ 进程。预防措施有两大类型:第一种是让父进程去通知系统,告诉系统它对子进程的终止没有任何兴趣:
要是父进程在子进程之前终止又该如何呢?在这种情形下,除非采取了预防措施,子进程将会变成在进程表中的一个*僵尸*进程。预防措施有两大类型:第一种是让父进程去通知系统,告诉系统它对子进程的终止没有任何兴趣:
```c
signal(SIGCHLD, SIG_IGN); /* in parent: ignore notification */
```
第二种方法是在子进程终止时,让父进程执行一个 **wait**。这样就确保了父进程可以独立于子进程而存在。在 _pipeUN_ 程序中使用了第二种方法,其中父进程的代码使用的是下面的调用:
第二种方法是在子进程终止时,让父进程执行一个 `wait`。这样就确保了父进程可以独立于子进程而存在。在 `pipeUN` 程序中使用了第二种方法,其中父进程的代码使用的是下面的调用:
```c
wait(NULL); /* called in parent */
```
这个对 **wait** 的调用意味着 _一直等待直到任意一个子进程的终止发生_,因此在 _pipeUN_ 程序中,只有一个子进程。(其中的 **NULL** 参数可以被替换为一个保存有子程序退出状态的整数变量的地址。)对于更细颗粒度的控制,还可以使用更灵活的 **waitpid** 函数,例如特别指定多个子进程中的某一个。
这个对 `wait` 的调用意味着*一直等待直到任意一个子进程的终止发生*,因此在 `pipeUN` 程序中,只有一个子进程。(其中的 `NULL` 参数可以被替换为一个保存有子程序退出状态的整数变量的地址。)对于更细粒度的控制,还可以使用更灵活的 `waitpid` 函数,例如特别指定多个子进程中的某一个。
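作为补充,下面是一个最小的 `waitpid` 演示草图(并非文章原有代码),它只等待指定的那一个子进程,并取回其退出状态:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main() {
  pid_t cpid = fork();
  if (cpid < 0) { perror("fork"); exit(-1); }
  if (0 == cpid) _exit(7);                /* 子进程:立即以状态码 7 退出 */

  int status;
  pid_t done = waitpid(cpid, &status, 0); /* 只等待进程 ID 为 cpid 的那个子进程 */
  if (done == cpid && WIFEXITED(status))
    printf("child %d exited with status %d\n", (int) done, WEXITSTATUS(status));
  return 0;
}
```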
_pipeUN_ 将会采取另一个预防措施。当父进程结束了等待,父进程将会调用常规的 **exit** 函数去退出。对应的,子进程将会调用 **_exit** 变种来退出,这类变种将快速跟踪终止相关的通知。在效果上,子进程会告诉系统立刻去通知父进程它的这个子进程已经终止了。
`pipeUN` 将会采取另一个预防措施。当父进程结束了等待,父进程将会调用常规的 `exit` 函数去退出。对应的,子进程将会调用 `_exit` 变种来退出,这类变种将快速跟踪终止相关的通知。在效果上,子进程会告诉系统立刻去通知父进程它的这个子进程已经终止了。
假如两个进程向相同的无名管道中写入内容,字节数据会交错吗?例如,假如进程 P1 向管道写入内容:
@ -205,42 +209,42 @@ baz baz
baz foo baz bar
```
POSIX 标准确保了写不是交错的,使得没有写操作能够超过 **PIPE_BUF** 的范围。在 Linux 系统中, **PIPE_BUF** 的大小是 4096 字节。对于管道我更喜欢只有一个写方和一个读方,从而绕过这个问题。
只要没有写入超过 `PIPE_BUF` 字节POSIX 标准就能确保写入不会交错。在 Linux 系统中, `PIPE_BUF` 的大小是 4096 字节。对于管道我更喜欢只有一个写方和一个读方,从而绕过这个问题。
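你可以用下面这个小程序(并非文章原有代码)在自己的系统上打印这个常数:

```c
#include <limits.h> /* PIPE_BUF 在这里定义 */
#include <stdio.h>

int main() {
  printf("PIPE_BUF = %d\n", PIPE_BUF); /* 在 Linux 上通常打印 4096 */
  return 0;
}
```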
## 命名管道
### 命名管道
无名管道没有备份文件:系统将维持一个内存缓存来将字节数据从写方传给读方。一旦写方和读方终止,这个缓存将会被回收,进而无名管道消失。相反的,命名管道有备份文件和一个不同的 API。
下面让我们通过另一个命令行示例来知晓命名管道的要点。下面是具体的步骤:
下面让我们通过另一个命令行示例来了解命名管道的要点。下面是具体的步骤:
* 开启两个终端。这两个终端的工作目录应该相同。
* 在其中一个终端中,键入下面的两个命令(命令行提示符仍然是 **%**,我的注释以 **##** 打头。):
* 开启两个终端。这两个终端的工作目录应该相同。
* 在其中一个终端中,键入下面的两个命令(命令行提示符仍然是 `%`,我的注释以 `##` 打头。):
```shell
% mkfifo tester ## creates a backing file named tester
% cat tester ## type the pipe's contents to stdout
```shell
% mkfifo tester ## 创建一个备份文件,名为 tester
% cat tester ## 将管道的内容输出到 stdout
```
在最开始,没有任何东西会出现在终端中,因为到现在为止没有命名管道中写入任何东西。
* 在第二个终端中输入下面的命令:
在最开始,没有任何东西会出现在终端中,因为到现在为止没有命名管道中写入任何东西。
* 在第二个终端中输入下面的命令:
```shell
```shell
% cat > tester ## redirect keyboard input to the pipe
hello, world! ## then hit Return key
bye, bye ## ditto
<Control-C> ## terminate session with a Control-C
```
无论在这个终端中输入什么,它都会在另一个终端中显示出来。一旦键入 **Ctrl+C**,就会回到正常的命令行提示符,因为管道已经被关闭了。
* 通过移除实现命名管道的文件来进行清理:
无论在这个终端中输入什么,它都会在另一个终端中显示出来。一旦键入 `Ctrl+C`,就会回到正常的命令行提示符,因为管道已经被关闭了。
* 通过移除实现命名管道的文件来进行清理:
```shell
```shell
% unlink tester
```
正如 _mkfifo_ 程序的名字所暗示的那样,一个命名管道也被叫做一个 FIFO因为第一个字节先进然后第一个字节就先出其他的类似。存在一个名为 **mkfifo** 的库函数,用它可以在程序中创建一个命名管道,它将在下一个示例中被用到,该示例由两个进程组成:一个向命名管道写入,而另一个从该管道读取。
正如 `mkfifo` 程序的名字所暗示的那样,命名管道也被叫做 FIFO因为第一个进入的字节就会第一个出其他的类似。有一个名为 `mkfifo` 的库函数,用它可以在程序中创建一个命名管道,它将在下一个示例中被用到,该示例由两个进程组成:一个向命名管道写入,而另一个从该管道读取。
#### 示例 2. _fifoWriter_ 程序
#### 示例 2. fifoWriter 程序
```c
#include <sys/types.h>
@ -283,29 +287,29 @@ int main() {
}
```
上面的 _fifoWriter_ 程序可以被总结为如下:
上面的 `fifoWriter` 程序可以被总结为如下:
* 首先程序创建了一个命名管道用来写入数据:
* 首先程序创建了一个命名管道用来写入数据:
```c
```c
mkfifo(pipeName, 0666); /* read/write perms for user/group/others */
int fd = open(pipeName, O_CREAT | O_WRONLY);
```
其中的 **pipeName** 是传递给 **mkfifo** 作为它的第一个参数的备份文件的名字。接着命名管道通过我们熟悉的 **open** 函数调用被打开,而这个函数将会返回一个文件描述符。
* 在实现层面上_fifoWriter_ 不会一次性将所有的数据都写入,而是写入一个块,然后休息随机数目的微秒时间,接着再循环往复。总的来说,有 768000 个 4 比特的整数值被写入到命名管道中。
* 在关闭命名管道后_fifoWriter_ 也将使用 unlink 去掉关联
其中的 `pipeName` 是备份文件的名字,传递给 `mkfifo` 作为它的第一个参数。接着命名管道通过我们熟悉的 `open` 函数调用被打开,而这个函数将会返回一个文件描述符。
* 在实现层面上,`fifoWriter` 不会一次性将所有的数据都写入,而是写入一个块,然后休息随机数目的微秒时间,接着再循环往复。总的来说,有 768000 个 4 字节整数值被写入到命名管道中。
* 在关闭命名管道后,`fifoWriter` 也将使用 `unlink` 取消对该文件的连接
```c
```c
close(fd); /* close pipe: generates end-of-stream marker */
unlink(pipeName); /* unlink from the implementing file */
```
一旦连接到管道的每个进程都执行了 unlink 操作后,系统将回收这些备份文件。在这个例子中,只有两个这样的进程 _fifoWriter__fifoReader_,它们都做了 _unlink_ 操作。
一旦连接到管道的每个进程都执行了 `unlink` 操作后,系统将回收这些备份文件。在这个例子中,只有两个这样的进程 `fifoWriter``fifoReader`,它们都做了 `unlink` 操作。
这个两个程序应该在位于相同工作目录下的不同终端中被执行。但是 _fifoWriter_ 应该在 _fifoReader_ 之前被启动,因为需要 _fifoWriter_ 去创建管道。然后 _fifoReader_ 才能够获取到刚被创建的命名管道。
这两个程序应该在不同终端的相同工作目录中执行。但是 `fifoWriter` 应该在 `fifoReader` 之前被启动,因为需要 `fifoWriter` 去创建管道。然后 `fifoReader` 才能够获取到刚被创建的命名管道。
#### 示例 3. _fifoReader_ 程序
#### 示例 3. fifoReader 程序
```c
#include <stdio.h>
@ -352,28 +356,28 @@ int main() {
}
```
上面的 _fifoReader_ 的内容可以总结为如下:
上面的 `fifoReader` 的内容可以总结为如下:
* 因为 _fifoWriter_ 已经创建了命名管道,所以 _fifoReader_ 只需要利用标准的 **open** 调用来通过备份文件来获取到管道中的内容:
* 因为 `fifoWriter` 已经创建了命名管道,所以 `fifoReader` 只需要利用标准的 `open` 调用来通过备份文件来获取到管道中的内容:
```c
```c
const char* file = "./fifoChannel";
int fd = open(file, O_RDONLY);
```
这个文件的打开是只读的。
* 然后这个程序进入一个潜在的无限循环,在每次循环时,尝试读取 4 比特的块。**read** 调用:
这个文件的是只读打开的。
* 然后这个程序进入一个潜在的无限循环,在每次循环时,尝试读取 4 字节的块。`read` 调用:
```c
```c
ssize_t count = read(fd, &next, sizeof(int));
```
返回 0 来暗示流的结束。在这种情况下_fifoReader_ 跳出循环,关闭命名管道,并在终止前 unlink 备份文件。
* 在读入 4 比特整数后_fifoReader_ 检查这个数是否为质数。这个操作代表了一个生产级别的读取器可能在接收到的字节数据上执行的逻辑操作。在示例运行中,接收了 768000 个整数中的 37682 个质数。
返回 0 来暗示该流的结束。在这种情况下,`fifoReader` 跳出循环,关闭命名管道,并在终止前 `unlink` 备份文件。
* 在读入 4 字节整数后,`fifoReader` 检查这个数是否为质数。这个操作代表了一个生产级别的读取器可能在接收到的字节数据上执行的逻辑操作。在示例运行中,在接收到的 768000 个整数中有 37682 个质数。
在重复的运行示例时, _fifoReader_ 将成功地读取 _fifoWriter_ 写入的所有字节。这不是很让人惊讶的。这两个进程在相同的机器上执行,从而可以不用考虑网络相关的问题。命名管道是一个可信且高效的 IPC 机制,因而被广泛使用。
重复运行示例, `fifoReader` 将成功地读取 `fifoWriter` 写入的所有字节。这不是很让人惊讶的。这两个进程在相同的机器上执行,从而可以不用考虑网络相关的问题。命名管道是一个可信且高效的 IPC 机制,因而被广泛使用。
下面是这两个程序的输出,在不同的终端中启动,但处于相同的工作目录:
下面是这两个程序的输出,它们在不同的终端中启动,但处于相同的工作目录:
```shell
% ./fifoWriter
@ -385,13 +389,14 @@ Received ints: 768000, primes: 37682
### 消息队列
管道有着严格的先入先出行为:第一个被写入的字节将会第一个被读,第二个写入的字节将第二个被读,以此类推。消息队列可以做出相同的表现,但它又足够灵活,可以使得字节块不以先入先出的次序来接收。
管道有着严格的先入先出行为:第一个被写入的字节将会第一个被读,第二个写入的字节将第二个被读,以此类推。消息队列可以做出相同的表现,但它又足够灵活,可以使得字节块可以不以先入先出的次序来接收。
正如它的名字所建议的那样,消息队列是一系列的消息,每个消息包含两部分:
* 荷载,一个字节序列(在 C 中是 **char**
* 一个类型,以一个正整数值的形式给定,类型用来分类消息,为了更灵活的回收
正如它的名字所提示的那样,消息队列是一系列的消息,每个消息包含两部分:
考虑下面对一个消息队列的描述,每个消息被一个整数类型标记:
* 荷载,一个字节序列(在 C 中是 char
* 类型,以一个正整数值的形式给定,类型用来对消息分类,以便更灵活地读取
看一下下面对一个消息队列的描述,每个消息由一个整数类型标记:
```
+-+ +-+ +-+ +-+
@ -399,11 +404,11 @@ sender--->|3|--->|2|--->|2|--->|1|--->receiver
+-+ +-+ +-+ +-+
```
在上面展示的 4 个消息中,标记为 1 的是开头,即最接近接收端,然后另个标记为 2 的消息,最后接着一个标记为 3 的消息。假如按照严格的 FIFO 行为执行,消息将会以 1-2-2-3 这样的次序被接收。但是消息队列允许其他收次序。例如,消息可以被接收方以 3-2-1-2 的次序接收。
在上面展示的 4 个消息中,标记为 1 的消息在开头,即最接近接收端,随后是两个标记为 2 的消息,最后是一个标记为 3 的消息。假如按照严格的 FIFO 行为执行,消息将会以 1-2-2-3 这样的次序被接收。但是消息队列允许其他的接收次序。例如,消息可以被接收方以 3-2-1-2 的次序接收。
_mqueue_ 示例包含两个程序_sender_ 将向消息队列中写入数据,而 _receiver_ 将从这个队列中读取数据。这两个程序都包含下面展示的头文件 _queue.h_
`mqueue` 示例包含两个程序,`sender` 将向消息队列中写入数据,而 `receiver` 将从这个队列中读取数据。这两个程序都包含的头文件 `queue.h` 如下所示
#### 示例 4. 头文件 _queue.h_
#### 示例 4. 头文件 queue.h
```c
#define ProjectId 123
@ -417,16 +422,16 @@ typedef struct {
} queuedMessage;
```
上面的头文件定义了一个名为 **queuedMessage** 的结构类型,它带有 **payload**(字节数组)和 **type**(整数)这两个域。该文件也定义了一些符号常数(使用 **#define** 语句)。前两个常数被用来生成一个 key而这个 key 反过来被用来获取一个消息队列的 ID。**ProjectId** 可以是任何正整数值,而 **PathName** 必须是一个存在的,可访问的文件,在这个示例中,指的是文件 _queue.h_。在 _sender__receiver_ 中,它们都有的设定语句为:
上面的头文件定义了一个名为 `queuedMessage` 的结构类型,它带有 `payload`(字节数组)和 `type`(整数)这两个域。该文件也定义了一些符号常数(使用 `#define` 语句),前两个常数被用来生成一个 `key`,而这个 `key` 反过来被用来获取一个消息队列的 ID。`ProjectId` 可以是任何正整数值,而 `PathName` 必须是一个存在的、可访问的文件,在这个示例中,指的是文件 `queue.h`。在 `sender``receiver` 中,它们都有的设定语句为:
```c
key_t key = ftok(PathName, ProjectId); /* generate key */
int qid = msgget(key, 0666 | IPC_CREAT); /* use key to get queue id */
```
ID **qid** 在效果上是消息队列文件描述符的对应物。
ID `qid` 在效果上是消息队列文件描述符的对应物。
#### 示例 5. _sender_ 程序
#### 示例 5. sender 程序
```c
#include <stdio.h>
@ -465,15 +470,15 @@ int main() {
}
```
上面的 _sender_ 程序将发送出 6 个消息,每两个为一个类型:前两个是类型 1接着的连个是类型 2最后的两个为类型 3。发送的语句
上面的 `sender` 程序将发送出 6 个消息,每两个为一个类型:前两个是类型 1接着的两个是类型 2最后的两个为类型 3。发送的语句
```c
msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT);
```
被配置为非阻塞的(**IPC_NOWAIT** 标志),因为这里的消息体量上都很小。唯一的危险在于一个完整的序列将可能导致发送失败,而这个例子不会。下面的 _receiver_ 程序也将使用 **IPC_NOWAIT** 标志来接收消息。
被配置为非阻塞的(`IPC_NOWAIT` 标志),是因为这里的消息体量都很小。唯一的危险在于队列满了的时候发送会失败,但在这个例子中不会发生。下面的 `receiver` 程序也将使用 `IPC_NOWAIT` 标志来接收消息。
#### 示例 6. _receiver_ 程序
#### 示例 6. receiver 程序
```c
#include <stdio.h>
@ -511,13 +516,13 @@ int main() {
}
```
这个 _receiver_ 程序不会创建消息队列,尽管 API 看起来像是那样。在 _receiver_ 中,对
这个 `receiver` 程序不会创建消息队列,尽管 API 的调用形式看起来像是要那样做。在 `receiver` 中,对
```c
int qid = msgget(key, 0666 | IPC_CREAT);
```
的调用可能因为带有 **IPC_CREAT** 标志而具有误导性,但是这个标志的真实意义是 _如果需要就创建否则直接获取_。_sender_ 程序调用 **msgsnd** 来发送消息,而 _receiver_ 调用 **msgrcv** 来接收它们。在这个例子中_sender_ 以 1-1-2-2-3-3 的次序发送消息,但 _receiver_ 接收它们的次序为 3-1-2-1-3-2这显示消息队列没有被严格的 FIFO 行为所拘泥:
的调用可能因为带有 `IPC_CREAT` 标志而具有误导性,但是这个标志的真实意义是*如果需要就创建,否则直接获取*。`sender` 程序调用 `msgsnd` 来发送消息,而 `receiver` 调用 `msgrcv` 来接收它们。在这个例子中,`sender` 以 1-1-2-2-3-3 的次序发送消息,但 `receiver` 接收它们的次序为 3-1-2-1-3-2这显示消息队列没有被严格的 FIFO 行为所拘泥:
```shell
% ./sender
@ -537,7 +542,7 @@ msg6 received as type 3
msg4 received as type 2
```
上面的输出显示 _sender__receiver_ 可以在同一个终端中启动。输出也显示消息队列是持久的,即便在 _sender_ 进程在完成创建队列,向队列写数据,然后离开的整个过程后,队列仍然存在。只有在 _receiver_ 进程显式地调用 **msgctl** 来移除该队列,这个队列才会消失:
上面的输出显示 `sender``receiver` 可以在同一个终端中启动。输出也显示消息队列是持久的,即便 `sender` 进程在完成创建队列、向队列写数据、然后退出的整个过程后,该队列仍然存在。只有在 `receiver` 进程显式地调用 `msgctl` 来移除该队列,这个队列才会消失:
```c
if (msgctl(qid, IPC_RMID, NULL) < 0) /* remove queue */
@ -545,7 +550,7 @@ if (msgctl(qid, IPC_RMID, NULL) < 0) /* remove queue */
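在总结之前,下面再给出一个自包含的最小演示草图(并非文章原有代码,其中的结构和常数只为演示而设),它展示了 `msgrcv` 的类型参数如何让接收方“插队”取出特定类型的消息:

```c
/** 编译gcc -o typedemo typedemo.c **/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

typedef struct {
  long type;        /* 消息类型,必须为正数 */
  char payload[32]; /* 消息荷载 */
} demoMsg;

int main() {
  int qid = msgget(IPC_PRIVATE, 0666 | IPC_CREAT); /* 创建一个私有的临时队列 */
  if (qid < 0) { perror("msgget"); exit(-1); }

  demoMsg m;
  for (long t = 1; t <= 3; t++) { /* 以 1-2-3 的次序发送三条消息 */
    m.type = t;
    snprintf(m.payload, sizeof(m.payload), "msg of type %ld", t);
    msgsnd(qid, &m, sizeof(m.payload), 0);
  }

  /* 先请求类型为 3 的消息:它最后发送,却最先被接收 */
  if (msgrcv(qid, &m, sizeof(m.payload), 3, 0) > 0)
    printf("first received: %s\n", m.payload);

  /* 类型参数为 0 表示按 FIFO 次序取出剩余的消息 */
  while (msgrcv(qid, &m, sizeof(m.payload), 0, IPC_NOWAIT) > 0)
    printf("then: %s\n", m.payload);

  msgctl(qid, IPC_RMID, NULL); /* 用完后移除队列 */
  return 0;
}
```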
### 总结
管道和消息队列的 API 在根本上来说都是单向的:一个进程写,然后另一个进程读。当然还存在双向命名管道的实现,但我认为这个 IPC 机制在它最为简单的时候反而是最佳的。正如前面提到的那样,消息队列已经不大受欢迎了,尽管没有找到什么特别好的原因来解释这个现象。而队列仍然是 IPC 工具箱中的另一个工具。这个快速的 IPC 工具箱之旅将以第 3 部分-通过套接字和信号来示例 IPC -来终结。
管道和消息队列的 API 在根本上来说都是单向的:一个进程写,然后另一个进程读。当然还存在双向命名管道的实现,但我认为这个 IPC 机制在它最为简单的时候反而是最佳的。正如前面提到的那样,消息队列已经不大受欢迎了,尽管没有找到什么特别好的原因来解释这个现象;而队列仍然是 IPC 工具箱中的一个工具。这个快速的 IPC 工具箱之旅将以第 3 部分(通过套接字和信号来示例 IPC来终结。
--------------------------------------------------------------------------------
@ -554,7 +559,7 @@ via: https://opensource.com/article/19/4/interprocess-communication-linux-channe
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -562,7 +567,7 @@ via: https://opensource.com/article/19/4/interprocess-communication-linux-channe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: https://en.wikipedia.org/wiki/Inter-process_communication
[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
[3]: https://linux.cn/article-10826-1.html
[4]: http://man7.org/linux/man-pages/man2/mq_open.2.html
[5]: http://man7.org/linux/man-pages/man2/mq_open.2.html#ATTRIBUTES
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html

View File

@ -0,0 +1,140 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10841-1.html)
[#]: subject: (2 new apps for music tweakers on Fedora Workstation)
[#]: via: (https://fedoramagazine.org/2-new-apps-for-music-tweakers-on-fedora-workstation/)
[#]: author: (Justin W. Flory https://fedoramagazine.org/author/jflory7/)
2 个给使用 Fedora 工作站的音乐爱好者的新应用
======
![][1]
Linux 操作系统非常适合进行独特的自定义和调整,以使你的计算机更好地为你工作。例如,[i3 窗口管理器][2] 就让用户认识到了构成现代 Linux 桌面的各种组件和部分。
Fedora 上有两个音乐爱好者会感兴趣的新软件包mpris-scrobbler 和 playerctl。mpris-scrobbler 可以在 Last.fm 和/或 ListenBrainz 等音乐跟踪服务上[跟踪你的音乐收听历史][3]。 playerctl 是一个命令行的[音乐播放器的控制器][4]。
### mpris-scrobbler记录你的音乐收听趋势
mpris-scrobbler 是一个命令行应用程序,用于将音乐的播放历史记录提交给 [Last.fm][5]、[Libre.fm][6] 或 [ListenBrainz][7] 等服务。它监听 [MPRIS D-Bus 接口][8] 以检测正在播放的内容。它可以连接几个不同的音乐客户端,如 spotify 客户端、[vlc][9]、audacious、bmp、[cmus][10] 等。
![Last.fm last week in music report. Generated from user-submitted listening history.][11]
#### 安装和配置 mpris-scrobbler
mpris-scrobbler 在 Fedora 28 或更高版本以及 EPEL 7 存储库中可用。在终端中运行以下命令进行安装:
```
sudo dnf install mpris-scrobbler
```
安装完成后,使用 `systemctl` 启动并启用该服务。以下命令启动 mpris-scrobbler 并始终在系统重启后启动它:
```
systemctl --user enable --now mpris-scrobbler.service
```
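如果想确认该服务确实在运行,可以随时查看它的状态:

```
systemctl --user status mpris-scrobbler.service
```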
#### 提交播放信息给 ListenBrainz
这里将介绍如何将 mpris-scrobbler 与 ListenBrainz 帐户相关联。要使用 Last.fm 或 Libre.fm请参阅其[上游文档][12]。
要将播放信息提交到 ListenBrainz 服务器,你需要有一个 ListenBrainz API 令牌。如果你有帐户,请从[个人资料设置页面][13]中获取该令牌。如果有了令牌,请运行此命令以使用 ListenBrainz API 令牌进行身份验证:
```
$ mpris-scrobbler-signon token listenbrainz
Token for listenbrainz.org:
```
最后,通过在 Fedora 上用你的音乐客户端播放一首歌来测试它。你播放的歌曲会出现在 ListenBrainz 个人资料页中。
![Basic statistics and play history from a user profile on ListenBrainz. The current track is playing on a Fedora Workstation laptop with mpris-scrobbler.][14]
### playerctl 可以控制音乐回放
`playerctl` 是一个命令行工具,它可以控制任何实现了 MPRIS D-Bus 接口的音乐播放器。你可以轻松地将其绑定到键盘快捷键或媒体热键上。以下是如何在命令行中安装、使用它,以及为 i3 窗口管理器创建键绑定的方法。
#### 安装和使用 playerctl
`playerctl` 在 Fedora 28 或更高版本中可用。在终端运行如下命令以安装:
```
sudo dnf install playerctl
```
现在已安装好,你可以立即使用它。在 Fedora 上打开你的音乐播放器。接下来,尝试用以下命令来控制终端的播放。
播放或暂停当前播放的曲目:
```
playerctl play-pause
```
如果你想跳过下一首曲目:
```
playerctl next
```
列出所有正在运行的播放器:
```
playerctl -l
```
仅使用 spotify 客户端播放或暂停当前播放的内容:
```
playerctl -p spotify play-pause
```
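如果你想查看当前曲目的信息(例如用在脚本或状态栏里),`metadata` 子命令会打印出播放器通过 MPRIS 报告的元数据:

```
playerctl metadata
```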
#### 在 i3wm 中创建 playerctl 键绑定
你是否使用窗口管理器,比如 [i3 窗口管理器][2]?尝试使用 `playerctl` 进行键绑定。你可以将不同的命令绑定到不同的快捷键,例如键盘上的播放/暂停按钮。参照下面的 [i3wm 配置摘录][15] 看看如何做:
```
# Media player controls
bindsym XF86AudioPlay exec "playerctl play-pause"
bindsym XF86AudioNext exec "playerctl next"
bindsym XF86AudioPrev exec "playerctl previous"
```
### 体验一下音乐播放器
想了解更多在 Fedora 上定制音乐聆听体验的信息吗Fedora Magazine 已经为你准备好了。看看 Fedora 上这[五个很酷的音乐播放器][16]。
也可以通过使用 MusicBrainz Picard 对音乐库进行排序和组织,[为你的混乱的音乐库带来秩序][17]。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/2-new-apps-for-music-tweakers-on-fedora-workstation/
作者:[Justin W. Flory][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/jflory7/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/2-music-tweak-apps-816x345.jpg
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
[3]: https://github.com/mariusor/mpris-scrobbler
[4]: https://github.com/acrisci/playerctl
[5]: https://www.last.fm/
[6]: https://libre.fm/
[7]: https://listenbrainz.org/
[8]: https://specifications.freedesktop.org/mpris-spec/latest/
[9]: https://www.videolan.org/vlc/
[10]: https://cmus.github.io/
[11]: https://fedoramagazine.org/wp-content/uploads/2019/02/Screenshot_2019-04-13-jflory7%E2%80%99s-week-in-music2-1024x500.png
[12]: https://github.com/mariusor/mpris-scrobbler#authenticate-to-the-service
[13]: https://listenbrainz.org/profile/
[14]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot_2019-04-13-User-jflory-ListenBrainz.png
[15]: https://github.com/jwflory/swiss-army/blob/ba6ac0c71855e33e3caa1ee1fe51c05d2df0529d/roles/apps/i3wm/files/config#L207-L210
[16]: https://fedoramagazine.org/5-cool-music-player-apps/
[17]: https://fedoramagazine.org/picard-brings-order-music-library/
[18]: https://unsplash.com/photos/Qrspubmx6kE?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[19]: https://unsplash.com/search/photos/music?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText


@ -0,0 +1,142 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10847-1.html)
[#]: subject: (Kindd A Graphical Frontend To dd Command)
[#]: via: (https://www.ostechnix.com/kindd-a-graphical-frontend-to-dd-command/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Kindd: a graphical frontend to the dd command
======
![Kindd - A Graphical Frontend To dd Command][1]
A while ago, we learned how to [create bootable ISOs with the dd command][2] on Unix-like systems. Keep in mind that the `dd` command is one of the most dangerous and destructive commands. If you are not sure what you are actually doing, you might accidentally wipe your hard drive data within minutes. The `dd` command just takes data from `if` and writes it to `of`. It doesn't care what it is overwriting; it doesn't care whether there is a partition table, a boot sector, a home folder, or anything else important in the way. It simply does what it is told to do. If you are a beginner, generally try to avoid experimenting with the `dd` command. Fortunately, there is a simple GUI utility for the `dd` command. Say hello to "Kindd", a graphical frontend to the dd command. It is a free and open source tool written in Qt Quick. All in all, this tool is very useful for beginners who are not comfortable with the command line.
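For context, a typical raw `dd` invocation that Kindd is meant to replace looks like the sketch below. The ISO path and `/dev/sdX` are placeholders of my own, not from the original article; always verify the target device (for example with `lsblk`) before running anything like this:
```
# DANGER: this writes the ISO over the entire target device.
# Replace /dev/sdX with your actual USB drive, verified via lsblk.
$ sudo dd if=~/Downloads/linux.iso of=/dev/sdX bs=1M status=progress
$ sync
```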
Its developer created this tool mainly to provide:
1. a modern, simple and safe graphical user interface for the `dd` command,
2. a graphical way to easily create bootable devices without having to use the terminal.
### Install Kindd
Kindd is available in the [AUR][3]. So if you are an Arch user, install it using any AUR helper tool, for example [Yay][4].
To install the Git release, run:
```
$ yay -S kindd-git
```
To install the stable release, run:
```
$ yay -S kindd
```
After installation, launch Kindd from the menu or application launcher.
For other distributions, you need to compile and install it manually from source, as shown below.
Make sure you have installed the following prerequisites:
* git
* coreutils
* polkit
* qt5-base
* qt5-quickcontrols
* qt5-quickcontrols2
* qt5-graphicaleffects
Once all prerequisites are installed, clone the Kindd repository using `git`:
```
git clone https://github.com/LinArcX/Kindd/
```
Go to the directory where you just cloned Kindd, then compile it:
```
cd Kindd
qmake
make
```
Finally, run the following command to launch the Kindd application:
```
./kindd
```
Kindd uses pkexec internally. A pkexec agent is installed by default in most desktop environments. But if you use i3 (or perhaps some other desktop environments as well), you should install polkit-gnome first and then paste the following line into your i3 config file:
```
exec /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1 &
```
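To verify that an authentication agent is actually running in your session (my own quick check, not from the original article), look for its process:
```
$ pgrep -af polkit-gnome
```
If this prints nothing, pkexec prompts will fail and Kindd will not be able to elevate privileges.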
### Create bootable ISOs with Kindd
To create a bootable USB from an ISO, plug in the USB drive. Then launch Kindd from the menu or a terminal.
This is what the default Kindd interface looks like:
![][5]
*The Kindd interface*
As you can see, the Kindd interface is very simple and self-explanatory. There are just two sections: the device list, which shows the available devices (hdd and usb) on your system, and the "Create bootable .iso" section. You will be in the "Create bootable .iso" section by default.
Enter the block size in the first column, select the path of the ISO file in the second column, and choose the correct device (the USB drive path) in the third column. Click the "Convert/Copy" button to start creating the bootable ISO.
![][6]
Once the process is complete, you will see a success message.
![][7]
Now unplug the USB drive and boot your system from it to check whether it really works.
If you do not know the real device name (the target path), just click on the device in the list and check the USB drive's name.
![][8]
Kindd is still in the early stages of development, so there may be bugs. If you find any, please report them on the GitHub page given at the end of this guide.
That's all. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
Resources:
* [Kindd GitHub repository][11]
Related reading:
* [Etcher: a beautiful app to create bootable SD cards or USB drives][9]
* [Bootiso lets you safely create bootable USB drives][10]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/kindd-a-graphical-frontend-to-dd-command/
Author: [sk][a]
Selected by: [lujun9972][b]
Translator: [robsean](https://github.com/robsean)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/kindd-720x340.png
[2]: https://www.ostechnix.com/how-to-create-bootable-usb-drive-using-dd-command/
[3]: https://aur.archlinux.org/packages/kindd-git/
[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-interface.png
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-1.png
[7]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-2.png
[8]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-3.png
[9]: https://www.ostechnix.com/etcher-beauitiful-app-create-bootable-sd-cards-usb-drives/
[10]: https://www.ostechnix.com/bootiso-lets-you-safely-create-bootable-usb-drive/
[11]: https://github.com/LinArcX/Kindd


@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10824-1.html)
[#]: subject: (Ping Multiple Servers And Show The Output In Top-like Text UI)
[#]: via: (https://www.ostechnix.com/ping-multiple-servers-and-show-the-output-in-top-like-text-ui/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Ping multiple servers and show the output in a top-like text UI
======
![Ping Multiple Servers And Show The Output In Top-like Text UI][1]
A while ago, we wrote about [fping][2], a program that enables us to ping multiple hosts at once. Unlike the traditional `ping`, `fping` does not wait for one host to time out. It uses round-robin, meaning it sends an ICMP echo request to one host, moves on to the next one, and finally displays which hosts are up or down. Today we will discuss a similar program named `pingtop`. As the name implies, it pings multiple servers at once and shows the results in a `top`-like terminal UI. It is a free, open source program written in Python.
### Install pingtop
`pingtop` can be installed using `pip`, a package manager for installing programs written in Python. Make sure you have installed Python 3.7.x and pip on your Linux system.
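A quick way to confirm both requirements are met (my addition, not from the original article):
```
$ python3 --version
$ python3 -m pip --version
```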
To install `pip` on Linux, refer to the following link:
* [How to manage Python packages with pip][3]
After installing `pip`, run the following command to install `pingtop`:
```
$ pip install pingtop
```
Now let's go ahead and ping multiple systems with `pingtop`.
### Ping multiple servers and show the output in a top-like terminal UI
To ping multiple hosts/systems, run:
```
$ pingtop ostechnix.com google.com facebook.com twitter.com
```
You will now see the results in a nice `top`-like terminal UI, as shown below.
![][4]
*Pinging multiple servers with pingtop*
Suggested read:
* [Some alternatives to the 'top' command-line utility you might want to know][5]
I personally do not have a use case for pingtop at the moment, but I like the idea of displaying the ping command's output in a text UI. Give it a try; it may be helpful.
And that is all for now. More good stuff to come. Stay tuned! Cheers!
Resources:
* [pingtop GitHub repository][6]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/ping-multiple-servers-and-show-the-output-in-top-like-text-ui/
Author: [sk][a]
Selected by: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/pingtop-720x340.png
[2]: https://www.ostechnix.com/ping-multiple-hosts-linux/
[3]: https://linux.cn/article-10110-1.html
[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/pingtop-1.gif
[5]: https://www.ostechnix.com/some-alternatives-to-top-command-line-utility-you-might-want-to-know/
[6]: https://github.com/laixintao/pingtop


@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10837-1.html)
[#]: subject: (apt-clone : Backup Installed Packages And Restore Those On Fresh Ubuntu System)
[#]: via: (https://www.2daygeek.com/apt-clone-backup-installed-packages-and-restore-them-on-fresh-ubuntu-system/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
apt-clone: back up installed packages and restore them on a fresh Ubuntu system
======
Package installation becomes much easier when we use `apt-clone` on Ubuntu/Debian-based systems. `apt-clone` is for you if you need to install the same set of packages on a handful of systems.
Manually building and installing the necessary packages on every system is a time-consuming process. This can be achieved in many ways, and there are many programs available in Linux. We previously wrote an article about [Aptik][1], one of the programs that let Ubuntu users back up and restore system settings and data.
### What is apt-clone?
[apt-clone][2] lets you create a backup of all installed packages on a Debian/Ubuntu system, which can be restored on a freshly installed system (or container) or into a directory.
The backup can be restored on multiple systems running the same operating system version and architecture.
### How to install apt-clone
The `apt-clone` package is available in the official Ubuntu/Debian repositories, so use the [apt package manager][3] or the [apt-get package manager][4] to install it.
Install `apt-clone` using the `apt` package manager:
```
$ sudo apt install apt-clone
```
Install `apt-clone` using the `apt-get` package manager:
```
$ sudo apt-get install apt-clone
```
### How to back up installed packages using apt-clone
Once `apt-clone` has been installed successfully, simply provide a location to save the backup file. We will save the installed-packages backup under the `/backup` directory.
`apt-clone` saves the list of installed packages into `apt-clone-state-Ubuntu18.2daygeek.com.tar.gz`:
```
$ sudo apt-clone clone /backup
```
We can also check it by running the `ls` command:
```
$ ls -lh /backup/
total 32K
-rw-r--r-- 1 root root 29K Apr 20 19:06 apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
```
Run the following command to view the details of the backup file:
```
$ apt-clone info /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
Hostname: Ubuntu18.2daygeek.com
Arch: amd64
Distro: bionic
Meta: libunity-scopes-json-def-desktop, ubuntu-desktop
Installed: 1792 pkgs (194 automatic)
Date: Sat Apr 20 19:06:43 2019
```
According to the output above, the backup file contains 1792 packages in total.
### How to restore packages backed up with apt-clone
You can copy the backup file to a remote server using any remote copy program:
```
$ scp /backup/apt-clone-state-ubunt-18-04.tar.gz Destination-Server:/opt
```
Once the copy is complete, perform the restore using `apt-clone`.
Use the following command to restore it:
```
$ sudo apt-clone restore /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
```
Note that the restore will overwrite your existing `/etc/apt/sources.list` and will install/remove packages, so be careful.
If you want to restore all the packages to a folder instead of performing an actual restore, you can use the following command:
```
$ sudo apt-clone restore /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz --destination /opt/oldubuntu
```
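One more option worth knowing about: according to apt-clone's upstream help, the `clone` subcommand can also embed packages that are no longer downloadable from any repository by repacking the installed .debs into the backup. A sketch of that variant (treat the flag as described upstream, not as something tested in this article):
```
$ sudo apt-clone clone --with-dpkg-repack /backup
```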
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/apt-clone-backup-installed-packages-and-restore-them-on-fresh-ubuntu-system/
Author: [Magesh Maruthamuthu][a]
Selected by: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/aptik-backup-restore-ppas-installed-apps-users-data/
[2]: https://github.com/mvo5/apt-clone
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/


@ -0,0 +1,221 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Ultimate Guide to JavaScript Fatigue: Realities of our industry)
[#]: via: (https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html)
[#]: author: (Lucas Fernandes Da Costa https://lucasfcosta.com)
The Ultimate Guide to JavaScript Fatigue: Realities of our industry
======
**Complaining about JS Fatigue is just like complaining about the fact that humanity has created too many tools to solve the problems we have**, from email to airplanes and spaceships.
Last week I gave a talk about this very subject at the NebraskaJS 2017 Conference, and I got so much positive feedback that I thought the talk should also become a blog post in order to reach more people and help them deal with JS Fatigue and understand the realities of our industry. **My goal with this post is to change the way you think about software engineering in general and help you in any areas you might work on**.
One of the things that inspired me to write this blog post, and that totally changed my life, is [this great post by Patrick McKenzie, called “Don’t Call Yourself a Programmer and other Career Advice”][1]. **I highly recommend you read that**. Most of this blog post is advice based on what Patrick wrote there, applied to the JavaScript ecosystem and extended with a few more thoughts I’ve developed during these last years working in the tech industry.
This first section is gonna be a bit philosophical, but I swear it will be worth reading.
### Realities of Our Industry 101
Just like Patrick did in [his post][1], let’s start with the most basic and essential truth about our industry:
Software solves business problems
This is it. **Software does not exist to please us as programmers** and let us write beautiful code. Nor does it exist to create jobs for people in the tech industry. **Actually, it exists to kill as many jobs as possible, including ours**, and this is why basic income will become much more important in the next few years, but that’s a whole other subject.
I’m sorry to say it, but the reason things are that way is that there are only two things that matter in software engineering (and in any other industry):
**Cost versus Revenue**
**The more you decrease cost and increase revenue, the more valuable you are**, and one of the most common ways of decreasing cost and increasing revenue is replacing human beings with machines, which are more effective and usually cost less in the long run.
You are not paid to write code
**Technology is not a goal.** Nobody cares about which programming language you are using, nobody cares about which frameworks your team has chosen, nobody cares about how elegant your data structures are, and nobody cares about how good your code is. **The only thing anybody cares about is how much your software costs and how much revenue it generates**.
Writing beautiful code does not matter to your clients. We write beautiful code because it makes us more productive in the long run and this decreases cost and increases revenue.
The whole reason why we try not to write bugs is not that we value correctness, but that **our clients** value correctness. If you have ever seen a bug become a feature, you know what I’m talking about. That bug exists, but it should not be fixed. That happens because our goal is not to fix bugs; our goal is to generate revenue. If our bugs make clients happy, then they increase revenue, and therefore we are accomplishing our goals.
Reusable space rockets, self-driving cars, robots, artificial intelligence: these things do not exist just because someone thought it would be cool to create them. They exist because there are business interests behind them. And I’m not saying the people behind them just want money; I’m sure they also think that stuff is cool. But the truth is that if they were not economically viable, or had no potential to become so, they would not exist.
Probably I should not even call this section “Realities of Our Industry 101”, maybe I should just call it “Realities of Capitalism 101”.
And given that our only goal is to increase revenue and decrease cost, I think we as programmers should pay more attention to requirements and design, start thinking with our own minds and participate more actively in business decisions, which is why it is extremely important to know the problem domain we are working on. How many times have you found yourself thinking about what should happen in certain edge cases that had not been considered by your managers or business people?
In 1975, Boehm published research in which he found that about 64% of all errors in the software he was studying were caused by design, while only 36% of all errors were coding errors. Another study, called [“Higher Order Software—A Methodology for Defining Software”][2], also states that **in the NASA Apollo project, about 73% of all errors were design errors**.
The whole reason why design and requirements exist is that they define which problems we’re going to solve, and solving problems is what generates revenue.
> Without requirements or design, programming is the art of adding bugs to an empty text file.
>
> * Louis Srygley
>
This same principle also applies to the tools we’ve got available in the JavaScript ecosystem. Babel, webpack, React, Redux, Mocha, Chai, TypeScript: all of them exist to solve a problem, and we’ve got to understand which problem they are trying to solve. We need to think carefully about when most of them are needed; otherwise, we will end up having JS Fatigue because:
JS Fatigue happens when people use tools they don't need to solve problems they don't have.
As Donald Knuth once said: “Premature optimization is the root of all evil”. Remember that software only exists to solve business problems, and most software out there is just boring; it does not have any high-scalability or high-performance constraints. Focus on solving business problems, focus on decreasing cost and generating revenue, because this is all that matters. Optimize when you need to; otherwise you will probably be adding unnecessary complexity to your software, which increases cost, while not generating enough revenue to justify it.
This is why I think we should apply [Test Driven Development][3] principles to everything we do in our job. And by saying this I’m not just talking about testing. **I’m talking about waiting for problems to appear before solving them. This is what TDD is all about**. As Kent Beck himself says: “TDD reduces fear”, because it guides your steps and allows you to take small steps towards solving your problems. One problem at a time. By doing the same thing when it comes to deciding when to adopt new technologies, we also reduce fear.
Solving one problem at a time also decreases [Analysis Paralysis][4], which is basically what happens when you open Netflix and spend three hours worrying about making the optimal choice instead of actually watching something. By solving one problem at a time, we reduce the scope of our decisions; by reducing the scope of our decisions, we have fewer choices to make; and by having fewer choices to make, we decrease Analysis Paralysis.
Have you ever thought about how much easier it was to decide what you were going to watch when there were only a few TV channels available? Or how much easier it was to decide which game you were going to play when you had only a few cartridges at home?
### But what about JavaScript?
As I write this post, NPM has 489,989 packages, and tomorrow approximately 515 new ones will be published.
And the packages we use and complain about have a history behind them we must comprehend in order to understand why we need them. **They are all trying to solve problems.**
Babel, Dart, CoffeeScript and other transpilers come from our need to write code other than JavaScript while still making it runnable in our browsers. Babel even lets us write next-generation JavaScript and makes sure it will work on older browsers, which has always been a great problem given the inconsistencies and differing degrees of compliance with the ECMA Specification between browsers. Even though the ECMA spec is becoming more and more solid these days, we still need Babel. And if you want to read more about Babel’s history, I highly recommend that you read [this excellent post by Henry Zhu][5].
Module bundlers such as Webpack and Browserify also have their reason to exist. If you remember well, not so long ago we used to suffer a lot with lots of `script` tags and making them work together. They used to pollute the global namespace, and it was reasonably hard to make them work together when one depended on the other. In order to solve this, [`Require.js`][6] was created, but it still had its problems: it was not that straightforward, and its syntax also made it prone to other problems, as you can see [in this blog post][7]. Then Node.js came with `CommonJS` imports, which were synchronous, simple and clean, but we still needed a way to make that work in our browsers, and this is why we needed Webpack and Browserify.
And Webpack itself actually solves more problems than that by allowing us to deal with CSS, images and many other resources as if they were JavaScript dependencies.
Front-end frameworks are a bit more complicated, but the reason they exist is to reduce the cognitive load when we write code, so that we don’t need to worry about manipulating the DOM ourselves or dealing with messy browser APIs (another problem jQuery came to solve), which is not only error-prone but also unproductive.
This is what we have been doing this whole time in computer science. We use low-level abstractions and build even more abstractions on top of them. The more we worry about describing how our software should work instead of making it work, the more productive we are.
But all those tools have something in common: **they exist because the web platform moves too fast**. Nowadays we’re using web technology everywhere: in web browsers, in desktop applications, in phone applications or even in watch applications.
This evolution also creates problems we need to solve. PWAs, for example, do not exist only because they’re cool and we programmers have fun writing them. Remember the first section of this post: **PWAs exist because they create business value**.
And usually standards are not created fast enough, so we need to build our own solutions to these things, which is why it is great to have such a vibrant and creative community with us. We’re solving problems all the time and **we are allowing natural selection to do its job**.
The tools that suit us better thrive, get more contributors and develop themselves more quickly and sometimes other tools end up incorporating the good ideas from the ones that thrive and becoming even more popular than them. This is how we evolve.
By having more tools we also have more choices. If you remember the UNIX philosophy well, it states that we should aim at creating programs that do one thing and do it well.
We can clearly see this happening in the JS testing environment, for example, where we have Mocha for running tests and Chai for doing assertions, while in Java JUnit tries to do all these things. This means that if we have a problem with one of them or if we find another one that suits us better, we can simply replace that small part and still have the advantages of the other ones.
The UNIX philosophy also states that we should write programs that work together. And this is exactly what we are doing! Take a look at Babel, Webpack and React, for example. They work very well together, but we still do not need one to use the other. In the testing environment, for example, if we’re using Mocha and Chai, we can all of a sudden just install Karma and run those same tests in multiple environments.
### How to Deal With It
My first advice for anyone suffering from JS Fatigue would definitely be to stay aware that **you don’t need to know everything**. Trying to learn it all at once, even when we don’t have to, only increases the feeling of fatigue. Go deep in areas that you love and for which you feel an inner motivation to study, and adopt a lazy approach when it comes to the other ones. I’m not saying that you should be lazy; I’m just saying that you can learn those only when needed. Whenever you face a problem that requires you to use a certain technology to solve it, go learn.
Another important thing to say is that **you should start from the beginning**. Make sure you have learned enough about JavaScript itself before using any JavaScript frameworks. This is the only way you will be able to understand them and bend them to your will; otherwise, whenever you face an error you have never seen before, you won’t know which steps to take in order to solve it. Learning core web technologies such as CSS, HTML5 and JavaScript, as well as computer science fundamentals or even how the HTTP protocol works, will help you master any other technologies a lot more quickly.
But please, don’t get too attached to that. Sometimes you’ve got to take a risk and start doing things on your own. As Sacha Greif wrote in [this blog post][8], spending too much time learning the fundamentals is just like trying to learn how to swim by studying fluid dynamics. Sometimes you just have to jump into the pool and try to swim by yourself.
And please, don’t get too attached to a single technology. All of the things we have available nowadays have already been invented in the past. Of course, they have different features and a brand new name, but, in their essence, they are all the same.
If you look at NPM, it is nothing new; we already had Maven Central and RubyGems quite a long time ago.
In order to transpile your code, Babel applies the very same principles and theory as some of the oldest and most well-known compilers, such as GCC.
Even JSX is not a new idea: E4X (ECMAScript for XML) already existed more than 10 years ago.
Now you might ask: “what about Gulp, Grunt and NPM Scripts?” Well, I’m sorry, but we could already solve all those problems with GNU Make back in 1976. And actually, there is a reasonable number of JavaScript projects that still use it, such as Chai.js, for example. But we do not do that because we are hipsters who like vintage stuff. We use `make` because it solves our problems, and that is what you should aim at doing, as we’ve discussed before.
If you really want to understand a certain technology and be able to solve any problems you might face, please dig deep. One of the most decisive factors for success is curiosity, so **dig deep into the technologies you like**. Try to understand them from the bottom up, and whenever you think something is just “magic”, debunk that myth by exploring the codebase yourself.
In my opinion, when it comes to really learning something, there is no better quote than this one by Richard Feynman:
> What I cannot create, I do not understand
And just below this phrase, [on the same blackboard, Richard also wrote][9]:
> Know how to solve every problem that has been solved
Isn’t this just amazing?
When Richard said that, he was talking about being able to take any theoretical result and re-derive it, but I think the exact same principle can be applied to software engineering. The tools that solve our problems have already been invented; they already exist, so we should be able to get to them all by ourselves.
This is the very reason I love [some of the videos available in Egghead.io][10] in which Dan Abramov explains how to implement certain features that exist in Redux from scratch or [blog posts that teach you how to build your own JSX renderer][11].
So why not try to implement these things by yourself, or go to GitHub and read their codebases in order to understand how they work? I’m sure you will find a lot of useful knowledge out there. Comments and tutorials might lie and be incorrect sometimes; the code cannot.
Another thing that we have been talking about a lot in this post is that **you should not get ahead of yourself**. Follow a TDD approach and solve one problem at a time. You are paid to increase revenue and decrease cost, and you do this by solving problems. This is the reason why software exists.
And since we love comparing our role to those related to civil engineering, let’s do a quick comparison between software development and civil engineering, just as [Sam Newman does in his brilliant book, “Building Microservices”][12].
We love calling ourselves “engineers” or “architects”, but is that term really correct? We have been developing software for what we know as computers for less than a hundred years, while the Colosseum, for example, has existed for about two thousand years.
When was the last time you saw a bridge collapse, and when was the last time your telephone or your browser crashed?
In order to explain this, I’ll use an example I love.
This is the beautiful and awesome city of Barcelona:
![The City of Barcelona][13]
When we look at it this way and from this distance, it just looks like any other city in the world, but when we look at it from above, this is how Barcelona looks:
![Barcelona from above][14]
As you can see, every block has the same size and all of them are very organized. If you’ve ever been to Barcelona, you will also know how easy it is to move through the city and how well it works.
But the people who planned Barcelona could not predict what it was going to look like in the next two or three hundred years. In cities, people come in and move through them all the time, so what they had to do was make it able to grow organically and adapt as time goes by. They had to be prepared for changes.
This very same thing happens to our software. It evolves quickly, refactors are often needed and requirements change more frequently than we would like them to.
So, instead of acting like a Software Engineer, act like a Town Planner. Let your software grow organically and adapt as needed. Solve problems as they come, but make sure everything still has its place.
Doing this with software is even easier than doing it with cities due to the fact that **software is flexible; civil engineering is not**. **In the software world, our build time is compile time**. In Barcelona we cannot simply destroy buildings to make room for new ones; in software we can do that a lot more easily. We can break things all the time, we can run experiments, because we can build as many times as we want, it usually takes seconds, and we spend a lot more time thinking than building. Our job is purely intellectual.
So **act like a town planner, let your software grow and adapt as needed**.
By doing this you will also have better abstractions and know when it’s the right time to adopt them.
As Sam Koblenski says:
> Abstractions only work well in the right context, and the right context develops as the system develops.
Nowadays something I see very often is people looking for boilerplates when they’re trying to learn a new technology, but, in my opinion, **you should avoid boilerplates when you’re starting out**. Of course, boilerplates and generators are useful if you are already experienced, but they take a lot of control out of your hands, and therefore you won’t learn how to set up a project and won’t understand exactly where each piece of the software you are using fits.
When you feel like you are struggling more than necessary to get something simple done, it might be the right time to look for an easier way to do it. In our role, **you should strive to be lazy**; you should work to not work. By doing that you have more free time to do other things, and this decreases cost and increases revenue, so that’s another way of accomplishing your goal. You should not only work harder, you should work smarter.
Probably someone has already had the same problem you’re having right now, but if nobody has, it might be your time to shine and build your own solution and help other people.
But sometimes you will not be able to realize you could be more effective in your tasks until you see someone doing them better. This is why it is so important to **talk to people**.
By talking to people, you share experiences that help each other’s careers, you discover new tools to improve your workflow and, even more important than that, you learn how they solve their problems. This is why I like reading blog posts in which companies explain how they solve their problems.
Especially in our area, we like to think that Google and StackOverflow can answer all our questions, but we still need to know which questions to ask. I’m sure you have already had a problem you could not find a solution for because you didn’t know exactly what was happening and therefore didn’t know what the right question to ask was.
But if I had to sum this whole post up in a single piece of advice, it would be:
Solve problems.
Software is not a magic box; software is not poetry (unfortunately). It exists to solve problems and improve people’s lives. Software exists to push the world forward.
**Now it’s your time to go out there and solve problems**.
--------------------------------------------------------------------------------
via: https://lucasfcosta.com/2017/07/17/The-Ultimate-Guide-to-JavaScript-Fatigue.html
Author: [Lucas Fernandes Da Costa][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://lucasfcosta.com
[b]: https://github.com/lujun9972
[1]: http://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-programmer/
[2]: http://ieeexplore.ieee.org/document/1702333/
[3]: https://en.wikipedia.org/wiki/Test_Driven_Development
[4]: https://en.wikipedia.org/wiki/Analysis_paralysis
[5]: https://babeljs.io/blog/2016/12/07/the-state-of-babel
[6]: http://requirejs.org
[7]: https://benmccormick.org/2015/05/28/moving-past-requirejs/
[8]: https://medium.freecodecamp.org/a-study-plan-to-cure-javascript-fatigue-8ad3a54f2eb1
[9]: https://www.quora.com/What-did-Richard-Feynman-mean-when-he-said-What-I-cannot-create-I-do-not-understand
[10]: https://egghead.io/lessons/javascript-redux-implementing-store-from-scratch
[11]: https://jasonformat.com/wtf-is-jsx/
[12]: https://www.barnesandnoble.com/p/building-microservices-sam-newman/1119741399/2677517060476?st=PLA&sid=BNB_DRS_Marketplace+Shopping+Books_00000000&2sid=Google_&sourceId=PLGoP4760&k_clickid=3x4760
[13]: /assets/barcelona-city.jpeg
[14]: /assets/barcelona-above.jpeg
[15]: https://twitter.com/thewizardlucas


@ -1,162 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to add a player to your Python game)
[#]: via: (https://opensource.com/article/17/12/game-python-add-a-player)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to add a player to your Python game
======
Part three of a series on building a game from scratch with Python.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python3-game.png?itok=jG9UdwC3)
In the [first article of this series][1], I explained how to use Python to create a simple, text-based dice game. In the second part, I showed you how to build a game from scratch, starting with [creating the game's environment][2]. But every game needs a player, and every player needs a playable character, so that's what we'll do next in the third part of the series.
In Pygame, the icon or avatar that a player controls is called a sprite. If you don't have any graphics to use for a player sprite yet, create something for yourself using [Krita][3] or [Inkscape][4]. If you lack confidence in your artistic skills, you can also search [OpenClipArt.org][5] or [OpenGameArt.org][6] for something pre-generated. Then, if you didn't already do so in the previous article, create a directory called `images` alongside your Python project directory. Put the images you want to use in your game into the `images` folder.
To make your game truly exciting, you ought to use an animated sprite for your hero. It means you have to draw more assets, but it makes a big difference. The most common animation is a walk cycle, a series of drawings that make it look like your sprite is walking. The quick and dirty version of a walk cycle requires four drawings.
![](https://opensource.com/sites/default/files/u128651/walk-cycle-poses.jpg)
Note: The code samples in this article allow for both a static player sprite and an animated one.
Name your player sprite `hero.png`. If you're creating an animated sprite, append a digit after the name, starting with `hero1.png`.
### Create a Python class
In Python, when you create an object that you want to appear on screen, you create a class.
Near the top of your Python script, add the code to create a player. In the code sample below, the first three lines are already in the Python script that you're working on:
```
import pygame
import sys
import os # new code below
class Player(pygame.sprite.Sprite):
    '''
    Spawn a player
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.images = []
        img = pygame.image.load(os.path.join('images','hero.png')).convert()
        self.images.append(img)
        self.image = self.images[0]
        self.rect  = self.image.get_rect()
```
If you have a walk cycle for your playable character, save each drawing as an individual file called `hero1.png` to `hero4.png` in the `images` folder.
Use a loop to tell Python to cycle through each file.
```
'''
Objects
'''
class Player(pygame.sprite.Sprite):
    '''
    Spawn a player
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.images = []
        for i in range(1,5):
            img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
            self.images.append(img)
        self.image = self.images[0]
        self.rect  = self.image.get_rect()
```
### Bring the player into the game world
Now that a Player class exists, you must use it to spawn a player sprite in your game world. If you never call on the Player class, it never runs, and there will be no player. You can test this out by running your game now. The game will run just as well as it did at the end of the previous article, with the exact same results: an empty game world.
To bring a player sprite into your world, you must call the Player class to generate a sprite and then add it to a Pygame sprite group. In this code sample, the first three lines are existing code, so add the lines afterwards:
```
world       = pygame.display.set_mode([worldx,worldy])
backdrop    = pygame.image.load(os.path.join('images','stage.png')).convert()
backdropbox = world.get_rect()
# new code below
player = Player()   # spawn player
player.rect.x = 0   # go to x
player.rect.y = 0   # go to y
player_list = pygame.sprite.Group()
player_list.add(player)
```
Try launching your game to see what happens. Warning: it won't do what you expect. When you launch your project, the player sprite doesn't spawn. Actually, it spawns, but only for a millisecond. How do you fix something that only happens for a millisecond? You might recall from the previous article that you need to add something to the main loop. To make the player spawn for longer than a millisecond, tell Python to draw it once per loop.
Change the bottom clause of your loop to look like this:
```
    world.blit(backdrop, backdropbox)
    player_list.draw(world) # draw player
    pygame.display.flip()
    clock.tick(fps)
```
Launch your game now. Your player spawns!
### Setting the alpha channel
Depending on how you created your player sprite, it may have a colored block around it. What you are seeing is the space that ought to be occupied by an alpha channel. It's meant to be the "color" of invisibility, but Python doesn't know to make it invisible yet. What you are seeing, then, is the space within the bounding box (or "hit box," in modern gaming terms) around the sprite.
![](https://opensource.com/sites/default/files/u128651/greenscreen.jpg)
You can tell Python what color to make invisible by setting an alpha channel and using RGB values. If you don't know the RGB values your drawing uses as alpha, open your drawing in Krita or Inkscape and fill the empty space around your drawing with a unique color, like #00ff00 (more or less a "greenscreen green"). Take note of the color's hex value (#00ff00, for greenscreen green) and use that in your Python script as the alpha channel.
Using alpha requires the addition of two lines in your Sprite creation code. Some version of the first line is already in your code. Add the other two lines:
```
            img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
            img.convert_alpha()     # optimise alpha
            img.set_colorkey(ALPHA) # set alpha
```
Python doesn't know what to use as alpha unless you tell it. In the setup area of your code, add some more color definitions. Add this variable definition anywhere in your setup section:
```
ALPHA = (0, 255, 0)
```
In this example code, **0,255,0** is used, which is the same value in RGB as #00ff00 is in hex. You can get all of these color values from a good graphics application like [GIMP][7], Krita, or Inkscape. Alternately, you can also detect color values with a good system-wide color chooser, like [KColorChooser][8].
![](https://opensource.com/sites/default/files/u128651/kcolor.png)
If your graphics application is rendering your sprite's background as some other value, adjust the values of your alpha variable as needed. No matter what you set your alpha value to, it will be made "invisible." RGB values are very strict, so if you need to use 000 for alpha but you also need 000 for the black lines of your drawing, just change the lines of your drawing to 111, which is close enough to black that nobody but a computer can tell the difference.
Launch your game to see the results.
![](https://opensource.com/sites/default/files/u128651/alpha.jpg)
In the [fourth part of this series][9], I'll show you how to make your sprite move. How exciting!
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/12/game-python-add-a-player
Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/17/10/python-101
[2]: https://opensource.com/article/17/12/program-game-python-part-2-creating-game-world
[3]: http://krita.org
[4]: http://inkscape.org
[5]: http://openclipart.org
[6]: https://opengameart.org/
[7]: http://gimp.org
[8]: https://github.com/KDE/kcolorchooser
[9]: https://opensource.com/article/17/12/program-game-python-part-4-moving-your-sprite


@ -1,745 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (TLP An Advanced Power Management Tool That Improve Battery Life On Linux Laptop)
[#]: via: (https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
TLP An Advanced Power Management Tool That Improves Battery Life On Linux Laptops
======
Laptop batteries are highly optimized for Windows; I realized that when I was using Windows on my laptop, but it's not the same for Linux.
Over the years, Linux has improved a lot in terms of battery optimization, but we still need to do a few things to get better laptop battery life on Linux.
When I think about battery life, there are a few options, but I felt TLP was the best solution for me, so I'm going with it.
In this tutorial, we are going to discuss TLP in detail to improve battery life.
We have previously written three articles on our site about **[laptop battery saving utilities][1]** for Linux: **[PowerTOP][2]** and **[Battery Charging State][3]**.
### What is TLP?
[TLP][4] is a free, open source, advanced power management tool that improves your battery life without requiring any configuration changes.
It comes with a default configuration already optimized for battery life, so you can just install it and forget about it.
Also, it is highly customizable to fulfill your specific requirements. TLP is a pure command line tool with automated background tasks. It does not contain a GUI.
TLP runs on every laptop brand. Setting the battery charge thresholds is available for IBM/Lenovo ThinkPads only.
All TLP settings are stored in `/etc/default/tlp`. The default configuration provides optimized power saving out of the box.
The following TLP settings are available for customization, and you can make the necessary changes according to your requirements.
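As a small illustration (not from the original article), customization is just a matter of editing the config file and re-applying the settings; `tlp start` applies them immediately without a reboot. The `CPU_HWP_ON_BAT` variable used in the comment below appears in the configuration dump later in this article:
```
$ sudo nano /etc/default/tlp   # e.g. change CPU_HWP_ON_BAT=balance_power
$ sudo tlp start               # apply the updated settings right away
```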
### TLP Features
* Kernel laptop mode and dirty buffer timeouts
* Processor frequency scaling including “turbo boost” / “turbo core”
* Limit max/min P-state to control power dissipation of the CPU
* HWP energy performance hints
* Power aware process scheduler for multi-core/hyper-threading
* Processor performance versus energy savings policy (x86_energy_perf_policy)
* Hard disk advanced power management level (APM) and spin down timeout (per disk)
* AHCI link power management (ALPM) with device blacklist
* PCIe active state power management (PCIe ASPM)
* Runtime power management for PCI(e) bus devices
* Radeon graphics power management (KMS and DPM)
* Wifi power saving mode
* Power off optical drive in drive bay
* Audio power saving mode
* I/O scheduler (per disk)
* USB autosuspend with device blacklist/whitelist (input devices excluded automatically)
* Enable or disable integrated wifi, bluetooth or wwan devices upon system startup and shutdown
* Restore radio device state on system startup (from previous shutdown).
* Radio device wizard: switch radios upon network connect/disconnect and dock/undock
* Disable Wake On LAN
* Integrated WWAN and bluetooth state is restored after suspend/hibernate
* Undervolting of Intel processors (requires a kernel with the PHC patch)
* Battery charge thresholds (ThinkPads only; see the example after this list)
* Recalibrate battery (ThinkPads only)
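For example, the ThinkPad-only charge thresholds mentioned above are set in `/etc/default/tlp`. The variable names below are TLP's standard threshold settings; the 75/80 values are just an illustration of mine, not a recommendation from the original article:
```
# /etc/default/tlp -- ThinkPad-only battery care settings
START_CHARGE_THRESH_BAT0=75   # begin charging only below 75%
STOP_CHARGE_THRESH_BAT0=80    # stop charging at 80% to reduce wear
```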
### How to Install TLP in Linux
The TLP package is available in the official repositories of most distributions, so use your distribution's **[Package Manager][5]** to install it.
For **`Fedora`** systems, use the **[DNF Command][6]** to install TLP:
```
$ sudo dnf install tlp tlp-rdw
```
ThinkPads require additional packages:
```
$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf install http://repo.linrunner.de/fedora/tlp/repos/releases/tlp-release.fc$(rpm -E %fedora).noarch.rpm
$ sudo dnf install akmod-tp_smapi akmod-acpi_call kernel-devel
```
Install smartmontools to display S.M.A.R.T. data in tlp-stat:
```
$ sudo dnf install smartmontools
```
For **`Debian/Ubuntu`** systems, use the **[APT-GET Command][7]** or **[APT Command][8]** to install TLP:
```
$ sudo apt install tlp tlp-rdw
```
ThinkPads require additional packages:
```
$ sudo apt-get install tp-smapi-dkms acpi-call-dkms
```
Install smartmontools to display S.M.A.R.T. data in tlp-stat:
```
$ sudo apt-get install smartmontools
```
If the official package becomes outdated on Ubuntu-based systems, use the following PPA repository, which provides an up-to-date version. Run the following commands to install TLP using the PPA:
```
$ sudo add-apt-repository ppa:linrunner/tlp
$ sudo apt-get update
$ sudo apt-get install tlp tlp-rdw
```
For **`Arch Linux`**-based systems, use the **[Pacman Command][9]** to install TLP:
```
$ sudo pacman -S tlp tlp-rdw
```
ThinkPads require additional packages:
```
$ pacman -S tp_smapi acpi_call
```
Install smartmontools to display S.M.A.R.T. data in tlp-stat:
```
$ sudo pacman -S smartmontools
```
Enable the TLP & TLP-Sleep services on boot for Arch Linux-based systems:
```
$ sudo systemctl enable tlp.service
$ sudo systemctl enable tlp-sleep.service
```
You should also mask the following services to avoid conflicts and to ensure the proper operation of TLP's radio device switching options on Arch Linux-based systems:
```
$ sudo systemctl mask systemd-rfkill.service
$ sudo systemctl mask systemd-rfkill.socket
```
For **`RHEL/CentOS`** systems, use the **[YUM Command][10]** to install TLP:
```
$ sudo yum install tlp tlp-rdw
```
Install smartmontools to display S.M.A.R.T. data in tlp-stat:
```
$ sudo yum install smartmontools
```
For **`openSUSE Leap`** systems, use the **[Zypper Command][11]** to install TLP:
```
$ sudo zypper install TLP
```
Install smartmontools to display S.M.A.R.T. data in tlp-stat:
```
$ sudo zypper install smartmontools
```
After TLP has been successfully installed, use the following command to start the service:
```
$ systemctl start tlp.service
```
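To confirm TLP is enabled and in the expected power mode, you can filter the status section of `tlp-stat` (shown in full later in this article):
```
$ sudo tlp-stat -s | grep -E "State|Mode|Power source"
```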
To show battery information.
```
$ sudo tlp-stat -b
or
$ sudo tlp-stat --battery
--- TLP 1.1 --------------------------------------------
+++ Battery Status
/sys/class/power_supply/BAT0/manufacturer = SMP
/sys/class/power_supply/BAT0/model_name = L14M4P23
/sys/class/power_supply/BAT0/cycle_count = (not supported)
/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh]
/sys/class/power_supply/BAT0/energy_full = 48850 [mWh]
/sys/class/power_supply/BAT0/energy_now = 48850 [mWh]
/sys/class/power_supply/BAT0/power_now = 0 [mW]
/sys/class/power_supply/BAT0/status = Full
Charge = 100.0 [%]
Capacity = 81.4 [%]
```
To show disk information.
```
$ sudo tlp-stat -d
or
$ sudo tlp-stat --disk
--- TLP 1.1 --------------------------------------------
+++ Storage Devices
/dev/sda:
Model = WDC WD10SPCX-24HWST1
Firmware = 02.01A02
APM Level = 128
Status = active/idle
Scheduler = mq-deadline
Runtime PM: control = on, autosuspend_delay = (not available)
SMART info:
4 Start_Stop_Count = 18787
5 Reallocated_Sector_Ct = 0
9 Power_On_Hours = 606 [h]
12 Power_Cycle_Count = 1792
193 Load_Cycle_Count = 25775
194 Temperature_Celsius = 31 [°C]
+++ AHCI Link Power Management (ALPM)
/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm
+++ AHCI Host Controller Runtime Power Management
/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on
```
To show PCIe device information:
```
$ sudo tlp-stat -e
or
$ sudo tlp-stat --pcie
--- TLP 1.1 --------------------------------------------
+++ Runtime Power Management
Device blacklist = (not configured)
Driver blacklist = amdgpu nouveau nvidia radeon pcieport
/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore)
/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915)
/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd)
/sys/bus/pci/devices/0000:00:16.0/power/control = auto (0x078000, Communication controller, mei_me)
/sys/bus/pci/devices/0000:00:17.0/power/control = auto (0x010601, SATA controller, ahci)
/sys/bus/pci/devices/0000:00:1c.0/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:1c.2/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:1c.3/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:1d.0/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:1f.0/power/control = auto (0x060100, ISA bridge, no driver)
/sys/bus/pci/devices/0000:00:1f.2/power/control = auto (0x058000, Memory controller, no driver)
/sys/bus/pci/devices/0000:00:1f.3/power/control = auto (0x040300, Audio device, snd_hda_intel)
/sys/bus/pci/devices/0000:00:1f.4/power/control = auto (0x0c0500, SMBus, i801_smbus)
/sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030200, 3D controller, nouveau)
/sys/bus/pci/devices/0000:07:00.0/power/control = auto (0x080501, SD Host controller, sdhci-pci)
/sys/bus/pci/devices/0000:08:00.0/power/control = auto (0x028000, Network controller, iwlwifi)
/sys/bus/pci/devices/0000:09:00.0/power/control = auto (0x020000, Ethernet controller, r8168)
/sys/bus/pci/devices/0000:0a:00.0/power/control = auto (0x010802, Non-Volatile memory controller, nvme)
```
To show graphics card information.
```
$ sudo tlp-stat -g
or
$ sudo tlp-stat --graphics
--- TLP 1.1 --------------------------------------------
+++ Intel Graphics
/sys/module/i915/parameters/enable_dc = -1 (use per-chip default)
/sys/module/i915/parameters/enable_fbc = 1 (enabled)
/sys/module/i915/parameters/enable_psr = 0 (disabled)
/sys/module/i915/parameters/modeset = -1 (use per-chip default)
```
To show processor information:
```
$ sudo tlp-stat -p
or
$ sudo tlp-stat --processor
--- TLP 1.1 --------------------------------------------
+++ Processor
CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu2/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu2/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu3/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu3/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu4/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu4/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu4/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu5/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu5/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu5/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu5/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu5/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu6/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu6/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu6/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu6/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu6/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu7/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu7/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu7/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu7/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%]
/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%]
/sys/devices/system/cpu/intel_pstate/no_turbo = 0
/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%]
/sys/devices/system/cpu/intel_pstate/num_pstates = 28
x86_energy_perf_policy: program not installed.
/sys/module/workqueue/parameters/power_efficient = Y
/proc/sys/kernel/nmi_watchdog = 0
+++ Undervolting
PHC kernel not available.
```
To show system information:
```
$ sudo tlp-stat -s
or
$ sudo tlp-stat --system
--- TLP 1.1 --------------------------------------------
+++ System Info
System = LENOVO Lenovo ideapad Y700-15ISK 80NV
BIOS = CDCN35WW
Release = "Manjaro Linux"
Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64
/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b
Init system = systemd
Boot mode = BIOS (CSM, Legacy)
+++ TLP Status
State = enabled
Last run = 11:04:00 IST, 596 sec(s) ago
Mode = battery
Power source = battery
```
To show temperatures and fan speed information.
```
$ sudo tlp-stat -t
or
$ sudo tlp-stat --temp
--- TLP 1.1 --------------------------------------------
+++ Temperatures
CPU temp = 36 [°C]
Fan speed = (not available)
```
To show USB device information.
```
$ sudo tlp-stat -u
or
$ sudo tlp-stat --usb
--- TLP 1.1 --------------------------------------------
+++ USB
Autosuspend = disabled
Device whitelist = (not configured)
Device blacklist = (not configured)
Bluetooth blacklist = disabled
Phone blacklist = disabled
WWAN blacklist = enabled
Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub)
Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo)
Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid)
Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. (btusb)
Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub)
```
To show warnings.
```
$ sudo tlp-stat -w
or
$ sudo tlp-stat --warn
--- TLP 1.1 --------------------------------------------
No warnings detected.
```
To show the full status report with the configuration and all active settings.
```
$ sudo tlp-stat
--- TLP 1.1 --------------------------------------------
+++ Configured Settings: /etc/default/tlp
TLP_ENABLE=1
TLP_DEFAULT_MODE=AC
TLP_PERSISTENT_DEFAULT=0
DISK_IDLE_SECS_ON_AC=0
DISK_IDLE_SECS_ON_BAT=2
MAX_LOST_WORK_SECS_ON_AC=15
MAX_LOST_WORK_SECS_ON_BAT=60
CPU_HWP_ON_AC=balance_performance
CPU_HWP_ON_BAT=balance_power
SCHED_POWERSAVE_ON_AC=0
SCHED_POWERSAVE_ON_BAT=1
NMI_WATCHDOG=0
ENERGY_PERF_POLICY_ON_AC=performance
ENERGY_PERF_POLICY_ON_BAT=power
DISK_DEVICES="sda sdb"
DISK_APM_LEVEL_ON_AC="254 254"
DISK_APM_LEVEL_ON_BAT="128 128"
SATA_LINKPWR_ON_AC="med_power_with_dipm max_performance"
SATA_LINKPWR_ON_BAT="med_power_with_dipm max_performance"
AHCI_RUNTIME_PM_TIMEOUT=15
PCIE_ASPM_ON_AC=performance
PCIE_ASPM_ON_BAT=powersave
RADEON_POWER_PROFILE_ON_AC=default
RADEON_POWER_PROFILE_ON_BAT=low
RADEON_DPM_STATE_ON_AC=performance
RADEON_DPM_STATE_ON_BAT=battery
RADEON_DPM_PERF_LEVEL_ON_AC=auto
RADEON_DPM_PERF_LEVEL_ON_BAT=auto
WIFI_PWR_ON_AC=off
WIFI_PWR_ON_BAT=on
WOL_DISABLE=Y
SOUND_POWER_SAVE_ON_AC=0
SOUND_POWER_SAVE_ON_BAT=1
SOUND_POWER_SAVE_CONTROLLER=Y
BAY_POWEROFF_ON_AC=0
BAY_POWEROFF_ON_BAT=0
BAY_DEVICE="sr0"
RUNTIME_PM_ON_AC=on
RUNTIME_PM_ON_BAT=auto
RUNTIME_PM_DRIVER_BLACKLIST="amdgpu nouveau nvidia radeon pcieport"
USB_AUTOSUSPEND=0
USB_BLACKLIST_BTUSB=0
USB_BLACKLIST_PHONE=0
USB_BLACKLIST_PRINTER=1
USB_BLACKLIST_WWAN=1
RESTORE_DEVICE_STATE_ON_STARTUP=0
+++ System Info
System = LENOVO Lenovo ideapad Y700-15ISK 80NV
BIOS = CDCN35WW
Release = "Manjaro Linux"
Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64
/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b
Init system = systemd
Boot mode = BIOS (CSM, Legacy)
+++ TLP Status
State = enabled
Last run = 11:04:00 IST, 684 sec(s) ago
Mode = battery
Power source = battery
+++ Processor
CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu2/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu2/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu3/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu3/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu4/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu4/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu4/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu5/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu5/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu5/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu5/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu5/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu6/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu6/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu6/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu6/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu6/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/cpu7/cpufreq/scaling_driver = intel_pstate
/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor = powersave
/sys/devices/system/cpu/cpu7/cpufreq/scaling_available_governors = performance powersave
/sys/devices/system/cpu/cpu7/cpufreq/scaling_min_freq = 800000 [kHz]
/sys/devices/system/cpu/cpu7/cpufreq/scaling_max_freq = 3500000 [kHz]
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_preference = balance_power
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%]
/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%]
/sys/devices/system/cpu/intel_pstate/no_turbo = 0
/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%]
/sys/devices/system/cpu/intel_pstate/num_pstates = 28
x86_energy_perf_policy: program not installed.
/sys/module/workqueue/parameters/power_efficient = Y
/proc/sys/kernel/nmi_watchdog = 0
+++ Undervolting
PHC kernel not available.
+++ Temperatures
CPU temp = 42 [°C]
Fan speed = (not available)
+++ File System
/proc/sys/vm/laptop_mode = 2
/proc/sys/vm/dirty_writeback_centisecs = 6000
/proc/sys/vm/dirty_expire_centisecs = 6000
/proc/sys/vm/dirty_ratio = 20
/proc/sys/vm/dirty_background_ratio = 10
+++ Storage Devices
/dev/sda:
Model = WDC WD10SPCX-24HWST1
Firmware = 02.01A02
APM Level = 128
Status = active/idle
Scheduler = mq-deadline
Runtime PM: control = on, autosuspend_delay = (not available)
SMART info:
4 Start_Stop_Count = 18787
5 Reallocated_Sector_Ct = 0
9 Power_On_Hours = 606 [h]
12 Power_Cycle_Count = 1792
193 Load_Cycle_Count = 25777
194 Temperature_Celsius = 31 [°C]
+++ AHCI Link Power Management (ALPM)
/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm
/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm
+++ AHCI Host Controller Runtime Power Management
/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on
/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on
+++ PCIe Active State Power Management
/sys/module/pcie_aspm/parameters/policy = powersave
+++ Intel Graphics
/sys/module/i915/parameters/enable_dc = -1 (use per-chip default)
/sys/module/i915/parameters/enable_fbc = 1 (enabled)
/sys/module/i915/parameters/enable_psr = 0 (disabled)
/sys/module/i915/parameters/modeset = -1 (use per-chip default)
+++ Wireless
bluetooth = on
wifi = on
wwan = none (no device)
hci0(btusb) : bluetooth, not connected
wlp8s0(iwlwifi) : wifi, connected, power management = on
+++ Audio
/sys/module/snd_hda_intel/parameters/power_save = 1
/sys/module/snd_hda_intel/parameters/power_save_controller = Y
+++ Runtime Power Management
Device blacklist = (not configured)
Driver blacklist = amdgpu nouveau nvidia radeon pcieport
/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore)
/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915)
/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd)
/sys/bus/pci/devices/0000:00:16.0/power/control = auto (0x078000, Communication controller, mei_me)
/sys/bus/pci/devices/0000:00:17.0/power/control = auto (0x010601, SATA controller, ahci)
/sys/bus/pci/devices/0000:00:1c.0/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:1c.2/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:1c.3/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:1d.0/power/control = auto (0x060400, PCI bridge, pcieport)
/sys/bus/pci/devices/0000:00:1f.0/power/control = auto (0x060100, ISA bridge, no driver)
/sys/bus/pci/devices/0000:00:1f.2/power/control = auto (0x058000, Memory controller, no driver)
/sys/bus/pci/devices/0000:00:1f.3/power/control = auto (0x040300, Audio device, snd_hda_intel)
/sys/bus/pci/devices/0000:00:1f.4/power/control = auto (0x0c0500, SMBus, i801_smbus)
/sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030200, 3D controller, nouveau)
/sys/bus/pci/devices/0000:07:00.0/power/control = auto (0x080501, SD Host controller, sdhci-pci)
/sys/bus/pci/devices/0000:08:00.0/power/control = auto (0x028000, Network controller, iwlwifi)
/sys/bus/pci/devices/0000:09:00.0/power/control = auto (0x020000, Ethernet controller, r8168)
/sys/bus/pci/devices/0000:0a:00.0/power/control = auto (0x010802, Non-Volatile memory controller, nvme)
+++ USB
Autosuspend = disabled
Device whitelist = (not configured)
Device blacklist = (not configured)
Bluetooth blacklist = disabled
Phone blacklist = disabled
WWAN blacklist = enabled
Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub)
Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo)
Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid)
Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. (btusb)
Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub)
+++ Battery Status
/sys/class/power_supply/BAT0/manufacturer = SMP
/sys/class/power_supply/BAT0/model_name = L14M4P23
/sys/class/power_supply/BAT0/cycle_count = (not supported)
/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh]
/sys/class/power_supply/BAT0/energy_full = 51690 [mWh]
/sys/class/power_supply/BAT0/energy_now = 50140 [mWh]
/sys/class/power_supply/BAT0/power_now = 12185 [mW]
/sys/class/power_supply/BAT0/status = Discharging
Charge = 97.0 [%]
Capacity = 86.2 [%]
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/check-laptop-battery-status-and-charging-state-in-linux-terminal/
[2]: https://www.2daygeek.com/powertop-monitors-laptop-battery-usage-linux/
[3]: https://www.2daygeek.com/monitor-laptop-battery-charging-state-linux/
[4]: https://linrunner.de/en/tlp/docs/tlp-linux-advanced-power-management.html
[5]: https://www.2daygeek.com/category/package-management/
[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (cycoe)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,111 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Python's cryptography library)
[#]: via: (https://opensource.com/article/19/4/cryptography-python)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Getting started with Python's cryptography library
======
Encrypt your data and keep it safe from attackers.
![lock on world map][1]
The first rule of cryptography club is: never _invent_ a cryptography system yourself. The second rule of cryptography club is: never _implement_ a cryptography system yourself: many real-world holes are found in the _implementation_ phase of a cryptosystem as well as in the design.
One useful library for cryptographic primitives in Python is called simply [**cryptography**][2]. It has both "secure" primitives and a "hazmat" layer. The "hazmat" layer requires care and knowledge of cryptography, and it is easy to implement security holes using it. We will not cover anything in the "hazmat" layer in this introductory article!
The most useful high-level secure primitive in **cryptography** is the Fernet implementation. Fernet is a standard for encrypting buffers in a way that follows best-practices cryptography. It is not suitable for very big files—anything in the gigabyte range and above—since it requires you to load the whole buffer that you want to encrypt or decrypt into memory at once.
Fernet supports _symmetric_, or _secret key_, cryptography: the same key is used for encryption and decryption, and therefore must be kept safe.
Generating a key is easy:
```
>>> from cryptography import fernet
>>> k = fernet.Fernet.generate_key()
>>> type(k)
<class 'bytes'>
```
Those bytes can be written to a file with appropriate permissions, ideally on a secure machine.
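For example, a minimal sketch (the `fernet.key` file name is an illustrative assumption, not from the library):
```
>>> import os
>>> with open("fernet.key", "wb") as fpout:
...     fpout.write(k)
...
>>> os.chmod("fernet.key", 0o600)  # owner read/write only
```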
Once you have key material, encrypting is easy as well:
```
>>> frn = fernet.Fernet(k)
>>> encrypted = frn.encrypt(b"x marks the spot")
>>> encrypted[:10]
b'gAAAAABb1'
```
You will get slightly different values if you encrypt on your machine. Not only because (I hope) you generated a different key from me, but because Fernet concatenates the value to be encrypted with some randomly generated buffer. This is one of the "best practices" I alluded to earlier: it will prevent an adversary from being able to tell which encrypted values are identical, which is sometimes an important part of an attack.
Decryption is equally simple:
```
>>> frn = fernet.Fernet(k)
>>> frn.decrypt(encrypted)
b'x marks the spot'
```
Note that this only encrypts and decrypts _byte strings_. In order to encrypt and decrypt _text strings_, they will need to be encoded and decoded, usually with [UTF-8][3].
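For example, continuing the session above:
```
>>> token = frn.encrypt("x marks the spot".encode("utf-8"))
>>> frn.decrypt(token).decode("utf-8")
'x marks the spot'
```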
One of the most interesting advances in cryptography in the mid-20th century was _public key_ cryptography. It allows the encryption key to be published while the _decryption key_ is kept secret. It can, for example, be used to store API keys to be used by a server: the server is the only thing with access to the decryption key, but anyone can add to the store by using the public encryption key.
While **cryptography** does not have any public key cryptographic _secure_ primitives, the [**PyNaCl**][4] library does. PyNaCl wraps and offers some nice ways to use the [**NaCl**][5] encryption system invented by Daniel J. Bernstein.
NaCl always _encrypts_ and _signs_ or _decrypts_ and _verifies signatures_ simultaneously. This is a way to prevent malleability-based attacks, where an adversary modifies the encrypted value.
Encryption is done with a public key, while signing is done with a secret key:
```
>>> from nacl.public import PrivateKey, PublicKey, Box
>>> source = PrivateKey.generate()
>>> with open("target.pubkey", "rb") as fpin:
... target_public_key = PublicKey(fpin.read())
>>> enc_box = Box(source, target_public_key)
>>> result = enc_box.encrypt(b"x marks the spot")
>>> result[:4]
b'\xe2\x1c0\xa4'
```
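Both the encryption and decryption snippets assume the key pairs were generated and saved to disk in advance. Here is a minimal sketch of how the target's files could be produced (the file names simply match those used in this article; this step is not from the original text):
```
>>> from nacl.public import PrivateKey
>>> target = PrivateKey.generate()
>>> with open("target.private_key", "wb") as fpout:
...     fpout.write(bytes(target))
...
>>> with open("target.pubkey", "wb") as fpout:
...     fpout.write(bytes(target.public_key))
...
```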
Decryption reverses the roles: it needs the private key for decryption and the public key to verify the signature:
```
>>> from nacl.public import PrivateKey, PublicKey, Box
>>> with open("source.pubkey", "rb") as fpin:
... source_public_key = PublicKey(fpin.read())
>>> with open("target.private_key", "rb") as fpin:
... target = PrivateKey(fpin.read())
>>> dec_box = Box(target, source_public_key)
>>> dec_box.decrypt(result)
b'x marks the spot'
```
The [**PocketProtector**][6] library builds on top of PyNaCl and contains a complete secrets management solution.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/cryptography-python
作者:[Moshe Zadka (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-cloud-safe.png?itok=yj2TFPzq (lock on world map)
[2]: https://cryptography.io/en/latest/
[3]: https://en.wikipedia.org/wiki/UTF-8
[4]: https://pynacl.readthedocs.io/en/stable/
[5]: https://nacl.cr.yp.to/
[6]: https://github.com/SimpleLegal/pocket_protector/blob/master/USER_GUIDE.md

View File

@ -1,84 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to quickly deploy, run Linux applications as unikernels)
[#]: via: (https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to quickly deploy, run Linux applications as unikernels
======
Unikernels are a smaller, faster, and more secure option for deploying applications on cloud infrastructure. With NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding.
![Marco Verch (CC BY 2.0)][1]
Building and deploying lightweight apps is becoming an easier and more reliable process with the emergence of unikernels. While limited in functionality, unikernels offer many advantages in terms of speed and security.
### What are unikernels?
A unikernel is a very specialized single-address-space machine image, similar to the kind of cloud applications that have come to dominate so much of the internet, but considerably smaller and single-purpose. Unikernels are lightweight, providing only the resources needed. They load very quickly and are considerably more secure -- having a very limited attack surface. Any drivers, I/O routines and support libraries that are required are included in the single executable. The resultant virtual image can then be booted and run without anything else being present. And they will often run 10 to 20 times faster than a container.
Would-be attackers cannot drop into a shell and try to gain control because there is no shell. They can't try to grab the system's /etc/passwd or /etc/shadow files because these files don't exist. Creating a unikernel is much like turning your application into its own OS. With a unikernel, the application and the OS become a single entity. You omit what you don't need, thereby removing vulnerabilities and improving performance many times over.
In short, unikernels:
* Provide improved security (e.g., making shell code exploits impossible)
* Have much smaller footprints than standard cloud apps
* Are highly optimized
* Boot extremely quickly
### Are there any downsides to unikernels?
The only serious downside to unikernels is that you have to build them. For many developers, this has been a giant step. Trimming down applications to just what is needed and then producing a tight, smoothly running application can be complex because of the application's low-level nature. In the past, you pretty much had to have been a systems developer or a low level programmer to generate them.
### How is this changing?
Just recently (March 24, 2019) [NanoVMs][3] announced a tool that loads any Linux application as a unikernel. Using NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding. The application will also run faster, more safely and with less cost and overhead.
### What is NanoVMs OPS?
NanoVMs is a unikernel tool for developers. It allows you to run all sorts of enterprise class software yet still have extremely tight control over how it works.
Other benefits associated with OPS include:
* Developers need no prior experience or knowledge to build unikernels.
* The tool can be used to build and run unikernels locally on a laptop.
* No accounts need to be created, and only a single download and one command are required to execute OPS.
An intro to NanoVMs is available on [YouTube][5]. You can also check out the company's [LinkedIn page][6] and read about NanoVMs security [here][7].
Here is some information on how to [get started][8].
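As a rough sketch of that workflow (the install one-liner and the `myapp` binary name here are assumptions; follow the official guide for the exact steps):
```
# Download and install OPS (no account needed)
$ curl https://ops.city/get.sh -sSfL | sh

# Run an ordinary Linux ELF binary as a unikernel
$ ops run myapp
```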
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387299/how-to-quickly-deploy-run-linux-applications-as-unikernels.html#tk.rss_all
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/corn-kernels-100792925-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://nanovms.com/
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[5]: https://www.youtube.com/watch?v=VHWDGhuxHPM
[6]: https://www.linkedin.com/company/nanovms/
[7]: https://nanovms.com/security
[8]: https://nanovms.gitbook.io/ops/getting_started
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -1,182 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Anbox - Easy Way To Run Android Apps On Linux)
[#]: via: (https://www.2daygeek.com/anbox-best-android-emulator-for-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Anbox - Easy Way To Run Android Apps On Linux
======
Android emulator applications allow us to run our favorite Android apps or games directly on a Linux system.
Many Android emulators are available for Linux, and we have covered a few of them in the past.
You can review those by navigating to the following URLs.
* [How To Install Official Android Emulator (SDK) On Linux][1]
* [How To Install GenyMotion (Android Emulator) On Linux][2]
Today we are going to discuss the Anbox Android emulator.
### What Is Anbox?
Anbox stands for Android in a box. Anbox is a container-based approach to boot a full Android system on a regular GNU/Linux system.
It is a new and modern emulator among them.
Anbox places the core Android OS into a container using Linux namespaces, so there is no slowness while accessing the installed applications; it lets you run Android on your Linux system without the overhead of virtualization.
There is no direct access to any hardware from the Android container. All hardware access goes through the Anbox daemon on the host.
Each application opens in a separate window, just like other native system applications, and shows up in the launcher.
### How To Install Anbox In Linux?
The Anbox application is available as a snap package, so make sure you have enabled snap support on your system.
The Anbox package was recently added to the Ubuntu (Cosmic) and Debian (Buster) repositories. If you are running one of these versions, you can easily install it with the official distribution package manager. Otherwise, go with the snap package installation.
Make sure the necessary kernel modules are installed on your system for Anbox to work. For Ubuntu-based users, use the following PPA to install them.
```
$ sudo add-apt-repository ppa:morphis/anbox-support
$ sudo apt update
$ sudo apt install linux-headers-generic anbox-modules-dkms
```
After you have installed the `anbox-modules-dkms` package, you have to manually load the kernel modules, or reboot the system.
```
$ sudo modprobe ashmem_linux
$ sudo modprobe binder_linux
```
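To confirm that both modules are loaded before you continue, you can run a quick check (not part of the original steps):
```
$ lsmod | grep -E 'ashmem_linux|binder_linux'
```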
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install anbox.
```
$ sudo apt install anbox
```
For Arch Linux based systems, the package is available in the AUR repository, so use any of the **[AUR helpers][5]** to install it. I prefer to go with the **[Yay utility][6]**.
```
$ yay -S anbox-git
```
If you have not, you can **[install and configure snaps in Linux][7]** by following that article. Skip this step if you have already installed snaps on your system.
```
$ sudo snap install --devmode --beta anbox
```
### Prerequisites For Anbox
By default, Anbox doesn't ship with the Google Play Store.
Hence, we need to manually download each application (APK) and install it using Android Debug Bridge (ADB).
The ADB tool is readily available in most distributions' repositories, so we can easily install it.
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install ADB.
```
$ sudo apt install android-tools-adb
```
For **`Fedora`** system, use **[DNF Command][8]** to install ADB.
```
$ sudo dnf install android-tools
```
For **`Arch Linux`** based systems, use **[Pacman Command][9]** to install ADB.
```
$ sudo pacman -S android-tools
```
For **`openSUSE Leap`** system, use **[Zypper Command][10]** to install ADB.
```
$ sudo zypper install android-tools
```
### Where To Download The Android Apps?
Since you can't use the Play Store, you have to download APK packages from trusted sites like [APKMirror][11] and then install them manually.
### How To Launch Anbox?
Anbox can be launched from the Dash. This is how the default Anbox looks.
![][13]
### How To Push The Apps Into Anbox?
As mentioned previously, we need to install apps manually. For testing purposes, we are going to install the `YouTube` and `Firefox` apps.
First, you need to start the ADB server. To do so, run the following command.
```
$ adb devices
```
We have already downloaded the `YouTube` and `Firefox` apps, and we will install them now.
**Common Syntax:**
```
$ adb install Name-Of-Your-Application.apk
```
Installing the YouTube and Firefox apps:
```
$ adb install 'com.google.android.youtube_14.13.54-1413542800_minAPI19(x86_64)(nodpi)_apkmirror.com.apk'
Success
$ adb install 'org.mozilla.focus_9.0-330191219_minAPI21(x86)(nodpi)_apkmirror.com.apk'
Success
```
I have installed `YouTube` and `Firefox` in my Anbox. See the screenshot below.
![][14]
As we said at the beginning of the article, each app opens in a separate window. Here, I'm going to open Firefox and access the **[2daygeek.com][15]** website.
![][16]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/anbox-best-android-emulator-for-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/install-configure-sdk-android-emulator-on-linux/
[2]: https://www.2daygeek.com/install-genymotion-android-emulator-on-ubuntu-debian-fedora-arch-linux/
[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]: https://www.2daygeek.com/category/aur-helper/
[6]: https://www.2daygeek.com/install-yay-yet-another-yogurt-aur-helper-on-arch-linux/
[7]: https://www.2daygeek.com/linux-snap-package-manager-ubuntu/
[8]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[11]: https://www.apkmirror.com/
[13]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-1.jpg
[14]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-2.jpg
[15]: https://www.2daygeek.com/
[16]: https://www.2daygeek.com/wp-content/uploads/2019/04/anbox-best-android-emulator-for-linux-3.jpg

View File

@ -1,263 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a DNS-as-a-service with OpenStack Designate)
[#]: via: (https://opensource.com/article/19/4/getting-started-openstack-designate)
[#]: author: (Amjad Yaseen https://opensource.com/users/ayaseen)
Building a DNS-as-a-service with OpenStack Designate
======
Learn how to install and configure Designate, a multi-tenant
DNS-as-a-service (DNSaaS) for OpenStack.
![Command line prompt][1]
[Designate][2] is a multi-tenant DNS-as-a-service that includes a REST API for domain and record management, a framework for integration with [Neutron][3], and integration support for Bind9.
You would want to consider a DNSaaS for the following:
* A clean REST API for managing zones and records
* Automatic records generated (with OpenStack integration)
* Support for multiple authoritative name servers
* Hosting multiple projects/organizations
![Designate's architecture][4]
This article explains how to manually install and configure the latest release of Designate service on CentOS or Red Hat Enterprise Linux 7 (RHEL 7), but you can use the same configuration on other distributions.
### Install Designate on OpenStack
I have Ansible roles for bind and Designate that demonstrate the setup in my [GitHub repository][5].
This setup presumes the bind service is external to the OpenStack controller node (even though you can install bind locally).
1. Install Designate's packages and bind (on the OpenStack controller):
   ```
   # yum install openstack-designate-* bind bind-utils -y
   ```
2. Create the Designate database and user:
   ```
   MariaDB [(none)]> CREATE DATABASE designate CHARACTER SET utf8 COLLATE utf8_general_ci;
   MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'localhost' IDENTIFIED BY 'rhlab123';
   MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'%' IDENTIFIED BY 'rhlab123';
   ```
Note: Bind packages must be installed on the controller side for Remote Name Daemon Control (RNDC) to function properly.
### Configure bind (DNS server)
1. Generate RNDC files:
   ```
   rndc-confgen -a -k designate -c /etc/rndc.key -r /dev/urandom
   cat <<EOF> /etc/rndc.conf
   include "/etc/rndc.key";
   options {
       default-key "designate";
       default-server {{ DNS_SERVER_IP }};
       default-port 953;
   };
   EOF
   ```
2. Add the following into **named.conf**:
   ```
   include "/etc/rndc.key";
   controls {
       inet {{ DNS_SERVER_IP }} allow { localhost; {{ CONTROLLER_SERVER_IP }}; } keys { "designate"; };
   };
   ```
   In the **options** section, add:
   ```
   options {
       ...
       allow-new-zones yes;
       request-ixfr no;
       listen-on port 53 { any; };
       recursion no;
       allow-query { 127.0.0.1; {{ CONTROLLER_SERVER_IP }}; };
   };
   ```
   Add the right permissions:
   ```
   chown named:named /etc/rndc.key
   chown named:named /etc/rndc.conf
   chmod 600 /etc/rndc.key
   chown -v root:named /etc/named.conf
   chmod g+w /var/named
   # systemctl restart named
   # setsebool named_write_master_zones 1
   ```
3. Push **rndc.key** and **rndc.conf** into the OpenStack controller:
   ```
   # scp -r /etc/rndc* {{ CONTROLLER_SERVER_IP }}:/etc/
   ```
### Create OpenStack Designate service and endpoints
Enter:
```
# openstack user create --domain default --password-prompt designate
# openstack role add --project services --user designate admin
# openstack service create --name designate --description "DNS" dns
# openstack endpoint create --region RegionOne dns public http://{{ CONTROLLER_SERVER_IP }}:9001/
# openstack endpoint create --region RegionOne dns internal http://{{ CONTROLLER_SERVER_IP }}:9001/
# openstack endpoint create --region RegionOne dns admin http://{{ CONTROLLER_SERVER_IP }}:9001/
```
### Configure Designate service
1. Edit **/etc/designate/designate.conf**:
   * In the **[service:api]** section, configure **auth_strategy**:
     ```
     [service:api]
     listen = 0.0.0.0:9001
     auth_strategy = keystone
     api_base_uri = http://{{ CONTROLLER_SERVER_IP }}:9001/
     enable_api_v2 = True
     enabled_extensions_v2 = quotas, reports
     ```
   * In the **[keystone_authtoken]** section, configure the following options:
     ```
     [keystone_authtoken]
     auth_type = password
     username = designate
     password = rhlab123
     project_name = service
     project_domain_name = Default
     user_domain_name = Default
     www_authenticate_uri = http://{{ CONTROLLER_SERVER_IP }}:5000/
     auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000/
     ```
   * In the **[service:worker]** section, enable the worker model:
     ```
     enabled = True
     notify = True
     ```
   * In the **[storage:sqlalchemy]** section, configure database access:
     ```
     [storage:sqlalchemy]
     connection = mysql+pymysql://designate:rhlab123@{{ CONTROLLER_SERVER_IP }}/designate
     ```
   * Populate the Designate database:
     ```
     # su -s /bin/sh -c "designate-manage database sync" designate
     ```
2. Create Designate's **pools.yaml** file (it has the target and bind details):
   * Edit **/etc/designate/pools.yaml**:
     ```
     - name: default
       # The name is immutable. There will be no option to change the name after
       # creation and the only way to change it will be to delete it
       # (and all zones associated with it) and recreate it.
       description: Default Pool
       attributes: {}
       # List out the NS records for zones hosted within this pool
       # This should be a record that is created outside of designate, that
       # points to the public IP of the controller node.
       ns_records:
         - hostname: {{Controller_FQDN}}. # This is mDNS
           priority: 1
       # List out the nameservers for this pool. These are the actual BIND servers.
       # We use these to verify changes have propagated to all nameservers.
       nameservers:
         - host: {{ DNS_SERVER_IP }}
           port: 53
       # List out the targets for this pool. For BIND there will be one
       # entry for each BIND server, as we have to run the rndc command on each server
       targets:
         - type: bind9
           description: BIND9 Server 1
           # List out the designate-mdns servers from which BIND servers should
           # request zone transfers (AXFRs) from.
           # This should be the IP of the controller node.
           # If you have multiple controllers you can add multiple masters
           # by running designate-mdns on them, and adding them here.
           masters:
             - host: {{ CONTROLLER_SERVER_IP }}
               port: 5354
           # BIND Configuration options
           options:
             host: {{ DNS_SERVER_IP }}
             port: 53
             rndc_host: {{ DNS_SERVER_IP }}
             rndc_port: 953
             rndc_key_file: /etc/rndc.key
             rndc_config_file: /etc/rndc.conf
     ```
   * Populate Designate's pools:
     ```
     su -s /bin/sh -c "designate-manage pool update" designate
     ```
3. Start Designate central and API services:
   ```
   systemctl enable --now designate-central designate-api
   ```
4. Verify Designate's services are up:
   ```
   # openstack dns service list
   +--------------+--------+-------+--------------+
   | service_name | status | stats | capabilities |
   +--------------+--------+-------+--------------+
   | central      | UP     | -     | -            |
   | api          | UP     | -     | -            |
   | mdns         | UP     | -     | -            |
   | worker       | UP     | -     | -            |
   | producer     | UP     | -     | -            |
   +--------------+--------+-------+--------------+
   ```
### Configure OpenStack Neutron with external DNS
1. Configure iptables for Designate services:
   ```
   # iptables -I INPUT -p tcp -m multiport --dports 9001 -m comment --comment "designate incoming" -j ACCEPT
   # iptables -I INPUT -p tcp -m multiport --dports 5354 -m comment --comment "Designate mdns incoming" -j ACCEPT
   # iptables -I INPUT -p tcp -m multiport --dports 53 -m comment --comment "bind incoming" -j ACCEPT
   # iptables -I INPUT -p udp -m multiport --dports 53 -m comment --comment "bind/powerdns incoming" -j ACCEPT
   # iptables -I INPUT -p tcp -m multiport --dports 953 -m comment --comment "rndc incoming - bind only" -j ACCEPT
   # service iptables save; service iptables restart
   # setsebool named_write_master_zones 1
   ```
2. Edit the **[default]** section of **/etc/neutron/neutron.conf**:
   ```
   external_dns_driver = designate
   ```
3. Add the **[designate]** section in **/etc/neutron/neutron.conf**:
   ```
   [designate]
   url = http://{{ CONTROLLER_SERVER_IP }}:9001/v2 ## This is the Designate endpoint
   auth_type = password
   auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000
   username = designate
   password = rhlab123
   project_name = services
   project_domain_name = Default
   user_domain_name = Default
   allow_reverse_dns_lookup = True
   ipv4_ptr_zone_prefix_size = 24
   ipv6_ptr_zone_prefix_size = 116
   ```
4. Edit **dns_domain** in **neutron.conf**, then restart the Neutron services:
   ```
   dns_domain = rhlab.dev.
   # systemctl restart neutron-*
   ```
5. Add **dns** to the list of Modular Layer 2 (ML2) drivers in **/etc/neutron/plugins/ml2/ml2_conf.ini**:
   ```
   extension_drivers=port_security,qos,dns
   ```
6. Add a **zone** in Designate:
   ```
   # openstack zone create --email admin@rhlab.dev rhlab.dev.
   ```
   Add a new record in zone **rhlab.dev**:
   ```
   # openstack recordset create --record '192.168.1.230' --type A rhlab.dev. Test
   ```
Designate should now be installed and configured.
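As a quick end-to-end check (a sketch, not part of the original setup), you can query the BIND server directly for the record created above; it should return the address added in the last step:
```
$ dig +short @{{ DNS_SERVER_IP }} Test.rhlab.dev. A
192.168.1.230
```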
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/getting-started-openstack-designate
作者:[Amjad Yaseen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ayaseen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://docs.openstack.org/designate/latest/
[3]: /article/19/3/openstack-neutron
[4]: https://opensource.com/sites/default/files/uploads/openstack_designate_architecture.png (Designate's architecture)
[5]: https://github.com/ayaseen/designate

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (FSSlc)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,117 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with social media sentiment analysis in Python)
[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python)
[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)
Getting started with social media sentiment analysis in Python
======
Learn the basics of natural language processing and explore two useful
Python packages.
![Raspberry Pi and Python][1]
Natural language processing (NLP) is a type of machine learning that addresses the correlation between spoken/written languages and computer-aided analysis of those languages. We experience numerous innovations from NLP in our daily lives, from writing assistance and suggestions to real-time speech translation and interpretation.
This article examines one specific area of NLP: sentiment analysis, with an emphasis on determining the positive, negative, or neutral nature of the input language. This part will explain the background behind NLP and sentiment analysis and explore two open source Python packages. [Part 2][2] will demonstrate how to begin building your own scalable sentiment analysis services.
When learning sentiment analysis, it is helpful to have an understanding of NLP in general. This article won't dig into the mathematical guts, rather our goal is to clarify key concepts in NLP that are crucial to incorporating these methods into your solutions in practical ways.
### Natural language and text data
A reasonable place to begin is defining: "What is natural language?" It is the means by which we, as humans, communicate with one another. The primary modalities for communication are verbal and text. We can take this a step further and focus solely on text communication; after all, living in an age of pervasive Siri, Alexa, etc., we know speech is a group of computations away from text.
### Data landscape and challenges
Limiting ourselves to textual data, what can we say about language and text? First, language, particularly English, is fraught with exceptions to rules, plurality of meanings, and contextual differences that can confuse even a human interpreter, let alone a computational one. In elementary school, we learn articles of speech and punctuation, and from speaking our native language, we acquire intuition about which words have less significance when searching for meaning. Examples of the latter would be articles of speech such as "a," "the," and "or," which in NLP are referred to as _stop words_ , since traditionally an NLP algorithm's search for meaning stops when reaching one of these words in a sequence.
Since our goal is to automate the classification of text as belonging to a sentiment class, we need a way to work with text data in a computational fashion. Therefore, we must consider how to represent text data to a machine. As we know, the rules for utilizing and interpreting language are complicated, and the size and structure of input text can vary greatly. We'll need to transform the text data into numeric data, the form of choice for machines and math. This transformation falls under the area of _feature extraction_.
Upon extracting numeric representations of input text data, one refinement might be, given an input body of text, to determine a set of quantitative statistics for the articles of speech listed above and perhaps classify documents based on them. For example, a glut of adverbs might make a copywriter bristle, or excessive use of stop words might be helpful in identifying term papers with content padding. Admittedly, this may not have much bearing on our goal of sentiment analysis.
### Bag of words
When you assess a text statement as positive or negative, what are some contextual clues you use to assess its polarity (i.e., whether the text has positive, negative, or neutral sentiment)? One way is connotative adjectives: something called "disgusting" is viewed as negative, but if the same thing were called "beautiful," you would judge it as positive. Colloquialisms, by definition, give a sense of familiarity and often positivity, whereas curse words could be a sign of hostility. Text data can also include emojis, which carry inherent sentiments.
Understanding the polarity influence of individual words provides a basis for the [_bag-of-words_][3] (BoW) model of text. It considers a set of words or vocabulary and extracts measures about the presence of those words in the input text. The vocabulary is formed by considering text where the polarity is known, referred to as _labeled training data_. Features are extracted from this set of labeled data, then the relationships between the features are analyzed and labels are associated with the data.
The name "bag of words" illustrates what it utilizes: namely, individual words without consideration of spatial locality or context. A vocabulary typically is built from all words appearing in the training set, which tends to be pruned afterward. Stop words, if not cleaned prior to training, are removed due to their high frequency and low contextual utility. Rarely used words can also be removed, given the lack of information they provide for general input cases.
It is important to note, however, that you can (and should) go further and consider the appearance of words beyond their use in an individual instance of training data, or what is called [_term frequency_][4] (TF). You should also consider the counts of a word through all instances of input data; typically the infrequency of words among all documents is notable, which is called the [_inverse document frequency_][5] (IDF). These metrics are bound to be mentioned in other articles and software packages on this subject, so having an awareness of them can only help.
BoW is useful in a number of document classification applications; however, in the case of sentiment analysis, things can be gamed when the lack of contextual awareness is leveraged. Consider the following sentences:
* We are not enjoying this war.
* I loathe rainy days, good thing today is sunny.
* This is not a matter of life and death.
The sentiment of these phrases is questionable for human interpreters, and by strictly focusing on instances of individual vocabulary words, it's difficult for a machine interpreter as well.
Groupings of words, called _n-grams_ , can also be considered in NLP. A bigram considers groups of two adjacent words instead of (or in addition to) the single BoW. This should alleviate situations such as "not enjoying" above, but it will remain open to gaming due to its loss of contextual awareness. Furthermore, in the second sentence above, the sentiment context of the second half of the sentence could be perceived as negating the first half. Thus, spatial locality of contextual clues also can be lost in this approach. Complicating matters from a pragmatic perspective is the sparsity of features extracted from a given input text. For a thorough and large vocabulary, a count is maintained for each word, which can be considered an integer vector. Most documents will have a large number of zero counts in their vectors, which adds unnecessary space and time complexity to operations. While a number of clever approaches have been proposed for reducing this complexity, it remains an issue.
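To make the count-vector idea concrete, here is a minimal bag-of-words sketch using scikit-learn's `CountVectorizer` (scikit-learn is not discussed in this article; it is used here only to illustrate the sparse integer vectors described above):
```
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "We are not enjoying this war.",
    "I loathe rainy days, good thing today is sunny.",
]

vectorizer = CountVectorizer()          # vocabulary is built from the input text
bow = vectorizer.fit_transform(docs)    # sparse matrix of per-document word counts

print(sorted(vectorizer.vocabulary_))   # the extracted vocabulary
print(bow.toarray())                    # note how many entries are zero
```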
### Word embeddings
Word embeddings are a distributed representation that allows words with a similar meaning to have a similar representation. This is based on using a real-valued vector to represent words in connection with the company they keep, as it were. The focus is on the manner that words are used, as opposed to simply their existence. In addition, a huge pragmatic benefit of word embeddings is their focus on dense vectors; by moving away from a word-counting model with commensurate amounts of zero-valued vector elements, word embeddings provide a more efficient computational paradigm with respect to both time and storage.
Following are two prominent word embedding approaches.
#### Word2vec
The first of these word embeddings, [Word2vec][6], was developed at Google. You'll probably see this embedding method mentioned as you go deeper in your study of NLP and sentiment analysis. It utilizes either a _continuous bag of words_ (CBOW) or a _continuous skip-gram_ model. In CBOW, a word's context is learned during training based on the words surrounding it. Continuous skip-gram learns the words that tend to surround a given word. Although this is more than what you'll probably need to tackle, if you're ever faced with having to generate your own word embeddings, the author of Word2vec advocates the CBOW method for speed and assessment of frequent words, while the skip-gram approach is better suited for embeddings where rare words are more important.
#### GloVe
The second word embedding, [_Global Vectors for Word Representation_][7] (GloVe), was developed at Stanford. It's an extension to the Word2vec method that attempts to combine the information gained through classical global text statistical feature extraction with the local contextual information determined by Word2vec. In practice, GloVe has outperformed Word2vec for some applications, while falling short of Word2vec's performance in others. Ultimately, the targeted dataset for your word embedding will dictate which method is optimal; as such, it's good to know the existence and high-level mechanics of each, as you'll likely come across them.
#### Creating and using word embeddings
Finally, it's useful to know how to obtain word embeddings; in part 2, you'll see that we are standing on the shoulders of giants, as it were, by leveraging the substantial work of others in the community. This is one method of acquiring a word embedding: namely, using an existing trained and proven model. Indeed, myriad models exist for English and other languages, and it's possible that one does what your application needs out of the box!
If not, the opposite end of the spectrum in terms of development effort is training your own standalone model without consideration of your application. In essence, you would acquire substantial amounts of labeled training data and likely use one of the approaches above to train a model. Even then, you are still only at the point of acquiring understanding of your input-text data; you then need to develop a model specific for your application (e.g., analyzing sentiment valence in software version-control messages) which, in turn, requires its own time and effort.
You also could train a word embedding on data specific to your application; while this could reduce time and effort, the word embedding would be application-specific, which would reduce reusability.
### Available tooling options
You may wonder how you'll ever get to a point of having a solution for your problem, given the intensive time and computing power needed. Indeed, the complexities of developing solid models can be daunting; however, there is good news: there are already many proven models, tools, and software libraries available that may provide much of what you need. We will focus on [Python][8], which conveniently has a plethora of tooling in place for these applications.
#### SpaCy
[SpaCy][9] provides a number of language models for parsing input text data and extracting features. It is highly optimized and touted as the fastest library of its kind. Best of all, it's open source! SpaCy performs tokenization, parts-of-speech classification, and dependency annotation. It contains word embedding models for performing this and other feature extraction operations for over 46 languages. You will see how it can be used for text analysis and feature extraction in the second article in this series.
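As a small taste of what that looks like, here is a sketch of tokenization and part-of-speech tagging (the `en_core_web_sm` model name is an assumption and must be downloaded separately):
```
import spacy

# One-time model download: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("I loathe rainy days, good thing today is sunny.")
for token in doc:
    print(token.text, token.pos_)   # each token and its part-of-speech tag
```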
#### vaderSentiment
The [vaderSentiment][10] package provides a measure of positive, negative, and neutral sentiment. As the [original paper][11]'s title ("VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text") indicates, the models were developed and tuned specifically for social media text data. VADER was trained on a thorough set of human-labeled data, which included common emoticons, UTF-8 encoded emojis, and colloquial terms and abbreviations (e.g., meh, lol, sux).
For given input text data, vaderSentiment returns a 3-tuple of polarity score percentages. It also provides a single scoring measure, referred to as _vaderSentiment's compound metric_. This is a real-valued measurement within the range **[-1, 1]** wherein sentiment is considered positive for values greater than **0.05** , negative for values less than **-0.05** , and neutral otherwise.
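A minimal sketch of using it (the exact score values will depend on the package version):
```
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("I loathe rainy days, good thing today is sunny.")
print(scores)  # dict with 'neg', 'neu', 'pos', and 'compound' keys
```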
In [part 2][2], you will learn how to use these tools to add sentiment analysis capabilities to your designs.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python
作者:[Michael McCune ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/elmiko/users/jschlessman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Raspberry Pi and Python)
[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-2
[3]: https://en.wikipedia.org/wiki/Bag-of-words_model
[4]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Term_frequency
[5]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Inverse_document_frequency
[6]: https://en.wikipedia.org/wiki/Word2vec
[7]: https://en.wikipedia.org/wiki/GloVe_(machine_learning)
[8]: https://www.python.org/
[9]: https://pypi.org/project/spacy/
[10]: https://pypi.org/project/vaderSentiment/
[11]: http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf

View File

@ -1,148 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (2 new apps for music tweakers on Fedora Workstation)
[#]: via: (https://fedoramagazine.org/2-new-apps-for-music-tweakers-on-fedora-workstation/)
[#]: author: (Justin W. Flory https://fedoramagazine.org/author/jflory7/)
2 new apps for music tweakers on Fedora Workstation
======
![][1]
Linux operating systems are great for making unique customizations and tweaks to make your computer work better for you. For example, the [i3 window manager][2] encourages users to think about the different components and pieces that make up the modern Linux desktop.
Fedora has two new packages of interest for music tweakers: **mpris-scrobbler** and **playerctl**. _mpris-scrobbler_ [tracks your music listening history][3] on a music-tracking service like Last.fm and/or ListenBrainz. _playerctl_ is a command-line [music player controller][4].
## _mpris-scrobbler_ records your music listening trends
_mpris-scrobbler_ is a CLI application to submit play history of your music to a service like [Last.fm][5], [Libre.fm][6], or [ListenBrainz][7]. It listens on the [MPRIS D-Bus interface][8] to detect what's playing. It connects with several different music clients like spotify-client, [vlc][9], audacious, bmp, [cmus][10], and others.
![Last.fm last week in music report. Generated from user-submitted listening history.][11]
### Install and configure _mpris-scrobbler_
_mpris-scrobbler_ is available for Fedora 28 or later, as well as the EPEL 7 repositories. Run the following command in a terminal to install it:
```
sudo dnf install mpris-scrobbler
```
Once it is installed, use _systemctl_ to start and enable the service. The following command starts _mpris-scrobbler_ and always starts it after a system reboot:
```
systemctl --user enable --now mpris-scrobbler.service
```
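To verify that the service is up and running, you can ask systemd for its status:
```
systemctl --user status mpris-scrobbler.service
```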
### Submit plays to ListenBrainz
This article explains how to link _mpris-scrobbler_ with a ListenBrainz account. To use Last.fm or Libre.fm, see the [upstream documentation][12].
To submit plays to a ListenBrainz server, you need a ListenBrainz API token. If you have an account, get the token from your [profile settings page][13]. When you have a token, run this command to authenticate with your ListenBrainz API token:
```
$ mpris-scrobbler-signon token listenbrainz
Token for listenbrainz.org:
```
Finally, test it by playing a song in your preferred music client on Fedora. The songs you play appear on your ListenBrainz profile.
![Basic statistics and play history from a user profile on ListenBrainz. The current track is playing on a Fedora Workstation laptop with mpris-scrobbler.][14]
## _playerctl_ controls your music playback
_playerctl_ is a CLI tool to control any music player implementing the MPRIS D-Bus interface. You can easily bind it to keyboard shortcuts or media hotkeys. Here's how to install it, use it in the command line, and create key bindings for the i3 window manager.
### Install and use _playerctl_
_playerctl_ is available for Fedora 28 or later. Run the following command in a terminal to install it:
```
sudo dnf install playerctl
```
Now that it's installed, you can use it right away. Open your preferred music player on Fedora. Next, try the following commands to control playback from a terminal.
To play or pause the currently playing track:
```
playerctl play-pause
```
If you want to skip to the next track:
```
playerctl next
```
For a list of all running players:
```
playerctl -l
```
To play or pause what's currently playing, only on the spotify-client app:
```
playerctl -p spotify play-pause
```
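_playerctl_ can also report what is currently playing. For example, the following subcommands (not covered in the excerpt above) print the artist and title of the current track:
```
playerctl metadata artist
playerctl metadata title
```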
### Create _playerctl_ key bindings in i3wm
Do you use a window manager like the [i3 window manager][2]? Try using _playerctl_ for key bindings. You can bind different commands to different key shortcuts, like the play/pause buttons on your keyboard. Look at the following [i3wm config excerpt][15] to see how:
```
# Media player controls
bindsym XF86AudioPlay exec "playerctl play-pause"
bindsym XF86AudioNext exec "playerctl next"
bindsym XF86AudioPrev exec "playerctl previous"
```
## Try it out with your favorite music players
Need to know more about customizing the music listening experience on Fedora? The Fedora Magazine has you covered. Check out these five cool music players on Fedora:
> [5 cool music player apps][16]
Bring order to your music library chaos by sorting and organizing it with MusicBrainz Picard:
> [Picard brings order to your music library][17]
* * *
_Photo by _[ _Frank Septillion_][18]_ on _[_Unsplash_][19]_._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/2-new-apps-for-music-tweakers-on-fedora-workstation/
作者:[Justin W. Flory][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/jflory7/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/2-music-tweak-apps-816x345.jpg
[2]: https://fedoramagazine.org/getting-started-i3-window-manager/
[3]: https://github.com/mariusor/mpris-scrobbler
[4]: https://github.com/acrisci/playerctl
[5]: https://www.last.fm/
[6]: https://libre.fm/
[7]: https://listenbrainz.org/
[8]: https://specifications.freedesktop.org/mpris-spec/latest/
[9]: https://www.videolan.org/vlc/
[10]: https://cmus.github.io/
[11]: https://fedoramagazine.org/wp-content/uploads/2019/02/Screenshot_2019-04-13-jflory7%E2%80%99s-week-in-music2-1024x500.png
[12]: https://github.com/mariusor/mpris-scrobbler#authenticate-to-the-service
[13]: https://listenbrainz.org/profile/
[14]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot_2019-04-13-User-jflory-ListenBrainz.png
[15]: https://github.com/jwflory/swiss-army/blob/ba6ac0c71855e33e3caa1ee1fe51c05d2df0529d/roles/apps/i3wm/files/config#L207-L210
[16]: https://fedoramagazine.org/5-cool-music-player-apps/
[17]: https://fedoramagazine.org/picard-brings-order-music-library/
[18]: https://unsplash.com/photos/Qrspubmx6kE?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[19]: https://unsplash.com/search/photos/music?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (bodhix)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,99 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Format Python however you like with Black)
[#]: via: (https://opensource.com/article/19/5/python-black)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez/users/moshez)
Format Python however you like with Black
======
Learn more about solving common Python problems in our series covering
seven PyPI libraries.
![OpenStack source code \(Python\) in VIM][1]
Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.
In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. In the first article, we learned about [Cython][4]; today, we'll examine the **[Black][5]** code formatter.
### Black
Sometimes creativity can be a wonderful thing. Sometimes it is just a pain. I enjoy solving hard problems creatively, but I want my Python formatted as consistently as possible. Nobody has ever been impressed by code that uses "interesting" indentation.
But even worse than inconsistent formatting is a code review that consists of nothing but formatting nits. It is annoying to the reviewer—and even more annoying to the person whose code is reviewed. It's also infuriating when your linter tells you that your code is indented incorrectly, but gives no hint about the _correct_ amount of indentation.
Enter Black. Instead of telling you _what_ to do, Black is a good, industrious robot: it will fix your code for you.
To see how it works, feel free to write something beautifully inconsistent like:
```
def add(a, b): return a+b
def mult(a, b):
    return \
        a * b
```
Does Black complain? Goodness no, it just fixes it for you!
```
$ black math
reformatted math
All done! ✨ 🍰 ✨
1 file reformatted.
$ cat math
def add(a, b):
    return a + b


def mult(a, b):
    return a * b
```
Black does offer the option of failing instead of fixing and even outputting a **diff**-style edit. These options are great in a continuous integration (CI) system that enforces running Black locally. In addition, if the **diff** output is logged to the CI output, you can directly paste it into **patch** in the rare case that you need to fix your output but cannot install Black locally.
```
$ black --check --diff math
--- math 2019-04-09 17:24:22.747815 +0000
+++ math 2019-04-09 17:26:04.269451 +0000
@@ -1,7 +1,7 @@
-def add(a, b): return a + b
+def add(a, b):
+    return a + b
 def mult(a, b):
-    return \
-        a * b
+    return a * b
would reformat math
All done! 💥 💔 💥
1 file would be reformatted.
$ echo $?
1
```
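For example, a CI step could be as simple as the following sketch, where `src/` stands in for your project's source directory; the nonzero exit status fails the build whenever a file would be reformatted:
```
$ pip install black
$ black --check --diff src/
```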
In the next article in this series, we'll look at **attrs**, a library that helps you write concise, correct code quickly.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/python-black
作者:[Moshe Zadka ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez/users/moshez/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_1.jpg?itok=lHQK5zpm (OpenStack source code (Python) in VIM)
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
[3]: https://pypi.org/
[4]: https://opensource.com/article/19/4/7-python-problems-solved-cython
[5]: https://pypi.org/project/black/

View File

@ -1,107 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Say goodbye to boilerplate in Python with attrs)
[#]: via: (https://opensource.com/article/19/5/python-attrs)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez)
Say goodbye to boilerplate in Python with attrs
======
Learn more about solving common Python problems in our series covering
seven PyPI libraries.
![Programming at a browser, orange hands][1]
Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.
In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. Today, we'll examine [**attrs**][4], a Python package that helps you write concise, correct code quickly.
### attrs
If you have been using Python for any length of time, you are probably used to writing code like:
```
class Book(object):
    def __init__(self, isbn, name, author):
        self.isbn = isbn
        self.name = name
        self.author = author
```
Then you write a **__repr__** function; otherwise, it would be hard to log instances of **Book**:
```
    def __repr__(self):
        return f"Book({self.isbn}, {self.name}, {self.author})"
```
Next, you write a nice docstring documenting the expected types. But you notice you forgot to add the **edition** and **published_year** attributes, so you have to modify them in five places.
What if you didn't have to?
```
@attr.s(auto_attribs=True)
class Book(object):
    isbn: str
    name: str
    author: str
    published_year: int
    edition: int
```
When you annotate the attributes with types using the new type annotation syntax, **attrs** detects the annotations and creates the class for you.
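To see what you get for free, here is a minimal sketch (the book values are made up for illustration): instantiating the class uses the generated initializer, and printing an instance uses the generated representation:
```
import attr


@attr.s(auto_attribs=True)
class Book(object):
    isbn: str
    name: str
    author: str
    published_year: int
    edition: int


book = Book("978-0-13-235088-4", "Clean Code", "Robert C. Martin", 2008, 1)
print(book)
# Book(isbn='978-0-13-235088-4', name='Clean Code',
#      author='Robert C. Martin', published_year=2008, edition=1)
```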
ISBNs have a specific format. What if we want to enforce that format?
```
@attr.s(auto_attribs=True)
class Book(object):
    isbn: str = attr.ib()

    @isbn.validator
    def pattern_match(self, attribute, value):
        m = re.match(r"^(\d{3}-)\d{1,3}-\d{2,3}-\d{1,7}-\d$", value)
        if not m:
            raise ValueError("incorrect format for isbn", value)

    name: str
    author: str
    published_year: int
    edition: int
```
The **attrs** library also has great support for [immutability-style programming][5]. Changing the first line to **@attr.s(auto_attribs=True, frozen=True)** means that **Book** is now immutable: trying to modify an attribute will raise an exception. Instead, we can get a _new_ instance with modification using **attr.evolve(old_book, published_year=old_book.published_year+1)**, for example, if we need to push publication forward by a year.
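Here is a quick sketch of that frozen/evolve workflow (the sample values are illustrative):
```
import attr


@attr.s(auto_attribs=True, frozen=True)
class Book(object):
    name: str
    published_year: int


book = Book("Example Title", 2018)
new_book = attr.evolve(book, published_year=book.published_year + 1)
print(new_book.published_year)  # 2019

# Assigning to an attribute of a frozen instance raises
# attr.exceptions.FrozenInstanceError:
# book.published_year = 2020
```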
In the next article in this series, we'll look at **singledispatch**, a library that allows you to add methods to Python libraries retroactively.
#### Review the previous articles in this series
* [Cython][6]
* [Black][7]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/python-attrs
作者:[Moshe Zadka ][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_keyboard_orange_hands.png?itok=G6tJ_64Y (Programming at a browser, orange hands)
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
[3]: https://pypi.org/
[4]: https://pypi.org/project/attrs/
[5]: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures
[6]: https://opensource.com/article/19/4/7-python-problems-solved-cython
[7]: https://opensource.com/article/19/4/python-problems-solved-black

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (Moelf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,209 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Create SSH Alias In Linux)
[#]: via: (https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
How To Create SSH Alias In Linux
======
![How To Create SSH Alias In Linux][1]
If you frequently access a lot of different remote systems via SSH, this trick will save you some time. You can create SSH aliases for frequently-accessed systems, so you need not remember all the different usernames, hostnames, SSH port numbers, and IP addresses. Additionally, it avoids the need to repetitively type the same username/hostname, IP address, and port number whenever you SSH into a Linux server.
### Create SSH Alias In Linux
Before I knew this trick, I usually connected to a remote system over SSH using one of the following ways.
Using IP address:
```
$ ssh 192.168.225.22
```
Or using port number, username and IP address:
```
$ ssh -p 22 sk@192.168.225.22
```
Or using port number, username and hostname:
```
$ ssh -p 22 sk@server.example.com
```
Here,
* **22** is the port number,
* **sk** is the username of the remote system,
* **192.168.225.22** is the IP of my remote system,
* **server.example.com** is the hostname of remote system.
I believe most newbie Linux users and/or admins SSH into a remote system this way. However, if you SSH into multiple different systems, remembering all the hostnames/IP addresses and usernames is a bit difficult, unless you write them down on paper or save them in a text file. No worries! This can be easily solved by creating an alias (or shortcut) for SSH connections.
We can create aliases for SSH commands using two methods.
##### Method 1 - Using SSH Config File
This is my preferred way of creating aliases.
We can use the SSH default configuration file to create an SSH alias. To do so, edit the **~/.ssh/config** file (if this file doesn't exist, just create one):
```
$ vi ~/.ssh/config
```
Add all of your remote hosts details like below:
```
Host webserver
    HostName 192.168.225.22
    User sk

Host dns
    HostName server.example.com
    User root

Host dhcp
    HostName 192.168.225.25
    User ostechnix
    Port 2233
```
![][2]
Create SSH Alias In Linux Using SSH Config File
Replace the values of **Host**, **Hostname**, **User** and **Port** with your own. Once you have added the details of all remote hosts, save and exit the file.
Now you can SSH into the systems with commands:
```
$ ssh webserver
$ ssh dns
$ ssh dhcp
```
It is as simple as that.
Have a look at the following screenshot.
![][3]
Access remote system using SSH alias
See? I only used the alias name (i.e., **webserver**) to access my remote system that has the IP address **192.168.225.22**.
Please note that this applies to the current user only. If you want to make the aliases available for all users (system-wide), add the above lines to the **/etc/ssh/ssh_config** file.
You can also add plenty of other things in the SSH config file. For example, if you have [**configured SSH Key-based authentication**][4], mention the SSH keyfile location as below.
```
Host ubuntu
    HostName 192.168.225.50
    User senthil
    IdentityFile ~/.ssh/id_rsa_remotesystem
```
Make sure you have replaced the hostname, username and SSH keyfile path with your own.
Now connect to the remote server with command:
```
$ ssh ubuntu
```
This way you can add as many remote hosts as you want to access over SSH and quickly access them using their alias names.
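You can also set defaults that apply to every host with a wildcard **Host** entry. The following sketch is not from the hosts above; it uses standard ssh_config options to set a default username and keep idle sessions alive:
```
Host *
    User sk
    ServerAliveInterval 60
    ServerAliveCountMax 3
```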
##### Method 2 - Using Bash aliases
This is a quick and dirty way to create SSH aliases for faster access. You can use the [**alias command**][5] to make this task much easier.
Open your **~/.bashrc** or **~/.bash_profile** file:
Add aliases for each SSH connection, one by one, like below.
```
alias webserver='ssh sk@192.168.225.22'
alias dns='ssh root@server.example.com'
alias dhcp='ssh ostechnix@192.168.225.25 -p 2233'
alias ubuntu='ssh senthil@192.168.225.50 -i ~/.ssh/id_rsa_remotesystem'
```
Again, make sure you have replaced the host, hostname, port number and IP address with your own. Save the file and exit.
Then, apply the changes using command:
```
$ source ~/.bashrc
```
Or,
```
$ source ~/.bash_profile
```
In this method, you don't even need to use the “ssh alias-name” syntax. Instead, just use the alias name on its own, like below.
```
$ webserver
$ dns
$ dhcp
$ ubuntu
```
![][6]
These two methods are very simple, yet useful and much more convenient for those who often SSH into multiple different systems. Use whichever of the aforementioned methods suits you to quickly access your remote Linux systems over SSH.
* * *
**Suggested read:**
* [**Allow Or Deny SSH Access To A Particular User Or Group In Linux**][7]
* [**How To SSH Into A Particular Directory On Linux**][8]
* [**How To Stop SSH Session From Disconnecting In Linux**][9]
* [**4 Ways To Keep A Command Running After You Log Out Of The SSH Session**][10]
* [**SSLH Share A Same Port For HTTPS And SSH**][11]
* * *
And, that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/ssh-alias-720x340.png
[2]: http://www.ostechnix.com/wp-content/uploads/2019/04/Create-SSH-Alias-In-Linux.png
[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias.png
[4]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
[5]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias-1.png
[7]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
[8]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
[9]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
[11]: https://www.ostechnix.com/sslh-share-port-https-ssh/

View File

@ -1,156 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Kindd A Graphical Frontend To dd Command)
[#]: via: (https://www.ostechnix.com/kindd-a-graphical-frontend-to-dd-command/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Kindd - A Graphical Frontend To dd Command
======
![Kindd - A Graphical Frontend To dd Command][1]
A while ago we learned how to [**create bootable ISO using dd command**][2] in Unix-like systems. Please keep in mind that dd is one of the most dangerous and destructive commands. If you're not sure what you are actually doing, you might accidentally wipe your hard drive in minutes. The dd command just takes bytes from **if** and writes them to **of**. It won't care what it's overwriting, and it won't care if there's a partition table in the way, or a boot sector, or a home folder, or anything important. It will simply do what it is told to do. If you're a beginner, it is best to avoid using the dd command for such tasks. Thankfully, there is a simple GUI utility for the dd command. Say hello to **“Kindd”**, a graphical frontend to the dd command. It is a free, open source tool written in **Qt Quick**. This tool can be very helpful for beginners and for those who are not comfortable with the command line in general.
The developer created this tool mainly to provide:
1. a modern, simple and safe graphical user interface for dd command,
2. a graphical way to easily create bootable device without having to use Terminal.
### Installing Kindd
Kindd is available in the [**AUR**][3]. So if you're an Arch user, install it using any AUR helper tool, for example [**Yay**][4].
To install Git version, run:
```
$ yay -S kindd-git
```
To install release version, run:
```
$ yay -S kindd
```
After installing, launch Kindd from the Menu or Application launcher.
For other distributions, you need to manually compile and install it from source as shown below.
Make sure you have installed the following prerequisites.
* git
* coreutils
* polkit
* qt5-base
* qt5-quickcontrols
* qt5-quickcontrols2
* qt5-graphicaleffects
Once all prerequisites are installed, git clone the Kindd repository:
```
git clone https://github.com/LinArcX/Kindd/
```
Go to the directory where you just cloned Kindd, then compile and install it:
```
cd Kindd
qmake
make
```
Finally, run the following command to launch the Kindd application:
```
./kindd
```
Kindd uses **pkexec** internally. The pkexec agent is installed by default in most desktop environments. But if you use **i3** (or maybe some other DE), you should install **polkit-gnome** first, and then paste the following line into your i3 config file:
```
exec /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1 &
```
### Create bootable ISO using Kindd
To create a bootable USB from an ISO, plug in the USB drive. Then, launch Kindd either from the Menu or Terminal.
This is what the Kindd default interface looks like:
![][5]
Kindd interface
As you can see, the Kindd interface is very simple and self-explanatory. There are just two sections, namely **List Devices**, which displays the list of available devices (HDD and USB) on your system, and **Create Bootable .iso**. You will be in the “Create Bootable .iso” section by default.
Enter the block size in the first column, select the path of the ISO file in the second column, and choose the correct device (USB drive path) in the third column. Click the **Convert/Copy** button to start creating the bootable ISO.
![][6]
Once the process is completed, you will see a success message.
![][7]
Now, unplug the USB drive and boot your system from the USB to check if it really works.
If you don't know the actual device name (target path), just click on **List Devices** and check the USB drive name.
![][8]
* * *
**Related read:**
* [**Etcher A Beautiful App To Create Bootable SD Cards Or USB Drives**][9]
* [**Bootiso Lets You Safely Create Bootable USB Drive**][10]
* * *
Kindd is in its early development stage, so there may be bugs. If you find any, please report them on its GitHub page, linked at the end of this guide.
And, that's all. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
**Resource:**
* [**Kindd GitHub Repository**][11]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/kindd-a-graphical-frontend-to-dd-command/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/kindd-720x340.png
[2]: https://www.ostechnix.com/how-to-create-bootable-usb-drive-using-dd-command/
[3]: https://aur.archlinux.org/packages/kindd-git/
[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-interface.png
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-1.png
[7]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-2.png
[8]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-3.png
[9]: https://www.ostechnix.com/etcher-beauitiful-app-create-bootable-sd-cards-usb-drives/
[10]: https://www.ostechnix.com/bootiso-lets-you-safely-create-bootable-usb-drive/
[11]: https://github.com/LinArcX/Kindd

View File

@ -1,89 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ping Multiple Servers And Show The Output In Top-like Text UI)
[#]: via: (https://www.ostechnix.com/ping-multiple-servers-and-show-the-output-in-top-like-text-ui/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
Ping Multiple Servers And Show The Output In Top-like Text UI
======
![Ping Multiple Servers And Show The Output In Top-like Text UI][1]
A while ago, we wrote about the [**“Fping”**][2] utility, which enables us to ping multiple hosts at once. Unlike the traditional **“Ping”** utility, Fping doesn't wait for one host's timeout. It uses a round-robin method, meaning it sends an ICMP echo request to one host, then moves to the next host, and finally displays which hosts are up or down. Today, we are going to discuss a similar utility named **“Pingtop”**. As the name says, it pings multiple servers at a time and shows the results in a top-like terminal UI. It is a free and open source command line program written in **Python**.
### Install Pingtop
Pingtop can be installed using Pip, a package manager for installing programs written in Python. Make sure you have installed Python 3.7.x and Pip on your Linux box.
To install Pip on Linux, refer to the following link.
* [**How To Manage Python Packages Using Pip**][3]
Once Pip is installed, run the following command to install Pingtop:
```
$ pip install pingtop
```
Now let us go ahead and ping multiple systems using Pingtop.
### Ping Multiple Servers And Show The Output In Top-like Terminal UI
To ping multiple hosts/systems, run:
```
$ pingtop ostechnix.com google.com facebook.com twitter.com
```
You will now see the result in a nice top-like Terminal UI as shown in the following output.
![][4]
Ping multiple servers using Pingtop
* * *
**Suggested read:**
* [**Some Alternatives To top Command line Utility You Might Want To Know**][5]
* * *
I personally couldn't find many use cases for the Pingtop utility at the moment, but I like the idea of showing the ping command's output in a text user interface. Give it a try and see if it helps.
And, that's all for now. More good stuff to come. Stay tuned!
Cheers!
**Resource:**
* [**Pingtop GitHub Repository**][6]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/ping-multiple-servers-and-show-the-output-in-top-like-text-ui/
作者:[sk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/pingtop-720x340.png
[2]: https://www.ostechnix.com/ping-multiple-hosts-linux/
[3]: https://www.ostechnix.com/manage-python-packages-using-pip/
[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/pingtop-1.gif
[5]: https://www.ostechnix.com/some-alternatives-to-top-command-line-utility-you-might-want-to-know/
[6]: https://github.com/laixintao/pingtop

View File

@ -1,121 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (apt-clone : Backup Installed Packages And Restore Those On Fresh Ubuntu System)
[#]: via: (https://www.2daygeek.com/apt-clone-backup-installed-packages-and-restore-them-on-fresh-ubuntu-system/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
apt-clone: Backup Installed Packages And Restore Those On Fresh Ubuntu System
======
Package installation becomes easier on Ubuntu/Debian based systems when we use the apt-clone utility.
apt-clone will work for you if you want to build a few systems with the same set of packages.
Building and installing the necessary packages manually on each system is a time-consuming process.
This can be achieved in many ways, and there are many utilities available for it in Linux.
We have already written an article about **[Aptik][1]** in the past.
It's one of the utilities that allows Ubuntu users to back up and restore system settings and data.
### What Is apt-clone?
[apt-clone][2] lets you create a backup of all installed packages for your Debian/Ubuntu systems that can be restored on freshly installed systems (or containers) or into a directory.
This backup can be restored on multiple systems with the same operating system version and architecture.
### How To Install apt-clone?
The apt-clone package is available in the official Ubuntu/Debian repositories, so use the **[apt Package Manager][3]** or **[apt-get Package Manager][4]** to install it.
Install the apt-clone package using the apt package manager.
```
$ sudo apt install apt-clone
```
Install the apt-clone package using the apt-get package manager.
```
$ sudo apt-get install apt-clone
```
### How To Backup Installed Packages Using apt-clone?
Once you have successfully installed the apt-clone package, simply give the location where you want to save the backup file.
We are going to save the installed packages backup under the `/backup` directory.
The apt-clone utility will save the installed packages list in the `apt-clone-state-Ubuntu18.2daygeek.com.tar.gz` file.
```
$ sudo apt-clone clone /backup
```
We can verify this by running the ls command.
```
$ ls -lh /backup/
total 32K
-rw-r--r-- 1 root root 29K Apr 20 19:06 apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
```
Run the following command to view the details of the backup file.
```
$ apt-clone info /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
Hostname: Ubuntu18.2daygeek.com
Arch: amd64
Distro: bionic
Meta: libunity-scopes-json-def-desktop, ubuntu-desktop
Installed: 1792 pkgs (194 automatic)
Date: Sat Apr 20 19:06:43 2019
```
As per the above output, the backup file contains 1792 packages in total.
### How To Restore The Backup Which Was Taken Using apt-clone?
You can use any remote copy utility to copy the file to a remote server.
```
$ scp /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz Destination-Server:/opt
```
Once you have copied the file, perform the restore using the apt-clone utility.
Run the following command to restore it.
```
$ sudo apt-clone restore /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
```
Make a note: the restore will override your existing `/etc/apt/sources.list` and will install/remove packages, so be careful.
If you want to restore all the packages into a folder instead of performing an actual restore, you can do it using the following command.
```
$ sudo apt-clone restore /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz --destination /opt/oldubuntu
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/apt-clone-backup-installed-packages-and-restore-them-on-fresh-ubuntu-system/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/aptik-backup-restore-ppas-installed-apps-users-data/
[2]: https://github.com/mvo5/apt-clone
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/

View File

@ -0,0 +1,432 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Check storage performance with dd)
[#]: via: (https://fedoramagazine.org/check-storage-performance-with-dd/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
Check storage performance with dd
======
![][1]
This article includes some example commands to show you how to get a _rough_ estimate of hard drive and RAID array performance using the _dd_ command. Accurate measurements would have to take into account things like [write amplification][2] and [system call overhead][3], which this guide does not. For a tool that might give more accurate results, you might want to consider using [hdparm][4].
To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. **WARNING**: The _write_ tests will destroy any data on the block devices against which they are run. **Do not run them against any device that contains data you want to keep!**
### Four tests
Below are four example dd commands that can be used to test the performance of a block device:
1. One process reading from $MY_DISK:
```
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```
2. One process writing to $MY_DISK:
```
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```
3. Two processes reading concurrently from $MY_DISK:
```
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```
4. Two processes writing concurrently to $MY_DISK:
```
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
```
The _iflag=nocache_ and _oflag=direct_ parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from [RAM][5] rather than the hard drive.
The values for the _bs_ and _count_ parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware.
The _null_ and _zero_ devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests.
The _skip=200_ parameter on the second dd command in the concurrent read test, and the _seek=200_ parameter on the second dd command in the concurrent write test, ensure that the two copies of dd are operating on different areas of the hard drive (_skip_ offsets the input, while _seek_ offsets the output).
### 16 examples
Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices:
1. MY_DISK=/dev/sda2 (used in examples 1-X)
2. MY_DISK=/dev/sdb2 (used in examples 2-X)
3. MY_DISK=/dev/md/stripped (used in examples 3-X)
4. MY_DISK=/dev/md/mirrored (used in examples 4-X)
A video demonstration of these tests being run on a PC is provided at the end of this guide.
Begin by putting your computer into _rescue_ mode to reduce the chances that disk I/O from background services might randomly affect your test results. **WARNING**: This will shut down all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your _root_ password to get into rescue mode. The _passwd_ command, when run as the root user, will prompt you to (re)set your root account password.
```
$ sudo -i
# passwd
# setenforce 0
# systemctl rescue
```
You might also want to temporarily disable logging to disk:
```
# sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf
# systemctl restart systemd-journald.service
```
If you have a swap device, it can be temporarily disabled and used to perform the following tests:
```
# swapoff -a
# MY_DEVS=$(mdadm --detail /dev/md/swap | grep active | grep -o "/dev/sd.*")
# mdadm --stop /dev/md/swap
# mdadm --zero-superblock $MY_DEVS
```
#### Example 1-1 (reading from sda)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s
```
#### Example 1-2 (writing to sda)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s
```
#### Example 1-3 (reading concurrently from sda)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.42875 s, 61.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s
```
#### Example 1-4 (writing concurrently to sda)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.2435 s, 64.7 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s
```
#### Example 2-1 (reading from sdb)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s
```
#### Example 2-2 (writing to sdb)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s
```
#### Example 2-3 (reading concurrently from sdb)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52808 s, 59.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s
```
#### Example 2-4 (writing concurrently to sdb)
```
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.7841 s, 55.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s
```
#### Example 3-1 (reading from RAID0)
```
# mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS
# MY_DISK=/dev/md/stripped
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s
```
#### Example 3-2 (writing to RAID0)
```
# MY_DISK=/dev/md/stripped
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.823648 s, 255 MB/s
```
#### Example 3-3 (reading concurrently from RAID0)
```
# MY_DISK=/dev/md/stripped
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.31025 s, 160 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s
```
#### Example 3-4 (writing concurrently to RAID0)
```
# MY_DISK=/dev/md/stripped
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.65026 s, 127 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s
```
#### Example 4-1 (reading from RAID1)
```
# mdadm --stop /dev/md/stripped
# mdadm --create /dev/md/mirrored --homehost=any --metadata=1.0 --level=1 --raid-devices=2 --assume-clean $MY_DEVS
# MY_DISK=/dev/md/mirrored
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s
```
#### Example 4-2 (writing to RAID1)
```
# MY_DISK=/dev/md/mirrored
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s
```
#### Example 4-3 (reading concurrently from RAID1)
```
# MY_DISK=/dev/md/mirrored
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67171 s, 125 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s
```
#### Example 4-4 (writing concurrently to RAID1)
```
# MY_DISK=/dev/md/mirrored
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
```
```
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.09666 s, 51.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s
```
#### Restore your swap device and journald configuration
```
# mdadm --stop /dev/md/stripped /dev/md/mirrored
# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 $MY_DEVS
# mkswap /dev/md/swap
# swapon -a
# mv /etc/systemd/journald.conf.bak /etc/systemd/journald.conf
# systemctl restart systemd-journald.service
# reboot
```
### Interpreting the results
Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s.
Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets about half the drive's bandwidth (60 MB/s).
The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data striping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive is able to perform on its own. The trade-off is that you are twice as likely to lose everything, because each drive only contains half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal), but it would be thrice as likely to suffer a [catastrophic failure][6].
The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk, except when multiple processes are reading concurrently (example 4-3). In that case, the performance of the RAID1 array is similar to that of the RAID0 array. This means that you will see a performance benefit with RAID1, but only when processes are reading concurrently, for example, when a process accesses a large number of files in the background while you use a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost [if a drive fails][7].
### Video demo
Testing storage throughput using dd
### Troubleshooting
If the above tests aren't performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in Self-Monitoring, Analysis and Reporting Technology ([SMART][8]). If your drive supports it, the _smartctl_ command can be used to query your hard drive for its internal statistics:
```
# smartctl --health /dev/sda
# smartctl --log=error /dev/sda
# smartctl -x /dev/sda
```
Another way that you might be able to tune your PC for better performance is by changing your [I/O scheduler][9]. Linux systems support several I/O schedulers and the current default for Fedora systems is the [multiqueue][10] variant of the [deadline][11] scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions.
To view which I/O scheduler your drives are using, issue the following command:
```
$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done
```
You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block/<device name>/queue/scheduler file:
```
# echo bfq > /sys/block/sda/queue/scheduler
```
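Reading the file back confirms the change; the currently active scheduler is shown in square brackets:
```
$ cat /sys/block/sda/queue/scheduler
```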
You can make your changes permanent by creating a [udev rule][12] for your drive. The following example shows how to create a udev rule that will set all [rotational drives][13] to use the [BFQ][14] I/O scheduler:
```
# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
END
```
Here is another example that sets all [solid-state drives][15] to use the [NOOP][16] I/O scheduler:
```
# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
END
```
Changing your I/O scheduler won't affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering.
* * *
_Photo by _[ _James Donovan_][17]_ on _[_Unsplash_][18]_._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/check-storage-performance-with-dd/
作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/glb/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/dd-performance-816x345.jpg
[2]: https://www.ibm.com/developerworks/community/blogs/ibmnas/entry/misalignment_can_be_twice_the_cost1?lang=en
[3]: https://eklitzke.org/efficient-file-copying-on-linux
[4]: https://en.wikipedia.org/wiki/Hdparm
[5]: https://en.wikipedia.org/wiki/Random-access_memory
[6]: https://blog.elcomsoft.com/2019/01/why-ssds-die-a-sudden-death-and-how-to-deal-with-it/
[7]: https://www.computerworld.com/article/2484998/ssds-do-die--as-linus-torvalds-just-discovered.html
[8]: https://en.wikipedia.org/wiki/S.M.A.R.T.
[9]: https://en.wikipedia.org/wiki/I/O_scheduling
[10]: https://lwn.net/Articles/552904/
[11]: https://en.wikipedia.org/wiki/Deadline_scheduler
[12]: http://www.reactivated.net/writing_udev_rules.html
[13]: https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics
[14]: http://algo.ing.unimo.it/people/paolo/disk_sched/
[15]: https://en.wikipedia.org/wiki/Solid-state_drive
[16]: https://en.wikipedia.org/wiki/Noop_scheduler
[17]: https://unsplash.com/photos/0ZBRKEG_5no?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[18]: https://unsplash.com/search/photos/speed?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,112 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Use 7Zip in Ubuntu and Other Linux [Quick Tip])
[#]: via: (https://itsfoss.com/use-7zip-ubuntu-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
How to Use 7Zip in Ubuntu and Other Linux [Quick Tip]
======
_**Brief: Cannot extract .7z file in Linux? Learn how to install and use 7zip in Ubuntu and other Linux distributions.**_
[7Zip][1] (properly written as 7-Zip) is an archive format hugely popular among Windows users. A 7Zip archive file usually ends in the .7z extension. It's mostly open source software, barring a few parts of the code that deal with unRAR.
7Zip support is not enabled by default in most Linux distributions. If you try to extract a .7z archive, you may see this error:
_**Could not open this file type
There is no command installed for 7-zip archive files. Do you want to search for a command to open this file?**_
![][2]
Don't worry, you can easily install 7zip in Ubuntu or other Linux distributions.
One problem you'll notice if you try to use the [apt-get install command][3] is that there is no installation candidate that starts with 7zip. That's because the 7Zip package in Linux is named [p7zip][4], starting with the letter p instead of the expected number 7.
Let's see how to install 7zip in Ubuntu and (possibly) other distributions.
### Install 7Zip in Ubuntu Linux
![][5]
The first thing you need to do is install the p7zip package. You'll find three 7zip packages in Ubuntu: p7zip, p7zip-full and p7zip-rar.
The difference between p7zip and p7zip-full is that p7zip is a lighter version providing support only for .7z, while the full version provides support for more 7z compression algorithms (for audio files, etc.).
The p7zip-rar package provides support for [RAR files][6] along with 7z.
Installing p7zip-full should be sufficient in most cases, but you may also install p7zip-rar for additional support for RAR files.
The p7zip packages are in the [universe repository in Ubuntu][7], so make sure that you have enabled it using these commands:
```
sudo add-apt-repository universe
sudo apt update
```
Use the following command to install 7zip support in Ubuntu and Debian based distributions.
```
sudo apt install p7zip-full p7zip-rar
```
That's good. Now you have 7zip archive support in your system.
### Extract 7Zip archive file in Linux
With 7Zip installed, you can either use the GUI or the command line to extract 7zip files in Linux.
In the GUI, you can extract a .7z file as you would extract any other compressed file: right click on the file and proceed to extract it.
In the terminal, you can extract a .7z archive file using this command:
```
7z e file.7z
```
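Note that `7z e` extracts everything into the current directory, flattening any folder structure inside the archive. To preserve the directory structure, use the `x` subcommand instead; `l` lists an archive's contents without extracting:
```
7z x file.7z
7z l file.7z
```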
### Compress a file in 7zip archive format in Linux
You can compress a file in 7zip archive format graphically. Simply right click on the file/directory, and select **Compress**. You should see several types of archive format options. Choose .7z for 7zip.
![7zip Archive Ubuntu][9]
Alternatively, you can also use the command line. Here's the command that you can use for this purpose:
```
7z a OutputFile files_to_compress
```
By default, the archived file will have .7z extension. You can compress the file in zip format by specifying the extension (.zip) of the output file.
**Conclusion**
That's it. See how easy it is to use 7zip in Linux? I hope you liked this quick tip. If you have questions or suggestions, feel free to let me know in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/use-7zip-ubuntu-linux/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.7-zip.org/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2015/07/Install_7zip_ubuntu_1.png?ssl=1
[3]: https://itsfoss.com/apt-get-linux-guide/
[4]: https://sourceforge.net/projects/p7zip/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/7zip-linux.png?resize=800%2C450&ssl=1
[6]: https://itsfoss.com/use-rar-ubuntu-linux/
[7]: https://itsfoss.com/ubuntu-repositories/
[8]: https://itsfoss.com/easily-share-files-linux-windows-mac-nitroshare/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/7zip-archive-ubuntu.png?resize=800%2C239&ssl=1

View File

@ -0,0 +1,141 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Check Whether The Given Package Is Installed Or Not On Debian/Ubuntu System?)
[#]: via: (https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Check Whether The Given Package Is Installed Or Not On Debian/Ubuntu System?
======
We recently published an article about bulk package installation.
While working on that, I struggled to get information about installed packages, so I did a quick Google search and found a few methods for it.
I would like to share them on our website so that they will be helpful to others too.
There are numerous ways we can achieve this.
I have added seven ways to achieve it here; you can choose whichever method you prefer.
Those methods are listed below.
* **`apt-cache Command:`** apt-cache command is used to query the APT cache or package metadata.
* **`apt Command:`** APT is a powerful command-line tool for installing, downloading, removing, searching and managing packages on Debian based systems.
* **`dpkg-query Command:`** dpkg-query is a tool to query the dpkg database.
* **`dpkg Command:`** dpkg is a package manager for Debian based systems.
* **`which Command:`** The which command returns the full path of the executable that would have been executed when the command is entered in the terminal.
* **`whereis Command:`** The whereis command is used to search for the binary, source, and man page files of a given command.
* **`locate Command:`** The locate command works faster than the find command because it uses the updatedb database, whereas the find command searches the real file system.
### Method-1: How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using apt-cache Command?
apt-cache command is used to query the APT cache or package metadata from APT's internal database.
It searches for and displays information about the given package: whether it is installed or not, the installed version, and source repository information.
The output below clearly shows that the `nano` package is already installed in the system, since the Installed field shows the installed version of the nano package.
```
# apt-cache policy nano
nano:
Installed: 2.9.3-2
Candidate: 2.9.3-2
Version table:
*** 2.9.3-2 500
500 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
100 /var/lib/dpkg/status
```
### Method-2: How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using apt Command?
APT is a powerful command-line tool for installing, downloading, removing, searching and managing packages, as well as querying information about them, providing low-level access to all features of the libapt-pkg library. It also contains some less-used command-line utilities related to package management.
```
# apt -qq list nano
nano/bionic,now 2.9.3-2 amd64 [installed]
```
### Method-3: How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using dpkg-query Command?
dpkg-query is a tool to show information about packages listed in the dpkg database.
In the output below, the first column shows `ii`, which means the given package is already installed on the system.
```
# dpkg-query --list | grep -i nano
ii nano 2.9.3-2 amd64 small, friendly text editor inspired by Pico
```
### Method-4 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using dpkg Command?
dpkg (Debian Package) is a tool to install, build, remove and manage Debian packages; unlike other package management systems, it cannot automatically download and install packages or their dependencies.
In the output below, the first column shows `ii`, which means the given package is already installed on the system.
```
# dpkg -l | grep -i nano
ii nano 2.9.3-2 amd64 small, friendly text editor inspired by Pico
```
### Method-5 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using which Command?
The which command returns the full path of the executable that would have been executed had the command been entered at the terminal.
It's very useful when you want to create a desktop shortcut or symbolic link for an executable file.
The which command searches the directories listed in the current user's PATH environment variable, not those of all users; that is, when you are logged in to your own account, you can't search for files or directories belonging to the root user.
If the following output shows the location of the given package's binary or executable file, then the package is already installed on the system. If not, the package is not installed.
```
# which nano
/bin/nano
```
### Method-6 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using whereis Command?
The whereis command is used to search for the binary, source, and man page files of a given command.
If the following output shows the location of the given package's binary or executable file, then the package is already installed on the system. If not, the package is not installed.
```
# whereis nano
nano: /bin/nano /usr/share/nano /usr/share/man/man1/nano.1.gz /usr/share/info/nano.info.gz
```
### Method-7 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using locate Command?
The locate command works faster than the find command because it uses the updatedb database, whereas find searches the real file system.
It uses a database rather than hunting through individual directory paths to find a given file.
The locate command doesn't come pre-installed in most distributions, so use your distribution's package manager to install it.
The database is updated regularly through cron, but we can also update it manually.
If the following output shows the location of the given package's binary or executable file, then the package is already installed on the system. If not, the package is not installed.
```
# locate --basename '\nano'
/usr/bin/nano
/usr/share/nano
/usr/share/doc/nano
```
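If you want to run this check from a script rather than interactively, the same idea can be automated. The following is a minimal sketch in Python (the function name and the fallback package `nano` are just illustrative, not part of the methods above); it shells out to dpkg-query, which is described in Method-3:
```
#!/usr/bin/env python3
# Minimal sketch: ask dpkg-query for the package status and inspect it.
import subprocess
import sys

def is_installed(package):
    # dpkg-query exits non-zero for unknown packages; for installed
    # packages the Status field reads "install ok installed".
    result = subprocess.run(
        ["dpkg-query", "-W", "--showformat=${Status}", package],
        capture_output=True, text=True)
    return result.returncode == 0 and "install ok installed" in result.stdout

if __name__ == "__main__":
    pkg = sys.argv[1] if len(sys.argv) > 1 else "nano"
    state = "installed" if is_installed(pkg) else "not installed"
    print(f"{pkg} is {state}")
```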
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,243 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Set Password Complexity On Linux?)
[#]: via: (https://www.2daygeek.com/how-to-set-password-complexity-policy-on-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Set Password Complexity On Linux?
======
User management is one of the important tasks of Linux system administration.
There are many aspects involved in this, and implementing a strong password policy is one of them.
Navigate to the following URL, if you would like to **[generate a strong password on Linux][1]**.
A strong policy restricts unauthorized access to systems.
As everybody knows, Linux is secure by default; however, we need to make the necessary tweaks to make it more secure.
An insecure password can lead to a security breach, so take additional care with this.
Navigate to the following URL, if you would like to check the **[password strength and score][2]** of a generated strong password.
In this article, we will show you how to implement a strong password policy on Linux.
We can use PAM (the “pluggable authentication module”) to enforce password policy on most Linux systems.
The relevant file can be found in the following locations:
`/etc/pam.d/system-auth` on Red Hat-based systems, and `/etc/pam.d/common-password` on Debian-based systems.
The default password aging details can be found in the `/etc/login.defs` file.
I have trimmed this file for better understanding.
```
# vi /etc/login.defs
PASS_MAX_DAYS 99999
PASS_MIN_DAYS 0
PASS_MIN_LEN 5
PASS_WARN_AGE 7
```
**Details:**
* **`PASS_MAX_DAYS:`**` ` Maximum number of days a password may be used.
* **`PASS_MIN_DAYS:`**` ` Minimum number of days allowed between password changes.
* **`PASS_MIN_LEN:`**` ` Minimum acceptable password length.
* **`PASS_WARN_AGE:`**` ` Number of days warning given before a password expires.
We will show you how to implement the eleven password policies below in Linux.
* Password Max days
* Password Min days
* Password warning days
* Password history or Deny Re-Used Passwords
* Password minimum length
* Minimum upper case characters
* Minimum lower case characters
* Minimum digits in password
* Minimum other characters (Symbols)
* Account lock retries
* Account unlock time
### What Is Password Max days?
This parameter limits the maximum number of days a password may be used; the user must change their account password before it expires.
If they forget to change it, they won't be allowed to log in to the system and will need to work with the admin team to get the account unlocked.
It can be set in the `/etc/login.defs` file. I'm going to set it to `90` days.
```
# vi /etc/login.defs
PASS_MAX_DAYS 90
```
### What Is Password Min days?
This parameter sets the minimum number of days that must pass before a password can be changed again.
For example, if this parameter is set to 15 and a user changed their password today, they won't be able to change it again for another 15 days.
It can be set in the `/etc/login.defs` file. I'm going to set it to `15` days.
```
# vi /etc/login.defs
PASS_MIN_DAYS 15
```
### What Is Password Warning Days?
This parameter controls the password warning period: it warns the user when the password is about to expire.
A warning is shown to the user regularly until the warning period ends. This helps the user change their password before it expires; otherwise, they will need to work with the admin team to get the account unlocked.
It can be set in the `/etc/login.defs` file. I'm going to set it to `10` days.
```
# vi /etc/login.defs
PASS_WARN_AGE 10
```
**Note:** All of the above parameters apply only to new accounts, not to existing ones. (For existing accounts, the same aging values can be changed with the `chage` command.)
### What Is Password History Or Deny Re-Used Passwords?
This parameter controls the password history: it keeps a record of the passwords used (the number of previous passwords that cannot be reused).
When a user tries to set a new password, it checks the password history and warns the user if they try to reuse an old password.
It can be set in the `/etc/pam.d/system-auth` file. I'm going to keep a history of `5` passwords.
```
# vi /etc/pam.d/system-auth
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok remember=5
```
### What Is Password Minimum Length?
This parameter sets the minimum password length. When a user sets a new password, its length is checked against this parameter, and the user is warned if the password is shorter than it.
It can be set in the `/etc/pam.d/system-auth` file. I'm going to set `12` characters as the minimum password length.
```
# vi /etc/pam.d/system-auth
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12
```
**try_first_pass retry=3** : Gives the user three attempts to set a good password before the passwd command aborts.
### Set Minimum Upper Case Characters?
This parameter sets how many uppercase characters the password must contain. It is one of the password-strengthening parameters, which increase password strength.
When a user sets a new password, it is checked against this parameter, and the user is warned if the password does not include enough uppercase characters.
It can be set in the `/etc/pam.d/system-auth` file. I'm going to require at least `1` uppercase character.
```
# vi /etc/pam.d/system-auth
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 ucredit=-1
```
### Set Minimum Lower Case Characters?
This parameter sets how many lowercase characters the password must contain. It is one of the password-strengthening parameters, which increase password strength.
When a user sets a new password, it is checked against this parameter, and the user is warned if the password does not include enough lowercase characters.
It can be set in the `/etc/pam.d/system-auth` file. I'm going to require at least `1` lowercase character.
```
# vi /etc/pam.d/system-auth
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 lcredit=-1
```
### Set Minimum Digits In Password?
This parameter sets how many digits the password must contain. It is one of the password-strengthening parameters, which increase password strength.
When a user sets a new password, it is checked against this parameter, and the user is warned if the password does not include any digits.
It can be set in the `/etc/pam.d/system-auth` file. I'm going to require at least `1` digit.
```
# vi /etc/pam.d/system-auth
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 dcredit=-1
```
### Set Minimum Other Characters (Symbols) In Password?
This parameter sets how many symbols (other characters) the password must contain. It is one of the password-strengthening parameters, which increase password strength.
When a user sets a new password, it is checked against this parameter, and the user is warned if the password does not include any symbols.
It can be set in the `/etc/pam.d/system-auth` file. I'm going to require at least `1` symbol.
```
# vi /etc/pam.d/system-auth
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 ocredit=-1
```
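To see how these settings combine, the following is a minimal illustrative sketch written in Python; it is not part of PAM or pam_cracklib, and only mirrors the policy configured above (a 12-character minimum with at least one uppercase letter, lowercase letter, digit, and symbol):
```
# Toy checker mirroring minlen=12, ucredit=-1, lcredit=-1, dcredit=-1, ocredit=-1.
import string

def meets_policy(password):
    return (len(password) >= 12
            and any(c.isupper() for c in password)    # ucredit=-1
            and any(c.islower() for c in password)    # lcredit=-1
            and any(c.isdigit() for c in password)    # dcredit=-1
            and any(c in string.punctuation for c in password))  # ocredit=-1

print(meets_policy("weakpassword"))     # False: no uppercase, digit, or symbol
print(meets_policy("Str0ng&Secure!"))   # True: 14 chars covering all classes
```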
### Set Account Lock?
This parameter controls failed login attempts: it locks the user account once the given number of consecutive failed attempts is reached.
It can be set in the `/etc/pam.d/system-auth` file.
```
# vi /etc/pam.d/system-auth
auth required pam_tally2.so onerr=fail audit silent deny=5
account required pam_tally2.so
```
### Set Account Unlock Time?
This parameter sets the account unlock time: if a user account is locked after consecutive failed authentications, it is unlocked automatically once the given time has elapsed.
The following sets the time (900 seconds = 15 minutes) for which the account should remain locked.
It can be set in the `/etc/pam.d/system-auth` file.
```
# vi /etc/pam.d/system-auth
auth required pam_tally2.so onerr=fail audit silent deny=5 unlock_time=900
account required pam_tally2.so
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-set-password-complexity-policy-on-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/5-ways-to-generate-a-random-strong-password-in-linux-terminal/
[2]: https://www.2daygeek.com/how-to-check-password-complexity-strength-and-score-in-linux/

View File

@ -0,0 +1,162 @@
[#]: collector: (lujun9972)
[#]: translator: (cycoe)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to add a player to your Python game)
[#]: via: (https://opensource.com/article/17/12/game-python-add-a-player)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
如何在你的 Python 游戏中添加一个玩家
======
用 Python 从头开始构建游戏的系列文章的第三部分。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python3-game.png?itok=jG9UdwC3)
在 [这个系列的第一篇文章][1] 中,我解释了如何使用 Python 创建一个简单的基于文本的骰子游戏。在第二部分中,我向你们展示了如何从头开始构建游戏,即从 [创建游戏的环境][2] 开始。但是每个游戏都需要一名玩家,并且每个玩家都需要一个可操控的角色,这也就是我们接下来要在这个系列的第三部分中需要做的。
在 Pygame 中,玩家操控的图标或者化身被称作妖精。如果你现在还没有任何图像可用于玩家妖精,你可以使用 [Krita][3] 或 [Inkscape][4] 来自己创建一些图像。如果你对自己的艺术细胞缺乏自信,你也可以在 [OpenClipArt.org][5] 或 [OpenGameArt.org][6] 搜索一些现成的图像。如果你还未按照上一篇文章所说的单独创建一个 images 文件夹,那么你需要在你的 Python 项目目录中创建它。将你想要在游戏中使用的图片都放在 images 文件夹中。
为了使你的游戏真正刺激,你应该为你的英雄使用一张动态的妖精图片。这意味着你需要绘制更多的素材,并且它们彼此之间要有足够的差异。最常见的动画就是走路循环,通过一系列的图像让你的妖精看起来像是在走路。走路循环最快捷粗糙的版本需要四张图像。
![](https://opensource.com/sites/default/files/u128651/walk-cycle-poses.jpg)
注意:这篇文章中的代码示例同时兼容静止的和动态的玩家妖精。
将你的玩家妖精命名为 `hero.png`。如果你正在创建一个动态的妖精,则需要在名字后面加上一个数字,从 `hero1.png` 开始。
### 创建一个 Python 类
在 Python 中,当你在创建一个你想要显示在屏幕上的对象时,你需要创建一个类。
在你的 Python 脚本靠近顶端的位置,加入如下代码来创建一个玩家。在以下的代码示例中,前三行已经在你正在处理的 Python 脚本中:
```
import pygame
import sys
import os # 以下是新代码
class Player(pygame.sprite.Sprite):
    '''
    生成一个玩家
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.images = []
        img = pygame.image.load(os.path.join('images','hero.png')).convert()
        self.images.append(img)
        self.image = self.images[0]
        self.rect  = self.image.get_rect()
```
如果你的可操控角色拥有一个走路循环,在 `images` 文件夹中将对应图片保存为 `hero1.png` 到 `hero4.png` 四个独立文件。
使用一个循环来告诉 Python 遍历每个文件。
```
'''
对象
'''
class Player(pygame.sprite.Sprite):
    '''
    生成一个玩家
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.images = []
        for i in range(1,5):
            img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
            self.images.append(img)
            self.image = self.images[0]
            self.rect  = self.image.get_rect()
```
### 将玩家带入游戏世界
现在一个 Player 类已经创建好了,你需要使用它在你的游戏世界中生成一个玩家妖精。如果你不调用 Player 类,那它永远不会起作用,(游戏世界中)也就不会有玩家。你可以通过立马运行你的游戏来验证一下。游戏会像上一篇文章末尾看到的那样运行,并得到明确的结果:一个空荡荡的游戏世界。
为了将一个玩家妖精带到你的游戏世界,你必须通过调用 Player 类来生成一个妖精,并将它加入到 Pygame 的妖精组中。在如下的代码示例中,前三行是已经存在的代码,你需要在其后添加代码:
```
world       = pygame.display.set_mode([worldx,worldy])
backdrop    = pygame.image.load(os.path.join('images','stage.png')).convert()
backdropbox = world.get_rect()
# 以下是新代码
player = Player()   # 生成玩家
player.rect.x = 0   # 移动 x 坐标
player.rect.y = 0   # 移动 y 坐标
player_list = pygame.sprite.Group()
player_list.add(player)
```
尝试启动你的游戏来看看发生了什么。高能预警:它不会像你预期的那样工作,当你启动你的项目,玩家妖精并没有出现。事实上它生成了,只不过只出现了一毫秒。你要如何修复一个只出现了一毫秒的东西呢?你可能还记得,上一篇文章中提到过,你需要在主循环中添加一些东西。为了使玩家的存在时间超过一毫秒,你需要告诉 Python 在每次循环中都绘制一次。
将你的循环底部的语句更改如下:
```
    world.blit(backdrop, backdropbox)
    player_list.draw(world) # 绘制玩家
    pygame.display.flip()
    clock.tick(fps)
```
现在启动你的游戏,你的玩家出现了!
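如果你想单独验证这一部分,下面是一个把本系列前两篇文章的设置代码和本文的玩家代码拼在一起的最小可运行骨架(仅为示意:其中 `worldx`、`worldy`、`fps` 的数值是假设的,`stage.png` 和 `hero.png` 需要换成你自己 images 文件夹中的文件;为简短起见,这里只使用了静态妖精):
```
import os
import sys
import pygame

class Player(pygame.sprite.Sprite):
    '''生成一个玩家(静态妖精的简化版)'''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images', 'hero.png')).convert()
        self.rect = self.image.get_rect()

pygame.init()
worldx, worldy, fps = 960, 720, 40           # 假设的窗口尺寸与帧率
clock = pygame.time.Clock()
world = pygame.display.set_mode([worldx, worldy])
backdrop = pygame.image.load(os.path.join('images', 'stage.png')).convert()
backdropbox = world.get_rect()

player = Player()                            # 生成玩家
player.rect.x = 0
player.rect.y = 0
player_list = pygame.sprite.Group()
player_list.add(player)

while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:        # 点击关闭按钮时退出
            pygame.quit()
            sys.exit()
    world.blit(backdrop, backdropbox)
    player_list.draw(world)                  # 每一帧都绘制玩家
    pygame.display.flip()
    clock.tick(fps)
```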
### 设置 alpha 通道
根据你如何创建你的玩家妖精,在它周围可能会有一个色块。你所看到的是 alpha 通道应该占据的空间。它本来是不可见的“颜色”,但 Python 现在还不知道要使它不可见。那么你所看到的,是围绕在妖精周围的边界区(或现代游戏术语中的“命中区”)内的空间。
![](https://opensource.com/sites/default/files/u128651/greenscreen.jpg)
你可以通过设置一个 alpha 通道和 RGB 值来告诉 Python 使哪种颜色不可见。如果你不知道你使用 alpha 通道的图像的 RGB 值,你可以使用 Krita 或 Inkscape 打开它,并使用一种独特的颜色,比如 #00ff00(差不多是“绿屏绿”)来填充图像周围的空白区域。记下颜色对应的十六进制值(此处为 #00ff00,绿屏绿)并将其作为 alpha 通道用于你的 Python 脚本。
使用 alpha 通道需要在你的妖精生成相关代码中添加如下两行。类似第一行的代码已经存在于你的脚本中,你只需要添加另外两行:
```
            img = pygame.image.load(os.path.join('images','hero' + str(i) + '.png')).convert()
            img.convert_alpha()     # 优化 alpha
            img.set_colorkey(ALPHA) # 设置 alpha
```
除非你告诉它,否则 Python 不知道将哪种颜色作为 alpha 通道。在你代码的设置相关区域,添加一些颜色定义。将如下的变量定义添加于你的设置相关区域的任意位置:
```
ALPHA = (0, 255, 0)
```
在以上示例代码中,我们使用了 **0,255,0**,它在 RGB 中所代表的值与 #00ff00 在十六进制中所代表的值相同。你可以通过一个优秀的图像应用程序,如 [GIMP][7]、Krita 或 Inkscape来获取所有这些颜色值。或者你可以使用一个优秀的系统级颜色选择器如 [KColorChooser][8],来检测颜色。
![](https://opensource.com/sites/default/files/u128651/kcolor.png)
如果你的图像应用程序将你的妖精背景渲染成了其他的值,你可以按需调整 `ALPHA` 变量的值。不论你将 alpha 设为多少最后它都将“不可见”。RGB 颜色值是非常严格的,因此如果你需要将 alpha 设为 0,0,0,但你又想将 0,0,0 用于你图像中的黑线,你只需要将图像中线的颜色设为 1,1,1。这样一来图像中的线就足够接近黑色但除了电脑以外没有人能看出区别。
运行你的游戏查看结果。
![](https://opensource.com/sites/default/files/u128651/alpha.jpg)
在 [这个系列的第四篇文章][9] 中,我会向你们展示如何使你的妖精动起来。多么的激动人心啊!
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/12/game-python-add-a-player
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[cycoe](https://github.com/cycoe)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/17/10/python-101
[2]: https://opensource.com/article/17/12/program-game-python-part-2-creating-game-world
[3]: http://krita.org
[4]: http://inkscape.org
[5]: http://openclipart.org
[6]: https://opengameart.org/
[7]: http://gimp.org
[8]: https://github.com/KDE/kcolorchooser
[9]: https://opensource.com/article/17/12/program-game-python-part-4-moving-your-sprite

View File

@ -1,125 +0,0 @@
DomTerm 一款为 Linux 打造的终端模拟器
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah)
[DomTerm][1] 是一款现代化的终端模拟器,它使用浏览器引擎作为 “GUI 工具包”。这就使得一些干净特性例如可嵌入的图像和链接HTML 富文本以及可折叠(显示/隐藏)命令成为可能。除此以外,它看起来感觉就像一个功能强大,有着优秀 xterm 兼容性包括鼠标处理和24位色和恰当的 “chrome” (菜单)的独特终端模拟器。另外它同样有对会话管理和副窗口(例如 `tmux``GNU Screen` 中),基本输入编辑(例如 `readline` 中)以及分页(例如 `less` 中)的内置支持。
![](https://opensource.com/sites/default/files/u128651/domterm1.png)
图 1: DomTerminal 终端模拟器。
在以下部分我们将看一看这些特性。我们将假设你已经安装好了 `domterm` (如果你需要获取并搭建 Dormterm 请跳到本文最后)。开始之前先让我们概览一下这项技术。
### 前端 vs. 后端
DomTerm 大部分是用 JavaScript 写的,它运行在一个浏览器引擎中。这个引擎可以是一个桌面浏览器,例如 Chrome 或者 Firefox见图三也可以是一个内嵌的浏览器。使用一个通用的网页浏览器没有问题但是用户体验却不够好因为菜单是为通用的网页浏览而不是为了终端模拟器所打造),并且安全模型也会妨碍使用。因此使用内嵌的浏览器更好一些。
目前以下这些是支持的:
* `qdomterm`,使用了 Qt 工具包 和 `QtWebEngine`
* 一个内嵌的 `[Electron][2]`(见图一)
* `atom-domterm` 以 [Atom 文本编辑器][3](同样基于 Electron包的形式运行 DomTerm并和 Atom 面板系统集成在一起(见图二)
* 一个为 JavaFX 的 `WebEngine` 包装器,这对 Java 编程十分有用(见图四)
* 之前前端使用 [Firefox-XUL][4] 作为首选,但是 Mozilla 已经终止了 XUL
![在 Atom 编辑器中的 DomTerm 终端面板][6]
图二:在 Atom 编辑器中的 DomTerm 终端面板。
目前Electron 前端可能是最佳选择,紧随其后的是 Qt 前端。如果你使用 Atom`atom-domterm` 也工作得相当不错。
后端服务器是用 C 写的。它管理着伪终端PTYs和会话。它同样也是一个为前端提供 Javascript 和其他文件的 HTTP 服务器。如果没有服务器在运行,`domterm` 就会使用它自己。后端与服务器之间的通讯通常是用 WebSockets (在服务器端是[libwebsockets][8]完成的。然而JavaFX 嵌入时既不用 Websockets 也不用 DomTerm 服务器。相反 Java 应用直接通过 Java-Javascript 桥接进行通讯。
### 一个稳健的可兼容 xterm 的终端模拟器
DomTerm 看上去感觉像一个现代的终端模拟器。它处理鼠标事件24位色万国码倍宽字符CJK以及输入方式。DomTerm 在 [vttest 测试套件][9] 上工作地十分出色。
不同寻常的特性包括:
**展示/隐藏按钮(“折叠”):** 小三角(如上图二)是隐藏/展示相应输出的按钮。仅需在[提示文字][11]中添加特定的[转义字符][10]就可以创建按钮。
**对于 `readline` 和类似输入编辑器的鼠标点击支持:** 如果你点击黄色输入区域DomTerm 会向应用发送正确的方向键按键序列。(提示窗口中的转义字符使能这一特性,你也可以通过 Alt+Click 强制使用。)
**用CSS样式化终端** 这通常是在 `~/.domterm/settings.ini` 里完成的,保存时会自动重载。例如在图二中,终端专用的背景色被设置。
### 一个更好的 REPL 控制台
一个经典的终端模拟器基于长方形的字符单元格工作。这在 REPL命令行上没问题但是并不理想。这有些对通常在终端模拟器中不常见的 REPL 很有用的 DomTerm 特性:
**一个能“打印”图片,图表,数学公式或者一组可点击的链接的命令:** 一个应用可以发送包含几乎任何 HTML 的转义字符。(擦除 HTML 以移除 JavaScript 和其它危险特性。)
图三从 [gnuplot][12] 会话展示了一个片段。Gnuplot2.1 或者更高版本)支持 `dormterm` 作为终端类型。图像输出被转换成 [SVG 图][13],然后图片被打印到终端。我的博客帖子[在 DormTerm 上的 Gnuplot 展示][14]在这方面提供了更多信息。
![](https://opensource.com/sites/default/files/dt-gnuplot.png)
图三: Gnuplot 截图。
[Kawa][15] 语言有一个创建并转换[几何图像值][16]的库。如果你将这样的图片值打印到 DomTerm 终端,图片就会被转换成 SVG 形式并嵌入进输出中。
![](https://opensource.com/sites/default/files/dt-kawa1.png)
图四: Kawa 中可计算的几何形状。
**输出中的富文本:** 有着 HTML 样式的帮助信息更加便于阅读,看上去也更漂亮。图片一的面板下部分展示 `dormterm help` 的输出。(如果没在 DomTerm 下运行的话输出的是普通文本。)注意自带的分页器中的 `PAUSED` 消息。
**包括可点击链接的错误消息:** DomTerm 识别语法 `filename:line:column` 并将其转化成一个能在可定制文本编辑器中打开文件并定位到行的链接。(这适用相对的文件名上如果你用 `PROMPT_COMMAND` 或类似的以跟踪目录。)
一个编译器可以侦测到它在 DomTerm 下运行,并直接用转义字符发出文件链接。这比依赖 DomTerm 的样式匹配要稳健得多,因为它可以处理空格和其他字符并且无需依赖目录追踪。在图四中,你可以看到来自 [Kawa Compiler][15] 的错误消息。悬停在文件位置上会使其出现下划线,`file:` URL 出现在 `atom-domterm` 消息栏(窗口底部)中。(当不用 `atom-domterm` 时,这样的消息会在一个覆盖框中显示,如图一中所看到的 `PAUSED` 消息所示。)
点击链接时的动作是可以配置的。默认对于带有 `#position` 后缀的 `file:` 链接的动作是在文本编辑器中打开那个文件。
**内建的 Lisp 样式优美打印:** 你可以在输出中包括优美打印目录(比如,组)这样行分隔符会随着窗口调整二重新计算。查看我的文章 [DomTerm 中的动态优美打印][17]以更深入探讨。
**基本的有着历史记录的内建行编辑**(像 `GNU readline` 一样): 这使用浏览器自带的编辑器,因此它有着优秀的鼠标和选择处理机制。你可以在正常字符模式(大多数输入的字符被指接送向进程); 或者行模式(当控制字符导致编辑动作,回车键向进程发送被编辑行,通常的字符是被插入的)之间转换。默认的是自动模式,根据 PTY 是在原始模式还是终端模式中DomTerm 在字符模式与行模式间转换。
**自带的分页器**(类似简化版的 `less`):键盘快捷键控制滚动。在“页模式”中,输出在每个新的屏幕(或者单独的行如果你一行行地向前移)后暂停; 页模式对于用户输入简单智能,因此(如果你想的话)你可以无需阻碍交互程序就可以运行它。
### 多路传输和会话
**标签和平铺:** 你不仅可以创建多个终端标签,也可以平铺它们。你可以要么使用鼠标要么使用键盘快捷键来创建或者切换面板和标签。它们可以用鼠标重新排列并调整大小。这是通过[GoldenLayout][18] JavaScript 库实现的。[图一][19]展示了一个有着两个面板的窗口。上面的有两个标签,一个运行 [Midnight Commander][20]; 底下的面板以 HTML 形式展示了 `dormterm help` 输出。然而相反在 Atom 中我们使用其自带的可拖拽的面板和标签。你可以在图二中看到这个。
**分离或重接会话:** 与 `tmux` 和 GNU `screen` 类似DomTerm 支持会话安排。你甚至可以给同样的会话接上多个窗口或面板。这支持多用户会话分享和远程链接。(为了安全,同一个服务器的所有会话都需要能够读取 Unix 域接口和包含随机密钥的本地文件。当我们有了良好,安全的远程链接,这个限制将会有所改善。)
**`domterm`** **命令**`tmux` 和 GNU `screen` 同样相似的地方在于它为控制或者打开单个或多个会话的服务器有着多个选项。主要的差别在于,如果它没在 DomTerm 下运行,`dormterm` 命令会创建一个新的顶层窗口,而不是在现有的终端中运行。
`tmux``git` 类似dormterm` 命令有许多副命令。一些副命令创建窗口或者会话。另一些(例如“打印”一张图片)仅在现有的 DormTerm 会话下起作用。
命令 `domterm browse` 打开一个窗口或者面板以浏览一个指定的 URL例如浏览文档的时候。
### 获取并安装 DomTerm
DomTerm 从其 [Github 仓库][21]可以获取。目前没有提前搭建好的包,但是有[详细指导][22]。所有的前提条件都可以在 Fedora 27 上获取,这使得其特别容易被搭建。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/introduction-domterm-terminal-emulator
作者:[Per Bothner][a]
译者:[tomjlw](https://github.com/tomjlw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/perbothner
[1]:http://domterm.org/
[2]:https://electronjs.org/
[3]:https://atom.io/
[4]:https://en.wikipedia.org/wiki/XUL
[5]:/file/385346
[6]:https://opensource.com/sites/default/files/images/dt-atom1.png (DomTerm terminal panes in Atom editor)
[7]:https://opensource.com/sites/default/files/images/dt-atom1.png
[8]:https://libwebsockets.org/
[9]:http://invisible-island.net/vttest/
[10]:http://domterm.org/Wire-byte-protocol.html
[11]:http://domterm.org/Shell-prompts.html
[12]:http://www.gnuplot.info/
[13]:https://developer.mozilla.org/en-US/docs/Web/SVG
[14]:http://per.bothner.com/blog/2016/gnuplot-in-domterm/
[15]:https://www.gnu.org/software/kawa/
[16]:https://www.gnu.org/software/kawa/Composable-pictures.html
[17]:http://per.bothner.com/blog/2017/dynamic-prettyprinting/
[18]:https://golden-layout.com/
[19]:https://opensource.com/sites/default/files/u128651/domterm1.png
[20]:https://midnight-commander.org/
[21]:https://github.com/PerBothner/DomTerm
[22]:http://domterm.org/Downloading-and-building.html

View File

@ -1,435 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (FSSlc)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Inter-process communication in Linux: Shared storage)
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-storage)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
Linux 下的进程间通信:共享存储
======
学习在 Linux 中进程是如何与其他进程进行同步的。
![Filing papers and documents][1]
本篇是 Linux 下[进程间通信][2]IPC系列的第一篇文章。这个系列将使用 C 语言代码示例来阐明以下 IPC 机制:
* 共享文件
* 共享内存(使用信号量)
* 管道(命名的或非命名的管道)
* 消息队列
* 套接字
* 信号
在聚焦上面提到的共享文件和共享内存这两个机制之前,这篇文章将复习一些核心的概念。
### 核心概念
_进程_ 是运行着的程序,每个进程都有着它自己的地址空间,这些空间由进程被允许获取的内存地址组成。进程有一个或多个执行 _线程_,而线程是一系列执行指令的集合: _单线程_ 进程就只有一个线程,而 _多线程_ 的进程则有多个线程。一个进程中的线程共享各种资源,特别是地址空间。另外,一个进程中的线程可以直接通过共享内存来进行通信,尽管某些现代语言(例如 Go鼓励一种更有序的方式例如使用线程安全的通道。当然对于不同的进程默认情况下它们 _不_ 能共享内存。
启动进程后进行通信有多种方法,下面所举的例子中主要使用了下面的两种方法:
* 一个终端被用来启动一个进程,另外一个不同的终端被用来启动另一个。
* 在一个进程(父进程)中调用系统函数 **fork**,以此生发另一个进程(子进程)。
第一个例子采用了上面使用终端的方法。这些[代码示例][3]的 ZIP 压缩包可以从我的网站下载到。
### 共享文件
程序员对文件获取问题应该都已经很熟识了,包括许多坑(不存在的文件、文件权限损坏等等),这些问题困扰着程序对文件的使用。尽管如此,共享文件可能是最为基础的 IPC 机制了。考虑一下下面这样一个相对简单的例子其中一个进程_生产者_创建和写入一个文件然后另一个进程_消费者_从这个相同的文件中进行读取
```
writes +-----------+ reads
producer-------->| disk file |<-------consumer
+-----------+
```
在使用这个 IPC 机制时最明显的挑战是 _竞争条件_ 可能会发生:生产者和消费者可能恰好在同一时间访问该文件,从而使得输出结果不确定。为了避免竞争条件的发生,该文件在处于 _读__写_ 状态时必须以某种方式处于被锁状态,从而阻止在 _写_ 操作执行时和其他操作的冲突。在标准系统库中与锁相关的 API 可以被总结如下:
* 生产者应该在写入文件时获得一个文件的排斥锁。一个 _排斥_ 锁最多被一个进程所拥有。这样就可以排除掉竞争条件的发生,因为在锁被释放之前没有其他的进程可以访问这个文件。
* 消费者应该在从文件中读取内容时得到至少一个共享锁。多个 _reader_ 可以同时保有一个 _共享_ 锁;而只要还有任何一个 _reader_ 保有共享锁,就没有 _writer_ 能够获取文件进行写入。
共享锁可以提升效率。假如一个进程只是读入一个文件的内容,而不去改变它的内容,就没有什么原因阻止其他进程来做同样的事。但如果需要写入内容,则很显然需要文件有排斥锁。
标准的 I/O 库中包含一个名为 **fcntl** 的实用函数,它可以被用来检查或者操作一个文件上的排斥锁和共享锁。该函数通过一个 _文件描述符_ (一个在进程中的非负整数值)来标记一个文件(在不同的进程中不同的文件描述符可能标记同一个物理文件)。对于文件的锁定, Linux 提供了名为 **flock** 的库函数,它是 **fcntl** 的一个精简包装。第一个例子中使用 **fcntl** 函数来暴露这些 API 细节。
#### 示例 1. _生产者_ 程序
```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

#define FileName "data.dat"
#define DataString "Now is the winter of our discontent\nMade glorious summer by this sun of York\n"

void report_and_exit(const char* msg) {
  perror(msg);
  exit(-1); /* EXIT_FAILURE */
}

int main() {
  struct flock lock;
  lock.l_type = F_WRLCK;    /* read/write (exclusive) lock */
  lock.l_whence = SEEK_SET; /* base for seek offsets */
  lock.l_start = 0;         /* 1st byte in file */
  lock.l_len = 0;           /* 0 here means 'until EOF' */
  lock.l_pid = getpid();    /* process id */

  int fd; /* file descriptor to identify a file within a process */
  if ((fd = open(FileName, O_RDWR | O_CREAT, 0666)) < 0) /* -1 signals an error */
    report_and_exit("open failed...");

  if (fcntl(fd, F_SETLK, &lock) < 0) /* F_SETLK doesn't block, F_SETLKW does */
    report_and_exit("fcntl failed to get lock...");
  else {
    write(fd, DataString, strlen(DataString)); /* populate data file */
    fprintf(stderr, "Process %d has written to data file...\n", lock.l_pid);
  }

  /* Now release the lock explicitly. */
  lock.l_type = F_UNLCK;
  if (fcntl(fd, F_SETLK, &lock) < 0)
    report_and_exit("explicit unlocking failed...");

  close(fd); /* close the file: would unlock if needed */
  return 0;  /* terminating the process would unlock as well */
}
```
上面 _生产者_ 程序的主要步骤可以总结如下:
* 这个程序首先声明了一个类型为 **struct flock** 的变量,它代表一个锁,并对它的 5 个域做了初始化。第一个初始化
```c
lock.l_type = F_WRLCK; /* exclusive lock */
````
使得这个锁为排斥锁_read-write_而不是一个共享锁_read-only_。假如 _生产者_ 获得了这个锁,则其他的进程将不能够对文件做读或者写操作,直到 _生产者_ 释放了这个锁,或者显式地调用 **fcntl**,又或者隐式地关闭这个文件。(当进程终止时,所有被它打开的文件都会被自动关闭,从而释放了锁)
* 上面的程序接着初始化其他的域。主要的效果是 _整个_ 文件都将被锁上。但是,有关锁的 API 允许特别指定的字节被上锁。例如,假如文件包含多个文本记录,则单个记录(或者甚至一个记录的一部分)可以被锁,而其余部分不被锁。
* 第一次调用 **fcntl**
```c
if (fcntl(fd, F_SETLK, &lock) < 0)
```
尝试排斥性地将文件锁住,并检查调用是否成功。一般来说, **fcntl** 函数返回 **-1** (因此小于 0意味着失败。第二个参数 **F_SETLK** 意味着 **fcntl** 的调用 _不是_ 堵塞的;函数立即做返回,要么获得锁,要么显示失败了。假如替换地使用 **F_SETLKW**(末尾的 **W** 代指 _等待_),那么对 **fcntl** 的调用将是阻塞的,直到有可能获得锁的时候。在调用 **fcntl** 函数时,它的第一个参数 **fd** 指的是文件描述符,第二个参数指定了将要采取的动作(在这个例子中,**F_SETLK** 指代设置锁),第三个参数为锁结构的地址(在本例中,指的是 **& lock**)。
* 假如 _生产者_ 获得了锁,这个程序将向文件写入两个文本记录。
* 在向文件写入内容后_生产者_ 改变锁结构中的 **l_type** 域为 _unlock_ 值:
```c
lock.l_type = F_UNLCK;
```
并调用 **fcntl** 来执行解锁操作。最后程序关闭了文件并退出。
#### 示例 2. _消费者_ 程序
```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#define FileName "data.dat"
void report_and_exit(const char* msg) {
perror(msg);
exit(-1); /* EXIT_FAILURE */
}
int main() {
struct flock lock;
lock.l_type = F_WRLCK; /* read/write (exclusive) lock */
lock.l_whence = SEEK_SET; /* base for seek offsets */
lock.l_start = 0; /* 1st byte in file */
lock.l_len = 0; /* 0 here means 'until EOF' */
lock.l_pid = getpid(); /* process id */
int fd; /* file descriptor to identify a file within a process */
if ((fd = open(FileName, O_RDONLY)) < 0) /* -1 signals an error */
report_and_exit("open to read failed...");
/* If the file is write-locked, we can't continue. */
fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
if (lock.l_type != F_UNLCK)
report_and_exit("file is still write locked...");
lock.l_type = F_RDLCK; /* prevents any writing during the reading */
if (fcntl(fd, F_SETLK, &lock) < 0)
report_and_exit("can't get a read-only lock...");
/* Read the bytes (they happen to be ASCII codes) one at a time. */
int c; /* buffer for read bytes */
while (read(fd, &c, 1) > 0) /* 0 signals EOF */
write(STDOUT_FILENO, &c, 1); /* write one byte to the standard output */
/* Release the lock explicitly. */
lock.l_type = F_UNLCK;
if (fcntl(fd, F_SETLK, &lock) < 0)
report_and_exit("explicit unlocking failed...");
close(fd);
return 0;
}
```
相比于着重解释锁的 API_消费者_ 程序会相对复杂一点儿。特别的_消费者_ 程序首先检查文件是否被排斥性的被锁,然后才尝试去获得一个共享锁。相关的代码为:
```
lock.l_type = F_WRLCK;
...
fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
if (lock.l_type != F_UNLCK)
report_and_exit("file is still write locked...");
```
**fcntl** 调用中的 **F_GETLK** 操作指定检查一个锁,在本例中,上面代码的声明中给了一个 **F_WRLCK** 的排斥锁。假如特指的锁不存在,那么 **fcntl** 调用将会自动地改变锁类型域为 **F_UNLCK** 以此来显示当前的状态。假如文件是排斥性地被锁,那么 _消费者_ 将会终止。(一个更健壮的程序版本或许应该让 _消费者_ _睡_ 会儿,然后再尝试几次。)
假如当前文件没有被锁,那么 _消费者_ 将尝试获取一个共享_read-only_**F_RDLCK**)。为了缩短程序,**fcntl** 中的 **F_GETLK** 调用可以丢弃,因为假如其他进程已经保有一个 _读写_ 锁,**F_RDLCK** 的调用就可能会失败。重新调用一个 _只读_ 锁能够阻止其他进程向文件进行写的操作但可以允许其他进程对文件进行读取。简而言之共享锁可以被多个进程所保有。在获取了一个共享锁后_消费者_ 程序将立即从文件中读取字节数据,然后在标准输出中打印这些字节的内容,接着释放锁,关闭文件并终止。
下面的 **%** 为命令行提示符,下面展示的是从相同终端开启这两个程序的输出:
```
% ./producer
Process 29255 has written to data file...
% ./consumer
Now is the winter of our discontent
Made glorious summer by this sun of York
```
在本次的代码示例中,通过 IPC 传输的数据是文本:它们来自莎士比亚的戏剧《理查三世》中的两行台词。然而,共享文件的内容还可以是纷繁复杂的,任意的字节数据(例如一个电影)都可以,这使得文件共享变成了一个非常灵活的 IPC 机制。但它的缺点是文件获取速度较慢,因为文件的获取涉及到读或者写。同往常一样,编程总是伴随着折中。下面的例子将通过共享内存来做 IPC而不是通过共享文件在性能上相应的有极大的提升。
### 共享内存
对于共享内存Linux 系统提供了两类不同的 API传统的 System V API 和更新一点的 POSIX API。在单个应用中这些 API 不能混用。但是, POSIX 方式的一个坏处是它的特性仍在发展中,并且依赖于安装的内核版本,这非常影响代码的可移植性。例如, 默认情况下POSIX API 用 _内存映射文件_ 来实现共享内存:对于一个共享的内存段,系统为相应的内容维护一个 _备份文件_。在 POSIX 规范下共享内存可以被配置为不需要备份文件,但这可能会影响可移植性。我的例子中使用的是带有备份文件的 POSIX API这既结合了内存获取的速度优势又获得了文件存储的持久性。
下面的共享内存例子中包含两个程序,分别名为 _memwriter__memreader_,并使用 _信号量_ 来调整它们对共享内存的获取。在任何时候当共享内存进入一个 _writer_ 的版图时,无论是多进程还是多线程,都有遇到基于内存的竞争条件的风险,所以,需要引入信号量来协调(同步)对共享内存的获取。
_memwriter_ 程序应当在它自己所处的终端首先启动,然后 _memreader_ 程序才可以在它自己所处的终端启动在接着的十几秒内。_memreader_ 的输出如下:
```
This is the way the world ends...
```
在每个源程序的最上方注释部分都解释了在编译它们时需要添加的链接参数。
首先让我们复习一下信号量是如何作为一个同步机制工作的。一般的信号量也被叫做一个 _计数信号量_,因为带有一个可以增加的值(通常初始化为 0。考虑一家租用自行车的商店在它的库存中有 100 辆自行车,还有一个供职员用于租赁的程序。每当一辆自行车被租出去,信号量就增加 1当一辆自行车被还回来信号量就减 1。在信号量的值为 100 之前都还可以进行租赁业务,但如果等于 100 时,就必须停止业务,直到至少有一辆自行车被还回来,从而信号量减为 99。
_二元信号量_ 是一个特例,它只有两个值: 0 和 1。在这种情况下信号量的表现为 _互斥量_(一个互斥的构造)。下面的共享内存示例将把信号量用作互斥量。当信号量的值为 0 时,只有 _memwriter_ 可以获取共享内存,在写操作完成后,这个进程将增加信号量的值,从而允许 _memreader_ 来读取共享内存。
#### 示例 3. _memwriter_ 进程的源程序
```c
/** Compilation: gcc -o memwriter memwriter.c -lrt -lpthread **/
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <semaphore.h>
#include <string.h>
#include "shmem.h"
void report_and_exit(const char* msg) {
perror(msg);
exit(-1);
}
int main() {
int fd = shm_open(BackingFile, /* name from smem.h */
O_RDWR | O_CREAT, /* read/write, create if needed */
AccessPerms); /* access permissions (0644) */
if (fd < 0) report_and_exit("Can't open shared mem segment...");
ftruncate(fd, ByteSize); /* get the bytes */
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
ByteSize, /* how many bytes */
PROT_READ | PROT_WRITE, /* access protections */
MAP_SHARED, /* mapping visible to other processes */
fd, /* file descriptor */
0); /* offset: start at 1st byte */
if ((caddr_t) -1 == memptr) report_and_exit("Can't get segment...");
fprintf(stderr, "shared mem address: %p [0..%d]\n", memptr, ByteSize - 1);
fprintf(stderr, "backing file: /dev/shm%s\n", BackingFile );
/* semahore code to lock the shared mem */
sem_t* semptr = sem_open(SemaphoreName, /* name */
O_CREAT, /* create the semaphore */
AccessPerms, /* protection perms */
0); /* initial value */
if (semptr == (void*) -1) report_and_exit("sem_open");
strcpy(memptr, MemContents); /* copy some ASCII bytes to the segment */
/* increment the semaphore so that memreader can read */
if (sem_post(semptr) < 0) report_and_exit("sem_post");
sleep(12); /* give reader a chance */
/* clean up */
munmap(memptr, ByteSize); /* unmap the storage */
close(fd);
sem_close(semptr);
shm_unlink(BackingFile); /* unlink from the backing file */
return 0;
}
```
下面是 _memwriter__memreader_ 程序如何通过共享内存来通信的一个总结:
* 上面展示的 _memwriter_ 程序调用 **shm_open** 函数来得到作为系统协调共享内存的备份文件的文件描述符。此时,并没有内存被分配。接下来调用的是令人误解的名为 **ftruncate** 的函数
```c
ftruncate(fd, ByteSize); /* get the bytes */
```
它将分配 **ByteSize** 字节的内存,在该情况下,一般为大小适中的 512 字节。_memwriter_ 和 _memreader_ 程序都只从共享内存中获取数据,而不是从备份文件。系统将负责共享内存和备份文件之间数据的同步。
* 接着 _memwriter_ 调用 **mmap** 函数:
```c
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
ByteSize, /* how many bytes */
PROT_READ | PROT_WRITE, /* access protections */
MAP_SHARED, /* mapping visible to other processes */
fd, /* file descriptor */
0); /* offset: start at 1st byte */
```
来获得共享内存的指针(_memreader_ 也做一次类似的调用)。指针类型 **caddr_t** 以 **c** 开头,它代表 **calloc**,这是一个动态分配内存并初始化为 0 的系统函数。_memwriter_ 会在后续的 _写_ 操作中通过库函数 **strcpy**(字符串拷贝)用到这个 **memptr**。
* 到现在为止, _memwriter_ 已经准备好进行写操作了,但首先它要创建一个信号量来确保共享内存的排斥性。假如 _memwriter_ 正在执行写操作而同时 _memreader_ 在执行读操作,则有可能出现竞争条件。假如调用 **sem_open**
成功了:
```c
sem_t* semptr = sem_open(SemaphoreName, /* name */
O_CREAT, /* create the semaphore */
AccessPerms, /* protection perms */
0); /* initial value */
```
那么,接着写操作便可以执行。上面的 **SemaphoreName**(任意一个唯一的非空名称)用来在 _memwriter_ 和 _memreader_ 中标识信号量。初始值 0 使得信号量的创建者(在这个例子中是 _memwriter_)获得先执行 _写_ 操作的权利。
* 在写操作完成后_memwriter_ 通过调用 **sem_post** 函数将信号量的值增加到 1
```c
if (sem_post(semptr) < 0) ..
```
增加信号量的值将释放互斥锁,使得 _memreader_ 可以执行它的 _读_ 操作。为了稳妥起见_memwriter_ 还将共享内存从它自己的地址空间中取消映射:
```c
munmap(memptr, ByteSize); /* unmap the storage *
```
这将使得 _memwriter_ 不能进一步地访问共享内存。
#### 示例 4. _memreader_ 进程的源代码
```c
/** Compilation: gcc -o memreader memreader.c -lrt -lpthread **/
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <semaphore.h>
#include <string.h>
#include "shmem.h"
void report_and_exit(const char* msg) {
perror(msg);
exit(-1);
}
int main() {
int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* empty to begin */
if (fd < 0) report_and_exit("Can't get file descriptor...");
/* get a pointer to memory */
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
ByteSize, /* how many bytes */
PROT_READ | PROT_WRITE, /* access protections */
MAP_SHARED, /* mapping visible to other processes */
fd, /* file descriptor */
0); /* offset: start at 1st byte */
if ((caddr_t) -1 == memptr) report_and_exit("Can't access segment...");
/* create a semaphore for mutual exclusion */
sem_t* semptr = sem_open(SemaphoreName, /* name */
O_CREAT, /* create the semaphore */
AccessPerms, /* protection perms */
0); /* initial value */
if (semptr == (void*) -1) report_and_exit("sem_open");
/* use semaphore as a mutex (lock) by waiting for writer to increment it */
if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
int i;
for (i = 0; i < strlen(MemContents); i++)
write(STDOUT_FILENO, memptr + i, 1); /* one byte at a time */
sem_post(semptr);
}
/* cleanup */
munmap(memptr, ByteSize);
close(fd);
sem_close(semptr);
unlink(BackingFile);
return 0;
}
```
_memwriter_ 和 _memreader_ 程序中,共享内存的主要着重点都在 **shm_open****mmap** 函数上:在成功时,第一个调用返回一个备份文件的文件描述符,而第二个调用则使用这个文件描述符从共享内存段中获取一个指针。它们对 **shm_open** 的调用都很相似,除了 _memwriter_ 程序创建共享内存,而 _memreader_ 只获取这个已经创建
的内存:
```c
int fd = shm_open(BackingFile, O_RDWR | O_CREAT, AccessPerms); /* memwriter */
int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* memreader */
```
手握文件描述符,接着对 **mmap** 的调用就是类似的了:
```c
caddr_t memptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
```
**mmap** 的第一个参数为 **NULL**,这意味着让系统自己决定在虚拟内存地址的哪个地方分配内存,当然也可以指定一个地址(但很有技巧性)。**MAP_SHARED** 标志着被分配的内存在进程中是共享的,最后一个参数(在这个例子中为 0 意味着共享内存的偏移量应该为第一个字节。**size** 参数特别指定了将要分配的字节数目(在这个例子中是 512另外的保护参数AccessPerms暗示着共享内存是可读可写的。
_memwriter_ 程序执行成功后,系统将创建并维护备份文件,在我的系统中,该文件为 _/dev/shm/shMemEx_,其中的 _shMemEx_ 是我为共享存储命名的(在头文件 _shmem.h_ 中给定)。在当前版本的 _memwriter__memreader_ 程序中,下面的语句
```c
shm_unlink(BackingFile); /* removes backing file */
```
将会移除备份文件。假如没有 **unlink** 这个词,则备份文件在程序终止后仍然持久地保存着。
_memreader_ 和 _memwriter_ 一样,在调用 **sem_open** 函数时,通过信号量的名字来获取信号量。但 _memreader_ 随后将进入等待状态,直到 _memwriter_ 将初始值为 0 的信号量的值增加。
```c
if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
```
一旦等待结束_memreader_ 将从共享内存中读取 ASCII 数据,然后做些清理工作并终止。
共享内存 API 包括显式地同步共享内存段和备份文件。在这次的示例中,这些操作都被省略了,以免文章显得杂乱,好让我们专注于内存共享和信号量的代码。
即便在信号量代码被移除的情况下_memwriter_ 和 _memreader_ 程序很大几率也能够正常执行而不会引入竞争条件_memwriter_ 创建了共享内存段然后立即向它写入_memreader_ 不能访问共享内存,直到共享内存段被创建好。然而,只要有 _写_ 操作参与其中,最佳实践就是对共享内存进行同步。信号量 API 足够重要,值得在代码示例中着重强调。
### 总结
上面共享文件和共享内存的例子展示了进程是怎样通过 _共享存储_ 来进行通信的,前者通过文件而后者通过内存块。这两种方法的 API 相对来说都很直接。这两种方法有什么共同的缺点吗?现代的应用经常需要处理流数据,而且是非常大规模的数据流。共享文件或者共享内存的方法都不能很好地处理大规模的流数据。按照类型使用管道会更加合适一些。所以这个系列的第二部分将会介绍管道和消息队列,同样的,我们将使用 C 语言写的代码示例来辅助讲解。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/interprocess-communication-linux-storage
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
[2]: https://en.wikipedia.org/wiki/Inter-process_communication
[3]: http://condor.depaul.edu/mkalin
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with social media sentiment analysis in Python)
[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python)
[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)
使用 Python 进行社交媒体情感分析入门
======
学习自然语言处理的基础知识并探索两个有用的 Python 包。
![Raspberry Pi and Python][1]
自然语言处理NLP是机器学习的一个分支它研究口语或书面语言与对这些语言进行计算机辅助分析之间的关联。日常生活中我们体验着无数的 NLP 创新,从写作帮助和建议,到实时语音翻译,还有口译。
本文研究了 NLP 的一个特定领域:情感分析。重点是确定输入语言的积极、消极或中性性质。本部分将解释 NLP 和情感分析的背景,并探讨两个开源的 Python 包。[第 2 部分][2]将演示如何开始构建自己的可扩展情感分析服务。
在学习情感分析时,对 NLP 有一个大体了解是有帮助的。本文不会深入研究数学本质。相反,我们的目标是阐明 NLP 中的关键概念,这些概念对于将这些方法实际结合到你的解决方案中至关重要。
### 自然语言和文本数据
合理的起点是从定义开始:“什么是自然语言?”它是我们人类相互交流的方式,沟通的主要方式是口语和文字。我们可以更进一步,只关注文本交流。毕竟,生活在 Siri、Alexa 等无处不在的时代,我们知道语音与文本之间只隔着一组计算。
### 数据前景和挑战
我们只考虑使用文本数据,那么我们可以对语言和文本做什么呢?首先是语言,特别是英语,除了规则还有很多例外,含义的多样性和语境差异可能使人类口译员都感到困惑,更不用说计算机翻译了。在小学,我们学习冠词和标点符号,通过讲母语,我们获得了分辨那些直觉上表示唯一意义的词的能力。比如,像 "a"、"the" 和 "or" 这样的词,它们在 NLP 中被称为 _停用词_,因为传统上 NLP 算法在序列中找到这些词时会停止搜索。
由于我们的目标是自动将文本分类为情感类,因此我们需要一种以计算方式处理文本数据的方法。因此,我们必须考虑如何向机器表示文本数据。众所周知,利用和解释语言的规则很复杂,输入文本的大小和结构可能会有很大差异。我们需要将文本数据转换为数字数据,这是机器和数学的首选方式。这种转变属于 _特征提取_ 的范畴。
在提取输入文本数据的数字表示形式后,一个改进可能是:给定一个文本输入体,为上面列出的这类词确定一组向量统计数据,并根据这些数据对文档进行分类。例如,过多的副词可能意味着撰稿人很愤怒,或者过度使用停用词可能有助于识别含有内容填充的学期论文。诚然,这可能与我们情感分析的目标没有太大关系。
### 词袋
当你评估一个文本陈述是积极还是消极的时候,你使用哪些上下文来评估它的极性?(例如,文本中是否具有积极的、消极的或中性的情感)一种方式是隐含形容词:被称为 "disgusting" 的东西被认为是消极的,但如果同样的东西被称为 "beautiful",你会认为它是积极的。从定义上讲,俗语给人一种熟悉感,通常是积极的,而脏话可能是敌意的表现。文本数据也可以包括表情符号,它带有固定的情感。
理解单个单词的极性影响为文本的[_词袋_][3](BoW) 模型提供了基础。它考虑一组单词或词汇表,并提取关于这些单词在输入文本中是否存在的度量。词汇表是通过考虑极性已知的文本形成的,称为 _标记的训练数据_。从这组标记数据中提取特征,然后分析特征之间的关系,并将标签与数据关联起来。
“词袋”这个名称说明了它的用途:即不考虑空间位置或上下文的单个词。词汇表通常由训练集中出现的所有单词构建,之后往往会加以删减。如果在训练之前没有清理停用词,那么停用词会因为其高频率和低语境(含义)而被移除。很少使用的单词也可以删除,因为它们在一般情况下能提供的信息很少。
但是,需要注意的是,你可以(并且应该)进一步考虑一个单词在单个训练数据实例(文档)之内出现的频次,这被称为[_词频_][4]TF。你还应该考虑一个单词在全部输入数据实例中的出现情况通常一个词出现在多少文档中也是一个显著的信号据此可以得到[_逆文本频率指数_][5]IDF。这些指标一定会在本主题的其他文章和软件包中提及因此了解它们会有所帮助。
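为了更直观一点,下面用 scikit-learn 粗略演示词袋和 TF-IDF 特征提取的样子。这只是一个示意性草图scikit-learn 并不是本文后面要介绍的软件包,语料也是随手编的,接口以较新版本的 scikit-learn 为准:
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "the movie was beautiful",
    "the movie was disgusting",
]

# 词袋:每个文档变成一个词计数向量
bow = CountVectorizer()
counts = bow.fit_transform(corpus)
print(bow.get_feature_names_out())   # 词汇表
print(counts.toarray())              # 每行是一个文档的词频向量

# TF-IDF在词频的基础上按逆文本频率加权
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(corpus).toarray())
```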
词袋在许多文档分类应用程序中很有用。然而,在情感分析中,当缺乏情境意识的问题被利用时,事情就可以解决。考虑以下句子:
* 我们不喜欢这场战争。
* 我讨厌下雨天,好事是今天是晴天。
* 这不是生死攸关的问题。
这些短语的情感对于人类口译员来说是有难度的,而且由于严格关注单个词汇的实例,对于机器翻译来说也是困难的。
在 NLP 中也可以考虑称为 _n-grams_ 的单词分组。一个二元组考虑两个相邻单词组成的组,而不是(或者除了)单个词袋。这应该可以缓解诸如上述“不喜欢”之类的情况,但由于缺乏语境意识,它仍然是个问题。此外,在上面的第二句中,下半句的情感语境可以被理解为对前半部分的否定。因此,这种方法也会丢失上下文线索的空间局部性。
从实用角度来看,使问题复杂化的是从给定输入文本中提取的特征的稀疏性。对于一个完整的大型词汇表,每个单词都有一个计数,可以将其视为一个整数向量。大多数文档的向量中都有大量的零计数,这给操作增加了不必要的空间和时间复杂度。虽然已经提出了许多用于降低这种复杂性的简便方法,但它仍然是一个问题。
### 词嵌入
词嵌入是一种分布式表示,它允许具有相似含义的单词具有相似的表示。这是通过使用实值向量来表示单词与其周围单词之间的关联实现的。重点在于单词被使用的方式,而不仅仅是它们是否存在。此外,词嵌入的一个巨大语用优势是它们对密集向量的关注:通过摆脱具有大量零值向量元素的单词计数模型,词嵌入在时间和存储方面提供了一个更高效的计算范例。
以下是两个优秀的词嵌入方法。
#### Word2vec
第一个是 [Word2vec][6],它是由 Google 开发的。随着你对 NLP 和情感分析研究的深入,你可能会看到这种嵌入方法。它要么使用一个 _连续词袋_CBOW模型要么使用一个 _连续 skip-gram_ 模型。在 CBOW 中,一个单词的上下文是在训练中根据围绕它的单词来学习的;连续 skip-gram 则学习的是倾向于出现在给定单词周围的单词。虽然这可能超出了你需要解决的问题,但如果你曾经面对必须生成自己的词嵌入的情况,那么 Word2vec 的作者提倡使用 CBOW 方法来提高速度并评估高频单词,而在稀有单词的嵌入更重要的场合skip-gram 方法更为合适。
#### GloVe
第二个是斯坦福大学开发的 [_Global Vectors for Word Representation_][7]GloVe。它是 Word2vec 方法的扩展,试图将通过经典的全局文本统计特征提取获得的信息与 Word2vec 确定的局部上下文信息相结合。实际上在一些应用程序中GloVe 性能优于 Word2vec而在另一些应用程序中则不如 Word2vec。最终用于词嵌入的目标数据集将决定哪种方法最优因此最好了解它们的存在和大致原理因为你很可能会遇到它们。
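如果你好奇训练词嵌入的代码大概长什么样,下面是一个使用 gensim 库的示意性草图(语料小得不切实际,仅用来展示接口的形状;真实训练需要大得多的语料):
```
from gensim.models import Word2Vec

# 每个句子是一个分好词的列表;真实场景应使用海量语料
sentences = [
    ["the", "movie", "was", "beautiful"],
    ["the", "movie", "was", "disgusting"],
    ["we", "do", "not", "like", "the", "war"],
]

model = Word2Vec(sentences, min_count=1)   # 默认使用 CBOW 训练
vector = model.wv["movie"]                 # "movie" 的稠密实值向量
print(vector.shape)
print(model.wv.most_similar("movie"))      # 向量空间中最相近的词
```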
#### 创建和使用词嵌入
最后,知道如何获得词嵌入是有用的。在第 2 部分中,你将看到我们通过利用社区中其他人的实质性工作,可以说我们是站在了巨人的肩膀上。这是获取词嵌入的一种方法:即使用现有的经过训练和验证的模型。实际上,有无数的模型适用于英语和其他语言,一定会有一种模型可以满足你的应用程序,让你开箱即用!
如果没有的话,就开发工作而言,另一个极端是训练你自己的独立模型,而不考虑你的应用程序。实质上,你需要获得大量标记的训练数据,并可能使用上述方法之一来训练模型。即使这样,你仍然只是获得了对输入文本数据的理解;然后,你还需要为你的应用程序开发一个特定的模型(例如,分析软件版本控制消息中的情感价值),这反过来又需要自己的时间和精力。
你还可以为你的应用程序数据训练一个词嵌入,虽然这可以减少时间和精力,但这个词嵌入将是特定于应用程序的,这将会降低它的可重用性。
### 可用的工具选项
考虑到所需的大量时间和计算能力,你可能想知道如何才能找到解决问题的方法。的确,开发可靠模型的复杂性可能令人望而生畏。但是,有一个好消息:已经有许多经过验证的模型、工具和软件库可以为我们提供所需的大部分内容。我们将重点关注 [Python][8],因为它为这些应用程序提供了大量方便的工具。
#### SpaCy
[SpaCy][9] 提供了许多用于解析输入文本数据和提取特征的语言模型。它经过了高度优化并被誉为同类中最快的库。最棒的是它是开源的SpaCy 会执行分词tokenization、词性标注和依赖关系标注。它包含了用于执行此功能的词嵌入模型还能对超过 46 种语言执行其他特征提取操作。在本系列的第二篇文章中,你将看到它如何被用于文本分析和特征提取。
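下面是一个示意性的小例子,展示用 spaCy 做分词、词性标注和停用词判断(假设你已经通过 `python -m spacy download en_core_web_sm` 下载了小型英语模型,示例句子是随手写的):
```
import spacy

nlp = spacy.load("en_core_web_sm")   # 加载小型英语模型
doc = nlp("I hate rainy days, but the sun is out today.")

for token in doc:
    # 输出每个词条的文本、词性,以及它是否为停用词
    print(token.text, token.pos_, token.is_stop)
```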
#### vaderSentiment
[vaderSentiment][10] 包提供了积极、消极和中性情绪的衡量标准。正如其[原始论文][11]的标题“VADER一个基于规则的社交媒体文本情感分析模型”所示这些模型是专门为社交媒体文本数据开发和调整的。VADER 接受了一组完整的人类标记数据的训练包括常见的表情符号、UTF-8 编码的表情符号以及口语术语和缩写(例如 meh、lol、sux
对于给定的输入文本数据vaderSentiment 返回一个极性分数百分比的三元组。它还提供了一个单个的评分标准,称为 _vaderSentiment 复合指标_。这是一个在 **[-1, 1]** 范围内的实值,其中对于分值大于 **0.05** 的情绪被认为是积极的,对于分值小于 **-0.05** 的被认为是消极的,否则为中性。
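它的用法大致如下(一个示意性的小例子,输入文本是随手写的):
```
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("I hate rainy days, but today is sunny :)")
print(scores)   # 类似 {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

# 按正文所述的阈值解读复合指标
if scores["compound"] > 0.05:
    print("积极")
elif scores["compound"] < -0.05:
    print("消极")
else:
    print("中性")
```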
在[第 2 部分][2]中,你将学习如何使用这些工具为你的设计添加情感分析功能。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python
作者:[Michael McCune ][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/elmiko/users/jschlessman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Raspberry Pi and Python)
[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-2
[3]: https://en.wikipedia.org/wiki/Bag-of-words_model
[4]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Term_frequency
[5]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Inverse_document_frequency
[6]: https://en.wikipedia.org/wiki/Word2vec
[7]: https://en.wikipedia.org/wiki/GloVe_(machine_learning)
[8]: https://www.python.org/
[9]: https://pypi.org/project/spacy/
[10]: https://pypi.org/project/vaderSentiment/
[11]: http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Format Python however you like with Black)
[#]: via: (https://opensource.com/article/19/5/python-black)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez/users/moshez)
使用 Black 随意格式化 Python
======
> 在我们覆盖 7 个 PyPI 库的系列文章中了解解决 Python 问题的更多信息。
![OpenStack source code \(Python\) in VIM][1]
Python 是当今使用最多的[流行编程语言][2]之一,因为:它是开源的,它有广泛的用途(例如 Web 编程、业务应用、游戏、科学编程等等),它有一个充满活力和专注的社区支持它。这个社区可以让我们在 [Python Package Index][3]PyPI中有如此庞大、多样化的软件包用以扩展和改进 Python 并解决不可避免的问题。
在本系列中,我们将介绍七个可以帮助你解决常见 Python 问题的 PyPI 库。在第一篇文章中,我们了解了 [Cython][4]。今天,我们将使用 [Black][5] 这个代码格式化工具。
### Black
有时创意可能是一件美妙的事情。有时它只是一种痛苦。我喜欢创造性地解决难题,但我希望我的 Python 格式尽可能一致。没有人对使用“有趣”缩进的代码印象深刻。
但是比不一致的格式更糟糕的是:除了检查格式之外什么都没有做的代码审查。这对审查者来说很烦人,对于被审查者来说甚至更烦人。当你的 linter 告诉你代码缩进不正确、却不提示*正确*的缩进量时,也会令人气愤。
使用 Black它不会告诉你*要*做什么,它是一个优良、勤奋的机器人:它将为你修复代码。
要了解它如何工作的,请随意写一些非常不一致的内容,例如:
```
def add(a, b): return a+b
def mult(a, b):
    return \
        a * b
```
Black 抱怨了么?并没有,它为你修复了!
```
$ black math
reformatted math
All done! ✨ 🍰 ✨
1 file reformatted.
$ cat math
def add(a, b):
return a + b
def mult(a, b):
return a * b
```
Black 确实提供了报错而不是修复的选项,甚至还有输出 diff 编辑样式的选项。这些选项在持续集成CI系统中非常有用可以强制要求在本地运行过 Black。此外如果 diff 输出被记录到 CI 输出中,那么在极少数情况下,当你需要修复输出却无法在本地安装 Black 时,可以直接将其粘贴给 `patch` 命令使用。
```
$ black --check --diff bad
--- math 2019-04-09 17:24:22.747815 +0000
+++ math 2019-04-09 17:26:04.269451 +0000
@@ -1,7 +1,7 @@
-def add(a, b): return a + b
+def add(a, b):
+    return a + b
 def mult(a, b):
-    return \
-        a * b
+    return a * b
would reformat math
All done! 💥 💔 💥
1 file would be reformatted.
$ echo $?
1
```
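顺带一提Black 也可以作为库在 Python 代码中调用。下面是一个示意性的小例子(注意:这个编程接口在不同的 Black 版本之间可能有所变化,请以你安装版本的文档为准):
```
import black

ugly = "def add(a, b): return a+b\n"
# format_str 返回按 Black 规则重新排版后的源代码字符串
pretty = black.format_str(ugly, mode=black.FileMode())
print(pretty)
```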
在本系列的下一篇文章中,我们将介绍 attrs这是一个可以帮助你快速编写简洁、正确的代码的库。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/python-black
作者:[Moshe Zadka ][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez/users/moshez/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_1.jpg?itok=lHQK5zpm (OpenStack source code (Python) in VIM)
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
[3]: https://pypi.org/
[4]: https://opensource.com/article/19/4/7-python-problems-solved-cython
[5]: https://pypi.org/project/black/

View File

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Say goodbye to boilerplate in Python with attrs)
[#]: via: (https://opensource.com/article/19/5/python-attrs)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez)
使用 attrs 来告别 Python 中的样板
======
> 在我们覆盖 7 个 PyPI 库的系列文章中了解更多解决 Python 问题的信息。
![Programming at a browser, orange hands][1]
Python 是当今使用最多的[流行编程语言][2]之一,因为:它是开源的,它有广泛的用途(例如 Web 编程、业务应用、游戏、科学编程等等),它有一个充满活力和专注的社区支持它。这个社区可以让我们在 [Python Package Index][3]PyPI中有如此庞大、多样化的软件包用以扩展和改进 Python并解决不可避免的问题。
在本系列中,我们将介绍七个可以帮助你解决常见 Python 问题的 PyPI 库。今天,我们将研究 [**attrs**][4],这是一个帮助你快速编写简洁、正确的代码的 Python 包。
### attrs
如果你已经写过一段时间的 Python那么你可能习惯这样写代码
```
class Book(object):
def __init__(self, isbn, name, author):
self.isbn = isbn
self.name = name
self.author = author
```
接着写一个 **__repr__** 函数。否则,很难记录 **Book** 的实例:
```
def __repr__(self):
return f"Book({self.isbn}, {self.name}, {self.author})"
```
接下来你会写一个好看的 docstring 来记录期望的类型。但是你注意到你忘了添加 **edition****published_year** 属性,所以你必须在五个地方修改它们。
如果你不必这么做会怎样呢?
```
@attr.s(auto_attribs=True)
class Book(object):
isbn: str
name: str
author: str
published_year: int
edition: int
```
使用新的类型注解语法标注各个类型属性,**attrs** 会检测到这些注解并创建一个类。
ISBN 有特定格式。如果我们想强制使用该格式怎么办?
```
@attr.s(auto_attribs=True)
class Book(object):
isbn: str = attr.ib()
@isbn.validator
def pattern_match(self, attribute, value):
m = re.match(r"^(\d{3}-)\d{1,3}-\d{2,3}-\d{1,7}-\d$", value)
if not m:
raise ValueError("incorrect format for isbn", value)
name: str
author: str
published_year: int
edition: int
```
**attrs** 库也对[不可变风格编程][5]支持良好。将第一行改成 **@attr.s(auto_attribs=True, frozen=True)** 意味着 **Book** 现在是不可变的:尝试修改一个属性将会引发一个异常。取而代之,比如,如果希望将出版年份往后推一年,我们可以用 **attr.evolve(old_book, published_year=old_book.published_year+1)** 来得到一个 _新的_ 实例。
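把这些特性放在一起,大致用起来是这个样子(一个示意性的完整小例子,书目数据是虚构的):
```
import re

import attr

@attr.s(auto_attribs=True, frozen=True)
class Book(object):
    isbn: str = attr.ib()

    @isbn.validator
    def pattern_match(self, attribute, value):
        m = re.match(r"^(\d{3}-)\d{1,3}-\d{2,3}-\d{1,7}-\d$", value)
        if not m:
            raise ValueError("incorrect format for isbn", value)

    name: str
    author: str
    published_year: int
    edition: int

book = Book("978-0-262-03384-8", "Example", "Someone", 2015, 1)
print(book)                     # attrs 自动生成了 __repr__
newer = attr.evolve(book, published_year=book.published_year + 1)
print(newer.published_year)     # 2016

try:
    Book("not-an-isbn", "X", "Y", 2000, 1)   # 验证器会拒绝这个 ISBN
except ValueError as err:
    print(err)
```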
本系列的下一篇文章我们将来看下 **singledispatch**,一个能让你向 Python 库添加方法的库。
#### 查看本系列先前的文章
* [Cython][6]
* [Black][7]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/python-attrs
作者:[Moshe Zadka ][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_keyboard_orange_hands.png?itok=G6tJ_64Y (Programming at a browser, orange hands)
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
[3]: https://pypi.org/
[4]: https://pypi.org/project/attrs/
[5]: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures
[6]: https://opensource.com/article/19/4/7-python-problems-solved-cython
[7]: https://opensource.com/article/19/4/python-problems-solved-black

View File

@ -0,0 +1,200 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Create SSH Alias In Linux)
[#]: via: (https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
如何在 Linux 中创建 SSH 别名
======
![How To Create SSH Alias In Linux][1]
如果你经常通过 SSH 访问许多不同的远程系统,这个技巧将为你节省一些时间。你可以为经常通过 SSH 访问的系统创建别名这样你就不必记住所有那些不同的用户名、主机名、SSH 端口号和 IP 地址。它还避免了在通过 SSH 连接 Linux 服务器时重复输入相同的用户名、主机名、IP 地址和端口号。
### 在 Linux 中创建 SSH 别名
在我知道这个技巧之前,我通常使用以下任意一种方式通过 SSH 连接到远程系统。
使用 IP 地址:
```
$ ssh 192.168.225.22
```
或使用端口号、用户名和 IP 地址:
```
$ ssh -p 22 sk@192.168.225.22
```
或使用端口号、用户名和主机名:
```
$ ssh -p 22 sk@server.example.com
```
这里
* **22** 是端口号,
* **sk** 是远程系统的用户名,
* **192.168.225.22** 是我远程系统的 IP
* **server.example.com** 是远程系统的主机名。
我相信大多数新手 Linux 用户和(或一些)管理员都会以这种方式通过 SSH 连接到远程系统。但是,如果你通过 SSH 连接到多个不同的系统,记住所有主机名或 IP 地址,还有用户名是困难的,除非你将它们写在纸上或者将其保存在文本文件中。别担心!这可以通过为 SSH 连接创建别名(或快捷方式)轻松解决。
我们可以用两种方法为 SSH 命令创建别名。
##### 方法 1 使用 SSH 配置文件
这是我创建别名的首选方法。
我们可以使用 SSH 默认配置文件来创建 SSH 别名。为此,编辑 **~/.ssh/config** 文件(如果此文件不存在,只需创建一个):
```
$ vi ~/.ssh/config
```
添加所有远程主机的详细信息,如下所示:
```
Host webserver
HostName 192.168.225.22
User sk
Host dns
HostName server.example.com
User root
Host dhcp
HostName 192.168.225.25
User ostechnix
Port 2233
```
![][2]
使用 SSH 配置文件在 Linux 中创建 SSH 别名
将 **Host**、**Hostname**、**User** 和 **Port** 的值替换为你自己的值。添加所有远程主机的详细信息后,保存并退出该文件。
现在你可以使用以下命令通过 SSH 进入系统:
```
$ ssh webserver
$ ssh dns
$ ssh dhcp
```
就是这么简单!
看看下面的截图。
![][3]
使用 SSH 别名访问远程系统
看到了吗?我只使用别名(例如 **webserver**)来访问 IP 地址为 **192.168.225.22** 的远程系统。
请注意,这只适用于当前用户。如果要为所有用户(系统范围内)提供别名,请在 **/etc/ssh/ssh_config** 文件中添加以上行。
你还可以在 SSH 配置文件中添加许多其他内容。例如,如果你[**已配置基于 SSH 密钥的身份验证**][4],可以像下面这样指明 SSH 密钥文件的位置:
```
Host ubuntu
HostName 192.168.225.50
User senthil
IdentityFile ~/.ssh/id_rsa_remotesystem
```
确保已使用你自己的值替换主机名、用户名和 SSH 密钥文件路径。
现在使用以下命令连接到远程服务器:
```
$ ssh ubuntu
```
这样,你可以添加希望通过 SSH 访问的任意多台远程主机,并使用别名快速访问它们。
##### 方法 2 使用 Bash 别名
这是一种快速而粗糙的创建 SSH 别名的变通方法,可以加快操作速度。你可以使用 [**alias 命令**][5]使这项任务更容易。
打开 **~/.bashrc** 或者 **~/.bash_profile** 文件,并添加如下别名:
```
alias webserver='ssh sk@192.168.225.22'
alias dns='ssh root@server.example.com'
alias dhcp='ssh ostechnix@192.168.225.25 -p 2233'
alias ubuntu='ssh senthil@192.168.225.50 -i ~/.ssh/id_rsa_remotesystem'
```
再次确保你已使用自己的值替换主机、主机名、端口号和 IP 地址。保存文件并退出。
然后,使用命令应用更改:
```
$ source ~/.bashrc
```
或者
```
$ source ~/.bash_profile
```
在此方法中,你甚至不需要运行 `ssh 别名` 这样的命令,只需直接使用别名即可,如下所示:
```
$ webserver
$ dns
$ dhcp
$ ubuntu
```
![][6]
这两种方法非常简单,但对于经常通过 SSH 连接到多个不同系统的人来说非常有用,而且非常方便。使用适合你的上述任何一种方法,通过 SSH 快速访问远程 Linux 系统。
* * *
**建议阅读:**
* [**允许或拒绝 SSH 访问 Linux 中的特定用户或组**][7]
* [**如何在 Linux 上 SSH 到特定目录**][8]
* [**如何在 Linux 中断开 SSH 会话**][9]
* [**4 种方式在退出 SSH 会话后保持命令运行**][10]
* [**SSLH 共享相同端口的 HTTPS 和 SSH**][11]
* * *
目前这就是全部了,希望它对你有帮助。更多好东西要来了,敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
作者:[sk][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/ssh-alias-720x340.png
[2]: http://www.ostechnix.com/wp-content/uploads/2019/04/Create-SSH-Alias-In-Linux.png
[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias.png
[4]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
[5]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias-1.png
[7]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
[8]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
[9]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
[11]: https://www.ostechnix.com/sslh-share-port-https-ssh/