Merge remote-tracking branch 'LCTT/master'

Xingyu.Wang 2019-04-17 13:38:27 +08:00
commit 6781525287
31 changed files with 4742 additions and 740 deletions

View File

@@ -92,7 +92,7 @@ via: https://itsfoss.com/history-of-firefox
[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
[5]: http://viola.org/
[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
[8]: http://www.davetitus.com/mozilla/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
@@ -110,7 +110,7 @@ via: https://itsfoss.com/history-of-firefox
[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
[23]: https://en.wikipedia.org/wiki/Red_panda
[24]: https://en.wikipedia.org/wiki/Flock_(web_browser)
[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
[26]: https://itsfoss.com/why-firefox/
[27]: https://itsfoss.com/firefox-quantum-ubuntu/

View File

@@ -1,45 +1,35 @@
[#]: collector: (lujun9972)
[#]: translator: (heguangzhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10736-1.html)
[#]: subject: (How To Check The List Of Open Ports In Linux?)
[#]: via: (https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How to check the list of open ports in Linux?
======
We recently wrote two articles on the same topic. Those articles help you check whether a given port is open on a remote server.
If you want to [check whether a port is open on a remote Linux system][1], follow that link. If you want to [check whether a port is open on multiple remote Linux systems][2], or [check the status of multiple ports on multiple remote Linux systems][2], follow those links.
This article, however, helps you check the list of open ports on the local system.
There are only a few utilities available in Linux for this purpose. Here, I am providing the four most important Linux commands for checking it.
You can get this job done with the following four commands. They are very well known and widely used by Linux administrators.
* `netstat`: netstat ("network statistics") is a command-line tool that displays information about network connections, both incoming and outgoing, such as routing tables, masqueraded connections, multicast memberships, and network ports.
* `nmap`: Nmap ("Network Mapper") is an open-source tool for network exploration and security auditing. It is designed to scan large networks quickly.
* `ss`: ss is used to dump socket statistics. It can display information similar to netstat, and it can show more TCP and state information than other tools.
* `lsof`: lsof stands for "List Open Files". It lists all files that are opened by some process.
### How to check the list of open ports in a system with the Linux netstat command?
`netstat` stands for Network Statistics; it is a command-line tool that displays information about network connections, both incoming and outgoing, such as routing tables, masqueraded connections, multicast memberships, and network ports.
It can list all TCP and UDP connections as well as all Unix socket connections.
It is used to discover network problems and to determine the number of network connections.
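As a quick illustration of the flag combination this section relies on: `-t` and `-u` select TCP and UDP sockets, `-l` limits output to listening sockets, `-p` shows the owning process, `-g` shows multicast group memberships, and `-n` keeps the output numeric. A minimal invocation (run as root, matching the prompts used in this article) looks like this:
```
# netstat -tplugn
```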
@@ -81,7 +71,7 @@ eth0 1 ff02::1
eth0 1 ff01::1
```
You can also check a specific port with the following command.
```
# netstat -tplugn | grep :22
@@ -92,7 +82,7 @@ tcp6 0 0 :::22 :::* LISTEN
### How to check the list of open ports in a system with the Linux ss command?
`ss` is used to dump socket statistics. It can display information similar to `netstat`, and it can show more TCP and state information than other tools.
```
# ss -lntu
@@ -121,7 +111,7 @@ tcp LISTEN 0 100 :::25
tcp LISTEN 0 128 :::22 :::*
```
You can also check a specific port with the following command.
```
# ss -lntu | grep ':25'
@@ -132,12 +122,11 @@ tcp LISTEN 0 100 :::25 :::*
### How to check the list of open ports in a system with the Linux nmap command?
Nmap ("Network Mapper") is an open-source tool for network exploration and security auditing. It is designed to scan large networks quickly, although it also works fine against single hosts.
Nmap uses raw IP packets in novel ways to determine which hosts are available on the network, what services (application name and version) those hosts offer, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics.
While Nmap is commonly used for security audits, many systems and network administrators also find it useful for routine tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime.
```
# nmap -sTU -O localhost
@@ -166,9 +155,7 @@ OS detection performed. Please report any incorrect results at http://nmap.org/s
Nmap done: 1 IP address (1 host up) scanned in 1.93 seconds
```
You can also check a specific port with the following command.
```
# nmap -sTU -O localhost | grep 123
@@ -176,10 +163,9 @@ Nmap done: 1 IP address (1 host up) scanned in 1.93 seconds
123/udp open ntp
```
### How to check the list of open ports in a system with the Linux lsof command?
It shows you the list of files opened on the system, along with the processes that opened them. It also shows you other information related to those files.
```
# lsof -i
@@ -214,8 +200,7 @@ httpd 13374 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
httpd 13375 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
```
You can also check a specific port with the following command.
```
# lsof -i:80
@@ -236,11 +221,11 @@ via: https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/
Author: [Magesh Maruthamuthu][a]
Curated by: [lujun9972][b]
Translator: [heguangzhi](https://github.com/heguangzhi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10675-1.html
[2]: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/

View File

@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10740-1.html)
[#]: subject: (The Fargate Illusion)
[#]: via: (https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html)
[#]: author: (Lee Briggs https://leebriggs.co.uk/)
@@ -30,9 +30,9 @@
I had a clean AWS account and was determined to go from zero to a deployed webapp. As with any other infrastructure in AWS, I had to get the basic underlying infrastructure working first, so I needed to start by defining a VPC.
Following best practices, I divided this VPC into subnets across availability zones (AZs): a public subnet and private subnets. It occurred to me at this point that as long as this kind of infrastructure-setup work is needed, I'll never be out of a job. The notion that AWS is “no-ops” has always irritated me. Many people in the developer community take it for granted that setting up and defining a well-designed AWS account and infrastructure takes little work or effort. And that assumption kicks in *before* we even start talking about multi-account architecture: here I was still using a single account, and I already had to define the infrastructure and traditional network gear.
It's also worth remembering that I've done this many times before, so I knew *exactly* what to do. I could have used the default VPC in my account along with the pre-provisioned subnets, and I suspect many people just getting started could do the same. It took me about half an hour to get this running, but I couldn't help thinking that even if I just wanted to run a lambda function, I'd still need some kind of connectivity and networking. Defining NAT gateways and routing in a VPC doesn't feel very “serverless” at all, but it has to be done to get anywhere.
### Running a simple container
@@ -44,7 +44,7 @@
#### Task definitions
A “task definition” defines the actual containers you want to run. The problem I ran into here is that it's surprisingly complicated. Plenty of the options are straightforward, like specifying the Docker image and memory limits, but I also had to define a networking model and a variety of other options that I wasn't really familiar with. Is this really necessary? If I had come into this process with zero AWS knowledge, I would have felt extremely overwhelmed at this stage. The full list of these [parameters][5] can be found on the AWS page, and the list is long. I knew my container needed some environment variables and needed to expose a port, so I defined that first, with the help of a magical [terraform module][6] that really made this easier. Without it, I would have been hand-writing JSON to define my container definition.
First I defined some environment variables:
@@ -173,7 +173,7 @@ resource "aws_ecs_service" "app" {
##### Load balancers are never far away
Honestly, I was quite satisfied, and I'm not even sure why. I've gotten so used to Kubernetes Services and Ingress objects that I was convinced getting my application onto the web with Kubernetes is easy. Of course, we spent months at $work building a platform to make this easier. I'm a heavy user of [external-dns][8] and [cert-manager][9], which automatically populate DNS entries on Ingress objects and automate TLS certificates, and I'm well aware of the work needed to set these up, but I honestly thought doing this on Fargate would be easier. I recognize that Fargate doesn't claim to be the be-all and end-all of “how to run applications”; it just abstracts away node management. But I had always been told this was *easier* than Kubernetes, so I was genuinely surprised. Defining a load balancer (even if you don't want to use Ingress and an Ingress Controller) is a big part of deploying a service to Kubernetes, and I had to do the very same thing all over again here. It all felt so familiar.
I now realized I needed:
@@ -296,9 +296,9 @@ module "ecs" {
```
What surprised me here is why I had to define a cluster at all. As someone reasonably familiar with ECS, it sort of makes sense, but I tried to look at it from the perspective of a newcomer who has to go through this process: to me it seems surprising that Fargate bills itself as “serverless” yet you still have to define a cluster. It's a small detail, of course, but it really stuck in my mind.
### Tell me your secrets
At this stage, I was happy that I had managed to get something running. However, something from my original success criteria was missing. If we go back to the task definition, you'll remember that my application has an environment variable holding a password:
@@ -315,11 +315,11 @@ container_environment_variables = [
]
```
If I looked at my task definition in the AWS console, my password was sitting right there in glaring plain text. I wanted that not to be the case, so I set about trying to turn it into something else, along the lines of [Kubernetes secrets management][11].
#### AWS SSM
The way Fargate/ECS does the secret management part is to use [AWS SSM][12] (the full name of this service is AWS Systems Manager Parameter Store, but I refuse to use that name because, frankly, the name is stupid).
The AWS documentation [covers this][13] pretty well, so I set about converting it to terraform.
@@ -335,9 +335,9 @@ resource "aws_ssm_parameter" "app_password" {
}
```
Obviously, the key part here is the “SecureString” type. This uses the default AWS KMS key to encrypt the data, which wasn't entirely intuitive to me. It does have a huge advantage over Kubernetes secrets management, where secrets are not encrypted in etcd by default.
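As a side note, you can verify what `SecureString` buys you by reading the parameter back with the AWS CLI. This is a sketch rather than a step from the original walkthrough, and the parameter name `/app/password` is a hypothetical placeholder:
```
# Returns the encrypted blob; add --with-decryption to get the plaintext back
$ aws ssm get-parameter --name /app/password --query 'Parameter.Value' --output text
```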
I then specified another map of local values for ECS and passed it in as a secret parameter:
```
container_secrets = [
@@ -372,7 +372,7 @@ module "container_definition_app" {
##### Something went wrong
At this point, I redeployed my task definition and was extremely confused. Why wasn't the task coming up properly? I kept seeing in the console that even though the new task definition (version 8) was available, the running application was still using the previous one (version 7). Working this out took longer than I expected, but on the events screen in the console I noticed an IAM error. I had missed a step: the container couldn't read the secret from AWS SSM because it didn't have the right IAM permissions. This was the first time I got genuinely frustrated with the whole thing. From a user-experience perspective, the feedback here was really *bad*. If I hadn't noticed, I would have assumed everything was fine, because a task was still running and my application was still reachable at the correct URL; it was just the old configuration.
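For what it's worth, the service events that finally surfaced this IAM error can also be pulled from the CLI instead of the console. A hypothetical probe, assuming the cluster and service are both named `app`:
```
# Show the five most recent ECS service events, where permission failures appear
$ aws ecs describe-services --cluster app --services app \
    --query 'services[0].events[:5].message' --output text
```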
In Kubernetes, I would have clearly seen the error in the pod definition. It's absolutely fantastic that Fargate makes sure my application doesn't go down, but as an operator I need some real feedback about what's going on. This just isn't good enough. I genuinely hope someone on the Fargate team reads this and improves the experience.
@@ -400,7 +400,7 @@ module "container_definition_app" {
#### The Kubernetes debate
And finally: if you view Kubernetes purely as a container orchestration tool, you'll probably love Fargate. However, as I've become more familiar with Kubernetes, I've come to appreciate how important it is as a technology, not only because it's a great container orchestration tool but because of its design patterns: it's a declarative, API-driven platform. A simple thing that struck me throughout this *entire* Fargate process is that if I delete something, Fargate won't necessarily recreate it for me. Autoscaling is nice, and not having to manage servers or patch and update operating systems is great, but I felt like I lost a lot by not having Kubernetes' self-healing, API-driven model. Sure, Kubernetes has a learning curve, but from this experience, so does Fargate.
### Conclusion
@@ -419,7 +419,7 @@ via: https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html
Author: [Lee Briggs][a]
Curated by: [lujun9972][b]
Translator: [Bestony](https://github.com/Bestony)
Proofreader: [wxy](https://github.com/wxy), 临石 (Alibaba Cloud Intelligence technical expert)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

View File

@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@@ -1,130 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (zgj1024 )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why DevOps is the most important tech strategy today)
[#]: via: (https://opensource.com/article/19/3/devops-most-important-tech-strategy)
[#]: author: (Kelly Albrecht, Willy-Peter Schaub https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht)
Why DevOps is the most important tech strategy today
======
Clearing up some of the confusion about DevOps.
![CICD with gears][1]
Many people first learn about [DevOps][2] when they see one of its outcomes and ask how it happened. It's not necessary to understand why something is part of DevOps to implement it, but knowing that—and why a DevOps strategy is important—can mean the difference between being a leader or a follower in an industry.
Maybe you've heard some of the incredible outcomes attributed to DevOps, such as production environments that are so resilient they can handle thousands of releases per day while a "[Chaos Monkey][3]" is running around randomly unplugging things. This is impressive, but on its own, it's a weak business case, essentially burdened with [proving a negative][4]: The DevOps environment is resilient because a serious failure hasn't been observed… yet.
There is a lot of confusion about DevOps and many people are still trying to make sense of it. Here's an example from someone in my LinkedIn feed:
> Recently attended few #DevOps sessions where some speakers seemed to suggest #Agile is a subset of DevOps. Somehow, my understanding was just the opposite.
>
> Would like to hear your thoughts. What do you think is the relationship between Agile and DevOps?
>
> 1. DevOps is a subset of Agile
> 2. Agile is a subset of DevOps
> 3. DevOps is an extension of Agile, starts where Agile ends
> 4. DevOps is the new version of Agile
>
Tech industry professionals have been weighing in on the LinkedIn post with a wide range of answers. How would you respond?
### DevOps' roots in lean and agile
DevOps makes a lot more sense if we start with the strategies of Henry Ford and the Toyota Production System's refinements of Ford's model. Within this history is the birthplace of lean manufacturing, which has been well studied. In [_Lean Thinking_][5], James P. Womack and Daniel T. Jones distill it into five principles:
1. Specify the value desired by the customer
2. Identify the value stream for each product providing that value and challenge all of the wasted steps currently necessary to provide it
3. Make the product flow continuously through the remaining value-added steps
4. Introduce pull between all steps where continuous flow is possible
5. Manage toward perfection so that the number of steps and the amount of time and information needed to serve the customer continually falls
Lean seeks to continuously remove waste and increase the flow of value to the customer. This is easily recognizable and understood through a core tenet of lean: single piece flow. We can do a number of activities to learn why moving single pieces at a time is orders of magnitude faster than moving batches of many pieces; the [Penny Game][6] and the [Airplane Game][7] are two of them. In the Penny Game, if a batch of 20 pennies takes two minutes to get to the customer, they get the whole batch after waiting two minutes. If you move one penny at a time, the customer gets the first penny in about five seconds and continues getting pennies until the 20th penny arrives approximately 25 seconds later.
This is a huge difference, but not everything in life is as simple and predictable as the penny in the Penny Game. This is where agile comes in. We certainly see lean principles on high-performing agile teams, but these teams need more than lean to do what they do.
To be able to handle the unpredictability and variance of typical software development tasks, agile methodology focuses on awareness, deliberation, decision, and action to adjust course in the face of a constantly changing reality. For example, agile frameworks (like scrum) increase awareness with ceremonies like the daily standup and the sprint review. If the scrum team becomes aware of a new reality, the framework allows and encourages them to adjust course if necessary.
For teams to make these types of decisions, they need to be self-organizing in a high-trust environment. High-performing agile teams working this way achieve a fast flow of value while continuously adjusting course, removing the waste of going in the wrong direction.
### Optimal batch size
To understand the power of DevOps in software development, it helps to understand the economics of batch size. Consider the following U-curve optimization illustration from Donald Reinertsen's _[Principles of Product Development Flow][8]:_
![U-curve optimization illustration of optimal batch size][9]
This can be explained with an analogy about grocery shopping. Suppose you need to buy some eggs and you live 30 minutes from the store. Buying one egg (far left on the illustration) at a time would mean a 30-minute trip each time. This is your _transaction cost_. The _holding cost_ might represent the eggs spoiling and taking up space in your refrigerator over time. The _total cost_ is the _transaction cost_ plus your _holding cost_. This U-curve explains why, for most people, buying a dozen eggs at a time is their _optimal batch size_. If you lived next door to the store, it'd cost you next to nothing to walk there, and you'd probably buy a smaller carton each time to save room in your refrigerator and enjoy fresher eggs.
This U-curve optimization illustration can shed some light on why productivity increases significantly in successful agile transformations. Consider the effect of agile transformation on decision making in an organization. In traditional hierarchical organizations, decision-making authority is centralized. This leads to larger decisions made less frequently by fewer people. An agile methodology will effectively reduce an organization's transaction cost for making decisions by decentralizing the decisions to where the awareness and information is the best known: across the high-trust, self-organizing agile teams.
The following animation shows how reducing transaction cost shifts the optimal batch size to the left. You can't overstate the value to an organization of making faster decisions more frequently.
![U-curve optimization illustration][10]
### Where does DevOps fit in?
Automation is one of the things DevOps is most known for. The previous illustration shows the value of automation in great detail. Through automation, we reduce our transaction costs to nearly zero, essentially getting our testing and deployments for free. This lets us take advantage of smaller and smaller batch sizes of work. Smaller batches of work are easier to understand, commit to, test, review, and know when they are done. These smaller batch sizes also contain less variance and risk, making them easier to deploy and, if something goes wrong, to troubleshoot and recover from. With automation combined with a solid agile practice, we can get our feature development very close to single piece flow, providing value to customers quickly and continuously.
More traditionally, DevOps is understood as a way to knock down the walls of confusion between the dev and ops teams. In this model, development teams develop new features, while operations teams keep the system stable and running smoothly. Friction occurs because new features from development introduce change into the system, increasing the risk of an outage, which the operations team doesn't feel responsible for—but has to deal with anyway. DevOps is not just trying to get people working together, it's more about trying to make more frequent changes safely in a complex environment.
We can look to [Ron Westrum][11] for research about achieving safety in complex organizations. In researching why some organizations are safer than others, he found that an organization's culture is predictive of its safety. He identified three types of culture: Pathological, Bureaucratic, and Generative. He found that the Pathological culture was predictive of less safety and the Generative culture was predictive of more safety (e.g., far fewer plane crashes or accidental hospital deaths in his main areas of research).
![Three types of culture identified by Ron Westrum][12]
Effective DevOps teams achieve a Generative culture with lean and agile practices, showing that speed and safety are complementary, or two sides of the same coin. By reducing the optimal batch sizes of decisions and features to become very small, DevOps achieves a faster flow of information and value while removing waste and reducing risk.
In line with Westrum's research, change can happen easily with safety and reliability improving at the same time. When an agile DevOps team is trusted to make its own decisions, we get the tools and techniques DevOps is most known for today: automation and continuous delivery. Through this automation, transaction costs are reduced further than ever, and a near single piece lean flow is achieved, creating the potential for thousands of decisions and releases per day, as we've seen happen in high-performing DevOps organizations.
### Flow, feedback, learning
DevOps doesn't stop there. We've mainly been talking about DevOps achieving a revolutionary flow, but lean and agile practices are further amplified through similar efforts that achieve faster feedback loops and faster learning. In the [_DevOps Handbook_][13], the authors explain in detail how, beyond its fast flow, DevOps achieves telemetry across its entire value stream for fast and continuous feedback. Further, leveraging the [kaizen][14] bursts of lean and the [retrospectives][15] of scrum, high-performing DevOps teams will continuously drive learning and continuous improvement deep into the foundations of their organizations, achieving a lean manufacturing revolution in the software product development industry.
### Start with a DevOps assessment
The first step in leveraging DevOps is, either after much study or with the help of a DevOps consultant and coach, to conduct an assessment across a suite of dimensions consistently found in high-performing DevOps teams. The assessment should identify weak or non-existent team norms that need improvement. Evaluate the assessment's results to find quick wins—focus areas with high chances for success that will produce high-impact improvement. Quick wins are important for gaining the momentum needed to tackle more challenging areas. The teams should generate ideas that can be tried quickly and start to move the needle on the DevOps transformation.
After some time, the team should reassess on the same dimensions to measure improvements and identify new high-impact focus areas, again with fresh ideas from the team. A good coach will consult, train, mentor, and support as needed until the team owns its own continuous improvement and achieves near consistency on all dimensions by continually reassessing, experimenting, and learning.
In the [second part][16] of this article, we'll look at results from a DevOps survey in the Drupal community and see where the quick wins are most likely to be found.
* * *
_Rob Bayliss and Kelly Albrecht will present [DevOps: Why, How, and What][17] and host a follow-up [Birds of a Feather discussion][18] at [DrupalCon 2019][19] in Seattle, April 8-12._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/devops-most-important-tech-strategy
Author: [Kelly Albrecht, Willy-Peter Schaub][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: https://opensource.com/resources/devops
[3]: https://github.com/Netflix/chaosmonkey
[4]: https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative
[5]: https://www.amazon.com/dp/B0048WQDIO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
[6]: https://youtu.be/5t6GhcvKB8o?t=54
[7]: https://www.shmula.com/paper-airplane-game-pull-systems-push-systems/8280/
[8]: https://www.amazon.com/dp/B00K7OWG7O/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
[9]: https://opensource.com/sites/default/files/uploads/batch_size_optimal_650.gif (U-curve optimization illustration of optimal batch size)
[10]: https://opensource.com/sites/default/files/uploads/batch_size_650.gif (U-curve optimization illustration)
[11]: https://en.wikipedia.org/wiki/Ron_Westrum
[12]: https://opensource.com/sites/default/files/uploads/information_flow.png (Three types of culture identified by Ron Westrum)
[13]: https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002/ref=sr_1_3?keywords=DevOps+handbook&qid=1553197361&s=books&sr=1-3
[14]: https://en.wikipedia.org/wiki/Kaizen
[15]: https://www.scrum.org/resources/what-is-a-sprint-retrospective
[16]: https://opensource.com/article/19/3/where-drupal-community-stands-devops-adoption
[17]: https://events.drupal.org/seattle2019/sessions/devops-why-how-and-what
[18]: https://events.drupal.org/seattle2019/bofs/devops-getting-started
[19]: https://events.drupal.org/seattle2019

View File

@@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Nyansa's Voyance expands to the IoT)
[#]: via: (https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Nyansa's Voyance expands to the IoT
======
![Brandon Mowinkel \(CC0\)][1]
Nyansa announced today that their flagship Voyance product can now apply its AI-based secret sauce to [IoT][2] devices, over and above the networking equipment and IT endpoints it could already manage.
Voyance, a network management product that leverages AI to automate the discovery of devices on the network and identify unusual behavior, has been around for two years now, and Nyansa says that it's being used to observe a total of 25 million client devices operating across roughly 200 customer networks.
**More on IoT:**
* [Most powerful Internet of Things companies][3]
* [10 Hot IoT startups to watch][4]
* [The 6 ways to make money in IoT][5]
* [Blockchain, service-centric networking key to IoT success][7]
* [Getting grounded in IoT networking and security][8]
* [Building IoT-ready networks must become a priority][9]
* [What is the Industrial IoT? [And why the stakes are so high]][10]
It's a software-only product (available either via public SaaS or private cloud) that works by scanning a customer's network and identifying every device attached to it, then establishing a behavioral baseline that will let it flag suspicious actions (e.g., sending a lot more data than other devices of its kind, connecting to unusual servers) and even perform automated root-cause analysis of network issues.
The process doesn't happen instantaneously, particularly the creation of the baseline, but it's designed to be minimally invasive to existing network management frameworks and easy to implement.
Nyansa said that the medical field has been one of the key targets for the newly IoT-enabled iteration of Voyance, and one early customer, Baptist Health, a Florida-based healthcare company that runs four hospitals and several other clinics and practices, said that Voyance IoT has offered a new level of visibility into the business's complex array of connected diagnostic and treatment machines.
“In the past we didn't have the ability to identify security concerns in this way, related to rogue devices on the enterprise network, and now we're able to do that,” said CISO Thad Phillips.
While spiraling network complexity isn't an issue confined to the IoT, there's a strong argument that the number and variety of devices connected to an IoT-enabled network represent a new challenge to network management, particularly in light of the fact that many such devices aren't particularly secure.
“They're not manufactured by networking vendors or security vendors, so from a performance standpoint, they have a lot of quirks … and on the security side, that's sort of a big problem there as well,” said Anand Srinivas, Nyansa's co-founder and CTO.
Enabling the Voyance platform to identify and manage IoT devices along with traditional endpoints seems to be mostly a matter of adding new device signatures to the system, but Enterprise Management Associates research director Shamus McGillicuddy said that, while the system's designed for automation and ease of use, AIOps products like Voyance do need to be managed to make sure that they're functioning correctly.
“Anything based on machine learning is going to take a while to make sure it understands your environment, and you might have to retrain it,” he said. “There's always going to be more and more things connecting to IP networks, and it's just going to be a question of building up a database.”
Voyance IoT is available now. Pricing starts at $16,000 per year, and goes up with the number of total devices managed. (Current Voyance users can manage up to 100 IoT devices at no additional cost.)
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all
Author: [Jon Gold][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/geometric_architecture_ceiling_structure_lines_connections_networks_perspective_by_brandon_mowinkel_cc0_via_unsplash_2400x1600-100788530-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[4]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[5]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[6]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[7]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[8]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[9]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[10]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Two tools to help visualize and simplify your data-driven operations)
[#]: via: (https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all)
[#]: author: (Kent McNeil, Vice President of Software, Ciena Blue Planet )
Two tools to help visualize and simplify your data-driven operations
======
Amidst the rising complexity of networks and the influx of data, service providers are striving to keep operational complexity under control. Blue Planet's Kent McNeil explains how they can turn this challenge into a huge opportunity, and in fact reduce operational effort, by exploiting state-of-the-art graph database visualization and delta-based federation technologies.
![danleap][1]
**Build the picture: Visualize your data**
The Internet of Things (IoT), 5G, smart technology, and virtual reality all guarantee one thing for communications service providers (CSPs): more data. As networks become increasingly overwhelmed by mounds of data, CSPs are on the hunt for ways to make the most of the intelligence collected, and are looking for ways to monetize their services, provide more customizable offerings, and enhance their network performance.
Customer analytics has gone some way towards fulfilling this need for greater insights, but with the rise in the volume and variety of consumer and IoT applications, the influx of data will increase at a phenomenal rate. The data includes not only customer-related data but also device and network data, adding complexity to the picture. CSPs must harness this information to understand the relationships between any two things, to understand the connections within their data, and, ultimately, to leverage it for a better customer experience.
**See the upward graphical trend with graph databases**
Traditional relational databases certainly have their use, but graph databases offer a novel perspective. The visual representation of the relationships between component parts enables CSPs to understand and analyze their characteristics, as well as to act in a timely manner when confronted with any discrepancies.
Graph databases can help CSPs tackle this new challenge, ensuring the data is not just stored, but also processed and analyzed. It enables complex network questions to be asked and answered, ensuring that CSPs are not sidelined as “dumb pipes” in the IoT movement.
The use of graph databases has started to become more mainstream as businesses see the benefits. IBM conducted a generic industry study, entitled “The State of Graph Databases Worldwide”, which found that people are moving to graph databases for speed, performance enhancement of applications, and streamlined operations. The share of businesses using, or planning to use, graph technology is highest for network and IT operations, followed by master data management. Performance is a key factor for CSPs, as is personalization, which enables support for more tailored service offerings.
Another advantage of graph databases for CSPs is that of unravelling the complexity of network inventory in a clear, visualized picture; this capability gives CSPs a competitive advantage as speed and performance become increasingly paramount. This need for speed and reliability will increase tenfold as IoT continues its impressive global ramp-up, and operational complexity will also grow as the influx of data generated by IoT further challenges the scalability of existing operational environments.
**Change the tide of data with delta-based federation**
New data, updated data, corrected data, deleted data: all of it needs to be managed, in line with regulations, and instantaneously. But this capability does not exist in the reality of many CSPs' Operational Support Systems (OSS). Many still battle with updating data and rely on full uploads of network inventory in order to perform key service fulfillment and assurance tasks. This method is time-intensive and risky due to potential conflicts and inaccuracies. With data being accessed from a variety of systems, CSPs must have a way to effectively home in on only what is required.
Integrating network data into one simplified system limits the impact on the legacy OSS systems. This allows each OSS to continue its specific role, yet to feed data into a single interface, hence enabling teams to see the complete picture and gain efficiencies while launching new services or pinpointing and resolving service and network issues.
A delta-based federation model ensures that an accurate picture is presented and that only essential changes are conducted, reliably and quickly. This simplified method filters the delta changes, reducing the time involved in updating and minimizing the system load and risks. A validation process takes place to catch any errors or issues with the data, so CSPs can apply checks and retain control over modifications.
**Ride the wave**
Gartner predicts 25 billion connected things on a global scale by 2021, and CSPs are already struggling with the current levels of data, which Gartner estimates at 14.2 billion in 2019. Over the last decade, CSPs have faced significant rises in the levels of data consumed as demand for new services and higher-bandwidth applications has taken off. This data wave is set to continue, and CSPs have two important tools at their disposal to help them ride it. First, CSPs have specialist legacy OSS already in place, which they can leverage as a basis for integrating data and implementing optimized systems. Second, they can utilize new technologies in database inventory management: graph databases and delta-based federation. The advantages of effectively integrating network data, visualizing it, and creating a clear map of the interconnections enable CSPs to make critical decisions more quickly and accurately, resulting in the most optimized and informed service operations.
[Watch this video to learn more about Blue Planet][2]
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all
Author: [Kent McNeil, Vice President of Software, Ciena Blue Planet][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-165721901-100793858-large.jpg
[2]: https://www.blueplanet.com/resources/IT-plus-network-now-a-powerhouse-combination.html?utm_campaign=X1058319&utm_source=NWW&utm_term=BPVideo&utm_medium=sponsoredpost4

View File

@@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What SDN is and where it's going)
[#]: via: (https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
What SDN is and where it's going
======
Software-defined networking (SDN) established a foothold in cloud computing, intent-based networking, and network security, with Cisco, VMware, Juniper and others leading the charge.
![seedkin / Getty Images][1]
Hardware reigned supreme in the networking world until the emergence of software-defined networking (SDN), a category of technologies that separate the network control plane from the forwarding plane to enable more automated provisioning and policy-based management of network resources.
SDN's origins can be traced to a research collaboration between Stanford University and the University of California at Berkeley that ultimately yielded the [OpenFlow][2] protocol in the 2008 timeframe.
**[Learn more about the [difference between SDN and NFV][3]. Get regularly scheduled insights by [signing up for Network World newsletters][4]]**
OpenFlow is only one of the first SDN canons, but it's a key component because it started the networking software revolution. OpenFlow defined a programmable network protocol that could help manage and direct traffic among routers and switches no matter which vendor made the underlying router or switch.
In the years since its inception, SDN has evolved into a reputable networking technology offered by key vendors including Cisco, VMware, Juniper, Pluribus and Big Switch. The Open Networking Foundation develops myriad open-source SDN technologies as well.
"Datacenter SDN no longer attracts breathless hype and fevered expectations, but the market is growing healthily, and its prospects remain robust," wrote Brad Casemore, IDC research vice president, data center networks, in a recent report, [_Worldwide Datacenter Software-Defined Networking Forecast, 20182022_][5]*. "*Datacenter modernization, driven by the relentless pursuit of digital transformation and characterized by the adoption of cloudlike infrastructure, will help to maintain growth, as will opportunities to extend datacenter SDN overlays and fabrics to multicloud application environments."
SDN will be increasingly perceived as a form of established, conventional networking, Casemore said.
IDC estimates that the worldwide data center SDN market will be worth more than $12 billion in 2022, recording a CAGR of 18.5% during the 2017–2022 period. The market generated revenue of nearly $5.15 billion in 2017, up more than 32.2% from 2016.
In 2017, the physical network represented the largest segment of the worldwide datacenter SDN market, accounting for revenue of nearly $2.2 billion, or about 42% of the overall total revenue. In 2022, however, the physical network is expected to claim about $3.65 billion in revenue, slightly less than the $3.68 billion attributable to network virtualization overlays/SDN controller software but more than the $3.18 billion for SDN applications.
“We're now at a point where SDN is better understood, where its use cases and value propositions are familiar to most datacenter network buyers and where a growing number of enterprises are finding that SDN offerings offer practical benefits,” Casemore said. “With SDN growth and the shift toward software-based network automation, the network is regaining lost ground and moving into better alignment with a wave of new application workloads that are driving meaningful business outcomes.”
### **What is SDN?**
The idea of programmability is the basis for the most precise definition of what SDN is: technology that separates the control plane management of network devices from the underlying data plane that forwards network traffic.
IDC broadens that definition of SDN by stating: “Datacenter SDN architectures feature software-defined overlays or controllers that are abstracted from the underlying network hardware, offering intent-or policy-based management of the network as a whole. This results in a datacenter network that is better aligned with the needs of application workloads through automated (thereby faster) provisioning, programmatic network management, pervasive application-oriented visibility, and where needed, direct integration with cloud orchestration platforms.”
The driving ideas behind the development of SDN are myriad. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources, everywhere from the data center to the campus or wide area network.
Separating the control and data planes is the most common way to think of what SDN is, but it is much more than that, said Mike Capuano, chief marketing officer for [Pluribus][6].
“At its heart SDN has a centralized or distributed intelligent entity that has an entire view of the network, that can make routing and switching decisions based on that view,” Capuano said. “Typically, network routers and switches only know about their neighboring network gear. But with a properly configured SDN environment, that central entity can control everything, from easily changing policies to simplifying configuration and automation across the enterprise.”
### How does SDN support edge computing, IoT and remote access?
A variety of networking trends have played into the central idea of SDN. Distributing computing power to remote sites, moving data center functions to the [edge][7], adopting cloud computing, and supporting [Internet of Things][8] environments: each of these efforts can be made easier and more cost-efficient via a properly configured SDN environment.
Typically in an SDN environment, customers can see all of their devices and TCP flows, which means they can slice up the network from the data or management plane to support a variety of applications and configurations, Capuano said. So users can more easily segment an IoT application from the production world if they want, for example.
Some SDN controllers have the smarts to see that the network is getting congested and, in response, pump up bandwidth or processing to make sure remote and edge components don't suffer latency.
SDN technologies also help in distributed locations that have few IT personnel on site, such as an enterprise branch office or service provider central office, said Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks.
“Naturally these places require remote and centralized delivery of connectivity, visibility and security. SDN solutions that centralize and abstract control and automate workflows across many places in the network, and their devices, improve operational reliability, speed and experience,” Bushong said.
### **How does SDN support intent-based networking?**
Intent-based networking ([IBN][9]) has a variety of components, but basically is about giving network administrators the ability to define what they want the network to do, and having an automated network management platform create the desired state and enforce policies to ensure what the business wants happens.
“If a key tenet of SDN is abstracted control over a fleet of infrastructure, then the provisioning paradigm and dynamic control to regulate infrastructure state is necessarily higher level,” Bushong said. “Policy is closer to declarative intent, moving away from the minutia of individual device details and imperative and reactive commands.”
IDC says that intent-based networking “represents an evolution of SDN to achieve even greater degrees of operational simplicity, automated intelligence, and closed-loop functionality.”
For that reason, IBN represents a notable milestone on the journey toward autonomous infrastructure that includes a self-driving network, which will function much like the self-driving car, producing desired outcomes based on what network operators and their organizations wish to accomplish, Casemore stated.
“While the self-driving car has been designed to deliver passengers safely to their destination with minimal human intervention, the self-driving network, as part of autonomous datacenter infrastructure, eventually will achieve similar outcomes in areas such as network provisioning, management, and troubleshooting — delivering applications and data, dynamically creating and altering network paths, and providing security enforcement with minimal need for operator intervention,” Casemore stated.
While IBN technologies are relatively young, Gartner says by 2020, more than 1,000 large enterprises will use intent-based networking systems in production, up from less than 15 in the second quarter of 2018.
### **How does SDN help customers with security?**
SDN enables a variety of security benefits. A customer can split up a network connection between an end user and the data center and have different security settings for the various types of network traffic. A network could have one public-facing, low security network that does not touch any sensitive information. Another segment could have much more fine-grained remote access control with software-based [firewall][10] and encryption policies on it, which allow sensitive data to traverse over it.
“For example, if a customer has an IoT group it doesn't feel is all that mature with regards to security, via the SDN controller you can segment that group off away from the critical high-value corporate traffic,” Capuano stated. “SDN users can roll out security policies across the network from the data center to the edge, and if you do all of this on top of white boxes, deployments can be 30 to 60 percent cheaper than traditional gear.”
The ability to look at a set of workloads and see if they match a given security policy is a key benefit of SDN, especially as data is distributed, said Thomas Scheibe, vice president of product management for Cisco's Nexus and ACI product lines.
"The ability to deploy a whitelist security model like we do with ACI [Application Centric Infrastructure] that lets only specific entities access explicit resources across your network fabric is another key security element SDN enables," Scheibe said.
A growing number of SDN platforms now support [microsegmentation][11], according to Casemore.
“In fact, micro-segmentation has developed as a notable use case for SDN. As SDN platforms are extended to support multicloud environments, they will be used to mitigate the inherent complexity of establishing and maintaining consistent network and security policies across hybrid IT landscapes,” Casemore said.
### **What is SDN's role in cloud computing?**
SDN's role in the move toward [private cloud][12] and [hybrid cloud][13] adoption seems a natural one. In fact, big SDN players such as Cisco, Juniper and VMware have all made moves to tie together the enterprise data center and cloud worlds.
Cisco's ACI Anywhere package would, for example, let policies configured through Cisco's SDN APIC (Application Policy Infrastructure Controller) use native APIs offered by a public-cloud provider to orchestrate changes within both the private and public cloud environments, Cisco said.
“As organizations look to scale their hybrid cloud environments, it will be critical to leverage solutions that help improve productivity and processes,” said [Bob Laliberte][14], a senior analyst with Enterprise Strategy Group, in a recent [Network World article][15]. “The ability to leverage the same solution, like Ciscos ACI, in your own private-cloud environment as well as across multiple public clouds will enable organizations to successfully scale their cloud environments.”
Growth of public and private clouds and enterprises' embrace of distributed multicloud application environments will have an ongoing and significant impact on data center SDN, representing both a challenge and an opportunity for vendors, said IDC's Casemore.
“Agility is a key attribute of digital transformation, and enterprises will adopt architectures, infrastructures, and technologies that provide for agile deployment, provisioning, and ongoing operational management. In a datacenter networking context, the imperative of digital transformation drives adoption of extensive network automation, including SDN,” Casemore said.
### Where does SD-WAN fit in?
The software-defined wide area network ([SD-WAN][16]) is a natural application of SDN that extends the technology over a WAN. While the SDN architecture is typically the underpinning in a data center or campus, SD-WAN takes it a step further.
At its most basic, SD-WAN lets companies aggregate a variety of network connections (including MPLS, 4G LTE and DSL) into a branch or network edge location and have a software management platform that can turn up new sites, prioritize traffic and set security policies.
SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized for data, voice or video and potentially save money in the process.
[SD-WAN][17] lets networks route traffic based on centrally managed roles and rules, no matter what the entry and exit points of the traffic are, and with full security. For example, if a user in a branch office is working in Office365, SD-WAN can route their traffic directly to the closest cloud data center for that app, improving network responsiveness for the user and lowering bandwidth costs for the business.
"SD-WAN has been a promised technology for years, but in 2019 it will be a major driver in how networks are built and re-built," Anand Oswal, senior vice president of engineering in Ciscos Enterprise Networking Business, said a Network World [article][18] earlier this year.
It's a profoundly hot market with tons of players including [Cisco][19], VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa.
IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all
Author: [Michael Cooney][a]
Curated by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/what-is-sdn_2_where-is-it-going_arrows_fork-in-the-road-100793314-large.jpg
[2]: https://www.networkworld.com/article/2202144/data-center-faq-what-is-openflow-and-why-is-it-needed.html
[3]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.idc.com/getdoc.jsp?containerId=US43862418
[6]: https://www.networkworld.com/article/3192318/pluribus-recharges-expands-software-defined-network-platform.html
[7]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[8]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[9]: https://www.networkworld.com/article/3202699/what-is-intent-based-networking.html
[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[11]: https://www.networkworld.com/article/3247672/what-is-microsegmentation-how-getting-granular-improves-network-security.html
[12]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
[13]: https://www.networkworld.com/article/3233132/what-is-hybrid-cloud-computing.html
[14]: https://www.linkedin.com/in/boblaliberte90/
[15]: https://www.networkworld.com/article/3336075/cisco-serves-up-flexible-data-center-options.html
[16]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[17]: https://www.networkworld.com/article/3031279/sd-wan/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[18]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html
[19]: https://www.networkworld.com/article/3322937/what-will-be-hot-for-cisco-in-2019.html

View File

@@ -1,292 +0,0 @@
translating by MjSeven
Getting started with Sensu monitoring
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e)
Sensu is an open source infrastructure and application monitoring solution that monitors servers, services, and application health, and sends alerts and notifications with third-party integration. Written in Ruby, Sensu can use either [RabbitMQ][1] or [Redis][2] to handle messages. It uses Redis to store data.
If you want to monitor your cloud infrastructure in a simple and efficient manner, Sensu is a good option. It can be integrated with many of the modern DevOps stacks your organization may already be using, such as [Slack][3], [HipChat][4], or [IRC][5], and it can even send mobile/pager alerts with [PagerDuty][6].
Sensu's [modular architecture][7] means every component can be installed on the same server or on completely separate machines.
### Architecture
Sensu's main communication mechanism is the Transport. Every Sensu component must connect to the Transport in order to send messages to each other. Transport can use either RabbitMQ (recommended in production) or Redis.
Sensu Server processes event data and takes action. It registers clients and processes check results and monitoring events using filters, mutators, and handlers. The server publishes check definitions to the clients, and the Sensu API provides RESTful access to monitoring data and core functionality.
[Sensu Client][8] executes checks that are either scheduled by Sensu Server or defined locally. Sensu uses a data store (Redis) to keep all the persistent data. Finally, [Uchiwa][9] is the web interface used to communicate with the Sensu API.
![sensu_system.png][11]
### Installing Sensu
#### Prerequisites
* One Linux installation to act as the server node (I used CentOS 7 for this article)
* One or more Linux machines to monitor (clients)
#### Server side
Sensu requires Redis to be installed. To install Redis, enable the EPEL repository:
```
$ sudo yum install epel-release -y
```
Then install Redis:
```
$ sudo yum install redis -y
```
Modify `/etc/redis.conf` to disable protected mode, listen on every interface, and set a password:
```
$ sudo sed -i 's/^protected-mode yes/protected-mode no/g' /etc/redis.conf
$ sudo sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.conf
$ sudo sed -i 's/^# requirepass foobared/requirepass password123/g' /etc/redis.conf
```
Enable and start Redis service:
```
$ sudo systemctl enable redis
$ sudo systemctl start redis
```
Redis is now installed and ready to be used by Sensu.
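Before moving on, you can sanity-check the setup with `redis-cli`, using the password configured above:
```
$ redis-cli -a password123 ping
PONG
```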
Now let's install Sensu.
First, configure the Sensu repository and install the packages:
```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF
$ sudo yum install sensu uchiwa -y
```
Let's create the bare minimum configuration files for Sensu:
```
$ sudo tee /etc/sensu/conf.d/api.json << EOF
{
  "api": {
        "host": "127.0.0.1",
        "port": 4567
  }
}
EOF
```
The file above configures `sensu-api` to listen on localhost, on port 4567. Next, configure the Redis connection and the transport:
```
$ sudo tee /etc/sensu/conf.d/redis.json << EOF
{
  "redis": {
        "host": "<IP of server>",
        "port": 6379,
        "password": "password123"
  }
}
EOF
$ sudo tee /etc/sensu/conf.d/transport.json << EOF
{
  "transport": {
        "name": "redis"
  }
}
EOF
```
In these two files, we configure Sensu to use Redis as the transport mechanism and specify the address where Redis listens. Clients need to connect directly to the transport mechanism, so these two files will also be required on each client machine.
```
$ sudo tee /etc/sensu/uchiwa.json << EOF
{
   "sensu": [
        {
        "name": "sensu",
        "host": "127.0.0.1",
        "port": 4567
        }
   ],
   "uchiwa": {
        "host": "0.0.0.0",
        "port": 3000
   }
}
EOF
```
In this file, we configure Uchiwa to listen on every interface (0.0.0.0) on Port 3000. We also configure Uchiwa to use `sensu-api` (already configured).
For security reasons, change the owner of the configuration files you just created:
```
$ sudo chown -R sensu:sensu /etc/sensu
```
Enable and start the Sensu services:
```
$ sudo systemctl enable sensu-server sensu-api sensu-client
$ sudo systemctl start sensu-server sensu-api sensu-client
$ sudo systemctl enable uchiwa
$ sudo systemctl start uchiwa
```
Try accessing the Uchiwa website: http://<IP of server>:3000
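If the dashboard does not come up, it can help to first confirm that the API itself responds; a quick sketch using the `/info` endpoint of the Sensu Core 1.x API, which returns version and transport details as JSON:
```
$ curl -s http://127.0.0.1:4567/info
```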
For production environments, it's recommended to run a cluster of RabbitMQ as the Transport instead of Redis (a Redis cluster can be used in production too), and to run more than one instance of Sensu Server and API for load balancing and high availability.
Sensu is now installed. Now let's configure the clients.
#### Client side
To add a new client, you will need to enable the Sensu repository on the client machines by creating the file `/etc/yum.repos.d/sensu.repo`.
```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF
```
With the repository enabled, install the Sensu package:
```
$ sudo yum install sensu -y
```
To configure `sensu-client`, create the same `redis.json` and `transport.json` created in the server machine, as well as the `client.json` configuration file:
```
$ sudo tee /etc/sensu/conf.d/client.json << EOF
{
  "client": {
        "name": "rhel-client",
        "environment": "development",
        "subscriptions": [
        "frontend"
        ]
  }
}
EOF
```
In the `name` field, specify a name to identify this client (typically the hostname). The `environment` field can help you filter results, and `subscriptions` defines which monitoring checks the client will execute.
Finally, enable and start the services and check in Uchiwa, as the new client will register automatically:
```
$ sudo systemctl enable sensu-client
$ sudo systemctl start sensu-client
```
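Back on the server, the new client should show up not only in Uchiwa but also in the API's client registry; a quick sketch using the `/clients` endpoint of the Sensu Core 1.x API, which returns the registered clients as JSON:
```
$ curl -s http://127.0.0.1:4567/clients
```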
### Sensu checks
Sensu checks have two components: a plugin and a definition.
Sensu is compatible with the [Nagios check plugin specification][12], so any check for Nagios can be used without modification. Checks are executable files and are run by the Sensu client.
Check definitions let Sensu know how, where, and when to run the plugin.
#### Client side
Let's install one check plugin on the client machine. Remember, this plugin will be executed on the clients.
Enable EPEL and install `nagios-plugins-http`:
```
$ sudo yum install -y epel-release
$ sudo yum install -y nagios-plugins-http
```
Now let's explore the plugin by executing it manually. Try checking the status of a web server running on the client machine. It should fail as we don't have a web server running:
```
$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1
connect to address 127.0.0.1 and port 80: Connection refused
HTTP CRITICAL - Unable to open TCP socket
```
It failed, as expected. Check the return code of the execution:
```
$ echo $?
2
```
The Nagios check plugin specification defines four return codes for the plugin execution:
| **Plugin return code** | **State** |
|------------------------|-----------|
| 0 | OK |
| 1 | WARNING |
| 2 | CRITICAL |
| 3 | UNKNOWN |
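Because Sensu only cares about a plugin's exit code and its single line of output, a check can be as simple as a small shell script. Here is a minimal hand-rolled sketch that follows the specification; the script name and thresholds are arbitrary, and it assumes a Linux `/proc/loadavg`:
```
#!/bin/bash
# check_load_sketch: report the 1-minute load average using Nagios-style exit codes
warn=2
crit=4
load=$(awk '{print $1}' /proc/loadavg)
# awk exits 0 (success) when the comparison holds, so it works as a shell condition
if awk -v l="$load" -v c="$crit" 'BEGIN { exit !(l >= c) }'; then
  echo "LOAD CRITICAL - $load"
  exit 2
elif awk -v l="$load" -v w="$warn" 'BEGIN { exit !(l >= w) }'; then
  echo "LOAD WARNING - $load"
  exit 1
else
  echo "LOAD OK - $load"
  exit 0
fi
```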
With this information, we can now create the check definition on the server.
#### Server side
On the server machine, create the file `/etc/sensu/conf.d/check_http.json`:
```
{
  "checks": {
    "check_http": {
      "command": "/usr/lib64/nagios/plugins/check_http -I 127.0.0.1",
      "interval": 10,
      "subscribers": [
        "frontend"
      ]
    }
  }
}
```
In the `command` field, use the command we tested before. The `interval` field tells Sensu how frequently, in seconds, this check should be executed. Finally, `subscribers` defines the clients that will execute the check.
Restart both sensu-api and sensu-server and confirm that the new check is available in Uchiwa.
```
$ sudo systemctl restart sensu-api sensu-server
```
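Once the check has run at least once, its results should also be visible through the API; a sketch using the `/results` endpoint of the Sensu Core 1.x API:
```
$ curl -s http://127.0.0.1:4567/results
```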
### What's next?
Sensu is a powerful tool, and this article offers just a glimpse of what it can do. See the [documentation][13] to learn more, and visit the Sensu site to find out more about the [Sensu community][14].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/getting-started-sensu-monitoring-solution
作者:[Michael Zamot][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mzamot
[1]:https://www.rabbitmq.com/
[2]:https://redis.io/topics/config
[3]:https://slack.com/
[4]:https://en.wikipedia.org/wiki/HipChat
[5]:http://www.irc.org/
[6]:https://www.pagerduty.com/
[7]:https://docs.sensu.io/sensu-core/1.4/overview/architecture/
[8]:https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/
[9]:https://uchiwa.io/#/
[10]:/file/406576
[11]:https://opensource.com/sites/default/files/uploads/sensu_system.png (sensu_system.png)
[12]:https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html
[13]:https://docs.sensu.io/
[14]:https://sensu.io/community


@ -1,91 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enjoy Netflix? You Should Thank FreeBSD)
[#]: via: (https://itsfoss.com/netflix-freebsd-cdn/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Enjoy Netflix? You Should Thank FreeBSD
======
Netflix is one of the most popular streaming services in the world.
But you already know that. Don't you?
What you probably did not know is that Netflix uses [FreeBSD][1] to deliver its content to you.
Yes, that's right. Netflix relies on FreeBSD to build its in-house content delivery network (CDN).
A [CDN][2] is a group of servers located in various parts of the world. It is mainly used to deliver heavy content like images and videos to the end-user faster than a centralized server could.
Instead of opting for a commercial CDN service, Netflix has built its own in-house CDN called [Open Connect][3].
Open Connect utilizes [custom hardware][4], Open Connect Appliance. You can see it in the image below. It can handle 40Gb/s data and has a storage capacity of 248TB.
![Netflixs Open Connect Appliance runs FreeBSD][5]
Netflix provides Open Connect Appliance to qualifying Internet Service Providers (ISP) for free. This way, substantial Netflix traffic gets localized and the ISPs deliver the Netflix content more efficiently.
This Open Connect Appliance runs on FreeBSD operating system and [almost exclusively runs open source software][6].
### Open Connect uses FreeBSD “Head”
![][7]
You would expect Netflix to use a stable release of FreeBSD for such a critical infrastructure but Netflix tracks the [FreeBSD head/current version][8]. Netflix says that tracking “head” lets them “stay forward-looking and focused on innovation”.
Here are the benefits Netflix sees of tracking FreeBSD:
* Quicker feature iteration
* Quicker access to new FreeBSD features
* Quicker bug fixes
* Enables collaboration
* Minimizes merge conflicts
* Amortizes merge “cost”
> Running FreeBSD “head” lets us deliver large amounts of data to our users very efficiently, while maintaining a high velocity of feature development.
>
> Netflix
Remember, even [Google uses Debian][9] testing instead of Debian stable. Perhaps these enterprises prefer cutting-edge features above all else.
Like Google, Netflix also plans to upstream any code they can. This should help FreeBSD and other BSD distributions based on FreeBSD.
So what does Netflix achieve with FreeBSD? Here are some quick stats:
> Using FreeBSD and commodity parts, we achieve 90 Gb/s serving TLS-encrypted connections with ~55% CPU on a 16-core 2.6-GHz CPU.
>
> Netflix
If you want to know more about Netflix and FreeBSD, you can refer to [this presentation from FOSDEM][10]. You can also watch the video of the presentation [here][11].
These days big enterprises rely mostly on Linux for their server infrastructure, but Netflix has put its trust in BSD. This is a good thing for the BSD community because if an industry leader like Netflix throws its weight behind BSD, others could follow its lead. What do you think?
--------------------------------------------------------------------------------
via: https://itsfoss.com/netflix-freebsd-cdn/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.freebsd.org/
[2]: https://www.cloudflare.com/learning/cdn/what-is-a-cdn/
[3]: https://openconnect.netflix.com/en/
[4]: https://openconnect.netflix.com/en/hardware/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-open-connect-appliance.jpeg?fit=800%2C533&ssl=1
[6]: https://openconnect.netflix.com/en/software/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-freebsd.png?resize=800%2C450&ssl=1
[8]: https://www.bsdnow.tv/tutorials/stable-current
[9]: https://itsfoss.com/goobuntu-glinux-google/
[10]: https://fosdem.org/2019/schedule/event/netflix_freebsd/attachments/slides/3103/export/events/attachments/netflix_freebsd/slides/3103/FOSDEM_2019_Netflix_and_FreeBSD.pdf
[11]: http://mirror.onet.pl/pub/mirrors/video.fosdem.org/2019/Janson/netflix_freebsd.webm


@ -1,168 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Square Brackets in Bash: Part 2)
[#]: via: (https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
Using Square Brackets in Bash: Part 2
======
![square brackets][1]
We continue our tour of square brackets in Bash with a look at how they can act as a command.
[Creative Commons Zero][2]
Welcome back to our mini-series on square brackets. In the [previous article][3], we looked at various ways square brackets are used at the command line, including globbing. If you've not read that article, you might want to start there.
Square brackets can also be used as a command. Yep, for example, in:
```
[ "a" = "a" ]
```
which is, by the way, a valid command that you can execute, `[ ... ]` is a command. Notice that there are spaces between the opening bracket `[` and the parameters `"a" = "a"`, and then between the parameters and the closing bracket `]`. That is precisely because the brackets here act as a command, and you are separating the command from its parameters.
You would read the above line as " _test whether the string "a" is the same as string "a"_ ". If the premise is true, the `[ ... ]` command finishes with an exit status of 0. If not, the exit status is 1. [We talked about exit statuses in a previous article][4], and there you saw that you could access the value by checking the `$?` variable.
Try it out:
```
[ "a" = "a" ]
echo $?
```
And now try:
```
[ "a" = "b" ]
echo $?
```
In the first case, you will get a 0 (the premise is true), and running the second will give you a 1 (the premise is false). Remember that, in Bash, an exit status from a command that is 0 means it exited normally with no errors, and that makes it `true`. If there were any errors, the exit value would be a non-zero value (`false`). The `[ ... ]` command follows the same rules so that it is consistent with the rest of the other commands.
The `[ ... ]` command comes in handy in `if ... then` constructs and also in loops that require a certain condition to be met (or not) before exiting, like the `while` and `until` loops.
The logical operators for testing stuff are pretty straightforward:
```
[ STRING1 = STRING2 ] => checks to see if the strings are equal
[ STRING1 != STRING2 ] => checks to see if the strings are not equal
[ INTEGER1 -eq INTEGER2 ] => checks to see if INTEGER1 is equal to INTEGER2
[ INTEGER1 -ge INTEGER2 ] => checks to see if INTEGER1 is greater than or equal to INTEGER2
[ INTEGER1 -gt INTEGER2 ] => checks to see if INTEGER1 is greater than INTEGER2
[ INTEGER1 -le INTEGER2 ] => checks to see if INTEGER1 is less than or equal to INTEGER2
[ INTEGER1 -lt INTEGER2 ] => checks to see if INTEGER1 is less than INTEGER2
[ INTEGER1 -ne INTEGER2 ] => checks to see if INTEGER1 is not equal to INTEGER2
etc...
```
You can also test for some very shell-specific things. The `-f` option, for example, tests whether a file exists or not:
```
for i in {000..099}; \
do \
if [ -f file$i ]; \
then \
echo file$i exists; \
else \
touch file$i; \
echo I made file$i; \
fi; \
done
```
If you run this in your test directory, line 3 will test whether each file in your long list exists. If a file does exist, the loop will just print a message; but if it doesn't exist, the loop will create it, to make sure the whole set is complete.
You could write the loop more compactly like this:
```
for i in {000..099};\
do\
if [ ! -f file$i ];\
then\
touch file$i;\
echo I made file$i;\
fi;\
done
```
The `!` modifier in the condition inverts the premise, thus line 3 would translate to " _if the file `file$i` does not exist_ ".
Try it: delete some random files from the bunch you have in your test directory. Then run the loop shown above and watch how it rebuilds the list.
There are plenty of other tests you can try, including `-d` to test whether a name belongs to a directory and `-h` to test whether it is a symbolic link. You can also test whether a file belongs to a certain group of users (`-G`), whether one file is older than another (`-ot`), or even whether a file contains something or is, on the other hand, empty.
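For instance, here is a quick sketch combining two of these tests; the `archive` directory is hypothetical, and whether `file001` is older depends on your timestamps:
```
mkdir -p archive
if [ -d archive ]; then
  echo "archive is a directory"
fi
if [ file001 -ot file002 ]; then
  echo "file001 is older than file002"
fi
```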
Try the following for example. Add some content to some of your files:
```
echo "Hello World" >> file023
echo "This is a message" >> file065
echo "To humanity" >> file010
```
and then run this:
```
for i in {000..099};\
do\
if [ ! -s file$i ];\
then\
rm file$i;\
echo I removed file$i;\
fi;\
done
```
And you'll remove all the files that are empty, leaving only the ones you added content to.
To find out more, check the manual page for the `test` command (a synonym for `[ ... ]`) with `man test`.
You may also see double brackets (`[[ ... ]]`) sometimes used in a similar way to single brackets. The reason for this is that double brackets give you a wider range of comparison operators. You can use `==`, for example, to compare a string to a pattern instead of just another string; or `<` and `>` to test whether a string would come before or after another in a dictionary.
To find out more about extended operators [check out this full list of Bash expressions][5].
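As a quick illustration of that extra power, here is a small sketch using `[[ ... ]]` for a pattern match and a dictionary-order comparison:
```
name="notes.txt"
# pattern match: does the string end in .txt?
if [[ $name == *.txt ]]; then
  echo "$name looks like a text file"
fi
# dictionary-order comparison of two strings
if [[ "apple" < "banana" ]]; then
  echo "apple comes before banana"
fi
```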
### Next Time
In an upcoming article, we'll continue our tour and take a look at the role of parentheses `()` in Linux command lines. See you then!
_Read more:_
1. [The Meaning of Dot (`.`)][6]
2. [Understanding Angle Brackets in Bash (`<...>`)][7]
3. [More About Angle Brackets in Bash(`<` and `>`)][8]
4. [And, Ampersand, and & in Linux (`&`)][9]
5. [Ampersands and File Descriptors in Bash (`&`)][10]
6. [Logical & in Bash (`&`)][4]
7. [All about {Curly Braces} in Bash (`{}`)][11]
8. [Using Square Brackets in Bash: Part 1][3]
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-brackets-3734552_1920.jpg?itok=hv9D6TBy (square brackets)
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
[4]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
[5]: https://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions
[6]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
[7]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
[8]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
[9]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
[10]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
[11]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,354 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (12 Single Board Computers: Alternative to Raspberry Pi)
[#]: via: (https://itsfoss.com/raspberry-pi-alternatives/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
12 Single Board Computers: Alternative to Raspberry Pi
======
_**Brief: Looking for a Raspberry Pi alternative? Here are some other single board computers to satisfy your DIY cravings.**_
Raspberry Pi is the most popular single board computer right now. You can use it for your DIY projects, as a cost-effective system to learn coding, or run [media server software][1] on it to stream media at your convenience.
You can do a lot of things with Raspberry Pi, but it is not the ultimate solution for all kinds of tinkerers. Some might be looking for a cheaper board and some might be on the lookout for a more powerful one.
Whatever the case, we do need Raspberry Pi alternatives for a variety of reasons. So, in this article, we will talk about twelve single board computers that we think are the best Raspberry Pi alternatives.
![][2]
### Raspberry Pi alternatives to satisfy your DIY craving
The list is in no particular order of ranking. Some of the links here are affiliate links. Please read our [affiliate policy][3].
#### 1\. Onion Omega2+
![][4]
For just **$13**, the Omega2+ is one of the cheapest IoT single board computers you can find out there. It runs the LEDE (Linux Embedded Development Environment) Linux OS, a distribution based on [OpenWRT][5].
Its form factor, cost, and the flexibility that comes from running a customized version of Linux OS makes it a perfect fit for almost any type of IoT applications.
You can find [Onion Omega kit on Amazon][6] or order from their own website that would cost you extra shipping charges.
**Key Specifications**
* MT7688 SoC
* 2.4 GHz IEEE 802.11 b/g/n WiFi
* 128 MB DDR2 RAM
* 32 MB on-board flash storage
* MicroSD Slot
* USB 2.0
* 12 GPIO Pins
[Visit WEBSITE
][7]
#### 2\. NVIDIA Jetson Nano Developer Kit
![][8]
This is a unique and interesting Raspberry Pi alternative from NVIDIA for just **$99**. It's not something that everyone can make use of; it is aimed at a specific group of tinkerers or developers.
NVIDIA explains it for the following use-case:
> NVIDIA® Jetson Nano™ Developer Kit is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing. All in an easy-to-use platform that runs in as little as 5 watts.
>
> nvidia
So, basically, if you are into AI and deep learning, you can make use of the developer kit. If you are curious, the production compute module of this will be arriving in June 2019.
**Key Specifications:**
* CPU: Quad-core ARM A57 @ 1.43 GHz
* GPU: 128-core Maxwell
* RAM: 4 GB 64-bit LPDDR4 25.6 GB/s
* Display: HDMI 2.0
* 4 x USB 3.0 and eDP 1.4
[VISIT WEBSITE
][9]
#### 3\. ASUS Tinker Board S
![][10]
ASUS Tinker Board S isn't the most affordable Raspberry Pi alternative at **$82** (on [Amazon][11]), but it is a powerful one. It features the same 40-pin connector that you'd normally find on a standard Raspberry Pi 3 model but offers a more powerful processor and a GPU. Also, the size of the Tinker Board S is exactly the same as a standard Raspberry Pi 3.
The main highlight of this board is the presence of 16 GB [eMMC][12] (in layman terms, it has got SSD-like storage on board that makes it faster while working on it).
**Key Specifications:**
* Rockchip Quad-Core RK3288 processor
* 2 GB DDR3 RAM
* Integrated Graphics Processor
* ARM® Mali™-T764 GPU
* 16 GB eMMC
* MicroSD Card Slot
* 802.11 b/g/n, Bluetooth V4.0 + EDR
* USB 2.0
* 28 GPIO pins
* HDMI Interface
[Visit website
][13]
#### 4\. ClockworkPi
![][14]
Clockwork Pi is usually part of the [GameShell Kit][15], a modular retro gaming console you can assemble yourself. However, you can purchase the board separately for $49.
Its compact size, WiFi connectivity, and the presence of a micro HDMI port make it a great choice for a lot of things.
**Key Specifications:**
* Allwinner R16-J Quad-core Cortex-A7 CPU @1.2GHz
* Mali-400 MP2 GPU
* RAM: 1GB DDR3
* WiFi & Bluetooth v4.0
* Micro HDMI output
* MicroSD Card Slot
[visit website
][16]
#### 5\. Arduino Mega 2560
![][17]
If you are into robotics projects or you want something for a 3D printer, the Arduino Mega 2560 will be a handy replacement for the Raspberry Pi. Unlike the Raspberry Pi, it is based on a microcontroller and not a microprocessor.
It would cost you $38.50 on their [official site][18] and around [$33 on Amazon][19].
**Key Specifications:**
* Microcontroller: ATmega2560
* Clock Speed: 16 MHz
* Digital I/O Pins: 54
* Analog Input Pins: 16
* Flash Memory: 256 KB of which 8 KB used by bootloader
[visit website
][18]
#### 6\. Rock64 Media Board
![][20]
For the same investment as a Raspberry Pi 3 B+, you get a faster processor and double the memory with the Rock64 Media Board. It also offers a cheaper alternative to the Raspberry Pi if you opt for the 1 GB RAM model, which costs $10 less.
Unlike Raspberry Pi, you do not have wireless connectivity support here but the presence of USB 3.0 and HDMI 2.0 does make a good difference if that matters to you.
**Key Specifications:**
* Rockchip RK3328 Quad-Core ARM Cortex A53 64-Bit Processor
* Supports up to 4GB 1600MHz LPDDR3 RAM
* eMMC module socket
* MicroSD Card slot
* USB 3.0
* HDMI 2.0
[visit website
][21]
#### 7\. Odroid-XU4
![][22]
Odroid-XU4 is the perfect alternative to Raspberry Pi if you have room to spend a little more ($80-$100 or even lower, depending on the store/availability).
It is indeed a powerful replacement and technically a bit smaller in size. The support for eMMC and USB 3.0 makes it faster to work with.
**Key Specifications:**
* Samsung Exynos 5422 Octa ARM Cortex™-A15 Quad 2Ghz and Cortex™-A7 Quad 1.3GHz CPUs
* 2Gbyte LPDDR3 RAM
* GPU: Mali-T628 MP6
* USB 3.0
* HDMI 1.4a
* eMMC 5.0 module socket
* MicroSD Card Slot
[visit website
][23]
#### 8\. **PocketBeagle**
![][24]
It is an incredibly small SBC, similar to the Raspberry Pi Zero. However, it would cost you the same as a full-sized Raspberry Pi 3 model. The main highlight here is that you can use it as a USB key fob and then access the Linux terminal to work on it.
**Key Specifications:**
* Processor: Octavo Systems OSD3358 1GHz ARM® Cortex-A8
* RAM: 512 MB DDR3
* 72 expansion pin headers
* microUSB
* USB 2.0
[visit website
][25]
#### 9\. Le Potato
![][26]
Le Potato, by [Libre Computer][27], is also identified by its model number AML-S905X-CC. It would [cost you $45][28].
If you want double the memory along with an HDMI 2.0 interface for a bit more than the price of a Raspberry Pi, this would be the perfect choice. However, you won't find wireless connectivity baked in.
**Key Specifications:**
* Amlogic S905X SoC
* 2GB DDR3 SDRAM
* USB 2.0
* HDMI 2.0
* microUSB
* MicroSD Card Slot
* eMMC Interface
[visit website
][29]
#### 10\. Banana Pi M64
![][30]
It comes loaded with 8 GB of eMMC, which is the key highlight of this Raspberry Pi alternative. For that very reason, it would cost you $60.
The presence of an HDMI interface makes it 4K-ready. In addition, Banana Pi offers a much wider variety of open source SBCs as alternatives to the Raspberry Pi.
**Key Specifications:**
* 1.2 Ghz Quad-Core ARM Cortex A53 64-Bit Processor-R18
* 2GB DDR3 SDRAM
* 8 GB eMMC
* WiFi & Bluetooth
* USB 2.0
* HDMI
[visit website
][31]
#### 11\. Orange Pi Zero
![][32]
The Orange Pi Zero is an incredibly cheap alternative to the Raspberry Pi. You will be able to get it for almost $10 on AliExpress or Amazon. For a [little more investment, you can get 512 MB RAM][33].
If that isn't sufficient, you can also go for the Orange Pi 3 with better specifications, which will cost you around $25.
**Key Specifications:**
* H2 Quad-core Cortex-A7
* Mali400MP2 GPU
* RAM: Up to 512 MB
* TF Card support
* WiFi
* USB 2.0
[Visit website
][34]
#### 12\. VIM 2 SBC by Khadas
![][35]
VIM 2 by Khadas is one of the latest SBCs that you can grab with Bluetooth 5.0 on board. It [starts from $99 (the basic model) and goes up to $140][36].
The basic model includes 2 GB RAM, 16 GB eMMC and Bluetooth 4.1. However, the Pro/Max versions would include Bluetooth 5.0, more memory, and more eMMC storage.
**Key Specifications:**
* Amlogic S912 1.5GHz 64-bit Octa-Core CPU
* T820MP3 GPU
* Up to 3 GB DDR4 RAM
* Up to 64 GB eMMC
* Bluetooth 5.0 (Pro/Max)
* Bluetooth 4.1 (Basic)
* HDMI 2.0a
* WiFi
**Wrapping Up**
We do know that there are different types of single board computers. Some are better than the Raspberry Pi, and some are scaled-down versions of it with a cheaper price tag. Also, SBCs like the Jetson Nano have been tailored for a specific use. So, depending on what you require, you should verify the specifications of the single board computer.
If you think that you know about something that is better than the ones mentioned above, feel free to let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/raspberry-pi-alternatives/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-media-server/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-alternatives.png?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/affiliate-policy/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/omega-2-plus-e1555306748755-800x444.jpg?resize=800%2C444&ssl=1
[5]: https://openwrt.org/
[6]: https://amzn.to/2Xj8pkn
[7]: https://onion.io/store/omega2p/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Jetson-Nano-e1555306350976-800x590.jpg?resize=800%2C590&ssl=1
[9]: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/asus-tinker-board-s-e1555304945760-800x450.jpg?resize=800%2C450&ssl=1
[11]: https://amzn.to/2XfkOFT
[12]: https://en.wikipedia.org/wiki/MultiMediaCard
[13]: https://www.asus.com/in/Single-Board-Computer/Tinker-Board-S/
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/clockwork-pi-e1555305016242-800x506.jpg?resize=800%2C506&ssl=1
[15]: https://itsfoss.com/gameshell-console/
[16]: https://www.clockworkpi.com/product-page/cpi-v3-1
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/arduino-mega-2560-e1555305257633.jpg?ssl=1
[18]: https://store.arduino.cc/usa/mega-2560-r3
[19]: https://amzn.to/2KCi041
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/ROCK64_board-e1555306092845-800x440.jpg?resize=800%2C440&ssl=1
[21]: https://www.pine64.org/?product=rock64-media-board-computer
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/odroid-xu4.jpg?fit=800%2C354&ssl=1
[23]: https://www.hardkernel.com/shop/odroid-xu4-special-price/
[24]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/PocketBeagle.jpg?fit=800%2C450&ssl=1
[25]: https://beagleboard.org/p/products/pocketbeagle
[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/aml-libre.-e1555306237972-800x514.jpg?resize=800%2C514&ssl=1
[27]: https://libre.computer/
[28]: https://amzn.to/2DpG3xl
[29]: https://libre.computer/products/boards/aml-s905x-cc/
[30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/banana-pi-m6.jpg?fit=800%2C389&ssl=1
[31]: http://www.banana-pi.org/m64.html
[32]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/orange-pi-zero.jpg?fit=800%2C693&ssl=1
[33]: https://amzn.to/2IlI81g
[34]: http://www.orangepi.org/orangepizero/index.html
[35]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/khadas-vim-2-e1555306505640-800x563.jpg?resize=800%2C563&ssl=1
[36]: https://amzn.to/2UDvrFE


@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blender short film, new license for Chef, ethics in open source, and more news)
[#]: via: (https://opensource.com/article/15/4/news-april-15)
[#]: author: (Joshua Allen Holm (Community Moderator) https://opensource.com/users/holmja)
Blender short film, new license for Chef, ethics in open source, and more news
======
Here are some of the biggest headlines in open source in the last two
weeks
![][1]
In this edition of our open source news roundup, we take a look at the 12th Blender short film, Chef shifts away from open core toward a 100% open source license, SuperTuxKart's latest release candidate with online multiplayer support, and more.
### Blender Animation Studio releases Spring
[Spring][2], the latest short film from [Blender Animation Studio][3], premiered on April 4th. The [press release on Blender.org][4] describes _Spring_ as "the story of a shepherd girl and her dog, who face ancient spirits in order to continue the cycle of life." The development version of Blender 2.80, along with other open source tools, was used to create this animated short film. The character and asset files for the film are available from [Blender Cloud][5], and tutorials, walkthroughs, and other instructional material are coming soon.
### The importance of ethics in open source
Reuven M. Lerner, writing for [Linux Journal][6], shares his thoughts about the need for teaching programmers about ethics in an article titled [Open Source Is Winning, and Now It's Time for People to Win Too][7]. Part retrospective on the history of open source and part call to action for moving forward, Lerner's article discusses many issues relevant to open source beyond just coding. He argues that when we teach kids about open source "[w]e also need to inform them of the societal parts of their work, and the huge influence and power that today's programmers have." He continues: "It's sometimes okay—and even preferable—for a company to make less money deliberately, when the alternative would be to do things that are inappropriate or illegal." Overall, it is a very thought-provoking piece; Lerner makes a solid case for remembering that the open source movement is about more than free code.
### Chef transitions from open core to open source
Chef, the company behind the well-known DevOps automation tool, [announced][8] that it will release 100% of its software as open source under an Apache 2.0 license. This move marks a departure from its current [open core model][9]. Given the tendency for companies to try to move in the opposite direction, Chef's move is a big one. By operating under a fully open source model, Chef builds a better, stronger relationship with the community, and the community benefits from full access to all the source code. Even developers of competing projects (and the commercial products based on them) benefit from being able to learn from Chef's code, as Chef can from its open source competitors. This is one of the greatest advantages of open source: the best ideas get to win, and business relationships are built around trust and quality of service, not proprietary secrets. For a more detailed look at this development, read Steven J. Vaughan-Nichols's [article for ZDNet][10].
### SuperTuxKart releases version 0.10 RC1 for testing
SuperTuxKart, the open source Mario Kart clone featuring open source mascots, is getting very close to releasing a version that supports online multi-player. On April 5th, the SuperTuxKart blog announced the release of [SuperTuxKart 0.10 Release Candidate 1][11], which needs testing before the final release. Users who want to help test the online and LAN multiplayer options can [download the game from SourceForge][12]. In addition to the new online and LAN features, SuperTuxKart 0.10 features a couple new tracks to race on; Ravenbridge Mansion replaces the old Mansion track, and Black Forest, which was an add-on track in earlier versions, is now part of the official track set.
#### In other news
* [My code is your code: Embracing the power of open sourcing][13]
* [FOSS means kids can have a big impact][14]
* [Open-source textbooks lighten students' financial load][15]
* [Developing the ultimate open source radio control transmitter][16]
* [How does open source tech transform Government?][17]
_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
--------------------------------------------------------------------------------
via: https://opensource.com/article/15/4/news-april-15
作者:[Joshua Allen Holm (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i
[2]: https://www.youtube.com/watch?v=WhWc3b3KhnY (Spring)
[3]: https://blender.studio/ (Blender Animation Studio)
[4]: https://www.blender.org/press/spring-open-movie/ (Spring Open Movie)
[5]: https://cloud.blender.org/p/spring/ (Spring on Blender Cloud)
[6]: https://www.linuxjournal.com/ (Linux Journal)
[7]: https://www.linuxjournal.com/content/open-source-winning-and-now-its-time-people-win-too (Open Source Is Winning, and Now It's Time for People to Win Too)
[8]: https://blog.chef.io/2019/04/02/chef-software-announces-the-enterprise-automation-stack/ (Introducing the New Chef: 100% Open, Always)
[9]: https://en.wikipedia.org/wiki/Open-core_model (Wikipedia: Open-core model)
[10]: https://www.zdnet.com/article/leading-devops-program-chef-goes-all-in-with-open-source/ (Leading DevOps program Chef goes all in with open source)
[11]: http://blog.supertuxkart.net/2019/04/supertuxkart-010-release-candidate-1.html (SuperTuxKart 0.10 Release Candidate 1 Released)
[12]: https://sourceforge.net/projects/supertuxkart/files/SuperTuxKart/0.10-rc1/ (SourceForge: SuperTuxKart)
[13]: https://www.forbes.com/sites/forbestechcouncil/2019/04/10/my-code-is-your-code-embracing-the-power-of-open-sourcing/ (My code is your code: Embracing the power of open sourcing)
[14]: https://www.linuxjournal.com/content/foss-means-kids-can-have-big-impact (FOSS means kids can have a big impact)
[15]: https://www.schoolnewsnetwork.org/2019/04/09/open-source-textbooks-lighten-students-financial-load/ (Open-source textbooks lighten students financial load)
[16]: https://hackaday.com/2019/04/03/developing-the-ultimate-open-source-radio-control-transmitter/ (Developing the ultimate open source radio control transmitter)
[17]: https://www.openaccessgovernment.org/open-source-tech-transform/62059/ (How does open source tech transform Government?)


@ -0,0 +1,127 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Mercurial for version control)
[#]: via: (https://opensource.com/article/19/4/getting-started-mercurial)
[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
Getting started with Mercurial for version control
======
Learn the basics of Mercurial, a distributed version control system
written in Python.
![][1]
[Mercurial][2] is a distributed version control system written in Python. Because it's written in a high-level language, you can write a Mercurial extension with a few Python functions.
There are several ways to install Mercurial, which are explained in the [official documentation][3]. My favorite one is not there: using **pip**. This is the most amenable way to develop local extensions!
For now, Mercurial only supports Python 2.7, so you will need to create a Python 2.7 virtual environment:
```
python2 -m virtualenv mercurial-env
./mercurial-env/bin/pip install mercurial
```
To have a short command, and to satisfy everyone's insatiable need for chemistry-based humor, the command is called **hg**.
```
$ source mercurial-env/bin/activate
(mercurial-env)$ mkdir test-dir
(mercurial-env)$ cd test-dir
(mercurial-env)$ hg init
(mercurial-env)$ hg status
(mercurial-env)$
```
The status is empty since you do not have any files. Add a couple of files:
```
(mercurial-env)$ echo 1 > one
(mercurial-env)$ echo 2 > two
(mercurial-env)$ hg status
? one
? two
(mercurial-env)$ hg addremove
adding one
adding two
(mercurial-env)$ hg commit -m 'Adding stuff'
(mercurial-env)$ hg log
changeset: 0:1f1befb5d1e9
tag: tip
user: Moshe Zadka <moshez@zadka.club>
date: Fri Mar 29 12:42:43 2019 -0700
summary: Adding stuff
```
The **addremove** command is useful: it adds any new files that are not ignored to the list of managed files and removes any files that have been removed.
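The removal side is just as automatic. Deleting a tracked file and re-running the command records the removal; a sketch continuing the session above:
```
(mercurial-env)$ rm two
(mercurial-env)$ hg addremove
removing two
(mercurial-env)$ hg commit -m 'Removing stuff'
```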
As I mentioned, Mercurial extensions are written in Python—they are just regular Python modules.
This is an example of a short Mercurial extension:
```
from mercurial import registrar
from mercurial.i18n import _
cmdtable = {}
command = registrar.command(cmdtable)
@command('say-hello',
[('w', 'whom', '', _('Whom to greet'))])
def say_hello(ui, repo, **opts):
ui.write("hello ", opts['whom'], "\n")
```
A simple way to test it is to put it in a file in the virtual environment manually:
```
$ vi ../mercurial-env/lib/python2.7/site-packages/hello_ext.py
```
Then you need to _enable_ the extension. You can start by enabling it only in the current repository:
```
$ cat >> .hg/hgrc
[extensions]
hello_ext =
```
Now, a greeting is possible:
```
(mercurial-env)$ hg say-hello --whom world
hello world
```
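To make the extension available in every repository rather than just this one, it could instead be enabled in your user-level `~/.hgrc` by pointing directly at the file; the path below is an assumption based on the virtual environment created earlier:
```
[extensions]
hello_ext = ~/mercurial-env/lib/python2.7/site-packages/hello_ext.py
```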
Most extensions will do more useful stuff—possibly even things to do with Mercurial. The **repo** object is a **mercurial.hg.repository** object.
Refer to the [official documentation][5] for more about Mercurial's API. And visit the [official repo][6] for more examples and inspiration.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/getting-started-mercurial
作者:[Moshe Zadka (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO
[2]: https://www.mercurial-scm.org/
[3]: https://www.mercurial-scm.org/wiki/UnixInstall
[4]: mailto:moshez@zadka.club
[5]: https://www.mercurial-scm.org/wiki/MercurialApi#Repositories
[6]: https://www.mercurial-scm.org/repo/hg/file/tip/hgext


@ -0,0 +1,127 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to identify duplicate files on Linux)
[#]: via: (https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to identify duplicate files on Linux
======
Some files on a Linux system can appear in more than one location. Follow these instructions to find and identify these "identical twins" and learn why hard links can be so advantageous.
![Archana Jarajapu \(CC BY 2.0\)][1]
Identifying files that share disk space relies on making use of the fact that the files share the same inode — the data structure that stores all the information about a file except its name and content. If two or more files have different names and file system locations, yet share an inode, they also share content, ownership, permissions, etc.
These files are often referred to as "hard links" — unlike symbolic links that simply point to other files by containing their names. Symbolic links are easy to pick out in a file listing by the "l" in the first position and the **->** symbol that refers to the file being referenced.
```
$ ls -l my*
-rw-r--r-- 4 shs shs 228 Apr 12 19:37 myfile
lrwxrwxrwx 1 shs shs 6 Apr 15 11:18 myref -> myfile
-rw-r--r-- 4 shs shs 228 Apr 12 19:37 mytwin
```
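If you would like to reproduce a listing like this, hard and symbolic links are created with **ln**; the file names here match the example above:
```
$ echo "some data" > myfile
$ ln myfile mytwin      # hard link: shares myfile's inode
$ ln -s myfile myref    # symbolic link: merely points at the name
```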
Identifying hard links in a single directory is not as obvious, but it is still quite easy. If you list the files using the **ls -i** command and sort them by inode number, you can pick out the hard links fairly easily. In this type of ls output, the first column shows the inode numbers.
```
$ ls -i | sort -n | more
...
788000 myfile <==
788000 mytwin <==
801865 Name_Labels.pdf
786692 never leave home angry
920242 NFCU_Docs
800247 nmap-notes
```
Scan your output looking for identical inode numbers and any matches will tell you what you want to know.
If, on the other hand, you simply want to know if one particular file is hard-linked to another file, there's an easier way than scanning through a list of what may be hundreds of files. The find command's **-samefile** option will do the work for you.
```
$ find . -samefile myfile
./myfile
./save/mycopy
./mytwin
```
Notice that the starting location provided to the find command will determine how much of the file system is scanned for matches. In the above example, we're looking in the current directory and subdirectories.
Adding output details using find's **-ls** option might be more convincing:
```
$ find . -samefile myfile -ls
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./myfile
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./save/mycopy
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./mytwin
```
The first column shows the inode number. Then we see file permissions, links, owner, file size, date information, and the names of the files that refer to the same disk content. Note that the link field in this case is a "4" not the "3" we might expect, telling us that there's another link to this same inode as well (but outside our search range).
If you want to look for all instances of hard links in a single directory, you could try a script like this that will create the list and look for the duplicates for you:
```
#!/bin/bash
# searches for files sharing inodes
prev=""
# list files by inode
ls -i | sort -n > /tmp/$0
# search through file for duplicate inode #s
while read line
do
inode=`echo $line | awk '{print $1}'`
if [ "$inode" == "$prev" ]; then
grep $inode /tmp/$0
fi
prev=$inode
done < /tmp/$0
# clean up
rm /tmp/$0
$ ./findHardLinks
788000 myfile
788000 mytwin
```
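With GNU findutils, the same question can be answered without a temporary file: the **-links +1** test matches regular files that have more than one link, and **-printf '%i %p\n'** prints each file's inode number and path. A sketch restricted to the current directory:
```
$ find . -maxdepth 1 -type f -links +1 -printf '%i %p\n' | sort -n
788000 ./myfile
788000 ./mytwin
```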
You can also use the find command to look for files by inode number as in this command. However, this search could involve more than one file system, so it is possible that you will get false results, since the same inode number might be used in another file system where it would not represent the same file. If that's the case, other file details will not be identical.
```
$ find / -inum 788000 -ls 2> /dev/null
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /tmp/mycopy
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/myfile
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/save/mycopy
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/mytwin
```
Note that error output was shunted off to /dev/null so that we didn't have to look at all the "Permission denied" errors that would have otherwise been displayed for other directories that we're not allowed to look through.
Also, scanning for files that contain the same content but don't share inodes (i.e., simply file copies) would take considerably more time and effort.
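If you do need to hunt for same-content copies that live on separate inodes, one common sketch is to checksum every file and group the matches; this reads all file contents, so expect it to be slow on large trees:
```
$ find . -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate
```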
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/reflections-candles-100793651-large.jpg
[2]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world


@ -0,0 +1,419 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Inter-process communication in Linux: Shared storage)
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-storage)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
Inter-process communication in Linux: Shared storage
======
Learn how processes synchronize with each other in Linux.
![Filing papers and documents][1]
This is the first article in a series about [interprocess communication][2] (IPC) in Linux. The series uses code examples in C to clarify the following IPC mechanisms:
* Shared files
* Shared memory (with semaphores)
* Pipes (named and unnamed)
* Message queues
* Sockets
* Signals
This article reviews some core concepts before moving on to the first two of these mechanisms: shared files and shared memory.
### Core concepts
A _process_ is a program in execution, and each process has its own address space, which comprises the memory locations that the process is allowed to access. A process has one or more _threads_ of execution, which are sequences of executable instructions: a _single-threaded_ process has just one thread, whereas a _multi-threaded_ process has more than one thread. Threads within a process share various resources, in particular, address space. Accordingly, threads within a process can communicate straightforwardly through shared memory, although some modern languages (e.g., Go) encourage a more disciplined approach such as the use of thread-safe channels. Of interest here is that different processes, by default, do _not_ share memory.
There are various ways to launch processes that then communicate, and two ways dominate in the examples that follow:
* A terminal is used to start one process, and perhaps a different terminal is used to start another.
* The system function **fork** is called within one process (the parent) to spawn another process (the child).
The first examples take the terminal approach. The [code examples][3] are available in a ZIP file on my website.
### Shared files
Programmers are all too familiar with file access, including the many pitfalls (non-existent files, bad file permissions, and so on) that beset the use of files in programs. Nonetheless, shared files may be the most basic IPC mechanism. Consider the relatively simple case in which one process ( _producer_ ) creates and writes to a file, and another process ( _consumer_ ) reads from this same file:
```
writes +-----------+ reads
producer-------->| disk file |<-------consumer
+-----------+
```
The obvious challenge in using this IPC mechanism is that a _race condition_ might arise: the producer and the consumer might access the file at exactly the same time, thereby making the outcome indeterminate. To avoid a race condition, the file must be locked in a way that prevents a conflict between a _write_ operation and any other operation, whether a _read_ or a _write_. The locking API in the standard system library can be summarized as follows:
* A producer should gain an exclusive lock on the file before writing to the file. An _exclusive_ lock can be held by one process at most, which rules out a race condition because no other process can access the file until the lock is released.
* A consumer should gain at least a shared lock on the file before reading from the file. Multiple _readers_ can hold a _shared_ lock at the same time, but no _writer_ can access a file when even a single _reader_ holds a shared lock.
A shared lock promotes efficiency. If one process is just reading a file and not changing its contents, there is no reason to prevent other processes from doing the same. Writing, however, clearly demands exclusive access to a file.
The standard I/O library includes a utility function named **fcntl** that can be used to inspect and manipulate both exclusive and shared locks on a file. The function works through a _file descriptor_ , a non-negative integer value that, within a process, identifies a file. (Different file descriptors in different processes may identify the same physical file.) For file locking, Linux provides the library function **flock** , which is a thin wrapper around **fcntl**. The first example uses the **fcntl** function to expose API details.
#### Example 1. The _producer_ program
```
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#define FileName "data.dat"
#define DataString "Now is the winter of our discontent\nMade glorious summer by this sun of York\n"
void report_and_exit(const char* msg) {
perror(msg);
exit(-1); /* EXIT_FAILURE */
}
int main() {
struct flock lock;
lock.l_type = F_WRLCK; /* read/write (exclusive versus shared) lock */
lock.l_whence = SEEK_SET; /* base for seek offsets */
lock.l_start = 0; /* 1st byte in file */
lock.l_len = 0; /* 0 here means 'until EOF' */
lock.l_pid = getpid(); /* process id */
int fd; /* file descriptor to identify a file within a process */
if ((fd = open(FileName, O_RDWR | O_CREAT, 0666)) < 0) /* -1 signals an error */
report_and_exit("open failed...");
if (fcntl(fd, F_SETLK, &lock) < 0) /** F_SETLK doesn't block, F_SETLKW does **/
report_and_exit("fcntl failed to get lock...");
else {
write(fd, DataString, strlen(DataString)); /* populate data file */
fprintf(stderr, "Process %d has written to data file...\n", lock.l_pid);
}
/* Now release the lock explicitly. */
lock.l_type = F_UNLCK;
if (fcntl(fd, F_SETLK, &lock) < 0)
report_and_exit("explicit unlocking failed...");
close(fd); /* close the file: would unlock if needed */
return 0; /* terminating the process would unlock as well */
}
```
The main steps in the _producer_ program above can be summarized as follows:
* The program declares a variable of type **struct flock** , which represents a lock, and initializes the structure's five fields. The first initialization: `lock.l_type = F_WRLCK; /* exclusive lock */` makes the lock an exclusive ( _read-write_ ) rather than a shared ( _read-only_ ) lock. If the _producer_ gains the lock, then no other process will be able to write or read the file until the _producer_ releases the lock, either explicitly with the appropriate call to **fcntl** or implicitly by closing the file. (When the process terminates, any opened files would be closed automatically, thereby releasing the lock.)
* The program then initializes the remaining fields. The chief effect is that the _entire_ file is to be locked. However, the locking API allows only designated bytes to be locked. For example, if the file contains multiple text records, then a single record (or even part of a record) could be locked and the rest left unlocked.
* The first call to **fcntl** : `if (fcntl(fd, F_SETLK, &lock) < 0)` tries to lock the file exclusively, checking whether the call succeeded. In general, the **fcntl** function returns **-1** (hence, less than zero) to indicate failure. The second argument **F_SETLK** means that the call to **fcntl** does _not_ block: the function returns immediately, either granting the lock or indicating failure. If the flag **F_SETLKW** (the **W** at the end is for _wait_ ) were used instead, the call to **fcntl** would block until gaining the lock was possible. In the calls to **fcntl** , the first argument **fd** is the file descriptor, the second argument specifies the action to be taken (in this case, **F_SETLK** for setting the lock), and the third argument is the address of the lock structure (in this case, **&lock** ).
* If the _producer_ gains the lock, the program writes two text records to the file.
* After writing to the file, the _producer_ changes the lock structure's **l_type** field to the _unlock_ value: `lock.l_type = F_UNLCK;` and calls **fcntl** to perform the unlocking operation. The program finishes up by closing the file and exiting.
#### Example 2. The _consumer_ program
```
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#define FileName "data.dat"
void report_and_exit(const char* msg) {
perror(msg);
exit(-1); /* EXIT_FAILURE */
}
int main() {
struct flock lock;
lock.l_type = F_WRLCK; /* read/write (exclusive) lock */
lock.l_whence = SEEK_SET; /* base for seek offsets */
lock.l_start = 0; /* 1st byte in file */
lock.l_len = 0; /* 0 here means 'until EOF' */
lock.l_pid = getpid(); /* process id */
int fd; /* file descriptor to identify a file within a process */
if ((fd = open(FileName, O_RDONLY)) < 0) /* -1 signals an error */
report_and_exit("open to read failed...");
/* If the file is write-locked, we can't continue. */
fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
if (lock.l_type != F_UNLCK)
report_and_exit("file is still write locked...");
lock.l_type = F_RDLCK; /* prevents any writing during the reading */
if (fcntl(fd, F_SETLK, &lock) < 0)
report_and_exit("can't get a read-only lock...");
/* Read the bytes (they happen to be ASCII codes) one at a time. */
int c; /* buffer for read bytes */
while (read(fd, &c, 1) > 0) /* 0 signals EOF */
write(STDOUT_FILENO, &c, 1); /* write one byte to the standard output */
/* Release the lock explicitly. */
lock.l_type = F_UNLCK;
if (fcntl(fd, F_SETLK, &lock) < 0)
report_and_exit("explicit unlocking failed...");
close(fd);
return 0;
}
```
The _consumer_ program is more complicated than necessary to highlight features of the locking API. In particular, the _consumer_ program first checks whether the file is exclusively locked and only then tries to gain a shared lock. The relevant code is:
```
lock.l_type = F_WRLCK;
...
fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
if (lock.l_type != F_UNLCK)
report_and_exit("file is still write locked...");
```
The **F_GETLK** operation specified in the **fcntl** call checks for a lock, in this case, an exclusive lock given as **F_WRLCK** in the first statement above. If the specified lock does not exist, then the **fcntl** call automatically changes the lock type field to **F_UNLCK** to indicate this fact. If the file is exclusively locked, the _consumer_ terminates. (A more robust version of the program might have the _consumer_ **sleep** a bit and try again several times.)
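As an aside, such a retry loop is straightforward to write. Here is a minimal sketch (the retry count and one-second pause are arbitrary illustrative choices, and the snippet assumes the surrounding _consumer_ setup, in particular the initialized **lock** structure and the open descriptor **fd**):
```
/* Hypothetical retry: poll up to 5 times before giving up. */
int tries;
for (tries = 0; tries < 5; tries++) {
  lock.l_type = F_WRLCK;              /* ask about exclusive locks */
  fcntl(fd, F_GETLK, &lock);          /* resets l_type to F_UNLCK if none */
  if (lock.l_type == F_UNLCK) break;  /* no write lock: proceed */
  sleep(1);                           /* wait a bit, then try again */
}
if (lock.l_type != F_UNLCK)
  report_and_exit("file is still write locked...");
```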
If the file is not currently locked, then the _consumer_ tries to gain a shared ( _read-only_ ) lock ( **F_RDLCK** ). To shorten the program, the **F_GETLK** call to **fcntl** could be dropped because the **F_RDLCK** call would fail if a _read-write_ lock already were held by some other process. Recall that a _read-only_ lock does prevent any other process from writing to the file, but allows other processes to read from the file. In short, a _shared_ lock can be held by multiple processes. After gaining a shared lock, the _consumer_ program reads the bytes one at a time from the file, prints the bytes to the standard output, releases the lock, closes the file, and terminates.
Here is the output from the two programs launched from the same terminal with **%** as the command line prompt:
```
% ./producer
Process 29255 has written to data file...
% ./consumer
Now is the winter of our discontent
Made glorious summer by this sun of York
```
In this first code example, the data shared through IPC is text: two lines from Shakespeare's play _Richard III_. Yet, the shared file's contents could be voluminous, arbitrary bytes (e.g., a digitized movie), which makes file sharing an impressively flexible IPC mechanism. The downside is that file access is relatively slow, whether the access involves reading or writing. As always, programming comes with tradeoffs. The next example has the upside of IPC through shared memory, rather than shared files, with a corresponding boost in performance.
### Shared memory
Linux systems provide two separate APIs for shared memory: the legacy System V API and the more recent POSIX one. These APIs should never be mixed in a single application, however. A downside of the POSIX approach is that features are still in development and dependent upon the installed kernel version, which impacts code portability. For example, the POSIX API, by default, implements shared memory as a _memory-mapped file_ : for a shared memory segment, the system maintains a _backing file_ with corresponding contents. Shared memory under POSIX can be configured without a backing file, but this may impact portability. My example uses the POSIX API with a backing file, which combines the benefits of memory access (speed) and file storage (persistence).
The shared-memory example has two programs, named _memwriter_ and _memreader_ , and uses a _semaphore_ to coordinate their access to the shared memory. Whenever shared memory comes into the picture with a _writer_ , whether in multi-processing or multi-threading, so does the risk of a memory-based race condition; hence, the semaphore is used to coordinate (synchronize) access to the shared memory.
The _memwriter_ program should be started first in its own terminal. The _memreader_ program then can be started (within a dozen seconds) in its own terminal. The output from the _memreader_ is:
```
`This is the way the world ends...`
```
Each source file has documentation at the top explaining the link flags to be included during compilation.
Let's start with a review of how semaphores work as a synchronization mechanism. A general semaphore also is called a _counting semaphore_ , as it has a value (typically initialized to zero) that can be incremented. Consider a shop that rents bicycles, with a hundred of them in stock, with a program that clerks use to do the rentals. Every time a bike is rented, the semaphore is incremented by one; when a bike is returned, the semaphore is decremented by one. Rentals can continue until the value hits 100 but then must halt until at least one bike is returned, thereby decrementing the semaphore to 99.
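As a concrete aside, the POSIX semaphore API expresses this kind of counting naturally if the semaphore tracks bikes _available_ rather than bikes rented; POSIX semaphores block at zero rather than at a maximum, so this sketch inverts the counting direction of the analogy above. (The names and the unnamed-semaphore setup are illustrative only; the article's own examples use a named semaphore via **sem_open**.)
```
#include <semaphore.h>

sem_t bikes_available;
sem_init(&bikes_available, 0, 100); /* 100 in stock; 2nd arg 0: not shared across processes */

/* A rental decrements the count; sem_wait blocks when no bikes remain. */
sem_wait(&bikes_available);
/* A return increments the count, possibly unblocking a waiting renter. */
sem_post(&bikes_available);
```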
A _binary semaphore_ is a special case requiring only two values: 0 and 1. In this situation, a semaphore acts as a _mutex_ : a mutual exclusion construct. The shared-memory example uses a semaphore as a mutex. When the semaphore's value is 0, the _memwriter_ alone can access the shared memory. After writing, this process increments the semaphore's value, thereby allowing the _memreader_ to read the shared memory.
#### Example 3. Source code for the _memwriter_ process
```
/** Compilation: gcc -o memwriter memwriter.c -lrt -lpthread **/
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <semaphore.h>
#include <string.h>
#include "shmem.h"
void report_and_exit(const char* msg) {
[perror][4](msg);
[exit][5](-1);
}
int main() {
int fd = shm_open(BackingFile, /* name from shmem.h */
O_RDWR | O_CREAT, /* read/write, create if needed */
AccessPerms); /* access permissions (0644) */
if (fd < 0) report_and_exit("Can't open shared mem segment...");
ftruncate(fd, ByteSize); /* get the bytes */
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
ByteSize, /* how many bytes */
PROT_READ | PROT_WRITE, /* access protections */
MAP_SHARED, /* mapping visible to other processes */
fd, /* file descriptor */
0); /* offset: start at 1st byte */
if ((caddr_t) -1 == memptr) report_and_exit("Can't get segment...");
[fprintf][7](stderr, "shared mem address: %p [0..%d]\n", memptr, ByteSize - 1);
[fprintf][7](stderr, "backing file: /dev/shm%s\n", BackingFile );
/* semaphore code to lock the shared mem */
sem_t* semptr = sem_open(SemaphoreName, /* name */
O_CREAT, /* create the semaphore */
AccessPerms, /* protection perms */
0); /* initial value */
if (semptr == (void*) -1) report_and_exit("sem_open");
[strcpy][8](memptr, MemContents); /* copy some ASCII bytes to the segment */
/* increment the semaphore so that memreader can read */
if (sem_post(semptr) < 0) report_and_exit("sem_post");
sleep(12); /* give reader a chance */
/* clean up */
munmap(memptr, ByteSize); /* unmap the storage */
close(fd);
sem_close(semptr);
shm_unlink(BackingFile); /* unlink from the backing file */
return 0;
}
```
Here's an overview of how the _memwriter_ and _memreader_ programs communicate through shared memory:
* The _memwriter_ program, shown above, calls the **shm_open** function to get a file descriptor for the backing file that the system coordinates with the shared memory. At this point, no memory has been allocated. The subsequent call to the misleadingly named function **ftruncate** : [code]`ftruncate(fd, ByteSize); /* get the bytes */`[/code] allocates **ByteSize** bytes, in this case, a modest 512 bytes. The _memwriter_ and _memreader_ programs access the shared memory only, not the backing file. The system is responsible for synchronizing the shared memory and the backing file.
* The _memwriter_ then calls the **mmap** function: [code] caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
ByteSize, /* how many bytes */
PROT_READ | PROT_WRITE, /* access protections */
MAP_SHARED, /* mapping visible to other processes */
fd, /* file descriptor */
0); /* offset: start at 1st byte */ [/code] to get a pointer to the shared memory. (The _memreader_ makes a similar call.) The pointer type **caddr_t** starts with a **c** for **calloc** , a system function that initializes dynamically allocated storage to zeroes. The _memwriter_ uses the **memptr** for the later _write_ operation, using the library **strcpy** (string copy) function.
* At this point, the _memwriter_ is ready for writing, but it first creates a semaphore to ensure exclusive access to the shared memory. A race condition would occur if the _memwriter_ were writing while the _memreader_ was reading. If the call to **sem_open** succeeds: [code] sem_t* semptr = sem_open(SemaphoreName, /* name */
O_CREAT, /* create the semaphore */
AccessPerms, /* protection perms */
0); /* initial value */ [/code] then the writing can proceed. The **SemaphoreName** (any unique non-empty name will do) identifies the semaphore in both the _memwriter_ and the _memreader_. The initial value of zero gives the semaphore's creator, in this case, the _memwriter_ , the right to proceed, in this case, to the _write_ operation.
  * After writing, the _memwriter_ increments the semaphore value to 1: [code]`if (sem_post(semptr) < 0) ..`[/code] with a call to the **sem_post** function. Incrementing the semaphore releases the mutex lock and enables the _memreader_ to perform its _read_ operation. For good measure, the _memwriter_ also unmaps the shared memory from the _memwriter_ address space: [code]`munmap(memptr, ByteSize); /* unmap the storage */`[/code] This bars the _memwriter_ from further access to the shared memory.
#### Example 4. Source code for the _memreader_ process
```
/** Compilation: gcc -o memreader memreader.c -lrt -lpthread **/
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <semaphore.h>
#include <string.h>
#include "shmem.h"
void report_and_exit(const char* msg) {
[perror][4](msg);
[exit][5](-1);
}
int main() {
int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* empty to begin */
if (fd < 0) report_and_exit("Can't get file descriptor...");
/* get a pointer to memory */
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
ByteSize, /* how many bytes */
PROT_READ | PROT_WRITE, /* access protections */
MAP_SHARED, /* mapping visible to other processes */
fd, /* file descriptor */
0); /* offset: start at 1st byte */
if ((caddr_t) -1 == memptr) report_and_exit("Can't access segment...");
/* create a semaphore for mutual exclusion */
sem_t* semptr = sem_open(SemaphoreName, /* name */
O_CREAT, /* create the semaphore */
AccessPerms, /* protection perms */
0); /* initial value */
if (semptr == (void*) -1) report_and_exit("sem_open");
/* use semaphore as a mutex (lock) by waiting for writer to increment it */
if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
int i;
for (i = 0; i < [strlen][6](MemContents); i++)
write(STDOUT_FILENO, memptr + i, 1); /* one byte at a time */
sem_post(semptr);
}
/* cleanup */
munmap(memptr, ByteSize);
close(fd);
sem_close(semptr);
shm_unlink(BackingFile); /* unlink from the backing file */
return 0;
}
```
In both the _memwriter_ and _memreader_ programs, the shared-memory functions of main interest are **shm_open** and **mmap** : on success, the first call returns a file descriptor for the backing file, which the second call then uses to get a pointer to the shared memory segment. The calls to **shm_open** are similar in the two programs except that the _memwriter_ program creates the shared memory, whereas the _memreader_ only accesses this already created memory:
```
int fd = shm_open(BackingFile, O_RDWR | O_CREAT, AccessPerms); /* memwriter */
int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* memreader */
```
With a file descriptor in hand, the calls to **mmap** are the same:
```
`caddr_t memptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);`
```
The first argument to **mmap** is **NULL** , which means that the system determines where to allocate the memory in virtual address space. It's possible (but tricky) to specify an address instead. The **MAP_SHARED** flag indicates that the allocated memory is shareable among processes, and the last argument (in this case, zero) means that the offset for the shared memory should be the first byte. The **size** argument specifies the number of bytes to be allocated (in this case, 512), and the protection argument indicates that the shared memory can be written and read.
When the _memwriter_ program executes successfully, the system creates and maintains the backing file; on my system, the file is _/dev/shm/shMemEx_ , with _shMemEx_ as my name (given in the header file _shmem.h_ ) for the shared storage. In the current version of the _memwriter_ and _memreader_ programs, the statement:
```
`shm_unlink(BackingFile); /* removes backing file */`
```
removes the backing file. If the **unlink** statement is omitted, then the backing file persists after the program terminates.
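For instance, with the **shm_unlink** call commented out, the backing file can be inspected after a run with an ordinary shell command (the listing below is what one would expect to see on my system, not captured output):
```
% ls -l /dev/shm/shMemEx
```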
The _memreader_ , like the _memwriter_ , accesses the semaphore through its name in a call to **sem_open**. But the _memreader_ then goes into a wait state until the _memwriter_ increments the semaphore, whose initial value is 0:
```
`if (!sem_wait(semptr)) { /* wait until semaphore != 0 */`
```
Once the wait is over, the _memreader_ reads the ASCII bytes from the shared memory, cleans up, and terminates.
The shared-memory API includes operations explicitly to synchronize the shared memory segment and the backing file. These operations have been omitted from the example to reduce clutter and keep the focus on the memory-sharing and semaphore code.
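For reference, the omitted synchronization amounts to a single call. Here is a sketch of how the _memwriter_ might flush its writes to the backing file, using the standard **msync** function (error handling kept minimal):
```
/* Force the written bytes out to the backing file: MS_SYNC blocks
   until the write-back completes; MS_ASYNC would schedule it instead. */
if (msync(memptr, ByteSize, MS_SYNC) < 0)
  report_and_exit("msync");
```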
The _memwriter_ and _memreader_ programs are likely to execute without inducing a race condition even if the semaphore code is removed: the _memwriter_ creates the shared memory segment and writes immediately to it; the _memreader_ cannot even access the shared memory until this has been created. However, best practice requires that shared-memory access is synchronized whenever a _write_ operation is in the mix, and the semaphore API is important enough to be highlighted in a code example.
### Wrapping up
The shared-file and shared-memory examples show how processes can communicate through _shared storage_ , files in one case and memory segments in the other. The APIs for both approaches are relatively straightforward. Do these approaches have a common downside? Modern applications often deal with streaming data, indeed, with massively large streams of data. Neither the shared-file nor the shared-memory approach is well suited for massive data streams. Channels of one type or another are better suited. Part 2 thus introduces channels and message queues, again with code examples in C.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/interprocess-communication-linux-storage
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
[2]: https://en.wikipedia.org/wiki/Inter-process_communication
[3]: http://condor.depaul.edu/mkalin
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Kubernetes on Fedora IoT with k3s)
[#]: via: (https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/)
[#]: author: (Lennart Jern https://fedoramagazine.org/author/lennartj/)
Kubernetes on Fedora IoT with k3s
======
![][1]
Fedora IoT is an upcoming Fedora edition targeted at the Internet of Things. It was introduced last year on Fedora Magazine in the article [How to turn on an LED with Fedora IoT][2]. Since then, it has continued to improve together with Fedora Silverblue to provide an immutable base operating system aimed at container-focused workflows.
Kubernetes is an immensely popular container orchestration system. It is perhaps most commonly used on powerful hardware handling huge workloads. However, it can also be used on lightweight devices such as the Raspberry Pi 3. Read on to find out how.
### Why Kubernetes?
While Kubernetes is all the rage in the cloud, it may not be immediately obvious to run it on a small single board computer. But there are certainly reasons for doing it. First of all it is a great way to learn and get familiar with Kubernetes without the need for expensive hardware. Second, because of its popularity, there are [tons of applications][3] that come pre-packaged for running in Kubernetes clusters. Not to mention the large community that can help if you ever get stuck.
Last but not least, container orchestration may actually make things easier, even at the small scale of a home lab. This may not be apparent while tackling the learning curve, but these skills will help when dealing with any cluster in the future. It doesn't matter if it's a single node Raspberry Pi cluster or a large-scale machine learning farm.
#### K3s: a lightweight Kubernetes
A “normal” installation of Kubernetes (if such a thing can be said to exist) is a bit on the heavy side for IoT. The recommendation is a minimum of 2 GB RAM per machine! However, there are plenty of alternatives, and one of the newcomers is [k3s][4], a lightweight Kubernetes distribution.
K3s is quite special in that it has replaced etcd with SQLite for its key-value storage needs. Another thing to note is that k3s ships as a single binary instead of one binary per component. This diminishes the memory footprint and simplifies the installation. Thanks to the above, k3s should be able to run with just 512 MB of RAM, perfect for a small single board computer!
### What you will need
1. Fedora IoT in a virtual machine or on a physical device. See the excellent getting started guide [here][5]. One machine is enough but two will allow you to test adding more nodes to the cluster.
2. [Configure the firewall][6] to allow traffic on ports 6443 and 8472. Or simply disable it for this experiment by running “systemctl stop firewalld”.
### Install k3s
Installing k3s is very easy. Simply run the installation script:
```
curl -sfL https://get.k3s.io | sh -
```
This will download, install and start up k3s. After installation, get a list of nodes from the server by running the following command:
```
kubectl get nodes
```
Note that there are several options that can be passed to the installation script through environment variables. These can be found in the [documentation][7]. And of course, there is nothing stopping you from installing k3s manually by downloading the binary directly.
While great for experimenting and learning, a single node cluster is not much of a cluster. Luckily, adding another node is no harder than setting up the first one. Just pass two environment variables to the installation script to make it find the first node and avoid running the server part of k3s:
```
curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \
K3S_TOKEN=XXX sh -
```
The example-url above should be replaced by the IP address or fully qualified domain name of the first node. On that node the token (represented by XXX) is found in the file /var/lib/rancher/k3s/server/node-token.
### Deploy some containers
Now that we have a Kubernetes cluster, what can we actually do with it? Let's start by deploying a simple web server.
```
kubectl create deployment my-server --image nginx
```
This will create a [Deployment][8] named “my-server” from the container image “nginx” (defaulting to Docker Hub as the registry and the latest tag). You can see the Pod created by running the following command.
```
kubectl get pods
```
In order to access the nginx server running in the pod, first expose the Deployment through a [Service][9]. The following command will create a Service with the same name as the deployment.
```
kubectl expose deployment my-server --port 80
```
The Service works as a kind of load balancer and DNS record for the Pods. For instance, when running a second Pod, we will be able to _curl_ the nginx server just by specifying _my-server_ (the name of the Service). See the example below for how to do this.
```
# Start a pod and run bash interactively in it
kubectl run debug --generator=run-pod/v1 --image=fedora -it -- bash
# Wait for the bash prompt to appear
curl my-server
# You should get the "Welcome to nginx!" page as output
```
### Ingress controller and external IP
By default, a Service only gets a ClusterIP (only accessible inside the cluster), but you can also request an external IP for the service by setting its type to [LoadBalancer][10]. However, not all applications require their own IP address. Instead, it is often possible to share one IP address among many services by routing requests based on the host header or path. You can accomplish this in Kubernetes with an [Ingress][11], and this is what we will do. Ingresses also provide additional features such as TLS encryption of the traffic without having to modify your application.
Kubernetes needs an ingress controller to make the Ingress resources work and k3s includes [Traefik][12] for this purpose. It also includes a simple service load balancer that makes it possible to get an external IP for a Service in the cluster. The [documentation][13] describes the service like this:
> k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.
>
> k3s README
The ingress controller is already exposed with this load balancer service. You can find the IP address that it is using with the following command.
```
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 33d
default my-server ClusterIP 10.43.174.38 <none> 80/TCP 30m
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 33d
kube-system traefik LoadBalancer 10.43.145.104 10.0.0.8 80:31596/TCP,443:31539/TCP 33d
```
Look for the Service named traefik. In the above example the IP we are interested in is 10.0.0.8.
### Route incoming requests
Let's create an Ingress that routes requests to our web server based on the host header. This example uses [xip.io][14] to avoid having to set up DNS records. It works by including the IP address as a subdomain: any subdomain of 10.0.0.8.xip.io resolves to the IP 10.0.0.8. In other words, my-server.10.0.0.8.xip.io is used to reach the ingress controller in the cluster. You can try this right now (with your own IP instead of 10.0.0.8). Without an ingress in place, you should reach the “default backend”, which is just a page showing “404 page not found”.
We can tell the ingress controller to route requests to our web server Service with the following Ingress.
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-server
spec:
rules:
- host: my-server.10.0.0.8.xip.io
http:
paths:
- path: /
backend:
serviceName: my-server
servicePort: 80
```
Save the above snippet in a file named _my-ingress.yaml_ and add it to the cluster by running this command:
```
kubectl apply -f my-ingress.yaml
```
You should now be able to reach the default nginx welcoming page on the fully qualified domain name you chose. In my example this would be my-server.10.0.0.8.xip.io. The ingress controller is routing the requests based on the information in the Ingress. A request to my-server.10.0.0.8.xip.io will be routed to the Service and port defined as backend in the Ingress (my-server and 80 in this case).
### What about IoT then?
Imagine the following scenario. You have dozens of devices spread out around your home or farm. It is a heterogeneous collection of IoT devices with various hardware capabilities, sensors and actuators. Maybe some of them have cameras, weather or light sensors. Others may be hooked up to control the ventilation, lights, blinds or blink LEDs.
In this scenario, you want to gather data from all the sensors, maybe process and analyze it before you finally use it to make decisions and control the actuators. In addition to this, you may want to visualize what's going on by setting up a dashboard. So how can Kubernetes help us manage something like this? How can we make sure that Pods run on suitable devices?
The simple answer is labels. You can label the nodes according to capabilities, like this:
```
kubectl label nodes <node-name> <label-key>=<label-value>
# Example
kubectl label nodes node2 camera=available
```
Once they are labeled, it is easy to select suitable nodes for your workload with [nodeSelectors][15]. The final piece of the puzzle, if you want to run your Pods on _all_ suitable nodes, is to use [DaemonSets][16] instead of Deployments. In other words, create one DaemonSet for each data collecting application that uses some unique sensor and use nodeSelectors to make sure they only run on nodes with the proper hardware.
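As an illustration, a DaemonSet that runs a hypothetical camera-collector image only on the nodes labeled above might look like this (the image and names are placeholders, not from the original article):
```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: camera-collector
spec:
  selector:
    matchLabels:
      app: camera-collector
  template:
    metadata:
      labels:
        app: camera-collector
    spec:
      nodeSelector:
        camera: available                        # matches the label set earlier
      containers:
      - name: collector
        image: example/camera-collector:latest   # placeholder image
```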
The service discovery feature that allows Pods to find each other simply by Service name makes it quite easy to handle these kinds of distributed systems. You don't need to know or configure IP addresses or custom ports for the applications. Instead, they can easily find each other through named Services in the cluster.
#### Utilize spare resources
With the cluster up and running, collecting data and controlling your lights and climate control, you may feel that you are finished. However, there are still plenty of compute resources in the cluster that could be used for other projects. This is where Kubernetes really shines.
You shouldn't have to worry about where exactly those resources are or calculate whether there is enough memory to fit an extra application here or there. This is exactly what orchestration solves! You can easily deploy more applications in the cluster and let Kubernetes figure out where (or if) they will fit.
Why not run your own [NextCloud][17] instance? Or maybe [gitea][18]? You could also set up a CI/CD pipeline for all those IoT containers. After all, why would you build and cross compile them on your main computer if you can do it natively in the cluster?
The point here is that Kubernetes makes it easier to make use of the “hidden” resources that you often end up with otherwise. Kubernetes handles scheduling of Pods in the cluster based on available resources and fault tolerance so that you don't have to. However, in order to help Kubernetes make reasonable decisions you should definitely add [resource requests][19] to your workloads.
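For example, a container spec with modest resource requests might look like this (the names and values are illustrative):
```
containers:
- name: my-app
  image: example/my-app:latest   # placeholder image
  resources:
    requests:
      memory: "64Mi"
      cpu: "100m"
```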
### Summary
While Kubernetes, or container orchestration in general, may not usually be associated with IoT, it certainly makes a lot of sense to have an orchestrator when you are dealing with distributed systems. Not only does it allow you to handle a diverse and heterogeneous fleet of devices in a unified way, but it also simplifies communication between them. In addition, Kubernetes makes it easier to utilize spare resources.
Container technology made it possible to build applications that could “run anywhere”. Now Kubernetes makes it easier to manage the “anywhere” part. And as an immutable base to build it all on, we have Fedora IoT.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/
作者:[Lennart Jern][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/lennartj/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/k3s-1-816x345.png
[2]: https://fedoramagazine.org/turnon-led-fedora-iot/
[3]: https://hub.helm.sh/
[4]: https://k3s.io
[5]: https://docs.fedoraproject.org/en-US/iot/getting-started/
[6]: https://github.com/rancher/k3s#open-ports--network-security
[7]: https://github.com/rancher/k3s#systemd
[8]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[9]: https://kubernetes.io/docs/concepts/services-networking/service/
[10]: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
[11]: https://kubernetes.io/docs/concepts/services-networking/ingress/
[12]: https://traefik.io/
[13]: https://github.com/rancher/k3s/blob/master/README.md#service-load-balancer
[14]: http://xip.io/
[15]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
[16]: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
[17]: https://nextcloud.com/
[18]: https://gitea.io/en-us/
[19]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Troubleshooting slow WiFi on Linux)
[#]: via: (https://www.linux.com/blog/troubleshooting-slow-wifi-linux)
[#]: author: (OpenSource.com https://www.linux.com/USERS/OPENSOURCECOM)
Troubleshooting slow WiFi on Linux
======
I'm no stranger to diagnosing hardware problems on [Linux systems][1]. Even though most of my professional work over the past few years has involved virtualization, I still enjoy crouching under desks and fumbling around with devices and memory modules. Well, except for the "crouching under desks" part. But none of that means that persistent and mysterious bugs aren't frustrating. I recently faced off against one of those bugs on my Ubuntu 18.04 workstation, which remained unsolved for months.
Here, I'll share my problem and my many attempts to resolve it. Even though you'll probably never encounter my specific issue, the troubleshooting process might be helpful. And besides, you'll get to enjoy feeling smug at how much time and effort I wasted following useless leads.
Read more at: [OpenSource.com][2]
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/troubleshooting-slow-wifi-linux
作者:[OpenSource.com][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/USERS/OPENSOURCECOM
[b]: https://github.com/lujun9972
[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
[2]: https://opensource.com/article/19/4/troubleshooting-wifi-linux
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building a DNS-as-a-service with OpenStack Designate)
[#]: via: (https://opensource.com/article/19/4/getting-started-openstack-designate)
[#]: author: (Amjad Yaseen https://opensource.com/users/ayaseen)
Building a DNS-as-a-service with OpenStack Designate
======
Learn how to install and configure Designate, a multi-tenant
DNS-as-a-service (DNSaaS) for OpenStack.
![Command line prompt][1]
[Designate][2] is a multi-tenant DNS-as-a-service that includes a REST API for domain and record management, a framework for integration with [Neutron][3], and integration support for Bind9.
You would want to consider a DNSaaS for the following:
* A clean REST API for managing zones and records
* Automatic records generated (with OpenStack integration)
* Support for multiple authoritative name servers
* Hosting multiple projects/organizations
![Designate's architecture][4]
This article explains how to manually install and configure the latest release of Designate service on CentOS or Red Hat Enterprise Linux 7 (RHEL 7), but you can use the same configuration on other distributions.
### Install Designate on OpenStack
I have Ansible roles for bind and Designate that demonstrate the setup in my [GitHub repository][5].
This setup presumes that the bind service is external to the OpenStack controller node (even though you can install bind locally).
1. Install Designate's packages and bind (on the OpenStack controller): [code]`# yum install openstack-designate-* bind bind-utils -y`[/code]
2. Create the Designate database and user: [code] MariaDB [(none)]> CREATE DATABASE designate CHARACTER SET utf8 COLLATE utf8_general_ci;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO \
'designate'@'localhost' IDENTIFIED BY 'rhlab123';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'%' \
IDENTIFIED BY 'rhlab123'; [/code]
Note: Bind packages must be installed on the controller side for Remote Name Daemon Control (RNDC) to function properly.
### Configure bind (DNS server)
1. Generate RNDC files: [code] rndc-confgen -a -k designate -c /etc/rndc.key -r /dev/urandom
cat <<EOF > /etc/rndc.conf
include "/etc/rndc.key";
options {
default-key "designate";
default-server {{ DNS_SERVER_IP }};
default-port 953;
};
EOF [/code]
2. Add the following into **named.conf** : [code]`include "/etc/rndc.key"; controls { inet {{ DNS_SERVER_IP }} allow { localhost;{{ CONTROLLER_SERVER_IP }}; } keys { "designate"; }; };`[/code] In the **option** section, add: [code] options {
...
allow-new-zones yes;
request-ixfr no;
listen-on port 53 { any; };
recursion no;
allow-query { 127.0.0.1; {{ CONTROLLER_SERVER_IP }}; };
}; [/code] Add the right permissions: [code] chown named:named /etc/rndc.key
chown named:named /etc/rndc.conf
chmod 600 /etc/rndc.key
chown -v root:named /etc/named.conf
chmod g+w /var/named
# systemctl restart named
# setsebool named_write_master_zones 1 [/code]
3. Push **rndc.key** and **rndc.conf** into the OpenStack controller: [code]`# scp -r /etc/rndc* {{ CONTROLLER_SERVER_IP }}:/etc/`[/code]
### Create OpenStack Designate service and endpoints
Enter:
```
# openstack user create --domain default --password-prompt designate
# openstack role add --project services --user designate admin
# openstack service create --name designate --description "DNS" dns
# openstack endpoint create --region RegionOne dns public http://{{ CONTROLLER_SERVER_IP }}:9001/
# openstack endpoint create --region RegionOne dns internal http://{{ CONTROLLER_SERVER_IP }}:9001/
# openstack endpoint create --region RegionOne dns admin http://{{ CONTROLLER_SERVER_IP }}:9001/
```
### Configure Designate service
1. Edit **/etc/designate/designate.conf** :
* In the **[service:api]** section, configure **auth_strategy** : [code] [service:api]
listen = 0.0.0.0:9001
auth_strategy = keystone
api_base_uri = http://{{ CONTROLLER_SERVER_IP }}:9001/
enable_api_v2 = True
enabled_extensions_v2 = quotas, reports [/code]
* In the **[keystone_authtoken]** section, configure the following options: [code] [keystone_authtoken]
auth_type = password
username = designate
password = rhlab123
project_name = service
project_domain_name = Default
user_domain_name = Default
www_authenticate_uri = http://{{ CONTROLLER_SERVER_IP }}:5000/
auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000/ [/code]
* In the **[service:worker]** section, enable the worker model: [code] enabled = True
notify = True [/code]
* In the **[storage:sqlalchemy]** section, configure database access: [code] [storage:sqlalchemy]
connection = mysql+pymysql://designate:rhlab123@{{ CONTROLLER_SERVER_IP }}/designate [/code]
  * Populate the Designate database: [code]`# su -s /bin/sh -c "designate-manage database sync" designate`[/code]
2. Create Designate's **pools.yaml** file (has target and bind details):
* Edit **/etc/designate/pools.yaml** : [code] - name: default
# The name is immutable. There will be no option to change the name after
# creation; the only way to change it will be to delete it
# (and all zones associated with it) and recreate it.
description: Default Pool
attributes: {}
# List out the NS records for zones hosted within this pool
# This should be a record that is created outside of designate, that
# points to the public IP of the controller node.
ns_records:
- hostname: {{Controller_FQDN}}. # This is mDNS
priority: 1
# List out the nameservers for this pool. These are the actual BIND servers.
# We use these to verify changes have propagated to all nameservers.
nameservers:
- host: {{ DNS_SERVER_IP }}
port: 53
# List out the targets for this pool. For BIND there will be one
# entry for each BIND server, as we have to run rndc command on each server
targets:
- type: bind9
description: BIND9 Server 1
# List out the designate-mdns servers from which BIND servers should
# request zone transfers (AXFRs).
# This should be the IP of the controller node.
# If you have multiple controllers you can add multiple masters
# by running designate-mdns on them, and adding them here.
masters:
- host: {{ CONTROLLER_SERVER_IP }}
port: 5354
# BIND Configuration options
options:
host: {{ DNS_SERVER_IP }}
port: 53
rndc_host: {{ DNS_SERVER_IP }}
rndc_port: 953
rndc_key_file: /etc/rndc.key
rndc_config_file: /etc/rndc.conf [/code]
  * Populate Designate's pools: [code]`su -s /bin/sh -c "designate-manage pool update" designate`[/code]
3. Start Designate central and API services: [code]`systemctl enable --now designate-central designate-api`[/code]
4. Verify Designate's services are up: [code] # openstack dns service list
+--------------+--------+-------+--------------+
| service_name | status | stats | capabilities |
+--------------+--------+-------+--------------+
| central | UP | - | - |
| api | UP | - | - |
| mdns | UP | - | - |
| worker | UP | - | - |
| producer | UP | - | - |
+--------------+--------+-------+--------------+ [/code]
### Configure OpenStack Neutron with external DNS
1. Configure iptables for Designate services: [code] # iptables -I INPUT -p tcp -m multiport --dports 9001 -m comment --comment "designate incoming" -j ACCEPT
# iptables -I INPUT -p tcp -m multiport --dports 5354 -m comment --comment "Designate mdns incoming" -j ACCEPT
# iptables -I INPUT -p tcp -m multiport --dports 53 -m comment --comment "bind incoming" -j ACCEPT
# iptables -I INPUT -p udp -m multiport --dports 53 -m comment --comment "bind/powerdns incoming" -j ACCEPT
# iptables -I INPUT -p tcp -m multiport --dports 953 -m comment --comment "rndc incoming - bind only" -j ACCEPT
# service iptables save; service iptables restart
# setsebool named_write_master_zones 1 [/code]
2. Edit the **[default]** section of **/etc/neutron/neutron.conf** : [code]`external_dns_driver = designate`[/code]
3. Add the **[designate]** section in **/etc/neutron/neutron.conf** : [code] [designate]
url = http://{{ CONTROLLER_SERVER_IP }}:9001/v2 ## The Designate endpoint
auth_type = password
auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000
username = designate
password = rhlab123
project_name = services
project_domain_name = Default
user_domain_name = Default
allow_reverse_dns_lookup = True
ipv4_ptr_zone_prefix_size = 24
ipv6_ptr_zone_prefix_size = 116 [/code]
4. Edit **dns_domain** in **neutron.conf** : [code] dns_domain = rhlab.dev.
# systemctl restart neutron-* [/code]
5. Add **dns** to the list of Modular Layer 2 (ML2) drivers in **/etc/neutron/plugins/ml2/ml2_conf.ini** : [code]`extension_drivers=port_security,qos,dns`[/code]
6. Add a **zone** in Designate: [code]`# openstack zone create --email admin@rhlab.dev rhlab.dev.`[/code] Add a new record in **zone rhlab.dev** : [code]`# openstack recordset create --record '192.168.1.230' --type A rhlab.dev. Test`[/code]
Designate should now be installed and configured.
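As a quick sanity check (a suggestion, not part of the original walkthrough), the standard Designate client commands can list the new zone and its records:
```
# openstack zone list
# openstack recordset list rhlab.dev.
```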
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/getting-started-openstack-designate
作者:[Amjad Yaseen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ayaseen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://docs.openstack.org/designate/latest/
[3]: /article/19/3/openstack-neutron
[4]: https://opensource.com/sites/default/files/uploads/openstack_designate_architecture.png (Designate's architecture)
[5]: https://github.com/ayaseen/designate
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Can schools be agile?)
[#]: via: (https://opensource.com/open-organization/19/4/education-culture-agile)
[#]: author: (Ben Owens https://opensource.com/users/engineerteacher/users/ke4qqq/users/n8chz/users/don-watkins)
Can schools be agile?
======
We certainly don't need to run our schools like businesses—but we could
benefit from educational organizations more focused on continuous
improvement.
![][1]
We've all had those _deja vu_ moments that make us think "I've seen this before!" I experienced them often in the late 1980s, when I first began my career in industry. I was caught up in a wave of organizational change, where the U.S. manufacturing sector was experimenting with various models that asked leaders, managers, and engineers like me to rethink how we approached things like quality, cost, innovation, and shareholder value. It seems as if every year (sometimes, more frequently) we'd study yet another book to identify the "best practices" necessary for making us leaner, flatter, more nimble, and more responsive to the needs of the customer.
Many of the approaches were so transformational that their core principles still resonate with me today. Specific ideas and methods from thought leaders such as John Kotter, Peter Drucker, W. Edwards Deming, and Peter Senge were truly pivotal for our ability to rethink our work, as were the adoption of process improvement methods such as Six Sigma and those embodied in the "Toyota Way."
But others seemed to simply repackage these same ideas with a sexy new twist—hence my _deja vu_.
And yet when I began my career as a teacher, I encountered a context that _didn't_ give me that feeling: education. In fact, I was surprised to find that "getting better all the time" was _not_ the same high priority in my new profession that it was in my old one (particularly at the level of my role as a classroom teacher).
Why aren't more educational organizations working to create cultures of continuous improvement? I can think of several reasons, but let me address two.
### Widgets no more
The first barrier to a culture of continuous improvement is education's general reticence to look at other professions for ideas it can adapt and adopt—especially ideas from the business community. The second is education's predominant leadership model, which remains top-down and rooted in hierarchy. Conversations about systemic, continuous improvement tend to be the purview of a relatively small group of school or district leaders: principals, assistant principals, superintendents, and the like. But widespread organizational culture change can't occur if only one small group is involved in it.
Before unpacking these points a bit further, I'd like to emphasize that there are certainly exceptions to the above generalization (many I have seen first hand) and that there are two basic assumptions that I think any education stakeholder should be able to agree with:
1. Continuous improvement must be an essential mindset for _anyone_ involved in the work of providing high-quality and equitable teaching and learning systems for students, and
2. Decisions by leaders of our schools will more greatly benefit students and the communities in which they live when those decisions are informed and influenced by those who work closest with students.
So why a tendency to ignore (or be outright hostile toward) ideas that come from outside the education space?
I, for example, have certainly faced criticism in the past for suggesting that we look to other professions for ideas and inspiration that can help us better meet the needs of students. A common refrain is something like: "You're trying to treat our students like widgets!" But how could our students be treated any more like widgets than they already are? They matriculate through school in age-based cohorts, going from siloed class to class each day by the sound of a shrill bell, and receive grades based on arbitrary tests that emphasize sameness over individuality.
It may be news to many inside of education, but widgets—abstract units of production that evoke the idea of assembly line standardization—are not a significant part of the modern manufacturing sector. Thanks to the culture of continuous improvement described above, modern, advanced manufacturing delivers just what the individual customer wants, at a competitive price, exactly when she wants it. If we adapted this model to our schools, teachers would be more likely to collaborate and constantly refine their unique paths of growth for all students based on just-in-time needs and desires—regardless of the time, subject, or any other traditional norm.
What I'm advocating is a clear-eyed and objective look at any idea from any sector with potential to help us better meet the needs of individual students, not that we somehow run our schools like businesses. In order for this to happen effectively, however, we need to scrutinize a leadership structure that has frankly remained stagnant for over 100 years.
### Toward continuous improvement
While I certainly appreciate the argument that education is an animal significantly different from other professions, I also believe that rethinking an organizational and leadership structure is an applicable exercise for any entity wanting to remain responsible (and responsive) to the needs of its stakeholders. Most other professions have taken a hard look at their traditional, closed, hierarchical structures and moved to ones that encourage collective autonomy per shared goals of excellence—organizational elements essential for continuous improvement. It's time our schools and districts do the same by expanding their horizon beyond sources that, while well intended, are developed from a lens of the current paradigm.
Not surprisingly, a go-to resource I recommend to any school wanting to begin or accelerate this process is _The Open Organization_ by Jim Whitehurst. Not only does the book provide a window into how educators can create more open, inclusive leadership structures—where mutual respect enables nimble decisions to be made per real-time data—but it does so in language easily adaptable to the rather strange lexicon that's second nature to educators. Open organization thinking provides pragmatic ways any organization can empower members to be more open: sharing ideas and resources, embracing a culture of collaborative participation as a top priority, developing an innovation mindset through rapid prototyping, valuing ideas based on merit rather than the rank of the person proposing them, and building a strong sense of community that's baked into the organization's DNA. Such an open organization crowd-sources ideas from both inside and outside its formal structure and creates the type of environment that enables localized, student-centered innovations to thrive.
Here's the bottom line: Essential to a culture of continuous improvement is recognizing that what we've done in the past may not be suitable in a rapidly changing future. For educators, that means we simply can't rely on solutions and practices we developed in a factory-model paradigm. We must acknowledge countless examples of best practices from other sectors—such as non-profits, the military, the medical profession, and yes, even business—that can at least _inform_ how we rethink what we do in the best interest of students. By moving beyond the traditionally sanctioned "eduspeak" world, we create opportunities for considering new perspectives. We can better see the forest for the trees, taking a more objective look at the problems we face, as well as acknowledging what we do very well.
Intentionally considering ideas from all sources—from first year classroom teachers to the latest NYT Business & Management Leadership bestseller—offers us a powerful way to engage existing talent within our schools to help overcome the institutionalized inertia that has prevented more positive change from taking hold in our schools and districts.
Relentlessly pursuing methods of continuous improvement should not be a behavior confined to organizations fighting to remain competitive in a global, innovation economy, nor should it be left to a select few charged with the operation of our schools. When everyone in an organization is always thinking about what they can do differently _today_ to improve what they did _yesterday_ , then you have an organization living a culture of excellence. That's the kind of radically collaborative and innovative culture we should especially expect for organizations focused on changing the lives of young people.
I'm eagerly awaiting the day when I enter a school, recognize that spirit, and smile to myself as I say, "I've seen this before."
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/4/education-culture-agile
作者:[Ben Owens][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/engineerteacher/users/ke4qqq/users/n8chz/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_network.png?itok=ySEHuAQ8
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Detecting malaria with deep learning)
[#]: via: (https://opensource.com/article/19/4/detecting-malaria-deep-learning)
[#]: author: (Dipanjan (DJ) Sarkar (Red Hat) https://opensource.com/users/djsarkar)
Detecting malaria with deep learning
======
Artificial intelligence combined with open source tools can improve
diagnosis of the fatal disease malaria.
![][1]
Artificial intelligence (AI) and open source tools, technologies, and frameworks are a powerful combination for improving society. _"Health is wealth"_ is perhaps a cliche, yet it's very accurate! In this article, we will examine how AI can be leveraged for detecting the deadly disease malaria with a low-cost, effective, and accurate open source deep learning solution.
While I am neither a doctor nor a healthcare researcher and I'm nowhere near as qualified as they are, I am interested in applying AI to healthcare research. My intent in this article is to showcase how AI and open source solutions can help malaria detection and reduce manual labor.
![Python and TensorFlow][2]
Python and TensorFlow: A great combo to build open source deep learning solutions
Thanks to the power of Python and deep learning frameworks like TensorFlow, we can build robust, scalable, and effective deep learning solutions. Because these tools are free and open source, we can build solutions that are very cost-effective and easily adopted and used by anyone. Let's get started!
### Motivation for the project
Malaria is a deadly, infectious, mosquito-borne disease caused by _Plasmodium_ parasites that are transmitted by the bites of infected female _Anopheles_ mosquitoes. There are five parasites that cause malaria, but two types— _P. falciparum_ and _P. vivax_ —cause the majority of the cases.
![Malaria heat map][3]
This map shows that malaria is prevalent around the globe, especially in tropical regions, but the nature and fatality of the disease is the primary motivation for this project.
If an infected mosquito bites you, parasites carried by the mosquito enter your blood and start destroying oxygen-carrying red blood cells (RBC). Typically, the first symptoms of malaria are similar to a virus like the flu and they usually begin within a few days or weeks after the mosquito bite. However, these deadly parasites can live in your body for over a year without causing symptoms, and a delay in treatment can lead to complications and even death. Therefore, early detection can save lives.
The World Health Organization's (WHO) [malaria facts][4] indicate that nearly half the world's population is at risk from malaria, and there are over 200 million malaria cases and approximately 400,000 deaths due to malaria every year. This is a motivation to make malaria detection and diagnosis fast, easy, and effective.
### Methods of malaria detection
There are several methods that can be used for malaria detection and diagnosis. The paper on which our project is based, "[Pre-trained convolutional neural networks as feature extractors toward improved Malaria parasite detection in thin blood smear images][5]," by Rajaraman, et al., introduces some of the methods, including polymerase chain reaction (PCR) and rapid diagnostic tests (RDT). These two tests are typically used where high-quality microscopy services are not readily available.
The standard malaria diagnosis is typically based on a blood-smear workflow, according to Carlos Ariza's article "[Malaria Hero: A web app for faster malaria diagnosis][6]," which I learned about in Adrian Rosebrock's "[Deep learning and medical image analysis with Keras][7]." I appreciate the authors of these excellent resources for giving me more perspective on malaria prevalence, diagnosis, and treatment.
![Blood smear workflow for Malaria detection][8]
A blood smear workflow for Malaria detection
According to WHO protocol, diagnosis typically involves intensive examination of the blood smear at 100X magnification. Trained people manually count how many red blood cells contain parasites out of 5,000 cells. As the Rajaraman, et al., paper cited above explains:
> Thick blood smears assist in detecting the presence of parasites while thin blood smears assist in identifying the species of the parasite causing the infection (Centers for Disease Control and Prevention, 2012). The diagnostic accuracy heavily depends on human expertise and can be adversely impacted by the inter-observer variability and the liability imposed by large-scale diagnoses in disease-endemic/resource-constrained regions (Mitiku, Mengistu, and Gelaw, 2003). Alternative techniques such as polymerase chain reaction (PCR) and rapid diagnostic tests (RDT) are used; however, PCR analysis is limited in its performance (Hommelsheim, et al., 2014) and RDTs are less cost-effective in disease-endemic regions (Hawkes, Katsuva, and Masumbuko, 2009).
Thus, malaria detection could benefit from automation using deep learning.
### Deep learning for malaria detection
Manual diagnosis of blood smears is an intensive process that requires expertise in classifying and counting parasitized and uninfected cells. This process may not scale well, especially in regions where the right expertise is hard to find. Some advancements have been made in leveraging state-of-the-art image processing and analysis techniques to extract hand-engineered features and build machine learning-based classification models. However, these models do not scale well as more training data becomes available, and hand-engineering features takes a great deal of time and expertise.
Deep learning models, or more specifically convolutional neural networks (CNNs), have proven very effective in a wide variety of computer vision tasks. (If you would like additional background knowledge on CNNs, I recommend reading [CS231n Convolutional Neural Networks for Visual Recognition][9].) Briefly, the key layers in a CNN model include convolution and pooling layers, as shown in the following figure.
![A typical CNN architecture][10]
A typical CNN architecture
Convolution layers learn spatial hierarchical patterns from data, which are also translation-invariant, so they are able to learn different aspects of images. For example, the first convolution layer will learn small and local patterns, such as edges and corners, a second convolution layer will learn larger patterns based on the features from the first layers, and so on. This allows CNNs to automate feature engineering and learn effective features that generalize well on new data points. Pooling layers help with downsampling and dimension reduction.
Thus, CNNs help with automated and scalable feature engineering. Also, plugging in dense layers at the end of the model enables us to perform tasks like image classification. Automated malaria detection using deep learning models like CNNs could be very effective, cheap, and scalable, especially with the advent of transfer learning and pre-trained models that work quite well, even with constraints like less data.
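To make the layer mechanics concrete, here is a minimal **tf.keras** sketch (the filter count of 32 is purely illustrative) showing how a single convolution-plus-pooling pair transforms a 125x125 RGB input:

```
import tensorflow as tf

inp = tf.keras.layers.Input(shape=(125, 125, 3))
conv = tf.keras.layers.Conv2D(32, kernel_size=(3, 3),
                              activation='relu', padding='same')(inp)  # -> (125, 125, 32)
pool = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv)            # -> (62, 62, 32)
tf.keras.Model(inputs=inp, outputs=pool).summary()  # pooling halves the spatial dimensions
```

Stacking several such pairs, as we will do below, is what builds the hierarchy of increasingly abstract features.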
The Rajaraman, et al., paper leverages six pre-trained models on a dataset to obtain an impressive accuracy of 95.9% in detecting malaria vs. non-infected samples. Our focus is to try some simple CNN models from scratch and a couple of pre-trained models using transfer learning to see the results we can get on the same dataset. We will use open source tools and frameworks, including Python and TensorFlow, to build our models.
### The dataset
The data for our analysis comes from researchers at the Lister Hill National Center for Biomedical Communications (LHNCBC), part of the National Library of Medicine (NLM), who have carefully collected and annotated the [publicly available dataset][11] of healthy and infected blood smear images. These researchers have developed a mobile [application for malaria detection][12] that runs on a standard Android smartphone attached to a conventional light microscope. They used Giemsa-stained thin blood smear slides from 150 _P. falciparum_-infected and 50 healthy patients, collected and photographed at Chittagong Medical College Hospital, Bangladesh. The smartphone's built-in camera acquired images of slides for each microscopic field of view. The images were manually annotated by an expert slide reader at the Mahidol-Oxford Tropical Medicine Research Unit in Bangkok, Thailand.
Let's briefly check out the dataset's structure. First, I will install some basic dependencies (based on the operating system being used).
![Installing dependencies][13]
I am using a Debian-based system on the cloud with a GPU so I can run my models faster. To view the directory structure, we must install the tree dependency (if we don't have it) using **sudo apt install tree**.
![Installing the tree dependency][14]
We have two folders that contain images of cells, infected and healthy. We can get further details about the total number of images by entering:
```
import os
import glob
base_dir = os.path.join('./cell_images')
infected_dir = os.path.join(base_dir,'Parasitized')
healthy_dir = os.path.join(base_dir,'Uninfected')
infected_files = glob.glob(infected_dir+'/*.png')
healthy_files = glob.glob(healthy_dir+'/*.png')
len(infected_files), len(healthy_files)
# Output
(13779, 13779)
```
It looks like we have a balanced dataset with 13,779 malaria and 13,779 non-malaria (uninfected) cell images. Let's build a data frame from this, which we will use when we start building our datasets.
```
import numpy as np
import pandas as pd
np.random.seed(42)
files_df = pd.DataFrame({
'filename': infected_files + healthy_files,
'label': ['malaria'] * len(infected_files) + ['healthy'] * len(healthy_files)
}).sample(frac=1, random_state=42).reset_index(drop=True)
files_df.head()
```
![Datasets][15]
### Build and explore image datasets
To build deep learning models, we need training data, but we also need to test the model's performance on unseen data. We will use a roughly 60:10:30 split for the train, validation, and test datasets, respectively. We will leverage the train and validation datasets during training and check the performance of the model on the test dataset.
```
from sklearn.model_selection import train_test_split
from collections import Counter
train_files, test_files, train_labels, test_labels = train_test_split(files_df['filename'].values,
files_df['label'].values,
test_size=0.3, random_state=42)
train_files, val_files, train_labels, val_labels = train_test_split(train_files,
train_labels,
test_size=0.1, random_state=42)
print(train_files.shape, val_files.shape, test_files.shape)
print('Train:', Counter(train_labels), '\nVal:', Counter(val_labels), '\nTest:', Counter(test_labels))
# Output
(17361,) (1929,) (8268,)
Train: Counter({'healthy': 8734, 'malaria': 8627})
Val: Counter({'healthy': 970, 'malaria': 959})
Test: Counter({'malaria': 4193, 'healthy': 4075})
```
The images will not be of equal dimensions because blood smears and cell images vary based on the human, the test method, and the orientation of the photo. Let's get some summary statistics of our training dataset to determine the optimal image dimensions (remember, we don't touch the test dataset at all!).
```
import cv2
from concurrent import futures
import threading
def get_img_shape_parallel(idx, img, total_imgs):
if idx % 5000 == 0 or idx == (total_imgs - 1):
print('{}: working on img num: {}'.format(threading.current_thread().name,
idx))
return cv2.imread(img).shape
ex = futures.ThreadPoolExecutor(max_workers=None)
data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)]
print('Starting Img shape computation:')
train_img_dims_map = ex.map(get_img_shape_parallel,
[record[0] for record in data_inp],
[record[1] for record in data_inp],
[record[2] for record in data_inp])
train_img_dims = list(train_img_dims_map)
print('Min Dimensions:', np.min(train_img_dims, axis=0))
print('Avg Dimensions:', np.mean(train_img_dims, axis=0))
print('Median Dimensions:', np.median(train_img_dims, axis=0))
print('Max Dimensions:', np.max(train_img_dims, axis=0))
# Output
Starting Img shape computation:
ThreadPoolExecutor-0_0: working on img num: 0
ThreadPoolExecutor-0_17: working on img num: 5000
ThreadPoolExecutor-0_15: working on img num: 10000
ThreadPoolExecutor-0_1: working on img num: 15000
ThreadPoolExecutor-0_7: working on img num: 17360
Min Dimensions: [46 46 3]
Avg Dimensions: [132.77311215 132.45757733 3.]
Median Dimensions: [130. 130. 3.]
Max Dimensions: [385 394 3]
```
We apply parallel processing to speed up the image-read operations and, based on the summary statistics, we will resize each image to 125x125 pixels. Let's load up all of our images and resize them to these fixed dimensions.
```
IMG_DIMS = (125, 125)
def get_img_data_parallel(idx, img, total_imgs):
if idx % 5000 == 0 or idx == (total_imgs - 1):
print('{}: working on img num: {}'.format(threading.current_thread().name,
idx))
img = cv2.imread(img)
img = cv2.resize(img, dsize=IMG_DIMS,
interpolation=cv2.INTER_CUBIC)
img = np.array(img, dtype=np.float32)
return img
ex = futures.ThreadPoolExecutor(max_workers=None)
train_data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)]
val_data_inp = [(idx, img, len(val_files)) for idx, img in enumerate(val_files)]
test_data_inp = [(idx, img, len(test_files)) for idx, img in enumerate(test_files)]
print('Loading Train Images:')
train_data_map = ex.map(get_img_data_parallel,
[record[0] for record in train_data_inp],
[record[1] for record in train_data_inp],
[record[2] for record in train_data_inp])
train_data = np.array(list(train_data_map))
print('\nLoading Validation Images:')
val_data_map = ex.map(get_img_data_parallel,
[record[0] for record in val_data_inp],
[record[1] for record in val_data_inp],
[record[2] for record in val_data_inp])
val_data = np.array(list(val_data_map))
print('\nLoading Test Images:')
test_data_map = ex.map(get_img_data_parallel,
[record[0] for record in test_data_inp],
[record[1] for record in test_data_inp],
[record[2] for record in test_data_inp])
test_data = np.array(list(test_data_map))
train_data.shape, val_data.shape, test_data.shape
# Output
Loading Train Images:
ThreadPoolExecutor-1_0: working on img num: 0
ThreadPoolExecutor-1_12: working on img num: 5000
ThreadPoolExecutor-1_6: working on img num: 10000
ThreadPoolExecutor-1_10: working on img num: 15000
ThreadPoolExecutor-1_3: working on img num: 17360
Loading Validation Images:
ThreadPoolExecutor-1_13: working on img num: 0
ThreadPoolExecutor-1_18: working on img num: 1928
Loading Test Images:
ThreadPoolExecutor-1_5: working on img num: 0
ThreadPoolExecutor-1_19: working on img num: 5000
ThreadPoolExecutor-1_8: working on img num: 8267
((17361, 125, 125, 3), (1929, 125, 125, 3), (8268, 125, 125, 3))
```
We leverage parallel processing again to speed up computations pertaining to image load and resizing. Finally, we get our image tensors of the desired dimensions, as depicted in the preceding output. We can now view some sample cell images to get an idea of how our data looks.
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(1 , figsize = (8 , 8))
n = 0
for i in range(16):
n += 1
r = np.random.randint(0 , train_data.shape[0] , 1)
plt.subplot(4 , 4 , n)
plt.subplots_adjust(hspace = 0.5 , wspace = 0.5)
plt.imshow(train_data[r[0]]/255.)
plt.title('{}'.format(train_labels[r[0]]))
plt.xticks([]) , plt.yticks([])
```
![Malaria cell samples][16]
Based on these sample images, we can see some subtle differences between malaria and healthy cell images. We will make our deep learning models try to learn these patterns during model training.
Before we can start training our models, we must set up some basic configuration settings.
```
BATCH_SIZE = 64
NUM_CLASSES = 2
EPOCHS = 25
INPUT_SHAPE = (125, 125, 3)
train_imgs_scaled = train_data / 255.
val_imgs_scaled = val_data / 255.
# encode text category labels
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(train_labels)
train_labels_enc = le.transform(train_labels)
val_labels_enc = le.transform(val_labels)
print(train_labels[:6], train_labels_enc[:6])
# Output
['malaria' 'malaria' 'malaria' 'healthy' 'healthy' 'malaria'] [1 1 1 0 0 1]
```
We fix our image dimensions, batch size, and epochs and encode our categorical class labels. The alpha version of TensorFlow 2.0 was released in March 2019, and this exercise is the perfect excuse to try it out.
```
import tensorflow as tf
# Load the TensorBoard notebook extension (optional)
%load_ext tensorboard.notebook
tf.random.set_seed(42)
tf.__version__
# Output
'2.0.0-alpha0'
```
### Deep learning model training
In the model training phase, we will build three deep learning models, train them with our training data, and compare their performance using the validation data. We will then save these models and use them later in the model evaluation phase.
#### Model 1: CNN from scratch
Our first malaria detection model will build and train a basic CNN from scratch. First, let's define our model architecture.
```
inp = tf.keras.layers.Input(shape=INPUT_SHAPE)
conv1 = tf.keras.layers.Conv2D(32, kernel_size=(3, 3),
activation='relu', padding='same')(inp)
pool1 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = tf.keras.layers.Conv2D(64, kernel_size=(3, 3),
activation='relu', padding='same')(pool1)
pool2 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = tf.keras.layers.Conv2D(128, kernel_size=(3, 3),
activation='relu', padding='same')(pool2)
pool3 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv3)
flat = tf.keras.layers.Flatten()(pool3)
hidden1 = tf.keras.layers.Dense(512, activation='relu')(flat)
drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
model = tf.keras.Model(inputs=inp, outputs=out)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
# Output
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 125, 125, 3)] 0
_________________________________________________________________
conv2d (Conv2D) (None, 125, 125, 32) 896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 62, 62, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 62, 62, 64) 18496
_________________________________________________________________
...
...
_________________________________________________________________
dense_1 (Dense) (None, 512) 262656
_________________________________________________________________
dropout_1 (Dropout) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 1) 513
=================================================================
Total params: 15,102,529
Trainable params: 15,102,529
Non-trainable params: 0
_________________________________________________________________
```
Based on the architecture in this code, our CNN model has three convolution and pooling layers, followed by two dense layers, and dropouts for regularization. Let's train our model.
```
import datetime
logdir = os.path.join('/home/dipanzan_sarkar/projects/tensorboard_logs',
datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
patience=2, min_lr=0.000001)
callbacks = [reduce_lr, tensorboard_callback]
history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(val_imgs_scaled, val_labels_enc),
callbacks=callbacks,
verbose=1)
# Output
Train on 17361 samples, validate on 1929 samples
Epoch 1/25
17361/17361 [====] - 32s 2ms/sample - loss: 0.4373 - accuracy: 0.7814 - val_loss: 0.1834 - val_accuracy: 0.9393
Epoch 2/25
17361/17361 [====] - 30s 2ms/sample - loss: 0.1725 - accuracy: 0.9434 - val_loss: 0.1567 - val_accuracy: 0.9513
...
...
Epoch 24/25
17361/17361 [====] - 30s 2ms/sample - loss: 0.0036 - accuracy: 0.9993 - val_loss: 0.3693 - val_accuracy: 0.9565
Epoch 25/25
17361/17361 [====] - 30s 2ms/sample - loss: 0.0034 - accuracy: 0.9994 - val_loss: 0.3699 - val_accuracy: 0.9559
```
We get a validation accuracy of 95.6%, which is pretty good, although our model appears to be overfitting slightly (its training accuracy is 99.9%). We can get a clear perspective on this by plotting the training and validation accuracy and loss curves.
```
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Basic CNN Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
max_epoch = len(history.history['accuracy'])+1
epoch_list = list(range(1,max_epoch))
ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy')
ax1.set_xticks(np.arange(1, max_epoch, 5))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")
ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(1, max_epoch, 5))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")
```
![Learning curves for basic CNN][17]
Learning curves for basic CNN
We can see after the fifth epoch that things don't seem to improve a whole lot overall. Let's save this model for future evaluation.
```
model.save('basic_cnn.h5')
```
#### Deep transfer learning
Just like humans have an inherent capability to transfer knowledge across tasks, transfer learning enables us to utilize knowledge from previously learned tasks and apply it to newer, related ones, even in the context of machine learning or deep learning. If you are interested in doing a deep-dive on transfer learning, you can read my article "[A comprehensive hands-on guide to transfer learning with real-world applications in deep learning][18]" and my book [_Hands-On Transfer Learning with Python_][19].
![Ideas for deep transfer learning][20]
The idea we want to explore in this exercise is:
> Can we leverage a pre-trained deep learning model (which was trained on a large dataset, like ImageNet) to solve the problem of malaria detection by applying and transferring its knowledge in the context of our problem?
We will apply the two most popular strategies for deep transfer learning.
* Pre-trained model as a feature extractor
* Pre-trained model with fine-tuning
We will be using the pre-trained VGG-19 deep learning model, developed by the Visual Geometry Group (VGG) at the University of Oxford, for our experiments. A pre-trained model like VGG-19 is trained on a huge dataset ([ImageNet][21]) with a lot of diverse image categories. Therefore, the model should have learned a robust hierarchy of features that are spatial-, rotation-, and translation-invariant, as is typical of features learned by CNN models. Hence, the model, having learned a good representation of features for over a million images, can act as a good feature extractor for new images in computer vision problems like malaria detection. Let's discuss the VGG-19 model architecture before unleashing the power of transfer learning on our problem.
##### Understanding the VGG-19 model
The VGG-19 model is a 19-layer (convolution and fully connected) deep learning network built on the ImageNet database, which was developed for the purpose of image recognition and classification. This model was built by Karen Simonyan and Andrew Zisserman and is described in their paper "[Very deep convolutional networks for large-scale image recognition][22]." The architecture of the VGG-19 model is:
![VGG-19 Model Architecture][23]
You can see that we have a total of 16 convolution layers using 3x3 convolution filters, max pooling layers for downsampling, and two fully connected hidden layers of 4,096 units each, followed by a dense layer of 1,000 units, where each unit represents one of the image categories in the ImageNet database. We do not need the last three layers since we will be using our own fully connected dense layers to predict malaria. We are more concerned with the first five blocks so we can leverage the VGG model as an effective feature extractor.
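If you want to verify this structure yourself, a quick sketch (it downloads the ImageNet weights on first run) is to load the network with **tf.keras** and list its layers:

```
import tensorflow as tf

vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
                                        input_shape=(125, 125, 3))
for layer in vgg.layers:
    print(layer.name, layer.output_shape)  # five blocks of conv + pooling layers
```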
For the second model, we will use the VGG-19 network as a simple feature extractor by freezing all five convolution blocks to make sure their weights aren't updated after each epoch. For the final model, we will apply fine-tuning to the VGG model, unfreezing the last two blocks (Block 4 and Block 5) so that their weights are updated in each epoch (per batch of data) as we train our own model.
#### Model 2: Pre-trained model as a feature extractor
For building this model, we will leverage TensorFlow to load up the VGG-19 model and freeze the convolution blocks so we can use them as an image feature extractor. We will plug in our own dense layers at the end to perform the classification task.
```
vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
input_shape=INPUT_SHAPE)
vgg.trainable = False
# Freeze the layers
for layer in vgg.layers:
layer.trainable = False
base_vgg = vgg
base_out = base_vgg.output
pool_out = tf.keras.layers.Flatten()(base_out)
hidden1 = tf.keras.layers.Dense(512, activation='relu')(pool_out)
drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
model = tf.keras.Model(inputs=base_vgg.input, outputs=out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-4),
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
# Output
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 125, 125, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 125, 125, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 125, 125, 64) 36928
_________________________________________________________________
...
...
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 3, 3, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4608) 0
_________________________________________________________________
dense_3 (Dense) (None, 512) 2359808
_________________________________________________________________
dropout_2 (Dropout) (None, 512) 0
_________________________________________________________________
dense_4 (Dense) (None, 512) 262656
_________________________________________________________________
dropout_3 (Dropout) (None, 512) 0
_________________________________________________________________
dense_5 (Dense) (None, 1) 513
=================================================================
Total params: 22,647,361
Trainable params: 2,622,977
Non-trainable params: 20,024,384
_________________________________________________________________
```
It is evident from this output that we have a lot of layers in our model and we will be using the frozen layers of the VGG-19 model as feature extractors only. You can use the following code to verify how many layers in our model are indeed trainable and how many total layers are present in our network.
```
print("Total Layers:", len(model.layers))
print("Total trainable layers:",
sum([1 for l in model.layers if l.trainable]))
# Output
Total Layers: 28
Total trainable layers: 6
```
We will now train our model using configurations and callbacks similar to the ones we used for our previous model. Refer to [my GitHub repository][24] for the complete training code.
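A minimal sketch of that training call, assuming the same **callbacks** list, scaled image tensors, and encoded labels defined for Model 1 (this is illustrative, not the repository's exact code):

```
history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
                    batch_size=BATCH_SIZE,
                    epochs=EPOCHS,
                    validation_data=(val_imgs_scaled, val_labels_enc),
                    callbacks=callbacks,
                    verbose=1)
```

Training with these settings produces the following plots of the model's accuracy and loss.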
![Learning curves for frozen pre-trained CNN][25]
Learning curves for frozen pre-trained CNN
This shows that our model is not overfitting as much as our basic CNN model, but its performance is slightly worse. Let's save this model for future evaluation.
```
model.save('vgg_frozen.h5')
```
#### Model 3: Fine-tuned pre-trained model with image augmentation
In our final model, we will fine-tune the weights of the layers in the last two blocks of our pre-trained VGG-19 model. We will also introduce the concept of image augmentation. The idea behind image augmentation is exactly what the name suggests: we load existing images from our training dataset and apply some image transformation operations to them, such as rotation, shearing, translation, and zooming, to produce new, altered versions of existing images. Due to these random transformations, we don't get the same images each time. We will leverage an excellent utility called **ImageDataGenerator** in **tf.keras** that can help build image augmentors.
```
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255,
zoom_range=0.05,
rotation_range=25,
width_shift_range=0.05,
height_shift_range=0.05,
shear_range=0.05, horizontal_flip=True,
fill_mode='nearest')
val_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
# build image augmentation generators
train_generator = train_datagen.flow(train_data, train_labels_enc, batch_size=BATCH_SIZE, shuffle=True)
val_generator = val_datagen.flow(val_data, val_labels_enc, batch_size=BATCH_SIZE, shuffle=False)
```
We will not apply any transformations on our validation dataset (except for scaling the images, which is mandatory) since we will be using it to evaluate our model performance per epoch. For a detailed explanation of image augmentation in the context of transfer learning, feel free to check out my [article][18] cited above. Let's look at some sample results from a batch of image augmentation transforms.
```
img_id = 0
sample_generator = train_datagen.flow(train_data[img_id:img_id+1], train_labels[img_id:img_id+1],
batch_size=1)
sample = [next(sample_generator) for i in range(0,5)]
fig, ax = plt.subplots(1,5, figsize=(16, 6))
print('Labels:', [item[1][0] for item in sample])
l = [ax[i].imshow(sample[i][0][0]) for i in range(0,5)]
```
![Sample augmented images][26]
You can clearly see the slight variations of our images in the preceding output. We will now build our deep learning model, making sure the last two blocks of the VGG-19 model are trainable.
```
vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
input_shape=INPUT_SHAPE)
# Freeze the layers
vgg.trainable = True
set_trainable = False
for layer in vgg.layers:
if layer.name in ['block5_conv1', 'block4_conv1']:
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
base_vgg = vgg
base_out = base_vgg.output
pool_out = tf.keras.layers.Flatten()(base_out)
hidden1 = tf.keras.layers.Dense(512, activation='relu')(pool_out)
drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)
out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)
model = tf.keras.Model(inputs=base_vgg.input, outputs=out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-5),
loss='binary_crossentropy',
metrics=['accuracy'])
print("Total Layers:", len(model.layers))
print("Total trainable layers:", sum([1 for l in model.layers if l.trainable]))
# Output
Total Layers: 28
Total trainable layers: 16
```
We reduce the learning rate in our model since we don't want to make overly large weight updates to the pre-trained layers when fine-tuning. The model's training process will be slightly different since we are using data generators, so we will be leveraging the **fit_generator(…)** function.
```
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
patience=2, min_lr=0.000001)
callbacks = [reduce_lr, tensorboard_callback]
train_steps_per_epoch = train_generator.n // train_generator.batch_size
val_steps_per_epoch = val_generator.n // val_generator.batch_size
history = model.fit_generator(train_generator, steps_per_epoch=train_steps_per_epoch, epochs=EPOCHS,
validation_data=val_generator, validation_steps=val_steps_per_epoch,
verbose=1)
# Output
Epoch 1/25
271/271 [====] - 133s 489ms/step - loss: 0.2267 - accuracy: 0.9117 - val_loss: 0.1414 - val_accuracy: 0.9531
Epoch 2/25
271/271 [====] - 129s 475ms/step - loss: 0.1399 - accuracy: 0.9552 - val_loss: 0.1292 - val_accuracy: 0.9589
...
...
Epoch 24/25
271/271 [====] - 128s 473ms/step - loss: 0.0815 - accuracy: 0.9727 - val_loss: 0.1466 - val_accuracy: 0.9682
Epoch 25/25
271/271 [====] - 128s 473ms/step - loss: 0.0792 - accuracy: 0.9729 - val_loss: 0.1127 - val_accuracy: 0.9641
```
This looks to be our best model yet. It gives us a validation accuracy of almost 96.5% and, based on the training accuracy, it doesn't look like our model is overfitting as much as our first model. This can be verified with the following learning curves.
![Learning curves for fine-tuned pre-trained CNN][27]
Learning curves for fine-tuned pre-trained CNN
Let's save this model so we can use it for model evaluation on our test dataset.
```
model.save('vgg_finetuned.h5')
```
This completes our model training phase. We are now ready to test the performance of our models on the actual test dataset!
### Deep learning model performance evaluation
We will evaluate the three models we built in the training phase by making predictions with them on the data from our test dataset, because just validation is not enough! We have also built a nifty utility module called **model_evaluation_utils**, which we can use to evaluate the performance of our deep learning models with relevant classification metrics. The first step is to scale our test data.
```
test_imgs_scaled = test_data / 255.
test_imgs_scaled.shape, test_labels.shape
# Output
((8268, 125, 125, 3), (8268,))
```
The next step involves loading our saved deep learning models and making predictions on the test data.
```
# Load Saved Deep Learning Models
basic_cnn = tf.keras.models.load_model('./basic_cnn.h5')
vgg_frz = tf.keras.models.load_model('./vgg_frozen.h5')
vgg_ft = tf.keras.models.load_model('./vgg_finetuned.h5')
# Make Predictions on Test Data
basic_cnn_preds = basic_cnn.predict(test_imgs_scaled, batch_size=512)
vgg_frz_preds = vgg_frz.predict(test_imgs_scaled, batch_size=512)
vgg_ft_preds = vgg_ft.predict(test_imgs_scaled, batch_size=512)
basic_cnn_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
for pred in basic_cnn_preds.ravel()])
vgg_frz_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
for pred in vgg_frz_preds.ravel()])
vgg_ft_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
for pred in vgg_ft_preds.ravel()])
```
The final step is to leverage our **model_evaluation_utils** module and check the performance of each model with relevant classification metrics.
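If you don't have that module handy, a hypothetical minimal stand-in for its **get_metrics** helper can be built with scikit-learn (a sketch under that assumption, not the module's actual implementation):

```
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def get_metrics(true_labels, predicted_labels):
    # standard classification metrics for a quick model comparison
    return {'Accuracy': np.round(accuracy_score(true_labels, predicted_labels), 4),
            'Precision': np.round(precision_score(true_labels, predicted_labels, average='weighted'), 4),
            'Recall': np.round(recall_score(true_labels, predicted_labels, average='weighted'), 4),
            'F1 Score': np.round(f1_score(true_labels, predicted_labels, average='weighted'), 4)}
```

With the module (or a stand-in) in place, the comparison looks like this: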
```
import model_evaluation_utils as meu
import pandas as pd
basic_cnn_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=basic_cnn_pred_labels)
vgg_frz_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=vgg_frz_pred_labels)
vgg_ft_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=vgg_ft_pred_labels)
pd.DataFrame([basic_cnn_metrics, vgg_frz_metrics, vgg_ft_metrics],
index=['Basic CNN', 'VGG-19 Frozen', 'VGG-19 Fine-tuned'])
```
![Model accuracy][28]
It looks like our third model performs best on the test dataset, giving a model accuracy and an F1-score of 96%, which is pretty good and quite comparable to the more complex models mentioned in the research paper and articles we mentioned earlier.
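As a final illustration, here is a hedged sketch of how the saved fine-tuned model could classify a single new cell image. It assumes the label encoder **le** from earlier is still in scope, and **some_cell.png** is a hypothetical placeholder path:

```
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('vgg_finetuned.h5')       # saved earlier
img = cv2.imread('some_cell.png')                            # hypothetical input image
img = cv2.resize(img, dsize=(125, 125), interpolation=cv2.INTER_CUBIC)
img = np.array([img], dtype=np.float32) / 255.               # batch of one, scaled as in training
pred = model.predict(img)
print(le.inverse_transform([1 if pred[0][0] > 0.5 else 0]))  # 'malaria' or 'healthy'
```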
### Conclusion
Malaria detection is not an easy procedure, and the availability of qualified personnel around the globe is a serious concern in the diagnosis and treatment of cases. We looked at an interesting real-world medical imaging case study of malaria detection. Easy-to-build, open source techniques leveraging AI can give us state-of-the-art accuracy in detecting malaria, thus enabling AI for social good.
I encourage you to check out the articles and research papers mentioned in this article, without which it would have been impossible for me to conceptualize and write it. If you are interested in running or adopting these techniques, all the code used in this article is available on [my GitHub repository][24]. Remember to download the data from the [official website][11].
Let's hope for more adoption of open source AI capabilities in healthcare to make it less expensive and more accessible for everyone around the world!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/detecting-malaria-deep-learning
作者:[Dipanjan (DJ) Sarkar (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/djsarkar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourcedoctor.png?itok=fk79NwpC
[2]: https://opensource.com/sites/default/files/uploads/malaria1_python-tensorflow.png (Python and TensorFlow)
[3]: https://opensource.com/sites/default/files/uploads/malaria2_malaria-heat-map.png (Malaria heat map)
[4]: https://www.who.int/features/factfiles/malaria/en/
[5]: https://peerj.com/articles/4568/
[6]: https://blog.insightdatascience.com/https-blog-insightdatascience-com-malaria-hero-a47d3d5fc4bb
[7]: https://www.pyimagesearch.com/2018/12/03/deep-learning-and-medical-image-analysis-with-keras/
[8]: https://opensource.com/sites/default/files/uploads/malaria3_blood-smear.png (Blood smear workflow for Malaria detection)
[9]: http://cs231n.github.io/convolutional-networks/
[10]: https://opensource.com/sites/default/files/uploads/malaria4_cnn-architecture.png (A typical CNN architecture)
[11]: https://ceb.nlm.nih.gov/repositories/malaria-datasets/
[12]: https://www.ncbi.nlm.nih.gov/pubmed/29360430
[13]: https://opensource.com/sites/default/files/uploads/malaria5_dependencies.png (Installing dependencies)
[14]: https://opensource.com/sites/default/files/uploads/malaria6_tree-dependency.png (Installing the tree dependency)
[15]: https://opensource.com/sites/default/files/uploads/malaria7_dataset.png (Datasets)
[16]: https://opensource.com/sites/default/files/uploads/malaria8_cell-samples.png (Malaria cell samples)
[17]: https://opensource.com/sites/default/files/uploads/malaria9_learningcurves.png (Learning curves for basic CNN)
[18]: https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a
[19]: https://github.com/dipanjanS/hands-on-transfer-learning-with-python
[20]: https://opensource.com/sites/default/files/uploads/malaria10_transferideas.png (Ideas for deep transfer learning)
[21]: http://image-net.org/index
[22]: https://arxiv.org/pdf/1409.1556.pdf
[23]: https://opensource.com/sites/default/files/uploads/malaria11_vgg-19-model-architecture.png (VGG-19 Model Architecture)
[24]: https://nbviewer.jupyter.org/github/dipanjanS/data_science_for_all/tree/master/os_malaria_detection/
[25]: https://opensource.com/sites/default/files/uploads/malaria12_learningcurves.png (Learning curves for frozen pre-trained CNN)
[26]: https://opensource.com/sites/default/files/uploads/malaria13_sampleimages.png (Sample augmented images)
[27]: https://opensource.com/sites/default/files/uploads/malaria14_learningcurves.png (Learning curves for fine-tuned pre-trained CNN)
[28]: https://opensource.com/sites/default/files/uploads/malaria15_modelaccuracy.png (Model accuracy)
@ -0,0 +1,238 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install MySQL in Ubuntu Linux)
[#]: via: (https://itsfoss.com/install-mysql-ubuntu/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
How to Install MySQL in Ubuntu Linux
======
_**Brief: This tutorial teaches you to install MySQL in Ubuntu-based Linux distributions. You'll also learn how to verify your install and how to connect to MySQL for the first time.**_
**[MySQL][1]** is the quintessential database management system. It is used in many tech stacks, including the popular **[LAMP][2]** (Linux, Apache, MySQL, PHP) stack. It has proven its stability. Another thing that makes **MySQL** so great is that it is **open-source**.
**MySQL** uses **relational databases** (basically **tabular data**). It is really easy to store, organize, and access data this way. For managing data, **SQL** (**Structured Query Language**) is used.
In this article, I'll show you how to **install** and **use** MySQL 8.0 in Ubuntu 18.04. Let's get to it!
### Installing MySQL in Ubuntu
![][3]
I'll be covering two ways you can install **MySQL** in Ubuntu 18.04:
1. Install MySQL from the Ubuntu repositories. Very basic, not the latest version (5.7)
2. Install MySQL using the official repository. There is an extra step that you'll have to add to the process, but nothing to worry about. Also, you'll have the latest version (8.0)
When needed, I'll provide screenshots to guide you. For most of this guide, I'll be entering commands in the **terminal** (**default hotkey**: CTRL+ALT+T). Don't be scared of it!
#### Method 1. Installing MySQL from the Ubuntu repositories
First of all, make sure your repositories are updated by entering:
```
sudo apt update
```
Now, to install **MySQL 5.7** , simply type:
```
sudo apt install mysql-server -y
```
That's it! Simple and efficient.
#### Method 2. Installing MySQL using the official repository
Although this method has a few more steps, I'll go through them one by one and I'll try writing down clear notes.
The first step is browsing to the [download page][4] of the official MySQL website.
![][5]
Here, go down to the **download link** for the **DEB Package**.
![][6]
Scroll down past the info about Oracle Web and right-click on **No thanks, just start my download.** Select **Copy link location**.
Now go back to the terminal. We'll [use the **curl** command][7] to download the package:
```
curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb
```
**<https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb>** is the link I copied from the website. It might be different based on the current version of MySQL. Let's use **dpkg** to start installing MySQL:
```
sudo dpkg -i mysql-apt-config*
```
Update your repositories:
```
sudo apt update
```
To actually install MySQL, we'll use the same command as in the first method:
```
sudo apt install mysql-server -y
```
Doing so will open a prompt in your terminal for **package configuration**. Use the **down arrow** to select the **Ok** option.
![][8]
Press **Enter**. This should prompt you to enter a **password**. You are basically setting the root password for MySQL. Don't confuse it with the [root password of the Ubuntu system][9].
![][10]
Type in a password and press **Tab** to select **<Ok>**. Press **Enter**. You'll now have to **re-enter** the **password**. After doing so, press **Tab** again to select **<Ok>**. Press **Enter**.
![][11]
Some **information** on configuring MySQL Server will be presented. Press **Tab** to select **<Ok>** and **Enter** again:
![][12]
Here you need to choose a **default authentication plugin**. Make sure **Use Strong Password Encryption** is selected. Press **Tab** and then **Enter**.
Thats it! You have successfully installed MySQL.
#### Verify your MySQL installation
To **verify** that MySQL installed correctly, use:
```
sudo systemctl status mysql.service
```
This will show some information about the service:
![][13]
You should see **Active: active (running)** in there somewhere. If you don't, use the following command to start the **service**:
```
sudo systemctl start mysql.service
```
#### Configuring/Securing MySQL
For a new install, you should run the provided command for security-related updates. That's:
```
sudo mysql_secure_installation
```
Doing so will first of all ask you if you want to use the **VALIDATE PASSWORD COMPONENT**. If you want to use it, you'll have to select a minimum password strength (**0 = Low, 1 = Medium, 2 = High**). You won't be able to input any password that doesn't respect the selected rules. If you don't have the habit of using strong passwords (you should!), this could come in handy. If you think it might help, type in **y** or **Y** and press **Enter**, then choose a **strength level** for your password and input the one you want to use. If successful, you'll continue the **securing** process; otherwise, you'll have to re-enter a password.
If, however, you do not want this feature (I won't), just press **Enter** or **any other key** to skip using it.
For the other options, I suggest **enabling** them (typing in **y** or **Y** and pressing **Enter** for each of them). They are (in this order): **remove anonymous user, disallow root login remotely, remove test database and access to it, reload privilege tables now**.
#### Connecting to & Disconnecting from the MySQL Server
To be able to run SQL queries, you'll first have to connect to the server using MySQL and use the MySQL prompt. The command for doing this is:
```
mysql -h host_name -u user -p
```
* **-h** is used to specify a **host name** (if the server is located on another machine; if it isn't, just omit it)
* **-u** mentions the **user**
* **-p** specifies that you want to input a **password**.
Although not recommended (for safety reasons), you can enter the password directly in the command by typing it in right after **-p**. For example, if the password for **test_user** is **1234** and you are trying to connect on the machine you are using, you could use:
```
mysql -u test_user -p1234
```
If you successfully entered the required parameters, you'll be greeted by the **MySQL shell prompt** (**mysql>**):
![][14]
To **disconnect** from the server and **leave** the mysql prompt, type:
```
QUIT
```
Typing **quit** (MySQL is case insensitive) or **\q** will also work. Press **Enter** to exit.
You can also output info about the **version** with a simple command:
```
sudo mysqladmin -u root version -p
```
If you want to see a **list of options** , use:
```
mysql --help
```
#### Uninstalling MySQL
If you decide that you want to use a newer release or simply want to stop using MySQL, here is how to remove it.
First, disable the service:
```
sudo systemctl stop mysql.service && sudo systemctl disable mysql.service
```
Make sure you have backed up your databases in case you want to use them later on. You can uninstall MySQL by running:
```
sudo apt purge mysql*
```
To clean up dependencies:
```
sudo apt autoremove
```
#### Wrapping Up
In this article, I've covered **installing MySQL** in Ubuntu Linux. I'd be glad if this guide helps struggling users and beginners.
Tell us in the comments if you found this post to be a useful resource. What do you use MySQL for? We're eager to receive any feedback, impressions, or suggestions. Thanks for reading, and don't hesitate to experiment with this incredible tool!
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-mysql-ubuntu/
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://www.mysql.com/
[2]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-mysql-ubuntu.png?resize=800%2C450&ssl=1
[4]: https://dev.mysql.com/downloads/repo/apt/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_apt_download_page.jpg?fit=800%2C280&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_deb_download_link.jpg?fit=800%2C507&ssl=1
[7]: https://linuxhandbook.com/curl-command-examples/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_package_configuration_ok.jpg?fit=800%2C587&ssl=1
[9]: https://itsfoss.com/change-password-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_enter_password.jpg?fit=800%2C583&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_information_on_configuring.jpg?fit=800%2C581&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_default_authentication_plugin.jpg?fit=800%2C586&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_service_information.jpg?fit=800%2C402&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_shell_prompt-2.jpg?fit=800%2C423&ssl=1
@ -0,0 +1,531 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Inter-process communication in Linux: Using pipes and message queues)
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-channels)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
Inter-process communication in Linux: Using pipes and message queues
======
Learn how processes synchronize with each other in Linux.
![Chat bubbles][1]
This is the second article in a series about [interprocess communication][2] (IPC) in Linux. The [first article][3] focused on IPC through shared storage: shared files and shared memory segments. This article turns to pipes, which are channels that connect processes for communication. A channel has a _write end_ for writing bytes, and a _read end_ for reading these bytes in FIFO (first in, first out) order. In typical use, one process writes to the channel, and a different process reads from this same channel. The bytes themselves might represent anything: numbers, employee records, digital movies, and so on.
Pipes come in two flavors, named and unnamed, and can be used either interactively from the command line or within programs; examples are forthcoming. This article also looks at memory queues, which have fallen out of fashion—but undeservedly so.
The code examples in the first article acknowledged the threat of race conditions (either file-based or memory-based) in IPC that uses shared storage. The question naturally arises about safe concurrency for the channel-based IPC, which will be covered in this article. The code examples for pipes and memory queues use APIs with the POSIX stamp of approval, and a core goal of the POSIX standards is thread-safety.
Consider the [man pages for the **mq_open**][4] function, which belongs to the memory queue API. These pages include a section on [Attributes][5] with this small table:
Interface | Attribute | Value
---|---|---
mq_open() | Thread safety | MT-Safe
The value **MT-Safe** (with **MT** for multi-threaded) means that the **mq_open** function is thread-safe, which in turn implies process-safe: A process executes in precisely the sense that one of its threads executes, and if a race condition cannot arise among threads in the _same_ process, such a condition cannot arise among threads in different processes. The **MT-Safe** attribute assures that a race condition does not arise in invocations of **mq_open**. In general, channel-based IPC is concurrent-safe, although a cautionary note is raised in the examples that follow.
### Unnamed pipes
Let's start with a contrived command line example that shows how unnamed pipes work. On all modern systems, the vertical bar **|** represents an unnamed pipe at the command line. Assume **%** is the command line prompt, and consider this command:
```
% sleep 5 | echo "Hello, world!" ## writer to the left of |, reader to the right
```
The _sleep_ and _echo_ utilities execute as separate processes, and the unnamed pipe allows them to communicate. However, the example is contrived in that no communication occurs. The greeting _Hello, world!_ appears on the screen; then, after about five seconds, the command line prompt returns, indicating that both the _sleep_ and _echo_ processes have exited. What's going on?
In the vertical-bar syntax from the command line, the process to the left ( _sleep_ ) is the writer, and the process to the right ( _echo_ ) is the reader. By default, the reader blocks until there are bytes to read from the channel, and the writer—after writing its bytes—finishes up by sending an end-of-stream marker. (Even if the writer terminates prematurely, an end-of-stream marker is sent to the reader.) The unnamed pipe persists until both the writer and the reader terminate.
In the contrived example, the _sleep_ process does not write any bytes to the channel but does terminate after about five seconds, which sends an end-of-stream marker to the channel. In the meantime, the _echo_ process immediately writes the greeting to the standard output (the screen) because this process does not read any bytes from the channel, so it does no waiting. Once the _sleep_ and _echo_ processes terminate, the unnamed pipe—not used at all for communication—goes away and the command line prompt returns.
Here is a more useful example using two unnamed pipes. Suppose that the file _test.dat_ looks like this:
```
this
is
the
way
the
world
ends
```
The command:
```
% cat test.dat | sort | uniq
```
pipes the output from the _cat_ (concatenate) process into the _sort_ process to produce sorted output, and then pipes the sorted output into the _uniq_ process to eliminate duplicate records (in this case, the two occurrences of **the** reduce to one):
```
ends
is
the
this
way
world
```
The scene now is set for a program with two processes that communicate through an unnamed pipe.
#### Example 1. Two processes communicating through an unnamed pipe.
```
#include <sys/wait.h> /* wait */
#include <stdio.h>
#include <stdlib.h>   /* exit functions */
#include <unistd.h>   /* read, write, pipe, _exit */
#include <string.h>

#define ReadEnd  0
#define WriteEnd 1

void report_and_exit(const char* msg) {
  perror(msg);
  exit(-1);      /** failure **/
}

int main() {
  int pipeFDs[2]; /* two file descriptors */
  char buf;       /* 1-byte buffer */
  const char* msg = "Nature's first green is gold\n"; /* bytes to write */

  if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");

  pid_t cpid = fork();                   /* fork a child process */
  if (cpid < 0) report_and_exit("fork"); /* check for failure */

  if (0 == cpid) { /*** child ***/
    close(pipeFDs[WriteEnd]);            /* child reads, doesn't write */

    while (read(pipeFDs[ReadEnd], &buf, 1) > 0) /* read until end of byte stream */
      write(STDOUT_FILENO, &buf, sizeof(buf));  /* echo to the standard output */

    close(pipeFDs[ReadEnd]);             /* close the ReadEnd: all done */
    _exit(0);                            /* exit and notify parent at once */
  }
  else {           /*** parent ***/
    close(pipeFDs[ReadEnd]);             /* parent writes, doesn't read */

    write(pipeFDs[WriteEnd], msg, strlen(msg)); /* write the bytes to the pipe */
    close(pipeFDs[WriteEnd]);            /* done writing: generate eof */

    wait(NULL);                          /* wait for child to exit */
    exit(0);                             /* exit normally */
  }
  return 0;
}
```
The _pipeUN_ program above uses the system function **fork** to create a process. Although the program has but a single source file, multi-processing occurs during (successful) execution. Here are the particulars in a quick review of how the library function **fork** works:
* The **fork** function, called in the _parent_ process, returns **-1** to the parent in case of failure. In the _pipeUN_ example, the call is:

  ```
  pid_t cpid = fork(); /* called in parent */
  ```

  The returned value is stored, in this example, in the variable **cpid** of integer type **pid_t**. (Every process has its own _process ID_, a non-negative integer that identifies the process.) Forking a new process could fail for several reasons, including a full _process table_, a structure that the system maintains to track processes. Zombie processes, clarified shortly, can cause a process table to fill if these are not harvested.
* If the **fork** call succeeds, it thereby spawns (creates) a new child process, returning one value to the parent but a different value to the child. Both the parent and the child process execute the _same_ code that follows the call to **fork**. (The child inherits copies of all the variables declared so far in the parent.) In particular, a successful call to **fork** returns:
  * Zero to the child process
  * The child's process ID to the parent
* An _if/else_ or equivalent construct typically is used after a successful **fork** call to segregate code meant for the parent from code meant for the child. In this example, the construct is:

  ```
  if (0 == cpid) { /*** child ***/
    ...
  }
  else {           /*** parent ***/
    ...
  }
  ```
If forking a child succeeds, the _pipeUN_ program proceeds as follows. There is an integer array:
```
int pipeFDs[2]; /* two file descriptors */
```
to hold two file descriptors, one for writing to the pipe and another for reading from the pipe. (The array element **pipeFDs[0]** is the file descriptor for the read end, and the array element **pipeFDs[1]** is the file descriptor for the write end.) A successful call to the system **pipe** function, made immediately before the call to **fork** , populates the array with the two file descriptors:
```
if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
```
The parent and the child now have copies of both file descriptors, but the _separation of concerns_ pattern means that each process requires exactly one of the descriptors. In this example, the parent does the writing and the child does the reading, although the roles could be reversed. The first statement in the child _if_ -clause code, therefore, closes the pipe's write end:
```
close(pipeFDs[WriteEnd]); /* called in child code */
```
and the first statement in the parent _else_ -clause code closes the pipe's read end:
```
close(pipeFDs[ReadEnd]); /* called in parent code */
```
The parent then writes some bytes (ASCII codes) to the unnamed pipe, and the child reads these and echoes them to the standard output.
One more aspect of the program needs clarification: the call to the **wait** function in the parent code. Once spawned, a child process is largely independent of its parent, as even the short _pipeUN_ program illustrates. The child can execute arbitrary code that may have nothing to do with the parent. However, the system does notify the parent through a signal—if and when the child terminates.
What if the parent terminates before the child? In this case, unless precautions are taken, the child becomes and remains a _zombie_ process with an entry in the process table. The precautions are of two broad types. One precaution is to have the parent notify the system that the parent has no interest in the child's termination:
```
signal(SIGCHLD, SIG_IGN); /* in parent: ignore notification */
```
A second approach is to have the parent execute a **wait** on the child's termination, thereby ensuring that the parent outlives the child. This second approach is used in the _pipeUN_ program, where the parent code has this call:
```
wait(NULL); /* called in parent */
```
This call to **wait** means _wait until the termination of any child occurs_ , and in the _pipeUN_ program, there is only one child process. (The **NULL** argument could be replaced with the address of an integer variable to hold the child's exit status.) There is a more flexible **waitpid** function for fine-grained control, e.g., for specifying a particular child process among several.
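For example, a minimal sketch (a fragment that assumes the includes and the **cpid** variable from the _pipeUN_ program) of waiting on one specific child and inspecting its exit status:

```
int status;
waitpid(cpid, &status, 0);    /* wait for this particular child */
if (WIFEXITED(status))        /* did the child exit normally? */
  printf("child exit status: %d\n", WEXITSTATUS(status));
```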
The _pipeUN_ program takes another precaution. When the parent is done waiting, the parent terminates with the call to the regular **exit** function. By contrast, the child terminates with a call to the **_exit** variant, which fast-tracks notification of termination. In effect, the child is telling the system to notify the parent ASAP that the child has terminated.
If two processes write to the same unnamed pipe, can the bytes be interleaved? For example, if process P1 writes:
```
foo bar
```
to a pipe and process P2 concurrently writes:
```
baz baz
```
to the same pipe, it seems that the pipe contents might be something arbitrary, such as:
```
baz foo baz bar
```
The POSIX standard ensures that writes are not interleaved so long as no write exceeds **PIPE_BUF** bytes. On Linux systems, **PIPE_BUF** is 4,096 bytes in size. My preference with pipes is to have a single writer and a single reader, thereby sidestepping the issue.
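You can confirm the limit on your own system with a few lines of C (a minimal sketch):

```
#include <limits.h>
#include <stdio.h>

int main() {
  printf("PIPE_BUF: %d bytes\n", PIPE_BUF); /* atomic-write limit for pipes */
  return 0;
}
```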
### Named pipes
An unnamed pipe has no backing file: the system maintains an in-memory buffer to transfer bytes from the writer to the reader. Once the writer and reader terminate, the buffer is reclaimed, so the unnamed pipe goes away. By contrast, a named pipe has a backing file and a distinct API.
Let's look at another command line example to get the gist of named pipes. Here are the steps:
* Open two terminals. The working directory should be the same for both.
* In one of the terminals, enter these two commands (the prompt again is **%**, and my comments start with **##**):

  ```
  % mkfifo tester  ## creates a backing file named tester
  % cat tester     ## type the pipe's contents to stdout
  ```

  At the beginning, nothing should appear in the terminal because nothing has been written yet to the named pipe.
* In the second terminal, enter the command:

  ```
  % cat > tester  ## redirect keyboard input to the pipe
  hello, world!   ## then hit Return key
  bye, bye        ## ditto
  <Control-C>     ## terminate session with a Control-C
  ```

  Whatever is typed into this terminal is echoed in the other. Once **Ctrl+C** is entered, the regular command line prompt returns in both terminals: the pipe has been closed.
* Clean up by removing the file that implements the named pipe:

  ```
  % unlink tester
  ```
As the utility's name _mkfifo_ implies, a named pipe also is called a FIFO because the first byte in is the first byte out, and so on. There is a library function named **mkfifo** that creates a named pipe in programs and is used in the next example, which consists of two processes: one writes to the named pipe and the other reads from this pipe.
#### Example 2. The _fifoWriter_ program
```
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#define MaxLoops 12000 /* outer loop */
#define ChunkSize 16 /* how many written at a time */
#define IntsPerChunk 4 /* four 4-byte ints per chunk */
#define MaxZs 250 /* max microseconds to sleep */
int main() {
const char* pipeName = "./fifoChannel";
mkfifo(pipeName, 0666); /* read/write for user/group/others */
int fd = open(pipeName, O_WRONLY); /* open as write-only */
if (fd < 0) return -1; /* can't go on */
int i;
for (i = 0; i < MaxLoops; i++) { /* write MaxLoops times */
int j;
for (j = 0; j < ChunkSize; j++) { /* each time, write ChunkSize chunks */
int k;
int chunk[IntsPerChunk];
for (k = 0; k < IntsPerChunk; k++)
chunk[k] = rand();
write(fd, chunk, sizeof(chunk));
}
usleep((rand() % MaxZs) + 1); /* pause a bit for realism */
}
close(fd); /* close pipe: generates an end-of-stream marker */
unlink(pipeName); /* unlink from the implementing file */
[printf][10]("%i ints sent to the pipe.\n", MaxLoops * ChunkSize * IntsPerChunk);
return 0;
}
```
The _fifoWriter_ program above can be summarized as follows:
  * The program creates a named pipe for writing:
```
mkfifo(pipeName, 0666); /* read/write perms for user/group/others */
int fd = open(pipeName, O_WRONLY);
```
where **pipeName** is the name of the backing file passed to **mkfifo** as the first argument. The named pipe then is opened with the by-now familiar call to the **open** function, which returns a file descriptor.
* For a touch of realism, the _fifoWriter_ does not write all the data at once, but instead writes a chunk, sleeps a random number of microseconds, and so on. In total, 768,000 4-byte integer values are written to the named pipe.
  * After closing the named pipe, the _fifoWriter_ also unlinks the file:
```
close(fd); /* close pipe: generates end-of-stream marker */
unlink(pipeName); /* unlink from the implementing file */
```
The system reclaims the backing file once every process connected to the pipe has performed the unlink operation. In this example, there are only two such processes: the _fifoWriter_ and the _fifoReader_, both of which do an _unlink_ operation.
The two programs should be executed in different terminals with the same working directory. However, the _fifoWriter_ should be started before the _fifoReader_ , as the former creates the pipe. The _fifoReader_ then accesses the already created named pipe.
#### Example 3. The _fifoReader_ program
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
unsigned is_prime(unsigned n) { /* not pretty, but efficient */
if (n <= 3) return n > 1;
if (0 == (n % 2) || 0 == (n % 3)) return 0;
unsigned i;
for (i = 5; (i * i) <= n; i += 6)
if (0 == (n % i) || 0 == (n % (i + 2))) return 0;
return 1; /* found a prime! */
}
int main() {
const char* file = "./fifoChannel";
int fd = open(file, O_RDONLY);
if (fd < 0) return -1; /* no point in continuing */
unsigned total = 0, primes_count = 0;
while (1) {
int next;
ssize_t count = read(fd, &next, sizeof(int));
if (0 == count) break; /* end of stream */
else if (count == sizeof(int)) { /* read a 4-byte int value */
total++;
if (is_prime(next)) primes_count++;
}
}
close(fd); /* close pipe from read end */
unlink(file); /* unlink from the underlying file */
[printf][10]("Received ints: %u, primes: %u\n", total, primes_count);
return 0;
}
```
The _fifoReader_ program above can be summarized as follows:
  * Because the _fifoWriter_ creates the named pipe, the _fifoReader_ needs only the standard call **open** to access the pipe through the backing file:
```
const char* file = "./fifoChannel";
int fd = open(file, O_RDONLY);
```
The file opens as read-only.
  * The program then goes into a potentially infinite loop, trying to read a 4-byte chunk on each iteration. The **read** call:
```
ssize_t count = read(fd, &next, sizeof(int));
```
returns 0 to indicate end-of-stream, in which case the _fifoReader_ breaks out of the loop, closes the named pipe, and unlinks the backing file before terminating.
* After reading a 4-byte integer, the _fifoReader_ checks whether the number is a prime. This represents the business logic that a production-grade reader might perform on the received bytes. On a sample run, there were 37,682 primes among the 768,000 integers received.
On repeated sample runs, the _fifoReader_ successfully read all of the bytes that the _fifoWriter_ wrote. This is not surprising. The two processes execute on the same host, taking network issues out of the equation. Named pipes are a highly reliable and efficient IPC mechanism and, therefore, in wide use.
Here is the output from the two programs, each launched from a separate terminal but with the same working directory:
```
% ./fifoWriter
768000 ints sent to the pipe.
###
% ./fifoReader
Received ints: 768000, primes: 37682
```
### Message queues
Pipes have strict FIFO behavior: the first byte written is the first byte read, the second byte written is the second byte read, and so forth. Message queues can behave in the same way but are flexible enough that byte chunks can be retrieved out of FIFO order.
As the name suggests, a message queue is a sequence of messages, each of which has two parts:
* The payload, which is an array of bytes ( **char** in C)
* A type, given as a positive integer value; types categorize messages for flexible retrieval
Consider the following depiction of a message queue, with each message labeled with an integer type:
```
+-+ +-+ +-+ +-+
sender--->|3|--->|2|--->|2|--->|1|--->receiver
+-+ +-+ +-+ +-+
```
Of the four messages shown, the one labeled 1 is at the front, i.e., closest to the receiver. Next come two messages with label 2, and finally, a message labeled 3 at the back. If strict FIFO behavior were in play, then the messages would be received in the order 1-2-2-3. However, the message queue allows other retrieval orders. For example, the messages could be retrieved by the receiver in the order 3-2-1-2.
The _mqueue_ example consists of two programs, the _sender_ that writes to the message queue and the _receiver_ that reads from this queue. Both programs include the header file _queue.h_ shown below:
#### Example 4. The header file _queue.h_
```
#define ProjectId 123
#define PathName "queue.h" /* any existing, accessible file would do */
#define MsgLen 4
#define MsgCount 6
typedef struct {
long type; /* must be of type long */
char payload[MsgLen + 1]; /* bytes in the message */
} queuedMessage;
```
The header file defines a structure type named **queuedMessage** , with **payload** (byte array) and **type** (integer) fields. This file also defines symbolic constants (the **#define** statements), the first two of which are used to generate a key that, in turn, is used to get a message queue ID. The **ProjectId** can be any positive integer value, and the **PathName** must be an existing, accessible file—in this case, the file _queue.h_. The setup statements in both the _sender_ and the _receiver_ programs are:
```
key_t key = ftok(PathName, ProjectId); /* generate key */
int qid = msgget(key, 0666 | IPC_CREAT); /* use key to get queue id */
```
The ID **qid** is, in effect, the counterpart of a file descriptor for message queues.
#### Example 5. The message _sender_ program
```
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdlib.h>
#include <string.h>
#include "queue.h"
void report_and_exit(const char* msg) {
perror(msg);
exit(-1); /* EXIT_FAILURE */
}
int main() {
key_t key = ftok(PathName, ProjectId);
if (key < 0) report_and_exit("couldn't get key...");
int qid = msgget(key, 0666 | IPC_CREAT);
if (qid < 0) report_and_exit("couldn't get queue id...");
char* payloads[] = {"msg1", "msg2", "msg3", "msg4", "msg5", "msg6"};
int types[] = {1, 1, 2, 2, 3, 3}; /* each must be > 0 */
int i;
for (i = 0; i < MsgCount; i++) {
/* build the message */
queuedMessage msg;
msg.type = types[i];
strcpy(msg.payload, payloads[i]);
/* send the message */
msgsnd(qid, &msg, sizeof(msg.payload), IPC_NOWAIT); /* don't block */
printf("%s sent as type %i\n", msg.payload, (int) msg.type);
}
return 0;
}
```
The _sender_ program above sends out six messages, two each of a specified type: the first two are of type 1, the next two of type 2, and the last two of type 3. The sending statement:
```
msgsnd(qid, &msg, sizeof(msg.payload), IPC_NOWAIT);
```
is configured to be non-blocking (the **IPC_NOWAIT** flag) because the messages are so small. The only danger is that a full queue, unlikely in this example, would result in a sending failure; the sketch below shows how to detect that case. The _receiver_ program below also receives messages using the **IPC_NOWAIT** flag.
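Specifically, when the queue is full and **IPC_NOWAIT** is given, **msgsnd** returns -1 with **errno** set to **EAGAIN**. Here is a hedged sketch of the check, written as a drop-in replacement for the sending statement in Example 5 (it assumes the **errno.h** and **stdio.h** headers are included):
```
if (msgsnd(qid, &msg, sizeof(msg.payload), IPC_NOWAIT) < 0) {
  if (EAGAIN == errno)
    fprintf(stderr, "queue full: %s dropped\n", msg.payload); /* full queue with IPC_NOWAIT */
  else
    perror("msgsnd"); /* some other failure */
}
```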
#### Example 6. The message _receiver_ program
```
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdlib.h>
#include "queue.h"
void report_and_exit(const char* msg) {
perror(msg);
exit(-1); /* EXIT_FAILURE */
}
int main() {
key_t key = ftok(PathName, ProjectId); /* key to identify the queue */
if (key < 0) report_and_exit("key not gotten...");
int qid = msgget(key, 0666 | IPC_CREAT); /* access if created already */
if (qid < 0) report_and_exit("no access to queue...");
int types[] = {3, 1, 2, 1, 3, 2}; /* different than in sender */
int i;
for (i = 0; i < MsgCount; i++) {
queuedMessage msg; /* defined in queue.h */
if (msgrcv(qid, &msg, sizeof(msg.payload), types[i], MSG_NOERROR | IPC_NOWAIT) < 0)
puts("msgrcv trouble...");
printf("%s received as type %i\n", msg.payload, (int) msg.type);
}
/** remove the queue **/
if (msgctl(qid, IPC_RMID, NULL) < 0) /* NULL = 'no flags' */
report_and_exit("trouble removing queue...");
return 0;
}
```
The _receiver_ program does not create the message queue, although the API suggests as much. In the _receiver_ , the call:
```
int qid = msgget(key, 0666 | IPC_CREAT);
```
is misleading because of the **IPC_CREAT** flag, but this flag really means _create if needed, otherwise access_. The _sender_ program calls **msgsnd** to send messages, whereas the _receiver_ calls **msgrcv** to retrieve them. In this example, the _sender_ sends the messages in the order 1-1-2-2-3-3, but the _receiver_ then retrieves them in the order 3-1-2-1-3-2, showing that message queues are not bound to strict FIFO behavior:
```
% ./sender
msg1 sent as type 1
msg2 sent as type 1
msg3 sent as type 2
msg4 sent as type 2
msg5 sent as type 3
msg6 sent as type 3
% ./receiver
msg5 received as type 3
msg1 received as type 1
msg3 received as type 2
msg2 received as type 1
msg6 received as type 3
msg4 received as type 2
```
The output above shows that the _sender_ and the _receiver_ can be launched from the same terminal. The output also shows that the message queue persists even after the _sender_ process creates the queue, writes to it, and exits. The queue goes away only after the _receiver_ process explicitly removes it with the call to **msgctl** :
```
if (msgctl(qid, IPC_RMID, NULL) < 0) /* remove queue */
```
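One last detail on retrieval: the fourth argument to **msgrcv** selects which message comes out next. The _receiver_ above always passes a positive type, but the argument has three cases, summarized in this short reference sketch (the comments paraphrase the **msgrcv** man page):
```
msgrcv(qid, &msg, sizeof(msg.payload), 0, 0);  /* type 0: first message in the queue (pure FIFO) */
msgrcv(qid, &msg, sizeof(msg.payload), 2, 0);  /* type > 0: first message of exactly type 2 */
msgrcv(qid, &msg, sizeof(msg.payload), -2, 0); /* type < 0: first message with the lowest type <= 2 */
```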
### Wrapping up
The pipes and message queue APIs are fundamentally _unidirectional_ : one process writes and another reads. There are implementations of bidirectional named pipes, but my two cents is that this IPC mechanism is at its best when it is simplest. As noted earlier, message queues have fallen in popularity—but without good reason; these queues are yet another tool in the IPC toolbox. Part 3 completes this quick tour of the IPC toolbox with code examples of IPC through sockets and signals.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/interprocess-communication-linux-channels
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: https://en.wikipedia.org/wiki/Inter-process_communication
[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
[4]: http://man7.org/linux/man-pages/man2/mq_open.2.html
[5]: http://man7.org/linux/man-pages/man2/mq_open.2.html#ATTRIBUTES
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/rand.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html

View File

@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Foundation Training Courses Sale & Discount Coupon)
[#]: via: (https://itsfoss.com/linux-foundation-discount-coupon/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Linux Foundation Training Courses Sale & Discount Coupon
======
The Linux Foundation is the non-profit organization that employs Linux creator Linus Torvalds and manages the development of the Linux kernel. The Linux Foundation aims to promote the adoption of Linux and open source in the industry, and it is doing a great job in this regard.
Open source jobs are in demand, and no one knows this better than the Linux Foundation, the official Linux organization. This is why the Linux Foundation provides a number of training and certification courses on Linux-related technology. You can browse the [entire course offering on the Linux Foundation's training webpage][1].
### Linux Foundation Latest Offer: 40% off on all courses [Limited Time]
At present, the Linux Foundation has some great offers for sysadmin, DevOps, and cloud professionals.
It is offering a massive discount of 40% on the entire range of its e-learning courses and certification bundles, including the growing catalog of cloud and DevOps e-learning courses like Kubernetes!
Just use coupon code **APRIL40** at checkout to get your discount.
[Linux Foundation 40% Off (Coupon Code APRIL40)][2]
_Do note that this offer is valid till 22nd April 2019 only._
### Linux Foundation Discount Coupon [Valid all the time]
You can get a 16% off on any training or certification course provided by The Linux Foundation at any given time. All you have to do is to use the coupon code **FOSS16** at the checkout page.
Note that it might not be combinable with the limited-time offer above.
[Get 16% off on Linux Foundation Courses with FOSS16 Code][1]
This article contains affiliate links. Please read our [affiliate policy][3].
#### Should you get certified?
![][4]
This is the question I have been asked regularly. Are Linux certifications worth it? The short answer is yes.
As per the [open source jobs report in 2018][5], over 80% of open source professionals said that certifications helped with their careers. Certifications enable you to demonstrate technical knowledge to potential employers and thus certifications make you more employable in general.
Almost half of the hiring managers said that employing certified open source professionals is a priority for them.
Certifications from a reputed authority like the Linux Foundation, Red Hat, or LPIC are particularly helpful when you are a fresh graduate or if you want to switch to a new domain in your career.
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-foundation-discount-coupon/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://shareasale.com/u.cfm?d=507759&m=59485&u=747593&afftrack=
[2]: http://shrsl.com/1k5ug
[3]: https://itsfoss.com/affiliate-policy/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/07/linux-foundation-training-certification-discount.png?ssl=1
[5]: https://www.linuxfoundation.org/publications/open-source-jobs-report-2018/

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 3)
[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-3)
[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie)
Linux Server Hardening Using Idempotency with Ansible: Part 3
======
![][1]
[Creative Commons Zero][2]
In the previous articles, we introduced idempotency as a way to approach your servers' security posture and looked at some specific Ansible examples, including the kernel, system accounts, and IPtables. In this final article of the series, we'll look at a few more server-hardening examples and talk a little more about how the idempotency playbook might be used.
#### **Time**
Due to its reduced functionality, and therefore attack surface, the preference amongst a number of OSs has been to introduce “chronyd” over “ntpd”. If you're new to “chrony”, then fret not. It's still using the NTP (Network Time Protocol) that we all know and love, but in a more secure fashion.
The first thing I do with Ansible within the “chrony.conf” file is alter the “bind address” and, if my memory serves, there's also a “command port” option. These config options allow Chrony to listen only on the localhost. In other words, you are still syncing as usual with other upstream time servers (just as NTP does), but no remote servers can query your time services; only your local machine has access.
There's more information on the “bindcmdaddress 127.0.0.1” and “cmdport 0” options on this Chrony page (<https://chrony.tuxfamily.org/faq.html>) under “2.5. How can I make chronyd more secure?”, which you should read for clarity. The premise behind the comment on that page is a good idea: “you can disable the internet command sockets completely by adding cmdport 0 to the configuration file”.
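Putting those two options together, the relevant fragment of the “chrony.conf” file is simply:
```
bindcmdaddress 127.0.0.1
cmdport 0
```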
Additionally I would also focus on securing the file permissions for Chrony and insist that the service starts as expected just like the syslog config above. Otherwise make sure that your time sources are sane, have a degree of redundancy with multiple sources set up and then copy the whole config file over using Ansible.
#### **Logging**
You can clearly affect the level of detail included in the logs from a number of pieces of software on a server. Thinking back to what we've looked at in relation to syslog already, you can also tweak that application's config to your needs using Ansible and then apply the example Ansible above in addition.
#### **PAM**
Apparently PAM (Pluggable Authentication Modules) has been a part of Linux since 1997. It is undeniably useful (a common use is that you can force SSH to use it for password logins, as per the SSH YAML file above). It is extensible and sophisticated and can perform useful functions, such as preventing brute-force attacks on password logins using a clever rate-limiting system. The syntax varies a little between OSes, but if you have the time, then getting PAM working well (even if you're only using SSH keys and not passwords for your logins) is a worthwhile effort. Attackers like to have their own users on a system; among lots of usernames, something innocuous such as “webadmin” or similar might be easy to miss on a server, and PAM can help you out in this respect.
#### **Auditd**
We've looked at logging a little already, but what about capturing every “system call” that the kernel makes? The Linux kernel is a super-busy component of any system, and logging almost every single thing that a system does is an excellent way of providing post-event forensics. This article will hopefully shed some light on where to begin: <http://www.admin-magazine.com/Archive/2018/43/Auditing-Docker-Containers-in-a-DevOps-Environment>. Note the comments in that article about performance: there's little point in paying extra for compute and disk IO resources because you've misconfigured your logging, so my advice is to spend some time getting it correct.
For concerns over disk space, I will usually change a few lines in the file “/etc/audit/auditd.conf” in order to prevent, firstly, too many log files being created and, secondly, logs growing very large without being rotated. This is on the proviso that logs are also being ingested upstream via another mechanism. Clearly, the file's permissions and the service starting are the basics you need to cover here too. File permissions for auditd are generally tight, as it's a “root”-oriented service, so fewer changes are needed here.
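For illustration, one plausible set of values in “/etc/audit/auditd.conf” that caps the number and size of log files and rotates them (the exact numbers are mine, so tune them to your disk budget):
```
num_logs = 5
max_log_file = 50
max_log_file_action = ROTATE
```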
#### **Filesystems**
With a little reading, you can discover which filesystems are made available to your OS by default. You should disable the ones you do not need (at the “modprobe.d” file level) with Ansible to prevent weird and wonderful things being attached unwittingly to your servers. You are reducing the attack surface with this approach. The Ansible might look something like the example below.
```
- name: Make sure filesystems which are not needed are forced as off
  lineinfile: dest="/etc/modprobe.d/harden.conf" line='install squashfs /bin/true' state=present
```
#### **SELinux**
The old, but sometimes avoided due to complexity, security favourite, SELinux, should be set to “enforcing” mode. Or, at the very least, set it to log sensibly using “permissive” mode. Permissive mode will at least fill your auditd logs up with any correct rule matches nicely. In terms of what the Ansible looks like, it's simple and along these lines:
```
- name: Configure SELinux to be running in permissive mode
  replace: path='/etc/selinux/config' regexp='SELINUX=disabled' replace='SELINUX=permissive'
```
#### **Packages**
Needless to say, the compliance hardening playbook is also a good place to upgrade all the packages (with some selective exclusions) on the system. However, pay attention to the section relating to reboots and idempotency in a moment. With other mechanisms in place, you might not want to update packages here but instead do so as per the Automation Documents article mentioned in a moment.
### **Idempotency**
Now we've run through some of the aspects you would want to look at when hardening a server, let's think a little more about how the playbook might be used.
When it comes to cloud platforms, most of my professional work has been on AWS, and therefore, more often than not, a fresh AMI is launched and then a playbook is run over the top of it. There's a mountain of detail on one way of doing that in this article (<http://www.admin-magazine.com/Archive/2018/45/AWS-Automation-Documents>), which you may be pleased to discover accommodates a mechanism to spawn a script or playbook.
It is important to note, when it comes to idempotency, that it may take a little more effort initially to get your head around the logic involved in being able to re-run Ansible repeatedly without disturbing the required status quo of your server estate.
One thing to be absolutely certain of however (barring rare edge cases) is that after you apply your hardening for the very first time, on a new AMI or server build, you will require a reboot. This is an important element due to a number of system facets not being altered correctly without a reboot. These include applying kernel changes so alterations become live, writing auditd rules as immutable config and also starting or stopping services to improve the security posture.
Note, though, that you're probably not going to want to execute all plays in a playbook every twenty or thirty minutes, such as updating all packages and stopping and restarting key customer-facing services. As a result, you should factor the logic into your Ansible so that some tasks only run once initially, perhaps writing a “completed” placeholder file to the filesystem afterwards for reference, as sketched below. There's a million different ways of achieving a status checker.
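One simple way of expressing that run-once logic in Ansible is a “stat” check against the placeholder file; the sketch below is mine, and the file path and task names are hypothetical:
```
- name: Check for the run-once placeholder file
  stat:
    path: /var/local/.hardening-complete
  register: hardening_marker

- name: Run the expensive one-off hardening tasks
  include_tasks: initial-hardening.yml
  when: not hardening_marker.stat.exists

- name: Write the placeholder so the tasks above are skipped on reruns
  file:
    path: /var/local/.hardening-complete
    state: touch
  when: not hardening_marker.stat.exists
```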
The nice thing about Ansible is that the logic for rerunning playbooks is implicit, unlike shell scripts, where coding that logic in for this type of task can be arduous. Sometimes, such as when updating the GRUB bootloader, trying to guess the many permutations of a system change can be painful.
### **Bedtime Reading**
I still think that you can't beat trial and error when it comes to computing. Experience is valued for good reason.
Be warned that you'll sometimes find contradictory advice from the vast array of online resources in this area. Advice differs probably because of the different use cases. The only way to harden the varying flavours of OS, to my mind, is via a bespoke approach. This is thanks to the environments that servers are used within and the requirements of the security framework or standard that an organisation needs to meet.
For OS hardening details you can check with resources such as the NSA ([https://www.nsa.gov][3]), the Cloud Security Alliance (<https://cloudsecurityalliance.org/working-groups/security-guidance/#_overview>), proprietary training organisations such as GIAC ([https://www.giac.org][4]) who offer resources (<https://www.giac.org/paper/gcux/97/red-hat-linux-71-installation-hardening-checklist/102167>), the diverse CIS Benchmarks ([https://www.cisecurity.org][5]) for industry consensus-based benchmarking, the SANS Institute (<https://uk.sans.org/score/checklists>), NIST's Computer Security Research ([https://csrc.nist.gov][6]) and of course print media too.
### **Conclusion**
Hopefully, you can see how powerful an idempotent server infrastructure is and are tempted to try it for yourself.
The ever-present threat of APT (Advanced Persistent Threat) attacks on infrastructure, where a successful attacker will sit silently monitoring events and then, when it's opportune, infiltrate deeper into an estate, makes this type of configuration highly valuable.
The amount of detail that goes into the tests and configuration changes is key to the value that such an approach will bring to an organisation. Like the tests in a CI/CD pipeline, they're only ever as good as their coverage.
Chris Binnie's latest book, _Linux Server Security: Hack and Defend_, shows you how to make your servers invisible and perform a variety of attacks. You can find out more about DevSecOps, containers and Linux security on his website: [https://www.devsecops.cc][7]
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-3
作者:[Chris Binnie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/chrisbinnie
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tech-1495181_1280.jpg?itok=5WcwApNN
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.nsa.gov/
[4]: https://www.giac.org/
[5]: https://www.cisecurity.org/
[6]: https://csrc.nist.gov/
[7]: https://www.devsecops.cc/

View File

@ -0,0 +1,312 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HTTPie A Modern Command Line HTTP Client For Curl And Wget Alternative)
[#]: via: (https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
HTTPie A Modern Command Line HTTP Client and an Alternative to Curl and Wget
======
Most of the time, we use the curl or wget commands to download files and for other purposes.
We have written about the **[best command line download managers][1]** in the past. You can navigate to those articles by clicking the corresponding URLs.
* **[aria2 A Command Line Multi-Protocol Download Tool For Linux][2]**
* **[Axel A Lightweight Command Line Download Accelerator For Linux][3]**
* **[Wget A Standard Command Line Download Utility For Linux][4]**
* **[curl A Nifty Command Line Download Tool For Linux][5]**
Today we are going to discuss a utility of the same kind, named HTTPie.
It is a modern command line HTTP client and a fine alternative to the curl and wget commands.
### What Is HTTPie?
HTTPie (pronounced aitch-tee-tee-pie) is a command line HTTP client.
The httpie tool is a modern command line HTTP client whose goal is to make CLI interaction with web services as human-friendly as possible.
It provides a simple http command that allows for sending arbitrary HTTP requests using a simple and natural syntax, and displays colorized output.
HTTPie can be used for testing, debugging, and generally interacting with HTTP servers.
### Main Features
* Expressive and intuitive syntax
* Formatted and colorized terminal output
* Built-in JSON support
* Forms and file uploads
* HTTPS, proxies, and authentication
* Arbitrary request data
* Custom headers
* Persistent sessions
* Wget-like downloads
* Python 2.7 and 3.x support
### How To Install HTTPie In Linux?
Most Linux distributions provide a package that can be installed using the system package manager.
For **`Fedora`** system, use **[DNF Command][6]** to install httpie.
```
$ sudo dnf install httpie
```
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][7]** or **[APT Command][8]** to install httpie.
```
$ sudo apt install httpie
```
For **`Arch Linux`** based systems, use **[Pacman Command][9]** to install httpie.
```
$ sudo pacman -S httpie
```
For **`RHEL/CentOS`** systems, use **[YUM Command][10]** to install httpie.
```
$ sudo yum install httpie
```
For **`openSUSE Leap`** system, use **[Zypper Command][11]** to install httpie.
```
$ sudo zypper install httpie
```
### 1) How To Request A URL Using HTTPie?
The basic usage of httpie is to request a website URL as an argument.
```
# http 2daygeek.com
HTTP/1.1 301 Moved Permanently
CF-RAY: 4c4a618d0c02ce6d-LHR
Cache-Control: max-age=3600
Connection: keep-alive
Date: Tue, 09 Apr 2019 06:21:28 GMT
Expires: Tue, 09 Apr 2019 07:21:28 GMT
Location: https://2daygeek.com/
Server: cloudflare
Transfer-Encoding: chunked
Vary: Accept-Encoding
```
### 2) How To Download A File Using HTTPie?
You can download a file using HTTPie with the `--download` parameter. This is similar to wget command.
```
# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png
HTTP/1.1 200 OK
Accept-Ranges: bytes
CF-Cache-Status: HIT
CF-RAY: 4c4a65d5ca360a66-LHR
Cache-Control: public, max-age=7200
Connection: keep-alive
Content-Length: 32066
Content-Type: image/png
Date: Tue, 09 Apr 2019 06:24:23 GMT
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Expires: Tue, 09 Apr 2019 08:24:23 GMT
Last-Modified: Mon, 08 Apr 2019 04:54:25 GMT
Server: cloudflare
Set-Cookie: __cfduid=dd2034b2f95ae42047e082f59f2b964f71554791063; expires=Wed, 08-Apr-20 06:24:23 GMT; path=/; domain=.2daygeek.com; HttpOnly; Secure
Vary: Accept-Encoding
Downloading 31.31 kB to "Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png"
Done. 31.31 kB in 0.01187s (2.58 MB/s)
```
Alternatively you can save the output file with different name by using `-o` parameter.
```
# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png -o Anbox-1.png
HTTP/1.1 200 OK
Accept-Ranges: bytes
CF-Cache-Status: HIT
CF-RAY: 4c4a68194daa0a66-LHR
Cache-Control: public, max-age=7200
Connection: keep-alive
Content-Length: 32066
Content-Type: image/png
Date: Tue, 09 Apr 2019 06:25:56 GMT
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Expires: Tue, 09 Apr 2019 08:25:56 GMT
Last-Modified: Mon, 08 Apr 2019 04:54:25 GMT
Server: cloudflare
Set-Cookie: __cfduid=d3eea753081690f9a2d36495a74407dd71554791156; expires=Wed, 08-Apr-20 06:25:56 GMT; path=/; domain=.2daygeek.com; HttpOnly; Secure
Vary: Accept-Encoding
Downloading 31.31 kB to "Anbox-1.png"
Done. 31.31 kB in 0.01551s (1.97 MB/s)
```
### 3) How To Resume Partial Download Using HTTPie?
You can resume a partial download using HTTPie with the `--continue` (`-c`) parameter.
```
# http --download --continue https://speed.hetzner.de/100MB.bin -o 100MB.bin
HTTP/1.1 206 Partial Content
Connection: keep-alive
Content-Length: 100442112
Content-Range: bytes 4415488-104857599/104857600
Content-Type: application/octet-stream
Date: Tue, 09 Apr 2019 06:32:52 GMT
ETag: "5253f0fd-6400000"
Last-Modified: Tue, 08 Oct 2013 11:48:13 GMT
Server: nginx
Strict-Transport-Security: max-age=15768000; includeSubDomains
Downloading 100.00 MB to "100MB.bin"
| 24.14 % 24.14 MB 1.12 MB/s 0:01:07 ETA^C
```
You can verify this in the output below.
```
# ls -lhtr 100MB.bin
-rw-r--r-- 1 root root 25M Apr 9 01:33 100MB.bin
```
### 4) How To Upload A File Using HTTPie?
You can upload a file using HTTPie with the less-than symbol `<`.
```
$ http https://transfer.sh < Anbox-1.png
```
### 5) How To Download A File Using HTTPie With Redirect Symbol ">"?
You can download a file using HTTPie with the redirect symbol `>`.
```
# http https://www.2daygeek.com/wp-content/uploads/2019/03/How-To-Install-And-Enable-Flatpak-Support-On-Linux-1.png > Flatpak.png
# ls -ltrh Flatpak.png
-rw-r--r-- 1 root root 47K Apr 9 01:44 Flatpak.png
```
### 6) How To Send An HTTP GET Request?
You can explicitly send an HTTP GET request. The GET method is used to retrieve information from the given server using a given URI.
```
# http GET httpie.org
HTTP/1.1 301 Moved Permanently
CF-RAY: 4c4a83a3f90dcbe6-SIN
Cache-Control: max-age=3600
Connection: keep-alive
Date: Tue, 09 Apr 2019 06:44:44 GMT
Expires: Tue, 09 Apr 2019 07:44:44 GMT
Location: https://httpie.org/
Server: cloudflare
Transfer-Encoding: chunked
Vary: Accept-Encoding
```
### 7) How To Submit A Form?
Use the following format to submit a form. A POST request is used to send data to the server, for example, customer information or file uploads, via HTML forms.
```
# http -f POST Ubuntu18.2daygeek.com hello='World'
HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Encoding: gzip
Content-Length: 3138
Content-Type: text/html
Date: Tue, 09 Apr 2019 06:48:12 GMT
ETag: "2aa6-5844bf1b047fc-gzip"
Keep-Alive: timeout=5, max=100
Last-Modified: Sun, 17 Mar 2019 15:29:55 GMT
Server: Apache/2.4.29 (Ubuntu)
Vary: Accept-Encoding
```
Run the following command to see the request that is being sent.
```
# http -v Ubuntu18.2daygeek.com
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: ubuntu18.2daygeek.com
User-Agent: HTTPie/0.9.8
hello=World
HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Encoding: gzip
Content-Length: 3138
Content-Type: text/html
Date: Tue, 09 Apr 2019 06:48:30 GMT
ETag: "2aa6-5844bf1b047fc-gzip"
Keep-Alive: timeout=5, max=100
Last-Modified: Sun, 17 Mar 2019 15:29:55 GMT
Server: Apache/2.4.29 (Ubuntu)
Vary: Accept-Encoding
```
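HTTPie is not limited to forms. Its built-in JSON support and custom headers (both listed among the main features above) use the same request-item syntax: `key=value` pairs are serialized as a JSON body, `key:=value` passes raw JSON values such as numbers, and `Header:value` sets a custom header. The URL and field names below are made-up placeholders:
```
$ http POST example.org/api/user name=John age:=29 X-API-Token:secret123
```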
### 8) How To Use HTTP Authentication?
The currently supported authentication schemes are Basic and Digest.
Basic auth
```
$ http -a username:password example.org
```
Digest auth
```
$ http -A digest -a username:password example.org
```
Password prompt
```
$ http -a username example.org
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/best-4-command-line-download-managers-accelerators-for-linux/
[2]: https://www.2daygeek.com/aria2-linux-command-line-download-utility-tool/
[3]: https://www.2daygeek.com/axel-linux-command-line-download-accelerator/
[4]: https://www.2daygeek.com/wget-linux-command-line-download-utility-tool/
[5]: https://www.2daygeek.com/curl-linux-command-line-download-manager/
[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/

View File

@ -0,0 +1,129 @@
[#]: collector: "lujun9972"
[#]: translator: "zgj1024 "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "Why DevOps is the most important tech strategy today"
[#]: via: "https://opensource.com/article/19/3/devops-most-important-tech-strategy"
[#]: author: "Kelly AlbrechtWilly-Peter Schaub https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht"
为何 DevOps 是如今最重要的技术策略
======
消除一些关于 DevOps 的疑惑
![CICD with gears][1]
很多人初学 [DevOps][2] 时,看到它的某个成果就会问:这是怎么做到的?其实,理解 DevOps 的具体实现方式并不那么重要,重要的是理解采用 DevOps 策略的原因,这是成为行业领导者还是追随者的差别。
你可能听说过一些难以置信的 DevOps 成果:例如生产环境非常有弹性,即使“混世猴子”([Chaos Monkey][3])程序在运行、随机切断各处的连接,每天仍能发布数千个版本。这令人印象深刻,但就其本身而言,这只是一个支持 DevOps 的弱论证,本质上还会陷入[证明不存在][4]的困境DevOps 环境有弹性,是因为尚未观察到严重的故障……只是还没有而已。
有很多关于 DevOps 的疑惑,并且许多人还在尝试弄清楚它的意义。下面是来自我 LinkedIn Feed 中的某个人的一个案例:
> 最近我参加了一些 #DevOps 的交流会,有些演讲人似乎在倡导 #敏捷开发 是 DevOps 的子集。不知为何,我的理解恰恰相反。
>
> 能听一下你们的想法吗?你认为敏捷开发和 DevOps 之间是什么关系呢?
>
> 1. DevOps 是敏捷开发的子集
> 2. 敏捷开发 是 DevOps 的子集
> 3. DevOps 是敏捷开发的扩展,从敏捷开发结束的地方开始
> 4. DevOps 是敏捷开发的新版本
>
科技行业的专业人士在那篇 LinkedIn 帖子上给出了各种各样的答案,你会怎样回复呢?
### DevOps 源于精益和敏捷
如果我们从亨利·福特的战略以及丰田生产系统对福特模式的改进这段历史说起DevOps 就更有意义了。精益制造就诞生在那段历史中,人们也对它进行了深入的研究。James P. Womack 和 Daniel T. Jones 将精益思维([Lean Thinking][5])提炼为五个原则:
1. 指明客户所需的价值
2. 确定提供该价值的每个产品的价值流,并对当前提供该价值所需的所有浪费步骤提起挑战
3. 使产品通过剩余的增值步骤持续流动
4. 在可以连续流动的所有步骤之间引入拉力
5. 管理要尽善尽美,以便为客户服务所需的步骤数量和时间以及信息量持续下降
精益致力于持续消除浪费并增加流向客户的价值。这一点可以通过精益的一个核心原则很容易地认识和理解:单件流。我们可以做一些游戏来了解为何一次移动一件要比批量移动快得多,其中的两个游戏是[硬币游戏][6]和[飞机游戏][7]。在硬币游戏中,如果一批 20 个硬币要用 2 分钟才能到达顾客手中,那么顾客要等 2 分钟才能拿到这一整批硬币。如果一次只移动一个硬币,顾客会在大约 5 秒内拿到第一枚硬币,并持续不断地收到硬币,直到第 20 个硬币在大约 25 秒后到达。(译者注:有相关的视频)
这是巨大的差异,但生活中并不是所有事情都像硬币游戏那样简单且可预测,这就是敏捷出现的原因。我们当然能在高效绩的敏捷团队中看到精益原则,但这些团队需要的不仅仅是精益。
为了能够处理典型的软件开发任务的不可预见性和变化,敏捷方法论将重点放在意识、审议、决策和行动上,以便在不断变化的现实中做出调整。例如,敏捷框架(如 scrum通过每日站立会议和冲刺评审会议等仪式来提高意识。如果 scrum 团队意识到新的事实,框架允许并鼓励他们在必要时及时调整路线。
要让团队做出这类决策,他们需要在高度信任的环境中具备自组织能力。以这种方式工作的高效绩敏捷团队在不断调整的同时实现快速的价值流,消除错误方向上的浪费。
### 最佳批量大小
要了解 DevOps 在软件开发中的强大功能,理解批量大小的经济学会有所帮助。请考虑以下来自 Donald Reinertsen 的《[产品开发流程原则][8]》的 U 曲线优化示例:
![U-curve optimization illustration of optimal batch size][9]
这可以用杂货店购物来类比解释。假设你需要买一些鸡蛋,而你住的地方离商店有 30 分钟的路程。买一个鸡蛋(图中最左边)意味着每次要花 30 分钟的路程这就是你的_交易成本_。_持有成本_可能是鸡蛋变质以及在冰箱中持续占用的空间。_总成本_是_交易成本_加上_持有成本_。这条 U 型曲线解释了为什么对大部分人来说一次买一打鸡蛋是他们的_最佳批量大小_。如果你就住在商店旁边,步行到那里不花什么时间,你可能每次只买一小盒鸡蛋,以此来节省冰箱空间并享受新鲜的鸡蛋。
这条 U 型优化曲线可以说明为什么在成功的敏捷转换中生产力会显著提高。考虑敏捷转换对组织决策的影响:在传统的分级组织中,决策权是集中的,这导致由较少的人做出更大的决策。敏捷方法论通过将决策分散到认知和信息最充分的位置,即高度信任、自组织的敏捷团队,有效地降低了组织决策中的交易成本。
下面的动画演示了降低交易成本后,最佳批量大小是如何向左移动的。组织能够更频繁地做出更快的决策,其价值不可低估。
![U-curve optimization illustration][10]
### DevOps 适合哪些地方
自动化是 DevOps 最知名的事情之一。前面的插图非常详细地展示了自动化的价值。通过自动化,我们将交易成本降低到接近零,实质上是免费进行测试和部署。这使我们可以利用越来越小的批量工作。较小批量的工作更容易理解、提交、测试、审查和知道何时能完成。这些较小的批量大小也包含较少的差异和风险,使其更易于部署,如果出现问题,可以进行故障排除和恢复。通过自动化与扎实的敏捷实践相结合,我们可以使我们的功能开发非常接近单件流程,从而快速,持续地为客户提供价值。
更传统地说DevOps 被理解为一种打破开发团队和运营团队之间混乱局面的方法。在这个模型中开发团队开发新的功能而运营团队则保持系统的稳定和平稳运行。摩擦的发生是因为开发过程中的新功能将更改引入到系统中从而增加了停机的风险运营团队并不认为要对此负责但无论如何都必须处理这一问题。DevOps 不仅仅尝试让人们一起工作,更重要的是尝试在复杂的环境中安全地进行更频繁的更改。
我们可以向 [Ron Westrum][11] 寻求有关在复杂组织中实现安全性的研究。在研究为什么有些组织比其他组织更安全时,他发现组织的文化可以预测其安全性。他确定了三种文化:病态型、官僚型和生产式。他发现病态型文化预示着较低的安全性,而生产式文化则预示着更高的安全性(例如,在他的主要研究领域中,飞机坠毁或意外住院死亡的数量要少得多)。
![Three types of culture identified by Ron Westrum][12]
高效的 DevOps 团队通过精益和敏捷的实践实现了一种生成性文化这表明速度和安全性是互补的或者说是同一个问题的两个方面。通过将决策和功能的最佳批量大小减少到非常小DevOps 实现了更快的信息流和价值,同时消除了浪费并降低了风险。
与 Westrum 的研究一致,在提高安全性和可靠性的同时,变更也可以轻松进行。当一个敏捷的 DevOps 团队被信任做出自己的决定时,我们就会得到 DevOps 目前最为人所知的工具和技术:自动化和持续交付。通过这种自动化,交易成本比以往任何时候都更低,并且实现了近乎单件流的精益流程,正如我们在高效绩的 DevOps 组织中看到的那样,每天可能产生数千个决策和发布。
### 流动、反馈、学习
DevOps 并不止于此。我们主要讨论了 DevOps 带来的革命性的流,但通过类似的努力,精益和敏捷实践还可以进一步放大,从而实现更快的反馈循环和更快的学习。在《[DevOps 手册][13]》中,作者除了详细解释快速流之外,还解释了 DevOps 如何在整个价值流中实现遥测,从而获得快速且持续的反馈。此外,利用[精益求精的突破][14]和 scrum 的[回顾][15],高效的 DevOps 团队将不断推动学习和持续改进深入到他们组织的基础,实现软件产品开发行业的精益制造革命。
### 从 DevOps 评估开始
利用 DevOps 的第一步是,经过大量研究或在 DevOps 顾问和教练的帮助下,对高效绩 DevOps 团队中始终存在的一系列维度进行评估。评估应确定薄弱或缺失的、需要改进的团队规范。对评估结果进行分析,以找到成功机会大、影响力高的快速见效领域。快速见效非常重要,能让团队获得解决更具挑战性领域所需的动力。团队应该产生可以快速尝试的想法,并开始关注 DevOps 转型。
一段时间后,团队应重新评估相同的维度,以衡量改进并确立新的高影响力重点领域,并再次采纳团队的新想法。一位好的教练将根据需要进行咨询、培训、指导和支持,直到团队拥有自己的持续改进方案,并通过不断地重新评估、试验和学习,在所有维度上实现近乎一致。
在本文的[第二部分][16]中,我们将查看 Drupal 社区中 DevOps 调查的结果,并了解最有可能找到快速获胜的位置。
* * *
_Rob Bayliss 和 Kelly Albrecht 将于 4 月 8 日至 12 日在西雅图举行的 [DrupalCon 2019][19] 上发表演讲“[DevOps: Why, How, and What][17]”,并主持一场后续的[同好会讨论][18]。_
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/devops-most-important-tech-strategy
作者:[Kelly AlbrechtWilly-Peter Schaub][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/zgj1024)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc "CICD with gears"
[2]: https://opensource.com/resources/devops
[3]: https://github.com/Netflix/chaosmonkey
[4]: https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative
[5]: https://www.amazon.com/dp/B0048WQDIO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
[6]: https://youtu.be/5t6GhcvKB8o?t=54
[7]: https://www.shmula.com/paper-airplane-game-pull-systems-push-systems/8280/
[8]: https://www.amazon.com/dp/B00K7OWG7O/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
[9]: https://opensource.com/sites/default/files/uploads/batch_size_optimal_650.gif "U-curve optimization illustration of optimal batch size"
[10]: https://opensource.com/sites/default/files/uploads/batch_size_650.gif "U-curve optimization illustration"
[11]: https://en.wikipedia.org/wiki/Ron_Westrum
[12]: https://opensource.com/sites/default/files/uploads/information_flow.png "Three types of culture identified by Ron Westrum"
[13]: https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002/ref=sr_1_3?keywords=DevOps+handbook&qid=1553197361&s=books&sr=1-3
[14]: https://en.wikipedia.org/wiki/Kaizen
[15]: https://www.scrum.org/resources/what-is-a-sprint-retrospective
[16]: https://opensource.com/article/19/3/where-drupal-community-stands-devops-adoption
[17]: https://events.drupal.org/seattle2019/sessions/devops-why-how-and-what
[18]: https://events.drupal.org/seattle2019/bofs/devops-getting-started
[19]: https://events.drupal.org/seattle2019

View File

@ -0,0 +1,287 @@
Sensu 监控入门
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e)
Sensu 是一个开源基础设施和应用程序监控解决方案它监控服务器、相关服务和应用程序健康状况并通过第三方集成发送警报和通知。Sensu 用 Ruby 编写,可以使用 [RabbitMQ][1] 或 [Redis][2] 来处理消息,它使用 Redis 来存储数据。
如果你想以一种简单而有效的方式监控云基础设施Sensu 是一个不错的选择。它可以与你组织已经使用的许多现代 DevOps 堆栈集成,比如 [Slack][3]、[HipChat][4] 或 [IRC][5],它甚至可以用 [PagerDuty][6] 发送移动或寻呼机警报。
Sensu 的[模块化架构][7]意味着每个组件都可以安装在同一台服务器上或者在完全独立的机器上。
### 结构
Sensu 的主要通信机制是 `Transport`。每个 Sensu 组件必须连接到 `Transport` 才能相互发送消息。`Transport` 可以使用 RabbitMQ在生产中推荐使用或 Redis。
Sensu 服务器处理事件数据并采取行动。它注册客户端并使用过滤器、增变器和处理程序检查结果和监视事件。服务器向客户端发布检查指令Sensu API 则提供了 RESTful API用于访问监控数据和核心功能。
[Sensu 客户端][8]执行 Sensu 服务器安排的检查或本地检查定义。Sensu 使用数据存储Redis来保存所有的持久数据。最后[Uchiwa][9] 是与 Sensu API 进行通信的 Web 界面。
![sensu_system.png][11]
### 安装 Sensu
#### 条件
* 一个 Linux 系统作为服务器节点(本文使用了 CentOS 7
* 要监控的一台或多台 Linux 机器(客户机)
#### 服务器侧
Sensu 需要安装 Redis。要安装 Redis启用 EPEL 仓库:
```
$ sudo yum install epel-release -y
```
然后安装 Redis
```
$ sudo yum install redis -y
```
修改 `/etc/redis.conf` 来禁用保护模式,监听每个地址并设置密码:
```
$ sudo sed -i 's/^protected-mode yes/protected-mode no/g' /etc/redis.conf
$ sudo sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.conf
$ sudo sed -i 's/^# requirepass foobared/requirepass password123/g' /etc/redis.conf
```
启用并启动 Redis 服务:
```
$ sudo systemctl enable redis
$ sudo systemctl start redis
```
Redis 现在已经安装并准备好被 Sensu 使用。
现在让我们来安装 Sensu。
首先,配置 Sensu 仓库并安装软件包:
```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF
$ sudo yum install sensu uchiwa -y
```
让我们为 Sensu 创建最简单的配置文件:
```
$ sudo tee /etc/sensu/conf.d/api.json << EOF
{
  "api": {
        "host": "127.0.0.1",
        "port": 4567
  }
}
EOF
```
这个文件配置 `sensu-api` 监听本地主机的 4567 端口。接下来,配置 Redis 和传输机制:
```
$ sudo tee /etc/sensu/conf.d/redis.json << EOF
{
  "redis": {
        "host": "<IP of server>",
        "port": 6379,
        "password": "password123"
  }
}
EOF
$ sudo tee /etc/sensu/conf.d/transport.json << EOF
{
  "transport": {
        "name": "redis"
  }
}
EOF
```
在这两个文件中,我们将 Sensu 配置为使用 Redis 作为传输机制,并指定了 Redis 监听的地址。客户端需要直接连接到传输机制,因此每台客户机都需要这两个文件。接下来,配置 `Uchiwa`
```
$ sudo tee /etc/sensu/uchiwa.json << EOF
{
   "sensu": [
        {
        "name": "sensu",
        "host": "127.0.0.1",
        "port": 4567
        }
   ],
   "uchiwa": {
        "host": "0.0.0.0",
        "port": 3000
   }
}
EOF
```
在这个文件中,我们配置 `Uchiwa` 监听端口 3000 上的每个地址0.0.0.0)。我们还配置 `Uchiwa` 使用 `sensu-api`(已配置好)。
出于安全原因,更改刚刚创建的配置文件的所有者:
```
$ sudo chown -R sensu:sensu /etc/sensu
```
启用并启动 Sensu 服务:
```
$ sudo systemctl enable sensu-server sensu-api sensu-client
$ sudo systemctl start sensu-server sensu-api sensu-client
$ sudo systemctl enable uchiwa
$ sudo systemctl start uchiwa
```
尝试访问 `Uchiwa` 网站:`http://<服务器的 IP 地址>:3000`
对于生产环境,建议运行 RabbitMQ 集群作为 Transport 而不是 Redis虽然 Redis 集群也可以用于生产),运行多个 Sensu 服务器实例和 API 实例,以实现负载均衡和高可用性。
Sensu 现在安装完成,让我们来配置客户端。
#### 客户端侧
要添加一个新客户端,你需要通过创建 `/etc/yum.repos.d/sensu.repo` 文件在客户机上启用 Sensu 仓库。
```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF
```
启用仓库后,安装 Sensu
```
$ sudo yum install sensu -y
```
要配置 `sensu-client`,创建在服务器中相同的 `redis.json``transport.json`,还有 `client.json` 配置文件:
```
$ sudo tee /etc/sensu/conf.d/client.json << EOF
{
  "client": {
        "name": "rhel-client",
        "environment": "development",
        "subscriptions": [
        "frontend"
        ]
  }
}
EOF
```
`name` 字段中,指定一个名称来标识此客户机(通常是主机名)。`environment` 字段可以帮助你过滤,订阅定义客户机将执行哪些监视检查。
最后,启用并启动服务并检查 `Uchiwa`,因为客户机会自动注册:
```
$ sudo systemctl enable sensu-client
$ sudo systemctl start sensu-client
```
### Sensu 检查
Sensu 检查有两个组件:一个插件和一个定义。
Sensu 与 [Nagios 检查插件规范][12]兼容,因此无需修改即可使用针对 Nagios 的任何检查。检查是可执行文件,由 Sensu 客户机运行。
检查定义让 Sensu 知道如何、在哪以及何时运行插件。
#### 客户端侧
让我们在客户机上安装一个检查插件。请记住,此插件将在客户机上执行。
启用 EPEL 并安装 `nagios-plugins-http` :
```
$ sudo yum install -y epel-release
$ sudo yum install -y nagios-plugins-http
```
现在让我们通过手动执行它来研究这个插件。尝试检查客户机上运行的 Web 服务器的状态。它应该会失败,因为我们并没有运行 Web 服务器:
```
$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1
connect to address 127.0.0.1 and port 80: Connection refused
HTTP CRITICAL - Unable to open TCP socket
```
不出所料,它失败了。检查执行的返回值:
```
$ echo $?
2
```
Nagios 检查插件规范定义了插件执行的四个返回值:
| **Plugin return code** | **State** |
|------------------------|-----------|
| 0 | OK |
| 1 | WARNING |
| 2 | CRITICAL |
| 3 | UNKNOWN |
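根据上面的约定,编写自定义检查插件也很容易。下面是一个简单的 Bash 示例脚本(这是本文之外的补充示例,阈值可按需调整),它检查根分区的使用率并返回相应的状态码:
```
#!/bin/bash
# 按照 Nagios 插件规范检查根分区使用率
usage=$(df / --output=pcent | tail -1 | tr -dc '0-9')
if [ "$usage" -ge 90 ]; then
  echo "CRITICAL - 根分区已使用 ${usage}%"
  exit 2
elif [ "$usage" -ge 80 ]; then
  echo "WARNING - 根分区已使用 ${usage}%"
  exit 1
else
  echo "OK - 根分区已使用 ${usage}%"
  exit 0
fi
```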
有了这些信息,我们现在可以在服务器上创建检查定义。
#### 服务器侧
在服务器机器上,创建 `/etc/sensu/conf.d/check_http.json` 文件:
```
{
  "checks": {
    "check_http": {
      "command": "/usr/lib64/nagios/plugins/check_http -I 127.0.0.1",
      "interval": 10,
      "subscribers": [
        "frontend"
      ]
    }
  }
}
```
`command ` 字段中,使用我们之前测试过的命令。`Interval` 会告诉 Sensu 这个检查的频率,以秒为单位。最后,`subscribers` 将定义执行检查的客户机。
重新启动 sensu-api 和 sensu-server 并确认新检查在 Uchiwa 中可用。
```
$ sudo systemctl restart sensu-api sensu-server
```
### 接下来
Sensu 是一个功能强大的工具,本文只简要介绍它可以干什么。参阅[文档][13]了解更多信息,访问 Sensu 网站了解有关 [Sensu 社区][14]的更多信息。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/getting-started-sensu-monitoring-solution
作者:[Michael Zamot][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mzamot
[1]:https://www.rabbitmq.com/
[2]:https://redis.io/topics/config
[3]:https://slack.com/
[4]:https://en.wikipedia.org/wiki/HipChat
[5]:http://www.irc.org/
[6]:https://www.pagerduty.com/
[7]:https://docs.sensu.io/sensu-core/1.4/overview/architecture/
[8]:https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/
[9]:https://uchiwa.io/#/
[10]:/file/406576
[11]:https://opensource.com/sites/default/files/uploads/sensu_system.png (sensu_system.png)
[12]:https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html
[13]:https://docs.sensu.io/
[14]:https://sensu.io/community

View File

@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enjoy Netflix? You Should Thank FreeBSD)
[#]: via: (https://itsfoss.com/netflix-freebsd-cdn/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
享受 Netflix 么?你应该感谢 FreeBSD
======
Netflix 是世界上最受欢迎的流媒体服务之一。
但你已经知道了。不是吗?
你可能不知道的是 Netflix 使用 [FreeBSD][1] 向你提供内容。
是的。Netflix 依靠 FreeBSD 来构建其内部内容交付网络 CDN
[CDN][2] 是一组位于世界各地的服务器。它主要用于向终端用户分发像图像和视频这样的“大文件”。
Netflix 没有选择商业 CDN 服务,而是建立了自己的内部 CDN名为 [Open Connect][3]。
Open Connect 使用[自定义硬件][4]Open Connect Appliance。你可以在下面的图片中看到它。它可以处理 40Gb/s 的数据,存储容量为 248 TB。
![Netflixs Open Connect Appliance runs FreeBSD][5]
Netflix 免费为合格的互联网服务提供商 ISP 提供 Open Connect Appliance。通过这种方式大量的 Netflix 流量得到了本地化ISP 可以更高效地提供 Netflix 内容。
Open Connect Appliance 运行在 FreeBSD 操作系统上,并且[几乎完全运行开源软件][6]。
### Open Connect 使用 FreeBSD “头”
![][7]
你或许会期望 Netflix 在这样一个关键基础设施上使用 FreeBSD 的稳定版本,但 Netflix 会跟踪 [FreeBSD 头/当前版本][8]。Netflix 表示,跟踪“头”让他们“保持前瞻性,专注于创新”。
以下是 Netflix 跟踪 FreeBSD 的好处:
* 更快的功能迭代
  * 更快地使用 FreeBSD 的新功能
  * 更快的 bug 修复
  * 实现协作
  * 尽量减少合并冲突
  * 摊销合并“成本”
> 运行 FreeBSD “head” 可以让我们非常高效地向用户分发大量数据,同时保持高速的功能开发。
>
> Netflix
请记得,甚至[谷歌也使用 Debian][9] 测试版而不是 Debian 稳定版。也许这些企业更喜欢最先进的功能。
与谷歌一样Netflix 也计划向上游提供代码。这应该有助于 FreeBSD 和其他基于 FreeBSD 的 BSD 发行版。
那么 Netflix 用 FreeBSD 实现了什么?以下是一些统计数据:
> 使用 FreeBSD 和商业硬件,我们在 16 核 2.6GHz 的 CPU 上实现了 90 Gb/s 的 TLS 加密连接,而 CPU 使用率只有约 55%。
>
> Netflix
如果你想了解更多关于 Netflix 和 FreeBSD 的信息,可以参考 [FOSDEM 的这个演示文稿][10]。你还可以在[这里][11]观看演示文稿的视频。
目前,大型企业主要依靠 Linux 来实现其服务器基础架构,但 Netflix 已经信任了 BSD。这对 BSD 社区来说是一件好事,因为如果像 Netflix 这样的行业领导者重视 BSD那么其他人也可以跟上。你怎么看
--------------------------------------------------------------------------------
via: https://itsfoss.com/netflix-freebsd-cdn/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.freebsd.org/
[2]: https://www.cloudflare.com/learning/cdn/what-is-a-cdn/
[3]: https://openconnect.netflix.com/en/
[4]: https://openconnect.netflix.com/en/hardware/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-open-connect-appliance.jpeg?fit=800%2C533&ssl=1
[6]: https://openconnect.netflix.com/en/software/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-freebsd.png?resize=800%2C450&ssl=1
[8]: https://www.bsdnow.tv/tutorials/stable-current
[9]: https://itsfoss.com/goobuntu-glinux-google/
[10]: https://fosdem.org/2019/schedule/event/netflix_freebsd/attachments/slides/3103/export/events/attachments/netflix_freebsd/slides/3103/FOSDEM_2019_Netflix_and_FreeBSD.pdf
[11]: http://mirror.onet.pl/pub/mirrors/video.fosdem.org/2019/Janson/netflix_freebsd.webm

View File

@ -0,0 +1,158 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Square Brackets in Bash: Part 2)
[#]: via: (https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
在 Bash 中使用[方括号](二)
======
![square brackets][1]
> 我们继续来看方括号的用法,它们甚至还可以在 Bash 当中作为一个命令使用。
[Creative Commons Zero][2]
欢迎回到我们的方括号专题。在[前一篇文章][3]当中,我们介绍了方括号在命令行中可以用于通配操作,如果你已经读过前一篇文章,就可以从这里继续了。
方括号还可以以一个命令的形式使用,就像这样:
```
[ "a" = "a" ]
```
上面这种 `[ ... ]` 的形式就可以看成是一个可执行的命令。要注意,方括号内部的内容 `"a" = "a"` 和方括号 `[`、`]` 之间是有空格隔开的。因为这里的方括号被视作一个命令,因此要用空格将命令和它的参数隔开。
上面这个命令的含义是“判断字符串 `"a"` 和字符串 `"a"` 是否相同”,如果判断结果为真,那么 `[ ... ]` 就会以<ruby>状态码<rt>status code</rt></ruby> 0 退出,否则以状态码 1 退出。在之前的文章中,我们也有介绍过状态码的概念,可以通过 `$?` 变量获取到最近一个命令的状态码。
分别执行
```
[ "a" = "a" ]
echo $?
```
以及
```
[ "a" = "b" ]
echo $?
```
这两段命令中,前者会输出 0判断结果为真后者则会输出 1判断结果为假。在 Bash 当中,如果一个命令的状态码是 0表示这个命令正常执行完成并退出而且其中没有出现错误对应布尔值 `true`;如果在命令执行过程中出现错误,就会返回一个非零的状态码,对应布尔值 `false`。而 `[ ... ]`也同样遵循这样的规则。
因此,`[ ... ]` 很适合在 `if ... then`、`while` 或 `until` 这种在代码块结束前需要判断是否达到某个条件结构中使用。
对应使用的逻辑判断运算符也相当直观:
```
[ STRING1 = STRING2 ] => checks to see if the strings are equal
[ STRING1 != STRING2 ] => checks to see if the strings are not equal
[ INTEGER1 -eq INTEGER2 ] => checks to see if INTEGER1 is equal to INTEGER2
[ INTEGER1 -ge INTEGER2 ] => checks to see if INTEGER1 is greater than or equal to INTEGER2
[ INTEGER1 -gt INTEGER2 ] => checks to see if INTEGER1 is greater than INTEGER2
[ INTEGER1 -le INTEGER2 ] => checks to see if INTEGER1 is less than or equal to INTEGER2
[ INTEGER1 -lt INTEGER2 ] => checks to see if INTEGER1 is less than INTEGER2
[ INTEGER1 -ne INTEGER2 ] => checks to see if INTEGER1 is not equal to INTEGER2
etc...
```
方括号的这种用法也可以很有 shell 风格,例如通过带上 `-f` 参数可以判断某个文件是否存在:
```
for i in {000..099}; \
do \
if [ -f file$i ]; \
then \
echo file$i exists; \
else \
touch file$i; \
echo I made file$i; \
fi; \
done
```
如果你在上一篇文章使用到的测试目录中运行以上这串命令,其中的第 3 行会判断那几十个文件当中的某个文件是否存在。如果文件存在,会输出一条提示信息;如果文件不存在,就会把对应的文件创建出来。最终,这个目录中会完整存在从 `file000``file099` 这一百个文件。
上面这段命令还可以写得更加简洁:
```
for i in {000..099};\
do\
if [ ! -f file$i ];\
then\
touch file$i;\
echo I made file$i;\
fi;\
done
```
其中 `!` 运算符表示将判断结果取反,因此第 3 行的含义就是“如果文件 `file$i` 不存在”。
可以尝试一下将测试目录中那几十个文件随意删除几个,然后运行上面的命令,你就可以看到它是如何把被删除的文件重新创建出来的。
除了 `-f` 之外,还有很多有用的参数。`-d` 参数可以判断某个目录是否存在,`-h` 参数可以判断某个文件是不是一个符号链接。可以用 `-G` 参数判断某个文件是否属于某个用户组,用 `-ot` 参数判断某个文件的最后更新时间是否早于另一个文件,甚至还可以判断某个文件是否为空文件。
运行下面的几条命令,可以向几个文件中写入一些内容:
```
echo "Hello World" >> file023
echo "This is a message" >> file065
echo "To humanity" >> file010
```
然后运行:
```
for i in {000..099};\
do\
if [ ! -s file$i ];\
then\
rm file$i;\
echo I removed file$i;\
fi;\
done
```
你就会发现所有空文件都被删除了,只剩下少数几个非空的文件。
如果你还想了解更多别的参数,可以执行 `man test` 来查看 `test` 命令的 man 手册(`test` 是 `[ ... ]` 的命令别名)。
有时候你还会看到 `[[ ... ]]` 这种双方括号的形式,使用起来和单方括号差别不大。但双方括号支持的比较运算符更加丰富:例如可以使用 `==` 来判断某个字符串是否符合某个<ruby>模式<rt>pattern</rt></ruby>,也可以使用 `<`、`>` 来判断两个字符串的出现顺序。
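例如,下面这个简单的补充示例(非原文内容)分别演示了模式匹配和字典序比较:
```
str="foobar"
if [[ $str == foo* ]]; then echo "匹配 foo* 模式"; fi
if [[ "apple" < "banana" ]]; then echo "apple 排在 banana 之前"; fi
```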
可以在 [Bash 表达式文档][5]中了解到双方括号支持的更多运算符。
### 下一集
在下一篇文章中,我们会开始介绍圆括号 `()` 在 Linux 命令行中的用法,敬请关注!
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-brackets-3734552_1920.jpg?itok=hv9D6TBy "square brackets"
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
[4]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
[5]: https://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions
[6]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
[7]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
[8]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
[9]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
[10]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
[11]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash