Merge pull request #12 from LCTT/master

update
acyanbird 2019-04-09 17:54:25 +08:00 committed by GitHub
commit a5022f2107
127 changed files with 14012 additions and 2173 deletions


@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: (Modrisco)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10691-1.html)
[#]: subject: (7 Best VPN Services For 2019)
[#]: via: (https://www.ostechnix.com/7-best-opensource-vpn-services-for-2019/)
[#]: author: (Editor https://www.ostechnix.com/author/editor/)
2019 年最好的 7 款虚拟私人网络服务
======
在过去三年中,全球至少有 67% 的企业面临过数据泄露,数以亿计的用户受到影响。研究表明,如果事先对数据安全采取最基本的保护措施,那么预计 93% 的安全问题是可以避免的。
糟糕的数据安全会带来极大的代价,特别是对企业而言。它会导致大规模的破坏,并影响你的品牌声誉。尽管有些企业可以艰难地收拾残局,但仍有一些企业无法从事故中完全恢复。不过现在,你很幸运地可以得到各种数据及网络安全软件。
![](https://www.ostechnix.com/wp-content/uploads/2019/02/vpn-1.jpeg)
到了 2019 年,你可以通过**虚拟私人网络**,也就是我们熟知的 **VPN** 来保护你免受网络攻击。当涉及到在线隐私和安全时,常常存在许多不确定因素。有数百个不同的 VPN 提供商,选择合适的供应商也同时意味着在定价、服务和易用性之间谋取恰当的平衡。
如果你正在寻找一个可靠的、100% 经过测试的安全 VPN你可能需要进行详尽的调查并作出最佳选择。这里为你提供 2019 年 7 款最好用并经过测试的 VPN 服务。
### 1、VPN Unlimited
通过 VPN Unlimited你的数据安全将得到全面的保障。此 VPN 允许你连接任何 WiFi而无需担心你的个人数据被泄露。你的数据通过 AES-256 算法加密保护你不受第三方和黑客的窥探。无论你身处何处这款 VPN 都可确保你在所有网站上保持匿名且不受跟踪。它提供 7 天的免费试用和多种协议支持OpenVPN、IKEv2 和 KeepSolid Wise。有特殊需求的用户还可获得特殊的额外服务如个人服务器、终身 VPN 订阅和个人 IP 选项。
### 2、VPN Lite
VPN Lite 是一款易于使用而且**免费**的用于上网的 VPN 服务。你可以通过它在网络上保持匿名并保护你的个人隐私。它会模糊你的 IP 并加密你的数据这意味着第三方无法跟踪你的所有线上活动。你还可以访问网络上的全部内容。使用 VPN Lite你可以访问被拦截的网站。你还可以放心地访问公共 WiFi而不必担心敏感信息被间谍软件窃取或受到来自黑客的跟踪和攻击。
### 3、HotSpot Shield
这是一款在 2005 年推出的大受欢迎的 VPN。这套 VPN 协议至少被全球 70% 的数据安全公司所集成并在全球有数千台服务器。它提供两种免费模式一种为完全免费但会有线上广告另一种则为七天试用。它提供军事级的数据加密和恶意软件防护。HotSpot Shield 在保证网络安全的同时提供高速的网络连接。
### 4、TunnelBear
如果你是一名 VPN 新手,那么 TunnelBear 将是你的最佳选择。它带有一个用户友好的界面,并配有动画熊引导。你可以在 TunnelBear 的帮助下以极快的速度连接至少 22 个国家的服务器。它使用 **AES 256-bit** 加密算法,保证无日志记录,这意味着你的数据将得到保护。你还可以在最多五台设备上获得无限流量。
### 5、ProtonVPN
这款 VPN 为你提供强大的优质服务。你的连接速度可能会受到影响,但你也可以享受到无限流量。它具有易于使用的用户界面,提供多平台兼容。据说由于 ProtonVPN 为种子下载提供了优化,因此无法访问 Netflix。你可以获得如协议和加密等安全功能来保证你的网络安全。
### 6、ExpressVPN
ExpressVPN 被认为是最好的用于解除封锁和保护隐私的离岸 VPN。凭借强大的客户支持和快速的速度它已成为全球顶尖的 VPN 服务。它提供带有浏览器扩展和自定义固件的路由器。ExpressVPN 拥有一系列令人赞叹的高质量应用程序,配有大量的服务器,最多支持三台设备。
ExpressVPN 并不是完全免费的,恰恰相反,正是由于它所提供的高质量服务而使之成为了市场上最贵的 VPN 之一。ExpressVPN 有 30 天内退款保证,因此你可以免费试用一个月,而且这是完全没有风险的。例如,如果你在短时间内需要 VPN 来绕过在线审查,这可能是你的首选解决方案。用过它之后,你就不会再想去尝试那些发送垃圾邮件、速度缓慢的免费程序了。
ExpressVPN 也是享受在线流媒体以及在外出时保障安全的最佳方式之一。如果你需要继续使用它你只需要续订或取消你的免费试用。ExpressVPN 在 90 多个国家架设有 2000 多台服务器,可以解锁 Netflix提供快速连接并为用户提供完全的隐私保护。
### 7、PureVPN
虽然 PureVPN 可能不是完全免费的,但它却是此列表中最实惠的一个。用户可以注册获得 7 天的免费试用,并在之后选择任一付费计划。通过这款 VPN你可以访问到至少 140 个国家中的 750 余台服务器,它还可以在几乎所有设备上轻松安装。它的所有付费特性在免费试用期间即可使用,包括无限数据流量、IP 泄漏保护和 ISP 不可见性。它支持的系统有 iOS、Android、Windows、Linux 和 macOS。
### 总结
如今,可用的免费 VPN 服务越来越多,为什么不抓住这个机会来保护你自己和你的客户呢?在了解过这些优秀的 VPN 服务后,我们知道即使是最安全的免费服务也不一定就完全没有风险。你可能需要付费升级到高级版以增强保护。高级版的 VPN 往往提供免费试用和无风险的退款保证。无论你打算花钱购买 VPN 还是准备使用免费 VPN我们都强烈建议你使用一个。
**关于作者:**
**Renetta K. Molina** 是一个技术爱好者和健身爱好者。她撰写有关技术、应用程序、 WordPress 和其他任何领域的文章。她喜欢在空余时间打高尔夫球和读书。她喜欢学习和尝试新事物。
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/7-best-opensource-vpn-services-for-2019/
作者:[Editor][a]
选题:[lujun9972][b]
译者:[Modrisco](https://github.com/Modrisco)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972


@ -0,0 +1,351 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10698-1.html)
[#]: subject: (How To Set Password Policies In Linux)
[#]: via: (https://www.ostechnix.com/how-to-set-password-policies-in-linux/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)
如何设置 Linux 系统的密码策略
======
![](https://www.ostechnix.com/wp-content/uploads/2016/03/How-To-Set-Password-Policies-In-Linux-720x340.jpg)
虽然 Linux 的设计是安全的但还是存在许多安全漏洞的风险弱密码就是其中之一。作为系统管理员你必须要求用户使用强密码因为大部分的系统漏洞正是由于弱密码而引发的。本教程描述了在基于 DEB 系统的 Linux比如 Debian、Ubuntu、Linux Mint 等和基于 RPM 系统的 Linux比如 RHEL、CentOS、Scientific Linux 等的系统下设置像**密码长度**、**密码复杂度**、**密码有效期**等密码策略。
### 在基于 DEB 的系统中设置密码长度
默认情况下,所有的 Linux 操作系统要求用户**密码长度最少 6 个字符**。我强烈建议不要低于这个限制。并且不要使用你的真实名称、父母、配偶、孩子的名字,或者你的生日作为密码。即便是一个黑客新手,也可以很快地破解这类密码。一个好的密码必须是至少 6 个字符,并且包含数字、大写字母和特殊符号。
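例如,下面是一个简单的生成方法(仅为示意,假设系统提供 `/dev/urandom`),可以随机生成一个包含大小写字母、数字和特殊符号的 12 位强密码:
```
$ tr -dc 'A-Za-z0-9!@#$%' < /dev/urandom | head -c 12; echo
```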
通常地,在基于 DEB 的操作系统中,密码和身份认证相关的配置文件被存储在 `/etc/pam.d/` 目录中。
要设置最小密码长度,编辑 `/etc/pam.d/common-password` 文件:
```
$ sudo nano /etc/pam.d/common-password
```
找到下面这行:
```
password [success=2 default=ignore] pam_unix.so obscure sha512
```
![][2]
在末尾添加额外的文字:`minlen=8`。在这里我设置的最小密码长度为 `8`
```
password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
```
![](https://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_002-3-1.jpg)
保存并关闭该文件。这样一来,用户现在不能设置小于 8 个字符的密码。
### 在基于 RPM 的系统中设置密码长度
**在 RHEL、CentOS、Scientific Linux 7.x** 系统中,以 root 身份执行下面的命令来设置密码长度。
```
# authconfig --passminlen=8 --update
```
查看最小密码长度,执行:
```
# grep "^minlen" /etc/security/pwquality.conf
```
**输出样例:**
```
minlen = 8
```
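实际上,`authconfig` 修改的正是 `/etc/security/pwquality.conf` 文件。作为参考,你也可以直接编辑该文件来达到同样的效果(一个最小示例,假设系统使用 `pam_pwquality`
```
# /etc/security/pwquality.conf
minlen = 8
```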
**在 RHEL、CentOS、Scientific Linux 6.x** 系统中,编辑 `/etc/pam.d/system-auth` 文件:
```
# nano /etc/pam.d/system-auth
```
找到下面这行,将 `minlen=8` 添加到该行末尾,如下所示:
```
password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8
```
![](https://www.ostechnix.com/wp-content/uploads/2016/03/root@server_003-3.jpg)
如上设置中,最小密码长度是 `8` 个字符。
### 在基于 DEB 的系统中设置密码复杂度
此设置会强制要求密码中应当包含多少种类型的字符,比如大写字母、小写字母和其他字符。
首先,用下面命令安装密码质量检测库:
```
$ sudo apt-get install libpam-pwquality
```
之后,编辑 `/etc/pam.d/common-password` 文件:
```
$ sudo nano /etc/pam.d/common-password
```
为了设置密码中至少有一个**大写字母**,则在下面这行的末尾添加文字 `ucredit=-1`
```
password requisite pam_pwquality.so retry=3 ucredit=-1
```
![](https://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_001-7.jpg)
设置密码中至少有一个**数字**(`dcredit=-1`),如下所示;若要求至少有一个**小写字母**,则将参数换为 `lcredit=-1`
```
password requisite pam_pwquality.so retry=3 dcredit=-1
```
设置密码中至少含有其他字符,如下所示。
```
password requisite pam_pwquality.so retry=3 ocredit=-1
```
正如你在上面样例中看到的一样,我们设置了密码中至少含有一个大写字母、一个数字和一个特殊字符。你也可以按需调整要求的大写字母、小写字母、数字和特殊字符的数量。
你还可以设置密码中被允许的字符类的最大或最小数量。
下面的例子展示了如何设置新密码中要求的字符类的最小数量。例如,设置 `minclass=2` 时,只包含小写字母一类字符的密码(如假设的 `linuxgeek`)会被拒绝,而同时包含小写字母和数字两类字符的密码(如 `linuxgeek7`)则可以通过:
```
password requisite pam_pwquality.so retry=3 minclass=2
```
### 在基于 RPM 的系统中设置密码复杂度
**在 RHEL 7.x / CentOS 7.x / Scientific Linux 7.x 中:**
设置密码中至少有一个小写字母,执行:
```
# authconfig --enablereqlower --update
```
查看该设置,执行:
```
# grep "^lcredit" /etc/security/pwquality.conf
```
**输出样例:**
```
lcredit = -1
```
类似地,使用以下命令去设置密码中至少有一个大写字母:
```
# authconfig --enablerequpper --update
```
查看该设置:
```
# grep "^ucredit" /etc/security/pwquality.conf
```
**输出样例:**
```
ucredit = -1
```
设置密码中至少有一个数字,执行:
```
# authconfig --enablereqdigit --update
```
查看该设置,执行:
```
# grep "^dcredit" /etc/security/pwquality.conf
```
**输出样例:**
```
dcredit = -1
```
设置密码中至少含有一个其他字符,执行:
```
# authconfig --enablereqother --update
```
查看该设置,执行:
```
# grep "^ocredit" /etc/security/pwquality.conf
```
**输出样例:**
```
ocredit = -1
```
在 **RHEL 6.x / CentOS 6.x / Scientific Linux 6.x** 系统中,以 root 身份编辑 `/etc/pam.d/system-auth` 文件:
```
# nano /etc/pam.d/system-auth
```
找到下面这行并且在该行末尾添加:
```
password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8 dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1
```
如上设置中,密码必须要至少包含 `8` 个字符。另外,密码必须至少包含一个大写字母、一个小写字母、一个数字和一个其他字符。
### 在基于 DEB 的系统中设置密码有效期
现在,我们将要设置下面的策略。
1. 密码被使用的最长天数。
2. 密码更改允许的最小间隔天数。
3. 密码到期之前发出警告的天数。
设置这些策略,编辑:
```
$ sudo nano /etc/login.defs
```
根据你的需求设置以下各项的值。
```
PASS_MAX_DAYS 100
PASS_MIN_DAYS 0
PASS_WARN_AGE 7
```
![](https://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_002-8.jpg)
正如你在上面样例中看到的一样,用户应该每 `100` 天修改一次密码,并且密码到期之前的 `7` 天开始出现警告信息。
请注意,这些设置只对新创建的用户生效。
为已存在的用户设置修改密码的最大间隔天数,你必须要运行下面的命令:
```
$ sudo chage -M <days> <username>
```
设置修改密码的最小间隔天数,执行:
```
$ sudo chage -m <days> <username>
```
设置密码到期之前的警告,执行:
```
$ sudo chage -W <days> <username>
```
要显示已存在用户的密码有效期信息,执行:
```
$ sudo chage -l sk
```
这里,**sk** 是我的用户名。
**输出样例:**
```
Last password change : Feb 24, 2017
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```
正如你在上面看到的输出一样,该密码是无限期的。
要修改已存在用户的密码有效期,执行:
```
$ sudo chage -E 24/06/2018 -m 5 -M 90 -I 10 -W 10 sk
```
上面的命令将会把用户 `sk` 的账号过期日期设置为 `24/06/2018`,并把修改密码的最小间隔时间设置为 `5` 天、最大间隔时间设置为 `90` 天;密码过期 `10` 天后账号将被自动锁定,并从到期前 `10` 天开始显示警告信息。
### 在基于 RPM 的系统中设置密码有效期
这点和基于 DEB 的系统是相同的。
### 在基于 DEB 的系统中禁止使用近期使用过的密码
你可以禁止用户设置一个已经使用过的密码。通俗地讲,就是说用户不能重复使用以前用过的密码。
为设置这一点,编辑 `/etc/pam.d/common-password` 文件:
```
$ sudo nano /etc/pam.d/common-password
```
找到下面这行并且在末尾添加文字 `remember=5`
```
password        [success=2 default=ignore]      pam_unix.so obscure use_authtok try_first_pass sha512 remember=5
```
上面的策略将会阻止用户去使用最近使用过的 5 个密码。
### 在基于 RPM 的系统中禁止使用近期使用过的密码
对于 RHEL 6.x、RHEL 7.x 及其衍生系统 CentOS、Scientific Linux这点是相同的。
以 root 身份编辑 `/etc/pam.d/system-auth` 文件:
```
# vi /etc/pam.d/system-auth
```
找到下面这行,并且在末尾添加文字 `remember=5`
```
password     sufficient     pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5
```
现在你了解了 Linux 中的密码策略,以及如何在基于 DEB 和 RPM 的系统中设置不同的密码策略。
就这样,我很快会在这里发表另外一篇有趣而且有用的文章。在此之前请保持关注。如果您觉得本教程对您有帮助,请在您的社交、专业网络上分享并支持我们。
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-set-password-policies-in-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[liujing97](https://github.com/liujing97)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: http://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_003-2-1.jpg


@ -0,0 +1,197 @@
iWant一个去中心化的点对点共享文件的命令行工具
======
![](https://www.ostechnix.com/wp-content/uploads/2017/07/p2p-720x340.jpg)
不久之前,我们编写了一篇指南,介绍了一个名为 [transfer.sh][1] 的文件共享实用程序,它是一个免费的 Web 服务,允许你在 Internet 上轻松快速地共享文件;还介绍过 [PSiTransfer][2],一个简单的开源自托管文件共享解决方案。今天,我们将看到另一个名为 “iWant” 的文件共享实用程序。它是一个基于命令行的自由开源的去中心化点对点文件共享应用程序。
你可能想知道,它与其它文件共享应用程序有什么不同?以下是 iWant 的一些突出特点。
* 它是一个命令行应用程序。这意味着你不需要消耗内存来加载 GUI 实用程序。你只需要一个终端。
* 它是去中心化的。这意味着你的数据不会存储在任何中心位置,因此不会因为中心节点的故障而导致整个服务不可用。
* iWant 允许中断下载,你可以在以后随时恢复。你不需要从头开始下载,它会从你停止的位置恢复下载。
* 共享目录中文件所作的任何更改(如删除、添加、修改)都会立即反映在网络中。
* 就像种子一样iWant 从多个节点下载文件。如果任何节点离开群组或未能响应,它将继续从另一个节点下载。
* 它是跨平台的,因此你可以在 GNU/Linux、MS Windows 或者 Mac OS X 中使用它。
### 安装 iWant
iWant 可以使用 PIP 包管理器轻松安装。确保你在 Linux 发行版中安装了 pip。如果尚未安装参考以下指南。
[如何使用 Pip 管理 Python 包](https://www.ostechnix.com/manage-python-packages-using-pip/)
安装 pip 后,确保你有以下依赖项:
* libffi-dev
* libssl-dev
比如说,在 Ubuntu 上,你可以使用以下命令安装这些依赖项:
```
$ sudo apt-get install libffi-dev libssl-dev
```
安装完所有依赖项后,使用以下命令安装 iWant
```
$ sudo pip install iwant
```
现在我们的系统中已经有了 iWant让我们来看看如何使用它来通过网络传输文件。
### 用法
首先,使用以下命令启动 iWant 服务器:
LCTT 译注:虽然这个软件是叫 iWant但是其命令名为 `iwanto`,另外这个软件至少一年没有更新了。)
```
$ iwanto start
```
第一次启动时iWant 会询问想要分享和下载文件夹的位置,所以需要输入两个文件夹的位置。然后,选择要使用的网卡。
示例输出:
```
Shared/Download folder details looks empty..
Note: Shared and Download folder cannot be the same
SHARED FOLDER(absolute path):/home/sk/myshare
DOWNLOAD FOLDER(absolute path):/home/sk/mydownloads
Network interface available
1. lo => 127.0.0.1
2. enp0s3 => 192.168.43.2
Enter index of the interface:2
now scanning /home/sk/myshare
[Adding] /home/sk/myshare 0.0
Updating Leader 56f6d5e8-654e-11e7-93c8-08002712f8c1
[Adding] /home/sk/myshare 0.0
connecting to 192.168.43.2:1235 for hashdump
```
如果你看到类似上面的输出,你可以立即开始使用 iWant 了。
同样,在网络中的所有系统上启动 iWant 服务,指定有效的分享和下载文件夹的位置,并选择合适的网卡。
iWant 服务将继续在当前终端窗口中运行,直到你按下 `CTRL+C` 退出为止。你需要打开一个新选项卡或新的终端窗口来使用 iWant。
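如果你不想让 iWant 服务一直占用当前终端,也可以用 `nohup` 把它放到后台运行(一个简单的做法,日志文件路径 `~/iwant.log` 仅为示例):
```
$ nohup iwanto start > ~/iwant.log 2>&1 &
```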
iWant 的用法非常简单,它的命令很少,如下所示。
* `iwanto start` 启动 iWant 服务。
* `iwanto search <name>` 查找文件。
* `iwanto download <hash>` 下载一个文件。
* `iwanto share <path>` 更改共享文件夹的位置。
* `iwanto download to <destination>` 更改下载文件夹位置。
* `iwanto view config` 查看共享和下载文件夹。
* `iwanto version` 显示 iWant 版本。
* `iwanto -h` 显示帮助信息。
让我向你展示一些例子。
#### 查找文件
要查找一个文件,运行:
```
$ iwanto search <filename>
```
请注意,你无需指定确切的名称。
示例:
```
$ iwanto search command
```
上面的命令将搜索包含 “command” 字符串的所有文件。
我的 Ubuntu 系统会输出:
```
Filename Size Checksum
------------------------------------------- ------- --------------------------------
/home/sk/myshare/THE LINUX COMMAND LINE.pdf 3.85757 efded6cc6f34a3d107c67c2300459911
```
#### 下载文件
你可以在你的网络上的任何系统下载文件。要下载文件,只需提供文件的哈希(校验和),如下所示。你可以使用 `iwanto search` 命令获取共享的哈希值。
```
$ iwanto download efded6cc6f34a3d107c67c2300459911
```
文件将保存在你的下载位置,在本文中是 `/home/sk/mydownloads/` 位置。
```
Filename: /home/sk/mydownloads/THE LINUX COMMAND LINE.pdf
Size: 3.857569 MB
```
#### 查看配置
要查看配置,例如共享和下载文件夹的位置,运行:
```
$ iwanto view config
```
示例输出:
```
Shared folder:/home/sk/myshare
Download folder:/home/sk/mydownloads
```
#### 更改共享和下载文件夹的位置
你可以更改共享文件夹和下载文件夹。
```
$ iwanto share /home/sk/ostechnix
```
现在,共享位置已更改为 `/home/sk/ostechnix`
同样,你可以使用以下命令更改下载位置:
```
$ iwanto download to /home/sk/Downloads
```
要查看所做的更改,运行命令:
```
$ iwanto view config
```
#### 停止 iWant
一旦你不想用 iWant 了,可以按下 `CTRL+C` 退出。
如果它不起作用,那可能是由于防火墙或你的路由器不支持多播。你可以在 `~/.iwant/.iwant.log` 文件中查看所有日志。有关更多详细信息,参阅最后提供的项目的 GitHub 页面。
差不多就是全部了。希望这个工具有所帮助。下次我会带着另一个有趣的指南再次来到这里。
干杯!
### 资源
- [iWant GitHub](https://github.com/nirvik/iWant)
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/iwant-decentralized-peer-peer-file-sharing-commandline-application/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/easy-fast-way-share-files-internet-command-line/
[2]:https://www.ostechnix.com/psitransfer-simple-open-source-self-hosted-file-sharing-solution/


@ -0,0 +1,101 @@
重新发现 make 规则背后的力量
======
![](https://user-images.githubusercontent.com/4419992/35015638-0529f1c0-faf4-11e7-9801-4995fc4b54f0.jpg)
我过去认为 makefile 只是一种将一组组的 shell 命令列出来的简便方法;过了一段时间我了解到它们是有多么的强大、灵活以及功能齐全。这篇文章带你领略其中一些有关规则的特性。
> 备注:这些全是针对 GNU Makefile 的,如果你希望支持 BSD Makefile ,你会发现有些新的功能缺失。感谢 [zge][5] 指出这点。
### 规则
<ruby>规则<rt>rule</rt></ruby>是指示 `make` 应该如何并且何时构建一个被称作为<ruby>目标<rt>target</rt></ruby>的文件的指令。目标可以依赖于其它被称作为<ruby>前提<rt>prerequisite</rt></ruby>的文件。
你会指示 `make` 如何按<ruby>步骤<rt>recipe</rt></ruby>构建目标,那就是一套按照出现顺序一次执行一个的 shell 命令。语法像这样:
```
target_name : prerequisites
recipe
```
一旦你定义好了规则,你就可以通过在命令行执行以下命令来构建目标:
```
$ make target_name
```
目标一经构建,除非前提改变,否则 `make` 会足够聪明地不再去运行该步骤。
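例如,下面是一个最小的规则示例(假设当前目录下有一个 `main.c` 文件注意步骤recipe行必须以制表符Tab开头
```
# 只有当 main.c 比 main.o 更新时,才会重新执行编译步骤
main.o : main.c
	cc -c main.c -o main.o
```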
### 关于前提的更多信息
前提表明了两件事情:
* 当目标应当被构建时:如果其中一个前提比目标更新,`make` 就会认为目标应当被重新构建。
* 执行的顺序:鉴于前提可以反过来在 makefile 中由另一套规则所构建,它们同样暗示了一个执行规则的顺序。
如果你想要定义一个顺序但是你不想在前提改变的时候重新构建目标,你可以使用一种特别的叫做“<ruby>唯顺序<rt>order only</rt></ruby>”的前提。这种前提可以被放在普通的前提之后,用管道符(`|`)进行分隔。
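下面是一个示例(假设构建产物放在 `obj/` 目录中)。`obj` 作为“唯顺序”前提,只需要在构建前存在即可,它自身时间戳的变化不会触发 `obj/main.o` 的重新构建:
```
obj/main.o : main.c | obj
	cc -c main.c -o obj/main.o

obj :
	mkdir -p obj
```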
### 样式
为了便利,`make` 接受目标和前提的样式。通过包含 `%` 符号可以定义一种样式,这个符号是一个可以匹配任意数量字符的通配符。以下有一些示例(列表之后给出了一个完整的样式规则写法):
* `%`:匹配任何文件
* `%.md`:匹配所有 `.md` 结尾的文件
* `prefix%.go`:匹配所有以 `prefix` 开头以 `.go` 结尾的文件
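结合前面的规则语法,一个常见的样式规则示例如下(`$<` 代表第一个前提,`$@` 代表目标):
```
# 任何 .o 文件都可以由同名的 .c 文件构建出来
%.o : %.c
	cc -c $< -o $@
```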
### 特殊目标
有一系列目标名字,它们对于 `make` 来说有特殊的意义,被称作<ruby>特殊目标<rt>special target</rt></ruby>
你可以在这个[文档][1]发现全套特殊目标。作为一种经验法则,特殊目标以点开始后面跟着大写字母。
以下是几个有用的特殊目标:
- `.PHONY`:向 `make` 表明凡是作为此目标前提的目标都是“伪目标”。这意味着无论是否存在同名文件、其修改时间如何,`make` 都会始终执行它们的步骤(见列表后的示例)。
- `.DEFAULT`:被用于任何没有指定规则的目标。
- `.IGNORE`:如果将某个目标指定为 `.IGNORE` 的前提,`make` 将忽略执行其步骤时产生的错误。
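例如,一个典型的 `.PHONY` 用法(这里假设了一个常见的 `clean` 目标):
```
.PHONY : clean

# 即使目录里恰好存在名为 clean 的文件make clean 也总会执行
clean :
	rm -f *.o
```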
### 替代
当你需要以你指定的改动方式改变一个变量的值,<ruby>替代<rt>substitution</rt></ruby>就十分有用了。
替代的格式是 `$(var:a=b)`,它的意思是取变量 `var` 的值,把其中每个词末尾的 `a` 替换成 `b`,并以替换后的结果作为最终的字符串。例如:
```
foo := a.o
bar := $(foo:.o=.c) # 将 bar 设置为 a.c
```
注意:特别感谢 [Luis Lavena][2] 让我们知道替代的存在。
### 档案文件
档案文件用于将多个数据文件(类似于压缩文件的概念)收集成一个文件。它们由 `ar` 这个 Unix 工具构建。`ar` 可以用于为任何目的创建档案,但除了[静态库][3]这一用途,它已经被 `tar` 大量替代。
`make` 中,你可以使用一个档案文件中的单独一个成员作为目标或者前提,就像这样:
```
archive(member) : prerequisite
recipe
```
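一个假设的示例:把 `foo.o` 更新到静态库 `libfoo.a` 中。在档案成员规则里,自动变量 `$@` 是档案文件名,`$%` 是成员名:
```
libfoo.a(foo.o) : foo.o
	ar rv $@ $%
```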
### 最后的想法
关于 `make` 还有更多可探索的,但是至少这是一个起点,我强烈鼓励你去查看[文档][4],创建一个笨拙的 makefile 然后就可以探索它了。
--------------------------------------------------------------------------------
via: https://monades.roperzh.com/rediscovering-make-power-behind-rules/
作者:[Roberto Dip][a]
译者:[tomjlw](https://github.com/tomjlw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://monades.roperzh.com
[1]:https://www.gnu.org/software/make/manual/make.html#Special-Targets
[2]:https://twitter.com/luislavena/
[3]:http://tldp.org/HOWTO/Program-Library-HOWTO/static-libraries.html
[4]:https://www.gnu.org/software/make/manual/make.html
[5]:https://lobste.rs/u/zge


@ -0,0 +1,229 @@
Rancher一个全面的可用于产品环境的容器管理平台
======
Docker 作为一款容器化应用的新兴软件,被大多数 IT 公司使用来减少基础设施平台的成本。
通常,没有 GUI 的 Docker 软件对于 Linux 管理员来说很容易上手,但是对于开发者来说就有点困难。而将其用在生产环境上时,它对 Linux 管理员来说也相当不友好。那么,轻松管理 Docker 的最佳解决方案是什么呢?
唯一的办法就是提供 GUI。Docker API 允许第三方应用接入 Docker。在市场上有许多 Docker GUI 应用。我们已经写过一篇关于 Portainer 应用的文章。今天我们来讨论另一个应用Rancher。
容器让软件开发更容易,让开发者更快的写代码、更好的运行它们。但是,在生产环境上运行容器却很困难。
**推荐阅读:** [Portainer一个简单的 Docker 管理图形工具][1]
### Rancher 简介
[Rancher][2] 是一个全面的容器管理平台,它可以让容器在各种基础设施平台的生产环境上部署和运行更容易。它提供了诸如多主机网络、全局/本地负载均衡和卷快照等基础设施服务。它整合了原生 Docker 的管理能力,如 Docker Machine 和 Docker Swarm。它提供了丰富的用户体验让 DevOps 管理员在更大规模的生产环境上运行 Docker。
访问以下文章可以了解 Linux 系统上安装 Docker。
**推荐阅读:**
- [如何在 Linux 上安装 Docker][3]
- [如何在 Linux 上使用 Docker 镜像][4]
- [如何在 Linux 上使用 Docker 容器][5]
- [如何在 Docker 容器内安装和运行应用][6]
### Rancher 特性
* 可以在两分钟内安装 Kubernetes。
* 一键启动应用90 个流行的 Docker 应用)。
* 部署和管理 Docker 更容易。
* 全面的生产级容器管理平台。
* 可以在生产环境上快速部署容器。
* 强大的自动部署和运营容器技术。
* 模块化基础设施服务。
* 丰富的编排工具。
* Rancher 支持多种认证机制。
### 怎样安装 Rancher
由于 Rancher 是以轻量级的 Docker 容器方式运行所以它的安装非常简单。Rancher 是由一组 Docker 容器部署的。只需要简单的启动两个容器就能运行 Rancher。一个容器用作管理服务器另一个容器在各个节点上作为代理。在 Linux 系统下简单的运行下列命令就能部署 Rancher。
Rancher 服务器提供了两个不同的安装包标签如 `stable``latest`。下列命令将会拉取适合的 Rancher 镜像并安装到你的操作系统上。Rancher 服务器仅需要两分钟就可以启动。
* `latest`:这个标签是他们的最新开发构建。这些构建将通过 Rancher CI 的自动化框架进行验证,不建议在生产环境使用。
* `stable`:这是最新的稳定发行版本,推荐在生产环境使用。
Rancher 的安装方法有多种。在这篇教程中我们仅讨论两种方法。
* 以单一容器的方式安装 Rancher内嵌 Rancher 数据库)
* 以单一容器的方式安装 Rancher外部数据库
### 方法 - 1
运行下列命令之一,以单一容器的方式安装 Rancher 服务器(内嵌数据库),两条命令分别对应 `stable` 和 `latest` 标签:
```
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:latest
```
### 方法 - 2
你可以在启动 Rancher 服务器时将其指向外部数据库,而不是使用自带的内部数据库。首先创建所需的数据库和同名的数据库用户(这里均为 `cattle`)。
```
> CREATE DATABASE IF NOT EXISTS cattle COLLATE = 'utf8_general_ci' CHARACTER SET = 'utf8';
> GRANT ALL ON cattle.* TO 'cattle'@'%' IDENTIFIED BY 'cattle';
> GRANT ALL ON cattle.* TO 'cattle'@'localhost' IDENTIFIED BY 'cattle';
```
运行下列命令启动 Rancher 去连接外部数据库。
```
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server \
--db-host myhost.example.com --db-port 3306 --db-user username --db-pass password --db-name cattle
```
如果你想测试 Rancher 2.0,使用下列的命令去启动。
```
$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/server:preview
```
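无论选择哪个标签,启动后都可以先确认 Rancher 服务器容器确实在运行(一个简单的检查示例,这里以 `stable` 标签为例):
```
$ sudo docker ps --filter "ancestor=rancher/server:stable" --format "{{.Names}}: {{.Status}}"
```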
### 通过 GUI 访问 & 安装 Rancher
在浏览器中输入 `http://hostname:8080` 或 `http://server_ip:8080` 来访问 Rancher GUI。
![][8]
### 怎样注册主机
注册你的主机 URL 允许它连接到 Rancher API。这是一次性设置。
接下来,点击主菜单下面的 “Add a Host” 链接或者点击主菜单上的 “INFRASTRUCTURE >> Add Hosts”点击 “Save” 按钮。
![][9]
默认情况下Rancher 没有启用访问控制认证,任何人都可以访问 GUI因此我们首先需要通过某种方法启用访问控制认证。
点击 “>> Admin >> Access Control”输入下列的值最后点击 “Enable Authentication” 按钮去打开它。在我这里,是通过 “local authentication” 的方式打开的。
* “Login UserName” 输入你期望的登录名
* “Full Name” 输入你的全名
* “Password” 输入你期望的密码
* “Confirm Password” 再一次确认密码
![][10]
注销然后使用新的登录凭证重新登录:
![][11]
现在,我能看到本地认证已经被打开。
![][12]
### 怎样添加主机
注册你的主机后,它将带你进入下一个页面,在那里你能选择不同云服务提供商的 Linux 主机。由于我们要添加的是运行 Rancher 服务的这台主机因此选择“custom”选项然后输入必要的信息。
在第 4 步输入你服务器的公有 IP运行第 5 步列出的命令,最后点击 “close” 按钮。
```
$ sudo docker run -e CATTLE_AGENT_IP="192.168.56.2" --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.11 http://192.168.56.2:8080/v1/scripts/16A52B9BE2BAB87BB0F5:1546214400000:ODACe3sfis5V6U8E3JASL8jQ
INFO: Running Agent Registration Process, CATTLE_URL=http://192.168.56.2:8080/v1
INFO: Attempting to connect to: http://192.168.56.2:8080/v1
INFO: http://192.168.56.2:8080/v1 is accessible
INFO: Configured Host Registration URL info: CATTLE_URL=http://192.168.56.2:8080/v1 ENV_URL=http://192.168.56.2:8080/v1
INFO: Inspecting host capabilities
INFO: Boot2Docker: false
INFO: Host writable: true
INFO: Token: xxxxxxxx
INFO: Running registration
INFO: Printing Environment
INFO: ENV: CATTLE_ACCESS_KEY=9946BD1DCBCFEF3439F8
INFO: ENV: CATTLE_AGENT_IP=192.168.56.2
INFO: ENV: CATTLE_HOME=/var/lib/cattle
INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken
INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_URL=http://192.168.56.2:8080/v1
INFO: ENV: DETECTED_CATTLE_AGENT_IP=172.17.0.1
INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.11
INFO: Launched Rancher Agent: e83b22afd0c023dabc62404f3e74abb1fa99b9a178b05b1728186c9bfca71e8d
```
![][13]
等待几秒钟后新添加的主机将会出现。点击 “Infrastructure >> Hosts” 页面。
![][14]
### 怎样查看容器
只需要点击下列位置就能列出所有容器。点击 “Infrastructure >> Containers” 页面。
![][15]
### 怎样创建容器
非常简单,只需点击下列位置就能创建容器。
点击 “Infrastructure >> Containers >> Add Container” 然后输入每个你需要的信息。为了测试,我将创建一个 `latest` 标签的 CentOS 容器。
![][16]
在同样的列表位置,点击 “Infrastructure >> Containers”。
![][17]
点击容器名展示容器的性能信息,如 CPU、内存、网络和存储。
![][18]
选择特定容器然后点击最右边的“三点”按钮或者点击“Actions”按钮对容器进行管理如停止、启动、克隆、重启等。
![][19]
如果你想通过控制台访问容器,只需要点击 “Actions” 按钮中的 “Execute Shell” 选项即可。
![][20]
### 怎样从应用目录部署容器
Rancher 提供了一个应用模版目录,让部署变得很容易,只需要单击一下就可以。
它维护了多数流行应用,这些应用由 Rancher 社区贡献。
![][21]
点击 “Catalog >> All >> Choose the required application”最后点击 “Launch” 去部署。
![][22]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/rancher-a-complete-container-management-platform-for-production-environment/
作者:[Magesh Maruthamuthu][a]
译者:[arrowfeng](https://github.com/arrowfeng)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/portainer-a-simple-docker-management-gui/
[2]:http://rancher.com/
[3]:https://www.2daygeek.com/install-docker-on-centos-rhel-fedora-ubuntu-debian-oracle-archi-scentific-linux-mint-opensuse/
[4]:https://www.2daygeek.com/list-search-pull-download-remove-docker-images-on-linux/
[5]:https://www.2daygeek.com/create-run-list-start-stop-attach-delete-interactive-daemonized-docker-containers-on-linux/
[6]:https://www.2daygeek.com/install-run-applications-inside-docker-containers/
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[8]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-1.png
[9]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-2.png
[10]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-3.png
[11]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-3a.png
[12]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-4.png
[13]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-5.png
[14]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-6.png
[15]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-7.png
[16]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-8.png
[17]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-9.png
[18]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-10.png
[19]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-11.png
[20]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-12.png
[21]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-13.png
[22]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-14.png


@ -0,0 +1,132 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10683-1.html)
[#]: subject: (Oomox Customize And Create Your Own GTK2, GTK3 Themes)
[#]: via: (https://www.ostechnix.com/oomox-customize-and-create-your-own-gtk2-gtk3-themes/)
[#]: author: (EDITOR https://www.ostechnix.com/author/editor/)
Oomox定制和创建你自己的 GTK2、GTK3 主题
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-720x340.png)
主题和可视化定制是 Linux 的主要优势之一。由于所有代码都是开源的,因此你可以比 Windows/Mac OS 更大程度上地改变 Linux 系统的外观和行为方式。GTK 主题可能是人们定制 Linux 桌面的最流行方式。GTK 工具包被各种桌面环境使用,如 Gnome、Cinnamon、Unity、XFCE 和 Budgie。这意味着为 GTK 制作的单个主题只需很少的修改就能应用于任何这些桌面环境。
有很多非常高品质的流行 GTK 主题,例如 **Arc**、**Numix** 和 **Adapta**。但是如果你想自定义这些主题并创建自己的视觉设计,你可以使用 **Oomox**
Oomox 是一个图形应用,可以完全使用自己的颜色、图标和终端风格自定义和创建自己的 GTK 主题。它自带几个预设,你可以在 Numix、Arc 或 Materia 主题样式上创建自己的 GTK 主题。
### 安装 Oomox
在 Arch Linux 及其衍生版中:
Oomox 可以在 [AUR][1] 中找到,所以你可以使用任何 AUR 助手程序安装它,如 [yay][2]。
```
$ yay -S oomox
```
在 Debian/Ubuntu/Linux Mint 中,在[这里][3]下载 `oomox.deb` 包并按如下所示进行安装。在写本指南时,最新版本为 `oomox_1.7.0.5.deb`
```
$ sudo dpkg -i oomox_1.7.0.5.deb
$ sudo apt install -f
```
在 Fedora 上Oomox 可以在第三方 **COPR** 仓库中找到。
```
$ sudo dnf copr enable tcg/themes
$ sudo dnf install oomox
```
Oomox 也有 [Flatpak 应用][4]。确保已按照[本指南][5]中的说明安装了 Flatpak。然后使用以下命令安装并运行 Oomox
```
$ flatpak install flathub com.github.themix_project.Oomox
$ flatpak run com.github.themix_project.Oomox
```
对于其他 Linux 发行版,请进入 Github 上的 Oomox 项目页面(本指南末尾给出链接),并从源代码手动编译和安装。
### 自定义并创建自己的 GTK2、GTK3 主题
#### 主题定制
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-1-1.png)
你可以更改几乎每个 UI 元素的颜色,例如:
1. 标题
2. 按钮
3. 标题内的按钮
4. 菜单
5. 选定的文字
在左边,有许多预设主题,比如汽车主题、像 Materia 和 Numix 这样的现代主题,以及复古主题。在窗口的顶部有一个名为**主题样式**的选项,可让你设置主题的整体视觉样式。你可以在 Numix、Arc 和 Materia 之间进行选择。
使用某些像 Numix 这样的样式,你甚至可以更改标题渐变,边框宽度和面板透明度等内容。你还可以为主题添加黑暗模式,该模式将从默认主题自动创建。
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-2.png)
#### 图标集定制
你可以自定义用于主题图标的图标集。有两个选项Gnome Colors 和 Archdroid。你可以更改图标集的基础和笔触颜色。
#### 终端定制
你还可以自定义终端颜色。该应用有几个预设,但你也可以为红色、绿色、黑色等每种颜色自定义确切的颜色代码。你还可以自动交换前景色和背景色。
#### Spotify 主题
这个应用的一个独特功能是你可以根据喜好定义 Spotify 主题。你可以更改 Spotify 的前景色、背景色和强调色来匹配整体的 GTK 主题。
然后,只需按下“应用 Spotify 主题”按钮,你就会看到这个窗口:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-3.png)
点击应用即可。
#### 导出主题
根据自己的喜好自定义主题后,可以通过单击左上角的重命名按钮重命名主题:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-4.png)
然后,只需点击“导出主题”将主题导出到你的系统。
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-5.png)
你也可以只导出图标集或终端主题。
之后你可以打开桌面环境中的任何可视化自定义应用,例如基于 Gnome 桌面的 Tweaks或者 “XFCE 外观设置”。选择你导出的 GTK 或者 shell 主题。
### 总结
如果你是一个 Linux 主题迷并且你确切知道系统中的每个按钮、每个标题应该怎样Oomox 值得一试。 对于极致的定制者,它可以让你几乎更改系统外观的所有内容。对于那些只想稍微调整现有主题的人来说,它有很多很多预设,所以你可以毫不费力地得到你想要的东西。
你试过吗? 你对 Oomox 有什么看法? 请在下面留言!
### 资源
- [Oomox GitHub 仓库](https://github.com/themix-project/oomox)
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/oomox-customize-and-create-your-own-gtk2-gtk3-themes/
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[1]: https://aur.archlinux.org/packages/oomox/
[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[3]: https://github.com/themix-project/oomox/releases
[4]: https://flathub.org/apps/details/com.github.themix_project.Oomox
[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/

View File

@ -0,0 +1,223 @@
[#]: collector: "lujun9972"
[#]: translator: "Auk7F7"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: subject: "Arch-Wiki-Man A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline"
[#]: via: "https://www.2daygeek.com/arch-wiki-man-a-tool-to-browse-the-arch-wiki-pages-as-linux-man-page-from-offline/"
[#]: author: "Prakash Subramanian https://www.2daygeek.com/author/prakash/"
[#]: url: "https://linux.cn/article-10694-1.html"
Arch-Wiki-Man一个以 Linux Man 手册样式离线浏览 Arch Wiki 的工具
======
现在上网已经很方便了,但技术上总会有限制。看到技术的发展,我很惊讶,但与此同时,各种地方也难免会有无法联网的时候。
当你搜索有关其他 Linux 发行版的某些东西时,大多数时候你会得到的是一个第三方的链接,但是对于 Arch Linux 来说,每次你都会得到 Arch Wiki 页面的结果。
因为 Arch Wiki 提供了除第三方网站以外的大多数解决方案。
到目前为止,你也许可以使用 Web 浏览器为你的 Arch Linux 系统找到一个解决方案,但现在你可以不用这么做了。
一个名为 arch-wiki-man 的工具提供了一个在命令行中更快地执行这个操作的方案。如果你是一个 Arch Linux 爱好者,我建议你阅读 [Arch Linux 安装后指南][1],它可以帮助你调整你的系统以供日常使用。
### arch-wiki-man 是什么?
[arch-wiki-man][2] 工具允许用户从命令行CLI中离线搜索 Arch Wiki 页面。它允许用户以 Linux Man 手册样式访问和搜索整个 Wiki 页面。
而且,你无需切换到 GUI。更新将每两天自动推送一次因此你的 Arch Wiki 本地副本页面将是最新的。这个工具的命令名是 `awman``awman` 是 “Arch Wiki Man” 的缩写。
我们之前写过一篇类似工具 [Arch Wiki 命令行实用程序][3]arch-wiki-cli的文章。这个工具允许用户从互联网上搜索 Arch Wiki。但你需要在线使用这个实用程序。
### 如何安装 arch-wiki-man 工具?
arch-wiki-man 工具可以在 AUR 仓库LCTT 译注AUR 即<ruby>Arch 用户软件仓库<rt>Arch User Repository</rt></ruby>)中获得,因此,我们需要使用 AUR 工具来安装它。有许多 AUR 工具可用,而且我们曾写了一篇关于流行的 AUR 辅助工具: [Yaourt AUR helper][4] 和 [Packer AUR helper][5] 的文章。
```
$ yaourt -S arch-wiki-man
```
```
$ packer -S arch-wiki-man
```
或者,我们可以使用 npm 包管理器来安装它,确保你已经在你的系统上安装了 [NodeJS][6]。然后运行以下命令来安装它。
```
$ npm install -g arch-wiki-man
```
### 如何更新 Arch Wiki 本地副本?
正如前面提到的那样,更新每两天自动推送一次,也可以通过运行以下命令来手动完成更新。
```
$ sudo awman-update
[sudo] password for daygeek:
arch-wiki-man@… /usr/lib/node_modules/arch-wiki-man
└── arch-wiki-md-repo@…
arch-wiki-md-repo has been successfully updated or reinstalled.
```
`awman-update` 是一种更快、更方便的更新方法。但是,你也可以通过运行以下命令重新安装 arch-wiki-man 来获取更新。
```
$ yaourt -S arch-wiki-man
```
```
$ packer -S arch-wiki-man
```
### 如何在终端中使用 Arch Wiki
它有着简易的接口且易于使用。想要搜索,只需要运行 `awman` 加上搜索词即可。一般语法如下所示。
```
$ awman Search-Term
```
### 如何搜索多个匹配项?
如果希望列出标题中包含 “installation” 字符串的所有结果,运行以下格式的命令。如果输出有多个结果,你将会获得一个选择菜单来浏览每个项目。
```
$ awman installation
```
![][8]
详细页面的截屏:
![][9]
### 在标题和描述中搜索给定的字符串
`-d``--desc-search` 选项允许用户在标题和描述中搜索给定的字符串。
```
$ awman -d mirrors
```
```
$ awman --desc-search mirrors
? Select an article: (Use arrow keys)
[1/3] Mirrors: Related articles
[2/3] DeveloperWiki-NewMirrors: Contents
[3/3] Powerpill: Powerpill is a pac
```
### 在内容中搜索给定的字符串
`-k``--apropos` 选项也允许用户在内容中搜索给定的字符串。但须注意,此选项会显著降低搜索速度,因为此选项会扫描整个 Wiki 页面的内容。
```
$ awman -k openjdk
```
```
$ awman --apropos openjdk
? Select an article: (Use arrow keys)
[1/26] Hadoop: Related articles
[2/26] XDG Base Directory support: Related articles
[3/26] Steam-Game-specific troubleshooting: See Steam/Troubleshooting first.
[4/26] Android: Related articles
[5/26] Elasticsearch: Elasticsearch is a search engine based on Lucene. It provides a distributed, mul..
[6/26] LibreOffice: Related articles
[7/26] Browser plugins: Related articles
(Move up and down to reveal more choices)
```
### 在浏览器中打开搜索结果
`-w``--web` 选项允许用户在 Web 浏览器中打开搜索结果。
```
$ awman -w AUR helper
```
```
$ awman --web AUR helper
```
![][10]
### 以其他语言搜索
想要查看支持的语言列表,请运行以下命令。
```
$ awman --list-languages
arabic
bulgarian
catalan
chinesesim
chinesetrad
croatian
czech
danish
dutch
english
esperanto
finnish
greek
hebrew
hungarian
indonesian
italian
korean
lithuanian
norwegian
polish
portuguese
russian
serbian
slovak
spanish
swedish
thai
ukrainian
```
使用你的首选语言运行 `awman` 命令以查看除英语以外的其他语言的结果。
```
$ awman -l chinesesim deepin
```
![][11]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/arch-wiki-man-a-tool-to-browse-the-arch-wiki-pages-as-linux-man-page-from-offline/
作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[Auk7F7](https://github.com/Auk7F7)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/prakash/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/arch-linux-post-installation-30-things-to-do-after-installing-arch-linux/
[2]: https://github.com/greg-js/arch-wiki-man
[3]: https://www.2daygeek.com/search-arch-wiki-website-command-line-terminal/
[4]: https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
[5]: https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/
[6]: https://www.2daygeek.com/install-nodejs-on-ubuntu-centos-debian-fedora-mint-rhel-opensuse/
[7]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[8]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-1.png
[9]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-2.png
[10]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-3.png
[11]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-4.png


@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10688-1.html)
[#]: subject: (Secure Email Service Tutanota Has a Desktop App Now)
[#]: via: (https://itsfoss.com/tutanota-desktop)
[#]: author: (John Paul https://itsfoss.com/author/john/)
@ -10,32 +10,33 @@
加密邮件服务 Tutanota 现在有桌面应用了
======
![][18]
[Tutanota][1] 最近[宣布][2]发布针对其电子邮件服务的桌面应用。该 Beta 版适用于 Linux、Windows 和 macOS。
### 什么是 Tutanota
网上有大量免费的、带有广告的电子邮件服务。但是,大多数电子邮件服务并不完全安全或在意隐私。在这个后[斯诺登][3]世界中,[Tutanota][4] 提供了免费、安全的电子邮件服务,它专注于隐私。
Tutanota 有许多引人注目的功能,例如:
* 端到端加密邮箱
* 端到端加密地址簿
* 用户之间自动端到端加密邮件
* 通过分享密码将端到端加密电子邮件发送到任何电子邮件地址
* 安全密码重置,使 Tutanota 完全无法访问
* 从发送和接收的电子邮件中去除 IP 地址
* 运行 Tutanota 的代码是[开源][5]的
* 双因子身份验证
* 专注于隐私
* 加盐的密码,并本地使用 Bcrypt 哈希
* 位于德国的安全服务器
* 支持 PFS、DMARC、DKIM、DNSSEC 和 DANE 的 TLS
* 本地执行加密数据的全文搜索
![][6]
*web 中的 Tutanota*
你可以[免费注册一个帐户][7]。你还可以升级帐户获取其他功能,例如自定义域、自定义域登录、域规则、额外的存储和别名。他们还提供企业帐户。
@ -45,35 +46,33 @@ Tutanota 也可以在移动设备上使用。事实上,它的 [Android 应用
### Tutanota 的新桌面应用
Tutanota 在去年圣诞节前宣布了桌面应用的 [Beta 版][2]。该应用基于 [Electron][10]。
![][11]
*Tutanota 桌面应用*
他们选择 Electron 的原因:
* 以最小的成本支持三个主流操作系统。
* 快速调整新桌面客户端,使其与添加到网页客户端的新功能一致。
* 将开发时间留给桌面功能,例如离线可用、电子邮件导入,将同时在所有三个桌面客户端中提供。
由于这是 Beta 版因此应用中缺少一些功能。Tutanota 的开发团队正在努力添加以下功能:
* 电子邮件导入和与外部邮箱同步。这将“使 Tutanota 能够从外部邮箱导入电子邮件,并在将数据存储在 Tutanota 服务器上之前在设备本地加密数据。”
* 电子邮件的离线可用
* 双因子身份验证
### 如何安装 Tutanota 桌面客户端?
![][12]
*在 Tutanota 中写邮件*
你可以直接从 Tutanota 的网站[下载][2] Beta 版应用。它们有[适用于 Linux 的 AppImage 文件][13]、适用于 Windows 的 .exe 文件和适用于 macOS 的 .app 文件。你可以将你遇到的任何 bug 发布到 Tutanota 的 [GitHub 帐号中][14]。
为了证明应用的安全性Tutanota 签名了每个版本。“签名确保桌面客户端以及任何更新直接来自我们且未被篡改。”你可以使用 Tutanota 的 [GitHub 页面][15]来验证签名。
请记住,你需要先创建一个 Tutanota 帐户才能使用它。该邮件客户端设计上只能用在 Tutanota。
@ -83,9 +82,8 @@ Tutanota 桌面应用
你曾经使用过 [Tutanota][16] 么?如果没有,你最喜欢的关心隐私的邮件服务是什么?请在下面的评论中告诉我们。
如果你觉得这篇文章很有趣,请花些时间在社交媒体上分享。
![][18]
--------------------------------------------------------------------------------
@ -94,7 +92,7 @@ via: https://itsfoss.com/tutanota-desktop
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,15 +1,16 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10682-1.html)
[#]: subject: (Emulators and Native Linux games on the Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/play-games-raspberry-pi)
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
树莓派使用入门:树莓派上的模拟器和原生 Linux 游戏
======
> 树莓派是一个很棒的游戏平台。在我们的系列文章的第九篇中学习如何开始使用树莓派。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_minecraft_copy.png?itok=iz4RF7f8)
@ -17,13 +18,13 @@
### 使用模拟器玩游戏
模拟器是一种能让你在树莓派上玩不同系统不同年代游戏的软件。在如今众多的模拟器中,[RetroPi][2] 是树莓派中最受欢迎的。你可以用它来玩 Apple II、Amiga、Atari 2600、Commodore 64、Game Boy Advance 和[其他许多][3]游戏。
如果 RetroPi 听起来有趣,请阅读[这些说明][4]开始使用,玩得开心!
### 原生 Linux 游戏
树莓派的操作系统 Raspbian 上也有很多原生 Linux 游戏。“Make Use Of” 有一篇关于如何在树莓派上[玩 10 个老经典游戏][5]如 Doom 和 Duke Nukem 3D的文章。
你也可以将树莓派用作[游戏服务器][6]。例如,你可以在树莓派上安装 Terraria、Minecraft 和 QuakeWorld 服务器。
@ -34,13 +35,13 @@ via: https://opensource.com/article/19/3/play-games-raspberry-pi
作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ansilva
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10653-1.html
[2]: https://retropie.org.uk/
[3]: https://retropie.org.uk/about/systems
[4]: https://opensource.com/article/19/1/retropie


@ -1,28 +1,30 @@
[#]: collector: (lujun9972)
[#]: translator: (qhwdw)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10687-1.html)
[#]: subject: (Let's get physical: How to use GPIO pins on the Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/gpio-pins-raspberry-pi)
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
树莓派使用入门:进入物理世界 —— 如何使用树莓派的 GPIO 针脚
======
> 在树莓派使用入门的第十篇文章中,我们将学习如何使用 GPIO。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspbery_pi_zero_wireless_hardware.jpg?itok=9YFzdxFQ)
到目前为止,本系列文章主要专注于树莓派的软件方面,而今天我们将学习硬件。在树莓派最初发布时,最让我感兴趣的主要特性之一就是它的 [通用输入输出][1]GPIO针脚。GPIO 可以让你的树莓派程序与连接到它上面的传感器、继电器、和其它类型的电子元件与物理世界来交互。
![](https://opensource.com/sites/default/files/uploads/raspberrypi_10_gpio-pins-pi2.jpg)
树莓派上的每个 GPIO 针脚要么有一个预定义的功能,要么被设计为通用的。另外,不同的树莓派型号要么有 26 个、要么有 40 个 GPIO 针脚,你可以根据情况使用。在维基百科上有一个 [关于每个针脚的非常详细的说明][2] 以及它的功能介绍。
你可以使用树莓派的 GPIO 针脚做更多的事情。关于它的 GPIO 的使用我写过一些文章,包括使用树莓派来控制节日彩灯的三篇文章([第一篇][3]、 [第二篇][4]、和 [第三篇][5]),在这些文章中我通过使用开源程序让灯光随着音乐起舞。
树莓派社区在用不同编程语言创建各种库方面做了很多出色的工作,因此,你能够使用 [C][6]、[Python][7]、[Scratch][8] 和其它语言与 GPIO 进行交互。
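举一个简单的例子(假设在 Raspbian 上,并且 GPIO17 针脚上接有一个 LED可以用系统自带的 `raspi-gpio` 命令直接读写针脚(若系统中没有该命令,可用 `sudo apt install raspi-gpio` 安装):
```
$ raspi-gpio set 17 op dh   # 将 GPIO17 设为输出并输出高电平,点亮 LED
$ raspi-gpio set 17 dl      # 输出低电平,熄灭 LED
$ raspi-gpio get 17         # 查看该针脚当前的状态
```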
另外,如果你想在树莓派与物理世界交互方面获得更好的体验,你可以选用 [Raspberry Pi Sense Hat][9],它是插在树莓派 GPIO 针脚上的一个很便宜的扩展板,借助它你可以通过程序与 LED、操纵杆、气压计、温度计、湿度计、陀螺仪、加速度计以及磁力仪来交互。
--------------------------------------------------------------------------------
@ -31,7 +33,7 @@ via: https://opensource.com/article/19/3/gpio-pins-raspberry-pi
作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: (sanfusu)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10689-1.html)
[#]: subject: (Blockchain 2.0: Redefining Financial Services [Part 3])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-redefining-financial-services/)
[#]: author: (ostechnix https://www.ostechnix.com/author/editor/)
区块链 2.0:重新定义金融服务(三)
======
![](https://www.ostechnix.com/wp-content/uploads/2019/03/Financial-Services-1-720x340.png)
[本系列的前一篇文章][1]侧重于建立背景,以阐明为什么将现有的金融系统向充满未来主义的[区块链][2]体系迈进是“货币”改革的下一个自然步骤。我们将继续了解哪些区块链特性将有助于这一迁移。但是,金融市场十分复杂,并且人们的交易由众多组成部分组成,而不仅仅是货币。
本部分将探索哪些区块链特性能够让金融机构向区块链平台迁移并将传统银行和金融系统与其合并。如之前讨论所证明的那样如果有足够多的人参与到给定的区块链网络中并支持交易协议则赋予“代币”的面值将会提升并变得更稳定。以比特币BTC为例像比特币和以太币这样的加密货币就和我们习惯使用的纸币一样可以用于前者的所有用途从购买食物到船只乃至贷款和购买保险。
事实上,你所涉及的银行或其他金融机构很可能已经[利用了区块链分类账本技术][r1]。金融行业中区块链技术最显著的用途是建立支付基础设施、基金交易技术和数字身份管理。传统上,后两者是由金融服务业传统的系统处理的。但由于区块链处理上的效率,这些系统正逐渐的向区块链迁移合并。区块链还为这些金融服务业的公司提供了高质量的数据分析解决方案,这一方面之所以能够快速的得到重视,主要得益于最近的数据科学的发展。
从这一领域前沿阵地的初创企业和项目入手考察,区块链似乎能有所保证,因为这些企业或项目的产品已经开始在市场上扩展开来。
PayPal这是一家创建于 1998 年的在线支付公司现为此类平台中最大的一个常被视作运营和技术能力的基准。PayPal 很大程度上派生自现有的货币体系。它的创新贡献来自于如何收集并利用消费者数据,以提供即时的在线服务。如今,在线交易已被认为是理所当然的事,其所基于的技术方面,在该行业里的创新极少。拥有坚实的基础是一件好事,但在快速发展的 IT 行业里并不能提供任何竞争力毕竟每天都有新的标准和新的技术。2014 年PayPal 子公司 **Braintree** [宣布][r2]与流行的加密货币支付方案解决商 [Coinbase][r3] 和 **GoCoin** 建立了合作关系,以便逐步将比特币和其它加密货币整合到它们的服务平台上。这基本上给了加密货币支付方案解决商的消费者在 PayPal 可靠且熟悉的平台下探索和体验的一个机会。事实上,打车公司 **Uber** 和 Braintree 具有独家合作关系,允许消费者在打车的时候使用比特币。
**瑞波Ripple** 正在让人们在多个区块链之间的操作变得更简单。瑞波已经成为美国各地区银行向前发展的头条新闻,比如,在不需要第三方中介的情况下,将资金双边转移给其他地区银行,从而降低了成本和时间管理费用。[瑞波的 Codius 平台][r4]允许区块链之间互相操作,并为智能合约编入系统提供了方便之门,以最大限度地减少篡改和混乱。建立在这种先进、安全并且可根据需要扩展的平台上,瑞波拥有像瑞银和[渣打银行][r5] 在内的客户列表,更多的银行客户也在期待加入。
**Kraken**,是一个在全球各地运营的美国加密货币交易所,因其可靠的**加密货币量**估算而闻名,甚至向彭博终端实时提供比特币定价数据。在 2015 年,[他们与菲多尔银行][r6]合作建立世界上第一个提供银行业务和加密货币交易的加密货币银行。
另一家金融科技公司 [Circle][r7] 则是目前同类公司中规模最大的一家,允许用户投资和交易加密货币衍生资产,类似于传统的货币市场资产。
如今,像 [Wyre][r8] 和 **Stellar** 这样的公司已经将国际电汇的提前期从平均 3 天降到了 6 小时。有人声称,一旦建立了适当的监管体系,同样的 6 小时可以缩短至几秒钟。
虽然现在上述内容集中在相关的初创项目上,但是不应忽视更受尊敬的老派金融机构的影响力和能力。这些全球范围内交易量达数十亿美元,已经存在了数十年乃至上百年的机构,在利用区块链及其潜力上有着相当的兴趣。
前面的文章中我们已经提到,**摩根大通**最近披露了他们在开发加密货币和企业级别的区块链基础分类帐本上的计划。该项目被称为 [Quorum][r9],被定义为 **“企业级分布式分类帐和智能合约平台”**。这一平台的主要目标是将大量的银行操作逐渐地迁移到 Quorum 中,从而削减像摩根大通这样的公司在保证隐私、安全和透明度上的重大开销。他们声称自己是行业中唯一完整拥有全部的区块链、协议和代币系统的玩家。他们也发布了一个称为 **JPM 硬币** 的加密货币用于大额即时结算。JPM 硬币是由摩根大通等主要银行支持的首批“稳定币”。稳定币是其价格与现存主要货币系统相关联的加密货币。Quorum 也因其每秒近 100 笔、远高于同行的交易处理量而备受赞誉,远远领先于同时代的同类平台。
据报道,英国跨国金融巨头巴克莱已经[注册了两项基于区块链的专利][r10],旨在简化资金转移和 KYC 规程。巴克莱更多的是旨在提高自身的银行操作效率。其中一个应用是创建一个私有区块链网络,用于存储客户的 KYC 信息。经过验证、存储和确认后,这些详细信息将不可变,并且无需再进一步验证。若能实施这一应用,该协议将取消对 KYC 信息多次验证的需求。像印度这样有着高密度人口的发展中国家,其中大部分人口的 KYC 信息尚未被引入正式的银行系统中,若能引入这种具有革新意义的 KYC 系统,将有助于减少随机错误并减少交付时间。据传,巴克莱同时也在探索区块链系统的功能,以便解决信用状态评级和保险赔偿问题。
这种以区块链作支撑的系统,被用来消除不必要的维护成本,并利用智能合约来为那些需要慎重、安全和速度的企业在行业内赢得竞争力。这些企业产品建立在一个能够确保完整交易以及合同隐私的协议之上,同时建立了可使腐败和贿赂无效的共识机制。
[普华永道 2017 年的全球金融科技报告][r11] 表示,到 2020 年,所有金融科技公司中约有 77% 将转向基于区块链的技术和流程。高达 90% 的受访者表示他们计划在 2020 年之前将区块链技术作为生产系统的一部分。他们的判断没错,因为从监管的角度来看,通过转移到基于区块链的系统上,可以确保显著的成本节约和透明度提升。
由于区块链平台默认内置了监管能力,因此企业从传统系统迁移到运行区块链分类账本的现代网络也是行业监管机构所欢迎的举措。交易和贸易运动可以一劳永逸地进行验证和跟踪。从长远来看,这可能会带来更好的监管和风险管理,更不用说改善了公司和个人的责任。
虽然这些跨越式创新主要源于企业的大量投资,但如果认为这些措施的好处不会惠及最终用户,那就错了。随着银行和金融机构开始采用区块链,这将为他们带来更多的成本节约和效率提升,而这最终也将对终端消费者有利。透明度和欺诈保护带来的额外好处将改善客户的感受,更重要的是提高人们对银行和金融系统的信任。通过区块链及其与传统服务的整合,金融服务行业急需的革命将成为可能。在本系列的下一部分中,我们将讨论[房地产中的区块链][3]。
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-redefining-financial-services/
作者:[ostechnix][a]
选题:[lujun9972][b]
译者:[sanfusu](https://github.com/sanfusu)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10668-1.html
[2]: https://linux.cn/article-10650-1.html
[3]: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/
[r1]: https://www.forbes.com/sites/bernardmarr/2018/01/22/35-amazing-real-world-examples-of-how-blockchain-is-changing-our-world/#170df8de43b5
[r2]: https://publicpolicy.paypal-corp.com/issues/blockchain
[r3]: https://blog.coinbase.com/coinbase-adds-support-for-paypal-and-credit-cards-21968661d508
[r4]: http://fortune.com/2018/06/06/ripple-codius/
[r5]: https://www.finextra.com/newsarticle/32048/standard-chartered-to-extend-use-of-ripplenet-to-more-countries
[r6]: https://99bitcoins.com/fidor-and-kraken-team-up-for-cryptocurrency-bank/
[r7]: https://www.bloomberg.com/research/stocks/private/snapshot.asp?privcapId=249292386
[r8]: https://www.forbes.com/sites/julianmitchell/2018/07/31/wyre-the-blockchain-platform-taking-the-lead-in-cross-border-transactions/#6bc69ade69d7
[r9]: https://www.jpmorgan.com/global/Quorum
[r10]: https://cointelegraph.com/news/barclays-files-two-digital-currency-and-blockchain-patents-with-u-s-patent-office
[r11]: https://www.pwc.com/jg/en/media-release/global-fintech-survey-2017.html

View File

@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: (hopefully2333)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10690-1.html)
[#]: subject: (Learn about computer security with the Raspberry Pi and Kali Linux)
[#]: via: (https://opensource.com/article/19/3/computer-security-raspberry-pi)
[#]: author: (Anderson Silva https://opensource.com/users/ansilva)
树莓派使用入门:通过树莓派和 Kali Linux 学习计算机安全
======
> 树莓派是学习计算机安全的一个好方法。在我们这个系列的第十一篇文章中会进行学习。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
在技术方面是否有比保护你的计算机更热门的话题?一些专家会告诉你,没有绝对安全的系统。他们开玩笑说,如果你想要你的服务器或者应用程序真正的安全,就关掉你的服务器,从网络上断线,然后把它放在一个安全的地方。但问题是显而易见的:没人能用的应用程序或者服务器有什么用?
这是围绕安全的一个难题,我们如何才能在保证安全性的同时,让服务器或应用程序依然可用且有价值?我无论如何都不是一个安全专家,虽然我希望有一天我能是。因此,分享可以用树莓派来做些什么以学习计算机安全的知识,我认为是有意义的。
我要提示一下,就像本系列中其他写给树莓派初学者的文章一样,我的目标不是深入研究,而是起个头,让你有兴趣去了解更多与这些主题相关的东西。
### Kali Linux
当我们谈到“做一些安全方面的事”的时候,脑海中浮现的一个 Linux 发行版就是 [Kali Linux][1]。Kali Linux 的开发主要集中在调查取证和渗透测试方面。它预装了超过 600 个用于测试计算机安全性的[渗透测试工具][2],还有一个[取证模式][3],它可以避免自身接触到被检查系统的内部硬盘驱动器或交换空间。
![](https://opensource.com/sites/default/files/uploads/raspberrypi_11_kali.png)
就像 Raspbian 一样Kali Linux 基于 Debian 的发行版,你可以在 Kali 的主要[文档门户][4]的网页上找到将它安装在树莓派上的文档。如果你已经在你的树莓派上安装了 Raspbian 或者是其它的 Linux 发行版。那么你装 Kali 应该是没问题的Kali 的创造者甚至将[培训、研讨会和职业认证][5]整合到了一起,以此来帮助提升你在安全领域内的职业生涯。
### 其他的 Linux 发行版
大多数的标准 Linux 发行版,比如 Raspbian、Ubuntu 和 Fedora 这些,在它们的仓库里同样也有[很多可用的安全工具][6]。一些很棒的探测工具你可以试试,包括 [Nmap][7]、[Wireshark][8]、[auditctl][9],和 [SELinux][10]。
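例如,可以先用 Nmap 做一次简单的主机发现(一个入门示例,假设你的局域网网段是 192.168.1.0/24
```
$ sudo nmap -sn 192.168.1.0/24
```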
### 项目
你可以在树莓派上运行很多其他的安全相关的项目,例如[蜜罐][11][广告拦截器][12]和 [USB 清洁器][13]。花些时间了解它们!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/computer-security-raspberry-pi
作者:[Anderson Silva][a]
选题:[lujun9972][b]
译者:[hopefully2333](https://github.com/hopefully2333)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ansilva
[b]: https://github.com/lujun9972
[1]: https://www.kali.org/
[2]: https://en.wikipedia.org/wiki/Kali_Linux#Development
[3]: https://docs.kali.org/general-use/kali-linux-forensics-mode
[4]: https://docs.kali.org/kali-on-arm/install-kali-linux-arm-raspberry-pi
[5]: https://www.kali.org/penetration-testing-with-kali-linux/
[6]: https://linuxblog.darkduck.com/2019/02/9-best-linux-based-security-tools.html
[7]: https://nmap.org/
[8]: https://www.wireshark.org/
[9]: https://linux.die.net/man/8/auditctl
[10]: https://opensource.com/article/18/7/sysadmin-guide-selinux
[11]: https://trustfoundry.net/honeypi-easy-honeypot-raspberry-pi/
[12]: https://pi-hole.net/
[13]: https://www.circl.lu/projects/CIRCLean/

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10679-1.html)
[#]: subject: (10 Python image manipulation tools)
[#]: via: (https://opensource.com/article/19/3/python-image-manipulation-tools)
[#]: author: (Parul Pandey https://opensource.com/users/parul-pandey)
@ -10,19 +10,19 @@
10 个 Python 图像编辑工具
======
> 以下提到的这些 Python 工具在编辑图像、操作图像底层数据方面都提供了简单直接的方法。
![][1]
当今的世界充满了数据,而图像数据就是其中很重要的一部分。但只有经过处理和分析,提高图像的质量,从中提取出有效地信息,才能利用到这些图像数据。
常见的图像处理操作包括显示图像,基本的图像操作,如裁剪、翻转、旋转;图像的分割、分类、特征提取;图像恢复;以及图像识别等等。Python 作为一种日益风靡的科学编程语言,是这些图像处理操作的最佳选择。同时,在 Python 生态当中也有很多可以免费使用的优秀的图像处理工具。
下文将介绍 10 个可以用于图像处理任务的 Python ,它们在编辑图像、查看图像底层数据方面都提供了简单直接的方法。
### 1scikit-image
[scikit-image][2] 是一个结合 [NumPy][3] 数组使用的开源 Python 工具,它实现了可用于研究、教育、工业应用的算法和应用程序。即使是对于刚刚接触 Python 生态圈的新手来说,它也是一个在使用上足够简单的库。同时它的代码质量也高,因为它是由一个活跃的志愿者社区开发的,并且通过了<ruby>同行评审<rt>peer review</rt></ruby>
#### 资源
@ -30,7 +30,7 @@ scikit-image 的[文档][4]非常完善,其中包含了丰富的用例。
#### 示例
可以通过导入 `skimage` 使用,大部分的功能都可以在它的子模块中找到。
<ruby>图像滤波<rt>image filtering</rt></ruby>
@ -53,9 +53,9 @@ plt.imshow(edges, cmap='gray')
在[展示页面][10]可以看到更多相关的例子。
### 2NumPy
[NumPy][11] 提供了对数组的支持,是 Python 编程的一个核心库。图像的本质其实也是一个包含像素数据点的标准 NumPy 数组,因此可以通过一些基本的 NumPy 操作(例如切片、<ruby>掩膜<rt>mask</rt></ruby><ruby>花式索引<rt>fancy indexing</rt></ruby>等),就可以从像素级别对图像进行编辑。通过 NumPy 数组存储的图像也可以被 skimage 加载并使用 matplotlib 显示。
#### 资源
@ -63,7 +63,7 @@ plt.imshow(edges, cmap='gray')
#### 示例
使用 NumPy 对图像进行<ruby>掩膜<rt>mask</rt></ruby>操作:
```
import numpy as np
@ -82,17 +82,18 @@ plt.imshow(image, cmap='gray')
![NumPy][13]
### 3SciPy
像 NumPy 一样,[SciPy][14] 是 Python 的一个核心科学计算模块,也可以用于图像的基本操作和处理。尤其是 SciPy v1.1.0 中的 [scipy.ndimage][15] 子模块,它提供了在 n 维 NumPy 数组上的运行的函数。SciPy 目前还提供了<ruby>线性和非线性滤波<rt>linear and non-linear filtering</rt></ruby><ruby>二值形态学<rt>binary morphology</rt></ruby><ruby>B 样条插值<rt>B-spline interpolation</rt></ruby><ruby>对象测量<rt>object measurements</rt></ruby>等方面的函数。
#### 资源
在[官方文档][16]中可以查阅到 `scipy.ndimage` 的完整函数列表。
#### 示例
使用 SciPy 的[高斯滤波][17]对图像进行模糊处理:
```
from scipy import misc,ndimage
@ -106,9 +107,9 @@ plt.imshow(<image to be displayed>)
![Using a Gaussian filter in SciPy][19]
### 4PIL/Pillow
PIL (Python Imaging Library) 是一个免费 Python 编程库,它提供了对多种格式图像文件的打开、编辑、保存的支持。但在 2009 年之后 PIL 就停止发布新版本了。幸运的是,还有一个 PIL 的积极开发的分支 [Pillow][20],它的安装过程比 PIL 更加简单,支持大部分主流的操作系统,并且还支持 Python 3。Pillow 包含了图像的基础处理功能,包括像素点操作、使用内置卷积内核进行滤波、颜色空间转换等等。
#### 资源
@ -132,9 +133,9 @@ enh.enhance(1.8).show("30% more contrast")
![Enhancing an image in Pillow using ImageFilter][23]
- [源码][24]
### 5OpenCV-Python
OpenCVOpen Source Computer Vision 库)是计算机视觉领域最广泛使用的库之一,[OpenCV-Python][25] 则是 OpenCV 的 Python API。OpenCV-Python 的运行速度很快,这归功于它使用 C/C++ 编写的后台代码,同时由于它使用了 Python 进行封装,因此调用和部署的难度也不大。这些优点让 OpenCV-Python 成为了计算密集型计算机视觉应用程序的一个不错的选择。
@ -149,9 +150,9 @@ OpenCVOpen Source Computer Vision 库)是计算机视觉领域最广泛使
![Image blending using Pyramids in OpenCV-Python][28]
- [源码][29]
### 6SimpleCV
[SimpleCV][30] 是一个开源的计算机视觉框架。它支持包括 OpenCV 在内的一些高性能计算机视觉库,同时不需要去了解<ruby>位深度<rt>bit depth</rt></ruby>、文件格式、<ruby>色彩空间<rt>color space</rt></ruby>之类的概念,因此 SimpleCV 的学习曲线要比 OpenCV 平缓得多正如它的口号所说“将计算机视觉变得更简单”。SimpleCV 的优点还有:
@ -164,13 +165,11 @@ OpenCVOpen Source Computer Vision 库)是计算机视觉领域最广泛使
#### 示例
![SimpleCV][33]
### 7Mahotas
[Mahotas][34] 是一个 Python 图像处理和计算机视觉库。在图像处理方面,它支持滤波和形态学相关的操作;在计算机视觉方面,它也支持<ruby>特征计算<rt>feature computation</rt></ruby><ruby>兴趣点检测<rt>interest point detection</rt></ruby><ruby>局部描述符<rt>local descriptors</rt></ruby>等功能。Mahotas 的接口使用了 Python 进行编写,因此适合快速开发,而算法使用 C++ 实现并针对速度进行了优化。Mahotas 尽可能做到代码量少和依赖项少,因此它的运算速度非常快。可以参考[官方文档][35]了解更多详细信息。
#### 资源
@ -182,15 +181,13 @@ Mahotas 力求使用少量的代码来实现功能。例如这个 [Finding Wally
![Finding Wally problem in Mahotas][39]
![Finding Wally problem in Mahotas][42]
- [源码][40]
### 8SimpleITK
[ITK][43]Insight Segmentation and Registration Toolkit是一个为开发者提供普适性图像分析功能的开源、跨平台工具套件[SimpleITK][44] 则是基于 ITK 构建出来的一个简化层,旨在促进 ITK 在快速原型设计、教育、解释语言中的应用。SimpleITK 作为一个图像分析工具包,它也带有[大量的组件][45],可以支持常规的滤波、图像分割、<ruby>图像配准<rt>registration</rt></ruby>功能。尽管 SimpleITK 使用 C++ 编写,但它也支持包括 Python 在内的大部分编程语言。
#### 资源
@ -202,9 +199,9 @@ Mahotas 力求使用少量的代码来实现功能。例如这个 [Finding Wally
![SimpleITK animation][48]
- [源码][49]
### 9pgmagick
[pgmagick][50] 是使用 Python 封装的 GraphicsMagick 库。[GraphicsMagick][51] 通常被认为是图像处理界的瑞士军刀,因为它强大而又高效的工具包支持对多达 88 种主流格式图像文件的读写操作,包括 DPX、GIF、JPEG、JPEG-2000、PNG、PDF、PNM、TIFF 等等。
@ -218,15 +215,15 @@ pgmagick 的 [GitHub 仓库][52]中有相关的安装说明、依赖列表,以
![Image scaling in pgmagick][55]
- [源码][56]
边缘提取:
![Edge extraction in pgmagick][58]
- [源码][59]
### 10Pycairo
[Cairo][61] 是一个用于绘制矢量图的二维图形库,而 [Pycairo][60] 是用于 Cairo 的一组 Python 绑定。矢量图的优点在于做大小缩放的过程中不会丢失图像的清晰度。使用 Pycairo 可以在 Python 中调用 Cairo 的相关命令。
@ -240,7 +237,7 @@ Pycairo 的 [GitHub 仓库][62]提供了关于安装和使用的详细说明,
![Pycairo][65]
- [源码][66]
### 总结
@ -253,7 +250,7 @@ via: https://opensource.com/article/19/3/python-image-manipulation-tools
作者:[Parul Pandey][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,189 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10695-1.html)
[#]: subject: (Quickly Go Back To A Specific Parent Directory Using bd Command In Linux)
[#]: via: (https://www.2daygeek.com/bd-quickly-go-back-to-a-specific-parent-directory-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
在 Linux 中使用 bd 命令快速返回到特定的父目录
======
两天前我们写了一篇关于 `autocd` 的文章,它是一个内置的 shell 变量,可以帮助我们在[没有 cd 命令的情况下导航到目录中][1]。
如果你想回到上一级目录,那么你需要输入 `cd ..`
如果你想回到上两级目录,那么你需要输入 `cd ../..`
这在 Linux 中是正常的,但如果你想从第九级目录回到第三级目录,那么使用 `cd` 命令是很糟糕的。
有什么解决方案呢?
是的,在 Linux 中有一个解决方案。我们可以使用 `bd` 命令来轻松应对这种情况。
### 什么是 bd 命令?
`bd` 命令允许用户快速返回 Linux 中的父目录,而不是反复输入 `cd ../../..`
你可以列出给定目录的内容,而不用提供完整路径:`` ls `bd Directory_Name` ``。它还支持其它命令,如 `ls`、`ln`、`echo`、`zip`、`tar` 等。
另外,它还允许我们执行 shell 文件而不用提供完整路径:`` `bd p`/shell_file.sh ``。
### 如何在 Linux 中安装 bd 命令?
除了 Debian/Ubuntu 之外,`bd` 没有官方的发行版软件包。因此,我们需要手动安装它。
对于 Debian/Ubuntu 系统,使用 [APT-GET 命令][2]或 [APT 命令][3]来安装 `bd`
```
$ sudo apt install bd
```
对于其它 Linux 发行版,使用 [wget 命令][4]下载 `bd` 可执行二进制文件。
```
$ sudo wget --no-check-certificate -O /usr/local/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd
```
设置 `bd` 二进制文件的可执行权限。
```
$ sudo chmod +rx /usr/local/bin/bd
```
`.bashrc` 文件中添加以下值。
```
$ echo 'alias bd=". bd -si"' >> ~/.bashrc
```
运行以下命令以使更改生效。
```
$ source ~/.bashrc
```
要启用自动完成,执行以下两个步骤。
```
$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd
$ source /etc/bash_completion.d/bd
```
我们已经在系统上成功安装并配置了 `bd` 实用程序,现在是时候测试一下了。
我将使用下面的目录路径进行测试。
运行 `pwd` 命令、`dirs` 命令或 `tree` 命令来了解你当前所处的路径。
```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ pwd
/usr/share/icons/Adwaita/256x256/apps

daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ dirs
/usr/share/icons/Adwaita/256x256/apps
```
我现在在 `/usr/share/icons/Adwaita/256x256/apps` 目录,如果我想快速跳转到 `icons` 目录,那么只需输入以下命令即可。
```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ bd icons
/usr/share/icons/
daygeek@Ubuntu18:/usr/share/icons$
```
甚至,你不需要输入完整的目录名称,也可以输入几个字母。
```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ bd i
/usr/share/icons/
daygeek@Ubuntu18:/usr/share/icons$
```
注意:如果层次结构中有多个同名的目录,`bd` 会将你带到最近的目录。(不考虑直接的父目录)
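为了说明这种“最近匹配”的查找思路，下面用一小段 Python 做个示意（这并不是 `bd` 的真实实现，仅用于演示其匹配逻辑）：

```
import os

def bd(prefix, cwd=None):
    """返回路径中最近的、名字以 prefix 开头的父目录（示意实现）。"""
    cwd = cwd or os.getcwd()
    parts = cwd.rstrip('/').split('/')
    # 从最近的父目录开始向上查找（不包括当前目录本身）
    for i in range(len(parts) - 2, 0, -1):
        if parts[i].startswith(prefix):
            return '/'.join(parts[:i + 1]) + '/'
    return cwd  # 没有匹配时留在原地

print(bd('i', '/usr/share/icons/Adwaita/256x256/apps'))
# 输出：/usr/share/icons/
```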
如果要列出给定的目录内容,使用以下格式。它会打印出 `/usr/share/icons/` 的内容。
```
$ ls -lh `bd icons`
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ ls -lh `bd i`
total 64K
drwxr-xr-x 12 root root 4.0K Jul 25 2018 Adwaita
lrwxrwxrwx 1 root root 51 Feb 25 14:32 communitheme -> /snap/communitheme/current/share/icons/communitheme
drwxr-xr-x 2 root root 4.0K Jul 25 2018 default
drwxr-xr-x 3 root root 4.0K Jul 25 2018 DMZ-Black
drwxr-xr-x 3 root root 4.0K Jul 25 2018 DMZ-White
drwxr-xr-x 9 root root 4.0K Jul 25 2018 gnome
drwxr-xr-x 3 root root 4.0K Jul 25 2018 handhelds
drwxr-xr-x 20 root root 4.0K Mar 9 14:52 hicolor
drwxr-xr-x 9 root root 4.0K Jul 25 2018 HighContrast
drwxr-xr-x 12 root root 4.0K Jul 25 2018 Humanity
drwxr-xr-x 7 root root 4.0K Jul 25 2018 Humanity-Dark
drwxr-xr-x 4 root root 4.0K Jul 25 2018 locolor
drwxr-xr-x 3 root root 4.0K Feb 25 15:46 LoginIcons
drwxr-xr-x 3 root root 4.0K Jul 25 2018 redglass
drwxr-xr-x 10 root root 4.0K Feb 25 15:46 ubuntu-mono-dark
drwxr-xr-x 10 root root 4.0K Feb 25 15:46 ubuntu-mono-light
drwxr-xr-x 3 root root 4.0K Jul 25 2018 whiteglass
```
如果要在父目录中的某个位置执行文件,使用以下格式。它将运行 shell 文件 `/usr/share/icons/users-list.sh`
```
$ `bd i`/users-list.sh
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ `bd icon`/users-list.sh
daygeek
thanu
renu
2gadmin
testuser
demouser
sudha
suresh
user1
user2
user3
```
如果你位于 `/usr/share/icons/Adwaita/256x256/apps` 中,想要导航到不同的父目录,使用以下格式。以下命令将导航到 `/usr/share/icons/gnome` 目录。
```
$ cd `bd i`/gnome
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ cd `bd icon`/gnome
daygeek@Ubuntu18:/usr/share/icons/gnome$
```
如果你位于 `/usr/share/icons/Adwaita/256x256/apps` ,你想在 `/usr/share/icons/` 下创建一个新目录,使用以下格式。
```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ sudo mkdir `bd icons`/2g
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ ls -ld `bd icon`/2g
drwxr-xr-x 2 root root 4096 Mar 16 05:44 /usr/share/icons//2g
```
`bd` 可以让你快速返回到特定的父目录，但没有快速前进的选项。
我们有另一个解决方案,很快就会提出,请保持关注。
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/bd-quickly-go-back-to-a-specific-parent-directory-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/navigate-switch-directory-without-using-cd-command-in-linux/
[2]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/wget-command-line-download-utility-tool/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (mokshal)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: subject: (5 reasons to give Linux for the holidays)

View File

@ -1,115 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Look Back at the History of Firefox)
[#]: via: (https://itsfoss.com/history-of-firefox)
[#]: author: (John Paul https://itsfoss.com/author/john/)
A Look Back at the History of Firefox
======
The Firefox browser has been a mainstay of the open-source community for a long time. For many years it was the default web browser on (almost) all Linux distros and the lone obstacle to Microsoft's total dominance of the internet. This browser has roots that go back all the way to the very early days of the internet. Since this week marks the 30th anniversary of the internet, there is no better time to talk about how Firefox became the browser we all know and love.
### Early Roots
In the early 1990s, a young man named [Marc Andreessen][1] was working on his bachelor's degree in computer science at the University of Illinois. While there, he started working for the [National Center for Supercomputing Applications][2]. During that time [Sir Tim Berners-Lee][3] released an early form of the web standards that we know today. Marc [was introduced][4] to a very primitive web browser named [ViolaWWW][5]. Seeing that the technology had potential, Marc and Eric Bina created an easy-to-install browser for Unix named [NCSA Mosaic][6]. The first alpha was released in June 1993. By September, there were ports to Windows and Macintosh. Mosaic became very popular because it was easier to use than other browsing software.
In 1994, Marc graduated and moved to California. He was approached by Jim Clark, who had made his money selling computer hardware and software. Clark had used Mosaic and saw the financial possibilities of the internet. Clark recruited Marc and Eric to start an internet software company. The company was originally named Mosaic Communications Corporation, however, the University of Illinois did not like [their use of the name Mosaic][7]. As a result, the company name was changed to Netscape Communications Corporation.
The company's first project was an online gaming network for the Nintendo 64, but that fell through. The first product they released was a web browser named Mosaic Netscape 0.9, subsequently renamed Netscape Navigator. Internally, the browser project was codenamed mozilla, which stood for "Mosaic killer". An employee created a cartoon of a [Godzilla-like creature][8]. They wanted to take out the competition.
![Early Firefox Mascot][9]Early Mozilla mascot at Netscape
They succeeded mightily. At the time, one of the biggest advantages that Netscape had was the fact that its browser looked and functioned the same on every operating system. Netscape described this as giving everyone a level playing field.
As usage of Netscape Navigator increased, the market share of NCSA Mosaic cratered. In 1995, Netscape went public. [On the first day][10], the stock started at $28, jumped to $75 and ended the day at $58. Netscape was without any rivals.
But that didn't last for long. In the summer of 1995, Microsoft released Internet Explorer 1.0, which was based on Spyglass Mosaic, which was in turn based on NCSA Mosaic. The [browser wars][11] had begun.
Over the next few years, Netscape and Microsoft competed for dominance of the internet. Each added features to compete with the other. Unfortunately, Internet Explorer had an advantage because it came bundled with Windows. On top of that, Microsoft had more programmers and money to throw at the problem. Toward the end of 1997, Netscape started to run into financial problems.
### Going Open Source
![Mozilla Firefox][12]
In January 1998, Netscape open-sourced the code of the Netscape Communicator 4.0 suite. The [goal][13] was to "harness the creative power of thousands of programmers on the Internet by incorporating their best enhancements into future versions of Netscape's software. This strategy is designed to accelerate development and free distribution by Netscape of future high-quality versions of Netscape Communicator to business customers and individuals."
The project was to be shepherded by the newly created Mozilla Organization. However, the code from Netscape Communicator 4.0 proved to be very difficult to work with due to its size and complexity. On top of that, several parts could not be open sourced because of licensing agreements with third parties. In the end, it was decided to rewrite the browser from scratch using the new [Gecko][14] rendering engine.
In November 1998, Netscape was acquired by AOL in a [stock swap valued at $4.2 billion][15].
Starting from scratch was a major undertaking. Mozilla Firefox (initially nicknamed Phoenix) was created in June 2002 and it worked on multiple operating systems, such as Linux, Mac OS, Microsoft Windows, and Solaris.
The following year, AOL announced that they would be shutting down browser development. The Mozilla Foundation was subsequently created to handle the Mozilla trademarks and handle the financing of the project. Initially, the Mozilla Foundation received $2 million in donations from AOL, IBM, Sun Microsystems, and Red Hat.
In March 2003, Mozilla [announced plans][16] to separate the suite into stand-alone applications because of creeping software bloat. The stand-alone browser was initially named Phoenix. However, the name was changed due to a trademark dispute with the BIOS manufacturer Phoenix Technologies, which had a BIOS-based browser of its own. Phoenix was renamed Firebird, only to run afoul of the Firebird database server project. The browser was once more renamed to the Firefox that we all know.
At the time, [Mozilla said][17], "We've learned a lot about choosing names in the past year (more than we would have liked to). We have been very careful in researching the name to ensure that we will not have any problems down the road. We have begun the process of registering our new trademark with the US Patent and Trademark Office."
![Mozilla Firefox 1.0][18]Firefox 1.0 : [Picture Credit][19]
The first official release of Firefox was [0.8][20] on February 8, 2004. 1.0 followed on November 9, 2004. Version 2.0 and 3.0 followed in October 2006 and June 2008 respectively. Each major release brought with it many new features and improvements. In many respects, Firefox pulled ahead of Internet Explorer in terms of features and technology, but IE still had more users.
That changed with the release of Google's Chrome browser. In the months before the release of Chrome in September 2008, Firefox accounted for 30% of all [browser usage][21] and IE had over 60%. According to StatCounter's [January 2019 report][22], Firefox accounts for less than 10% of all browser usage, while Chrome has over 70%.
Fun Fact
Contrary to popular belief, the logo of Firefox doesn't feature a fox. It's actually a [Red Panda][23]. In Chinese, "fire fox" is another name for the red panda.
### The Future
As noted above, Firefox currently has the lowest market share in its recent history. There was a time when a bunch of browsers were based on Firefox, such as the early version of the [Flock browser][24]. Now most browsers are based on Google technology, such as Opera and Vivaldi. Even Microsoft is giving up on browser development and [joining the Chromium bandwagon][25].
This might seem like quite a downer after the heights of the early Netscape years. But don't forget what Firefox has accomplished. A group of developers from around the world have created the second most used browser in the world. They clawed 30% market share away from Microsoft's monopoly; they can do it again. After all, they have us, the open source community, behind them.
The fight against the monopoly is one of the several reasons [why I use Firefox][26]. Mozilla regained some of its lost market-share with the revamped release of [Firefox Quantum][27] and I believe that it will continue the upward path.
What event from Linux and open source history would you like us to write about next? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][28].
--------------------------------------------------------------------------------
via: https://itsfoss.com/history-of-firefox
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Marc_Andreessen
[2]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
[5]: http://viola.org/
[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
[8]: http://www.davetitus.com/mozilla/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
[10]: https://www.marketwatch.com/story/netscape-ipo-ignited-the-boom-taught-some-hard-lessons-20058518550
[11]: https://en.wikipedia.org/wiki/Browser_wars
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?resize=800%2C450&ssl=1
[13]: https://web.archive.org/web/20021001071727/wp.netscape.com/newsref/pr/newsrelease558.html
[14]: https://en.wikipedia.org/wiki/Gecko_(software)
[15]: http://news.cnet.com/2100-1023-218360.html
[16]: https://web.archive.org/web/20050618000315/http://www.mozilla.org/roadmap/roadmap-02-Apr-2003.html
[17]: https://www-archive.mozilla.org/projects/firefox/firefox-name-faq.html
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firefox-1.jpg?ssl=1
[19]: https://www.iceni.com/blog/firefox-1-0-introduced-2004/
[20]: https://en.wikipedia.org/wiki/Firefox_version_history
[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
[23]: https://en.wikipedia.org/wiki/Red_panda
[24]: https://en.wikipedia.org/wiki/Flock_(web_browser)
[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
[26]: https://itsfoss.com/why-firefox/
[27]: https://itsfoss.com/firefox-quantum-ubuntu/
[28]: http://reddit.com/r/linuxusersgroup
[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?fit=800%2C450&ssl=1

View File

@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing changes in open source projects)
[#]: via: (https://opensource.com/article/19/3/managing-changes-open-source-projects)
[#]: author: (Ben Cotton (Red Hat, Community Moderator) https://opensource.com/users/bcotton)
Managing changes in open source projects
======
Here's how to create a visible change process to support the community around an open source project.
![scrabble letters: "time for change"][1]
Why bother having a process for proposing changes to your open source project? Why not just let people do what they're doing and merge the features when they're ready? Well, you can certainly do that if you're the only person on the project. Or maybe if it's just you and a few friends.
But if the project is large, you might need to coordinate how some of the changes land. Or, at the very least, let people know a change is coming so they can adjust if it affects the parts they work on. A visible change process is also helpful to the community. It allows them to give feedback that can improve your idea. And if nothing else, it lets people know what's coming so that they can get excited, and maybe get you a little bit of coverage on Opensource.com or the like. Basically, it's "here's what I'm going to do" instead of "here's what I did," and it might save you some headaches as you scramble to QA right before your release.
So let's say I've convinced you that having a change process is a good idea. How do you build one?
**[Watch my talk on this topic]**
<https://www.youtube.com/embed/cVV1K3Junkc>
### Right-size your change process
Before we start talking about what a change process looks like, I want to make it very clear that this is not a one-size-fits-all situation. The smaller your project is—mainly in the number of contributors—the less process you'll probably need. As [Richard Hackman says][2], the number of communication channels in a team goes up exponentially with the number of people on the team. In community-driven projects, this becomes even more complicated as people come and go, and even your long-time contributors might not be checking in every day. So the change process consolidates those communication channels into a single area where people can quickly check to see if they care and then get back to whatever it is they do.
At one end of the scale, there's the command-line Twitter client I maintain. The change process there is, "I pick something I want to work on, probably make a Git branch for it but I might forget that, merge it, and tag a release when I'm out of stuff that I can/want to do." At the other end is Fedora. Fedora isn't really a single project; it's a program of related projects that mostly move in the same direction. More than 200 people a week touch Fedora in a technical sense: spec file maintenance, build submission, etc. This doesn't even include the untold number of people who are working on the upstreams. And these upstreams all have their own release schedules and their own processes for how features land and when. Nobody can keep up with everything on their own, so the change process brings important changes to light.
### Decide who needs to review changes
One of the first things you need to consider when putting together a change process for your community is: "who needs to review changes?" This isn't necessarily approving the changes; we'll come to that shortly. But are there people who should take a look early in the process? Maybe your release engineering or infrastructure teams need to review them to make sure they don't require changes to build infrastructure. Maybe you have a legal review process to make sure licenses are in order. Or maybe you just have a change wrangler who looks to make sure all the required information is included. Or you may choose to do none of these and have change proposals go directly to the community.
But this brings up the next step. Do you want full community feedback or only a select group to provide feedback? My preference, and what we do in Fedora, is to publish changes to the community before they're approved. But the structure of your community may fit a model where some approval body signs off on the change before it is sent to the community as an announcement.
### Determine who approves changes
Even if you lack any sort of organizational structure, someone ends up approving changes. This should reflect the norms and values of your community. The simplest form of approval is the person who proposed the change implements it. Easy peasy! In loosely organized communities, that might work. Fully democratic communities might put it to a community-wide vote. If a certain number or proportion of members votes in favor, the change is approved. Other communities may give that power to an individual or group. They could be responsible for the entire project or certain subsections.
In Fedora, change approval is the role of the Fedora Engineering Steering Committee (FESCo). This is a nine-person body elected by community members. This gives the community the ability to remove members who are not acting in the best interests of the project but also enables relatively quick decisions without large overhead.
In much of this article, I am simply presenting information, but I'm going to take a moment to be opinionated. For any project with a significant contributor base, a model where a small body makes approval decisions is the right approach. A pure democracy can be pretty messy. People who may have no familiarity with the technical ramifications of a change will be able to cast a binding vote. And that process is subject to "brigading," where someone brings along a large group of otherwise-uninterested people to support their position. Think about what it might look like if someone proposed changing the default text editor. Would the decision process be rational?
### Plan how to enforce changes
The other advantage of having a defined approval body is it can mediate conflicts between changes. What happens if two proposed changes conflict? Or if a change turns out to have a negative impact? Someone needs to have the authority to say "this isn't going in after all" or make sure conflicting changes are brought into agreement. Your QA team and processes will be a part of this, and maybe they're the ones who will make the final call.
It's relatively straightforward to come up with a plan if a change doesn't work as expected or is incomplete by the deadline. If you require a contingency plan as part of the change process, then you implement that plan. The harder part is: what happens if someone makes a change that doesn't go through your change process? Here's a secret your friendly project manager doesn't want you to know: you can't force people to go through your process, particularly in community projects.
So if something sneaks in and you don't discover it until you have a release candidate, you have a couple of options: you can let it in, or you can get someone to forcibly remove it. In either case, you'll have someone who is very unhappy. Either the person who made the change, because you kicked their work out, or the people who had to deal with the breakage it caused. (If it snuck in without anyone noticing, then it's probably not that big of a deal.)
The answer, in either case, is going to be social pressure to follow the process. Processes are sometimes painful to follow, but a well-designed and well-maintained process will give more benefit than it costs. In this case, the benefit may be identifying breakages sooner or giving other developers a chance to take advantage of new features that are offered. And it can help prevent slips in the release schedule or hero effort from your QA team.
### Implement your change process
So we've thought about the life of a change proposal in your project. Throw in some deadlines that make sense for your release cadence, and you can now come up with the policy—but how do you implement it?
First, you'll want to identify the required information for a change proposal. At a minimum, I'd suggest the following. You may have more requirements depending on the specifics of what your community is making and how it operates.
* Name and summary
* Benefit to the project
* Scope
* Owner
* Test plan
* Dependencies and impacts
* Contingency plan
You'll also want one or several change wranglers. These aren't gatekeepers so much as shepherds. They may not have the ability to approve or reject change proposals, but they are responsible for moving the proposals through the process. They check the proposal for completeness, submit it to the appropriate bodies, make appropriate announcements, etc. You can have people wrangle their own changes, but this can be a specialized task and will generally benefit from a dedicated person who does this regularly, instead of making community members do it less frequently.
And you'll need some tooling to manage these changes. This could be a wiki page, a kanban board, a ticket tracker, something else, or a combination of these. But basically, you want to be able to track their state and provide some easy reporting on the status of changes. This makes it easier to know what is complete, what is at risk, and what needs to be deferred to a later release. You can use whatever works best for you, but in general, you'll want to minimize copy-and-pasting and maximize scriptability.
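As a toy illustration of that scriptability, here is a minimal sketch of how a change wrangler might check proposals for completeness. The field names and the dict-based storage are my own invention for the example, not Fedora's actual tooling:

```
# Hypothetical completeness check for change proposals stored as dicts.
# The required fields mirror the minimum list suggested above.
REQUIRED_FIELDS = [
    "name", "summary", "benefit", "scope",
    "owner", "test_plan", "dependencies_and_impacts", "contingency_plan",
]

def missing_fields(proposal: dict) -> list:
    """Return the required fields the proposal has left empty or omitted."""
    return [field for field in REQUIRED_FIELDS if not proposal.get(field)]

proposal = {"name": "Switch default compression", "owner": "someone@example.org"}
print(missing_fields(proposal))  # everything still needed before review
```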
### Remember to iterate
Your change process may seem perfect. Then people will start using it. You'll discover edge cases you didn't consider. You'll find that the community hates a certain part of it. Decisions that were once valid will become invalid over time as technology and society change. In Fedora, our Features process revealed itself to be ambiguous and burdensome, so it was refined into the [Changes][3] process we use today. Even though the Changes process is much better than its predecessor, we still adjust it here and there to make sure it's best meeting the needs of the community.
When designing your process, make sure it fits the size and values of your community. Consider who gets a voice and who gets a vote in approving changes. Come up with a plan for how you'll handle incomplete changes and other exceptions. Decide who will guide the changes through the process and how they'll be tracked. And once you design your change policy, write it down in a place that's easy for your community to find so that they can follow it. But most of all, remember that the process is here to serve the community; the community is not here to serve the process.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/managing-changes-open-source-projects
作者:[Ben Cotton (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bcotton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/change_words_scrabble_letters.jpg?itok=mbRFmPJ1 (scrabble letters: "time for change")
[2]: https://hbr.org/2009/05/why-teams-dont-work
[3]: https://fedoraproject.org/wiki/Changes/Policy

View File

@ -0,0 +1,130 @@
[#]: collector: (lujun9972)
[#]: translator: (zgj1024 )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why DevOps is the most important tech strategy today)
[#]: via: (https://opensource.com/article/19/3/devops-most-important-tech-strategy)
[#]: author: (Kelly AlbrechtWilly-Peter Schaub https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht)
Why DevOps is the most important tech strategy today
======
Clearing up some of the confusion about DevOps.
![CICD with gears][1]
Many people first learn about [DevOps][2] when they see one of its outcomes and ask how it happened. It's not necessary to understand why something is part of DevOps to implement it, but knowing that—and why a DevOps strategy is important—can mean the difference between being a leader or a follower in an industry.
Maybe you've heard some the incredible outcomes attributed to DevOps, such as production environments that are so resilient they can handle thousands of releases per day while a "[Chaos Monkey][3]" is running around randomly unplugging things. This is impressive, but on its own, it's a weak business case, essentially burdened with [proving a negative][4]: The DevOps environment is resilient because a serious failure hasn't been observed… yet.
There is a lot of confusion about DevOps and many people are still trying to make sense of it. Here's an example from someone in my LinkedIn feed:
> Recently attended few #DevOps sessions where some speakers seemed to suggest #Agile is a subset of DevOps. Somehow, my understanding was just the opposite.
>
> Would like to hear your thoughts. What do you think is the relationship between Agile and DevOps?
>
> 1. DevOps is a subset of Agile
> 2. Agile is a subset of DevOps
> 3. DevOps is an extension of Agile, starts where Agile ends
> 4. DevOps is the new version of Agile
>
Tech industry professionals have been weighing in on the LinkedIn post with a wide range of answers. How would you respond?
### DevOps' roots in lean and agile
DevOps makes a lot more sense if we start with the strategies of Henry Ford and the Toyota Production System's refinements of Ford's model. Within this history is the birthplace of lean manufacturing, which has been well studied. In [_Lean Thinking_][5], James P. Womack and Daniel T. Jones distill it into five principles:
1. Specify the value desired by the customer
2. Identify the value stream for each product providing that value and challenge all of the wasted steps currently necessary to provide it
3. Make the product flow continuously through the remaining value-added steps
4. Introduce pull between all steps where continuous flow is possible
5. Manage toward perfection so that the number of steps and the amount of time and information needed to serve the customer continually falls
Lean seeks to continuously remove waste and increase the flow of value to the customer. This is easily recognizable and understood through a core tenet of lean: single piece flow. We can do a number of activities to learn why moving single pieces at a time is magnitudes faster than batches of many pieces; the [Penny Game][6] and the [Airplane Game][7] are two of them. In the Penny Game, if a batch of 20 pennies takes two minutes to get to the customer, they get the whole batch after waiting two minutes. If you move one penny at a time, the customer gets the first penny in about five seconds and continues getting pennies until the 20th penny arrives approximately 25 seconds later.
This is a huge difference, but not everything in life is as simple and predictable as the penny in the Penny Game. This is where agile comes in. We certainly see lean principles on high-performing agile teams, but these teams need more than lean to do what they do.
To be able to handle the unpredictability and variance of typical software development tasks, agile methodology focuses on awareness, deliberation, decision, and action to adjust course in the face of a constantly changing reality. For example, agile frameworks (like scrum) increase awareness with ceremonies like the daily standup and the sprint review. If the scrum team becomes aware of a new reality, the framework allows and encourages them to adjust course if necessary.
For teams to make these types of decisions, they need to be self-organizing in a high-trust environment. High-performing agile teams working this way achieve a fast flow of value while continuously adjusting course, removing the waste of going in the wrong direction.
### Optimal batch size
To understand the power of DevOps in software development, it helps to understand the economics of batch size. Consider the following U-curve optimization illustration from Donald Reinertsen's _[Principles of Product Development Flow][8]:_
![U-curve optimization illustration of optimal batch size][9]
This can be explained with an analogy about grocery shopping. Suppose you need to buy some eggs and you live 30 minutes from the store. Buying one egg (far left on the illustration) at a time would mean a 30-minute trip each time. This is your _transaction cost_. The _holding cost_ might represent the eggs spoiling and taking up space in your refrigerator over time. The _total cost_ is the _transaction cost_ plus your _holding cost_. This U-curve explains why, for most people, buying a dozen eggs at a time is their _optimal batch size_. If you lived next door to the store, it'd cost you next to nothing to walk there, and you'd probably buy a smaller carton each time to save room in your refrigerator and enjoy fresher eggs.
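To make the U-curve concrete, here is a toy calculation; the numbers are invented for the egg analogy, not taken from Reinertsen:

```
# Toy model of the batch-size U-curve from the egg-shopping analogy.
transaction_cost = 30.0   # cost of one trip to the store (minutes)
holding_cost = 0.5        # spoilage/storage cost per egg held

def total_cost_per_egg(batch_size):
    # Transaction cost is amortized across the batch; holding cost grows with it.
    return transaction_cost / batch_size + holding_cost * batch_size

optimal = min(range(1, 61), key=total_cost_per_egg)
print(optimal)  # the bottom of the U -- about 8 eggs with these numbers
```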
This U-curve optimization illustration can shed some light on why productivity increases significantly in successful agile transformations. Consider the effect of agile transformation on decision making in an organization. In traditional hierarchical organizations, decision-making authority is centralized. This leads to larger decisions made less frequently by fewer people. An agile methodology will effectively reduce an organization's transaction cost for making decisions by decentralizing the decisions to where the awareness and information is the best known: across the high-trust, self-organizing agile teams.
The following animation shows how reducing transaction cost shifts the optimal batch size to the left. You can't overstate the value to an organization of making faster decisions more frequently.
![U-curve optimization illustration][10]
### Where does DevOps fit in?
Automation is one of the things DevOps is most known for. The previous illustration shows the value of automation in great detail. Through automation, we reduce our transaction costs to nearly zero, essentially getting our testing and deployments for free. This lets us take advantage of smaller and smaller batch sizes of work. Smaller batches of work are easier to understand, commit to, test, review, and know when they are done. These smaller batch sizes also contain less variance and risk, making them easier to deploy and, if something goes wrong, to troubleshoot and recover from. With automation combined with a solid agile practice, we can get our feature development very close to single piece flow, providing value to customers quickly and continuously.
More traditionally, DevOps is understood as a way to knock down the walls of confusion between the dev and ops teams. In this model, development teams develop new features, while operations teams keep the system stable and running smoothly. Friction occurs because new features from development introduce change into the system, increasing the risk of an outage, which the operations team doesn't feel responsible for—but has to deal with anyway. DevOps is not just trying to get people working together, it's more about trying to make more frequent changes safely in a complex environment.
We can look to [Ron Westrum][11] for research about achieving safety in complex organizations. In researching why some organizations are safer than others, he found that an organization's culture is predictive of its safety. He identified three types of culture: Pathological, Bureaucratic, and Generative. He found that the Pathological culture was predictive of less safety and the Generative culture was predictive of more safety (e.g., far fewer plane crashes or accidental hospital deaths in his main areas of research).
![Three types of culture identified by Ron Westrum][12]
Effective DevOps teams achieve a Generative culture with lean and agile practices, showing that speed and safety are complementary, or two sides of the same coin. By reducing the optimal batch sizes of decisions and features to become very small, DevOps achieves a faster flow of information and value while removing waste and reducing risk.
In line with Westrum's research, change can happen easily with safety and reliability improving at the same time. When an agile DevOps team is trusted to make its own decisions, we get the tools and techniques DevOps is most known for today: automation and continuous delivery. Through this automation, transaction costs are reduced further than ever, and a near single piece lean flow is achieved, creating the potential for thousands of decisions and releases per day, as we've seen happen in high-performing DevOps organizations.
### Flow, feedback, learning
DevOps doesn't stop there. We've mainly been talking about DevOps achieving a revolutionary flow, but lean and agile practices are further amplified through similar efforts that achieve faster feedback loops and faster learning. In the [_DevOps Handbook_][13], the authors explain in detail how, beyond its fast flow, DevOps achieves telemetry across its entire value stream for fast and continuous feedback. Further, leveraging the [kaizen][14] bursts of lean and the [retrospectives][15] of scrum, high-performing DevOps teams will continuously drive learning and continuous improvement deep into the foundations of their organizations, achieving a lean manufacturing revolution in the software product development industry.
### Start with a DevOps assessment
The first step in leveraging DevOps is, either after much study or with the help of a DevOps consultant and coach, to conduct an assessment across a suite of dimensions consistently found in high-performing DevOps teams. The assessment should identify weak or non-existent team norms that need improvement. Evaluate the assessment's results to find quick wins—focus areas with high chances for success that will produce high-impact improvement. Quick wins are important for gaining the momentum needed to tackle more challenging areas. The teams should generate ideas that can be tried quickly and start to move the needle on the DevOps transformation.
After some time, the team should reassess on the same dimensions to measure improvements and identify new high-impact focus areas, again with fresh ideas from the team. A good coach will consult, train, mentor, and support as needed until the team owns its own continuous improvement and achieves near consistency on all dimensions by continually reassessing, experimenting, and learning.
In the [second part][16] of this article, we'll look at results from a DevOps survey in the Drupal community and see where the quick wins are most likely to be found.
* * *
_Rob Bayliss and Kelly Albrecht will present [DevOps: Why, How, and What][17] and host a follow-up [Birds of a Feather discussion][18] at [DrupalCon 2019][19] in Seattle, April 8-12._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/devops-most-important-tech-strategy
作者:[Kelly AlbrechtWilly-Peter Schaub][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: https://opensource.com/resources/devops
[3]: https://github.com/Netflix/chaosmonkey
[4]: https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative
[5]: https://www.amazon.com/dp/B0048WQDIO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
[6]: https://youtu.be/5t6GhcvKB8o?t=54
[7]: https://www.shmula.com/paper-airplane-game-pull-systems-push-systems/8280/
[8]: https://www.amazon.com/dp/B00K7OWG7O/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
[9]: https://opensource.com/sites/default/files/uploads/batch_size_optimal_650.gif (U-curve optimization illustration of optimal batch size)
[10]: https://opensource.com/sites/default/files/uploads/batch_size_650.gif (U-curve optimization illustration)
[11]: https://en.wikipedia.org/wiki/Ron_Westrum
[12]: https://opensource.com/sites/default/files/uploads/information_flow.png (Three types of culture identified by Ron Westrum)
[13]: https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002/ref=sr_1_3?keywords=DevOps+handbook&qid=1553197361&s=books&sr=1-3
[14]: https://en.wikipedia.org/wiki/Kaizen
[15]: https://www.scrum.org/resources/what-is-a-sprint-retrospective
[16]: https://opensource.com/article/19/3/where-drupal-community-stands-devops-adoption
[17]: https://events.drupal.org/seattle2019/sessions/devops-why-how-and-what
[18]: https://events.drupal.org/seattle2019/bofs/devops-getting-started
[19]: https://events.drupal.org/seattle2019

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Continuous response: The essential process we're ignoring in DevOps)
[#]: via: (https://opensource.com/article/19/3/continuous-response-devops)
[#]: author: (Randy Bias https://opensource.com/users/randybias)
Continuous response: The essential process we're ignoring in DevOps
======
You probably practice CI and CD, but if you aren't thinking about
continuous response, you aren't really doing DevOps.
![CICD with gears][1]
Continuous response (CR) is an overlooked link in the DevOps process chain. The two other major links—[continuous integration (CI) and continuous delivery (CD)][2]—are well understood, but CR is not. Yet, CR is the essential element of follow-through required to make customers happy and fulfill the promise of greater speed and agility. At the heart of the DevOps movement is the need for greater velocity and agility to bring businesses into our new digital age. CR plays a pivotal role in enabling this.
### Defining CR
We need a crisp definition of CR to move forward with breaking it down. To put it into context, let's revisit the definitions of continuous integration (CI) and continuous delivery (CD). Here are Gartner's definitions as I wrote them down in 2017:
> [Continuous integration][3] is the practice of integrating, building, testing, and delivering functional software on a scheduled, repeatable, and automated basis.
>
> Continuous delivery is a software engineering approach where teams keep producing valuable software in short cycles while ensuring that the software can be reliably released at any time.
I propose the following definition for CR:
> Continuous response is a practice where developers and operators instrument, measure, observe, and manage their deployed software looking for changes in performance, resiliency, end-user behavior, and security posture and take corrective actions as necessary.
We can argue about whether these definitions are 100% correct. They are good enough for our purposes, which is framing the definition of CR in rough context so we can understand it is really just the last link in the chain of a holistic cycle.
![The holistic DevOps cycle][4]
What is this multi-colored ring, you ask? It's the famous [OODA Loop][5]. Before continuing, let's touch on what the OODA Loop is and why it's relevant to DevOps. We'll keep it brief though, as there is already a long history between the OODA Loop and DevOps.
#### A brief aside: The OODA Loop
At the heart of core DevOps thinking is using the OODA Loop to create a proactive process for evolving and responding to changing environments. A quick [web search][6] makes it easy to learn the long history between the OODA Loop and DevOps, but if you want the deep dive, I highly recommend [The Tao of Boyd: How to Master the OODA Loop][7].
Here is the "evolved OODA Loop" presented by John Boyd:
![OODA Loop][8]
The most important thing to understand about the OODA Loop is that it's a cognitive process for adapting to and handling changing circumstances.
The second most important thing to understand about the OODA Loop is, since it is a thought process that is meant to evolve, it depends on driving feedback back into the earlier parts of the cycle as you iterate.
As you can see in the diagram above, CI, CD, and CR are all their own isolated OODA Loops within the overall DevOps OODA Loop. The key here is that each OODA Loop is an evolving thought process for how test, release, and success are measured. Simply put, those who can execute on the OODA Loop fastest will win.
Put differently, DevOps wants to drive speed (executing the OODA Loop faster) combined with agility (taking feedback and using it to constantly adjust the OODA Loop). This is why CR is a vital piece of the DevOps process. We must drive production feedback into the DevOps maturation process. The DevOps notion of Culture, Automation, Measurement, and Sharing ([CAMS][9]) partially but inadequately captures this, whereas CR provides a much cleaner continuation of CI/CD in my mind.
### Breaking CR down
CR has more depth and breadth than CI or CD. This is natural, given that what we're categorizing is the post-deployment process by which our software is taking a variety of actions from autonomic responses to analytics of customer experience. I think, when it's broken down, there are three key buckets that CR components fall into. Each of these three areas forms a complete OODA Loop; however, the level of automation throughout the OODA Loop varies significantly.
The following table will help clarify the three areas of CR:
CR Type | Purpose | Examples
---|---|---
Real-time | Autonomics for availability and resiliency | Auto-scaling, auto-healing, developer-in-the-loop automated responses to real-time failures, automated root-cause analysis
Analytic | Feature/fix pipeline | A/B testing, service response times, customer interaction models
Predictive | History-based planning | Capacity planning, hardware failure prediction models, cost-basis analysis
_Real-time CR_ is probably the best understood of the three. This kind of CR is where our software has been instrumented for known issues and can take an immediate, automated response (autonomics). Examples of known issues include responding to high or low demand (e.g., elastic auto-scaling), responding to expected infrastructure resource failures (e.g., auto-healing), and responding to expected distributed application failures (e.g., circuit breaker pattern). In the future, we will see machine learning (ML) and similar technologies applied to automated root-cause analysis and event correlation, which will then provide a path towards "no ops" or "zero ops" operational models.
_Analytic CR_ is still the most manual of the CR processes. This kind of CR is focused primarily on observing end-user experience and providing feedback to the product development cycle to add features or fix existing functionality. Examples of this include traditional A/B website testing, measuring page-load times or service-response times, post-mortems of service failures, and so on.
_Predictive CR_ , due to the resurgence of AI and ML, is one of the innovation areas in CR. It uses historical data to predict future needs. ML techniques are allowing this area to become more fully automated. Examples include automated and predictive capacity planning (primarily for the infrastructure layer), automated cost-basis analysis of service delivery, and real-time reallocation of infrastructure resources to resolve capacity and hardware failure issues before they impact the end-user experience.
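To make the real-time flavor concrete, here is a deliberately minimal sketch of an auto-healing loop. The health URL and service name are hypothetical; a production system would delegate this to an orchestrator or process supervisor:

```
# Minimal auto-healing loop: observe a health endpoint, restart on failure.
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/healthz"   # assumed health-check endpoint

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except Exception:
        return False

while True:
    if not healthy():
        # Act on the observation: restart the (hypothetical) failed service.
        subprocess.run(["systemctl", "restart", "myapp.service"])
    time.sleep(10)
```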
### Diving deeper on CR
CR, like CI or CD, is a DevOps process supported by a set of underlying tools. CI and CD are not Jenkins, unit tests, or automated deployments alone. They are a process flow. Similarly, CR is a process flow that begins with the delivery of new code via CD, which open source tools like [Spinnaker][10] give us. CR is not monitoring, machine learning, or auto-scaling, but a diverse set of processes that occur after code deployment, supported by a variety of tools. CR is also different in two specific ways.
First, it is different because, by its nature, it is broader. The general software development lifecycle (SDLC) process means that most [CI/CD processes][11] are similar. However, code running in production differs from app to app or service to service. This means that CR differs as well.
Second, CR is different because it is nascent. Like CI and CD before it, the process and tools existed before they had a name. Over time, CI/CD became more normalized and easier to scope. CR is new, hence there is lots of room to discuss what's in or out. I welcome your comments in this regard and hope you will run with these ideas.
### CR: Closing the loop on DevOps
DevOps arose because of the need for greater service delivery velocity and agility. Essentially, DevOps is an extension of agile software development practices to an operational mindset. It's a direct response to the flexibility and automation possibilities that cloud computing affords. However, much of the thinking on DevOps to date has focused on deploying the code to production and ends there. But our jobs don't end there. As professionals, we must also make certain our code is behaving as expected, we are learning as it runs in production, and we are taking that learning back into the product development process.
This is where CR lives and breathes. DevOps without CR is the same as saying there is no OODA Loop around the DevOps process itself. It's like saying that operators' and developers' jobs end with the code being deployed. We all know this isn't true. Customer experience is the ultimate measurement of our success. Can people use the software or service without hiccups or undue friction? If not, we need to fix it. CR is the final link in the DevOps chain that enables delivering the truest customer experience.
If you aren't thinking about continuous response, you aren't doing DevOps. Share your thoughts on CR, and tell me what you think about the concept and the definition.
* * *
_This article is based on[The Essential DevOps Process We're Ignoring: Continuous Response][12], which originally appeared on the Cloudscaling blog under a [CC BY 4.0][13] license and is republished with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/continuous-response-devops
作者:[Randy Bias][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/randybias
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: https://opensource.com/article/18/8/what-cicd
[3]: https://www.gartner.com/doc/3187420/guidance-framework-continuous-integration-continuous
[4]: https://opensource.com/sites/default/files/uploads/holistic-devops-cycle-smaller.jpeg (The holistic DevOps cycle)
[5]: https://en.wikipedia.org/wiki/OODA_loop
[6]: https://www.google.com/search?q=site%3Ablog.b3k.us+ooda+loop&rlz=1C5CHFA_enUS730US730&oq=site%3Ablog.b3k.us+ooda+loop&aqs=chrome..69i57j69i58.8660j0j4&sourceid=chrome&ie=UTF-8#q=devops+ooda+loop&*
[7]: http://www.artofmanliness.com/2014/09/15/ooda-loop/
[8]: https://opensource.com/sites/default/files/uploads/ooda-loop-2-1.jpg (OODA Loop)
[9]: https://itrevolution.com/devops-culture-part-1/
[10]: https://www.spinnaker.io
[11]: https://opensource.com/article/18/12/cicd-tools-sysadmins
[12]: http://cloudscaling.com/blog/devops/the-essential-devops-process-were-ignoring-continuous-response/
[13]: https://creativecommons.org/licenses/by/4.0/

View File

@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why do organizations have open secrets?)
[#]: via: (https://opensource.com/open-organization/19/3/open-secrets-bystander-effect)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger/users/maryjo)
Why do organizations have open secrets?
======
Everyone sees something, but no one says anything—that's the bystander
effect. And it's damaging your organizational culture.
![][1]
[The five characteristics of an open organization][2] must work together to ensure healthy and happy communities inside our organizations. Even the most transparent teams, departments, and organizations require equal doses of additional open principles—like inclusivity and collaboration—to avoid dysfunction.
The "open secrets" phenomenon illustrates the limitations of transparency when unaccompanied by additional open values. [A recent article in Harvard Business Review][3] explored the way certain organizational issues—widely apparent but seemingly impossible to solve—lead to discomfort in the workforce. Authors Insiya Hussain and Subra Tangirala performed a number of studies, and found that the more people in an organization who knew about a particular "secret," be it a software bug or a personnel issue, the less likely any one person would be to report the issue or otherwise _do_ something about it.
Hussain and Tangirala explain that so-called "open secrets" are the result of a [bystander effect][4], which comes into play when people think, "Well, if _everyone_ knows, surely _I_ don't need to be the one to point it out." The authors mention several causes of this behavior, but let's take a closer look at why open secrets might be circulating in your organization—with an eye on what an open leader might do to [create a safe space for whistleblowing][5].
### 1\. Fear
People don't want to complain about a known problem only to have their complaint be the one that initiates the quality assurance, integrity, or redress process. What if new information emerges that makes their report irrelevant? What if they are simply _wrong_?
At the root of all bystander behavior is fear—fear of repercussions, fear of losing reputation or face, or fear that the very thing you've stood up against turns out to be a non-issue for everyone else. Going on record as "the one who reported" carries with it a reputational risk that is very intimidating.
The first step to ensuring that your colleagues report malicious behavior, code, or _whatever_ needs reporting is to create a fear-free workplace. We're inundated with the idea that making a mistake is bad or wrong. We're taught that we have to "protect" our reputations. However, the qualities of a good and moral character are _always_ subjective.
_Tip for leaders_ : Reward courage and strength every time you see it, regardless of whether you deem it "necessary." For example, if in a meeting where everyone except one person agrees on something, spend time on that person's concerns. Be patient and kind in helping that person change their mind, and be open minded about that person being able to change yours. Brains work in different ways; never forget that one person might have a perspective that changes the lay of the land.
### 2\. Policies
Usually, complaint procedures and policies are designed to ensure fairness towards all parties involved in the complaint. Discouraging false reporting and ensuring such fairness in situations like these is certainly a good idea. But policies might actually deter people from standing up—because a victim might be discouraged from reporting an experience if the formal policy for reporting doesn't make them feel protected. Standing up to someone in a position of power and saying "Your behavior is horrid, and I'm not going to take it" isn't easy for anyone, but it's particularly difficult for marginalized groups.
The "open secrets" phenomenon illustrates the limitations of transparency when unaccompanied by additional open values.
To ensure fairness to all parties, we need to adjust for victims. As part of making the decision to file a report, a victim will be dealing with a variety of internal fears. They'll wonder what might happen to their self-worth if they're put in a situation where they have to talk to someone about their experience. They'll wonder if they'll be treated differently if they're the one who stands up, and how that will affect their future working environments and relationships. Especially in a situation involving an open secret, asking a victim to be strong is asking them to have to trust that numerous other people will back them up. This fear shouldn't be part of their workplace experience; it's just not fair.
Remember that if one feels responsible for a problem (e.g., "Crap, that's _my code_ that's bringing down the whole server!"), then that person might feel fear at pointing out the mistake. _The important thing is dealing with the situation, not finding someone to blame._ Policies that make people feel personally protected—no matter what the situation—are absolutely integral to ensuring the organization deals with open secrets.
_Tip for leaders_ : Make sure your team's or organization's policy regarding complaints makes anonymous reporting possible. Asking a victim to "go on record" puts them in the position of having to defend their perspective. If they feel they're the victim of harassment, they're feeling as if they are harassed _and_ being asked to defend their experience. This means they're doing double the work of the perpetrator, who only has to defend themselves.
### 3\. Marginalization
Women, LGBTQ people, racial minorities, people with physical disabilities, people who are neuro-atypical, and other marginalized groups often find themselves in positions that make them feel routinely dismissed, disempowered, disrespected—and generally dissed. These feelings are valid (and shouldn't be too surprising to anyone who has spent some time looking at issues of diversity and inclusion). Our emotional safety matters, and we tend to be quite protective of it—even if it means letting open secrets go unaddressed.
Marginalized groups have enough worries weighing on them, even when they're _not_ running the risk of damaging their relationships with others at work. Being seen and respected in both an organization and society more broadly is difficult enough _without_ drawing potentially negative attention.
Policies that make people feel personally protected—no matter what the situation—are absolutely integral to ensuring the organization deals with open secrets.
Luckily, in recent years the experiences of marginalized groups have become more visible, and we as a society have begun to talk about our experiences as "outliers." We've also come to realize that marginalized groups aren't actually "outliers" at all; we can thank the colorful, beautiful internet for that.
_Tip for leaders_ : Diversity and inclusion play a role in dispelling open secrets. Make sure your diversity and inclusion practices and policies truly encourage a diverse workplace.
### Model the behavior
The best way to create a safe workplace and give people the ability to call attention to pervasive problems found within it is to _model the behaviors that you want other people to display_. Dysfunction occurs in cultures that don't pay attention to and value the principles upon which they are built. In order to discourage bystander behavior, transparent, inclusive, adaptable and collaborative communities must create policies that support calling attention to open secrets and then empathetically dealing with whatever the issue may be.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/3/open-secrets-bystander-effect
作者:[Laura Hilliger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/laurahilliger/users/maryjo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_secret_ingredient_520x292.png?itok=QbKzJq-N
[2]: https://opensource.com/open-organization/resources/open-org-definition
[3]: https://hbr.org/2019/01/why-open-secrets-exist-in-organizations
[4]: https://www.psychologytoday.com/us/basics/bystander-effect
[5]: https://opensource.com/open-organization/19/2/open-leaders-whistleblowers

View File

@ -0,0 +1,139 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (9 open source tools for building a fault-tolerant system)
[#]: via: (https://opensource.com/article/19/3/tools-fault-tolerant-system)
[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson)
9 open source tools for building a fault-tolerant system
======
Maximize uptime and minimize problems with these open source tools.
![magnifying glass on computer screen, finding a bug in the code][1]
I've always been interested in web development and software architecture because I like to see the broader picture of a working system. Whether you are building a mobile app or a web application, it has to be connected to the internet to exchange data among different modules, which means you need a web service.
If you use a cloud system as your application's backend, you can take advantage of greater computing power, as the backend service will scale horizontally and vertically and orchestrate different services. But whether or not you use a cloud backend, it's important to build a _fault-tolerant system_ —one that is resilient, stable, fast, and safe.
To understand fault-tolerant systems, let's use Facebook, Amazon, Google, and Netflix as examples. Millions, even billions, of users access these platforms simultaneously while transmitting enormous amounts of data via peer-to-peer and user-to-server networks, and you can be sure there are also malicious users with bad intentions, like hacking or denial-of-service (DoS) attacks. Even so, these platforms can operate 24 hours a day, 365 days a year, without downtime.
Although machine learning and smart algorithms are the backbones of these systems, the fact that they achieve consistent service without a single minute of downtime is praiseworthy. Their expensive hardware and gigantic datacenters certainly matter, but the elegant software designs supporting the services are equally important. And fault tolerance is one of the principles behind building such elegant systems.
### Two behaviors that cause problems in production
Here's another way to think of a fault-tolerant system. When you run your application service locally, everything seems to be fine. Great! But when you promote your service to the production environment, all hell breaks loose. In a situation like this, a fault-tolerant system helps by addressing two problems: Fail-stop behavior and Byzantine behavior.
#### Fail-stop behavior
Fail-stop behavior is when a running system suddenly halts or a few parts of the system fail. Server downtime and database inaccessibility fall under this category. For example, in the diagram below, Service 1 can't communicate with Service 2 because Service 2 is inaccessible:
![Fail-stop behavior due to Service 2 downtime][2]
But the problem can also occur if there is a network problem between the services, like this:
![Fail-stop behavior due to network failure][3]
#### Byzantine behavior
Byzantine behavior is when the system continuously runs but doesn't produce the expected behavior (e.g., wrong data or an invalid value).
Byzantine failure can happen if Service 2 has corrupted data or values, even though the service looks to be operating just fine, like in this example:
![Byzantine failure due to corrupted service][4]
Or, there can be a malicious middleman intercepting between the services and injecting unwanted data:
![Byzantine failure due to malicious middleman][5]
Neither fail-stop nor Byzantine behavior is a desired situation, so we need ways to prevent or fix them. That's where fault-tolerant systems come into play. Following are nine open source tools that can help you address these problems.
### Tools for building a fault-tolerant system
Although building a truly practical fault-tolerant system touches upon in-depth _distributed computing theory_ and complex computer science principles, there are many software tools—many of them, like the following, open source—that can help you build a fault-tolerant system and alleviate undesirable results.
#### Circuit-breaker pattern: Hystrix and Resilience4j
The [circuit-breaker pattern][6] is a technique that helps to return a prepared dummy response or a simple response when a service fails:
![Circuit breaker pattern][7]
Netflix's open source **[Hystrix][8]** is the most popular implementation of the circuit-breaker pattern.
Many companies where I've worked previously are leveraging this wonderful tool. Surprisingly, Netflix announced that it will no longer update Hystrix. (Yeah, I know.) Instead, Netflix recommends using an alternative solution like [**Resilience4j**][9], which supports Java 8 and functional programming, or an alternative practice like [Adaptive Concurrency Limit][10].
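As a rough illustration of the pattern, here is a minimal sketch using Resilience4j; the breaker name, the thresholds, and the `callService2()` helper are hypothetical placeholders, not an official example:
```
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;

import java.time.Duration;

public class Service1 {
    public static void main(String[] args) {
        // Open the circuit once 50% of recent calls fail; try again after 30s.
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .failureRateThreshold(50)
                .waitDurationInOpenState(Duration.ofSeconds(30))
                .build();
        CircuitBreaker breaker = CircuitBreaker.of("service2", config);

        String response;
        try {
            // While the circuit is closed, the call goes through normally.
            response = breaker.executeSupplier(Service1::callService2);
        } catch (Exception e) {
            // When the circuit is open (or the call fails), return a
            // prepared dummy response instead of hanging or crashing.
            response = "{\"status\": \"degraded\"}";
        }
        System.out.println(response);
    }

    // Hypothetical stand-in for a remote call to Service 2.
    private static String callService2() {
        throw new RuntimeException("Service 2 is down");
    }
}
```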
#### Load balancing: Nginx and HaProxy
Load balancing is one of the most fundamental concepts in a distributed system and must be present to have a production-quality environment. To understand load balancers, we first need to understand the concept of _redundancy_. Every production-quality web service has multiple servers that provide redundancy to take over and maintain services when servers go down.
![Load balancer][11]
Think about modern airplanes: their dual engines provide redundancy that allows them to land safely even if an engine catches fire. (It also helps that most commercial airplanes have state-of-the-art, automated systems.) But having multiple engines (or servers) means that there must be some kind of scheduling mechanism to route traffic effectively when something fails.
A load balancer is a device or software that optimizes heavy traffic transactions by balancing multiple server nodes. For instance, when thousands of requests come in, the load balancer acts as the middle layer to route and evenly distribute traffic across different servers. If a server goes down, the load balancer forwards requests to the other servers that are running well.
There are many load balancers available, but the two best-known ones are Nginx and HaProxy.
[**Nginx**][12] is more than a load balancer. It is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. Companies like Groupon, Capital One, Adobe, and NASA use it.
[**HaProxy**][13] is also popular, as it is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. Many large internet companies, including GitHub, Reddit, Twitter, and Stack Overflow, use HaProxy. Oh and yes, Red Hat Enterprise Linux also supports HaProxy configuration.
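To make the routing idea concrete, here is a minimal sketch of an Nginx load-balancing configuration; the hostnames are hypothetical placeholders:
```
upstream backend {
    server app1.example.com;          # healthy servers share the traffic
    server app2.example.com;
    server app3.example.com backup;   # used only when the others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;    # Nginx picks a backend per request
    }
}
```
If one server stops responding, Nginx routes subsequent requests to the remaining servers, which is exactly the redundancy-plus-scheduling combination described above.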
#### Actor model: Akka
The [actor model][14] is a concurrency design pattern that delegates responsibility when an _actor_ , which is a primitive unit of computation, receives a message. An actor can create even more actors and delegate the message to them.
[**Akka**][15] is one of the most well-known tools for the actor model implementation. The framework supports Java and Scala, which are both based on JVM.
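For a feel of the model, here is a minimal sketch using Akka's classic Java API; the actor and its message are made up for illustration:
```
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class Greeter extends AbstractActor {
    @Override
    public Receive createReceive() {
        // An actor processes one message at a time: no shared state, no locks.
        return receiveBuilder()
                .match(String.class, msg -> System.out.println("Received: " + msg))
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        // Actors are created from Props; inside an actor, getContext().actorOf(...)
        // spawns child actors to which messages can be delegated.
        ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
        greeter.tell("hello", ActorRef.noSender()); // asynchronous, non-blocking
        system.terminate();
    }
}
```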
#### Asynchronous, non-blocking I/O using messaging queue: Kafka and RabbitMQ
Multi-threaded development has been popular in the past, but this practice has been discouraged and replaced with asynchronous, non-blocking I/O patterns. For Java, this is explicitly stated in its [Enterprise Java Bean (EJB) specifications][16]:
> "An enterprise bean must not use thread synchronization primitives to synchronize execution of multiple instances.
>
> "The enterprise bean must not attempt to manage threads. The enterprise bean must not attempt to start, stop, suspend, or resume a thread, or to change a thread's priority or name. The enterprise bean must not attempt to manage thread groups."
Now, there are other practices like stream APIs and actor models. But messaging queues like [**Kafka**][17] and [**RabbitMQ**][18] offer out-of-the-box support for asynchronous, non-blocking I/O, and they are powerful open source tools that can replace hand-managed threads for handling concurrent processes.
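As a small illustration of the non-blocking style, here is a sketch of an asynchronous send with the Kafka Java producer; the broker address and topic name are placeholders:
```
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AsyncSender {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is non-blocking: it queues the record and returns at once.
            // The callback runs later, when the broker acknowledges the write.
            producer.send(new ProducerRecord<>("events", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.println("Stored at offset " + metadata.offset());
                        }
                    });
        } // close() flushes any records still in flight
    }
}
```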
#### Other options: Eureka and Chaos Monkey
Other useful tools for fault-tolerant systems include service-discovery tools, such as Netflix's **[Eureka][19]** , and resiliency-testing tools, like **[Chaos Monkey][20]**. They aim to discover potential issues earlier by testing in lower environments, like integration (INT), quality assurance (QA), and user acceptance testing (UAT), before problems reach the production environment.
* * *
What open source tools are you using for building a fault-tolerant system? Please share your favorites in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/tools-fault-tolerant-system
作者:[Bryant Son (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
[2]: https://opensource.com/sites/default/files/uploads/1_errordowntimeservice.jpg (Fail-stop behavior due to Service 2 downtime)
[3]: https://opensource.com/sites/default/files/uploads/2_errordowntimenetwork.jpg (Fail-stop behavior due to network failure)
[4]: https://opensource.com/sites/default/files/uploads/3_byzantinefailuremalicious.jpg (Byzantine failure due to corrupted service)
[5]: https://opensource.com/sites/default/files/uploads/4_byzantinefailuremiddleman.jpg (Byzantine failure due to malicious middleman)
[6]: https://martinfowler.com/bliki/CircuitBreaker.html
[7]: https://opensource.com/sites/default/files/uploads/5_circuitbreakerpattern.jpg (Circuit breaker pattern)
[8]: https://github.com/Netflix/Hystrix/wiki
[9]: https://github.com/resilience4j/resilience4j
[10]: https://medium.com/@NetflixTechBlog/performance-under-load-3e6fa9a60581
[11]: https://opensource.com/sites/default/files/uploads/7_loadbalancer.jpg (Load balancer)
[12]: https://www.nginx.com
[13]: https://www.haproxy.org
[14]: https://en.wikipedia.org/wiki/Actor_model
[15]: https://akka.io
[16]: https://jcp.org/aboutJava/communityprocess/final/jsr220/index.html
[17]: https://kafka.apache.org
[18]: https://www.rabbitmq.com
[19]: https://github.com/Netflix/eureka
[20]: https://github.com/Netflix/chaosmonkey

View File

@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How Kubeflow is evolving without ksonnet)
[#]: via: (https://opensource.com/article/19/4/kubeflow-evolution)
[#]: author: (Jonathan Gershater (Red Hat) https://opensource.com/users/jgershat/users/jgershat)
How Kubeflow is evolving without ksonnet
======
There are big differences in how open source communities handle change compared to closed source vendors.
![Chat bubbles][1]
Many software projects depend on modules that are run as separate open source projects. When one of those modules loses support (as is inevitable), the community around the main project must determine how to proceed.
This situation is happening right now in the [Kubeflow][2] community. Kubeflow is an evolving open source platform for developing, orchestrating, deploying, and running scalable and portable machine learning workloads on [Kubernetes][3]. Recently, the primary supporter of the Kubeflow component [ksonnet][4] announced that it would [no longer support][5] the software.
When a piece of software loses support, the decision-making process (and the outcome) differs greatly depending on whether the software is open source or closed source.
### A cellphone analogy
To illustrate the differences in how an open source community and a closed source/single software vendor proceed when a component loses support, let's use an example from hardware design.
Suppose you buy cellphone Model A and it stops working. When you try to get it repaired, you discover the manufacturer is out of business and no longer offering support. Since the cellphone's design is proprietary and closed, no other manufacturers can support it.
Now, suppose you buy cellphone Model B, it stops working, and its manufacturer is also out of business and no longer offering support. However, Model B's design is open, and another company is in business manufacturing, repairing and upgrading Model B cellphones.
This illustrates one difference between software written using closed and open source principles. If the vendor of a closed source software solution goes out of business, support disappears with the vendor, unless the vendor sells the software's design and intellectual property. But, if the vendor of an open source solution goes out of business, there is no intellectual property to sell. By the principles of open source, the source code is available for anyone to use and modify, under license, so another vendor can continue to maintain the software.
### How Kubeflow is evolving without ksonnet
The ramifications of the decision by ksonnet's backers to cease development illustrate Kubeflow's open and collaborative design process. Kubeflow's designers have several options, such as replacing ksonnet or adopting and developing it themselves. Because Kubeflow is an open source project, all options are discussed in the open on the Kubeflow mailing list. Some of the community's suggestions include:
> * Should we look at projects that are CNCF/Apache projects e.g. [helm][6]
> * I would opt for back to the basics. KISS. How about plain old jsonnet + kubectl + makefile/scripts ? Thats how e.g. the coreos [prometheus operator][7] does it. It would also lower the entry barrier (no new tooling) and let vendors of k8s (gke, openshift, etc) easily build on top of that.
> * I vote for using a simple, _programmatic_ context, be it manual jsonnet + kubectl, or simple Python scripts + Python K8s client, or any tool be can build on top of these.
>
The members of the mailing list are discussing and debating alternatives to ksonnet and will arrive at a decision about how to continue development. What I love about the open source way of adapting is that it's done communally. Unlike closed source software, which is often designed by one vendor, the organizations that are members of an open source project can collaboratively steer the project in the direction they best see fit. As Kubeflow evolves, it will benefit from an open, collaborative decision-making framework.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/kubeflow-evolution
作者:[Jonathan Gershater (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jgershat/users/jgershat
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: https://www.kubeflow.org/
[3]: https://github.com/kubernetes
[4]: https://ksonnet.io/
[5]: https://blogs.vmware.com/cloudnative/2019/02/05/welcoming-heptio-open-source-projects-to-vmware/
[6]: https://landscape.cncf.io
[7]: https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus

View File

@ -1,72 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Breaking News! Microsoft is Creating Linux-based Smartphone OS)
[#]: via: (https://itsfoss.com/microsoft-linux-mobile-os)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Breaking News! Microsoft is Creating Linux-based Smartphone OS
======
Microsoft, the king of desktop operating systems, hasn't had much luck with mobile operating systems. Its Windows-based mobile operating systems, [Windows Mobile][1] and [Windows Phone][2], both failed miserably and have been discontinued.
But that hasn't deterred Microsoft from trying its hand in the lucrative mobile market again.
### Yet another Microsoft attempt at a mobile OS
![][3]
The unconfirmed news is that Microsoft is creating a Linux-based mobile operating system with a special focus on protecting users' privacy.
> We have put special attention on user privacy. There will be no data collection through Cortana. The OS will not be updated without user permission and above all, the updates won't require you to restart your system several times. That itself is our biggest achievement till date.
>
> Don Jhoe, Microsoft Product Manager
Reports are not confirmed but the project is internally called “Mazure”. The name could be changed as the project progresses.
The Mazure OS will be completely open source. This shows the firm commitment to open source development from Microsoft. This is another effort from Microsoft to give back to the community by [open sourcing essential tools like Windows calculator app][4].
### Targeting the 4-billion-strong mobile OS market
There are over 4 billion mobile devices in the world. A tech giant like Microsoft cannot simply give up on a market this big.
The world of mobile operating systems is dominated by Android and iOS and many experts think that it is saturated and a new player doesn't stand a chance.
But Linux-based [KaiOS][5] came into the market in 2017 and created a niche for itself in just one year.
This moderate success of KaiOS perhaps gave Microsoft the idea to launch its own mobile operating system based on Linux.
Lately, a number of [open source mobile operating system][6] projects have come up. Almost all of them use Linux kernel underneath.
Linux-based smartphones are also a hot topic in the tech world. [Librem5][7] and [Necuno][8] have already announced Linux-based smartphones with a focus on privacy.
Microsoft has taken the same idea and started its own Linux-based mobile OS, “Mazure”, with a promise to protect users' privacy.
You can find more information about this project on its extremely confidential website below.
[Microsofts Mazure OS Project Website][9]
--------------------------------------------------------------------------------
via: https://itsfoss.com/microsoft-linux-mobile-os
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Windows_Mobile
[2]: https://en.wikipedia.org/wiki/Windows_Phone
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/microsoft-linux-based-mobile-os.png?resize=800%2C450&ssl=1
[4]: https://www.theverge.com/2019/3/6/18253474/microsoft-windows-calculator-open-source-github
[5]: https://en.wikipedia.org/wiki/KaiOS
[6]: https://itsfoss.com/open-source-alternatives-android/
[7]: https://itsfoss.com/librem-linux-phone/
[8]: https://itsfoss.com/necunos-linux-smartphone/
[9]: https://en.wikipedia.org/wiki/April_Fools%27_Day

View File

@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Making computer science curricula as adaptable as our code)
[#]: via: (https://opensource.com/open-organization/19/4/adaptable-curricula-computer-science)
[#]: author: (Amarachi Achonu https://opensource.com/users/amarach1/users/johnsontanner3)
Making computer science curricula as adaptable as our code
======
No two computer science students are alike—so teachers need curricula
that are open and adaptable.
![][1]
Educators in elementary computer science face a lack of adaptable curricula. Calls for more modifiable, non-rigid curricula are therefore enticing—assuming that such curricula could benefit teachers by increasing their ability to mold resources for individual classrooms and, ultimately, produce better teaching experiences and learning outcomes.
Our team at [CSbyUs][2] noticed this scarcity, and we've created an open source web platform to facilitate more flexible, adaptable, and tested curricula for computer science educators. The mission of the CSbyUs team has always been utilizing open source technology to improve pedagogy in computer science, which includes increasing support for teachers. Therefore, this project primarily seeks to use open source principles—and the benefits inherent in them—to expand the possibilities of modern curriculum-making and support teachers by increasing access to more adaptable curricula.
### Rigid, monotonous, mundane
Why is the lack of adaptable curricula a problem for computer science education? Rigid curricula dominate most classrooms today, primarily through monotonous and routinely distributed lesson plans. Many of these plans are developed without the capacity for dynamic use and application to different classroom atmospheres. In contrast, an _adaptable_ curriculum is one that would _account_ for dynamic and changing classroom environments.
An adaptable curriculum means freedom and more options for educators. This is especially important in elementary-level classrooms, where instructors are introducing students to computer science for the first time, and in classrooms with higher populations of groups typically underrepresented in the field of computer science. Here especially, it's advantageous for instructors to have access to curricula that explicitly consider diverse classroom landscapes and grant the freedom necessary to adapt to specific student populations.
### Making it adaptable
This kind of adaptability is certainly at work at CSbyUs. Hayley Barton—a member of both the organization's curriculum-making team and its teaching team, and a senior at Duke University majoring in Economics and minoring in Computer Science and Spanish—recently demonstrated the benefits of adaptable curricula during an engagement in the field. Reflecting on her teaching experiences, Barton describes a major reason why curriculum adaptation is necessary in computer science classrooms. "We are seeing the range of students that we work with," she says, "and trying to make the curriculum something that can be tailored to different students."
An adaptable curriculum means freedom and more options for educators.
A more adaptable curriculum is necessary for truly challenging students, Barton continues.
The need for change became most evident to Barton when working with students to make their own preliminary apps. Barton collaborated with students who appeared to be at different levels of focus and attention. On the one hand, a group of more advanced students took well to the style of a demonstrative curriculum and remained attentive and engaged with the task. On the other hand, another group of students seemed to have more trouble focusing in the classroom or even staying motivated to engage with computer science topics. Witnessing this difference among students, it became clear that the curriculum would need to be adaptable in multiple ways to engage more students at their level.
"We want to challenge every student without making it too challenging for any individual student," Barton says. "Thinking about those things definitely feeds into how I'm thinking about the curriculum in terms of making it accessible for all the students."
As a curriculum-maker, she subsequently uses experiences like this to make changes to the original curriculum.
"If those other students have one-on-one time themselves, they could be doing even more amazing things with their apps," says Barton.
Taking this advice, Barton would potentially incorporate into the curriculum more emphasis on cultivating students' sense of ownership in computer science, since this is important to their focus and productivity. For this, students may be afforded that sense of one-on-one time. The result will affect the next round of teachers who use the curriculum.
For these changes to be effective, the onus is on teachers to notice the dynamics of the classroom. In the future, curriculum adaptation may depend on paying particular attention to and identifying these subtle differences in how a curriculum plays out. Identifying and commenting on these subtleties makes it possible to apply a different strategy, and those are the changes that get applied to the curriculum.
Curriculum adaptation should be iterative, as it involves learning from experience, returning to the drawing board, making changes, and finally, utilizing the curriculum again.
"We've gone through a lot of stages of development," Barton says. "The goal is to have this kind of back and forth, where the curriculum is something that's been tested, where we've used our feedback, and also used other research that we've done, to make it something that's actually impactful."
Hayley's "back and forth" process is an iterative process of curriculum-making. Between utilizing curricula and modifying curricula, instructors like Hayley can take a once-rigid curriculum and mold it to any degree that the user sees fit—again and again. This iterative process depends on tests performed first in the classroom, and it depends on the teacher's rationale and reflection on how curricula uniquely pan out for them.
Adaptability of curriculum is the most important principle on which the CSbyUs platform is built. Much like Hayley's process of curriculum-making, curriculum adaptation should be _iterative_ , as it involves learning from experience, returning to the drawing board, making changes, and finally, utilizing the curriculum again. Once launched, the CSbyUS website will document this iterative process.
The open-focused pedagogy behind the CSbyUs platform, then, brings to life the flexibility inherent in the process of curriculum adaptation. First, it invites and collects the valuable first-hand perspectives of real educators working with real curricula to produce real learning. Next, it capitalizes on an iterative process of development—one familiar to open source programmers—to enable modifications to curriculum (and the documentation of those modifications). Finally, it transforms the way teachers encounter curricula by helping them make selections from different versions of both modified curriculum and "the original." Our platform's open source strategy is crucial to cultivating a hub of flexible curricula for educators.
Open source practices can be a key difference in making rigid curricula more moldable for educators. Furthermore, since this approach effectively melds open source technologies with open-focused pedagogy, open pedagogy can potentially provide flexibility for educators teaching various curriculum across disciplines.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/4/adaptable-curricula-computer-science
作者:[Amarachi Achonu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/amarach1/users/johnsontanner3
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolserieshe_rh_051x_0.png?itok=gIzbmxuI
[2]: https://csbyus.herokuapp.com/

View File

@ -1,356 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Set Password Policies In Linux)
[#]: via: (https://www.ostechnix.com/how-to-set-password-policies-in-linux/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)
How To Set Password Policies In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2016/03/How-To-Set-Password-Policies-In-Linux-720x340.jpg)
Even though Linux is secure by design, there are still many chances for a security breach. One of them is weak passwords. As a system administrator, you must enforce strong passwords for your users, because most system breaches happen due to weak passwords. This tutorial describes how to set password policies such as **password length**, **password complexity**, and **password expiration period** in DEB-based systems like Debian, Ubuntu, and Linux Mint, and RPM-based systems like RHEL, CentOS, and Scientific Linux.
### Set password length in DEB based systems
By default, most Linux distributions require a **minimum password length of 6 characters** for users. I strongly advise you not to go below this limit. Also, don't use your real name, a parent's/spouse's/child's name, or your date of birth as a password. Even a novice hacker can break such passwords in minutes. A good password contains more than 6 characters, including a number, a capital letter, and a special character.
Usually, the password and authentication-related configuration files will be stored in **/etc/pam.d/** location in DEB based operating systems.
To set the minimum password length, edit the **/etc/pam.d/common-password** file:
```
$ sudo nano /etc/pam.d/common-password
```
Find the following line:
```
password [success=2 default=ignore] pam_unix.so obscure sha512
```
![][2]
And append **minlen=8** at the end. Here, I set the minimum password length to **8** characters.
```
password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
```
![](https://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_002-3-1.jpg)
Save and close the file. Now users can't use fewer than 8 characters for their passwords.
### Set password length in RPM based systems
**In RHEL, CentOS, Scientific Linux 7.x** systems, run the following command as root user to set password length.
```
# authconfig --passminlen=8 --update
```
To view the minimum password length, run:
```
# grep "^minlen" /etc/security/pwquality.conf
```
**Sample output:**
```
minlen = 8
```
**In RHEL, CentOS, Scientific Linux 6.x** systems, edit **/etc/pam.d/system-auth** file:
```
# nano /etc/pam.d/system-auth
```
Find the following line and add **minlen=8** at the end, so it looks like this:
```
password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8
```
![](https://www.ostechnix.com/wp-content/uploads/2016/03/root@server_003-3.jpg)
As per the above setting, the minimum password length is **8** characters.
### Set password complexity in DEB based systems
This setting enforces how many character classes, i.e., upper-case letters, lower-case letters, digits, and other characters, should be in a password.
First, install the password quality checking library with:
```
$ sudo apt-get install libpam-pwquality
```
Then, edit **/etc/pam.d/common-password** file:
```
$ sudo nano /etc/pam.d/common-password
```
To require at least one **upper-case** letter in the password, add **ucredit=-1** at the end of the following line.
```
password requisite pam_pwquality.so retry=3 ucredit=-1
```
![](https://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_001-7.jpg)
Similarly, require at least one **lower-case** letter in the password as shown below. (Note that **lcredit** controls lower-case letters; **dcredit=-1** would instead require at least one digit.)
```
password requisite pam_pwquality.so retry=3 lcredit=-1
```
Require at least one **other** (special) character in the password as shown below.
```
password requisite pam_pwquality.so retry=3 ocredit=-1
```
As you can see in the above examples, we have required at least one upper-case letter, one lower-case letter, and one special character in the password. A negative value such as **-1** means "at least one"; lowering it (e.g., **ucredit=-2**) raises the minimum accordingly.
You can also set the minimum number of character classes that a password must contain.
The following example shows the minimum number of required classes of characters for the new password:
```
password requisite pam_pwquality.so retry=3 minclass=2
```
### Set password complexity in RPM based systems
**In RHEL 7.x / CentOS 7.x / Scientific Linux 7.x:**
To set at least one lower-case letter in the password, run:
```
# authconfig --enablereqlower --update
```
To view the settings, run:
```
# grep "^lcredit" /etc/security/pwquality.conf
```
**Sample output:**
```
lcredit = -1
```
Similarly, set at least one upper-case letter in the password using command:
```
# authconfig --enablerequpper --update
```
To view the settings:
```
# grep "^ucredit" /etc/security/pwquality.conf
```
**Sample output:**
```
ucredit = -1
```
To set at least one digit in the password, run:
```
# authconfig --enablereqdigit --update
```
To view the setting, run:
```
# grep "^dcredit" /etc/security/pwquality.conf
```
**Sample output:**
```
dcredit = -1
```
To set at least one other character in the password, run:
```
# authconfig --enablereqother --update
```
To view the setting, run:
```
# grep "^ocredit" /etc/security/pwquality.conf
```
**Sample output:**
```
ocredit = -1
```
In **RHEL 6.x / CentOS 6.x / Scientific Linux 6.x systems** , edit **/etc/pam.d/system-auth** file as root user:
```
# nano /etc/pam.d/system-auth
```
Find the following line and make sure it ends with the options shown below:
```
password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8 dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1
```
As per the above setting, the password must have at least 8 characters. In addition, the password should also have at least one upper-case letter, one lower-case letter, one digit, and one other character.
### Set password expiration period in DEB based systems
Now, we are going to set the following policies:
1. Maximum number of days a password may be used.
2. Minimum number of days allowed between password changes.
3. Number of days warning given before a password expires.
To set this policy, edit:
```
$ sudo nano /etc/login.defs
```
Set the values as per your requirement.
```
PASS_MAX_DAYS 100
PASS_MIN_DAYS 0
PASS_WARN_AGE 7
```
![](https://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_002-8.jpg)
As you can see in the above example, the user should change the password every **100** days, and a warning message will appear **7** days before password expiration.
Be mindful that these settings will only impact newly created users.
To set the maximum number of days between password changes for existing users, you must run the following command:
```
$ sudo chage -M <days> <username>
```
To set the minimum number of days between password changes, run:
```
$ sudo chage -m <days> <username>
```
To set the number of warning days before a password expires, run:
```
$ sudo chage -W <days> <username>
```
To display the password aging information for an existing user, run:
```
$ sudo chage -l sk
```
Here, **sk** is my username.
**Sample output:**
```
Last password change : Feb 24, 2017
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```
As you see in the above output, the password never expires.
To change the password expiration period of an existing user, run:
```
$ sudo chage -E 2018-06-24 -m 5 -M 90 -I 10 -W 10 sk
```
The above command sets the password of the user **sk** to expire on **2018-06-24**. Also, the minimum number of days between password changes is set to **5** and the maximum is set to **90** days. The account will be locked automatically **10 days** after the password expires, and a warning message will be displayed for **10 days** before expiration.
### Set password expiration period in RPM based systems
This works the same way as in DEB-based systems.
### Forbid previously used passwords in DEB based systems
You can prevent users from setting a password that they have already used in the past. To put it in layman's terms, users can't reuse old passwords.
To do so, edit the **/etc/pam.d/common-password** file:
```
$ sudo nano /etc/pam.d/common-password
```
Find the following line and add **remember=5** at the end:
```
password        [success=2 default=ignore]      pam_unix.so obscure use_authtok try_first_pass sha512 remember=5
```
The above policy prevents users from reusing any of their last 5 passwords.
### Forbid previously used passwords in RPM based systems
This is the same for both RHEL 6.x and RHEL 7.x and their clones, such as CentOS and Scientific Linux.
Edit the **/etc/pam.d/system-auth** file as the root user:
```
# vi /etc/pam.d/system-auth
```
Find the following line, and add **remember=5** at the end.
```
password     sufficient     pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5
```
You now know what password policies are in Linux and how to set different password policies in DEB- and RPM-based systems.
That's all for now. I will be back soon with another interesting and useful article. Until then, stay tuned to OSTechNix. If you found this tutorial helpful, share it on your social and professional networks and support us.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-set-password-policies-in-linux/
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[2]: http://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_003-2-1.jpg

View File

@ -1,289 +0,0 @@
Myths about /dev/urandom
======
There are a few things about /dev/urandom and /dev/random that are repeated again and again. Still, they are false.
I'm mostly talking about reasonably recent Linux systems, not other UNIX-like systems.
### /dev/urandom is insecure. Always use /dev/random for cryptographic purposes.
Fact: /dev/urandom is the preferred source of cryptographic randomness on UNIX-like systems.
### /dev/urandom is a pseudo random number generator, a PRNG, while /dev/random is a “true” random number generator.
Fact: Both /dev/urandom and /dev/random are using the exact same CSPRNG (a cryptographically secure pseudorandom number generator). They only differ in very few ways that have nothing to do with “true” randomness.
### /dev/random is unambiguously the better choice for cryptography. Even if /dev/urandom were comparably secure, there's no reason to choose the latter.
Fact: /dev/random has a very nasty problem: it blocks.
### But that's good! /dev/random gives out exactly as much randomness as it has entropy in its pool. /dev/urandom will give you insecure random numbers, even though it has long run out of entropy.
Fact: No. Even disregarding issues like availability and subsequent manipulation by users, the issue of entropy “running low” is a straw man. About 256 bits of entropy are enough to get computationally secure numbers for a long, long time.
And the fun only starts here: how does /dev/random know how much entropy there is available to give out? Stay tuned!
### But cryptographers always talk about constant re-seeding. Doesn't that contradict your last point?
Fact: You got me! Kind of. It is true, the random number generator is constantly re-seeded using whatever entropy the system can lay its hands on. But that has (partly) other reasons.
Look, I don't claim that injecting entropy is bad. It's good. I just claim that it's bad to block when the entropy estimate is low.
### That's all good and nice, but even the man page for /dev/(u)random contradicts you! Does anyone who knows about this stuff actually agree with you?
Fact: No, it really doesn't. The man page may seem to imply that /dev/urandom is insecure for cryptographic use, unless you really understand all that cryptographic jargon.
The man page does recommend the use of /dev/random in some cases (it doesn't hurt, in my opinion, but is not strictly necessary), but it also recommends /dev/urandom as the device to use for “normal” cryptographic use.
And while appeal to authority is usually nothing to be proud of, in cryptographic issues you're generally right to be careful and try to get the opinion of a domain expert.
And yes, quite a few experts share my view that /dev/urandom is the go-to solution for your random number needs in a cryptography context on UNIX-like systems. Obviously, their opinions influenced mine, not the other way around.
Hard to believe, right? I must certainly be wrong! Well, read on and let me try to convince you.
I tried to keep it out, but I fear there are two preliminaries to be taken care of, before we can really tackle all those points.
Namely, what is randomness, or better: what kind of randomness am I talking about here?
And, even more important, I'm really not being condescending. I have written this document to have a thing to point to, when this discussion comes up again. More than 140 characters. Without repeating myself again and again. Being able to hone the writing and the arguments themselves, benefitting many discussions in many venues.
And I'm certainly willing to hear differing opinions. I'm just saying that it won't be enough to state that /dev/urandom is bad. You need to identify the points you're disagreeing with and engage them.
### You're saying I'm stupid!
Emphatically no!
Actually, I used to believe that /dev/urandom was insecure myself, a few years ago. And it's something you and me almost had to believe, because all those highly respected people on Usenet, in web forums and today on Twitter told us. Even the man page seems to say so. Who were we to dismiss their convincing argument about “entropy running low”?
This misconception isn't so rampant because people are stupid, it is because with a little knowledge about cryptography (namely some vague idea what entropy is) it's very easy to be convinced of it. Intuition almost forces us there. Unfortunately intuition is often wrong in cryptography. So it is here.
### True randomness
What does it mean for random numbers to be “truly random”?
I don't want to dive into that issue too deep, because it quickly gets philosophical. Discussions have been known to unravel fast, because everyone can wax about their favorite model of randomness, without paying attention to anyone else. Or even making himself understood.
I believe that the “gold standard” for “true randomness” are quantum effects. Observe a photon pass through a semi-transparent mirror. Or not. Observe some radioactive material emit alpha particles. It's the best idea we have when it comes to randomness in the world. Other people might reasonably believe that those effects aren't truly random. Or even that there is no randomness in the world at all. Let a million flowers bloom.
Cryptographers often circumvent this philosophical debate by disregarding what it means for randomness to be “true”. They care about unpredictability. As long as nobody can get any information about the next random number, we're fine. And when you're talking about random numbers as a prerequisite in using cryptography, that's what you should aim for, in my opinion.
Anyway, I don't care much about those “philosophically secure” random numbers, as I like to think of your “true” random numbers.
### Two kinds of security, one that matters
But let's assume you've obtained those “true” random numbers. What are you going to do with them?
You print them out, frame them and hang them on your living-room wall, to revel in the beauty of a quantum universe? That's great, and I certainly understand.
Wait, what? You're using them? For cryptographic purposes? Well, that spoils everything, because now things get a bit ugly.
You see, your truly-random, quantum effect blessed random numbers are put into some less respectable, real-world tarnished algorithms.
Because almost all of the cryptographic algorithms we use do not hold up to **information-theoretic security**. They can “only” offer **computational security**. The two exceptions that come to my mind are Shamir's Secret Sharing and the One-time pad. And while the first one may be a valid counterpoint (if you actually intend to use it), the latter is utterly impractical.
But all those algorithms you know about, AES, RSA, Diffie-Hellman, Elliptic curves, and all those crypto packages you're using, OpenSSL, GnuTLS, Keyczar, your operating system's crypto API, these are only computationally secure.
What's the difference? While information-theoretically secure algorithms are secure, period, those other algorithms cannot guarantee security against an adversary with unlimited computational power who's trying all possibilities for keys. We still use them because it would take all the computers in the world taken together longer than the universe has existed, so far. That's the level of “insecurity” we're talking about here.
Unless some clever guy breaks the algorithm itself, using much less computational power. Even computational power achievable today. That's the big prize every cryptanalyst dreams about: breaking AES itself, breaking RSA itself and so on.
So now we're at the point where you don't trust the inner building blocks of the random number generator, insisting on “true randomness” instead of “pseudo randomness”. But then you're using those “true” random numbers in algorithms that you so despise that you didn't want them near your random number generator in the first place!
Truth is, when state-of-the-art hash algorithms are broken, or when state-of-the-art block ciphers are broken, it doesn't matter that you get “philosophically insecure” random numbers because of them. You've got nothing left to securely use them for anyway.
So just use those computationally-secure random numbers for your computationally-secure algorithms. In other words: use /dev/urandom.
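To make that concrete, here is a minimal sketch (in Java, purely for illustration) that reads 256 bits of key material straight from /dev/urandom; in practice, a library wrapper such as java.security.SecureRandom typically does this for you on Linux:
```
import java.io.FileInputStream;
import java.io.IOException;

public class UrandomExample {
    public static void main(String[] args) throws IOException {
        byte[] key = new byte[32]; // 256 bits, plenty for a symmetric key
        try (FileInputStream urandom = new FileInputStream("/dev/urandom")) {
            int off = 0;
            // Reads never block, but may return fewer bytes than requested.
            while (off < key.length) {
                int n = urandom.read(key, off, key.length - off);
                if (n < 0) throw new IOException("unexpected end of stream");
                off += n;
            }
        }
        System.out.printf("Read %d random bytes%n", key.length);
    }
}
```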
### Structure of Linux's random number generator
#### An incorrect view
Chances are, your idea of the kernel's random number generator is something similar to this:
![image: mythical structure of the kernel's random number generator][1]
“True randomness”, albeit possibly skewed and biased, enters the system and its entropy is precisely counted and immediately added to an internal entropy counter. After de-biasing and whitening it's entering the kernel's entropy pool, where both /dev/random and /dev/urandom get their random numbers from.
The “true” random number generator, /dev/random, takes those random numbers straight out of the pool, if the entropy count is sufficient for the number of requested numbers, decreasing the entropy counter, of course. If not, it blocks until new entropy has entered the system.
The important thing in this narrative is that /dev/random basically yields the numbers that have been input by those randomness sources outside, after only the necessary whitening. Nothing more, just pure randomness.
/dev/urandom, so the story goes, is doing the same thing. Except when there isn't sufficient entropy in the system. In contrast to /dev/random, it does not block, but gets “low quality random” numbers from a pseudorandom number generator (conceded, a cryptographically secure one) that is running alongside the rest of the random number machinery. This CSPRNG is just seeded once (or maybe every now and then, it doesn't matter) with “true randomness” from the randomness pool, but you can't really trust it.
In this view, that seems to be in a lot of people's minds when they're talking about random numbers on Linux, avoiding /dev/urandom is plausible.
Because either there is enough entropy left, then you get the same you'd have gotten from /dev/random. Or there isn't, then you get those low-quality random numbers from a CSPRNG that almost never saw high-entropy input.
Devilish, right? Unfortunately, also utterly wrong. In reality, the internal structure of the random number generator looks like this.
#### A better simplification
##### Before Linux 4.8
![image: actual structure of the kernel's random number generator before Linux 4.8][2] This is a pretty rough simplification. In fact, there isn't just one, but three pools filled with entropy. One primary pool, and one for /dev/random and /dev/urandom each, feeding off the primary pool. Those three pools all have their own entropy counts, but the counts of the secondary pools (for /dev/random and /dev/urandom) are mostly close to zero, and “fresh” entropy flows from the primary pool when needed, decreasing its entropy count. Also there is a lot of mixing and re-injecting outputs back into the system going on. All of this is far more detail than is necessary for this document.
See the big difference? The CSPRNG is not running alongside the random number generator, filling in for those times when /dev/urandom wants to output something, but has nothing good to output. The CSPRNG is an integral part of the random number generation process. There is no /dev/random handing out “good and pure” random numbers straight from the whitener. Every randomness source's input is thoroughly mixed and hashed inside the CSPRNG, before it emerges as random numbers, either via /dev/urandom or /dev/random.
Another important difference is that there is no entropy counting going on here, but estimation. The amount of entropy some source is giving you isn't something obvious that you just get, along with the data. It has to be estimated. Please note that when your estimate is too optimistic, the dearly held property of /dev/random, that it's only giving out as many random numbers as available entropy allows, is gone. Unfortunately, it's hard to estimate the amount of entropy.
The Linux kernel uses only the arrival times of events to estimate their entropy. It does that by interpolating polynomials of those arrival times, to calculate “how surprising” the actual arrival time was, according to the model. Whether this polynomial interpolation model is the best way to estimate entropy is an interesting question. There is also the problem that internal hardware restrictions might influence those arrival times. The sampling rates of all kinds of hardware components may also play a role, because it directly influences the values and the granularity of those event arrival times.
In the end, to the best of our knowledge, the kernel's entropy estimate is pretty good. Which means it's conservative. People argue about how good it really is, but that issue is far above my head. Still, if you insist on never handing out random numbers that are not “backed” by sufficient entropy, you might be nervous here. I'm sleeping sound because I don't care about the entropy estimate.
So to make one thing crystal clear: both /dev/random and /dev/urandom are fed by the same CSPRNG. Only the behavior when their respective pool runs out of entropy, according to some estimate, differs: /dev/random blocks, while /dev/urandom does not.
##### From Linux 4.8 onward
In Linux 4.8 the equivalency between /dev/urandom and /dev/random was given up. Now /dev/urandom output does not come from an entropy pool, but directly from a CSPRNG.
![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]
We will see shortly why that is not a security problem.
### What's wrong with blocking?
Have you ever waited for /dev/random to give you more random numbers? Generating a PGP key inside a virtual machine maybe? Connecting to a web server that's waiting for more random numbers to create an ephemeral session key?
That's the problem. It inherently runs counter to availability. So your system is not working. It's not doing what you built it to do. Obviously, that's bad. You wouldn't have built it if you didn't need it.
I'm working on safety-related systems in factory automation. Can you guess what the main reason for failures of safety systems is? Manipulation. Simple as that. Something about the safety measure bugged the worker. It took too much time, was too inconvenient, whatever. People are very resourceful when it comes to finding “unofficial solutions”.
But the problem runs even deeper: people don't like to be stopped in their ways. They will devise workarounds, concoct bizarre machinations to just get it running. People who don't know anything about cryptography. Normal people.
Why not patch out the call to `random()`? Why not have some guy in a web forum tell you how to use some strange ioctl to increase the entropy counter? Why not switch off SSL altogether?
In the end you just educate your users to do foolish things that compromise your system's security without you ever knowing about it.
It's easy to disregard availability, usability or other nice properties. Security trumps everything, right? So better be inconvenient, unavailable or unusable than feign security.
But that's a false dichotomy. Blocking is not necessary for security. As we saw, /dev/urandom gives you the same kind of random numbers as /dev/random, straight out of a CSPRNG. Use it!
### The CSPRNGs are alright
But now everything sounds really bleak. If even the high-quality random numbers from /dev/random are coming out of a CSPRNG, how can we use them for high-security purposes?
It turns out, that “looking random” is the basic requirement for a lot of our cryptographic building blocks. If you take the output of a cryptographic hash, it has to be indistinguishable from a random string so that cryptographers will accept it. If you take a block cipher, its output (without knowing the key) must also be indistinguishable from random data.
If anyone could gain an advantage over brute force breaking of cryptographic building blocks, using some perceived weakness of those CSPRNGs over “true” randomness, then it's the same old story: you don't have anything left. Block ciphers, hashes, everything is based on the same mathematical fundament as CSPRNGs. So don't be afraid.
### What about entropy running low?
It doesn't matter.
The underlying cryptographic building blocks are designed such that an attacker cannot predict the outcome, as long as there was enough randomness (a.k.a. entropy) in the beginning. A usual lower limit for “enough” may be 256 bits. No more.
Considering that we were pretty hand-wavey about the term “entropy” in the first place, it feels right. As we saw, the kernel's random number generator cannot even precisely know the amount of entropy entering the system. Only an estimate. And whether the model that's the basis for the estimate is good enough is pretty unclear, too.
### Re-seeding
But if entropy is so unimportant, why is fresh entropy constantly being injected into the random number generator?
djb [remarked][4] that under some circumstances injecting more entropy can actually hurt. Setting that debate aside: in the common case it cannot hurt. If you've got more randomness just lying around, by all means use it!
There is another reason why re-seeding the random number generator every now and then is important:
Imagine an attacker knows everything about your random number generator's internal state. That's the most severe security compromise you can imagine, the attacker has full access to the system.
You've totally lost now, because the attacker can compute all future outputs from this point on.
But over time, with more and more fresh entropy being mixed into it, the internal state gets more and more random again. Such a random number generator's design is, in a sense, self-healing.
But this is about injecting entropy into the generator's internal state; it has nothing to do with blocking its output.
### The random and urandom man page
The man page for /dev/random and /dev/urandom is pretty effective when it comes to instilling fear into the gullible programmer's mind:
> A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.
Such an attack is not known in “unclassified literature”, but the NSA certainly has one in store, right? And if you're really concerned about this (you should!), please use /dev/random, and all your problems are solved.
The truth is, while there may be such an attack available to secret services, evil hackers or the Bogeyman, it's simply not rational to take it as a given.
And even if you need that peace of mind, let me tell you a secret: no practical attacks on AES, SHA-3 or other solid ciphers and hashes are known in the “unclassified” literature, either. Are you going to stop using those, as well? Of course not!
Now the fun part: “use /dev/random instead”. While /dev/urandom does not block, its random number output comes from the very same CSPRNG as /dev/random's.
If you really need information-theoretically secure random numbers (you don't!), and that's about the only reason why the entropy of the CSPRNGs input matters, you can't use /dev/random, either!
The man page is silly, that's all. At least it tries to redeem itself with this:
> If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter. As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys.
Fine. I think it's unnecessary, but if you want to use /dev/random for your “long-lived keys”, by all means, do so! You'll wait a few seconds while typing stuff on your keyboard; that's no problem.
But please don't make connections to a mail server hang forever, just because you “wanted to be safe”.
### Orthodoxy
The view espoused here may look like a tiny minority opinion on the Internet. But ask a real cryptographer: you'll be hard pressed to find one who sympathizes with the blocking /dev/random.
Let's take [Daniel Bernstein][5], better known as djb:
> Cryptographers are certainly not responsible for this superstitious nonsense. Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that
>
> * (1) we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but
>
> * (2) we _can_ figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.).
>
>
>
> For a cryptographer this doesn't even pass the laugh test.
Or [Thomas Pornin][6], who is probably one of the most helpful persons I've ever encountered on the Stackexchange sites:
> The short answer is yes. The long answer is also yes. /dev/urandom yields data which is indistinguishable from true randomness, given existing technology. Getting "better" randomness than what /dev/urandom provides is meaningless, unless you are using one of the few "information theoretic" cryptographic algorithm, which is not your case (you would know it).
>
> The man page for urandom is somewhat misleading, arguably downright wrong, when it suggests that /dev/urandom may "run out of entropy" and /dev/random should be preferred;
Or maybe [Thomas Ptacek][7], who is not a real cryptographer in the sense of designing cryptographic algorithms or building cryptographic systems, but still the founder of a well-reputed security consultancy that's doing a lot of penetration testing and breaking bad cryptography:
> Use urandom. Use urandom. Use urandom. Use urandom. Use urandom. Use urandom.
### Not everything is perfect
/dev/urandom isn't perfect. The problems are twofold:
On Linux, unlike FreeBSD, /dev/urandom never blocks. Remember that the whole security rested on some starting randomness, a seed?
Linux's /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer.
FreeBSD does the right thing: they don't have the distinction between /dev/random and /dev/urandom, both are the same device. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
In the meantime, Linux has implemented a new syscall, originally introduced by OpenBSD as getentropy(2): getrandom(2). This syscall does the right thing: blocking until it has gathered enough initial entropy, and never blocking after that point. Of course, it is a syscall, not a character device, so it isn't as easily accessible from shell or script languages. It is available from Linux 3.17 onward.
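To illustrate that last point: there is no character device for getrandom(2), but scripting languages usually wrap it. Here is a sketch assuming Python 3.6+ on Linux, where `os.getrandom()` is a thin wrapper around the syscall:
```
# blocks until the kernel CSPRNG is initially seeded, never afterwards
python3 -c 'import os, base64; print(base64.b64encode(os.getrandom(32)).decode())'
```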
On Linux it isn't too bad: distributions save some random numbers when booting up the system (but only after some entropy has been gathered, since the startup scripts don't run immediately after switching on the machine) into a seed file that is read the next time the machine boots. So you carry over the randomness from the machine's last run.
Obviously that isn't as good as letting the shutdown scripts write out the seed, because then there would have been much more time to gather entropy. The advantage is that this does not depend on a proper shutdown with execution of the shutdown scripts (in case the computer crashes, for example).
And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer. So that's mostly okay.
Virtual machines are the other problem. Because people like to clone them, or rewind them to a previously saved checkpoint, this seed file doesn't help you.
The solution still isn't to use /dev/random everywhere, but to properly seed each and every virtual machine after cloning, restoring a checkpoint, whatever.
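What such seeding could look like, as a hypothetical sketch (the seed path is made up; note that writing to /dev/urandom mixes the data into the pool but does not credit the kernel's entropy estimate, which would require the `RNDADDENTROPY` ioctl):
```
# inside the freshly cloned/restored guest
dd if=/path/to/host-provided-seed of=/dev/urandom bs=512 count=1
```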
### tldr;
Just use /dev/urandom!
--------------------------------------------------------------------------------
via: https://www.2uo.de/myths-about-urandom/
作者:[Thomas Hühn][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2uo.de/
[1]:https://www.2uo.de/myths-about-urandom/structure-no.png
[2]:https://www.2uo.de/myths-about-urandom/structure-yes.png
[3]:https://www.2uo.de/myths-about-urandom/structure-new.png
[4]:http://blog.cr.yp.to/20140205-entropy.html
[5]:http://www.mail-archive.com/cryptography@randombit.net/msg04763.html
[6]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key/3939#3939
[7]:http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

View File

@ -1,203 +0,0 @@
Translating by MjSeven
iWant The Decentralized Peer To Peer File Sharing Commandline Application
======
![](https://www.ostechnix.com/wp-content/uploads/2017/07/p2p-720x340.jpg)
A while ago, we have written a guide about two file sharing utilities named [**transfer.sh**][1], a free web service that allows you to share files over Internet easily and quickly, and [**PSiTransfer**][2], a simple open source self-hosted file sharing solution. Today, we will see yet another file sharing utility called **“iWant”**. It is a free and open source CLI-based decentralized peer to peer file sharing application.
What makes it different from other file sharing applications, you might wonder? Here are some prominent features of iWant.
* It's a command line application. You don't need any memory-consuming GUI utilities. You need only the terminal.
* It is decentralized. That means your data will not be stored in any central location. So, there is no central point of failure.
* iWant allows you to pause a download and resume it later. You don't need to download from the beginning; it just resumes the download from where you left off.
* Any changes made to the files in the shared directory (such as deletion, addition, modification) will be reflected instantly in the network.
* Just like torrents, iWant downloads files from multiple peers. If any seeder leaves the group or fails to respond, the download continues from another seeder.
* It is cross-platform, so you can use it on GNU/Linux, MS Windows, and Mac OS X.
### iWant A CLI-based Decentralized Peer To Peer File Sharing Solution
#### Install iWant
iWant can be easily installed using the PIP package manager. Make sure you have pip installed in your Linux distribution. If it is not installed yet, refer to the following guide.
[How To Manage Python Packages Using Pip](https://www.ostechnix.com/manage-python-packages-using-pip/)
After installing PIP, make sure you have installed the following dependencies:
* libffi-dev
* libssl-dev
For example, on Ubuntu you can install these dependencies using the command:
```
$ sudo apt-get install libffi-dev libssl-dev
```
Once all dependencies are installed, install iWant using the following command:
```
$ sudo pip install iwant
```
We now have iWant on our system. Let us go ahead and see how to use it to transfer files over the network.
#### Usage
First, start iWant server using command:
```
$ iwanto start
```
The first time, iWant will ask for the Shared and Download folder locations. Enter the actual location of both folders. Then, choose which network interface you want to use:
Sample output would be:
```
Shared/Download folder details looks empty..
Note: Shared and Download folder cannot be the same
SHARED FOLDER(absolute path):/home/sk/myshare
DOWNLOAD FOLDER(absolute path):/home/sk/mydownloads
Network interface available
1. lo => 127.0.0.1
2. enp0s3 => 192.168.43.2
Enter index of the interface:2
now scanning /home/sk/myshare
[Adding] /home/sk/myshare 0.0
Updating Leader 56f6d5e8-654e-11e7-93c8-08002712f8c1
[Adding] /home/sk/myshare 0.0
connecting to 192.168.43.2:1235 for hashdump
```
If you see output something like the above, you can start using iWant right away.
Similarly, start the iWant service on all systems in the network, assign valid Shared and Download folder locations, and select the network interface card.
The iWant service will keep running in the current Terminal window until you press **CTRL+C** to quit it. You need to open a new tab or new Terminal window to use iWant.
iWant usage is very simple. It has a few commands, as listed below.
* **iwanto start** Starts iWant server.
* **iwanto search <name>** Search for files.
* **iwanto download <hash>** Download a file.
* **iwanto share <path>** Change the Shared folder's location.
* **iwanto download to <destination>** Change the Download folder's location.
* **iwanto view config** View Shared and Download folders.
* **iwanto version** Displays the iWant version.
* **iwanto -h** Displays the help section.
Allow me to show you some examples.
**Search files**
To search for a file, run:
```
$ iwanto search <filename>
```
Please note that you don't need to specify the exact name.
Example:
```
$ iwanto search command
```
The above command will search for any files that contain the string “command”.
Sample output from my Ubuntu system:
```
Filename Size Checksum
------------------------------------------- ------- --------------------------------
/home/sk/myshare/THE LINUX COMMAND LINE.pdf 3.85757 efded6cc6f34a3d107c67c2300459911
```
**Download files**
You can download files from any system on your network. To download a file, just mention the hash (checksum) of the file as shown below. You can get the hash value of a shared file using the “iwanto search” command.
```
$ iwanto download efded6cc6f34a3d107c67c2300459911
```
The file will be saved in your Download location (/home/sk/mydownloads/ in my case).
```
Filename: /home/sk/mydownloads/THE LINUX COMMAND LINE.pdf
Size: 3.857569 MB
```
**View configuration**
To view the configuration, i.e. the Shared and Download folders, run:
```
$ iwanto view config
```
Sample output:
```
Shared folder:/home/sk/myshare
Download folder:/home/sk/mydownloads
```
**Change Shared and Download folders location**
You can change the Shared folder and Download folder location to some other path like below.
```
$ iwanto share /home/sk/ostechnix
```
Now, the Shared folder location has been changed to /home/sk/ostechnix.
Also, you can change the Downloads location using command:
```
$ iwanto download to /home/sk/Downloads
```
To view the changes made, run the config command:
```
$ iwanto view config
```
**Stop iWant**
Once you are done with iWant, you can quit it by pressing **CTRL+C**.
If it is not working for any reason, it might be that a firewall is in the way or that your router doesn't support multicast. You can view all logs in the **~/.iwant/.iwant.log** file. For more details, refer to the project's GitHub page provided at the end.
And, that's all. Hope this tool helps. I will be here again with another interesting guide. Till then, stay tuned with OSTechNix!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/iwant-decentralized-peer-peer-file-sharing-commandline-application/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/easy-fast-way-share-files-internet-command-line/
[2]:https://www.ostechnix.com/psitransfer-simple-open-source-self-hosted-file-sharing-solution/

View File

@ -1,3 +1,4 @@
FSSlc translating
5 open source fonts ideal for programmers
======
@ -102,7 +103,7 @@ Whichever typeface you select, you will most likely spend hours each day immerse
via: https://opensource.com/article/17/11/how-select-open-source-programming-font
作者:[Andrew Lekashman][a]
译者:[译者ID](https://github.com/译者ID)
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The shell scripting trap)
[#]: via: (https://arp242.net/weblog/shell-scripting-trap.html)
[#]: author: (Martin Tournoij https://arp242.net/)
The shell scripting trap
======
Shell scripting is great. It is amazingly simple to create something very useful. Even a simple no-brainer command such as:
```
# Official way of naming Go-related things:
$ grep -i ^go /usr/share/dict/american-english /usr/share/dict/british /usr/share/dict/british-english /usr/share/dict/catala /usr/share/dict/catalan /usr/share/dict/cracklib-small /usr/share/dict/finnish /usr/share/dict/french /usr/share/dict/german /usr/share/dict/italian /usr/share/dict/ngerman /usr/share/dict/ogerman /usr/share/dict/spanish /usr/share/dict/usa /usr/share/dict/words | cut -d: -f2 | sort -R | head -n1
goldfish
```
Takes several lines of code and a lot more brainpower in many programming languages. For example in Ruby:
```
puts(Dir['/usr/share/dict/*-english'].map do |f|
File.open(f)
.readlines
.select { |l| l[0..1].downcase == 'go' }
end.flatten.sample.chomp)
```
The Ruby version isn't that long, or even especially complicated. But the shell script version was so simple that I didn't even need to actually test it to make sure it is correct, whereas I did have to test the Ruby version to ensure I didn't make a mistake. It's also twice as long and looks a lot more dense.
This is why people use shell scripts: it's so easy to make something useful. Here's another example:
```
curl https://nl.wikipedia.org/wiki/Lijst_van_Nederlandse_gemeenten |
grep '^<li><a href=' |
sed -r 's|<li><a href="/wiki/.+" title=".+">(.+)</a>.*</li>|\1|' |
grep -Ev '(^Tabel van|^Lijst van|Nederland)'
```
This gets a list of all Dutch municipalities. I actually wrote this as a quick one-shot script to populate a database years ago, but it still works fine today, and it took me a minimum of effort to make it. Doing this in e.g. Ruby would take a lot more effort.
But there's a downside: as your script grows it will become increasingly harder to maintain, yet you don't really want to rewrite it in something else, as you've already spent so much time on the shell script version.
This is what I call the shell script trap, which is a special case of the [sunk cost fallacy][1].
And many scripts do grow beyond their original intended size, and often you will spend a lot more time than you should on “fixing that one bug”, or “adding just one small feature”. Rinse, repeat.
If you had written it in Python or Ruby or another similar language from the start, you would have spent some more time writing the original version, but would have spent much less time maintaining it, while almost certainly having fewer bugs.
Take my [packman.vim][2] script for example. It started out as a simple `for` loop over all directories and a `git pull` and has grown from there. At about 200 lines it's hardly the most complex script, but had I written it in Go as I originally planned then it would have been much easier to add support for printing out the status or cloning new repos from a config file. It would also be almost trivial to add support for parallel clones, which is hard (though not impossible) to do correctly in a shell script. In hindsight, I would have saved time, and gotten a better result to boot.
I regret writing most shell scripts I've written for similar reasons, and my 2018 New Year's pledge will be to not write any more.
#### Appendix: the problems
And to be clear, shell scripting does come with some real limitations. Some examples:
* Dealing with filenames that contain spaces or other special characters requires careful attention to detail. The vast majority of scripts get this wrong, even when written by experienced authors who care about such things (e.g. me), because it's so easy to do it wrong. [Adding quotes is not enough][3].
* There are many “right” and “wrong” ways to do things. Should you use `which` or `command`? Should you use `$@` or `$*`, and should that be quoted? Should you use `cmd $arg` or `cmd "$arg"`? etc. etc. (see the sketch after this list).
* You cannot store any NULL bytes (0x00) in variables; it is very hard to make shell scripts deal with binary data.
* While you can make something very useful very quickly, implementing more complex algorithms can be very painful, if not nigh-impossible, even when using the ksh/zsh/bash extensions. My ad-hoc HTML parsing in the example above was okay for a quick one-off script, but you really don't want to do things like that in a production script.
* It can be hard to write shell scripts that work well on all platforms. `/bin/sh` could be `dash` or `bash`, and will behave differently. External tools such as `grep`, `sed`, etc. may or may not support certain flags. Are you sure that your script works on all versions (past, present, and future) of Linux, macOS, and Windows equally well?
* Debugging shell scripts can be hard, especially as the syntax can get fairly obscure quite fast, and not everyone is equally well versed in shell scripting.
* Error handling can be tricky (check `$?` or `set -e`), and doing something more advanced beyond “an error occurred” is practically impossible.
* Undefined variables are not an error unless you use `set -u`, leading to “fun stuff” like `rm -r ~/$undefined` deleting a user's home dir ([not a theoretical problem][4]).
* Everything is a string. Some shells add arrays, which works but the syntax is obscure and ugly. Numeric computations with fractions remain tricky and rely on external tools such as `bc` or `dc` (`$(( .. ))` expansion only works for integers).
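Here is a minimal sketch of two of the pitfalls above (the variable name is made up for illustration):
```
#!/bin/sh
set -u                      # treat unset variables as hard errors

# "$@" preserves each argument intact; unquoted $* re-splits on whitespace
printf 'arg: %s\n' "$@"

# ${var:-default} guards paths built from possibly-unset variables
echo "cleaning ${scratch_dir:-/tmp/scratch}"
```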
**Feedback**
You can mail me at [martin@arp242.net][5] or [create a GitHub issue][6] for feedback, questions, etc.
--------------------------------------------------------------------------------
via: https://arp242.net/weblog/shell-scripting-trap.html
作者:[Martin Tournoij][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: https://youarenotsosmart.com/2011/03/25/the-sunk-cost-fallacy/
[2]: https://github.com/Carpetsmoker/packman.vim
[3]: https://dwheeler.com/essays/filenames-in-shell.html
[4]: https://github.com/ValveSoftware/steam-for-linux/issues/3671
[5]: mailto:martin@arp242.net
[6]: https://github.com/Carpetsmoker/arp242.net/issues/new

View File

@ -1,101 +0,0 @@
tomjlw is translating
Rediscovering make: the power behind rules
======
![](https://user-images.githubusercontent.com/4419992/35015638-0529f1c0-faf4-11e7-9801-4995fc4b54f0.jpg)
I used to think makefiles were just a convenient way to list groups of shell commands; over time I've learned how powerful, flexible, and full-featured they are. This post brings to light some of those features related to rules.
### Rules
Rules are instructions that tell `make` how and when to build a file called the target. The target can depend on other files called prerequisites.
You instruct `make` how to build the target in the recipe, which is no more than a set of shell commands to be executed, one at a time, in the order they appear. The syntax looks like this:
```
target_name : prerequisites
recipe
```
Once you have defined a rule, you can build the target from the command line by executing:
```
$ make target_name
```
Once the target is built, `make` is smart enough to not run the recipe ever again unless at least one of the prerequisites has changed.
### More on prerequisites
Prerequisites indicate two things:
* When the target should be built: if a prerequisite is newer than the target, `make` assumes that the target should be built.
* An order of execution: since prerequisites can, in turn, be built by another rule on the makefile, they also implicitly set an order on which rules are executed.
If you want to define an order, but you don't want to rebuild the target if the prerequisite changes, you can use a special kind of prerequisite called order-only, which can be placed after the normal prerequisites, separated by a pipe (`|`).
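A sketch of the classic use case (the file layout is assumed for illustration): object files need the `objs` directory to exist, but should not be rebuilt whenever the directory's timestamp changes:
```
# note: recipe lines must start with a literal tab
objs/%.o : %.c | objs
	$(CC) -c -o $@ $<

objs :
	mkdir -p objs
```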
### Patterns
For convenience, `make` accepts patterns for targets and prerequisites. A pattern is defined by including the `%` character, a wildcard that matches any number of literal characters or an empty string. Here are some examples:
* `%`: match any file
* `%.md`: match all files with the `.md` extension
* `prefix%.go`: match all files that start with `prefix` that have the `.go` extension
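Putting the `%.md` pattern to work, here is a small sketch (the choice of `pandoc` as converter is an assumption for illustration) that renders any Markdown file to HTML:
```
# note: the recipe line must start with a literal tab
%.html : %.md
	pandoc -o $@ $<
```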
### Special targets
There's a set of target names that have special meaning for `make` called special targets.
You can find the full list of special targets in the [documentation][1]. As a rule of thumb, special targets start with a dot followed by uppercase letters.
Here are a few useful ones:
**.PHONY** : Tells `make` that the prerequisites of this target are considered to be phony targets, which means that `make` will always run its recipe regardless of whether a file with that name exists or what its last-modification time is.
**.DEFAULT** : Used for any target for which no rules are found.
**.IGNORE** : If you specify prerequisites for `.IGNORE`, `make` will ignore errors in execution of their recipes.
### Substitutions
Substitutions are useful when you need to modify the value of a variable with alterations that you specify.
A substitution has the form `$(var:a=b)` and its meaning is to take the value of the variable `var`, replace every `a` at the end of a word with `b` in that value, and substitute the resulting string. For example:
```
foo := a.o
bar := $(foo:.o=.c) # sets bar to a.c
```
note: special thanks to [Luis Lavena][2] for letting me know about the existence of substitutions.
### Archive Files
Archive files are used to collect multiple data files together into a single file (same concept as a zip file); they are built with the `ar` Unix utility. `ar` can be used to create archives for any purpose, but has been largely replaced by `tar` for purposes other than [static libraries][3].
In `make`, you can use an individual member of an archive file as a target or prerequisite as follows:
```
archive(member) : prerequisite
recipe
```
### Final Thoughts
There's a lot more to discover about make, but at least this counts as a start. I strongly encourage you to check the [documentation][4], create a dumb makefile, and just play with it.
--------------------------------------------------------------------------------
via: https://monades.roperzh.com/rediscovering-make-power-behind-rules/
作者:[Roberto Dip][a]
译者:[tomjlw](https://github.com/tomjlw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://monades.roperzh.com
[1]:https://www.gnu.org/software/make/manual/make.html#Special-Targets
[2]:https://twitter.com/luislavena/
[3]:http://tldp.org/HOWTO/Program-Library-HOWTO/static-libraries.html
[4]:https://www.gnu.org/software/make/manual/make.html

View File

@ -1,223 +0,0 @@
arrowfeng is translating
Rancher - Container Management Application
======
Docker is cutting-edge containerization software, used in most IT companies to reduce infrastructure costs.
By default Docker comes without any GUI. That is easy for a Linux administrator to manage, but very difficult for developers, and once it comes to production it gets difficult for Linux admins too. So, what would be the best solution to manage Docker without any trouble?
The answer is a GUI. The Docker API allows third-party applications to interface with Docker, and there are many Docker GUI applications available on the market. We already wrote an article about the Portainer application; today we are going to discuss Rancher.
Containers make software development easier, enabling you to write code faster and run it better. However, running containers in production can be hard.
**Suggested Read :** [Portainer A Simple Docker Management GUI][1]
### What is Rancher
[Rancher][2] is a complete container management platform that makes it easy to deploy and run containers in production on any infrastructure. It provides infrastructure services such as multi-host networking, global and local load balancing, and volume snapshots. It integrates native Docker management capabilities such as Docker Machine and Docker Swarm. It offers a rich user experience that enables devops admins to operate Docker in production at large scale.
Navigate to following article for docker installation on Linux.
**Suggested Read :**
**(#)** [How to install Docker in Linux][3]
**(#)** [How to play with Docker images on Linux][4]
**(#)** [How to play with Docker containers on Linux][5]
**(#)** [How to Install, Run Applications inside Docker Containers][6]
### Rancher Features
* Set up Kubernetes in two minutes
* Launch apps with single click (90 popular Docker applications)
* Deploy and manage Docker easily
* complete container management platform for production environment
* Quickly deploy containers in production
* Automate container deployment and operations with a robust technology
* Modular infrastructure services
* Rich set of orchestration tools
* Rancher supports multiple authentication mechanisms
### How to install Rancher
Rancher installation is very simple since it runs as a set of lightweight Docker containers. Running Rancher is as simple as launching two containers: one container as the management server and another container on a node as an agent. Simply run the following commands to deploy Rancher on Linux.
The Rancher server image is offered under two different tags, `stable` and `latest`. The commands below will pull the corresponding Rancher image and install it on your system. It will only take a couple of minutes for the Rancher server to start up.
* `latest` : This tag points to the latest development builds. These builds have been validated through Rancher's CI automation framework, but they are not advisable for deployment in production.
* `stable` : The latest stable release version, which is recommended for production environments.
Rancher can be installed in several ways. In this tutorial we are going to discuss two variants.
* Install rancher server in a single container (Inbuilt Rancher Database)
* Install rancher server in a single container (External Database)
### Method-1
Run one of the following commands (depending on the tag you want) to install the Rancher server in a single container with the inbuilt Rancher database.
```
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:latest
```
### Method-2
Instead of using the internal database that comes with Rancher server, you can start Rancher server pointing to an external database. First, create the required database and a database user for it:
```
> CREATE DATABASE IF NOT EXISTS cattle COLLATE = 'utf8_general_ci' CHARACTER SET = 'utf8';
> GRANT ALL ON cattle.* TO 'cattle'@'%' IDENTIFIED BY 'cattle';
> GRANT ALL ON cattle.* TO 'cattle'@'localhost' IDENTIFIED BY 'cattle';
```
Run the following command to start Rancher connecting to an external database.
```
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server \
--db-host myhost.example.com --db-port 3306 --db-user username --db-pass password --db-name cattle
```
If you want to test Rancher 2.0, use the following command to start it:
```
$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/server:preview
```
### Access & Setup Rancher Through GUI
Navigate to the following URL `http://hostname:8080` or `http://server_ip:8080` to access rancher GUI.
[![][7]![][7]][8]
### How To Register the Host
Register your host URL, which allows hosts to connect to the Rancher API. This is a one-time setup.
To do so, click the “Add a Host” link under the main menu, or go to Infrastructure >> Add Hosts, then hit the `Save` button.
[![][7]![][7]][9]
By default, access control authentication is disabled in Rancher, so first we have to enable it through one of the available methods; otherwise anyone can access the GUI.
Go to Admin >> Access Control, input the following values, and finally hit the `Enable Authentication` button to enable it. In my case, I'm enabling it via `local authentication`.
* **`Login UserName`** Input your desired login username
* **`Full Name`** Input your full name
* **`Password`** Input your desired password
* **`Confirm Password`** Confirm the password once again
[![][7]![][7]][10]
Log out and log back in with your new credentials.
[![][7]![][7]][11]
Now I can see that local authentication is enabled.
[![][7]![][7]][12]
### How To Add Hosts
After registering your host, you will be taken to the next page, where you can choose Linux machines from various cloud providers. We are going to add the host that is running the Rancher server, so select the `custom` option and input the required information.
Enter your server's public IP address in the 4th step, run the command displayed in the 5th step in your terminal, and finally hit the `Close` button.
```
$ sudo docker run -e CATTLE_AGENT_IP="192.168.1.112" --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.9 http://192.168.1.112:8080/v1/scripts/3F8217A1DCF01A7B7F8A:1514678400000:D7WeLUcEUnqZOt8rWjrvoaUE
INFO: Running Agent Registration Process, CATTLE_URL=http://192.168.1.112:8080/v1
INFO: Attempting to connect to: http://66.70.189.137:8080/v1
INFO: http://192.168.1.112:8080/v1 is accessible
INFO: Inspecting host capabilities
INFO: Boot2Docker: false
INFO: Host writable: true
INFO: Token: xxxxxxxx
INFO: Running registration
INFO: Printing Environment
INFO: ENV: CATTLE_ACCESS_KEY=A35151AB87C15633DFB4
INFO: ENV: CATTLE_AGENT_IP=192.168.1.112
INFO: ENV: CATTLE_HOME=/var/lib/cattle
INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken
INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_URL=http://192.168.1.112:8080/v1
INFO: ENV: DETECTED_CATTLE_AGENT_IP=172.17.0.1
INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.9
INFO: Deleting container rancher-agent
INFO: Launched Rancher Agent: 3415a1fd101f3c57d9cff6aef373c0ce66a3e20772122d2ca832039dcefd92fd
```
[![][7]![][7]][13]
Wait a few seconds and the newly added host will be visible. To see it, go to the Infrastructure >> Hosts page.
[![][7]![][7]][14]
### How To View Containers
To view a list of running containers, go to Infrastructure >> Containers.
[![][7]![][7]][15]
### How To Create Container
It's very simple: just navigate to the following location to create a container.
Go to Infrastructure >> Containers >> “Add Container” and input the required information as per your requirements. To test this, I'm going to create a CentOS container with the latest OS image.
[![][7]![][7]][16]
The new container is then listed under Infrastructure >> Containers.
[![][7]![][7]][17]
Click on the container's name to view its performance information, such as CPU, memory, network, and storage.
[![][7]![][7]][18]
To manage a container (stop, start, clone, restart, etc.), choose the particular container and hit the three-dots button on its left, or the `Actions` button.
[![][7]![][7]][19]
If you want console access to the container, just hit the `Execute Shell` option in the actions menu.
[![][7]![][7]][20]
### How To Deploy Container From Application Catalog
Rancher provides a catalog of application templates that make it easy to deploy applications in a single click. It maintains nearly 90 popular applications contributed by the Rancher community.
[![][7]![][7]][21]
Go to Catalog >> All, choose the required application, and finally hit the “Launch” button to deploy it.
[![][7]![][7]][22]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/rancher-a-complete-container-management-platform-for-production-environment/
作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/portainer-a-simple-docker-management-gui/
[2]:http://rancher.com/
[3]:https://www.2daygeek.com/install-docker-on-centos-rhel-fedora-ubuntu-debian-oracle-archi-scentific-linux-mint-opensuse/
[4]:https://www.2daygeek.com/list-search-pull-download-remove-docker-images-on-linux/
[5]:https://www.2daygeek.com/create-run-list-start-stop-attach-delete-interactive-daemonized-docker-containers-on-linux/
[6]:https://www.2daygeek.com/install-run-applications-inside-docker-containers/
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[8]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-1.png
[9]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-2.png
[10]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-3.png
[11]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-3a.png
[12]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-4.png
[13]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-5.png
[14]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-6.png
[15]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-7.png
[16]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-8.png
[17]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-9.png
[18]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-10.png
[19]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-11.png
[20]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-12.png
[21]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-13.png
[22]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-14.png

View File

@ -1,3 +1,5 @@
translating by robsean
12 Best GTK Themes for Ubuntu and other Linux Distributions
======
**Brief: Lets have a look at some of the beautiful GTK themes that you can use not only in Ubuntu but other Linux distributions that use GNOME.**

View File

@ -1,137 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Oomox Customize And Create Your Own GTK2, GTK3 Themes)
[#]: via: (https://www.ostechnix.com/oomox-customize-and-create-your-own-gtk2-gtk3-themes/)
[#]: author: (EDITOR https://www.ostechnix.com/author/editor/)
Oomox Customize And Create Your Own GTK2, GTK3 Themes
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-720x340.png)
Theming and visual customization is one of the main advantages of Linux. Since all the code is open, you can change how your Linux system looks and behaves to a greater degree than you ever could with Windows/Mac OS. GTK theming is perhaps the most popular way in which people customize their Linux desktops. The GTK toolkit is used by a wide variety of desktop environments like GNOME, Cinnamon, Unity, XFCE, and Budgie. This means that a single theme made for GTK can be applied to any of these desktop environments with little change.
There are a lot of very high quality popular GTK themes out there, such as **Arc** , **Numix** , and **Adapta**. But if you want to customize these themes and create your own visual design, you can use **Oomox**.
Oomox is a graphical app for customizing and creating your own GTK theme, complete with your own color, icon, and terminal style. It comes with several presets, which you can apply on a Numix, Arc, or Materia style theme to create your own GTK theme.
### Installing Oomox
On Arch Linux and its variants:
Oomox is available on [**AUR**][1], so you can install it using any AUR helper programs like [**Yay**][2].
```
$ yay -S oomox
```
On Debian/Ubuntu/Linux Mint, download the `oomox.deb` package from [**here**][3] and install it as shown below. As of writing this guide, the latest version was **oomox_1.7.0.5.deb**.
```
$ sudo dpkg -i oomox_1.7.0.5.deb
$ sudo apt install -f
```
On Fedora, Oomox is available in third-party **COPR** repository.
```
$ sudo dnf copr enable tcg/themes
$ sudo dnf install oomox
```
Oomox is also available as a [**Flatpak app**][4]. Make sure you have installed Flatpak as described in [**this guide**][5]. And then, install and run Oomox using the following commands:
```
$ flatpak install flathub com.github.themix_project.Oomox
$ flatpak run com.github.themix_project.Oomox
```
For other Linux distributions, go to the Oomox project page (Link is given at the end of this guide) on Github and compile and install it manually from source.
### Customize And Create Your Own GTK2, GTK3 Themes
**Theme Customization**
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-1-1.png)
You can change the colour of practically every UI element, like:
1. Headers
2. Buttons
3. Buttons inside Headers
4. Menus
5. Selected Text
To the left, there are a number of presets, like the Cars theme, modern themes like Materia and Numix, and retro themes. Then, at the top of the main window, there's an option called **Theme Style**, which lets you set the overall visual style of the theme. You can choose from Numix, Arc, and Materia.
With certain styles like Numix, you can even change things like the Header Gradient, Outline Width and Panel Opacity. You can also add a Dark Mode for your theme that will be automatically created from the default theme.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-2.png)
**Iconset Customization**
You can customize the iconset that will be used for the theme icons. There are two options: Gnome Colors and Archdroid. You can change the base and stroke colours of the iconset.
**Terminal Customization**
You can also customize the terminal colours. The app has several presets for this, but you can customize the exact colour code for each colour value, like red, green, black, and so on. You can also auto-swap the foreground and background colours.
**Spotify Theme**
A unique feature this app has is that you can theme the Spotify app to your liking. You can change the foreground, background, and accent color of the Spotify app to match the overall GTK theme.
Then, just press the **Apply Spotify Theme** button, and you'll get this window:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-3.png)
Just hit apply, and you're done.
**Exporting your Theme**
Once you're done customizing the theme to your liking, you can rename it by clicking the rename button at the top left:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-4.png)
And then, just hit **Export Theme** to export the theme to your system.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-5.png)
You can also export just the iconset or the terminal theme.
After this, you can open any Visual Customization app for your Desktop Environment, like Tweaks for Gnome based DEs, or the **XFCE Appearance Settings** , and select your exported GTK and Shell theme.
### Verdict
If you are a Linux theme junkie, and you know exactly how each button and each header in your system should look, Oomox is worth a look. For extreme customizers, it lets you change virtually everything about how your system looks. For people who just want to tweak an existing theme a little bit, it has many, many presets so you can get what you want without a lot of effort.
Have you tried it? What are your thoughts on Oomox? Put them in the comments below!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/oomox-customize-and-create-your-own-gtk2-gtk3-themes/
作者:[EDITOR][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[1]: https://aur.archlinux.org/packages/oomox/
[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[3]: https://github.com/themix-project/oomox/releases
[4]: https://flathub.org/apps/details/com.github.themix_project.Oomox
[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/

View File

@ -1,214 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Auk7F7)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: subject: (Arch-Wiki-Man A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline)
[#]: via: (https://www.2daygeek.com/arch-wiki-man-a-tool-to-browse-the-arch-wiki-pages-as-linux-man-page-from-offline/)
[#]: author: ([Prakash Subramanian](https://www.2daygeek.com/author/prakash/))
[#]: url: ( )
Arch-Wiki-Man A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline
======
Getting internet access is not a big deal nowadays, yet it is not always available; for all the growth in technology, there are still gaps everywhere.
Whenever you search for anything about most Linux distributions, you usually get third-party links first; but for Arch Linux, the Arch Wiki page tops your results every time.
That is because the Arch Wiki has most of the solutions, more so than third-party websites.
So far, you might have used a web browser to find solutions for your Arch Linux system, but you no longer need to.
There is a command line utility called arch-wiki-man that performs this task in a much faster way. If you are an Arch Linux lover, I would suggest you read the **[Arch Linux Post Installation guide][1]**, which helps you tweak your system for day-to-day use.
### What is arch-wiki-man?
The [arch-wiki-man][2] tool allows users to search the Arch Wiki pages right from the command line (CLI), instantly and without an internet connection. It lets you access and search the entire wiki as Linux man pages.
Also, there is no need to switch to a GUI. Updates are pushed automatically every two days, so your local copy of the Arch Wiki pages will be up to date. The tool's name is `awman`, which stands for Arch Wiki Man.
We have already written about a similar tool called the **[Arch Wiki Command Line Utility][3]** (arch-wiki-cli), which allows users to search the Arch Wiki from the command line, but note that it requires an internet connection.
### How to Install arch-wiki-man tool?
The arch-wiki-man utility is available in the AUR repository, so we need an AUR helper to install it. There are many AUR helpers available; we have written articles about the **[Yaourt AUR helper][4]** and the **[Packer AUR helper][5]**, which are very popular.
```
$ yaourt -S arch-wiki-man
or
$ packer -S arch-wiki-man
```
Alternatively, we can install it using the npm package manager. Make sure you have installed **[NodeJS][6]** on your system, then run the following command to install it:
```
$ npm install -g arch-wiki-man
```
### How to Update the local Arch Wiki copy?
As mentioned previously, updates are pushed automatically every two days; an update can also be triggered manually by running the following command.
```
$ sudo awman-update
[sudo] password for daygeek:
[email protected] /usr/lib/node_modules/arch-wiki-man
└── [email protected]
arch-wiki-md-repo has been successfully updated or reinstalled.
```
awman-update is the faster and more convenient method to get updates. However, you can also get updates by reinstalling the package using the following command:
```
$ yaourt -S arch-wiki-man
or
$ packer -S arch-wiki-man
```
### How to Use Arch Wiki from command line?
The interface is very simple and easy to use. To search for anything, just run `awman` followed by the search term. The general syntax is as follows:
```
$ awman Search-Term
```
### How to Search Multiple Matches?
If you would like to list all result titles containing the string `installation`, run the following command. If the output contains multiple results, you will get a selection menu to navigate between items.
```
$ awman installation
```
![][8]
Detailed page screenshot.
![][9]
### Search a given string in Titles & Descriptions
The `-d` or `--desc-search` option allows users to search for a given string in titles and descriptions.
```
$ awman -d mirrors
or
$ awman --desc-search mirrors
? Select an article: (Use arrow keys)
[1/3] Mirrors: Related articles
[2/3] DeveloperWiki-NewMirrors: Contents
[3/3] Powerpill: Powerpill is a pac
```
### Search a given string in Contents
The `-k` or `--apropos` option allows users to search for a given string in page contents as well. Note that this option significantly slows down your search, as it scans the entire wiki page content.
```
$ awman -k openjdk
or
$ awman --apropos openjdk
? Select an article: (Use arrow keys)
[1/26] Hadoop: Related articles
[2/26] XDG Base Directory support: Related articles
[3/26] Steam-Game-specific troubleshooting: See Steam/Troubleshooting first.
[4/26] Android: Related articles
[5/26] Elasticsearch: Elasticsearch is a search engine based on Lucene. It provides a distributed, mul..
[6/26] LibreOffice: Related articles
[7/26] Browser plugins: Related articles
(Move up and down to reveal more choices)
```
### Open the search results in a web browser
The `-w` or `--web` option allows users to open the search results in a web browser.
```
$ awman -w AUR helper
or
$ awman --web AUR helper
```
![][10]
### Search in other languages
The `-l` or `--lang` option allows users to search in languages other than English. To see the list of supported languages, run the following command.
```
$ awman --list-languages
arabic
bulgarian
catalan
chinesesim
chinesetrad
croatian
czech
danish
dutch
english
esperanto
finnish
greek
hebrew
hungarian
indonesian
italian
korean
lithuanian
norwegian
polish
portuguese
russian
serbian
slovak
spanish
swedish
thai
ukrainian
```
Run the awman command with your preferred language to see results in a language other than English.
```
$ awman -l chinesesim deepin
```
![][11]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/arch-wiki-man-a-tool-to-browse-the-arch-wiki-pages-as-linux-man-page-from-offline/
作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/prakash/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/arch-linux-post-installation-30-things-to-do-after-installing-arch-linux/
[2]: https://github.com/greg-js/arch-wiki-man
[3]: https://www.2daygeek.com/search-arch-wiki-website-command-line-terminal/
[4]: https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
[5]: https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/
[6]: https://www.2daygeek.com/install-nodejs-on-ubuntu-centos-debian-fedora-mint-rhel-opensuse/
[7]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[8]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-1.png
[9]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-2.png
[10]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-3.png
[11]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-4.png

View File

@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Take to the virtual skies with FlightGear)
[#]: via: (https://opensource.com/article/19/1/flightgear)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
Take to the virtual skies with FlightGear
======
Dreaming of piloting a plane? Try open source flight simulator FlightGear.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/flightgear_cockpit_plane_sky.jpg?itok=LRy0lpOS)
If you've ever dreamed of piloting a plane, you'll love [FlightGear][1]. It's a full-featured, [open source][2] flight simulator that runs on Linux, MacOS, and Windows.
The FlightGear project began in 1996 due to dissatisfaction with commercial flight simulation programs, which were not scalable. Its goal was to create a sophisticated, robust, extensible, and open flight simulator framework for use in academia and pilot training or by anyone who wants to play with a flight simulation scenario.
### Getting started
FlightGear's hardware requirements are fairly modest, including an accelerated 3D video card that supports OpenGL for smooth framerates. It runs well on my Linux laptop with an i5 processor and only 4GB of RAM. Its documentation includes an [online manual][3]; a [wiki][4] with portals for [users][5] and [developers][6]; and extensive tutorials (such as one for its default aircraft, the [Cessna 172p][7]) to teach you how to operate it.
It's easy to install on both [Fedora][8] and [Ubuntu][9] Linux. Fedora users can consult the [Fedora installation page][10] to get FlightGear running.
On Ubuntu 18.04, I had to install a repository:
```
$ sudo add-apt-repository ppa:saiarcot895/flightgear
$ sudo apt-get update
$ sudo apt-get install flightgear
```
Once the installation finished, I launched it from the GUI, but you can also launch the application from a terminal by entering:
```
$ fgfs
```
### Configuring FlightGear
The menu on the left side of the application window provides configuration options.
![](https://opensource.com/sites/default/files/uploads/flightgear_menu.png)
**Summary** returns you to the application's home screen.
**Aircraft** shows the aircraft you have installed and offers the option to install up to 539 other aircraft available in FlightGear's default "hangar." I installed a Cessna 150L, a Piper J-3 Cub, and a Bombardier CRJ-700. Some of the aircraft (including the CRJ-700) have tutorials to teach you how to fly a commercial jet; I found the tutorials informative and accurate.
![](https://opensource.com/sites/default/files/uploads/flightgear_aircraft.png)
To select an aircraft to pilot, highlight it and click on **Fly!** at the bottom of the menu. I chose the default Cessna 172p and found the cockpit depiction extremely accurate.
![](https://opensource.com/sites/default/files/uploads/flightgear_cockpit-view.png)
The default airport is Honolulu, but you can change it in the **Location** menu by providing your favorite airport's [ICAO airport code][11] identifier. I found some small, local, non-towered airports like Olean and Dunkirk, New York, as well as larger airports including Buffalo, O'Hare, and Raleigh—and could even choose a specific runway.
Under **Environment** , you can adjust the time of day, the season, and the weather. The simulation includes advance weather modeling and the ability to download current weather from [NOAA][12].
**Settings** provides an option to start the simulation in Paused mode by default. Also in Settings, you can select multi-player mode, which allows you to "fly" with other players on FlightGear supporters' global network of servers that allow for multiple users. You must have a moderately fast internet connection to support this functionality.
The **Add-ons** menu allows you to download aircraft and additional scenery.
### Take flight
To "fly" my Cessna, I used a Logitech joystick that worked well. You can calibrate your joystick using an option in the **File** menu at the top.
Overall, I found the simulation very accurate and think the graphics are great. Try FlightGear yourself — I think you will find it a very fun and complete simulation package.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/1/flightgear
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: http://home.flightgear.org/
[2]: http://wiki.flightgear.org/GNU_General_Public_License
[3]: http://flightgear.sourceforge.net/getstart-en/getstart-en.html
[4]: http://wiki.flightgear.org/FlightGear_Wiki
[5]: http://wiki.flightgear.org/Portal:User
[6]: http://wiki.flightgear.org/Portal:Developer
[7]: http://wiki.flightgear.org/Cessna_172P
[8]: http://rpmfind.net/linux/rpm2html/search.php?query=flightgear
[9]: https://launchpad.net/~saiarcot895/+archive/ubuntu/flightgear
[10]: https://apps.fedoraproject.org/packages/FlightGear/
[11]: https://en.wikipedia.org/wiki/ICAO_airport_code
[12]: https://www.noaa.gov/

View File

@ -0,0 +1,147 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VA Linux: The Linux Company That Once Ruled NASDAQ)
[#]: via: (https://itsfoss.com/story-of-va-linux/)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
VA Linux: The Linux Company That Once Ruled NASDAQ
======
This is our first article in the Linux and open source history series. We will be covering more trivia, anecdotes and other nostalgic events from the past.
In its time, _VA Linux_ was indeed a crusade to free the world from Microsoft's domination.
In a historic incident in December 1999, the shares of a newly listed firm skyrocketed from just $30 to a whopping $239 within a day of its [IPO][1]! It was a record-breaking development that day.
The company was _VA Linux_, a firm of only 200 employees built on the idea of deploying Intel hardware with Linux and FOSS, and it had begun a fantastic journey [taking on the likes of Sun and Dell][2].
It traded under the symbol LNUX and gained around 700 percent on its first day of trading. But hardly a year later, the [LNUX stock was selling below $9 per share][3].
How did a successful Linux-based company end up a subsidiary of [Gamestop][4], a gaming retailer?
Let us look back at the highs and lows of this record-breaking Linux corporation through a brief history.
### How did it all actually begin?
In 1993, a graduate student at Stanford University wanted a powerful workstation but could not afford an expensive [Sun][5] workstation, which sold for around $7,000 per system at the time.
So, he decided to build one on his own ([DIY][6] [FTW][7]!). Using an Intel 486 chip running at just 33 megahertz, he installed Linux and ended up with a machine that was twice as fast as Sun's but at a much lower price tag: $2,000.
That student was none other than _VA Research_ founder [Larry Augustin][8], whose idea was loved by many on the Stanford campus at that exciting time. People started buying machines with similar configurations from him and his friend and co-founder, James Vera. This is how _VA Research_ was formed.
![VA Linux founder, Larry Augustin][9]
> Once software goes into the GPL, you cant take it back. People can stop contributing, but the code that exists, people can continue to develop on it.
>
> Without a doubt, a futuristic quote from VA Linux founder, Larry Augustin, 10 years ago | Read the whole interview [here][10]
#### Some screenshots of their web domains from the early days
![Linux Powered Machines on sale on varesearch.com | July 15, 1997][11]
![varesearch.com reveals emerging growth | February 16, 1998][12]
![On June 26, 2001, they transitioned from hardware to software | valinux.com as on June 22, 2001][13]
### The spectacular rise and the devastating fall of VA Linux
VA Research had a big year in 1999, perhaps its biggest, acquiring several growing companies and competitors while launching many innovative initiatives. The next year, in 2000, they created a subsidiary in Japan named _VA Linux Systems Japan K.K._ They were at their peak that year.
After they transitioned completely from hardware to software, stock prices began to fall drastically from 2002 onward. It all happened because of slower-than-expected sales growth from new customers in the dot-com sector. In later years they sold off a few brands, and top employees resigned in 2010.
Gamestop finally [acquired][14] Geeknet Inc. (the new name of VA Linux) for $140 million on June 2, 2015.
In case you're curious for a detailed chronicle, I have separately created this [timeline][15], highlighting events year-wise.
![Image Credit: Wikipedia][16]
### What happened to VA Linux afterward?
Geeknet, owned by Gamestop, is now an online retailer for the global geek community under the [ThinkGeek][17] brand.
SourceForge and Slashdot were what still kept them linked to Linux and open source, until _Dice Holdings_ acquired Slashdot, SourceForge, and Freecode.
An [article][18] from 2016 sadly quotes in its final paragraph:
> “Being acquired by a company that caters to gamers and does not have anything in particular to do with open source software may be a lackluster ending for what was once a spectacularly valuable Linux business.”
Did we note Linux and Gamers? Does Linux really not have anything to do with Gaming? Are these two terms really so far apart? What about [Gaming on Linux][19]? What about [Open Source Games][20]?
How could the stalwarts of _VA Linux_, with years and years of experience in the Linux arena, have contributed to the Linux gaming community? What might have happened had [Valve][21] (currently so [dedicated][22] to Linux gaming) acquired _VA Linux_ instead of Gamestop? Something to ponder.
The seeds of ideas that were planted by _VA Research_ will continue to inspire the Linux and FOSS community because of its significant contributions in the world of Open Source. At _Its FOSS,_ our heartfelt salute goes out to those noble ideas!
Want to feel the nostalgia? Use the [timeline][15] dates with the [Wayback Machine][23] to check out previously owned _VA_ domains like _valinux.com_ or _varesearch.com_ from the past three decades! You can even check _linux.com_, which was once owned by _VA Linux Systems_.
But wait, are we really done here? What happened to the subsidiary named _VA Linux Systems Japan K.K._? Well, it's [a different story there][24], and it is still going strong with the original ideologies of _VA Linux_!
![VA Linux booth circa 2000 | Image Credit: Storem][25]
#### _VA Linux_ Subsidiary Still Operational in Japan!
VA Linux is still operational through its [Japanese subsidiary][26]. It provides the following services:
* Failure Analysis and Support Services: [_VA Quest_][27]
* Entrusted Development Service
* Consulting Service
_VA Quest_, in particular, has since 2005 offered a failure-analysis service for tracking down and dealing with kernel bugs that get in its customers' way. [Tetsuro Yogo][28] took over as the new President and CEO on April 3, 2017. Check out their timeline [here][29]! They are also [on GitHub][30]!
You can also read about a recent development reported on August 2 last year, in this [translated][31] version of a Japanese IT news page. It's an update about _VA Linux_ providing technical support services for the "[Kubernetes][32]" container management software in Japan.
It's good to know that their 18-year-old subsidiary is still doing well in Japan and the name of _VA Linux_ continues to flourish there even today!
What are your views? Do you want to share anything on _VA Linux_? Please let us know in the comments section below.
I hope you liked this first article in the Linux history series. If you know such interesting facts from the past that you would like us to cover here, please let us know.
--------------------------------------------------------------------------------
via: https://itsfoss.com/story-of-va-linux/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Initial_public_offering
[2]: https://www.forbes.com/1999/05/03/feat.html
[3]: https://www.channelfutures.com/open-source/open-source-history-the-spectacular-rise-and-fall-of-va-linux
[4]: https://www.gamestop.com/
[5]: http://www.sun.com/
[6]: https://en.wikipedia.org/wiki/Do_it_yourself
[7]: https://www.urbandictionary.com/define.php?term=FTW
[8]: https://www.linkedin.com/in/larryaugustin/
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-Founder-Larry-Augustin.jpg?ssl=1
[10]: https://www.linuxinsider.com/story/SourceForges-Larry-Augustin-A-Better-Way-to-Build-Web-Apps-62155.html
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-July-15-1997.jpg?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-Feb-16-1998.jpg?ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-com-Snapshot-June-22-2001.jpg?ssl=1
[14]: http://geekgirlpenpals.com/geeknet-parent-company-to-thinkgeek-entered-agreement-with-gamestop/
[15]: https://medium.com/@avimanyu786/a-timeline-of-va-linux-through-the-years-6813e2bd4b13
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/LNUX-stock-fall.png?ssl=1
[17]: https://www.thinkgeek.com/
[18]: https://www.channelfutures.com/open-source/open-source-history-spectacular-rise-and-fall-va-linux
[19]: https://itsfoss.com/linux-gaming-distributions/
[20]: https://en.wikipedia.org/wiki/Open-source_video_game
[21]: https://www.valvesoftware.com/
[22]: https://itsfoss.com/steam-play-proton/
[23]: https://archive.org/web/web.php
[24]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fwww.valinux.co.jp%2Fcorp%2Fstatement%2F&edit-text=
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/va-linux-team-booth.jpg?resize=800%2C600&ssl=1
[26]: https://www.valinux.co.jp/english/
[27]: https://www.linux.com/news/va-linux-announces-linux-failure-analysis-service
[28]: https://www.linkedin.com/in/yogo45/
[29]: https://www.valinux.co.jp/english/about/timeline/
[30]: https://github.com/vaj
[31]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fit.impressbm.co.jp%2Farticles%2F-%2F16499
[32]: https://en.wikipedia.org/wiki/Kubernetes

View File

@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (CrossCode is an Awesome 16-bit Sci-Fi RPG Game)
[#]: via: (https://itsfoss.com/crosscode-game/)
[#]: author: (Phillip Prado https://itsfoss.com/author/phillip/)
CrossCode is an Awesome 16-bit Sci-Fi RPG Game
======
What starts off as an obvious sci-fi 16-bit 2D action RPG quickly turns into a JRPG-inspired pseudo-MMO open-world puzzle platformer. Though at first glance this sounds like a jumbled mess, [CrossCode][1] manages to bundle all of its influences into a seamless gaming experience that feels nothing short of excellent.
Note: CrossCode is not open source software. We have covered it because it is Linux specific.
![][2]
### Story
You play as Lea, a girl who has forgotten her identity, where she comes from, and how to speak. As you walk through the early parts of the story, you come to find that you are a character in a digital world — a video game. But not just any video game — an MMO. And you, Lea, must venture into the digital world known as CrossWorlds in order to unravel the secrets of your past.
As you progress through the game, you unveil more and more about yourself, learning how you got to this point in the first place. This doesn't sound like too crazy a story, but the gameplay implementation and appropriately paced storyline make for quite a captivating experience.
The story unfolds at a satisfying speed and the character development is genuinely gratifying — both fictionally and mechanically. The only critique I had was that the introductory segment felt like it took a little too long — dragging the tutorial into the gameplay for quite some time and keeping the player from getting into the real meat of the game.
All in all, CrossCode's story did not leave me wanting, not even in the slightest. It's deep, fun, heartwarming, and intelligent, all while never sacrificing great character development. Without spoiling anything, I will say that if you are someone who enjoys a good story, you will need to give CrossCode a look.
![][3]
### Gameplay
Yes, the story is great and all, but if there is one place where CrossCode truly shines, it has to be its gameplay. The game's mechanics are fast-paced, challenging, intuitive, and downright fun!
You start off with a dodge, block, melee, and ranged attack, each slowly developing over time as the character tree is unlocked. This all-too-familiar mix of combat elements balances skill and hack-n-slash mechanics in a way where they don't conflict with one another.
The game utilizes this mix of skills to create some amazing puzzle solving and combat that helps CrossCode's gameplay truly stand out. Whether you are making your way through one of the four main dungeons, or you are taking a boss head on, you can't help but periodically stop and think “wow, this game is great!”
Though this has to be the game's strongest feature, it can also be the game's biggest downfall. Part of the reason the story and character progression are so satisfying is that the combat and puzzle mechanics can be incredibly challenging, and that's putting it lightly.
There are times when CrossCode's gameplay feels downright impossible. Bosses demand expert focus, and dungeons require all the patience you can muster just to finish them.
![][4]
The game requires a type of dexterity I have not quite had to master yet. I mean, sure, there are more challenging puzzle games out there, yes there are more difficult platformers, and of course there are more grueling RPGs, but adding all of these elements into one game while spurring the player along with an alluring story requires a level of mechanical balance that I haven't found in many other games.
And though there were times I felt the gameplay was flat-out punishing, I was constantly reminded that this is simply not the case. Death doesn't cause serious character regression, you can take a break from dungeons when you feel overwhelmed, and there is a plethora of checkpoints throughout the game's most difficult parts to help the player along.
Where other games fall short by giving the player nothing to lose, this reality redeems CrossCode amid its rigorous gameplay. CrossCode may be one of the only games I know that takes two common flaws in games and holds the tension between them so well that it becomes one of the game's best strengths.
![][5]
### Design
One of the things that surprised me most about CrossCode was how well its world and sound design come together. Right off the bat, from the moment you boot the game up, it is clear the developers meant business when designing CrossCode.
Being set in a fictional MMO world, the game's character ensemble is vibrant and distinctive, each character having their own tone and personality. The game's sound and motion graphics are tactile and responsive, giving the player a healthy amount of feedback during gameplay. And the soundtrack behind the game is simply beautiful, ebbing and flowing from intense moments of combat to blissful moments of exploration.
If I had to fault CrossCode in this category it would have to be in the size of the map. Yes, the dungeons are long, and yes, the CrossWorlds map looks gigantic, but I still wanted more to explore outside the crippling dungeons. The game is beautiful and fluid, but akin to RPGs of yore — think Zelda games before Breath of the Wild — I wish there was just a little more for me to freely explore.
It is obvious that the developers really cared about this aspect of the game, and you can tell they spent an incredible amount of time developing its design. CrossCode set itself up for success here in its plot and content, and the developers capitalize on the opportunity, knocking another category out of the park.
![][6]
### Conclusion
In the end, it is obvious how I feel about this game. And just in case you haven't caught on yet… I love it. It holds a near-perfect balance between being difficult and rewarding, simple and complex, linear and open, making CrossCode one of [the best Linux games][7] out there.
Developed by [Radical Fish Games][8], CrossCode was officially released for Linux on September 21, 2018, seven years after development began. You can pick up the game over on [Steam][9], [GOG][10], or [Humble Bundle][11].
If you play games regularly, you may want to [subscribe to Humble Monthly][12] ([affiliate][13] link). For $12 per month, you'll get games worth over $100 (not all for Linux). Over 450,000 gamers worldwide use Humble Monthly.
--------------------------------------------------------------------------------
via: https://itsfoss.com/crosscode-game/
作者:[Phillip Prado][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/phillip/
[b]: https://github.com/lujun9972
[1]: http://www.cross-code.com/en/home
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Level-up.png?fit=800%2C451&ssl=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Equpiment.png?fit=800%2C451&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-character-development.png?fit=800%2C451&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Environment.png?fit=800%2C451&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-dungeon.png?fit=800%2C451&ssl=1
[7]: https://itsfoss.com/free-linux-games/
[8]: http://www.radicalfishgames.com/
[9]: https://store.steampowered.com/app/368340/CrossCode/
[10]: https://www.gog.com/game/crosscode
[11]: https://www.humblebundle.com/store/crosscode
[12]: https://www.humblebundle.com/monthly?partner=itsfoss
[13]: https://itsfoss.com/affiliate-policy/

View File

@ -1,77 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Modrisco)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Best VPN Services For 2019)
[#]: via: (https://www.ostechnix.com/7-best-opensource-vpn-services-for-2019/)
[#]: author: (Editor https://www.ostechnix.com/author/editor/)
7 Best VPN Services For 2019
======
At least 67 percent of global businesses have faced a data breach in the past three years, and those breaches have reportedly exposed hundreds of millions of customers. Studies show that an estimated 93 percent of these breaches could have been avoided had data security fundamentals been considered beforehand.
Poor data security can be extremely costly, especially for a business, and can quickly lead to widespread disruption and harm to your brand reputation. Although some businesses manage to pick up the pieces the hard way, there are still those that fail to recover. Today, however, you are fortunate to have access to data and network security software.
![](https://www.ostechnix.com/wp-content/uploads/2019/02/vpn-1.jpeg)
As you start 2019, keep off cyber attacks by investing in a **V**irtual **P**rivate **N**etwork, commonly known as a **VPN**. When it comes to online privacy and security, there are many uncertainties. There are hundreds of different VPN providers, and picking the right one means striking just the right balance between pricing, services, and ease of use.
If you are looking for a solid, 100 percent tested and secure VPN, you might want to do your due diligence and identify the best match. Here are the top 7 tried-and-tested VPN services for 2019.
### 1. Vpnunlimitedapp
With VPN Unlimited, you have total security. This VPN allows you to use any Wi-Fi without worrying that your personal data can be leaked. With AES-256, your data is encrypted and protected against prying third parties and hackers. This VPN ensures you stay anonymous and untracked on all websites no matter the location. It offers a 7-day trial and a variety of protocol options: OpenVPN, IKEv2, and KeepSolid Wise. Demanding users are entitled to special extras such as a personal server, lifetime VPN subscription, and personal IP options.
### 2. VPN Lite
VPN Lite is an easy-to-use and **free VPN service** that allows you to browse the internet at no charge. You remain anonymous and your privacy is protected. It obscures your IP and encrypts your data, meaning third parties are not able to track your activities across online platforms. You also get to access all online content. With VPN Lite, you get to access sites blocked in your region. You can also use public Wi-Fi without worrying about sensitive information being tracked by spyware or stolen by hackers.
### 3. HotSpot Shield
Launched in 2005, this is a popular VPN embraced by the majority of users. Its VPN protocol is integrated by at least 70 percent of the largest security companies globally, and it is known to have thousands of servers across the globe. It comes with two free options: one is completely free but supported by online advertisements, and the second is a 7-day trial of the flagship product. It offers military-grade data encryption and protects against malware. HotSpot Shield guarantees secure browsing and offers lightning-fast speeds.
### 4. TunnelBear
This is the best way to start if you are new to VPNs. It comes with a user-friendly interface complete with animated bears. With the help of TunnelBear, users are able to connect to servers in at least 22 countries at great speeds. It uses **AES 256-bit encryption**, guaranteeing no data logging, meaning your data stays protected. You also get unlimited data for up to five devices.
### 5. ProtonVPN
This VPN offers you a strong premium service. You may suffer from reduced connection speeds, but you also get to enjoy unlimited data. It features an intuitive, easy-to-use interface and multi-platform compatibility. Proton's servers are said to be specifically optimized for torrenting and thus cannot give access to Netflix. You get strong security features such as secure protocols and encryption, meaning your browsing activities remain secure.
### 6. ExpressVPN
This is known as the best offshore VPN for unblocking and privacy. It has gained recognition as the top VPN service globally, resulting from solid customer support and fast speeds. It offers browser extensions and routers with custom firmware. ExpressVPN has an admirable range of quality apps and plenty of servers, though it can only support up to three devices.
It's not entirely free; in fact, it is one of the most expensive VPNs on the market today, because it is fully packed with the most advanced features. It comes with a 30-day money-back guarantee, meaning you can freely test this VPN for a month. The good thing is that it is completely risk-free. If you need a VPN for a short duration, to bypass online censorship for instance, this could be your go-to solution. You don't want to waste a trial on a spammy, slow, free program.
It is also one of the best ways to enjoy online streaming as well as security on the go. Should you need to continue using it, you only have to renew, or cancel your free trial if need be. ExpressVPN has over 2,000 servers across 90 countries, unblocks Netflix, gives lightning-fast connections, and gives users total privacy.
### 7. PureVPN
While this VPN may not be completely free, it is among the most budget-friendly services on this list. Users can sign up for a free seven-day trial and later choose one of its paid plans. With this VPN, you get access to 750-plus servers in at least 140 countries. It is also easy to install on almost all devices. All its paid features can still be accessed within the free trial window, including unlimited data transfers, IP leakage protection, and ISP invisibility. The supported operating systems are iOS, Android, Windows, Linux, and macOS.
### Summary
With the large variety of freemium VPN services available today, why not take the opportunity to protect yourself and your customers? Understand that there are some great VPN services out there, but even the most secure free service cannot be touted as risk-free. You might want to upgrade to a premium one for increased protection. Premium VPNs allow you to test freely, offering a risk-free money-back guarantee. Whether you plan to sign up for a paid VPN or commit to a free one, it is highly advisable to have a VPN.
**About the author:**
**Renetta K. Molina** is a tech enthusiast and fitness enthusiast. She writes about technology, apps, WordPress and a variety of other topics. In her free time, she likes to play golf and read books. She loves to learn and try new things.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/7-best-opensource-vpn-services-for-2019/
作者:[Editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enjoy Netflix? You Should Thank FreeBSD)
[#]: via: (https://itsfoss.com/netflix-freebsd-cdn/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Enjoy Netflix? You Should Thank FreeBSD
======
Netflix is one of the most popular streaming services in the world.
But you already know that. Don't you?
What you probably did not know is that Netflix uses [FreeBSD][1] to deliver its content to you.
Yes, that's right. Netflix relies on FreeBSD to build its in-house content delivery network (CDN).
A [CDN][2] is a group of servers located in various parts of the world. It is mainly used to deliver heavy content like images and videos to the end user faster than a centralized server could.
Instead of opting for a commercial CDN service, Netflix has built its own in-house CDN called [Open Connect][3].
Open Connect utilizes [custom hardware][4], the Open Connect Appliance. You can see it in the image below. It can handle 40 Gb/s of data and has a storage capacity of 248 TB.
![Netflix's Open Connect Appliance runs FreeBSD][5]
Netflix provides Open Connect Appliance to qualifying Internet Service Providers (ISP) for free. This way, substantial Netflix traffic gets localized and the ISPs deliver the Netflix content more efficiently.
This Open Connect Appliance runs the FreeBSD operating system and [almost exclusively runs open source software][6].
### Open Connect uses FreeBSD “Head”
![][7]
You would expect Netflix to use a stable release of FreeBSD for such critical infrastructure, but Netflix tracks the [FreeBSD head/current version][8]. Netflix says that tracking “head” lets them “stay forward-looking and focused on innovation”.
Here are the benefits Netflix sees of tracking FreeBSD:
* Quicker feature iteration
* Quicker access to new FreeBSD features
* Quicker bug fixes
* Enables collaboration
* Minimizes merge conflicts
* Amortizes merge “cost”
> Running FreeBSD “head” lets us deliver large amounts of data to our users very efficiently, while maintaining a high velocity of feature development.
>
> Netflix
Remember, even [Google uses Debian][9] testing instead of Debian stable. Perhaps these enterprises prefer cutting-edge features more than anything else.
Like Google, Netflix also plans to upstream any code they can. This should help FreeBSD and other BSD distributions based on FreeBSD.
So what does Netflix achieve with FreeBSD? Here are some quick stats:
> Using FreeBSD and commodity parts, we achieve 90 Gb/s serving TLS-encrypted connections with ~55% CPU on a 16-core 2.6-GHz CPU.
>
> Netflix
If you want to know more about Netflix and FreeBSD, you can refer to [this presentation from FOSDEM][10]. You can also watch the video of the presentation [here][11].
These days big enterprises rely mostly on Linux for their server infrastructure, but Netflix has put its trust in BSD. This is a good thing for the BSD community, because if an industry leader like Netflix throws its weight behind BSD, others could follow its lead. What do you think?
--------------------------------------------------------------------------------
via: https://itsfoss.com/netflix-freebsd-cdn/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.freebsd.org/
[2]: https://www.cloudflare.com/learning/cdn/what-is-a-cdn/
[3]: https://openconnect.netflix.com/en/
[4]: https://openconnect.netflix.com/en/hardware/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-open-connect-appliance.jpeg?fit=800%2C533&ssl=1
[6]: https://openconnect.netflix.com/en/software/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-freebsd.png?resize=800%2C450&ssl=1
[8]: https://www.bsdnow.tv/tutorials/stable-current
[9]: https://itsfoss.com/goobuntu-glinux-google/
[10]: https://fosdem.org/2019/schedule/event/netflix_freebsd/attachments/slides/3103/export/events/attachments/netflix_freebsd/slides/3103/FOSDEM_2019_Netflix_and_FreeBSD.pdf
[11]: http://mirror.onet.pl/pub/mirrors/video.fosdem.org/2019/Janson/netflix_freebsd.webm

View File

@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installing Kali Linux on VirtualBox: Quickest & Safest Way)
[#]: via: (https://itsfoss.com/install-kali-linux-virtualbox/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Installing Kali Linux on VirtualBox: Quickest & Safest Way
======
_**This tutorial shows you how to install Kali Linux on VirtualBox in Windows and Linux in the quickest way possible.**_
[Kali Linux][1] is one of the [best Linux distributions for hacking][2] and security enthusiasts.
Since it deals with a sensitive topic like hacking, it's like a double-edged sword. We have discussed it in a detailed Kali Linux review in the past, so I am not going to bore you with the same stuff again.
While you can install Kali Linux by replacing the existing operating system, using it via a virtual machine would be a better and safer option.
With VirtualBox, you can use Kali Linux like a regular application on your Windows/Linux system. It's almost the same as running VLC or a game on your system.
Using Kali Linux in a virtual machine is also safe. Whatever you do inside Kali Linux will NOT impact your host system (i.e. your original Windows or Linux operating system). Your actual operating system will be untouched and your data in the host system will be safe.
![][3]
### How to install Kali Linux on VirtualBox
I'll be using [VirtualBox][4] here. It is a wonderful open source virtualization solution for just about anyone (professional or personal use). It's available free of cost.
In this tutorial, we will talk about Kali Linux in particular, but you can install almost any other OS for which an ISO file or a pre-built virtual machine image is available.
**Note:** _The same steps apply for Windows/Linux running VirtualBox._
As I already mentioned, you can have either Windows or Linux installed as your host. But, in this case, I have Windows 10 installed (don't hate me!), on which I will try to install Kali Linux in VirtualBox step by step.
And, the best part is that even if you happen to use a Linux distro as your primary OS, the same steps will be applicable!
Wondering how? Let's see…
[Subscribe to Our YouTube Channel for More Linux Videos][5]
### Step by Step Guide to install Kali Linux on VirtualBox
_We are going to use a custom Kali Linux image made specifically for VirtualBox. You can also download the ISO file for Kali Linux and create a new virtual machine, but why do that when you have an easy alternative?_
#### 1\. Download and install VirtualBox
The first thing you need to do is to download and install VirtualBox from Oracles official website.
[Download VirtualBox][6]
Once you download the installer, just double click on it to install VirtualBox. It's the same for [installing VirtualBox on Ubuntu][7]/Fedora Linux as well.
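On Ubuntu, for instance, you can also skip the website and install it from the distribution's own repositories instead; a minimal sketch (the Oracle download above remains the most up-to-date option):
```
sudo apt install virtualbox
```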
#### 2\. Download ready-to-use virtual image of Kali Linux
After installing it successfully, head to [Offensive Security's download page][8] to download the VM image for VirtualBox. If you would rather use [VMware][9], an image is available for that too.
![][10]
As you can see, the file size is well over 3 GB; you should either use the torrent option or download it using a [download manager][11].
[Kali Linux Virtual Image][8]
#### 3\. Install Kali Linux on Virtual Box
Once you have installed VirtualBox and downloaded the Kali Linux image, you just need to import it to VirtualBox in order to make it work.
Here's how to import the VirtualBox image for Kali Linux:
**Step 1**: Launch VirtualBox. You will notice an **Import** button; click on it.
![Click on Import button][12]
**Step 2:** Next, browse to the file you just downloaded and choose it to be imported (as you can see in the image below). The file name should start with "kali-linux" and end with the **.ova** extension.
![Importing Kali Linux image][13]
Once selected, proceed by clicking on **Next**.
**Step 3**: Now, you will be shown the settings for the virtual machine you are about to import. You can customize them or leave them as they are; that is your choice. It is okay to go with the default settings.
You need to select a path where you have sufficient storage available. I would never recommend the **C:** drive on Windows.
![Import hard drives as VDI][14]
Here, importing the hard drives as VDI means the virtual hard drives will be mounted by allocating the storage space you set.
After you are done with the settings, hit **Import** and wait for a while.
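As an aside, if you prefer the terminal, VirtualBox's VBoxManage tool can perform the same import; a hedged sketch, where the filename is just a placeholder for whatever you actually downloaded:
```
VBoxManage import kali-linux-2019.1-vbox-amd64.ova
```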
**Step 4:** You will now see it listed. So, just hit **Start** to launch it.
You might at first get an error about USB 2.0 controller support; you can disable the controller to resolve it, or just follow the on-screen instructions to install an additional package that fixes it. And you are done!
![Kali Linux running in VirtualBox][15]
The default username in Kali Linux is root and the default password is toor. You should be able to log in to the system with these credentials.
Do note that you should [update Kali Linux][16] before trying to install new applications or trying to hack your neighbor's Wi-Fi.
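Since Kali is Debian-based, updating boils down to the standard apt commands; a minimal sketch (run as the default root user, so no sudo is needed):
```
# refresh the package lists, then upgrade everything
apt update && apt full-upgrade
```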
I hope this guide helps you easily install Kali Linux on VirtualBox. Of course, Kali Linux has a lot of useful tools in it for penetration testing; good luck with that!
**Tip**: Both Kali Linux and Ubuntu are Debian-based. If you face any issues or errors with Kali Linux, you may follow the tutorials intended for Ubuntu or Debian on the internet.
### Bonus: Free Kali Linux Guide Book
If you are just starting with Kali Linux, it is a good idea to learn how to use it.
Offensive Security, the company behind Kali Linux, has created a guide book that explains the basics of Linux, the basics of Kali Linux, configuration, and setup. It also has a few chapters on penetration testing and security tools.
Basically, it has everything you need to get started with Kali Linux. And the best thing is that the book is available to download for free.
[Download Kali Linux Revealed for FREE][17]
Let us know in the comments below if you face an issue or simply share your experience with Kali Linux on VirtualBox.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-kali-linux-virtualbox/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.kali.org/
[2]: https://itsfoss.com/linux-hacking-penetration-testing/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?resize=800%2C450&ssl=1
[4]: https://www.virtualbox.org/
[5]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[6]: https://www.virtualbox.org/wiki/Downloads
[7]: https://itsfoss.com/install-virtualbox-ubuntu/
[8]: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/
[9]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box-image.jpg?resize=800%2C347&ssl=1
[11]: https://itsfoss.com/4-best-download-managers-for-linux/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-import-kali-linux.jpg?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-linux-next.jpg?ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-kali-linux-settings.jpg?ssl=1
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-on-windows-virtualbox.jpg?resize=800%2C429&ssl=1
[16]: https://linuxhandbook.com/update-kali-linux/
[17]: https://kali.training/downloads/Kali-Linux-Revealed-1st-edition.pdf

View File

@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI)
[#]: via: (https://itsfoss.com/flowblade-video-editor-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI
======
[Flowblade][1] is one of the rare [video editors that are only available for Linux][2]. It is not the feature set but the simplicity, the flexibility, and the fact that it is an open source project that count.
However, with the recently released Flowblade 2.0, it is now more powerful and useful, bringing a lot of new tools along with a complete overhaul of the workflow.
In this article, we shall take a look at what's new in Flowblade 2.0.
### New Features in Flowblade 2.0
Here are some of the major new changes in the latest release of Flowblade.
#### GUI Updates
![Flowblade 2.0][3]
This was a much-needed change. I'm always looking for open source solutions that work as expected and come with a great GUI.
So, in this update, you will notice a new custom theme set as the default, and it looks good.
Overall, the panel design and the toolbox have been reworked to look modern. The overhaul extends to small changes like the cursor icon shown upon tool selection.
#### Workflow Overhaul
No matter what features you get to utilize, the workflow matters to people who regularly edit videos. So, it has to be intuitive.
With the recent release, they have made sure that you can configure and set the workflow to your preference. That is definitely flexible, because not everyone has the same requirements.
#### New Tools
![Flowblade Video Editor Interface][4]
**Keyframe tool**: This enables editing and adjusting the Volume and Brightness [keyframes][5] on the timeline.
**Multitrim**: A combination of the trim, roll, and slip tools.
**Cut:** Available now as a tool, in addition to the traditional cut at the playhead.
**Ripple trim:** A mode of the Trim tool not often used by many, now available as a separate tool.
#### More changes?
In addition to the major changes listed above, they have added some keyframe editing updates and compositors (_AlphaXOR, Alpha Out, and Alpha_) that use alpha channel data to combine images.
Many more small changes have taken place as well; you can check those out in the official [changelog][6] on GitHub.
### Installing Flowblade 2.0
If you use a Debian or Ubuntu based Linux distribution, there are .deb binaries available for easily installing Flowblade 2.0.
For the rest, you'll have to [install it using the source code][7].
All the files are available on its GitHub page. You can download it from the page below.
[Download Flowblade 2.0][8]
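For the .deb route, the usual dpkg and apt steps apply; a hedged sketch, where the filename is hypothetical and should be replaced with the one you actually downloaded:
```
sudo dpkg -i flowblade-2.0.0-1_all.deb  # install the downloaded package (filename is hypothetical)
sudo apt-get install -f                 # pull in any missing dependencies
```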
### Wrapping Up
If you are interested in video editing, perhaps you would like to follow the development of [Olive][9], a new open source video editor in development.
Now that you know about the latest changes and additions, what do you think of Flowblade 2.0 as a video editor? Is it good enough for you?
Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/flowblade-video-editor-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://github.com/jliljebl/flowblade
[2]: https://itsfoss.com/best-video-editing-software-linux/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2.jpg?ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2-1.jpg?resize=800%2C450&ssl=1
[5]: https://en.wikipedia.org/wiki/Key_frame
[6]: https://github.com/jliljebl/flowblade/blob/master/flowblade-trunk/docs/RELEASE_NOTES.md
[7]: https://itsfoss.com/install-software-from-source-code/
[8]: https://github.com/jliljebl/flowblade/releases/tag/v2.0
[9]: https://itsfoss.com/olive-video-editor/

View File

@ -0,0 +1,133 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Review of Debian System Administrators Handbook)
[#]: via: (https://itsfoss.com/debian-administrators-handbook/)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)
Review of Debian System Administrator's Handbook
======
_**Debian System Administrator's Handbook is a free-to-download book that covers all the essential parts of Debian that a sysadmin might need.**_
This has been on my to-do review list for quite some time. The book was started by two French Debian developers, Raphael Hertzog and Roland Mas, to increase awareness about the Debian project in France. The book was a huge hit among francophone Linux users. The English translation followed soon after.
### Debian Administrator's Handbook
![][1]
[Debian Administrator's Handbook][2] targets everyone from a newbie who may be looking to understand what the [Debian project][3] is all about to somebody who might be running Debian on a production server.
The latest version of the book covers Debian 8, while the current stable version is Debian 9. But that doesn't mean the book is outdated and of no use to Debian 9 users. Most of the book is valid for all Debian and Linux users.
Let me give you a quick summary of what this book covers.
#### Section 1 Debian Project
The first section sets the tone of the book, giving a solid foundation on what Debian actually means to somebody who might be looking into it. Some of it will probably be updated to match the current scenario.
#### Section 2 Using fictional case studies for different needs
The second section deals with various case scenarios in which Debian could be used, the idea being to show how Debian can serve various hierarchical or functional setups. One aspect I felt it should have stressed is the cultural mindshift and openness, which at the very least should have been mentioned.
#### Section 3 & 4- Setups and Installation
The third section looks into existing setups. I do think it should have stressed more the importance of documenting existing setups and migrating partial services and users before making a full-fledged transition. While all of the above seem like minor points, I have seen many of them come back to bite me during a transition.
Section Four covers the various ways you can install Debian, how the installation process flows, and things to keep in mind before installing a Debian system. Unfortunately, UEFI was not around at that point, so it is not covered.
#### Section 5 & 6 Packaging System and Updates
Section Five starts with how a binary package is structured and then goes on to tell how a source package is structured as well. It does mention several gotchas or tricky ways in which a sysadmin can be caught out.
Section Six is perhaps where most sysadmins spend most of their time, apart from troubleshooting, which is another chapter altogether. While it starts with many of the most often used sysadmin commands, the interesting point I liked was on page 156, which covers better solver algorithms.
#### Section 7 Solving Problems and finding Relevant Solutions
Section Seven, on the other hand, speaks of the various problem scenarios and the various approaches when you find yourself facing a problem. In Debian and most GNU/Linux distributions, the keyword is patience. If you are patient, many problems in Debian can be resolved, often after a good night's sleep.
#### Section 8 Basic Configuration, Network, Accounts, Printing
Section Eight introduces you to the basics of networking and having single or multiple user accounts on a workstation. It goes a bit into user and group configuration and practices, then gives a brief introduction to the bash shell and a brief overview of the [CUPS][4] printing daemon. There is much to explore here.
#### Section 9 Unix Service
Section 9 starts with an introduction to specific Unix services. While it starts with [systemd][5], which is controversial, hated, and reviled in many quarters, it also covers System V, which is still used by many a sysadmin.
#### Section 10, 11 & 12 Networking and Administration
Section 10 makes you dive into network infrastructure where it goes into the basics of Virtual Private Networks (OpenVPN), OpenSSH, the PKI credentials and some basics of information security. It also gets into basics of DNS, DHCP and IPv6 and ends with some tools which could help in troubleshooting network issues.
Section 11 starts with the basic configuration and workflow of a mail server and Postfix. It goes a bit into depth, as there is much to play with. It then covers the popular web server Apache, FTP file servers, NFS, and CIFS with Windows shares via Samba. Again, much to explore therein.
Section 12 starts with advanced administration topics such as RAID and LVM and when one is better than the other. It then gets into virtualization, Xen, and gives a brief overview of LXC. Again, there is much more to explore than shared herein.
![Author Raphael Hertzog at a Debian booth circa 2013 | Image Credit][6]
#### Section 13 Workstation
Section 13 covers schemas for the X server, display managers, window managers, menu management, and the different desktops, i.e. GNOME, KDE, XFCE, and others. It does mention LXDE among the others. The one omission I felt, which probably will be updated in a new release, is [Wayland][7] and [Xwayland][8]; this is rectified in the conclusion. Again, much to explore in this section as well.
#### Section 14 Security
Section 14 is somewhat comprehensive on what constitutes security and bits of threat analysis, but stops short, as it admits in the introduction of the chapter itself that it's a vast topic.
#### Section 15 Creating a Debian package
Section 15 explains the tools and processes to _debianize_ an application so it becomes part of the Debian archive and available for distribution on the 10-odd hardware architectures that Debian supports.
### Pros and Cons
Where Raphael and Roland have excelled is at breaking the visual monotony of the book by using a different style and structure wherever possible from the rest of the reading material. This compels the reader to refresh her eyes while at the same time focusing on the important matter at hand. The different visual style also indicates what is more important from the authors' point of view.
One of the drawbacks, if I may call it that, is the absolute absence of humor in the book.
### Final Thoughts
I have been [using Debian][9] for a decade, so much of it was a refresher for me. Some of it is outdated if I look at it from a Buster perspective, but it is invaluable as a historical artifact.
If you are looking to familiarize yourself with Debian or looking to run Debian 8 or 9 as a production server for your business, I wouldn't be able to recommend a better book than this.
### Download Debian Administrators Handbook
The Debian Handbook has been available in every Debian release after 2012. The [liberation][10] of the Debian Handbook was done in 2012 using [ulule][11].
You can download an electronic version of the Debian Administrators Handbook in PDF, ePub or Mobi format from the link below:
[Download Debian Administrators Handbook][12]
You can also buy the paperback edition of the book if you want to support the amazing work of the authors.
[Buy the paperback edition][13]
Lastly, if you want to motivate Raphael, you can reward him by donating to his PayPal [account][14].
--------------------------------------------------------------------------------
via: https://itsfoss.com/debian-administrators-handbook/
作者:[Shirish][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/Debian-Administrators-Handbook-review.png?resize=800%2C450&ssl=1
[2]: https://debian-handbook.info/
[3]: https://www.debian.org/
[4]: https://www.cups.org
[5]: https://itsfoss.com/systemd-features/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/stand-debian-Raphael.jpg?resize=800%2C600&ssl=1
[7]: https://wayland.freedesktop.org/
[8]: https://en.wikipedia.org/wiki/X.Org_Server#XWayland
[9]: https://itsfoss.com/reasons-why-i-love-debian/
[10]: https://debian-handbook.info/liberation/
[11]: https://www.ulule.com/debian-handbook/
[12]: https://debian-handbook.info/get/now/
[13]: https://debian-handbook.info/get/
[14]: https://raphaelhertzog.com/

View File

@ -0,0 +1,114 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (LibreOffice 6.2 is Here: This is the Last Release with 32-bit Binaries)
[#]: via: (https://itsfoss.com/libreoffice-drops-32-bit-support/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
LibreOffice 6.2 is Here: This is the Last Release with 32-bit Binaries
======
LibreOffice is my favorite office suite, a free and powerful [alternative to Microsoft Office tools on Linux][1]. Even when I use my Windows machine, I prefer to have LibreOffice installed instead of Microsoft Office tools any day.
Now, with the recent [LibreOffice][2] 6.2 update, there's a lot of good stuff to talk about, along with some bad news.
### Whats New in LibreOffice 6.2?
Let's have a quick look at the major new features in the [latest release of LibreOffice][3].
If you like Linux videos, don't forget to [subscribe to our YouTube channel][4] as well.
#### The new NotebookBar
![][5]
A new addition to the interface that is optional and not enabled by default. In order to enable it, go to **View -> User Interface -> Tabbed**.
You can either set it as a tabbed layout or a grouped compact layout.
While it is not something mind-blowing, it still counts as a significant user interface update, considering the variety of user preferences.
#### Icon Theme
![][6]
A new set of icons is now available to choose from. I will definitely utilize the new set of icons; they look good!
#### Platform Compatibility
With the new update, the compatibility has been improved across all the platforms (Mac, Windows, and Linux).
#### Performance Improvements
This shouldn't concern you if you didn't have any issues. But still, the more they improve here, the better it is for everyone.
They have removed unnecessary animations, worked on latency reduction, avoided repeated re-layout, and made more such changes to improve the performance.
#### More fixes and improvements
A lot of bugs have been fixed in this new update along with little tweaks here and there for all the tools (Writer, Calc, Draw, Impress).
To get to know all the technical details, you should check out their [release notes][7].
### The Sad News: Dropping the support for 32-bit binaries
Of course, this is not a feature. But it was bound to happen, because it was anticipated a few months ago: LibreOffice will no longer provide 32-bit binary releases.
This is inevitable. [Ubuntu has dropped 32-bit support][8]. Many other Linux distributions have also stopped supporting 32-bit processors. The number of [Linux distributions still supporting a 32-bit architecture][9] is dwindling fast.
For the future versions of LibreOffice on 32-bit systems, youll have to rely on your distribution to provide it to you. You cannot download the binaries anymore.
### Installing LibreOffice 6.2
![][10]
Your Linux distribution should be providing this update sooner or later.
Arch-based Linux users should be getting it already while Ubuntu and Debian users would have to wait a bit longer.
If you cannot wait, you should download it and [install it from the deb file][11]. Do remove the existing LibreOffice install before using the DEB file.
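One common way to remove the existing install, assuming it came from your distribution's libreoffice packages (a hedged sketch, not the only way):
```
sudo apt remove --purge 'libreoffice*'
sudo apt autoremove
```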
[Download LibreOffice 6.2][12]
If you don't want to use the deb file, you may use the official PPA, which should provide LibreOffice 6.2 before Ubuntu does (it doesn't have the 6.2 release at the moment). It will update your existing LibreOffice install.
```
sudo add-apt-repository ppa:libreoffice/ppa
sudo apt update
sudo apt install libreoffice
```
### Wrapping Up
LibreOffice 6.2 is definitely a major step up to keep it as a better alternative to Microsoft Office for Linux users.
Do you happen to use LibreOffice? Do these updates matter to you? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/libreoffice-drops-32-bit-support/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
[2]: https://www.libreoffice.org/
[3]: https://itsfoss.com/libreoffice-6-0-released/
[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/libreoffice-tabbed.png?resize=800%2C434&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/Libreoffice-style-elementary.png?ssl=1
[7]: https://wiki.documentfoundation.org/ReleaseNotes/6.2
[8]: https://itsfoss.com/ubuntu-drops-32-bit-desktop/
[9]: https://itsfoss.com/32-bit-os-list/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/libre-office-6-2-release.png?resize=800%2C450&ssl=1
[11]: https://itsfoss.com/install-deb-files-ubuntu/
[12]: https://www.libreoffice.org/download/download/

View File

@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 14.04 is Reaching the End of Life. Here are Your Options)
[#]: via: (https://itsfoss.com/ubuntu-14-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 14.04 is Reaching the End of Life. Here are Your Options
======
Ubuntu 14.04 is reaching its end of life on April 30, 2019. This means there will be no security and maintenance updates for Ubuntu 14.04 users beyond this date.
You won't even get updates for installed applications, and you won't be able to install new applications using the apt command or the Software Center without manually modifying sources.list.
Ubuntu 14.04 was released almost five years ago. That's the lifespan of a long-term support release of Ubuntu.
[Check your Ubuntu version][1] and see if you are still using Ubuntu 14.04.
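If you prefer the terminal, a quick way to confirm is the standard lsb_release tool that ships with Ubuntu:
```
lsb_release -a
```
Look for "Release: 14.04" in its output.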
If that's the case, either on a desktop or on a server, you might be wondering what to do in such a situation. Let me help you out there by telling you what options you have.
![][2]
### Upgrade to Ubuntu 16.04 LTS (easiest of them all)
If you have a good internet connection, you can upgrade to Ubuntu 16.04 LTS from within Ubuntu 14.04.
Ubuntu 16.04 is also a long-term support release and it will be supported till April 2021. That means you'll have two years before another upgrade.
I recommend reading this tutorial about [upgrading your Ubuntu version][3]. It was originally written for upgrading Ubuntu 16.04 to Ubuntu 18.04, but the steps are applicable in your case as well.
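In broad strokes, and assuming a standard install with the update-manager tools present, the upgrade boils down to fully updating the current system and then running Ubuntu's release upgrader; the linked tutorial covers the details and caveats:
```
sudo apt-get update && sudo apt-get dist-upgrade
sudo do-release-upgrade
```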
### Make a backup, do a fresh install of Ubuntu 18.04 LTS (ideal for desktop users)
The other option is that you make a backup of your Documents, Music, Pictures, Downloads and any other folder where you have kept essential data that you cannot afford to lose.
When I say backup, it simply means copying these folders to an external USB disk. In other words, you should have a way to copy the data back to your computer because youll be formatting your system.
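A minimal sketch of such a backup with rsync; the mount point below is a placeholder for your own USB disk:
```
$ rsync -avh --progress ~/Documents ~/Music ~/Pictures ~/Downloads /media/$USER/backup/
```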
I would recommend this option for desktop users. Ubuntu 18.04 is the current long term support release and it will be supported till at least April 2023. You have four long years before you are forced into another upgrade.
### Pay for extended security maintenance and continue using Ubuntu 14.04
This is suited for enterprise/corporate clients. Canonical, the parent company of Ubuntu, provides the Ubuntu Advantage program where customers can pay for phone/email based support among other benefits.
Ubuntu Advantage program users also have the [Extended Security Maintenance][4] (ESM) feature. This program provides security updates even after reaching the end of life for a given version.
This comes at a cost. It costs $225 per year per physical node for server users. For desktop users, the price is $150 per year. You can read the detailed pricing of the Ubuntu Advantage program [here][5].
### Still using Ubuntu 14.04?
If you are still using Ubuntu 14.04, you should start exploring your options as you have less than two months to go.
In any case, you must not use Ubuntu 14.04 after 30 April 2019 because your system will be vulnerable due to lack of security updates. Not being able to install new applications will be an additional major pain.
So, what option do you choose here? Upgrading to Ubuntu 16.04 or 18.04 or paying for the ESM?
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-14-04-end-of-life/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/ubuntu-14-04-end-of-life-featured.png?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/upgrade-ubuntu-version/
[4]: https://www.ubuntu.com/esm
[5]: https://www.ubuntu.com/support/plans-and-pricing

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Earliest Linux Distros: Before Mainstream Distros Became So Popular)
[#]: via: (https://itsfoss.com/earliest-linux-distros/)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
The Earliest Linux Distros: Before Mainstream Distros Became So Popular
======
In this throwback history article, weve tried to look back into how some of the earliest Linux distributions evolved and came into being as we know them today.
![][1]
Here, we have tried to explore how the idea of popular distros such as Red Hat, Debian, Slackware, SUSE, Ubuntu and many others came into being after the first Linux kernel became available.
As Linux was initially released in the form of a kernel in 1991, the distros we know today were made possible with the help of numerous collaborators throughout the world, through the creation of shells, libraries, compilers and related packages that make it a complete operating system.
### 1\. The first known “distro” by HJ Lu
The way we know Linux distributions today goes back to 1992, when the first known distro-like tools to get access to Linux were released by HJ Lu. It consisted of two 5.25” floppy diskettes:
![Linux 0.12 Boot and Root Disks | Photo Credit][2]
* **LINUX 0.12 BOOT DISK** : The “boot” disk was used to boot the system first.
* **LINUX 0.12 ROOT DISK** : The second “root” disk for getting a command prompt for access to the Linux file system after booting.
To install 0.12 on a hard drive, one had to use a hex editor to edit its master boot record (MBR) and that was quite a complex process, especially during that era.
Feeling too nostalgic?
You can [install the cool-retro-term application][3], which gives you a Linux terminal with the vintage looks of 90s computers.
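On Ubuntu-based systems, it is usually a single apt install away (a sketch; the package sits in the universe repository on recent releases):
```
$ sudo apt install cool-retro-term
```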
### 2\. MCC Interim Linux
![MCC Linux 0.99.14, 1993 | Image Credit][4]
Initially released in the same year as “LINUX 0.12” by Owen Le Blanc of Manchester Computing Centre in England, MCC Interim Linux was the first Linux distribution aimed at novice users, with a menu-driven installer and end user/programming tools. Also in the form of a collection of diskettes, it could be installed on a system to provide a basic text-based environment.
MCC Interim Linux was much more user-friendly than 0.12 and the installation process on a hard drive was much easier and similar to modern ways. It did not require using a hex editor to edit the MBR.
Though it was first released in February 1992, it was also made available for download through FTP from November of that year.
### 3\. TAMU Linux
![TAMU Linux | Image Credit][5]
TAMU Linux was developed by Aggies at Texas A&M with the Texas A&M Unix & Linux Users Group in May 1992 and was called TAMU 1.0A. It was the first Linux distribution to offer the X Window System instead of just a text-based operating system.
### 4\. Softlanding Linux System (SLS)
![SLS Linux 1.05, 1994 | Image Credit][6]
“Gentle Touchdowns for DOS Bailouts” was their slogan! SLS was released by Peter McDonald in May 1992. SLS was quite widely used and popular during its time and greatly promoted the idea of Linux. But due to a decision by the developers to change the executable format in the distro, users stopped using it.
Many of the popular distros the present community is most familiar with, evolved via SLS. Two of them are:
  * **Slackware** : Created by Patrick Volkerding in 1993 and based on SLS, Slackware is one of the earliest Linux distros.
  * **Debian** : An initiative by Ian Murdock, Debian was also released in 1993 after moving on from the SLS model. The very popular Ubuntu distro we know today is based on Debian.
### 5\. Yggdrasil
![LGX Yggdrasil Fall 1993 | Image Credit][7]
Released in December 1992, Yggdrasil was the first distro to give birth to the idea of Live Linux CDs. It was developed by Yggdrasil Computing, Inc., founded by Adam J. Richter in Berkeley, California. It could automatically configure itself on system hardware as “Plug-and-Play”, which is a standard and well-known feature today. The later versions of Yggdrasil included a hack for running any proprietary MS-DOS CD-ROM driver within Linux.
![Yggdrasils Plug-and-Play Promo | Image Credit][8]
Their motto was “Free Software For The Rest of Us”.
In the late 90s, one very popular distro was [Mandriva][9], first released in 1998 as the French _Mandrake Linux_ distribution and renamed after it unified with the Brazilian _Conectiva Linux_ distribution. It had a release lifetime of 18 months for updates related to Linux and system software, and desktop-based updates were released every year. It also had server versions with 5 years of support. Now we have [Open Mandriva][10].
If you have more nostalgic distros to share from the earliest days of Linux release, please share with us in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/earliest-linux-distros/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/earliest-linux-distros.png?resize=800%2C450&ssl=1
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-0.12-Floppies.jpg?ssl=1
[3]: https://itsfoss.com/cool-retro-term/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/MCC-Interim-Linux-0.99.14-1993.jpg?fit=800%2C600&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/TAMU-Linux.jpg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/SLS-1.05-1994.jpg?ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/LGX_Yggdrasil_CD_Fall_1993.jpg?fit=781%2C800&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Yggdrasil-Linux-Summer-1994.jpg?ssl=1
[9]: https://en.wikipedia.org/wiki/Mandriva_Linux
[10]: https://www.openmandriva.org/

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Decentralized Slack Alternative Riot Releases its First Stable Version)
[#]: via: (https://itsfoss.com/riot-stable-release/)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)
Decentralized Slack Alternative Riot Releases its First Stable Version
======
Remember [Riot messenger][1]? Its decentralized, encrypted open source messaging software based on the [Matrix protocol][2].
I wrote a [detailed tutorial on using Riot on Linux desktop][3]. The software was in beta back then. The first stable version, Riot 1.0, was released a few days ago. Wonder whats new?
![][4]
### New Features in Riot 1.0
Lets look at some of the changes which were introduced in the move to Riot 1.0.
#### New Looks and Branding
![][5]
The first thing that you see is the welcome screen, which has a nice background and a refreshed sky-blue and dark-blue logo that is cleaner and clearer than the previous one.
The welcome screen gives you the option to sign into an existing Riot account on either matrix.org or any other homeserver, or to create an account. There is also the option to talk with the Riot Bot and browse the room directory listing.
#### Changing Homeservers and Making your own homeserver
![Make your own homeserver][6]
As you can see, here you can change the homeserver. The idea of Riot, as shared before, is to have [de-centralized][7] chat services without foregoing the simplicity that centralized services offer. For those who want to run their own homeservers, you need the new [matrix-synapse 0.99.1.1 reference homeserver][8].
You can find an unofficial list of Matrix homeservers [here][9], although its far from complete.
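If you would rather host your own, one common route is the official Synapse Docker image; a minimal sketch, assuming Docker is installed, with a placeholder data path and server name:
```
# generate an initial config for your server name
$ docker run -it --rm -v /path/to/synapse-data:/data \
    -e SYNAPSE_SERVER_NAME=example.com -e SYNAPSE_REPORT_STATS=no \
    matrixdotorg/synapse:latest generate
# then start the homeserver, listening on port 8008
$ docker run -d --name synapse -v /path/to/synapse-data:/data \
    -p 8008:8008 matrixdotorg/synapse:latest
```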
#### Internationalization and Languages
One of the more interesting changes is that the UI is now i18n-aware and has been translated to Catalan, Danish, German and Spanish, along with English (US), which was the default when I installed it. We can hope to see some more improvements in language support going ahead.
#### Favoriting a channel
![Favoriting a channel in Riot][10]
One of the things that has changed since last time is how you favorite a channel. Now, as you can see, you select the channel, click on the three vertical dots in it, and then either favorite it or do whatever else you want with it.
#### Making changes to your profile and Settings
![Riot Different settings you can do. ][11]
Just click the drop-down box beside your avatar and you get the settings box; clicking it presents a wide variety of settings you can change.
As you can see, there are a lot more choices, and the language is easier than before.
#### Encryption and E2E
![Riot encryption screen][12]
One of the big things Riot has been talked about for is encryption, in particular end-to-end encryption. This is still a work in progress.
The new release focuses on two enhancements in encryption: key backup and emoji device verification (still in progress).
With Riot 1.0, you can automatically backup your keys on your server. This key itself will be encrypted with a password so that it is stored securely. With this, youll never lose your encrypted message because you wont lose your encryption key.
You will soon be able to verify your device with emoji, which is easier than matching long strings, isnt it?
**In the end**
Using Riot requires a bit of patience. Once you get the hang of it, there is nothing like it. This decentralized messaging app becomes an important tool in the arsenal of privacy-conscious people.
Riot is an important tool in the continuous effort to keep our data secure and privacy intact. The new major release makes it even more awesome. What do you think?
--------------------------------------------------------------------------------
via: https://itsfoss.com/riot-stable-release/
作者:[Shirish][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://about.riot.im/
[2]: https://matrix.org/blog/home/
[3]: https://itsfoss.com/riot-desktop/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-messenger.jpg?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-im-web-1.0-welcome-screen.jpg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-change-homeservers.jpg?resize=800%2C420&ssl=1
[7]: https://medium.com/s/story/why-decentralization-matters-5e3f79f7638e
[8]: https://github.com/matrix-org/synapse/releases/tag/v0.99.1.1
[9]: https://www.hello-matrix.net/public_servers.php
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-channel-preferences.jpg?resize=800%2C420&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-settings-1-e1550427251686.png?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-encryption.jpg?fit=800%2C572&ssl=1

View File

@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevOps for Network Engineers: Linux Foundations New Training Course)
[#]: via: (https://itsfoss.com/devops-for-network-engineers/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
DevOps for Network Engineers: Linux Foundations New Training Course
======
_**Linux Foundation has launched a [DevOps course for sysadmins][1] and network engineers. They are also offering a limited-time 40% discount on the launch.**_
DevOps is no longer a buzzword. It has become a necessity for any IT company.
The role and responsibility of a sysadmin and a network engineer have changed as well. They are required to have knowledge of the DevOps tools popular in the IT industry.
If you are a sysadmin or a network engineer, you can no longer laugh off DevOps. Its time to learn new skills to stay relevant in todays rapidly changing IT industry; otherwise, the automation trend might cost you your job.
And who knows it better than the Linux Foundation, the official organization behind the Linux project and the employer of Linux creator Linus Torvalds?
[Linux Foundation has a number of courses on Linux and related technologies][2] that help you in getting a job or improving your existing skills at work.
The [latest course offering][1] from Linux Foundation specifically focuses on sysadmins who would like to familiarize with DevOps tools.
### DevOps for Network Engineers Course
![][3]
[This course][1] is intended for existing sysadmins and network engineers. So you need to have some knowledge of Linux system administration, shell scripting and Python.
The course will help you with:
* Integrating into a DevOps/Agile environment
* Familiarizing with commonly used DevOps tools
  * Collaborating on projects as part of a DevOps team
* Confidently working with software and configuration files in version control
* Recognizing the roles of SCRUM team members
* Confidently applying Agile principles in an organization
This is the course outline:
* Chapter 1. Course Introduction
* Chapter 2. Modern Project Management
* Chapter 3. The DevOps Process: A Network Engineers Perspective
* Chapter 4. Network Simulation and Testing with [Mininet][4]
* Chapter 5. [OpenFlow][5] and [ONOS][6]
* Chapter 6. Infrastructure as Code ([Ansible][7] Basics)
* Chapter 7. Version Control ([Git][8])
* Chapter 8. Continuous Integration and Continuous Delivery ([Jenkins][9])
* Chapter 9. Using [Gerrit][10] in DevOps
* Chapter 10. Jenkins, Gerrit and Code Review for DevOps
* Chapter 11. The DevOps Process and Tools (Review)
Altogether, you get 25-30 hours of course material. The online course is self-paced and you can access the material for one year from the date of purchase.
_**Unlike most other courses on Linux Foundation, this is NOT a video course.**_
There is no certification for this course because it is more focused on learning and improving skills.
#### Get the course at a 40% discount (limited time)
The course costs $299 but since its just launched, they are offering 40% discount till March 1st, 2019. You can get the discount by using the **DEVOPSNET** coupon code at checkout.
[DevOps for Network Engineers][1]
By the way, if you are interested in Open Source development, you can benefit from the “[Introduction to Open Source Development, Git, and Linux][11]” video course. You can get a limited time 50% discount using **OSDEV50** code at the checkout.
Staying relevant is absolutely necessary in any industry, not just the IT industry. Learning new skills that are in demand in your industry is perhaps the best way to do so.
What do you think? What are your views on the current automation trend? How would you go about it?
_Disclaimer: This post contains affiliate links. Please read our_ [_affiliate policy_][12] _for more details._
--------------------------------------------------------------------------------
via: https://itsfoss.com/devops-for-network-engineers/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: http://shrsl.com/1glcb
[2]: https://shareasale.com/r.cfm?b=1074561&u=747593&m=59485&urllink=&afftrack=
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/DevOps-for-Network-Engineers-800x450.png?resize=800%2C450&ssl=1
[4]: http://mininet.org/
[5]: https://en.wikipedia.org/wiki/OpenFlow
[6]: https://onosproject.org/
[7]: https://www.ansible.com/
[8]: https://itsfoss.com/basic-git-commands-cheat-sheet/
[9]: https://jenkins.io/
[10]: https://www.gerritcodereview.com/
[11]: https://shareasale.com/r.cfm?b=1193750&u=747593&m=59485&urllink=&afftrack=
[12]: https://itsfoss.com/affiliate-policy/

View File

@ -1,63 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (sanfusu)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0: Redefining Financial Services [Part 3])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-redefining-financial-services/)
[#]: author: (EDITOR https://www.ostechnix.com/author/editor/)
Blockchain 2.0: Redefining Financial Services [Part 3]
======
![](https://www.ostechnix.com/wp-content/uploads/2019/03/Financial-Services-1-720x340.png)
The [**previous article of this series**][1] focused on building context to bring forth why moving our existing monetary system to a futuristic [**blockchain**][2] system is the next natural step in the evolution of “money”. We looked at the features of a blockchain platform which would aid in such a move. However, the financial markets are far more complex and composed of numerous other instruments that people trade rather than just a currency.
This part will explore the features of blockchain which will enable institutions to transform and interlace traditional banking and financing systems with it. As previously discussed, and proved, if enough people participate in a given blockchain network and support the protocols for transactions, the nominal value that can be attributed to the “token” increases and becomes more stable. Take, for instance, Bitcoin (BTC). Like the simple paper currency were all used to, cryptocurrencies such as Bitcoin and Ether can be utilized for all the formers purposes, from buying food to ships and from loaning money to insurance.
Chances are you are already involved with a bank or any other financial institution that makes use of blockchain ledger technology. The most significant uses of blockchain tech in the finance industry will be in setting up payments infrastructure, fund transfer technologies, and digital identity management. The latter two have traditionally been handled by legacy systems in the financial services industry. These systems are slowly being migrated to blockchain systems owing to their efficiency in handling work like this. The blockchain also offers high-quality data analytics solutions to these firms, an aspect that is quickly gaining prominence because of recent developments in data sciences.[1]
Considering the start-ups and projects at the cutting edge of innovation in this space first seems warranted due to their products or services already doing the rounds in the market today.
Starting with PayPal, an online payments company started in 1998, and now among the largest of such platforms, it is considered to be a benchmark in terms of operations and technical prowess. PayPal derives largely from the existing monetary system. Its contribution to innovation came from how it collected and leveraged consumer data to provide services online at instantaneous speeds. Online transactions are taken for granted today, with minimal innovation in the industry in terms of the tech its based on. Having a solid foundation is a good thing, but that wont give anyone an edge over their competition in this fast-paced IT world, with new standards and technology being pioneered every other day. In 2014, PayPal subsidiary **Braintree** announced partnerships with popular cryptocurrency payment solutions including **Coinbase** and **GoCoin**, in a bid to gradually integrate Bitcoin and other popular cryptocurrencies into its service platform. This basically gave its consumers a chance to explore and experience whats to come, under the familiar umbrella and reliability of PayPal. In fact, ride-hailing company **Uber** had an exclusive partnership with Braintree to allow customers to pay for rides using Bitcoin.[2][3]
**Ripple** is making it easier for people to operate between multiple blockchains. Ripple has been in the headlines for moving ahead with regional banks in the US, for instance, to facilitate transferring money bilaterally to other regional banks without the need for a 3rd party intermediary resulting in reduced cost and time overheads. Ripples **Codius platform** allows for interoperability between blockchains and opens the doors to smart contracts programmed into the system for minimal tampering and confusion. Built on technology that is highly advanced, secure and scalable to suit needs, Ripples platform currently has names such as UBS and Standard Chartered on their clients list. Many more are expected to join in.[4][5]
**Kraken** , a US-based cryptocurrency exchange operating in locations around the globe is known for their reliable **crypto quant** estimates even providing Bitcoin pricing data real time to the Bloomberg terminal. In 2015, they partnered with **Fidor Bank** to form what was then the worlds first Cryptocurrency Bank offering customers banking services and products which dealt with cryptocurrencies.[6]
**Circle** , another FinTech company is currently among the largest of its sorts involved with allowing users to invest and trade in cryptocurrency derived assets, similar to traditional money market assets.[7]
Companies such as **Wyre** and **Stellar** today have managed to bring down the lead time involved in international wire transfers from an average of 3 days to under 6 hours. Claims have been made saying that once a proper regulatory system is in place the same 6 hours can be brought down to a matter of seconds.[8]
Now while all of the above have focused on the start-up projects involved, it has to be remembered that the reach and capabilities of the older more respectable financial institutions should not be ignored. Institutions that have existed for decades if not centuries moving billions of dollars worldwide are equally interested in leveraging the blockchain and its potential.
As we already mentioned in the previous article, **JP Morgan** recently unveiled their plans to exploit cryptocurrencies and the ledger-like functionality of the blockchain for enterprises. The project, called **Quorum**, is defined as an **“Enterprise-ready distributed ledger and smart contract platform”**. The main goal is that gradually the bulk of the banks operations would one day be migrated to Quorum, thus cutting the significant investments that firms such as JP Morgan need to make in order to guarantee privacy, security, and transparency. Theyre claimed to be the only player in the industry now to have complete ownership over the whole stack of the blockchain, protocol, and token system. They also released a cryptocurrency called **JPM Coin** meant to be used in transacting high-volume settlements instantaneously. JPM Coin is among the first “stable coins” to be backed by a major bank such as JP Morgan. A stable coin is a cryptocurrency whose price is linked to an existing major monetary system. Quorum is also touted for its capability to process almost 100 transactions a second, which is leaps and bounds ahead of its contemporaries.[9]
**Barclays**, a British multinational financial giant, is reported to have registered two blockchain-based patents, supposedly with the aim of streamlining fund transfers and KYC procedures. Barclays proposals, though, are more aimed toward improving their banking operations efficiency. One of the applications deals with creating a private blockchain network for storing consumers KYC details. Once verified, stored and confirmed, these details are immutable and nullify the need for further verifications down the line. If implemented, the protocol will do away with the need for multiple verifications of KYC details. Developing and densely populated countries such as India, where a bulk of the population is yet to be inducted into a formal banking system, will find the innovative KYC system useful in reducing random errors and lead times involved in the process[10]. Barclays is also rumored to be exploring the capabilities of a blockchain system to address credit status ratings and insurance claims.
Such blockchain backed systems are designed to eliminate needless maintenance costs and leverage the power of smart contracts for enterprises which operate in industries where discretion, security, and speed determine competitive advantage. Being enterprise products, theyre built on protocols that ensure complete transaction and contract privacy along with a consensus mechanism which essentially nullifies corruption and bribery.
**PwCs Global Fintech Report** from 2017 states that by 2020, an estimated 77% of all Fintech companies will switch to blockchain-based technologies and processes for their operations. A whopping 90 percent of their respondents said that they were planning to adopt blockchain technology as part of an in-production system by 2020. Their judgments are not misplaced, as significant cost savings and transparency gains from a regulatory point of view are guaranteed by moving to a blockchain-based system.[11]
Since regulatory capabilities are built into the blockchain platform by default the migration of firms from legacy systems to modern networks running blockchain ledgers is a welcome move for industry regulators as well. Transactions and trade movements can be verified and tracked on the fly once and for all rather than after. This, in the long run, will likely result in better regulation and risk management. Not to mention improved accountability from the part of firms and individuals alike.[11]
While considerable investments in the space and leaping innovations are courtesy of large investments by established corporates it is misleading to think that such measures wouldnt permeate the benefits to the end user. As banks and financial institutions start to adopt the blockchain, it will result in increased cost savings and efficiency for them which will ultimately mean good for the end consumer too. The added benefits of transparency and fraud protection will improve customer sentiments and more importantly improve the trust that people place on the banking and financial system. A much-needed revolution in the financial services industry is possible with blockchains and their integration into traditional services.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-redefining-financial-services/
作者:[EDITOR][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/blockchain-2-0-revolutionizing-the-financial-system/
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/

View File

@ -0,0 +1,125 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mageia Linux Is a Modern Throwback to the Underdog Days)
[#]: via: (https://www.linux.com/BLOG/LEARN/2019/3/MAGEIA-LINUX-MODERN-THROWBACK-UNDERDOG-DAYS)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
Mageia Linux Is a Modern Throwback to the Underdog Days
======
![Welcome to Mageia][1]
The Mageia Welcome App is a boon for new Linux users.
[Used with permission][2]
Ive been using Linux long enough to remember Linux Mandrake. I recall, at one of my first-ever Linux conventions, hanging out with the MandrakeSoft crew and being starstruck to think that they were creating a Linux distribution that was sure to bring about world domination for the open source platform.
Well, that didnt happen. In fact, Linux Mandrake didnt even stand the test of time. It was renamed Mandriva and rebranded. Mandriva retained popularity but eventually came to a halt in 2011. The company disbanded, sending all those star developers to other projects. Of course, rising from the ashes of Mandrake Linux came the likes of [OpenMandriva][3], as well as another distribution called [Mageia Linux][4].
Like OpenMandriva, Mageia Linux is a fork of Mandriva. It was created (by a group of former Mandriva employees) in 2010 and first released in 2011, so there was next to no downtime between the end of Mandriva and the release of Mageia. Since then, Mageia has existed in the shadows of bigger, more popular flavors of Linux (e.g., Ubuntu, Mint, Fedora, Elementary OS, etc.), but its never faltered. As of this writing, Mageia is listed as number 26 on the [Distrowatch][5] Page Hit Ranking chart and is enjoying release number 6.1.
### What Sets Mageia Apart?
This question has become quite important when looking at Linux distributions, considering just how many distros there are—many of which are hard to tell apart. If youve seen one KDE, GNOME, or Xfce distribution, youve seen them all, right? Anyone who's used Linux enough knows this statement is not even remotely true. For many distributions, though, the differences lie in the subtleties. Its not about what you do with the desktop; its how you put everything together to improve the user experience.
Mageia Linux defaults to the KDE desktop and does as good a job as any other distribution at presenting KDE to users. But before you start using KDE, you should note some differences between Mageia and other distributions. To start, the installation is quite simple, but its slightly askew from what you might expect. In similar fashion to most modern distributions, you boot up the live instance and click on the Install icon (Figure 1).
![Installing Mageia][6]
Figure 1: Installing Mageia from the Live instance.
[Used with permission][2]
Once youve launched the installation app, its fairly straightforward, although not quite as simple as some other versions of Linux. New users might hesitate when they are presented with the partition choice between Use free space or Custom disk partition (remember, Im talking about new users here). This type of user might prefer a bit simpler verbiage. Consider this: What if you were presented (at the partition section) with two choices:
* Basic Install
* Custom Install
The Basic install path would choose a fairly standard set of options (e.g., using the whole disk for installation and placing the bootloader in the proper/logical place). In contrast, the Custom install would allow the user to install in a non-default fashion (for dual boot, etc.) and choose where the bootloader would go and what options to apply.
The next possible confusing step (again, for new users) is the bootloader (Figure 2). For those who have installed Linux before, this option is a no-brainer. For new users, even understanding what a bootloader does can be a bit of an obstacle.
![bootloader][7]
Figure 2: Configuring the Mageia bootloader.
[Used with permission][2]
The bootloader configuration screen also allows you to password protect GRUB2. Because of the layout of this screen, it could be confused with the root user password. Its not. If you dont want to password protect GRUB2, leave this blank. In the final installation screen (Figure 3), you can set any bootloader options you might want. Once again, we find a window that could cause confusion for new users.
![bootloader options][8]
Figure 3: Advanced bootloader options can be configured here.
[Used with permission][2]
Click Finish and the installation will complete. You might have noticed the absence of user configuration or root user password options. With the first stage of the installation complete, you reboot the machine, remove the installer media, and (when the machine reboots) youll then be prompted to configure both the root user password and a standard user account (Figure 4).
![Configuring your users][9]
Figure 4: Configuring your users.
[Used with permission][2]
And thats all there is to the Mageia installation.
### Welcome to Mageia
Once you log into Mageia, youll be greeted by something every Linux distribution should use—a welcome app (Figure 5).
![welcome app][10]
Figure 5: The Mageia welcome app is a new users best friend.
[Used with permission][2]
From this welcome app, you can get information about the distribution, get help, and join communities. The importance of having such an approach to greet users at login cannot be overstated. When new users log into Linux for the first time, they want to know that help is available, should they need it. Mageia Linux has done an outstanding job with this feature. Granted, all this app does is serve as a means to point users to various websites, but its important information for users to have at the ready.
Beyond the welcome app, the Mageia Control Center (Figure 6) also helps Mageia stand out. This one-stop-shop is where users can take care of installing/updating software, configuring media sources for installation, configure update frequency, manage/configure hardware, configure network devices (e.g., VPNs, proxies, and more), configure system services, view logs, open an administrator console, create network shares, and so much more. This is as close to the openSUSE YaST tool as youll find (without using either SUSE or openSUSE).
![Control Center][11]
Figure 6: The Mageia Control Center is an outstanding system management tool.
[Used with permission][2]
Beyond those two tools, youll find everything else you need to work. Mageia Linux comes with the likes of LibreOffice, Firefox, KMail, GIMP, Clementine, VLC, and more. Out of the box, youd be hard pressed to find another tool you need to install to get your work done. Its that complete a distribution.
### Target Audience
Figuring out the Mageia Linux target audience is a tough question to answer. If new users can get past the somewhat confusing installation (which isnt really that challenging, just slightly misleading), using Mageia Linux is a dream.
The slick, barely modified KDE desktop, combined with the welcome app and control center make for a desktop Linux that will let users of all skill levels feel perfectly at home. If the developers could tighten up the verbiage on the installation, Mageia Linux could be one of the greatest new user Linux experiences available. Until then, new users should make sure they understand what theyre getting into with the installation portion of this take on the Linux platform.
--------------------------------------------------------------------------------
via: https://www.linux.com/BLOG/LEARN/2019/3/MAGEIA-LINUX-MODERN-THROWBACK-UNDERDOG-DAYS
作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia-main.jpg?itok=ZmkbMxfM (Welcome to Mageia)
[2]: /LICENSES/CATEGORY/USED-PERMISSION
[3]: https://www.openmandriva.org/
[4]: https://www.mageia.org/en/
[5]: https://distrowatch.com/
[6]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_1.jpg?itok=RYXPU70j (Installing Mageia)
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_2.jpg?itok=m2IPxgA4 (bootloader)
[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_3.jpg?itok=Bs2PPrMF (bootloader options)
[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_4.jpg?itok=YZBIZ0Ua (Configuring your users)
[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_5.jpg?itok=gYcTfUKv (welcome app)
[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_6.jpg?itok=eSl2qpPp (Control Center)

View File

@ -1,73 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Sweet Home 3D: An open source tool to help you decide on your dream home)
[#]: via: (https://opensource.com/article/19/3/tool-find-home)
[#]: author: (Jeff Macharyas (Community Moderator) )
Sweet Home 3D: An open source tool to help you decide on your dream home
======
Interior design application makes it easy to render your favorite house—real or imaginary.
![Houses in a row][1]
I recently accepted a new job in Virginia. Since my wife was working and watching our house in New York until it sold, it was my responsibility to go out and find a new house for us and our cat. A house that she would not see until we moved into it!
I contracted with a real estate agent and looked at a few houses, taking many pictures and writing down illegible notes. At night, I would upload the photos into a Google Drive folder, and my wife and I would review them simultaneously over the phone while I tried to remember whether the room was on the right or the left, whether it had a fan, etc.
Since this was a rather tedious and not very accurate way to present my findings, I went in search of an open source solution to better illustrate what our future dream house would look like that wouldn't hinge on my fuzzy memory and blurry photos.
[Sweet Home 3D][2] did exactly what I wanted it to do. Sweet Home 3D is available on Sourceforge and released under the GNU General Public License. The [website][3] is very informative, and I was able to get it up and running in no time. Sweet Home 3D was developed by Paris-based Emmanuel Puybaret of eTeks.
### Hanging the drywall
I downloaded Sweet Home 3D onto my MacBook Pro and added a PNG version of a flat floorplan of a house to use as a background base map.
From there, it was a simple matter of using the Rooms palette to trace the pattern and set the "real life" dimensions. After I mapped the rooms, I added the walls, which I could customize by color, thickness, height, etc.
![Sweet Home 3D floorplan][5]
Now that I had the "drywall" built, I downloaded various pieces of "furniture" from a large array that includes actual furniture as well as doors, windows, shelves, and more. Each item downloads as a ZIP file, so I created a folder of all my uncompressed pieces. I could customize each piece of furniture, and repetitive items, such as doors, were easy to copy-and-paste into place.
Once I had all my walls and doors and windows in place, I used the application's 3D view to navigate through the house. Drawing upon my photos and memory, I made adjustments to all the objects until I had a close representation of the house. I could have spent more time modifying the house by adding textures, additional furniture, and objects, but I got it to the point I needed.
![Sweet Home 3D floorplan][7]
After I finished, I exported the plan as an OBJ file, which can be opened in a variety of programs, such as [Blender][8] and Preview on the Mac, to spin the house around and examine it from various angles. The Video function was most useful, as I could create a starting point, draw a path through the house, and record the "journey." I exported the video as a MOV file, which I opened and viewed on the Mac using QuickTime.
My wife was able to see (almost) exactly what I saw, and we could even start arranging furniture ahead of the move, too. Now, all I have to do is load up the moving truck and head south.
Sweet Home 3D will also prove useful at my new job. I was looking for a way to improve the map of the college's buildings and was planning to just re-draw it in [Inkscape][9] or Illustrator or something. However, since I have the flat map, I can use Sweet Home 3D to create a 3D version of the floorplan and upload it to our website to make finding the bathrooms so much easier!
### An open source crime scene?
An interesting aside: according to the [Sweet Home 3D blog][10], "the French Forensic Police Office (Scientific Police) recently chose Sweet Home 3D as a tool to design plans [to represent roads and crime scenes]. This is a concrete application of the recommendation of the French government to give the preference to free open source solutions."
This is one more bit of evidence of how open source solutions are being used by citizens and governments to create personal projects, solve crimes, and build worlds.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/tool-find-home
作者:[Jeff Macharyas (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
[2]: https://sourceforge.net/projects/sweethome3d/
[3]: http://www.sweethome3d.com/
[4]: /file/426441
[5]: https://opensource.com/sites/default/files/uploads/virginia-house-create-screenshot.png (Sweet Home 3D floorplan)
[6]: /file/426451
[7]: https://opensource.com/sites/default/files/uploads/virginia-house-3d-screenshot.png (Sweet Home 3D floorplan)
[8]: https://opensource.com/article/18/5/blender-hotkey-cheat-sheet
[9]: https://opensource.com/article/19/1/inkscape-cheat-sheet
[10]: http://www.sweethome3d.com/blog/2018/12/10/customization_for_the_forensic_police.html

View File

@ -0,0 +1,301 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Configure sudo Access In Linux?)
[#]: via: (https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Configure sudo Access In Linux?
======
The root user is the most powerful user in a Linux system and has full control over it.
Even so, you should not hand out root access to anybody who wants to perform some administrative action, because if they do something wrong there is no way to rectify it.
So what is the solution? We can grant sudo permission to the corresponding user to overcome this situation.
The sudo command offers a mechanism for providing trusted users with administrative access to a system without sharing the password of the root user.
sudo users can perform most of the administrative operations, but not everything root can.
### What Is sudo?
sudo is a program which can be used by normal users to execute a command as the superuser or another user, as specified by the security policy.
sudo users access is controlled by the `/etc/sudoers` file.
### What Is An Advantage Of sudo Users?
sudo is a safe way to run commands on a Linux system, even if you are not familiar with them.
  * The system keeps logs in the `/var/log/secure` (RHEL-based) and `/var/log/auth.log` (Debian-based) files, where you can verify which actions were performed by a sudo user.
  * Every time, it prompts for a password before performing the current action, so you get a moment to verify the action you are about to perform. If you feel its not the correct action, you can safely exit right there without performing it.
The configuration differs between RHEL-based systems such as Red Hat (RHEL), CentOS and Oracle Enterprise Linux (OEL), and Debian-based systems such as Debian, Ubuntu and Linux Mint.
We will teach you how to perform this on both families of distributions in this article.
It can be done in three ways on both (a quick verification tip follows the list).
  * Add the user to the corresponding group. For RHEL-based systems, we need to add the user to the `wheel` group. For Debian-based systems, we need to add the user to the `sudo` or `admin` group.
  * Add the user to the `/etc/group` file manually.
  * Add the user to the `/etc/sudoers` file using visudo.
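Whichever method you use, you can afterwards verify what a user may run, for example with sudos built-in list option:
```
$ sudo -l    # lists the commands the invoking user is allowed to run via sudo
```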
### How To Configure sudo Access In RHEL/CentOS/OEL Systems?
It can be done on RHEL-based systems such as Red Hat (RHEL), CentOS and Oracle Enterprise Linux (OEL) using the following three methods.
### Method-1: How To Grant The Super User Access To A Normal User In Linux Using wheel Group?
Wheel is a special group in RHEL-based systems that provides additional privileges, empowering a user to execute restricted commands as the superuser.
Note that the `wheel` group should be enabled in the `/etc/sudoers` file to gain this access.
```
# grep -i wheel /etc/sudoers
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
# %wheel ALL=(ALL) NOPASSWD: ALL
```
I assume that you have already created a user account for this. In my case, Im going to use the `daygeek` user account.
Run the following command to add the user to the wheel group.
```
# usermod -aG wheel daygeek
```
We can double-check this by running the following command.
```
# getent group wheel
wheel:x:10:daygeek
```
Im going to check whether the `daygeek` user can access a file owned by the root user.
```
$ tail -5 /var/log/secure
tail: cannot open _/var/log/secure_ for reading: Permission denied
```
I got an error when I tried to access the `/var/log/secure` file as a normal user. Now Im going to access the same file with sudo. Lets see the magic.
```
$ sudo tail -5 /var/log/secure
[sudo] password for daygeek:
Mar 17 07:01:56 CentOS7 sudo: daygeek : TTY=pts/0 ; PWD=/home/daygeek ; USER=root ; COMMAND=/bin/tail -5 /var/log/secure
Mar 17 07:01:56 CentOS7 sudo: pam_unix(sudo:session): session opened for user root by daygeek(uid=0)
Mar 17 07:01:56 CentOS7 sudo: pam_unix(sudo:session): session closed for user root
Mar 17 07:05:10 CentOS7 sudo: daygeek : TTY=pts/0 ; PWD=/home/daygeek ; USER=root ; COMMAND=/bin/tail -5 /var/log/secure
Mar 17 07:05:10 CentOS7 sudo: pam_unix(sudo:session): session opened for user root by daygeek(uid=0)
```
### Method-2: How To Grant The Super User Access To A Normal User In RHEL/CentOS/OEL using /etc/group file?
We can manually add a user to the wheel group by editing the `/etc/group` file.
Just open the file and append the corresponding user to the appropriate group to achieve this.
```
$ grep -i wheel /etc/group
wheel:x:10:daygeek,user1
```
In this example, Im going to use the `user1` account.
Im going to check whether the `user1` user has sudo access by restarting the `Apache` service on the system. Lets see the magic.
```
$ sudo systemctl restart httpd
[sudo] password for user1:
$ sudo grep -i user1 /var/log/secure
[sudo] password for user1:
Mar 17 07:09:47 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/systemctl restart httpd
Mar 17 07:10:40 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/systemctl restart httpd
Mar 17 07:12:35 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/grep -i httpd /var/log/secure
```
### Method-3: How To Grant The Super User Access To A Normal User In Linux Using /etc/sudoers file?
sudo users access is controlled by the `/etc/sudoers` file. So, simply add the user to the sudoers file, below the wheel group entry.
Just append the desired user to the `/etc/sudoers` file using the visudo command.
```
# grep -i user2 /etc/sudoers
user2 ALL=(ALL) ALL
```
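A side note: the sudoers syntax also lets you grant access to specific commands only, instead of `ALL`. A minimal sketch, assuming the standard CentOS 7 command paths used in the logs above:
```
user2 ALL=(ALL) /bin/systemctl restart httpd, /bin/tail
```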
In this example, Im going to use the `user2` account.
Im going to check whether the `user2` user has sudo access by restarting the `MariaDB` service on the system. Lets see the magic.
```
$ sudo systemctl restart mariadb
[sudo] password for user2:
$ sudo grep -i mariadb /var/log/secure
[sudo] password for user2:
Mar 17 07:23:10 CentOS7 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/systemctl restart mariadb
Mar 17 07:26:52 CentOS7 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/grep -i mariadb /var/log/secure
```
### How To Configure sudo Access In Debian/Ubuntu Systems?
It can be done on Debian-based systems such as Debian, Ubuntu and Linux Mint using the following three methods.
### Method-1: How To Grant The Super User Access To A Normal User In Linux Using sudo or admin Groups?
`sudo` and `admin` are special groups in Debian-based systems that provide additional privileges, empowering a user to execute restricted commands as the superuser.
Note that the `sudo` or `admin` group should be enabled in the `/etc/sudoers` file to gain this access.
```
# grep -i 'sudo\|admin' /etc/sudoers
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
```
I assume that you have already created a user account for this. In my case, Im going to use the `2gadmin` user account.
Run the following command to add the user to the sudo group.
```
# usermod -aG sudo 2gadmin
```
We can double-check this by running the following command.
```
# getent group sudo
sudo:x:27:2gadmin
```
Im going to check whether the `2gadmin` user can access a file owned by the root user.
```
$ less /var/log/auth.log
/var/log/auth.log: Permission denied
```
I got an error when I tried to access the `/var/log/auth.log` file as a normal user. Now Im going to access the same file with sudo. Lets see the magic.
```
$ sudo tail -5 /var/log/auth.log
[sudo] password for 2gadmin:
Mar 17 20:39:47 Ubuntu18 sudo: 2gadmin : TTY=pts/0 ; PWD=/home/2gadmin ; USER=root ; COMMAND=/bin/bash
Mar 17 20:39:47 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by 2gadmin(uid=0)
Mar 17 20:40:23 Ubuntu18 sudo: pam_unix(sudo:session): session closed for user root
Mar 17 20:40:48 Ubuntu18 sudo: 2gadmin : TTY=pts/0 ; PWD=/home/2gadmin ; USER=root ; COMMAND=/usr/bin/tail -5 /var/log/auth.log
Mar 17 20:40:48 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by 2gadmin(uid=0)
```
Alternatively, we can achieve the same by adding the user to the `admin` group.
Run the following command to add the user to the admin group.
```
# usermod -aG admin user1
```
We can double-check this by running the following command.
```
# getent group admin
admin:x:1011:user1
```
Lets see the output.
```
$ sudo tail -2 /var/log/auth.log
[sudo] password for user1:
Mar 17 20:53:36 Ubuntu18 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/usr/bin/tail -2 /var/log/auth.log
Mar 17 20:53:36 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user1(uid=0)
```
### Method-2: How To Grant The Super User Access To A Normal User In Debian/Ubuntu using /etc/group file?
We can manually add a user to the sudo or admin group by editing the `/etc/group` file.
Just open the file and append the corresponding user to the appropriate group to achieve this.
```
$ grep -i sudo /etc/group
sudo:x:27:2gadmin,user2
```
In this example, Im going to use the `user2` account.
Im going to check whether the `user2` user has sudo access by restarting the `Apache` service on the system. Lets see the magic.
```
$ sudo systemctl restart apache2
[sudo] password for user2:
$ sudo tail -f /var/log/auth.log
[sudo] password for user2:
Mar 17 21:01:04 Ubuntu18 systemd-logind[559]: New session 22 of user user2.
Mar 17 21:01:04 Ubuntu18 systemd: pam_unix(systemd-user:session): session opened for user user2 by (uid=0)
Mar 17 21:01:33 Ubuntu18 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/systemctl restart apache2
```
### Method-3: How To Grant The Super User Access To A Normal User In Linux Using /etc/sudoers file?
sudo users access is controlled by the `/etc/sudoers` file. So, simply add the user to the sudoers file, below the sudo or admin group entries.
Just append the desired user to the `/etc/sudoers` file using the visudo command.
```
# grep -i user3 /etc/sudoers
user3 ALL=(ALL:ALL) ALL
```
In this example, Im going to use the `user3` account.
Im going to check whether the `user3` user has sudo access by restarting the `MariaDB` service on the system. Lets see the magic.
```
$ sudo systemctl restart mariadb
[sudo] password for user3:
$ sudo tail -f /var/log/auth.log
[sudo] password for user3:
Mar 17 21:12:32 Ubuntu18 systemd-logind[559]: New session 24 of user user3.
Mar 17 21:12:49 Ubuntu18 sudo: user3 : TTY=pts/0 ; PWD=/home/user3 ; USER=root ; COMMAND=/bin/systemctl restart mariadb
Mar 17 21:12:49 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user3(uid=0)
Mar 17 21:12:53 Ubuntu18 sudo: pam_unix(sudo:session): session closed for user root
Mar 17 21:13:08 Ubuntu18 sudo: user3 : TTY=pts/0 ; PWD=/home/user3 ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log
Mar 17 21:13:08 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user3(uid=0)
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,150 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Lets try dwm — dynamic window manager)
[#]: via: (https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/)
[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)
Lets try dwm — dynamic window manager
======
![][1]
If you like efficiency and minimalism, and are looking for a new window manager for your Linux desktop, you should try _dwm_ — dynamic window manager. Written in under 2000 standard lines of code, dwm is an extremely fast yet powerful and highly customizable window manager.
You can dynamically choose between tiling, monocle and floating layouts, organize your windows into multiple workspaces using tags, and quickly navigate through them using keyboard shortcuts. This article helps you get started with dwm.
## **Installation**
To install dwm on Fedora, run:
```
$ sudo dnf install dwm dwm-user
```
The _dwm_ package installs the window manager itself, and the _dwm-user_ package significantly simplifies configuration, as will be explained later in this article.
Additionally, to be able to lock the screen when needed, well also install _slock_ — a simple X display locker.
```
$ sudo dnf install slock
```
However, you can use a different one based on your personal preference.
## **Quick start**
To start dwm, choose the _dwm-user_ option on the login screen.
![][2]
After you log in, youll see a very simple desktop. In fact, the only thing there will be is a bar at the top, listing nine tags that represent workspaces and a _[]=_ symbol that represents the layout of your windows.
### Launching applications
Before looking into the layouts, first launch some applications so you can play with the layouts as you go. Apps can be started by pressing _Alt+p_ and typing the name of the app followed by _Enter_. There's also a shortcut, _Alt+Shift+Enter_, for opening a terminal.
Now that some apps are running, have a look at the layouts.
### Layouts
There are three layouts available by default: the tiling layout, the monocle layout, and the floating layout.
The tiling layout, represented by _[]=_ on the bar, organizes windows into two main areas: master on the left, and stack on the right. You can activate the tiling layout by pressing _Alt+t._
![][3]
The idea behind the tiling layout is that you have your primary window in the master area while still seeing the other ones in the stack. You can quickly switch between them as needed.
To swap windows between the two areas, hover your mouse over one in the stack area and press _Alt+Enter_ to swap it with the one in the master area.
![][4]
The monocle layout, represented by _[N]_ on the top bar, makes your primary window take the whole screen. You can switch to it by pressing _Alt+m_.
Finally, the floating layout lets you move and resize your windows freely. The shortcut for it is _Alt+f_ and the symbol on the top bar is _><>_.
### Workspaces and tags
Each window is assigned to a tag (1-9) listed at the top bar. To view a specific tag, either click on its number using your mouse or press _Alt+1..9._ You can even view multiple tags at once by clicking on their number using the secondary mouse button.
Windows can be moved between different tags by highlighting them using your mouse, and pressing _Alt+Shift+1..9._
## **Configuration**
To make dwm as minimalistic as possible, it doesn't use typical configuration files. Instead, you modify a C header file representing the configuration, and recompile it. But don't worry: in Fedora, it's as simple as editing one file in your home directory, and everything else happens in the background thanks to the _dwm-user_ package provided by the maintainer in Fedora.
First, you need to copy the file into your home directory using a command similar to the following:
```
$ mkdir ~/.dwm
$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
```
You can get the exact path by running _man dwm-start._
Second, just edit the _~/.dwm/config.h_ file. As an example, let's configure a new shortcut to lock the screen by pressing _Alt+Shift+L_.
Since we've installed the _slock_ package mentioned earlier in this post, we need to add the following two lines to the file to make it work:
Under the _/* commands */_ comment, add:
```
static const char *slockcmd[] = { "slock", NULL };
```
And the following line into _static Key keys[]_:
```
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
```
In the end, it should look as follows (the added lines are highlighted):
```
...
/* commands */
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
static const char *termcmd[] = { "st", NULL };
static const char *slockcmd[] = { "slock", NULL };
static Key keys[] = {
/* modifier key function argument */
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
{ MODKEY, XK_p, spawn, {.v = dmenucmd } },
{ MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
...
```
Save the file.
Finally, just log out by pressing _Alt+Shift+q_ and log in again. The scripts provided by the _dwm-user_ package will recognize that you have changed the _config.h_ file in your home directory and recompile dwm on login. And because dwm is so tiny, it's fast enough that you won't even notice it.
You can try locking your screen now by pressing _Alt+Shift+L_, then log back in by typing your password and pressing Enter.
## **Conclusion**
If you like minimalism and want a very fast yet powerful window manager, dwm might be just what you've been looking for. However, it probably isn't for beginners: you may need to do a lot of additional configuration to make it work just the way you like.
To learn more about dwm, see the projects homepage at <https://dwm.suckless.org/>.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/
作者:[Adam Šamalík][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/asamalik/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-magazine-image-816x345.png
[2]: https://fedoramagazine.org/wp-content/uploads/2019/03/choosing-dwm-1024x469.png
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-desktop-1024x593.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-2019-03-15-at-11.12.32-1024x592.png

View File

@ -0,0 +1,157 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Jaeger to build an Istio service mesh)
[#]: via: (https://opensource.com/article/19/3/getting-started-jaeger)
[#]: author: (Daniel Oh (Red Hat) https://opensource.com/users/daniel-oh)
Getting started with Jaeger to build an Istio service mesh
======
Improve monitoring and tracing of cloud-native apps on a distributed networking system.
![Mesh networking connected dots][1]
[Service mesh][2] provides a dedicated network for service-to-service communication in a transparent way. [Istio][3] aims to help developers and operators address service mesh features such as dynamic service discovery, mutual transport layer security (TLS), circuit breakers, rate limiting, and tracing. [Jaeger][4] with Istio augments monitoring and tracing of cloud-native apps on a distributed networking system. This article explains how to get started with Jaeger to build an Istio service mesh on the Kubernetes platform.
### Spinning up a Kubernetes cluster
[Minikube][5] allows you to run a single-node Kubernetes cluster based on a virtual machine such as [KVM][6], [VirtualBox][7], or [HyperKit][8] on your local machine. [Install Minikube][9] and use the following shell script to run it:
```
#!/bin/bash
export MINIKUBE_PROFILE_NAME=istio-jaeger
minikube profile $MINIKUBE_PROFILE_NAME
minikube config set cpus 3
minikube config set memory 8192
# You need to replace appropriate VM driver on your local machine
minikube config set vm-driver hyperkit
minikube start
```
In the above script, replace the **vm-driver** value (**hyperkit** in this example) with the appropriate virtual machine driver for your operating system (OS).
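For instance, on a Linux host with KVM, that line might instead read (the kvm2 driver shown here is just one option):
```
minikube config set vm-driver kvm2
```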
### Deploying Istio service mesh with Jaeger
Download the Istio installation file for your OS from the [Istio release page][10]. In the Istio package directory, you will find the Kubernetes installation YAML files in **install/** and the sample applications in **sample/**. Use the following commands:
```
$ curl -L https://git.io/getLatestIstio | sh -
$ cd istio-1.0.5
$ export PATH=$PWD/bin:$PATH
```
The easiest way to deploy Istio with Jaeger on your Kubernetes cluster is to use [Custom Resource Definitions][11]. Install Istio with mutual TLS authentication between sidecars with these commands:
```
$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
$ kubectl apply -f install/kubernetes/istio-demo-auth.yaml
```
Check if all pods of Istio on your Kubernetes cluster are deployed and running correctly by using the following command and review the output:
```
$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-59b8896965-p2vgs 1/1 Running 0 3h
istio-citadel-856f994c58-tk8kq 1/1 Running 0 3h
istio-cleanup-secrets-mq54t 0/1 Completed 0 3h
istio-egressgateway-5649fcf57-n5ql5 1/1 Running 0 3h
istio-galley-7665f65c9c-wx8k7 1/1 Running 0 3h
istio-grafana-post-install-nh5rw 0/1 Completed 0 3h
istio-ingressgateway-6755b9bbf6-4lf8m 1/1 Running 0 3h
istio-pilot-698959c67b-d2zgm 2/2 Running 0 3h
istio-policy-6fcb6d655f-lfkm5 2/2 Running 0 3h
istio-security-post-install-st5xc 0/1 Completed 0 3h
istio-sidecar-injector-768c79f7bf-9rjgm 1/1 Running 0 3h
istio-telemetry-664d896cf5-wwcfw 2/2 Running 0 3h
istio-tracing-6b994895fd-h6s9h 1/1 Running 0 3h
prometheus-76b7745b64-hzm27 1/1 Running 0 3h
servicegraph-5c4485945b-mk22d 1/1 Running 1 3h
```
### Building sample microservice apps
You can use the [Bookinfo][12] app to learn about Istio's features. Bookinfo consists of four microservice apps: _productpage_ , _details_ , _reviews_ , and _ratings_ deployed independently on Minikube. Each microservice will be deployed with an Envoy sidecar via Istio by using the following commands:
```
// Enable sidecar injection automatically
$ kubectl label namespace default istio-injection=enabled
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
// Export the ingress IP, ports, and gateway URL
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
$ export INGRESS_HOST=$(minikube ip)
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
```
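Before generating traffic, you can sanity-check the gateway with a single request; a command like the following should print 200 once the Bookinfo pods are ready:
```
$ curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
```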
### Accessing the Jaeger dashboard
To view tracing information for each HTTP request, create some traffic by running the following commands at the command line:
```
$ while true; do
curl -s http://${GATEWAY_URL}/productpage > /dev/null
echo -n .;
sleep 0.2
done
```
You can access the Jaeger dashboard through a web browser with [http://localhost:16686][13] if you set up port forwarding as follows:
```
kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 16686:16686 &
```
You can explore all traces by clicking "Find Traces" after selecting the _productpage_ service. Your dashboard will look similar to this:
![Find traces in Jaeger][14]
You can also view more details about each trace to dig into performance issues or elapsed time by clicking on a certain trace.
![Viewing details about a trace][15]
### Conclusion
A distributed tracing platform allows you to understand what happened from service to service for individual ingress/egress traffic. Istio sends individual trace information automatically to Jaeger, the distributed tracing platform, even if your modern applications aren't aware of Jaeger at all. In the end, this capability helps developers and operators troubleshoot more easily and quickly at scale.
* * *
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/getting-started-jaeger
作者:[Daniel Oh (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3 (Mesh networking connected dots)
[2]: https://blog.buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/
[3]: https://istio.io/docs/concepts/what-is-istio/
[4]: https://www.jaegertracing.io/docs/1.9/
[5]: https://opensource.com/article/18/10/getting-started-minikube
[6]: https://www.linux-kvm.org/page/Main_Page
[7]: https://www.virtualbox.org/wiki/Downloads
[8]: https://github.com/moby/hyperkit
[9]: https://kubernetes.io/docs/tasks/tools/install-minikube/
[10]: https://github.com/istio/istio/releases
[11]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions
[12]: https://github.com/istio/istio/tree/master/samples/bookinfo
[13]: http://localhost:16686/
[14]: https://opensource.com/sites/default/files/uploads/traces_productpages.png (Find traces in Jaeger)
[15]: https://opensource.com/sites/default/files/uploads/traces_performance.png (Viewing details about a trace)

View File

@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 ways to jumpstart productivity at work)
[#]: via: (https://opensource.com/article/19/3/guide-being-more-productive)
[#]: author: (Sarah Wall https://opensource.com/users/sarahwall)
4 ways to jumpstart productivity at work
======
This article includes six open source productivity tools.
![][1]
Time poverty—the idea that there's not enough time to do all the work we need to do—is it a perception or a reality?
The truth is you'll never get more than 24 hours out of any day. Working longer hours doesn't help. Your productivity actually decreases the longer you work in a given day. Your perception, or intuitive understanding of your time, is what matters. One key to managing productivity is how you use the time you've got.
You have lots of time that you can use more efficiently, including time lost to ineffective meetings, distractions, and context switching between tasks. By spending your time more wisely, you can get more done and achieve higher overall job performance. You will also have a higher level of job satisfaction and feel lower levels of stress.
### Jumpstart your productivity
#### 1. Eliminate distractions
When you have too many things vying for your attention, it slows you down and decreases your productivity. Do your best to remove every distraction that pulls you off tasks.
Cellphones, email, and messaging apps are the most common drains on productivity. Set the ringer on your phone to vibrate, set specific times for checking email, and close irrelevant browser tabs. With this approach, your work will be interrupted less throughout the day.
#### 2. Make your to-do list _verb-oriented_
To-do lists are a great way to help you focus on exactly what you need to accomplish each day. Some people do best with a physical list, like a notebook, and others do better with digital tools. Check out these suggestions for [open source productivity tools][2] to help you manage your workflow. Or check these six open source tools to stay organized:
* [Joplin, a note-taking app][3]
* [Wekan, an open source kanban board][4]
* [TaskBoard, a lightweight kanban board][5]
* [Go For It, a flexible to-do list application][6]
* [Org mode without Emacs][7]
* [Freeplane, an open source mind-mapping application][8]
Your list can be as sophisticated or as simple as you like, but just making a list is not enough. What goes on your list makes all the difference. Every item that goes on your list should be actionable. The trick is to make sure there's a verb. For example, "Smith project" is not actionable enough. "Outline key deliverables on Smith project" gives you a more concrete task to complete.
#### 3. Stick to the 10-minute rule
Overwhelmed by an unclear or unwieldy task? Break it into 10-minute mini-tasks instead. This can be a great way to take something unmanageable and turn it into something achievable.
The beauty of 10-minute tasks is they can be fit into many parts of your day. When you get into the office in the morning and are feeling fresh, kick off your day with a burst of productivity with a few 10-minute tasks. Losing momentum in the afternoon? A 10-minute job can help you regain speed.
Ten-minute tasks are also a good way to identify tasks that can be delegated to others. The ability to delegate work is often one of the most effective management techniques. By finding a simple task that can be accomplished by another member of your team, you can make short work of a big job.
#### 4. Take a break
Another drain on productivity is the urge to keep pressing ahead on a task to complete it without taking a break. Suddenly you feel really fatigued or hungry, and you realize you haven't gone to the bathroom in hours! Your concentration is affected, and therefore your productivity decreases.
Set benchmarks for taking breaks and stick to them. For example, commit to once per hour to get up and move around for five minutes. If you're pressed for time, stand up and stretch for two minutes. Changing your body position and focusing on the present moment will help relieve any mental tension that has built up.
Hydrate your mind with a glass of water. When your body is not properly hydrated, it can put increased stress on your brain. As little as a one to three percent decrease in hydration can negatively affect your memory, concentration, and decision-making.
### Don't fall into the time-poverty trap
Time is limited and time poverty is just an idea. How you choose to spend the time you have each day is what's important. When you develop new, healthy habits, you can increase your productivity and direct your time in the ways that give the most value.
* * *
_This article was adapted from "[The Keys to Productivity][9]" on ImageX's blog._
_Sarah Wall will present [Mindless multitasking: a dummy's guide to productivity][10] at [DrupalCon][11] in Seattle, April 8-12, 2019._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/guide-being-more-productive
作者:[Sarah Wall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sarahwall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj
[2]: https://opensource.com/article/16/11/open-source-productivity-hacks
[3]: https://opensource.com/article/19/1/productivity-tool-joplin
[4]: https://opensource.com/article/19/1/productivity-tool-wekan
[5]: https://opensource.com/article/19/1/productivity-tool-taskboard
[6]: https://opensource.com/article/19/1/productivity-tool-go-for-it
[7]: https://opensource.com/article/19/1/productivity-tool-org-mode
[8]: https://opensource.com/article/19/1/productivity-tool-freeplane
[9]: https://imagexmedia.com/managing-productivity
[10]: https://events.drupal.org/seattle2019/sessions/mindless-multitasking-dummy%E2%80%99s-guide-productivity
[11]: https://events.drupal.org/seattle2019

View File

@ -0,0 +1,188 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?)
[#]: via: (https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?
======
We recently wrote an article about checking whether a port is open on a remote Linux server. It helps you check a single server.
If you want to check five servers, that's no problem: you can use any one of the following commands, such as nc (netcat), nmap, or telnet.
But what is the solution if you need to check 50+ servers?
It's not easy to check every server by hand, and if you try, you will waste a lot of time unnecessarily.
To overcome this situation, I wrote a small shell script using the nc command that lets us scan any number of servers for a given port.
If you are looking for a single-server scan, you have multiple options. To learn more, simply navigate to the following URL: **[Check Whether A Port Is Open On The Remote Linux System?][1]**
There are two scripts in this tutorial, and both are useful.
The two scripts serve different purposes, which you can easily tell from their headings.
Before you read this article, ask yourself a few questions; if you don't know the answers, you will find them below:
How do you check whether a port is open on a remote Linux server?
How do you check whether a port is open on multiple remote Linux servers?
How do you check whether multiple ports are open on multiple remote Linux servers?
### What Is nc (netcat) Command?
nc stands for netcat. Netcat is a simple Unix utility that reads and writes data across network connections, using the TCP or UDP protocol.
It is designed to be a reliable “back-end” tool that can be used directly or easily driven by other programs and scripts.
At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.
Netcat has three main modes of functionality. These are the connect mode, the listen mode, and the tunnel mode.
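As a quick illustration of the connect and listen modes, consider the following sketch (the host and port are arbitrary, and some netcat variants expect `-l -p 1234` instead of `-l 1234`):
```
# On machine A: listen on TCP port 1234
$ nc -l 1234

# On machine B: connect to machine A; lines typed on either end are relayed to the other
$ nc 192.168.1.2 1234
```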
**Common Syntax for nc (netcat):**
```
$ nc [-options] [HostName or IP] [PortNumber]
```
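For example, a one-off check of SSH on a single host looks like this, where `-z` scans without sending data, `-v` enables verbose output, and `-w3` sets a three-second timeout:
```
$ nc -zvw3 192.168.1.2 22
Connection to 192.168.1.2 22 port [tcp/ssh] succeeded!
```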
### How To Check If A Port Is Open On Multiple Remote Linux Servers?
Use the following shell script if you would like to check whether a given port is open on multiple remote Linux servers.
In my case, we are going to check whether port 22 is open on the following remote servers. Make sure you replace the server list with your own.
Update the server list in the `server-list.txt` file; each server should be on a separate line.
```
# cat server-list.txt
192.168.1.2
192.168.1.3
192.168.1.4
192.168.1.5
192.168.1.6
192.168.1.7
```
Use the following script to achieve this.
```
# vi port_scan.sh

#!/bin/sh
# Check whether port 22 is open on each server listed in server-list.txt
for server in $(cat server-list.txt)
do
  nc -zvw3 "$server" 22
done
```
Set executable permission on the `port_scan.sh` file.
```
$ chmod +x port_scan.sh
```
Finally, run the script:
```
# sh port_scan.sh
Connection to 192.168.1.2 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.3 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.4 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.5 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.6 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
```
### How To Check If Multiple Ports Are Open On Multiple Remote Linux Servers?
Use the following script if you want to check multiple ports on multiple servers.
In my case, we are going to check whether ports 22 and 80 are open on the given servers. Make sure you replace the ports and server names with your own.
Update the port list in the `port-list.txt` file; each port should be on a separate line.
```
# cat port-list.txt
22
80
```
Update the server list in the `server-list.txt` file; each server should be on a separate line.
```
# cat server-list.txt
192.168.1.2
192.168.1.3
192.168.1.4
192.168.1.5
192.168.1.6
192.168.1.7
```
Use the following script to achieve this.
```
# vi multiple_port_scan.sh

#!/bin/sh
# Check every port in port-list.txt on every server in server-list.txt
for server in $(cat server-list.txt)
do
  for port in $(cat port-list.txt)
  do
    nc -zvw3 "$server" "$port"
    echo ""
  done
done
```
Set executable permission on the `multiple_port_scan.sh` file.
```
$ chmod +x multiple_port_scan.sh
```
Finally, run the script:
```
# sh multiple_port_scan.sh
Connection to 192.168.1.2 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.2 80 port [tcp/http] succeeded!
Connection to 192.168.1.3 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.3 80 port [tcp/http] succeeded!
Connection to 192.168.1.4 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.4 80 port [tcp/http] succeeded!
Connection to 192.168.1.5 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.5 80 port [tcp/http] succeeded!
Connection to 192.168.1.6 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.6 80 port [tcp/http] succeeded!
Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
Connection to 192.168.1.7 80 port [tcp/http] succeeded!
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/how-to-check-whether-a-port-is-open-on-the-remote-linux-system-server/

View File

@ -0,0 +1,540 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use Spark SQL: A hands-on tutorial)
[#]: via: (https://opensource.com/article/19/3/apache-spark-and-dataframes-tutorial)
[#]: author: (Dipanjan (DJ) Sarkar (Red Hat) https://opensource.com/users/djsarkar)
How to use Spark SQL: A hands-on tutorial
======
This tutorial explains how to leverage relational databases at scale using Spark SQL and DataFrames.
![Team checklist and to dos][1]
In the [first part][2] of this series, we looked at advances in leveraging the power of relational databases "at scale" using [Apache Spark SQL and DataFrames][3]. We will now do a simple tutorial based on a real-world dataset to look at how to use Spark SQL. We will be using Spark DataFrames, but the focus will be more on using SQL. In a separate article, I will cover a detailed discussion around Spark DataFrames and common operations.
I love using cloud services for my machine learning, deep learning, and even big data analytics needs, instead of painfully setting up my own Spark cluster. I will be using the Databricks Platform for my Spark needs. Databricks is a company founded by the creators of Apache Spark that aims to help clients with cloud-based big data processing using Spark.
![Apache Spark and Databricks][4]
The simplest (and free of charge) way is to go to the [Try Databricks page][5] and [sign up for a community edition][6] account. You get a cloud-based cluster, which is a single-node cluster with 6GB and unlimited notebooks—not bad for a free version! I recommend using the Databricks Platform if you have serious needs for analyzing big data.
Let's get started with our case study now. Feel free to create a new notebook from your home screen in Databricks or your own Spark cluster.
![Create a notebook][7]
You can also import my notebook containing the entire tutorial, but please make sure to run every cell and play around and explore it, instead of just reading through it. Unsure of how to use Spark on Databricks? Follow [this short but useful tutorial][8].
This tutorial will familiarize you with essential Spark capabilities to deal with structured data often obtained from databases or flat files. We will explore typical ways of querying and aggregating relational data by leveraging concepts of DataFrames and SQL using Spark. We will work on an interesting dataset from the [KDD Cup 1999][9] and try to query the data using high-level abstractions like the dataframe that has already been a hit in popular data analysis tools like R and Python. We will also look at how easy it is to build data queries using the SQL language and retrieve insightful information from our data. This also happens at scale without us having to do a lot more since Spark distributes these data structures efficiently in the backend, which makes our queries scalable and as efficient as possible. We'll start by loading some basic dependencies.
```
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
```
#### Data retrieval
The [KDD Cup 1999][9] dataset was used for the Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, the Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network-intrusion detector, a predictive model capable of distinguishing between _bad connections_ , called intrusions or attacks, and _good, normal connections_. This database contains a standard set of data to be audited, which includes a wide variety of intrusions simulated in a military network environment.
We will be using the reduced dataset **kddcup.data_10_percent.gz** that contains nearly a half-million network interactions. We will download this Gzip file from the web locally and then work on it. If you have a good, stable internet connection, feel free to download and work with the full dataset, **kddcup.data.gz**.
#### Working with data from the web
Dealing with datasets retrieved from the web can be a bit tricky in Databricks. Fortunately, we have some excellent utility packages like **dbutils** that help make our job easier. Let's take a quick look at some essential functions for this module.
```
dbutils.help()
```
```
This module provides various utilities for users to interact with the rest of Databricks.
fs: DbfsUtils -> Manipulates the Databricks filesystem (DBFS) from the console
meta: MetaUtils -> Methods to hook into the compiler (EXPERIMENTAL)
notebook: NotebookUtils -> Utilities for the control flow of a notebook (EXPERIMENTAL)
preview: Preview -> Utilities under preview category
secrets: SecretUtils -> Provides utilities for leveraging secrets within notebooks
widgets: WidgetsUtils -> Methods to create and get bound value of input widgets inside notebooks
```
#### Retrieve and store data in Databricks
We will now leverage the Python **urllib** library to extract the KDD Cup 99 data from its web repository, store it in a temporary location, and then move it to the Databricks filesystem, which enables easy access to this data for analysis.
> **Note:** If you skip this step and download the data directly, you may end up getting an **InvalidInputException: Input path does not exist** error.
```
import urllib
urllib.urlretrieve("http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz", "/tmp/kddcup_data.gz")
dbutils.fs.mv("file:/tmp/kddcup_data.gz", "dbfs:/kdd/kddcup_data.gz")
display(dbutils.fs.ls("dbfs:/kdd"))
```
![Spark Job kddcup_data.gz][10]
#### Build the KDD dataset
Now that we have our data stored in the Databricks filesystem, let's load up our data from the disk into Spark's traditional abstracted data structure, the [Resilient Distributed Dataset][11] (RDD).
```
data_file = "dbfs:/kdd/kddcup_data.gz"
raw_rdd = sc.textFile(data_file).cache()
raw_rdd.take(5)
```
![Data in Resilient Distributed Dataset \(RDD\)][12]
You can also verify the type of data structure of our data (RDD) using the following code.
```
type(raw_rdd)
```
![output][13]
#### Build a Spark DataFrame on our data
A Spark DataFrame is an interesting data structure representing a distributed collection of data. Typically, the entry point into all SQL functionality in Spark is the **SQLContext** class. To create a basic instance of this class, all we need is a **SparkContext** reference. In Databricks, this global context object is available as **sc** for this purpose.
```
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
sqlContext
```
![output][14]
#### Split the CSV data
Each entry in our RDD is a comma-separated line of data, which we first need to split before we can parse and build our dataframe.
```
csv_rdd = raw_rdd.map(lambda row: row.split(","))
print(csv_rdd.take(2))
print(type(csv_rdd))
```
![Splitting RDD entries][15]
#### Check the total number of features (columns)
We can use the following code to check the total number of potential columns in our dataset.
```
len(csv_rdd.take(1)[0])
Out[57]: 42
```
#### Understand and parse data
The KDD 99 Cup data consists of different attributes captured from connection data. You can obtain the [full list of attributes in the data][16] and further details pertaining to the [description for each attribute/column][17]. We will just be using some specific columns from the dataset, the details of which are specified as follows.
feature num | feature name | description | type
---|---|---|---
1 | duration | length (number of seconds) of the connection | continuous
2 | protocol_type | type of the protocol, e.g., tcp, udp, etc. | discrete
3 | service | network service on the destination, e.g., http, telnet, etc. | discrete
4 | src_bytes | number of data bytes from source to destination | continuous
5 | dst_bytes | number of data bytes from destination to source | continuous
6 | flag | normal or error status of the connection | discrete
7 | wrong_fragment | number of "wrong" fragments | continuous
8 | urgent | number of urgent packets | continuous
9 | hot | number of "hot" indicators | continuous
10 | num_failed_logins | number of failed login attempts | continuous
11 | num_compromised | number of "compromised" conditions | continuous
12 | su_attempted | 1 if "su root" command attempted; 0 otherwise | discrete
13 | num_root | number of "root" accesses | continuous
14 | num_file_creations | number of file creation operations | continuous
We will extract the following columns based on their positions in each data point (row) and build a new RDD as follows.
```
from pyspark.sql import Row
parsed_rdd = csv_rdd.map(lambda r: Row(
duration=int(r[0]),
protocol_type=r[1],
service=r[2],
flag=r[3],
src_bytes=int(r[4]),
dst_bytes=int(r[5]),
wrong_fragment=int(r[7]),
urgent=int(r[8]),
hot=int(r[9]),
num_failed_logins=int(r[10]),
num_compromised=int(r[12]),
su_attempted=r[14],
num_root=int(r[15]),
num_file_creations=int(r[16]),
label=r[-1]
)
)
parsed_rdd.take(5)
```
![Extracting columns][18]
#### Construct the DataFrame
Now that our data is neatly parsed and formatted, let's build our DataFrame!
```
df = sqlContext.createDataFrame(parsed_rdd)
display(df.head(10))
```
![DataFrame][19]
You can also now check out the schema of our DataFrame using the following code.
```
df.printSchema()
```
![Dataframe schema][20]
#### Build a temporary table
We can leverage the **registerTempTable()** function to build a temporary table to run SQL commands on our DataFrame at scale! A point to remember is that the lifetime of this temp table is tied to the session. It creates an in-memory table that is scoped to the cluster in which it was created. The data is stored using Hive's highly optimized, in-memory columnar format.
You can also check out **saveAsTable()** , which creates a permanent, physical table stored in S3 using the Parquet format. This table is accessible to all clusters. The table metadata, including the location of the file(s), is stored within the Hive metastore.
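For contrast, a minimal sketch of persisting the same DataFrame as a permanent table might look like this (the table name is illustrative):
```
# Unlike a temp table, this writes out a physical table that
# outlives the current session and is visible to other clusters.
df.write.saveAsTable("connections_archive")
```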
```
help(df.registerTempTable)
```
![help\(df.registerTempTable\)][21]
```
df.registerTempTable("connections")
```
### Execute SQL at Scale
Let's look at a few examples of how we can run SQL queries on our table based on our DataFrame. We will start with some simple queries and then look at aggregations, filters, sorting, subqueries, and pivots in this tutorial.
#### Connections based on the protocol type
Let's look at how we can get the total number of connections based on the type of connectivity protocol. First, we will get this information using normal DataFrame DSL syntax to perform aggregations.
```
display(df.groupBy('protocol_type')
.count()
.orderBy('count', ascending=False))
```
![Total number of connections][22]
Can we also use SQL to perform the same aggregation? Yes, we can leverage the table we built earlier for this!
```
protocols = sqlContext.sql("""
SELECT protocol_type, count(*) as freq
FROM connections
GROUP BY protocol_type
ORDER BY 2 DESC
""")
display(protocols)
```
![protocol type and frequency][23]
You can clearly see that you get the same results and don't need to worry about your background infrastructure or how the code is executed. Just write simple SQL!
#### Connections based on good or bad (attack types) signatures
We will now run a simple aggregation to check the total number of connections based on good (normal) or bad (intrusion attacks) types.
```
labels = sqlContext.sql("""
SELECT label, count(*) as freq
FROM connections
GROUP BY label
ORDER BY 2 DESC
""")
display(labels)
```
![Connection by type][24]
We have a lot of different attack types. We can visualize this in the form of a bar chart. The simplest way is to use the excellent interface options in the Databricks notebook.
![Databricks chart types][25]
This gives us a nice-looking bar chart, which you can customize further by clicking on **Plot Options**.
![Bar chart][26]
Another way is to write the code to do it. You can extract the aggregated data as a Pandas DataFrame and plot it as a regular bar chart.
```
labels_df = pd.DataFrame(labels.toPandas())
labels_df.set_index("label", drop=True,inplace=True)
labels_fig = labels_df.plot(kind='barh')
plt.rcParams["figure.figsize"] = (7, 5)
plt.rcParams.update({'font.size': 10})
plt.tight_layout()
display(labels_fig.figure)
```
![Bar chart][27]
### Connections based on protocols and attacks
Let's look at which protocols are most vulnerable to attacks by using the following SQL query.
```
attack_protocol = sqlContext.sql("""
SELECT
protocol_type,
CASE label
WHEN 'normal.' THEN 'no attack'
ELSE 'attack'
END AS state,
COUNT(*) as freq
FROM connections
GROUP BY protocol_type, state
ORDER BY 3 DESC
""")
display(attack_protocol)
```
![Protocols most vulnerable to attacks][28]
Well, it looks like ICMP connections, followed by TCP connections, have had the most attacks.
#### Connection stats based on protocols and attacks
Let's take a look at some statistical measures pertaining to these protocols and attacks for our connection requests.
```
attack_stats = sqlContext.sql("""
SELECT
protocol_type,
CASE label
WHEN 'normal.' THEN 'no attack'
ELSE 'attack'
END AS state,
COUNT(*) as total_freq,
ROUND(AVG(src_bytes), 2) as mean_src_bytes,
ROUND(AVG(dst_bytes), 2) as mean_dst_bytes,
ROUND(AVG(duration), 2) as mean_duration,
SUM(num_failed_logins) as total_failed_logins,
SUM(num_compromised) as total_compromised,
SUM(num_file_creations) as total_file_creations,
SUM(su_attempted) as total_root_attempts,
SUM(num_root) as total_root_acceses
FROM connections
GROUP BY protocol_type, state
ORDER BY 3 DESC
""")
display(attack_stats)
```
![Statistics pertaining to protocols and attacks][29]
Looks like the average amount of data being transmitted in TCP requests is much higher, which is not surprising. Interestingly, attacks have a much higher average payload of data being transmitted from the source to the destination.
#### Filtering connection stats based on the TCP protocol by service and attack type
Let's take a closer look at TCP attacks, given that we have more relevant data and statistics for the same. We will now aggregate different types of TCP attacks based on service and attack type and observe different metrics.
```
tcp_attack_stats = sqlContext.sql("""
SELECT
service,
label as attack_type,
COUNT(*) as total_freq,
ROUND(AVG(duration), 2) as mean_duration,
SUM(num_failed_logins) as total_failed_logins,
SUM(num_file_creations) as total_file_creations,
SUM(su_attempted) as total_root_attempts,
SUM(num_root) as total_root_acceses
FROM connections
WHERE protocol_type = 'tcp'
AND label != 'normal.'
GROUP BY service, attack_type
ORDER BY total_freq DESC
""")
display(tcp_attack_stats)
```
![TCP attack data][30]
There are a lot of attack types, and the preceding output shows a specific section of them.
#### Filtering connection stats based on the TCP protocol by service and attack type
We will now filter some of these attack types by imposing some constraints in our query based on duration, file creations, and root accesses.
```
tcp_attack_stats = sqlContext.sql("""
SELECT
service,
label as attack_type,
COUNT(*) as total_freq,
ROUND(AVG(duration), 2) as mean_duration,
SUM(num_failed_logins) as total_failed_logins,
SUM(num_file_creations) as total_file_creations,
SUM(su_attempted) as total_root_attempts,
SUM(num_root) as total_root_acceses
FROM connections
WHERE (protocol_type = 'tcp'
AND label != 'normal.')
GROUP BY service, attack_type
HAVING (mean_duration >= 50
AND total_file_creations >= 5
AND total_root_acceses >= 1)
ORDER BY total_freq DESC
""")
display(tcp_attack_stats)
```
![Filtered by attack type][31]
It's interesting to see that [multi-hop attacks][32] can get root accesses to the destination hosts!
#### Subqueries to filter TCP attack types based on service
Let's try to get all the TCP attacks based on service and attack type such that the overall mean duration of these attacks is greater than zero ( **> 0** ). For this, we can do an inner query with all aggregation statistics and extract the relevant queries and apply a mean duration filter in the outer query, as shown below.
```
tcp_attack_stats = sqlContext.sql("""
SELECT
t.service,
t.attack_type,
t.total_freq
FROM
(SELECT
service,
label as attack_type,
COUNT(*) as total_freq,
ROUND(AVG(duration), 2) as mean_duration,
SUM(num_failed_logins) as total_failed_logins,
SUM(num_file_creations) as total_file_creations,
SUM(su_attempted) as total_root_attempts,
SUM(num_root) as total_root_acceses
FROM connections
WHERE protocol_type = 'tcp'
AND label != 'normal.'
GROUP BY service, attack_type
ORDER BY total_freq DESC) as t
WHERE t.mean_duration > 0
""")
display(tcp_attack_stats)
```
![TCP attacks based on service and attack type][33]
This is nice! Now another interesting way to view this data is to use a pivot table, where one attribute represents rows and another one represents columns. Let's see if we can leverage Spark DataFrames to do this!
#### Build a pivot table from aggregated data
We will build upon the previous DataFrame object where we aggregated attacks based on type and service. For this, we can leverage the power of Spark DataFrames and the DataFrame DSL.
```
display((tcp_attack_stats.groupby('service')
.pivot('attack_type')
.agg({'total_freq':'max'})
.na.fill(0))
)
```
![Pivot table][34]
We get a nice, neat pivot table showing all the occurrences based on service and attack type!
### Next steps
I would encourage you to go out and play with Spark SQL and DataFrames. You can even [import my notebook][35] and play with it in your own account.
Feel free to refer to [my GitHub repository][36] also for all the code and notebooks used in this article. It covers things we didn't cover here, including:
* Joins
* Window functions
* Detailed operations and transformations of Spark DataFrames
You can also access my tutorial as a [Jupyter Notebook][37], in case you want to use it offline.
There are plenty of articles and tutorials available online, so I recommend you check them out. One useful resource is Databricks' complete [guide to Spark SQL][38].
Thinking of working with JSON data but unsure of using Spark SQL? Databricks supports it! Check out this excellent guide to [JSON support in Spark SQL][39].
Interested in advanced concepts like window functions and ranks in SQL? Take a look at "[Introducing Window Functions in Spark SQL][40]."
I will write another article covering some of these concepts in an intuitive way, which should be easy for you to understand. Stay tuned!
In case you have any feedback or queries, you can reach out to me on [LinkedIn][41].
* * *
_This article originally appeared on Medium's [Towards Data Science][42] channel and is republished with permission._
* * *
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/apache-spark-and-dataframes-tutorial
作者:[Dipanjan (DJ) Sarkar (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/djsarkar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
[2]: https://opensource.com/article/19/3/sql-scale-apache-spark-sql-and-dataframes
[3]: https://spark.apache.org/sql/
[4]: https://opensource.com/sites/default/files/uploads/13_spark-databricks.png (Apache Spark and Databricks)
[5]: https://databricks.com/try-databricks
[6]: https://databricks.com/signup#signup/community
[7]: https://opensource.com/sites/default/files/uploads/14_create-notebook.png (Create a notebook)
[8]: https://databricks.com/spark/getting-started-with-apache-spark
[9]: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
[10]: https://opensource.com/sites/default/files/uploads/15_dbfs-kdd-kddcup_data-gz.png (Spark Job kddcup_data.gz)
[11]: https://spark.apache.org/docs/latest/rdd-programming-guide.html#resilient-distributed-datasets-rdds
[12]: https://opensource.com/sites/default/files/uploads/16_rdd-data.png (Data in Resilient Distributed Dataset (RDD))
[13]: https://opensource.com/sites/default/files/uploads/16a_output.png (output)
[14]: https://opensource.com/sites/default/files/uploads/16b_output.png (output)
[15]: https://opensource.com/sites/default/files/uploads/17_split-csv.png (Splitting RDD entries)
[16]: http://kdd.ics.uci.edu/databases/kddcup99/kddcup.names
[17]: http://kdd.ics.uci.edu/databases/kddcup99/task.html
[18]: https://opensource.com/sites/default/files/uploads/18_extract-columns.png (Extracting columns)
[19]: https://opensource.com/sites/default/files/uploads/19_build-dataframe.png (DataFrame)
[20]: https://opensource.com/sites/default/files/uploads/20_dataframe-schema.png (Dataframe schema)
[21]: https://opensource.com/sites/default/files/uploads/21_registertemptable.png (help(df.registerTempTable))
[22]: https://opensource.com/sites/default/files/uploads/22_number-of-connections.png (Total number of connections)
[23]: https://opensource.com/sites/default/files/uploads/23_sql.png (protocol type and frequency)
[24]: https://opensource.com/sites/default/files/uploads/24_intrusion-type.png (Connection by type)
[25]: https://opensource.com/sites/default/files/uploads/25_chart-interface.png (Databricks chart types)
[26]: https://opensource.com/sites/default/files/uploads/26_plot-options-chart.png (Bar chart)
[27]: https://opensource.com/sites/default/files/uploads/27_pandas-barchart.png (Bar chart)
[28]: https://opensource.com/sites/default/files/uploads/28_most-attacked.png (Protocols most vulnerable to attacks)
[29]: https://opensource.com/sites/default/files/uploads/29_data-transmissions.png (Statistics pertaining to protocols and attacks)
[30]: https://opensource.com/sites/default/files/uploads/30_tcp-attack-metrics.png (TCP attack data)
[31]: https://opensource.com/sites/default/files/uploads/31_attack-type.png (Filtered by attack type)
[32]: https://attack.mitre.org/techniques/T1188/
[33]: https://opensource.com/sites/default/files/uploads/32_tcp-attack-types.png (TCP attacks based on service and attack type)
[34]: https://opensource.com/sites/default/files/uploads/33_pivot-table.png (Pivot table)
[35]: https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/3137082781873852/3704545280501166/1264763342038607/latest.html
[36]: https://github.com/dipanjanS/data_science_for_all/tree/master/tds_spark_sql_intro
[37]: http://nbviewer.jupyter.org/github/dipanjanS/data_science_for_all/blob/master/tds_spark_sql_intro/Working%20with%20SQL%20at%20Scale%20-%20Spark%20SQL%20Tutorial.ipynb
[38]: https://docs.databricks.com/spark/latest/spark-sql/index.html
[39]: https://databricks.com/blog/2015/02/02/an-introduction-to-json-support-in-spark-sql.html
[40]: https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html
[41]: https://www.linkedin.com/in/dipanzan/
[42]: https://towardsdatascience.com/sql-at-scale-with-apache-spark-sql-and-dataframes-concepts-architecture-and-examples-c567853a702f

View File

@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development)
[#]: via: (https://itsfoss.com/nvidia-jetson-nano/)
[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development
======
At the [GPU Technology Conference][1] NVIDIA announced the [Jetson Nano Module][2] and the [Jetson Nano Developer Kit][3]. Compared to other Jetson boards, which cost between $299 and $1099, the Jetson Nano carries a low price of $99. This puts it within the reach of many developers, educators, and researchers who could not spend hundreds of dollars on such a product.
![The Jetson Nano Development Kit \(left\) and the Jetson Nano Module \(right\)][4]
### Bringing back AI development from cloud
In the last few years, we have seen a lot of [advances in AI research][5]. Traditionally AI computing was always done in the cloud, where there was plenty of processing power available.
Recently, there's been a trend toward shifting this computation away from the cloud and doing it locally. This is called [Edge Computing][6]. At the embedded level, products that could do the complex calculations required for AI and machine learning used to be sparse, but we're now seeing a great explosion in this product segment.
Products like the [SparkFun Edge][7] and [OpenMV Board][8] are good examples. The Jetson Nano is NVIDIA's latest offering in this market. When connected to your system, it supplies the processing power needed for machine learning and AI tasks without having to rely on the cloud.
This is great for privacy as well as for saving internet bandwidth. It is also more secure, since your data always stays on the device itself.
### Jetson Nano focuses on smaller AI projects
![Jetson Nano powered JetBot][9]
While previously released Jetson boards like the [TX2][10] and [AGX Xavier][11] were used in products like drones and cars, the Jetson Nano targets smaller projects: projects where you need processing power that boards like the [Raspberry Pi][12] cannot provide.
Did you know?
NVIDIA's JetPack SDK provides a complete desktop Linux environment based on Ubuntu 18.04 LTS. In other words, the Jetson Nano is powered by Ubuntu Linux.
### NVIDIA Jetson Nano Specifications
For $99, you get 472 GFLOPS of processing power from 128 NVIDIA Maxwell architecture CUDA cores, a quad-core ARM A57 processor, 4GB of LPDDR4 RAM, 16GB of on-board storage, and 4K video encode/decode capability. The port selection is also pretty decent: the Nano has Gigabit Ethernet, a MIPI camera connector, display outputs, and several USB ports (1×3.0, 3×2.0). The full range of specifications can be found [here][13].
CPU | Quad-core ARM® Cortex®-A57 MPCore processor
---|---
GPU | NVIDIA Maxwell™ architecture with 128 NVIDIA CUDA® cores
RAM | 4 GB 64-bit LPDDR4
Storage | 16 GB eMMC 5.1 Flash
Camera | 12 lanes (3×4 or 4×2) MIPI CSI-2 DPHY 1.1 (1.5 Gbps)
Connectivity | Gigabit Ethernet
Display Ports | HDMI 2.0 and DP 1.2
USB Ports | 1 USB 3.0 and 3 USB 2.0
Other | 1 x1/2/4 PCIE, 1x SDIO / 2x SPI / 6x I2C / 2x I2S / GPIOs
Size | 69.6 mm x 45 mm
Along with good hardware, you get support for the majority of popular AI frameworks like TensorFlow, PyTorch, and Keras. It also has support for NVIDIA's [JetPack][14] and [DeepStream][15] SDKs, same as the more expensive TX2 and AGX boards.
“Jetson Nano makes AI more accessible to everyone — and is supported by the same underlying architecture and software that powers our nation's supercomputer. Bringing AI to the maker movement opens up a whole new world of innovation, inspiring people to create the next big thing,” said Deepu Talla, VP and GM of Autonomous Machines at NVIDIA.
[Subscribe to It's FOSS YouTube Channel][16]
**What do you think of Jetson Nano?**
The availability of the Jetson Nano differs from country to country.
The [Intel Neural Stick][17] is another such accelerator, competitively priced at $79. It's good to see competition stirring up at these lower price points from the big manufacturers.
I'm looking forward to getting my hands on the product if possible.
What do you guys think about a product like this? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/nvidia-jetson-nano/
作者:[Atharva Lele][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/atharva/
[b]: https://github.com/lujun9972
[1]: https://www.nvidia.com/en-us/gtc/
[2]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/
[3]: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/jetson-nano-family-press-image-hd.jpg?ssl=1
[5]: https://itsfoss.com/nanotechnology-open-science-ai/
[6]: https://en.wikipedia.org/wiki/Edge_computing
[7]: https://www.sparkfun.com/news/2886
[8]: https://openmv.io/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nvidia_jetson_bot.jpg?ssl=1
[10]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2/
[11]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/
[12]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
[13]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/#specifications
[14]: https://developer.nvidia.com/embedded/jetpack
[15]: https://developer.nvidia.com/deepstream-sdk
[16]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[17]: https://software.intel.com/en-us/movidius-ncs-get-started

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top 10 New Linux SBCs to Watch in 2019)
[#]: via: (https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019)
[#]: author: (Eric Brown https://www.linux.com/users/ericstephenbrown)
Top 10 New Linux SBCs to Watch in 2019
======
![UP Xtreme][1]
Aaeon's Linux-ready UP Xtreme SBC.
[Used with permission][2]
A recent [Global Market Insights report][3] projects the single board computer market will grow from $600 million in 2018 to $1 billion by 2025. Yet, you don't need to read a market research report to realize the SBC market is booming. Driven by the trends toward IoT and AI-enabled edge computing, new boards keep rolling off the assembly lines, many of them [tailored for highly specific applications][4].
Much of the action has been in Linux-compatible boards, including the insanely popular Raspberry Pi. The number of different vendors and models has exploded thanks in part to the rise of [community-backed, open-spec SBCs][5].
Here we examine 10 of the most intriguing, Linux-driven SBCs among the many products announced in the last four weeks that bookended the recent [Embedded World show][6] in Nuremberg. (There was also some [interesting Linux software news][7] at the show.) Two of the SBCs—the Intel Whiskey Lake based UP Xtreme and Nvidia Jetson Nano driven Jetson Nano Dev Kit—were announced only this week.
Our mostly open source list also includes a few commercial boards. Processors range from the modest, Cortex-A7 driven STM32MP1 to the high-powered Whiskey Lake and Snapdragon 845. Mid-range models include Google's i.MX8M powered Coral Dev Board and a similarly AI-enhanced, TI AM5729 based BeagleBone AI. Deep learning acceleration chips—and standard RPi 40-pin or 96Boards expansion connectors—are common themes among most of these boards.
The SBCs are listed in reverse chronological order according to their announcement dates. The links in the product names go to recent LinuxGizmos reports, which link to vendor product pages.
**[UP Xtreme][8]** —The latest in Aaeons line of community-backed SBCs taps Intels 8th Gen Whiskey Lake-U CPUs, which maintain a modest 15W TDP while boosting performance with up to quad-core, dual threaded configurations. Depending on when it ships, this Linux-ready model will likely be the most powerful community-backed SBC around -- and possibly the most expensive.
The SBC supports up to 16GB DDR4 and 128GB eMMC and offers 4K displays via HDMI, DisplayPort, and eDP. Other features include SATA, 2x GbE, 4x USB 3.0, and 40-pin “HAT” and 100-pin GPIO add-on board connectors. You also get mini-PCIe and dual M.2 slots that support wireless modems and more SATA options. The slots also support Aaeons new AI Core X modules, which offer Intels latest Movidius Myriad X VPUs for 1TOPS neural processing acceleration.
**[Jetson Nano Dev Kit][9]** —Nvidia just announced a low-end Jetson Nano compute module thats sort of like a smaller (70 x 45mm) version of the old Jetson TX1. It offers the same 4x Cortex-A57 cores but has an even lower-end 128-core Maxwell GPU. The module has half the RAM and flash (4GB/16GB) of the TX1 and TX2, and no WiFi/Bluetooth radios. Like the hexa-core Jetson TX2, however, it supports 4K video and the GPU offers similar CUDA-X deep learning libraries.
Although Nvidia has backed all its Linux-driven Jetson modules with development kits, the Jetson Nano Dev Kit is its first community-backed, maker-oriented kit. It does not appear to offer open specifications, but it costs only $99 and theres a forum and other community resources. Many of the specs match or surpass the Raspberry Pi 3B+, including the addition of a 40-pin GPIO. Highlights include an M.2 slot, GbE with Power-over-Ethernet, HDMI 2.0 and eDP links, and 4x USB 3.0 ports.
**[Coral Dev Board][10]** —Googles very first Linux maker board arrived earlier this month featuring an NXP i.MX8M and Googles Edge TPU AI chip—a stripped-down version of Googles TPU that is designed to run TensorFlow Lite ML models. The $150, Raspberry Pi-like Coral Dev Board was joined by a similarly Edge TPU-enabled Coral USB Accelerator USB stick. These will be followed by an Edge TPU based Coral PCIe Accelerator and a Coral SOM compute module. All these devices are backed with schematics, community resources, and other open-spec resources.
The Coral Dev Board combines the Edge TPU chip with NXPs quad-core, 1.5GHz Cortex-A53 i.MX8M with a 3D Vivante GPU/VPU and a Cortex-M4 MCU. The SBC is even more like the Raspberry Pi 3B+ than Nvidias Dev Kit, mimicking the size and much of the layout and I/O, including the 40-pin GPIO connector. Highlights include 4K-ready GbE, HDMI 2.0a, 4-lane MIPI-DSI and CSI, and USB 3.0 host and Type-C ports.
**[SBC-C43][11]** —Secos commercial, industrial temperature SBC-C43 board is the first SBC based on NXPs high-end, up to hexa-core i.MX8. The 3.5-inch SBC supports the i.MX8 QuadMax with 2x Cortex-A72 cores and 4x Cortex-A53 cores, the QuadPlus with a single Cortex-A72 and 4x -A53, and the Quad with no -A72 cores and 4x -A53. There are also 2x Cortex-M4F real-time cores and 2x Vivante GPU/VPU cores. Yocto Project, Wind River Linux, and Android are available.
The feature-rich SBC-C43 supports up to 8GB DDR4 and 32GB eMMC, both soldered for greater reliability. Highlights include dual GbE, HDMI 2.0a in and out ports, WiFi/Bluetooth, and a variety of industrial interfaces. Dual M.2 slots support SATA, wireless, and more.
**[Nitrogen8M_Mini][12]** —This Boundary Devices cousin to the earlier, i.MX8M based Nitrogen8M is available for $135, with shipments due this Spring. The open-spec Nitrogen8M_Mini is the first SBC to feature NXPs new i.MX8M Mini SoC. The Mini uses a more advanced 14LPC FinFET process than the i.MX8M, resulting in lower power consumption and higher clock rates for both the 4x Cortex-A53 (1.5GHz to 2GHz) and Cortex-M4 (400MHz) cores. The drawback is that youre limited to HD video resolution.
Supported with Linux and Android, the Nitrogen8M_Mini ships with 2GB to 4GB LPDDR4 RAM and 8GB to 128GB eMMC. MIPI-DSI and -CSI interfaces support optional touchscreens and cameras, respectively. A GbE port is standard and PoE and WiFi/BT are optional. Other features include 3x USB ports, one or two PCIe slots, and optional -40 to 85°C support. A Nitrogen8M_Mini SOM module with similar specs is also in the works.
**[Pine H64 Model B][13]** —Pine64s latest hacker board was teased in late January as part of an [ambitious roll-out][14] of open source products, including a laptop, tablet, and phone. The Raspberry Pi semi-clone, which recently went on sale for $39 (2GB) or $49 (3GB), showcases the high-end, but low-cost Allwinner H64. The quad -A53 SoC is notable for its 4K video with HDR support.
The Pine H64 Model B offers up to 128GB eMMC storage, WiFi/BT, and a GbE port. I/O includes 2x USB 2.0 and single USB 3.0 and HDMI 2.0a ports plus SPDIF audio and an RPi-like 40-pin connector. Images include Android 7.0 and an “in progress” Armbian Debian Stretch.
**[AI-ML Board][15]** —Arrow unveiled this i.MX8X based SBC early this month along with a similarly 96Boards CE Extended format, i.MX8M based Thor96 SBC. While there are plenty of i.MX8M boards these days, were more intrigued with the lowest-end i.MX8X member of the i.MX8 family. The AI-ML Board is the first SBC weve seen to feature the low-power i.MX8X, which offers up to 4x 64-bit, 1.2GHz Cortex-A35 cores, a 4-shader, 4K-ready Vivante GPU/VPU, a Cortex-M4F chip, and a Tensilica HiFi 4 DSP.
The open-spec, Yocto Linux driven AI-ML Board is targeted at low-power, camera-equipped applications such as drones. The board has 2GB LPDDR4, Ethernet, WiFi/BT, and a pair each of MIPI-DSI and USB 3.0 ports. Cameras are controlled via the 96Boards 60-pin, high-power GPIO connector, which is joined by the usual 40-pin low-power link. The launch is expected June 1.
**[BeagleBone AI][16]** —The long-awaited successor to the Cortex-A8 AM3358 based BeagleBone family of boards advances to TIs dual-core Cortex-A15 AM5729, with similar PowerVR GPU and MCU-like PRU cores. The real story, however, is the AI firepower enabled by the SoCs dual TI C66x DSPs and four embedded-vision-engine (EVE) neural processing cores. BeagleBoard.org claims that calculations for computer-vision models using EVE run at 8x the performance per watt compared to the similar, but EVE-less, AM5728. The EVE and DSP chips are supported through a TIDL machine learning OpenCL API and pre-installed tools.
Due to go on sale in April for about $100, the Linux-powered BeagleBone AI is based closely on the BeagleBone Black and offers backward header, mechanical, and software compatibility. It doubles the RAM to 1GB and quadruples the eMMC storage to 16GB. You now get GbE and high-speed WiFi, as well as a USB Type-C port.
**[Robotics RB3 Platform (DragonBoard 845c)][17]** —Qualcomm and Thundercomm are initially launching their 96Boards CE form factor, Snapdragon 845-based upgrade to the Snapdragon 820-based [DragonBoard 820c][18] SBC as part of a Qualcomm Robotics RB3 Platform. Yet, 96Boards.org has already posted a [DragonBoard 845c product page][17], and we imagine the board will be available in the coming months without all the robotics bells and whistles. A compute module version is also said to be in the works.
The 10nm, octa-core, “Kryo” based Snapdragon 845 is one of the most powerful Arm SoCs around. It features an advanced Adreno 630 GPU with “eXtended Reality” (XR) VR technology and a Hexagon 685 DSP with a third-gen Neural Processing Engine (NPE) for AI applications. On the RB3 kit, the boards expansion connectors are pre-stocked with Qualcomm cellular and robotics camera mezzanines. The $449 and up kit also includes standard 4K video and tracking cameras, and there are optional Time-of-Flight (ToF) and stereo SLM depth cameras. The SBC runs Linux with ROS (Robot Operating System).
**[Avenger96][19]** —Like Arrows AI-ML Board, the Avenger96 is a 96Boards CE Extended SBC aimed at low-power IoT applications. Yet, the SBC features an even more power-efficient (and slower) SoC: STs recently announced [STM32MP153][20]. The Avenger96 runs Linux on the high-end STM32MP157 model, which has dual, 650MHz Cortex-A7 cores, a Cortex-M4, and a Vivante 3D GPU.
This sandwich-style board features an Avenger96 module with the STM32MP157 SoC, 1GB of DDR3L, 2MB SPI flash, and a power management IC. Its unclear if the 8GB eMMC and WiFi-ac/Bluetooth 4.2 module are on the module or carrier board. The Avenger96 SBC is further equipped with GbE, HDMI, micro-USB OTG, and dual USB 2.0 host ports. Theres also a microSD slot and the usual 40- and 60-pin GPIO connectors. The board is expected to go on sale in April.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019
作者:[Eric Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/ericstephenbrown
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/aaeon_upxtreme.jpg?itok=QnwAt3mp (UP Xtreme)
[2]: /LICENSES/CATEGORY/USED-PERMISSION
[3]: https://www.globenewswire.com/news-release/2019/02/13/1724445/0/en/Single-Board-Computer-Market-to-surpass-1bn-by-2025-Global-Market-Insights-Inc.html
[4]: https://www.linux.com/blog/2019/1/linux-hacker-board-trends-2018-and-beyond
[5]: http://linuxgizmos.com/catalog-of-122-open-spec-linux-hacker-boards/
[6]: https://www.embedded-world.de/en
[7]: https://www.linux.com/news/2019/2/embedded-linux-software-highlights-embedded-world
[8]: http://linuxgizmos.com/latest-up-board-combines-whiskey-lake-with-ai-core-x-modules/
[9]: http://linuxgizmos.com/trimmed-down-jetson-nano-modules-ships-on-99-linux-dev-kit/
[10]: http://linuxgizmos.com/google-launches-i-mx8m-dev-board-with-edge-tpu-ai-chip/
[11]: http://linuxgizmos.com/first-i-mx8-quadmax-sbc-breaks-cover/
[12]: http://linuxgizmos.com/open-spec-nitrogen8m_mini-sbc-ships-along-with-new-mini-based-som/
[13]: http://linuxgizmos.com/revised-allwiner-h64-based-pine-h64-sbc-has-rpi-size-and-gpio/
[14]: https://www.linux.com/blog/2019/2/pine64-launch-open-source-phone-laptop-tablet-and-camera
[15]: http://linuxgizmos.com/arrows-latest-96boards-sbcs-tap-i-mx8x-and-i-mx8m/
[16]: http://linuxgizmos.com/beaglebone-ai-sbc-features-dual-a15-soc-with-eve-ai-cores/
[17]: http://linuxgizmos.com/robotics-kit-runs-linux-on-new-dragonboard-845c-96boards-sbc/
[18]: http://linuxgizmos.com/debian-driven-dragonboard-expands-to-96boards-extended-spec/
[19]: http://linuxgizmos.com/sandwich-style-96boards-sbc-runs-linux-on-sts-new-cortex-a7-m4-soc/
[20]: https://www.linux.com/news/2019/2/st-spins-its-first-linux-powered-cortex-soc

View File

@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to set up Fedora Silverblue as a gaming station)
[#]: via: (https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/)
[#]: author: (Michal Konečný https://fedoramagazine.org/author/zlopez/)
How to set up Fedora Silverblue as a gaming station
======
![][1]
This article gives you a step-by-step guide to turning your Fedora Silverblue installation into an awesome gaming station with the help of Flatpak and Steam.
Note: Do you need the NVIDIA proprietary driver on Fedora 29 Silverblue for a complete experience? Check out [this blog post][2] for pointers.
### Add the Flathub repository
This process starts with a clean Fedora 29 Silverblue installation with a user already created for you.
First, go to <https://flathub.org/home> and enable the Flathub repository on your system. To do this, click the _Quick setup_ button on the main page.
![Quick setup button on flathub.org/home][3]
This redirects you to <https://flatpak.org/setup/> where you should click on the Fedora icon.
![Fedora icon on flatpak.org/setup][4]
Now you just need to click on _Flathub repository file._ Open the downloaded file with the _Software Install_ application.
![Flathub repository file button on flatpak.org/setup/Fedora][5]
The GNOME Software application opens. Next, click on the _Install_ button. This action needs _sudo_ permissions, because it installs the Flathub repository for use by the whole system.
![Install button in GNOME Software][6]
### Install the Steam flatpak
You can now search for the _Steam_ flatpak in _GNOME Software_. If you cant find it, try rebooting — or logging out and back in — in case _GNOME Software_ didnt read the metadata. That happens automatically when you next log in.
![Searching for Steam][7]
Click on the _Steam_ row and the _Steam_ page opens in _GNOME Software._ Next, click on _Install_.
![Steam page in GNOME Software][8]
And now you have installed _Steam_ flatpak on your system.
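If you prefer the terminal over GNOME Software, both steps (adding the Flathub repository and installing Steam) can also be done with the flatpak command-line tool. A sketch of the equivalent commands, assuming a system-wide install; **com.valvesoftware.Steam** is the application ID Flathub publishes for Steam:
```
# Add the Flathub remote if it is not already configured
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install the Steam flatpak from Flathub
$ flatpak install flathub com.valvesoftware.Steam
```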
### Enable Steam Play in Steam
Now that you have _Steam_ installed, launch it and log in. To play Windows games too, you need to enable _Steam Play_ in _Steam._ To enable it, choose _Steam > Settings_ from the menu in the main window.
![Settings button in Steam][9]
Navigate to the _Steam Play_ section. You should see the option _Enable Steam Play for supported titles_ is already ticked, but its recommended you also tick the _Enable Steam Play_ option for all other titles. There are plenty of games that are actually playable, but not whitelisted yet on _Steam._ To see which games are playable, visit [ProtonDB][10] and search for your favorite game. Or just look for the games with the most platinum reports.
![Steam Play settings menu on Steam][11]
If you want to know more about Steam Play, you can read the [article][12] about it here on Fedora Magazine:
> [Play Windows games on Fedora with Steam Play and Proton][12]
### Appendix
Youre now ready to play plenty of games on Linux. Please remember to share your experience with others using the _Contribute_ button on [ProtonDB][10] and report bugs you find on [GitHub][13], because sharing is nice. 🙂
* * *
_Photo by _[ _Hardik Sharma_][14]_ on _[_Unsplash_][15]_._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/
作者:[Michal Konečný][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/zlopez/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/silverblue-gaming-816x345.jpg
[2]: https://blogs.gnome.org/alexl/2019/03/06/nvidia-drivers-in-fedora-silverblue/
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-29-00.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-36-35-1024x713.png
[5]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-45-12.png
[6]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-57-37.png
[7]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-08-21.png
[8]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-13-59-1024x769.png
[9]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-30-20.png
[10]: https://www.protondb.com/
[11]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-41-53.png
[12]: https://fedoramagazine.org/play-windows-games-steam-play-proton/
[13]: https://github.com/ValveSoftware/Proton
[14]: https://unsplash.com/photos/I7rXyzBNVQM?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[15]: https://unsplash.com/search/photos/video-game-laptop?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Printing from the Linux command line)
[#]: via: (https://www.networkworld.com/article/3373502/printing-from-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Printing from the Linux command line
======
There's a lot more to printing from the Linux command line than the lp command. Check out some of the many available options.
![Sherry \(CC BY 2.0\)][1]
Printing from the Linux command line is easy. You use the **lp** command to request a print, and **lpq** to see what print jobs are in the queue, but things get a little more complicated when you want to print double-sided or in landscape mode. And there are lots of other things you might want to do — such as printing multiple copies of a document or canceling a print job. Let's check out some options for getting your printouts to look just the way you want them to when you're printing from the command line.
### Displaying printer settings
To view your printer settings from the command line, use the **lpoptions** command. The output should look something like this:
```
$ lpoptions
copies=1 device-uri=dnssd://HP%20Color%20LaserJet%20CP2025dn%20(F47468)._pdl-datastream._tcp.local/ finishings=3 job-cancel-after=10800 job-hold-until=no-hold job-priority=50 job-sheets=none,none marker-change-time=1553023232 marker-colors=#000000,#00FFFF,#FF00FF,#FFFF00 marker-levels=18,62,62,63 marker-names='Black\ Cartridge\ HP\ CC530A,Cyan\ Cartridge\ HP\ CC531A,Magenta\ Cartridge\ HP\ CC533A,Yellow\ Cartridge\ HP\ CC532A' marker-types=toner,toner,toner,toner number-up=1 printer-commands=none printer-info='HP Color LaserJet CP2025dn (F47468)' printer-is-accepting-jobs=true printer-is-shared=true printer-is-temporary=false printer-location printer-make-and-model='HP Color LaserJet cp2025dn pcl3, hpcups 3.18.7' printer-state=3 printer-state-change-time=1553023232 printer-state-reasons=none printer-type=167964 printer-uri-supported=ipp://localhost/printers/Color-LaserJet-CP2025dn sides=one-sided
```
This output is likely to be a little more human-friendly if you turn its blanks into newlines. Notice how many settings are listed.
NOTE: In the output below, some lines have been reconnected to make this output more readable.
```
$ lpoptions | tr " " '\n'
copies=1
device-uri=dnssd://HP%20Color%20LaserJet%20CP2025dn%20(F47468)._pdl-datastream._tcp.local/
finishings=3
job-cancel-after=10800
job-hold-until=no-hold
job-priority=50
job-sheets=none,none
marker-change-time=1553023232
marker-colors=#000000,#00FFFF,#FF00FF,#FFFF00
marker-levels=18,62,62,63
marker-names='Black\ Cartridge\ HP\ CC530A,
Cyan\ Cartridge\ HP\ CC531A,
Magenta\ Cartridge\ HP\ CC533A,
Yellow\ Cartridge\ HP\ CC532A'
marker-types=toner,toner,toner,toner
number-up=1
printer-commands=none
printer-info='HP Color LaserJet CP2025dn (F47468)'
printer-is-accepting-jobs=true
printer-is-shared=true
printer-is-temporary=false
printer-location
printer-make-and-model='HP Color LaserJet cp2025dn pcl3, hpcups 3.18.7'
printer-state=3
printer-state-change-time=1553023232
printer-state-reasons=none
printer-type=167964
printer-uri-supported=ipp://localhost/printers/Color-LaserJet-CP2025dn
sides=one-sided
```
With the **-v** option, the **lpinfo** command will list drivers and related information.
```
$ lpinfo -v
network ipp
network https
network socket
network beh
direct hp
network lpd
file cups-brf:/
network ipps
network http
direct hpfax
network dnssd://HP%20Color%20LaserJet%20CP2025dn%20(F47468)._pdl-datastream._tcp.local/ <== printer
network socket://192.168.0.23 <== printer IP
```
The lpoptions command will show the settings of your default printer. Use the **-p** option to specify one of a number of available printers.
```
$ lpoptions -p LaserJet
```
The **lpstat -p** command displays the status of your printers, while **lpstat -p -d** also reports the system default destination.
```
$ lpstat -p -d
printer Color-LaserJet-CP2025dn is idle. enabled since Tue 19 Mar 2019 05:07:45 PM EDT
system default destination: Color-LaserJet-CP2025dn
```
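If you want everything in one report (scheduler state, default destination, devices, and queue status), the **-t** option is worth knowing:
```
$ lpstat -t
```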
### Useful commands
To print a document on the default printer, just use the **lp** command followed by the name of the file you want to print. If the filename includes blanks (rare on Linux systems), either put the name in quotes or start entering the file name and press the tab key to invoke file completion (as shown in the second example below).
```
$ lp "never leave home angry"
$ lp never\ leave\ home\ angry
```
The **lpq** command displays the print queue.
```
$ lpq
Color-LaserJet-CP2025dn is ready and printing
Rank Owner Job File(s) Total Size
active shs 234 agenda 2048 bytes
```
With the **-n** option, the lp command allows you to specify the number of copies of a printout you want.
```
$ lp -n 11 agenda
```
To cancel a print job, you can use the **cancel** or **lprm** command. If you don't act quickly, you might see this:
```
$ cancel 229
cancel: cancel-job failed: Job #229 is already completed - can't cancel.
```
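You can also clear every job queued for a given printer at once with the **-a** option. A quick sketch, using the printer name from the examples above:
```
$ cancel -a Color-LaserJet-CP2025dn
```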
### Two-sided printing
To print in two-sided mode, you can issue your lp command with a **sides** option that specifies both that you want to print on both sides of the paper and which edge to flip the page on. This setting produces what you would normally expect two-sided portrait documents to look like.
```
$ lp -o sides=two-sided-long-edge Notes.pdf
```
If you want all of your documents to print in two-sided mode, you can change your lp settings by using the **lpoptions** command to change the setting for **sides**.
```
$ lpoptions -o sides=two-sided-short-edge
```
To revert to single-sided printing, you would use a command like this one:
```
$ lpoptions -o sides=one-sided
```
### Printing in landscape mode
To print in landscape mode, you would use the **landscape** option with the lp command.
```
$ lp -o landscape penguin.jpg
```
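These **-o** options can be combined with each other and with flags like **-n** in a single request. For example, the following hypothetical command prints two double-sided copies in landscape mode:
```
$ lp -n 2 -o sides=two-sided-long-edge -o landscape penguin.jpg
```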
### CUPS
The print system used on Linux systems is the standards-based, open source printing system called CUPS, originally standing for **Common Unix Printing System**. It allows a computer to act as a print server.
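Because CUPS includes a scheduler that runs as a service, you can check on it the way you would any other service. A minimal sketch for systemd-based distributions, assuming the service is named **cups** as it is on most of them:
```
$ systemctl status cups
```
CUPS also provides a browser-based administration interface, typically reachable at <http://localhost:631>.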
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3373502/printing-from-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/printouts-paper-100791390-large.jpg
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,314 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Backup on Fedora Silverblue with Borg)
[#]: via: (https://fedoramagazine.org/backup-on-fedora-silverblue-with-borg/)
[#]: author: (Steven Snow https://fedoramagazine.org/author/jakfrost/)
Backup on Fedora Silverblue with Borg
======
![][1]
When it comes to backing up a Fedora Silverblue system, some of the traditional tools may not function as expected. BorgBackup (Borg) is an available alternative that can provide backup capability for your Silverblue based systems. This how-to explains the steps for using BorgBackup 1.1.8 as a layered package to back up a Fedora Silverblue 29 system.
On a normal Fedora Workstation system, _dnf_ is used to install a package. However, on Fedora Silverblue, _rpm-ostree install_ is used to install new software. This is termed layering on the Silverblue system, since the core ostree is an immutable image and the rpm package is layered onto the core system during the install process, resulting in a new local image with the layered package.
> “BorgBackup (short: Borg) is a deduplicating backup program. Optionally, it supports compression and authenticated encryption.”
>
> From the Borg website
Additionally, the main way to interact with Borg is via the command line. Reading the Quick Start guide, it becomes apparent that Borg is well suited to scripting. In fact, it is pretty much necessary to use some form of shell script when performing repeated thorough backups of a system. A basic script is provided in the [Borg Quick Start guide][2] as a starting point.
### Installing Borg
In a terminal, type the following command to install BorgBackup as a layered package:
```
$ rpm-ostree install borgbackup
```
This installs BorgBackup to the Fedora Silverblue system. To use it, reboot into the new ostree with:
```
$ systemctl reboot
```
Now Borg is installed, and ready to use.
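As a quick sanity check that the layered package works, you can ask Borg for its version (the output should reflect the 1.1.8 release installed above):
```
$ borg --version
```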
### Some notes about Silverblue and its file system, layered packages and flatpaks
#### The file system
Silverblue is an immutable operating system based on ostree, with support for layering rpms through the use of rpm-ostree. At the user level, this means the path that appears as _/home_ in a flatpak will actually be _/var/home_ to the system. For programs like Borg and other backup tools, this is important to remember, since they often require the actual path; in this example that would be _/var/home_ instead of just _/home_.
Before starting a backup its a good idea to understand where potential data could be stored, and whether that data should be backed up. Silverblues file system layout is very specific with respect to what is writable and what is not. On Silverblue, _/etc_ and _/var_ are the only places that are not immutable, and therefore writable. On a single-user system, the user home directory is the likely choice for data backup, normally excluding Downloads but including Documents and more. _/etc_ is also a logical choice for configuration options you dont want to set up again. Take note of what to exclude from your home directory and from _/etc_; you need root or sudo privileges to access some files and subdirectories of _/etc_.
#### Flatpaks
Flatpak applications store data in your home directory under _$HOME/.var/app/flatpakapp_, regardless of whether they were installed as user or system. If installed at the user level, there is also data in _$HOME/.local/share/flatpak/app/_; if installed at the system level, it will be found in _/var/lib/flatpak/app_. For the purposes of this article, it was enough to list the installed flatpaks and redirect the output to a file for backing up; if the flatpaks ever need to be reinstalled, the list file can be used to do so. For a more robust approach, you can examine the flatpak file system layout [here][3].
#### Layering and rpm-ostree
There is no easy way for a user to retrieve the layered package information aside from the **rpm-ostree status** command, which shows the layered packages of the current and previous ostree commits; if any commits are pinned, they are listed too. Below is the output on my system; note the LayeredPackages label at the end of each commit listing.
![][4]
The **ostree log** command is useful to retrieve a history of commits for the system. Type it in your terminal to see the output.
### Preparing the backup repo
In order to use Borg to back up a system, you need to first initialize a Borg repo. Before initializing, the decision must be made to use encryption (or not) and if so, what mode.
With Borg, the data can be protected using 256-bit AES encryption. The integrity and authenticity of the data, which is encrypted on the client side, is verified using HMAC-SHA256. The encryption modes are listed below.
#### Encryption modes
Hash/MAC | Not encrypted, no auth | Not encrypted, but authenticated | Encrypted (AEAD w/ AES) and authenticated
---|---|---|---
SHA-256 | none | authenticated | repokey, keyfile
BLAKE2b | n/a | authenticated-blake2 | repokey-blake2, keyfile-blake2
The encryption mode decided on was keyfile-blake2, which requires a passphrase to be entered as well as the keyfile being needed.
Borg can use the following compression types which you can specify at backup creation time.
* lz4 (super fast, low compression)
* zstd (wide range from high speed and low compression to high compression and lower speed)
* zlib (medium speed and compression)
* lzma (low speed, high compression)
For compression lzma was chosen at setting 6, the highest sensible compression level. The initial backup took 4 minutes 59.98 seconds to complete, while subsequent ones have taken less than 20 seconds as a rule.
#### Borg init
To be able to perform backups with Borg, first, create a directory for your Borg repo:
```
$ mkdir borg_testdir
```
and then change to it.
```
$ cd borg_testdir
```
Next, initialize the Borg repo with the borg init command:
```
$ borg init -e=keyfile-blake2 .
```
Borg will prompt for your passphrase, which is case sensitive and must be entered twice at creation time. Create a suitable passphrase of reasonable length from alphanumeric characters and symbols. It can be changed later on if needed without affecting the keyfile or your encrypted data. The keyfile can be exported, and should be for backup purposes, then stored somewhere secure along with the passphrase.
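A minimal sketch of exporting the key with **borg key export**; the destination path here is only an example, and the exported file belongs somewhere secure, away from the repo itself:
```
$ borg key export . /var/home/username/borg_testdir-key.txt
```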
#### Creating a backup
Next, create a test backup of the Documents directory. Remember, on Silverblue the actual path to the user Documents directory is _/var/home/username/Documents_. In practice, it is also fine to use _~/_ or _$HOME_ to indicate your home directory; the distinction is that the real path does not change, whereas an environment variable can be changed. From within the Borg repo, type the following command:
```
$ borg create .::borgtest /var/home/username/Documents
```
and that will create a backup of the Documents directory named **borgtest**. To break down the command a bit: **create** requires a **repo location**, in this case **.**, since we are in the **top level** of the **repo**. That makes the path **.::borgtest** for the backup name. Finally, **/var/home/username/Documents** is the location of the data we are backing up.
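The compression choice discussed earlier is also passed at creation time. A sketch repeating the same backup as a new archive, with lzma at level 6 (the archive name is arbitrary):
```
$ borg create --compression auto,lzma,6 .::borgtest-lzma /var/home/username/Documents
```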
The following command
```
$ borg list
```
returns a listing of your backups; after a few days it will look similar to this:
![Output of borg list command in my backup repo.][5]
To delete the test backup, type the following in the terminal
```
$ borg delete .::borgtest
```
At this point, Borg will prompt for the encryption passphrase in order to delete the backup.
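If you want to double-check an archive before removing it, **borg info** reports its size and other statistics. A quick sketch against the test backup:
```
$ borg info .::borgtest
```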
### Pulling it together into a shell script
As mentioned, Borg is an eminently script-friendly tool. The Borg documentation links provided are great places to find out more about BorgBackup. The example script provided by Borg was modified to suit this article. Below is a version with the basic parts that others could use as a starting point if desired. It captures the three pieces of system and app information mentioned earlier: the output of _flatpak list_, _rpm-ostree status_, and _ostree log_ is saved to human-readable files that get the same names, and are thus overwritten, on each run. The repo setup had to be changed, since the original example is for a remote server login over ssh and this one is meant to be used locally. The other changes mostly involved correcting directory paths, tailoring the excluded content to suit this systems home directory, and choosing the compression.
```
#!/bin/sh
# This gets the ostree commit data; this file is overwritten each time
sudo ostree log fedora-workstation:fedora/29/x86_64/silverblue > ostree.log
rpm-ostree status > rpm-ostree-status.lst
# Flatpaks get listed too
flatpak list > flatpak.lst
# Setting this, so the repo does not need to be given on the command line:
export BORG_REPO=/var/home/usernamehere/borg_testdir
# Setting this, so you won't be asked for your repository passphrase: (Caution advised!)
export BORG_PASSPHRASE='usercomplexpassphrasehere'
# some helpers and error handling:
info() { printf "\n%s %s\n\n" "$( date )" "$*" >&2; }
trap 'echo $( date ) Backup interrupted >&2; exit 2' INT TERM

info "Starting backup"

# Back up the most important directories into an archive named after
# the machine this script is currently running on:
borg create                                \
    --verbose                              \
    --filter AME                           \
    --list                                 \
    --stats                                \
    --show-rc                              \
    --compression auto,lzma,6              \
    --exclude-caches                       \
    --exclude '/var/home/*/borg_testdir'   \
    --exclude '/var/home/*/Downloads/'     \
    --exclude '/var/home/*/.var/'          \
    --exclude '/var/home/*/Desktop/'       \
    --exclude '/var/home/*/bin/'           \
                                           \
    ::'{hostname}-{now}'                   \
    /etc                                   \
    /var/home/ssnow

backup_exit=$?

info "Pruning repository"

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:
borg prune                  \
    --list                  \
    --prefix '{hostname}-'  \
    --show-rc               \
    --keep-daily 7          \
    --keep-weekly 4         \
    --keep-monthly 6

prune_exit=$?

# use highest exit code as global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

if [ ${global_exit} -eq 0 ]; then
    info "Backup and Prune finished successfully"
elif [ ${global_exit} -eq 1 ]; then
    info "Backup and/or Prune finished with warnings"
else
    info "Backup and/or Prune finished with errors"
fi
exit ${global_exit}
```
This listing is missing some more excludes that were specific to the test system setup and is very basic, with room for customization and improvement. For the purposes of writing this article, it wasnt a problem to keep the passphrase inside the shell script file. Under normal use, it is better to enter the passphrase each time you perform the backup.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/backup-on-fedora-silverblue-with-borg/
作者:[Steven Snow][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/jakfrost/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/borg-816x345.jpg
[2]: https://borgbackup.readthedocs.io/en/stable/quickstart.html
[3]: https://github.com/flatpak/flatpak/wiki/Filesystem
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-18-17-11-21-1024x285.png
[5]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-18-18-56-03.png

View File

@ -0,0 +1,50 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Contribute at the Fedora Test Day for Fedora Modularity)
[#]: via: (https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/)
[#]: author: (Sumantro Mukherjee https://fedoramagazine.org/author/sumantrom/)
Contribute at the Fedora Test Day for Fedora Modularity
======
![][1]
Modularity lets you keep the right version of an application, language runtime, or other software on your Fedora system even as the operating system is updated. You can read more about Modularity in general on the [Fedora documentation site][2].
The Modularity folks have been working on Modules for everyone. As a result, the Fedora Modularity and QA teams have organized a test day for **Tuesday, March 26, 2019**. Refer to the [wiki page][3] for links to the test images youll need to participate. Read on for more information on the test day.
### How do test days work?
A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If youve never contributed before, this is a perfect way to get started.
To contribute, you only need to be able to do the following things:
* Download test materials, which include some large files
* Read and follow directions step by step
The [wiki page][3] for the modularity test day has a lot of good information on what and how to test. After youve done some testing, you can log your results in the test day [web application][4]. If youre available on or around the day of the event, please do some testing and report your results.
Happy testing, and we hope to see you on test day.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/
作者:[Sumantro Mukherjee][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/sumantrom/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2015/03/test-days-945x400.png
[2]: https://docs.fedoraproject.org/en-US/modularity/
[3]: https://fedoraproject.org/wiki/Test_Day:2019-03-26_Modularity_Test_Day
[4]: http://testdays.fedorainfracloud.org/events/61

View File

@ -0,0 +1,222 @@
[#]: collector: (lujun9972)
[#]: translator: (Modrisco)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Vim: The basics)
[#]: via: (https://opensource.com/article/19/3/getting-started-vim)
[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson)
Getting started with Vim: The basics
======
Learn to use Vim enough to get by at work or for a new project.
![Person standing in front of a giant computer screen with numbers, data][1]
I remember the very first time I encountered Vim. I was a university student, and the computers in the computer science department's lab were installed with Ubuntu Linux. While I had been exposed to different Linux variations (like RHEL) even before my college years (Red Hat sold its CDs at Best Buy!), this was the first time I needed to use the Linux operating system regularly, because my classes required me to do so. Once I started using Linux, like many others before and after me, I began to feel like a "real programmer."
![Real Programmers comic][2]
Real Programmers, by [xkcd][3]
Students could use a graphical text editor like [Kate][4], which was installed on the lab computers by default. For students who could use the shell but weren't used to the console-based editor, the popular choice was [Nano][5], which provided good interactive menus and an experience similar to Windows' graphical text editor.
I used Nano sometimes, but I heard awesome things about [Vi/Vim][6] and [Emacs][7] and really wanted to give them a try (mainly because they looked cool, and I was also curious to see what was so great about them). Using Vim for the first time scared me—I did not want to mess anything up! But once I got the hang of it, things became much easier and I could appreciate the editor's powerful capabilities. As for Emacs, well, I sort of gave up, but I'm happy I stuck with Vim.
In this article, I will walk through Vim (based on my personal experience) just enough so you can get by with it as an editor on a Linux system. This will neither make you an expert nor even scratch the surface of many of Vim's powerful capabilities. But the starting point always matters, and I want to make the beginning experience as easy as possible, and you can explore the rest on your own.
### Step 0: Open a console window
Before jumping into Vim, you need to do a little preparation. Open a console terminal from your Linux operating system. (Since Vim is also available on MacOS, Mac users can use these instructions, also.)
Once a terminal window is up, type the **ls** command to list the current directory. Then, type **mkdir Tutorial** to create a new directory called **Tutorial**. Go inside the directory by typing **cd Tutorial**.
![Create a folder][8]
That's it for preparation. Now it's time to move on to the fun part—starting to use Vim.
### Step 1: Create and close a Vim file without saving
Remember when I said I was scared to use Vim at first? Well, the scary part was thinking, "what if I change an existing file and mess things up?" After all, several computer science assignments required me to work on existing files by modifying them. I wanted to know: _How can I open and close a file without saving my changes?_
The good news is you can use the same command to create or open a file in Vim: **vim <FILE_NAME>**, where **<FILE_NAME>** represents the target file name you want to create or modify. Let's create a file named **HelloWorld.java** by typing **vim HelloWorld.java**.
Hello, Vim! Now, here is a very important concept in Vim, possibly the most important to remember: Vim has multiple modes. Here are three you need to know to do Vim basics:
Mode | Description
---|---
Normal | Default; for navigation and simple editing
Insert | For explicitly inserting and modifying text
Command Line | For operations like saving, exiting, etc.
Vim has other modes, like Visual, Select, and Ex-Mode, but Normal, Insert, and Command Line modes are good enough for us.
You are now in Normal mode. If you have text, you can move around with your arrow keys or other navigation keystrokes (which you will see later). To make sure you are in Normal mode, simply hit the **Esc** (Escape) key.
> **Tip:** **Esc** switches to Normal mode. Even though you are already in Normal mode, hit **Esc** just for practice's sake.
Now, this will be interesting. Press **:** (the colon key) followed by **q!** (i.e., **:q!**). Your screen will look like this:
![Editing Vim][9]
Pressing the colon in Normal mode switches Vim to Command Line mode, and the **:q!** command quits the Vim editor without saving. In other words, you are abandoning all changes. You can also use **ZQ**; choose whichever option is more convenient.
Once you hit **Enter**, you should no longer be in Vim. Repeat the exercise a few times, just to get the hang of it. Once you've done that, move on to the next section to learn how to make a change to this file.
### Step 2: Make and save modifications in Vim
Reopen the file by typing **vim HelloWorld.java** and pressing the **Enter** key. Insert mode is where you can make changes to a file. First, hit **Esc** to make sure you are in Normal mode, then press **i** to go into Insert mode. (Yes, that is the letter **i**.)
In the lower-left, you should see **\-- INSERT --**. This means you are in Insert mode.
![Vim insert mode][10]
Type some Java code. You can type anything you want, but here is an example for you to follow. Your screen will look like this:
```
public class HelloWorld {
public static void main(String[] args) {
}
}
```
Very pretty! Notice how the text is highlighted in Java syntax highlight colors. Because you started the file in Java, Vim will detect the syntax color.
Save the file. Hit **Esc** to leave Insert mode and enter Command Line mode. Type **:** and follow that with **x!** (i.e., a colon followed by x and !). Hit **Enter** to save the file. You can also type **:wq** to perform the same operation.
Now you know how to enter text using Insert mode and save the file using **:x!** or **:wq**.
### Step 3: Basic navigation in Vim
While you can always use your friendly Up, Down, Left, and Right arrow buttons to move around a file, that would be very difficult in a large file with almost countless lines. It's also helpful to be able to jump around within a line. Although Vim has a ton of awesome navigation features, the first one I want to show you is how to go to a specific line.
Press the **Esc** key to make sure you are in Normal mode, then type **:set number** and hit **Enter**.
Voila! You see line numbers on the left side of each line.
![Showing Line Numbers][12]
OK, you may say, "that's cool, but how do I jump to a line?" Again, make sure you are in Normal mode, then type **:<LINE_NUMBER>**, where **<LINE_NUMBER>** is the number of the line you want to go to, and press **Enter**. Try moving to line 2.
```
:2
```
Now move to line 3.
![Jump to line 3][13]
But imagine a scenario where you are dealing with a file that is 1,000 lines long and you want to go to the end of the file. How do you get there? Make sure you are in Normal mode, then type **:$** and press **Enter**.
You will be on the last line!
Now that you know how to jump among the lines, as a bonus, let's learn how to move to the end of a line. Make sure you are on a line with some text, like line 3, and type **$**.
![Go to the last character][14]
You're now at the last character on the line. In this example, the open curly brace is highlighted to show where your cursor moved to, and the closing curly brace is highlighted because it is the opening curly brace's matching character.
That's it for basic navigation in Vim. Wait, don't exit the file, though. Let's move to basic editing in Vim. Feel free to grab a cup of coffee or tea first.
### Step 4: Basic editing in Vim
Now that you know how to navigate around a file by hopping onto the line you want, you can use that skill to do some basic editing in Vim. Switch to Insert mode. (Remember how to do that, by hitting the **i** key?) Sure, you can edit by using the keyboard to delete or insert characters, but Vim offers much quicker ways to edit files.
Move to line 3, where it shows **public static void main(String[] args) {**. Quickly hit the **d** key twice in succession. Yes, that is **dd**. If you did it successfully, you will see a screen like this, where line 3 is gone, and every following line moved up by one (i.e., line 4 became line 3).
![Deleting A Line][15]
That's the _delete_ command. Don't fear! Hit **u** and you will see the deleted line recovered. Whew. This is the _undo_ command.
![Undoing a change in Vim][16]
The next lesson is learning how to copy and paste text, but first, you need to learn how to highlight text in Vim. Press **v** and move your Left and Right arrow buttons to select and deselect text. This feature is also very useful when you are showing code to others and want to identify the code you want them to see.
![Highlighting text in Vim][17]
Move to line 4, where it says **System.out.println("Hello, Opensource");**. Highlight all of line 4. Done? OK, while line 4 is still highlighted, press **y**. This is called _yank_ mode, and it will copy the text to the clipboard. Next, create a new line underneath by entering **o**. Note that this will put you into Insert mode. Get out of Insert mode by pressing **Esc**, then hit **p**, which stands for _paste_. This will paste the yanked text onto the new line.
![Pasting in Vim][18]
As an exercise, repeat these steps but also modify the text on your newly created lines. Also, make sure the lines are aligned well.
> **Hint:** You need to switch back and forth between Insert mode and Command Line mode to accomplish this task.
Once you are finished, save the file with the **:x!** command. That's all for basic editing in Vim.
### Step 5: Basic searching in Vim
Imagine your team lead wants you to change a text string in a project. How can you do that quickly? You might want to search for the line using a certain keyword.
Vim's search functionality can be very useful. To search, make sure you are in Normal mode by pressing the **Esc** key, then type **/<SEARCH_KEYWORD>**, where **<SEARCH_KEYWORD>** is the text string you want to find, and hit **Enter**. Here we are searching for the keyword string "Hello," as the image below shows.
![Searching in Vim][19]
However, a keyword can appear more than once, and this may not be the one you want. So, how do you navigate around to find the next match? You simply press the **n** key, which stands for _next_. Make sure that you aren't in Insert mode when you do this!
### Bonus step: Use split mode in Vim
That pretty much covers all the Vim basics. But, as a bonus, I want to show you a cool Vim feature called _split mode_.
Get out of _HelloWorld.java_ and create a new file. In a terminal window, type **vim GoodBye.java** and hit **Enter** to create a new file named _GoodBye.java_.
Enter any text you want; I decided to type "Goodbye." Save the file. (Remember you can use **:x!** or **:wq** in Command Line mode.)
In Command Line mode, type **:split HelloWorld.java**, and see what happens.
![Split mode in Vim][20]
Wow! Look at that! The **split** command created horizontally divided windows with _HelloWorld.java_ above and _GoodBye.java_ below. How can you switch between the windows? Hold **Control** (on a Mac) or **CTRL** (on a PC) then hit **ww** (i.e., **w** twice in succession).
As a final exercise, try to edit _GoodBye.java_ to match the screen below by copying and pasting from _HelloWorld.java_.
![Modify GoodBye.java file in Split Mode][21]
Save both files, and you are done!
> **TIP 1:** If you want to arrange the files vertically, use the command **:vsplit <FILE_NAME>** (instead of **:split <FILE_NAME>**), where **<FILE_NAME>** is the name of the file you want to open in Split mode.
>
> **TIP 2:** You can open more than two files by calling as many additional **split** or **vsplit** commands as you want. Try it and see how it looks.
### Vim cheat sheet
In this article, you learned how to use Vim just enough to get by for work or a project. But this is just the beginning of your journey to unlock Vim's powerful capabilities. Be sure to check out other great tutorials and tips on Opensource.com.
To make things a little easier, I've summarized everything you've learned into [a handy cheat sheet][22].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/getting-started-vim
作者:[Bryant Son (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://opensource.com/sites/default/files/uploads/1_xkcdcartoon.jpg (Real Programmers comic)
[3]: https://xkcd.com/378/
[4]: https://kate-editor.org
[5]: https://www.nano-editor.org
[6]: https://www.vim.org
[7]: https://www.gnu.org/software/emacs
[8]: https://opensource.com/sites/default/files/uploads/2_createtestfolder.jpg (Create a folder)
[9]: https://opensource.com/sites/default/files/uploads/4_existingvim.jpg (Editing Vim)
[10]: https://opensource.com/sites/default/files/uploads/6_insertionmode.jpg (Vim insert mode)
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[12]: https://opensource.com/sites/default/files/uploads/10_setnumberresult_0.jpg (Showing Line Numbers)
[13]: https://opensource.com/sites/default/files/uploads/12_jumpintoline3.jpg (Jump to line 3)
[14]: https://opensource.com/sites/default/files/uploads/14_gotolastcharacter.jpg (Go to the last character)
[15]: https://opensource.com/sites/default/files/uploads/15_deletinglines.jpg (Deleting A Line)
[16]: https://opensource.com/sites/default/files/uploads/16_undoingtheline.jpg (Undoing a change in Vim)
[17]: https://opensource.com/sites/default/files/uploads/17_highlighting.jpg (Highlighting text in Vim)
[18]: https://opensource.com/sites/default/files/uploads/19_pasting.jpg (Pasting in Vim)
[19]: https://opensource.com/sites/default/files/uploads/22_searchmode.jpg (Searching in Vim)
[20]: https://opensource.com/sites/default/files/uploads/26_copytonewfiles.jpg (Split mode in Vim)
[21]: https://opensource.com/sites/default/files/uploads/27_exercise.jpg (Modify GoodBye.java file in Split Mode)
[22]: https://opensource.com/downloads/cheat-sheet-vim

View File

@ -0,0 +1,166 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Reducing sysadmin toil with Kubernetes controllers)
[#]: via: (https://opensource.com/article/19/3/reducing-sysadmin-toil-kubernetes-controllers)
[#]: author: (Paul Czarkowski https://opensource.com/users/paulczar)
Reducing sysadmin toil with Kubernetes controllers
======
Controllers can ease a sysadmin's workload by handling things like creating and managing DNS addresses and SSL certificates.
![][1]
Kubernetes is a platform for reducing toil cunningly disguised as a platform for running containers. The element that allows for both running containers and reducing toil is the Kubernetes concept of a **Controller**.
Most resources in Kubernetes are managed by **kube-controller-manager**, or "controller" for short. A [controller][2] is defined as "a control loop that watches the shared state of a cluster … and makes changes attempting to move the current state toward the desired state." Think of it like this: A Kubernetes controller is to a microservice as a Chef recipe (or an Ansible playbook) is to a monolith.
Each Kubernetes resource is controlled by its own control loop. This is a step forward from previous systems like Chef or Puppet, which both have control loops at the server level, but not the resource level. A controller is a fairly simple piece of code that creates a control loop over a single resource to ensure the resource is behaving correctly. These control loops can stack together to create complex functionality with simple interfaces.
The canonical example of this in action is in how we manage Pods in Kubernetes. A Pod is effectively a running copy of an application that a specific worker node is asked to run. If that application crashes, the kubelet running on that node will start it again. However, if that node crashes, the Pod is not recovered, as the control loop (via the kubelet process) responsible for the resource no longer exists. To make applications more resilient, Kubernetes has the ReplicaSet controller.
The ReplicaSet controller is bundled inside the Kubernetes **controller-manager**, which runs on the Kubernetes master node and contains the controllers for these more advanced resources. The ReplicaSet controller is responsible for ensuring that a set number of copies of your application is always running. To do this, the ReplicaSet controller requests that a given number of Pods is created. It then routinely checks that the correct number of Pods is still running and will request more Pods or destroy existing Pods to do so.
By requesting a ReplicaSet from Kubernetes, you get a self-healing deployment of your application. You can further add lifecycle management to your workload by requesting [a Deployment][3], which is a controller that manages ReplicaSets and provides rolling upgrades by managing multiple versions of your application's ReplicaSets.
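As a brief illustration of this layering, here is a hypothetical kubectl session against a Deployment named **nginx**: scaling the Deployment changes the requested state, and the ReplicaSet controller converges the number of running Pods to match.
```
# Ask for three replicas; the ReplicaSet controller creates or destroys Pods to match
$ kubectl scale deployment nginx --replicas=3

# Inspect the ReplicaSet the Deployment manages on your behalf
$ kubectl get replicaset
```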
These controllers are great for managing Kubernetes resources and fantastic for managing resources outside of Kubernetes. The [Cloud Controller Manager][4] is a grouping of Kubernetes controllers that acts on resources external to Kubernetes, specifically resources that provide functionality to Kubernetes on the underlying cloud infrastructure. This is what drives Kubernetes' ability to do things like having a **LoadBalancer** [Service][5] type create and manage a cloud-specific load-balancer (e.g., an Elastic Load Balancer on AWS).
Furthermore, you can extend Kubernetes by writing a controller that watches for events and annotations and performs extra work, acting on Kubernetes resources or external resources that have some form of programmable API.
To review:
* Controllers are a fundamental building block of Kubernetes' functionality.
* A controller forms a control loop to ensure that the state of a given resource matches the requested state.
* Kubernetes provides controllers via Controller Manager and Cloud Controller Manager processes that provide additional resilience and functionality.
* The ReplicaSet controller adds resiliency to pods by ensuring the correct number of replicas is running.
* A Deployment controller adds rolling upgrade capabilities to ReplicaSets.
* You can extend Kubernetes' functionality by writing your own controllers.
### Controllers reduce sysadmin toil
Some of the most common tickets in a sysadmin's queue are for fairly simple tasks that should be automated, but for various reasons are not. For example, creating or updating a DNS record generally requires updating a [zone file][6], but one bad entry and you can take down your entire DNS infrastructure. Or how about those tickets that look like _[SYSAD-42214] Expired SSL Certificate - Production is down_?
[![DNS Haiku][7]][8]
DNS haiku, image by HasturHasturHamster
What if I told you that Kubernetes could manage these things for you by running some additional controllers?
Imagine a world where asking Kubernetes to run applications for you would automatically create and manage DNS addresses and SSL certificates. What a world we live in!
#### Example: External DNS controller
The **[external-dns][9]** controller is a perfect example of Kubernetes treating operations as a microservice. You configure it with your DNS provider, and it will watch resources including Services and Ingress controllers. When one of those resources changes, it will inspect them for annotations that will tell it when it needs to perform an action.
With the **external-dns** controller running in your cluster, you can add the following annotation to a service, and it will go out and create a matching [DNS A record][10] for that resource:
```
kubectl annotate service nginx \
"external-dns.alpha.kubernetes.io/hostname=nginx.example.org."
```
You can change other characteristics, such as the DNS record's TTL value:
```
kubectl annotate service nginx \
"external-dns.alpha.kubernetes.io/ttl=10"
```
Just like that, you now have automatic DNS management for your applications and services in Kubernetes that reacts to any changes in your cluster to ensure your DNS is correct.
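To confirm the annotations landed on the Service, a quick check against the **nginx** example above:
```
$ kubectl describe service nginx | grep external-dns
```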
#### Example: Certificate manager operator
Like the **external-dns** controller, [**cert-manager**][11] will react to changes in resources, but it also comes with a custom resource definition (CRD) that allows you to request certificates as resources of their own, not just as a byproduct of an annotation.
**cert-manager** works with [Let's Encrypt][12] and other sources of certificates to request valid, signed Transport Layer Security (TLS) certificates. You can even use it in combination with **external-dns**, like in the following example, which registers **web.example.com**, retrieves a TLS certificate from Let's Encrypt, and stores it in a Secret.
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/acme-http01-edit-in-place: "true"
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: "true"
  name: example
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - backend:
          serviceName: example
          servicePort: 80
        path: /*
  tls:
  - hosts:
    - web.example.com
    secretName: example-tls
```
You can also request a certificate directly from the **cert-manager** CRD, like in the following example. As in the above, it will result in a certificate key pair stored in a Kubernetes Secret:
```
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-com-tls
  issuerRef:
    name: letsencrypt-staging
  commonName: example.com
  dnsNames:
  - www.example.com
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - example.com
    - http01:
        ingress: my-ingress
      domains:
      - www.example.com
```
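To check on the request, you can inspect the Certificate resource and the Secret it produces (resource names taken from the example above):
```
# Follow the certificate's progress and look at the resulting key pair.
kubectl describe certificate example-com
kubectl get secret example-com-tls -o yaml
```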
### Conclusion
This was a quick look at one way Kubernetes is helping enable a new wave of changes in how we operate software. This is one of my favorite topics, and I look forward to sharing more on [Opensource.com][14] and my [blog][15]. I'd also like to hear how you use controllers—message me on Twitter [@pczarkowski][16].
* * *
_This article is based on[Cloud Native Operations - Kubernetes Controllers][17] originally published on Paul Czarkowski's blog._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/reducing-sysadmin-toil-kubernetes-controllers
作者:[Paul Czarkowski][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/paulczar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv
[2]: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
[3]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[4]: https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/
[5]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
[6]: https://en.wikipedia.org/wiki/Zone_file
[7]: https://opensource.com/sites/default/files/uploads/dns_haiku.png (DNS Haiku)
[8]: https://www.reddit.com/r/sysadmin/comments/4oj7pv/network_solutions_haiku/
[9]: https://github.com/kubernetes-incubator/external-dns
[10]: https://en.wikipedia.org/wiki/List_of_DNS_record_types#Resource_records
[11]: http://docs.cert-manager.io/en/latest/
[12]: https://letsencrypt.org/
[13]: http://www.example.com
[14]: http://Opensource.com
[15]: https://tech.paulcz.net/blog/
[16]: https://twitter.com/pczarkowski
[17]: https://tech.paulcz.net/blog/cloud-native-operations-k8s-controllers/

View File

@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An inside look at an IIoT-powered smart factory)
[#]: via: (https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
An inside look at an IIoT-powered smart factory
======
### Despite housing some 50 robots and 50 people, Tempo Automation's gleaming connected factory relies on industrial IoT and looks more like a high-tech startup office than a manufacturing plant.
![Tempo Automation][1]
As someone who's spent his whole career working in offices, not factories, I had very little idea what a modern “smart factory” powered by the industrial Internet of Things (IIoT) might look like. That's why I was so interested in [Tempo Automation][2]'s new 42,000-square-foot facility in San Francisco's trendy Design District.
Frankly, I pictured the company's facility, which uses IIoT to automatically configure, operate, and monitor the prototyping and low-volume production of printed circuit board assemblies (PCBAs), as a cacophony of robots and conveyor belts attended to by a grizzled band of grease-stained technicians. You know, a 21st-century update of Charlie Chaplin's 1936 classic *Modern Times*, making equipment for customers in the aerospace, medtech, industrial automation, consumer electronics, and automotive industries. (The company just inked a [new contract with Lockheed Martin][3].)
**[ Learn more about the [industrial Internet of Things][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**
Not exactly. As you can see from the pictures below, despite housing some 50 robots and 50 people, this gleaming “connected factory” looks more like a high-tech startup office, with just as many computers and a few more hard-to-identify machines, including Solder Jet and Stencil Printers, zone reflow ovens, 3D X-ray devices, and many more.
![Tempo Automation office space][6]
![Tempo Automation factory floor][7]
## How Tempo Automation's 'smart factory' works
On the front end, Tempo's customers upload CAD files with their board designs and Bills of Materials (BOM) listing the required parts to be used. After performing feature extraction on the design and developing a virtual model of the finished product, the Tempo platform (called Tempocom) creates a manufacturing plan and automatically programs the factory's machines. Tempocom also creates work plans for the factory employees, uploading them to the networked IIoT mobile devices they all carry. Updated in real time based on design and process changes, this “digital traveler” tells workers where to go and what to work on next.
While Tempocom is planning and organizing the internal work of production, the system is also connected to supplier databases, seeking and ordering the parts that will be used in assembly, optimizing for speed of delivery to the Tempo factory.
## Connecting the digital thread
“There could be up to 20 robots, 400 unique parts, and 25 people working on the factory floor to produce one order start to finish in a matter of hours,” explained [Shashank Samala][8], Tempo's co-founder and vice president of product, in an email. Tempo “employs IIoT to automatically configure, operate, and monitor” the entire process, coordinated by a “connected manufacturing system” that creates an “unbroken digital thread from design intent of the engineer captured on the website, to suppliers distributed across the country, to robots and people on the factory floor.”
Rather than the machines on the floor functioning as “isolated islands of technology,” Samala added, Tempo Automation uses [Amazon Web Services (AWS) GovCloud][9] to network everything in a bi-directional feedback loop.
“After customers upload their design to the Tempo platform, our software extracts the design features and then streams relevant data down to all the devices, processes, and robots on the factory floor,” he said. “This loop then works the other way: As the robots build the products, they collect data and feedback about the design during production. This data is then streamed back through the Tempo secure cloud architecture to the customer as a Production Forensics report.”
Samala claimed the system has “streamlined operations, improved collaboration, and simplified remote management and control.”
## Traditional IoT, too
Of course, the Tempo factory isn't all fancy, cutting-edge IIoT implementations. According to Ryan Saul, vice president of manufacturing, the plant also includes an array of IoT sensors that track temperature, humidity, equipment status, job progress, reported defects, and so on to help engineers and executives understand how the facility is operating.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-floor-100791923-large.jpg
[2]: http://www.tempoautomation.com/
[3]: https://www.businesswire.com/news/home/20190325005097/en/Tempo-Automation-Announces-Contract-Lockheed-Martin
[4]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb
[5]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[6]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-2-100791921-large.jpg
[7]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-100791922-large.jpg
[8]: https://www.linkedin.com/in/shashanksamala/
[9]: https://aws.amazon.com/govcloud-us/
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bringing Kubernetes to the bare-metal edge)
[#]: via: (https://opensource.com/article/19/3/bringing-kubernetes-bare-metal-edge)
[#]: author: (John Studarus https://opensource.com/users/studarus)
Bringing Kubernetes to the bare-metal edge
======
New Kubespray features enable Kubernetes clusters to be deployed across
next-generation edge locations.
![cubes coming together to create a larger cube][1]
[Kubespray][2], a community project that provides Ansible playbooks for the deployment and management of Kubernetes clusters, recently added support for the bare-metal cloud [Packet][3]. This allows Kubernetes clusters to be deployed across next-generation edge locations, including [cell-tower based micro datacenters][4].
Packet, which is unique in its bare-metal focus, expands Kubespray's support beyond the usual clouds—Amazon Web Services, Google Compute Engine, Azure, OpenStack, vSphere, and Oracle Cloud Infrastructure. Kubespray removes the complexities of standing up a Kubernetes cluster through automation using Terraform and Ansible. Terraform provisions the infrastructure and installs the prerequisites for the Ansible installation. Terraform provider plugins enable support for a variety of different cloud providers. The Ansible playbook then deploys and configures Kubernetes.
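In condensed form, that workflow looks roughly like this. This is a sketch based on the Kubespray documentation; the directory layout, variable file, and inventory names are illustrative and may differ between releases, so treat the project's own instructions as authoritative.
```
# Fetch Kubespray, provision bare-metal nodes on Packet with Terraform,
# then deploy and configure Kubernetes with the Ansible playbook.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
terraform init contrib/terraform/packet
terraform apply -var-file=mycluster.tfvars contrib/terraform/packet
ansible-playbook -i inventory/mycluster/hosts cluster.yml
```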
Since there are already [detailed instructions online][5] for deploying with Kubespray on Packet, I'll focus on why bare-metal support is important for Kubernetes and what's required to make it happen.
### Why bare metal?
Historically, Kubernetes deployments relied upon the "creature comforts" of a public cloud or a fully managed private cloud to provide virtual machines and networking infrastructure for running Kubernetes. This adds a layer of abstraction (e.g., a hypervisor with virtual machines) that Kubernetes doesn't necessarily need. In fact, Kubernetes began its life on bare metal as Google's Borg.
As we move workloads closer to the end user (in the form of edge computing) and deploy to more diverse environments (including hybrid and on-premises infrastructure of different architectures and sizes), relying on a homogenous public cloud substrate isn't always possible or ideal. For instance, with edge locations being resource constrained, it is more efficient and practical to run Kubernetes directly on bare metal.
### Mind the gaps
Without a full-featured public cloud underneath a bare-metal cluster, some traditional capabilities, such as load balancing and storage orchestration, will need to be managed directly within the Kubernetes cluster. Luckily there are projects, such as [MetalLB][6] and [Rook][7], that provide this support for Kubernetes.
MetalLB, a Layer 2 and Layer 3 load balancer, is integrated into Kubespray, and it's easy to install support for Rook, which orchestrates Ceph to provide distributed and replicated storage for a Kubernetes cluster, on a bare-metal cluster. In addition to enabling full functionality, this "bring your own" approach to storage and load balancing removes reliance upon specific cloud services, helping you avoid lock-in with an approach that can be installed anywhere.
Kubespray has support for ARM64 processors. The ARM architecture (which is starting to show up regularly in datacenter-grade hardware, SmartNICs, and other custom accelerators) has a long history in mobile and embedded devices, making it well-suited for edge deployments.
Going forward, I hope to see deeper integration with MetalLB and Rook as well as bare-metal continuous integration (CI) of daily builds atop a number of different hardware configurations. Access to automated bare metal at Packet enables testing and maintaining support across various processor types, storage options, and networking setups. This will help ensure that Kubespray-powered Kubernetes can be deployed and managed confidently across public clouds, bare metal, and edge environments.
### It takes a village
Kubespray is an open source project driven by the community, indebted to its core developers and contributors as well as the folks that assisted with the Packet integration. Contributors include [Maxime Guyot][8] and [Aivars Sterns][9] for the initial commits and code reviews, [Rong Zhang][10] and [Ed Vielmetti][11] for document reviews, as well as [Tomáš Karásek][12] (who maintains the Packet Go library and Terraform provider).
* * *
_John Studarus will present[The Open Micro Edge Data Center][13] at the [Open Infrastructure Summit][14], April 29-May 1 in Denver._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/bringing-kubernetes-bare-metal-edge
作者:[John Studarus][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/studarus
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://kubespray.io/
[3]: https://www.packet.com/
[4]: https://twitter.com/packethost/status/1062147355108085760
[5]: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/packet.md
[6]: https://metallb.universe.tf/
[7]: https://rook.io/
[8]: https://twitter.com/Miouge
[9]: https://github.com/Atoms
[10]: https://github.com/riverzhang
[11]: https://twitter.com/vielmetti
[12]: https://t0mk.github.io/
[13]: https://www.openstack.org/summit/denver-2019/summit-schedule/events/23153/the-open-micro-edge-data-center
[14]: https://openstack.org/summit

View File

@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Changes in SD-WAN Purchase Drivers Show Maturity of the Technology)
[#]: via: (https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all)
[#]: author: (Cliff Grossner https://www.networkworld.com/author/Cliff-Grossner/)
Changes in SD-WAN Purchase Drivers Show Maturity of the Technology
======
![istock][1]
[SD-WANs][2] have been available now for the past five years, but adoption has been light compared to that of the overall WAN market. This should be no surprise, as the technology was immature, and customers were dipping their toes in the water first as a test. Recently, however, there are signs that the market is maturing, which also happens to coincide with an acceleration of the market.
Evidence of the maturation of SD-WANs can be seen in the most recent IHS Markit _Campus LAN and WAN SDN Strategies and Leadership North American Enterprise Survey_. Exhibit 1 shows that the top drivers of SD-WAN deployments are the simplification of WAN provisioning, automation capabilities, and direct cloud connectivity—all of which require an architectural change.
This is in stark contrast to the approach of early adopters, who sought opex and capex savings by shifting to cheap broadband and low-cost branch hardware. The survey data finds that opex savings now ranks tied for fifth place among SD-WAN purchase drivers and that reduced capex ranks last, indicating that cost savings no longer carry the same weight they did with early adopters.
The shift in purchase drivers indicates companies are looking for SD-WAN to provide more value than legacy WAN.
With [SD-WAN][3], the “software defined” indicates that the control plane has been separated from the data plane, enabling the control plane to be abstracted away from the hardware and allowing centralized, distributed, and hybrid control architectures, working alongside the centralized management of those architectures. This provides many benefits, the biggest of which is to make WAN provisioning easier.
![Exhibit 1: Simplification and automation are top drivers for SD-WAN.][4]
With SD-WAN, most mainstream buyers now demand Zero Touch Provisioning, where the SD-WAN appliance automatically calls home when it attaches to the network and pulls its configuration down from a centralized location. Also, changes can be made through a centralized console and then immediately pushed out to every device. This can automate many of the mundane and repetitive tasks associated with running a network.
Such a setup carries many benefits—the most important being that highly skilled network engineers can dedicate more time to innovation and less time to working on tasks associated with “keeping the lights on.”
At present, most resources—time and money—associated with running the WAN are allocated to maintaining the status quo. In the cloud era, however, business leaders embracing digital transformation are looking to their IT organization to help drive innovation and leapfrog the competition. SD-WANs can modernize the network, and the technology will tip the IT resource scale back in favor of innovation.
### Mainstream buyers set new expectations for SD-WAN
With early adopters, technology innovation is key because adopters are generally tech-savvy buyers and are always looking to use the latest and greatest to gain an edge. With mainstream buyers, other concerns arise. Exhibit 2 from the IHS Markit survey shows that technological innovation now ranks tied in fourth place in what buyers look for from an SD-WAN provider. While innovation is still important, factors such as security, financial stability, and product service and reliability rank higher. And although businesses need a strong technical solution, it cannot be achieved at the expense of security, vendor stability, or quality without putting operations at risk.
It's not surprising, then, that security turned out to be the overwhelming top evaluation criterion, as SD-WANs enable businesses to implement local internet breakout and cloud on-ramp features. Overall, SD-WANs help make applications perform better, especially as enterprises deploy workloads in off-premises, cloud-service-provider-operated data centers as they build their hybrid and multi-clouds.
Another security capability of SD-WANs is their ability to easily implement segmentation, which enables businesses to establish centrally defined and globally consistent security policies that isolate traffic. For example, a retailer could isolate point-of-sale systems from its guest Wi-Fi network. [SD-WAN vendors][5] can also establish partnerships with well-known security vendors that enable the SD-WAN software to be service chained into application traffic flows, in the process allowing mainstream buyers their choice of security technology.
![Exhibit 2: SD-WAN buyers now want security and financially viable vendors.][6]
### The bottom line
The SD-WAN market is maturing, and the shift from early adopters to mainstream businesses will create a “rising tide” that will benefit all SD-WAN buyers in the WAN ecosystem. As a result, vendors will work to meet calls emphasizing greater simplicity and risk reduction, as well as bring about features that provide an integrated connectivity fabric for enterprise edge, hybrid, and multi-clouds.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all
作者:[Cliff Grossner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Cliff-Grossner/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/istock-998475736-100791932-large.jpg
[2]: https://www.silver-peak.com/sd-wan
[3]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[4]: https://images.idgesg.net/images/article/2019/03/chart-1_post-10-100791930-large.jpg
[5]: https://www.silver-peak.com/sd-wan/choosing-an-sd-wan-vendor
[6]: https://images.idgesg.net/images/article/2019/03/chart-2_post-10-100791931-large.jpg

View File

@ -0,0 +1,229 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use NetBSD on a Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/netbsd-raspberry-pi)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
How to use NetBSD on a Raspberry Pi
======
Experiment with NetBSD, an open source OS with direct lineage back to the original UNIX source code, on your Raspberry Pi.
![][1]
Do you have an old Raspberry Pi lying around gathering dust, maybe after a recent Pi upgrade? Are you curious about [BSD Unix][2]? If you answered "yes" to both of these questions, you'll be pleased to know that the first is the solution to the second, because you can run [NetBSD][3], as far back as the very first release, on a Raspberry Pi.
BSD is the Berkeley Software Distribution of [Unix][4]. In fact, it's the only open source Unix with direct lineage back to the original source code written by Dennis Ritchie and Ken Thompson at Bell Labs. Other modern versions are either proprietary (such as AIX and Solaris) or clever re-implementations (such as Minix and GNU/Linux). If you're used to Linux, you'll feel mostly right at home with BSD, but there are plenty of new commands and conventions to discover. If you're still relatively new to open source, trying BSD is a good way to experience a traditional Unix.
Admittedly, NetBSD isn't an operating system that's perfectly suited for the Pi. It's a minimal install compared to many Linux distributions designed specifically for the Pi, and not all components of recent Pi models are functional under NetBSD yet. However, it's arguably an ideal OS for the older Pi models, since it's lightweight and lovingly maintained. And if nothing else, it's a lot of fun for any die-hard Unix geek to experience another side of the [POSIX][5] world.
### Download NetBSD
There are different versions of BSD. NetBSD has cultivated a reputation for being lightweight and versatile (its website features the tagline "Of course it runs NetBSD"). It offers an image of the latest version of the OS for every version of the Raspberry Pi since the original. To download a version for your Pi, you must first [determine what variant of the ARM architecture your Pi uses][6]. Some information about this is available on the NetBSD site, but for a comprehensive overview, you can also refer to [RPi Hardware History][7].
The Pi I used for this article is, as far as I can tell, a Raspberry Pi Model B Rev 2.0 (with two USB ports and no mounting holes). According to the [Raspberry Pi FAQ][8], this means the architecture is ARMv6, which translates to **earmv6hf** in NetBSD's architecture notation.
![NetBSD on Raspberry Pi][9]
If you're not sure what kind of Pi you have, the good news is that there are only two Pi images, so try **earmv7hf** first; if it doesn't work, fall back to **earmv6hf**.
For the easiest and quickest install, use the binary image instead of an installer. Using the image is the most common method of getting an OS onto your Pi: you copy the image to your SD card and boot it up. There's no install necessary, because the image is a generic installation of the OS, and you've just copied it, bit for bit, onto the media that the Pi uses as its boot drive.
The image files are found in the **binary > gzimg** directories of the NetBSD installation media server, which you can reach from the [front page][3] of NetBSD.org. The image is **rpi.img.gz** , a compressed **.img** file. Download it to your hard drive.
Once you have downloaded the entire image, extract it. If you're running Linux, BSD, or MacOS, you can use the **gunzip** command:
```
$ gunzip ~/Downloads/rpi.img.gz
```
If you're working on Windows, you can install the open source [7-Zip][10] archive utility.
### Copy the image to your SD card
Once the image file is uncompressed, you must copy it to your Pi's SD card. There are two ways to do this, so use the one that works best for you.
#### 1. Using Etcher
Etcher is a cross-platform application specifically designed to copy OS images to USB drives and SD cards. Download it from [Etcher.io][11] and launch it.
In the Etcher interface, select the image file on your hard drive and the SD card you want to flash, then click the Flash button.
![Etcher][12]
That's it.
#### 2. Using the dd command
On Linux, BSD, or MacOS, you can use the **dd** command to copy the image to your SD card.
1. First, insert your SD card into a card reader. Don't mount the card to your system because **dd** needs the device to be disengaged to copy data onto it.
2. Run **dmesg | tail** to find out where the card is located without it being mounted. On MacOS, use **diskutil list**.
3. Copy the image file to the SD card:
```
$ sudo dd if=~/Downloads/rpi.img of=/dev/mmcblk0 bs=2M status=progress
```
Before doing this, you _must be sure_ you have the correct location of the SD card. If you copy the image file to the incorrect device, you could lose data. If you are at all unsure about this, use Etcher instead!
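On Linux, one generic way to double-check the device name (not part of the original steps) is to list the block devices before and after inserting the card and see which entry appears:
```
# The SD card shows up as a new block device, e.g., /dev/mmcblk0 or /dev/sdb.
lsblk
```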
When either **dd** or Etcher has written the image to the SD card, place the card in your Pi and power it on.
### First boot
The first time it's booted, NetBSD detects that the SD card's filesystem does not occupy all the free space available and resizes the filesystem accordingly.
![Booting NetBSD on Raspberry Pi][13]
Once that's finished, the Pi reboots and presents a login prompt. Log into your NetBSD system using **root** as the user name. No password is required.
### Set up a user account
First, set a password for the root user:
```
# passwd
```
Then create a user account for yourself with the **-m** option to prompt NetBSD to create a home directory and the **-G wheel** option to add your account to the wheel group so that you can become the administrative user (root) as needed:
```
# useradd -m -G wheel seth
```
Use the **passwd** command again to set a password for your user account:
```
# passwd seth
```
Log out, and then log back in with your new credentials.
### Add software to NetBSD
If you've ever used a Pi, you probably know that the way to add more software to your system is with a special command like **apt** or **dnf** (depending on whether you prefer to run [Raspbian][14] or [FedBerry][15] on your Pi). On NetBSD, use the **pkg_add** command. But some setup is required before the command knows where to go to get the packages you want to install.
There are ready-made (pre-compiled) packages for NetBSD on NetBSD's servers, using the scheme `ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/[PORT]/[VERSION]/All`. Replace PORT with the architecture you are using, either **earmv6hf** or **earmv7hf**. Replace VERSION with the NetBSD release you are using; at the time of this writing, that's **8.0**.
Place this value in a file called **/etc/pkg_install.conf**. Since that's a system file outside your user folder, you must invoke root privileges to create it:
```
$ su -
<password>
# echo "PKG_PATH=<ftp://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/earmv6hf/8.0/All/>" >> /etc/pkg_install.conf
```
Now you can install packages from the NetBSD software distribution. A good first candidate is Bash, commonly the default shell on a Linux (and Mac) system. Also, if you're not already a Vi text editor user, you may want to try something more intuitive such as [Jove][17] or [Nano][18]:
```
# pkg_add -v bash jove nano
# exit
$
```
Unlike many Linux distributions ([Slackware][19] being a notable exception), NetBSD does very little configuration on your behalf, and this is considered a feature. So, to use Bash, Jove, or Nano as your default toolset, you must set the configuration yourself.
You can set many of your preferences dynamically using environment variables, which are special variables that your whole system can access. For instance, most applications in Unix know that if there is a **VISUAL** or **EDITOR** variable set, the value of those variables should be used as the default text editor. You can set these two variables temporarily, just for your current login session:
```
$ export EDITOR=nano
$ export VISUAL=nano
```
Or you can make them permanent by adding them to the default NetBSD **.profile** file:
```
$ sed -i 's/EDITOR=vi/EDITOR=nano/' ~/.profile
```
Load your new settings:
```
$ . ~/.profile
```
To make Bash your default shell, use the **chsh** (change shell) command, which now loads into your preferred editor. Before running **chsh** , though, make sure you know where Bash is located:
```
$ which bash
/usr/pkg/bin/bash
```
Set the value for **shell** in the **chsh** entry to **/usr/pkg/bin/bash**, then save the document.
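After your next login, a quick sanity check confirms the change took effect:
```
$ echo "$SHELL"
/usr/pkg/bin/bash
```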
### Add sudo
The **pkg_add** command is a privileged command, which means to use it, you must become the root user with the **su** command. If you prefer, you can also set up the **sudo** command, which allows certain users to use their own password to execute administrative tasks.
First, install it:
```
# pkg_add -v sudo
```
Then use the **visudo** command to edit the **sudo** configuration file; it must be run as root:
```
$ su
# SUDO_EDITOR=nano visudo
```
Once you are in the editor, find the line allowing members of the wheel group to execute any command, and uncomment it (by removing **#** from the beginning of the line):
```
### Uncomment to allow members of group wheel to execute any command
%wheel ALL=(ALL) ALL
```
Save the document as described in Nano's bottom menu panel and exit the root shell.
Now you can use **pkg_add** with **sudo** instead of becoming root:
```
$ sudo pkg_add -v fluxbox
```
### Net gain
NetBSD is a full-featured Unix operating system, and now that you have it set up on your Pi, you can explore every nook and cranny. It happens to be a pretty lightweight OS, so even an old Pi with a 700MHz processor and 256MB of RAM can run it with ease. If this article has sparked your interest and you have an old Pi sitting in a drawer somewhere, try it out!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/netbsd-raspberry-pi
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82
[2]: https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
[3]: http://netbsd.org/
[4]: https://en.wikipedia.org/wiki/Unix
[5]: https://en.wikipedia.org/wiki/POSIX
[6]: http://wiki.netbsd.org/ports/evbarm/raspberry_pi
[7]: https://elinux.org/RPi_HardwareHistory
[8]: https://www.raspberrypi.org/documentation/faqs/
[9]: https://opensource.com/sites/default/files/uploads/pi.jpg (NetBSD on Raspberry Pi)
[10]: https://www.7-zip.org/
[11]: https://www.balena.io/etcher/
[12]: https://opensource.com/sites/default/files/uploads/etcher_0.png (Etcher)
[13]: https://opensource.com/sites/default/files/uploads/boot.png (Booting NetBSD on Raspberry Pi)
[14]: http://raspbian.org/
[15]: http://fedberry.org/
[16]: ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/%5BPORT%5D/%5BVERSION%5D/All%3E
[17]: https://opensource.com/article/17/1/jove-lightweight-alternative-vim
[18]: https://www.nano-editor.org/
[19]: http://www.slackware.com/

View File

@ -0,0 +1,52 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Todays Retailer is Turning to the Edge for CX)
[#]: via: (https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all)
[#]: author: (Cindy Waxer https://www.networkworld.com/author/Cindy-Waxer/)
Todays Retailer is Turning to the Edge for CX
======
### Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the U.S. Census.
![iStock][1]
Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the [U.S. Census][2]. That's putting enormous pressure on retailers to meet new consumer expectations around real-time access to merchandise and order information. In fact, 85.3% of shoppers expect retailers to provide associates with handheld or fixed devices to check inventory and price within a store, a nearly 51% increase over 2017, according to a [survey from SOTI][3].
With an eye on transforming the customer experience of spending time in a store, retailers are investing aggressively in compute power located closer to the buyer, also known as [edge computing][4].
So what new and innovative technologies are edge environments supporting? Here's where retail is headed with customer service and how edge computing will help it get there.
**Face forward**: Facial recognition technology is on the rise in retail as brands search for new ways to engage customers. Take CaliBurger, for example. The restaurant chain recently tested out self-ordering kiosks that use AI and facial-recognition technology to identify registered customers and pull up their loyalty accounts and order preferences. By automatically displaying a customer's most popular purchases, the system aims to help patrons complete their orders in seconds flat for greater speed and convenience.
**Customer experience on display**: Forget about traditional counter displays. Savvy retailers are experimenting with high-tech, in-store digital signage solutions to attract consumers and gather valuable data. For instance, Glass Media's projection-based, end-to-end digital retail signage combines display technology, a cloud-based IoT platform, and data analytic capabilities. Through projection, the solution can influence customers at the point of decision.
**Backroom access**: Tracking inventory manually requires substantial human resources. IoT-powered backroom technologies such as RFID, real-time point of sale (POS), and smart shelving systems promise to change that by improving the accuracy of inventory tracking throughout the supply chain. These automated solutions can track and reorder items automatically, eliminating the need for humans to take inventory and reducing the risk of product shortages.
**Robots to the rescue**: Hoping to transform the branch experience, HSBC recently unveiled Pepper, a concierge robot whose job is to help customers with simple tasks, from answering commonly asked questions to directing them to available tellers. Pepper also acts as an online banking station where customers can log into their mobile banking account or access information about products. By putting Pepper on the payroll, HSBC hopes to reduce customer wait times and free up its “human” bankers.
These innovative technologies provide retailers with unique opportunities to enhance customer experience, develop new revenue streams, and boost customer loyalty. But many of them require edge computing to work properly. Bandwidth-intensive content and vast volumes of data can lead to latency issues, outages, and other IT headaches. Fortunately, by placing computing power and storage capabilities directly on the edge of the network, edge computing can help retailers deliver the best customer experience possible.
To find out more about how edge computing is transforming the customer experience in retail, visit [APC.com][5].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all
作者:[Cindy Waxer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Cindy-Waxer/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/istock-508154656-100791924-large.jpg
[2]: https://ycharts.com/indicators/ecommerce_sales_as_percent_retail_sales
[3]: https://www.soti.net/resources/newsroom/2019/annual-connected-retailer-survey-new-soti-survey-reveals-us-consumers-prefer-speed-and-convenience-when-shopping-with-limited-human-interaction/
[4]: https://www.hpe.com/us/en/servers/edgeline-iot-systems.html?pp=false&jumpid=ps_83cqske5um_aid-510380402&gclid=CjwKCAjw6djYBRB8EiwAoAF6oWwk-M6LWcfCbbZ331fXhEHShXGbLWoSwTIzue6mxQg4gDvYx59XZxoC_4oQAvD_BwE&gclsrc=aw.ds
[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp

View File

@ -0,0 +1,154 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Square Brackets in Bash: Part 1)
[#]: via: (https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
Using Square Brackets in Bash: Part 1
======
![square brackets][1]
This tutorial tackles square brackets and how they are used in different contexts at the command line.
[Creative Commons Zero][2]
After taking a look at [how curly braces (`{}`) work on the command line][3], now it's time to tackle brackets (`[]`) and see how they are used in different contexts.
### Globbing
The first and easiest use of square brackets is in _globbing_. You have probably used globbing before without knowing it. Think of all the times you have listed files of a certain type, say, you wanted to list JPEGs, but not PNGs:
```
ls *.jpg
```
Using wildcards to get all the results that fit a certain pattern is precisely what we call globbing.
In the example above, the asterisk means " _zero or more characters_ ". There is another globbing wildcard, `?`, which means " _exactly one character_ ", so, while
```
ls d*k*
```
will list files called _darkly_ and _ducky_ (and _dark_ and _duck_ -- remember `*` can also be zero characters),
```
ls d*k?
```
will not list _darkly_ (or _dark_ or _duck_ ), but it will list _ducky_.
Square brackets are used in globbing for sets of characters. To see what this means, make a directory in which to carry out tests, `cd` into it, and create a bunch of files like this:
```
touch file0{0..9}{0..9}
```
(If you don't know why that works, [take a look at the last installment that explains curly braces `{}`][3]).
This will create files _file000_ , _file001_ , _file002_ , etc., through _file097_ , _file098_ and _file099_.
Then, to list the files in the 70s and 80s, you can do this:
```
ls file0[78]?
```
To list _file022_ , _file027_ , _file028_ , _file052_ , _file057_ , _file058_ , _file092_ , _file097_ , and _file098_ , you can do this:
```
ls file0[259][278]
```
Of course, you can use globbing (and square brackets for sets) for more than just `ls`. You can use globbing with any other tool for listing, removing, moving, or copying files, although the last two may require a bit of lateral thinking.
Let's say you want to create duplicates of files _file010_ through _file029_ and call the copies _archive010_ , _archive011_ , _archive012_ , etc..
You can't do:
```
cp file0[12]? archive0[12]?
```
Because globbing is for matching against existing files and directories and the _archive..._ files don't exist yet.
Doing this:
```
cp file0[12]? archive0[1..2][0..9]
```
won't work either, because `cp` doesn't let you copy many files to many other new files. Copying many files only works if you are copying them to a directory, so this:
```
mkdir archive
cp file0[12]? archive
```
would work, but it would copy the files, using their same names, into a directory called _archive/_. This is not what you set out to do.
However, if you look back at [the article on curly braces (`{}`)][3], you will remember how you can use `%` to lop off the end of a string contained in a variable.
Of course, there is also a way to lop off the beginning of a string contained in a variable. Instead of `%`, you use `#`.
For practice, you can try this:
```
myvar="Hello World"
echo Goodbye Cruel ${myvar#Hello}
```
It prints " _Goodbye Cruel World_ " because `#Hello` gets rid of the _Hello_ part at the beginning of the string stored in `myvar`.
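As a side note (an extra illustration, not in the original article), `%` works the same way but trims from the end of the string:
```
myvar="Hello World"
echo ${myvar%World}Folks   # prints "Hello Folks"
```
For the _archive_ task below, though, it is `#` we need, since the part to remove ( _file_ ) sits at the beginning.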
You can use this feature alongside your globbing tools to make your _archive_ duplicates:
```
for i in file0[12]?; \
do \
cp $i archive${i#file}; \
done
```
The first line tells the Bash interpreter that you want to loop through all the files whose names consist of the string _file0_ followed by the digit _1_ or _2_ and then one other character, which can be anything. The second line, `do`, indicates that what follows is the instruction or list of instructions you want the interpreter to loop through.
Line 3 is where the actual copying happens, and you use the contents of the loop variable _`i`_ twice: first, straight out, as the first parameter of the `cp` command, and then you add _archive_ to its contents, while at the same time cutting off _file_. So, if _`i`_ contains, say, _file019_...
```
"archive" + "file019" - "file" = "archive019"
```
the `cp` line is expanded to this:
```
cp file019 archive019
```
Finally, notice how you can use the backslash `\` to split a chain of commands over several lines for clarity.
In part two, well look at more ways to use square brackets. Stay tuned.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-gabriele-diwald-475007-unsplash.jpg?itok=cKmysLfd (square brackets)
[2]: https://www.linux.com/LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash

View File

@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco forms VC firm looking to weaponize fledgling technology companies)
[#]: via: (https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco forms VC firm looking to weaponize fledgling technology companies
======
### Decibel, an investment firm focused on early stage funding for enterprise-product startups, will back technologies related to Cisco's core interests.
![BrianaJackson / Getty][1]
Cisco this week stepped deeper into the venture capital world by announcing Decibel, an early-stage investment firm that will focus on bringing enterprise-oriented startups to market.
Veteran VC groundbreaker and former general partner at New Enterprise Associates [Jon Sakoda][2] will lead Decibel. Sakoda had been with NEA since 2006 and focused on startup investments in software and Internet companies.
**[ Now see [7 free network tools you must have][3]. ]**
Of Decibel, Sakoda said: “We want to invest in companies that are helping our customers use innovation as a weapon in the game to transform their respective industries.”
“Decibel combines the speed, agility, and independent risk-taking traditionally found in the best VC firms, while offering differentiated access to the scale, entrepreneurial talent, and deep customer relationships found in one of the largest tech companies in the world,” [Sakoda said][4]. “This approach is an industry first and provides a unique way for entrepreneurs to get access to unparalleled resources at a time and stage when they need it most.”
“As one of the most prolific strategic venture capitalists in the world, Cisco already has a view into future technologies shaping our markets through our rich portfolio of companies,” wrote Rob Salvagno, vice president of Corporate Development and Cisco Investments in a [blog about Decibel][5]. “But we realized we could do even more by engaging with the startup community earlier in its lifecycle.”
Indeed Cisco already has an investment arm, Cisco Investments, that focuses on later stage startups, the company says. Cisco said this arm invests $200 to $300 million annually, and it will continue its charter of investing and partnering with best-in-class companies in core and adjacent markets.
Cisco didn't talk about how much money would be involved in Decibel, but according to a [CNBC report][6], Cisco is setting up Decibel as an independent firm with a separate pool of cash, an unusual model for corporate investors. The fund hasn't closed yet, but a [Securities and Exchange Commission filing][7] from October indicated that Sakoda was setting out to [raise $500 million][8], CNBC wrote.
**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][9] ]**
Decibel does plan to invest anywhere from $5M to $15M in each startup in its portfolio, Cisco says.
“Cisco has a culture of leveraging both internal and external innovation, accelerating our rich internal development capabilities by our ability to also partner, invest, and acquire,” Salvagno said.
He said the company recognizes that significant innovation happens outside the walls of Cisco. Cisco has acquired more than 200 companies, and more than one in eight Cisco employees joined as a result of an acquisition. "We have a deep bench of acquired founders, many of whom play leadership roles within the company today, which continues to reinforce this entrepreneurial spirit," Salvagno said.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/money_salary_magnet_flying-money_money-magnet-by-brianajackson-getty-100787974-large.jpg
[2]: https://twitter.com/jonsakoda
[3]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html
[4]: https://www.decibel.vc/the-blast/announcingdecibel
[5]: https://blogs.cisco.com/news/cisco-fuels-innovation-engine-with-investment-in-new-early-stage-vc-fund
[6]: https://www.cnbc.com/2019/03/26/cisco-introduces-decibel-an-early-stage-venture-firm-with-jon-sakoda.html
[7]: https://www.sec.gov/Archives/edgar/data/1754260/000175426018000002/xslFormDX01/primary_doc.xml
[8]: https://www.cnbc.com/2018/10/08/cisco-lead-investor-jon-sakoda-catalyst-labs-500-million.html
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HPE introduces hybrid cloud consulting business)
[#]: via: (https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
HPE introduces hybrid cloud consulting business
======
### HPE's Right Mix Advisor is designed to find a balance between on-premises and cloud systems.
![Hewlett Packard Enterprise][1]
Hybrid cloud is pretty much the de facto way to go, with only a few firms adopting a pure cloud play to replace their data center and only suicidal firms refusing to go to the cloud. But picking the right balance between on-premises and the cloud is tricky, and a mistake can be costly.
Enter Right Mix Advisor from Hewlett Packard Enterprise, a combination of consulting from HPE's Pointnext division and software tools. It draws on several recent acquisitions: British cloud consultancy RedPixie, Amazon Web Services (AWS) specialists Cloud Technology Partners, and automated discovery capabilities from Irish startup iQuate.
Right Mix Advisor gathers data points from the company's entire enterprise, ranging from configuration management database systems (CMDBs), such as ServiceNow, to external sources, such as cloud providers. HPE says that in a recent customer engagement it scanned 9 million IP addresses across six data centers.
**[ Read also: [What is hybrid cloud computing][2]. | Learn [what you need to know about multi-cloud][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
HPE Pointnext consultants then work with the client's IT teams to analyze the data to determine the optimal configuration for workload placement. Pointnext has become HPE's main consulting outfit following its divestiture of EDS, which it acquired in 2008 but spun off in a merger with CSC to form DXC Technology. Pointnext now has 25,000 consultants in 88 countries.
In a typical engagement, HPE claims it can deliver a concrete action plan within weeks, whereas previously businesses may have needed months to come to a conclusion using a manual process. HPE has found that migrating the right workloads to the right mix of hybrid cloud can typically result in 40 percent total cost of ownership savings.
Although HPE has thrown its weight behind AWS, that doesn't mean it doesn't support competitors. Erik Vogel, vice president of hybrid IT for HPE Pointnext, notes in the blog post announcing Right Mix Advisor that target environments could be Microsoft Azure or Azure Stack, AWS, Google, or Ali Cloud.
“New service providers are popping up every day, and we see the big public cloud providers constantly producing new services and pricing models. As a result, the calculus for determining your right mix is constantly changing. If Azure, for example, offers a new service capability or a 10 percent pricing discount and it makes sense to leverage it, you want to be able to move an application seamlessly into that new environment,” he wrote.
Key to Right Mix Advisor is app migration, and Pointnext follows the 50/30/20 rule: about 50 percent of apps are suitable for migration to the cloud; for about 30 percent, migration is not worth the effort; and the remaining 20 percent should be retired.
“With HPE Right Mix Advisor, you can identify that 50 percent,” he wrote. “Rather than hand you a laundry list of 10,000 apps to start migrating, HPE Right Mix Advisor hones in on what's most impactful right now to meet your business goals: the 10 things you can do on Monday morning that you can be confident will really help your business.”
HPE has already done some pilot projects with the Right Mix service and expects to expand it to include channel partners.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,235 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to make a Raspberry Pi gamepad)
[#]: via: (https://opensource.com/article/19/3/gamepad-raspberry-pi)
[#]: author: (Leon Anavi https://opensource.com/users/leon-anavi)
How to make a Raspberry Pi gamepad
======
This DIY retro video game controller for the Raspberry Pi is fun and not difficult to build but requires some time.
![Raspberry Pi Gamepad device][1]
From time to time, I get nostalgic about the video games I played during my childhood in the late '80s and the '90s. Although most of my old computers and game consoles are long gone, my Raspberry Pi can fulfill my retro-gaming fix. I enjoy the simple games included in Raspbian, and the open source RetroPie project helped me turn my Raspberry Pi into an advanced retro-gaming machine.
But, for a more authentic experience, like back in the "old days," I needed a gamepad. There are a lot of options on the market for USB gamepads and joysticks, but as an open source enthusiast, maker, and engineer, I prefer doing it the hard way. So, I made my own simple open source hardware gamepad, which I named the [ANAVI Play pHAT][2]. I designed it as an add-on board for Raspberry Pi using an [EEPROM][3] and a devicetree binary overlay I created for mapping the keys.
### Get the gamepad buttons and EEPROM
There are a huge variety of gamepads available for purchase, and some of them are really complex. However, it's not hard to make a gamepad similar to the iconic NES controller using the design I created.
The gamepad uses eight "momentary" buttons (i.e., switches that are active only while they're pushed): four tactile (tact) switches for movement (Up, Down, Left, Right), two tact buttons for A and B, and two smaller tact buttons for Select and Start. I used [through-hole][4] tact switches: six 6x6x4.3mm switches for movement and the A and B buttons, and two 3x6x4.3mm switches for the Start and Select buttons.
While the gamepad's primary purpose is to play retro games, the add-on board is large enough to include home-automation features, such as monitoring temperature, humidity, light, or barometric pressure, that you can use when you're not playing games. I added three slots for attaching [I2C][5] sensors to the primary I2C bus on physical pins 3 and 5.
The most interesting and important part of the hardware design is the EEPROM (electrically erasable programmable read-only memory). A through-hole mounted EEPROM is easier to flash on a breadboard and solder to the gamepad. An article in the [MagPi magazine][6] recommends CAT24C32 EEPROM; if that model isn't available, try to find a model with similar technical specifications. All Raspberry Pi models and versions released after 2014 (Raspberry Pi B+ and newer) have a secondary I2C bus on physical pins 27 and 28.
Once you have this hardware, use a breadboard to check that it works.
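One quick way to run that check from Raspbian is to scan the bus with i2c-tools and confirm the EEPROM answers. This is only a suggested sanity check, not part of the original workflow; it assumes I2C has been enabled (for example, via raspi-config) and that the EEPROM uses the common 0x50 address:
```
# Install the I2C utilities and scan the primary bus (bus 1 on 40-pin boards)
sudo apt-get install -y i2c-tools
sudo i2cdetect -y 1
# A CAT24C32 typically appears at address 0x50 (set by its address pins)
```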
### Create the printed circuit board
The next step is to create a printed circuit board (PCB) design and have it manufactured. As an open source enthusiast, I believe that free and open source software should be used for creating open source hardware. I rely on [KiCad][7], electronic design automation (EDA) software available under the GPLv3+ license. KiCad works on Windows, MacOS, and GNU/Linux. (I use KiCad version 5 on Ubuntu 18.04.)
KiCad allows you to create PCBs with up to 32 copper layers plus 14 fixed-purpose technical layers. It also has an integrated 3D viewer. It's actively developed, including many contributions by CERN developers, and used for industrial applications; for example, Olimex uses KiCad to design complex PCBs with multiple layers, like the one in its [TERES-I][8] DIY open source hardware laptop.
The KiCad workflow includes three major steps:
* Designing the schematics in the schematic layout editor
* Drawing the edge cuts, placing the components, and routing the tracks in the PCB layout editor
* Exporting Gerber and drill files for manufacture
If you haven't designed PCBs before, keep in mind there is a steep learning curve. Go through the [examples and user's guides][9] provided by KiCad to learn how to work with the schematic and the PCB layout editor. (If you are not in the mood to do everything from scratch, you can just clone the ANAVI Play pHAT project in my [GitHub repository][10].)
![KiCad schematic][11]
In KiCad's schematic layout editor, connect the Raspberry Pi's GPIOs to the buttons, the slots for sensors to the primary I2C, and the EEPROM to the secondary I2C. Assign an appropriate footprint to each component. Perform an electrical rule check and, if there are no errors, generate the [netlist][12], which describes an electronic circuit's connectivity.
Open the PCB layout editor. It contains several layers. Read the netlist. All components and tracks must be on the front and back copper layers (F.Cu and B.Cu), and the board's form must be created in the Edge.Cuts layer. Any text, including button labels, must be on the silkscreen layers.
![Printable circuit board design][13]
Finally, export the Gerber and drill files that you'll send to the company that will produce your PCB. The Gerber format is the de facto industry standard for PCBs. It is an open ASCII vector format for 2D binary images; simply explained, it is like a PDF for PCB manufacturing.
There are numerous companies that can make a simple two-layer board like the gamepad's. For a few prototypes, you can count on [OSHPark in the US][14] or [Aisler in Europe][15]. There are also a lot of Chinese manufacturers, such as JLCPCB, PCBWay, ALLPCB, Seeed Studio, and many more. Alternatively, if you prefer to skip the hassle of PCB manufacturing and sourcing components, you can order the [ANAVI Play pHAT maker kit from Crowd Supply][2] and solder all the through-hole components on your own.
### Understanding devicetree
[Devicetree][16] is a specification for a software data structure that describes the hardware components. Its purpose is to allow the compiled Linux kernel to handle a variety of different hardware configurations within a wider architecture family. The bootloader loads the devicetree into memory and passes it to the Linux kernel.
The devicetree includes three components:
* Devicetree source (DTS)
* Devicetree blob (DTB) and overlay (DTBO)
* Devicetree compiler (DTC)
The DTC creates binaries from a textual source. Devicetree overlays allow a central DTB to be overlaid on the devicetree. Overlays include a number of fragments.
For several years, a devicetree has been required for all new ARM systems on a chip (SoCs), including the Broadcom SoCs in all Raspberry Pi models and versions. With the default bootloader in Raspberry Pi's popular Raspbian distribution, a DTBO can be set in the configuration file (**config.txt**) on the FAT partition of a bootable microSD card using the keyword **device_tree=**.
Since 2014, the Raspberry Pi's pin header has been extended to 40 pins. Pins 27 and 28 are dedicated for a secondary I2C bus. This way, the DTBO can be automatically loaded from an EEPROM attached to these pins. Furthermore, additional system information can be saved in the EEPROM. This feature is among the Raspberry Pi Foundation's requirements for any Raspberry Pi HAT (hardware attached on top) add-on board. On Raspbian and other GNU/Linux distributions for Raspberry Pi, the information from the EEPROM can be seen from userspace at **/proc/device-tree/hat/** after booting.
In my opinion, the devicetree is one of the most fascinating features added in the Linux ecosystem over the past decade. Creating devicetree blobs and overlays is an advanced task and requires some background knowledge. However, it's possible to create a devicetree binary overlay for the Raspberry Pi add-on board and flash it on an appropriate EEPROM. The device binary overlay defines the Linux key codes for each key of the gamepad. The result is a gamepad for Raspberry Pi with keys that work as soon as you boot Raspbian.
#### Creating the DTBO
There are three major steps to create a devicetree binary overlay for the gamepad:
* Creating the devicetree source with mapping for the keys based on the Linux key codes
* Compiling the devicetree binary overlay using the devicetree compiler (DTC)
* Creating an **.eep** file and flashing it on an EEPROM using the open source tools provided by the Raspberry Pi Foundation
Linux key codes are defined in the file **/usr/include/linux/input-event-codes.h**. The device source file should describe which Raspberry Pi GPIO pin is connected to which hardware button and which Linux key code should be triggered when the button is pressed. In this gamepad, GPIO17 (pin 11) is connected to the tactile button for Right, GPIO4 (pin 7) to Left, GPIO22 (pin 15) to Up, GPIO27 (pin 13) to Down, GPIO5 (pin 29) to Start, GPIO6 (pin 31) to Select, GPIO19 (pin 35) to A, and GPIO26 (pin 37) to B.
Please note there is a difference between the GPIO numbers and the physical position of the pin on the header. For convenience, all pins are located on the second row of the Raspberry Pi's 40-pin header. This approach makes it easier to route the printed circuit board in KiCad.
The entire devicetree source for the gamepad is [available on GitHub][17]. As an example, the following is a short code snippet that demonstrates how GPIO17, corresponding to physical pin 11 on the Raspberry Pi, is mapped to the tact button for Right:
```
button@17 {
        label = "right";
        linux,code = <106>;   /* 106 = KEY_RIGHT in input-event-codes.h */
        gpios = <&gpio 17 1>; /* GPIO17; flag 1 = active low */
};
```
To compile the DTS directly on the Raspberry Pi, install the devicetree compiler on Raspbian by executing the following command in the terminal:
```
sudo apt-get update
sudo apt-get install device-tree-compiler
```
Run DTC and provide as arguments the name of the output DTBO and the path to the source file. For example:
```
dtc -I dts -O dtb -o anavi-play-phat.dtbo anavi-play-phat.dts
```
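As an optional sanity check (a suggestion, not a required step), you can decompile the freshly built overlay back into source form and confirm the button mappings survived the round trip:
```
# Decompile the binary overlay back into readable devicetree source
dtc -I dtb -O dts -o check.dts anavi-play-phat.dtbo
# Inspect the result; the button nodes and Linux key codes should match the DTS
less check.dts
```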
The Raspberry Pi Foundation provides a [GitHub repository with the mechanical, hardware, and software specifications for HATs][18]. It also includes three very convenient tools:
* **eepmake:** Creates an **.eep** file from a text file with settings
* **eepdump:** Useful for debugging, as it dumps a binary **.eep** file as human-readable text
* **eepflash:** Writes or reads an **.eep** binary image to/from an EEPROM
The **eeprom_settings.txt** file can be used as a template. [The Raspberry Pi Foundation][19] and [MagPi magazine][6] have helpful articles and tutorials, so I won't go into too many details. As I wrote above, the recommended EEPROM is CAT24C32, but it can be replaced with any other EEPROM with the same technical specifications. Using an EEPROM with an eight-pin, through-hole, dual in-line (DIP) package is easier for hobbyists to flash because it can be done with a breadboard. The following example command creates a file ready to be flashed on the EEPROM using the **eepmake** tool from the Raspberry Pi GitHub repository:
```
./eepmake settings.txt settings.eep anavi-play-phat.dtbo
```
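For orientation, a settings file for **eepmake** looks roughly like the excerpt below. The field names follow the **eeprom_settings.txt** template from the HATs repository, but the values shown here are placeholders, not the actual ANAVI Play pHAT settings:
```
# Illustrative excerpt only; see eeprom_settings.txt for the full template
# A zeroed product_uuid is typically auto-generated by eepmake
product_uuid 00000000-0000-0000-0000-000000000000
product_id 0x0001
product_ver 0x0001
vendor "YourCompany"
product "ANAVI Play pHAT"
```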
Before proceeding with flashing, ensure that the EEPROM is connected properly to the primary I2C bus (pins 3 and 5) on the Raspberry Pi. (You can consult the MagPi magazine article linked above for a discussion on wiring schematics.) Then run the following command and follow the onscreen instructions to flash the **.eep** file on the EEPROM:
```
sudo ./eepflash.sh -w -f=settings.eep -t=24c32
```
Before soldering the EEPROM to the printed circuit board, move it to the secondary I2C bus on the breadboard and test it to ensure it works as expected. If you detect any issues while testing the EEPROM on the breadboard, correct the settings files, move it back to the primary I2C bus, and flash it again.
### Testing the gamepad
Now comes the fun part! It is time to test the add-on board using Raspbian, which you can [download][20] from RaspberryPi.org. After booting, open a terminal and enter the following commands:
```
cat /proc/device-tree/hat/product
cat /proc/device-tree/hat/vendor
```
The output should be similar to this:
![Testing output][21]
If it is, congratulations! The data from the EEPROM has been read successfully.
The next step is to verify that the keys on the Play pHAT are set properly and working. In a terminal or a text editor, press each of the eight buttons and verify they are acting as configured.
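For a lower-level check, a tool such as evtest can dump the raw input events, letting you match each button to its Linux key code. The exact **/dev/input/eventX** node varies between systems, so the path below is only an example:
```
# Install evtest and dump events from the gamepad's input device
sudo apt-get install -y evtest
sudo evtest /dev/input/event0
# Pressing Right should report KEY_RIGHT (code 106), and so on
```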
Finally, it is time to play games! By default, Raspbian's desktop includes [Python Games][22]. Launch them from the application menu. Make an audio output selection and pick a game from the list. My favorite is Wormy, a Snake-like game. As a former Symbian mobile application developer, I find playing Wormy brings back memories of the glorious days of Nokia.
### Retro gaming with RetroPie
![RetroPie with the Play pHAT][23]
Raspbian is amazing, but [RetroPie][24] offers so much more for retro games fans. It is a GNU/Linux distribution optimized for playing retro games and combines the open source projects RetroArch and Emulation Station. It's available for Raspberry Pi, the [Odroid][25] C1/C2, and personal computers running Debian or Ubuntu. It provides emulators for loading ROMs—the digital versions of game cartridges. Keep in mind that no ROMs are included in RetroPie due to copyright issues. You will have to [find appropriate ROMs and copy them][26] to the Raspberry Pi after booting RetroPie.
The open source hardware gamepad works fine in RetroPie's menus, but I discovered that the keys fail after launching some games and emulators. After debugging, I found a solution to ensuring they work in the game emulators: add a Python script for additional software emulation of the keys. [The script is available on GitHub.][27] Here's how to get it and install Python on RetroPie:
```
sudo apt-get update
sudo apt-get install -y python-pip
sudo pip install evdev
cd ~
git clone https://github.com/AnaviTechnology/anavi-examples.git
```
Finally, add the following line to **/etc/rc.local** so it will be executed automatically when RetroPie boots:
```
sudo python /home/pi/anavi-examples/anavi-play-phat/anavi-play-gamepad.py &
```
That's it! After following these steps, you can create an entirely open source hardware gamepad as an add-on board for any Raspberry Pi model with a 40-pin header and use it with Raspbian and RetroPie!
### What's next?
Combining free and open source software with open source hardware is fun and not difficult, but it requires a significant amount of time. After creating the open source hardware gamepad in my spare time, I ran a modest crowdfunding campaign at [Crowd Supply][2] for low-volume manufacturing in my hometown, Plovdiv, Bulgaria. [The Open Source Hardware Association][28] certified the ANAVI Play pHAT as an open source hardware project under [BG000007][29]. Even [the acrylic enclosures][30] that protect the board from dust are open source hardware created with the free and open source software OpenSCAD.
![Game pad in acrylic enclosure][31]
If you enjoyed reading this article, I encourage you to try creating your own open source hardware add-on board for Raspberry Pi with KiCad. If you don't have enough spare time, you can order an [ANAVI Play pHAT maker kit][2], grab your soldering iron, and assemble the through-hole components. If you're not comfortable with the soldering iron, you can just order a fully assembled version.
Happy retro gaming everybody! Next time someone irritably asks what you can learn from playing vintage computer games, tell them about Raspberry Pi, open source hardware, Linux, and devicetree.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/gamepad-raspberry-pi
作者:[Leon Anavi][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/leon-anavi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/gamepad_raspberrypi_hardware.jpg?itok=W16gOnay (Raspberry Pi Gamepad device)
[2]: https://www.crowdsupply.com/anavi-technology/anavi-play-phat
[3]: https://en.wikipedia.org/wiki/EEPROM
[4]: https://en.wikipedia.org/wiki/Through-hole_technology
[5]: https://en.wikipedia.org/wiki/I%C2%B2C
[6]: https://www.raspberrypi.org/magpi/make-your-own-hat/
[7]: http://kicad-pcb.org/
[8]: https://www.olimex.com/Products/DIY-Laptop/
[9]: http://kicad-pcb.org/help/getting-started/
[10]: https://github.com/AnaviTechnology/anavi-play-phat
[11]: https://opensource.com/sites/default/files/uploads/kicad-schematic.png (KiCad schematic)
[12]: https://en.wikipedia.org/wiki/Netlist
[13]: https://opensource.com/sites/default/files/uploads/circuitboard.png (Printable circuit board design)
[14]: https://oshpark.com/
[15]: https://aisler.net/
[16]: https://www.devicetree.org/
[17]: https://github.com/AnaviTechnology/hats/blob/anavi/eepromutils/anavi-play-phat.dts
[18]: https://github.com/raspberrypi/hats
[19]: https://www.raspberrypi.org/blog/introducing-raspberry-pi-hats/
[20]: https://www.raspberrypi.org/downloads/
[21]: https://opensource.com/sites/default/files/uploads/testing-output.png (Testing output)
[22]: https://www.raspberrypi.org/documentation/usage/python-games/
[23]: https://opensource.com/sites/default/files/uploads/retropie.jpg (RetroPie with the Play pHAT)
[24]: https://retropie.org.uk/
[25]: https://www.hardkernel.com/product-category/odroid-board/
[26]: https://opensource.com/article/19/1/retropie
[27]: https://github.com/AnaviTechnology/anavi-examples/blob/master/anavi-play-phat/anavi-play-gamepad.py
[28]: https://www.oshwa.org/
[29]: https://certification.oshwa.org/bg000007.html
[30]: https://github.com/AnaviTechnology/anavi-cases/tree/master/anavi-play-phat
[31]: https://opensource.com/sites/default/files/uploads/gamepad-acrylic.jpg (Game pad in acrylic enclosure)

View File

@ -0,0 +1,126 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Identifying exceptional user experience (UX) in IoT platforms)
[#]: via: (https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all)
[#]: author: (Steven Hilton https://www.networkworld.com/author/Steven-Hilton/)
Identifying exceptional user experience (UX) in IoT platforms
======
### Examples of excellent IoT platform UX from the perspectives of 5 typical IoT platform personas.
![Leo Wolfert / Getty Images][1]
Enterprises are inundated with information about IoT platforms' features and capabilities. But to find a long-lived IoT platform that minimizes ongoing development costs, enterprises must focus on exceptional user experience (UX) for 5 types of IoT platform users.
Marketing and sales literature from IoT platform vendors is filled with information about IoT platform features. And no doubt, enterprises choosing to buy IoT platform services need to understand the actual capabilities of IoT platforms, preferably by [testing a variety of IoT platforms][2], before making a purchase decision.
However, it is a lot harder to gauge the quality of an IoT platform's UX than to itemize its features. Having excellent UX leads to lower platform deployment and management costs and higher customer satisfaction and retention. So enterprises should make UX one of their top criteria when selecting an IoT platform.
[RELATED: Storage tank operator turns to IoT for energy savings][3]
One of the ways to determine excellent IoT platform UX is to simulate the tasks conducted by typical IoT platform users. By completing these tasks, it becomes readily apparent when an IoT platform is exceptional or annoyingly bad.
In this blog, I describe excellent IoT platform UX from the perspectives of five typical IoT platform users or personas.
## Persona 1: platform administrator
A platform administrator's primary role is to configure, monitor, and maintain the functionality of an IoT platform. A platform administrator is typically an IT employee responsible for maintaining and configuring the various data management, device management, access control, external integration, and monitoring services that comprise an IoT platform.
Typical platform administrator tasks include
* configuration of the on-platform data visualization and data aggregation tools
* configuration of available device management functionality or execution of in-bulk device management tasks
* configuration and creation of on-platform complex event processing (CEP) workflows
* management and configuration of platform service orchestration
Enterprises should pick IoT platforms with superlative access to on-platform configuration functionality with an emphasis on declarative interfaces for configuration management. Although many platform administrators are capable of working with RESTful API endpoints, good UX design should not require that platform administrators use third-party tools to automate basic functionality or execute bulk tasks. Some programmatic interfaces, such as SQL syntax for limiting monitoring views or dashboards for setting event processing trigger criteria, are acceptable and expected, although a fully declarative solution that maintains similar functionality is preferred.
## Persona 2: platform operator
A platform operator's primary role is to leverage an IoT platform to execute common day-to-day business-centric operations and services. While the responsibilities of a platform operator will vary based on enterprise vertical and use case, all platform operators conduct business tasks rather than IoT domain tasks.
Typical platform operator tasks include
* visualizing and aggregating on-platform data to view key business KPIs
* using device management functionality on a per-device basis
* creating, managing, and monitoring per-device and per-location event processing rules
* executing self-service administrative tasks, such as enrolling downstream operators
Enterprises should pick IoT platforms centered on excellent ease-of-use for a business user. In general, the UX should be focused on providing information immediately required for the execution of day-to-day operational tasks while removing more complex functionality. These platforms should have easy access to well-defined and well-constrained operational functions or data visualization. An effective UX should enable easy creation and modification of data views, graphs, dashboards, and other visualizations by allowing operators to select devices using a declarative interface rather than SQL or other programmatic interfaces.
## Persona 3: hardware and systems developer
A hardware and systems developer's primary role is the integration and configuration of IoT assets into an IoT platform. The hardware and systems developer possesses very specific, detailed knowledge about IoT hardware (e.g., specific multipoint control units, embedded platforms, or PLC/SCADA control systems), and leverages this knowledge to enable protocol and asset compatibility with northbound platform services.
Typical hardware and systems developer tasks include
* designing and implementing firmware for IoT assets based on either standardized IoT SDKs or platform-specific SDKs
* updating firmware or software packages over deployment lifecycles
* integrating manufacturer-specific protocol adapters into either IoT assets or the northbound platform
Enterprises should pick IoT platforms that allow hardware and systems developers to most efficiently design and implement low-level device and protocol functionality. An effective developer experience provides well-documented and fully-featured SDKs supporting a variety of languages and device architectures to enable integration with various types of IoT hardware.
## Persona 4: platform and backend developer
A platform and backend developer's primary role is to implement customer-specific application logic and integrations within an IoT deployment. Customer-specific logic may include on-platform or on-edge custom applications, such as those used for analytics, data aggregation and normalization, or any type of event processing workflow. In addition, a platform and backend developer is responsible for integrating the IoT platform with external databases, analytic solutions, or business systems such as MES, ERP, or CRM applications.
Typical platform and backend developer tasks include
* integrating streaming data from the IoT platform into external systems and applications
* configuring inbound and outbound platform actions and interactions with external systems
* configuring complex code-based event processing capabilities beyond the scope of a platform administrators knowledge or ability
* debugging low-level platform functionalities that require coding to detect or resolve
Enterprises should pick excellent IoT platforms that provide access to well-documented and well-featured platform-level SDKs for application or service development. A best-in-class platform UX should provide real-time logging tools, debugging tools, and indexed and searchable access to all platform logs. Finally, a platform and backend developer is particularly dependent upon high-quality, platform-level documentation, especially for platform APIs.
## Persona 5: user interface and experience (UI/UX) developer
A UI/UX developer's primary role is to design the various operator interfaces and monitoring views for an IoT platform. In more complex IoT deployments, various operator audiences will need to be addressed, including solution domain experts such as a factory manager; role-specific experts such as an equipment operator or factory technician; and business experts such as a supply-chain analyst or company executive.
Typical UI/UX developer tasks include
* building and maintaining customer-specific dashboards and monitoring views on either the IoT platform or edge devices
* designing, implementing, and maintaining various operator consoles for a variety of operator audiences and customer-specific use cases
* ensuring good user experience for customers over the lifetime of an IoT implementation
Enterprises should pick IoT platforms that provide an exceptional variety and quality of UI/UX tools, such as dashboarding frameworks for on-platform monitoring solutions that are declaratively or programmatically customizable, as well as various widget and display blocks to help the developer rapidly implement customer-specific views. An IoT platform must also provide a UI/UX developer with appropriate debugging and logging tools for monitoring and operator console frameworks and platform APIs. Finally, a best-in-class platform should provide a sample dashboard, operator console, and on-edge monitoring implementation in order to enable the UI/UX developer to quickly become accustomed to platform paradigms and best practices.
Enterprises should make UX one of their top criteria when selecting an IoT platform. Having excellent UX allows enterprises to minimize platform deployment and management costs. At the same time, excellent UX allows enterprises to more readily launch new solutions to the market, thereby increasing customer satisfaction and retention.
**This article is published as part of the IDG Contributor Network.[Want to Join?][4]**
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all
作者:[Steven Hilton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Steven-Hilton/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_by_leowolfert_gettyimages-689799380_2400x1600-100788464-large.jpg
[2]: https://www.machnation.com/2018/09/25/announcing-mit-e-2-0-hands-on-benchmarking-for-iot-cloud-edge-and-analytics-platforms/
[3]: https://www.networkworld.com/article/3169384/internet-of-things/storage-tank-operator-turns-to-iot-for-energy-savings.html#tk.nww-fsb
[4]: /contributor-network/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS)
[#]: via: (https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS
======
### This week's roundup features new tech from MIT, big news in the automotive sector and a handy new level of centralization from a smaller IoT-focused company.
![Getty Images][1]
Much of what's exciting about IoT technology has to do with getting data from a huge variety of sources into one place so it can be mined for insight, but the sensors used to gather that data are frequently legacy devices from the early days of industrial automation or cheap, lightweight, SoC-based gadgets without a lot of sophistication of their own.
Researchers at MIT have devised a system that can gather a certain slice of data from unsophisticated devices that are grouped on the same electrical circuit without adding sensors to each device.
**[ Check out our [corporate guide to addressing IoT security][2]. ]**
The technology's called non-intrusive load monitoring (NILM). It sits directly on a given building's, vehicle's, or other piece of infrastructure's electrical circuits, identifies devices based on their power usage, and sends alerts when there are irregularities.
It seems likely to make IIoT-related waves once it's out of testing and onto the market.
NILM was recently tested, said MIT's news service, on a U.S. Coast Guard cutter based in Boston, where it was attached to the outside of an electrical wire “at a single point, without requiring any cutting or splicing of wires.”
Two such connections allowed the scientists to monitor roughly 20 separate devices on an electrical circuit, and the system was able to detect an anomalous amount of energy use from a component of the ship's diesel engines known as a jacket water heater.
“[C]rewmembers were skeptical about the reading but went to check it anyway. The heaters are hidden under protective metal covers, but as soon as the cover was removed from the suspect device, smoke came pouring out, and severe corrosion and broken insulation were clearly revealed,” the MIT report stated. Two other important but slightly less critical faults were also detected by the system.
It's easy to see why NILM could prove to be an attractive technology for IIoT use in the future. It sounds as though it's very simple to install, can operate without any kind of Internet connection (though most implementers will probably want to connect it to a wider monitoring setup for a more holistic picture of their systems), and does all of its computational work locally. It can even be used for general energy audits. What, in short, is not to like?
**Volkswagen teams up with Amazon**
AWS has got a new flagship client for its growing IoT services in the form of the Volkswagen Group, which [announced][3] that AWS is going to design and build the Volkswagen Industrial Cloud, a floor-to-ceiling industrial IoT implementation aimed at improving uptime, flexibility, productivity and vehicle quality.
Real-time data from all 122 of VW's manufacturing plants around the world will be available to the system; everything from part tracking to comparative analysis of efficiency to even deeper forms of analytics will take place in the company's “data lake,” as the announcement calls it. Oh, and machine learning is part of it, too.
The German carmaker clearly believes that AWS's technology can provide a lot of help to its operations across the board, [even in the wake of a partnership with Microsoft for Azure-based cloud services announced last year.][4]
**IoT-in-a-box**
IoT can be very complicated. While individual components of any given implementation are often quite simple, each implementation usually contains a host of technologies that have to work in close concert. That means a lot of orchestration work has to go into making this stuff work.
Enter Digi International, which rolled out an IoT-in-a-box package called Digi Foundations earlier this month. The idea is to take a lot of the logistical legwork out of IoT implementations by integrating cloud-connection software and edge-computing capabilities into the company's core industrial router business. Foundations, which is packaged as a software subscription that adds these capabilities and more to the company's devices, also includes a built-in management layer, allowing for simplified configuration and monitoring.
OK, so it's not quite all-in-one, but it's still an impressive level of integration, particularly from a company that many might not have heard of before. It's also a potential bellwether for other smaller firms upping their technical sophistication in the IoT sector.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home7-100768495-large.jpg
[2]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
[3]: https://www.volkswagen-newsroom.com/en/press-releases/volkswagen-and-amazon-web-services-to-develop-industrial-cloud-4780
[4]: https://www.volkswagenag.com/en/news/2018/09/volkswagen-and-microsoft-announce-strategic-partnership.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Setting kernel command line arguments with Fedora 30)
[#]: via: (https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/)
[#]: author: (Laura Abbott https://fedoramagazine.org/makes-fedora-kernel/)
Setting kernel command line arguments with Fedora 30
======
![][1]
Adding options to the kernel command line is a common task when debugging or experimenting with the kernel. The upcoming Fedora 30 release made a change to use Bootloader Spec ([BLS][2]). Depending on how you are used to modifying kernel command line options, your workflow may now change. Read on for more information.
To determine if your system is running with BLS or the older layout, look in the file `/etc/default/grub`. If you see `GRUB_ENABLE_BLSCFG=true` in there, you are running with the BLS setup, and you may need to change how you set kernel command line arguments.
If you only want to modify a single kernel entry (for example, to temporarily work around a display problem), you can use a grubby command:
```
$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --args="amdgpu.dc=0"
```
To remove a kernel argument, you can use the `--remove-args` argument to grubby:
```
$ grubby --update-kernel /boot/vmlinuz-5.0.1-300.fc30.x86_64 --remove-args="amdgpu.dc=0"
```
If there is an option that should be added to every kernel command line (for example, you always want to disable the use of the rdrand instruction for random number generation), you can run a grubby command:
```
$ grubby --update-kernel=ALL --args="nordrand"
```
This will update the command line of all kernel entries and save the option to the saved kernel command line for future entries.
If you later want to remove the option from all kernels, you can again use `--remove-args` with `--update-kernel=ALL`:
```
$ grubby --update-kernel=ALL --remove-args="nordrand"
```
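To double-check the result, you can compare the arguments grubby has recorded for each boot entry with what the running kernel was actually booted with:
```
# Show the stored arguments for every boot entry
$ grubby --info=ALL | grep ^args
# Show the arguments the currently running kernel booted with
$ cat /proc/cmdline
```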
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/setting-kernel-command-line-arguments-with-fedora-30/
作者:[Laura Abbott][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/makes-fedora-kernel/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/f30-kernel-1-816x345.jpg
[2]: https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (As memory prices plummet, PCIe is poised to overtake SATA for SSDs)
[#]: via: (https://www.networkworld.com/article/3384700/as-memory-prices-plummet-pcie-is-poised-to-overtake-sata-for-ssds.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
As memory prices plummet, PCIe is poised to overtake SATA for SSDs
======
### Taiwan vendors believe PCIe and SATA will achieve price and market share parity by year's end.
![Intel SSD DC P6400 Series][1]
A collapse in price for NAND flash memory and a shrinking gap between the prices of PCI Express-based and SATA-based [solid-state drives][2] (SSDs) mean the shift to PCI Express SSDs will accelerate in 2019, with the newer, faster format replacing the old by year's end.
According to the Taiwanese tech publication DigiTimes (the stories are now archived and unavailable without a subscription), falling NAND flash prices continue to drag down SSD prices, which will drive the adoption of SSDs in enterprise and data-center applications. This, in turn, will further drive the adoption of PCIe drives, which are a superior format to SATA.
**[ Read also: [Backup vs. archive: Why it's important to know the difference][3] ]**
## SATA vs. PCI Express
SATA was introduced in 2001 as a replacement for the IDE interface, which had a much larger cable and slower interface. But SATA is a legacy HDD connection and not fast enough for NAND flash memory.
I used to review SSDs, and it was always the same when it came to benchmarking, with the drives scoring within a few milliseconds of each other regardless of the memory used. The SATA interface was the bottleneck. A SATA SSD is like a one-lane highway with no speed limit.
PCIe is several times faster and has much more parallelism, so throughput is more suited to the NAND format. It comes in two physical formats: an [add-in card][4] that plugs into a PCIe slot and M.2, which is about the size of a [stick of gum][5] and sits on the motherboard. PCIe is most widely used in servers, while M.2 is in consumer devices.
There used to be a significant price difference between PCIe and SATA drives with the same capacity, but they have come into parity thanks to Moore's Law, said Jim Handy, principal analyst with Objective Analysis, who follows the memory market.
“The controller used to be a big part of the price of an SSD. But complexity has not grown with transistor count. It can have a lot of transistors, and it doesn't cost more. SATA got more complicated, but PCIe has not. PCIe is very close to the same price as SATA, and [the controller] was the only thing that justified the price diff between the two,” he said.
**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][6] ]**
DigiTimes estimates that the price drop for NAND flash chips will cause global shipments of SSDs to surge 20 to 25 percent in 2019, and PCIe SSDs are expected to emerge as a new mainstream offering by the end of 2019 with a market share of 50 percent, matching SATA SSDs.
## SSD and NAND memory prices already falling
Market sources told DigiTimes that the unit price for 512GB PCIe SSDs has fallen by 11 percent sequentially in the first quarter of 2019, while SATA SSD prices have dropped 9 percent. They added that the current average unit price for 512GB SSDs is now equal to that of 256GB SSDs from one year ago, with prices continuing to drop.
According to DRAMeXchange, NAND flash contract prices will continue falling but at a slower rate in the second quarter of 2019. Memory makers are cutting production to avoid losing any more profits.
“We're in a price collapse. For over a year I've been saying the destination for NAND is 8 cents per gigabyte, and some spot markets are 6 cents. It was 30 cents a year ago. Contract pricing is around 15 cents now; it had been 25 to 27 cents last year,” said Handy.
A contract price is what it sounds like: a memory maker like Samsung or Micron signs a contract with an SSD maker like Toshiba or Kingston for X amount at Y cents per gigabyte. Spot prices are prices set at the end of a quarter (like now), when a vendor anxious to unload excess inventory holds a fire sale for a drive maker that needs supply on short notice.
DigiTimes' contacts aren't the only ones who foresee this. Handy was at an analyst event held by Samsung a few months back where the company presented its projection that PCIe SSDs would outsell SATA by the end of this year, and not just in the enterprise but everywhere.
**More about backup and recovery:**
* [Backup vs. archive: Why it's important to know the difference][3]
* [How to pick an off-site data-backup method][7]
* [Tape vs. disk storage: Why isn't tape dead yet?][8]
* [The correct levels of backup save time, bandwidth, space][9]
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384700/as-memory-prices-plummet-pcie-is-poised-to-overtake-sata-for-ssds.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/12/intel-ssd-p4600-series1-100782098-large.jpg
[2]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
[3]: https://www.networkworld.com/article/3285652/storage/backup-vs-archive-why-its-important-to-know-the-difference.html
[4]: https://www.newegg.com/Product/Product.aspx?Item=N82E16820249107
[5]: https://www.newegg.com/Product/Product.aspx?Item=20-156-199&cm_sp=SearchSuccess-_-INFOCARD-_-m.2+-_-20-156-199-_-2&Description=m.2+
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[7]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[8]: https://www.networkworld.com/article/3315156/storage/tape-vs-disk-storage-why-isnt-tape-dead-yet.html
[9]: https://www.networkworld.com/article/3302804/storage/the-correct-levels-of-backup-save-time-bandwidth-space.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,133 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Can Better Task Stealing Make Linux Faster?)
[#]: via: (https://www.linux.com/blog/can-better-task-stealing-make-linux-faster)
[#]: author: (Oracle )
Can Better Task Stealing Make Linux Faster?
======
_Oracle Linux kernel developer Steve Sistare contributes this discussion on kernel scheduler improvements._
### Load balancing via scalable task stealing
The Linux task scheduler balances load across a system by pushing waking tasks to idle CPUs, and by pulling tasks from busy CPUs when a CPU becomes idle. Efficient scaling is a challenge on both the push and pull sides on large systems. For pulls, the scheduler searches all CPUs in successively larger domains until an overloaded CPU is found, and pulls a task from the busiest group. This is very expensive, costing 10's to 100's of microseconds on large systems, so search time is limited by the average idle time, and some domains are not searched. Balance is not always achieved, and idle CPUs go unused.
I have implemented an alternate mechanism that is invoked after the existing search in idle_balance() limits itself and finds nothing. I maintain a bitmap of overloaded CPUs, where a CPU sets its bit when its runnable CFS task count exceeds 1. The bitmap is sparse, with a limited number of significant bits per cacheline. This reduces cache contention when many threads concurrently set, clear, and visit elements. There is a bitmap per last-level cache. When a CPU becomes idle, it searches the bitmap to find the first overloaded CPU with a migratable task, and steals it. This simple stealing yields a higher CPU utilization than idle_balance() alone, because the search is cheap, costing 1 to 2 microseconds, so it may be called every time the CPU is about to go idle. Stealing does not offload the globally busiest queue, but it is much better than running nothing at all.
### Results
Stealing improves utilization with only a modest CPU overhead in scheduler code. In the following experiment, hackbench is run with varying numbers of groups (40 tasks per group), and the delta in /proc/schedstat is shown for each run, averaged per CPU, augmented with these non-standard stats (a sample hackbench invocation follows the list):
  * %find - percent of time spent in old and new functions that search for idle CPUs and tasks to steal, and that set the overloaded CPUs bitmap.
  * steal - number of times a task is stolen from another CPU.
Elapsed time improves by 8 to 36%, costing at most 0.4% more find time.
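For reference, a typical invocation for this kind of experiment might look like the sketch below; the flags follow the common rt-tests build of hackbench, and the group and loop counts are illustrative:
```
# 20 groups, each with 20 sender/receiver pairs (40 tasks), 100 loops per pair
hackbench -g 20 -l 100
```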
![load balancing][1]
[Used with permission][2]
CPU busy utilization is close to 100% for the new kernel, as shown by the green curve in the following graph, versus the orange curve for the baseline kernel:
![][3]
Stealing improves Oracle database OLTP performance by up to 9% depending on load, and we have seen some nice improvements for mysql, pgsql, gcc, java, and networking. In general, stealing is most helpful for workloads with a high context switch rate.
### The code
As of this writing, this work is not yet upstream, but the latest patch series is at [https://lkml.org/lkml/2018/12/6/1253][4]. If your kernel is built with CONFIG_SCHED_DEBUG=y, you can verify that it contains the stealing optimization using:
```
# grep -q STEAL /sys/kernel/debug/sched_features && echo Yes
Yes
```
If you try it, note that stealing is disabled for systems with more than 2 NUMA nodes, because hackbench regresses on such systems, as I explain in [https://lkml.org/lkml/2018/12/6/1250][5]. However, I suspect this effect is specific to hackbench and that stealing will help other workloads on many-node systems. To try it, reboot with the kernel parameter sched_steal_node_limit=8 (or larger).
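If you are unsure how many NUMA nodes your test machine has, one quick check with standard util-linux tooling is:
```
# Stealing is disabled by default on systems with more than 2 NUMA nodes
lscpu | grep "NUMA node(s)"
```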
### Future work
After the basic stealing algorithm is pushed upstream, I am considering the following enhancements:
* If stealing within the last-level cache does not find a candidate, steal across LLC's and NUMA nodes.
* Maintain a sparse bitmap to identify stealing candidates in the RT scheduling class. Currently pull_rt_task() searches all run queues.
* Remove the core and socket levels from idle_balance(), as stealing handles those levels. Remove idle_balance() entirely when stealing across LLC is supported.
* Maintain a bitmap to identify idle cores and idle CPUs, for push balancing.
_This article originally appeared at [Oracle Developers Blog][6]._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/can-better-task-stealing-make-linux-faster
作者:[Oracle][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-load-balancing.png?itok=2Uk1yALt (load balancing)
[2]: /LICENSES/CATEGORY/USED-PERMISSION
[3]: https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b7a700fe-edc3-4ea0-876a-c91e1850b59b/Image/00c074f4282bcbaf0c10dd153c5dfa76/steal_graph.png
[4]: https://lkml.org/lkml/2018/12/6/1253
[5]: https://lkml.org/lkml/2018/12/6/1250
[6]: https://blogs.oracle.com/linux/can-better-task-stealing-make-linux-faster

View File

@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco warns of two security patches that dont work, issues 17 new ones for IOS flaws)
[#]: via: (https://www.networkworld.com/article/3384742/cisco-warns-of-two-security-patches-that-dont-work-issues-17-new-ones-for-ios-flaws.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco warns of two security patches that don't work, issues 17 new ones for IOS flaws
======
### Cisco is issuing 17 new fixes for security problems with IOS and IOS/XE software that runs most of its routers and switches, while it has no patch yet to replace flawed patches for its RV320 and RV325 routers.
![Marisa9 / Getty][1]
Cisco has dropped [17 security advisories describing 19 vulnerabilities][2] in the software that runs most of its routers and switches, IOS and IOS/XE.
The company also announced that two previously issued patches for its RV320 and RV325 Dual Gigabit WAN VPN Routers were “incomplete” and would need to be redone and reissued.
**[ Also see [What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
Cisco rates both those router vulnerabilities as “High” and describes the problems like this:
* [One vulnerability][5] is due to improper validation of user-supplied input. An attacker could exploit this vulnerability by sending malicious HTTP POST requests to the web-based management interface of an affected device. A successful exploit could allow the attacker to execute arbitrary commands on the underlying Linux shell as _root_.
* The [second exposure][6] is due to improper access controls for URLs. An attacker could exploit this vulnerability by connecting to an affected device via HTTP or HTTPS and requesting specific URLs. A successful exploit could allow the attacker to download the router configuration or detailed diagnostic information.
Cisco said firmware updates that address these vulnerabilities are not available and no workarounds exist, but it is working on a complete fix for both.
On the IOS front, the company said six of the vulnerabilities affect both Cisco IOS Software and Cisco IOS XE Software, one of the vulnerabilities affects just Cisco IOS software and ten of the vulnerabilities affect just Cisco IOS XE software. Some of the security bugs, which are all rated as “High”, include:
* [A vulnerability][7] in the web UI of Cisco IOS XE Software could let an unauthenticated, remote attacker access sensitive configuration information.
* [A vulnerability][8] in Cisco IOS XE Software could let an authenticated, local attacker inject arbitrary commands that are executed with elevated privileges. The vulnerability is due to insufficient input validation of commands supplied by the user. An attacker could exploit this vulnerability by authenticating to a device and submitting crafted input to the affected commands.
* [A weakness][9] in the ingress traffic validation of Cisco IOS XE Software for Cisco Aggregation Services Router (ASR) 900 Route Switch Processor 3 could let an unauthenticated, adjacent attacker trigger a reload of an affected device, resulting in a denial of service (DoS) condition, Cisco said. The vulnerability exists because the software insufficiently validates ingress traffic on the ASIC used on the RSP3 platform. An attacker could exploit this vulnerability by sending a malformed OSPF version 2 message to an affected device.
* A problem in the [authorization subsystem][10] of Cisco IOS XE Software could allow an authenticated but unprivileged (level 1), remote attacker to run privileged Cisco IOS commands by using the web UI. The vulnerability is due to improper validation of user privileges of web UI users. An attacker could exploit this vulnerability by submitting a malicious payload to a specific endpoint in the web UI, Cisco said.
* A vulnerability in the [Cluster Management Protocol][11] (CMP) processing code in Cisco IOS Software and Cisco IOS XE Software could allow an unauthenticated, adjacent attacker to trigger a DoS condition on an affected device. The vulnerability is due to insufficient input validation when processing CMP management packets, Cisco said.
Cisco has released free software updates that address the vulnerabilities described in these advisories and [directs users to their software agreements][12] to find out how they can download the fixes.
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384742/cisco-warns-of-two-security-patches-that-dont-work-issues-17-new-ones-for-ios-flaws.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/woman-with-hands-over-face_mistake_oops_embarrassed_shy-by-marisa9-getty-100787990-large.jpg
[2]: https://tools.cisco.com/security/center/viewErp.x?alertId=ERP-71135
[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190123-rv-inject
[6]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190123-rv-info
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-xeid
[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-xecmd
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-rsp3-ospf
[10]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-iosxe-privesc
[11]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-cmp-dos
[12]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world


@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment)
[#]: via: (https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment
======
### Senator and presidential candidate Elizabeth Warren suggests national legislation focused on farm equipment. But that's only a first step. The data collected by that equipment must also be considered.
![Thinkstock][1]
There's a surprising battle being fought on America's farms between farmers and the companies that sell them tractors, combines, and other farm equipment. The outcome of that fight could have far-reaching implications for the internet of things (IoT), and now Massachusetts senator and Democratic presidential candidate Elizabeth Warren has weighed in with a proposal that could shift the balance of power in this largely under-the-radar struggle.
## Right to repair farm equipment
Here's the story: As part of a new plan to support family farms, Warren came out in support of a national right-to-repair law for farm equipment. That might not sound like a big deal, but it raises the stakes in a long-simmering fight between farmers and equipment makers over who really controls access to the equipment, and to the increasingly critical data gathered by the IoT capabilities built into it.
**[ Also read:[Right-to-repair smartphone ruling loosens restrictions on industrial, farm IoT][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
[Warren's proposal reportedly][4] calls for making all diagnostics tools and manuals freely available to equipment owners and independent repair shops, not just vendors and their authorized agents, and it focuses solely on farm equipment.
That's a great start, and kudos to Warren for being by far the most prominent politician to weigh in on the issue.
## Part of a much bigger IoT data issue
But Warren's proposal merely scratches the surface of the much larger issue of who actually controls the equipment and devices that consumers and businesses buy. Even more important, it doesn't address the critical data gathered by IoT sensors in everything from smartphones, wearables, and smart-home devices to private and commercial vehicles and aircraft to industrial equipment.
And as many farmers can tell you, this isn't some academic argument. That data has real value, not to mention privacy implications. For farmers, it's GPS-equipped smart sensors tracking everything from temperature to moisture to soil acidity that can determine the most efficient times to plant and harvest crops. For consumers, it might be data that affects their home or auto insurance rates, or even divorce cases. For manufacturers, it might cover everything from which equipment needs maintenance to potential issues with raw materials or finished products.
The solution is simple: IoT users need consistent regulations that ensure free access to what is really their own data and give them the option to share that data with the equipment vendors, if they so choose and on their own terms.
At the very least, users need clear statements of the rules, so they know exactly what they're getting, and not getting, when they buy IoT-enhanced devices and equipment. And if they're being honest, most equipment vendors would likely admit that clear rules would benefit them as well by creating a level playing field, reducing potential liabilities, and helping to avoid making customers unhappy.
Sen. Warren made headlines earlier this month by proposing to ["break up" tech giants][5] such as Amazon, Apple, and Facebook. If she really wants to help technology buyers, prioritizing the right to repair and the associated right to own your own data seems like a more effective approach.
**[ Now read this:[Big trouble down on the IoT farm][6] ]**
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/03/ai_agriculture_primary-100715481-large.jpg
[2]: https://www.networkworld.com/article/3317696/the-recent-right-to-repair-smartphone-ruling-will-also-affect-farm-and-industrial-equipment.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://appleinsider.com/articles/19/03/27/presidential-candidate-elizabeth-warren-focusing-right-to-repair-on-farmers-not-tech
[5]: https://www.nytimes.com/2019/03/08/us/politics/elizabeth-warren-amazon.html
[6]: https://www.networkworld.com/article/3262631/big-trouble-down-on-the-iot-farm.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world


@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to run PostgreSQL on Kubernetes)
[#]: via: (https://opensource.com/article/19/3/how-run-postgresql-kubernetes)
[#]: author: (Jonathan S. Katz https://opensource.com/users/jkatz05)
How to run PostgreSQL on Kubernetes
======
Create uniformly managed, cloud-native production deployments with the flexibility to deploy a personalized database-as-a-service.
![cubes coming together to create a larger cube][1]
By running a [PostgreSQL][2] database on [Kubernetes][3], you can create uniformly managed, cloud-native production deployments with the flexibility to deploy a personalized database-as-a-service tailored to your specific needs.
Using an Operator allows you to provide additional context to Kubernetes to [manage a stateful application][4]. An Operator is also helpful when using an open source database like PostgreSQL to help with actions including provisioning, scaling, high availability, and user management.
Let's explore how to get PostgreSQL up and running on Kubernetes.
### Set up the PostgreSQL operator
The first step to using PostgreSQL with Kubernetes is installing an Operator. You can get up and running with the open source [Crunchy PostgreSQL Operator][5] on any Kubernetes-based environment with the help of Crunchy's [quickstart script][6] for Linux.
The quickstart script has a few prerequisites:
* The [Wget][7] utility installed
* [kubectl][8] installed
* A [StorageClass][9] defined on your Kubernetes cluster
* Access to a Kubernetes user account with cluster-admin privileges. This is required to install the Operator [RBAC][10] rules
* A [namespace][11] to hold the PostgreSQL Operator
Executing the script will give you a default PostgreSQL Operator deployment that assumes [dynamic storage][12] and a StorageClass named **standard**. The script allows you to supply your own values to override these defaults.
You can download the quickstart script and set it to be executable with the following commands:
```
wget https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/examples/quickstart.sh
chmod +x ./quickstart.sh
```
Then you can execute the quickstart script:
```
./quickstart.sh
```
After the script prompts you for some basic information about your Kubernetes cluster, it performs the following operations:
* Downloads the Operator configuration files
* Sets the **$HOME/.pgouser** file to default settings
* Deploys the Operator as a Kubernetes [Deployment][13]
* Sets your **.bashrc** to include the Operator environmental variables
* Sets your **$HOME/.bash_completion** file to be the **pgo bash_completion** file
During the quickstart's execution, you'll be prompted to set up the RBAC rules for your Kubernetes cluster. In a separate terminal, execute the command that the quickstart script tells you to use.
Once the script completes, you'll get information on setting up a port forward to the PostgreSQL Operator pod. In a separate terminal, execute the port forward; this will allow you to begin executing commands against the PostgreSQL Operator. As a rough sketch (the service name, namespace, and port below are assumptions, so use the values the script reports), the port forward looks something like this:
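```
# Hypothetical example -- substitute the service name, namespace, and
# port that the quickstart script reports for your deployment.
kubectl port-forward -n pgo svc/postgres-operator 8443:8443
```
With the port forward in place, try creating a cluster by entering: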
```
pgo create cluster mynewcluster
```
You can test that your cluster is up and running by entering:
```
pgo test mynewcluster
```
You can now manage your PostgreSQL databases in your Kubernetes environment! You can find a full reference to commands, including those for scaling, high availability, backups, and more, in the [documentation][14].
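For a sense of what those commands look like, here is a hedged sketch; the exact command names and flags vary by Operator version, so treat these as assumptions and verify them against the reference:
```
# Assumed command shapes -- check the reference docs for your Operator version.
pgo scale mynewcluster --replica-count=1   # add a replica for high availability
pgo backup mynewcluster                    # take a backup of the cluster
```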
* * *
_Parts of this article are based on [Get Started Running PostgreSQL on Kubernetes][15], which the author wrote for the Crunchy blog._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/how-run-postgresql-kubernetes
作者:[Jonathan S. Katz][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jkatz05
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://www.postgresql.org/
[3]: https://kubernetes.io/
[4]: https://opensource.com/article/19/2/scaling-postgresql-kubernetes-operators
[5]: https://github.com/CrunchyData/postgres-operator
[6]: https://crunchydata.github.io/postgres-operator/stable/installation/#quickstart-script
[7]: https://www.gnu.org/software/wget/
[8]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[9]: https://kubernetes.io/docs/concepts/storage/storage-classes/
[10]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[11]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
[12]: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
[13]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[14]: https://crunchydata.github.io/postgres-operator/stable/#documentation
[15]: https://info.crunchydata.com/blog/get-started-runnning-postgresql-on-kubernetes


@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Microsoft introduces Azure Stack for HCI)
[#]: via: (https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Microsoft introduces Azure Stack for HCI
======
### Azure Stack is great for your existing hardware, so Microsoft is covering the bases with a turnkey solution.
![Thinkstock/Microsoft][1]
Microsoft has introduced Azure Stack HCI Solutions, a new implementation of its on-premises Azure product specifically for [Hyper Converged Infrastructure][2] (HCI) hardware.
[Azure Stack][3] is an on-premises version of Microsoft's Azure cloud service. It gives companies a chance to migrate to an Azure environment within the confines of their own enterprise rather than onto Microsoft's data centers. Once you have migrated your apps and infrastructure to Azure Stack, moving between your systems and Microsoft's cloud service is easy.
HCI is the latest trend in server hardware. It uses scale-out hardware systems and a full software-defined platform to handle [virtualization][4] and management. It's designed to reduce the complexity of deployment and ongoing management, since everything ships fully integrated, hardware and software.
**[ Read also:[12 most powerful hyperconverged infrastructure vendors][5] | Get regularly scheduled insights: [Sign up for Network World newsletters][6] ]**
It makes sense for Microsoft to take this step. Azure Stack was ideal for running on an enterprise's existing hardware. Now you can deploy a whole new hardware configuration to run Azure in-house, complete with Hyper-V-based software-defined compute, storage, and networking.
The Windows Admin Center is the main management tool for Azure Stack HCI. It connects to other Azure tools, such as Azure Monitor, Azure Security Center, Azure Update Management, Azure Network Adapter, and Azure Site Recovery.
“We are bringing our existing HCI technology into the Azure Stack family for customers to run virtualized applications on-premises with direct access to Azure management services such as backup and disaster recovery,” wrote Julia White, corporate vice president of Microsoft Azure, in a [blog post announcing Azure Stack HCI][7].
It's not so much a new product launch as a rebranding. When Microsoft launched Server 2016, it introduced a version called Windows Server Software-Defined Data Center (SDDC), which was built on the Hyper-V hypervisor, and Microsoft says as much in a [FAQ][8] that accompanies the announcement.
"Azure Stack HCI is the evolution of Windows Server Software-Defined (WSSD) solutions previously available from our hardware partners. We brought it into the Azure Stack family because we have started to offer new options to connect seamlessly with Azure for infrastructure management services,” the company said.
Microsoft introduced Azure Stack in 2017, but it was not the first to offer an on-premises cloud option. That distinction goes to [OpenStack][9], a joint project between Rackspace and NASA built on open-source code. Amazon followed with its own product, called [Outposts][10].
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/08/5_microsoft-azure-100733132-large.jpg
[2]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence.html
[3]: https://www.networkworld.com/article/3207748/microsoft-introduces-azure-stack-its-answer-to-openstack.html
[4]: https://www.networkworld.com/article/3234795/what-is-virtualization-definition-virtual-machine-hypervisor.html
[5]: https://www.networkworld.com/article/3112622/hardware/12-most-powerful-hyperconverged-infrastructure-vendors.html
[6]: https://www.networkworld.com/newsletters/signup.html
[7]: https://azure.microsoft.com/en-us/blog/enabling-customers-hybrid-strategy-with-new-microsoft-innovation/
[8]: https://azure.microsoft.com/en-us/blog/announcing-azure-stack-hci-a-new-member-of-the-azure-stack-family/
[9]: https://www.openstack.org/
[10]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world


@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Motorola taps freed-up wireless spectrum for enterprise LTE networks)
[#]: via: (https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Motorola taps freed-up wireless spectrum for enterprise LTE networks
======
### Citizens Broadband Radio Service (CBRS) is developing. Out of the gate, Motorola is creating a land mobile radio (LMR) system that includes enterprise-level voice handheld devices and fast, private data networks.
![Jiraroj Praditcharoenkul / Getty Images][1]
In a move that could upend how workers access data in the enterprise, Motorola has announced a broadband product that it says will deliver data at double the capacity and four times the range of Wi-Fi for end users. The handheld, walkie-talkie-like device, called Mototrbo Nitro, will, importantly, also include a voice channel. “Business-critical voice with private broadband data,” as [Motorola describes it on its website][2].
The company sees the product being implemented in traditional, on-the-move voice communications environments, such as factories and warehouses, that increasingly need data supplementation too. A shop floor that has an electronically delivered repair manual, with an included video demonstration, could be one example. The video could even be two-way.
**[ Also read:[Wi-Fi 6 is coming to a router near you][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
The product takes advantage of upcoming Citizens Broadband Radio Service (CBRS) spectrum. That's a swath of radio bandwidth being released by the Federal Communications Commission (FCC) in the 3.5GHz band, a frequency chunk that is also expected to be used heavily for 5G. In this case, though, Motorola is creating a private LTE network for the enterprise.
The CBRS band marks the first time this kind of broadband spectrum has been made publicly available, [Motorola explains in a white paper][5] (pdf): organizations don't have to buy licenses, yet they can get access to useful spectrum. The FCC has [proposed a tiered sharing system][6] in which auction winners get priority access licenses but others have some access too. That non-prioritized open access could be used by any enterprise for whatever purpose, whether internet of things (IoT) deployments or private networks.
## Motorola's pitch for using a private broadband network
Why a private broadband network and not simply cell phones? One giveaway line is in Motorola's promotional video: “Without sacrificing control,” it says. What it means is that the firm thinks there's a market of companies that want to run entire business communications systems, data and voice, without involvement from possibly nosy mobile network operator phone companies. [I've written before about how control over security is prompting large industrials to explore private networks][7] more. In this case, though, Motorola manages the network for the enterprise.
Motorola also points to potentially limited or intermittent onsite coverage, and to congestion, on public, commercial, single-platform voice and data networks. That's particularly the case in factories, [Motorola says in an ebook][8]. Heavy machinery containing radio-unfriendly metal can hinder Wi-Fi and cellular, it claims, and traditional land mobile radios (LMRs), such as walkie-talkies and vehicle-mounted mobile radios, don't handle data natively. In particular, it says that if you want to get into artificial intelligence (AI) and analytics, you need a voice and fast-data communications setup that can evolve.
## Industrial IoT uses for Motorola's Nitro network
Industrial IoT will be another beneficiary, Motorola says. Its CBRS Nitro network could deliver instant notifications of equipment failures that traditional products can't provide. It also suggests merging fixed security cameras with “photos and videos of broken machines and sending real-time video to an expert.”
**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][9] ]**
Motorola also suggests that by separating consumer Wi-Fi (as is offered in hospitality and transport verticals, for example) from business-critical systems, one reduces traffic congestion risks.
The highly complicated CBRS band-sharing system is still not through its government testing. “However, we could deploy customer systems under an experimental license,” a Motorola representative told me.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_robotic_arm_gear_engineer_tablet_by_jiraroj_praditcharoenkul_gettyimages-1091790364_2400x1600-100788459-large.jpg
[2]: https://www.motorolasolutions.com/en_us/products/two-way-radios/mototrbo/nitro.html
[3]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.motorolasolutions.com/content/dam/msi/docs/products/mototrbo/nitro/cbrs-white-paper.pdf
[6]: https://www.networkworld.com/article/3300339/private-lte-using-new-spectrum-approaching-market-readiness.html
[7]: https://www.networkworld.com/article/3319176/private-5g-networks-are-coming.html
[8]: https://img04.en25.com/Web/MotorolaSolutionsInc/%7B293ce809-fde0-4619-8507-2b42076215c3%7D_radio_evolution_eBook_Nitro_03.13.19_MS_V3.pdf?elqTrackId=850d56c6d53f4013afa2290a66d6251f&elqaid=2025&elqat=2
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world


@ -0,0 +1,48 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Robots in Retail are Real… and so is Edge Computing)
[#]: via: (https://www.networkworld.com/article/3385046/robots-in-retail-are-real-and-so-is-edge-computing.html#tk.rss_all)
[#]: author: (Wendy Torell https://www.networkworld.com/author/Wendy-Torell/)
Robots in Retail are Real… and so is Edge Computing
======
### I've seen plenty of articles touting the promise of edge computing technologies like AI and robotics in retail brick & mortar, but it wasn't until this past weekend that I had my first encounter with an actual robot in a retail store.
![Getty][1]
I've seen plenty of articles touting the promise of [edge computing][2] technologies like AI and robotics in retail brick & mortar, but it wasn't until this past weekend that I had my first encounter with an actual robot in a retail store. I was doing my usual weekly grocery shopping at my local Stop & Shop, and who comes strolling down the aisle but… Marty, the autonomous robot. He was friendly looking with his big googly eyes and was wearing a sign that explained he was there for safety, and that he was monitoring the aisles to report spills, debris, and other hazards to employees to improve my shopping experience. He caught the attention of most of the shoppers.
At the National Retail Federation conference in NY that I attended in January, this was the topic of one of the [panel sessions][3]. It all makes sense… a positive customer experience is critical to retail success. But employee-to-customer (human-to-human) interaction has also been proven important. That's where Marty comes in: freeing up resources spent on tedious, time-consuming tasks so that personnel can spend more time directly helping customers.
**Use cases for robots in stores**
Retailers have utilized robotics on manufacturing floors and in distribution warehouses to improve productivity and optimize business processes along the supply chain. But it is only more recently that we're seeing them make their way into the retail storefront, where they are in contact with customers. Alerting staff to hazards in the aisles is just one of many use cases for the robots. They can also be used to scan and re-stock shelves, or as general information sources and greeters upon entering the store to guide your shopping experience. But how does a retailer justify the investment in this type of technology? Determining your ROI isn't as cut and dried as in a warehouse environment, for example, where costs are directly tied to the number of staff, time to complete tasks, and so on. I guess time will tell for the retailers that are giving it a go.
**What does it mean for the IT equipment on-premise ([micro data center][4])?**
Robots are one of the many ways retail stores are being digitized. Video analytics is another big one, used to analyze facial expressions for customer satisfaction, obtain customer demographics as input to product development, or ensure queue lines don't get too long. My colleague, Patrick Donovan, wrote a detailed [blog post][5] about our trip to NRF and the impact on the physical infrastructure in the stores. In a nutshell, the equipment on-premise is becoming more mission critical, more integrated with business applications in the cloud, and more tied to positive customer experiences… and with that comes the need for a more secure, more available, more manageable edge. But this is easier said than done in an environment that generally has no IT staff on-premise, and with hundreds or potentially thousands of stores spread out geographically. So how do we address this?
We answer this question in a white paper that Patrick and I are currently writing, titled “An Integrated Ecosystem to Solve Edge Computing Infrastructure Challenges.” Here's a hint: (1) an integrated ecosystem of partners, and (2) an integrated micro data center that emerges from the ecosystem. I'll be sure to comment on this blog with the link when the white paper becomes publicly available! In the meantime, explore our [edge computing][2] landing page to learn more.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385046/robots-in-retail-are-real-and-so-is-edge-computing.html#tk.rss_all
作者:[Wendy Torell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Wendy-Torell/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/gettyimages-828488368-1060x445-100792228-large.jpg
[2]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
[3]: https://stores.org/2019/01/15/why-is-there-a-robot-in-my-store/
[4]: https://www.apc.com/us/en/solutions/business-solutions/micro-data-centers.jsp
[5]: https://blog.apc.com/2019/02/06/4-thoughts-edge-computing-infrastructure-retail-sector/


@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to manage your Linux environment)
[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to manage your Linux environment
======
### Linux user environments help you find the command you need and get a lot done without needing details about how the system is configured. Where the settings come from and how they can be modified is another matter.
![IIP Photo Archive \(CC BY 2.0\)][1]
The configuration of your user account on a Linux system simplifies your use of the system in a multitude of ways. You can run commands without knowing where they're located. You can reuse previously run commands without worrying how the system is keeping track of them. You can look at your email, view man pages, and get back to your home directory easily no matter where you might have wandered off to in the file system. And, when needed, you can tweak your account settings so that it works even more to your liking.
Linux environment settings come from a series of files — some are system-wide (meaning they affect all user accounts) and some are configured in files that are sitting in your home directory. The system-wide settings take effect when you log in and local ones take effect right afterwards, so the changes that you make in your account will override system-wide settings. For bash users, these files include these system files:
```
/etc/environment
/etc/bash.bashrc
/etc/profile
```
And some of these local files:
```
~/.bashrc
~/.profile -- not read if ~/.bash_profile or ~/.bash_login exists
~/.bash_profile
~/.bash_login
```
You can modify any of these four local files that exist, since they sit in your home directory and belong to you.
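To check which of these local files exist in your account, you can run a quick test like this (a simple sketch, assuming bash; on an Ubuntu-style system, for example, you might see the two files shown):
```
$ ls -a ~ | grep -E '^\.(bashrc|profile|bash_profile|bash_login)$'
.bashrc
.profile
```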
**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
### Viewing your Linux environment settings
To view your environment settings, use the **env** command. Your output will likely look similar to this:
```
$ env
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;
01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:
*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:
*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:
*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;
31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:
*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:
*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:
*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:
*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:
*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:
*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:
*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:
*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:
*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:
*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:
*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:
*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.spf=00;36:
SSH_CONNECTION=192.168.0.21 34975 192.168.0.11 22
LESSCLOSE=/usr/bin/lesspipe %s %s
LANG=en_US.UTF-8
OLDPWD=/home/shs
XDG_SESSION_ID=2253
USER=shs
PWD=/home/shs
HOME=/home/shs
SSH_CLIENT=192.168.0.21 34975 22
XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop
SSH_TTY=/dev/pts/0
MAIL=/var/mail/shs
TERM=xterm
SHELL=/bin/bash
SHLVL=1
LOGNAME=shs
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
XDG_RUNTIME_DIR=/run/user/1000
PATH=/home/shs/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
LESSOPEN=| /usr/bin/lesspipe %s
_=/usr/bin/env
```
While you're likely to get a _lot_ of output, the first big section shown above deals with the colors that are used on the command line to identify various file types. When you see something like `*.tar=01;31:`, this tells you that tar files will be displayed in a file listing in red, while `*.jpg=01;35:` tells you that jpg files will show up in purple. These colors are meant to make it easy to pick out certain files from a file listing. You can learn more about how these colors are defined and how to customize them at [Customizing your colors on the Linux command line][3].
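As a quick sketch of adding a rule of your own (the `*.log` pattern is just an example, and the change lasts only for the current shell unless you add it to a startup file):
```
$ export LS_COLORS="$LS_COLORS:*.log=01;33"   # show .log files in bold yellow
```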
One easy way to turn colors off when you prefer a simpler display is to use a command such as this one:
```
$ ls -l --color=never
```
That command could easily be turned into an alias:
```
$ alias ll2='ls -l --color=never'
```
You can also display individual settings using the **echo** command. In this command, we display the number of commands that will be remembered in our history buffer:
```
$ echo $HISTSIZE
1000
```
If you've moved around the file system, your last location is remembered as well. These variables track your current and previous directories:
```
PWD=/home/shs
OLDPWD=/tmp
```
### Making changes
You can make changes to environment settings with a command like the one below, but add a line such as "HISTSIZE=1234" to your ~/.bashrc file if you want to retain the setting.
```
$ export HISTSIZE=1234
```
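As a minimal sketch of making that setting persistent (appending to your own ~/.bashrc):
```
$ echo 'HISTSIZE=1234' >> ~/.bashrc   # persist the setting for future logins
$ . ~/.bashrc                         # apply it to the current shell as well
```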
### What it means to "export" a variable
Exporting a variable makes the setting available to your shell and any subshells. By default, user-defined variables are local and are not exported to new processes such as subshells and the scripts they run. The export command makes variables available to child processes.
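A quick demonstration of the difference (a minimal sketch; the variable names are arbitrary):
```
$ NOTEXPORTED="local to this shell"
$ export EXPORTED="visible to child processes"
$ bash -c 'echo "NOTEXPORTED=$NOTEXPORTED EXPORTED=$EXPORTED"'
NOTEXPORTED= EXPORTED=visible to child processes
```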
### Adding and removing variables
You can create new variables and make them available on the command line and in subshells quite easily. However, these variables will not survive your logging out and then back in again unless you also add them to ~/.bashrc or a similar file.
```
$ export MSG="Hello, World!"
```
You can unset a variable if you need by using the **unset** command:
```
$ unset MSG
```
If the variable is defined in one of your startup files, you can easily restore it by sourcing that file again. For example:
```
$ echo $MSG
Hello, World!
$ unset MSG
$ echo $MSG
$ . ~/.bashrc
$ echo $MSG
Hello, World!
```
### Wrap-up
User accounts are set up with an appropriate set of startup files for creating a useful user environment, but both individual users and sysadmins can change the default settings by editing their personal setup files (users) or the files from which many of the settings originate (sysadmins).
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/environment-rocks-leaves-100792229-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.networkworld.com/article/3269587/customizing-your-text-colors-on-the-linux-command-line.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world


@ -0,0 +1,102 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to submit a bug report with Bugzilla)
[#]: via: (https://opensource.com/article/19/3/bug-reporting)
[#]: author: (David Both (Community Moderator) https://opensource.com/users/dboth)
How to submit a bug report with Bugzilla
======
Submitting bug reports is an easy way to give back and it helps everyone.
![][1]
I spend a lot of time doing research for my books and [Opensource.com][2] articles. Sometimes this leads me to discover bugs in the software I use, including Fedora and the Linux kernel. As a long-time Linux user and sysadmin, I have benefited greatly from GNU/Linux, and I like to give back. I am not a C language programmer, so I don't create fixes and submit them with bug reports, as some people do. But a way I can return some value to the Linux community is by reporting bugs.
Product maintainers use a lot of tools to let their users search for existing bugs and report new ones. Bugzilla is a popular tool, and I use the Red Hat [Bugzilla][3] website to report Fedora-related bugs because I primarily use Fedora on the systems I'm responsible for. It's an easy process, but it may seem daunting if you have never done it before. So let's start with the basics.
### Start with a search
Even though it's tempting, never assume that seemingly anomalous behavior is the result of a bug. I always start with a search of relevant websites, such as the [Fedora wiki][4], the [CentOS wiki][5], and the documentation for the distro I'm using. I also try to check the various distro listservs.
If it appears that no one has encountered this problem before (or if they have, they haven't reported it as a bug), I go to the Red Hat Bugzilla site and begin searching for a bug report that might come close to matching the symptoms I encountered.
You can search the Red Hat Bugzilla site without an account. Go to the Bugzilla site and click on the [Advanced Search tab][6].
![Searching for a bug][7]
For example, if you want to search for bug reports related to Fedora's Rescue mode kernel, enter the following data in the Advanced Search form.
Field | Logic | Data or Selection
---|---|---
Summary | Contains the string | Rescue mode kernel
Classification | | Fedora
Product | | Fedora
Component | | grub2
Status | | New + Assigned
Then press **Search**. This returns a list of one bug with the ID 1654337 (which happens to be a bug I reported).
![Bug report list][8]
Click on the ID to view my bug report details. I entered as much relevant data as possible in the top section of the report. In the comments, I described the problem and included supporting files, other relevant comments (such as the fact that the problem occurred on multiple motherboards), and the steps to reproduce the problem.
![Bug report details][9]
The more information you can provide here that pertains to the bug, such as symptoms, the hardware and software environments (if they are applicable), other software that was running at the time, kernel and distro release levels, and so on, the easier it will be to determine where to assign your bug. In this case, I originally chose the kernel component, but it was quickly changed to the GRUB2 component because the problem occurred before the kernel loaded.
### How to submit a bug report
The Red Hat [Bugzilla][3] website requires an account to submit new bugs or comment on old ones. It is easy to sign up. On Bugzilla's main page, click **Open a New Account** and fill in the requested information. After you verify your email address, you can fill in the rest of the information to create your account.
_**Advisory:**_ _Bugzilla is a working website that people count on for support. I strongly suggest not creating an account unless you intend to submit bug reports or comment on existing bugs._
To demonstrate how to submit a bug report, I'll use a fictional example of creating a bug against the Xfce4-terminal emulator in Fedora. _Please do not do this unless you have a real bug to report._
Log into your account and click on **New** in the menu bar or the **File a Bug** button. You'll need to select a classification for the bug to continue the process. This will narrow down some of the choices on the next page.
The following image shows how I filled out the required fields (and a couple of others that are not required).
![Reporting a bug][10]
When you type a short problem description in the **Summary** field, Bugzilla displays a list of other bugs that might match yours. If one matches, click **Add Me to the CC List** to receive emails when changes are made to the bug.
If none match, fill in the information requested in the **Description** field. Add as much information as you can, including error messages and screen captures that illustrate the problem. Be sure to describe the exact steps needed to reproduce the problem and how reproducible it is: does it fail every time, every second or third time, or only at random? If it happened only once, it's very unlikely anyone will be able to reproduce the problem you observed.
When you finish adding as much information as you can, press **Submit Bug**.
### Be kind
Bug reporting websites are not for asking questions—they are for searching and reporting bugs. That means you must have performed some work on your own to conclude that there really is a bug. There are many wikis, listservs, and Q&A websites that are appropriate for asking questions. Use sites like Bugzilla to search for existing bug reports on the problem you have found.
Be sure you submit your bugs on the correct bug reporting website. For example, only submit bugs about Red Hat products on the Red Hat Bugzilla, and submit bugs about LibreOffice by following [LibreOffice's instructions][11].
Reporting bugs is not difficult, and it is an important way to participate.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/bug-reporting
作者:[David Both (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bug-insect-butterfly-diversity-inclusion-2.png?itok=TcC9eews
[2]: http://Opensource.com
[3]: https://bugzilla.redhat.com/
[4]: https://fedoraproject.org/wiki/
[5]: https://wiki.centos.org/
[6]: https://bugzilla.redhat.com/query.cgi?format=advanced
[7]: https://opensource.com/sites/default/files/uploads/bugreporting-1.png (Searching for a bug)
[8]: https://opensource.com/sites/default/files/uploads/bugreporting-2.png (Bug report list)
[9]: https://opensource.com/sites/default/files/uploads/bugreporting-4.png (Bug report details)
[10]: https://opensource.com/sites/default/files/uploads/bugreporting-3.png (Reporting a bug)
[11]: https://wiki.documentfoundation.org/QA/BugReport


@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Russia demands access to VPN providers servers)
[#]: via: (https://www.networkworld.com/article/3385050/russia-demands-access-to-vpn-providers-servers.html#tk.rss_all)
[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)
Russia demands access to VPN providers' servers
======
### 10 VPN service providers have been ordered to link their servers in Russia to the state censorship agency by April 26
![Getty Images][1]
The Russian censorship agency Roskomnadzor has ordered 10 [VPN][2] service providers to link their servers in Russia to its network in order to stop users within the country from reaching banned sites.
If they fail to comply, their services will be blocked, according to a machine translation of the order.
[RELATED: Best VPN routers for small business][3]
The 10 VPN providers are ExpressVPN, HideMyAss!, Hola VPN, IPVanish, Kaspersky Secure Connection, KeepSolid, NordVPN, OpenVPN, TorGuard, and VyprVPN.
In response, at least five of the 10 (ExpressVPN, IPVanish, KeepSolid, NordVPN, and TorGuard) say they are tearing down their servers in Russia but will continue to offer their services to Russian customers who can reach the providers' servers located outside of Russia. A sixth provider, Kaspersky Lab, which is based in Moscow, says it will comply with the order. The other four could not be reached for this article.
IPVanish characterized the order as another phase of “Russia's censorship agenda,” dating back to 2017, when the government enacted a law forbidding the use of VPNs to access blocked websites.
“Up until recently, however, they had done little to enforce such rules,” IPVanish [says in its blog][4]. “These new demands mark a significant escalation.”
The reactions of those not complying are similar. TorGuard says it has taken steps to remove all its physical servers from Russia. It is also cutting off its business with data centers in the region.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]**
“We would like to be clear that this removal of servers was a voluntary decision by TorGuard management and no equipment seizure occurred,” [TorGuard says in its blog][6]. “We do not store any logs, so even if servers were compromised, it would be impossible for customers' data to be exposed.”
TorGuard says it is deploying more servers in adjacent countries to protect fast download speeds for customers in the region.
IPVanish says it has faced similar demands from Russia before and responded similarly. In 2016, a new Russian law required online service providers to store customers' private data for a year. “In response, [we removed all physical server presence in Russia][7], while still offering Russians encrypted connections via servers outside of Russian borders,” the company says. “That decision was made in accordance with our strict zero-logs policy.”
KeepSolid says it had no servers in Russia, but it will not comply with the order to link with Roskomnadzor's network. KeepSolid says it will [draw on its experience dealing with the Great Firewall of China][8] to fight the Russian censorship attempt. "Our team developed a special [KeepSolid Wise protocol][9] which is designed for use in countries where the use of VPN is blocked," a spokesperson for the company said in an email statement.
NordVPN says it's shutting down all its Russian servers, and all of them will be shredded as of April 1. [The company says in a blog][10] that some of its customers who connected to its Russian servers without using the NordVPN application will have to reconfigure their devices to ensure their security. Those customers using the app won't have to do anything differently because the option to connect to Russia via the app has been removed.
ExpressVPN is also not complying with the order. "As a matter of principle, ExpressVPN will never cooperate with efforts to censor the internet by any country," said the company's vice president Harold Li in an email, but he said that blocking traffic will be ineffective. "We expect that Russian internet users will still be able to find means of accessing the sites and services they want, albeit perhaps with some additional effort."
Kaspersky Lab says it will comply with the Russian order and responded to emailed questions about its reaction with this written response:
“Kaspersky Lab is aware of the new requirements from Russian regulators for VPN providers operating in the country. These requirements oblige VPN providers to restrict access to a number of websites that were listed and prohibited by the Russian Government in the country's territory. As a responsible company, Kaspersky Lab complies with the laws of all the countries where it operates, including Russia. At the same time, the new requirements don't affect the main purpose of Kaspersky Secure Connection, which protects user privacy and ensures confidentiality and protection against data interception, for example, when using open Wi-Fi networks, making online payments at cafes, airports or hotels. Additionally, the new requirements are relevant to VPN use only in Russian territory and do not concern users in other countries.”
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385050/russia-demands-access-to-vpn-providers-servers.html#tk.rss_all
作者:[Tim Greene][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Tim-Greene/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/ipsecurity-protocols-network-security-vpn-100775457-large.jpg
[2]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html
[3]: http://www.networkworld.com/article/3002228/router/best-vpn-routers-for-small-business.html#tk.nww-fsb
[4]: https://nordvpn.com/blog/nordvpn-servers-roskomnadzor-russia/
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[6]: https://torguard.net/blog/why-torguard-has-removed-all-russian-servers/
[7]: https://blog.ipvanish.com/ipvanish-removes-russian-vpn-servers-from-moscow/
[8]: https://www.vpnunlimitedapp.com/blog/what-roskomnadzor-demands-from-vpns/
[9]: https://www.vpnunlimitedapp.com/blog/keepsolid-wise-a-smart-solution-to-get-total-online-freedom/
[10]: https://nordvpn.com/blog/nordvpn-servers-roskomnadzor-russia/
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world


@ -0,0 +1,176 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (ShadowReader: Serverless load tests for replaying production traffic)
[#]: via: (https://opensource.com/article/19/3/shadowreader-serverless)
[#]: author: (Yuki Sawa https://opensource.com/users/yukisawa1/users/yongsanchez)
ShadowReader: Serverless load tests for replaying production traffic
======
This open source tool recreates serverless production conditions to pinpoint causes of memory leaks and other errors that aren't visible in the QA environment.
![Traffic lights at night][1]
While load testing has become more accessible, configuring load tests that faithfully re-create production conditions can be difficult. A good load test must use a set of URLs that are representative of production traffic and achieve request rates that mimic real users. Even performing distributed load tests requires the upkeep of a fleet of servers.
[ShadowReader][2] aims to solve these problems. It gathers URLs and request rates straight from production logs and replays them using AWS Lambda. Being serverless, it is more cost-efficient and performant than traditional distributed load tests; in practice, it has scaled beyond 50,000 requests per minute.
At Edmunds, we have been able to utilize these capabilities to solve problems, such as Node.js memory leaks that were happening only in production, by recreating the same conditions in our QA environment. We're also using it daily to generate load for pre-production canary deployments.
The memory leak problem we faced in our Node.js application confounded our engineering team because it occurred only in our production environment; we could not reproduce it until we introduced ShadowReader to replay production traffic into our QA environment.
### The incident
On Christmas Eve 2017, we suffered an incident in which response times jumped across the board and error rates tripled, impacting many users of our website.
![Christmas Eve 2017 incident][3]
![Christmas Eve 2017 incident][4]
Monitoring during the incident helped identify and resolve the issue quickly, but we still needed to understand the root cause.
At Edmunds, we leverage a robust continuous delivery (CD) pipeline that releases new updates to production multiple times a day. We also dynamically scale up our applications to accommodate peak traffic and scale down to save costs. Unfortunately, this had the side effect of masking a memory leak.
In our investigation, we saw that the memory leak had existed for weeks, since early December. Memory usage would climb to 60%, along with a slow increase in 99th percentile response time.
Between our CD pipeline and autoscaling events, long-running containers were frequently being shut down and replaced by newer ones. This inadvertently masked the memory leak until December, when we decided to stop releasing software to ensure stability during the holidays.
![Slow increase in 99th percentile response time][5]
### Our CD pipeline
At a glance, Edmunds' CD pipeline looks like this:
1. Unit test
2. Build a Docker image for the application
3. Integration test
4. Load test/performance test
5. Canary release
The solution is fully automated and requires no manual cutover. The final step is a canary deployment directly into the live website, allowing us to release multiple times a day.
For our load testing, we leveraged custom tooling built on top of JMeter. It takes random samples of production URLs and can simulate various percentages of traffic. Unfortunately, however, our load tests were not able to reproduce the memory leak in any of our pre-production environments.
### Solving the memory leak
When looking at the memory patterns in QA, we noticed there was a very healthy pattern. Our initial hypothesis was that our JMeter load testing in QA was unable to simulate production traffic in a way that allows us to predict how our applications will perform.
While the load test takes samples from production URLs, it can't precisely simulate the URLs customers use and the exact frequency of calls (i.e., the burst rate).
Our first step was to re-create the problem in QA. We used a new tool called ShadowReader, a project that evolved out of our hackathons. While many projects we considered were product-focused, this was the only operations-centric one. It is a load-testing tool that runs on AWS Lambda and can replay production traffic and usage patterns against our QA environment.
The results it returned were immediate:
![QA results in ShadowReader][6]
Knowing that we could re-create the problem in QA, we took the additional step to point ShadowReader to our local environment, as this allowed us to trigger Node.js heap dumps. After analyzing the contents of the dumps, it was obvious the memory leak was coming from two excessively large objects containing only strings. At the time the snapshot dumped, these objects contained 373MB and 63MB of strings!
![Heap dumps show source of memory leak][7]
We found that both objects were temporary lookup caches containing metadata to be used on the client side. Neither of these caches was ever intended to be persisted on the server side. The user's browser cached only its own metadata, but on the server side, it cached the metadata for all users. This is why we were unable to reproduce the leak with synthetic testing. Synthetic tests always resulted in the same fixed set of metadata in the server-side caches. The leak surfaced only when we had a sufficient amount of unique metadata being generated from a variety of users.
Once we identified the problem, we were able to remove the large caches that we observed in the heap dumps. We've since instrumented the application to start collecting metrics that can help detect issues like this faster.
![Collecting metrics][8]
After making the fix in QA, we saw that the memory usage was constant and the leak was plugged.
![Graph showing memory leak fixed][9]
### What is ShadowReader?
ShadowReader is a serverless load-testing framework powered by AWS Lambda and S3 to replay production traffic. It mimics real user traffic by replaying URLs from production at the same rate as the live website. We are happy to announce that after months of internal usage, we have released it as open source!
#### Features
* ShadowReader mimics real user traffic by replaying user requests (URLs). It can also replay certain headers, such as True-Client-IP and User-Agent, along with the URL.
* It is more efficient cost- and performance-wise than traditional distributed load tests that run on a fleet of servers. Managing a fleet of servers for distributed load testing can cost $1,000 or more per month; with a serverless stack, it can be reduced to $100 per month by provisioning compute resources on demand.
* We've scaled it up to 50,000 requests per minute, but it should be able to handle more than 100,000 reqs/min.
* New load tests can be spun up and stopped instantly, unlike traditional load-testing tools, which can take many minutes to generate the test plan and distribute the test data to the load-testing servers.
* It can ramp traffic up or down by a percentage value to function as a more traditional load test.
* Its plugin system enables you to switch out plugins to change its behavior. For instance, you can switch from past replay (i.e., replays past requests) to live replay (i.e., replays requests as they come in).
* Currently, it can replay logs from the [Application Load Balancer][10] and [Classic Load Balancer][11] Elastic Load Balancers (ELBs), and support for other load balancers is coming soon.
### How it works
ShadowReader is composed of four different Lambdas: a Parser, an Orchestrator, a Master, and a Worker.
![ShadowReader architecture][12]
When a user visits a website, a load balancer (in this case, an ELB) typically routes the request. As the ELB routes the request, it will log the event and ship it to S3.
Next, ShadowReader triggers a Parser Lambda every minute via a CloudWatch event, which parses the latest access (ELB) logs on S3 for that minute, then ships the parsed URLs into another S3 bucket.
On the other side of the system, ShadowReader also triggers an Orchestrator Lambda every minute. This Lambda holds the configuration and state of the system.
The Orchestrator then invokes a Master Lambda function. From the Orchestrator, the Master receives information on which time slice to replay and downloads the respective data from the S3 bucket of parsed URLs (deposited there by the Parser).
The Master Lambda divides the load-test URLs into smaller batches, then invokes and passes each batch into a Worker Lambda. If 800 requests must be sent out, then eight Worker Lambdas will be invoked, each one handling 100 URLs.
Finally, the Worker receives the URLs passed from the Master and starts load-testing the chosen test environment.
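The same asynchronous fan-out pattern can be sketched with the AWS CLI. This is only an illustration of the pattern; the function name and batch files here are hypothetical, not ShadowReader's actual invocation code:
```
# Invoke one Worker per batch; --invocation-type Event makes each call
# asynchronous, so the caller does not wait for the Workers to finish.
$ for batch in batch-0.json batch-1.json batch-2.json; do
    aws lambda invoke \
      --function-name shadowreader-worker \
      --invocation-type Event \
      --payload file://"$batch" \
      /dev/null
  done
```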
### The bigger picture
The challenge of reproducibility in load testing serverless infrastructure becomes increasingly important as we move from steady-state application sizing to on-demand models. While ShadowReader is designed and used with Edmunds' infrastructure in mind, any application leveraging ELBs can take full advantage of it. Soon, it will have support to replay the traffic of any service that generates traffic logs.
As the project moves forward, we would love to see it evolve to be compatible with next-generation serverless runtimes such as Knative. We also hope to see other open source communities build similar toolchains for their infrastructure as serverless becomes more prevalent.
### Getting started
If you would like to test drive ShadowReader, check out the [GitHub repo][2]. The README contains how-to guides and a batteries-included [demo][13] that will deploy all the necessary resources to try out live replay in your AWS account.
We would love to hear what you think and welcome contributions. See the [contributing guide][14] to get started!
* * *
_This article is based on "[How we fixed a Node.js memory leak by using ShadowReader to replay production traffic into QA][15]," published on the Edmunds Tech Blog with the help of Carlos Macasaet, Sharath Gowda, and Joey Davis. Yuki Sawa also presented this as "[ShadowReader—Serverless load tests for replaying production traffic][16]" at [SCaLE 17x][17], March 7-10 in Pasadena, Calif._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/shadowreader-serverless
作者:[Yuki Sawa][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/yukisawa1/users/yongsanchez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/traffic-light-go.png?itok=nC_851ys (Traffic lights at night)
[2]: https://github.com/edmunds/shadowreader
[3]: https://opensource.com/sites/default/files/uploads/shadowreader_incident1_0.png (Christmas Eve 2017 incident)
[4]: https://opensource.com/sites/default/files/uploads/shadowreader_incident2.png (Christmas Eve 2017 incident)
[5]: https://opensource.com/sites/default/files/uploads/shadowreader_99thpercentile.png (Slow increase in 99th percentile response time)
[6]: https://opensource.com/sites/default/files/uploads/shadowreader_qa.png (QA results in ShadowReader)
[7]: https://opensource.com/sites/default/files/uploads/shadowreader_heapdumps.png (Heap dumps show source of memory leak)
[8]: https://opensource.com/sites/default/files/uploads/shadowreader_code.png (Collecting metrics)
[9]: https://opensource.com/sites/default/files/uploads/shadowreader_leakplugged.png (Graph showing memory leak fixed)
[10]: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
[11]: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html
[12]: https://opensource.com/sites/default/files/uploads/shadowreader_architecture.png (ShadowReader architecture)
[13]: https://github.com/edmunds/shadowreader#live-replay
[14]: https://github.com/edmunds/shadowreader/blob/master/CONTRIBUTING.md
[15]: https://technology.edmunds.com/2018/08/25/Investigating-a-Memory-Leak-and-Introducing-ShadowReader/
[16]: https://www.socallinuxexpo.org/scale/17x/speakers/yuki-sawa
[17]: https://www.socallinuxexpo.org/

View File

@ -0,0 +1,126 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to build a mobile particulate matter sensor with a Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/mobile-particulate-matter-sensor)
[#]: author: (Stephan Tetzel https://opensource.com/users/stephan)
How to build a mobile particulate matter sensor with a Raspberry Pi
======
Monitor your air quality with a Raspberry Pi, a cheap sensor, and an inexpensive display.
![Team communication, chat][1]
About a year ago, I wrote about [measuring air quality][2] using a Raspberry Pi and a cheap sensor. We've been using this project in our school and privately for a few years now. However, it has one disadvantage: It is not portable because it depends on a WLAN or wired network connection to work. You can't access the sensor's measurements at all unless the Raspberry Pi and your smartphone or computer are on the same network.
To overcome this limitation, we added a small screen to the Raspberry Pi so we can read the values directly from the device. Here's how we set up and configured a screen for our mobile fine particulate matter sensor.
### Setting up the screen for the Raspberry Pi
There is a wide range of Raspberry Pi displays available from [Amazon][3], AliExpress, and other sources. They range from ePaper screens to LCDs with touch function. We chose an inexpensive [3.5″ LCD][4] with touch and a resolution of 320×480 pixels that can be plugged directly into the Raspberry Pi's GPIO pins. It's also nice that a 3.5″ display is about the same size as a Raspberry Pi.
The first time you turn on the screen and start the Raspberry Pi, the screen will remain white because the driver is missing. You have to install [the appropriate drivers][5] for the display first. Log in with SSH and execute the following commands:
```
$ rm -rf LCD-show
$ git clone https://github.com/goodtft/LCD-show.git
$ chmod -R 755 LCD-show
$ cd LCD-show/
```
Execute the appropriate command for your screen to install the drivers. For example, this is the command for our model MPI3501 screen:
```
$ sudo ./LCD35-show
```
This command installs the appropriate drivers and restarts the Raspberry Pi.
### Installing PIXEL desktop and setting up autostart
Here is what we want our project to do: when the Raspberry Pi boots up, it should display a small website with our air quality measurements.
First, install the Raspberry Pi's [PIXEL desktop environment][6]:
```
$ sudo apt install raspberrypi-ui-mods
```
Then install the Chromium browser to display the website:
```
$ sudo apt install chromium-browser
```
Autologin is required for the measured values to be displayed directly after startup; otherwise, you will just see the login screen. However, autologin is not configured for the "pi" user by default. You can configure autologin with the **raspi-config** tool:
```
$ sudo raspi-config
```
In the menu, select: **3 Boot Options → B1 Desktop / CLI → B4 Desktop Autologin**.
One step is still missing: starting Chromium with our website right after boot. First, create the folder **/home/pi/.config/lxsession/LXDE-pi/**:
```
$ mkdir -p /home/pi/.config/lxsession/LXDE-pi/
```
Then create the **autostart** file in this folder:
```
$ nano /home/pi/.config/lxsession/LXDE-pi/autostart
```
and paste the following code:
```
#@unclutter
@xset s off
@xset -dpms
@xset s noblank
# Open Chromium in Full Screen Mode
@chromium-browser --incognito --kiosk http://localhost
```
If you want to hide the mouse pointer, you have to install the package **unclutter** and remove the comment character from the first line of the **autostart** file:
```
$ sudo apt install unclutter
```
![Mobile particulate matter sensor][7]
I've made a few small changes to the code in the last year. So, if you set up the air quality project before, make sure to re-download the script and files for the AQI website using the instructions in the [original article][2].
By adding the touch screen, you now have a mobile particulate matter sensor! We use it at our school to check the quality of the air in the classrooms or to do comparative measurements. With this setup, you are no longer dependent on a network connection or WLAN. You can use the small measuring station everywhere—you can even use it with a power bank to be independent of the power grid.
* * *
_This article originally appeared on [Open School Solutions][8] and is republished with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/mobile-particulate-matter-sensor
作者:[Stephan Tetzel][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/stephan
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
[2]: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi
[3]: https://www.amazon.com/gp/search/ref=as_li_qf_sp_sr_tl?ie=UTF8&tag=openschoolsol-20&keywords=lcd%20raspberry&index=aps&camp=1789&creative=9325&linkCode=ur2&linkId=51d6d7676e10d6c7db203c4a8b3b529a
[4]: https://amzn.to/2CcvgpC
[5]: https://github.com/goodtft/LCD-show
[6]: https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc
[7]: https://opensource.com/sites/default/files/uploads/mobile-aqi-sensor.jpg (Mobile particulate matter sensor)
[8]: https://openschoolsolutions.org/mobile-particulate-matter-sensor/

View File

@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 cool text-based email clients)
[#]: via: (https://fedoramagazine.org/3-cool-text-based-email-clients/)
[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
3 cool text-based email clients
======
![][1]
Writing and receiving email is a big part of everyone's daily routine, and choosing an email client is usually a major decision. The Fedora OS provides a large choice of email clients, among them text-based email applications.
### Mutt
Mutt is probably one of the most popular text-based email clients. It supports all the common features that one would expect from an email client. Color coding, mail threading, POP3, and IMAP are all supported by Mutt. But one of its best features is that it's highly configurable: the user can easily change the keybindings and create macros to adapt the tool to a particular workflow.
To give Mutt a try, install it [using sudo][2] and dnf:
```
$ sudo dnf install mutt
```
To help newcomers get started, Mutt has a very comprehensive [wiki][3] full of macro examples and configuration tricks.
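For a taste of that configurability, this shell snippet appends a minimal, hypothetical fragment to **~/.muttrc**; the keybinding, macro, and folder name are only examples:
```
$ cat >> ~/.muttrc <<'EOF'
# example settings only
set sort = threads
bind index G last-entry
macro index A "<save-message>=Archive<enter>" "move the message to Archive"
EOF
```
This threads the message index, binds _G_ to jump to the newest message, and defines a one-key macro that files the current message into an Archive folder.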
### Alpine
Alpine is also among the most popular text-based email clients. It's more beginner-friendly than Mutt, and you can configure most of Alpine via the application itself — no need to edit a configuration file. One powerful feature of Alpine is the ability to score emails. This is particularly interesting for users subscribed to a high-volume mailing list like Fedora's [devel list][4]. Using scores, Alpine can sort email based on the user's interests, showing emails with a high score first.
Alpine is also available to install from Fedora's repository using dnf.
```
$ sudo dnf install alpine
```
While using Alpine, you can easily access the documentation by pressing the _Ctrl+G_ key combination.
### nmh
nmh (new Mail Handling) follows the UNIX tools philosophy. It provides a collection of single-purpose programs to send, receive, save, retrieve, and manipulate e-mail messages. This lets you swap individual _nmh_ commands with other programs, or build scripts around _nmh_ to create more customized tools. For example, you can use Mutt with nmh.
nmh can be easily installed using dnf.
```
$ sudo dnf install nmh
```
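To illustrate that philosophy, a typical session chains a few of these single-purpose commands; the message numbers are, of course, examples:
```
$ inc      # incorporate new mail into the inbox folder
$ scan     # list the messages in the current folder
$ show 3   # display message number 3
$ repl 3   # reply to message number 3
$ comp     # compose a new message
```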
To learn more about nmh and mail handling in general you can read this GPL-licensed [book][5].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/3-cool-text-based-email-clients/
作者:[Clément Verna][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/cverna/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2018/07/email-clients-816x345.png
[2]: https://fedoramagazine.org/howto-use-sudo/
[3]: https://gitlab.com/muttmua/mutt/wikis/home
[4]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/
[5]: https://rand-mh.sourceforge.io/book/

View File

@ -0,0 +1,226 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Build and host a website with Git)
[#]: via: (https://opensource.com/article/19/4/building-hosting-website-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
Build and host a website with Git
======
Publishing your own website is easy if you let Git help you out. Learn
how in the first article in our series about little-known Git uses.
![web development and design, desktop and browser][1]
[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git.
Creating a website used to be both sublimely simple and a form of black magic all at once. Back in the old days of Web 1.0 (that's not what anyone actually called it), you could just open up any website, view its source code, and reverse engineer the HTML—with all its inline styling and table-based layout—and you felt like a programmer after an afternoon or two. But there was still the matter of getting the page you created on the internet, which meant dealing with servers and FTP and webroot directories and file permissions. While the modern web has become far more complex since then, self-publication can be just as easy (or easier!) if you let Git help you out.
### Create a website with Hugo
[Hugo][3] is an open source static site generator. Static sites are what the web used to be built on (if you go back far enough, it was _all_ the web was). There are several advantages to static sites: they're relatively easy to write because you don't have to code them, they're relatively secure because there's no code executed on the pages, and they can be quite fast because there's no processing aside from transferring whatever you have on the page.
Hugo isn't the only static site generator out there. [Grav][4], [Pico][5], [Jekyll][6], [Podwrite][7], and many others provide an easy way to create a full-featured website with minimal maintenance. Hugo happens to be one with GitLab integration built in, which means you can generate and host your website with a free GitLab account.
Hugo has some pretty big fans, too. For instance, if you've ever gone to the Let's Encrypt website, then you've used a site built with Hugo.
![Let's Encrypt website][8]
#### Install Hugo
Hugo is cross-platform, and you can find installation instructions for MacOS, Windows, Linux, OpenBSD, and FreeBSD in [Hugo's getting started resources][9].
If you're on Linux or BSD, it's easiest to install Hugo from a software repository or ports tree. The exact command varies depending on what your distribution provides, but on Fedora you would enter:
```
$ sudo dnf install hugo
```
Confirm you have installed it correctly by opening a terminal and typing:
```
$ hugo help
```
This prints all the options available for the **hugo** command. If you don't see that, you may have installed Hugo incorrectly or need to [add the command to your path][10].
#### Create your site
To build a Hugo site, you must have a specific directory structure, which Hugo will generate for you by entering:
```
$ hugo new site mysite
```
You now have a directory called **mysite**, and it contains the default directories you need to build a Hugo website.
Git is your interface to get your site on the internet, so change directory to your new **mysite** folder and initialize it as a Git repository:
```
$ cd mysite
$ git init .
```
Hugo is pretty Git-friendly, so you can even use Git to install a theme for your site. Unless you plan on developing the theme you're installing, you can use the **--depth** option to clone the latest state of the theme's source:
```
$ git clone --depth 1 \
https://github.com/darshanbaral/mero.git \
themes/mero
```
Now create some content for your site:
```
$ hugo new posts/hello.md
```
Use your favorite text editor to edit the **hello.md** file in the **content/posts** directory. Hugo accepts Markdown files and converts them to themed HTML files at publication, so your content must be in [Markdown format][11].
If you want to include images in your post, create a folder called **images** in the **static** directory. Place your images into this folder and reference them in your markup using the absolute path starting with **/images**. For example:
```
![A picture of a thing](/images/thing.jpeg)
```
#### Choose a theme
You can find more themes at [themes.gohugo.io][12], but it's best to stay with a basic theme while testing. The canonical Hugo test theme is [Ananke][13]. Some themes have complex dependencies, and others don't render pages the way you might expect without complex configuration. The Mero theme used in this example comes bundled with a detailed **config.toml** configuration file, but (for the sake of simplicity) I'll provide just the basics here. Open the file called **config.toml** in a text editor and add three configuration parameters:
```
languageCode = "en-us"
title = "My website on the web"
theme = "mero"
[params]
author = "Seth Kenlon"
description = "My hugo demo"
```
#### Preview your site
You don't have to put anything on the internet until you're ready to publish it. While you work, you can preview your site by launching the local-only web server that ships with Hugo.
```
$ hugo server --buildDrafts --disableFastRender
```
Open a web browser and navigate to **<http://localhost:1313>** to see your work in progress.
### Publish with Git to GitLab
To publish and host your site on GitLab, create a repository for the contents of your site.
To create a repository in GitLab, click on the **New Project** button in your GitLab Projects page. Create an empty repository called **yourGitLabUsername.gitlab.io**, replacing **yourGitLabUsername** with your GitLab user name or group name. You must use this scheme as the name of your project. If you want to add a custom domain later, you can.
Do not include a license or a README file (because you've started a project locally, adding these now would make pushing your data to GitLab more complex, and you can always add them later).
Once you've created the empty repository on GitLab, add it as the remote location for the local copy of your Hugo site, which is already a Git repository:
```
$ git remote add origin git@gitlab.com:yourGitLabUsername/yourGitLabUsername.gitlab.io.git
```
Create a GitLab site configuration file called **.gitlab-ci.yml** and enter these options:
```
image: monachus/hugo
variables:
  GIT_SUBMODULE_STRATEGY: recursive

pages:
  script:
    - hugo
  artifacts:
    paths:
      - public
  only:
    - master
```
The **image** parameter defines a containerized image that will serve your site. The other parameters are instructions telling GitLab's servers what actions to execute when you push new code to your remote repository. For more information on GitLab's CI/CD (Continuous Integration and Delivery) options, see the [CI/CD section of GitLab's docs][14].
#### Set the excludes
Your Git repository is configured, the commands to build your site on GitLab's servers are set, and your site is ready to publish. For your first Git commit, you must take a few extra precautions so you're not version-controlling files you don't intend to version-control.
First, add the **/public** directory that Hugo creates when building your site to your **.gitignore** file. You don't need to manage the finished site in Git; all you need to track are your source Hugo files.
```
$ echo "/public" >> .gitignore
```
You can't maintain a Git repository within a Git repository without creating a Git submodule. For the sake of keeping this simple, move the embedded **.git** directory so that the theme is just a theme.
Note that you _must_ add your theme files to your Git repository so GitLab will have access to the theme. Without committing your theme files, your site cannot successfully build.
```
$ mv themes/mero/.git ~/.local/share/Trash/files/
```
Alternately, use a **trash** command such as [Trashy][15]:
```
$ trash themes/mero/.git
```
Now you can add all the contents of your local project directory to Git and push it to GitLab:
```
$ git add .
$ git commit -m 'hugo init'
$ git push -u origin HEAD
```
### Go live with GitLab
Once your code has been pushed to GitLab, take a look at your project page. An icon indicates GitLab is processing your build. It might take several minutes the first time you push your code, so be patient. However, don't be _too_ patient, because the icon doesn't always update reliably.
![GitLab processing your build][16]
While you're waiting for GitLab to assemble your site, go to your project settings and find the **Pages** panel. Once your site is ready, its URL will be provided for you. For a project named **yourGitLabUsername.gitlab.io**, that URL is **https://yourGitLabUsername.gitlab.io** (any other project name would instead be served at **yourGitLabUsername.gitlab.io/yourProjectName**). Navigate to that address to view the fruits of your labor.
![Previewing Hugo site][17]
If your site fails to assemble correctly, GitLab provides insight into the CI/CD pipeline logs. Review the error message for an indication of what went wrong.
### Git and the web
Hugo (or Jekyll or similar tools) is just one way to leverage Git as your web publishing tool. With server-side Git hooks, you can design your own Git-to-web pipeline with minimal scripting. With the community edition of GitLab, you can self-host your own GitLab instance or you can use an alternative like [Gitolite][18] or [Gitea][19] and use this article as inspiration for a custom solution. Have fun!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/building-hosting-website-git
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh (web development and design, desktop and browser)
[2]: https://git-scm.com/
[3]: http://gohugo.io
[4]: http://getgrav.org
[5]: http://picocms.org/
[6]: https://jekyllrb.com
[7]: http://slackermedia.info/podwrite/
[8]: https://opensource.com/sites/default/files/uploads/letsencrypt-site.jpg (Let's Encrypt website)
[9]: https://gohugo.io/getting-started/installing
[10]: https://opensource.com/article/17/6/set-path-linux
[11]: https://commonmark.org/help/
[12]: https://themes.gohugo.io/
[13]: https://themes.gohugo.io/gohugo-theme-ananke/
[14]: https://docs.gitlab.com/ee/ci/#overview
[15]: http://slackermedia.info/trashy
[16]: https://opensource.com/sites/default/files/uploads/hugo-gitlab-cicd.jpg (GitLab processing your build)
[17]: https://opensource.com/sites/default/files/uploads/hugo-demo-site.jpg (Previewing Hugo site)
[18]: http://gitolite.com
[19]: http://gitea.io

View File

@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to create a filesystem on a Linux partition or logical volume)
[#]: via: (https://opensource.com/article/19/4/create-filesystem-linux-partition)
[#]: author: (Kedar Vijay Kulkarni (Red Hat) https://opensource.com/users/kkulkarn)
How to create a filesystem on a Linux partition or logical volume
======
Learn to create a filesystem and mount it persistently or
non-persistently in your system.
![Filing papers and documents][1]
In computing, a filesystem controls how data is stored and retrieved and helps organize the files on the storage media. Without a filesystem, information in storage would be one large block of data, and you couldn't tell where one piece of information stopped and the next began. A filesystem helps manage all of this by providing names to files that store data and maintaining a table of files and directories—along with their start/end location, total size, etc.—on disks within the filesystem.
In Linux, when you create a hard disk partition or a logical volume, the next step is usually to create a filesystem by formatting the partition or logical volume. This how-to assumes you know how to create a partition or a logical volume, and you just want to format it to contain a filesystem and mount it.
### Create a filesystem
Imagine you just added a new disk to your system and created a partition named **/dev/sda1** on it.
1. To verify that the Linux kernel can see the partition, you can **cat** out **/proc/partitions** like this:
```
[root@localhost ~]# cat /proc/partitions
major minor #blocks name
253 0 10485760 vda
253 1 8192000 vda1
11 0 1048575 sr0
11 1 374 sr1
8 0 10485760 sda
8 1 10484736 sda1
252 0 3145728 dm-0
252 1 2097152 dm-1
252 2 1048576 dm-2
8 16 1048576 sdb
```
2. Decide what kind of filesystem you want to create, such as ext4, XFS, or anything else. Here are a few options:
```
[root@localhost ~]# mkfs.<tab><tab>
mkfs.btrfs mkfs.cramfs mkfs.ext2 mkfs.ext3 mkfs.ext4 mkfs.minix mkfs.xfs
```
3. For the purposes of this exercise, choose ext4. (I like ext4 because it allows you to shrink the filesystem if you need to, a thing that isn't as straightforward with XFS; a shrink sketch follows this list.) Here's how it can be done (the output may differ based on device names/sizes):
```
[root@localhost ~]# mkfs.ext4 /dev/sda1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=8191 blocks
194688 inodes, 778241 blocks
38912 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=799014912
24 block groups
32768 blocks per group, 32768 fragments per group
8112 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
```
4. In the previous step, if you want to create a different kind of filesystem, use a different **mkfs** command variation.
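As promised in step 3, here is a sketch of shrinking an ext4 filesystem with **resize2fs**. The filesystem must be unmounted and checked first, and the new size (2G is just an example) must still be large enough to hold the data:
```
[root@localhost ~]# umount /dev/sda1        # must not be mounted while shrinking
[root@localhost ~]# e2fsck -f /dev/sda1     # resize2fs requires a forced check first
[root@localhost ~]# resize2fs /dev/sda1 2G  # shrink the filesystem to 2 GB
```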
### Mount a filesystem
After you create your filesystem, you can mount it in your operating system.
1. First, identify the UUID of your new filesystem. Issue the **blkid** command to list all known block storage devices and look for **sda1** in the output:
```
[root@localhost ~]# blkid
/dev/vda1: UUID="716e713d-4e91-4186-81fd-c6cfa1b0974d" TYPE="xfs"
/dev/sr1: UUID="2019-03-08-16-17-02-00" LABEL="config-2" TYPE="iso9660"
/dev/sda1: UUID="wow9N8-dX2d-ETN4-zK09-Gr1k-qCVF-eCerbF" TYPE="LVM2_member"
/dev/mapper/test-test1: PTTYPE="dos"
/dev/sda1: UUID="ac96b366-0cdd-4e4c-9493-bb93531be644" TYPE="ext4"
[root@localhost ~]#
```
2. Run the following commands to create a mount point and mount the **/dev/sda1** device:
```
[root@localhost ~]# mkdir /mnt/mount_point_for_dev_sda1
[root@localhost ~]# ls /mnt/
mount_point_for_dev_sda1
[root@localhost ~]# mount -t ext4 /dev/sda1 /mnt/mount_point_for_dev_sda1/
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 7.9G 920M 7.0G 12% /
devtmpfs 443M 0 443M 0% /dev
tmpfs 463M 0 463M 0% /dev/shm
tmpfs 463M 30M 434M 7% /run
tmpfs 463M 0 463M 0% /sys/fs/cgroup
tmpfs 93M 0 93M 0% /run/user/0
/dev/sda1 2.9G 9.0M 2.7G 1% /mnt/mount_point_for_dev_sda1
[root@localhost ~]#
```
The **df -h** command shows which filesystem is mounted on which mount point. Look for **/dev/sda1**. The mount command above used the device name **/dev/sda1**; you can substitute it with the UUID identified by the **blkid** command. Also, note that a new directory was created under **/mnt** to serve as the mount point for **/dev/sda1**.
3. A problem with using the mount command directly on the command line (as in the previous step) is that the mount won't persist across reboots. To mount the filesystem persistently, edit the **/etc/fstab** file to include your mount information:
```
UUID=ac96b366-0cdd-4e4c-9493-bb93531be644 /mnt/mount_point_for_dev_sda1/ ext4 defaults 0 0
```
4. After you edit **/etc/fstab**, you can **umount /mnt/mount_point_for_dev_sda1** and run **mount -a** to mount everything listed in **/etc/fstab**. If everything went right, you can run **df -h** again and see your filesystem mounted:
```
[root@localhost ~]# umount /mnt/mount_point_for_dev_sda1/
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 7.9G 920M 7.0G 12% /
devtmpfs 443M 0 443M 0% /dev
tmpfs 463M 0 463M 0% /dev/shm
tmpfs 463M 30M 434M 7% /run
tmpfs 463M 0 463M 0% /sys/fs/cgroup
tmpfs 93M 0 93M 0% /run/user/0
/dev/sda1 2.9G 9.0M 2.7G 1% /mnt/mount_point_for_dev_sda1
```
5. You can also check whether the filesystem was mounted:
```
[root@localhost ~]# mount | grep ^/dev/sd
/dev/sda1 on /mnt/mount_point_for_dev_sda1 type ext4 (rw,relatime,seclabel,stripe=8191,data=ordered)
```
Now you know how to create a filesystem and mount it persistently or non-persistently within your system.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/create-filesystem-linux-partition
作者:[Kedar Vijay Kulkarni (Red Hat)][a]
选题:[lujun9972][b]
译者:[liujing97](https://github.com/liujing97)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kkulkarn
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)

View File

@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Meta Networks builds user security into its Network-as-a-Service)
[#]: via: (https://www.networkworld.com/article/3385531/meta-networks-builds-user-security-into-its-network-as-a-service.html#tk.rss_all)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
Meta Networks builds user security into its Network-as-a-Service
======
### Meta Networks has a unique approach to the security of its Network-as-a-Service. A tight security perimeter is built around every user and the specific resources each person needs to access.
![MF3d / Getty Images][1]
Network-as-a-Service (NaaS) is growing in popularity and availability for those organizations that don't want to host their own LAN or WAN, or that want to complement or replace their traditional network with something far easier to manage.
With NaaS, a service provider creates a multi-tenant wide area network comprised of geographically dispersed points of presence (PoPs) connected via high-speed Tier 1 carrier links that create the network backbone. The PoPs peer with cloud services to facilitate customer access to cloud applications such as SaaS offerings, as well as to infrastructure services from the likes of Amazon, Google and Microsoft. User organizations connect to the network from whatever facilities they have — data centers, branch offices, or even individual client devices — typically via SD-WAN appliances and/or VPNs.
Numerous service providers now offer Network-as-a-Service. As the network backbone and the PoPs become more of a commodity, the providers are distinguishing themselves on other value-added services, such as integrated security or WAN optimization.
Ever since its launch about a year ago, [Meta Networks][4] has staked security as its primary value-add. What's different about the Meta NaaS is the philosophy that the network is built around users, not around specific sites or offices. Meta Networks does this by building a software-defined perimeter (SDP) for each user, giving workers micro-segmented access to only the applications and network resources they need. The vendor was a little ahead of its time with SDP, but the market is starting to catch up. Companies are beginning to show interest in SDP as a VPN replacement or VPN alternative.
Meta NaaS has a zero-trust architecture where each user is bound by an SDP. Each user has a unique, fixed identity no matter from where they connect to this network. The SDP security framework allows one-to-one network connections that are dynamically created on demand between the user and the specific resources they need to access. Everything else on the NaaS is invisible to the user. No access is possible unless it is explicitly granted, and it's continuously verified at the packet level. This model effectively provides dynamically provisioned secure network segmentation.
## SDP tightly controls access to specific resources
This approach works very well when a company wants to securely connect employees, contractors, and external partners to specific resources on the network. For example, one of Meta Networks' customers is Via Transportation, a New York-based company that has a ride-sharing platform. The company operates its own ride-sharing services in various cities in North America and Europe, and it licenses its technology to other transit systems around the world.
Via's operations are completely cloud-native, and so it has no legacy-style site-based WAN to connect its 400-plus employees and contractors to their cloud-based applications. Via's partners, primarily transportation operators in different cities and countries, also need controlled access to specific portions of Via's software platform to manage rideshares. Giving each group of users access to the applications they need — and _only_ the ones they specifically need — was a challenge using a VPN. Using the Meta NaaS instead gives Via more granular control over who has what access.
Via's employees with managed devices connect to the Meta NaaS using client software on the device, and they are authenticated using Okta and a certificate. Contractors and customers with unmanaged devices use a browser-based access solution from Meta that doesn't require installation or setup. New users can be on-boarded quickly and assigned granular access policies based on their role. Integration with Okta provides information that facilitates identity-based access policies. Once users connect to the network, they can see only the applications and network resources that their policy allows; everything else is invisible to them under the SDP architecture.
For Via, there are several benefits to the Meta NaaS approach. First and foremost, the company doesn't have to own or operate its own WAN infrastructure. Everything is a managed service located in the cloud — the same business model that Via itself espouses. Next, this solution scales easily to support the company's growth. Meta's security integrates with Via's existing identity management system, so identities and access policies can be centrally managed. And finally, the software-defined perimeter hides resources from unauthorized users, creating security by obscurity.
## Tightening security even further
Meta Networks further tightens the security around the user by doing device posture checks — “NAC lite,” if you will. A customer can define the criteria that devices have to meet before they are allowed to connect to the NaaS. For example, the check could be whether a security certificate is installed, if a registry key is set to a specific value, or if anti-virus software is installed and running. It's one more way to enforce company policies on network access.
When end users use the browser-based method to connect to the Meta NaaS, all activity is recorded in a rich log so that everything can be audited, alerts can be set, and anomalies can be spotted. This data can be exported to a SIEM if desired, but Meta has its own notification and alert system for security incidents.
Meta Networks recently implemented some new features around management, including smart groups and support for the System for Cross-Domain Identity Management (SCIM) protocol. The smart groups feature provides the means to add an extra notation or tag to elements such as devices, services, network subnets or segments, and basically everything that's in the system. These tags can then be applied to policy. For example, a customer could label some of their services as a production, staging, or development environment. Then a policy could be implemented to say that only sales people can access the production environment. Smart groups are just one more way to get even more granular about policy.
The SCIM support makes on-boarding new users simple. SCIM is a protocol that is used to synchronize and provision users and identities from a third-party identity provider such as Okta, Azure AD, or OneLogin. A customer can use SCIM to provision all the users from the IdP into the Meta system, synchronize groups and attributes in real time, and then use that information to build the access policies inside Meta NaaS.
These and other security features fit into Meta Networks' vision that the security perimeter goes with you no matter where you are, and the perimeter includes everything that was formerly delivered through the data center. It is delivered through the cloud to your client device with always-on security. It's a broad approach to SDP and a unique approach to NaaS.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385531/meta-networks-builds-user-security-into-its-network-as-a-service.html#tk.rss_all
作者:[Linda Musthaler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/firewall_network-security_lock_padlock_cyber-security-100776989-large.jpg
[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.metanetworks.com/
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[6]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb
[7]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html
[8]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb
[9]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top Ten Reasons to Think Outside the Router #2: Simplify and Consolidate the WAN Edge)
[#]: via: (https://www.networkworld.com/article/3384928/top-ten-reasons-to-think-outside-the-router-2-simplify-and-consolidate-the-wan-edge.html#tk.rss_all)
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
Top Ten Reasons to Think Outside the Router #2: Simplify and Consolidate the WAN Edge
======
![istock][1]
We're now nearing the end of our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as [Silver Peak][2] counts down the _Top Ten Reasons to Think Outside the Router_. Click for the [#3][3], [#4][4], [#5][5], [#6][6], [#7][7], [#8][8], [#9][9] and [#10][10] reasons to retire traditional branch routers.
_The #2 reason it's time to retire branch routers: conventional router-centric WAN architectures are rigid and complex to manage!_
### **Challenges of conventional WAN edge architecture**
A conventional WAN edge architecture consists of a disparate array of devices, including routers, firewalls, WAN optimization appliances, wireless controllers and so on. This architecture was born in the era when applications were hosted exclusively in the data center. With this model, deploying new applications, provisioning new policies, or making policy changes has become an arduous and time-consuming task. Configuration, deployment and management require specialized on-premises IT expertise to manually program and configure each device through its own management interface, often using an arcane CLI. This process has hit the wall in the cloud era, proving too slow, complex, error-prone, costly and inefficient.
As cloud-first enterprises increasingly migrate applications and infrastructure to the cloud, the traditional WAN architecture is no longer efficient. IT is now faced with a new set of challenges when it comes to connecting users securely and directly to the applications that run their businesses:
* How do you manage and consistently apply QoS and security policies across the distributed enterprise?
* How do you intelligently automate traffic steering across multiple WAN transport services based on application type and unique requirements?
* How do you deliver the highest quality of experiences to users when running applications over broadband, especially voice and video?
* How do you quickly respond to continuously changing business requirements?
These are just some of the new challenges facing IT teams in the cloud era. To be successful, enterprises will need to shift toward a business-first networking model where top-down business intent drives how the network behaves. And they would be well served to deploy a business-driven unified [SD-WAN][11] edge platform to transform their networks from a business constraint to a business accelerant.
### **Shifting toward a business-driven WAN edge platform**
A business-driven WAN edge platform is designed to enable enterprises to realize the full transformation promise of the cloud. It is a model where top-down business intent is the driver, not bottom-up technology constraints. It's outcome-oriented, utilizing automation, artificial intelligence (AI) and machine learning to get smarter every day. Through this continuous adaptation, and the ability to improve the performance of underlying transport and applications, it delivers the highest quality of experience to end users. This is in stark contrast to the router-centric model where application policies must be shoe-horned to fit within the constraints of the network. A business-driven, top-down approach continuously stays in compliance with business intent and centrally defined security policies.
### **A unified platform for simplifying and consolidating the WAN Edge**
Achieving a business-driven architecture requires a unified platform, designed from the ground up as one system, uniting [SD-WAN][12], [firewall][13], [segmentation][14], [routing][15], [WAN optimization][16], and application visibility and control in a single platform. Furthermore, it requires [centralized orchestration][17] with complete observability of the entire wide area network through a single pane of glass.
The use case “[Simplifying WAN Architecture][18]” describes in detail key capabilities of the Silver Peak [Unity EdgeConnect™][19] SD-WAN edge platform. It illustrates how EdgeConnect enables enterprises to simplify branch office WAN edge infrastructure and streamline deployment, configuration and ongoing management.
![][20]
### **Business and IT outcomes of a business-driven SD-WAN**
* Accelerates deployment, leveraging consistent hardware, software, cloud delivery models
* Saves up to 40 percent on hardware, software, installation, management and maintenance costs when replacing traditional routers
* Protects existing investment in security through simplified service chaining with our broadest ecosystem partners: [Check Point][21], [Forcepoint][22], [McAfee][23], [OPAQ][24], [Palo Alto Networks][25], [Symantec][26] and [Zscaler][27].
  * Reduces footprint by 75 percent as it unifies network functions into a single platform
  * Saves more than 50 percent on WAN optimization costs by selectively applying it when and where it is needed, on an application-by-application basis
* Accelerates time-to-resolution of application or network performance bottlenecks from days to minutes with simple, visual application and WAN analytics
Calculate your [ROI][28] today and learn why the time is now to [think outside the router][29] and deploy the business-driven Silver Peak EdgeConnect SD-WAN edge platform!
![][30]
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384928/top-ten-reasons-to-think-outside-the-router-2-simplify-and-consolidate-the-wan-edge.html#tk.rss_all
作者:[Rami Rammaha][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Rami-Rammaha/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/silverpeak_main-100792490-large.jpg
[2]: https://www.silver-peak.com/why-silver-peak
[3]: http://blog.silver-peak.com/think-outside-the-router-reason-3-mpls-contract-renewal
[4]: http://blog.silver-peak.com/top-ten-reasons-to-think-outside-the-router-4-broadband-is-used-only-for-failover
[5]: http://blog.silver-peak.com/think-outside-the-router-reason-5-manual-cli-based-configuration-and-management
[6]: http://blog.silver-peak.com/https-blog-silver-peak-com-think-outside-the-router-reason-6
[7]: http://blog.silver-peak.com/think-outside-the-router-reason-7-exorbitant-router-support-and-maintenance-costs
[8]: http://blog.silver-peak.com/think-outside-the-router-reason-8-garbled-voip-pixelated-video
[9]: http://blog.silver-peak.com/think-outside-router-reason-9-sub-par-saas-performance
[10]: http://blog.silver-peak.com/think-outside-router-reason-10-its-getting-cloudy
[11]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[12]: https://www.silver-peak.com/sd-wan
[13]: https://www.silver-peak.com/products/unity-edge-connect/orchestrated-security-policies
[14]: https://www.silver-peak.com/resource-center/centrally-orchestrated-end-end-segmentation
[15]: https://www.silver-peak.com/products/unity-edge-connect/bgp-routing
[16]: https://www.silver-peak.com/products/unity-boost
[17]: https://www.silver-peak.com/products/unity-orchestrator
[18]: https://www.silver-peak.com/use-cases/simplifying-wan-architecture
[19]: https://www.silver-peak.com/products/unity-edge-connect
[20]: https://images.idgesg.net/images/article/2019/04/sp_linkthrough-copy-100792505-large.jpg
[21]: https://www.silver-peak.com/resource-center/check-point-silver-peak-securing-internet-sd-wan
[22]: https://www.silver-peak.com/company/tech-partners/forcepoint
[23]: https://www.silver-peak.com/company/tech-partners/mcafee
[24]: https://www.silver-peak.com/company/tech-partners/opaq-networks
[25]: https://www.silver-peak.com/resource-center/palo-alto-networks-and-silver-peak
[26]: https://www.silver-peak.com/company/tech-partners/symantec
[27]: https://www.silver-peak.com/resource-center/zscaler-and-silver-peak-solution-brief
[28]: https://www.silver-peak.com/sd-wan-interactive-roi-calculator
[29]: https://www.silver-peak.com/think-outside-router
[30]: https://images.idgesg.net/images/article/2019/04/roi-100792506-large.jpg

View File

@ -0,0 +1,171 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is 5G? How is it better than 4G?)
[#]: via: (https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html#tk.rss_all)
[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)
What is 5G? How is it better than 4G?
======
### 5G networks will boost wireless throughput by a factor of 10 and may replace wired broadband. But when will they be available, and why are 5G and IoT so linked together?
![Thinkstock][1]
[5G wireless][2] is an umbrella term to describe a set of standards and technologies for a radically faster wireless internet that ideally is up to 20 times faster with 120 times less latency than 4G, setting the stage for IoT networking advances and support for new high-bandwidth applications.
## What is 5G? Technology or buzzword?
It will be years before the technology reaches its full potential worldwide, but meanwhile some 5G network services are being rolled out today. 5G is as much a marketing buzzword as a technical term, and not all services marketed as 5G are standard.
## 5G speed vs 4G
With every new generation of wireless technology, the biggest appeal is increased speed. 5G networks have potential peak download speeds of [20 Gbps, with 10 Gbps being seen as typical][4]. That's not just faster than current 4G networks, which currently top out at around 1 Gbps, but also faster than cable internet connections that deliver broadband to many people's homes. 5G offers network speeds that rival optical-fiber connections.
Throughput isn't 5G's only important speed improvement; it also features a huge reduction in network latency. That's an important distinction: throughput measures how long it would take to download a large file, while latency is determined by network bottlenecks and delays that slow down responses in back-and-forth communication.
Latency can be difficult to quantify because it varies based on myriad network conditions, but 5G networks are capable of latency rates that are less than a millisecond in ideal conditions. Overall, 5G latency will be lower than 4G's by a factor of 60 to 120. That will make possible a number of applications such as virtual reality that delay makes impractical today.
## 5G technology
The technology underpinnings of 5G are defined by a series of standards that have been in the works for the better part of a decade. One of the most important of these is 5G New Radio, or 5G NR, formalized by the 3rd Generation Partnership Project, a standards organization that develops protocols for mobile telephony. 5G NR will dictate many of the ways in which consumer 5G devices will operate, and was [finalized in June of 2018][5].
A number of individual technologies have come together to make the speed and latency improvements of 5G possible, and below are some of the most important.
## Millimeter waves
5G networks will for the most part use frequencies in the 30 to 300 GHz range. (Wavelengths at these frequencies are between 1 and 10 millimeters, thus the name.) This high-frequency band can [carry much more information per unit of time than the lower-frequency signals][7] currently used by 4G LTE, which is generally below 1 GHz, or Wi-Fi, which tops out at 6 GHz.
Millimeter-wave technology has traditionally been expensive and difficult to deploy. Technical advances have overcome those difficulties, which is part of what's made 5G possible today.
## Small cells
One drawback of millimeter wave transmission is that it's more prone to interference than Wi-Fi or 4G signals as they pass through physical objects.
To overcome this, the model for 5G infrastructure will be different from 4G's. Instead of the large cellular-antenna masts we've come to accept as part of the landscape, 5G networks will be powered by [much smaller base stations spread throughout cities about 250 meters apart][8], creating cells of service that are also smaller.
These 5G base stations have lower power requirements than those for 4G and can be attached to buildings and utility poles more easily.
## Massive MIMO
Despite 5G base stations being much smaller than their 4G counterparts, they pack in many more antennas. These antennas are [multiple-input multiple-output (MIMO)][9], meaning that they can handle multiple two-way conversations over the same data signal simultaneously. 5G networks can handle more than [20 times more conversations in this way than 4G networks][10].
Massive MIMO promises to [radically improve on base station capacity limits][11], allowing individual base stations to have conversations with many more devices. This in particular is why 5G may drive wider adoption of IoT. In theory, a lot more internet-connected wireless gadgets will be able to be deployed in the same space without overwhelming the network.
## Beamforming
Making sure all these conversations go back and forth to the right places is tricky, especially with the aforementioned problems millimeter-wave signals have with interference. To overcome those issues, 5G stations deploy advanced beamforming techniques, which use constructive and destructive radio interference to make signals directional rather than broadcast. That effectively boosts signal strength and range in a particular direction.
## 5G availability
The first commercial 5G network was [rolled out in Qatar in May 2018][12]. Since then, networks have been popping up across the world, from Argentina to Vietnam. [Lifewire has a good, frequently updated list][13].
One thing to keep in mind, though, is that not all 5G networks deliver on all the technology's promises yet. Some early 5G offerings piggyback on existing 4G infrastructure, which reduces the potential speed gains; other services dubbed 5G for marketing purposes don't even comply with the standard. A closer look at offerings from U.S. wireless carriers will demonstrate some of the pitfalls.
## Wireless carriers and 5G
Technically, 5G is available in the U.S. today. But the caveats involved in that statement vary from carrier to carrier, demonstrating the long road that still lies ahead before 5G becomes omnipresent.
Verizon is making probably the biggest early 5G push. In October of 2018 it announced [5G Home][14], a service available in parts of four cities that requires using a special 5G hotspot to connect to the network and feed the connection to your other devices via Wi-Fi.
Verizon planned an April rollout of a [mobile service in Minneapolis and Chicago][15], which will spread to other cities over the course of the year. Accessing the 5G network will cost customers an extra monthly fee plus what they'll have to spend on a phone that can actually connect to it (more on that in a moment). As an added wrinkle, Verizon is deploying what it calls [5G TF][16], which doesn't match up with the 5G NR standard.
AT&T [announced the availability of 5G in 12 U.S. cities in December 2018][17], with nine more coming by the end of 2019, but even in those cities, availability is limited to downtown areas. Using the network requires a special Netgear hotspot that connects to the service, then provides a Wi-Fi signal to phones and other devices.
Meanwhile, AT&T is also rolling out speed boosts to its 4G network, which it's dubbed 5GE even though these improvements aren't related to 5G networking. ([This is causing backlash][18].)
Sprint will have 5G service in parts of four cities by May of 2019, and five more by the end of the year. But while Sprint's 5G offering makes use of massive MIMO cells, it [isn't using millimeter-wave signals][19], meaning that Sprint users won't see as much of a speed boost as customers of other carriers.
T-Mobile is pursuing a similar model, and it [won't roll out its service until the end of 2019][20] because until then there won't be any phones to connect to it.
One kink that might stop a rapid spread of 5G is the need to spread out all those small-cell base stations. Their small size and low power requirements make them easier to deploy than current 4G tech in a technical sense, but that doesn't mean it's simple to convince governments and property owners to install dozens of them everywhere. Verizon actually set up a [website that you can use to petition your local elected officials][21] to speed up 5G base station deployment.
## 5G phones: When available? When to buy?
The first major 5G phone to be announced is the Samsung Galaxy S10 5G, which should be available by the end of the summer of 2019. You can also order a "[Moto Mod][22]" from Verizon, which [transforms Moto Z3 phones into 5G-compatible devices][23].
But unless you can't resist the lure of being an early adopter, you may wish to hold off for a bit; some of the quirks and looming questions about carrier rollout may mean that you end up with a phone that [isn't compatible with your carrier's entire 5G network][24].
One laggard that may surprise you is Apple: analysts believe that there won't be a [5G-compatible iPhone until 2020 at the earliest][25]. But this isn't out of character for the company; Apple [also lagged behind Samsung in releasing 4G-compatible phones][26] back in 2012.
Still, the 5G flood is coming. 5G-compatible devices [dominated Barcelona's Mobile World Congress in 2019][3], so expect to have a lot more choice on the horizon.
## Why are people talking about 6G already?
Some experts say [5G won't be able to meet the latency and reliability targets][27] it is shooting for. These skeptics are already looking ahead to 6G, which they say will try to address these projected shortcomings.
There is [a group researching new technologies that can be rolled into 6G][28] that calls itself the Center for Converged TeraHertz Communications and Sensing (ComSenTer). Part of the spec they're working on calls for 100 Gbps speeds for every device.
In addition to improving reliability and boosting speed, 6G is also trying to enable thousands of simultaneous connections. If successful, this feature could help to network IoT devices, which can be deployed by the thousands as sensors in a variety of industrial settings.
Even in its embryonic form, 6G may already be facing security concerns due to the newly discovered [potential for man-in-the-middle attacks in terahertz-based networks][29]. The good news is that there's plenty of time to find solutions to the problem. 6G networks aren't expected to start rolling out until 2030.
**More about 5G networks:**
* [How enterprises can prep for 5G networks][30]
* [5G vs 4G: How speed, latency and apps support differ][31]
* [Private 5G networks are coming][32]
* [5G and 6G wireless have security issues][33]
* [How millimeter-wave wireless could help support 5G and IoT][34]
Join the Network World communities on [Facebook][35] and [LinkedIn][36] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html#tk.rss_all
作者:[Josh Fruhlinger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Josh-Fruhlinger/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/04/5g-100718139-large.jpg
[2]: https://www.networkworld.com/article/3203489/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html
[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[4]: https://www.networkworld.com/article/3330603/5g-versus-4g-how-speed-latency-and-application-support-differ.html
[5]: https://www.theverge.com/2018/6/15/17467734/5g-nr-standard-3gpp-standalone-finished
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[7]: https://www.networkworld.com/article/3291323/millimeter-wave-wireless-could-help-support-5g-and-iot.html
[8]: https://spectrum.ieee.org/video/telecom/wireless/5g-bytes-small-cells-explained
[9]: https://www.networkworld.com/article/3250268/what-is-mu-mimo-and-why-you-need-it-in-your-wireless-routers.html
[10]: https://spectrum.ieee.org/tech-talk/telecom/wireless/5g-researchers-achieve-new-spectrum-efficiency-record
[11]: https://www.networkworld.com/article/3262991/future-wireless-networks-will-have-no-capacity-limits.html
[12]: https://venturebeat.com/2018/05/14/worlds-first-commercial-5g-network-launches-in-qatar/
[13]: https://www.lifewire.com/5g-availability-world-4156244
[14]: https://www.digitaltrends.com/computing/verizon-5g-home-promises-up-to-gigabit-internet-speeds-for-50/
[15]: https://lifehacker.com/heres-your-cheat-sheet-for-verizons-new-5g-data-plans-1833278817
[16]: https://www.theverge.com/2018/10/2/17927712/verizon-5g-home-internet-real-speed-meaning
[17]: https://www.cnn.com/2018/12/18/tech/5g-mobile-att/index.html
[18]: https://www.networkworld.com/article/3339720/like-4g-before-it-5g-is-being-hyped.html?nsdr=true
[19]: https://www.digitaltrends.com/mobile/sprint-5g-rollout/
[20]: https://www.cnet.com/news/t-mobile-delays-full-600-mhz-5g-launch-until-second-half/
[21]: https://lets5g.com/
[22]: https://www.verizonwireless.com/support/5g-moto-mod-faqs/?AID=11365093&SID=100098X1555750Xbc2e857934b22ebca1a0570d5ba93b7c&vendorid=CJM&PUBID=7105813&cjevent=2e2150cb478c11e98183013b0a1c0e0c
[23]: https://www.digitaltrends.com/cell-phone-reviews/moto-z3-review/
[24]: https://www.businessinsider.com/samsung-galaxy-s10-5g-which-us-cities-have-5g-networks-2019-2
[25]: https://www.cnet.com/news/why-apples-in-no-rush-to-sell-you-a-5g-iphone/
[26]: https://mashable.com/2012/09/09/iphone-5-4g-lte/#hYyQUelYo8qq
[27]: https://www.networkworld.com/article/3305359/6g-will-achieve-terabits-per-second-speeds.html
[28]: https://www.networkworld.com/article/3285112/get-ready-for-upcoming-6g-wireless-too.html
[29]: https://www.networkworld.com/article/3315626/5g-and-6g-wireless-technologies-have-security-issues.html
[30]: https://www.networkworld.com/article/3306720/mobile-wireless/how-enterprises-can-prep-for-5g.html
[31]: https://www.networkworld.com/article/3330603/mobile-wireless/5g-versus-4g-how-speed-latency-and-application-support-differ.html
[32]: https://www.networkworld.com/article/3319176/mobile-wireless/private-5g-networks-are-coming.html
[33]: https://www.networkworld.com/article/3315626/network-security/5g-and-6g-wireless-technologies-have-security-issues.html
[34]: https://www.networkworld.com/article/3291323/mobile-wireless/millimeter-wave-wireless-could-help-support-5g-and-iot.html
[35]: https://www.facebook.com/NetworkWorld/
[36]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 Essentials for Achieving Resiliency at the Edge)
[#]: via: (https://www.networkworld.com/article/3386438/3-essentials-for-achieving-resiliency-at-the-edge.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
3 Essentials for Achieving Resiliency at the Edge
======
### Edge computing requires different thinking and management to ensure the always-on availability that users have come to demand.
![iStock][1]
> “The IT industry has done a good job of making robust data centers that are highly manageable, highly secure, with redundant systems,” [says Kevin Brown][2], SVP Innovation and CTO for Schneider Electric's Secure Power Division.
However, he continues, companies then connect these data centers to messy edge closets and server rooms, which over time have become “micro mission-critical data centers” in their own right — making system availability vital. If these edge sites are not designed and managed correctly, the consequences can be disastrous when users cannot connect to business-critical applications.
To avoid unacceptable downtime, companies should incorporate three essential ingredients into their edge computing deployments: remote management, physical security, and rapid deployments.
**Remote management**
Depending on the company's size, staff could be managing several — or many — edge sites. Not only is this time-consuming and costly, it's also complex, especially if protocols differ from site to site.
While some organizations might deploy traditional remote-monitoring technology to manage these sites, it's important to note these tools' limitations: they don't provide real-time status updates, they are largely reactionary rather than proactive, and they are sometimes limited in terms of data output.
Beyond overcoming these limitations, the economics of managing edge sites make a digital, or cloud-based, solution worth considering. In addition to cost savings, these platforms provide:
* Simplification in monitoring across edge sites
* Real-time visibility, right down to any device on the network
* Predictive analytics, including data-driven intelligence and recommendations to ensure proactive service delivery
**Physical security**
Small, local edge computing sites are often situated within larger corporate or wide-open spaces, sometimes in highly accessible, shared offices and public areas. And sometimes they're set up on the fly for a time-sensitive project.
However, when there is no dedicated location and open racks are unsecured, the risks of malicious and accidental incidents escalate.
To prevent unauthorized access to IT equipment at edge computing sites, proper physical security is critical and requires:
* Physical space monitoring, with environmental sensors for temperature and humidity
* Access control, with biometric sensors as an option
* Audio and video surveillance and monitoring with recording
  * Installation of IT equipment within a secure enclosure, where possible
**Rapid deployments**
The [benefits of edge computing][3] are significant, especially the ability to bring bandwidth-intensive computing closer to the user, which leads to faster speed to market and greater productivity.
Create a holistic plan that will enable the company to quickly deploy edge sites, while ensuring resiliency and reliability. That means having a standardized, repeatable process including:
* Pre-configured, integrated equipment that combines server, storage, networking, and software in a single enclosure — a prefabricated micro data center, if you will
* Designs that specify supporting racks, UPSs, PDUs, cable management, airflow practices, and cooling systems
These best practices, as well as a balanced, systematic approach to edge computing deployments, will ensure the always-on availability that today's employees and users have come to expect.
Learn how to enable resiliency within your edge computing deployment at [APC.com][4].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386438/3-essentials-for-achieving-resiliency-at-the-edge.html#tk.rss_all
作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-900882382-100792635-large.jpg
[2]: https://www.youtube.com/watch?v=IfsCTFSH6Jc
[3]: https://www.networkworld.com/article/3342455/how-edge-computing-will-bring-business-to-the-next-level.html
[4]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp

View File

@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5G: A deep dive into fast, new wireless)
[#]: via: (https://www.networkworld.com/article/3385030/5g-a-deep-dive-into-fast-new-wireless.html#tk.rss_all)
[#]: author: (Craig Mathias https://www.networkworld.com/author/Craig-Mathias/)
5G: A deep dive into fast, new wireless
======
### 5G wireless networks are just about ready for prime time, overcoming backhaul and backward-compatibility issues, and promising the possibility of all-mobile networking through enhanced throughput.
The next step in the evolution of wireless WAN communications, [5G networks][1], is about to hit the front pages, and for good reason: it will complete the evolution of cellular from wireline augmentation to wireline replacement, and strategically from mobile-first to mobile-only.
So it's not too early to start at least basic planning to understand how 5G will fit into and benefit IT plans across organizations of all sizes, industries, and missions.
**[ From Mobile World Congress:[The time of 5G is almost here][2] ]**
5G will of course provide end users with the additional throughput, capacity, and other elements to address the continuing and dramatic growth in geographic availability, user base, range of subscriber devices, demand for capacity, and application requirements. But it will also enable service providers to benefit from new opportunities in overall strategy, service offerings, and broadened marketplace presence.
A look at the key features you can expect in 5G wireless.
![A look at the key features you can expect in 5G wireless.][3]
This article explores the technologies and market drivers behind 5G, with an emphasis on what 5G means to enterprise and organizational IT.
While 5G remains an imprecise term today, key objectives for the development of the advances required have become clear. These are as follows:
## 5G speeds
As is the case with Wi-Fi, major advances in cellular are first and foremost defined by new upper-bound _throughput_ numbers. The magic number here for 5G is in fact a _floor_ of 1 Gbps, with numbers as high as 10 Gbps mentioned by some. However, and again as is the case with Wi-Fi, it's important to think more in terms of overall individual-cell and system-wide _capacity_. We believe, then, that per-user throughput of 50 Mbps is a more reasonable but clearly still remarkable working assumption, with up to 300 Mbps peak throughput realized in some deployments over the next five years. The possibility of reaching higher throughput than that exceeds our planning horizon, but such is, well, possible.
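To put those figures in everyday terms, here is a quick sketch of how long moving one gigabyte would take at each of the rates mentioned above, assuming an ideal link with no protocol overhead:

```
# 1 GB = 8,000 megabits; seconds per GB = 8000 / rate_in_Mbps
for mbps in 50 300 1000; do
  awk -v r="$mbps" 'BEGIN { printf "%5d Mbps -> %6.1f seconds per GB\n", r, 8000 / r }'
done
```

That is 160 seconds at the 50 Mbps working assumption, about 27 seconds at the 300 Mbps peak, and 8 seconds at the 1 Gbps floor.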
## Reduced latency
Perhaps even more important than throughput, though, is a reduction in the round-trip time for each packet. Reducing latency is important for voice (which will most certainly be all-IP in 5G implementations), for video, and, again, for improving overall capacity. The over-the-air latency goal for 5G is less than 10 ms, with 1 ms possible in some defined classes of service.
## 5G network management and OSS
Operators are always seeking to reduce overhead and operating expense, so enhancements to both system management and operational support systems (OSS) are expected, yielding improvements in reliability, availability, serviceability, resilience, consistency, analytics capabilities, and operational efficiency. In most cases, however, these benefits will be transparent to end users.
## Mobility and 5G technology
Very-high-speed user mobility, up to hundreds of kilometers per hour, will be supported, thus serving users on all modes of transportation. Regulatory and situation-dependent restrictions (most notably on aircraft) will, however, still apply.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385030/5g-a-deep-dive-into-fast-new-wireless.html#tk.rss_all
作者:[Craig Mathias][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Craig-Mathias/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[3]: https://images.idgesg.net/images/article/2017/06/2017_nw_5g_wireless_key_features-100727485-large.jpg

View File

@ -0,0 +1,90 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Announcing the release of Fedora 30 Beta)
[#]: via: (https://fedoramagazine.org/announcing-the-release-of-fedora-30-beta/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
Announcing the release of Fedora 30 Beta
======
![][1]
The Fedora Project is pleased to announce the immediate availability of Fedora 30 Beta, the next big step on our journey to the exciting Fedora 30 release.
Download the prerelease from our Get Fedora site:
* [Get Fedora 30 Beta Workstation][2]
* [Get Fedora 30 Beta Server][3]
* [Get Fedora 30 Beta Silverblue][4]
Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3 (a sample command for writing an ARM image to an SD card follows this list):
* [Get Fedora 30 Beta Spins][5]
* [Get Fedora 30 Beta Labs][6]
* [Get Fedora 30 Beta ARM][7]
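If you grab one of the ARM images, Fedora's arm-image-installer utility can write it to an SD card. A minimal sketch, assuming a Raspberry Pi 3 target; the image filename and the /dev/sdX device path are placeholders you must adjust for your download and hardware:

```
# install the flashing tool, then write the image; this is destructive to /dev/sdX
sudo dnf install arm-image-installer
sudo arm-image-installer --image=Fedora-Workstation-30_Beta-armhfp.raw.xz \
  --target=rpi3 --media=/dev/sdX --resizefs
```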
### Beta Release Highlights
#### New desktop environment options
Fedora 30 Beta includes two new desktop environment options. [DeepinDE][8] and [Pantheon Desktop][9] join GNOME, KDE Plasma, Xfce, and others as options for users to customize their Fedora experience.
#### DNF performance improvements
All dnf repository metadata for Fedora 30 Beta is compressed with the zchunk format in addition to xz or gzip. zchunk is a new compression format designed to allow for highly efficient deltas. When Fedora's metadata is compressed using zchunk, dnf will download only the differences between any earlier copies of the metadata and the current version.
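If you want to see this in action, one way is to refresh the metadata and look inside dnf's cache; note that the exact cache path and the .zck filename suffix are assumptions on my part based on the format's name, so treat this as a sketch:

```
# refresh the metadata, then look for zchunk-compressed files in dnf's cache
sudo dnf makecache
ls /var/cache/dnf/*/repodata/ | grep -i zck
```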
#### GNOME 3.32
Fedora 30 Workstation Beta includes GNOME 3.32, the latest version of the popular desktop environment. GNOME 3.32 features an updated visual style, including the user interface, the icons, and the desktop itself. For a full list of GNOME 3.32 highlights, see the [release notes][10].
#### Other updates
Fedora 30 Beta also includes updated versions of many popular packages like Golang, the Bash shell, the GNU C Library, Python, and Perl. For a full list, see the [Change set][11] on the Fedora Wiki. In addition, many Python 2 packages are removed in preparation for Python 2 end-of-life on 2020-01-01.
#### Testing needed
Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on Freenode. As testing progresses, common issues are tracked on the [Common F30 Bugs page][12].
For tips on reporting a bug effectively, read [how to file a bug][13].
#### What is the Beta Release?
A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn't just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora, but Linux and free software as a whole.
#### More information
For more detailed information about what's new in the Fedora 30 Beta release, you can consult the [Fedora 30 Change set][11]. It contains more technical information about the new packages and improvements shipped with this release.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/announcing-the-release-of-fedora-30-beta/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/f30-beta-816x345.jpg
[2]: https://getfedora.org/workstation/prerelease/
[3]: https://getfedora.org/server/prerelease/
[4]: https://silverblue.fedoraproject.org/download
[5]: https://spins.fedoraproject.org/prerelease
[6]: https://labs.fedoraproject.org/prerelease
[7]: https://arm.fedoraproject.org/prerelease
[8]: https://www.deepin.org/en/dde/
[9]: https://www.fosslinux.com/4652/pantheon-everything-you-need-to-know-about-the-elementary-os-desktop.htm
[10]: https://help.gnome.org/misc/release-notes/3.32/
[11]: https://fedoraproject.org/wiki/Releases/30/ChangeSet
[12]: https://fedoraproject.org/wiki/Common_F30_bugs
[13]: https://docs.fedoraproject.org/en-US/quick-docs/howto-file-a-bug/

View File

@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Automate password resets with PWM)
[#]: via: (https://opensource.com/article/19/4/automate-password-resets-pwm)
[#]: author: (James Mawson https://opensource.com/users/dxmjames)
Automate password resets with PWM
======
PWM puts responsibility for password resets in users' hands, freeing IT for more pressing tasks.
![Password][1]
One of the things that can be "death by a thousand cuts" for any IT team's sanity and patience is constantly being asked to reset passwords.
The best way we've found to handle this is to ditch your hashing algorithms and store your passwords in plaintext so that your users can retrieve them at any time.
Ha! I am, of course, kidding. That's a terrible idea.
When your users forget their passwords, you'll still need to reset them. But is there a way to break free from the monotonous, repetitive task of doing it manually?
### PWM puts password resets in users' hands
[PWM][2] is an open source ([GPLv2][3]) [JavaServer Pages][4] application that provides a webpage where users can submit their own password resets. If certain conditions are met—which you can configure—PWM will send a password reset instruction to whichever directory service you've connected it to.
![PWM password reset screen][5]
One thing that's great about PWM is it's very easy to add it to an existing network. If you're largely happy with what you've already built—just sick of processing password requests manually—you can just throw PWM into the mix.
PWM works with any implementation of [LDAP][6] and is written to run on [Apache Tomcat][7]. Once you get it up and running, you can administer it through a browser-based dashboard.
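Deployment follows the usual Tomcat pattern of dropping a WAR file into the webapps directory. A minimal sketch; the release URL, the version vX.Y.Z, the webapps path, and the Tomcat service name below are hypothetical placeholders, so check the project's releases page and your distro's Tomcat layout:

```
# fetch a PWM release WAR (URL and version vX.Y.Z are placeholders) and deploy it
wget -O pwm.war https://github.com/pwm-project/pwm/releases/download/vX.Y.Z/pwm.war
sudo cp pwm.war /var/lib/tomcat/webapps/
sudo systemctl restart tomcat
# Tomcat unpacks the WAR; PWM should then answer at http://<server>:8080/pwm
```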
### Why PWM is better than Microsoft SSPR
As much as our team prefers open source, we still have to deal with Windows networks. Of course, Microsoft has its own password-reset tool, called Self Service Password Reset (SSPR). But I prefer PWM, and not just because of a general preference for open source. I believe PWM is better for my use case for the following reasons:
* **SSPR has a very complex licensing system**. You need different products depending on what servers you're running and whose metal they're running on. This is a constraint on your flexibility and a whole extra pain in the neck when it's time to move to new architecture. For [the busy admin who wants to go home on time][8], it's extra bureaucracy to get the purchase approved. PWM just works on what it's configured to work on at no cost.
* **PWM is not just for Windows**. It works with any kind of LDAP server. So, it's one less part you need to worry about if you ever stop using Windows for a certain role. It also means that, once you've gotten the hang of it, you have something in your bag of tricks that you can use in many different environments.
* **PWM is easy to install**. If you know how to install Linux as a virtual machine—and, let's face it, if you're running a network, you probably do—then you're already most of the way there.
PWM can run on Windows, but we prefer to include it in a Windows network by running it on a Linux virtual machine, [for example, Ubuntu Server 16.04][9].
### Risks and rewards of automation
Password resets are an attack vector, so be thoughtful about where and how you use PWM. Automating your password resets can mean an attacker is potentially just one unencrypted email connection away from resetting a password.
To some extent, automating your password resets trades a bit of security for some convenience. So maybe this isn't the right way to handle C-suite user accounts that approve large payments.
On the other hand, manual resets are not 100% secure either—they can be gamed with targeted attacks like spear phishing and social engineering. It's much easier to fall for these scams if your team gets frequent reset requests and is sick of dealing with them. You may benefit from automating the bulk of lower-risk requests so you can focus on protecting the higher-risk accounts manually; this is possible given the time you can save using PWM.
Some of the risks associated with shifting resets to users can be mitigated with PWM's built-in features, such as insisting users verify their password reset request by email or SMS. You can also make PWM accessible only on the intranet.
![PWM configuration options][10]
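If you go the intranet-only route, an ordinary host firewall rule is enough to keep the reset page off the open network. Here is a sketch using ufw on the Ubuntu VM mentioned above; port 8080 (Tomcat's default) and the 10.0.0.0/8 subnet are example values to adapt to your environment:

```
# let the internal subnet reach PWM on Tomcat's port, refuse everyone else
sudo ufw allow from 10.0.0.0/8 to any port 8080 proto tcp
sudo ufw deny 8080/tcp
sudo ufw enable
```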
PWM doesn't store any passwords, so that's one less headache. It does, however, store answers to users' secret questions in a MySQL database that can be configured to be stored locally or on a separate server, depending on your preference.
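Standing up that responses database is a one-time job. A minimal sketch; the database name, account, and password are examples rather than PWM defaults, and should match whatever you enter in PWM's configuration dashboard:

```
# create an empty database and a dedicated account for PWM's challenge/response store
mysql -u root -p <<'SQL'
CREATE DATABASE pwm CHARACTER SET utf8mb4;
CREATE USER 'pwm'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON pwm.* TO 'pwm'@'localhost';
FLUSH PRIVILEGES;
SQL
```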
There are a ton of ways to make PWM look and feel like a polished part of your team's infrastructure. With a little bit of CSS know-how, you can customize the user interface for your business' branding. There are also more options for implementation than you can shake a stick at.
### Wrapping up
PWM is a great open source project, it's actively developed, and it has a helpful online community. It's a great alternative to Microsoft's Azure SSPR solution for small to midsized businesses that have to keep a tight grip on the purse strings, and it slots in neatly to any existing Active Directory infrastructure. It also saves IT's time by outsourcing this mundane task to users.
I advise every network admin to dive in and have a look at the cool stuff PWM offers. Check out the [getting started resources][11] and reach out to the community if you have any questions.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/automate-password-resets-pwm
作者:[James Mawson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dxmjames
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/password.jpg?itok=ec6z6YgZ (Password)
[2]: https://github.com/pwm-project/pwm
[3]: https://github.com/pwm-project/pwm/blob/master/LICENSE
[4]: https://www.oracle.com/technetwork/java/index-jsp-138231.html
[5]: https://opensource.com/sites/default/files/uploads/pwm_password-reset.png (PWM password reset screen)
[6]: https://opensource.com/business/14/5/top-4-open-source-ldap-implementations
[7]: http://tomcat.apache.org/
[8]: https://opensource.com/article/18/7/tools-admin
[9]: https://blog.dxmtechsupport.com.au/adding-pwm-password-reset-tool-to-windows-network/
[10]: https://opensource.com/sites/default/files/uploads/pwm-configuration.png (PWM configuration options)
[11]: https://github.com/pwm-project/pwm#links

View File

@ -0,0 +1,202 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install and Configure Plex on Ubuntu Linux)
[#]: via: (https://itsfoss.com/install-plex-ubuntu)
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
How to Install and Configure Plex on Ubuntu Linux
======
If you are a media hog with a big collection of movies, photos, or music, the following capabilities would be very handy.
* Share media with family and other people.
* Access media from different devices and platforms.
Plex ticks all of those boxes and more. Plex is a client-server media player system with additional features. Plex supports a wide array of platforms, both for the server and the player. No wonder it is considered one of the [best media servers for Linux][1].
Note: Plex is not a completely open source media player. We have covered it because it is one of the frequently [requested tutorials][2].
### Install Plex on Ubuntu
For this guide I am installing Plex on Elementary OS, an Ubuntu-based distribution. You can still follow along if you are installing it on a headless Linux machine.
Go to the Plex [downloads][3] page, select Ubuntu 64-bit (I would not recommend installing it on a 32-bit CPU) and download the .deb file.
![][4]
[Download Plex][3]
You can [install the .deb file][5] by just clicking on the package. If it does not work, you can use an installer like **Eddy** or **[GDebi][6].**
You can also install it via the terminal using dpkg as shown below.
#### Install Plex on a headless Linux system
For a [headless system][7], you can use **wget** to download the .deb package. This example uses the current link for Ubuntu at the time of writing. Be sure to use the up-to-date version supplied on the Plex website.
```
wget https://downloads.plex.tv/plex-media-server-new/1.15.1.791-8bec0f76c/debian/plexmediaserver_1.15.1.791-8bec0f76c_amd64.deb
```
The above command downloads the 64-bit .deb package. Once downloaded, install the package using the following command.
```
# installing a package requires root privileges
sudo dpkg -i plexmediaserver*.deb
```
#### Enable version upgrades for Plex
The .deb installation does create an entry in /etc/apt/sources.list.d, but [repository updates][8] are not enabled by default and the contents of _plexmediaserver.list_ are commented out. This means that when a new Plex version is available, your system will not be able to update your Plex install.
To enable repository updates, you can either remove the # from the line starting with deb (a sed one-liner for this follows below) or run the following command.
```
echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
```
The above command overwrites the entry in the sources.list.d directory.
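If you prefer to un-comment the existing line instead, a one-line sed edit does the same thing, assuming the default file the .deb created:

```
# strip the leading "# " from the deb line in place
sudo sed -i 's/^# *deb/deb/' /etc/apt/sources.list.d/plexmediaserver.list
```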
We also need to add Plex's public key to facilitate secure and safe downloads. You can try running the command below; unfortunately, this **did not work for me** and the [GPG][9] key was not added.
```
curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
```
To fix this issue, I found the key hash in the error message after running _sudo apt-get update._
![][10]
```
97203C7B3ADCA79D
```
The above hash can be used to add the key from a keyserver. Run the commands below to add the key.
```
# fetch the Plex signing key by its ID from the given location
gpg --keyserver https://downloads.plex.tv/plex-keys/PlexSign.key --recv-keys 97203C7B3ADCA79D
```
```
# export the key in ASCII armor and hand it to apt's trusted keyring
gpg --export --armor 97203C7B3ADCA79D | sudo apt-key add -
```
You should see an **OK** once the key is added.
Run the command below to refresh the package lists and verify that the repository was added successfully.
```
sudo apt update
```
To update Plex to the newest version available in the repository, run the [apt-get command][11] below.
```
sudo apt-get --only-upgrade install plexmediaserver
```
Once installed, the Plex service starts running automatically. You can check whether it's running with this command in a terminal.
```
systemctl status plexmediaserver
```
If the service is running properly you should see something like this.
![Check the status of Plex Server][12]
### Configuring Plex as a Media Server
The Plex server is accessible on ports 32400 and 32401. Navigate to **localhost:32400** or **localhost:32401** using a browser. You should replace localhost with the IP address of the machine running the Plex server if you are going headless.
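One extra trick for headless setups: if the first-run setup page misbehaves when accessed by IP, an SSH tunnel makes the remote server look like localhost to your browser. The username and address below are placeholders:

```
# forward local port 32400 to the server's Plex port; "user@your-server-ip" is a placeholder
ssh -L 32400:localhost:32400 user@your-server-ip
# then browse to http://localhost:32400/web on your local machine
```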
The first time you connect, you are required to sign up or log in to your Plex account.
![Plex Login Page][13]
Now you can go ahead and give a friendly name to your Plex Server. This name will be used to identify the server over the network. You can also have multiple Plex servers identified by different names on the same network.
![Plex Server Setup][14]
Now it is finally time to add all your collections to the Plex library. Here your collections will automatically be indexed and organized.
You can click the add library button to add all your collections.
![Add Media Library][15]
![][16]
Navigate to the location of the media you want to add to Plex.
![][17]
You can add multiple folders and different types of media.
When you are done, you are taken to a very slick-looking Plex UI. You can already see the contents of your libraries showing up on the home screen. It also automatically selects a thumbnail and fills in the metadata.
![][18]
You can head over to the settings and configure some of them. You can create new users (**only with Plex Pass**), adjust the transcoding settings, set scheduled library updates, and more.
If your ISP has assigned a public IP to your router, you can also enable Remote Access. This means that you can be traveling and still access your libraries at home, provided you have your Plex server running all the time.
Now you are all set up and ready, but how do you access your media? Yes, you can access it through your browser, but Plex has a presence on almost every platform you can think of, including Android Auto.
### Accessing Your Media and Plex Pass
You can access your media either by using the web browser (the same address you used earlier) or Plex's suite of apps. The web browser experience is pretty good on computers and could be better on phones.
Plex apps provide a much better experience. But the iOS and Android apps need to be activated with a [Plex Pass][19]. Without activation, you are limited to one minute of video playback, and images are watermarked.
Plex Pass is a premium subscription service which activates the mobile apps and enables more features. You can also individually activate apps tied to a particular phone for a cheaper price. With the Plex Pass you can also create multiple users and set permissions, which is a very handy feature.
You can check out all the benefits of Plex Pass [here][19].
_Note: Plex Media Player is free on all platforms other than the Android and iOS apps._
**Conclusion**
That's about all you need to know for the first-time configuration. Go ahead and explore the Plex UI; it also gives you access to free online content like podcasts and music through Tidal.
There are alternatives to Plex, like [Jellyfin][20], which is free, but its native apps are in beta and on the road to being published in the app stores. You can also use a NAS with any of the freely available media centers like Kodi, OpenELEC, or even VLC media player.
Here is an article listing the [best Linux media servers.][1]
Let us know your experience with Plex and what you use for your media sharing needs.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-plex-ubuntu
作者:[Chinmay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/chinmay/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-media-server/
[2]: https://itsfoss.com/request-tutorial/
[3]: https://www.plex.tv/media-server-downloads/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/downloads-plex.png?ssl=1
[5]: https://itsfoss.com/install-deb-files-ubuntu/
[6]: https://itsfoss.com/gdebi-default-ubuntu-software-center/
[7]: https://www.lions-wing.net/lessons/servers/home-server.html
[8]: https://itsfoss.com/ubuntu-repositories/
[9]: https://www.gnupg.org/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/Screenshot-from-2019-03-26-07-21-05-1.png?ssl=1
[11]: https://itsfoss.com/apt-get-linux-guide/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/check-plex-service.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/plex-home-page.png?ssl=1
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Plex-server-setup.png?ssl=1
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/add-library.png?ssl=1
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/add-plex-library.png?ssl=1
[17]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/add-plex-folder.png?ssl=1
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/Screenshot-from-2019-03-17-22-27-56.png?ssl=1
[19]: https://www.plex.tv/plex-pass/
[20]: https://jellyfin.readthedocs.io/en/latest/
