Translated by liujing97

This commit is contained in:
liujing97 2019-04-09 16:50:15 +08:00
commit ad59536e87
69 changed files with 6534 additions and 670 deletions

View File

@ -1,25 +1,26 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10698-1.html)
[#]: subject: (How To Set Password Policies In Linux)
[#]: via: (https://www.ostechnix.com/how-to-set-password-policies-in-linux/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)
How To Set Password Policies In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2016/03/How-To-Set-Password-Policies-In-Linux-720x340.jpg)
Although Linux is designed to be secure, there are still many ways to put it at risk, and weak passwords are one of them. As a system administrator, you must make sure users have strong passwords, because most system break-ins are caused by weak ones. This tutorial describes how to set password policies such as **password length**, **password complexity**, and **password expiry** on DEB-based systems (such as Debian, Ubuntu, and Linux Mint) and RPM-based systems (such as RHEL, CentOS, and Scientific Linux).
### Set password length on DEB-based systems
By default, all Linux operating systems require passwords to be **at least 6 characters long**. I strongly advise not to go below this limit. Also, do not use your real name, the names of your parents, spouse, or children, or your birthday as a password; even a novice hacker can crack such passwords quickly. A good password must have at least 6 characters and contain digits, uppercase letters, and special characters.
On DEB-based operating systems, the configuration files related to passwords and authentication are usually stored in the `/etc/pam.d/` directory.
To set the minimum password length, edit the `/etc/pam.d/common-password` file:
```
$ sudo nano /etc/pam.d/common-password
@ -33,7 +34,7 @@ password [success=2 default=ignore] pam_unix.so obscure sha512
![][2]
Add the text `minlen=8` at the end. Here I set the minimum password length to `8`:
```
password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
@ -43,15 +44,15 @@ password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
Save and close the file. Now users can no longer set a password shorter than 8 characters.
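You can confirm that the option is in place by inspecting the PAM configuration directly; a quick check, assuming the default Debian/Ubuntu layout:

```
# The pam_unix line we just edited should now end with minlen=8
$ grep pam_unix /etc/pam.d/common-password
password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
```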
### Set password length on RPM-based systems
On **RHEL, CentOS, and Scientific Linux 7.x** systems, run the following command as root to set the password length:
```
# authconfig --passminlen=8 --update
```
To view the minimum password length, run:
```
# grep "^minlen" /etc/security/pwquality.conf
@ -63,7 +64,7 @@ password [success=2 default=ignore] pam_unix.so obscure sha512 minlen=8
minlen = 8
```
On **RHEL, CentOS, and Scientific Linux 6.x** systems, edit the `/etc/pam.d/system-auth` file:
```
# nano /etc/pam.d/system-auth
@ -77,11 +78,11 @@ password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8
![](https://www.ostechnix.com/wp-content/uploads/2016/03/root@server_003-3.jpg)
With the settings above, the minimum password length is `8` characters.
### Set password complexity on DEB-based systems
This setting enforces how many character classes a password must contain, such as uppercase letters, lowercase letters, and other characters.
First, install the password quality checking library with the following command:
@ -89,13 +90,13 @@ password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8
$ sudo apt-get install libpam-pwquality
```
Then, edit the `/etc/pam.d/common-password` file:
```
$ sudo nano /etc/pam.d/common-password
```
To require at least one **uppercase letter** in the password, add the text `ucredit=-1` at the end of the following line:
```
password requisite pam_pwquality.so retry=3 ucredit=-1
@ -115,9 +116,9 @@ password requisite pam_pwquality.so retry=3 dcredit=-1
password requisite pam_pwquality.so retry=3 ocredit=-1
```
As you can see in the example above, we have required the password to contain at least one uppercase letter, one lowercase letter, and one special character. You can set the maximum allowed number of uppercase letters, lowercase letters, and special characters to whatever you like.
You can also set the maximum or minimum number of character classes a password may contain.
The following example shows how to set the minimum number of character classes required in a new password:
@ -125,7 +126,7 @@ password requisite pam_pwquality.so retry=3 ocredit=-1
password requisite pam_pwquality.so retry=3 minclass=2
```
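These options can also be combined on a single line; a minimal sketch (the option names are standard `pam_pwquality` parameters, and the values are illustrative):

```
# Require at least 8 characters drawn from at least 3 character classes,
# including at least one digit and one uppercase letter
password requisite pam_pwquality.so retry=3 minlen=8 minclass=3 dcredit=-1 ucredit=-1
```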
### Set password complexity on RPM-based systems
**On RHEL 7.x / CentOS 7.x / Scientific Linux 7.x:**
@ -201,7 +202,7 @@ dcredit = -1
ocredit = -1
```
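(The commands used on 7.x are elided by the diff hunk above.) Whatever you set, the effective values end up in `/etc/security/pwquality.conf`, so you can verify them the same way the length setting was checked earlier; the values shown here are the ones from the example above:

```
# grep -E "credit|minclass" /etc/security/pwquality.conf
dcredit = -1
ocredit = -1
```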
On **RHEL 6.x / CentOS 6.x / Scientific Linux 6.x** systems, edit the `/etc/pam.d/system-auth` file as root:
```
# nano /etc/pam.d/system-auth
@ -212,17 +213,17 @@ ocredit = -1
```
password requisite pam_cracklib.so try_first_pass retry=3 type= minlen=8 dcredit=-1 ucredit=-1 lcredit=-1 ocredit=-1
```
With the settings above, the password must contain at least `8` characters, including at least one uppercase letter, one lowercase letter, one digit, and one other character.

### Set password expiry on DEB-based systems
Now, we are going to set the following policies:
1. The maximum number of days a password may be used.
2. The minimum number of days allowed between password changes.
3. The number of days of warning given before a password expires.
To set these policies, edit:
```
@ -239,7 +240,7 @@ PASS_WARN_AGE 7
![](https://www.ostechnix.com/wp-content/uploads/2016/03/sk@sk-_002-8.jpg)
As you can see in the example above, users should change their password every `100` days, and warning messages start appearing `7` days before the password expires.
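The file contents are elided by the diff hunk above; for reference, this ageing policy lives in `/etc/login.defs`, where the relevant lines would look roughly like this (the parameter names are the standard shadow-utils settings; the values are illustrative, matching the 100-day and 7-day figures described here):

```
# Maximum number of days a password may be used
PASS_MAX_DAYS   100
# Minimum number of days allowed between password changes
PASS_MIN_DAYS   0
# Number of days of warning given before a password expires
PASS_WARN_AGE   7
```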
Note that these settings take effect only for newly created users.
@ -280,6 +281,7 @@ Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```
As you can see in the output above, the password never expires.
To change the password expiry of an existing user:
@ -288,22 +290,23 @@ Number of days of warning before password expires : 7
$ sudo chage -E 24/06/2018 -m 5 -M 90 -I 10 -W 10 sk
```
The command above sets the password of the user `sk` to expire on `24/06/2018`. It also sets the minimum interval between password changes to `5` days and the maximum to `90` days. The account will be locked automatically `10` days after the password expires, and warning messages will be shown for `10` days before expiry.
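You can verify the new settings with the same `chage` listing shown earlier:

```
# Show the password ageing information for user sk
$ sudo chage -l sk
```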
### Set password expiry on RPM-based systems
This works the same way as on DEB-based systems.
### Forbid previously used passwords on DEB-based systems
You can prevent users from setting a password that has already been used; in other words, users cannot reuse the same password.
To set this, edit the `/etc/pam.d/common-password` file:
```
$ sudo nano /etc/pam.d/common-password
```
Find the following line and add the text `remember=5` at the end:
```
password        [success=2 default=ignore]      pam_unix.so obscure use_authtok try_first_pass sha512 remember=5
@ -313,27 +316,23 @@ password        [success=2 default=ignore]      pam_unix.so obscure use_a
### Forbid previously used passwords on RPM-based systems
This works the same on RHEL 6.x and RHEL 7.x and their derivatives such as CentOS and Scientific Linux.
root身份编辑 **/etc/pam.d/system-auth** 文件,
root 身份编辑 `/etc/pam.d/system-auth` 文件,
```
# vi /etc/pam.d/system-auth
```
Find the following line and add the text `remember=5` at the end:
```
password     sufficient     pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5
```
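As background (not mentioned in the article itself): with `pam_unix`, the remembered password hashes are kept in `/etc/security/opasswd`, so after a password change you can confirm that history is being recorded:

```
# Each user accumulates up to 5 old password hashes in this root-only file
# ls -l /etc/security/opasswd
```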
Now you know what password policies are in Linux, and how to set different password policies on DEB- and RPM-based systems.

That's all for now. I will post another interesting and useful article here soon. Until then, stay tuned. If you find this tutorial helpful, please share it on your social and professional networks and support us.
--------------------------------------------------------------------------------
@ -342,7 +341,7 @@ via: https://www.ostechnix.com/how-to-set-password-policies-in-linux/
Author: [SK][a]
Selected by: [lujun9972][b]
Translated by: [liujing97](https://github.com/liujing97)
Proofread by: [wxy](https://github.com/wxy)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

View File

@ -1,48 +1,48 @@
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: translator: "Auk7F7"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: subject: "Arch-Wiki-Man A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline"
[#]: via: "https://www.2daygeek.com/arch-wiki-man-a-tool-to-browse-the-arch-wiki-pages-as-linux-man-page-from-offline/"
[#]: author: "[Prakash Subramanian](https://www.2daygeek.com/author/prakash/)"
[#]: url: " "
[#]: author: "Prakash Subramanian https://www.2daygeek.com/author/prakash/"
[#]: url: "https://linux.cn/article-10694-1.html"
Arch-Wiki-Man: A Tool to Browse the Arch Wiki Pages as Linux Man Pages, Offline
======
Getting online is easy these days, but technology still has its limits. I am amazed to see how technology has evolved, but at the same time there is decline happening in all sorts of areas.
When you search for something about other Linux distributions, most of the time you will get a third-party link, but for Arch Linux you will always get an Arch Wiki page in the results.
That is because the Arch Wiki provides most of the solutions, apart from third-party websites.
Until now, you may have used a web browser to find solutions for your Arch Linux system, but you no longer have to.
A tool named arch-wiki-man provides a faster way to do this from the command line. If you are an Arch Linux enthusiast, I suggest you read the [Arch Linux post-installation guide][1], which can help you tune your system for daily use.
### What is arch-wiki-man?
The [arch-wiki-man][2] tool allows users to search the Arch Wiki pages offline from the command line (CLI). It lets users access and search the entire wiki in the style of Linux man pages.
Moreover, you don't need to switch to a GUI. Updates are pushed automatically every two days, so your local copy of the Arch Wiki pages stays up to date. The tool is named `awman`, which is short for Arch Wiki Man.
We have previously written about a similar tool, the [Arch Wiki command-line utility][3] (arch-wiki-cli). It allows users to search the Arch Wiki from the internet, but you need to be online to use it.
### How to install the arch-wiki-man tool?
The arch-wiki-man tool is available in the AUR (the Arch User Repository), so we need to use an AUR helper to install it. Many AUR helpers are available, and we have previously written about two popular ones, the [Yaourt AUR helper][4] and the [Packer AUR helper][5].
```
$ yaourt -S arch-wiki-man
```
or
```
$ packer -S arch-wiki-man
```
Alternatively, we can install it using the npm package manager. Make sure you have [NodeJS][6] installed on your system, then run the following command to install it:
```
$ npm install -g arch-wiki-man
@ -61,13 +61,15 @@ $ sudo awman-update
arch-wiki-md-repo has been successfully updated or reinstalled.
```
`awman-update` is the faster and more convenient way to update. However, you can also get updates by reinstalling arch-wiki-man with one of the following commands:
```
$ yaourt -S arch-wiki-man
```
or
```
$ packer -S arch-wiki-man
```
@ -81,7 +83,7 @@ $ awman Search-Term
### How to search for multiple matches?
If you want to list the titles of all results containing the string “installation”, run a command in the following format. If the output has multiple results, you will get a selection menu to browse each item.
```
$ awman installation
@ -89,35 +91,39 @@ $ awman installation
![][8]
A screenshot of the detail page:
![][9]
### Search for a given string in titles and descriptions
`-d``--desc-search` 选项允许用户在标题和描述中搜索给定的字符串。
`-d``--desc-search` 选项允许用户在标题和描述中搜索给定的字符串。
```
$ awman -d mirrors
```
or
```
$ awman --desc-search mirrors
? Select an article: (Use arrow keys)
[1/3] Mirrors: Related articles
[2/3] DeveloperWiki-NewMirrors: Contents
[3/3] Powerpill: Powerpill is a pac
```
### Search for a given string in page contents
`-k``--apropos` 选项也允许用户在内容中搜索给定的字符串。但须注意,此选项会显著降低搜索速度,因为此选项会扫描整个 Wiki 页面的内容。
`-k``--apropos` 选项也允许用户在内容中搜索给定的字符串。但须注意,此选项会显著降低搜索速度,因为此选项会扫描整个 Wiki 页面的内容。
```
$ awman -k openjdk
```
or
```
$ awman --apropos openjdk
? Select an article: (Use arrow keys)
[1/26] Hadoop: Related articles
@ -132,13 +138,15 @@ $ awman --apropos openjdk
### Open the search results in a browser
`-w``--web` 选项允许用户在 Web 浏览器中打开搜索结果。
`-w``--web` 选项允许用户在 Web 浏览器中打开搜索结果。
```
$ awman -w AUR helper
```
or
```
$ awman --web AUR helper
```
@ -146,7 +154,7 @@ $ awman --web AUR helper
### Search in other languages
`-w``--web` 选项允许用户在 Web 浏览器中打开搜索结果。想要查看支持的语言列表,请运行以下命令。
想要查看支持的语言列表,请运行以下命令。
```
$ awman --list-languages
@ -196,7 +204,7 @@ via: https://www.2daygeek.com/arch-wiki-man-a-tool-to-browse-the-arch-wiki-pages
Author: [Prakash Subramanian][a]
Selected by: [lujun9972][b]
Translated by: [Auk7F7](https://github.com/Auk7F7)
Proofread by: [wxy](https://github.com/wxy)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10695-1.html)
[#]: subject: (Quickly Go Back To A Specific Parent Directory Using bd Command In Linux)
[#]: via: (https://www.2daygeek.com/bd-quickly-go-back-to-a-specific-parent-directory-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
@ -10,45 +10,43 @@
Quickly Go Back to a Specific Parent Directory Using the bd Command in Linux
======
<to 校正我在 ubuntu 上似乎没有按照这个教程成功使用 bd 命令难道我的姿势不对>
Two days ago we wrote an article about `autocd`, a built-in shell variable that helps us [navigate into directories without the cd command][1].
If you want to go back up one directory level, you need to type `cd ..`.
If you want to go back up two levels, you need to type `cd ../..`.
That is normal in Linux, but if you want to go from, say, the ninth directory level back to the third, using the `cd` command is painful.
What's the solution?
Well, there is a solution in Linux: we can use the `bd` command to handle such situations easily.
### What is the bd command?
The `bd` command allows users to quickly go back to a parent directory in Linux instead of typing `cd ../../..` repeatedly.
You can list the contents of a given directory without providing the full path: ``ls `bd Directory_Name` ``. It supports other commands as well, such as `ls`, `ln`, `echo`, `zip`, `tar`, and so on.
It also allows us to execute a shell script without providing the full path: `` `bd p`/shell_file.sh ``.
### How to install the bd command in Linux?
There is no official distribution package for `bd` except on Debian/Ubuntu, so on other systems we need to install it manually.
For Debian/Ubuntu systems, use the [apt-get command][2] or [apt command][3] to install `bd`:
```
$ sudo apt install bd
```
For other Linux distributions, download the `bd` executable binary using the [wget command][4]:
```
$ sudo wget --no-check-certificate -O /usr/local/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd
```
Set executable permission on the `bd` binary:
```
$ sudo chmod +rx /usr/local/bin/bd
@ -61,17 +59,19 @@ $ echo 'alias bd=". bd -si"' >> ~/.bashrc
```
Run the following command to apply the changes:
```
$ source ~/.bashrc
```
To enable auto-completion, perform the following two steps:
```
$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd
$ source /etc/bash_completion.d/bd
```
We have successfully installed and configured the `bd` utility on the system; now it's time to test it.
I will use the following directory path for the tests.
@ -79,7 +79,7 @@ $ sudo source /etc/bash_completion.d/bd
```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ dirs
/usr/share/icons/Adwaita/256x256/apps
@ -94,19 +94,20 @@ daygeek@Ubuntu18:/usr/share/icons$
```
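The actual `bd` invocation is elided by the diff hunk above; based on the prompts shown, that step would look roughly like this (a reconstruction following the pattern of the examples below):

```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ bd icons
/usr/share/icons/
daygeek@Ubuntu18:/usr/share/icons$
```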
Even better, you don't need to type the full directory name; a few letters are enough.
```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ bd i
/usr/share/icons/
daygeek@Ubuntu18:/usr/share/icons$
```
Note: if there are multiple directories with the same name in the hierarchy, `bd` takes you to the closest one (not counting the immediate parent).
If you want to list the contents of a given directory, use the following format. This prints the contents of `/usr/share/icons/`:
```
$ ls -lh `bd icons`
or
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ ls -lh `bd i`
total 64K
drwxr-xr-x 12 root root 4.0K Jul 25 2018 Adwaita
@ -132,7 +133,7 @@ drwxr-xr-x 3 root root 4.0K Jul 25 2018 whiteglass
```
$ `bd i`/users-list.sh
or
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ `bd icon`/users-list.sh
daygeek
thanu
@ -151,7 +152,7 @@ user3
```
$ cd `bd i`/gnome
or
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ cd `bd icon`/gnome
daygeek@Ubuntu18:/usr/share/icons/gnome$
```
@ -167,7 +168,7 @@ drwxr-xr-x 2 root root 4096 Mar 16 05:44 /usr/share/icons//2g
This tutorial lets you quickly go back to a specific parent directory, but there is no option for going forward quickly.
We have another solution for that; it is coming soon, so stay tuned.
--------------------------------------------------------------------------------
@ -176,7 +177,7 @@ via: https://www.2daygeek.com/bd-quickly-go-back-to-a-specific-parent-directory-
Author: [Magesh Maruthamuthu][a]
Selected by: [lujun9972][b]
Translated by: [MjSeven](https://github.com/MjSeven)
Proofread by: [wxy](https://github.com/wxy)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

View File

@ -1,115 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Look Back at the History of Firefox)
[#]: via: (https://itsfoss.com/history-of-firefox)
[#]: author: (John Paul https://itsfoss.com/author/john/)
A Look Back at the History of Firefox
======
The Firefox browser has been a mainstay of the open-source community for a long time. For many years it was the default web browser on (almost) all Linux distros and the lone obstacle to Microsoft's total dominance of the internet. This browser has roots that go back all the way to the very early days of the internet. Since this week marks the 30th anniversary of the internet, there is no better time to talk about how Firefox became the browser we all know and love.
### Early Roots
In the early 1990s, a young man named [Marc Andreessen][1] was working on his bachelor's degree in computer science at the University of Illinois. While there, he started working for the [National Center for Supercomputing Applications][2]. During that time [Sir Tim Berners-Lee][3] released an early form of the web standards that we know today. Marc [was introduced][4] to a very primitive web browser named [ViolaWWW][5]. Seeing that the technology had potential, Marc and Eric Bina created an easy to install browser for Unix named [NCSA Mosaic][6]. The first alpha was released in June 1993. By September, there were ports to Windows and Macintosh. Mosaic became very popular because it was easier to use than other browsing software.
In 1994, Marc graduated and moved to California. He was approached by Jim Clark, who had made his money selling computer hardware and software. Clark had used Mosaic and saw the financial possibilities of the internet. Clark recruited Marc and Eric to start an internet software company. The company was originally named Mosaic Communications Corporation, however, the University of Illinois did not like [their use of the name Mosaic][7]. As a result, the company name was changed to Netscape Communications Corporation.
The companys first project was an online gaming network for the Nintendo 64, but that fell through. The first product they released was a web browser named Mosaic Netscape 0.9, subsequently renamed Netscape Navigator. Internally, the browser project was codenamed mozilla, which stood for “Mosaic killer”. An employee created a cartoon of a [Godzilla like creature][8]. They wanted to take out the competition.
![Early Firefox Mascot][9]Early Mozilla mascot at Netscape
They succeeded mightily. At the time, one of the biggest advantages that Netscape had was the fact that its browser looked and functioned the same on every operating system. Netscape described this as giving everyone a level playing field.
As usage of Netscape Navigator increased, the market share of NCSA Mosaic cratered. In 1995, Netscape went public. [On the first day][10], the stock started at $28, jumped to $75 and ended the day at $58. Netscape was without any rivals.
But that didn't last for long. In the summer of 1995, Microsoft released Internet Explorer 1.0, which was based on Spyglass Mosaic, which was in turn based on NCSA Mosaic. The [browser wars][11] had begun.
Over the next few years, Netscape and Microsoft competed for dominance of the internet. Each added features to compete with the other. Unfortunately, Internet Explorer had an advantage because it came bundled with Windows. On top of that, Microsoft had more programmers and money to throw at the problem. Toward the end of 1997, Netscape started to run into financial problems.
### Going Open Source
![Mozilla Firefox][12]
In January 1998, Netscape open-sourced the code of the Netscape Communicator 4.0 suite. The [goal][13] was to “harness the creative power of thousands of programmers on the Internet by incorporating their best enhancements into future versions of Netscapes software. This strategy is designed to accelerate development and free distribution by Netscape of future high-quality versions of Netscape Communicator to business customers and individuals.”
The project was to be shepherded by the newly created Mozilla Organization. However, the code from Netscape Communicator 4.0 proved to be very difficult to work with due to its size and complexity. On top of that, several parts could not be open sourced because of licensing agreements with third parties. In the end, it was decided to rewrite the browser from scratch using the new [Gecko][14] rendering engine.
In November 1998, Netscape was acquired by AOL in a [stock swap valued at $4.2 billion][15].
Starting from scratch was a major undertaking. Mozilla Firefox (initially nicknamed Phoenix) was created in June 2002 and it worked on multiple operating systems, such as Linux, Mac OS, Microsoft Windows, and Solaris.
The following year, AOL announced that they would be shutting down browser development. The Mozilla Foundation was subsequently created to handle the Mozilla trademarks and handle the financing of the project. Initially, the Mozilla Foundation received $2 million in donations from AOL, IBM, Sun Microsystems, and Red Hat.
In March 2003, Mozilla [announced plans][16] to separate the suite into stand-alone applications because of creeping software bloat. The stand-alone browser was initially named Phoenix. However, the name was changed due to a trademark dispute with the BIOS manufacturer Phoenix Technologies, which had a BIOS-based browser of its own. Phoenix was renamed Firebird, only to run afoul of the Firebird database server people. The browser was once more renamed to the Firefox that we all know.
At the time, [Mozilla said][17], “Weve learned a lot about choosing names in the past year (more than we would have liked to). We have been very careful in researching the name to ensure that we will not have any problems down the road. We have begun the process of registering our new trademark with the US Patent and Trademark office.”
![Mozilla Firefox 1.0][18]Firefox 1.0 : [Picture Credit][19]
The first official release of Firefox was [0.8][20] on February 8, 2004. 1.0 followed on November 9, 2004. Version 2.0 and 3.0 followed in October 2006 and June 2008 respectively. Each major release brought with it many new features and improvements. In many respects, Firefox pulled ahead of Internet Explorer in terms of features and technology, but IE still had more users.
That changed with the release of Googles Chrome browser. In the months before the release of Chrome in September 2008, Firefox accounted for 30% of all [browser usage][21] and IE had over 60%. According to StatCounters [January 2019 report][22], Firefox accounts for less than 10% of all browser usage, while Chrome has over 70%.
Fun Fact
Contrary to popular belief, the logo of Firefox doesnt feature a fox. Its actually a [Red Panda][23]. In Chinese, “fire fox” is another name for the red panda.
### The Future
As noted above, Firefox currently has the lowest market share in its recent history. There was a time when a bunch of browsers were based on Firefox, such as the early version of the [Flock browser][24]. Now most browsers are based on Google technology, such as Opera and Vivaldi. Even Microsoft is giving up on browser development and [joining the Chromium bandwagon][25].
This might seem like quite a downer after the heights of the early Netscape years. But don't forget what Firefox has accomplished. A group of developers from around the world have created the second most used browser in the world. They clawed 30% market share away from Microsoft's monopoly, and they can do it again. After all, they have us, the open source community, behind them.
The fight against the monopoly is one of the several reasons [why I use Firefox][26]. Mozilla regained some of its lost market-share with the revamped release of [Firefox Quantum][27] and I believe that it will continue the upward path.
What event from Linux and open source history would you like us to write about next? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][28].
--------------------------------------------------------------------------------
via: https://itsfoss.com/history-of-firefox
Author: [John Paul][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Marc_Andreessen
[2]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
[5]: http://viola.org/
[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
[8]: http://www.davetitus.com/mozilla/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
[10]: https://www.marketwatch.com/story/netscape-ipo-ignited-the-boom-taught-some-hard-lessons-20058518550
[11]: https://en.wikipedia.org/wiki/Browser_wars
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?resize=800%2C450&ssl=1
[13]: https://web.archive.org/web/20021001071727/wp.netscape.com/newsref/pr/newsrelease558.html
[14]: https://en.wikipedia.org/wiki/Gecko_(software)
[15]: http://news.cnet.com/2100-1023-218360.html
[16]: https://web.archive.org/web/20050618000315/http://www.mozilla.org/roadmap/roadmap-02-Apr-2003.html
[17]: https://www-archive.mozilla.org/projects/firefox/firefox-name-faq.html
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firefox-1.jpg?ssl=1
[19]: https://www.iceni.com/blog/firefox-1-0-introduced-2004/
[20]: https://en.wikipedia.org/wiki/Firefox_version_history
[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
[23]: https://en.wikipedia.org/wiki/Red_panda
[24]: https://en.wikipedia.org/wiki/Flock_(web_browser)
[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
[26]: https://itsfoss.com/why-firefox/
[27]: https://itsfoss.com/firefox-quantum-ubuntu/
[28]: http://reddit.com/r/linuxusersgroup
[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?fit=800%2C450&ssl=1

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (zgj1024 )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,290 +0,0 @@
Moelf translating
Myths about /dev/urandom
======
There are a few things about /dev/urandom and /dev/random that are repeated again and again. Still they are false.
I'm mostly talking about reasonably recent Linux systems, not other UNIX-like systems.
### /dev/urandom is insecure. Always use /dev/random for cryptographic purposes.
Fact: /dev/urandom is the preferred source of cryptographic randomness on UNIX-like systems.
### /dev/urandom is a pseudo random number generator, a PRNG, while /dev/random is a “true” random number generator.
Fact: Both /dev/urandom and /dev/random are using the exact same CSPRNG (a cryptographically secure pseudorandom number generator). They only differ in very few ways that have nothing to do with “true” randomness.
### /dev/random is unambiguously the better choice for cryptography. Even if /dev/urandom were comparably secure, there's no reason to choose the latter.
Fact: /dev/random has a very nasty problem: it blocks.
### But that's good! /dev/random gives out exactly as much randomness as it has entropy in its pool. /dev/urandom will give you insecure random numbers, even though it has long run out of entropy.
Fact: No. Even disregarding issues like availability and subsequent manipulation by users, the issue of entropy “running low” is a straw man. About 256 bits of entropy are enough to get computationally secure numbers for a long, long time.
And the fun only starts here: how does /dev/random know how much entropy there is available to give out? Stay tuned!
### But cryptographers always talk about constant re-seeding. Doesn't that contradict your last point?
Fact: You got me! Kind of. It is true, the random number generator is constantly re-seeded using whatever entropy the system can lay its hands on. But that has (partly) other reasons.
Look, I don't claim that injecting entropy is bad. It's good. I just claim that it's bad to block when the entropy estimate is low.
### That's all good and nice, but even the man page for /dev/(u)random contradicts you! Does anyone who knows about this stuff actually agree with you?
Fact: No, it really doesn't. It seems to imply that /dev/urandom is insecure for cryptographic use, unless you really understand all that cryptographic jargon.
The man page does recommend the use of /dev/random in some cases (it doesn't hurt, in my opinion, but is not strictly necessary), but it also recommends /dev/urandom as the device to use for “normal” cryptographic use.
And while appeal to authority is usually nothing to be proud of, in cryptographic issues you're generally right to be careful and try to get the opinion of a domain expert.
And yes, quite a few experts share my view that /dev/urandom is the go-to solution for your random number needs in a cryptography context on UNIX-like systems. Obviously, their opinions influenced mine, not the other way around.
Hard to believe, right? I must certainly be wrong! Well, read on and let me try to convince you.
I tried to keep it out, but I fear there are two preliminaries to be taken care of, before we can really tackle all those points.
Namely, what is randomness, or better: what kind of randomness am I talking about here?
And, even more important, I'm really not being condescending. I have written this document to have a thing to point to, when this discussion comes up again. More than 140 characters. Without repeating myself again and again. Being able to hone the writing and the arguments itself, benefitting many discussions in many venues.
And I'm certainly willing to hear differing opinions. I'm just saying that it won't be enough to state that /dev/urandom is bad. You need to identify the points you're disagreeing with and engage them.
### You're saying I'm stupid!
Emphatically no!
Actually, I used to believe that /dev/urandom was insecure myself, a few years ago. And it's something you and me almost had to believe, because all those highly respected people on Usenet, in web forums and today on Twitter told us. Even the man page seems to say so. Who were we to dismiss their convincing argument about “entropy running low”?
This misconception isn't so rampant because people are stupid, it is because with a little knowledge about cryptography (namely some vague idea what entropy is) it's very easy to be convinced of it. Intuition almost forces us there. Unfortunately intuition is often wrong in cryptography. So it is here.
### True randomness
What does it mean for random numbers to be “truly random”?
I don't want to dive into that issue too deep, because it quickly gets philosophical. Discussions have been known to unravel fast, because everyone can wax about their favorite model of randomness, without paying attention to anyone else. Or even making himself understood.
I believe that the “gold standard” for “true randomness” are quantum effects. Observe a photon pass through a semi-transparent mirror. Or not. Observe some radioactive material emit alpha particles. It's the best idea we have when it comes to randomness in the world. Other people might reasonably believe that those effects aren't truly random. Or even that there is no randomness in the world at all. Let a million flowers bloom.
Cryptographers often circumvent this philosophical debate by disregarding what it means for randomness to be “true”. They care about unpredictability. As long as nobody can get any information about the next random number, we're fine. And when you're talking about random numbers as a prerequisite in using cryptography, that's what you should aim for, in my opinion.
Anyway, I don't care much about those “philosophically secure” random numbers, as I like to think of your “true” random numbers.
### Two kinds of security, one that matters
But let's assume you've obtained those “true” random numbers. What are you going to do with them?
You print them out, frame them and hang them on your living-room wall, to revel in the beauty of a quantum universe? That's great, and I certainly understand.
Wait, what? You're using them? For cryptographic purposes? Well, that spoils everything, because now things get a bit ugly.
You see, your truly-random, quantum effect blessed random numbers are put into some less respectable, real-world tarnished algorithms.
Because almost all of the cryptographic algorithms we use do not hold up to **information-theoretic security**. They can “only” offer **computational security**. The two exceptions that come to my mind are Shamir's Secret Sharing and the One-time pad. And while the first one may be a valid counterpoint (if you actually intend to use it), the latter is utterly impractical.
But all those algorithms you know about, AES, RSA, Diffie-Hellman, Elliptic curves, and all those crypto packages you're using, OpenSSL, GnuTLS, Keyczar, your operating system's crypto API, these are only computationally secure.
What's the difference? While information-theoretically secure algorithms are secure, period, those other algorithms cannot guarantee security against an adversary with unlimited computational power who's trying all possibilities for keys. We still use them because it would take all the computers in the world taken together longer than the universe has existed, so far. That's the level of “insecurity” we're talking about here.
Unless some clever guy breaks the algorithm itself, using much less computational power. Even computational power achievable today. That's the big prize every cryptanalyst dreams about: breaking AES itself, breaking RSA itself and so on.
So now we're at the point where you don't trust the inner building blocks of the random number generator, insisting on “true randomness” instead of “pseudo randomness”. But then you're using those “true” random numbers in algorithms that you so despise that you didn't want them near your random number generator in the first place!
Truth is, when state-of-the-art hash algorithms are broken, or when state-of-the-art block ciphers are broken, it doesn't matter that you get “philosophically insecure” random numbers because of them. You've got nothing left to securely use them for anyway.
So just use those computationally-secure random numbers for your computationally-secure algorithms. In other words: use /dev/urandom.
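In practice that can be as simple as reading bytes straight from the device; a minimal sketch using standard tools (`head`, plus `xxd` for encoding, which ships with vim):

```
# Draw 32 bytes (256 bits) from the kernel CSPRNG and hex-encode them
head -c 32 /dev/urandom | xxd -p -c 32
```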
### Structure of Linux's random number generator
#### An incorrect view
Chances are, your idea of the kernel's random number generator is something similar to this:
![image: mythical structure of the kernel's random number generator][1]
“True randomness”, albeit possibly skewed and biased, enters the system and its entropy is precisely counted and immediately added to an internal entropy counter. After de-biasing and whitening it's entering the kernel's entropy pool, where both /dev/random and /dev/urandom get their random numbers from.
The “true” random number generator, /dev/random, takes those random numbers straight out of the pool, if the entropy count is sufficient for the number of requested numbers, decreasing the entropy counter, of course. If not, it blocks until new entropy has entered the system.
The important thing in this narrative is that /dev/random basically yields the numbers that have been input by those randomness sources outside, after only the necessary whitening. Nothing more, just pure randomness.
/dev/urandom, so the story goes, is doing the same thing. Except when there isn't sufficient entropy in the system. In contrast to /dev/random, it does not block, but gets “low quality random” numbers from a pseudorandom number generator (conceded, a cryptographically secure one) that is running alongside the rest of the random number machinery. This CSPRNG is just seeded once (or maybe every now and then, it doesn't matter) with “true randomness” from the randomness pool, but you can't really trust it.
In this view, that seems to be in a lot of people's minds when they're talking about random numbers on Linux, avoiding /dev/urandom is plausible.
Because either there is enough entropy left, then you get the same you'd have gotten from /dev/random. Or there isn't, then you get those low-quality random numbers from a CSPRNG that almost never saw high-entropy input.
Devilish, right? Unfortunately, also utterly wrong. In reality, the internal structure of the random number generator looks like this.
#### A better simplification
##### Before Linux 4.8
![image: actual structure of the kernel's random number generator before Linux 4.8][2] This is a pretty rough simplification. In fact, there isn't just one, but three pools filled with entropy. One primary pool, and one for /dev/random and /dev/urandom each, feeding off the primary pool. Those three pools all have their own entropy counts, but the counts of the secondary pools (for /dev/random and /dev/urandom) are mostly close to zero, and “fresh” entropy flows from the primary pool when needed, decreasing its entropy count. Also there is a lot of mixing and re-injecting outputs back into the system going on. All of this is far more detail than is necessary for this document.
See the big difference? The CSPRNG is not running alongside the random number generator, filling in for those times when /dev/urandom wants to output something, but has nothing good to output. The CSPRNG is an integral part of the random number generation process. There is no /dev/random handing out “good and pure” random numbers straight from the whitener. Every randomness source's input is thoroughly mixed and hashed inside the CSPRNG, before it emerges as random numbers, either via /dev/urandom or /dev/random.
Another important difference is that there is no entropy counting going on here, but estimation. The amount of entropy some source is giving you isn't something obvious that you just get, along with the data. It has to be estimated. Please note that when your estimate is too optimistic, the dearly held property of /dev/random, that it's only giving out as many random numbers as available entropy allows, is gone. Unfortunately, it's hard to estimate the amount of entropy.
The Linux kernel uses only the arrival times of events to estimate their entropy. It does that by interpolating polynomials of those arrival times, to calculate “how surprising” the actual arrival time was, according to the model. Whether this polynomial interpolation model is the best way to estimate entropy is an interesting question. There is also the problem that internal hardware restrictions might influence those arrival times. The sampling rates of all kinds of hardware components may also play a role, because it directly influences the values and the granularity of those event arrival times.
In the end, to the best of our knowledge, the kernel's entropy estimate is pretty good. Which means it's conservative. People argue about how good it really is, but that issue is far above my head. Still, if you insist on never handing out random numbers that are not “backed” by sufficient entropy, you might be nervous here. I'm sleeping sound because I don't care about the entropy estimate.
So to make one thing crystal clear: both /dev/random and /dev/urandom are fed by the same CSPRNG. Only the behavior when their respective pool runs out of entropy, according to some estimate, differs: /dev/random blocks, while /dev/urandom does not.
##### From Linux 4.8 onward
In Linux 4.8 the equivalency between /dev/urandom and /dev/random was given up. Now /dev/urandom output does not come from an entropy pool, but directly from a CSPRNG.
![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]
We will see shortly why that is not a security problem.
### What's wrong with blocking?
Have you ever waited for /dev/random to give you more random numbers? Generating a PGP key inside a virtual machine maybe? Connecting to a web server that's waiting for more random numbers to create an ephemeral session key?
That's the problem. It inherently runs counter to availability. So your system is not working. It's not doing what you built it to do. Obviously, that's bad. You wouldn't have built it if you didn't need it.
I'm working on safety-related systems in factory automation. Can you guess what the main reason for failures of safety systems is? Manipulation. Simple as that. Something about the safety measure bugged the worker. It took too much time, was too inconvenient, whatever. People are very resourceful when it comes to finding “unofficial solutions”.
But the problem runs even deeper: people don't like to be stopped in their ways. They will devise workarounds, concoct bizarre machinations to just get it running. People who don't know anything about cryptography. Normal people.
Why not patch out the call to `random()`? Why not have some guy in a web forum tell you how to use some strange ioctl to increase the entropy counter? Why not switch off SSL altogether?
In the end you just educate your users to do foolish things that compromise your system's security without you ever knowing about it.
It's easy to disregard availability, usability or other nice properties. Security trumps everything, right? So better be inconvenient, unavailable or unusable than feign security.
But that's a false dichotomy. Blocking is not necessary for security. As we saw, /dev/urandom gives you the same kind of random numbers as /dev/random, straight out of a CSPRNG. Use it!
### The CSPRNGs are alright
But now everything sounds really bleak. If even the high-quality random numbers from /dev/random are coming out of a CSPRNG, how can we use them for high-security purposes?
It turns out, that “looking random” is the basic requirement for a lot of our cryptographic building blocks. If you take the output of a cryptographic hash, it has to be indistinguishable from a random string so that cryptographers will accept it. If you take a block cipher, its output (without knowing the key) must also be indistinguishable from random data.
If anyone could gain an advantage over brute force breaking of cryptographic building blocks, using some perceived weakness of those CSPRNGs over “true” randomness, then it's the same old story: you don't have anything left. Block ciphers, hashes, everything is based on the same mathematical fundament as CSPRNGs. So don't be afraid.
### What about entropy running low?
It doesn't matter.
The underlying cryptographic building blocks are designed such that an attacker cannot predict the outcome, as long as there was enough randomness (a.k.a. entropy) in the beginning. A usual lower limit for “enough” may be 256 bits. No more.
Considering that we were pretty hand-wavey about the term “entropy” in the first place, it feels right. As we saw, the kernel's random number generator cannot even precisely know the amount of entropy entering the system. Only an estimate. And whether the model that's the basis for the estimate is good enough is pretty unclear, too.
### Re-seeding
But if entropy is so unimportant, why is fresh entropy constantly being injected into the random number generator?
djb [remarked][4] that more entropy actually can hurt.
First, it cannot hurt. If you've got more randomness just lying around, by all means use it!
There is another reason why re-seeding the random number generator every now and then is important:
Imagine an attacker knows everything about your random number generator's internal state. That's the most severe security compromise you can imagine, the attacker has full access to the system.
You've totally lost now, because the attacker can compute all future outputs from this point on.
But over time, with more and more fresh entropy being mixed into it, the internal state gets more and more random again. So that such a random number generator's design is kind of self-healing.
But this is injecting entropy into the generator's internal state, it has nothing to do with blocking its output.
### The random and urandom man page
The man page for /dev/random and /dev/urandom is pretty effective when it comes to instilling fear into the gullible programmer's mind:
> A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.
Such an attack is not known in “unclassified literature”, but the NSA certainly has one in store, right? And if you're really concerned about this (you should!), please use /dev/random, and all your problems are solved.
The truth is, while there may be such an attack available to secret services, evil hackers or the Bogeyman, it's just not rational to just take it as a given.
And even if you need that peace of mind, let me tell you a secret: no practical attacks on AES, SHA-3 or other solid ciphers and hashes are known in the “unclassified” literature, either. Are you going to stop using those, as well? Of course not!
Now the fun part: “use /dev/random instead”. While /dev/urandom does not block, its random number output comes from the very same CSPRNG as /dev/random's.
If you really need information-theoretically secure random numbers (you don't!), and that's about the only reason why the entropy of the CSPRNGs input matters, you can't use /dev/random, either!
The man page is silly, that's all. At least it tries to redeem itself with this:
> If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter. As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys.
Fine. I think it's unnecessary, but if you want to use /dev/random for your “long-lived keys”, by all means, do so! You'll be waiting a few seconds typing stuff on your keyboard, that's no problem.
But please don't make connections to a mail server hang forever, just because you “wanted to be safe”.
### Orthodoxy
The view espoused here is certainly a tiny minority's opinion on the Internet. But ask a real cryptographer: you'll be hard pressed to find someone who sympathizes much with the blocking /dev/random.
Let's take [Daniel Bernstein][5], better known as djb:
> Cryptographers are certainly not responsible for this superstitious nonsense. Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that
>
> * (1) we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but
>
> * (2) we _can_ figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.).
>
>
>
> For a cryptographer this doesn't even pass the laugh test.
Or [Thomas Pornin][6], who is probably one of the most helpful persons I've ever encountered on the Stackexchange sites:
> The short answer is yes. The long answer is also yes. /dev/urandom yields data which is indistinguishable from true randomness, given existing technology. Getting "better" randomness than what /dev/urandom provides is meaningless, unless you are using one of the few "information theoretic" cryptographic algorithm, which is not your case (you would know it).
>
> The man page for urandom is somewhat misleading, arguably downright wrong, when it suggests that /dev/urandom may "run out of entropy" and /dev/random should be preferred;
Or maybe [Thomas Ptacek][7], who is not a real cryptographer in the sense of designing cryptographic algorithms or building cryptographic systems, but still the founder of a well-reputed security consultancy that's doing a lot of penetration testing and breaking bad cryptography:
> Use urandom. Use urandom. Use urandom. Use urandom. Use urandom. Use urandom.
### Not everything is perfect
/dev/urandom isn't perfect. The problems are twofold:
On Linux, unlike FreeBSD, /dev/urandom never blocks. Remember that the whole security rested on some starting randomness, a seed?
Linux's /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer.
FreeBSD does the right thing: they don't have the distinction between /dev/random and /dev/urandom, both are the same device. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
In the meantime, Linux has implemented a new syscall, originally introduced by OpenBSD as getentropy(2): getrandom(2). This syscall does the right thing: blocking until it has gathered enough initial entropy, and never blocking after that point. Of course, it is a syscall, not a character device, so it isn't as easily accessible from shell or script languages. It is available from Linux 3.17 onward.
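From a script you can still reach getrandom(2) through a language wrapper; for instance, a one-liner sketch using Python's os.getrandom, which has been available since Python 3.6:

```
# Fetch 32 bytes via the getrandom() syscall instead of opening /dev/urandom
python3 -c 'import os; print(os.getrandom(32).hex())'
```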
On Linux it isn't too bad, because Linux distributions save some random numbers when booting up the system (but after they have gathered some entropy, since the startup script doesn't run immediately after switching on the machine) into a seed file that is read next time the machine is booting. So you carry over the randomness from the last running of the machine.
Obviously that isn't as good as if you let the shutdown scripts write out the seed, because in that case there would have been much more time to gather entropy. The advantage is obviously that this does not depend on a proper shutdown with execution of the shutdown scripts (in case the computer crashes, for example).
And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer. So that's mostly okay.
Virtual machines are the other problem. Because people like to clone them, or rewind them to a previously saved check point, this seed file doesn't help you.
But the solution still isn't using /dev/random everywhere, but properly seeding each and every virtual machine after cloning, restoring a checkpoint, whatever.
### tldr;
Just use /dev/urandom!
--------------------------------------------------------------------------------
via: https://www.2uo.de/myths-about-urandom/
Author: [Thomas Hühn][a]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]:https://www.2uo.de/
[1]:https://www.2uo.de/myths-about-urandom/structure-no.png
[2]:https://www.2uo.de/myths-about-urandom/structure-yes.png
[3]:https://www.2uo.de/myths-about-urandom/structure-new.png
[4]:http://blog.cr.yp.to/20140205-entropy.html
[5]:http://www.mail-archive.com/cryptography@randombit.net/msg04763.html
[6]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key/3939#3939
[7]:http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

View File

@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Take to the virtual skies with FlightGear)
[#]: via: (https://opensource.com/article/19/1/flightgear)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
Take to the virtual skies with FlightGear
======
Dreaming of piloting a plane? Try open source flight simulator FlightGear.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/flightgear_cockpit_plane_sky.jpg?itok=LRy0lpOS)
If you've ever dreamed of piloting a plane, you'll love [FlightGear][1]. It's a full-featured, [open source][2] flight simulator that runs on Linux, MacOS, and Windows.
The FlightGear project began in 1996 due to dissatisfaction with commercial flight simulation programs, which were not scalable. Its goal was to create a sophisticated, robust, extensible, and open flight simulator framework for use in academia and pilot training or by anyone who wants to play with a flight simulation scenario.
### Getting started
FlightGear's hardware requirements are fairly modest, including an accelerated 3D video card that supports OpenGL for smooth framerates. It runs well on my Linux laptop with an i5 processor and only 4GB of RAM. Its documentation includes an [online manual][3]; a [wiki][4] with portals for [users][5] and [developers][6]; and extensive tutorials (such as one for its default aircraft, the [Cessna 172p][7]) to teach you how to operate it.
It's easy to install on both [Fedora][8] and [Ubuntu][9] Linux. Fedora users can consult the [Fedora installation page][10] to get FlightGear running.
On Ubuntu 18.04, I had to install a repository:
```
$ sudo add-apt-repository ppa:saiarcot895/flightgear
$ sudo apt-get update
$ sudo apt-get install flightgear
```
Once the installation finished, I launched it from the GUI, but you can also launch the application from a terminal by entering:
```
$ fgfs
```
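fgfs also accepts startup options, so you can choose an aircraft and starting airport without the GUI; a small sketch (the `--aircraft` and `--airport` flags are standard fgfs options; the values are illustrative, using the default Cessna and the default Honolulu location mentioned below):

```
# Start in the default Cessna 172P at Honolulu International (PHNL)
$ fgfs --aircraft=c172p --airport=PHNL
```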
### Configuring FlightGear
The menu on the left side of the application window provides configuration options.
![](https://opensource.com/sites/default/files/uploads/flightgear_menu.png)
**Summary** returns you to the application's home screen.
**Aircraft** shows the aircraft you have installed and offers the option to install up to 539 other aircraft available in FlightGear's default "hangar." I installed a Cessna 150L, a Piper J-3 Cub, and a Bombardier CRJ-700. Some of the aircraft (including the CRJ-700) have tutorials to teach you how to fly a commercial jet; I found the tutorials informative and accurate.
![](https://opensource.com/sites/default/files/uploads/flightgear_aircraft.png)
To select an aircraft to pilot, highlight it and click on **Fly!** at the bottom of the menu. I chose the default Cessna 172p and found the cockpit depiction extremely accurate.
![](https://opensource.com/sites/default/files/uploads/flightgear_cockpit-view.png)
The default airport is Honolulu, but you can change it in the **Location** menu by providing your favorite airport's [ICAO airport code][11] identifier. I found some small, local, non-towered airports like Olean and Dunkirk, New York, as well as larger airports including Buffalo, O'Hare, and Raleigh—and could even choose a specific runway.
Under **Environment** , you can adjust the time of day, the season, and the weather. The simulation includes advance weather modeling and the ability to download current weather from [NOAA][12].
**Settings** provides an option to start the simulation in Paused mode by default. Also in Settings, you can select multi-player mode, which allows you to "fly" with other players on FlightGear supporters' global network of servers that allow for multiple users. You must have a moderately fast internet connection to support this functionality.
The **Add-ons** menu allows you to download aircraft and additional scenery.
### Take flight
To "fly" my Cessna, I used a Logitech joystick that worked well. You can calibrate your joystick using an option in the **File** menu at the top.
Overall, I found the simulation very accurate and think the graphics are great. Try FlightGear yourself — I think you will find it a very fun and complete simulation package.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/1/flightgear
Author: [Don Watkins][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: http://home.flightgear.org/
[2]: http://wiki.flightgear.org/GNU_General_Public_License
[3]: http://flightgear.sourceforge.net/getstart-en/getstart-en.html
[4]: http://wiki.flightgear.org/FlightGear_Wiki
[5]: http://wiki.flightgear.org/Portal:User
[6]: http://wiki.flightgear.org/Portal:Developer
[7]: http://wiki.flightgear.org/Cessna_172P
[8]: http://rpmfind.net/linux/rpm2html/search.php?query=flightgear
[9]: https://launchpad.net/~saiarcot895/+archive/ubuntu/flightgear
[10]: https://apps.fedoraproject.org/packages/FlightGear/
[11]: https://en.wikipedia.org/wiki/ICAO_airport_code
[12]: https://www.noaa.gov/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VA Linux: The Linux Company That Once Ruled NASDAQ)
[#]: via: (https://itsfoss.com/story-of-va-linux/)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
VA Linux: The Linux Company That Once Ruled NASDAQ
======
This is our first article in the Linux and open source history series. We will be covering more trivia, anecdotes and other nostalgic events from the past.
In its time, _VA Linux_ was indeed a crusade to free the world from Microsoft's domination.
In a historic incident in December 1999, the shares of a private firm skyrocketed from just $30 to a whopping $239 within a single day of its [IPO][1]! It was a record-breaking development.
The company was _VA Linux_, a firm of only 200 employees built on the idea of deploying Intel hardware with Linux and FOSS, and it had begun a fantastic journey [taking on the likes of Sun and Dell][2].
It traded under the symbol LNUX and gained around 700 percent on its first day of trading. But barely a year later, [LNUX stock was selling below $9 per share][3].
How did a successful Linux-based company become a subsidiary of [Gamestop][4], a gaming company?
Let us look back at the highs and lows of this record-breaking Linux corporation with a brief look at its history.
### How did it all actually begin?
In the year 1993, a graduate student at Stanford University wanted to own a powerful workstation but could not afford to buy expensive [Sun][5] workstations, which sold at the extremely high price of about $7,000 per system at that time.
So, he decided to build one on his own ([DIY][6] [FTW][7]!). Using an Intel 486 chip running at just 33 megahertz, he installed Linux and finally had a machine that was twice as fast as Sun's but at a much lower price tag: $2,000.
That student was none other than _VA Research_ founder [Larry Augustin][8], whose idea was loved by many at that exciting time on the Stanford campus. People started buying machines with similar configurations from him and his friend and co-founder, James Vera. This is how _VA Research_ was formed.
![VA Linux founder, Larry Augustin][9]
> Once software goes into the GPL, you can't take it back. People can stop contributing, but the code that exists, people can continue to develop on it.
>
> Without a doubt, a futuristic quote from VA Linux founder, Larry Augustin, 10 years ago | Read the whole interview [here][10]
#### Some screenshots of their web domains from the early days
![Linux Powered Machines on sale on varesearch.com | July 15, 1997][11]
![varesearch.com reveals emerging growth | February 16, 1998][12]
![On June 26, 2001, they transitioned from hardware to software | valinux.com as on June 22, 2001][13]
### The spectacular rise and the devastating fall of VA Linux
VA Research had a big year in 1999, perhaps its biggest, as it acquired many growing companies and competitors while starting many innovative initiatives. The next year, in 2000, it created a subsidiary in Japan named _VA Linux Systems Japan K.K._ The company was at its peak that year.
After the company transitioned completely from hardware to software, stock prices began falling drastically from 2002 onward. It all happened because of slower-than-expected sales growth from new customers in the dot-com sector. In later years the company sold off a few brands, and top employees resigned in 2010.
Gamestop finally [acquired][14] Geeknet Inc. (the new name of VA Linux) for $140 million on June 2, 2015.
In case you're curious about a detailed chronicle, I have separately created this [timeline][15], highlighting events year by year.
![Image Credit: Wikipedia][16]
### What happened to VA Linux afterward?
Geeknet, owned by Gamestop, is now an online retailer for the global geek community under the [ThinkGeek][17] brand.
SourceForge and Slashdot were what kept the company linked with Linux and open source until _Dice Holdings_ acquired Slashdot, SourceForge, and Freecode.
An [article][18] from 2016 sadly quotes in its final paragraph:
> “Being acquired by a company that caters to gamers and does not have anything in particular to do with open source software may be a lackluster ending for what was once a spectacularly valuable Linux business.”
Did we note Linux and gamers? Does Linux really have nothing to do with gaming? Are these two terms really so far apart? What about [gaming on Linux][19]? What about [open source games][20]?
How could the stalwarts from _VA Linux_, with years and years of experience in the Linux arena, have contributed to the Linux gaming community? What could have happened had [Valve][21] (who are currently so [dedicated][22] to Linux gaming) acquired _VA Linux_ instead of Gamestop? It is worth pondering.
The seeds of ideas planted by _VA Research_ will continue to inspire the Linux and FOSS community because of its significant contributions to the world of open source. At _It's FOSS_, our heartfelt salute goes out to those noble ideas!
Want to feel the nostalgia? Use the [timeline][15] dates with the [Wayback Machine][23] to check out previously owned _VA_ domains like _valinux.com_ or _varesearch.com_ across the past three decades! You can even check _linux.com_, which was once owned by _VA Linux Systems_.
But wait, are we really done here? What happened to the subsidiary named _VA Linux Systems Japan K.K._? Well, it's [a different story there][24], and it is still going strong with the original ideologies of _VA Linux_!
![VA Linux booth circa 2000 | Image Credit: Storem][25]
#### _VA Linux_ Subsidiary Still Operational in Japan!
VA Linux is still operational through its [Japanese subsidiary][26]. It provides the following services:
* Failure Analysis and Support Services: [_VA Quest_][27]
* Entrusted Development Service
* Consulting Service
_VA Quest_, in particular, has been offering a failure-analysis solution for tracking down and dealing with kernel bugs that might be getting in its customers' way ever since 2005. [Tetsuro Yogo][28] took over as the new President and CEO on April 3, 2017. Check out their timeline [here][29]! They are also [on GitHub][30]!
You can also read about a recent development reported on August 2 last year, on this [translated][31] version of a Japanese IT news page. It's an update about _VA Linux_ providing a technical support service for the "[Kubernetes][32]" container management software in Japan.
It's good to know that their 18-year-old subsidiary is still doing well in Japan and that the name of _VA Linux_ continues to flourish there even today!
What are your views? Do you want to share anything on _VA Linux_? Please let us know in the comments section below.
I hope you liked this first article in the Linux history series. If you know such interesting facts from the past that you would like us to cover here, please let us know.
--------------------------------------------------------------------------------
via: https://itsfoss.com/story-of-va-linux/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Initial_public_offering
[2]: https://www.forbes.com/1999/05/03/feat.html
[3]: https://www.channelfutures.com/open-source/open-source-history-the-spectacular-rise-and-fall-of-va-linux
[4]: https://www.gamestop.com/
[5]: http://www.sun.com/
[6]: https://en.wikipedia.org/wiki/Do_it_yourself
[7]: https://www.urbandictionary.com/define.php?term=FTW
[8]: https://www.linkedin.com/in/larryaugustin/
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-Founder-Larry-Augustin.jpg?ssl=1
[10]: https://www.linuxinsider.com/story/SourceForges-Larry-Augustin-A-Better-Way-to-Build-Web-Apps-62155.html
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-July-15-1997.jpg?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-Feb-16-1998.jpg?ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-com-Snapshot-June-22-2001.jpg?ssl=1
[14]: http://geekgirlpenpals.com/geeknet-parent-company-to-thinkgeek-entered-agreement-with-gamestop/
[15]: https://medium.com/@avimanyu786/a-timeline-of-va-linux-through-the-years-6813e2bd4b13
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/LNUX-stock-fall.png?ssl=1
[17]: https://www.thinkgeek.com/
[18]: https://www.channelfutures.com/open-source/open-source-history-spectacular-rise-and-fall-va-linux
[19]: https://itsfoss.com/linux-gaming-distributions/
[20]: https://en.wikipedia.org/wiki/Open-source_video_game
[21]: https://www.valvesoftware.com/
[22]: https://itsfoss.com/steam-play-proton/
[23]: https://archive.org/web/web.php
[24]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fwww.valinux.co.jp%2Fcorp%2Fstatement%2F&edit-text=
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/va-linux-team-booth.jpg?resize=800%2C600&ssl=1
[26]: https://www.valinux.co.jp/english/
[27]: https://www.linux.com/news/va-linux-announces-linux-failure-analysis-service
[28]: https://www.linkedin.com/in/yogo45/
[29]: https://www.valinux.co.jp/english/about/timeline/
[30]: https://github.com/vaj
[31]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fit.impressbm.co.jp%2Farticles%2F-%2F16499
[32]: https://en.wikipedia.org/wiki/Kubernetes

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (CrossCode is an Awesome 16-bit Sci-Fi RPG Game)
[#]: via: (https://itsfoss.com/crosscode-game/)
[#]: author: (Phillip Prado https://itsfoss.com/author/phillip/)
CrossCode is an Awesome 16-bit Sci-Fi RPG Game
======
What starts off as an obvious sci-fi 16-bit 2D action RPG quickly turns into a JRPG-inspired pseudo-MMO open-world puzzle platformer. Though at first glance this sounds like a jumbled mess, [CrossCode][1] manages to bundle all of its influences into a seamless gaming experience that feels nothing shy of excellent.
Note: CrossCode is not open source software. We have covered it because it is Linux specific.
![][2]
### Story
You play as Lea, a girl who has forgotten her identity, where she comes from, and how to speak. As you walk through the early parts of the story, you come to find that you are a character in a digital world — a video game. But not just any video game — an MMO. And you, Lea, must venture into the digital world known as CrossWorlds in order to unravel the secrets of your past.
As you progress through the game, you unveil more and more about yourself, learning how you got to this point in the first place. This doesn't sound like too crazy a story, but the gameplay implementation and appropriately paced storyline make for quite a captivating experience.
The story unfolds at a satisfying speed and the character development is genuinely gratifying, both fictionally and mechanically. The only critique I had was that the introductory segment felt like it took a little too long, dragging the tutorial into the gameplay for quite some time and keeping the player from getting into the real meat of the game.
All in all, CrossCode's story did not leave me wanting, not even in the slightest. It's deep, fun, heartwarming, and intelligent, all while never sacrificing great character development. Without spoiling anything, I will say that if you are someone who enjoys a good story, you need to give CrossCode a look.
![][3]
### Gameplay
Yes, the story is great and all, but if there is one place CrossCode truly shines, it has to be its gameplay. The game's mechanics are fast-paced, challenging, intuitive, and downright fun!
You start off with a dodge, block, melee, and ranged attack, each slowly developing over time as the character tree is unlocked. This all-too-familiar mix of combat elements balances skill and hack-n-slash mechanics in a way that doesn't let them conflict with one another.
The game utilizes this mix of skills to create some amazing puzzle solving and combat that helps CrossCode's gameplay truly stand out. Whether you are making your way through one of the four main dungeons or taking a boss head on, you can't help but periodically stop and think "wow, this game is great!"
Though this has to be the game's strongest feature, it can also be the game's biggest downfall. Part of the reason the story and character progression are so satisfying is that the combat and puzzle mechanics can be incredibly challenging, and that's putting it lightly.
There are times when CrossCode's gameplay feels downright impossible. Bosses take an expert amount of focus, and dungeons require all the patience you can muster up just to finish them.
![][4]
The game requires a type of dexterity I have not quite mastered yet. Sure, there are more challenging puzzle games out there, there are more difficult platformers, and there are more grueling RPGs, but adding all of these elements into one game while spurring the player along with an alluring story requires a level of mechanical balance that I haven't found in many other games.
And though there were times I felt the gameplay was flat-out punishing, I was constantly reminded that this is simply not the case. Death doesn't cause serious character regression, you can take a break from dungeons when you feel overwhelmed, and there is a plethora of checkpoints throughout the game's most difficult parts to help the player along.
Where other games fall short by giving the player nothing to lose, this reality redeems CrossCode amid its rigorous gameplay. CrossCode may be one of the only games I know that takes two common flaws in games and holds the tension between them so well that it becomes one of the game's best strengths.
![][5]
### Design
One of the things that surprised me most about CrossCode was how well its world and sound design come together. Right off the bat, from the moment you boot the game up, it is clear the developers meant business when designing CrossCode.
Being set in a fictional MMO world, the game's character ensemble is vibrant and distinctive, each character having their own tone and personality. The game's sound and motion graphics are tactile and responsive, giving the player a healthy amount of feedback during gameplay. And the soundtrack behind the game is simply beautiful, ebbing and flowing between intense moments of combat and blissful moments of exploration.
If I had to fault CrossCode in this category, it would have to be the size of the map. Yes, the dungeons are long, and yes, the CrossWorlds map looks gigantic, but I still wanted more to explore outside the crippling dungeons. The game is beautiful and fluid, but akin to RPGs of yore (think Zelda games before Breath of the Wild), I wish there was just a little more for me to freely explore.
It is obvious that the developers really cared about this aspect of the game, and you can tell they spent an incredible amount of time developing its design. CrossCode set itself up for success here in its plot and content, and the developers capitalize on the opportunity, knocking another category out of the park.
![][6]
### Conclusion
In the end, it is obvious how I feel about this game. And just in case you haven't caught on yet… I love it. It holds a near-perfect balance between being difficult and rewarding, simple and complex, linear and open, making CrossCode one of [the best Linux games][7] out there.
Developed by [Radical Fish Games][8], CrossCode was officially released for Linux on September 21, 2018, seven years after development began. You can pick up the game over on [Steam][9], [GOG][10], or [Humble Bundle][11].
If you play games regularly, you may want to [subscribe to Humble Monthly][12] ([affiliate][13] link). For $12 per month, you'll get games worth over $100 (not all for Linux). Over 450,000 gamers worldwide use Humble Monthly.
--------------------------------------------------------------------------------
via: https://itsfoss.com/crosscode-game/
作者:[Phillip Prado][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/phillip/
[b]: https://github.com/lujun9972
[1]: http://www.cross-code.com/en/home
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Level-up.png?fit=800%2C451&ssl=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Equpiment.png?fit=800%2C451&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-character-development.png?fit=800%2C451&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Environment.png?fit=800%2C451&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-dungeon.png?fit=800%2C451&ssl=1
[7]: https://itsfoss.com/free-linux-games/
[8]: http://www.radicalfishgames.com/
[9]: https://store.steampowered.com/app/368340/CrossCode/
[10]: https://www.gog.com/game/crosscode
[11]: https://www.humblebundle.com/store/crosscode
[12]: https://www.humblebundle.com/monthly?partner=itsfoss
[13]: https://itsfoss.com/affiliate-policy/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enjoy Netflix? You Should Thank FreeBSD)
[#]: via: (https://itsfoss.com/netflix-freebsd-cdn/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Enjoy Netflix? You Should Thank FreeBSD
======
Netflix is one of the most popular streaming services in the world.
But you already know that. Don't you?
What you probably did not know is that Netflix uses [FreeBSD][1] to deliver its content to you.
Yes, thats right. Netflix relies on FreeBSD to build its in-house content delivery network (CDN).
A [CDN][2] is a group of servers located in various parts of the world. It is mainly used to deliver heavy content like images and videos to the end user faster than a centralized server could.
Instead of opting for a commercial CDN service, Netflix has built its own in-house CDN called [Open Connect][3].
Open Connect utilizes [custom hardware][4], Open Connect Appliance. You can see it in the image below. It can handle 40Gb/s data and has a storage capacity of 248TB.
![Netflixs Open Connect Appliance runs FreeBSD][5]
Netflix provides the Open Connect Appliance to qualifying Internet Service Providers (ISPs) for free. This way, substantial Netflix traffic gets localized and the ISPs deliver the Netflix content more efficiently.
This Open Connect Appliance runs the FreeBSD operating system and [almost exclusively runs open source software][6].
### Open Connect uses FreeBSD “Head”
![][7]
You would expect Netflix to use a stable release of FreeBSD for such critical infrastructure, but Netflix tracks the [FreeBSD head/current version][8]. Netflix says that tracking "head" lets them "stay forward-looking and focused on innovation".
Here are the benefits Netflix sees of tracking FreeBSD:
* Quicker feature iteration
* Quicker access to new FreeBSD features
* Quicker bug fixes
* Enables collaboration
* Minimizes merge conflicts
* Amortizes merge “cost”
> Running FreeBSD “head” lets us deliver large amounts of data to our users very efficiently, while maintaining a high velocity of feature development.
>
> Netflix
Remember, even [Google uses Debian][9] testing instead of Debian stable. Perhaps these enterprises prefer cutting-edge features more than anything else.
Like Google, Netflix also plans to upstream any code they can. This should help FreeBSD and other BSD distributions based on FreeBSD.
So what does Netflix achieve with FreeBSD? Here is a quick stat:
> Using FreeBSD and commodity parts, we achieve 90 Gb/s serving TLS-encrypted connections with ~55% CPU on a 16-core 2.6-GHz CPU.
>
> Netflix
If you want to know more about Netflix and FreeBSD, you can refer to [this presentation from FOSDEM][10]. You can also watch the video of the presentation [here][11].
These days big enterprises rely mostly on Linux for their server infrastructure, but Netflix has put its trust in BSD. This is a good thing for the BSD community, because if an industry leader like Netflix throws its weight behind BSD, others could follow its lead. What do you think?
--------------------------------------------------------------------------------
via: https://itsfoss.com/netflix-freebsd-cdn/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.freebsd.org/
[2]: https://www.cloudflare.com/learning/cdn/what-is-a-cdn/
[3]: https://openconnect.netflix.com/en/
[4]: https://openconnect.netflix.com/en/hardware/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-open-connect-appliance.jpeg?fit=800%2C533&ssl=1
[6]: https://openconnect.netflix.com/en/software/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-freebsd.png?resize=800%2C450&ssl=1
[8]: https://www.bsdnow.tv/tutorials/stable-current
[9]: https://itsfoss.com/goobuntu-glinux-google/
[10]: https://fosdem.org/2019/schedule/event/netflix_freebsd/attachments/slides/3103/export/events/attachments/netflix_freebsd/slides/3103/FOSDEM_2019_Netflix_and_FreeBSD.pdf
[11]: http://mirror.onet.pl/pub/mirrors/video.fosdem.org/2019/Janson/netflix_freebsd.webm

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installing Kali Linux on VirtualBox: Quickest & Safest Way)
[#]: via: (https://itsfoss.com/install-kali-linux-virtualbox/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Installing Kali Linux on VirtualBox: Quickest & Safest Way
======
_**This tutorial shows you how to install Kali Linux on VirtualBox in Windows and Linux in the quickest way possible.**_
[Kali Linux][1] is one of the [best Linux distributions for hacking][2] and security enthusiasts.
Since it deals with a sensitive topic like hacking, it's like a double-edged sword. We have discussed it in our detailed Kali Linux review in the past, so I am not going to bore you with the same stuff again.
While you can install Kali Linux by replacing the existing operating system, using it via a virtual machine is a better and safer option.
With VirtualBox, you can use Kali Linux as a regular application in your Windows/Linux system. It's almost the same as running VLC or a game on your system.
Using Kali Linux in a virtual machine is also safe. Whatever you do inside Kali Linux will NOT impact your host system (i.e. your original Windows or Linux operating system). Your actual operating system will be untouched and your data in the host system will be safe.
![][3]
### How to install Kali Linux on VirtualBox
I'll be using [VirtualBox][4] here. It is a wonderful open source virtualization solution for just about anyone (professional or personal use). It's available free of cost.
In this tutorial, we will talk about Kali Linux in particular, but you can install almost any other OS for which an ISO file or a pre-built virtual machine file is available.
**Note:** _The same steps apply for Windows/Linux running VirtualBox._
As I already mentioned, you can have either Windows or Linux installed as your host. But, in this case, I have Windows 10 installed (don't hate me!), where I will try to install Kali Linux in VirtualBox step by step.
And, the best part is even if you happen to use a Linux distro as your primary OS, the same steps will be applicable!
Wondering how? Let's see…
[Subscribe to Our YouTube Channel for More Linux Videos][5]
### Step by Step Guide to install Kali Linux on VirtualBox
_We are going to use a custom Kali Linux image made for VirtualBox specifically. You can also download the ISO file for Kali Linux and create a new virtual machine but why do that when you have an easy alternative?_
#### 1\. Download and install VirtualBox
The first thing you need to do is download and install VirtualBox from Oracle's official website.
[Download VirtualBox][6]
Once you download the installer, just double-click on it to install VirtualBox. It's the same for [installing VirtualBox on Ubuntu][7]/Fedora Linux as well; a quick sketch for Ubuntu follows.
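On Ubuntu, for instance, you can also skip the downloaded installer and pull VirtualBox straight from the repositories; a minimal sketch (package name as found in Ubuntu's repositories):
```
$ sudo apt update
$ sudo apt install virtualbox
```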
#### 2\. Download ready-to-use virtual image of Kali Linux
After installing it successfully, head to [Offensive Security's download page][8] to download the VM image for VirtualBox. If you would rather use [VMware][9], that is available too.
![][10]
Since the file size is well over 3 GB, you should either use the torrent option or download it using a [download manager][11].
[Kali Linux Virtual Image][8]
#### 3\. Install Kali Linux on Virtual Box
Once you have installed VirtualBox and downloaded the Kali Linux image, you just need to import it to VirtualBox in order to make it work.
Heres how to import the VirtualBox image for Kali Linux:
**Step 1**: Launch VirtualBox. You will notice an **Import** button; click on it.
![Click on Import button][12]
**Step 2:** Next, browse to the file you just downloaded and choose it to be imported (as you can see in the image below). The file name should start with "kali-linux" and end with the **.ova** extension.
![Importing Kali Linux image][13]
Once selected, proceed by clicking on **Next**.
**Step 3**: Now, you will be shown the settings for the virtual machine you are about to import. You can customize them or not; that is your choice. It is okay to go with the default settings.
You need to select a path where you have sufficient storage available. I would never recommend the **C:** drive on Windows.
![Import hard drives as VDI][14]
Here, importing the hard drives as VDI means the virtual hard drives are mounted by allocating the storage space specified in the settings.
After you are done with the settings, hit **Import** and wait for a while.
**Step 4:** You will now see it listed. So, just hit **Start** to launch it.
You might get an error at first about USB 2.0 controller support; you can disable USB 2.0 to resolve it, or just follow the on-screen instructions to install an additional package that fixes it. And you are done!
![Kali Linux running in VirtualBox][15]
The default username in Kali Linux is root and the default password is toor. You should be able to login to the system with it.
Do note that you should [update Kali Linux][16] before trying to install new applications or trying to hack your neighbor's WiFi; a sketch of the update commands follows.
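Updating boils down to refreshing the package lists and upgrading everything installed; a minimal sketch from a terminal inside the Kali VM:
```
$ sudo apt update           # refresh the package lists
$ sudo apt full-upgrade -y  # upgrade all installed packages
```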
I hope this guide helps you easily install Kali Linux on VirtualBox. Of course, Kali Linux has a lot of useful tools in it for penetration testing. Good luck with that!
**Tip**: Both Kali Linux and Ubuntu are Debian-based. If you face any issues or errors with Kali Linux, you may follow tutorials intended for Ubuntu or Debian on the internet.
### Bonus: Free Kali Linux Guide Book
If you are just starting with Kali Linux, it will be a good idea to know how to use Kali Linux.
Offensive Security, the company behind Kali Linux, has created a guide book that explains the basics of Linux, the basics of Kali Linux, configuration, and setup. It also has a few chapters on penetration testing and security tools.
Basically, it has everything you need to get started with Kali Linux. And the best thing is that the book is available to download for free.
[Download Kali Linux Revealed for FREE][17]
Let us know in the comments below if you face an issue or simply share your experience with Kali Linux on VirtualBox.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-kali-linux-virtualbox/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.kali.org/
[2]: https://itsfoss.com/linux-hacking-penetration-testing/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?resize=800%2C450&ssl=1
[4]: https://www.virtualbox.org/
[5]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[6]: https://www.virtualbox.org/wiki/Downloads
[7]: https://itsfoss.com/install-virtualbox-ubuntu/
[8]: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/
[9]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box-image.jpg?resize=800%2C347&ssl=1
[11]: https://itsfoss.com/4-best-download-managers-for-linux/
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-import-kali-linux.jpg?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-linux-next.jpg?ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-kali-linux-settings.jpg?ssl=1
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-on-windows-virtualbox.jpg?resize=800%2C429&ssl=1
[16]: https://linuxhandbook.com/update-kali-linux/
[17]: https://kali.training/downloads/Kali-Linux-Revealed-1st-edition.pdf

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI)
[#]: via: (https://itsfoss.com/flowblade-video-editor-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI
======
[Flowblade][1] is one of the rare [video editors that are only available for Linux][2]. It is not the feature set that counts here but the simplicity, flexibility, and the fact that it is an open source project.
With the recent release of Flowblade 2.0, the editor is now more powerful and more useful. The release brings a lot of new tools along with a complete workflow overhaul.
In this article, we shall take a look at what's new in Flowblade 2.0.
### New Features in Flowblade 2.0
Here are some of the major new changes in the latest release of Flowblade.
#### GUI Updates
![Flowblade 2.0][3]
This was a much-needed change. I'm always looking for open source solutions that work as expected and offer a great GUI.
So, in this update, you will observe a new custom theme set as the default, and it looks good.
Overall, the panel design and the toolbox have been reworked to look modern. The overhaul includes small touches like the cursor icon changing upon tool selection.
#### Workflow Overhaul
No matter what features you get to utilize, the workflow matters to people who regularly edit videos. So, it has to be intuitive.
With the recent release, they have made sure that you can configure and set the workflow as per your preference. Well, that is definitely flexible because not everyone has the same requirement.
#### New Tools
![Flowblade Video Editor Interface][4]
**Keyframe tool**: This enables editing and adjusting the Volume and Brightness [keyframes][5] on the timeline.
**Multitrim**: A combination of the trim, roll, and slip tools.
**Cut:** Now available as a tool, in addition to the traditional cut at the playhead.
**Ripple trim:** A mode of the trim tool that was not often used; it is now available as a separate tool.
#### More changes?
In addition to the major changes listed above, they have added some keyframe editing updates and compositors (_AlphaXOR_, _Alpha Out_, and _Alpha_) that utilize alpha channel data to combine images.
Many more small changes have taken place as well; you can check those out in the official [changelog][6] on GitHub.
### Installing Flowblade 2.0
If you use a Debian or Ubuntu based Linux distribution, there are .deb binaries available for easily installing Flowblade 2.0; a quick sketch follows.
For the rest, you'll have to [install it using the source code][7].
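Assuming the downloaded .deb file sits in your current directory (the exact file name depends on the release you grabbed), installing it is a one-liner:
```
$ sudo apt install ./flowblade-*.deb
```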
All the files are available on its GitHub page. You can download it from the page below.
[Download Flowblade 2.0][8]
### Wrapping Up
If you are interested in video editing, perhaps you would like to follow the development of [Olive][9], a new open source video editor in development.
Now that you know about the latest changes and additions, what do you think of Flowblade 2.0 as a video editor? Is it good enough for you?
Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/flowblade-video-editor-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://github.com/jliljebl/flowblade
[2]: https://itsfoss.com/best-video-editing-software-linux/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2.jpg?ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2-1.jpg?resize=800%2C450&ssl=1
[5]: https://en.wikipedia.org/wiki/Key_frame
[6]: https://github.com/jliljebl/flowblade/blob/master/flowblade-trunk/docs/RELEASE_NOTES.md
[7]: https://itsfoss.com/install-software-from-source-code/
[8]: https://github.com/jliljebl/flowblade/releases/tag/v2.0
[9]: https://itsfoss.com/olive-video-editor/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Review of Debian System Administrators Handbook)
[#]: via: (https://itsfoss.com/debian-administrators-handbook/)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)
Review of Debian System Administrator's Handbook
======
_**The Debian Administrator's Handbook is a free-to-download book that covers all the essential parts of Debian that a sysadmin might need.**_
This has been on my to-do review list for quite some time. The book was started by two French Debian developers, Raphael Hertzog and Roland Mas, to increase awareness about the Debian project in France. The book was a huge hit among francophone Linux users. The English translation followed soon after.
### Debian Administrator's Handbook
![][1]
The [Debian Administrator's Handbook][2] targets everyone from a newbie looking to understand what the [Debian project][3] is all about to somebody running Debian on a production server.
The latest version of the book covers Debian 8, while the current stable version is Debian 9. But that doesn't mean the book is outdated and of no use to Debian 9 users. Most of the book is valid for all Debian and Linux users.
Let me give you a quick summary of what this book covers.
#### Section 1 - Debian Project
The first section sets the tone of the book, giving somebody who might be looking into Debian a solid foundation in what the project actually means. Some of it will probably be updated to match the current scenario.
#### Section 2 - Using fictional case studies for different needs
The second section deals with various case scenarios where Debian could be used, the idea being to show how Debian fits different hierarchical or functional settings. One aspect I felt it should have stressed is the cultural mindshift and openness involved, which at the very least deserved a mention.
#### Sections 3 & 4 - Setups and Installation
The third section looks into existing setups. I do think it should have stressed documenting existing setups and migrating partial services and users before making a full-fledged transition. While all of the above seem like minor points, I have seen many of them come back to bite me during a transition.
Section Four covers the various ways you could install Debian, how the installation process flows, and things to keep in mind before installing a Debian system. Unfortunately, UEFI was not around at that point, so it is not discussed.
#### Sections 5 & 6 - Packaging System and Updates
Section Five starts with how a binary package is structured and then goes on to how a source package is structured as well. It mentions several gotchas and tricky ways in which a sysadmin can be caught out.
Section Six is perhaps where most sysadmins spend most of their time, apart from troubleshooting, which is another chapter altogether. While it starts with many of the most often used sysadmin commands, the point I liked most was on page 156, which covers better solver algorithms.
#### Section 7 - Solving Problems and Finding Relevant Solutions
Section Seven speaks of various problem scenarios and ways to proceed when you find yourself with a problem. In Debian and most GNU/Linux distributions, the keyword is patience. If you are patient, many problems in Debian can be resolved, often after a good night's sleep.
#### Section 8 - Basic Configuration: Network, Accounts, Printing
Section Eight introduces you to the basics of networking and having single or multiple user accounts on the workstation. It goes a bit into user and group configuration and practices, then gives a brief introduction to the bash shell and a brief overview of the [CUPS][4] printing daemon. There is much to explore here.
#### Section 9 - Unix Services
Section Nine starts with an introduction to specific Unix services. While it begins with [systemd][5], which is controversial, hated, and reviled in many quarters, it also covers System V, which is still used by many a sysadmin.
#### Sections 10, 11 & 12 - Networking and Administration
Section Ten dives into network infrastructure, covering the basics of virtual private networks (OpenVPN), OpenSSH, PKI credentials, and some basics of information security. It also gets into the basics of DNS, DHCP, and IPv6, and ends with some tools that can help in troubleshooting network issues.
Section Eleven starts with the basic configuration and workflow of a mail server using Postfix. It goes a bit into depth, as there is much to play with. It then covers the popular Apache web server, FTP file servers, NFS, and CIFS with Windows shares via Samba. Again, much to explore therein.
Section Twelve starts with advanced administration topics such as RAID and LVM and when one is better than the other. It then gets into virtualization and Xen and gives a brief overview of LXC. Again, there is much more to explore than is shared herein.
![Author Raphael Hertzog at a Debian booth circa 2013 | Image Credit][6]
#### Section 13 - Workstation
Section 13 covers configuration schemes for the X server, display managers, window managers, menu management, and the different desktops, i.e. GNOME, KDE, Xfce, and others. It does mention LXDE among the others. The one omission I felt, which will probably be addressed in a new release, was [Wayland][7] and [Xwayland][8]. Again, much to explore in this section as well. (This is rectified in the conclusion.)
#### Section 14 - Security
Section 14 is fairly comprehensive on what constitutes security, with bits of threat analysis, but it stops short, as the chapter's own introduction admits that security is a vast topic.
#### Section 15 - Creating a Debian Package
Section 15 explains the tools and processes used to _debianize_ an application so it becomes part of the Debian archive and available for distribution on the ten-odd hardware architectures that Debian supports.
### Pros and Cons
Where Raphael and Roland have excelled is in breaking the visual monotony of the book by using a different style and structure wherever possible from the rest of the reading material. This compels the reader to refresh her eyes while at the same time focusing on the important matter at hand. The different visual style also signals what is more important from the authors' point of view.
One of the drawbacks, if I may call it that, is the absolute absence of humor in the book.
### Final Thoughts
I have been [using Debian][9] for a decade, so a lot of it was a refresher for me. Some of it is outdated if I look at it from a Buster perspective, but it is invaluable as a historical artifact.
If you are looking to familiarize yourself with Debian or to run Debian 8 or 9 as a production server for your business, I wouldn't be able to recommend a better book than this one.
### Download Debian Administrators Handbook
The Debian Handbook has been available for every Debian release since 2012. The [liberation][10] of the Debian Handbook was done in 2012 using [Ulule][11].
You can download an electronic version of the Debian Administrator's Handbook in PDF, ePub, or Mobi format from the link below:
[Download Debian Administrators Handbook][12]
You can also buy the paperback edition of the book if you want to support the amazing work of the authors.
[Buy the paperback edition][13]
Lastly, if you want to motivate Raphael, you can reward him by donating to his PayPal [account][14].
--------------------------------------------------------------------------------
via: https://itsfoss.com/debian-administrators-handbook/
作者:[Shirish][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/Debian-Administrators-Handbook-review.png?resize=800%2C450&ssl=1
[2]: https://debian-handbook.info/
[3]: https://www.debian.org/
[4]: https://www.cups.org
[5]: https://itsfoss.com/systemd-features/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/stand-debian-Raphael.jpg?resize=800%2C600&ssl=1
[7]: https://wayland.freedesktop.org/
[8]: https://en.wikipedia.org/wiki/X.Org_Server#XWayland
[9]: https://itsfoss.com/reasons-why-i-love-debian/
[10]: https://debian-handbook.info/liberation/
[11]: https://www.ulule.com/debian-handbook/
[12]: https://debian-handbook.info/get/now/
[13]: https://debian-handbook.info/get/
[14]: https://raphaelhertzog.com/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (LibreOffice 6.2 is Here: This is the Last Release with 32-bit Binaries)
[#]: via: (https://itsfoss.com/libreoffice-drops-32-bit-support/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
LibreOffice 6.2 is Here: This is the Last Release with 32-bit Binaries
======
LibreOffice is my favorite office suite, a free and powerful [alternative to Microsoft Office tools on Linux][1]. Even when I use my Windows machine, I prefer to have LibreOffice installed instead of Microsoft Office tools any day.
Now, with the recent [LibreOffice][2] 6.2 update, there's a lot of good stuff to talk about, along with some bad news.
### What's New in LibreOffice 6.2?
Let's have a quick look at the major new features in the [latest release of LibreOffice][3].
If you like Linux videos, don't forget to [subscribe to our YouTube channel][4] as well.
#### The new NotebookBar
![][5]
The NotebookBar is a new addition to the interface that is optional and not enabled by default. To enable it, go to **View -> User Interface -> Tabbed**.
You can set it as either a tabbed layout or a grouped compact layout.
While it is not mind-blowing, it still counts as a significant user interface update, considering the variety of user preferences.
#### Icon Theme
![][6]
A new set of icons is now available to choose from. I will definitely use the new icon set; it looks good!
#### Platform Compatibility
With the new update, the compatibility has been improved across all the platforms (Mac, Windows, and Linux).
#### Performance Improvements
This shouldn't concern you if you didn't have any issues. Still, the more they improve performance, the better; it is a win-win for all.
They have removed unnecessary animations, worked on latency reduction, avoided repeated re-layout, and made more such changes to improve performance.
#### More fixes and improvements
A lot of bugs have been fixed in this new update along with little tweaks here and there for all the tools (Writer, Calc, Draw, Impress).
To get to know all the technical details, you should check out their [release notes][7].
### The Sad News: Dropping the support for 32-bit binaries
Of course, this is not a feature. But it was bound to happen, because it was anticipated a few months ago: LibreOffice will no longer provide 32-bit binary releases.
This is inevitable. [Ubuntu has dropped 32-bit support][8]. Many other Linux distributions have also stopped supporting 32-bit processors. The number of [Linux distributions still supporting a 32-bit architecture][9] is fast dwindling.
For future versions of LibreOffice on 32-bit systems, you'll have to rely on your distribution to provide them to you. You cannot download the binaries anymore.
### Installing LibreOffice 6.2
![][10]
Your Linux distribution should be providing this update sooner or later.
Arch-based Linux users should be getting it already, while Ubuntu and Debian users will have to wait a bit longer.
If you cannot wait, you should download it and [install it from the deb file][11]. Do remove the existing LibreOffice install before using the DEB files; a sketch of the whole process follows the download link.
[Download LibreOffice 6.2][12]
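A minimal sketch of that remove-and-install flow, assuming a 64-bit deb archive whose name matches LibreOffice's usual naming (adjust the file name to your actual download):
```
$ sudo apt remove --purge 'libreoffice*'            # remove the distro-packaged install first
$ tar xzf LibreOffice_6.2*_Linux_x86-64_deb.tar.gz  # unpack the downloaded archive
$ cd LibreOffice_6.2*_Linux_x86-64_deb/DEBS
$ sudo dpkg -i *.deb                                # install the bundled packages
```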
If you don't want to use the deb file, you may use the official PPA, which should provide LibreOffice 6.2 before Ubuntu does (it doesn't have the 6.2 release at the moment). It will update your existing LibreOffice install.
```
sudo add-apt-repository ppa:libreoffice/ppa
sudo apt update
sudo apt install libreoffice
```
### Wrapping Up
LibreOffice 6.2 is definitely a major step up in keeping it a better alternative to Microsoft Office for Linux users.
Do you happen to use LibreOffice? Do these updates matter to you? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/libreoffice-drops-32-bit-support/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-free-open-source-alternatives-microsoft-office/
[2]: https://www.libreoffice.org/
[3]: https://itsfoss.com/libreoffice-6-0-released/
[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/libreoffice-tabbed.png?resize=800%2C434&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/Libreoffice-style-elementary.png?ssl=1
[7]: https://wiki.documentfoundation.org/ReleaseNotes/6.2
[8]: https://itsfoss.com/ubuntu-drops-32-bit-desktop/
[9]: https://itsfoss.com/32-bit-os-list/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/libre-office-6-2-release.png?resize=800%2C450&ssl=1
[11]: https://itsfoss.com/install-deb-files-ubuntu/
[12]: https://www.libreoffice.org/download/download/

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 14.04 is Reaching the End of Life. Here are Your Options)
[#]: via: (https://itsfoss.com/ubuntu-14-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 14.04 is Reaching the End of Life. Here are Your Options
======
Ubuntu 14.04 is reaching its end of life on April 30, 2019. This means there will be no security and maintenance updates for Ubuntu 14.04 users beyond this date.
You won't even get updates for installed applications, and you won't be able to install a new application using the apt command or the Software Center without manually modifying sources.list (see the sketch below).
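For reference, here is a minimal sketch of that sources.list workaround, which points an EOL release at Ubuntu's old-releases archive; note that it only keeps apt functional and still provides no security updates (adjust the hostnames if your sources.list uses a regional mirror):
```
$ sudo sed -i -e 's|archive.ubuntu.com|old-releases.ubuntu.com|g' \
              -e 's|security.ubuntu.com|old-releases.ubuntu.com|g' /etc/apt/sources.list
$ sudo apt-get update
```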
Ubuntu 14.04 was released almost five years ago. That's the lifespan of a long term support release of Ubuntu.
[Check your Ubuntu version][1] and see if you are still using Ubuntu 14.04; a quick terminal check follows. If that's the case, whether on desktops or servers, you might be wondering what to do in such a situation.
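A quick way to check from a terminal (lsb_release ships with Ubuntu):
```
$ lsb_release -a    # the Description line shows e.g. "Ubuntu 14.04.5 LTS"
```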
Let me help you out. Here are the options you have in this case.
![][2]
### Upgrade to Ubuntu 16.04 LTS (easiest of them all)
If you have a good internet connection, you can upgrade to Ubuntu 16.04 LTS from within Ubuntu 14.04.
Ubuntu 16.04 is also a long term support release, and it will be supported till April 2021. That means you'll have two years before another upgrade.
I recommend reading this tutorial about [upgrading your Ubuntu version][3]. It was originally written for upgrading Ubuntu 16.04 to Ubuntu 18.04, but the steps are applicable in your case as well; a sketch of the core commands follows.
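In short, after you have backed up your data, the upgrade boils down to bringing the current system fully up to date and then running the release upgrader; a minimal sketch:
```
$ sudo apt-get update && sudo apt-get dist-upgrade  # bring 14.04 fully up to date first
$ sudo do-release-upgrade                           # start the upgrade to the next LTS
```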
### Make a backup, do a fresh install of Ubuntu 18.04 LTS (ideal for desktop users)
The other option is that you make a backup of your Documents, Music, Pictures, Downloads and any other folder where you have kept essential data that you cannot afford to lose.
When I say backup, it simply means copying these folders to an external USB disk. In other words, you should have a way to copy the data back to your computer, because you'll be formatting your system.
I would recommend this option for desktop users. Ubuntu 18.04 is the current long term support release and it will be supported till at least April 2023. You have four long years before you are forced into another upgrade.
### Pay for extended security maintenance and continue using Ubuntu 14.04
This is suited for enterprise/corporate clients. Canonical, the parent company of Ubuntu, provides the Ubuntu Advantage program where customers can pay for phone/email based support among other benefits.
Ubuntu Advantage program users also get the [Extended Security Maintenance][4] (ESM) feature. This program provides security updates even after a given version reaches its end of life.
This comes at a cost. It costs $225 per year per physical node for server users. For desktop users, the price is $150 per year. You can read the detailed pricing of the Ubuntu Advantage program [here][5].
### Still using Ubuntu 14.04?
If you are still using Ubuntu 14.04, you should start exploring your options as you have less than two months to go.
In any case, you must not use Ubuntu 14.04 after 30 April 2019 because your system will be vulnerable due to lack of security updates. Not being able to install new applications will be an additional major pain.
So, what option do you choose here? Upgrading to Ubuntu 16.04 or 18.04 or paying for the ESM?
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-14-04-end-of-life/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/ubuntu-14-04-end-of-life-featured.png?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/upgrade-ubuntu-version/
[4]: https://www.ubuntu.com/esm
[5]: https://www.ubuntu.com/support/plans-and-pricing

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Earliest Linux Distros: Before Mainstream Distros Became So Popular)
[#]: via: (https://itsfoss.com/earliest-linux-distros/)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
The Earliest Linux Distros: Before Mainstream Distros Became So Popular
======
In this throwback history article, we've tried to look back at how some of the earliest Linux distributions evolved and came into being as we know them today.
![][1]
Here we have tried to explore how the idea of popular distros such as Red Hat, Debian, Slackware, SUSE, Ubuntu and many others came into being after the first Linux kernel became available.
As Linux was initially released only in the form of a kernel in 1991, the distros we know today were made possible with the help of numerous collaborators throughout the world, who created the shells, libraries, compilers, and related packages to make it a complete operating system.
### 1\. The first known “distro” by HJ Lu
The way we know Linux distributions today goes back to 1992, when the first known distro-like tools for getting access to Linux were released by HJ Lu. The release consisted of two 5.25" floppy diskettes:
![Linux 0.12 Boot and Root Disks | Photo Credit][2]
* **LINUX 0.12 BOOT DISK** : The “boot” disk was used to boot the system first.
* **LINUX 0.12 ROOT DISK** : The second “root” disk for getting a command prompt for access to the Linux file system after booting.
To install 0.12 on a hard drive, one had to use a hex editor to edit the master boot record (MBR), which was quite a complex process, especially in that era.
Feeling too nostalgic?
You can [install cool-retro-term application][3] that gives you a Linux terminal in the vintage looks of the 90s computers.
### 2\. MCC Interim Linux
![MCC Linux 0.99.14, 1993 | Image Credit][4]
Initially released in the same year as "LINUX 0.12" by Owen Le Blanc of the Manchester Computing Centre in England, MCC Interim Linux was the first Linux distribution for novice users, with a menu-driven installer and end-user/programming tools. Also taking the form of a collection of diskettes, it could be installed on a system to provide a basic text-based environment.
MCC Interim Linux was much more user-friendly than 0.12 and the installation process on a hard drive was much easier and similar to modern ways. It did not require using a hex editor to edit the MBR.
Though it was first released in February 1992, it was also available for download through FTP from November that year.
### 3\. TAMU Linux
![TAMU Linux | Image Credit][5]
TAMU Linux was developed by Aggies at Texas A&M with the Texas A&M Unix & Linux Users Group in May 1992 and was called TAMU 1.0A. It was the first Linux distribution to offer the X Window System instead of just a text-based operating system.
### 4\. Softlanding Linux System (SLS)
![SLS Linux 1.05, 1994 | Image Credit][6]
“Gentle Touchdowns for DOS Bailouts” was their slogan! SLS was released by Peter McDonald in May 1992. SLS was quite widely used and popular during its time and greatly promoted the idea of Linux. But due to a decision by the developers to change the executable format in the distro, users stopped using it.
Many of the popular distros the present community is most familiar with, evolved via SLS. Two of them are:
* **Slackware** : One of the earliest Linux distros, Slackware was created by Patrick Volkerding in 1993. Slackware is based on SLS and was one of the very first Linux distributions.
* **Debian** : An initiative by Ian Murdock, Debian was also released in 1993 after moving on from the SLS model. The very popular Ubuntu distro we know today is based on Debian.
### 5\. Yggdrasil
![LGX Yggdrasil Fall 1993 | Image Credit][7]
Released in December 1992, Yggdrasil was the first distro to give birth to the idea of live Linux CDs. It was developed by Yggdrasil Computing, Inc., founded by Adam J. Richter in Berkeley, California. It could automatically configure itself to the system hardware, "Plug-and-Play" style, which is a common and well-known feature today. The later versions of Yggdrasil included a hack for running any proprietary MS-DOS CD-ROM driver within Linux.
![Yggdrasils Plug-and-Play Promo | Image Credit][8]
Their motto was “Free Software For The Rest of Us”.
In the late 90s, one very popular distro was [Mandriva][9], formed by unifying the French _Mandrake Linux_ distribution (first released in 1998) with the Brazilian _Conectiva Linux_ distribution. It had a release lifetime of 18 months for updates related to Linux and system software, and desktop-based updates were released every year. It also had server versions with 5 years of support. Now we have [OpenMandriva][10].
If you have more nostalgic distros to share from the earliest days of Linux release, please share with us in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/earliest-linux-distros/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/earliest-linux-distros.png?resize=800%2C450&ssl=1
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-0.12-Floppies.jpg?ssl=1
[3]: https://itsfoss.com/cool-retro-term/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/MCC-Interim-Linux-0.99.14-1993.jpg?fit=800%2C600&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/TAMU-Linux.jpg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/SLS-1.05-1994.jpg?ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/LGX_Yggdrasil_CD_Fall_1993.jpg?fit=781%2C800&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Yggdrasil-Linux-Summer-1994.jpg?ssl=1
[9]: https://en.wikipedia.org/wiki/Mandriva_Linux
[10]: https://www.openmandriva.org/

View File

@@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Decentralized Slack Alternative Riot Releases its First Stable Version)
[#]: via: (https://itsfoss.com/riot-stable-release/)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)
Decentralized Slack Alternative Riot Releases its First Stable Version
======
Remember [Riot messenger][1]? It's a decentralized, encrypted, open source messaging application based on the [Matrix protocol][2].
I wrote a [detailed tutorial on using Riot on the Linux desktop][3]. The software was in beta back then. The first stable version, Riot 1.0, was released a few days ago. Wondering what's new?
![][4]
### New Features in Riot 1.0
Let's look at some of the changes introduced in the move to Riot 1.0.
#### New Looks and Branding
![][5]
The first thing you see is the welcome screen, which has a nice background and a refreshed sky-blue and dark-blue logo that is cleaner and clearer than the previous one.
The welcome screen gives you the option to sign in to an existing Riot account on either matrix.org or any other homeserver, or to create a new account. There is also the option to talk with the Riot Bot and browse a room directory listing.
#### Changing Homeservers and Making Your Own Homeserver
![Make your own homeserver][6]
As you can see, this is where you can change the homeserver. As shared before, the idea of Riot is to have [decentralized][7] chat services without foregoing the simplicity that centralized services offer. For those who want to run their own homeserver, you need the new [matrix-synapse 0.99.1.1 reference homeserver][8].
You can find an unofficial list of Matrix homeservers [here][9], although it's far from complete.
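If you'd like to experiment with self-hosting, the reference homeserver is distributed on PyPI. A minimal sketch of the commonly documented install, assuming a Python 3 virtualenv (adjust the server name and paths to your setup):

```
$ python3 -m venv ~/synapse-env
$ ~/synapse-env/bin/pip install matrix-synapse
# generate an initial homeserver.yaml (example.com is a placeholder)
$ ~/synapse-env/bin/python -m synapse.app.homeserver \
    --server-name example.com --config-path homeserver.yaml \
    --generate-config --report-stats=no
```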
#### Internationalization and Languages
One of the more interesting changes is that the UI is now i18n-aware and has been translated into Catalan, Danish, German, and Spanish, along with English (US), which was the default when I installed it. We can hope to see some more improvements in language support going forward.
#### Favoriting a channel
![Favoriting a channel in Riot][10]
One of the things that has changed since last time is how you favorite a channel. Now, as you can see, you select the channel, click on the three vertical dots in it, and then either favorite it or do whatever else you want with it.
#### Making changes to your profile and Settings
![Riot Different settings you can do. ][11]
Just click the drop-down box beside your avatar to get the settings box. Clicking on it gives you a wide variety of settings you can change.
As you can see, there are a lot more choices, and the language is easier than before.
#### Encryption and E2E
![Riot encryption screen][12]
One of the big things Riot has been talked about for is encryption, specifically end-to-end encryption. This is still a work in progress.
The new release focuses on two encryption enhancements: key backup and emoji device verification (still in progress).
With Riot 1.0, you can automatically back up your keys on your server. The backup itself is encrypted with a password so that it is stored securely. With this, you'll never lose your encrypted messages, because you won't lose your encryption key.
You will soon be able to verify your device with emoji, which is easier than matching long strings, isn't it?
**In the end**
Using Riot requires a bit of patience. Once you get the hang of it, there is nothing like it. This decentralized messaging app becomes an important tool in the arsenal of privacy-conscious people.
Riot plays an important part in the continuous effort to keep our data secure and our privacy intact. The new major release makes it even more awesome. What do you think?
--------------------------------------------------------------------------------
via: https://itsfoss.com/riot-stable-release/
作者:[Shirish][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://about.riot.im/
[2]: https://matrix.org/blog/home/
[3]: https://itsfoss.com/riot-desktop/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-messenger.jpg?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-im-web-1.0-welcome-screen.jpg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-change-homeservers.jpg?resize=800%2C420&ssl=1
[7]: https://medium.com/s/story/why-decentralization-matters-5e3f79f7638e
[8]: https://github.com/matrix-org/synapse/releases/tag/v0.99.1.1
[9]: https://www.hello-matrix.net/public_servers.php
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-channel-preferences.jpg?resize=800%2C420&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-settings-1-e1550427251686.png?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/riot-web-1.0-encryption.jpg?fit=800%2C572&ssl=1

View File

@@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevOps for Network Engineers: Linux Foundations New Training Course)
[#]: via: (https://itsfoss.com/devops-for-network-engineers/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
DevOps for Network Engineers: Linux Foundations New Training Course
======
_**The Linux Foundation has launched a [DevOps course for sysadmins][1] and network engineers. They are also offering a limited-time 40% launch discount.**_
DevOps is no longer a buzzword. It has become a necessity for any IT company.
The roles and responsibilities of sysadmins and network engineers have changed as well. They are required to have knowledge of the DevOps tools popular in the IT industry.
If you are a sysadmin or a network engineer, you can no longer laugh off DevOps. It's time to learn new skills to stay relevant in today's rapidly changing IT industry; otherwise, the automation trend might cost you your job.
And who knows it better than the Linux Foundation, the official organization behind the Linux project and the employer of Linux creator Linus Torvalds?
[The Linux Foundation has a number of courses on Linux and related technologies][2] that help you get a job or improve your existing skills at work.
The [latest course offering][1] from the Linux Foundation specifically focuses on sysadmins who would like to familiarize themselves with DevOps tools.
### DevOps for Network Engineers Course
![][3]
[This course][1] is intended for existing sysadmins and network engineers, so you need to have some knowledge of Linux system administration, shell scripting, and Python.
The course will help you with:
* Integrating into a DevOps/Agile environment
* Familiarizing with commonly used DevOps tools
* Collaborating on projects as a DevOps team
* Confidently working with software and configuration files in version control
* Recognizing the roles of SCRUM team members
* Confidently applying Agile principles in an organization
This is the course outline:
* Chapter 1. Course Introduction
* Chapter 2. Modern Project Management
* Chapter 3. The DevOps Process: A Network Engineers Perspective
* Chapter 4. Network Simulation and Testing with [Mininet][4]
* Chapter 5. [OpenFlow][5] and [ONOS][6]
* Chapter 6. Infrastructure as Code ([Ansible][7] Basics)
* Chapter 7. Version Control ([Git][8])
* Chapter 8. Continuous Integration and Continuous Delivery ([Jenkins][9])
* Chapter 9. Using [Gerrit][10] in DevOps
* Chapter 10. Jenkins, Gerrit and Code Review for DevOps
* Chapter 11. The DevOps Process and Tools (Review)
Altogether, you get 25-30 hours of course material. The online course is self-paced and you can access the material for one year from the date of purchase.
_**Unlike most other courses from the Linux Foundation, this is NOT a video course.**_
There is no certification for this course because it is more focused on learning and improving skills.
#### Get the course at a 40% discount (limited time)
The course costs $299, but since it has just launched, they are offering a 40% discount until March 1st, 2019. You can get the discount by using the **DEVOPSNET** coupon code at checkout.
[DevOps for Network Engineers][1]
By the way, if you are interested in open source development, you can benefit from the “[Introduction to Open Source Development, Git, and Linux][11]” video course. You can get a limited-time 50% discount using the **OSDEV50** code at checkout.
Staying relevant is absolutely necessary in any industry, not just IT. Learning new skills that are in demand in your industry is perhaps the best way to do so.
What do you think? What are your views on the current automation trend? How would you go about it?
_Disclaimer: This post contains affiliate links. Please read our_ [_affiliate policy_][12] _for more details._
--------------------------------------------------------------------------------
via: https://itsfoss.com/devops-for-network-engineers/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: http://shrsl.com/1glcb
[2]: https://shareasale.com/r.cfm?b=1074561&u=747593&m=59485&urllink=&afftrack=
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/DevOps-for-Network-Engineers-800x450.png?resize=800%2C450&ssl=1
[4]: http://mininet.org/
[5]: https://en.wikipedia.org/wiki/OpenFlow
[6]: https://onosproject.org/
[7]: https://www.ansible.com/
[8]: https://itsfoss.com/basic-git-commands-cheat-sheet/
[9]: https://jenkins.io/
[10]: https://www.gerritcodereview.com/
[11]: https://shareasale.com/r.cfm?b=1193750&u=747593&m=59485&urllink=&afftrack=
[12]: https://itsfoss.com/affiliate-policy/

View File

@@ -0,0 +1,125 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mageia Linux Is a Modern Throwback to the Underdog Days)
[#]: via: (https://www.linux.com/BLOG/LEARN/2019/3/MAGEIA-LINUX-MODERN-THROWBACK-UNDERDOG-DAYS)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
Mageia Linux Is a Modern Throwback to the Underdog Days
======
![Welcome to Mageia][1]
The Mageia Welcome App is a boon for new Linux users.
[Used with permission][2]
I've been using Linux long enough to remember Linux Mandrake. I recall, at one of my first-ever Linux conventions, hanging out with the MandrakeSoft crew and being starstruck to think that they were creating a Linux distribution that was sure to bring about world domination for the open source platform.
Well, that didn't happen. In fact, Linux Mandrake didn't even stand the test of time. It was renamed and rebranded as Mandriva. Mandriva retained its popularity but eventually came to a halt in 2011. The company disbanded, sending all those star developers off to other projects. Of course, rising from the ashes of Mandrake Linux came the likes of [OpenMandriva][3], as well as another distribution called [Mageia Linux][4].
Like OpenMandriva, Mageia Linux is a fork of Mandriva. It was created (by a group of former Mandriva employees) in 2010 and first released in 2011, so there was next to no downtime between the end of Mandriva and the release of Mageia. Since then, Mageia has existed in the shadow of bigger, more popular flavors of Linux (e.g., Ubuntu, Mint, Fedora, Elementary OS, etc.), but it has never faltered. As of this writing, Mageia sits at number 26 on the [Distrowatch][5] Page Hit Ranking chart and is currently on release 6.1.
### What Sets Mageia Apart?
This question has become quite important when looking at Linux distributions, considering just how many distros there are, many of which are hard to tell apart. If you've seen one KDE, GNOME, or Xfce distribution, you've seen them all, right? Anyone who's used Linux enough knows this statement is not even remotely true. For many distributions, though, the differences lie in the subtleties. It's not about what you do with the desktop; it's how you put everything together to improve the user experience.
Mageia Linux defaults to the KDE desktop and does as good a job as any other distribution at presenting KDE to users. But before you start using KDE, you should note some differences between Mageia and other distributions. To start, the installation is quite simple, but slightly askew from what you might expect. As with most modern distributions, you boot up the live instance and click on the Install icon (Figure 1).
![Installing Mageia][6]
Figure 1: Installing Mageia from the Live instance.
[Used with permission][2]
Once you've launched the installation app, the process is fairly straightforward, although not quite as simple as some other versions of Linux. New users might hesitate when presented with the partition choice between “Use free space” and “Custom disk partition” (remember, I'm talking about new users here). This type of user might prefer slightly simpler wording. Consider this: What if you were presented (at the partition section) with two choices:
* Basic Install
* Custom Install
The Basic install path would choose a fairly standard set of options (e.g., using the whole disk for installation and placing the bootloader in the proper/logical place). In contrast, the Custom install would allow the user to install in a non-default fashion (for dual boot, etc.) and choose where the bootloader goes and which options to apply.
The next potentially confusing step (again, for new users) is the bootloader (Figure 2). For those who have installed Linux before, this option is a no-brainer. For new users, even understanding what a bootloader does can be a bit of an obstacle.
![bootloader][7]
Figure 2: Configuring the Mageia bootloader.
[Used with permission][2]
The bootloader configuration screen also allows you to password-protect GRUB2. Because of the layout of this screen, it could be mistaken for the root user password. It's not. If you don't want to password-protect GRUB2, leave this blank. On the final installation screen (Figure 3), you can set any bootloader options you might want. Once again, we find a window that could confuse new users.
![bootloader options][8]
Figure 3: Advanced bootloader options can be configured here.
[Used with permission][2]
Click Finish and the installation will complete. You might have noticed the absence of user configuration or root user password options. With the first stage of the installation complete, you reboot the machine, remove the installer media, and (when the machine reboots) you'll be prompted to configure both the root user password and a standard user account (Figure 4).
![Configuring your users][9]
Figure 4: Configuring your users.
[Used with permission][2]
And that's all there is to the Mageia installation.
### Welcome to Mageia
Once you log into Mageia, you'll be greeted by something every Linux distribution should use—a welcome app (Figure 5).
![welcome app][10]
Figure 5: The Mageia welcome app is a new user's best friend.
[Used with permission][2]
From this welcome app, you can get information about the distribution, get help, and join communities. The importance of greeting users this way at login cannot be overstated. When new users log into Linux for the first time, they want to know that help is available should they need it. Mageia Linux has done an outstanding job with this feature. Granted, all this app does is serve as a means to point users to various websites, but it's important information for users to have at the ready.
Beyond the welcome app, the Mageia Control Center (Figure 6) also helps Mageia stand out. This one-stop shop is where users can take care of installing/updating software, configuring media sources for installation, configuring update frequency, managing/configuring hardware, configuring network devices (e.g., VPNs, proxies, and more), configuring system services, viewing logs, opening an administrator console, creating network shares, and so much more. This is as close to the openSUSE YaST tool as you'll find (without using either SUSE or openSUSE).
![Control Center][11]
Figure 6: The Mageia Control Center is an outstanding system management tool.
[Used with permission][2]
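For those who prefer a terminal, the same software-management tasks can also be driven by Mageia's underlying urpmi tool. A quick sketch of common operations (the package name is just an example):

```
# install a package from the configured media
$ sudo urpmi gimp
# refresh all media and apply available updates
$ sudo urpmi --auto-update
```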
Beyond those two tools, you'll find everything else you need to work. Mageia Linux comes with the likes of LibreOffice, Firefox, KMail, GIMP, Clementine, VLC, and more. Out of the box, you'd be hard-pressed to find another tool you need to install to get your work done. It's that complete a distribution.
### Target Audience
Figuring out the Mageia Linux target audience is a tough question to answer. If new users can get past the somewhat confusing installation (which isn't really that challenging, just slightly misleading), using Mageia Linux is a dream.
The slick, barely modified KDE desktop, combined with the welcome app and Control Center, makes for a desktop Linux that will let users of all skill levels feel perfectly at home. If the developers could tighten up the wording in the installer, Mageia Linux could be one of the greatest new-user Linux experiences available. Until then, new users should make sure they understand what they're getting into with the installation portion of this take on the Linux platform.
--------------------------------------------------------------------------------
via: https://www.linux.com/BLOG/LEARN/2019/3/MAGEIA-LINUX-MODERN-THROWBACK-UNDERDOG-DAYS
作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia-main.jpg?itok=ZmkbMxfM (Welcome to Mageia)
[2]: /LICENSES/CATEGORY/USED-PERMISSION
[3]: https://www.openmandriva.org/
[4]: https://www.mageia.org/en/
[5]: https://distrowatch.com/
[6]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_1.jpg?itok=RYXPU70j (Installing Mageia)
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_2.jpg?itok=m2IPxgA4 (bootloader)
[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_3.jpg?itok=Bs2PPrMF (bootloader options)
[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_4.jpg?itok=YZBIZ0Ua (Configuring your users)
[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_5.jpg?itok=gYcTfUKv (welcome app)
[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/mageia_6.jpg?itok=eSl2qpPp (Control Center)

View File

@@ -1,73 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Sweet Home 3D: An open source tool to help you decide on your dream home)
[#]: via: (https://opensource.com/article/19/3/tool-find-home)
[#]: author: (Jeff Macharyas (Community Moderator) )
Sweet Home 3D: An open source tool to help you decide on your dream home
======
Interior design application makes it easy to render your favorite house—real or imaginary.
![Houses in a row][1]
I recently accepted a new job in Virginia. Since my wife was working and watching our house in New York until it sold, it was my responsibility to go out and find a new house for us and our cat. A house that she would not see until we moved into it!
I contracted with a real estate agent and looked at a few houses, taking many pictures and writing down illegible notes. At night, I would upload the photos into a Google Drive folder, and my wife and I would review them simultaneously over the phone while I tried to remember whether the room was on the right or the left, whether it had a fan, etc.
Since this was a rather tedious and not very accurate way to present my findings, I went in search of an open source solution that would better illustrate what our future dream house would look like, one that wouldn't hinge on my fuzzy memory and blurry photos.
[Sweet Home 3D][2] did exactly what I wanted it to do. Sweet Home 3D is available on SourceForge and released under the GNU General Public License. The [website][3] is very informative, and I was able to get it up and running in no time. Sweet Home 3D was developed by Paris-based Emmanuel Puybaret of eTeks.
### Hanging the drywall
I downloaded Sweet Home 3D onto my MacBook Pro and added a PNG version of a flat floorplan of a house to use as a background base map.
From there, it was a simple matter of using the Rooms palette to trace the pattern and set the "real life" dimensions. After I mapped the rooms, I added the walls, which I could customize by color, thickness, height, etc.
![Sweet Home 3D floorplan][5]
Now that I had the "drywall" built, I downloaded various pieces of "furniture" from a large array that includes actual furniture as well as doors, windows, shelves, and more. Each item downloads as a ZIP file, so I created a folder of all my uncompressed pieces. I could customize each piece of furniture, and repetitive items, such as doors, were easy to copy-and-paste into place.
Once I had all my walls and doors and windows in place, I used the application's 3D view to navigate through the house. Drawing upon my photos and memory, I made adjustments to all the objects until I had a close representation of the house. I could have spent more time modifying the house by adding textures, additional furniture, and objects, but I got it to the point I needed.
![Sweet Home 3D floorplan][7]
After I finished, I exported the plan as an OBJ file, which can be opened in a variety of programs, such as [Blender][8] and Preview on the Mac, to spin the house around and examine it from various angles. The Video function was most useful, as I could create a starting point, draw a path through the house, and record the "journey." I exported the video as a MOV file, which I opened and viewed on the Mac using QuickTime.
My wife was able to see (almost) exactly what I saw, and we could even start arranging furniture ahead of the move, too. Now, all I have to do is load up the moving truck and head south.
Sweet Home 3D will also prove useful at my new job. I was looking for a way to improve the map of the college's buildings and was planning to just re-draw it in [Inkscape][9] or Illustrator or something. However, since I have the flat map, I can use Sweet Home 3D to create a 3D version of the floorplan and upload it to our website to make finding the bathrooms so much easier!
### An open source crime scene?
An interesting aside: according to the [Sweet Home 3D blog][10], "the French Forensic Police Office (Scientific Police) recently chose Sweet Home 3D as a tool to design plans [to represent roads and crime scenes]. This is a concrete application of the recommendation of the French government to give the preference to free open source solutions."
This is one more bit of evidence of how open source solutions are being used by citizens and governments to create personal projects, solve crimes, and build worlds.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/tool-find-home
作者:[Jeff Macharyas (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
[2]: https://sourceforge.net/projects/sweethome3d/
[3]: http://www.sweethome3d.com/
[4]: /file/426441
[5]: https://opensource.com/sites/default/files/uploads/virginia-house-create-screenshot.png (Sweet Home 3D floorplan)
[6]: /file/426451
[7]: https://opensource.com/sites/default/files/uploads/virginia-house-3d-screenshot.png (Sweet Home 3D floorplan)
[8]: https://opensource.com/article/18/5/blender-hotkey-cheat-sheet
[9]: https://opensource.com/article/19/1/inkscape-cheat-sheet
[10]: http://www.sweethome3d.com/blog/2018/12/10/customization_for_the_forensic_police.html

View File

@@ -0,0 +1,150 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Lets try dwm — dynamic window manager)
[#]: via: (https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/)
[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)
Let's try dwm — dynamic window manager
======
![][1]
If you like efficiency and minimalism, and are looking for a new window manager for your Linux desktop, you should try _dwm_ — dynamic window manager. Written in under 2000 standard lines of code, dwm is an extremely fast yet powerful and highly customizable window manager.
You can dynamically choose between tiling, monocle, and floating layouts, organize your windows into multiple workspaces using tags, and quickly navigate through them using keyboard shortcuts. This article helps you get started with dwm.
## **Installation**
To install dwm on Fedora, run:
```
$ sudo dnf install dwm dwm-user
```
The _dwm_ package installs the window manager itself, and the _dwm-user_ package significantly simplifies configuration, as explained later in this article.
Additionally, to be able to lock the screen when needed, we'll also install _slock_ — a simple X display locker.
```
$ sudo dnf install slock
```
However, you can use a different one based on your personal preference.
## **Quick start**
To start dwm, choose the _dwm-user_ option on the login screen.
![][2]
After you log in, you'll see a very simple desktop. In fact, the only thing there is a bar at the top listing the nine tags that represent workspaces, plus a _[]=_ symbol that represents the layout of your windows.
### Launching applications
Before looking into the layouts, first launch some applications so you can play with the layouts as you go. Apps can be started by pressing _Alt+p_ and typing the name of the app followed by _Enter_. There's also a shortcut, _Alt+Shift+Enter_, for opening a terminal.
Now that some apps are running, have a look at the layouts.
### Layouts
There are three layouts available by default: the tiling layout, the monocle layout, and the floating layout.
The tiling layout, represented by _[]=_ on the bar, organizes windows into two main areas: master on the left, and stack on the right. You can activate the tiling layout by pressing _Alt+t._
![][3]
The idea behind the tiling layout is that you have your primary window in the master area while still seeing the other ones in the stack. You can quickly switch between them as needed.
To swap windows between the two areas, hover your mouse over one in the stack area and press _Alt+Enter_ to swap it with the one in the master area.
![][4]
The monocle layout, represented by _[N]_ on the top bar, makes your primary window take the whole screen. You can switch to it by pressing _Alt+m_.
Finally, the floating layout lets you move and resize your windows freely. The shortcut for it is _Alt+f_ and the symbol on the top bar is _><>_.
### Workspaces and tags
Each window is assigned to a tag (1-9) listed on the top bar. To view a specific tag, either click on its number using your mouse or press _Alt+1..9_. You can even view multiple tags at once by clicking on their numbers using the secondary mouse button.
Windows can be moved between different tags by highlighting them with your mouse and pressing _Alt+Shift+1..9_.
## **Configuration**
To keep dwm as minimalistic as possible, it doesn't use typical configuration files. Instead, you modify a C header file representing the configuration and recompile it. But don't worry: in Fedora, it's as simple as editing one file in your home directory; everything else happens in the background thanks to the _dwm-user_ package provided by the maintainer in Fedora.
First, you need to copy the file into your home directory using a command similar to the following:
```
$ mkdir ~/.dwm
$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h
```
You can get the exact path by running _man dwm-start._
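If you'd rather not look up the exact version, a shell glob usually works too, since only one dwm source tree is installed (a sketch assuming the _dwm-user_ layout described above):

```
$ mkdir -p ~/.dwm
$ cp /usr/src/dwm-*/config.def.h ~/.dwm/config.h
```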
Second, just edit the _~/.dwm/config.h_ file. As an example, let's configure a new shortcut to lock the screen by pressing _Alt+Shift+L_.
Considering we've installed the _slock_ package mentioned earlier in this post, we need to add the following two lines to the file to make it work:
Under the _/* commands */_ comment, add:
```
static const char *slockcmd[] = { "slock", NULL };
```
And the following line into _static Key keys[]_ :
```
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
```
In the end, it should look as follows (the two added lines are the _slockcmd_ definition and the XK_l key binding):
```
...
/* commands */
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
static const char *termcmd[] = { "st", NULL };
static const char *slockcmd[] = { "slock", NULL };
static Key keys[] = {
/* modifier key function argument */
{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },
{ MODKEY, XK_p, spawn, {.v = dmenucmd } },
{ MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } },
...
```
Save the file.
Finally, just log out by pressing _Alt+Shift+q_ and log in again. The scripts provided by the _dwm-user_ package will recognize that you have changed the _config.h_ file in your home directory and recompile dwm on login. And because dwm is so tiny, it's fast enough that you won't even notice it.
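For reference, outside of Fedora's _dwm-user_ helper scripts, dwm is traditionally rebuilt by hand after editing _config.h_ (a sketch; the path is wherever you keep the dwm sources):

```
# the classic suckless workflow: edit config.h, then rebuild and reinstall
$ cd ~/src/dwm
$ make
$ sudo make clean install
```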
You can try locking your screen now by pressing _Alt+Shift+L_, and then logging back in by typing your password and pressing _Enter_.
## **Conclusion**
If you like minimalism and want a very fast yet powerful window manager, dwm might be just what you've been looking for. However, it probably isn't for beginners. There might be a lot of additional configuration you'll need to do to make it work just the way you like.
To learn more about dwm, see the projects homepage at <https://dwm.suckless.org/>.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/
作者:[Adam Šamalík][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/asamalik/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-magazine-image-816x345.png
[2]: https://fedoramagazine.org/wp-content/uploads/2019/03/choosing-dwm-1024x469.png
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-desktop-1024x593.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-2019-03-15-at-11.12.32-1024x592.png

View File

@@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development)
[#]: via: (https://itsfoss.com/nvidia-jetson-nano/)
[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)
NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development
======
At the [GPU Technology Conference][1], NVIDIA announced the [Jetson Nano Module][2] and the [Jetson Nano Developer Kit][3]. Compared to other Jetson boards, which cost between $299 and $1099, the Jetson Nano carries a low price of $99. This puts it within the reach of many developers, educators, and researchers who could not spend hundreds of dollars to get such a product.
![The Jetson Nano Development Kit \(left\) and the Jetson Nano Module \(right\)][4]
### Bringing back AI development from cloud
In the last few years, we have seen a lot of [advances in AI research][5]. Traditionally, AI computing was almost always done in the cloud, where there was plenty of processing power available.
Recently, there's been a trend toward shifting this computation away from the cloud and doing it locally. This is called [edge computing][6]. Embedded products that could handle the complex calculations required for AI and machine learning used to be sparse, but we're now seeing a great explosion in this product segment.
Products like the [SparkFun Edge][7] and the [OpenMV Board][8] are good examples. The Jetson Nano is NVIDIA's latest offering in this market. When connected to your system, it supplies the processing power needed for machine learning and AI tasks without having to rely on the cloud.
This is great for privacy as well as for saving internet bandwidth. It is also more secure, since your data always stays on the device itself.
### Jetson Nano focuses on smaller AI projects
![Jetson Nano powered JetBot][9]
While previously released Jetson boards like the [TX2][10] and [AGX Xavier][11] were used in products like drones and cars, the Jetson Nano targets smaller projects: projects that need processing power that boards like the [Raspberry Pi][12] cannot provide.
Did you know?
NVIDIA's JetPack SDK provides a complete desktop Linux environment based on Ubuntu 18.04 LTS. In other words, the Jetson Nano is powered by Ubuntu Linux.
### NVIDIA Jetson Nano Specifications
For $99, you get 472 GFLOPS of processing power from 128 NVIDIA Maxwell architecture CUDA cores, a quad-core ARM A57 processor, 4GB of LPDDR4 RAM, 16GB of on-board storage, and 4K video encode/decode capabilities. (A back-of-the-envelope derivation of that GFLOPS figure follows the table below.) The port selection is also pretty decent, with the Nano offering Gigabit Ethernet, a MIPI camera connector, display outputs, and a couple of USB ports (1×3.0, 3×2.0). The full range of specifications can be found [here][13].
Component | Specification
---|---
CPU | Quad-core ARM® Cortex®-A57 MPCore processor
GPU | NVIDIA Maxwell™ architecture with 128 NVIDIA CUDA® cores
RAM | 4 GB 64-bit LPDDR4
Storage | 16 GB eMMC 5.1 flash
Camera | 12 lanes (3×4 or 4×2) MIPI CSI-2 DPHY 1.1 (1.5 Gbps)
Connectivity | Gigabit Ethernet
Display ports | HDMI 2.0 and DP 1.2
USB ports | 1x USB 3.0 and 3x USB 2.0
Other | 1x 1/2/4-lane PCIe, 1x SDIO / 2x SPI / 6x I2C / 2x I2S / GPIOs
Size | 69.6 mm x 45 mm
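Where does the 472 GFLOPS figure come from? It is the FP16 peak, and it can be reconstructed from the core count if you assume the commonly cited 921.6 MHz maximum GPU clock (the clock speed is not stated in this article, so treat this as a back-of-the-envelope sketch):

```
# 128 CUDA cores x 2 FLOPs per FMA x 2 (FP16) x 0.9216 GHz
$ python3 -c 'print(128 * 2 * 2 * 0.9216, "GFLOPS")'
471.8592 GFLOPS
```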
Along with good hardware, you get support for the majority of popular AI frameworks, like TensorFlow, PyTorch, and Keras. It also has support for NVIDIA's [JetPack][14] and [DeepStream][15] SDKs, same as the more expensive TX2 and AGX boards.
“Jetson Nano makes AI more accessible to everyone — and is supported by the same underlying architecture and software that powers our nation's supercomputers. Bringing AI to the maker movement opens up a whole new world of innovation, inspiring people to create the next big thing,” said Deepu Talla, VP and GM of Autonomous Machines at NVIDIA.
[Subscribe to It's FOSS YouTube Channel][16]
**What do you think of Jetson Nano?**
The availability of the Jetson Nano differs from country to country.
The [Intel Neural Compute Stick][17] is another such accelerator, competitively priced at $79. It's good to see competition stirring at these lower price points from the big manufacturers.
I'm looking forward to getting my hands on the product if possible.
What do you guys think about a product like this? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/nvidia-jetson-nano/
作者:[Atharva Lele][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/atharva/
[b]: https://github.com/lujun9972
[1]: https://www.nvidia.com/en-us/gtc/
[2]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/
[3]: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/jetson-nano-family-press-image-hd.jpg?ssl=1
[5]: https://itsfoss.com/nanotechnology-open-science-ai/
[6]: https://en.wikipedia.org/wiki/Edge_computing
[7]: https://www.sparkfun.com/news/2886
[8]: https://openmv.io/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nvidia_jetson_bot.jpg?ssl=1
[10]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2/
[11]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/
[12]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
[13]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/#specifications
[14]: https://developer.nvidia.com/embedded/jetpack
[15]: https://developer.nvidia.com/deepstream-sdk
[16]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[17]: https://software.intel.com/en-us/movidius-ncs-get-started

View File

@@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top 10 New Linux SBCs to Watch in 2019)
[#]: via: (https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019)
[#]: author: (Eric Brown https://www.linux.com/users/ericstephenbrown)
Top 10 New Linux SBCs to Watch in 2019
======
![UP Xtreme][1]
Aaeon's Linux-ready UP Xtreme SBC.
[Used with permission][2]
A recent [Global Market Insights report][3] projects that the single board computer market will grow from $600 million in 2018 to $1 billion by 2025. Yet, you don't need to read a market research report to realize the SBC market is booming. Driven by the trends toward IoT and AI-enabled edge computing, new boards keep rolling off the assembly lines, many of them [tailored for highly specific applications][4].
Much of the action has been in Linux-compatible boards, including the insanely popular Raspberry Pi. The number of different vendors and models has exploded thanks in part to the rise of [community-backed, open-spec SBCs][5].
Here we examine 10 of the most intriguing Linux-driven SBCs among the many products announced in the four weeks that bookended the recent [Embedded World show][6] in Nuremberg. (There was also some [interesting Linux software news][7] at the show.) Two of the SBCs—the Intel Whiskey Lake based UP Xtreme and the Nvidia Jetson Nano driven Jetson Nano Dev Kit—were announced only this week.
Our mostly open source list also includes a few commercial boards. Processors range from the modest, Cortex-A7 driven STM32MP1 to the high-powered Whiskey Lake and Snapdragon 845. Mid-range models include Google's i.MX8M powered Coral Dev Board and a similarly AI-enhanced, TI AM5729 based BeagleBone AI. Deep learning acceleration chips—and standard RPi 40-pin or 96Boards expansion connectors—are common themes among most of these boards.
The SBCs are listed in reverse chronological order according to their announcement dates. The links in the product names go to recent LinuxGizmos reports, which link to vendor product pages.
**[UP Xtreme][8]** —The latest in Aaeon's line of community-backed SBCs taps Intel's 8th Gen Whiskey Lake-U CPUs, which maintain a modest 15W TDP while boosting performance with up to quad-core, dual-threaded configurations. Depending on when it ships, this Linux-ready model will likely be the most powerful community-backed SBC around, and possibly the most expensive.
The SBC supports up to 16GB DDR4 and 128GB eMMC and offers 4K displays via HDMI, DisplayPort, and eDP. Other features include SATA, 2x GbE, 4x USB 3.0, and 40-pin “HAT” and 100-pin GPIO add-on board connectors. You also get mini-PCIe and dual M.2 slots that support wireless modems and more SATA options. The slots also support Aaeon's new AI Core X modules, which offer Intel's latest Movidius Myriad X VPUs for 1TOPS of neural processing acceleration.
**[Jetson Nano Dev Kit][9]** —Nvidia just announced a low-end Jetson Nano compute module that's sort of like a smaller (70 x 45mm) version of the old Jetson TX1. It offers the same 4x Cortex-A57 cores but has an even lower-end 128-core Maxwell GPU. The module has half the RAM and flash (4GB/16GB) of the TX1 and TX2, and no WiFi/Bluetooth radios. Like the hexa-core Jetson TX2, however, it supports 4K video, and the GPU offers similar CUDA-X deep learning libraries.
Although Nvidia has backed all its Linux-driven Jetson modules with development kits, the Jetson Nano Dev Kit is its first community-backed, maker-oriented kit. It does not appear to offer open specifications, but it costs only $99 and there's a forum and other community resources. Many of the specs match or surpass the Raspberry Pi 3B+, including the addition of a 40-pin GPIO. Highlights include an M.2 slot, GbE with Power-over-Ethernet, HDMI 2.0 and eDP links, and 4x USB 3.0 ports.
**[Coral Dev Board][10]** —Google's very first Linux maker board arrived earlier this month, featuring an NXP i.MX8M and Google's Edge TPU AI chip—a stripped-down version of Google's TPU that is designed to run TensorFlow Lite ML models. The $150, Raspberry Pi-like Coral Dev Board was joined by a similarly Edge TPU-enabled Coral USB Accelerator USB stick. These will be followed by an Edge TPU based Coral PCIe Accelerator and a Coral SOM compute module. All these devices are backed with schematics, community resources, and other open-spec resources.
The Coral Dev Board combines the Edge TPU chip with NXP's quad-core, 1.5GHz Cortex-A53 i.MX8M with a 3D Vivante GPU/VPU and a Cortex-M4 MCU. The SBC is even more like the Raspberry Pi 3B+ than Nvidia's Dev Kit, mimicking the size and much of the layout and I/O, including the 40-pin GPIO connector. Highlights include 4K-ready GbE, HDMI 2.0a, 4-lane MIPI-DSI and CSI, and USB 3.0 host and Type-C ports.
**[SBC-C43][11]** —Seco's commercial, industrial-temperature SBC-C43 board is the first SBC based on NXP's high-end, up to hexa-core i.MX8. The 3.5-inch SBC supports the i.MX8 QuadMax with 2x Cortex-A72 cores and 4x Cortex-A53 cores, the QuadPlus with a single Cortex-A72 and 4x -A53, and the Quad with no -A72 cores and 4x -A53. There are also 2x Cortex-M4F real-time cores and 2x Vivante GPU/VPU cores. Yocto Project, Wind River Linux, and Android are available.
The feature-rich SBC-C43 supports up to 8GB DDR4 and 32GB eMMC, both soldered for greater reliability. Highlights include dual GbE, HDMI 2.0a in and out ports, WiFi/Bluetooth, and a variety of industrial interfaces. Dual M.2 slots support SATA, wireless, and more.
**[Nitrogen8M_Mini][12]** —This Boundary Devices cousin to the earlier, i.MX8M based Nitrogen8M is available for $135, with shipments due this spring. The open-spec Nitrogen8M_Mini is the first SBC to feature NXP's new i.MX8M Mini SoC. The Mini uses a more advanced 14LPC FinFET process than the i.MX8M, resulting in lower power consumption and higher clock rates for both the 4x Cortex-A53 (1.5GHz to 2GHz) and Cortex-M4 (400MHz) cores. The drawback is that you're limited to HD video resolution.
Supported with Linux and Android, the Nitrogen8M_Mini ships with 2GB to 4GB LPDDR4 RAM and 8GB to 128GB eMMC. MIPI-DSI and -CSI interfaces support optional touchscreens and cameras, respectively. A GbE port is standard and PoE and WiFi/BT are optional. Other features include 3x USB ports, one or two PCIe slots, and optional -40 to 85°C support. A Nitrogen8M_Mini SOM module with similar specs is also in the works.
**[Pine H64 Model B][13]** —Pine64's latest hacker board was teased in late January as part of an [ambitious roll-out][14] of open source products, including a laptop, tablet, and phone. The Raspberry Pi semi-clone, which recently went on sale for $39 (2GB) or $49 (3GB), showcases the high-end but low-cost Allwinner H64. The quad -A53 SoC is notable for its 4K video with HDR support.
The Pine H64 Model B offers up to 128GB eMMC storage, WiFi/BT, and a GbE port. I/O includes 2x USB 2.0 and single USB 3.0 and HDMI 2.0a ports plus SPDIF audio and an RPi-like 40-pin connector. Images include Android 7.0 and an “in progress” Armbian Debian Stretch.
**[AI-ML Board][15]** —Arrow unveiled this i.MX8X based SBC early this month along with a similarly 96Boards CE Extended format, i.MX8M based Thor96 SBC. While there are plenty of i.MX8M boards these days, we're more intrigued by the lowest-end i.MX8X member of the i.MX8 family. The AI-ML Board is the first SBC we've seen to feature the low-power i.MX8X, which offers up to 4x 64-bit, 1.2GHz Cortex-A35 cores, a 4-shader, 4K-ready Vivante GPU/VPU, a Cortex-M4F chip, and a Tensilica HiFi 4 DSP.
The open-spec, Yocto Linux driven AI-ML Board is targeted at low-power, camera-equipped applications such as drones. The board has 2GB LPDDR4, Ethernet, WiFi/BT, and a pair each of MIPI-DSI and USB 3.0 ports. Cameras are controlled via the 96Boards 60-pin, high-power GPIO connector, which is joined by the usual 40-pin low-power link. The launch is expected June 1.
**[BeagleBone AI][16]** —The long-awaited successor to the Cortex-A8 AM3358 based BeagleBone family of boards advances to TI's dual-core Cortex-A15 AM5729, with similar PowerVR GPU and MCU-like PRU cores. The real story, however, is the AI firepower enabled by the SoC's dual TI C66x DSPs and four embedded-vision-engine (EVE) neural processing cores. BeagleBoard.org claims that calculations for computer-vision models using EVE run at 8x the performance per watt compared to the similar, but EVE-less, AM5728. The EVE and DSP chips are supported through a TIDL machine learning OpenCL API and pre-installed tools.
Due to go on sale in April for about $100, the Linux-powered BeagleBone AI is based closely on the BeagleBone Black and offers backward header, mechanical, and software compatibility. It doubles the RAM to 1GB and quadruples the eMMC storage to 16GB. You now get GbE and high-speed WiFi, as well as a USB Type-C port.
**[Robotics RB3 Platform (DragonBoard 845c)][17]** —Qualcomm and Thundercomm are initially launching their 96Boards CE form factor, Snapdragon 845-based upgrade to the Snapdragon 820-based [DragonBoard 820c][18] SBC as part of a Qualcomm Robotics RB3 Platform. Yet, 96Boards.org has already posted a [DragonBoard 845c product page][17], and we imagine the board will be available in the coming months without all the robotics bells and whistles. A compute module version is also said to be in the works.
The 10nm, octa-core, “Kryo” based Snapdragon 845 is one of the most powerful Arm SoCs around. It features an advanced Adreno 630 GPU with “eXtended Reality” (XR) VR technology and a Hexagon 685 DSP with a third-gen Neural Processing Engine (NPE) for AI applications. On the RB3 kit, the board's expansion connectors are pre-stocked with Qualcomm cellular and robotics camera mezzanines. The $449-and-up kit also includes standard 4K video and tracking cameras, and there are optional Time-of-Flight (ToF) and stereo SLM depth cameras. The SBC runs Linux with ROS (Robot Operating System).
**[Avenger96][19]** —Like Arrow's AI-ML Board, the Avenger96 is a 96Boards CE Extended SBC aimed at low-power IoT applications. Yet, the SBC features an even more power-efficient (and slower) SoC: ST's recently announced [STM32MP153][20]. The Avenger96 runs Linux on the high-end STM32MP157 model, which has dual 650MHz Cortex-A7 cores, a Cortex-M4, and a Vivante 3D GPU.
This sandwich-style board features an Avenger96 module with the STM32MP157 SoC, 1GB of DDR3L, 2MB of SPI flash, and a power management IC. It's unclear whether the 8GB eMMC and WiFi-ac/Bluetooth 4.2 module are on the module or the carrier board. The Avenger96 SBC is further equipped with GbE, HDMI, micro-USB OTG, and dual USB 2.0 host ports. There's also a microSD slot and the usual 40- and 60-pin GPIO connectors. The board is expected to go on sale in April.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019
作者:[Eric Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/ericstephenbrown
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/aaeon_upxtreme.jpg?itok=QnwAt3mp (UP Xtreme)
[2]: /LICENSES/CATEGORY/USED-PERMISSION
[3]: https://www.globenewswire.com/news-release/2019/02/13/1724445/0/en/Single-Board-Computer-Market-to-surpass-1bn-by-2025-Global-Market-Insights-Inc.html
[4]: https://www.linux.com/blog/2019/1/linux-hacker-board-trends-2018-and-beyond
[5]: http://linuxgizmos.com/catalog-of-122-open-spec-linux-hacker-boards/
[6]: https://www.embedded-world.de/en
[7]: https://www.linux.com/news/2019/2/embedded-linux-software-highlights-embedded-world
[8]: http://linuxgizmos.com/latest-up-board-combines-whiskey-lake-with-ai-core-x-modules/
[9]: http://linuxgizmos.com/trimmed-down-jetson-nano-modules-ships-on-99-linux-dev-kit/
[10]: http://linuxgizmos.com/google-launches-i-mx8m-dev-board-with-edge-tpu-ai-chip/
[11]: http://linuxgizmos.com/first-i-mx8-quadmax-sbc-breaks-cover/
[12]: http://linuxgizmos.com/open-spec-nitrogen8m_mini-sbc-ships-along-with-new-mini-based-som/
[13]: http://linuxgizmos.com/revised-allwiner-h64-based-pine-h64-sbc-has-rpi-size-and-gpio/
[14]: https://www.linux.com/blog/2019/2/pine64-launch-open-source-phone-laptop-tablet-and-camera
[15]: http://linuxgizmos.com/arrows-latest-96boards-sbcs-tap-i-mx8x-and-i-mx8m/
[16]: http://linuxgizmos.com/beaglebone-ai-sbc-features-dual-a15-soc-with-eve-ai-cores/
[17]: http://linuxgizmos.com/robotics-kit-runs-linux-on-new-dragonboard-845c-96boards-sbc/
[18]: http://linuxgizmos.com/debian-driven-dragonboard-expands-to-96boards-extended-spec/
[19]: http://linuxgizmos.com/sandwich-style-96boards-sbc-runs-linux-on-sts-new-cortex-a7-m4-soc/
[20]: https://www.linux.com/news/2019/2/st-spins-its-first-linux-powered-cortex-soc

View File

@@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to set up Fedora Silverblue as a gaming station)
[#]: via: (https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/)
[#]: author: (Michal Konečný https://fedoramagazine.org/author/zlopez/)
How to set up Fedora Silverblue as a gaming station
======
![][1]
This article gives you a step-by-step guide to turning your Fedora Silverblue into an awesome gaming station with the help of Flatpak and Steam.
Note: Do you need the NVIDIA proprietary driver on Fedora 29 Silverblue for a complete experience? Check out [this blog post][2] for pointers.
### Add the Flathub repository
This process starts with a clean Fedora 29 Silverblue installation with a user already created for you.
First, go to <https://flathub.org/home> and enable the Flathub repository on your system. To do this, click the _Quick setup_ button on the main page.
![Quick setup button on flathub.org/home][3]
This redirects you to <https://flatpak.org/setup/> where you should click on the Fedora icon.
![Fedora icon on flatpak.org/setup][4]
Now you just need to click on _Flathub repository file._ Open the downloaded file with the _Software Install_ application.
![Flathub repository file button on flatpak.org/setup/Fedora][5]
The GNOME Software application opens. Next, click on the _Install_ button. This action needs _sudo_ permissions, because it installs the Flathub repository for use by the whole system.
![Install button in GNOME Software][6]
### Install the Steam flatpak
You can now search for the _Steam_ flatpak in _GNOME Software_. If you can't find it, try rebooting — or logging out and back in — in case _GNOME Software_ didn't read the metadata. That happens automatically when you next log in.
![Searching for Steam][7]
Click on the _Steam_ row and the _Steam_ page opens in _GNOME Software._ Next, click on _Install_.
![Steam page in GNOME Software][8]
And now you have the _Steam_ flatpak installed on your system.
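If you prefer the terminal, both of the previous steps can also be done with the flatpak CLI. A sketch of the commonly used commands (the Flathub URL and the Steam application ID are the ones published on flathub.org):

```
# add the Flathub remote system-wide (if it is not there already)
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
# install the Steam flatpak from Flathub
$ flatpak install flathub com.valvesoftware.Steam
```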
### Enable Steam Play in Steam
Now that you have _Steam_ installed, launch it and log in. To play Windows games too, you need to enable _Steam Play_. To do so, choose _Steam > Settings_ from the menu in the main window.
![Settings button in Steam][9]
Navigate to the _Steam Play_ section. You should see that the option _Enable Steam Play for supported titles_ is already ticked, but it's recommended you also tick the _Enable Steam Play_ option for all other titles. There are plenty of games that are actually playable but not yet whitelisted on _Steam_. To see which games are playable, visit [ProtonDB][10] and search for your favorite game, or just look for the games with the most platinum reports.
![Steam Play settings menu on Steam][11]
If you want to know more about Steam Play, you can read the [article][12] about it here on Fedora Magazine:
> [Play Windows games on Fedora with Steam Play and Proton][12]
### Appendix
You're now ready to play plenty of games on Linux. Please remember to share your experience with others using the _Contribute_ button on [ProtonDB][10], and report any bugs you find on [GitHub][13], because sharing is nice. 🙂
* * *
_Photo by _[ _Hardik Sharma_][14]_ on _[_Unsplash_][15]_._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/
作者:[Michal Konečný][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/zlopez/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/silverblue-gaming-816x345.jpg
[2]: https://blogs.gnome.org/alexl/2019/03/06/nvidia-drivers-in-fedora-silverblue/
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-29-00.png
[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-36-35-1024x713.png
[5]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-45-12.png
[6]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-57-37.png
[7]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-08-21.png
[8]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-13-59-1024x769.png
[9]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-30-20.png
[10]: https://www.protondb.com/
[11]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-41-53.png
[12]: https://fedoramagazine.org/play-windows-games-steam-play-proton/
[13]: https://github.com/ValveSoftware/Proton
[14]: https://unsplash.com/photos/I7rXyzBNVQM?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[15]: https://unsplash.com/search/photos/video-game-laptop?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@@ -0,0 +1,50 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Contribute at the Fedora Test Day for Fedora Modularity)
[#]: via: (https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/)
[#]: author: (Sumantro Mukherjee https://fedoramagazine.org/author/sumantrom/)
Contribute at the Fedora Test Day for Fedora Modularity
======
![][1]
Modularity lets you keep the right version of an application, language runtime, or other software on your Fedora system even as the operating system is updated. You can read more about Modularity in general on the [Fedora documentation site][2].
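For a flavor of what this looks like in practice, here is a sketch using the dnf module commands (the nodejs module and its stream numbers are illustrative; available modules and streams vary by release):

```
# list the streams available for a module
$ sudo dnf module list nodejs
# enable and install a specific stream, e.g. Node.js 10
$ sudo dnf module install nodejs:10
# later, switch the same system to a different stream
$ sudo dnf module reset nodejs
$ sudo dnf module install nodejs:11
```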
The Modularity folks have been working on Modules for everyone. As a result, the Fedora Modularity and QA teams have organized a test day for **Tuesday, March 26, 2019**. Refer to the [wiki page][3] for links to the test images you'll need to participate. Read on for more information on the test day.
### How do test days work?
A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you've never contributed before, this is a perfect way to get started.
To contribute, you only need to be able to do the following things:
* Download test materials, which include some large files
* Read and follow directions step by step
The [wiki page][3] for the modularity test day has a lot of good information on what and how to test. After you've done some testing, you can log your results in the test day [web application][4]. If you're available on or around the day of the event, please do some testing and report your results.
Happy testing, and we hope to see you on test day.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/
作者:[Sumantro Mukherjee][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/sumantrom/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2015/03/test-days-945x400.png
[2]: https://docs.fedoraproject.org/en-US/modularity/
[3]: https://fedoraproject.org/wiki/Test_Day:2019-03-26_Modularity_Test_Day
[4]: http://testdays.fedorainfracloud.org/events/61

View File

@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An inside look at an IIoT-powered smart factory)
[#]: via: (https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
An inside look at an IIoT-powered smart factory
======
### Despite housing some 50 robots and 50 people, Tempo Automation's gleaming connected factory relies on industrial IoT and looks more like a high-tech startup office than a manufacturing plant.
![Tempo Automation][1]
As someone who's spent his whole career working in offices, not factories, I had very little idea what a modern “smart factory” powered by the industrial Internet of Things (IIoT) might look like. That's why I was so interested in [Tempo Automation][2]'s new 42,000-square-foot facility in San Francisco's trendy Design District.
Frankly, I pictured the company's facility, which uses IIoT to automatically configure, operate, and monitor the prototyping and low-volume production of printed circuit board assemblies (PCBAs), as a cacophony of robots and conveyor belts attended to by a grizzled band of grease-stained technicians. You know, a 21st-century update of Charlie Chaplin's 1936 classic *Modern Times*, making equipment for customers in the aerospace, medtech, industrial automation, consumer electronics, and automotive industries. (The company just inked a [new contract with Lockheed Martin][3].)
**[ Learn more about the [industrial Internet of Things][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**
Not exactly. As you can see from the below pictures, despite housing some 50 robots and 50 people, this gleaming “connected factory” looks more like a high-tech startup office, with just as many computers and a few more hard-to-identify machines, including Solder Jet and Stencil Printers, zone reflow ovens, 3D X-ray devices and many more.
![Tempo Automation office space][6]
![Tempo Automation factory floor][7]
## How Tempo Automation's 'smart factory' works
On the front end, Tempo's customers upload CAD files with their board designs and Bills of Materials (BOM) listing the required parts to be used. After performing feature extraction on the design and developing a virtual model of the finished product, the Tempo platform (called Tempocom) creates a manufacturing plan and automatically programs the factory's machines. Tempocom also creates work plans for the factory employees, uploading them to the networked IIoT mobile devices they all carry. Updated in real time based on design and process changes, this “digital traveler” tells workers where to go and what to work on next.
While Tempocom is planning and organizing the internal work of production, the system is also connected to supplier databases, seeking and ordering the parts that will be used in assembly, optimizing for speed of delivery to the Tempo factory.
## Connecting the digital thread
“There could be up to 20 robots, 400 unique parts, and 25 people working on the factory floor to produce one order start to finish in a matter of hours,” explained [Shashank Samala][8], Tempo's co-founder and vice president of product, in an email. Tempo “employs IIoT to automatically configure, operate, and monitor” the entire process, coordinated by a “connected manufacturing system” that creates an “unbroken digital thread from design intent of the engineer captured on the website, to suppliers distributed across the country, to robots and people on the factory floor.”
Rather than the machines on the floor functioning as “isolated islands of technology,” Samala added, Tempo Automation uses [Amazon Web Services (AWS) GovCloud][9] to network everything in a bi-directional feedback loop.
“After customers upload their design to the Tempo platform, our software extracts the design features and then streams relevant data down to all the devices, processes, and robots on the factory floor,” he said. “This loop then works the other way: As the robots build the products, they collect data and feedback about the design during production. This data is then streamed back through the Tempo secure cloud architecture to the customer as a Production Forensics report.”
Samala claimed the system has “streamlined operations, improved collaboration, and simplified remote management and control.”
## Traditional IoT, too
Of course, the Tempo factory isn't all fancy, cutting-edge IIoT implementations. According to Ryan Saul, vice president of manufacturing, the plant also includes an array of IoT sensors that track temperature, humidity, equipment status, job progress, reported defects, and so on to help engineers and executives understand how the facility is operating.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-floor-100791923-large.jpg
[2]: http://www.tempoautomation.com/
[3]: https://www.businesswire.com/news/home/20190325005097/en/Tempo-Automation-Announces-Contract-Lockheed-Martin
[4]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb
[5]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[6]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-2-100791921-large.jpg
[7]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-100791922-large.jpg
[8]: https://www.linkedin.com/in/shashanksamala/
[9]: https://aws.amazon.com/govcloud-us/
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Changes in SD-WAN Purchase Drivers Show Maturity of the Technology)
[#]: via: (https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all)
[#]: author: (Cliff Grossner https://www.networkworld.com/author/Cliff-Grossner/)
Changes in SD-WAN Purchase Drivers Show Maturity of the Technology
======
![istock][1]
[SD-WANs][2] have been available now for the past five years, but adoption has been light compared to that of the overall WAN market. This should be no surprise, as the technology was immature, and customers were dipping their toes in the water first as a test. Recently, however, there are signs that the market is maturing, which also happens to coincide with an acceleration of the market.
Evidence of the maturation of SD-WANs can be seen in the most recent IHS Markit _Campus LAN and WAN SDN Strategies and Leadership North American Enterprise Survey_. Exhibit 1 shows that the top drivers of SD-WAN deployments are the simplification of WAN provisioning, automation capabilities, and direct cloud connectivity—all of which require an architectural change.
This is in stark contrast to the approach of early adopters, who looked for opex reductions and capex savings, achieved by shifting to cheap broadband and low-cost branch hardware. The survey data finds that opex savings now ranks tied for fifth place among the purchase drivers of SD-WAN, and that reduced capex is last, indicating that cost savings no longer possess the same level of importance as with early adopters.
The shift in purchase drivers indicates companies are looking for SD-WAN to provide more value than legacy WAN.
With [SD-WAN][3], the “software defined” indicates that the control plane has been separated from the data plane, enabling the control plane to be abstracted away from the hardware and allowing centralized, distributed, and hybrid control architectures, working alongside the centralized management of those architectures. This provides many benefits, the biggest of which is to make WAN provisioning easier.
![Exhibit 1: Simplification and automation are top drivers for SD-WAN.][4]
With SD-WAN, most mainstream buyers now demand Zero Touch Provisioning, where the SD-WAN appliance automatically calls home when it attaches to the network and pulls its configuration down from a centralized location. Also, changes can be made through a centralized console and then immediately pushed out to every device. This can automate many of the mundane and repetitive tasks associated with running a network.
Such a setup carries many benefits—the most important being that highly skilled network engineers can dedicate more time to innovation and less time to working on tasks associated with “keeping the lights on.”
At present, most resources—time and money—associated with running the WAN are allocated to maintaining the status quo. In the cloud era, however, business leaders embracing digital transformation are looking to their IT organization to help drive innovation and leapfrog the competition. SD-WANs can modernize the network, and the technology will tip the IT resource scale back in favor of innovation.
### Mainstream buyers set new expectations for SD-WAN
With early adopters, technology innovation is key because adopters are generally tech-savvy buyers and are always looking to use the latest and greatest to gain an edge. With mainstream buyers, other concerns arise. Exhibit 2 from the IHS Markit survey shows that technological innovation now ranks tied in fourth place in what buyers look for from an SD-WAN provider. While innovation is still important, factors such as security, financial stability, and product service and reliability rank higher. And although businesses need a strong technical solution, it cannot be achieved at the expense of security, vendor stability, or quality without putting operations at risk.
It's not surprising, then, that security turned out to be the overwhelming top evaluation criterion, as SD-WANs enable businesses to implement local internet breakout and cloud on-ramp features. Overall, SD-WANs help make applications perform better, especially as enterprises deploy workloads in off-premises, cloud-service-provider-operated data centers as they build their hybrid and multi-clouds.
Another security capability of SD-WANs is their ability to easily implement segmentation, which enables businesses to establish centrally defined and globally consistent security policies that isolate traffic. For example, a retailer could isolate point-of-sale systems from its guest Wi-Fi network. [SD-WAN vendors][5] can also establish partnerships with well-known security vendors that enable the SD-WAN software to be service chained into application traffic flows, in the process allowing mainstream buyers their choice of security technology.
![Exhibit 2: SD-WAN buyers now want security and financially viable vendors.][6]
### The bottom line
The SD-WAN market is maturing, and the shift from early adopters to mainstream businesses will create a “rising tide” that will benefit all SD-WAN buyers in the WAN ecosystem. As a result, vendors will work to meet calls emphasizing greater simplicity and risk reduction, as well as bring about features that provide an integrated connectivity fabric for enterprise edge, hybrid, and multi-clouds.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all
作者:[Cliff Grossner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Cliff-Grossner/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/istock-998475736-100791932-large.jpg
[2]: https://www.silver-peak.com/sd-wan
[3]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[4]: https://images.idgesg.net/images/article/2019/03/chart-1_post-10-100791930-large.jpg
[5]: https://www.silver-peak.com/sd-wan/choosing-an-sd-wan-vendor
[6]: https://images.idgesg.net/images/article/2019/03/chart-2_post-10-100791931-large.jpg

View File

@ -0,0 +1,52 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Todays Retailer is Turning to the Edge for CX)
[#]: via: (https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all)
[#]: author: (Cindy Waxer https://www.networkworld.com/author/Cindy-Waxer/)
Todays Retailer is Turning to the Edge for CX
======
### Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the U.S. Census.
![iStock][1]
Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the [U.S. Census][2]. That's putting enormous pressure on retailers to meet new consumer expectations around real-time access to merchandise and order information. In fact, 85.3% of shoppers expect retailers to provide associates with handheld or fixed devices to check inventory and price within a store, a nearly 51% increase over 2017, according to a [survey from SOTI][3].
With an eye on transforming the customer experience of spending time in a store, retailers are investing aggressively in compute power located closer to the buyer, also known as [edge computing][4].
So what new and innovative technologies are edge environments supporting? Here's where retail is headed with customer service and how edge computing will help them get there.
**Face forward:** Facial recognition technology is on the rise in retail as brands search for new ways to engage customers. Take CaliBurger, for example. The restaurant chain recently tested out self-ordering kiosks that use AI and facial-recognition technology to identify registered customers and pull up their loyalty accounts and order preferences. By automatically displaying a customer's most popular purchases, the system aims to help patrons complete their orders in seconds flat for greater speed and convenience.
**Customer experience on display:** Forget about traditional counter displays. Savvy retailers are experimenting with high-tech, in-store digital signage solutions to attract consumers and gather valuable data. For instance, Glass Media's projection-based, end-to-end digital retail signage combines display technology, a cloud-based IoT platform, and data analytic capabilities. Through projection, the solution can influence customers at the point-of-decision.
**Backroom access:** Tracking inventory manually requires substantial human resources. IoT-powered backroom technologies such as RFID, real-time point of sale (POS), and smart shelving systems promise to change that by improving the accuracy of inventory tracking throughout the supply chain. These automated solutions can track and reorder items automatically, eliminating the need for humans to take inventory and reducing the risk of product shortages.
**Robots to the rescue:** Hoping to transform the branch experience, HSBC recently unveiled Pepper, a concierge robot whose job is to help customers with simple tasks, from answering commonly asked questions to directing them to available tellers. Pepper also acts as an online banking station where customers can log into their mobile banking account or access information about products. By putting Pepper on the payroll, HSBC hopes to reduce customer wait times and free up its “human” bankers.
These innovative technologies provide retailers with unique opportunities to enhance customer experience, develop new revenue streams, and boost customer loyalty. But many of them require edge computing to work properly. Bandwidth-intensive content and vast volumes of data can lead to latency issues, outages, and other IT headaches. Fortunately, by placing computing power and storage capabilities directly on the edge of the network, edge computing can help retailers deliver the best customer experience possible.
To find out more about how edge computing is transforming the customer experience in retail, visit [APC.com][5].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all
作者:[Cindy Waxer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Cindy-Waxer/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/istock-508154656-100791924-large.jpg
[2]: https://ycharts.com/indicators/ecommerce_sales_as_percent_retail_sales
[3]: https://www.soti.net/resources/newsroom/2019/annual-connected-retailer-survey-new-soti-survey-reveals-us-consumers-prefer-speed-and-convenience-when-shopping-with-limited-human-interaction/
[4]: https://www.hpe.com/us/en/servers/edgeline-iot-systems.html?pp=false&jumpid=ps_83cqske5um_aid-510380402&gclid=CjwKCAjw6djYBRB8EiwAoAF6oWwk-M6LWcfCbbZ331fXhEHShXGbLWoSwTIzue6mxQg4gDvYx59XZxoC_4oQAvD_BwE&gclsrc=aw.ds
[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp

View File

@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco forms VC firm looking to weaponize fledgling technology companies)
[#]: via: (https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco forms VC firm looking to weaponize fledgling technology companies
======
### Decibel, an investment firm focused on early stage funding for enterprise-product startups, will back technologies related to Cisco's core interests.
![BrianaJackson / Getty][1]
Cisco this week stepped deeper into the venture capital world by announcing Decibel, an early-stage investment firm that will focus on bringing enterprise-oriented startups to market.
Veteran VC groundbreaker and former general partner at New Enterprise Associates [Jon Sakoda][2] will lead Decibel. Sakoda had been with NEA since 2006 and focused on startup investments in software and Internet companies.
**[ Now see [7 free network tools you must have][3]. ]**
Of Decibel, Sakoda said: “We want to invest in companies that are helping our customers use innovation as a weapon in the game to transform their respective industries.”
“Decibel combines the speed, agility, and independent risk-taking traditionally found in the best VC firms, while offering differentiated access to the scale, entrepreneurial talent, and deep customer relationships found in one of the largest tech companies in the world,” [Sakoda said][4]. “This approach is an industry first and provides a unique way for entrepreneurs to get access to unparalleled resources at a time and stage when they need it most.”
“As one of the most prolific strategic venture capitalists in the world, Cisco already has a view into future technologies shaping our markets through our rich portfolio of companies,” wrote Rob Salvagno, vice president of Corporate Development and Cisco Investments, in a [blog about Decibel][5]. “But we realized we could do even more by engaging with the startup community earlier in its lifecycle.”
Indeed Cisco already has an investment arm, Cisco Investments, that focuses on later stage startups, the company says. Cisco said this arm invests $200 to $300 million annually, and it will continue its charter of investing and partnering with best-in-class companies in core and adjacent markets.
Cisco didn't talk about how much money would be involved in Decibel, but according to a [CNBC report][6], Cisco is setting up Decibel as an independent firm with a separate pool of cash, an unusual model for corporate investors. The fund hasn't closed yet, but a [Securities and Exchange Commission filing][7] from October indicated that Sakoda was setting out to [raise $500 million][8], CNBC wrote.
**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][9] ]**
Decibel does plan to invest anywhere from $5M to $15M in each startup in its portfolio, Cisco says.
“Cisco has a culture of leveraging both internal and external innovation, accelerating our rich internal development capabilities by our ability to also partner, invest and acquire,” Salvagno said.
He said the company recognizes that significant innovation happens outside of the walls of Cisco. Cisco has acquired more than 200 companies, and more than one in eight Cisco employees joined the company as a result of an acquisition. "We have a deep bench of acquired founders, many of which play leadership roles within the company today, which continues to reinforce this entrepreneurial spirit," Salvagno said.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/money_salary_magnet_flying-money_money-magnet-by-brianajackson-getty-100787974-large.jpg
[2]: https://twitter.com/jonsakoda
[3]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html
[4]: https://www.decibel.vc/the-blast/announcingdecibel
[5]: https://blogs.cisco.com/news/cisco-fuels-innovation-engine-with-investment-in-new-early-stage-vc-fund
[6]: https://www.cnbc.com/2019/03/26/cisco-introduces-decibel-an-early-stage-venture-firm-with-jon-sakoda.html
[7]: https://www.sec.gov/Archives/edgar/data/1754260/000175426018000002/xslFormDX01/primary_doc.xml
[8]: https://www.cnbc.com/2018/10/08/cisco-lead-investor-jon-sakoda-catalyst-labs-500-million.html
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HPE introduces hybrid cloud consulting business)
[#]: via: (https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
HPE introduces hybrid cloud consulting business
======
### HPE's Right Mix Advisor is designed to find a balance between on-premises and cloud systems.
![Hewlett Packard Enterprise][1]
Hybrid cloud is pretty much the de facto way to go, with only a few firms adopting a pure cloud play to replace their data center and only suicidal firms refusing to go to the cloud. But picking the right balance between on-premises and the cloud is tricky, and a mistake can be costly.
Enter Right Mix Advisor from Hewlett Packard Enterprise, a combination of consulting from HPE's Pointnext division and software tools. It draws on quite a few recent acquisitions: British cloud consultancy RedPixie, Amazon Web Services (AWS) specialists Cloud Technology Partners, and automated discovery capabilities from Irish startup iQuate.
Right Mix Advisor gathers data points from the company's entire enterprise, ranging from configuration management database systems (CMDBs), such as ServiceNow, to external sources, such as cloud providers. HPE says that in a recent customer engagement it scanned 9 million IP addresses across six data centers.
**[ Read also: [What is hybrid cloud computing][2]. | Learn [what you need to know about multi-cloud][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
HPE Pointnext consultants then work with the client's IT teams to analyze the data to determine the optimal configuration for workload placement. Pointnext has become HPE's main consulting outfit following its divestiture of EDS, which it acquired in 2008 but spun off in a merger with CSC to form DXC Consulting. Pointnext now has 25,000 consultants in 88 countries.
In a typical engagement, HPE claims it can deliver a concrete action plan within weeks, whereas previously businesses may have needed months to come to a conclusion using manual processes. HPE has found migrating the right workloads to the right mix of hybrid cloud can typically result in 40 percent total cost of ownership savings.
Although HPE has thrown its weight behind AWS, that doesn't mean it doesn't support competitors. Erik Vogel, vice president of hybrid IT for HPE Pointnext, notes in the blog post announcing Right Mix Advisor that target environments could be Microsoft Azure or Azure Stack, AWS, Google or Ali Cloud.
“New service providers are popping up every day, and we see the big public cloud providers constantly producing new services and pricing models. As a result, the calculus for determining your right mix is constantly changing. If Azure, for example, offers a new service capability or a 10 percent pricing discount and it makes sense to leverage it, you want to be able to move an application seamlessly into that new environment,” he wrote.
Key to Right Mix Advisor is app migration, and Pointnext follows the 50/30/20 rule: About 50 percent of apps are suitable for migration to the cloud; for about 30 percent, migration is not worth the effort; and the remaining 20 percent should be retired.
“With HPE Right Mix Advisor, you can identify that 50 percent,” he wrote. “Rather than hand you a laundry list of 10,000 apps to start migrating, HPE Right Mix Advisor hones in on what's most impactful right now to meet your business goals: the 10 things you can do on Monday morning that you can be confident will really help your business.”
HPE has already done some pilot projects with the Right Mix service and expects to expand it to include channel partners.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,126 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Identifying exceptional user experience (UX) in IoT platforms)
[#]: via: (https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all)
[#]: author: (Steven Hilton https://www.networkworld.com/author/Steven-Hilton/)
Identifying exceptional user experience (UX) in IoT platforms
======
### Examples of excellent IoT platform UX from the perspectives of 5 typical IoT platform personas.
![Leo Wolfert / Getty Images][1]
Enterprises are inundated with information about IoT platform features and capabilities. But to find a long-lived IoT platform that minimizes ongoing development costs, enterprises must focus on exceptional user experience (UX) for 5 types of IoT platform users.
Marketing and sales literature from IoT platform vendors is filled with information about IoT platform features. And no doubt, enterprises choosing to buy IoT platform services need to understand the actual capabilities of IoT platforms, preferably by [testing a variety of IoT platforms][2], before making a purchase decision.
However, it is a lot harder to gauge the quality of an IoT platform's UX than to itemize an IoT platform's features. Having excellent UX leads to lower platform deployment and management costs and higher customer satisfaction and retention. So enterprises should make UX one of their top criteria when selecting an IoT platform.
[RELATED: Storage tank operator turns to IoT for energy savings][3]
One of the ways to determine excellent IoT platform UX is to simulate the tasks conducted by typical IoT platform users. By completing these tasks, it becomes readily apparent when an IoT platform is exceptional or annoyingly bad.
In this blog, I describe excellent IoT platform UX from the perspectives of five typical IoT platform users or personas.
## Persona 1: platform administrator
A platform administrator's primary role is to configure, monitor, and maintain the functionality of an IoT platform. A platform administrator is typically an IT employee responsible for maintaining and configuring the various data management, device management, access control, external integration, and monitoring services that comprise an IoT platform.
Typical platform administrator tasks include
* configuration of the on-platform data visualization and data aggregation tools
* configuration of available device management functionality or execution of in-bulk device management tasks
* configuration and creation of on-platform complex event processing (CEP) workflows
* management and configuration of platform service orchestration
Enterprises should pick IoT platforms with superlative access to on-platform configuration functionality with an emphasis on declarative interfaces for configuration management. Although many platform administrators are capable of working with RESTful API endpoints, good UX design should not require that platform administrators use third-party tools to automate basic functionality or execute bulk tasks. Some programmatic interfaces, such as SQL syntax for limiting monitoring views or dashboards for setting event processing trigger criteria, are acceptable and expected, although a fully declarative solution that maintains similar functionality is preferred.
## Persona 2: platform operator
A platform operator's primary role is to leverage an IoT platform to execute common day-to-day business-centric operations and services. While the responsibilities of a platform operator will vary based on enterprise vertical and use case, all platform operators conduct business rather than IoT domain tasks.
Typical platform operator tasks include
* visualizing and aggregating on-platform data to view key business KPIs
* using device management functionality on a per-device basis
* creating, managing, and monitoring per-device and per-location event processing rules
* executing self-service administrative tasks, such as enrolling downstream operators
Enterprises should pick IoT platforms centered on excellent ease-of-use for a business user. In general, the UX should be focused on providing information immediately required for the execution of day-to-day operational tasks while removing more complex functionality. These platforms should have easy access to well-defined and well-constrained operational functions or data visualization. An effective UX should enable easy creation and modification of data views, graphs, dashboards, and other visualizations by allowing operators to select devices using a declarative rather than SQL or other programmatic interfaces.
## Persona 3: hardware and systems developer
A hardware and systems developer's primary role is the integration and configuration of IoT assets into an IoT platform. The hardware and systems developer possesses very specific, detailed knowledge about IoT hardware (e.g., specific multipoint control units, embedded platforms, or PLC/SCADA control systems), and leverages this knowledge to enable protocol and asset compatibility with northbound platform services.
Typical hardware and systems developer tasks include
* designing and implementing firmware for IoT assets based on either standardized IoT SDKs or platform-specific SDKs
* updating firmware or software packages over deployment lifecycles
* integrating manufacturer-specific protocols adapters into either IoT assets or the northbound platform
Enterprises should pick IoT platforms that allow hardware and systems developers to most efficiently design and implement low-level device and protocol functionality. An effective developer experience provides well-documented and fully-featured SDKs supporting a variety of languages and device architectures to enable integration with various types of IoT hardware.
## Persona 4: platform and backend developer
A platform and backend developer's primary role is to execute customer-specific application logic and integrations within an IoT deployment. Customer-specific logic may include on-platform or on-edge custom applications, such as those used for analytics, data aggregation and normalization, or any type of event processing workflow. In addition, a platform and backend developer is responsible for integrating the IoT platform with external databases, analytic solutions, or business systems such as MES, ERP, or CRM applications.
Typical platform and backend developer tasks include
* integrating streaming data from the IoT platform into external systems and applications
* configuring inbound and outbound platform actions and interactions with external systems
* configuring complex code-based event processing capabilities beyond the scope of a platform administrator's knowledge or ability
* debugging low-level platform functionalities that require coding to detect or resolve
Enterprises should pick excellent IoT platforms that provide access to well-documented and well-featured platform-level SDKs for application or service development. A best-in-class platform UX should provide real-time logging tools, debugging tools, and indexed and searchable access to all platform logs. Finally, a platform and backend developer is particularly dependent upon high-quality, platform-level documentation, especially for platform APIs.
## Persona 5: user interface and experience (UI/UX) developer
A UI/UX developer's primary role is to design the various operator interfaces and monitoring views for an IoT platform. In more complex IoT deployments, various operator audiences will need to be addressed, including solution domain experts such as a factory manager; role-specific experts such as an equipment operator or factory technician; and business experts such as a supply-chain analyst or company executive.
Typical UI/UX developer tasks include
* building and maintaining customer-specific dashboards and monitoring views on either the IoT platform or edge devices
* designing, implementing, and maintaining various operator consoles for a variety of operator audiences and customer-specific use cases
* ensuring good user experience for customers over the lifetime of an IoT implementation
Enterprises should pick IoT platforms that provide an exceptional variety and quality of UI/UX tools, such as dashboarding frameworks for on-platform monitoring solutions that are declaratively or programmatically customizable, as well as various widget and display blocks to help the developer rapidly implement customer-specific views. An IoT platform must also provide a UI/UX developer with appropriate debugging and logging tools for monitoring and operator console frameworks and platform APIs. Finally, a best-in-class platform should provide a sample dashboard, operator console, and on-edge monitoring implementation in order to enable the UI/UX developer to quickly become accustomed to platform paradigms and best practices.
Enterprises should make UX one of their top criteria when selecting an IoT platform. Having excellent UX allows enterprises to minimize platform deployment and management costs. At the same time, excellent UX allows enterprises to more readily launch new solutions to the market thereby increasing customer satisfaction and retention.
**This article is published as part of the IDG Contributor Network. [Want to Join?][4]**
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all
作者:[Steven Hilton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Steven-Hilton/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_by_leowolfert_gettyimages-689799380_2400x1600-100788464-large.jpg
[2]: https://www.machnation.com/2018/09/25/announcing-mit-e-2-0-hands-on-benchmarking-for-iot-cloud-edge-and-analytics-platforms/
[3]: https://www.networkworld.com/article/3169384/internet-of-things/storage-tank-operator-turns-to-iot-for-energy-savings.html#tk.nww-fsb
[4]: /contributor-network/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS)
[#]: via: (https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS
======
### This week's roundup features new tech from MIT, big news in the automotive sector and a handy new level of centralization from a smaller IoT-focused company.
![Getty Images][1]
Much of what's exciting about IoT technology has to do with getting data from a huge variety of sources into one place so it can be mined for insight, but the sensors used to gather that data are frequently legacy devices from the early days of industrial automation or cheap, lightweight, SoC-based gadgets without a lot of sophistication of their own.
Researchers at MIT have devised a system that can gather a certain slice of data from unsophisticated devices that are grouped on the same electrical circuit without adding sensors to each device.
**[ Check out our [corporate guide to addressing IoT security][2]. ]**
The technology's called non-intrusive load monitoring (NILM). It sits directly on a given building's, vehicle's, or other piece of infrastructure's electrical circuits, identifies devices based on their power usage, and sends alerts when there are irregularities.
It seems likely to make IIoT-related waves once it's out of testing and onto the market.
NILM was recently tested, said MIT's news service, on a U.S. Coast Guard cutter based in Boston, where it was attached to the outside of an electrical wire “at a single point, without requiring any cutting or splicing of wires.”
Two such connections allowed the scientists to monitor roughly 20 separate devices on an electrical circuit, and the system was able to detect an anomalous amount of energy use from a component of the ship's diesel engines known as a jacket water heater.
“[C]rewmembers were skeptical about the reading but went to check it anyway. The heaters are hidden under protective metal covers, but as soon as the cover was removed from the suspect device, smoke came pouring out, and severe corrosion and broken insulation were clearly revealed,” the MIT report stated. Two other important but slightly less critical faults were also detected by the system.
It's easy to see why NILM could easily prove to be an attractive technology for IIoT use in the future. It sounds as though it's very simple to install, can operate without any kind of Internet connection (though most implementers will probably want to connect it to a wider monitoring setup for a more holistic picture of their systems) and does all of its computational work locally. It can even be used for general energy audits. What, in short, is not to like?
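To make the alerting half of that idea concrete, here is a deliberately tiny C sketch: keep a running baseline of a device's power draw and flag samples that deviate sharply from it. This is an invented illustration, not MIT's algorithm; the device name, constants, and helpers are all hypothetical, and real NILM does far more sophisticated signature analysis just to identify devices on a shared circuit.

```
#include <math.h>
#include <stdio.h>

#define ALPHA     0.1   /* smoothing factor for the running baseline */
#define THRESHOLD 0.25  /* flag samples 25% above or below baseline  */

struct device {
    const char *name;
    double baseline;    /* exponentially weighted average, in watts */
};

/* Returns 1 and prints an alert if the new sample is anomalous. */
int check_sample(struct device *d, double watts)
{
    double deviation = fabs(watts - d->baseline) / d->baseline;
    int anomalous = deviation > THRESHOLD;

    if (anomalous)
        printf("ALERT: %s drawing %.0f W (baseline %.0f W)\n",
               d->name, watts, d->baseline);

    /* Fold the sample into the baseline either way. */
    d->baseline = (1.0 - ALPHA) * d->baseline + ALPHA * watts;
    return anomalous;
}

int main(void)
{
    struct device heater = { "jacket water heater", 1000.0 };
    double samples[] = { 990, 1010, 1005, 1600, 1580 }; /* watts */

    for (int i = 0; i < 5; i++)
        check_sample(&heater, samples[i]);
    return 0;
}
```

Run against the sample readings above, the last two jumps trip the alert, which is roughly the kind of irregularity the Coast Guard test surfaced in the water heater.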
**Volkswagen teams up with Amazon**
AWS has got a new flagship client for its growing IoT services in the form of the Volkswagen Group, which [announced][3] that AWS is going to design and build the Volkswagen Industrial Cloud, a floor-to-ceiling industrial IoT implementation aimed at improving uptime, flexibility, productivity and vehicle quality.
Real-time data from all 122 of VW's manufacturing plants around the world will be available to the system; everything from part tracking to comparative analysis of efficiency to even deeper forms of analytics will take place in the company's “data lake,” as the announcement calls it. Oh, and machine learning is part of it, too.
The German carmaker clearly believes that AWS's technology can provide a lot of help to its operations across the board, [even in the wake of a partnership with Microsoft for Azure-based cloud services announced last year.][4]
**IoT-in-a-box**
IoT can be very complicated. While individual components of any given implementation are often quite simple, each implementation usually contains a host of technologies that have to work in close concert. That means a lot of orchestration work has to go into making this stuff work.
Enter Digi International, which rolled out an IoT-in-a-box package called Digi Foundations earlier this month. The idea is to take a lot of the logistical legwork out of IoT implementations by integrating cloud-connection software and edge-computing capabilities into the company's core industrial router business. Foundations, which is packaged as a software subscription that adds these capabilities and more to the company's devices, also includes a built-in management layer, allowing for simplified configuration and monitoring.
OK, so it's not quite all-in-one, but it's still an impressive level of integration, particularly from a company that many might not have heard of before. It's also a potential bellwether for other smaller firms upping their technical sophistication in the IoT sector.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home7-100768495-large.jpg
[2]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
[3]: https://www.volkswagen-newsroom.com/en/press-releases/volkswagen-and-amazon-web-services-to-develop-industrial-cloud-4780
[4]: https://www.volkswagenag.com/en/news/2018/09/volkswagen-and-microsoft-announce-strategic-partnership.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (As memory prices plummet, PCIe is poised to overtake SATA for SSDs)
[#]: via: (https://www.networkworld.com/article/3384700/as-memory-prices-plummet-pcie-is-poised-to-overtake-sata-for-ssds.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
As memory prices plummet, PCIe is poised to overtake SATA for SSDs
======
### Taiwan vendors believe PCIe and SATA will achieve price and market share parity by year's end.
![Intel SSD DC P6400 Series][1]
A collapse in price for NAND flash memory and a shrinking gap between the prices of PCI Express-based and SATA-based [solid-state drives][2] (SSDs) means the shift to PCI Express SSDs will accelerate in 2019, with the newer, faster format replacing the old by year's end.
According to the Taiwanese tech publication DigiTimes (the stories are now archived and unavailable without a subscription), falling NAND flash prices continue to drag down SSD prices, which will drive the adoption of SSDs in enterprise and data-center applications. This, in turn, will further drive the adoption of PCIe drives, which are a superior format to SATA.
**[ Read also: [Backup vs. archive: Why it's important to know the difference][3] ]**
## SATA vs. PCI Express
SATA was introduced in 2001 as a replacement for the IDE interface, which had a much larger cable and slower interface. But SATA is a legacy HDD connection and not fast enough for NAND flash memory.
I used to review SSDs, and it was always the same when it came to benchmarking, with the drives scoring within a few milliseconds of each other regardless of the memory used. The SATA interface was the bottleneck. A SATA SSD is like a one-lane highway with no speed limit.
PCIe is several times faster and has much more parallelism, so throughput is more suited to the NAND format; a PCIe 3.0 x4 link offers roughly 32Gbps of raw bandwidth versus SATA III's 6Gbps. It comes in two physical formats: an [add-in card][4] that plugs into a PCIe slot and M.2, which is about the size of a [stick of gum][5] and sits on the motherboard. PCIe is most widely used in servers, while M.2 is in consumer devices.
There used to be a significant price difference between PCIe and SATA drives with the same capacity, but they have come into parity thanks to Moore's Law, said Jim Handy, principal analyst with Objective Analysis, who follows the memory market.
“The controller used to be a big part of the price of an SSD. But complexity has not grown with transistor count. It can have a lot of transistors, and it doesn't cost more. SATA got more complicated, but PCIe has not. PCIe is very close to the same price as SATA, and [the controller] was the only thing that justified the price difference between the two,” he said.
**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][6] ]**
DigiTimes estimates that the price drop for NAND flash chips will cause global shipments of SSDs to surge 20 to 25 percent in 2019, and PCIe SSDs are expected to emerge as a new mainstream offering by the end of 2019 with a market share of 50 percent, matching SATA SSDs.
## SSD and NAND memory prices already falling
Market sources speaking to DigiTimes said that the unit price for 512GB PCIe SSDs has fallen by 11 percent sequentially in the first quarter of 2019, while SATA SSDs have dropped 9 percent. They added that the current average unit price for 512GB SSDs is now equal to that of 256GB SSDs from one year ago, with prices continuing to drop.
According to DRAMeXchange, NAND flash contract prices will continue falling but at a slower rate in the second quarter of 2019. Memory makers are cutting production to avoid losing any more profits.
“We're in a price collapse. For over a year I've been saying the destination for NAND is 8 cents per gigabyte, and some spot markets are 6 cents. It was 30 cents a year ago. Contract pricing is around 15 cents now; it had been 25 to 27 cents last year,” said Handy.
A contract price is what it sounds like: a memory maker like Samsung or Micron signs a contract with an SSD maker like Toshiba or Kingston for X amount at Y cents per gigabyte. Spot prices are deals made at the end of a quarter (like now), where a vendor anxious to unload excess inventory holds a fire sale for a drive maker that needs supply on short notice.
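A quick back-of-envelope program shows what those quoted contract prices mean in raw NAND cost per drive. The per-gigabyte figures come straight from the quotes above; everything else in an SSD's street price, such as the controller and DRAM, is ignored here.

```
#include <stdio.h>

/* Raw NAND cost per drive at the contract prices quoted in the
 * article, in cents per gigabyte. Purely illustrative arithmetic. */

int main(void)
{
    double prices[] = { 27.0, 15.0, 8.0 };  /* last year, now, projected */
    const char *labels[] = { "last year", "current", "projected" };
    int capacities[] = { 256, 512 };

    for (int p = 0; p < 3; p++)
        for (int c = 0; c < 2; c++)
            printf("%-10s %3d GB: $%6.2f of NAND\n",
                   labels[p], capacities[c],
                   capacities[c] * prices[p] / 100.0);
    return 0;
}
```

At today's 15 cents, a 512GB drive carries about $77 worth of NAND, close to the roughly $69 that 256GB of flash cost at last year's 27 cents, which lines up with the pricing parity DigiTimes describes.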
DigiTimes's contacts aren't the only ones who foresee this. Handy was at an analyst event held by Samsung a few months back where the company presented its projection that PCIe SSDs would outsell SATA by the end of this year, and not just in the enterprise but everywhere.
**More about backup and recovery:**
* [Backup vs. archive: Why its important to know the difference][3]
* [How to pick an off-site data-backup method][7]
* [Tape vs. disk storage: Why isn't tape dead yet?][8]
* [The correct levels of backup save time, bandwidth, space][9]
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384700/as-memory-prices-plummet-pcie-is-poised-to-overtake-sata-for-ssds.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/12/intel-ssd-p4600-series1-100782098-large.jpg
[2]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
[3]: https://www.networkworld.com/article/3285652/storage/backup-vs-archive-why-its-important-to-know-the-difference.html
[4]: https://www.newegg.com/Product/Product.aspx?Item=N82E16820249107
[5]: https://www.newegg.com/Product/Product.aspx?Item=20-156-199&cm_sp=SearchSuccess-_-INFOCARD-_-m.2+-_-20-156-199-_-2&Description=m.2+
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[7]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[8]: https://www.networkworld.com/article/3315156/storage/tape-vs-disk-storage-why-isnt-tape-dead-yet.html
[9]: https://www.networkworld.com/article/3302804/storage/the-correct-levels-of-backup-save-time-bandwidth-space.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,133 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Can Better Task Stealing Make Linux Faster?)
[#]: via: (https://www.linux.com/blog/can-better-task-stealing-make-linux-faster)
[#]: author: (Oracle )
Can Better Task Stealing Make Linux Faster?
======
_Oracle Linux kernel developer Steve Sistare contributes this discussion on kernel scheduler improvements._
### Load balancing via scalable task stealing
The Linux task scheduler balances load across a system by pushing waking tasks to idle CPUs, and by pulling tasks from busy CPUs when a CPU becomes idle. Efficient scaling is a challenge on both the push and pull sides on large systems. For pulls, the scheduler searches all CPUs in successively larger domains until an overloaded CPU is found, and pulls a task from the busiest group. This is very expensive, costing tens to hundreds of microseconds on large systems, so search time is limited by the average idle time, and some domains are not searched. Balance is not always achieved, and idle CPUs go unused.
I have implemented an alternate mechanism that is invoked after the existing search in idle_balance() limits itself and finds nothing. I maintain a bitmap of overloaded CPUs, where a CPU sets its bit when its runnable CFS task count exceeds 1. The bitmap is sparse, with a limited number of significant bits per cacheline. This reduces cache contention when many threads concurrently set, clear, and visit elements. There is a bitmap per last-level cache. When a CPU becomes idle, it searches the bitmap to find the first overloaded CPU with a migratable task, and steals it. This simple stealing yields a higher CPU utilization than idle_balance() alone, because the search is cheap, costing 1 to 2 microseconds, so it may be called every time the CPU is about to go idle. Stealing does not offload the globally busiest queue, but it is much better than running nothing at all.
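As a rough illustration of that search-and-steal loop, here is a simplified user-space C sketch. It is not the kernel patch: the real code lives in the scheduler, spreads the bitmap's significant bits across cachelines, keeps one bitmap per last-level cache, and checks migratability under the proper locks. All names and helpers below are invented for the demo.

```
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8                           /* small for the demo */

static int runnable[NR_CPUS];               /* fake per-CPU run queues */
static _Atomic unsigned long overloaded;    /* one bit per busy CPU */

/* A CPU publishes its state whenever its runnable count changes. */
static void update_overload(int cpu)
{
    if (runnable[cpu] > 1)
        atomic_fetch_or(&overloaded, 1UL << cpu);
    else
        atomic_fetch_and(&overloaded, ~(1UL << cpu));
}

/* Stand-in for pulling one migratable task from src to dst. */
static bool try_steal_task(int src, int dst)
{
    if (runnable[src] <= 1)
        return false;
    runnable[src]--; runnable[dst]++;
    update_overload(src); update_overload(dst);
    printf("cpu%d stole a task from cpu%d\n", dst, src);
    return true;
}

/* Called when 'cpu' is about to go idle: scan the bitmap for the
 * first overloaded CPU and try to pull one task from it. The scan
 * is a cheap word walk, so it can run on every idle transition. */
static bool steal_one_task(int cpu)
{
    unsigned long bits = atomic_load(&overloaded) & ~(1UL << cpu);

    while (bits) {
        int src = __builtin_ctzl(bits);     /* lowest set bit */
        if (try_steal_task(src, cpu))
            return true;
        bits &= bits - 1;                   /* clear that bit, keep looking */
    }
    return false;
}

int main(void)
{
    runnable[3] = 4;                        /* cpu3 is overloaded */
    update_overload(3);
    steal_one_task(0);                      /* cpu0 goes idle and steals */
    return 0;
}
```

Compiled with gcc, the demo prints that cpu0 stole a task from cpu3; in the kernel, the same kind of scan runs each time a CPU is about to go idle, after idle_balance() gives up.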
### Results
Stealing improves utilization with only a modest CPU overhead in scheduler code. In the following experiment, hackbench is run with varying numbers of groups (40 tasks per group), and the delta in /proc/schedstat is shown for each run, averaged per CPU, augmented with these non-standard stats:
* %find - percent of time spent in old and new functions that search for idle CPUs and tasks to steal and set the overloaded CPUs bitmap.
* steal - number of times a task is stolen from another CPU.

Elapsed time improves by 8 to 36%, costing at most 0.4% more find time.
![load balancing][1]
[Used with permission][2]
CPU busy utilization is close to 100% for the new kernel, as shown by the green curve in the following graph, versus the orange curve for the baseline kernel:
![][3]
Stealing improves Oracle database OLTP performance by up to 9% depending on load, and we have seen some nice improvements for mysql, pgsql, gcc, java, and networking. In general, stealing is most helpful for workloads with a high context switch rate.
### The code
As of this writing, this work is not yet upstream, but the latest patch series is at [https://lkml.org/lkml/2018/12/6/1253][4]. If your kernel is built with CONFIG_SCHED_DEBUG=y, you can verify that it contains the stealing optimization using:
```
# grep -q STEAL /sys/kernel/debug/sched_features && echo Yes
Yes
```
If you try it, note that stealing is disabled for systems with more than 2 NUMA nodes, because hackbench regresses on such systems, as I explain in [https://lkml.org/lkml/2018/12/6/1250][5]. However, I suspect this effect is specific to hackbench and that stealing will help other workloads on many-node systems. To try it, reboot with the kernel parameter sched_steal_node_limit=8 (or larger).
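For example, on distributions that ship the grubby tool, adding the parameter might look like this (a hypothetical sequence; the exact boot-loader steps vary by system):
```
# Append the parameter to the command line of all installed kernels,
# then reboot for it to take effect.
grubby --update-kernel=ALL --args="sched_steal_node_limit=8"
reboot
```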
### Future work
After the basic stealing algorithm is pushed upstream, I am considering the following enhancements:
* If stealing within the last-level cache does not find a candidate, steal across LLC's and NUMA nodes.
* Maintain a sparse bitmap to identify stealing candidates in the RT scheduling class. Currently pull_rt_task() searches all run queues.
* Remove the core and socket levels from idle_balance(), as stealing handles those levels. Remove idle_balance() entirely when stealing across LLC is supported.
* Maintain a bitmap to identify idle cores and idle CPUs, for push balancing.
_This article originally appeared at [Oracle Developers Blog][6]._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/can-better-task-stealing-make-linux-faster
作者:[Oracle][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-load-balancing.png?itok=2Uk1yALt (load balancing)
[2]: /LICENSES/CATEGORY/USED-PERMISSION
[3]: https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b7a700fe-edc3-4ea0-876a-c91e1850b59b/Image/00c074f4282bcbaf0c10dd153c5dfa76/steal_graph.png
[4]: https://lkml.org/lkml/2018/12/6/1253
[5]: https://lkml.org/lkml/2018/12/6/1250
[6]: https://blogs.oracle.com/linux/can-better-task-stealing-make-linux-faster


@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco warns of two security patches that dont work, issues 17 new ones for IOS flaws)
[#]: via: (https://www.networkworld.com/article/3384742/cisco-warns-of-two-security-patches-that-dont-work-issues-17-new-ones-for-ios-flaws.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco warns of two security patches that don't work, issues 17 new ones for IOS flaws
======
### Cisco is issuing 17 new fixes for security problems with IOS and IOS/XE software that runs most of its routers and switches, while it has no patch yet to replace flawed patches for its RV320 and RV325 routers.
![Marisa9 / Getty][1]
Cisco has dropped [17 Security advisories describing 19 vulnerabilities][2] in the software that runs most of its routers and switches, IOS and IOS/XE.
The company also announced that two previously issued patches for its RV320 and RV325 Dual Gigabit WAN VPN Routers were “incomplete” and would need to be redone and reissued.
**[ Also see [What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
Cisco rates both those router vulnerabilities as “High” and describes the problems like this:
* [One vulnerability][5] is due to improper validation of user-supplied input. An attacker could exploit this vulnerability by sending malicious HTTP POST requests to the web-based management interface of an affected device. A successful exploit could allow the attacker to execute arbitrary commands on the underlying Linux shell as _root_.
* The [second exposure][6] is due to improper access controls for URLs. An attacker could exploit this vulnerability by connecting to an affected device via HTTP or HTTPS and requesting specific URLs. A successful exploit could allow the attacker to download the router configuration or detailed diagnostic information.
Cisco said firmware updates that address these vulnerabilities are not available and no workarounds exist, but it is working on a complete fix for both.
On the IOS front, the company said six of the vulnerabilities affect both Cisco IOS Software and Cisco IOS XE Software, one of the vulnerabilities affects just Cisco IOS software and ten of the vulnerabilities affect just Cisco IOS XE software. Some of the security bugs, which are all rated as “High”, include:
* [A vulnerability][7] in the web UI of Cisco IOS XE Software could let an unauthenticated, remote attacker access sensitive configuration information.
* [A vulnerability][8] in Cisco IOS XE Software could let an authenticated, local attacker inject arbitrary commands that are executed with elevated privileges. The vulnerability is due to insufficient input validation of commands supplied by the user. An attacker could exploit this vulnerability by authenticating to a device and submitting crafted input to the affected commands.
* [A weakness][9] in the ingress traffic validation of Cisco IOS XE Software for Cisco Aggregation Services Router (ASR) 900 Route Switch Processor 3 could let an unauthenticated, adjacent attacker trigger a reload of an affected device, resulting in a denial of service (DoS) condition, Cisco said. The vulnerability exists because the software insufficiently validates ingress traffic on the ASIC used on the RSP3 platform. An attacker could exploit this vulnerability by sending a malformed OSPF version 2 message to an affected device.
* A problem in the [authorization subsystem][10] of Cisco IOS XE Software could allow an authenticated but unprivileged (level 1), remote attacker to run privileged Cisco IOS commands by using the web UI. The vulnerability is due to improper validation of user privileges of web UI users. An attacker could exploit this vulnerability by submitting a malicious payload to a specific endpoint in the web UI, Cisco said.
* A vulnerability in the [Cluster Management Protocol][11] (CMP) processing code in Cisco IOS Software and Cisco IOS XE Software could allow an unauthenticated, adjacent attacker to trigger a DoS condition on an affected device. The vulnerability is due to insufficient input validation when processing CMP management packets, Cisco said.
Cisco has released free software updates that address the vulnerabilities described in these advisories and [directs users to their software agreements][12] to find out how they can download the fixes.
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384742/cisco-warns-of-two-security-patches-that-dont-work-issues-17-new-ones-for-ios-flaws.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/woman-with-hands-over-face_mistake_oops_embarrassed_shy-by-marisa9-getty-100787990-large.jpg
[2]: https://tools.cisco.com/security/center/viewErp.x?alertId=ERP-71135
[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190123-rv-inject
[6]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190123-rv-info
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-xeid
[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-xecmd
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-rsp3-ospf
[10]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-iosxe-privesc
[11]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-cmp-dos
[12]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world


@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment)
[#]: via: (https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment
======
### Senator and presidential candidate Elizabeth Warren suggests national legislation focused on farm equipment. But that's only a first step. The data collected by that equipment must also be considered.
![Thinkstock][1]
There's a surprising battle being fought on America's farms, between farmers and the companies that sell them tractors, combines, and other farm equipment. The outcome of that war could have far-reaching implications for the internet of things (IoT) — and now Massachusetts senator and Democratic presidential candidate Elizabeth Warren has weighed in with a proposal that could shift the balance of power in this largely under-the-radar struggle.
## Right to repair farm equipment
Here's the story: As part of a new plan to support family farms, Warren came out in support of a national right-to-repair law for farm equipment. That might not sound like a big deal, but it raises the stakes in a long-simmering fight between farmers and equipment makers over who really controls access to the equipment — and to the increasingly critical data gathered by the IoT capabilities built into it.
**[ Also read: [Right-to-repair smartphone ruling loosens restrictions on industrial, farm IoT][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
[Warren's proposal reportedly][4] calls for making all diagnostics tools and manuals freely available to the equipment owners, as well as independent repair shops — not just vendors and their authorized agents — and focuses solely on farm equipment.
That's a great start, and kudos to Warren for being by far the most prominent politician to weigh in on the issue.
## Part of a much bigger IoT data issue
But Warren's proposal merely scratches the surface of the much larger issue of who actually controls the equipment and devices that consumers and businesses buy. Even more important, it doesn't address the critical data gathered by IoT sensors in everything ranging from smartphones, wearables, and smart-home devices to private and commercial vehicles and aircraft to industrial equipment.
And as many farmers can tell you, this isn't some academic argument. That data has real value — not to mention privacy implications. For farmers, it's GPS-equipped smart sensors tracking everything — from temperature to moisture to soil acidity — that can determine the most efficient times to plant and harvest crops. For consumers, it might be data that affects their home or auto insurance rates, or even divorce cases. For manufacturers, it might cover everything from which equipment needs maintenance to potential issues with raw materials or finished products.
The solution is simple: IoT users need consistent regulations that ensure free access to what is really their own data, and give them the option to share that data with the equipment vendors — if they so choose and on their own terms.
At the very least, users need clear statements of the rules, so they know exactly what they're getting — and not getting — when they buy IoT-enhanced devices and equipment. And if they're being honest, most equipment vendors would likely admit that clear rules would benefit them as well by creating a level playing field, reducing potential liabilities and helping to avoid making customers unhappy.
Sen. Warren made headlines earlier this month by proposing to ["break up" tech giants][5] such as Amazon, Apple, and Facebook. If she really wants to help technology buyers, prioritizing the right-to-repair and the associated right to own your own data seems like a more effective approach.
**[ Now read this: [Big trouble down on the IoT farm][6] ]**
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/03/ai_agriculture_primary-100715481-large.jpg
[2]: https://www.networkworld.com/article/3317696/the-recent-right-to-repair-smartphone-ruling-will-also-affect-farm-and-industrial-equipment.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://appleinsider.com/articles/19/03/27/presidential-candidate-elizabeth-warren-focusing-right-to-repair-on-farmers-not-tech
[5]: https://www.nytimes.com/2019/03/08/us/politics/elizabeth-warren-amazon.html
[6]: https://www.networkworld.com/article/3262631/big-trouble-down-on-the-iot-farm.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Microsoft introduces Azure Stack for HCI)
[#]: via: (https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Microsoft introduces Azure Stack for HCI
======
### Azure Stack is great for your existing hardware, so Microsoft is covering the bases with a turnkey solution.
![Thinkstock/Microsoft][1]
Microsoft has introduced Azure Stack HCI Solutions, a new implementation of its on-premises Azure product specifically for [Hyper Converged Infrastructure][2] (HCI) hardware.
[Azure Stack][3] is an on-premises version of its Azure cloud service. It gives companies a chance to migrate to an Azure environment within the confines of their own enterprise rather than onto Microsoft's data centers. Once you have migrated your apps and infrastructure to Azure Stack, moving between your systems and Microsoft's cloud service is easy.
HCI is the latest trend in server hardware. It uses scale-out hardware systems and a full software-defined platform to handle [virtualization][4] and management. It's designed to reduce the complexity of a deployment and ongoing management, since everything ships fully integrated, hardware and software.
**[ Read also: [12 most powerful hyperconverged infrastructure vendors][5] | Get regularly scheduled insights: [Sign up for Network World newsletters][6] ]**
It makes sense for Microsoft to take this step. Azure Stack was ideal for an existing enterprise. Now you can deploy a whole new hardware configuration setup to run Azure in-house, complete with Hyper-V-based software-defined compute, storage, and networking.
The Windows Admin Center is the main management tool for Azure Stack HCI. It connects to other Azure tools, such as Azure Monitor, Azure Security Center, Azure Update Management, Azure Network Adapter, and Azure Site Recovery.
“We are bringing our existing HCI technology into the Azure Stack family for customers to run virtualized applications on-premises with direct access to Azure management services such as backup and disaster recovery,” wrote Julia White, corporate vice president of Microsoft Azure, in a [blog post announcing Azure Stack HCI][7].
It's not so much a new product launch as a rebranding. When Microsoft launched Server 2016, it introduced a version called Windows Server Software-Defined Data Center (SDDC), which was built on the Hyper-V hypervisor, and the company says as much in a [FAQ][8] that is part of the announcement.
"Azure Stack HCI is the evolution of Windows Server Software-Defined (WSSD) solutions previously available from our hardware partners. We brought it into the Azure Stack family because we have started to offer new options to connect seamlessly with Azure for infrastructure management services,” the company said.
Microsoft introduced Azure Stack in 2017, but it was not the first to offer an on-premises cloud option. That distinction goes to [OpenStack][9], a joint project between Rackspace and NASA built on open-source code. Amazon followed with its own product, called [Outposts][10].
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/08/5_microsoft-azure-100733132-large.jpg
[2]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence.html
[3]: https://www.networkworld.com/article/3207748/microsoft-introduces-azure-stack-its-answer-to-openstack.html
[4]: https://www.networkworld.com/article/3234795/what-is-virtualization-definition-virtual-machine-hypervisor.html
[5]: https://www.networkworld.com/article/3112622/hardware/12-most-powerful-hyperconverged-infrastructure-vendors.htmll
[6]: https://www.networkworld.com/newsletters/signup.html
[7]: https://azure.microsoft.com/en-us/blog/enabling-customers-hybrid-strategy-with-new-microsoft-innovation/
[8]: https://azure.microsoft.com/en-us/blog/announcing-azure-stack-hci-a-new-member-of-the-azure-stack-family/
[9]: https://www.openstack.org/
[10]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world


@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Motorola taps freed-up wireless spectrum for enterprise LTE networks)
[#]: via: (https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Motorola taps freed-up wireless spectrum for enterprise LTE networks
======
### Citizens Broadband Radio Service (CBRS) is developing. Out of the gate, Motorola is creating a land mobile radio (LMR) system that includes enterprise-level, voice handheld devices and fast, private data networks.
![Jiraroj Praditcharoenkul / Getty Images][1]
In a move that could upend how workers access data in the enterprise, Motorola has announced a broadband product that it says will deliver data at double the capacity and four times the range of Wi-Fi for end users. The handheld, walkie-talkie-like device, called Mototrbo Nitro, will, importantly, also include a voice channel. “Business-critical voice with private broadband data,” as [Motorola describes it on its website][2].
The company sees the product being implemented in traditional, moving-around, voice communications environments, such as factories and warehouses, that increasingly need data supplementation, too. A shop floor that has an electronically delivered repair manual, with included video demonstration, could be one example. Video could be two-way, even.
**[ Also read: [Wi-Fi 6 is coming to a router near you][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
The product takes advantage of upcoming Citizens Broadband Radio Service (CBRS) spectrum. That's a swath of radio bandwidth that's being released by the Federal Communications Commission (FCC) in the 3.5GHz band. It's a frequency chunk that is also expected to be used heavily for 5G. In this case, though, Motorola is creating a private LTE network for the enterprise.
The CBRS band is the first time broadband spectrum has been made publicly available, [Motorola explains in a white paper][5] (pdf) — organizations don't have to buy licenses, yet they can get access to useful spectrum. The FCC has proposed [a tiered sharing system, where auction winners will get priority access licenses but others will have some access too][6]. The non-prioritized open access could be used by any enterprise for whatever — internet of things (IoT) or private networks.
## Motorola's pitch for using a private broadband network
Why a private broadband network and not simply cell phones? One giveaway line is in Motorola's promotional video: “Without sacrificing control,” it says. What it means is that the firm thinks there's a market for companies who want to run entire business communications systems — data and voice — without involvement from possibly nosy Mobile Network Operator phone companies. [I've written before about how control over security is prompting large industrials to explore private networks][7] more. Motorola manages the network in this case, though, for the enterprise.
Motorola also refers to potentially limited or intermittent onsite coverage and congestion on public, commercial, single-platform voice and data networks. That's particularly the case in factories, [Motorola says in an ebook][8]. It claims that heavy machinery containing radio-unfriendly metal can hinder Wi-Fi and cellular, and that traditional Land Mobile Radios (LMRs), such as walkie-talkies and vehicle-mounted mobile radios, don't handle data natively. In particular, it says that if you want to get into artificial intelligence (AI) and analytics, you need a more evolved voice and fast data communications setup.
## Industrial IoT uses for Motorola's Nitro network
Industrial IoT will be another beneficiary, Motorola says. It says its CBRS Nitro network could include instant notifications of equipment failures that traditional products can't provide. It also suggests merging fixed security cameras with “photos and videos of broken machines and sending real-time video to an expert.”
**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][9] ]**
Motorola also suggests that by separating consumer Wi-Fi (as is offered in hospitality and transport verticals, for example) from business-critical systems, one reduces traffic congestion risks.
The highly complicated CBRS band-sharing system is still not through its government testing. “However, we could deploy customer systems under an experimental license,” a Motorola representative told me.
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_robotic_arm_gear_engineer_tablet_by_jiraroj_praditcharoenkul_gettyimages-1091790364_2400x1600-100788459-large.jpg
[2]: https://www.motorolasolutions.com/en_us/products/two-way-radios/mototrbo/nitro.html
[3]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.motorolasolutions.com/content/dam/msi/docs/products/mototrbo/nitro/cbrs-white-paper.pdf
[6]: https://www.networkworld.com/article/3300339/private-lte-using-new-spectrum-approaching-market-readiness.html
[7]: https://www.networkworld.com/article/3319176/private-5g-networks-are-coming.html
[8]: https://img04.en25.com/Web/MotorolaSolutionsInc/%7B293ce809-fde0-4619-8507-2b42076215c3%7D_radio_evolution_eBook_Nitro_03.13.19_MS_V3.pdf?elqTrackId=850d56c6d53f4013afa2290a66d6251f&elqaid=2025&elqat=2
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world


@ -0,0 +1,48 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Robots in Retail are Real… and so is Edge Computing)
[#]: via: (https://www.networkworld.com/article/3385046/robots-in-retail-are-real-and-so-is-edge-computing.html#tk.rss_all)
[#]: author: (Wendy Torell https://www.networkworld.com/author/Wendy-Torell/)
Robots in Retail are Real… and so is Edge Computing
======
### I've seen plenty of articles touting the promise of edge computing technologies like AI and robotics in retail brick & mortar, but it wasn't until this past weekend that I had my first encounter with an actual robot in a retail store.
![Getty][1]
I've seen plenty of articles touting the promise of [edge computing][2] technologies like AI and robotics in retail brick & mortar, but it wasn't until this past weekend that I had my first encounter with an actual robot in a retail store. I was doing my usual weekly grocery shopping at my local Stop & Shop, and who comes strolling down the aisle but… Marty… the autonomous robot. He was friendly looking with his big googly eyes and was wearing a sign that explained he was there for safety, and that he was monitoring the aisles to report spills, debris, and other hazards to employees to improve my shopping experience. He caught the attention of most of the shoppers.
At the National Retail Federation conference in NY that I attended in January, this was a topic of one of the [panel sessions][3]. It all makes sense… a positive customer experience is critical to retail success. But employee-to-customer (human-to-human) interaction has also been proven important. That's where Marty comes in… to free up resources spent on tedious, time-consuming tasks so that personnel can spend more time directly helping customers.
**Use cases for robots in stores**
Robotics have been utilized by retailers on manufacturing floors and in distribution warehouses to improve productivity and optimize business processes along the supply chain. But it is only more recently that we're seeing them make their way into the retail storefront, where they are in contact with the customers. Alerting to hazards in the aisles is just one of many use cases for the robots. They can also be used to scan and re-stock shelves, or as general information sources and greeters upon entering the store to guide your shopping experience. But how does a retailer justify the investment in this type of technology? Determining your ROI isn't as cut and dried as in a warehouse environment, for example, where costs are directly tied to the number of staff, time to complete tasks, etc. I guess time will tell for the retailers that are giving it a go.
**What does it mean for the IT equipment on-premise ([micro data center][4])**
Robotics are one of the many ways retail stores are being digitized. Video analytics is another big one, being used to analyze facial expressions for customer satisfaction, obtain customer demographics as input to product development, or ensure queue lines don't get too long. My colleague, Patrick Donovan, wrote a detailed [blog post][5] about our trip to NRF and the impact on the physical infrastructure in the stores. In a nutshell, the equipment on-premise is becoming more mission critical, more integrated with business applications in the cloud, more tied to positive customer experiences… and with that comes the need for a more secure, more available, more manageable edge. But this is easier said than done in an environment that generally has no IT staff on-premise, and with hundreds or potentially thousands of stores spread out geographically. So how do we address this?
We answer this question in a white paper that Patrick and I are currently writing, titled “An Integrated Ecosystem to Solve Edge Computing Infrastructure Challenges”. Here's a hint: (1) an integrated ecosystem of partners, and (2) an integrated micro data center that emerges from the ecosystem. I'll be sure to comment on this blog with the link when the white paper becomes publicly available! In the meantime, explore our [edge computing][2] landing page to learn more.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385046/robots-in-retail-are-real-and-so-is-edge-computing.html#tk.rss_all
作者:[Wendy Torell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Wendy-Torell/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/gettyimages-828488368-1060x445-100792228-large.jpg
[2]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
[3]: https://stores.org/2019/01/15/why-is-there-a-robot-in-my-store/
[4]: https://www.apc.com/us/en/solutions/business-solutions/micro-data-centers.jsp
[5]: https://blog.apc.com/2019/02/06/4-thoughts-edge-computing-infrastructure-retail-sector/


@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to manage your Linux environment)
[#]: via: (https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to manage your Linux environment
======
### Linux user environments help you find the command you need and get a lot done without needing details about how the system is configured. Where the settings come from and how they can be modified is another matter.
![IIP Photo Archive \(CC BY 2.0\)][1]
The configuration of your user account on a Linux system simplifies your use of the system in a multitude of ways. You can run commands without knowing where they're located. You can reuse previously run commands without worrying how the system is keeping track of them. You can look at your email, view man pages, and get back to your home directory easily no matter where you might have wandered off to in the file system. And, when needed, you can tweak your account settings so that it works even more to your liking.
Linux environment settings come from a series of files — some are system-wide (meaning they affect all user accounts) and some are configured in files that are sitting in your home directory. The system-wide settings take effect when you log in and local ones take effect right afterwards, so the changes that you make in your account will override system-wide settings. For bash users, these files include these system files:
```
/etc/environment
/etc/bash.bashrc
/etc/profile
```
And some of these local files:
```
~/.bashrc
~/.profile -- not read if ~/.bash_profile or ~/.bash_login exists
~/.bash_profile
~/.bash_login
```
You can modify any of the four local files that exist, since they sit in your home directory and belong to you.
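A quick way to check which of the local files exist on your system (a sketch; the output shown here is illustrative and will vary):
```
$ ls -l ~/.bashrc ~/.profile ~/.bash_profile ~/.bash_login 2>/dev/null
-rw-r--r-- 1 shs shs 3771 Mar 14 09:02 /home/shs/.bashrc
-rw-r--r-- 1 shs shs  807 Mar 14 09:02 /home/shs/.profile
```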
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
### Viewing your Linux environment settings
To view your environment settings, use the **env** command. Your output will likely look similar to this:
```
$ env
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;
01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:
*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:
*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:
*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;
31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:
*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:
*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:
*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:
*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:
*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:
*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:
*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:
*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:
*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:
*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:
*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:
*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.spf=00;36:
SSH_CONNECTION=192.168.0.21 34975 192.168.0.11 22
LESSCLOSE=/usr/bin/lesspipe %s %s
LANG=en_US.UTF-8
OLDPWD=/home/shs
XDG_SESSION_ID=2253
USER=shs
PWD=/home/shs
HOME=/home/shs
SSH_CLIENT=192.168.0.21 34975 22
XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop
SSH_TTY=/dev/pts/0
MAIL=/var/mail/shs
TERM=xterm
SHELL=/bin/bash
SHLVL=1
LOGNAME=shs
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
XDG_RUNTIME_DIR=/run/user/1000
PATH=/home/shs/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
LESSOPEN=| /usr/bin/lesspipe %s
_=/usr/bin/env
```
While you're likely to get a _lot_ of output, the first big section shown above deals with the colors that are used on the command line to identify various file types. When you see something like **\*.tar=01;31:**, this tells you that tar files will be displayed in a file listing in red, while **\*.jpg=01;35:** tells you that jpg files will show up in purple. These colors are meant to make it easy to pick out certain files from a file listing. You can learn more about how these colors are defined and how to customize them at [Customizing your colors on the Linux command line][3].
One easy way to turn colors off when you prefer a simpler display is to use a command such as this one:
```
$ ls -l --color=never
```
That command could easily be turned into an alias:
```
$ alias ll2='ls -l --color=never'
```
You can also display individual settings using the **echo** command. In this command, we display the number of commands that will be remembered in our history buffer:
```
$ echo $HISTSIZE
1000
```
Your current location in the file system is remembered, and so is your last location if you've moved:
```
PWD=/home/shs
OLDPWD=/tmp
```
### Making changes
You can make changes to environment settings with a command like the one below, but add a line such as "HISTSIZE=1234" to your ~/.bashrc file if you want to retain the setting:
```
$ export HISTSIZE=1234
```
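A minimal sketch of persisting the setting (the exact line and target file are up to you):
```
$ echo 'export HISTSIZE=1234' >> ~/.bashrc
$ source ~/.bashrc
$ echo $HISTSIZE
1234
```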
### What it means to "export" a variable
Exporting a variable makes the setting available to your shell and possible subshells. By default, user-defined variables are local and are not exported to new processes such as subshells and scripts. The export command makes variables available to child processes.
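A quick demonstration of the difference, using hypothetical variable names (the unexported variable shows up empty in the child shell):
```
$ LOCAL_VAR="not exported"
$ export SHARED_VAR="exported"
$ bash -c 'echo "LOCAL_VAR=[$LOCAL_VAR] SHARED_VAR=[$SHARED_VAR]"'
LOCAL_VAR=[] SHARED_VAR=[exported]
```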
### Adding and removing variables
You can create new variables and make them available on the command line and in subshells quite easily. However, these variables will not survive your logging out and then back in again unless you also add them to ~/.bashrc or a similar file.
```
$ export MSG="Hello, World!"
```
You can unset a variable if you need to by using the **unset** command:
```
$ unset MSG
```
If the variable is defined in one of your startup files, you can easily set it back up by sourcing that file. For example:
```
$ echo $MSG
Hello, World!
$ unset MSG
$ echo $MSG
$ . ~/.bashrc
$ echo $MSG
Hello, World!
```
### Wrap-up
User accounts are set up with an appropriate set of startup files for creating a useful user environment, but both individual users and sysadmins can change the default settings by editing their personal setup files (users) or the files from which many of the settings originate (sysadmins).
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html#tk.rss_all
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/environment-rocks-leaves-100792229-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.networkworld.com/article/3269587/customizing-your-text-colors-on-the-linux-command-line.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world


@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Russia demands access to VPN providers servers)
[#]: via: (https://www.networkworld.com/article/3385050/russia-demands-access-to-vpn-providers-servers.html#tk.rss_all)
[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)
Russia demands access to VPN providers' servers
======
### 10 VPN service providers have been ordered to link their servers in Russia to the state censorship agency by April 26
![Getty Images][1]
The Russian censorship agency Roskomnadzor has ordered 10 [VPN][2] service providers to link their servers in Russia to its network in order to stop users within the country from reaching banned sites.
If they fail to comply, their services will be blocked, according to a machine translation of the order.
[RELATED: Best VPN routers for small business][3]
The 10 VPN providers are ExpressVPN, HideMyAss!, Hola VPN, IPVanish, Kaspersky Secure Connection, KeepSolid, NordVPN, OpenVPN, TorGuard, and VyprVPN.
In response, at least five of the 10 (ExpressVPN, IPVanish, KeepSolid, NordVPN, and TorGuard) say they are tearing down their servers in Russia but will continue to offer their services to Russian customers who can reach the providers' servers located outside of Russia. A sixth provider, Kaspersky Labs, which is based in Moscow, says it will comply with the order. The other four could not be reached for this article.
IPVanish characterized the order as another phase of “Russia's censorship agenda” dating back to 2017 when the government enacted a law forbidding the use of VPNs to access blocked Web sites.
“Up until recently, however, they had done little to enforce such rules,” IPVanish [says in its blog][4]. “These new demands mark a significant escalation.”
The reactions of those not complying are similar. TorGuard says it has taken steps to remove all its physical servers from Russia. It is also cutting off its business with data centers in the region.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]**
“We would like to be clear that this removal of servers was a voluntary decision by TorGuard management and no equipment seizure occurred,” [TorGuard says in its blog][6]. “We do not store any logs so even if servers were compromised it would be impossible for customers data to be exposed.”
TorGuard says it is deploying more servers in adjacent countries to protect fast download speeds for customers in the region.
IPVanish says it has faced similar demands from Russia before and responded similarly. In 2016, a new Russian law required online service providers to store customers' private data for a year. “In response, [we removed all physical server presence in Russia][7], while still offering Russians encrypted connections via servers outside of Russian borders,” the company says. “That decision was made in accordance with our strict zero-logs policy.”
KeepSolid says it had no servers in Russia, but it will not comply with the order to link with Roskomnadzor's network. KeepSolid says it will [draw on its experience dealing with the Great Firewall of China][8] to fight the Russian censorship attempt. "Our team developed a special [KeepSolid Wise protocol][9] which is designed for use in countries where the use of VPN is blocked," a spokesperson for the company said in an email statement.
NordVPN says it's shutting down all its Russian servers, and all of them will be shredded as of April 1. [The company says in a blog][10] that some of its customers who connected to its Russian servers without using the NordVPN application will have to reconfigure their devices to ensure their security. Those customers using the app won't have to do anything differently because the option to connect to Russia via the app has been removed.
ExpressVPN is also not complying with the order. "As a matter of principle, ExpressVPN will never cooperate with efforts to censor the internet by any country," said the company's vice president Harold Li in an email, but he said that blocking traffic will be ineffective. "We expect that Russian internet users will still be able to find means of accessing the sites and services they want, albeit perhaps with some additional effort."
Kaspersky Labs says it will comply with the Russian order and responded to emailed questions about its reaction with this written response:
“Kaspersky Lab is aware of the new requirements from Russian regulators for VPN providers operating in the country. These requirements oblige VPN providers to restrict access to a number of websites that were listed and prohibited by the Russian Government in the country's territory. As a responsible company, Kaspersky Lab complies with the laws of all the countries where it operates, including Russia. At the same time, the new requirements don't affect the main purpose of Kaspersky Secure Connection, which protects user privacy and ensures confidentiality and protection against data interception, for example, when using open Wi-Fi networks, making online payments at cafes, airports or hotels. Additionally, the new requirements are relevant to VPN use only in Russian territory and do not concern users in other countries.”
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385050/russia-demands-access-to-vpn-providers-servers.html#tk.rss_all
作者:[Tim Greene][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Tim-Greene/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/ipsecurity-protocols-network-security-vpn-100775457-large.jpg
[2]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html
[3]: http://www.networkworld.com/article/3002228/router/best-vpn-routers-for-small-business.html#tk.nww-fsb
[4]: https://nordvpn.com/blog/nordvpn-servers-roskomnadzor-russia/
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[6]: https://torguard.net/blog/why-torguard-has-removed-all-russian-servers/
[7]: https://blog.ipvanish.com/ipvanish-removes-russian-vpn-servers-from-moscow/
[8]: https://www.vpnunlimitedapp.com/blog/what-roskomnadzor-demands-from-vpns/
[9]: https://www.vpnunlimitedapp.com/blog/keepsolid-wise-a-smart-solution-to-get-total-online-freedom/
[10]: https://nordvpn.com/blog/nordvpn-servers-roskomnadzor-russia/
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: (liujing97)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to create a filesystem on a Linux partition or logical volume)
[#]: via: (https://opensource.com/article/19/4/create-filesystem-linux-partition)
[#]: author: (Kedar Vijay Kulkarni (Red Hat) https://opensource.com/users/kkulkarn)
How to create a filesystem on a Linux partition or logical volume
======
Learn to create a filesystem and mount it persistently or non-persistently in your system.
![Filing papers and documents][1]
In computing, a filesystem controls how data is stored and retrieved and helps organize the files on the storage media. Without a filesystem, information in storage would be one large block of data, and you couldn't tell where one piece of information stopped and the next began. A filesystem helps manage all of this by providing names to files that store data and maintaining a table of files and directories—along with their start/end location, total size, etc.—on disks within the filesystem.
In Linux, when you create a hard disk partition or a logical volume, the next step is usually to create a filesystem by formatting the partition or logical volume. This how-to assumes you know how to create a partition or a logical volume, and you just want to format it to contain a filesystem and mount it.
### Create a filesystem
Imagine you just added a new disk to your system and created a partition named **/dev/sda1** on it.
1. To verify that the Linux kernel can see the partition, you can **cat** out **/proc/partitions** like this:
```
[root@localhost ~]# cat /proc/partitions
major minor #blocks name
253 0 10485760 vda
253 1 8192000 vda1
11 0 1048575 sr0
11 1 374 sr1
8 0 10485760 sda
8 1 10484736 sda1
252 0 3145728 dm-0
252 1 2097152 dm-1
252 2 1048576 dm-2
8 16 1048576 sdb
```
2. Decide what kind of filesystem you want to create, such as ext4, XFS, or anything else. Here are a few options:
```
[root@localhost ~]# mkfs.<tab><tab>
mkfs.btrfs mkfs.cramfs mkfs.ext2 mkfs.ext3 mkfs.ext4 mkfs.minix mkfs.xfs
```
3. For the purposes of this exercise, choose ext4. (I like ext4 because it allows you to shrink the filesystem if you need to, a thing that isn't as straightforward with XFS.) Here's how it can be done (the output may differ based on device name/sizes):
```
[root@localhost ~]# mkfs.ext4 /dev/sda1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=8191 blocks
194688 inodes, 778241 blocks
38912 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=799014912
24 block groups
32768 blocks per group, 32768 fragments per group
8112 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
```
4. In the previous step, if you want to create a different kind of filesystem, use a different **mkfs** command variation.
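For instance, if you had chosen XFS in step 3 instead, the command would look like this (a sketch; output omitted):
```
[root@localhost ~]# mkfs.xfs /dev/sda1
```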
### Mount a filesystem
After you create your filesystem, you can mount it in your operating system.
1. First, identify the UUID of your new filesystem. Issue the **blkid** command to list all known block storage devices and look for **sda1** in the output:
```
[root@localhost ~]# blkid
/dev/vda1: UUID="716e713d-4e91-4186-81fd-c6cfa1b0974d" TYPE="xfs"
/dev/sr1: UUID="2019-03-08-16-17-02-00" LABEL="config-2" TYPE="iso9660"
/dev/sda1: UUID="wow9N8-dX2d-ETN4-zK09-Gr1k-qCVF-eCerbF" TYPE="LVM2_member"
/dev/mapper/test-test1: PTTYPE="dos"
/dev/sda1: UUID="ac96b366-0cdd-4e4c-9493-bb93531be644" TYPE="ext4"
[root@localhost ~]#
```
2. Run the following command to mount the **/dev/sda1** device:
```
[root@localhost ~]# mkdir /mnt/mount_point_for_dev_sda1
[root@localhost ~]# ls /mnt/
mount_point_for_dev_sda1
[root@localhost ~]# mount -t ext4 /dev/sda1 /mnt/mount_point_for_dev_sda1/
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 7.9G 920M 7.0G 12% /
devtmpfs 443M 0 443M 0% /dev
tmpfs 463M 0 463M 0% /dev/shm
tmpfs 463M 30M 434M 7% /run
tmpfs 463M 0 463M 0% /sys/fs/cgroup
tmpfs 93M 0 93M 0% /run/user/0
/dev/sda1 2.9G 9.0M 2.7G 1% /mnt/mount_point_for_dev_sda1
[root@localhost ~]#
```
The **df -h** command shows which filesystem is mounted on which mount point. Look for **/dev/sda1**. The mount command above used the device name **/dev/sda1**; you can substitute the UUID identified by the **blkid** command instead, as in the sketch below. Also, note that a new directory was created to mount **/dev/sda1** under **/mnt**.
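A sketch of the equivalent mount by UUID (unmounting first, since the device is already mounted from step 2):
```
[root@localhost ~]# umount /mnt/mount_point_for_dev_sda1/
[root@localhost ~]# mount -t ext4 UUID=ac96b366-0cdd-4e4c-9493-bb93531be644 /mnt/mount_point_for_dev_sda1/
```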
3. A problem with using the mount command directly on the command line (as in the previous step) is that the mount won't persist across reboots. To mount the filesystem persistently, edit the **/etc/fstab** file to include your mount information:
```
UUID=ac96b366-0cdd-4e4c-9493-bb93531be644 /mnt/mount_point_for_dev_sda1/ ext4 defaults 0 0
```
4. After you edit **/etc/fstab** , you can **umount /mnt/mount_point_for_dev_sda1** and run the command **mount -a** to mount everything listed in **/etc/fstab**. If everything went right, you can still list **df -h** and see your filesystem mounted:
```
root@localhost ~]# umount /mnt/mount_point_for_dev_sda1/
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 7.9G 920M 7.0G 12% /
devtmpfs 443M 0 443M 0% /dev
tmpfs 463M 0 463M 0% /dev/shm
tmpfs 463M 30M 434M 7% /run
tmpfs 463M 0 463M 0% /sys/fs/cgroup
tmpfs 93M 0 93M 0% /run/user/0
/dev/sda1 2.9G 9.0M 2.7G 1% /mnt/mount_point_for_dev_sda1
```
5. You can also check whether the filesystem was mounted:
```
[root@localhost ~]# mount | grep ^/dev/sd
/dev/sda1 on /mnt/mount_point_for_dev_sda1 type ext4 (rw,relatime,seclabel,stripe=8191,data=ordered)
```
Now you know how to create a filesystem and mount it persistently or non-persistently within your system.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/create-filesystem-linux-partition
作者:[Kedar Vijay Kulkarni (Red Hat)][a]
选题:[lujun9972][b]
译者:[liujing97](https://github.com/liujing97)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kkulkarn
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)


@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Meta Networks builds user security into its Network-as-a-Service)
[#]: via: (https://www.networkworld.com/article/3385531/meta-networks-builds-user-security-into-its-network-as-a-service.html#tk.rss_all)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
Meta Networks builds user security into its Network-as-a-Service
======
### Meta Networks has a unique approach to the security of its Network-as-a-Service. A tight security perimeter is built around every user and the specific resources each person needs to access.
![MF3d / Getty Images][1]
Network-as-a-Service (NaaS) is growing in popularity and availability for those organizations that don't want to host their own LAN or WAN, or that want to complement or replace their traditional network with something far easier to manage.
With NaaS, a service provider creates a multi-tenant wide area network comprised of geographically dispersed points of presence (PoPs) connected via high-speed Tier 1 carrier links that create the network backbone. The PoPs peer with cloud services to facilitate customer access to cloud applications such as SaaS offerings, as well as to infrastructure services from the likes of Amazon, Google and Microsoft. User organizations connect to the network from whatever facilities they have — data centers, branch offices, or even individual client devices — typically via SD-WAN appliances and/or VPNs.
Numerous service providers now offer Network-as-a-Service. As the network backbone and the PoPs become more of a commodity, the providers are distinguishing themselves on other value-added services, such as integrated security or WAN optimization.
**[ Also read:[What to consider when deploying a next generation firewall][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3]. ]**
Ever since its launch about a year ago, [Meta Networks][4] has staked security as its primary value-add. What's different about the Meta NaaS is the philosophy that the network is built around users, not around specific sites or offices. Meta Networks does this by building a software-defined perimeter (SDP) for each user, giving workers micro-segmented access to only the applications and network resources they need. The vendor was a little ahead of its time with SDP, but the market is starting to catch up. Companies are beginning to show interest in SDP as a VPN replacement or VPN alternative.
Meta NaaS has a zero-trust architecture where each user is bound by an SDP. Each user has a unique, fixed identity no matter from where they connect to this network. The SDP security framework allows one-to-one network connections that are dynamically created on demand between the user and the specific resources they need to access. Everything else on the NaaS is invisible to the user. No access is possible unless it is explicitly granted, and it's continuously verified at the packet level. This model effectively provides dynamically provisioned secure network segmentation.
## SDP tightly controls access to specific resources
This approach works very well when a company wants to securely connect employees, contractors, and external partners to specific resources on the network. For example, one of Meta Networks' customers is Via Transportation, a New York-based company that has a ride-sharing platform. The company operates its own ride-sharing services in various cities in North America and Europe, and it licenses its technology to other transit systems around the world.
Via's operations are completely cloud-native, and so it has no legacy-style site-based WAN to connect its 400-plus employees and contractors to their cloud-based applications. Via's partners, primarily transportation operators in different cities and countries, also need controlled access to specific portions of Via's software platform to manage rideshares. Giving each group of users access to the applications they need — and _only_ to the ones they specifically need — was a challenge using a VPN. Using the Meta NaaS instead gives Via more granular control over who has what access.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]**
Via's employees with managed devices connect to the Meta NaaS using client software on the device, and they are authenticated using Okta and a certificate. Contractors and customers with unmanaged devices use a browser-based access solution from Meta that doesn't require installation or setup. New users can be on-boarded quickly and assigned granular access policies based on their role. Integration with Okta provides information that facilitates identity-based access policies. Once users connect to the network, they can see only the applications and network resources that their policy allows; everything else is invisible to them under the SDP architecture.
For Via, there are several benefits to the Meta NaaS approach. First and foremost, the company doesn't have to own or operate its own WAN infrastructure. Everything is a managed service located in the cloud — the same business model that Via itself espouses. Next, this solution scales easily to support the company's growth. Meta's security integrates with Via's existing identity management system, so identities and access policies can be centrally managed. And finally, the software-defined perimeter hides resources from unauthorized users, creating security by obscurity.
## Tightening security even further
Meta Networks further tightens the security around the user by doing device posture checks — “NAC lite,” if you will. A customer can define the criteria that devices have to meet before they are allowed to connect to the NaaS. For example, the check could be whether a security certificate is installed, if a registry key is set to a specific value, or if anti-virus software is installed and running. It's one more way to enforce company policies on network access.
When end users use the browser-based method to connect to the Meta NaaS, all activity is recorded in a rich log so that everything can be audited; the logs are also used to set alerts and look for anomalies. This data can be exported to a SIEM if desired, but Meta has its own notification and alert system for security incidents.
Meta Networks recently implemented some new features around management, including smart groups and support for the System for Cross-Domain Identity Management (SCIM) protocol. The smart groups feature provides the means to add an extra notation or tag to elements such as devices, services, network subnets or segments, and basically everything that's in the system. These tags can then be applied to policy. For example, a customer could label some of their services as a production, staging, or development environment. Then a policy could be implemented to say that only sales people can access the production environment. Smart groups are just one more way to get even more granular about policy.
The SCIM support makes on-boarding new users simple. SCIM is a protocol that is used to synchronize and provision users and identities from a third-party identity provider such as Okta, Azure AD, or OneLogin. A customer can use SCIM to provision all the users from the IdP into the Meta system, synchronize in real time the groups and attributes, and then use that information to build the access policies inside Meta NaaS.
These and other security features fit into Meta Networks' vision that the security perimeter goes with you no matter where you are, and the perimeter includes everything that was formerly delivered through the data center. It is delivered through the cloud to your client device with always-on security. It's a broad approach to SDP and a unique approach to NaaS.
**Reviews: 4 free, open-source network monitoring tools**
* [Icinga: Enterprise-grade, open-source network-monitoring that scales][6]
* [Nagios Core: Network-monitoring software with lots of plugins, steep learning curve][7]
* [Observium open-source network monitoring tool: Wont run on Windows but has a great user interface][8]
* [Zabbix delivers effective no-frills network monitoring][9]
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385531/meta-networks-builds-user-security-into-its-network-as-a-service.html#tk.rss_all
作者:[Linda Musthaler][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/firewall_network-security_lock_padlock_cyber-security-100776989-large.jpg
[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.metanetworks.com/
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[6]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb
[7]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html
[8]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb
[9]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top Ten Reasons to Think Outside the Router #2: Simplify and Consolidate the WAN Edge)
[#]: via: (https://www.networkworld.com/article/3384928/top-ten-reasons-to-think-outside-the-router-2-simplify-and-consolidate-the-wan-edge.html#tk.rss_all)
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
Top Ten Reasons to Think Outside the Router #2: Simplify and Consolidate the WAN Edge
======
![istock][1]
We're now nearing the end of our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as [Silver Peak][2] counts down the *Top Ten Reasons to Think Outside the Router*. Click for the [#3][3], [#4][4], [#5][5], [#6][6], [#7][7], [#8][8], [#9][9] and [#10][10] reasons to retire traditional branch routers.
_The #2 reason it's time to retire branch routers: conventional router-centric WAN architectures are rigid and complex to manage!_
### **Challenges of conventional WAN edge architecture**
A conventional WAN edge architecture consists of a disparate array of devices, including routers, firewalls, WAN optimization appliances, wireless controllers and so on. This architecture was born in the era when applications were hosted exclusively in the data center. With this model, deploying new applications, provisioning new policies, or making policy changes has become an arduous and time-consuming task. Configuration, deployment and management require specialized on-premises IT expertise to manually program and configure each device with its own management interface, often using an arcane CLI. This process has hit the wall in the cloud era, proving too slow, complex, error-prone, costly and inefficient.
As cloud-first enterprises increasingly migrate applications and infrastructure to the cloud, the traditional WAN architecture is no longer efficient. IT is now faced with a new set of challenges when it comes to connecting users securely and directly to the applications that run their businesses:
* How do you manage and consistently apply QoS and security policies across the distributed enterprise?
* How do you intelligently automate traffic steering across multiple WAN transport services based on application type and unique requirements?
* How do you deliver the highest quality of experiences to users when running applications over broadband, especially voice and video?
* How do you quickly respond to continuously changing business requirements?
These are just some of the new challenges facing IT teams in the cloud era. To be successful, enterprises will need to shift toward a business-first networking model where top-down business intent drives how the network behaves. And they would be well served to deploy a business-driven unified [SD-WAN][11] edge platform to transform their networks from a business constraint to a business accelerant.
### **Shifting toward a business-driven WAN edge platform**
A business-driven WAN edge platform is designed to enable enterprises to realize the full transformation promise of the cloud. It is a model where top-down business intent is the driver, not bottom-up technology constraints. It's outcome-oriented, utilizing automation, artificial intelligence (AI) and machine learning to get smarter every day. Through this continuous adaptation, and the ability to improve the performance of underlying transport and applications, it delivers the highest quality of experience to end users. This is in stark contrast to the router-centric model, where application policies must be shoe-horned to fit within the constraints of the network. A business-driven, top-down approach continuously stays in compliance with business intent and centrally defined security policies.
### **A unified platform for simplifying and consolidating the WAN Edge**
Achieving a business-driven architecture requires a unified platform, designed from the ground up as one system, uniting [SD-WAN][12], [firewall][13], [segmentation][14], [routing][15], [WAN optimization][16], and application visibility and control in a single platform. Furthermore, it requires [centralized orchestration][17] with complete observability of the entire wide area network through a single pane of glass.
The use case “[Simplifying WAN Architecture][18]” describes in detail key capabilities of the Silver Peak [Unity EdgeConnect™][19] SD-WAN edge platform. It illustrates how EdgeConnect enables enterprises to simplify branch office WAN edge infrastructure and streamline deployment, configuration and ongoing management.
![][20]
### **Business and IT outcomes of a business-driven SD-WAN**
* Accelerates deployment, leveraging consistent hardware, software, cloud delivery models
* Saves up to 40 percent on hardware, software, installation, management and maintenance costs when replacing traditional routers
* Protects existing investment in security through simplified service chaining with our broadest ecosystem partners: [Check Point][21], [Forcepoint][22], [McAfee][23], [OPAQ][24], [Palo Alto Networks][25], [Symantec][26] and [Zscaler][27].
* Reduces footprint by 75 percent as it unifies network functions into a single platform
* Saves more than 50 percent on WAN optimization costs by selectively applying it when and where it is needed, on an application-by-application basis
* Accelerates time-to-resolution of application or network performance bottlenecks from days to minutes with simple, visual application and WAN analytics
Calculate your [ROI][28] today and learn why the time is now to [think outside the router][29] and deploy the business-driven Silver Peak EdgeConnect SD-WAN edge platform!
![][30]
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384928/top-ten-reasons-to-think-outside-the-router-2-simplify-and-consolidate-the-wan-edge.html#tk.rss_all
作者:[Rami Rammaha][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Rami-Rammaha/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/silverpeak_main-100792490-large.jpg
[2]: https://www.silver-peak.com/why-silver-peak
[3]: http://blog.silver-peak.com/think-outside-the-router-reason-3-mpls-contract-renewal
[4]: http://blog.silver-peak.com/top-ten-reasons-to-think-outside-the-router-4-broadband-is-used-only-for-failover
[5]: http://blog.silver-peak.com/think-outside-the-router-reason-5-manual-cli-based-configuration-and-management
[6]: http://blog.silver-peak.com/https-blog-silver-peak-com-think-outside-the-router-reason-6
[7]: http://blog.silver-peak.com/think-outside-the-router-reason-7-exorbitant-router-support-and-maintenance-costs
[8]: http://blog.silver-peak.com/think-outside-the-router-reason-8-garbled-voip-pixelated-video
[9]: http://blog.silver-peak.com/think-outside-router-reason-9-sub-par-saas-performance
[10]: http://blog.silver-peak.com/think-outside-router-reason-10-its-getting-cloudy
[11]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[12]: https://www.silver-peak.com/sd-wan
[13]: https://www.silver-peak.com/products/unity-edge-connect/orchestrated-security-policies
[14]: https://www.silver-peak.com/resource-center/centrally-orchestrated-end-end-segmentation
[15]: https://www.silver-peak.com/products/unity-edge-connect/bgp-routing
[16]: https://www.silver-peak.com/products/unity-boost
[17]: https://www.silver-peak.com/products/unity-orchestrator
[18]: https://www.silver-peak.com/use-cases/simplifying-wan-architecture
[19]: https://www.silver-peak.com/products/unity-edge-connect
[20]: https://images.idgesg.net/images/article/2019/04/sp_linkthrough-copy-100792505-large.jpg
[21]: https://www.silver-peak.com/resource-center/check-point-silver-peak-securing-internet-sd-wan
[22]: https://www.silver-peak.com/company/tech-partners/forcepoint
[23]: https://www.silver-peak.com/company/tech-partners/mcafee
[24]: https://www.silver-peak.com/company/tech-partners/opaq-networks
[25]: https://www.silver-peak.com/resource-center/palo-alto-networks-and-silver-peak
[26]: https://www.silver-peak.com/company/tech-partners/symantec
[27]: https://www.silver-peak.com/resource-center/zscaler-and-silver-peak-solution-brief
[28]: https://www.silver-peak.com/sd-wan-interactive-roi-calculator
[29]: https://www.silver-peak.com/think-outside-router
[30]: https://images.idgesg.net/images/article/2019/04/roi-100792506-large.jpg
@ -0,0 +1,171 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is 5G? How is it better than 4G?)
[#]: via: (https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html#tk.rss_all)
[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)
What is 5G? How is it better than 4G?
======
### 5G networks will boost wireless throughput by a factor of 10 and may replace wired broadband. But when will they be available, and why are 5G and IoT so linked together?
![Thinkstock][1]
[5G wireless][2] is an umbrella term to describe a set of standards and technologies for a radically faster wireless internet that ideally is up to 20 times faster with 120 times less latency than 4G, setting the stage for IoT networking advances and support for new high-bandwidth applications.
## What is 5G? Technology or buzzword?
It will be years before the technology reaches its full potential worldwide, but meanwhile some 5G network services are being rolled out today. 5G is as much a marketing buzzword as a technical term, and not all services marketed as 5G conform to the standard.
**[From Mobile World Congress:[The time of 5G is almost here][3].]**
## 5G speed vs 4G
With every new generation of wireless technology, the biggest appeal is increased speed. 5G networks have potential peak download speeds of [20 Gbps, with 10 Gbps being seen as typical][4]. That's not just faster than current 4G networks, which currently top out at around 1 Gbps, but also faster than cable internet connections that deliver broadband to many people's homes. 5G offers network speeds that rival optical-fiber connections.
Throughput isn't 5G's only important speed improvement; it also features a huge reduction in network latency. That's an important distinction: throughput determines how long it takes to download a large file, while latency is determined by network bottlenecks and delays that slow down responses in back-and-forth communication.
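As a rough back-of-the-envelope sketch (using the peak figures above and ignoring protocol overhead), the difference for a 10 GB download, which is 80 gigabits of data, works out like this:
```
# Hypothetical download times for an 80 Gb (10 GB) file:
#   4G at ~1 Gbps:  80 / 1  = 80 seconds
#   5G at ~10 Gbps: 80 / 10 = 8 seconds
```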
Latency can be difficult to quantify because it varies based on myriad network conditions, but 5G networks are capable of latency rates that are less than a millisecond in ideal conditions. Overall, 5G latency will be lower than 4G's by a factor of 60 to 120. That will make possible a number of applications such as virtual reality that delay makes impractical today.
## 5G technology
The technology underpinnings of 5G are defined by a series of standards that have been in the works for the better part of a decade. One of the most important of these is 5G New Radio, or 5G NR, formalized by the 3rd Generation Partnership Project, a standards organization that develops protocols for mobile telephony. 5G NR will dictate many of the ways in which consumer 5G devices will operate, and was [finalized in June of 2018][5].
**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]**
A number of individual technologies have come together to make the speed and latency improvements of 5G possible, and below are some of the most important.
## Millimeter waves
5G networks will for the most part use frequencies in the 30 to 300 GHz range. (Wavelengths at these frequencies are between 1 and 10 millimeters, thus the name.) This high-frequency band can [carry much more information per unit of time than the lower-frequency signals][7] currently used by 4G LTE, which is generally below 1 GHz, or Wi-Fi, which tops out at 6 GHz.
Millimeter-wave technology has traditionally been expensive and difficult to deploy. Technical advances have overcome those difficulties, which is part of what's made 5G possible today.
## Small cells
One drawback of millimeter wave transmission is that it's more prone to interference than Wi-Fi or 4G signals as they pass through physical objects.
To overcome this, the model for 5G infrastructure will be different from 4G's. Instead of the large cellular-antenna masts we've come to accept as part of the landscape, 5G networks will be powered by [much smaller base stations spread throughout cities about 250 meters apart][8], creating cells of service that are also smaller.
These 5G base stations have lower power requirements than those for 4G and can be attached to buildings and utility poles more easily.
## Massive MIMO
Despite 5G base stations being much smaller than their 4G counterparts, they pack in many more antennas. These antennas are [multiple-input multiple-output (MIMO)][9], meaning that they can handle multiple two-way conversations over the same data signal simultaneously. 5G networks can handle [more than 20 times as many conversations this way as 4G networks][10].
Massive MIMO promises to [radically improve on base station capacity limits][11], allowing individual base stations to have conversations with many more devices. This in particular is why 5G may drive wider adoption of IoT. In theory, a lot more internet-connected wireless gadgets will be able to be deployed in the same space without overwhelming the network.
## Beamforming
Making sure all these conversations go back and forth to the right places is tricky, especially with the aforementioned problems millimeter-wave signals have with interference. To overcome those issues, 5G stations deploy advanced beamforming techniques, which use constructive and destructive radio interference to make signals directional rather than broadcast. That effectively boosts signal strength and range in a particular direction.
## 5G availability
The first commercial 5G network was [rolled out in Qatar in May 2018][12]. Since then, networks have been popping up across the world, from Argentina to Vietnam. [Lifewire has a good, frequently updated list][13].
One thing to keep in mind, though, is that not all 5G networks deliver on all the technology's promises yet. Some early 5G offerings piggyback on existing 4G infrastructure, which reduces the potential speed gains; other services dubbed 5G for marketing purposes don't even comply with the standard. A closer look at offerings from U.S. wireless carriers will demonstrate some of the pitfalls.
## Wireless carriers and 5G
Technically, 5G is available in the U.S. today. But the caveats involved in that statement vary from carrier to carrier, demonstrating the long road that still lies ahead before 5G becomes omnipresent.
Verizon is making probably the biggest early 5G push. It announced [5G Home][14] in parts of four cities in October of 2018, a service that requires using a special 5G hotspot to connect to the network and feed it to your other devices via Wi-Fi.
Verizon planned an April rollout of a [mobile service in Minneapolis and Chicago][15], which will spread to other cities over the course of the year. Accessing the 5G network will cost customers an extra monthly fee plus what they'll have to spend on a phone that can actually connect to it (more on that in a moment). As an added wrinkle, Verizon is deploying what it calls [5G TF][16], which doesn't match up with the 5G NR standard.
AT&T [announced the availability of 5G in 12 U.S. cities in December 2018][17], with nine more coming by the end of 2019, but even in those cities, availability is limited to the downtown areas. To use the network requires a special Netgear hotspot that connects to the service, then provides a Wi-Fi signal to phones and other devices.
Meanwhile, AT&T is also rolling out speed boosts to its 4G network, which it's dubbed 5GE even though these improvements aren't related to 5G networking. ([This is causing backlash][18].)
Sprint will have 5G service in parts of four cities by May of 2019, and five more by the end of the year. But while Sprint's 5G offering makes use of massive MIMO cells, they [aren't using millimeter-wave signals][19], meaning that Sprint users won't see as much of a speed boost as customers of other carriers.
T-Mobile is pursuing a similar model, and it [won't roll out its service until the end of 2019][20] because there won't be any phones to connect to it.
One kink that might stop a rapid spread of 5G is the need to spread out all those small-cell base stations. Their small size and low power requirements make them easier to deploy than current 4G tech in a technical sense, but that doesn't mean it's simple to convince governments and property owners to install dozens of them everywhere. Verizon actually set up a [website that you can use to petition your local elected officials][21] to speed up 5G base station deployment.
## **5G phones: When available? When to buy?**
The first major 5G phone to be announced is the Samsung Galaxy S10 5G, which should be available by the end of the summer of 2019. You can also order a “[Moto Mod][22]” from Verizon, which [transforms Moto Z3 phones into 5G-compatible devices][23].
But unless you can't resist the lure of being an early adopter, you may wish to hold off for a bit; some of the quirks and looming questions about carrier rollout may mean that you end up with a phone that [isn't compatible with your carrier's entire 5G network][24].
One laggard that may surprise you is Apple: analysts believe that there won't be a [5G-compatible iPhone until 2020 at the earliest][25]. But this isn't out of character for the company; Apple [also lagged behind Samsung in releasing 4G-compatible phones][26] back in 2012.
Still, the 5G flood is coming. 5G-compatible devices [dominated Barcelona's Mobile World Congress in 2019][3], so expect to have a lot more choice on the horizon.
## Why are people talking about 6G already?
Some experts say [5G won't be able to meet the latency and reliability targets][27] it is shooting for. These skeptics are already looking ahead to 6G, which they say will try to address these projected shortcomings.
There is [a group that is researching new technologies that can be rolled into 6G][28] that calls itself the Center for Converged TeraHertz Communications and Sensing (ComSenTer). Part of the spec they're working on calls for 100Gbps speed for every device.
In addition to improving reliability and boosting speed, 6G is also trying to enable thousands of simultaneous connections. If successful, this feature could help to network IoT devices, which can be deployed in the thousands as sensors in a variety of industrial settings.
Even in its embryonic form, 6G may already be facing security concerns due to the emergence of newly discovered [potential for man-in-the-middle attacks in tera-hertz based networks][29]. The good news is that there's plenty of time to find solutions to the problem. 6G networks aren't expected to start rolling out until 2030.
**More about 5g networks:**
* [How enterprises can prep for 5G networks][30]
* [5G vs 4G: How speed, latency and apps support differ][31]
* [Private 5G networks are coming][32]
* [5G and 6G wireless have security issues][33]
* [How millimeter-wave wireless could help support 5G and IoT][34]
Join the Network World communities on [Facebook][35] and [LinkedIn][36] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html#tk.rss_all
作者:[Josh Fruhlinger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Josh-Fruhlinger/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/04/5g-100718139-large.jpg
[2]: https://www.networkworld.com/article/3203489/what-is-5g-wireless-networking-benefits-standards-availability-versus-lte.html
[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[4]: https://www.networkworld.com/article/3330603/5g-versus-4g-how-speed-latency-and-application-support-differ.html
[5]: https://www.theverge.com/2018/6/15/17467734/5g-nr-standard-3gpp-standalone-finished
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[7]: https://www.networkworld.com/article/3291323/millimeter-wave-wireless-could-help-support-5g-and-iot.html
[8]: https://spectrum.ieee.org/video/telecom/wireless/5g-bytes-small-cells-explained
[9]: https://www.networkworld.com/article/3250268/what-is-mu-mimo-and-why-you-need-it-in-your-wireless-routers.html
[10]: https://spectrum.ieee.org/tech-talk/telecom/wireless/5g-researchers-achieve-new-spectrum-efficiency-record
[11]: https://www.networkworld.com/article/3262991/future-wireless-networks-will-have-no-capacity-limits.html
[12]: https://venturebeat.com/2018/05/14/worlds-first-commercial-5g-network-launches-in-qatar/
[13]: https://www.lifewire.com/5g-availability-world-4156244
[14]: https://www.digitaltrends.com/computing/verizon-5g-home-promises-up-to-gigabit-internet-speeds-for-50/
[15]: https://lifehacker.com/heres-your-cheat-sheet-for-verizons-new-5g-data-plans-1833278817
[16]: https://www.theverge.com/2018/10/2/17927712/verizon-5g-home-internet-real-speed-meaning
[17]: https://www.cnn.com/2018/12/18/tech/5g-mobile-att/index.html
[18]: https://www.networkworld.com/article/3339720/like-4g-before-it-5g-is-being-hyped.html?nsdr=true
[19]: https://www.digitaltrends.com/mobile/sprint-5g-rollout/
[20]: https://www.cnet.com/news/t-mobile-delays-full-600-mhz-5g-launch-until-second-half/
[21]: https://lets5g.com/
[22]: https://www.verizonwireless.com/support/5g-moto-mod-faqs/?AID=11365093&SID=100098X1555750Xbc2e857934b22ebca1a0570d5ba93b7c&vendorid=CJM&PUBID=7105813&cjevent=2e2150cb478c11e98183013b0a1c0e0c
[23]: https://www.digitaltrends.com/cell-phone-reviews/moto-z3-review/
[24]: https://www.businessinsider.com/samsung-galaxy-s10-5g-which-us-cities-have-5g-networks-2019-2
[25]: https://www.cnet.com/news/why-apples-in-no-rush-to-sell-you-a-5g-iphone/
[26]: https://mashable.com/2012/09/09/iphone-5-4g-lte/#hYyQUelYo8qq
[27]: https://www.networkworld.com/article/3305359/6g-will-achieve-terabits-per-second-speeds.html
[28]: https://www.networkworld.com/article/3285112/get-ready-for-upcoming-6g-wireless-too.html
[29]: https://www.networkworld.com/article/3315626/5g-and-6g-wireless-technologies-have-security-issues.html
[30]: https://www.networkworld.com/article/3306720/mobile-wireless/how-enterprises-can-prep-for-5g.html
[31]: https://www.networkworld.com/article/3330603/mobile-wireless/5g-versus-4g-how-speed-latency-and-application-support-differ.html
[32]: https://www.networkworld.com/article/3319176/mobile-wireless/private-5g-networks-are-coming.html
[33]: https://www.networkworld.com/article/3315626/network-security/5g-and-6g-wireless-technologies-have-security-issues.html
[34]: https://www.networkworld.com/article/3291323/mobile-wireless/millimeter-wave-wireless-could-help-support-5g-and-iot.html
[35]: https://www.facebook.com/NetworkWorld/
[36]: https://www.linkedin.com/company/network-world
@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 Essentials for Achieving Resiliency at the Edge)
[#]: via: (https://www.networkworld.com/article/3386438/3-essentials-for-achieving-resiliency-at-the-edge.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
3 Essentials for Achieving Resiliency at the Edge
======
### Edge computing requires different thinking and management to ensure the always-on availability that users have come to demand.
![iStock][1]
> “The IT industry has done a good job of making robust data centers that are highly manageable, highly secure, with redundant systems,” [says Kevin Brown][2], SVP Innovation and CTO for Schneider Electric's Secure Power Division.
However, he continues, companies then connect these data centers to messy edge closets and server rooms, which over time have become “micro mission-critical data centers” in their own right — making system availability vital. If not designed and managed correctly, the situation can be disastrous if users cannot connect to business-critical applications.
To avoid unacceptable downtime, companies should incorporate three essential ingredients into their edge computing deployments: remote management, physical security, and rapid deployments.
**Remote management**
Depending on the company's size, staff could be managing several — or many — edge sites. Not only is this time-consuming and costly, it's also complex, especially if protocols differ from site to site.
While some organizations might deploy traditional remote monitoring technology to manage these sites, it's important to note that these tools don't provide real-time status updates, are largely reactionary rather than proactive, and are sometimes limited in terms of data output.
Coupled with the need to overcome these limitations, the economics for managing edge sites necessitate that organizations consider a digital, or cloud-based, solution. In addition to cost savings, these platforms provide:
* Simplification in monitoring across edge sites
* Real-time visibility, right down to any device on the network
* Predictive analytics, including data-driven intelligence and recommendations to ensure proactive service delivery
**Physical security**
Small, local edge computing sites are often situated within larger corporate or wide-open spaces, sometimes in highly accessible, shared offices and public areas. And sometimes they're set up on the fly for a time-sensitive project.
However, when there is no dedicated location and open racks are unsecured, the risks of malicious and accidental incidents escalate.
To prevent unauthorized access to IT equipment at edge computing sites, proper physical security is critical and requires:
* Physical space monitoring, with environmental sensors for temperature and humidity
* Access control, with biometric sensors as an option
* Audio and video surveillance and monitoring with recording
* Where possible, installation of IT equipment within a secure enclosure
**Rapid deployments**
The [benefits of edge computing][3] are significant, especially the ability to bring bandwidth-intensive computing closer to the user, which leads to faster speed to market and greater productivity.
Create a holistic plan that will enable the company to quickly deploy edge sites, while ensuring resiliency and reliability. That means having a standardized, repeatable process including:
* Pre-configured, integrated equipment that combines server, storage, networking, and software in a single enclosure — a prefabricated micro data center, if you will
* Designs that specify supporting racks, UPSs, PDUs, cable management, airflow practices, and cooling systems
These best practices, as well as a balanced, systematic approach to edge computing deployments, will ensure the always-on availability that today's employees and users have come to expect.
Learn how to enable resiliency within your edge computing deployment at [APC.com][4].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386438/3-essentials-for-achieving-resiliency-at-the-edge.html#tk.rss_all
作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-900882382-100792635-large.jpg
[2]: https://www.youtube.com/watch?v=IfsCTFSH6Jc
[3]: https://www.networkworld.com/article/3342455/how-edge-computing-will-bring-business-to-the-next-level.html
[4]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
@ -0,0 +1,70 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5G: A deep dive into fast, new wireless)
[#]: via: (https://www.networkworld.com/article/3385030/5g-a-deep-dive-into-fast-new-wireless.html#tk.rss_all)
[#]: author: (Craig Mathias https://www.networkworld.com/author/Craig-Mathias/)
5G: A deep dive into fast, new wireless
======
### 5G wireless networks are just about ready for prime time, overcoming backhaul and backward-compatibility issues, and promising the possibility of all-mobile networking through enhanced throughput.
The next step in the evolution of wireless WAN communications, [5G networks][1], is about to hit the front pages, and for good reason: it will complete the evolution of cellular from wireline augmentation to wireline replacement, and strategically from mobile-first to mobile-only.
So it's not too early to start at least basic planning to understand how 5G will fit into and benefit IT plans across organizations of all sizes, industries and missions.
**[ From Mobile World Congress:[The time of 5G is almost here][2] ]**
5G will of course provide end-users with the additional throughput, capacity, and other elements to address the continuing and dramatic growth in geographic availability, user base, range of subscriber devices, demand for capacity, and application requirements, but will also enable service providers to benefit from new opportunities in overall strategy, service offerings and broadened marketplace presence.
![A look at the key features you can expect in 5G wireless.][3]
This article explores the technologies and market drivers behind 5G, with an emphasis on what 5G means to enterprise and organizational IT.
While 5G remains an imprecise term today, key objectives for the development of the advances required have become clear. These are as follows:
## 5G speeds
As is the case with Wi-Fi, major advances in cellular are first and foremost defined by new upper-bound _throughput_ numbers. The magic number here for 5G is in fact a _floor_ of 1 Gbps, with numbers as high as 10 Gbps mentioned by some. However, and again as is the case with Wi-Fi, it's important to think more in terms of overall individual-cell and system-wide _capacity_. We believe, then, that per-user throughput of 50 Mbps is a more reasonable but clearly still remarkable working assumption, with up to 300 Mbps peak throughput realized in some deployments over the next five years. The possibility of reaching higher throughput than that exceeds our planning horizon, but such is, well, possible.
## Reduced latency
Perhaps even more important than throughput, though, is a reduction in the round-trip time for each packet. Reducing latency is important for voice (which will most certainly be all-IP in 5G implementations), for video, and, again, for improving overall capacity. The over-the-air latency goal for 5G is less than 10ms, with 1ms possible in some defined classes of service.
## 5G network management and OSS
Operators are always seeking to reduce overhead and operating expense, so enhancements to both system management and operational support systems (OSS) yielding improvements in reliability, availability, serviceability, resilience, consistency, analytics capabilities, and operational efficiency, are all expected. The benefits of these will, in most cases, however, be transparent to end-users.
## Mobility and 5G technology
Very-high-speed user mobility, to as much as hundreds of kilometers per hour, will be supported, thus serving users on all modes of transportation. Regulatory and situation-dependent restrictions (most notably, on aircraft), however, will still apply.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3385030/5g-a-deep-dive-into-fast-new-wireless.html#tk.rss_all
作者:[Craig Mathias][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Craig-Mathias/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[3]: https://images.idgesg.net/images/article/2017/06/2017_nw_5g_wireless_key_features-100727485-large.jpg
@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Square Brackets in Bash: Part 2)
[#]: via: (https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
Using Square Brackets in Bash: Part 2
======
![square brackets][1]
We continue our tour of square brackets in Bash with a look at how they can act as a command.
[Creative Commons Zero][2]
Welcome back to our mini-series on square brackets. In the [previous article][3], we looked at various ways square brackets are used at the command line, including globbing. If you've not read that article, you might want to start there.
Square brackets can also be used as a command. Yep, for example, in:
```
[ "a" = "a" ]
```
which is, by the way, a valid command that you can execute, `[ ... ]` is a command. Notice that there are spaces between the opening bracket `[` and the parameters `"a" = "a"`, and then between the parameters and the closing bracket `]`. That is precisely because the brackets here act as a command, and you are separating the command from its parameters.
You would read the above line as "_test whether the string "a" is the same as string "a"_". If the premise is true, the `[ ... ]` command finishes with an exit status of 0. If not, the exit status is 1. [We talked about exit statuses in a previous article][4], and there you saw that you could access the value by checking the `$?` variable.
Try it out:
```
[ "a" = "a" ]
echo $?
```
And now try:
```
[ "a" = "b" ]
echo $?
```
In the first case, you will get a 0 (the premise is true), and running the second will give you a 1 (the premise is false). Remember that, in Bash, an exit status from a command that is 0 means it exited normally with no errors, and that makes it `true`. If there were any errors, the exit value would be a non-zero value (`false`). The `[ ... ]` command follows the same rules so that it is consistent with the rest of the other commands.
The `[ ... ]` command comes in handy in `if ... then` constructs and also in loops that require a certain condition to be met (or not) before exiting, like the `while` and `until` loops.
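For example, here is a minimal sketch of the test's exit status driving an `if ... then` construct (using the standard `$USER` variable):
```
if [ "$USER" = "root" ]; then
  echo "Careful: you are running as root"
else
  echo "Logged in as $USER"
fi
```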
The logical operators for testing stuff are pretty straightforward:
```
[ STRING1 = STRING2 ] => checks to see if the strings are equal
[ STRING1 != STRING2 ] => checks to see if the strings are not equal
[ INTEGER1 -eq INTEGER2 ] => checks to see if INTEGER1 is equal to INTEGER2
[ INTEGER1 -ge INTEGER2 ] => checks to see if INTEGER1 is greater than or equal to INTEGER2
[ INTEGER1 -gt INTEGER2 ] => checks to see if INTEGER1 is greater than INTEGER2
[ INTEGER1 -le INTEGER2 ] => checks to see if INTEGER1 is less than or equal to INTEGER2
[ INTEGER1 -lt INTEGER2 ] => checks to see if INTEGER1 is less than INTEGER2
[ INTEGER1 -ne INTEGER2 ] => checks to see if INTEGER1 is not equal to INTEGER2
etc...
```
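As a quick sketch, one of the integer tests driving a `while` loop looks like this:
```
count=1
while [ $count -le 5 ]; do    # keep looping while count <= 5
  echo "Iteration $count"
  count=$(( count + 1 ))
done
```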
You can also test for some very shell-specific things. The `-f` option, for example, tests whether a file exists or not:
```
for i in {000..099}; \
do \
if [ -f file$i ]; \
then \
echo file$i exists; \
else \
touch file$i; \
echo I made file$i; \
fi; \
done
```
If you run this in your test directory, line 3 will test whether each file in your long list of files exists. If a file does exist, the loop just prints a message; but if it doesn't exist, the loop creates it, to make sure the whole set is complete.
You could write the loop more compactly like this:
```
for i in {000..099};\
do\
if [ ! -f file$i ];\
then\
touch file$i;\
echo I made file$i;\
fi;\
done
```
The `!` modifier in the condition inverts the premise, thus line 3 would translate to "_if the file `file$i` does not exist_".
Try it: delete some random files from the bunch you have in your test directory. Then run the loop shown above and watch how it rebuilds the list.
There are plenty of other tests you can try, including `-d` to test whether the name belongs to a directory and `-h` to test whether it is a symbolic link. You can also test whether a file belongs to a certain group of users (`-G`), whether one file is older than another (`-ot`), or even whether a file contains something or is, on the other hand, empty.
Try the following for example. Add some content to some of your files:
```
echo "Hello World" >> file023
echo "This is a message" >> file065
echo "To humanity" >> file010
```
and then run this:
```
for i in {000..099};\
do\
if [ ! -s file$i ];\
then\
rm file$i;\
echo I removed file$i;\
fi;\
done
```
And you'll remove all the files that are empty, leaving only the ones you added content to.
To find out more, check the manual page for the `test` command (a synonym for `[ ... ]`) with `man test`.
You may also see double brackets (`[[ ... ]]`) sometimes used in a similar way to single brackets. The reason is that double brackets give you a wider range of comparison operators. You can use `==`, for example, to compare a string to a pattern instead of just another string; or `<` and `>` to test whether a string would come before or after another in a dictionary.
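As a quick sketch of that extra power, an unquoted right-hand side of `==` inside double brackets is matched as a glob pattern rather than a literal string:
```
name="file023"
if [[ $name == file* ]]; then    # glob match, not plain string equality
  echo "$name matches the pattern file*"
fi
```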
To find out more about extended operators [check out this full list of Bash expressions][5].
### Next Time
In an upcoming article, we'll continue our tour and take a look at the role of parentheses `()` in Linux command lines. See you then!
_Read more:_
1. [The Meaning of Dot (`.`)][6]
2. [Understanding Angle Brackets in Bash (`<...>`)][7]
3. [More About Angle Brackets in Bash(`<` and `>`)][8]
4. [And, Ampersand, and & in Linux (`&`)][9]
5. [Ampersands and File Descriptors in Bash (`&`)][10]
6. [Logical & in Bash (`&`)][4]
7. [All about {Curly Braces} in Bash (`{}`)][11]
8. [Using Square Brackets in Bash: Part 1][3]
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2
作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-brackets-3734552_1920.jpg?itok=hv9D6TBy (square brackets)
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
[4]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
[5]: https://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions
[6]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
[7]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
[8]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
[9]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
[10]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
[11]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
@ -0,0 +1,90 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (When Wi-Fi is mission-critical, a mixed-channel architecture is the best option)
[#]: via: (https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
When Wi-Fi is mission-critical, a mixed-channel architecture is the best option
======
### Multi-channel is the norm for Wi-Fi today, but it's not always the best choice. Single-channel and hybrid APs offer compelling alternatives when reliable Wi-Fi is a must.
![Getty Images][1]
I've worked with a number of companies that have implemented digital projects only to see them fail. The ideation was correct, the implementation was sound, and the market opportunity was there. The weak link? The Wi-Fi network.
For example, a large hospital wanted to improve clinician response times to patient alarms by having telemetry information sent to mobile devices. Without the system, the only way a nurse would know about a patient alarm is from an audible alert. And with all the background noise, it's often tough to discern where noises are coming from. The problem was that the Wi-Fi network in the hospital had not been upgraded in years and caused messages to be significantly delayed in their delivery, often taking four to five minutes to arrive. The long delivery times caused a lack of confidence in the system, so many clinicians stopped using it and went back to manual alerting. As a result, the project was considered a failure.
I've seen similar examples in manufacturing, K-12 education, entertainment, and other industries. Businesses are competing on the basis of customer experience, and that's driven from the ever-expanding, ubiquitous wireless edge. Great Wi-Fi doesn't necessarily mean market leadership, but bad Wi-Fi will have a negative impact on customers and employees. And in today's competitive climate, that's a recipe for disaster.
**[ Read also:[Wi-Fi site-survey tips: How to avoid interference, dead spots][2] ]**
## Wi-Fi performance historically inconsistent
The problem with Wi-Fi is that it's inherently flaky. I'm sure everyone reading this has experienced the typical flaws: failed downloads, dropped connections, inconsistent performance, and lengthy wait times to connect to public hot spots.
Picture sitting in a conference prior to a keynote address and being able to tweet, send email, browse the web, and do other things with no problem. Then the keynote speaker comes on stage, the entire audience starts snapping pics, uploading those pictures, and streaming things, and the Wi-Fi stops working. I find this to be the norm more than the exception, underscoring the need for [no-compromise Wi-Fi][3].
The question for network professionals is how to get to a place where the Wi-Fi is rock solid 100% of the time. Some say that just beefing up the existing network will do that, and it might, but in some cases, the type of Wi-Fi might not be appropriate.
The most commonly deployed type of Wi-Fi is multi-channel, also known as micro-cell, where each client connects to the access point (AP) using a radio channel. A high-quality experience is based on two things: good signal strength and minimal interference. Several things can cause interference, such as APs being too close together, layout issues, or interference from other equipment. To minimize interference, businesses invest a significant amount of time and money in [site surveys to plan the optimal channel map][2], but even when that's done well, Wi-Fi glitches can still happen.
**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][4] ]**
## Multi-channel Wi-Fi not always the best choice
For many carpeted offices, multi-channel Wi-Fi is likely to be solid, but there are some environments where external circumstances will impact performance. A good example of this is a multi-tenant building in which there are multiple Wi-Fi networks transmitting on the same channel and interfering with one another. Another example is a hospital where there are many campus workers moving between APs. The client will also try to connect to the best AP, causing it to continually disconnect and reconnect, resulting in dropped sessions. Then there are environments such as schools, airports, and conference facilities where there is a high number of transient devices, and multi-channel can struggle to keep up.
## Single channel Wi-Fi offers better reliability but with a performance hit
What's a network manager to do? Is inconsistent Wi-Fi just a fait accompli? Multi-channel is the norm, but it isn't designed for dynamic physical environments or those where reliable connectivity is a must.
Several years ago an alternative architecture was proposed that would solve these problems. As the name suggests, "single channel" Wi-Fi uses a single radio channel for all APs in the network. Think of this as a single Wi-Fi fabric that operates on one channel. With this architecture, the placement of APs is irrelevant because they all utilize the same channel, so they won't interfere with one another. This has an obvious simplicity advantage: if coverage is poor, there's no reason to do another expensive site survey. Instead, just drop in APs where they are needed.
One of the disadvantages of single-channel is that aggregate network throughput is lower than multi-channel because only one channel can be used. This might be fine in environments where reliability trumps performance, but many organizations want both.
## Hybrid APs offer the best of both worlds
There has been recent innovation from the manufacturers of single-channel systems that mixes channel architectures, creating a "best of both worlds" deployment that offers the throughput of multi-channel with the reliability of single-channel. For example, Allied Telesis offers Hybrid APs that can operate in multi-channel and single-channel mode simultaneously. That means some clients can be assigned to multi-channel for maximum throughput, while others use single-channel for a seamless roaming experience.
A practical use-case of such a mix might be a logistics facility where the office staff uses multi-channel, but the fork-lift operators use single-channel for continuous connectivity as they move throughout the warehouse.
Wi-Fi was once a network of convenience, but now it is perhaps the most mission-critical of all networks. A traditional multi-channel system might work, but due diligence should be done to see how it functions under a heavy load. IT leaders need to understand how important Wi-Fi is to digital transformation initiatives, do the proper testing to ensure it's not the weak link in the infrastructure chain, and choose the best technology for today's environment.
**Reviews: 4 free, open-source network monitoring tools:**
* [Icinga: Enterprise-grade, open-source network-monitoring that scales][5]
* [Nagios Core: Network-monitoring software with lots of plugins, steep learning curve][6]
* [Observium open-source network monitoring tool: Wont run on Windows but has a great user interface][7]
* [Zabbix delivers effective no-frills network monitoring][8]
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/09/tablet_graph_wifi_analytics-100771638-large.jpg
[2]: https://www.networkworld.com/article/3315269/wi-fi-site-survey-tips-how-to-avoid-interference-dead-spots.html
[3]: https://www.alliedtelesis.com/blog/no-compromise-wi-fi
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[5]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb
[6]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html
[7]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb
[8]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,137 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Zero-trust: microsegmentation networking)
[#]: via: (https://www.networkworld.com/article/3384748/zero-trust-microsegmentation-networking.html#tk.rss_all)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
Zero-trust: microsegmentation networking
======
### Microsegmentation gives administrators the control to set granular policies in order to protect the application environment.
![Aaron Burson \(CC0\)][1]
The transformation to the digital age has introduced significant changes to cloud and data center environments. It has compelled organizations to innovate more quickly than ever before, which brings both advantages and disadvantages.
Network and security teams need to keep up with this rapid pace of change. If you cannot match the speed of the [digital age][2], then ultimately bad actors will become a hazard. Therefore, organizations must move to a [zero-trust environment][3]: default deny, with least-privilege access. In today's evolving digital world this is a primary key to success.
Ideally, a comprehensive solution must provide protection across all platforms, including legacy servers, VMs, and services in public clouds, whether on-premises or off-premises, hosted, managed, or self-managed. We are going to stay hybrid for a long time, so we need to equip our architecture with [zero-trust][4].
**[ Dont miss[customer reviews of top remote access tools][5] and see [the most powerful IoT companies][6] . | Get daily insights by [signing up for Network World newsletters][7]. ]**
We need the ability to support all of these hybrid environments, with analysis at the process, data-flow, and infrastructure levels. As a matter of fact, there is never just one element to analyze within a network in order to create an effective security posture.
Adequately securing such an environment requires a solution with several key components: appropriate visibility, microsegmentation, and breach detection. Let's learn more about one of these primary elements: zero-trust microsegmentation networking.
There are a variety of microsegmentation vendors, all with competing platforms. We have, for example, SDN-based, container-centric, and network-appliance-based (physical or virtual) offerings, to name just a few.
## What is microsegmentation?
Microsegmentation is the ability to put a wrapper around the access control for each component of an application. The days are gone when we could just impose a block on source/destination/port numbers, or higher up in the stack on protocols such as HTTP or HTTPS.
As communication patterns become more complex, isolating the communication flows between entities by following microsegmentation principles has become a necessity.
## Why is microsegmentation important?
Microsegmentation gives administrators the control to set granular policies in order to protect the application environment. It defines the rules and policies for how an application can communicate within its tier. The policies are granular (a lot more granular than what we had before), restricting communication to only those hosts that are explicitly allowed to communicate.
Ultimately, this reduces the available attack surface and locks down the ability of bad actors to move laterally within the application infrastructure. Why? Because it governs the application's activity at a granular level, thereby improving the entire security posture. Traditional zone-based networking no longer cuts it in today's [digital world][8].
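To make the contrast with zone-based rules concrete, here is a minimal, hypothetical sketch of a default-deny policy check: a flow is allowed only if an explicit rule matches the communicating workloads, and everything else is dropped. The workload labels, services, and rules are illustrative assumptions, not any vendor's API.

```
# Minimal sketch of default-deny microsegmentation (illustrative only).
# Workloads are identified by logical labels, not IP addresses, and a
# flow is allowed only if an explicit rule permits it.

ALLOW_RULES = [
    # (source label, destination label, destination service)
    ("web-frontend", "api-tier", "https"),
    ("api-tier", "orders-db", "postgres"),
]

def flow_allowed(src_label, dst_label, service):
    """Default deny: no matching rule means the flow is dropped."""
    return (src_label, dst_label, service) in ALLOW_RULES

# The web tier may talk to the API tier, but never straight to the
# database, blocking the lateral movement a flat VLAN would permit.
print(flow_allowed("web-frontend", "api-tier", "https"))      # True
print(flow_allowed("web-frontend", "orders-db", "postgres"))  # False
```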
## General networking
Let's start with the basics. We all know that with security, you are only as strong as your weakest link. As a result, enterprises have begun to further segment networks into microsegments. Some call them nanosegments.
But first, let's recap what we actually started with in the initial stage: nothing! We had IP addresses that were used for connectivity, but unfortunately they had no built-in authentication mechanism. Why? Because it wasn't a requirement back then.
Network connectivity based on network routing protocols was primarily used for sharing resources. A printer, 30 years ago, could cost the same as a house, so connectivity and the sharing of resources were important. The authentication of the communication endpoints was not considered significant.
## Broadcast domains
As networks grew in size, virtual LANs (VLANs) were introduced to divide the broadcast domains and improve network performance. A broadcast domain is a logical division of a computer network. All nodes can reach each other by sending a broadcast at the data link layer. When the broadcast domain swells, the network performance takes a hit.
Over time, the VLAN grew into a role as a security tool, but it was never meant for that space. VLANs were used to improve performance, not to isolate resources. The problem with VLANs is that there is no intra-VLAN filtering. They have a very broad level of access and trust. If bad actors gain access to one segment in the zone, they should not be able to try to compromise another device within that zone, but with VLANs this is a strong possibility.
Hence, a VLAN offers the bad actor a pretty large attack surface to play with and to move across laterally without inspection. Lateral movements are really hard to detect with traditional architectures.
Therefore, enterprises were forced to switch to microsegmentation, which further segments networks within the zone. However, virtualization complicates the segmentation process: a virtualized server may have only a single physical network port, yet it supports numerous logical networks where services and applications reside across multiple security zones.
Thus, microsegmentation needs to work at both the physical network layer and within the virtualized networking layer. As you are aware, traffic patterns have changed. The good thing about microsegmentation is that it controls both “north-south” and “east-west” movement of traffic, further shrinking broadcast domains.
## Microsegmentation: a multi-stage process
Implementing microsegmentation is a multi-stage process, and certain prerequisites must be met before implementation. First, you need to fully understand the communication patterns and map the flows and all the application dependencies.
Only once this is done can you enable microsegmentation in a platform-agnostic manner across all environments. Segmenting your network appropriately creates a dark network until the administrator turns on the lights: authentication is performed first, and then access is granted to the communicating entities, operating on zero trust with least-privilege access.
Once the entities are connecting, they need to run through a number of checks in order to be fully connected. Microsegmentation is not a one-off check; rather, it is a continuous process of making sure that both entities are doing what they are supposed to do.
This ensures that everyone is doing what they are entitled to do. You want to reduce the unnecessary cross-talk to an absolute minimum and only allow communication that is a complete necessity.
## How do you implement microsegmentation?
First, you need strong visibility, not just at the traffic-flow level but also at the process and data-context level. Without granular application visibility, it's impossible to map and fully understand what constitutes a normal traffic flow versus an irregular application communication pattern.
Visibility cannot be mapped out manually, as there could be hundreds of workloads. Therefore, an automatic approach must be taken. Manual mapping is more prone to errors and is inefficient. The visibility also needs to be in real-time. A static snapshot of the application architecture, even if it's down to a process level, will not tell you anything about the behaviors that are sanctioned or unsanctioned.
You also need to make sure that you are not under-segmenting, as in the old days. Critically, microsegmentation must manage communication workflows all the way up to Layer 7 of the Open Systems Interconnection (OSI) model. Layer 4 microsegmentation focuses only on the transport layer; if you segment the network only at Layer 4, you leave your attack surface wide, opening the network to compromise.
Segmenting right up to the application layer means you are locking down lateral movement, open ports, and protocols. It enables you to restrict access by source and destination process rather than by source and destination port numbers.
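A toy comparison makes the difference visible. The flow fields, subnet, and process names below are hypothetical illustrations, not a real firewall's rule syntax.

```
# Illustrative contrast between a Layer 4 (port-based) rule and a
# process-aware (Layer 7) rule. All names and addresses are made up.

flow = {
    "src_ip": "10.0.1.7", "dst_port": 5432,
    "src_process": "cryptominer",   # not the app we intended to allow
    "dst_process": "postgres",
}

def l4_allows(flow):
    # Port-based rule: anything on 10.0.1.0/24 may reach TCP/5432.
    return flow["src_ip"].startswith("10.0.1.") and flow["dst_port"] == 5432

def l7_allows(flow):
    # Process-based rule: only the API server may talk to postgres.
    return (flow["src_process"], flow["dst_process"]) == ("api-server", "postgres")

print(l4_allows(flow))  # True:  the port rule cannot tell processes apart
print(l7_allows(flow))  # False: the process rule blocks the impostor
```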
## Security issues with hybrid cloud
Since the [network perimeter][9] has been removed, it has become difficult to bolt on the traditional security tools. Traditionally, we could position a static perimeter around the network infrastructure. However, this is not an available option today, as we have a mixture of containerized applications and, for example, legacy database servers: legacy systems communicating with the containerized world.
Hybrid lets organizations use different types of cloud architectures, spanning on-premises systems and newer technologies such as containers. We are going to have hybrid clouds for some time to come, and that will change the way we think about networking: hybrid forces organizations to rethink their network architectures.
When you attach the microsegmentation policies to the workload itself, the policies go with the workload. It then does not matter whether the entity moves on-premises or to the cloud. If the workload auto-scales up and down, or horizontally, the policy travels with it. And if you go deeper than the workload, into the process level, you can set even more granular controls for microsegmentation.
## Identity
However, this is the point where identity becomes a challenge. If things are scaling and becoming dynamic, you can't tie policies to IP addresses. Rather than using IP addresses as the base for microsegmentation, policies are based on logical (not physical) attributes.
With microsegmentation, workload identity is based on logical attributes, such as multi-factor authentication (MFA), a transport layer security (TLS) certificate, the application service, or a logical label associated with the workload.
These are what are known as logical attributes. Ultimately the policies map to IP addresses, but they are set using the logical attributes, not the physical ones. As we progress in this technological era, the IP address is becoming less relevant; named data networking is one good example.
Other identity methods for microsegmentation are TLS certificates. If the traffic is encrypted with a different TLS certificate or from an invalid source, it automatically gets dropped, even if it comes from the right location. It will get blocked as it does not have the right identity.
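As a rough, conceptual sketch of that idea (not a full TLS handshake; the field names and identities are assumptions for illustration), the check below compares the identity asserted in a peer's certificate against what policy expects and drops the flow on any mismatch, even when the packets arrive from the "right" network location.

```
# Conceptual sketch: identity check based on certificate attributes rather
# than source IP. The dictionaries stand in for fields parsed out of a real
# TLS certificate during the handshake; the names are hypothetical.

def identity_matches(presented_cert, expected):
    """Drop unless the certificate identity matches policy expectations."""
    return (
        presented_cert.get("workload_id") == expected["workload_id"]
        and presented_cert.get("issuer") == expected["issuer"]
    )

expected = {"workload_id": "prod/orders-db", "issuer": "internal-ca"}

good = {"workload_id": "prod/orders-db", "issuer": "internal-ca"}
bad  = {"workload_id": "prod/orders-db", "issuer": "stolen-ca"}

print(identity_matches(good, expected))  # True:  flow proceeds
print(identity_matches(bad, expected))   # False: dropped despite arriving
                                         # from the "right" location
```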
You can even extend that further and look inside the actual payload. If an entity is permitted to do a hypertext transfer protocol (HTTP) POST to a record and it tries to perform any other operation, it will get blocked.
## Policy enforcement
Practically, all of these policies can be implemented and enforced at different places throughout the network. However, if you enforce at only one place, that point in the network can become compromised and become an entry door for the bad actor. If you instead enforce at, say, 10 different network points, then even if 2 of them are subverted, the other 8 will still protect you.
Zero-trust microsegmentation ensures that you can enforce at different points throughout the network, and with different mechanics.
**This article is published as part of the IDG Contributor Network.[Want to Join?][10]**
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384748/zero-trust-microsegmentation-networking.html#tk.rss_all
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/hive-structured_windows_architecture_connections_connectivity_network_lincoln_park_pavilion_chicago_by_aaron_burson_cc0_via_unsplash_1200x800-100765880-large.jpg
[2]: https://youtu.be/AnMQH_noNDo
[3]: https://network-insight.net/2018/10/zero-trust-networking-ztn-want-ghosted/
[4]: https://network-insight.net/2018/09/embrace-zero-trust-networking/
[5]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb
[6]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
[7]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[8]: https://network-insight.net/2017/10/internet-things-iot-dissolving-cloud/
[9]: https://network-insight.net/2018/09/software-defined-perimeter-zero-trust/
[10]: /contributor-network/signup.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -100,7 +100,7 @@ via: https://opensource.com/article/19/4/log-analysis-tools
作者:[Sam Bocetta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel unveils an epic response to AMDs server push)
[#]: via: (https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Intel unveils an epic response to AMD's server push
======
### Intel introduced more than 50 new Xeon Scalable Processors for servers that cover a variety of workloads.
![Intel][1]
Intel on Tuesday introduced its second-generation Xeon Scalable Processors for servers, developed under the codename Cascade Lake, and it's clear AMD has lit a fire under a once complacent company.
These new Xeon SP processors max out at 28 cores and 56 threads, a bit shy of AMD's Epyc server processors with 32 cores and 64 threads, but independent benchmarks are still to come, and they may show Intel holding a lead in single-core performance.
And for absolute overkill, there is the Xeon SP Platinum 9200 Series, which sports 56 cores and 112 threads. It will also require up to 400W of power, more than twice what the high-end Xeons usually consume.
**[ Now read:[What is quantum computing (and why enterprises should care)][2] ]**
The new processors were unveiled at a big event at Intel's headquarters in Santa Clara, California, and live-streamed on the web. [Newly minted CEO][3] Bob Swan kicked off the event, saying the new processors were the “first truly data-centric portfolio for our customers.”
“For the last several years, we have embarked on a journey to transform from a PC-centric company to a data-centric computing company and build the silicon processors with our partners to help our customers prosper and grow in an increasingly data-centric world,” he added.
He also said the move to a data-centric world isn't just about CPUs but a suite of accelerant technologies, including the [Agilex FPGA processors][4], Optane memory, and more.
This launch is the largest Xeon launch in the company's history, with more than 50 processor designs across the Xeon 8200 and 9200 lines. While a lineup that big can lead to confusion, many of these designs are specific to certain workloads rather than general-purpose processors.
**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][5] ]**
Cascade Lake chips are the replacement for the previous Skylake platform, and the mainstream Cascade Lake chips have the same architecture as the Purley motherboard used by Skylake. Like the current Xeon Scalable processors, they have up to 28 cores with up to 38.5 MB of L3 cache, but speeds and feeds have been bumped up.
The Cascade Lake generation supports the new UPI (Ultra Path Interface) high-speed interconnect, up to six memory channels, AVX-512 support, and up to 48 PCIe lanes. Memory capacity has been doubled, from 768GB to 1.5TB of memory per socket. They work in the same socket as Purley motherboards and are built on a 14nm manufacturing process.
Some of the new Xeons, however, can access up to 4.5TB of memory per processor: 1.5TB of memory and 3TB of Optane memory, the new persistent memory that sits between DRAM and NAND flash memory and acts as a massive cache for both.
## Built-in fixes for Meltdown and Spectre vulnerabilities
Most important, though, is that these new Xeons have built-in fixes for the Meltdown and Spectre vulnerabilities. There are existing fixes for the exploits, but they have the effect of reducing performance, which varies based on workload. Intel showed a slide at the event that shows the company is using a combination of firmware and software mitigation.
New features also include Intel Deep Learning Boost (DL Boost), a technology developed to accelerate vector computing that Intel said makes this the first CPU with built-in inference acceleration for AI workloads. It works with the AVX-512 extension, which should make it ideal for machine learning scenarios.
Most of the new Xeons are available now, except for the 9200 Platinum, which is coming in the next few months. Many Intel partners (Dell, Cray, Cisco, Supermicro) all have new products, with Supermicro launching more than 100 new products built around Cascade Lake.
## Intel also rolls out Xeon D-1600 series processors
In addition to its hot rod Xeons, Intel also rolled out the Xeon D-1600 series processors, a low power variant based on a completely different architecture. Xeon D-1600 series processors are designed for space and/or power constrained environments, such as edge network devices and base stations.
Along with the new Xeons and FPGA chips, Intel also announced the Intel Ethernet 800 series adapter, which supports 25, 50 and 100 Gigabit transfer speeds.
Thank you, AMD. This is what competition looks like.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/intel-xeon-family-1-100792811-large.jpg
[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html
[3]: https://www.networkworld.com/article/3336921/intel-promotes-swan-to-ceo-bumps-off-itanium-and-eyes-mellanox.html
[4]: https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top Ten Reasons to Think Outside the Router #1: Its Time for a Router Refresh)
[#]: via: (https://www.networkworld.com/article/3386116/top-ten-reasons-to-think-outside-the-router-1-it-s-time-for-a-router-refresh.html#tk.rss_all)
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
Top Ten Reasons to Think Outside the Router #1: It's Time for a Router Refresh
======
![istock][1]
We're now at the end of our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as [Silver Peak][2] counts down the _Top Ten Reasons to Think Outside the Router._ Click for the [#2][3], [#3][4], [#4][5], [#5][6], [#6][7], [#7][8], [#8][9], [#9][10] and [#10][11] reasons to retire traditional branch routers.
_**The #1 reason it's time to retire conventional routers at the branch: your branch routers are coming due for a refresh, the perfect time to evaluate new options.**_
Your WAN architecture is due for a branch router refresh! You're under immense pressure to advance your organization's digital transformation initiatives and deliver a high quality of experience to your users and customers. Your applications, at least SaaS apps, are all cloud-based. You know you need to move more quickly to keep pace with changing business requirements and to realize the transformational promise of the cloud. And you're dealing with shifting traffic patterns and an insatiable appetite for more bandwidth at branch sites to support your users and applications. Finally, you know your IT budget for networking isn't going to increase.
_So, what's next?_ You really only have three options when it comes to refreshing your WAN. You can continue to try to stretch your conventional router-centric model. You can choose a basic [SD-WAN][12] model that may or may not be good enough. Or you can take a new approach and deploy a business-driven SD-WAN edge platform.
### **The pitfalls of a router-centric model**
![][13]
The router-centric approach worked well when enterprise applications were hosted in the data center, before the advent of the cloud. All traffic was routed directly from branch offices to the data center. With the emergence of the cloud, businesses were forced to conform to the constraints of the network when deploying new applications or making network changes. This is a bottom-up, device-centric approach in which the network becomes a bottleneck to the business.
A router-centric approach requires manual device-by-device configuration that results in endless hours of manual programming, making it extremely difficult for network administrators to scale without major challenges in configuration, outages, and troubleshooting. Any change that arises when deploying a new application or modifying a QoS or security policy once again requires manually programming every router at every branch across the network. Re-programming is time-consuming and requires a complex, cumbersome CLI, further adding to the inefficiencies of the model. In short, the router-centric WAN has hit the wall.
### **Basic SD-WAN, a step in the right direction**
![][14]
In this model, businesses realize the benefit of foundational features, but this model falls short of the goal of a fully automated, business-driven network. A basic SD-WAN approach is unable to provide what the business really needs, including the ability to deliver the best Quality of Experience for users.
Some of the basic SD-WAN features include the ability to use multiple forms of transport, path selection, centralized management, zero-touch provisioning and encrypted VPN overlays. However, a basic SD-WAN lacks in many areas:
* Limited end-to-end orchestration of WAN edge network functions
  * Rudimentary path selection, with traffic steering limited to pre-defined rules (for the contrast with measurement-driven steering, see the sketch after this list)
* Long fail-over times in response to WAN transport outages
* Inability to use links when they experience brownouts due to link congestion or packet loss
* Fixed application definitions and manually scripted ACLs to control traffic steering across the internet
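For contrast with those pre-defined rules, here is a minimal, vendor-neutral sketch of measurement-driven path selection: each flow is steered to whichever transport currently meets its application's latency and loss needs. The link metrics, apps, and thresholds are made-up illustrations, not Silver Peak's algorithm.

```
# Minimal sketch of measurement-driven WAN path selection.
# All numbers are illustrative; this is not any vendor's actual algorithm.

links = {
    # transport: (latency in ms, packet loss in %) as currently measured
    "mpls":      (30, 0.0),
    "broadband": (18, 0.3),
    "lte":       (55, 1.5),
}

app_requirements = {
    # app: (max latency in ms, max loss in %)
    "voip":   (50, 0.5),
    "backup": (200, 2.0),
}

def pick_path(app):
    max_latency, max_loss = app_requirements[app]
    eligible = [
        (latency, name)
        for name, (latency, loss) in links.items()
        if latency <= max_latency and loss <= max_loss
    ]
    # Prefer the lowest-latency link that satisfies the app's intent;
    # fall back to MPLS if nothing currently qualifies.
    return min(eligible)[1] if eligible else "mpls"

print(pick_path("voip"))    # broadband: meets thresholds at lower latency
print(pick_path("backup"))  # broadband: all links qualify, latency wins
```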
### **The solution: shift to a business-first networking model**
![][15]
In this model, the network enables the business. The WAN is transformed into a business accelerant that is fully automated and continuous, giving every application the resources it truly needs while delivering 10x the bandwidth for the same budget, ultimately achieving the highest quality of experience for users and IT alike. With a business-first networking model, the network functions (SD-WAN, firewall, segmentation, routing, WAN optimization, and application visibility and control) are unified in a single platform and are centrally orchestrated and managed. Top-down business intent is the driver, enabling businesses to unlock the full transformational promise of the cloud.
The business-driven [Silver Peak® EdgeConnect™ SD-WAN][16] edge platform was built for the cloud, enabling enterprises to liberate their applications from the constraints of existing WAN approaches. EdgeConnect offers the following advanced capabilities:
1\. Automates traffic steering and security policy enforcement based on business intent instead of TCP/IP addresses, delivering the highest Quality of Experience for users
2\. Actively embraces broadband to increase application performance and availability while lowering costs
3\. Connects branch users securely and directly to SaaS and IaaS cloud services
4\. Increases operational efficiency while increasing business agility and time-to-market via centralized orchestration
Silver Peak has more than 1,000 enterprise customer deployments across a range of vertical industries. Bentley Systems, [Nuffield Health][17] and [Solis Mammography][18] have all realized tangible business outcomes from their EdgeConnect deployments.
![][19]
Learn why the time is now to [think outside the router][20]!
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386116/top-ten-reasons-to-think-outside-the-router-1-it-s-time-for-a-router-refresh.html#tk.rss_all
作者:[Rami Rammaha][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Rami-Rammaha/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-478729482-100792542-large.jpg
[2]: https://www.silver-peak.com/why-silver-peak
[3]: http://blog.silver-peak.com/think-outside-the-router-reason-2-simplify-and-consolidate-the-wan-edge
[4]: http://blog.silver-peak.com/think-outside-the-router-reason-3-mpls-contract-renewal
[5]: http://blog.silver-peak.com/top-ten-reasons-to-think-outside-the-router-4-broadband-is-used-only-for-failover
[6]: http://blog.silver-peak.com/think-outside-the-router-reason-5-manual-cli-based-configuration-and-management
[7]: http://blog.silver-peak.com/https-blog-silver-peak-com-think-outside-the-router-reason-6
[8]: http://blog.silver-peak.com/think-outside-the-router-reason-7-exorbitant-router-support-and-maintenance-costs
[9]: http://blog.silver-peak.com/think-outside-the-router-reason-8-garbled-voip-pixelated-video
[10]: http://blog.silver-peak.com/think-outside-router-reason-9-sub-par-saas-performance
[11]: http://blog.silver-peak.com/think-outside-router-reason-10-its-getting-cloudy
[12]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[13]: https://images.idgesg.net/images/article/2019/04/1_router-centric-vs-business-first-100792538-medium.jpg
[14]: https://images.idgesg.net/images/article/2019/04/2_basic-sd-wan-vs-business-first-100792539-medium.jpg
[15]: https://images.idgesg.net/images/article/2019/04/3_bus-first-networking-model-100792540-large.jpg
[16]: https://www.silver-peak.com/products/unity-edge-connect
[17]: https://www.silver-peak.com/resource-center/nuffield-health-deploys-uk-wide-sd-wan-silver-peak
[18]: https://www.silver-peak.com/resource-center/national-leader-mammography-services-accelerates-access-life-critical-scans
[19]: https://images.idgesg.net/images/article/2019/04/4_real-world-business-outcomes-100792541-large.jpg
[20]: https://www.silver-peak.com/think-outside-router

View File

@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Edge Computing is Key to Meeting Digital Transformation Demands and Partnerships Can Help Deliver Them)
[#]: via: (https://www.networkworld.com/article/3387140/edge-computing-is-key-to-meeting-digital-transformation-demands-and-partnerships-can-help-deliver-t.html#tk.rss_all)
[#]: author: (Rob McKernan https://www.networkworld.com/author/Rob-McKernan/)
Edge Computing is Key to Meeting Digital Transformation Demands and Partnerships Can Help Deliver Them
======
### Organizations in virtually every vertical industry are undergoing a digital transformation in an attempt to take advantage of edge computing technology
![Getty Images][1]
Organizations in virtually every vertical industry are undergoing a digital transformation in an attempt to take advantage of [edge computing][2] technology to make their businesses more efficient, innovative and profitable. In the process, they're coming face to face with challenges ranging from time to market to reliability of IT infrastructure.
It's a complex problem, especially when you consider the scope of what digital transformation entails. “Digital transformation is not simply a list of IT projects, it involves completely rethinking how an organization uses technology to pursue new revenue streams, products, services, and business models,” as the [research firm IDC says][3].
Companies will be spending more than $650 billion per year on digital transformation efforts by 2024, a CAGR of more than 18.5% from 2018, according to the research firm [Market Research Engine][4].
The drivers behind all that spending include Internet of Things (IoT) technology, which involves collecting data from machines and sensors covering every aspect of the organization. That is contributing to Big Data, the treasure trove of data that companies mine to find the keys to efficiency, opportunity and more. Artificial intelligence and machine learning are crucial to that effort, helping companies make sense of the mountains of data they're creating and consuming, and find opportunities.
**Requirements for Edge Computing**
All of these trends are creating the need for more and more compute power and data storage. Much of it needs to be close to the source of the data and to the employees who are working with it. In other words, it's driving the need for companies to build edge data centers or edge computing sites.
Physically, these edge computing sites bear little resemblance to large, centralized data centers, but they have many of the same requirements in terms of performance, reliability, efficiency and security. Given they are typically in locations with few if any IT personnel, the data centers must have a high degree of automation and remote management capabilities. And to meet business requirements, they must be built quickly.
**Answering the Call at the Edge**
These are complex requirements, but if companies are to meet time-to-market goals and deal with the lack of IT personnel at the edge, they demand simple solutions.
One solution is integration. We're seeing this already in the IT space, with vendors delivering hyper-converged infrastructure that combines servers, storage, networking and software, tightly integrated and delivered in a single enclosure. This saves IT groups valuable time in procuring and configuring equipment and makes the systems far easier to manage over the long term.
Now we're seeing the same strategy applied to edge data centers. Prefabricated, modular data centers are an ideal solution for delivering edge data center capacity quickly and reliably. All the required infrastructure (power, cooling, racks, UPSs) can be configured and installed in a factory and delivered as a single, modular unit to the data center site (or multiple modules, depending on requirements).
Given they're built in a factory under controlled conditions, modular data centers are more reliable over the long haul. They can be configured with management software built in, enabling remote management capabilities and a high degree of automation. And they can be delivered in weeks or months, not years, and in whatever size is required, including small “micro” data centers.
Few companies, however, have all the components required to deliver a complete, functional data center, not to mention the expertise required to install and configure it. So, it takes effective partnerships to deliver complete edge data center solutions.
**Tech Data Partnership Delivers at the Edge**
APC by Schneider Electric has a long history of partnering to deliver complete solutions that address customer needs. Of the thousands of partnerships it has established over the years, the [25-year partnership][5] with [Tech Data][6] is particularly relevant for the digital transformation era.
Tech Data is a $36.8 billion, Fortune 100 company that has established itself as the world's leading end-to-end IT distributor. Power and physical infrastructure specialists from Tech Data team up with their counterparts from APC to deliver innovative solutions, including modular and [micro data centers][7]. Many of these solutions are pre-certified by major alliance partners, including IBM, HPE, Cisco, Nutanix, Dell EMC and others.
To learn more, [access the full story][8] that explains how the Tech Data and APC partnership helps deliver [Certainty in a Connected World][9] and effective edge computing solutions that meet today's time-to-market requirements.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387140/edge-computing-is-key-to-meeting-digital-transformation-demands-and-partnerships-can-help-deliver-t.html#tk.rss_all
作者:[Rob McKernan][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Rob-McKernan/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/gettyimages-494323751-942x445-100792905-large.jpg
[2]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
[3]: https://www.idc.com/getdoc.jsp?containerId=US43985717
[4]: https://www.marketresearchengine.com/digital-transformation-market
[5]: https://www.apc.com/us/en/partners-alliances/partners/tech-data-and-apc-partnership-drives-edge-computing-success/full-resource.jsp
[6]: https://www.techdata.com/
[7]: https://www.apc.com/us/en/solutions/business-solutions/micro-data-centers.jsp
[8]: https://www.apc.com/us/en/partners-alliances/partners/tech-data-and-apc-partnership-drives-edge-computing-success/index.jsp
[9]: https://www.apc.com/us/en/who-we-are/certainty-in-a-connected-world.jsp

View File

@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel formally launches Optane for data center memory caching)
[#]: via: (https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Intel formally launches Optane for data center memory caching
======
### Intel formally launched the Optane persistent memory product line, which includes 3D Xpoint memory technology. The Intel-only solution is meant to sit between DRAM and NAND and to speed up performance.
![Intel][1]
As part of its [massive data center event][2] on Tuesday, Intel formally launched the Optane persistent memory product line. It had been out for a while, but the previous generation of Xeon server processors could not fully utilize it. The new Xeon 8200 and 9200 lines take full advantage of it.
And since Optane is an Intel product (co-developed with Micron), that means AMD and Arm server processors are out of luck.
As I have [stated in the past][3], Optane DC Persistent Memory uses 3D Xpoint memory technology that Intel developed with Micron Technology. 3D Xpoint is a non-volatile memory type that is much faster than solid-state drives (SSD), almost at the speed of DRAM, but it has the persistence of NAND flash.
**[ Read also:[Why NVMe? Users weigh benefits of NVMe-accelerated flash storage][4] and [IDCs top 10 data center predictions][5] | Get regularly scheduled insights [Sign up for Network World newsletters][6] ]**
The first 3D Xpoint products were SSDs called Intel's [“ruler,”][7] because they were designed in a long, thin format similar to the shape of a ruler. They were designed that way to fit in 1U server carriages. As part of Tuesday's announcement, Intel introduced the new Intel SSD D5-P4326 'Ruler' SSD, using quad-level cell (QLC) 3D NAND memory, with up to 1PB of storage in a 1U design.
Optane DC Persistent Memory will be available in DIMM capacities of 128GB up to 512GB initially. That's two to four times what you can get with DRAM, said Navin Shenoy, executive vice president and general manager of Intel's Data Center Group, who keynoted the event.
“We expect system capacity in a server system to scale to 4.5 terabytes per socket or 36 TB in an 8-socket system. That's three times larger than what we were able to do with the first-generation of Xeon Scalable,” he said.
## Intel Optane memory uses and speed
Optane runs in two different modes: Memory Mode and App Direct Mode. Memory mode is what I have been describing to you, where Optane memory exists “above” the DRAM and acts as a cache. In App Direct mode, the DRAM and Optane DC Persistent Memory are pooled together to maximize the total capacity. Not every workload is ideal for this kind of configuration, so it should be used in applications that are not latency-sensitive. The primary use case for Optane, as Intel is promoting it, is Memory Mode.
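As a toy model of that caching behavior (a sketch of the general fast-tier-in-front-of-slow-tier technique, not Intel's actual Memory Mode logic; the capacities and keys are made up), the class below serves reads from a small fast tier and promotes data from the slow tier on a miss.

```
from collections import OrderedDict

# Toy model of a tiered read path: a small, fast tier caches a large, slow
# one, as in the Memory Mode caching idea described above. This sketches
# the general technique only, not Intel's implementation.

class TieredStore:
    def __init__(self, fast_capacity):
        self.fast = OrderedDict()          # small, fast tier (LRU order)
        self.fast_capacity = fast_capacity
        self.slow = {}                     # large, slow tier
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.fast:               # fast-tier hit
            self.fast.move_to_end(key)
            self.hits += 1
            return self.fast[key]
        self.misses += 1                   # miss: fetch from the slow tier
        value = self.slow[key]
        self.fast[key] = value             # promote into the fast tier
        if len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)  # evict least recently used
        return value

store = TieredStore(fast_capacity=2)
store.slow = {"a": 1, "b": 2, "c": 3}
for key in ["a", "b", "a", "c", "a"]:
    store.read(key)
print(store.hits, store.misses)  # 2 hits, 3 misses
```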
**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][8] ]**
When 3D Xpoint was initially announced a few years back, Intel claimed it was 1,000 times faster than NAND, with 1,000 times the endurance, and 10 times the density potential of DRAM. Well, that was a little exaggerated, but it does have some intriguing elements.
Optane memory, when used in 256B contiguous four-cacheline accesses, can achieve read speeds of 8.3GB/sec and write speeds of 3.0GB/sec. Compare that with the read/write speeds of 500 or so MB/sec for a SATA SSD, and you can see the performance gain. Optane, remember, is feeding memory, so it caches frequently accessed SSD content.
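In rough numbers, using the figures quoted above, that works out to a large read-throughput multiple over SATA flash:

```
# Quick arithmetic on the throughput figures quoted above.
optane_read_gb_s = 8.3   # Optane read speed, per Intel's figures
sata_read_gb_s = 0.5     # "500 or so MB/sec" for a SATA SSD

print(f"read speedup over SATA: ~{optane_read_gb_s / sata_read_gb_s:.0f}x")
# read speedup over SATA: ~17x
```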
This is the key takeaway of Optane DC. It will keep very large data sets very close to memory, and hence the CPU, with low latency, while at the same time minimizing the need to access the slower storage subsystem, whether that's SSD or HDD. It now offers the possibility of putting multiple terabytes of data very close to the CPU for much faster access.
## One challenge with Optane memory
The only real challenge is that Optane goes into DIMM slots, which is where memory goes. Some motherboards come with as many as 16 DIMM slots per CPU socket, but that's still board real estate that the customer and OEM provider will need to balance: Optane vs. memory. There are some Optane drives in PCI Express format, which alleviate the memory crowding on the motherboard.
3D Xpoint also offers higher endurance than traditional NAND flash memory due to the way it writes data. Intel promises a five-year warranty with its Optane, while a lot of SSDs offer only three years.
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/intel-optane-persistent-memory-100760427-large.jpg
[2]: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html
[3]: https://www.networkworld.com/article/3279271/intel-launches-optane-the-go-between-for-memory-and-storage.html
[4]: https://www.networkworld.com/article/3290421/why-nvme-users-weigh-benefits-of-nvme-accelerated-flash-storage.html
[5]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb
[6]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[7]: https://www.theregister.co.uk/2018/02/02/ruler_and_miniruler_ssd_formats_look_to_banish_diskstyle_drives/
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Running LEDs in reverse could cool computers)
[#]: via: (https://www.networkworld.com/article/3386876/running-leds-in-reverse-could-cool-computers.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Running LEDs in reverse could cool computers
======
### The miniaturization of electronics is reaching its limits in part because of heat management. Many are now aggressively trying to solve the problem. A kind of reverse-running LED is one avenue being explored.
![monsitj / Getty Images][1]
The quest to find more efficient methods for cooling computers is almost as high on scientists' agendas as the desire to discover better battery chemistries.
More cooling is crucial for reducing costs. It would also allow more powerful processing to take place in smaller spaces, where limited processing should be crunching numbers instead of making wasteful heat. It would stop heat-caused breakdowns, thereby extending component lifetimes, and it would promote eco-friendly data centers: less heat means less impact on the environment.
Removing heat from microprocessors is one angle scientists have been exploring, and they think they have come up with a simple, but unusual and counter-intuitive solution. They say that running a variant of a Light Emitting Diode (LED) with its electrodes reversed forces the component to act as if it were at an unusually low temperature. Placing it next to warmer electronics, then, with a nanoscale gap introduced, causes the LED to suck out the heat.
**[ Read also:[IDCs top 10 data center predictions][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
“Once the LED is reverse biased, it began acting as a very low temperature object, absorbing photons,” says Edgar Meyhofer, professor of mechanical engineering at University of Michigan, in a [press release][4] announcing the breakthrough. “At the same time, the gap prevents heat from traveling back, resulting in a cooling effect.”
The researchers say the LED and the adjacent electrical device (in this case a calorimeter, usually used for measuring heat energy) have to be extremely close. They say they've been able to demonstrate cooling of six watts per square meter and that, theoretically, the effect could be scaled up to around 1,000 watts per square meter, which is about the power of sunshine on the earth's surface.
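To put those densities in context, here is a quick back-of-the-envelope scaling; the 1 cm² die area is an assumption for illustration, not a figure from the research.

```
# Back-of-the-envelope scaling of the cooling densities quoted above.
# The 1 cm^2 die area is an illustrative assumption, not from the paper.

die_area_m2 = 1e-4           # a typical ~1 cm^2 processor die, in m^2

demonstrated_w_per_m2 = 6    # cooling density demonstrated in the lab
theoretical_w_per_m2 = 1000  # theoretical ceiling, roughly sunshine

print(f"demonstrated: {demonstrated_w_per_m2 * die_area_m2 * 1e3:.1f} mW per die")
print(f"theoretical:  {theoretical_w_per_m2 * die_area_m2 * 1e3:.1f} mW per die")
# demonstrated: 0.6 mW per die
# theoretical:  100.0 mW per die
```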
Internet of things (IoT) devices and smartphones could be among those electronics that would ultimately benefit from the LED modification. Both kinds of devices require increasing computing power to be squashed into smaller spaces.
“Removing the heat from the microprocessor is beginning to limit how much power can be squeezed into a given space,” the University of Michigan announcement says.
### Materials Science and cooling computers
[I've written before about new forms of computer cooling][5]. Exotic materials, derived from Materials Science, are among the ideas being explored. Sodium bismuthide (Na3Bi) could be used in transistor design, the U.S. Department of Energy's Lawrence Berkeley National Laboratory says. The new substance carries a charge and, importantly, is tunable; however, it doesn't need to be chilled as superconductors currently do.
In fact, that's a problem with superconductors: they unfortunately need more cooling than most electronics, because the technology sheds its electrical resistance only under extreme cooling.
Separately, [researchers in Germany at the University of Konstanz][6] say they soon will have superconductor-driven computers without waste heat. They plan to use electron spin — a new physical dimension in electrons that could create efficiency gains. The method “significantly reduces the energy consumption of computing centers,” the university said in a press release last year.
Another way to reduce heat could be [to replace traditional heatsinks with spirals and mazes][7] embedded on microprocessors. Minuscule channels printed on the chip itself could provide paths for coolant to travel, scientists from Binghamton University say in separate work.
“The miniaturization of the semiconductor technology is approaching its physical limits,” the University of Konstanz says. Heat management is very much on scientists' agendas now. It's “one of the big challenges in miniaturization."
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386876/running-leds-in-reverse-could-cool-computers.html#tk.rss_all
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/big_data_center_server_racks_storage_binary_analytics_by_monsitj_gettyimages-944444446_3x2-100787357-large.jpg
[2]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb
[3]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[4]: https://news.umich.edu/running-an-led-in-reverse-could-cool-future-computers/
[5]: https://www.networkworld.com/article/3326831/computers-could-soon-run-cold-no-heat-generated.html
[6]: https://www.uni-konstanz.de/en/university/news-and-media/current-announcements/news/news-in-detail/Supercomputer-ohne-Abwaerme/
[7]: https://www.networkworld.com/article/3322956/chip-cooling-breakthrough-will-reduce-data-center-power-costs.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why blockchain (might be) coming to an IoT implementation near you)
[#]: via: (https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
Why blockchain (might be) coming to an IoT implementation near you
======
![MF3D / Getty Images][1]
Companies have found that IoT partners well with a host of other popular enterprise computing technologies of late, and blockchain, the innovative system of distributed trust most famous for underpinning cryptocurrencies, is no exception. Yet while the two phenomena can be complementary in certain circumstances, those expecting an explosion of blockchain-enabled IoT technologies probably shouldn't hold their breath.
Blockchain technology can be counter-intuitive to understand at a basic level, but it's probably best thought of as a sort of distributed ledger keeping track of various transactions. Every “block” on the chain contains transactional records or other data to be secured against tampering, and is linked to the previous one by a cryptographic hash, which means that any tampering with a block will invalidate that connection. The nodes, which can be largely anything with a CPU in them, communicate via a decentralized, peer-to-peer network to share data and ensure the validity of the data in the chain.
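As a bare-bones illustration of that hash-linking (a toy sketch only; there is no mining, consensus, or peer-to-peer networking here), each block commits to the hash of its predecessor, so altering any earlier block breaks every link after it.

```
import hashlib, json

# Toy hash-linked chain: each block commits to its predecessor's hash, so
# tampering with any earlier block invalidates every later link. This
# sketches only the linking idea, with made-up sensor readings as data.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash}

def valid(chain):
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block("reading-001: temp=4C", "0" * 64)
b1 = make_block("reading-002: temp=5C", block_hash(genesis))
b2 = make_block("reading-003: temp=4C", block_hash(b1))
chain = [genesis, b1, b2]

print(valid(chain))                  # True
b1["data"] = "reading-002: temp=9C"  # attempted tampering
print(valid(chain))                  # False: b2 no longer links to b1
```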
**[ Also see[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3].]**
The system works because all the blocks have to agree with each other on the specifics of the data that theyre safeguarding, according to Nir Kshetri, a professor of management at the University of North Carolina Greensboro. If someone attempts to alter a previous transaction on a given node, the rest of the data on the network pushes back. “The old record of the data is still there,” said Kshetri.
That's a powerful security technique: absent a bad actor successfully controlling a majority of the nodes on a given blockchain (the [famous “51% attack”][4]), the data protected by that blockchain can't be falsified or otherwise fiddled with. So it should be no surprise that the use of blockchain is an attractive option to companies in some corners of the IoT world.
Part of the reason for that, over and above the bare fact of blockchain's ability to securely distribute trusted information across a network, is its place in the technology stack, according to Jay Fallah, CTO and co-founder of NXMLabs, an IoT security startup.
“Blockchain stands at a very interesting intersection. Computing has accelerated in the last 15 years [in terms of] storage, CPU, etc., but networking hasn't changed that much until recently,” he said. “[Blockchain]'s not a network technology, it's not a data technology, it's both.”
### Blockchain and IoT
Where blockchain makes sense as a part of the IoT world depends on whom you speak to and what they are selling, but the closest thing to a general summation may have come from Allison Clift-Jennings, CEO of enterprise blockchain vendor Filament.
“Anywhere where you've got people who are kind of wanting to trust each other, and have very archaic ways of doing it, that is usually a good place to start with use cases,” she said.
One example, culled directly from Filament's own customer base, is used-car sales. Filament's working with “a major Detroit automaker” to create a trusted vehicle-history platform, based on a device that plugs into the diagnostic port of a used car, pulls information from there, and writes that data to a blockchain. Just like that, there's an immutable record of a used car's history, including whether its airbags have ever been deployed, whether it's been flooded, and so on. No unscrupulous used-car lot or duplicitous former owner could change the data, and even unplugging the device would mean a suspicious blank period in the records.
Most of present-day blockchain IoT implementation is about trust and the validation of data, according to Elvira Wallis, senior vice president and global head of IoT at SAP.
“Most of the use cases that we have come across are in the realm of tracking and tracing items,” she said, giving the example of a farm-to-fork tracking system for high-end foodstuffs, using blockchain nodes mounted on crates and trucks, allowing for the creation of an un-fudgeable record of an item's passage through transport infrastructure. (e.g., how long has this steak been refrigerated at such-and-such a temperature, how far has it traveled today, and so on.)
### **Is using blockchain with IoT a good idea?**
Different vendors sell different blockchain-based products for different use cases, which use different implementations of blockchain technology, some of which don't bear much resemblance to the classic, linear, mined-transaction blockchain used in cryptocurrency.
That means it's a capability that you'd buy from a vendor for a specific use case at this point. Few client organizations have the in-house expertise to implement a blockchain security system, according to 451 Research senior analyst Csilla Zsigri.
The idea with any intelligent application of blockchain technology is to play to its strengths, she said, creating a trusted platform for critical information.
“Thats where I see it really adding value, just in adding a layer of trust and validation,” said Zsigri.
Yet while the basic idea of blockchain-enabled IoT applications is fairly well understood, it's not applicable to every IoT use case, experts agree. Applying blockchain to non-transactional systems (although there are exceptions, including NXM Labs' blockchain-based configuration product for IoT devices) isn't usually the right move.
If there isn't a need to share data between two different parties, as opposed to simply moving data from sensor to back end, blockchain doesn't generally make sense, since it doesn't really do anything for the key value-add present in most IoT implementations today: data analysis.
“We're still in kind of the early dial-up era of blockchain today,” said Clift-Jennings. “It's slower than a typical database, it often isn't even readable, it often doesn't have a query engine tied to it. You don't really get privacy, by nature of it.”
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html#tk.rss_all
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/chains_binary_data_blockchain_security_by_mf3d_gettyimages-941175690_2400x1600-100788434-large.jpg
[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[4]: https://bitcoinist.com/51-percent-attack-hackers-steals-18-million-bitcoin-gold-btg-tokens/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (File sharing with Git)
[#]: via: (https://opensource.com/article/19/4/file-sharing-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
File sharing with Git
======
SparkleShare is an open source, Git-based, Dropbox-style file sharing
application. Learn more in our series about little-known uses of Git.
![][1]
[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at SparkleShare, which uses Git as the backbone for file sharing.
### Git for file sharing
One of the nice things about Git is that it's inherently distributed. It's built to share. Even if you're sharing a repository just with other computers on your own network, Git brings transparency to the act of getting files from a shared location.
As interfaces go, Git is pretty simple. It varies from user to user, but the common incantation when sitting down to get some work done is just **git pull** or maybe the slightly more complex **git pull && git checkout -b my-branch**. Still, for some people, the idea of _entering a command_ into their computer at all is confusing or bothersome. Computers are meant to make life easy, and computers are good at repetitious tasks, and so there are easier ways to share files with Git.
### SparkleShare
The [SparkleShare][3] project is a cross-platform, open source, Dropbox-style file sharing application based on Git. It automates all Git commands, triggering the add, commit, push, and pull processes with the simple act of dragging-and-dropping a file into a specially designated SparkleShare directory. Because it is based on Git, you get fast, diff-based pushes and pulls, and you inherit all the benefits of Git version control and backend infrastructure (like Git hooks). It can be entirely self-hosted, or you can use it with Git hosting services like [GitLab][4], GitHub, Bitbucket, and others. Furthermore, because it's basically just a frontend to Git, you can access your SparkleShare files on devices that may not have a SparkleShare client but do have Git clients.
Just as you get all the benefits of Git, you also get all the usual Git restrictions: It's impractical to use SparkleShare to store hundreds of photos and music and videos because Git is designed and optimized for text. Git certainly has the capability to store large files of binary data but it is designed to track history, so once a file is added to it, it's nearly impossible to completely remove it. This somewhat limits the usefulness of SparkleShare for some people, but it makes it ideal for many workflows, including [calendaring][5].
#### Installing SparkleShare
SparkleShare is cross-platform, with installers for Windows and Mac available from its [website][6]. For Linux, there's a [Flatpak][7] in your software installer, or you can run these commands in a terminal:
```
$ sudo flatpak remote-add flathub https://flathub.org/repo/flathub.flatpakrepo
$ sudo flatpak install flathub org.sparkleshare.SparkleShare
```
### Creating a Git repository
SparkleShare isn't software-as-a-service (SaaS). You run SparkleShare on your computer to communicate with a Git repository—SparkleShare doesn't store your data. If you don't have a Git repository to sync a folder with yet, you must create one before launching SparkleShare. You have three options: hosted Git, self-hosted Git, or self-hosted SparkleShare.
#### Git hosting
SparkleShare can use any Git repository you can access for storage, so if you have or create an account with GitLab or any other hosting service, it can become the backend for your SparkleShare. For example, the open source [Notabug.org][8] service is a Git hosting service like GitHub and GitLab, but unique enough to prove SparkleShare's flexibility. Creating a new repository differs from host to host depending on the user interface, but all of the major ones follow the same general model.
First, locate the button in your hosting service to create a new project or repository and click on it to begin. Then step through the repository creation process, providing a name for your repository, privacy level (repositories often default to being public), and whether or not to initialize the repository with a README file. Whether you need a README or not, enable an initial README file. Starting a repository with a file isn't strictly necessary, but it forces the Git host to instantiate a **master** branch in the repository, which helps ensure that frontend applications like SparkleShare have a branch to commit and push to. It's also useful for you to see a file, even if it's an almost empty README file, to confirm that you have connected.
![Creating a Git repository][9]
Once you've created a repository, obtain the URL it uses for SSH clones. You can get this URL the same way anyone gets any URL for a Git project: navigate to the page of the repository and look for the **Clone** button or field.
![Cloning a URL on GitHub][10]
Cloning a GitHub URL.
![Cloning a URL on GitLab][11]
Cloning a GitLab URL.
This is the address SparkleShare uses to reach your data, so make note of it. Your Git repository is now configured.
#### Self-hosted Git
You can use SparkleShare to access a Git repository on any computer you have access to. No special setup is required, aside from a bare Git repository. However, if you want to give access to your Git repository to anyone else, then you should run a Git manager like [Gitolite][12] or SparkleShare's own Dazzle server to help you manage SSH keys and accounts. At the very least, create a user specific to Git so that users with access to your Git repository don't also automatically gain access to the rest of your server.
Log into your server as the Git user (or yourself, if you're very good at managing user and group permissions) and create a repository:
```
$ mkdir ~/sparkly.git
$ cd ~/sparkly.git
$ git init --bare .
```
Your Git repository is now configured.
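Before moving on, it can be worth confirming that the repository is reachable over SSH from another machine. Here's a minimal check, assuming the hypothetical host and path used elsewhere in this article; an empty repository prints nothing, while a connection or permission problem prints an error:

```
$ git ls-remote ssh://git@example.com/home/git/sparkly.git
```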
#### Dazzle
SparkleShare's developers provide a Git management system called [Dazzle][13] to help you self-host Git repositories.
On your server, download the Dazzle application to some location in your path:
```
$ curl https://raw.githubusercontent.com/hbons/Dazzle/master/dazzle.sh \
  --output ~/bin/dazzle
$ chmod +x ~/bin/dazzle
```
Dazzle sets up a user specific to Git and SparkleShare and also implements access rights based on keys generated by the SparkleShare application. For now, just set up a project:
```
$ dazzle create sparkly
```
Your server is now configured as a SparkleShare host.
### Configuring SparkleShare
When you launch SparkleShare for the first time, you are prompted to configure what server you want SparkleShare to use for storage. This process may feel like a first-run setup wizard, but it's actually the usual process for setting up a new shared location within SparkleShare. Unlike many shared drive applications, with SparkleShare you can have several locations configured at once. The first shared location you configure isn't any more significant than any shared location you may set up later, and you're not signing up with SparkleShare or any other service. You're just pointing SparkleShare at a Git repository so that it knows what to keep your first SparkleShare folder in sync with.
On the first screen, identify yourself by whatever means you want on record in the Git commits that SparkleShare makes on your behalf. You can use anything, even fake information that resolves to nothing. It's purely for the commit messages, which you may never even see if you have no interest in reviewing the Git backend processes.
The next screen prompts you to choose your hosting type. If you are using GitLab, GitHub, Planio, or Bitbucket, then select the appropriate one. For anything else, select **Own server**.
![Choosing a Sparkleshare host][14]
At the bottom of this screen, you must enter the SSH clone URL. If you're self-hosting, the address is something like **ssh://username@example.com** and the remote path is the absolute path to the Git repository you created for this purpose.
Based on my self-hosted examples above, the address to my imaginary server is **ssh://git@example.com:22122** (the **:22122** indicates a nonstandard SSH port) and the remote path is **/home/git/sparkly.git**.
If I use my Notabug.org account instead, the address from the example above is **[git@notabug.org][15]** and the path is **seth/sparkly.git**.
SparkleShare will fail the first time it attempts to connect to the host because you have not yet copied the SparkleShare client ID (an SSH key specific to the SparkleShare application) to the Git host. This is expected, so don't cancel the process. Leave the SparkleShare setup window open and obtain the client ID from the SparkleShare icon in your system tray. Then copy the client ID to your clipboard so you can add it to your Git host.
![Getting the client ID from Sparkleshare][16]
#### Adding your client ID to a hosted Git account
Minor UI differences aside, adding an SSH key (which is all the client ID is) is basically the same process on any hosting service. In your Git host's web dashboard, navigate to your user settings and find the **SSH Keys** category. Click the **Add New Key** button (or similar) and paste the contents of your SparkleShare client ID.
![Adding an SSH key][17]
Save the key. If you want someone else, such as collaborators or family members, to be able to access this same repository, they must provide you with their SparkleShare client ID so you can add it to your account.
#### Adding your client ID to a self-hosted Git account
A SparkleShare client ID is just an SSH key, so copy and paste it into your Git user's **~/.ssh/authorized_keys** file.
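For example, on the server (a sketch, assuming you logged in as the Git user and saved the client ID to a hypothetical file named **sparkleshare_client_id.pub**):

```
$ cat sparkleshare_client_id.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
```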
#### Adding your client ID with Dazzle
If you are using Dazzle to manage your SparkleShare projects, add a client ID with this command:
```
$ dazzle link
```
When Dazzle prompts you for the ID, paste in the client ID found in the SparkleShare menu.
### Using SparkleShare
Once you've added your client ID to your Git host, click the **Retry** button in the SparkleShare window to finish setup. When it's finished cloning your repository, you can close the SparkleShare setup window, and you'll find a new **SparkleShare** folder in your home directory. If you set up a Git repository with a hosting service and chose to include a README or license file, you can see them in your SparkleShare directory.
![Sparkleshare file manager][18]
Otherwise, there are some hidden directories, which you can see by revealing hidden directories in your file manager.
![Showing hidden files in GNOME][19]
You use SparkleShare the same way you use any directory on your computer: you put files into it. Anytime a file or directory is placed into a SparkleShare folder, it's copied in the background to your Git repository.
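Because a SparkleShare folder is an ordinary Git clone under the hood, you can watch that history accumulate. A quick check, assuming a share named **sparkly**:

```
$ cd ~/SparkleShare/sparkly
$ git log --oneline -n 3
```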
#### Excluding certain files
Since Git is designed to remember _everything_ , you may want to exclude specific file types from ever being recorded. There are a few reasons to manage excluded files. By defining files that are off limits for SparkleShare, you can avoid accidental copying of large files. You can also design a scheme for yourself that enables you to store files that logically belong together (MIDI files with their **.flac** exports, for instance) in one directory, but manually back up the large files yourself while letting SparkleShare back up the text-based files.
If you can't see hidden files in your system's file manager, then reveal them. Navigate to your SparkleShare folder, then to the directory representing your repository, locate a file called **.gitignore**, and open it in a text editor. You can enter file extensions or file names, one per line, into **.gitignore**, and any file matching what you list will be (as the file name suggests) ignored.
```
Thumbs.db
$RECYCLE.BIN/
.DS_Store
._*
.fseventsd
.Spotlight-V100
.Trashes
.directory
.Trash-*
*.wav
*.ogg
*.flac
*.mp3
*.m4a
*.opus
*.jpg
*.png
*.mp4
*.mov
*.mkv
*.avi
*.pdf
*.djvu
*.epub
*.od{s,t}
*.cbz
```
You know the types of files you encounter most often, so concentrate on the ones most likely to sneak their way into your SparkleShare directory. If you want to exercise a little overkill, you can find good collections of **.gitignore** files on Notabug.org and also on the internet at large.
With those entries in your **.gitignore** file, you can place large files that you don't want sent to your Git host in your SparkleShare directory, and SparkleShare will ignore them entirely. Of course, that means it's up to you to make sure they get onto a backup or distributed to your SparkleShare collaborators through some other means.
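One way to cover those ignored files (a sketch of my own, not a SparkleShare feature; the backup host is hypothetical) is an occasional rsync of the folder, which copies everything, including what Git ignores:

```
$ rsync -av --exclude='.git' ~/SparkleShare/sparkly/ backup@example.com:backups/sparkly/
```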
### Automation
[Automation][20] is part of the silent agreement we have with computers: they do the repetitious, boring stuff that we humans either aren't very good at doing or aren't very good at remembering. SparkleShare is a nice, simple way to automate the routine distribution of data. It isn't right for every Git repository, by any means. It doesn't have an interface for advanced Git functions; it doesn't have a pause button or a manual override. And that's OK because its scope is intentionally limited. SparkleShare does what SparkleShare sets out to do, it does it well, and it's one Git repository you won't have to think about.
If you have a use for that kind of steady, invisible automation, give SparkleShare a try.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/file-sharing-git
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO
[2]: https://git-scm.com/
[3]: http://www.sparkleshare.org/
[4]: http://gitlab.com
[5]: https://opensource.com/article/19/4/calendar-git
[6]: http://sparkleshare.org
[7]: /business/16/8/flatpak
[8]: http://notabug.org
[9]: https://opensource.com/sites/default/files/uploads/git-new-repo.jpg (Creating a Git repository)
[10]: https://opensource.com/sites/default/files/uploads/github-clone-url.jpg (Cloning a URL on GitHub)
[11]: https://opensource.com/sites/default/files/uploads/gitlab-clone-url.jpg (Cloning a URL on GitLab)
[12]: http://gitolite.org
[13]: https://github.com/hbons/Dazzle
[14]: https://opensource.com/sites/default/files/uploads/sparkleshare-host.jpg (Choosing a Sparkleshare host)
[15]: mailto:git@notabug.org
[16]: https://opensource.com/sites/default/files/uploads/sparkleshare-clientid.jpg (Getting the client ID from Sparkleshare)
[17]: https://opensource.com/sites/default/files/uploads/git-ssh-key.jpg (Adding an SSH key)
[18]: https://opensource.com/sites/default/files/uploads/sparkleshare-file-manager.jpg (Sparkleshare file manager)
[19]: https://opensource.com/sites/default/files/uploads/gnome-show-hidden-files.jpg (Showing hidden files in GNOME)
[20]: /downloads/ansible-quickstart

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Authenticate a Linux Desktop to Your OpenLDAP Server)
[#]: via: (https://www.linux.com/blog/how-authenticate-linux-desktop-your-openldap-server)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
How to Authenticate a Linux Desktop to Your OpenLDAP Server
======
![][1]
[Creative Commons Zero][2]
In this final part of our three-part series, we reach the conclusion everyone has been waiting for. The ultimate goal of using LDAP (in many cases) is enabling desktop authentication. With this setup, admins are better able to manage and control user accounts and logins. After all, Active Directory admins shouldn't have all the fun, right?
With OpenLDAP, you can manage your users on a centralized directory server and connect the authentication of every Linux desktop on your network to that server. And since you already have [OpenLDAP][3] and the [LDAP Account Manager][4] set up and running, the hard work is out of the way. At this point, there are just a few quick steps to enabling those Linux desktops to authenticate against that server.
I'm going to walk you through this process, using Ubuntu Desktop 18.04 to demonstrate. If your desktop distribution is different, you'll only have to modify the installation steps, as the configurations should be similar.
**What You'll Need**
Obviously, you'll need the OpenLDAP server up and running. You'll also need user accounts created on the LDAP directory tree, and a user account with sudo privileges on the client machines. With those pieces out of the way, let's get those desktops authenticating.
**Installation**
The first thing we must do is install the necessary client software. This will be done on all the desktop machines that require authentication with the LDAP server. Open a terminal window on one of the desktop machines and issue the following command:
```
sudo apt-get install libnss-ldap libpam-ldap ldap-utils nscd -y
```
During the installation, you will be asked to enter the LDAP server URI (**Figure 1**).
![][5]
Figure 1: Configuring the LDAP server URI for the client.
[Used with permission][6]
The LDAP URI is the address of the OpenLDAP server, in the form ldap://SERVER_IP (where SERVER_IP is the IP address of the OpenLDAP server). Type that address, tab to OK, and press Enter on your keyboard.
In the next window (**Figure 2**), you are required to enter the Distinguished Name of the OpenLDAP server. This will be in the form dc=example,dc=com.
![][7]
Figure 2: Configuring the DN of your OpenLDAP server.
[Used with permission][6]
If you're unsure of what your OpenLDAP DN is, log into the LDAP Account Manager, click Tree View, and you'll see the DN listed in the left pane (**Figure 3**).
![][8]
Figure 3: Locating your OpenLDAP DN with LAM.
[Used with permission][6]
The next few configuration windows will require the following information:
* Specify LDAP version (select 3)
* Make local root Database admin (select Yes)
* Does the LDAP database require login (select No)
* Specify LDAP admin account suffix (this will be in the form cn=admin,dc=example,dc=com)
* Specify password for LDAP admin account (this will be the password for the LDAP admin user)
Once you've answered the above questions, the installation of the necessary bits is complete.
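If you answered one of those prompts incorrectly, there's no need to reinstall anything; the same debconf questions can be replayed later. A hedged example, using the packages installed above:

```
sudo dpkg-reconfigure libnss-ldap
sudo dpkg-reconfigure libpam-ldap
```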
**Configuring the LDAP Client**
Now it's time to configure the client to authenticate against the OpenLDAP server. This is not nearly as hard as you might think.
First, we must configure nsswitch. Open the configuration file with the command:
```
sudo nano /etc/nsswitch.conf
```
In that file, add ldap at the end of each of the following lines:
```
passwd: compat systemd
group: compat systemd
shadow: files
```
These configuration entries should now look like:
```
passwd: compat systemd ldap
group: compat systemd ldap
shadow: files ldap
```
At the end of this section, add the following line:
```
gshadow: files
```
The entire section should now look like:
```
passwd: compat systemd ldap
group: compat systemd ldap
shadow: files ldap
gshadow: files
```
Save and close that file.
Now we need to configure PAM for LDAP authentication. Issue the command:
```
sudo nano /etc/pam.d/common-password
```
Remove use_authtok from the following line:
```
password [success=1 user_unknown=ignore default=die] pam_ldap.so use_authtok try_first_pass
```
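Once use_authtok is removed, the line should read:

```
password [success=1 user_unknown=ignore default=die] pam_ldap.so try_first_pass
```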
Save and close that file.
There's one more PAM configuration to take care of. Issue the command:
```
sudo nano /etc/pam.d/common-session
```
At the end of that file, add the following:
```
session optional pam_mkhomedir.so skel=/etc/skel umask=077
```
The above line will create the default home directory (upon first login) on the Linux desktop for any LDAP user that doesn't have a local account on the machine. Save and close that file.
**Logging In**
Reboot the client machine. When the login screen is presented, attempt to log in with a user on your OpenLDAP server. The user account should authenticate and present you with a desktop. You are good to go.
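If the login fails, you can check from a terminal (or a virtual console) whether NSS can actually see the directory's accounts. Here, ldapuser is a placeholder for one of your LDAP user names:

```
getent passwd ldapuser
```

A passwd-style line in the output means the client is resolving users from the LDAP server.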
Make sure to configure every single Linux desktop on your network in the same fashion, so they too can authenticate against the OpenLDAP directory tree. By doing this, any user in the tree will be able to log into any configured Linux desktop machine on your network.
You now have an OpenLDAP server running, with the LDAP Account Manager installed for easy account management, and your Linux clients authenticating against that LDAP server.
And that, my friends, is all there is to it.
We're done.
Keep using Linux.
It's been an honor.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/how-authenticate-linux-desktop-your-openldap-server
作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cyber-3400789_1280_0.jpg?itok=YiinDnTw
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://www.linux.com/blog/2019/3/how-install-openldap-ubuntu-server-1804
[4]: https://www.linux.com/blog/learn/2019/3/how-install-ldap-account-manager-ubuntu-server-1804
[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_1.jpg?itok=DgYT8iY1
[6]: /LICENSES/CATEGORY/USED-PERMISSION
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_2.jpg?itok=CXITs7_J
[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_3.jpg?itok=HmhiYj7J

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Run a server with Git)
[#]: via: (https://opensource.com/article/19/4/server-administration-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth/users/seth)
Run a server with Git
======
Thanks to Gitolite, you can manage a Git server with Git. Learn how in
our series about little-known Git uses.
![computer servers processing data][1]
As I've tried to demonstrate in this series leading up to Git's 14th anniversary on April 7, [Git][2] can do a wide range of things beyond tracking source code. Believe it or not, Git can even manage your Git server, so you can, more or less, run a Git server with Git itself.
Of course, this involves a lot of components beyond everyday Git, not the least of which is [Gitolite][3], the backend application managing the fiddly bits that you configure using Git. The great thing about Gitolite is that, because it uses Git as its frontend interface, it's easy to integrate Git server administration within the rest of your Git-based workflow. Gitolite provides precise control over who can access specific repositories on your server and what permissions they have. You can manage that sort of thing yourself with the usual Linux system tools, but it takes a lot of work if you have more than just one or two repos across a half-dozen users.
Gitolite's developers have done the hard work to make it easy for you to provide many users with access to your Git server without giving them access to your entire environment—and you can do it all with Git.
What Gitolite is _not_ is a GUI admin and user panel. That sort of experience is available with the excellent [Gitea][4] project, but this article focuses on the simple elegance and comforting familiarity of Gitolite.
### Install Gitolite
Assuming your Git server runs Linux, you can install Gitolite with your package manager (**yum** on CentOS and RHEL, **apt** on Debian and Ubuntu, **zypper** on OpenSUSE, and so on). For example, on RHEL:
```
$ sudo yum install gitolite3
```
Many repositories still have older versions of Gitolite for legacy support, but the current version is version 3.
You must have passwordless SSH access to your server. You can use a password to log in if you prefer, but Gitolite relies on SSH keys, so you must configure the option to log in with keys. If you don't know how to configure a server for passwordless SSH access, go learn how to do that first (the [Setting up SSH key authentication][5] section of Steve Ovens's Ansible article explains it well). It's an essential part of secure server administration—as well as of running Gitolite.
### Configure a Git user
Without Gitolite, if a person requests access to a Git repository you host on a server, you have to provide that person with a user account. Git provides a special shell, the **git-shell**, which is an ultra-specific shell that performs only Git tasks. This lets you have users who can access your server only through the filter of a very limited shell environment.
That solution works, but it usually means a user gains access to all repositories on your server unless you have a very good schema for group permissions and maintain those permissions strictly whenever a new repository is created. It also requires a lot of manual configuration at the system level, an area usually reserved for a specific tier of sysadmins and not necessarily the person usually in charge of Git repositories.
Gitolite sidesteps this issue entirely by designating one username for every person who needs access to any repository. By default, the username is **git**, and because Gitolite's documentation assumes that's what is used, it's a good default to keep when you're learning the tool. It's also a well-known convention for anyone who's ever used GitLab or GitHub or any other Git hosting service.
Gitolite calls this user the _hosting user_. Create an account on your server to act as the hosting user (I'll stick with **git** because that's the convention):
```
$ sudo adduser --create-home git
```
For you to control the **git** user account, it must have a valid public SSH key that belongs to you. You should already have this set up, so **cp** your public key (_not your private key_) to the **git** user's home directory:
```
$ sudo cp ~/.ssh/id_ed25519.pub /home/git/
$ sudo chown git:git /home/git/id_ed25519.pub
```
If your public key doesn't end with the extension **.pub**, Gitolite will not use it, so rename the file accordingly. Change to that user account to run Gitolite's setup:
```
$ sudo su - git
$ gitolite setup --pubkey id_ed25519.pub
```
After the setup script runs, the **git** user's home directory will have a **repositories** directory, which (for now) contains the **gitolite-admin.git** and **testing.git** repositories. That's all the setup the server requires, so log out.
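Back on your own machine, you can confirm that Gitolite is answering. Running its **info** command over SSH prints the Gitolite version and the repositories your key can access (with **example.com** standing in for your server):

```
$ ssh git@example.com info
```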
### Use Gitolite
Managing Gitolite is a matter of editing text files in a Git repository, specifically **gitolite-admin.git**. You won't SSH into your server for Git administration, and Gitolite encourages you not to try. The repositories you and your users store on the Gitolite server are _bare_ repositories, so it's best to stay out of them.
```
$ git clone git@example.com:gitolite-admin.git gitolite-admin.git
$ cd gitolite-admin.git
$ ls -1
conf
keydir
```
The **conf** directory in this repository contains a file called **gitolite.conf**. Open it in a text editor or use **cat** to view its contents:
```
repo gitolite-admin
RW+ = id_ed25519
repo testing
RW+ = @all
```
You may have an idea of what this configuration file does: **gitolite-admin** represents this repository, and the owner of the **id_ed25519** key has read, write, and Git administrative privileges. In other words, rather than mapping users to normal local Unix users (because all your users log in using the **git** hosting user identity), Gitolite maps users to SSH keys listed in the **keydir** directory.
The **testing.git** repository gives full permissions to everyone with access to the server using special group notation.
#### Add users
If you want to add a user called **alice** to your Git server, the person Alice must send you her public SSH key. Gitolite uses whatever is to the left of the **.pub** extension as the identifier for your Git users. Rather than using the default key name values, give keys a name indicative of the key owner. If a user has more than one key (e.g., one for her laptop, one for her desktop), you can use subdirectories to avoid file name collisions. For instance, the key Alice uses from her laptop might come to you as the default **id_rsa.pub**, so rename it **alice.pub** or similar (or let the users name the key according to their local user accounts on their computers), and place it into the **gitolite-admin.git/keydir/work/laptop/** directory. If she sends you another key from her desktop, name it **alice.pub** (the same as the previous one) and add it to **keydir/work/desktop/**. Another key might go into **keydir/home/desktop/**, and so on. Gitolite recursively searches **keydir** for a **.pub** file matching a repository "user" and treats any match as the same identity.
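Laid out as paths, the Alice example above looks something like this (an illustrative sketch, not output from a real repository):

```
gitolite-admin.git/keydir/work/laptop/alice.pub
gitolite-admin.git/keydir/work/desktop/alice.pub
gitolite-admin.git/keydir/home/desktop/alice.pub
```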
When you add keys to the **keydir** directory, you must commit them back to your server. This is such an easy thing to forget that there's a real argument here for using an automated Git application like [**Sparkleshare**][7] so any change is committed back to your Gitolite admin immediately. The first time you forget to commit and push—and waste three hours of your time and your user's time troubleshooting—you'll see that Gitolite is the perfect justification for using Sparkleshare.
```
$ git add keydir
$ git commit -m 'added alice-laptop-0.pub'
$ git push origin HEAD
```
Alice, by default, gains access to the **testing.git** directory so she can test connectivity and functionality with that.
#### Set permissions
As with users, directory permissions and groups are abstracted away from the normal Unix tools you might be used to (or find information about online). Permissions to projects are granted in the **gitolite.conf** file in the **gitolite-admin.git/conf** directory. There are four levels of permissions:
* **R** allows read-only. A user with **R** permissions on a repository may clone it, and that's all.
* **RW** allows a user to perform a fast-forward push of a branch, create new branches, and create new tags. More or less, this one feels like a "normal" Git repository to most users.
* **RW+** allows Git actions that are potentially destructive. A user can perform normal fast-forward pushes, as well as rewind pushes, do rebases, and delete branches and tags. This may or may not be something you want to grant to all contributors on a project.
* **-** explicitly denies access to a repository. This is essentially the same as a user not being listed in the repository's configuration.
Create a new repository or modify an existing repository's permissions by adjusting **gitolite.conf**. For instance, to give Alice permissions to administrate a new repository called **widgets.git**:
```
repo gitolite-admin
RW+ = id_ed25519
repo testing
RW+ = @all
repo widgets
RW+ = alice
```
Now Alice—and Alice alone—can clone the repo:
```
[alice]$ git clone git@example.com:widgets.git
Cloning into 'widgets'...
warning: You appear to have cloned an empty repository.
```
On her initial push, Alice must use the **-u** option to send her branch to the empty repository (as she would have to do with any Git host).
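That first push looks something like this (a sketch, following the article's **master** branch convention):

```
[alice]$ cd widgets
[alice]$ git push -u origin master
```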
To make user management easier, you can define groups of repositories:
```
@qtrepo = widgets
@qtrepo = games
repo gitolite-admin
RW+ = id_ed25519
repo testing
RW+ = @all
repo @qtrepo
RW+ = alice
```
Just as you can create group repositories, you can group users. One user group exists by default: **@all**. As you might expect, it includes all users, without exception. You can create your own:
```
@qtrepo = widgets
@qtrepo = games
@developers = alice bob
repo gitolite-admin
RW+ = id_ed25519
repo testing
RW+ = @all
repo @qtrepo
RW+ = @developers
```
As with adding or modifying key files, any change to the **gitolite.conf** file must be committed and pushed to take effect.
### Create a repository
By default, Gitolite assumes repository creation happens from the top down. For instance, a project manager with access to the Git server creates a project repository and, through the Gitolite administration repo, adds developers.
In practice, you might prefer to grant users permission to create repositories. Gitolite calls these "wild repos" (I'm not sure whether that's commentary on how the repos come into being or a reference to the wildcard characters required by the configuration file to let it happen). Here's an example:
```
@managers = alice bob
repo foo/CREATOR/[a-z]..*
C = @managers
RW+ = CREATOR
RW = WRITERS
R = READERS
```
The first line defines a group of users: the group is called **@managers** and contains users **alice** and **bob**. The next line sets up a wildcard allowing repositories that do not yet exist to be created in a directory called **foo** followed by a subdirectory named for the user creating the repo. For example:
```
[alice]$ git clone git@example.com:foo/alice/cool-app.git
Cloning into 'cool-app'...
Initialized empty Git repository in /home/git/repositories/foo/alice/cool-app.git
warning: You appear to have cloned an empty repository.
```
There are some mechanisms for the creator of a wild repo to define who can read and write to their repository, but they're limited in scope. For the most part, Gitolite assumes that a specific set of users governs project permission. One solution is to grant all users access to **gitolite-admin** using a Git hook to require manager approval to merge changes into the master branch.
### Learn more
Gitolite has many more features than what this introductory article covers, so try it out. The [documentation][8] is excellent, and once you read through it, you can customize your Gitolite server to provide your users whatever level of control you are comfortable with. Gitolite is a low-maintenance, simple system that you can install, set up, and then more or less forget about.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/server-administration-git
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8 (computer servers processing data)
[2]: https://git-scm.com/
[3]: http://gitolite.com
[4]: http://gitea.io
[5]: Setting%20up%20SSH%20key%20authentication
[7]: https://opensource.com/article/19/4/file-sharing-git
[8]: http://gitolite.com/gitolite/quick_install.html

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fixing Ubuntu Freezing at Boot Time)
[#]: via: (https://itsfoss.com/fix-ubuntu-freezing/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Fixing Ubuntu Freezing at Boot Time
======
_**This step-by-step tutorial shows you how to deal with Ubuntu freezing at boot time by installing proprietary NVIDIA drivers. The tutorial was performed on a newly installed Ubuntu system, but it should be applicable otherwise as well.**_
The other day I bought an [Acer Predator laptop][1] ([affiliate][2] link) to test various Linux distributions. It's a bulky, heavily built laptop, which is in contrast to my liking of smaller, lightweight laptops like the [awesome Dell XPS][3].
The reason why I opted for this gaming laptop even though I don't game on PC is [NVIDIA Graphics][4]. The Acer Predator Helios 300 comes with an [NVIDIA GeForce][5] GTX 1050Ti.
NVIDIA is known for its poor compatibility with Linux. A number of It's FOSS readers asked for my help with their NVIDIA laptops, and I could do nothing because I didn't have a system with an NVIDIA graphics card.
So when I decided to get a new dedicated device for testing Linux distributions, I opted for a laptop with NVIDIA graphics.
This laptop comes with Windows 10 installed on the 120 GB SSD and a 1 TB HDD for storing data. I [dual booted Windows 10 with Ubuntu 18.04][6]. The installation was quick, easy, and painless.
I booted into [Ubuntu][7]. It was showing the familiar purple screen, and then I noticed that it froze there. The mouse wouldn't move, I couldn't type anything, and nothing else could be done except turning off the device by holding the power button.
And it was the same story on the next login attempt. Ubuntu just got stuck at the purple screen even before reaching the login screen.
Sounds familiar? Let me show you how you can fix this problem of Ubuntu freezing at login.
Don't use Ubuntu?
Please note that while this tutorial was performed with Ubuntu 18.04, this would also work on other Ubuntu-based distributions such as Linux Mint, elementary OS, etc. I have confirmed it with Zorin OS.
### Fix Ubuntu freezing at boot time because of NVIDIA drivers
![][8]
The solution I am going to describe here works for systems with an NVIDIA graphics card, because your system is freezing thanks to the open source [NVIDIA Nouveau drivers][9].
Without further delay, let's see how to fix this problem.
#### Step 1: Editing Grub
When you boot your system, just stop at the Grub screen like the one below. If you don't see this screen, keep holding the Shift key at boot time.
At this screen, press the E key to go into editing mode.
![Press E key][10]
You should see some sort of code like the one below. You should focus on the line that starts with Linux.
![Go to line starting with Linux][11]
#### Step 2: Temporarily Modifying Linux kernel parameters in Grub
Remember, our problem is with the NVIDIA graphics drivers. This incompatibility with the open source version of the NVIDIA drivers causes the issue, so what we can do here is disable those drivers.
Now, there are several ways you can try to disable these drivers. My favorite way is to disable all video/graphics cards using nomodeset.
Just add the following text at the end of the line starting with Linux. You should be able to type normally. Just make sure that you are adding it at the end of the line.
```
nomodeset
```
Now your screen should look like this:
![Disable graphics drivers by adding nomodeset to the kernel][12]
Press Ctrl+X or F10 to save and exit. Now you'll boot with the newly modified kernel parameters.
Explanation of what we did here
So, what did we just do here? What's that nomodeset thing? Let me explain it to you briefly.
Normally, the video/graphics cards were put to use after X or any other display server was started. In other words, after you logged in to your system and saw the graphical user interface.
But lately, the video mode settings were moved into the kernel. Among other benefits, this enables you to have beautiful, high-resolution boot splash screens.
If you add the nomodeset parameter to the kernel, it instructs the kernel to load the video/graphics drivers after the display server is started.
In other words, you disabled loading the graphics drivers at boot time, and the conflict they were causing goes away. After you log in to the system, you see everything, because the graphics drivers are loaded again.
#### Step 3: Update your system and install proprietary NVIDIA drivers
Don't be too happy yet just because you are able to log in to your system now. What you did was temporary, and the next time you boot into your system, your system will still freeze because it will still try to load the Nouveau drivers.
Does this mean you'll always have to edit the kernel parameters from the Grub screen? Thankfully, the answer is no.
What you can do here is [install additional drivers in Ubuntu][13] for NVIDIA. Ubuntu won't freeze at boot time while using these proprietary drivers.
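If you prefer the terminal to the Software & Updates window described below, Ubuntu also ships the ubuntu-drivers tool, which can detect your hardware and install the recommended proprietary driver. This is an alternative sketch, not part of the original walkthrough:

```
ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
```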
I am assuming that it's your first login to a freshly installed system. This means you must [update Ubuntu][14] before you do anything else. Open a terminal using the Ctrl+Alt+T [keyboard shortcut in Ubuntu][15] and use the following command:
```
sudo apt update && sudo apt upgrade -y
```
You may try installing additional drivers in Ubuntu right after the completion of the above command, but in my experience, you'll have to restart your system before you can successfully install the new drivers. And when you restart, you'll have to change the kernel parameter again, the same way we did earlier.
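If you expect several reboots before the proprietary driver is in place, you can make nomodeset persistent instead of retyping it at the Grub screen every time. This is a sketch of my own, not a step from the original instructions; remember to revert it once the NVIDIA driver works:

```
sudo nano /etc/default/grub
# change: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
# to:     GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
sudo update-grub
```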
After your system is updated and restarted, press the Windows key to go to the menu and search for Software & Updates.
![Click on Software & Updates][16]
Now go to the Additional Drivers tab and wait for a few seconds. Here you'll see the proprietary drivers available for your system. You should see NVIDIA in the list.
Select the proprietary driver and click on Apply Changes.
![Installing NVIDIA Drivers][17]
It will take some time to install the new drivers. If you have UEFI Secure Boot enabled on your system, you'll also be asked to set a password. _You can set it to anything that is easy to remember_. I'll show you its implications later in step 4.
![You may have to setup a secure boot password][18]
Once the installation finishes, you'll be asked to restart the system for the changes to take effect.
![Restart your system once the new drivers are installed][19]
#### Step 4: Dealing with MOK (only for UEFI Secure Boot enabled devices)
If you were asked to set up a secure boot password, you'll see a blue screen that says something about “MOK management”. It's a complicated topic, and I'll try to explain it in simpler terms.
MOK ([Machine Owner Key][20]) is needed due to the secure boot feature that requires all kernel modules to be signed. Ubuntu does that for all the kernel modules that it ships in the ISO. Because you installed a new module (the additional driver) or made a change in the kernel modules, your secure system may treat it as an unwarranted/foreign change in your system and may refuse to boot.
Hence, you can either sign the kernel module on your own (telling your UEFI system not to panic because you made these changes), or you can simply [disable secure boot][21].
Now that you know a little about [secure boot and MOK][22], let's see what to do when you see the blue screen at the next boot.
If you select “Continue boot”, chances are that your system will boot like normal and you won't have to do anything at all. But it's possible that not all the features of the new driver will work correctly.
This is why you should **choose Enroll MOK**.
![][23]
It will ask you to Continue on the next screen, followed by a prompt for a password. Use the password you set while installing the additional drivers in the previous step. You'll be asked to reboot now.
Don't worry!
If you miss this blue MOK screen or accidentally clicked Continue boot instead of Enroll MOK, don't panic. Your main aim is to be able to boot into your system, and you have successfully done that part by disabling the Nouveau graphics driver.
The worst case would be that your system switched to the integrated Intel graphics instead of the NVIDIA graphics. You can install the NVIDIA graphics drivers later at any point in time. Your priority is to boot into the system.
#### Step 5: Enjoying Ubuntu Linux with proprietary NVIDIA drivers
Once the new driver is installed, you'll have to restart your system again. Don't worry! Things should be better now, and you won't need to edit the kernel parameters anymore. You'll be booting into Ubuntu straightaway.
I hope this tutorial helped you fix the problem of Ubuntu freezing at boot time and that you were able to boot into your Ubuntu system.
If you have any questions or suggestions, please let me know in the comment section below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/fix-ubuntu-freezing/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://amzn.to/2YVV6rt
[2]: https://itsfoss.com/affiliate-policy/
[3]: https://itsfoss.com/dell-xps-13-ubuntu-review/
[4]: https://www.nvidia.com/en-us/
[5]: https://www.nvidia.com/en-us/geforce/
[6]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
[7]: https://www.ubuntu.com/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/fixing-frozen-ubuntu.png?resize=800%2C450&ssl=1
[9]: https://nouveau.freedesktop.org/wiki/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/edit-grub-menu.jpg?resize=800%2C393&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue.jpg?resize=800%2C343&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue-2.jpg?resize=800%2C320&ssl=1
[13]: https://itsfoss.com/install-additional-drivers-ubuntu/
[14]: https://itsfoss.com/update-ubuntu/
[15]: https://itsfoss.com/ubuntu-shortcuts/
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/activities_software_updates_search-e1551416201782-800x228.png?resize=800%2C228&ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-nvidia-driver-ubuntu.jpg?resize=800%2C520&ssl=1
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/secure-boot-nvidia.jpg?ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/nvidia-drivers-installed-Ubuntu.jpg?resize=800%2C510&ssl=1
[20]: https://firmware.intel.com/blog/using-mok-and-uefi-secure-boot-suse-linux
[21]: https://itsfoss.com/disable-secure-boot-in-acer/
[22]: https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/MOK-Secure-boot.jpg?resize=800%2C350&ssl=1

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Happy 14th anniversary Git: What do you love about Git?)
[#]: via: (https://opensource.com/article/19/4/what-do-you-love-about-git)
[#]: author: (Jen Wike Huger (Red Hat) https://opensource.com/users/jen-wike/users/seth)
Happy 14th anniversary Git: What do you love about Git?
======
Git's huge influence on software development practices is hard to match.
![arrows cycle symbol for failing faster][1]
In the 14 years since Linus Torvalds developed Git, its influence on software development practices would be hard to match—in StackOverflow's 2018 developer survey, [87% of respondents][2] said they use Git for version control. Clearly, no other tool is anywhere close to knocking Git off its throne as the king of source control management (SCM).
In honor of Git's 14th anniversary on April 7, I asked some enthusiasts what they love most about it. Here's what they told me.
_(Some responses have been lightly edited for grammar and clarity)_
"I can't stand Git. Incomprehensible terminology, distributed so that truth does not exist, requires add-ons like Gerrit to make it 50% as usable as a nice centralized repository like Subversion or Perforce. But in the spirit of answering 'what do you like about Git?': Git makes arbitrarily abstruse source tree manipulations possible and usually makes it easy to undo them when it takes 20 tries to get them right." — _[Sweet Tea Dorminy][3]_
"I like that Git doesn't enforce any particular workflow and development teams are free to collaborate in a way that works for them, be it with pull requests or emailed diffs or push permission for all." — _[Andy Price][4]_
"I've been using Git since 2006 or 2007. What I love about Git is that it works well both for small projects that may never leave my computer and for large, collaborative, distributed projects. Git provides you all the tools to rollback from (almost) every bad commit you make, and as such has significantly reduced my stress when it comes to software management." — _[Jonathan S. Katz][5]_
"I appreciate Git's principle of ["plumbing" vs. "porcelain" commands][6]. Users can effectively share any kind of information using Git without needing to know how the internals work. That said, the curious have access to commands that peel back the layers, revealing the content-addressable filesystem that powers many code-sharing communities." — _[Matthew Broberg][7]_
"I love Git because I can do almost anything to explore, develop, build, test, and commit application codes in my own Git repo. It always motivates me to participate in open source projects." — _[Daniel Oh][8]_
"Git is the first version control tool I used, and it went from being scary to friendly over the years. I love how it empowers you to feel confident about code you are changing while it gives you the assurance that your master branch is safe (obviously unless you force-push half-baked code to the production/master branch). Its ability to reverse changes by checking out older commits is great too." — _[Kedar Vijay Kulkarni][9]_
"I love Git because it made several other SCM software obsolete. No one uses VS, Subversion can be used with git-svn (if needed at all), BitKeeper is remembered only by elders, it's similar with Monotone. Sure, there is Mercurial, but for me it was kind of 'still a work in progress' when I used it while upstreaming Firefox support for AArch64 (a few years ago). Someone may even mention Perforce, SourceSafe, or some other 'enterprise' solutions, but they are not popular in the FOSS world." — _[Marcin Juszkiewicz][10]_
"I love the simplicity of the internal model of SHA1ed (commit → tree → blob) objects. And porcelain commands. And that I used it as patching mechanism for JBoss/Red Hat Fuse. And that this mechanism works. And how Git can be explained in the [great tale of three trees][11]." — _[Grzegorz Grzybek][12]_
"I like the [generated Git man pages][13] which make me humble in front of Git. (This is a page that generates Git-sounding but in reality completely nonsense pages—which often gives the same feeling as real Git pages…)" — _[Marko Myllynen][14]_
"Git changed my life as a developer going from a world where SCM was a problem to a world where it is a solution." — _[Joel Takvorian][15]_
* * *
Now that we've heard from these 10 Git enthusiasts, it's your turn: What do _you_ appreciate about Git? Please share your opinions in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/what-do-you-love-about-git
作者:[Jen Wike Huger (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jen-wike/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh (arrows cycle symbol for failing faster)
[2]: https://insights.stackoverflow.com/survey/2018/#work-_-version-control
[3]: https://github.com/sweettea
[4]: https://www.linkedin.com/in/andrew-price-8771796/
[5]: https://opensource.com/users/jkatz05
[6]: https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain
[7]: https://opensource.com/users/mbbroberg
[8]: https://opensource.com/users/daniel-oh
[9]: https://opensource.com/users/kkulkarn
[10]: https://github.com/hrw
[11]: https://speakerdeck.com/schacon/a-tale-of-three-trees
[12]: https://github.com/grgrzybek
[13]: https://git-man-page-generator.lokaltog.net/
[14]: https://github.com/myllynen
[15]: https://github.com/jotak

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage multimedia files with Git)
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
Manage multimedia files with Git
======
Learn how to use Git to track large multimedia files in your projects in
the final article in our series on little-known uses of Git.
![video editing dashboard][1]
Git is very specifically designed for source code version control, so it's rarely embraced by projects and industries that don't primarily work in plaintext. However, the advantages of an asynchronous workflow are appealing, especially in the ever-growing number of industries that combine serious computing with seriously artistic ventures, including web design, visual effects, video games, publishing, currency design (yes, that's a real industry), education… the list goes on and on.
In this series leading up to Git's 14th anniversary, we've shared six little-known ways to use Git. In this final article, we'll look at software that brings the advantages of Git to managing multimedia files.
### The problem with managing multimedia files with Git
It seems to be common knowledge that Git doesn't work well with non-text files, but it never hurts to challenge assumptions. Here's an example of copying a photo file using Git:
```
$ du -hs
108K .
$ cp ~/photos/dandelion.tif .
$ git add dandelion.tif
$ git commit -m 'added a photo'
[master (root-commit) fa6caa7] added a photo
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 dandelion.tif
$ du -hs
1.8M .
```
Nothing unusual so far; adding a 1.8MB photo to a directory results in a directory 1.8MB in size. So, let's try removing the file:
```
$ git rm dandelion.tif
$ git commit -m 'deleted a photo'
$ du -hs
828K .
```
You can see the problem here: Removing a large file after it's been committed leaves a repository roughly eight times the size of its original, barren state (from 108K to 828K). You can perform tests to get a better average, but this simple demonstration is consistent with my experience. The cost of committing files that aren't text-based is minimal at first, but the longer a project stays active, the more changes people make to static content, and the more those fractions start to add up. When a Git repository becomes very large, the major cost is usually speed. The time to perform pulls and pushes goes from being how long it takes to take a sip of coffee to how long it takes to wonder if your computer got kicked off the network.
The reason static content causes Git to grow in size is that formats based on text allow Git to pull out just the parts that have changed. Raster images and music files make as much sense to Git as they would to you if you looked at the binary data contained in a .png or .wav file. So Git just takes all the data and makes a new copy of it, even if only one pixel changes from one photo to the next.
### Git-portal
In practice, many multimedia projects don't need or want to track the media's history. The media part of a project tends to have a different lifecycle than the text or code part of a project. Media assets generally progress in one direction: a picture starts as a pencil sketch, proceeds toward its destination as a digital painting, and, even if the text is rolled back to an earlier version, the art continues its forward progress. It's rare for media to be bound to a specific version of a project. The exceptions are usually graphics that reflect datasets—usually tables or graphs or charts—that can be done in text-based formats such as SVG.
So, on many projects that involve both media and text (whether it's narrative prose or code), Git is an acceptable solution to file management, as long as there's a playground outside the version control cycle for artists to play in.
![Graphic showing relationship between art assets and Git][2]
A simple way to enable that is [Git-portal][3], a Bash script armed with Git hooks that moves your asset files to a directory outside Git's purview and replaces them with symlinks. Git commits the symlinks (sometimes called aliases or shortcuts), which are trivially small, so all you commit are your text files and whatever symlinks represent your media assets. Because the replacement files are symlinks, your project continues to function as expected because your local machine follows the symlinks to their "real" counterparts. Git-portal maintains a project's directory structure when it swaps out a file with a symlink, so it's easy to reverse the process, should you decide that Git-portal isn't right for your project or you need to build a version of your project without symlinks (for distribution, for instance).
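Under the hood, the swap is nothing more exotic than a move and a symlink. You can approximate what Git-portal does for a single file by hand (a simplified sketch of the concept, not Git-portal's actual code):

```
$ mkdir -p _portal
$ mv dandelion.tif _portal/dandelion.tif      # move the heavy file out of Git's purview
$ ln -s _portal/dandelion.tif dandelion.tif   # leave a tiny symlink in its place
$ git add dandelion.tif                       # Git commits the symlink, not the photo
```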
Git-portal also allows remote synchronization of assets over rsync, so you can set up a remote storage location as a centralized source of authority.
Git-portal is ideal for multimedia projects, including video game and tabletop game design, virtual reality projects with big 3D model renders and textures, [books][4] with graphics and .odt exports, collaborative [blog websites][5], music projects, and much more. It's not uncommon for an artist to perform versioning in their application—in the form of layers (in the graphics world) and tracks (in the music world)—so Git adds nothing to multimedia project files themselves. The power of Git is leveraged for other parts of artistic projects (prose and narrative, project management, subtitle files, credits, marketing copy, documentation, and so on), and the power of structured remote backups is leveraged by the artists.
#### Install Git-portal
There are RPM packages for Git-portal located at <https://klaatu.fedorapeople.org/git-portal>, which you can download and install.
Alternately, you can install Git-portal manually from its home on GitLab. It's just a Bash script and some Git hooks (which are also Bash scripts), but it requires a quick build process so that it knows where to install itself:
```
$ git clone https://gitlab.com/slackermedia/git-portal.git git-portal.clone
$ cd git-portal.clone
$ ./configure
$ make
$ sudo make install
```
#### Use Git-portal
Git-portal is used alongside Git. This means, as with all large-file extensions to Git, there are some added steps to remember. But you only need Git-portal when dealing with your media assets, so it's pretty easy to remember unless you've acclimated yourself to treating large files the same as text files (which is rare for Git users). There's one setup step you must do to use Git-portal in a project:
```
$ mkdir bigproject.git
$ cd !$
$ git init
$ git-portal init
```
Git-portal's **init** function creates a **_portal** directory in your Git repository and adds it to your .gitignore file.
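Assuming the ignore entry is the plain directory name (the exact contents may differ), a freshly initialized project looks something like this:

```
$ ls -a
./  ../  .git/  .gitignore  _portal/
$ cat .gitignore
_portal
```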
Using Git-portal in a daily routine integrates smoothly with Git. A good example is a MIDI-based music project: the project files produced by the music workstation are text-based, but the MIDI files are binary data:
```
$ ls -1
_portal
song.1.qtr
song.qtr
song-Track_1-1.mid
song-Track_1-3.mid
song-Track_2-1.mid
$ git add song*qtr
$ git-portal add song-Track*mid
$ git add song-Track*mid
```
If you look into the **_portal** directory, you'll find the original MIDI files. The files in their place are symlinks to **_portal**, which keeps the music workstation working as expected:
```
$ ls -lG
[...] _portal/
[...] song.1.qtr
[...] song.qtr
[...] song-Track_1-1.mid -> _portal/song-Track_1-1.mid*
[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid*
[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid*
```
As with Git, you can also add a directory of files:
```
$ cp -r ~/synth-presets/yoshimi .
$ git-portal add yoshimi
Directories cannot go through the portal. Sending files instead.
$ ls -lG yoshimi
[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat*
```
Removal works as expected, but when removing something in **_portal**, you should use **git-portal rm** instead of **git rm**. Using Git-portal ensures that the file is removed from **_portal**:
```
$ ls
_portal/ song.qtr song-Track_1-3.mid@ yoshimi/
song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@
$ git-portal rm song-Track_1-3.mid
rm 'song-Track_1-3.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
```
If you forget to use Git-portal, then you have to remove the portal file manually:
```
$ git rm song-Track_1-1.mid
rm 'song-Track_1-1.mid'
$ ls _portal/
song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/
$ trash _portal/song-Track_1-1.mid
```
Git-portal's only other function is to list all current symlinks and find any that may have become broken, which can sometimes happen if files move around in a project directory:
```
$ mkdir foo
$ mv yoshimi foo
$ git-portal status
bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid
bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat
```
If you're using Git-portal for a personal project and maintaining your own backups, this is technically all you need to know about Git-portal. If you want to add in collaborators, or you want Git-portal to manage backups the way (more or less) Git does, you can add a remote.
#### Add Git-portal remotes
Adding a remote location for Git-portal is done through Git's existing remote function. Git-portal implements Git hooks, scripts hidden in your repository's .git directory, to look at your remotes for any that begin with **_portal**. If it finds one, it attempts to **rsync** to the remote location and synchronize files. Git-portal performs this action anytime you do a Git push or a Git merge (or pull, which is really just a fetch and an automatic merge).
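Conceptually, the hook's job boils down to looping over your remotes, looking for the **_portal** prefix, and rsyncing to any match. A minimal sketch of that idea (not Git-portal's actual hook) might look like this:

```
#!/usr/bin/env bash
# pre-push hook sketch: mirror _portal/ to every remote whose name starts with _portal
for remote in $(git remote); do
    case "$remote" in
        _portal*)
            dest=$(git remote get-url "$remote")
            rsync -av --delete _portal/ "$dest/"   # rsync accepts the same user@host:path syntax
            ;;
    esac
done
```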
If you've only cloned Git repositories, then you may never have added a remote yourself. It's a standard Git procedure:
```
$ git remote add origin [git@gitdawg.com][6]:seth/bigproject.git
$ git remote -v
origin [git@gitdawg.com][6]:seth/bigproject.git (fetch)
origin [git@gitdawg.com][6]:seth/bigproject.git (push)
```
The name **origin** is a popular convention for your main Git repository, so it makes sense to use it for your Git data. Your Git-portal data, however, is stored separately, so you must create a second remote to tell Git-portal where to push to and pull from. Depending on your Git host, you may need a separate server because gigabytes of media assets are unlikely to be accepted by a Git host with limited space. Or maybe you're on a server that permits you to access only your Git repository and not external storage directories:
```
$ git remote add _portal [seth@example.com][7]:/home/seth/git/bigproject_portal
$ git remote -v
origin [git@gitdawg.com][6]:seth/bigproject.git (fetch)
origin [git@gitdawg.com][6]:seth/bigproject.git (push)
_portal [seth@example.com][7]:/home/seth/git/bigproject_portal (fetch)
_portal [seth@example.com][7]:/home/seth/git/bigproject_portal (push)
```
You may not want to give all of your users individual accounts on your server, and you don't have to. To provide access to the server hosting a repository's large file assets, you can run a Git frontend like **[Gitolite][8]**, or you can use **rrsync** (i.e., restricted rsync).
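With rrsync, that usually means pinning an SSH key to a single rsync-only directory in the server's **authorized_keys** file. A sketch (the rrsync path varies by distribution, and the key and paths here are placeholders):

```
command="/usr/bin/rrsync /home/seth/git/bigproject_portal",restrict ssh-ed25519 AAAA...key... artist@example.com
```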
Now you can push your Git data to your remote Git repository and your Git-portal data to your remote portal:
```
$ git push origin HEAD
master destination detected
Syncing _portal content...
sending incremental file list
sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec
total size is 60,358,015 speedup is 6,474.10
Syncing _portal content to example.com:/home/seth/git/bigproject_portal
```
If you have Git-portal installed and a **_portal** remote configured, your **_portal** directory will be synchronized, getting new content from the server and sending fresh content with every push. While you don't have to do a Git commit and push to sync with the server (a user could just use rsync directly), I find it useful to require commits for artistic changes. It integrates artists and their digital assets into the rest of the workflow, and it provides useful metadata about project progress and velocity.
### Other options
If Git-portal is too simple for you, there are other options for managing large files with Git. [Git Large File Storage][9] (LFS) is a fork of a defunct project called git-media and is maintained and supported by GitHub. It requires special commands (like **git lfs track** to protect large files from being tracked by Git) and requires the user to manage a .gitattributes file to update which files in the repository are tracked by LFS. It supports _only_ HTTP and HTTPS remotes for large files, so your LFS server must be configured so users can authenticate over HTTP rather than SSH or rsync.
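For comparison, a typical LFS session looks like this (assuming the **git-lfs** extension is installed):

```
$ git lfs install               # set up the LFS hooks in this repository
$ git lfs track "*.tif"         # record the pattern in .gitattributes
$ git add .gitattributes dandelion.tif
$ git commit -m 'track photos with LFS'
```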
A more flexible option than LFS is [git-annex][10], which you can learn more about in my article about [managing binary blobs in Git][11] (ignore the parts about the deprecated git-media, as its former flexibility doesn't apply to its successor, Git LFS). Git-annex is a flexible and elegant solution with a detailed system for adding, removing, and moving large files within a repository. Because it's flexible and powerful, there are lots of new commands and rules to learn, so take a look at its [documentation][12].
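Its everyday workflow is still short, though. A first session might look like this (a sketch; the description string is arbitrary):

```
$ git annex init "studio workstation"   # enable git-annex in an existing repository
$ git annex add song-Track_1-1.mid      # store the file in the annex, leaving a symlink
$ git commit -m 'add MIDI track'
```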
If, however, your needs are simple and you like a solution that utilizes existing technology to do simple and obvious tasks, Git-portal might be the tool for the job.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/manage-multimedia-files-git
作者:[Seth Kenlon (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git)
[3]: http://gitlab.com/slackermedia/git-portal.git
[4]: https://www.apress.com/gp/book/9781484241691
[5]: http://mixedsignals.ml
[6]: mailto:git@gitdawg.com
[7]: mailto:seth@example.com
[8]: https://opensource.com/article/19/4/file-sharing-git
[9]: https://git-lfs.github.com/
[10]: https://git-annex.branchable.com/
[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7
[12]: https://git-annex.branchable.com/walkthrough/

[#]: collector: (lujun9972)
[#]: translator: (Moelf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Look Back at the History of Firefox)
[#]: via: (https://itsfoss.com/history-of-firefox)
[#]: author: (John Paul https://itsfoss.com/author/john/)
回顾 Firefox 历史
======
火狐浏览器从很久以前开始就一直是开源社区的顶梁柱。多年来,它一直是几乎所有 Linux 发行版的默认浏览器,并且曾是阻挡微软彻底称霸浏览器界的最后一块磐石。这款浏览器的起源可以一直回溯到互联网创生的时代。本周是互联网诞生 30 周年,趁这个机会回顾一下我们熟悉并喜爱的火狐浏览器,实在是再好不过了。
### 发源
在 90 年代早期,一个叫 [Marc Andreessen][1] 的年轻人正在伊利诺伊大学攻读计算机科学本科,同时为[国家超算应用中心][2]工作。Marc 在那时[了解][4]到了一款叫 [ViolaWWW][5] 的早期浏览器。Marc 和 Eric Bina 看到了这种技术的潜力,他们开发了一个易于安装的 Unix 浏览器,取名 [NCSA Mosaic][6]。第一个 alpha 版本发布于 1993 年 6 月,到 9 月的时候,这款浏览器已经有了 Windows 和 Macintosh 的移植版本。因为比当时其他任何浏览器软件都易于使用Mosaic 很快变得相当流行。
1994 年Marc 毕业并移居加州。一个叫 Jim Clark 的人找上了他Clark 之前靠卖电脑软硬件赚了不少钱。Clark 也用过 Mosaic 浏览器,并且在互联网上看到了商机。Clark 创立了一家公司,雇来 Marc 和 Eric 专做互联网软件。公司一开始叫 “Mosaic 通讯”,但是伊利诺伊大学不喜欢他们[名字里用 Mosaic][7],所以公司改名为大家后来熟悉的 “Netscape Communications 公司”。
公司的第一个项目是给任天堂 64 开发在线对战网络,然而不怎么成功。他们第一个以公司名义发布的产品是一款叫做 Mosaic Netscape 0.9 的浏览器,很快这款浏览器被改名叫 Netscape Navigator。在内部这款浏览器的开发代号是 mozilla意思是 “Mosaic 杀手”。一位员工还创作了一幅[类似哥斯拉的][8]卡通画。他们当时想在竞争中彻底胜出。
![Early Firefox Mascot][9]

早期 Mozilla 在 Netscape 的吉祥物
他们取得了辉煌的胜利。那时Netscape 最大的优势是他们的浏览器在各种操作系统上的体验极为一致。Netscape 把这宣传为给所有人提供公平的互联网体验。
随着越来越多的人使用 Netscape NavigatorNCSA Mosaic 的市场份额逐步下降。到了 1995 年Netscape 公开上市了。[第一天][10],股价从开盘的 $28直窜到 $78收盘于 $58。Netscape 那时所向披靡。
但好景不长。1995 年夏天,微软发布了 Internet Explorer 1.0,这款浏览器基于 Spyglass Mosaic而后者又直接基于 NCSA Mosaic。[浏览器战争][11]就此展开。
在接下来的几年里Netscape 和微软就浏览器的霸主地位展开了争夺,他们各自加入了很多新特性以取得优势。不幸的是IE 有着与 Windows 操作系统捆绑的巨大优势;更甚于此,微软还有更多的程序员和资本可以调动。到 1997 年底Netscape 公司开始陷入财务困境。
### 迈向开源
![Mozilla Firefox][12]
1998 年 1 月Netscape 开源了 Netscape Communicator 4.0 软件套装的代码,[旨在][13]“集合互联网上万千程序员的才智,把最好的功能加入 Netscape 的软件。这一策略能加速开发,并且让 Netscape 在未来能自由地向个人和商业用户提供高质量的 Netscape Communicator”。
这个项目交由新成立的 Mozilla Organization 管理。然而Netscape Communicator 4.0 的代码因其大小和复杂程度,很难让社区里的程序员们独立开发。雪上加霜的是,浏览器的一些组件由于第三方的许可协议而无法开源。到头来,他们决定用新生的 [Gecko][14] 重写渲染引擎。
到了 1998 年的 11 月Netscape 被美国在线AOL收购[价格是价值 42 亿美元的股权][15]。
从头来过是一项艰巨的任务。Mozilla 套件直到 2002 年 6 月才面世它同样支持多个操作系统Linux、Mac OS、微软 Windows 和 Solaris。
到了第二年AOL 宣布他们将停止浏览器开发。随后 Mozilla 基金会成立了,用于管理 Mozilla 的商标和项目相关的财务事宜。最早 Mozilla 基金会收到了一笔来自 AOL、IBM、Sun Microsystems 和红帽Red Hat的总计 200 万美元的捐赠。
到了 2003 年 3 月Mozilla [宣布][16]由于越来越沉重的软件包袱,计划把浏览器套件分拆成单独的应用。这个单独的浏览器一开始起名 Phoenix但是由于和 BIOS 制造企业凤凰科技的商标官司,浏览器改名为 Firebird火鸟——结果又和 Firebird 数据库的开发者起了冲突。浏览器只能再次改名,这才有了现在家喻户晓的 Firefox火狐。
那时,[Mozilla 说][17]“我们在过去一年里学到了很多关于起名的知识(不是因为我们愿意才学的)。我们这次非常谨慎地研究了这个名字,确保将来不会再出什么幺蛾子了。我们已经开始在美国专利商标局注册新商标的流程。”
![Mozilla Firefox 1.0][18]

Firefox 1.0[照片致谢][19]
第一个正式的 Firefox 版本是 [0.8][20],发布于 2004 年 2 月 8 日。紧接着11 月 9 日他们发布了 1.0 版本2.0 和 3.0 版本则分别在 2006 年 10 月和 2008 年 6 月问世。每个大版本更新都带来了很多新的特性和提升。从很多角度上讲Firefox 都领先 IE 不少,无论是功能还是技术先进性,即便如此 IE 还是有更多用户。
一切都在 Google 发布 Chrome 浏览器的时候改变了。在 Chrome 发布2008 年 9 月)前的几个月里Firefox 占有 30% 的[浏览器份额][21],而 IE 有超过 60%;而根据 StatCounter 的 [2019 年 1 月][22]报告Firefox 的份额已不到 10%,而 Chrome 超过了 70%。
趣味知识点:

和大家以为的不一样,火狐的 logo 里其实并没有狐狸,那是一只[红熊猫][23](小熊猫)。在中文里,“火狐狸”是红熊猫的一个昵称(译者:我真的从来没听说过)。
### 展望未来
正如上文所说Firefox 正在经历很长一段时间以来的份额低谷。曾经有那么一段时间,有很多浏览器基于 Firefox 开发,比如早期的 [Flock 浏览器][24];而现在,大多数浏览器都基于谷歌的技术了,比如 Opera 和 Vivaldi甚至连微软都放弃了自己的浏览器引擎转而[加入 Chromium 阵营][25]。
这也许看起来和 Netscape 当年的辉煌形成了鲜明的对比。但让我们不要忘记 Firefox 已经取得的成就:一群来自世界各地的程序员,就这么开发出了这个星球上份额第二大的浏览器。他们在微软垄断如日中天的时候拿下过 30% 的份额,如今他们也能再做到一次。无论如何,我们开源社区都坚定地站在他们身后。
对抗垄断正是我使用 Firefox 的[众多原因之一][26]。Mozilla 靠着改头换面的 [Firefox Quantum][27] 赢回了一些份额,我相信他们还能继续向上攀爬。
你还想了解 Linux 和开源历史上的什么其他事件?欢迎在评论区告诉我们。
如果你觉得这篇文章不错,请大方地在社交媒体上分享,比如 Hacker News 或者 [Reddit][28]。(译者:可惜 Reddit 已经是不存在的网站了)
--------------------------------------------------------------------------------
via: https://itsfoss.com/history-of-firefox
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Marc_Andreessen
[2]: https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
[5]: http://viola.org/
[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
[8]: http://www.davetitus.com/mozilla/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
[10]: https://www.marketwatch.com/story/netscape-ipo-ignited-the-boom-taught-some-hard-lessons-20058518550
[11]: https://en.wikipedia.org/wiki/Browser_wars
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?resize=800%2C450&ssl=1
[13]: https://web.archive.org/web/20021001071727/wp.netscape.com/newsref/pr/newsrelease558.html
[14]: https://en.wikipedia.org/wiki/Gecko_(software)
[15]: http://news.cnet.com/2100-1023-218360.html
[16]: https://web.archive.org/web/20050618000315/http://www.mozilla.org/roadmap/roadmap-02-Apr-2003.html
[17]: https://www-archive.mozilla.org/projects/firefox/firefox-name-faq.html
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firefox-1.jpg?ssl=1
[19]: https://www.iceni.com/blog/firefox-1-0-introduced-2004/
[20]: https://en.wikipedia.org/wiki/Firefox_version_history
[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
[23]: https://en.wikipedia.org/wiki/Red_panda
[24]: https://en.wikipedia.org/wiki/Flock_(web_browser)
[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
[26]: https://itsfoss.com/why-firefox/
[27]: https://itsfoss.com/firefox-quantum-ubuntu/
[28]: http://reddit.com/r/linuxusersgroup
[29]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/mozilla-firefox.jpg?fit=800%2C450&ssl=1

关于 /dev/urandom 的流言终结
======
有很多关于 /dev/urandom 和 /dev/random 的流言在坊间不断流传。流言终究是流言。
本篇文章针对的都是近年来的 Linux 操作系统,其它类 Unix 操作系统不在讨论范围内。
### /dev/urandom 不安全。加密用途必须使用 /dev/random。
事实:/dev/urandom 才是类 Unix 操作系统下推荐用于密码学用途的随机数来源。
### /dev/urandom 是伪随机数生成器PRNG而 /dev/random 是“真”随机数生成器。

事实它们两者本质上用的是同一种 CSPRNG密码学安全的伪随机数生成器。它们之间细微的差别和“真”不“真”随机完全无关。
### /dev/random 在任何情况下都是密码学应用更好的选择。即便 /dev/urandom 也同样安全,我们还是不应该用它。

事实:/dev/random 有个很恶心人的问题:它是阻塞的。(译者:读取时它会一直等待,直到内核认为熵足够了才返回)
### 但阻塞不是好事吗!/dev/random 只会给出电脑收集的信息熵足以支持的随机量。/dev/urandom 在用完了所有熵的情况下还会不断吐不安全的随机数给你。
事实:这是误解。就算我们不考虑应用层面后续对随机种子的使用方式,“用完信息熵池”这个概念本身就不存在。仅仅 256 位的熵,就足以在很长、很长的一段时间里生成计算上安全的随机数了。
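想直观感受一下 256 位有多“小”,可以直接取出来看看(假设你在 Linux 下,并且装有 `xxd`

```
$ head -c 32 /dev/urandom | xxd -p   # 32 字节 = 256 位,这点熵就够用上很久很久了
```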
问题的关键还在后头:/dev/random 怎么知道系统还有多少可用的信息熵?接着看!
### 但密码学家老是讨论重新选种子re-seeding。这难道不和上一条冲突吗
事实:你说的也没错!某种程度上吧。确实,随机数生成器一直在使用系统信息熵的状态重新选种。但这么做(一部分)是因为别的原因。
这样说吧,我没有说引入新的信息熵是坏的。更多的熵肯定更好。我只是说在熵池低的时候阻塞是没必要的。
### 好,就算你说的都对,但是 /dev/(u)random 的 man 页面和你说的也不一样啊!到底有没有专家同意你说的这堆啊?
事实:其实 man 页面和我说的并不冲突。它看似在说 /dev/urandom 对密码学用途来说不安全,但如果你真的理解那堆密码学术语,你就知道它说的并不是这个意思。
man 页面确实说在一些情况下推荐使用 /dev/random我觉得也没问题但绝对不是必须的但它同样推荐在大多数“一般”的密码学应用下使用 /dev/urandom。
虽然诉诸权威一般来说不是好事,但在密码学这么严肃的事情上,和专家统一意见是很有必要的。
所以说呢,还真有一些专家的意见和我是一致的:/dev/urandom 就应该是类 Unix 操作系统下密码学应用的首选。显然,是他们的观点说服了我,而不是反过来。
难以相信吗?觉得我肯定错了?读下去看我能不能说服你。
我尝试不讲太高深的东西,但是有两点内容必须先提一下才能让我们接着论证观点。
首先,什么是随机性?或者更准确地说:我们在探讨什么样的随机性?
另外一点很重要的是,我并没有以说教的态度写这篇文章。我写它,是为了日后讨论起来的时候可以直接指给别人看,毕竟它比 140 字长(译者:指推特的字数限制),这样我就不用一遍遍重复我的观点了。而且,能把论点磨炼成一篇文章,本身也很有助于将来的讨论。
并且我非常乐意听到不一样的观点。但我认为,单单说一句 “/dev/urandom 坏”是不够的,你得能指出到底哪里有问题,并且剖析它们。
### 你是在说我笨?!
绝对没有!
事实上我自己也相信了 “/dev/urandom 不安全” 好些年。这几乎不是我们的错,因为那么多德高望重的人在 Usenet、论坛、推特上跟我们重复这个观点甚至连 man 页面本身都说得似是而非。我们当年又怎么会去质疑 “信息熵太低了” 这种看上去就很让人信服的说法呢?
整个流言之所以如此广为流传不是因为人们太蠢,而是因为但凡有点关于信息熵和密码学概念的人都会觉得这个说法很有道理。直觉似乎都在告诉我们这流言讲的很有道理。很不幸直觉在密码学里通常不管用,这次也一样。
### 真随机
什么叫一个随机变量是“真随机的”?
我不想把讨论搞得太复杂,以至于变成哲学范畴的东西。这种讨论很容易走偏,因为对于随机模型大家见仁见智,讨论很快就会变得毫无意义。
在我看来,真随机的“试金石”是量子效应:一个光子穿过或不穿过一个 50% 透射率的半透镜,或者观察一个放射性粒子的衰变。这类东西是现实世界里最接近真随机的东西。当然,有些人不相信这类过程是真随机的,或者认为这个世界根本不存在任何随机性。这个就百家争鸣了,我也不好多说什么。
密码学家一般都会通过不去讨论什么是“真随机”来避免这种争论。他们更关心的是不可预测性。只要没有任何方法能猜出下一个随机数就可以了。所以当你以密码学应用为前提讨论一个随机数好不好的时候,在我看来这才是最重要的。
无论如何,我不怎么关心“哲学上安全”的随机数,这也包括别人嘴里的“真”随机数。
### 两种安全,一种有用
但就让我们退一步说,你有了一个“真”随机变量。你下一步做什么呢?
你把它们打印出来然后挂在墙上,以此展示量子宇宙的美与和谐?牛逼!我很理解你。
但是等等,你说你要用他们?做密码学用途?额,那这就废了,因为这事情就有点复杂了。
事情是这样的:你那些真随机的、量子力学加持的随机数,即将被用在不那么理想的现实世界算法里。
因为我们使用的大多数算法并不是**信息论意义上安全的**,它们只能提供**计算意义上的安全**。我能想到的例外为数不多,只有 Shamir 密钥共享和一次性密码本one-time pad算法而且就算前者确实名副其实如果你真的打算用它的话后者也毫无可行性可言。
但所有那些大名鼎鼎的密码学算法AES、RSA、Diffie-Hellman、椭圆曲线还有所有那些加密软件包OpenSSL、GnuTLS、Keyczar、你的操作系统的加密 API都仅仅是计算意义上安全的。
那区别是什么呢?信息论上安全的算法肯定是安全的,句号;其它算法都有可能在理论上被拥有无限计算力的穷举破解。我们依然愉快地使用它们,是因为全世界的计算机加起来,也不可能在宇宙的年龄之内把它们破解出来,至少现在是这样。而这就是我们这篇文章里所谈论的那种“不安全”。
除非哪个聪明的家伙破解了算法本身——在只需要极少量计算力的情况下。这也是每个密码学家梦寐以求的圣杯:破解 AES 本身,破解 RSA 算法本身。
所以现在我们来到了更底层的东西:随机数生成器,你坚持要“真随机”而不是“伪随机”。但是没过一会儿你的真随机数就被喂进了你极为鄙视的伪随机算法里了!
真相是,如果我们最先进的哈希算法或者最先进的块加密被破解了,你拿到的随机数是不是“哲学上不安全”已经无所谓了,因为反正你也没有安全的应用方法了。
所以,把仅仅是计算意义上安全的随机数,喂给仅仅是计算意义上安全的算法,就足够了。换而言之,用 /dev/urandom。
### Linux 随机数生成器的构架
#### 一种错误的看法
你对内核的随机数生成器的理解很可能是像这样的:
![image: mythical structure of the kernel's random number generator][1]
“真随机数”,尽管可能有点瑕疵,进入操作系统后,它的熵立刻被加进内部的熵计数器;然后,经过去偏和“漂白”之后,它进入内核的熵池,/dev/random 和 /dev/urandom 再从池里生成随机数。
“真”随机数生成器 /dev/random 直接从池里选出随机数:如果熵计数器表示能满足需要的数字大小,那就吐出数字并且减少熵计数;如果不够的话,它会阻塞程序,直至有足够的熵进入系统。

这里很重要的一环是,/dev/random 几乎是把那些进入系统的随机性不经扭曲地直接吐了出来。
而对 /dev/urandom 来说,事情是一样的。除了当没有足够的熵的时候,它不会阻塞,而会从一直在运行的伪随机数生成器里吐出“底质量”的随机数。这个 CSPRNG 只会用“真随机数”生成种子一次(或者好几次,这不重要),但你不能特别相信它。
在这种对随机数生成的理解下,很多人会觉得在 Linux 下尽量避免 /dev/urandom 看上去有那么点道理。
因为要么你有足够多的熵,你会相当于用了 /dev/random。要么没有那你就会从几乎没有高熵输入的 CSPRNG 那里得到一个低质量的随机数。
看上去很邪恶是吧?很不幸的是这种看法是完全错误的。实际上,随机数生成器的构架更像是这样的。
#### 更好地简化
##### Linux 4.8 之前
![image: actual structure of the kernel's random number generator before Linux 4.8][2]
这是个很粗糙的简化。实际上不止一个,而是有三个熵池:一个主池,一个给 /dev/random还有一个给 /dev/urandom后两者依靠从主池里获取熵。这三个池都有各自的熵计数器但二级池后两个的计数器基本都在 0 附近,“新鲜”的熵总是在需要的时候从主池流过来。同时,系统里还有很多混合和回流在进行。整个过程对这篇文章来说过于复杂了,我们跳过。
但你看到最大的区别了吗CSPRNG 并不是和随机数生成器并排运行、只在 /dev/urandom 需要输出而熵又不够时才来凑数的。CSPRNG 是整个随机数生成过程的内部组件之一。从来就没有什么 /dev/random 直接把池里纯粹的随机性吐出去。每个随机源的输入都会在 CSPRNG 里被充分混合和散列,这一切都发生在它们实际变成随机数、被 /dev/urandom 或者 /dev/random 吐出去之前。
另外一个重要的区别是,这里根本没有熵计数器什么事,有的只是预估。一个源给你的熵的量,并不是一个能明确直接得到的数字,你得去预估它。注意,如果你预估得太乐观,那 /dev/random 最重要的特性——只给出熵允许的随机量——就荡然无存了。很不幸,预估熵的量是很困难的。
Linux 内核只使用事件的到达时间来预估熵的量:它通过多项式插值(一种模型)来预估实际的到达时间有多“出乎意料”。这种多项式插值的方法到底是不是预估熵量的好方法,本身就是个问题;硬件情况会不会以某种特定的方式影响到达时间,也是个问题;而所有硬件的采样率同样是个问题,因为它基本上直接决定了随机数到达时间的颗粒度。
说到最后,至少现在看来,内核的熵预估还是不错的,这也意味着它比较保守。至于它具体有多好,那种讨论就超出我的脑容量了。就算这样,如果你坚持不想在熵不够的情况下吐出随机数,那你看到这里可能还会有一丝紧张。我自己是睡得很香的,因为我并不关心熵预估的事。
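顺带一提,内核当前的熵预估值可以直接从 proc 文件系统里读出来(主流 Linux 上都有这个文件,下面的数值只是示例):

```
$ cat /proc/sys/kernel/random/entropy_avail   # 内核对当前可用熵的预估,单位是位
3745
```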
最后强调一下重点:/dev/random 和 /dev/urandom 都是由同一个 CSPRNG 供给的。只有在它们各自的熵池耗尽(根据某种预估标准)时,它们的行为才会不同:/dev/random 阻塞,/dev/urandom 不阻塞。
##### Linux 4.8 以后
在 Linux 4.8 里,/dev/random 和 /dev/urandom 的等价性被放弃了。现在 /dev/urandom 的输出不来自于熵池,而是直接从 CSPRNG 来。
![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]
我们很快会理解为什么这不是一个安全问题。
### 阻塞有什么问题?
你有没有需要等着 /dev/random 来吐随机数?比如在虚拟机里生成一个 PGP 密钥?或者访问一个在生成会话密钥的网站?
这些都是问题。阻塞本质上会降低可用性,换而言之,你的系统不干你让它干的事情。不用我说,这是不好的。要是它不能正常工作,你干嘛还要搭建它呢?
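想亲自体会一下阻塞,可以对比下面两条命令(仅作示意:在较新的内核上,/dev/random 完成初始化之后的行为已经和 /dev/urandom 一致,可能观察不到阻塞):

```
$ time head -c 64 /dev/urandom > /dev/null   # 立刻返回
$ time head -c 64 /dev/random > /dev/null    # 在旧内核上,熵预估不足时会卡住
```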
我在工厂自动化领域做过和安全相关的系统。猜猜看,安全系统失效的主要原因是什么?是被人为绕过。就这么简单。很多安全措施的流程会让工人恼火,比如时间太长,或者太不方便。你要知道,人非常善于找捷径来“解决”问题。
但其实有个更深刻的问题:人们不喜欢被打断。他们会找一些绕过的方法,把一些诡异的东西接在一起仅仅因为这样能用。一般人根本不知道什么密码学什么乱七八糟的,至少正常的人是这样吧。
为什么不干脆把对 `random()` 的调用移除掉?为什么不照着论坛上随便哪个人的指点,用一些诡异的 ioctl 来增加熵计数呢?为什么不干脆把 SSL 加密整个关掉算了呢?
到头来如果东西太难用的话,你的用户就会被迫开始做一些降低系统安全性的事情——你甚至不知道他们会做些什么。
我们很容易低估可用性之类的东西的重要性。毕竟安全第一,对吧?所以比起牺牲安全,不可用、难用、不方便都是次要的?
这种二元对立的想法是错的。阻塞不一定就安全了。正如我们看到的,/dev/urandom 直接从 CSPRNG 里给你一样好的随机数。用它不好吗!
### CSPRNG 没问题
现在情况听上去很沧桑。如果连高质量的 /dev/random 都是从一个 CSPRNG 里来的,我们怎么敢在高安全性的需求上使用它呢?
实际上,“看上去随机”正是现存大多数密码学组件的基本要求。如果你观察一个密码学哈希的输出,它必须和随机的字符串不可区分,密码学家才会认可这个算法;如果你设计一个块加密,它的输出(在你不知道密钥的情况下)也必须和随机数据不可区分才行。
如果有人能用比暴力穷举更高效的方法破解一个加密算法,比如利用了某个 CSPRNG 伪随机的弱点那就又是老一套了一切都废了也别谈后面的了。块加密、哈希一切都和 CSPRNG 基于同样的数学基础。所以,别害怕,到头来都一样。
### 那熵池快空了的情况呢?
毫无影响。
加密算法的根基建立在攻击者不能预测输出上,只要最开始有足够的随机性(熵)就行了。一般的下限是 256 位,不需要更多了。

鉴于我们一直在很随意地使用“熵”这个概念,这里我就用“位”来量化随机性,希望读者不要太在意细节。像我们之前讨论的那样,内核的随机数生成器甚至没法精确地知道进入系统的熵的量,它只有一个预估,而且这个预估的准确性到底怎么样,也没人知道。
但这些都不重要。
### 重新选种
但如果熵这么不重要,为什么还要有新的熵一直被收进随机数生成器里呢?
djb [提到][4] 太多的熵甚至可能会起到反效果。
首先,一般不会这样。如果你有很多随机性可以拿来用,用就对了!
但随机数生成器时不时要重新选种还有别的原因:
想象一下如果有个攻击者获取了你随机数生成器的所有内部状态。这是最坏的情况了,本质上你的一切都暴露给攻击者了。
你已经凉了,因为攻击者可以计算出所有未来会被输出的随机数了。
但是,如果不断有新的熵被混进系统,那内部状态会再一次变得随机起来。所以,这样的设计让随机数生成器具备了某种“自愈”能力。
但这是在给内部状态引入新的熵,这和阻塞输出没有任何关系。
### random 和 urandom 的 man 页面
这两个 man 页面在吓唬程序员方面很有建树:
> 从 /dev/urandom 读取数据不会因为需要更多熵而阻塞。这样的结果是,如果熵池里没有足够多的熵,取决于驱动使用的算法,返回的数值在理论上有被密码学攻击的可能性。发动这样攻击的步骤并没有出现在任何公开文献当中,但这样的攻击从理论上讲是可能存在的。如果你的应用担心这类情况,你应该使用 /dev/random。
没有“公开的文献”描述,但是 NSA 的小卖部里肯定卖这种攻击手段是吧?如果你真的真的很担心(你应该很担心),那就用 /dev/random 然后所有问题都没了?
然而事实是,可能什么情报局有这种攻击,或者什么邪恶黑客组织找到了方法。但如果我们就直接假设这种攻击一定存在也是不合理的。
而且,就算你想图个安心,我也要给你泼盆冷水AES、SHA-3 或者其他常见的加密算法,同样没有“公开文献记述”的攻击手段。难道你也不用这几个加密算法了?这显然是可笑的。
我们再回到 man 页面:它说“使用 /dev/random”。但我们已经知道了虽然 /dev/urandom 不阻塞,它的随机数和 /dev/random 是从同一个 CSPRNG 里来的。
如果你真的需要信息论意义上安全的随机数(你不需要的,相信我),那才可能是你需要等待足够的熵进入 CSPRNG 的唯一理由。但即便如此,你也不能用 /dev/random因为它的输出同样来自那个 CSPRNG。
man 页面有毒,就这样。但至少它还稍稍挽回了一下自己:
> 如果你不确定该用 /dev/random 还是 /dev/urandom那你可能应该用后者。通常来说除了需要长期使用的 GPG/SSL/SSH 密钥以外,你总是应该使用 /dev/urandom。
行。我觉得没必要,但如果你真的要用 /dev/random 来生成 “长期使用的密钥”,用就是了也没人拦着!你可能需要等几秒钟或者敲几下键盘来增加熵,但没什么问题。
但求求你们,不要就因为“想更安全点”,就让一次连接邮件服务器的操作挂起半天。
### 正道
本篇文章里的观点显然在互联网上是“小众”的。但如果问问一个真正的密码学家,你很难找到一个认同阻塞 /dev/random 的人。
比如我们看看 [Daniel Bernstein][5](即 djb的说法

> 这种胡乱迷信的说法,我们密码学家可不负责。你想想,写 /dev/random man 页面的人好像同时相信:
>
> * (1) 我们不知道如何用一个 256-bit 长的 /dev/random 的输出来生成一个无限长的随机密钥串流(这是我们需要 /dev/urandom 吐出来的),但与此同时
> * (2) 我们却知道怎么用单个密钥来加密一条消息(这是 SSLPGP 之类干的事情)
>
>
>
> 对密码学家来说这甚至都不好笑了
再比如 [Thomas Pornin][6],他是我在 Stack Exchange 上见过的最乐于助人的人之一:
> 简单来说,是的。展开说,答案还是一样。/dev/urandom 生成的数据可以说和真随机完全无法区分,至少在现有科技水平下是这样。使用比 /dev/urandom “更好”的随机性毫无意义,除非你在使用极为罕见的“信息论安全”的加密算法。这肯定不是你的情况,不然你早就说了。
>
> urandom 的 man 页面多多少少有些误导人,或者干脆可以说是错的——特别是当它说 /dev/urandom 会“用完熵”以及 “/dev/random 是更好的”那几句话。
或者 [Thomas Ptacek][7],他不设计密码算法或者密码学系统,但他是一家名声在外的安全咨询公司的创始人,这家公司做过很多渗透测试,也破解过不少糟糕的密码学实现:
> 用 urandom。用 urandom。用 urandom。用 urandom。用 urandom。
### 没有完美
/dev/urandom 不是完美的,问题分两层:
在 Linux 上,不像 FreeBSD/dev/urandom 永远不阻塞。还记得吗?整个安全性取决于最开始的那一点随机性,也就是种子。
Linux 的 /dev/urandom 会很乐意给你吐点不怎么随机的随机数,甚至在内核有机会收集一丁点熵之前。什么时候有这种情况?当你系统刚刚启动的时候。
FreeBSD 的行为更正确点:/dev/random 和 /dev/urandom 是一样的,在系统启动的时候 /dev/random 会阻塞到有足够的熵为止,然后他们都再也不阻塞了。
与此同时Linux 实现了一个新的系统调用,最早由 OpenBSD 引入,叫 getentropy(2),在 Linux 下它叫 getrandom(2)。这个系统调用有着上述的正确行为:阻塞到有足够的熵为止,然后再也不阻塞。当然,这是个系统调用,而不是一个字符设备(译者:指它不在 /dev/ 下),所以它在 shell 或者别的脚本语言里没那么容易用到。这个系统调用自 Linux 3.17 起存在。
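正因为它是系统调用而不是设备文件,在 shell 里最省事的办法之一是借助某种语言的封装来调用它,比如 Python 对它的封装 os.getrandom自 Python 3.6 起存在):

```
$ python3 -c 'import os; print(os.getrandom(32).hex())'   # 默认行为:阻塞到初始熵就绪,之后永不阻塞
```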
在 Linux 上这个问题其实不太大,因为 Linux 发行版会在启动的过程中,把一点随机数(这发生在已经收集到一些熵之后,因为启动程序不会在按下电源的一瞬间就开始运行)保存到一个种子文件中,以便系统下次启动的时候读取。所以每次启动的时候,系统都会从上一次会话里带一点随机性过来。
显然这比不上在关机脚本里写入随机种子,因为那样显然会有更多的熵可以用。但这样做有个显而易见的好处:它不在乎系统有没有正确关机,比如系统可能崩溃过。
而且,这种做法在你真正第一次启动系统的时候也帮不上忙,不过好在系统安装器一般会写一个种子文件,所以基本上问题不大。
虚拟机是另外一层问题。用户喜欢克隆它们,或者把它们恢复到之前的某个状态,这种情况下种子文件就帮不到你了。
但解决方案依然和使用 /dev/random 没有关系,而是你应该正确地给每个克隆或者恢复的镜像重新生成种子文件。
### 太长不看tl;dr
别问,问就是用 /dev/urandom !
--------------------------------------------------------------------------------
via: https://www.2uo.de/myths-about-urandom/
作者:[Thomas Hühn][a]
译者:[Moelf](https://github.com/Moelf)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2uo.de/
[1]:https://www.2uo.de/myths-about-urandom/structure-no.png
[2]:https://www.2uo.de/myths-about-urandom/structure-yes.png
[3]:https://www.2uo.de/myths-about-urandom/structure-new.png
[4]:http://blog.cr.yp.to/20140205-entropy.html
[5]:http://www.mail-archive.com/cryptography@randombit.net/msg04763.html
[6]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key/3939#3939
[7]:http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Take to the virtual skies with FlightGear)
[#]: via: (https://opensource.com/article/19/1/flightgear)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
使用 FlightGear 进入虚拟天空
======
你梦想驾驶飞机么?试试开源飞行模拟器 FlightGear 吧。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/flightgear_cockpit_plane_sky.jpg?itok=LRy0lpOS)
如果你曾梦想驾驶飞机,你会喜欢 [FlightGear][1] 的。它是一个功能齐全的[开源][2]飞行模拟器,可在 Linux、MacOS 和 Windows 中运行。
FlightGear 项目始于 1996 年,原因是对商业飞行模拟程序的不满,因为这些程序无法扩展。它的目标是创建一个复杂、强大、可扩展、开放的飞行模拟器框架,来用于学术界和飞行员培训,或者任何想要玩飞行模拟场景的人。
### 入门
FlightGear 的硬件要求适中,只需要一块支持 OpenGL、能实现流畅帧率的加速 3D 显卡。它在我配备 i5 处理器和仅 4GB 内存的 Linux 笔记本上运行良好。它的文档包括[在线手册][3]、一个带有[用户][5]和[开发者][6]门户的 [wiki][4],还有大量的教程(例如教你如何操作它的默认飞机 [Cessna 172p][7] 的教程)。
在 [Fedora][8] 和 [Ubuntu][9] Linux 中它很容易安装。Fedora 用户可以参考 [Fedora 安装页面][10]来运行 FlightGear。
在 Ubuntu 18.04 中,我需要安装一个仓库:
```
$ sudo add-apt-repository ppa:saiarcot895/flightgear
$ sudo apt-get update
$ sudo apt-get install flightgear
```
安装完成后,我从 GUI 启动它,但你也可以通过输入以下命令从终端启动应用:
```
$ fgfs
```
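如果你不想每次都通过图形界面来选择,也可以在命令行里直接指定机型和机场(这两个参数名来自 FlightGear 的文档,机场代码和机型仅为示例):

```
$ fgfs --aircraft=c172p --airport=KSFO
```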
### 配置 FlightGear
应用窗口左侧的菜单提供配置选项。
![](https://opensource.com/sites/default/files/uploads/flightgear_menu.png)
**Summary** 返回应用的主页面。
**Aircraft** 显示你已安装的飞机,并提供了从 FlightGear 的默认“机库”中安装多达 539 种其他飞机的选项。我安装了 Cessna 150L、Piper J-3 Cub 和 Bombardier CRJ-700。一些飞机包括 CRJ-700带有教你如何驾驶商用喷气式飞机的教程我发现这些教程内容翔实且准确。
![](https://opensource.com/sites/default/files/uploads/flightgear_aircraft.png)
要选择驾驶的飞机,请将其高亮显示,然后单击菜单底部的 **Fly!**。我选择了默认的 Cessna 172p 并发现驾驶舱的刻画非常准确。
![](https://opensource.com/sites/default/files/uploads/flightgear_cockpit-view.png)
默认机场是檀香山,但你可以在 **Location** 菜单中输入你最喜欢机场的 [ICAO 机场代码][11]进行修改。我找到了一些小型的本地无塔机场,比如纽约州的 Olean 和 Dunkirk也找到了包括 Buffalo、O'Hare 和 Raleigh 在内的大型机场,甚至可以选择特定的跑道。
**Environment** 下,你可以调整一天中的时间、季节和天气。模拟包括高级天气建模和从 [NOAA][12] 下载当前天气的能力。
**Settings** 提供在暂停模式中开始模拟的选项。同样在设置中,你可以选择多人模式,这样你就可以与 FlightGear 支持者的全球服务器网络上的其他玩家一起“飞行”。你必须有比较快速的互联网连接来支持此功能。
**Add-ons** 菜单允许你下载飞机和其他场景。
### 开始飞行
为了让我的 Cessna “起飞”,我使用了罗技操纵杆,它用起来不错。你可以使用顶部 **File** 菜单中的选项来校准操纵杆。
总的来说,我发现这个模拟非常准确,图形也很棒。你自己试试 FlightGear 吧,我想你会发现它是一个非常有趣而完整的模拟软件。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/1/flightgear
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: http://home.flightgear.org/
[2]: http://wiki.flightgear.org/GNU_General_Public_License
[3]: http://flightgear.sourceforge.net/getstart-en/getstart-en.html
[4]: http://wiki.flightgear.org/FlightGear_Wiki
[5]: http://wiki.flightgear.org/Portal:User
[6]: http://wiki.flightgear.org/Portal:Developer
[7]: http://wiki.flightgear.org/Cessna_172P
[8]: http://rpmfind.net/linux/rpm2html/search.php?query=flightgear
[9]: https://launchpad.net/~saiarcot895/+archive/ubuntu/flightgear
[10]: https://apps.fedoraproject.org/packages/FlightGear/
[11]: https://en.wikipedia.org/wiki/ICAO_airport_code
[12]: https://www.noaa.gov/

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Sweet Home 3D: An open source tool to help you decide on your dream home)
[#]: via: (https://opensource.com/article/19/3/tool-find-home)
[#]: author: (Jeff Macharyas (Community Moderator) )
Sweet Home 3D一个帮助你挑选梦想住宅的开源工具
======
室内设计应用可以轻松渲染出你喜欢的房子,不管是真实存在的还是想象中的。
![Houses in a row][1]
我最近接受了一份在弗吉尼亚州的新工作。由于我妻子得留在纽约工作,并照看我们在纽约的房子直到卖掉,为我们俩和我们的猫寻找新住处的任务就落到了我头上。这可是一所在我们搬进去之前她都见不到的房子!
我和一个房地产经纪人签了约,看了几处房子,拍了许多照片,写下了潦草的笔记。晚上,我会把照片上传到 Google Drive 文件夹中,我和妻子再通过手机同时查看,与此同时我还得努力记住房间是在右边还是左边、有没有吊扇,等等。
由于这是一种相当繁琐且不太准确的展示方式,我于是去寻找一个开源解决方案,以便更好地展示我们未来的梦想之家会是什么样子,而不必依赖我模糊的记忆和模糊的照片。
[Sweet Home 3D][2] 完全满足了我的要求。Sweet Home 3D 可在 Sourceforge 上获取,并在 GNU 通用公共许可证下发布。它的[网站][3]信息非常丰富我能够立即启动并运行。Sweet Home 3D 由总部位于巴黎的 eTeks 的 Emmanuel Puybaret 开发。
### 绘制内墙
我将 Sweet Home 3D 下载到我的 MacBook Pro 上,并添加了 PNG 版本的平面楼层图,用作背景底图。
在这里,用 Rooms 面板沿着底图描出轮廓并设置“真实房间”的尺寸是一件很简单的事情。画好房间后,我添加了墙壁,墙壁的颜色、厚度、高度等都可以定制。
![Sweet Home 3D floorplan][5]
画完“内墙”后,我从网站下载了各种“家具”,其中既包括真正的家具,也包括门、窗、架子等。每个项目都以 ZIP 文件的形式下载,因此我建了一个文件夹存放所有解压后的文件。每件家具都可以自定义,重复的物品(比如门)也可以方便地复制粘贴到需要的地方。
在我将所有墙壁和门窗都布置完后,我就使用应用的 3D 视图浏览房屋。根据照片和记忆,我对所有物体进行了调整直到接近房屋的样子。我可以花更多时间添加纹理,附属家具和物品,但这已经达到了我需要的程度。
![Sweet Home 3D floorplan][7]
完成之后,我将计划导出为 OBJ 文件,它可在各种程序中打开,例如 [Blender][8] 和 Mac 上的 Preview方便旋转房屋并从各个角度查看。视频功能最有用我可以创建一个起点然后在房子中绘制一条路径并记录“旅程”。我将视频导出为 MOV 文件,并使用 QuickTime 在 Mac 上打开和查看。
我的妻子能够(几乎)看到所有我看到的东西,我们甚至可以在搬家前就开始布置家具。现在,我所要做的就是把行李装上卡车,搬去新家。
Sweet Home 3D 在我的新工作中也很有用。我一直在寻找一种改进学院建筑地图的方法,并打算用 [Inkscape][9] 或 Illustrator 或其他软件重新绘制它。但是,既然我手头有平面图,我可以用 Sweet Home 3D 创建它的 3D 版本并上传到我们的网站,方便大家找地方。
### 开源犯罪现场?
一件有趣的事:根据 [Sweet Home 3D 的博客][10]“法国法医办公室科学警察最近选择 Sweet Home 3D 作为绘制现场布局和犯罪现场平面图的工具。这是法国政府建议优先考虑自由开源解决方案的一个具体应用。”
这是公民和政府利用开源解决方案创建个人项目、破案以及构建世界的又一例证。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/tool-find-home
作者:[Jeff Macharyas (Community Moderator)][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
[2]: https://sourceforge.net/projects/sweethome3d/
[3]: http://www.sweethome3d.com/
[4]: /file/426441
[5]: https://opensource.com/sites/default/files/uploads/virginia-house-create-screenshot.png (Sweet Home 3D floorplan)
[6]: /file/426451
[7]: https://opensource.com/sites/default/files/uploads/virginia-house-3d-screenshot.png (Sweet Home 3D floorplan)
[8]: https://opensource.com/article/18/5/blender-hotkey-cheat-sheet
[9]: https://opensource.com/article/19/1/inkscape-cheat-sheet
[10]: http://www.sweethome3d.com/blog/2018/12/10/customization_for_the_forensic_police.html