Merge pull request #2 from LCTT/master

Update from LCTT
This commit is contained in:
chen ni 2019-06-12 16:56:21 +08:00 committed by GitHub
commit 2d6fad6272
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
119 changed files with 10210 additions and 4367 deletions


@ -0,0 +1,484 @@
从零写一个时间序列数据库
==================
编者按Prometheus 是 CNCF 旗下的开源监控告警解决方案,它已经成为 Kubernetes 生态圈中的核心监控系统。本文作者 Fabian Reinartz 是 Prometheus 的核心开发者,这篇文章是其于 2017 年写的一篇关于 Prometheus 中的时间序列数据库的设计思考,虽然写作时间有点久了,但是其中的考虑和思路非常值得参考。长文预警,请坐下来慢慢品味。
---
![](https://img.linux.net.cn/data/attachment/album/201906/11/180646l7cqbhazqs7nsqsn.jpg)
我从事监控工作,特别是 [Prometheus][2] 方面的工作,这个监控系统包含了一个自定义的时间序列数据库,并且集成在 [Kubernetes][3] 上。
在许多方面上 Kubernetes 展现出了 Prometheus 所有的设计用途。它使得<ruby>持续部署<rt>continuous deployments</rt></ruby>、<ruby>弹性伸缩<rt>auto scaling</rt></ruby>和其他<ruby>高动态环境<rt>highly dynamic environments</rt></ruby>下的功能可以轻易地使用。查询语句和操作模型以及其它概念决策使得 Prometheus 特别适合这种环境。但是,如果监控的工作负载动态程度显著地增加,这就会给监控系统本身带来新的压力。考虑到这一点,我们就可以特别致力于在高动态或<ruby>瞬态服务<rt>transient services</rt></ruby>环境下提升它的表现,而不是回过头来解决 Prometheus 已经解决得很好的问题。
Prometheus 的存储层在历史上展现出了卓越的性能,单一服务器就能够在数百万个时间序列中以每秒多达一百万个样本的速度摄入数据,同时只占用了很少的磁盘空间。尽管当前的存储做得很好,但我依旧提出一个新设计的存储子系统,它可以修正现存解决方案的缺点,并具备处理更大规模数据的能力。
> 备注:我没有数据库方面的背景。我说的东西可能是错的并让你误入歧途。你可以在 Freenode 的 #prometheus 频道上对我fabxc提出你的批评。
## 问题,难题,问题域
首先,快速地概览一下我们要完成的东西和它的关键难题。我们可以先看一下 Prometheus 当前的做法 ,它为什么做的这么好,以及我们打算用新设计解决哪些问题。
### 时间序列数据
我们有一个收集一段时间数据的系统。
```
identifier -> (t0, v0), (t1, v1), (t2, v2), (t3, v3), ....
```
每个数据点是一个时间戳和值的元组。在监控中,时间戳是一个整数,值可以是任意数字。64 位浮点数对于计数器和测量值来说是一个好的表示方法,因此我们将会使用它。一系列严格单调递增的时间戳数据点是一个序列,它由标识符所引用。我们的标识符是一个带有<ruby>标签维度<rt>label dimensions</rt></ruby>字典的度量名称。标签维度划分了单一指标的测量空间。每一个指标名称加上一个唯一标签集就成了它自己的时间序列,它有一个与之关联的<ruby>数据流<rt>value stream</rt></ruby>。
这是一个典型的<ruby>序列标识符<rt>series identifier</rt></ruby>集,它是统计请求指标的一部分:
```
requests_total{path="/status", method="GET", instance="10.0.0.1:80"}
requests_total{path="/status", method="POST", instance="10.0.0.3:80"}
requests_total{path="/", method="GET", instance="10.0.0.2:80"}
```
让我们简化一下表示方法:度量名称可以当作另一个维度标签,在我们的例子中是 `__name__`。对于查询语句,可以对它进行特殊处理,但与我们存储的方式无关,我们后面也会见到。
```
{__name__="requests_total", path="/status", method="GET", instance="10.0.0.1:80"}
{__name__="requests_total", path="/status", method="POST", instance="10.0.0.3:80"}
{__name__="requests_total", path="/", method="GET", instance="10.0.0.2:80"}
```
我们想通过标签来查询时间序列数据。在最简单的情况下,使用 `{__name__="requests_total"}` 选择所有属于 `requests_total` 指标的数据。对于所有选择的序列,我们在给定的时间窗口内获取数据点。
在更复杂的语句中,我们或许想一次性选择满足多个标签的序列,并且表示比相等条件更复杂的情况。例如,非语句(`method!="GET"`)或正则表达式匹配(`method=~"PUT|POST"`)。
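为了更直观地理解上述数据模型和标签选择,下面给出一个极简的 Python 草图(仅为概念演示,并非 Prometheus 的实际实现):它把每个序列表示为一个标签字典加上 `(时间戳, 值)` 的样本列表,并实现了一个只支持相等匹配的选择器。
```
# 极简的数据模型草图:仅用于说明概念,并非 Prometheus 的实际实现
series = [
    {
        "labels": {"__name__": "requests_total", "path": "/status",
                   "method": "GET", "instance": "10.0.0.1:80"},
        "samples": [(1000, 1.0), (1015, 2.0), (1030, 5.0)],  # (时间戳, 值)
    },
    {
        "labels": {"__name__": "requests_total", "path": "/",
                   "method": "POST", "instance": "10.0.0.3:80"},
        "samples": [(1000, 3.0), (1015, 4.0)],
    },
]

def select(all_series, matchers):
    """返回标签满足所有相等匹配条件的序列。"""
    return [s for s in all_series
            if all(s["labels"].get(k) == v for k, v in matchers.items())]

# 等价于选择器 {__name__="requests_total", method="GET"}
for s in select(series, {"__name__": "requests_total", "method": "GET"}):
    print(s["labels"], s["samples"])
```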
这些在很大程度上定义了存储的数据和它的获取方式。
### 纵与横
在简化的视图中,所有的数据点可以分布在二维平面上。水平维度代表着时间,序列标识符则沿纵轴展开。
```
series
^
| . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="GET"}
| . . . . . . . . . . . . . . . . . . . . . . {__name__="request_total", method="POST"}
| . . . . . . .
| . . . . . . . . . . . . . . . . . . . ...
| . . . . . . . . . . . . . . . . . . . . .
| . . . . . . . . . . . . . . . . . . . . . {__name__="errors_total", method="POST"}
| . . . . . . . . . . . . . . . . . {__name__="errors_total", method="GET"}
| . . . . . . . . . . . . . .
| . . . . . . . . . . . . . . . . . . . ...
| . . . . . . . . . . . . . . . . . . . .
v
<-------------------- time --------------------->
```
Prometheus 通过定期地抓取一组时间序列的当前值来获取数据点。我们从中获取到的实体称为目标。因此,写入模式完全地垂直且高度并发,因为来自每个目标的样本是独立摄入的。
这里提供一些测量的规模:单一 Prometheus 实例从数万个目标中收集数据点,每个数据点都暴露在数百到数千个不同的时间序列中。
在每秒采集数百万数据点这种规模下,批量写入是一个不能妥协的性能要求。在磁盘上分散地写入单个数据点会相当地缓慢。因此,我们想要按顺序写入更大的数据块。
对于旋转式磁盘,它的磁头始终得在物理上向不同的扇区上移动,这是一个不足为奇的事实。而虽然我们都知道 SSD 具有快速随机写入的特点,但事实上它不能修改单个字节,只能写入一页或更多页的 4KiB 数据量。这就意味着写入 16 字节的样本相当于写入满满一个 4KiB 的页。这一行为就是所谓的[写入放大][4],这种特性会损耗你的 SSD。因此它不仅影响速度而且还毫不夸张地会在几天或几周内破坏掉你的硬件。
关于此问题更深层次的资料,[“Coding for SSDs”系列][5]博客是极好的资源。让我们记住其中主要的结论:无论对于旋转式磁盘还是 SSD顺序写入和批量写入都是理想的写入模式。大道至简。
查询模式与写入模式有着明显的区别。我们可以查询单一序列的一个数据点,也可以对 10000 个序列查询一个数据点,还可以查询一个序列几周的数据点,甚至是 10000 个序列几周的数据点。因此在我们的二维平面上,查询范围不是完全水平或垂直的,而是二者形成矩形似的组合。
[记录规则][6]可以减轻已知查询的问题,但对于<ruby>点对点<rt>ad-hoc</rt></ruby>查询来说并不是一个通用的解决方法。
我们知道我们想要批量地写入,但我们得到的仅仅是一系列垂直数据点的集合。当查询一段时间窗口内的数据点时,我们不仅很难弄清楚在哪才能找到这些单独的点,而且不得不从磁盘上大量随机的地方读取。也许一条查询语句会有数百万的样本,即使在最快的 SSD 上也会很慢。读入也会从磁盘上获取更多的数据而不仅仅是 16 字节的样本。SSD 会加载一整页HDD 至少会读取整个扇区。不论哪一种,我们都在浪费宝贵的读取吞吐量。
因此在理想情况下,同一序列的样本将按顺序存储,这样我们就能通过尽可能少的读取来扫描它们。最重要的是,我们仅需要知道序列的起始位置就能访问所有的数据点。
显然,将收集到的数据写入磁盘的理想模式与能够显著提高查询效率的布局之间存在着明显的抵触。这是我们 TSDB 需要解决的一个基本问题。
#### 当前的解决方法
是时候看一下当前 Prometheus 是如何存储数据来解决这一问题的让我们称它为“V2”。
我们创建一个时间序列的文件,它包含所有样本并按顺序存储。因为每几秒附加一个样本数据到所有文件中非常昂贵,我们在内存中打包 1KiB 大小的样本序列的数据块一旦打包完成就附加这些数据块到单独的文件中。这一方法解决了大部分问题。写入目前是批量的样本也是按顺序存储的。基于同一序列的样本相对之前的数据仅发生非常小的改变这一特性它还支持非常高效的压缩格式。Facebook 在他们 Gorilla TSDB 上的论文中描述了一个相似的基于数据块的方法,并且[引入了一种压缩格式][7],它能够将 16 字节的样本平均压缩到 1.37 字节。V2 存储使用了包含 Gorilla 变体等在内的各种压缩格式。
```
+----------+---------+---------+---------+---------+ series A
+----------+---------+---------+---------+---------+
+----------+---------+---------+---------+---------+ series B
+----------+---------+---------+---------+---------+
. . .
+----------+---------+---------+---------+---------+---------+ series XYZ
+----------+---------+---------+---------+---------+---------+
chunk 1 chunk 2 chunk 3 ...
```
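为了体会这类压缩为什么有效,下面是一个高度简化的 Python 草图,演示“时间戳做增量编码、数值与前一个值按位异或”的基本思路。它与 Gorilla 论文或 Prometheus 实际使用的位级格式相差很远,只是用来展示“相邻样本差异很小”这一特性如何被利用:
```
import struct

# 简化草图:时间戳保存增量,浮点值保存与前一个值的按位 XOR。
# 真实的 Gorilla/Prometheus 格式是位级编码,这里仅演示核心思想。
def float_bits(x):
    return struct.unpack(">Q", struct.pack(">d", x))[0]

def encode(samples):
    out = []
    prev_t = prev_bits = None
    for t, v in samples:
        bits = float_bits(v)
        if prev_t is None:
            out.append((t, bits))                       # 第一个样本原样保存
        else:
            out.append((t - prev_t, bits ^ prev_bits))  # 增量 + XOR
        prev_t, prev_bits = t, bits
    return out

print(encode([(1000, 42.0), (1015, 42.0), (1030, 43.0)]))
# 抓取间隔固定时增量几乎恒定,值变化不大时 XOR 结果多为 0于是大部分位都可以省掉。
```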
尽管基于块存储的方法非常棒,但为每个序列保存一个独立的文件会给 V2 存储带来麻烦,因为:
* 实际上,我们需要的文件比当前收集数据的时间序列数量要多得多。多出的部分在<ruby>序列分流<rt>Series Churn</rt></ruby>上。有几百万个文件,迟早会使用光文件系统中的 [inode][1]。这种情况我们只能通过重新格式化来恢复磁盘,这种方式是最具有破坏性的。我们通常不想为了适应一个应用程序而格式化磁盘。
* 即使是分块写入,每秒也会产生数千块的数据块并且准备持久化。这依然需要每秒数千次的磁盘写入。尽管通过为每个序列打包好多个块来缓解,但这反过来还是增加了等待持久化数据的总内存占用。
* 要保持所有文件打开来进行读写是不可行的。特别是因为 99% 的数据在 24 小时之后不再会被查询到。如果查询它,我们就得打开数千个文件,找到并读取相关的数据点到内存中,然后再关掉。这样做就会引起很高的查询延迟,数据块缓存加剧会导致新的问题,这一点在“资源消耗”一节另作讲述。
* 最终,旧的数据需要被删除,并且数据需要从数百万文件的头部删除。这就意味着删除实际上是写密集型操作。此外,循环遍历数百万文件并且进行分析通常会导致这一过程花费数小时。当它完成时,可能又得重新来过。喔天,继续删除旧文件又会进一步导致 SSD 产生写入放大。
* 目前所积累的数据块仅维持在内存中。如果应用崩溃,数据就会丢失。为了避免这种情况,内存状态会定期地保存在磁盘上,而这个间隔比我们能接受的数据丢失窗口要长得多。恢复检查点也会花费数分钟,导致很长的重启周期。
我们能够从现有的设计中学到的关键部分是数据块的概念,我们当然希望保留这个概念。最新的数据块会保持在内存中一般也是好的主意。毕竟,最新的数据会大量的查询到。
一个时间序列对应一个文件,这个概念是我们想要替换掉的。
### 序列分流
在 Prometheus 的<ruby>上下文<rt>context</rt></ruby>中,我们使用术语<ruby>序列分流<rt>series churn</rt></ruby>来描述一个时间序列集合变得不活跃,即不再接收数据点,取而代之的是出现一组新的活跃序列。
例如由给定微服务实例产生的所有序列都有一个相应的“instance”标签来标识其来源。如果我们为微服务执行了<ruby>滚动更新<rt>rolling update</rt></ruby>,并且为每个实例替换一个新的版本,序列分流便会发生。在更加动态的环境中,这些事情基本上每小时都会发生。像 Kubernetes 这样的<ruby>集群编排<rt>Cluster orchestration</rt></ruby>系统允许应用连续性的自动伸缩和频繁的滚动更新,这样也许会创建成千上万个新的应用程序实例,并且伴随着全新的时间序列集合,每天都是如此。
```
series
^
| . . . . . .
| . . . . . .
| . . . . . .
| . . . . . . .
| . . . . . . .
| . . . . . . .
| . . . . . .
| . . . . . .
| . . . . .
| . . . . .
| . . . . .
v
<-------------------- time --------------------->
```
所以即便整个基础设施的规模基本保持不变,过一段时间后数据库内的时间序列还是会成线性增长。尽管 Prometheus 很愿意采集 1000 万个时间序列数据,但要想在 10 亿个序列中找到数据,查询效果还是会受到严重的影响。
#### 当前解决方案
当前 Prometheus 的 V2 存储系统对所有当前保存的序列拥有基于 LevelDB 的索引。它允许查询语句含有给定的<ruby>标签对<rt>label pair</rt></ruby>,但是缺乏可伸缩的方法来从不同的标签选集中组合查询结果。
例如,从所有的序列中选择标签 `__name__="requests_total"` 非常高效,但是选择  `instance="A" AND __name__="requests_total"` 就有了可伸缩性的问题。我们稍后会重新考虑导致这一点的原因和能够提升查找延迟的调整方法。
事实上正是这个问题才催生出了对更好的存储系统的最初探索。Prometheus 需要为查找亿万个时间序列改进索引方法。
### 资源消耗
当试图扩展 Prometheus或其他任何事情真的资源消耗是永恒不变的话题之一。但真正困扰用户的并不是对资源的绝对渴求。事实上由于给定的需求Prometheus 管理着令人难以置信的吞吐量。问题更在于面对变化时的相对未知性与不稳定性。通过其架构设计V2 存储系统缓慢地构建了样本数据块这一点导致内存占用随时间递增。当数据块完成之后它们可以写到磁盘上并从内存中清除。最终Prometheus 的内存使用到达稳定状态。直到监测环境发生了改变——每次我们扩展应用或者进行滚动更新序列分流都会增加内存、CPU、磁盘 I/O 的使用。
如果变更正在进行,那么它最终还是会到达一个稳定的状态,但比起更加静态的环境,它的资源消耗会显著地提高。过渡时间通常为数个小时,而且难以确定最大资源使用量。
为每个时间序列保存一个文件这种方法也使得单个查询就很容易让 Prometheus 进程崩溃。当查询的数据没有缓存在内存中时查询的序列文件就会被打开然后将含有相关数据点的数据块读入内存。如果数据量超出内存可用量Prometheus 就会因 OOM 被杀死而退出。
在查询语句完成之后,加载的数据便可以被再次释放掉,但通常会缓存更长的时间,以便更快地查询相同的数据。后者看起来是件不错的事情。
最后,我们看看之前提到的 SSD 的写入放大,以及 Prometheus 是如何通过批量写入来解决这个问题的。尽管如此,在许多地方还是存在因为批量太小以及数据未精确对齐页边界而导致的写入放大。对于更大规模的 Prometheus 服务器,现实当中会发现缩减硬件寿命的问题。这一点对于高写入吞吐量的数据库应用来说仍然相当普遍,但我们应该放眼看看是否可以解决它。
### 重新开始
到目前为止我们对于问题域、V2 存储系统是如何解决它的以及设计上存在的问题有了一个清晰的认识。我们也看到了许多很棒的想法这些或多或少都可以拿来直接使用。V2 存储系统相当数量的问题都可以通过改进和部分的重新设计来解决,但为了好玩(当然,在我仔细的验证想法之后),我决定试着写一个完整的时间序列数据库——从头开始,即向文件系统写入字节。
性能与资源使用这种最关键的部分直接影响了存储格式的选取。我们需要为数据找到正确的算法和磁盘布局来实现一个高性能的存储层。
这就是我解决问题的捷径——跳过令人头疼、失败的想法,数不尽的草图,泪水与绝望。
### V3—宏观设计
我们存储系统的宏观布局是什么?简而言之,是当我们在数据文件夹里运行 `tree` 命令时显示的一切。看看它能给我们带来怎样一副惊喜的画面。
```
$ tree ./data
./data
+-- b-000001
| +-- chunks
| | +-- 000001
| | +-- 000002
| | +-- 000003
| +-- index
| +-- meta.json
+-- b-000004
| +-- chunks
| | +-- 000001
| +-- index
| +-- meta.json
+-- b-000005
| +-- chunks
| | +-- 000001
| +-- index
| +-- meta.json
+-- b-000006
+-- meta.json
+-- wal
+-- 000001
+-- 000002
+-- 000003
```
在最顶层,我们有一系列以 `b-` 为前缀编号的<ruby><rt>block</rt></ruby>。每个块中显然保存了索引文件和含有更多编号文件的 `chunks` 文件夹。`chunks` 目录只包含不同序列<ruby>数据点的原始块<rt>raw chunks of data points</rt></ruby>。与 V2 存储系统一样,这使得通过时间窗口读取序列数据非常高效并且允许我们使用相同的有效压缩算法。这一点被证实行之有效,我们也打算沿用。显然,这里并不存在含有单个序列的文件,而是一堆保存着许多序列的数据块。
`index` 文件的存在应该不足为奇。让我们假设它拥有黑魔法,可以让我们找到标签、可能的值、整个时间序列和存放数据点的数据块。
但为什么这里有好几个文件夹都是索引和块文件的布局?并且为什么最后还存在一个 `wal` 文件夹?理解这两个疑问便能解决九成的问题。
#### 许多小型数据库
我们分割横轴,即将时间域分割为不重叠的块。每一块扮演着完全独立的数据库,它包含该时间窗口所有的时间序列数据。因此,它拥有自己的索引和一系列块文件。
```
t0 t1 t2 t3 now
+-----------+ +-----------+ +-----------+ +-----------+
| | | | | | | | +------------+
| | | | | | | mutable | <--- write ---- Prometheus |
| | | | | | | | +------------+
+-----------+ +-----------+ +-----------+ +-----------+ ^
+--------------+-------+------+--------------+ |
| query
| |
merge -------------------------------------------------+
```
每一块的数据都是<ruby>不可变的<rt>immutable</rt></ruby>。当然,当我们采集新数据时,我们必须能向最近的块中添加新的序列和样本。对于该数据块,所有新的数据都将写入内存中的数据库中,它与我们的持久化的数据块一样提供了查找属性。内存中的数据结构可以高效地更新。为了防止数据丢失,所有传入的数据同样被写入临时的<ruby>预写日志<rt>write ahead log</rt></ruby>中,这就是 `wal` 文件夹中的一系列文件,我们可以在重新启动时通过它们重新填充内存数据库。
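预写日志的基本流程可以用下面这个简化的 Python 草图来说明:样本先追加到日志文件、再写入内存结构,重启时重放日志即可恢复内存状态。这只是概念演示(文件路径是随意假设的,也没有处理真实实现必须考虑的文件同步、日志分段和校验和等细节):
```
import json, os

WAL_PATH = "wal.log"   # 示例路径,仅用于演示

def append_sample(mem_db, series_id, t, v):
    # 先写日志,再更新内存,这样崩溃后才能恢复
    with open(WAL_PATH, "a") as wal:
        wal.write(json.dumps([series_id, t, v]) + "\n")
    mem_db.setdefault(series_id, []).append((t, v))

def replay_wal():
    # 重启时重放日志,重建内存数据库
    mem_db = {}
    if os.path.exists(WAL_PATH):
        with open(WAL_PATH) as wal:
            for line in wal:
                series_id, t, v = json.loads(line)
                mem_db.setdefault(series_id, []).append((t, v))
    return mem_db

db = replay_wal()
append_sample(db, 'requests_total{method="GET"}', 1000, 1.0)
```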
所有这些文件都带有序列化格式,有我们所期望的所有东西:许多标志、偏移量、变长整型varint和 CRC32 校验和。纸上得来终觉浅,绝知此事要躬行。
这种布局允许我们扩展查询范围到所有相关的块上。每个块上的部分结果最终合并成完整的结果。
这种横向分割增加了一些很棒的功能:
* 当查询一个时间范围,我们可以简单地忽略所有范围之外的数据块。通过减少需要检查的数据集,它可以初步解决序列分流的问题。
* 当完成一个块,我们可以通过顺序的写入大文件从内存数据库中保存数据。这样可以避免任何的写入放大,并且 SSD 与 HDD 均适用。
* 我们延续了 V2 存储系统的一个好的特性,最近使用而被多次查询的数据块,总是保留在内存中。
* 很好,我们也不再受限于 1KiB 的数据块尺寸,以使数据在磁盘上更好地对齐。我们可以挑选对单个数据点和压缩格式最合理的尺寸。
* 删除旧数据变得极为简单快捷。我们仅仅只需删除一个文件夹。记住,在旧的存储系统中我们不得不花数个小时分析并重写数亿个文件。
每个块还包含了 `meta.json` 文件。它简单地以易读的形式保存了关于块的信息,以便轻松了解存储状态及其包含的数据。
##### mmap
将数百万个小文件合并为少数几个大文件使得我们用很小的开销就能保持所有的文件都打开。这就解除了对 [mmap(2)][8] 的使用的阻碍,这是一个允许我们用文件内容透明地作为虚拟内存后备的系统调用。简单起见,你可以将其视为<ruby>交换空间<rt>swap space</rt></ruby>,只是我们所有的数据已经保存在了磁盘上,并且当数据换出内存后不再会发生写入。
这意味着我们可以将数据库的所有内容都视为在内存中,却不占用任何物理内存。仅当我们访问数据库文件中的某些字节范围时,操作系统才会从磁盘上<ruby>惰性加载<rt>lazy load</rt></ruby>页数据。这使得我们将所有数据持久化相关的内存管理都交给了操作系统。通常操作系统更有资格作出这样的决定因为它可以全面了解整个机器和进程。查询的数据可以相当积极地缓存进内存但内存压力会使得页被换出。如果机器拥有未使用的内存Prometheus 目前将会高兴地缓存整个数据库,但是一旦其他进程需要,它就会立刻归还那些内存。
因此,查询不再轻易地使我们的进程 OOM因为查询的是更多的持久化的数据而不是装入内存中的数据。内存缓存大小变得完全自适应并且仅当查询真正需要时数据才会被加载。
就个人理解,这就是当今大多数数据库的工作方式,如果磁盘格式允许,这是一种理想的方式,——除非有人自信能在这个过程中超越操作系统。我们做了很少的工作但确实从外面获得了很多功能。
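下面是一个使用 mmap 的最小 Python 示例仅用于演示“把文件映射进虚拟内存、访问到哪一页才加载哪一页”的概念其中的块文件路径是假设的Prometheus 实际的块格式与访问方式要复杂得多:
```
import mmap

# 假设的块文件路径,仅用于演示
with open("chunks/000001", "rb") as f:
    # 将整个文件以只读方式映射为虚拟内存
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # 只有真正访问这个字节范围时,操作系统才会从磁盘加载对应的页
    chunk_bytes = mm[4096:8192]
    print(len(chunk_bytes))
    mm.close()
```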
#### 压缩
存储系统需要定期“切”出新块并将之前完成的块写入到磁盘中。仅在块被成功地持久化之后,之前用来恢复内存块的日志文件wal才会被删除。
我们希望将每个块的保存时间设置得相对短一些(通常配置为 2 小时),以避免内存中积累太多的数据。当查询多个块时,我们必须将它们的结果合并为一个整体的结果。合并过程显然会消耗资源,一个星期的查询不应该由超过 80 个的部分结果所组成。
为了实现两者,我们引入<ruby>压缩<rt>compaction</rt></ruby>。压缩描述了一个过程:取一个或更多个数据块并将其写入一个可能更大的块中。它也可以在此过程中修改现有的数据。例如,清除已经删除的数据,或重建样本块以提升查询性能。
```
t0 t1 t2 t3 t4 now
+------------+ +----------+ +-----------+ +-----------+ +-----------+
| 1 | | 2 | | 3 | | 4 | | 5 mutable | before
+------------+ +----------+ +-----------+ +-----------+ +-----------+
+-----------------------------------------+ +-----------+ +-----------+
| 1 compacted | | 4 | | 5 mutable | after (option A)
+-----------------------------------------+ +-----------+ +-----------+
+--------------------------+ +--------------------------+ +-----------+
| 1 compacted | | 3 compacted | | 5 mutable | after (option B)
+--------------------------+ +--------------------------+ +-----------+
```
在这个例子中我们有顺序块 `[1,2,3,4]`。块 1、2、3 可以压缩在一起,新的布局将会是 `[1,4]`。或者,将它们成对压缩为 `[1,3]`。所有的时间序列数据仍然存在,但现在整体上保存在更少的块中。这极大程度地缩减了查询时间的消耗,因为需要合并的部分查询结果变得更少了。
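压缩的核心操作可以用下面这个简化的 Python 草图来体会:把相邻几个块里同一序列的样本按时间顺序拼接起来,得到一个新的、更大的块。真实实现还要重写索引、处理删除标记等,这里全部省略:
```
def compact(blocks):
    """blocks序列ID -> [(t, v), ...] 形式的字典组成的列表,已按时间顺序排列。
    返回合并后的新块(仅为概念演示,省略了索引重写与删除标记的处理)。"""
    merged = {}
    for block in blocks:
        for series_id, samples in block.items():
            merged.setdefault(series_id, []).extend(samples)
    # 块之间互不重叠且按时间排列,因此直接拼接就能保持时间顺序
    return merged

b1 = {"requests_total": [(0, 1.0), (15, 2.0)]}
b2 = {"requests_total": [(30, 3.0)], "errors_total": [(30, 1.0)]}
b3 = {"requests_total": [(45, 4.0)]}
print(compact([b1, b2, b3]))
```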
#### 保留
我们看到了删除旧的数据在 V2 存储系统中是一个缓慢的过程,并且消耗 CPU、内存和磁盘。那么在我们基于块的设计上要如何清除旧的数据相当简单只要删除我们配置的保留时间窗口里没有数据的块文件夹即可。在下面的例子中块 1 可以被安全地删除,而块 2 则必须一直保留,直到它完全落在保留窗口边界之外。
```
|
+------------+ +----+-----+ +-----------+ +-----------+ +-----------+
| 1 | | 2 | | | 3 | | 4 | | 5 | . . .
+------------+ +----+-----+ +-----------+ +-----------+ +-----------+
|
|
retention boundary
```
随着我们不断压缩先前已经压缩过的块,旧数据所在的块可能会变得越来越大。因此必须为其设置一个上限,以防数据块扩展到整个数据库而损失我们设计的最初优势。
方便的是,这一点也限制了部分存在于保留窗口内部分存在于保留窗口外的块的磁盘消耗总量。例如上面例子中的块 2。当设置了最大块尺寸为总保留窗口的 10% 后,我们保留块 2 的总开销也有了 10% 的上限。
总结一下,保留与删除从非常昂贵到了几乎没有成本。
> 如果你读到这里并有一些数据库的背景知识,现在你也许会问:这些都是最新的技术吗?——并不是;而且可能还会做的更好。
>
> 在内存中批量处理数据,在预写日志中跟踪,并定期写入到磁盘的模式在现在相当普遍。
>
> 我们看到的好处无论在什么领域的数据里都是适用的。遵循这一方法最著名的开源案例是 LevelDB、Cassandra、InfluxDB 和 HBase。关键是避免重复发明劣质的轮子采用经过验证的方法并正确地运用它们。
>
> 脱离场景添加你自己的黑魔法是一种不太可能的情况。
### 索引
研究存储改进的最初想法是解决序列分流的问题。基于块的布局减少了查询所要考虑的序列总数。因此,假设我们索引查找的复杂度是 `O(n^2)`,我们设法把 n 减少了相当多,于是得到的是改进后的……依然是 `O(n^2)` 的复杂度。——恩,等等……糟糕。
快速回顾一下“算法 101”课上提醒我们的在理论上它并未带来任何好处。如果之前就很糟糕那么现在也一样。理论是如此的残酷。
实际上,我们大多数的查询已经可以相当快响应。但是,跨越整个时间范围的查询仍然很慢,尽管只需要找到少部分数据。追溯到所有这些工作之前,最初我用来解决这个问题的想法是:我们需要一个更大容量的[倒排索引][9]。
倒排索引基于数据项内容的子集提供了一种快速的查找方式。简单地说,我可以通过标签 `app="nginx"` 查找所有的序列而无需遍历每个文件来看它是否包含该标签。
为此,每个序列被赋上一个唯一的 ID ,通过该 ID 可以恒定时间内检索它(`O(1)`)。在这个例子中 ID 就是我们的正向索引。
> 示例:如果 ID 为 10、29、9 的序列包含标签 `app="nginx"`,那么 “nginx”的倒排索引就是简单的列表 `[10, 29, 9]`,它就能用来快速地获取所有包含标签的序列。即使有 200 多亿个数据序列也不会影响查找速度。
简而言之,如果 `n` 是我们序列总数,`m` 是给定查询结果的大小,使用索引的查询复杂度现在就是 `O(m)`。查询语句依据它获取数据的数量 `m` 而不是被搜索的数据体 `n` 进行缩放是一个很好的特性,因为 `m` 一般相当小。
为了简单起见,我们假设可以在恒定时间内查找到倒排索引对应的列表。
实际上,这几乎就是 V2 存储系统具有的倒排索引,也是提供在数百万序列中查询性能的最低需求。敏锐的人会注意到,在最坏情况下,所有的序列都含有标签,因此 `m` 又成了 `O(n)`。这一点在预料之中,也相当合理。如果你查询所有的数据,它自然就会花费更多时间。一旦我们牵扯上了更复杂的查询语句就会有问题出现。
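把正向索引和倒排索引放在一起,大致就是下面这个 Python 草图的样子(仅为概念演示,真实实现中的索引是持久化在磁盘上的紧凑格式):
```
# 正向索引序列 ID -> 该序列的标签集
forward = {
    9:  {"app": "nginx", "instance": "1"},
    10: {"app": "nginx", "instance": "2"},
    29: {"app": "nginx", "instance": "3"},
    30: {"app": "foo",   "instance": "1"},
}

# 倒排索引:标签对 -> 含有该标签的序列 ID 列表(这里保持有序)
inverted = {}
for sid, labels in forward.items():
    for pair in labels.items():
        inverted.setdefault(pair, []).append(sid)
for ids in inverted.values():
    ids.sort()

# 通过标签 app="nginx" 直接取出序列 ID而无需遍历所有序列
print(inverted[("app", "nginx")])   # [9, 10, 29]
```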
#### 标签组合
与数百万个序列相关的标签很常见。假设横向扩展着数百个实例的“foo”微服务并且每个实例拥有数千个序列。每个序列都会带有标签 `app="foo"`。当然,用户通常不会查询所有的序列而是会通过进一步的标签来限制查询。例如,我想知道服务实例接收到了多少请求,那么查询语句便是 `__name__="requests_total" AND app="foo"`
为了找到满足两个标签选择子的所有序列,我们得到每一个标签的倒排索引列表并取其交集。结果集通常会比任何一个输入列表小一个数量级。因为每个输入列表最坏情况下的大小为 `O(n)`,所以在嵌套地为每个列表进行<ruby>暴力求解<rt>brute force solution</rt></ruby>下,运行时间为 `O(n^2)`。相同的成本也适用于其他的集合操作,例如取并集(`app="foo" OR app="bar"`)。当在查询语句上添加更多标签选择子,耗费就会指数增长到 `O(n^3)`、`O(n^4)`、`O(n^5)`……`O(n^k)`。通过改变执行顺序,可以使用很多技巧以优化运行效率。越复杂,越是需要关于数据特征和标签之间相关性的知识。这引入了大量的复杂度,但是并没有减少算法的最坏运行时间。
这便是 V2 存储系统使用的基本方法,幸运的是,看似微小的改动就能获得显著的提升。如果我们假设倒排索引中的 ID 都是排序好的会怎么样?
假设这个例子的列表用于我们最初的查询:
```
__name__="requests_total" -> [ 1000, 1001, 9999, 2000000, 2000001, 2000002, 2000003 ]
app="foo" -> [ 1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002 ]
intersection => [ 1000, 1001 ]
```
它的交集相当小。我们可以为每个列表的起始位置设置游标,每次从最小的游标处移动来找到交集。当二者的数字相等,我们就添加它到结果中并移动二者的游标。总体上,我们以锯齿形扫描两个列表,因此总耗费是 `O(2n)=O(n)`,因为我们总是在一个列表上移动。
两个以上列表的不同集合操作也类似。因此 `k` 个集合操作仅仅改变了因子 `O(k*n)` 而不是最坏情况下查找运行时间的指数 `O(n^k)`
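这种“拉锯式”的求交集过程可以用下面这个简短的 Python 草图来演示(假设两个倒排列表都已排序;这只是概念演示,并非 Prometheus 的实际代码):
```
def intersect(a, b):
    """对两个有序的序列 ID 列表求交集,代价为 O(len(a) + len(b))。"""
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            result.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1          # 总是移动指向较小值的那个游标
        else:
            j += 1
    return result

requests_total = [1000, 1001, 9999, 2000000, 2000001, 2000002, 2000003]
app_foo = [1, 3, 10, 11, 12, 100, 311, 320, 1000, 1001, 10002]
print(intersect(requests_total, app_foo))   # [1000, 1001]
```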
我在这里所描述的是几乎所有[全文搜索引擎][10]使用的标准搜索索引的简化版本。每个序列描述符都视作一个简短的“文档”,每个标签(名称 + 固定值)作为其中的“单词”。我们可以忽略搜索引擎索引中通常遇到的很多附加数据,例如单词位置和频率。
关于改进实际运行时间的方法似乎存在无穷无尽的研究,它们通常都是对输入数据做一些假设。不出意料的是,还有大量技术来压缩倒排索引,其中各有利弊。因为我们的“文档”比较小,而且“单词”在所有的序列里大量重复,压缩变得几乎无关紧要。例如,一个真实世界的数据集约有 440 万个序列,每个序列大约有 12 个标签,而不同的标签总共还不到 5000 个。对于最初的存储版本,我们坚持使用基本的方法而不压缩,仅做微小的调整来跳过大范围非交叉的 ID。
尽管维持排序好的 ID 听起来很简单但实践过程中不是总能完成的。例如V2 存储系统为新的序列赋上一个哈希值来当作 ID我们就不能轻易地排序倒排索引。
另一个艰巨的任务是当磁盘上的数据被更新或删除掉后修改其索引。通常最简单的方法是重新计算并写入但是要保证数据库在此期间可查询且具有一致性。V3 存储系统通过每块上具有的独立不可变索引来解决这一问题,该索引仅通过压缩时的重写来进行修改。只有可变块上的索引需要被更新,它完全保存在内存中。
## 基准测试
我从存储的基准测试开始了初步的开发,它基于现实世界数据集中提取的大约 440 万个序列描述符,并生成合成数据点以输入到这些序列中。这个阶段的开发仅仅测试了单独的存储系统,对于快速找到性能瓶颈和高并发负载场景下的触发死锁至关重要。
在完成概念性的开发实施之后,该基准测试能够在我的 Macbook Pro 上维持每秒 2000 万的吞吐量 —— 并且这都是在打开着十几个 Chrome 的页面和 Slack 的时候。因此,尽管这听起来都很棒,它也表明进一步推动这项基准测试没有多大价值(或者在一个不那么随机的环境下运行它也一样)。毕竟,它是合成的数据,因此除了能留下良好的第一印象外没有多大价值。比起最初的设计目标高出 20 倍,是时候将它部署到真正的 Prometheus 服务器上了,为它添加更多现实环境中的开销和场景。
我们实际上没有可重现的 Prometheus 基准测试配置,特别是没有对于不同版本的 A/B 测试。亡羊补牢为时不晚,[不过现在就有一个了][11]
我们的工具可以让我们声明性地定义基准测试场景,然后部署到 AWS 的 Kubernetes 集群上。尽管对于全面的基准测试来说不是最好环境,但它肯定比 64 核 128GB 内存的专用<ruby>裸机服务器<rt>bare metal servers</rt></ruby>更能反映出我们的用户群体。
我们部署了两个 Prometheus 1.5.2 服务器V2 存储系统和两个来自 2.0 开发分支的 Prometheus V3 存储系统)。每个 Prometheus 运行在配备 SSD 的专用服务器上。我们将横向扩展的应用部署在了工作节点上并且让其暴露典型的微服务度量。此外Kubernetes 集群本身和节点也被监控着。整套系统由另一个 Meta-Prometheus 所监督,它监控每个 Prometheus 的健康状况和性能。
为了模拟序列分流,微服务定期的扩展和收缩来移除旧的 pod 并衍生新的 pod生成新的序列。通过选择“典型”的查询来模拟查询负载对每个 Prometheus 版本都执行一次。
总体上,伸缩与查询的负载以及采样频率极大的超出了 Prometheus 的生产部署。例如,我们每隔 15 分钟换出 60% 的微服务实例去产生序列分流。在现代的基础设施上,一天仅大约会发生 1-5 次。这就保证了我们的 V3 设计足以处理未来几年的工作负载。就结果而言Prometheus 1.5.2 和 2.0 之间的性能差异在极端的环境下会变得更大。
总而言之,我们每秒从 850 个目标里收集大约 11 万份样本,每次暴露 50 万个序列。
在此系统运行一段时间之后,我们可以看一下数字。我们评估了两个版本在 12 个小时之后到达稳定时的几个指标。
> 请注意从 Prometheus 图形界面的截图中轻微截断的 Y 轴
![Heap usage GB](https://fabxc.org/tsdb/assets/heap_usage.png)
*堆内存使用GB*
内存资源的使用对用户来说是最为困扰的问题,因为它相对的不可预测且可能导致进程崩溃。
显然查询的服务器正在消耗内存这很大程度上归咎于查询引擎的开销这一点可以当作以后优化的主题。总的来说Prometheus 2.0 的内存消耗减少了 3-4 倍。大约 6 小时之后,在 Prometheus 1.5 上有一个明显的峰值,与我们设置的 6 小时的保留边界相对应。因为删除操作成本非常高,所以资源消耗急剧提升。这一点在下面几张图中均有体现。
![CPU usage cores](https://fabxc.org/tsdb/assets/cpu_usage.png)
*CPU 使用(核心/秒)*
类似的模式也体现在 CPU 使用上,但是查询的服务器与非查询的服务器之间的差异尤为明显。每秒获取大约 11 万个数据需要 0.5 核心/秒的 CPU 资源,比起评估查询所花费的 CPU 时间,我们的新存储系统 CPU 消耗可忽略不计。总的来说,新存储需要的 CPU 资源减少了 3 到 10 倍。
![Disk writes](https://fabxc.org/tsdb/assets/disk_writes.png)
*磁盘写入MB/秒)*
迄今为止最引人注目和意想不到的改进表现在我们的磁盘写入利用率上。这就清楚的说明了为什么 Prometheus 1.5 很容易造成 SSD 损耗。我们看到最初的上升发生在第一个块被持久化到序列文件中的时期,然后一旦删除操作引发了重写就会带来第二个上升。令人惊讶的是,查询的服务器与非查询的服务器显示出了非常不同的利用率。
在另一方面Prometheus 2.0 每秒仅向其预写日志写入大约一兆字节。当块被压缩到磁盘时,写入定期地出现峰值。这在总体上节省了:惊人的 97-99%。
![Disk usage](https://fabxc.org/tsdb/assets/disk_usage.png)
*磁盘大小GB*
与磁盘写入密切相关的是总磁盘空间占用量。由于我们对样本(这是我们的大部分数据)几乎使用了相同的压缩算法,因此磁盘占用量应当相同。在更为稳定的系统中,这样做很大程度上是正确的,但是因为我们需要处理高的序列分流,所以还要考虑每个序列的开销。
如我们所见Prometheus 1.5 在这两个版本达到稳定状态之前使用的存储空间因其保留操作而急速上升。Prometheus 2.0 似乎在每个序列上的开销显著降低。我们可以清楚的看到预写日志线性地充满整个存储空间,然后当压缩完成后瞬间下降。事实上对于两个 Prometheus 2.0 服务器,它们的曲线并不是完全匹配的,这一点需要进一步的调查。
前景大好。剩下最重要的部分是查询延迟。新的索引应当优化了查找的复杂度。没有实质上发生改变的是处理数据的过程,例如 `rate()` 函数或聚合。这些就是查询引擎要做的东西了。
![Query latency](https://fabxc.org/tsdb/assets/query_latency.png)
*第 99 个百分位查询延迟(秒)*
数据完全符合预期。在 Prometheus 1.5 上查询延迟随着存储的序列而增加。只有在保留操作开始且旧的序列被删除后才会趋于稳定。作为对比Prometheus 2.0 从一开始就保持在合适的位置。
我们需要在数据是如何被采集这一点上花些心思。向服务器发出的查询请求是通过估计以下方面的良好组合来选择的:范围查询和即时查询的搭配、进行更轻或更重的计算、访问更多或更少的文件。它并不需要代表真实世界里查询的分布。也不能代表冷数据的查询性能,我们可以假设所有的样本数据都是保存在内存中的热数据。
尽管如此,我们可以相当自信地说,整体查询效果对序列分流变得非常有弹性,并且在高压基准测试场景下提升了 4 倍的性能。在更为静态的环境下,我们可以假设查询时间大多数花费在了查询引擎上,改善程度明显较低。
![Ingestion rate](https://fabxc.org/tsdb/assets/ingestion_rate.png)
*摄入的样本/秒*
最后,快速地看一下不同 Prometheus 服务器的摄入率。我们可以看到搭载 V3 存储系统的两个服务器具有相同的摄入速率。在几个小时之后变得不稳定,这是因为不同的基准测试集群节点由于高负载变得无响应,与 Prometheus 实例无关。(两个 2.0 的曲线完全匹配这一事实希望足够具有说服力)
尽管还有更多 CPU 和内存资源,两个 Prometheus 1.5.2 服务器的摄入率大大降低。序列分流的高压导致了无法采集更多的数据。
那么现在每秒可以摄入的<ruby>绝对最大<rt>absolute maximum</rt></ruby>样本数是多少?
我不知道 —— 虽然这是一个相当容易的优化指标,但除了稳固的基线性能之外,它并不是特别有意义。
有很多因素都会影响 Prometheus 数据流量,而且没有一个单独的数字能够描述捕获质量。最大摄入率在历史上是一个导致基准出现偏差的度量,并且忽视了更多重要的层面,例如查询性能和对序列分流的弹性。关于资源使用线性增长的大致猜想通过一些基本的测试被证实。很容易推断出其中的原因。
我们的基准测试模拟了高动态环境下 Prometheus 的压力,它比起真实世界中的更大。结果表明,虽然运行在没有优化的云服务器上,但是已经超出了预期的效果。最终,成功将取决于用户反馈而不是基准数字。
> 注意在撰写本文的同时Prometheus 1.6 正在开发当中,它允许更可靠地配置最大内存使用量,并且可能会显著地减少整体的消耗,有利于稍微提高 CPU 使用率。我没有重复对此进行测试,因为整体结果变化不大,尤其是面对高序列分流的情况。
## 总结
Prometheus 开始着手应对序列的高基数以及单个样本的高吞吐量。这仍然是一项富有挑战性的任务,但是新的存储系统似乎向我们展示了未来的一些好东西。
第一个配备 V3 存储系统的 [alpha 版本 Prometheus 2.0][12] 已经可以用来测试了。在早期阶段预计还会出现崩溃,死锁和其他 bug。
存储系统的代码可以在[这个单独的项目中找到][13]。令人惊讶的是,它与 Prometheus 本身的耦合相当低,对于寻找高效本地时间序列数据库存储的更广泛的应用来说可能非常有用。
> 这里需要感谢很多人作出的贡献,以下排名不分先后:
> Bjoern Rabenstein 和 Julius Volz 在 V2 存储引擎上的打磨工作以及 V3 存储系统的反馈,这为新一代的设计奠定了基础。
> Wilhelm Bierbaum 对新设计不断的建议与见解作出了很大的贡献。Brian Brazil 不断的反馈确保了我们最终得到的是语义上合理的方法。与 Peter Bourgon 深刻的讨论验证了设计并形成了这篇文章。
> 别忘了我们整个 CoreOS 团队与公司对于这项工作的赞助与支持。感谢所有那些听我一遍遍唠叨 SSD、浮点数、序列化格式的同学。
--------------------------------------------------------------------------------
via: https://fabxc.org/blog/2017-04-10-writing-a-tsdb/
作者:[Fabian Reinartz][a]
译者:[LuuMing](https://github.com/LuuMing)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://twitter.com/fabxc
[1]:https://en.wikipedia.org/wiki/Inode
[2]:https://prometheus.io/
[3]:https://kubernetes.io/
[4]:https://en.wikipedia.org/wiki/Write_amplification
[5]:http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/
[6]:https://prometheus.io/docs/practices/rules/
[7]:http://www.vldb.org/pvldb/vol8/p1816-teller.pdf
[8]:https://en.wikipedia.org/wiki/Mmap
[9]:https://en.wikipedia.org/wiki/Inverted_index
[10]:https://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices
[11]:https://github.com/prometheus/prombench
[12]:https://prometheus.io/blog/2017/04/10/promehteus-20-sneak-peak/
[13]:https://github.com/prometheus/tsdb


@ -0,0 +1,149 @@
[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10936-1.html)
[#]: subject: (5 projects for Raspberry Pi at home)
[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
5 个可在家中使用的树莓派项目
======================================
![5 projects for Raspberry Pi at home][1]
[树莓派][2] 电脑可被用来进行多种设置用于不同的目的。显然它在教育市场帮助学生在教室和创客空间中学习编程与创客技巧方面占有一席之地,它在工作场所和工厂中有大量行业应用。我打算介绍五个你可能想要在你的家中构建的项目。
### 媒体中心
在家中人们常用树莓派作为媒体中心来服务多媒体文件。它很容易搭建,树莓派提供了大量的 GPU图形处理单元运算能力来在大屏电视上渲染你的高清电视节目和电影。将 [Kodi][3](从前的 XBMC运行在树莓派上是一个很棒的方式它可以播放你的硬盘或网络存储上的任何媒体。你同样可以安装一个插件来播放 YouTube 视频。
还有几个略微不同的选择,最常见的是 [OSMC][4](开源媒体中心)和 [LibreELEC][5],都是基于 Kodi 的。它们在放映媒体内容方面表现的都非常好,但是 OSMC 有一个更酷炫的用户界面,而 LibreElec 更轻量级。你要做的只是选择一个发行版,下载镜像并安装到一个 SD 卡中(或者仅仅使用 [NOOBS][6]),启动,然后就准备好了。
![LibreElec ][7]
*LibreElec;树莓派基金会, CC BY-SA*
![OSMC][8]
*OSMC.tv, 版权所有, 授权使用*
在往下走之前,你需要决定[使用哪种树莓派][9]。这些发行版在任何树莓派1、2、3 或 Zero上都能运行视频播放在这些树莓派中的任何一个上都能胜任。除了 Pi 3和 Zero W有内置 Wi-Fi唯一可察觉的不同是用户界面的反应速度在 Pi 3 上更快。Pi 2 也不会慢太多,所以如果你不需要 Wi-Fi 它也是可以的,但是当切换菜单时,你会注意到 Pi 3 比 Pi 1 和 Zero 表现得更好。
### SSH 网关
如果你想从外部网络访问你的家庭局域网的电脑和设备,你必须打开这些设备的端口来允许外部访问。在互联网中开放这些端口有安全风险,意味着你总是处于被攻击、滥用或者其他各种未授权访问的风险中。然而,如果你在你的网络中安装一个树莓派,并且设置端口映射来仅允许通过 SSH 访问树莓派,你就可以把它用作一个安全的网关来跳到网络中的其他树莓派和 PC 上。
大多数路由器允许你配置端口映射规则。你需要给你的树莓派分配一个固定的内网 IP 地址,并在路由器上把 22 端口映射到你的树莓派的 22 端口。如果你的网络服务提供商给你提供了一个静态 IP 地址,你就能够通过 SSH 和主机的 IP 地址访问(例如,`ssh pi@123.45.56.78`)。如果你有一个域名,你可以配置一个子域名指向这个 IP 地址,这样你就没必要记住它(例如,`ssh pi@home.mydomain.com`)。
![][11]
然而,如果你要将树莓派暴露在互联网上,你应该非常小心,不要让你的网络处于危险之中。遵循以下一些简单的步骤就可以使它更安全:
1. 大多数人建议你更换你的登录密码(有道理,默认密码 “raspberry” 是众所周知的),但是这不能阻挡暴力攻击。你可以改变你的密码并添加双重验证(所以你需要你的密码*和*一个手机生成的与时间相关的密码),这么做更安全。但是,我相信阻止入侵者访问你的树莓派的最好方法,是在你的 SSH 配置中[禁止密码认证][12],这样只能通过 SSH 密匙进入。这意味着任何试图猜测你的密码尝试登录的人都不会成功。只有你的私有密匙可以访问。此外,很多人建议将 SSH 端口从默认的 22 换成其他的,但是通过简单的 [Nmap][13] 扫描你的 IP 地址,你信任的 SSH 端口还是会暴露。
2. 最好,不要在这个树莓派上运行其他的软件,这样你不会意外暴露其他东西。如果你想要运行其他软件,你最好在网络中的其他树莓派上运行,它们没有暴露在互联网上。确保你经常升级来保证你的包是最新的,尤其是 `openssh-server` 包,这样你的安全缺陷就被打补丁了。
3. 安装 [sshblack][14] 或 [fail2ban][15] 来将任何表露出恶意的用户加入黑名单,例如试图暴力破解你的 SSH 密码。
使树莓派安全后,让它在线,你将可以在世界的任何地方登录你的网络。一旦你登录到你的树莓派,你可以用 SSH 访问本地网络上的局域网地址例如192.168.1.31)访问其他设备。如果你在这些设备上有密码,用密码就好了。如果它们同样只允许 SSH 密匙,你需要确保你的密匙通过 SSH 转发,使用 `-A` 参数:`ssh -A pi@123.45.67.89`。
### CCTV / 宠物相机
另一个很棒的家庭项目是安装一个相机模块来拍照和录视频,录制并保存文件,在内网或者外网中进行流式传输。你想这么做有很多原因,但两个常见的情况是一个家庭安防相机或监控你的宠物。
[树莓派相机模块][16] 是一个优秀的配件。它提供全高清的相片和视频,包括很多高级配置,很[容易编程][17]。[红外线相机][18]用于这种目的是非常理想的,通过一个红外线 LED树莓派可以控制的你就能够在黑暗中看见东西。
如果你想通过一定频率拍摄静态图片来留意某件事,你可以仅仅写一个简短的 [Python][19] 脚本,或者使用命令行工具 [raspistill][20],并在 [Cron][21] 中安排它多次运行。你可能想将它们保存到 [Dropbox][22] 或另一个网络服务上传到一个网络服务器你甚至可以创建一个[Web 应用][23]来显示它们。
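例如,下面是一个用 picamera 库定时拍照的简短 Python 草图(分辨率、拍摄间隔和保存路径都是随意假设的,你也完全可以用 `raspistill` 配合 Cron 实现同样的事情):
```
from datetime import datetime
from time import sleep

from picamera import PiCamera   # 树莓派官方提供的 Python 相机库

camera = PiCamera()
camera.resolution = (1280, 720)

# 每 5 分钟拍一张,文件名带上时间戳(路径仅为示例)
while True:
    filename = "/home/pi/photos/{:%Y%m%d-%H%M%S}.jpg".format(datetime.now())
    camera.capture(filename)
    sleep(300)
```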
如果你想要在内网或外网中流式传输视频,那也相当简单。在 [picamera 文档][24]中(在 “web streaming” 章节)有一个简单的 MJPEGMotion JPEG例子。简单下载或者拷贝代码到文件中运行并访问树莓派的 IP 地址的 8000 端口,你会看见你的相机的直播输出。
有一个更高级的流式传输项目 [pistreaming][25] 也可以,它在 Web 服务器中使用 [JSMpeg][26](一个 JavaScript 视频播放器),并为相机流单独运行了一个 websocket 服务。这种方法性能更好,并且和之前的例子一样简单,但是如果要在互联网中流式传输,则需要包含更多代码,并且需要你开放两个端口。
一旦你的网络流建立起来,你可以将你的相机放在你想要的地方。我用一个来观察我的宠物龟:
![Tortoise ][27]
*Ben Nuttall, CC BY-SA*
如果你想控制相机位置,你可以用一个舵机。一个优雅的方案是用 Pimoroni 的 [Pan-Tilt HAT][28],它可以让你简单的在二维方向上移动相机。为了与 pistreaming 集成,可以看看该项目的 [pantilthat 分支][29].
![Pan-tilt][30]
*Pimoroni.com, Copyright, 授权使用*
如果你想将你的树莓派放到户外你将需要一个防水的外围附件并且需要一种给树莓派供电的方式。POE通过以太网提供电力电缆是一个不错的实现方式。
### 家庭自动化或物联网
现在是 2017 年LCTT 译注:此文发表时间),到处都有很多物联网设备,尤其是家中。我们的电灯有 Wi-Fi我们的面包烤箱比过去更智能我们的茶壶处于俄国攻击的风险中。只要你确保你的设备安全或者不把没有必要联网的设备连接到互联网你就可以在家中充分地利用物联网设备来完成自动化任务。
市场上有大量你可以购买或订阅的服务,像 Nest Thermostat 或 Philips Hue 电灯泡,允许你通过你的手机控制你的温度或者你的亮度,无论你是否在家。你可以用一个树莓派来控制这些设备的电源,通过一系列规则(包括时间甚至是传感器)来完成自动交互。比如用 Philips Hue 你做不到的一件事,是当你走进房间时自动打开灯光,但是有了树莓派和一个运动传感器,你就可以用 Python API 来打开灯光。类似地,当你在家的时候你可以通过配置你的 Nest 打开加热系统,但是如果你想在房间里至少有两个人时才打开呢?写一些 Python 代码来检查网络中有哪些手机,如果至少有两个,就告诉 Nest 打开加热器。
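下面这个 Python 草图演示了“检查家里在线的手机数量,人数够了才打开加热”的思路。其中手机的 IP 地址是假设的,`turn_on_heating()` 也只是一个占位函数(实际使用时你需要在里面调用 Nest 或你所用设备的 API
```
import subprocess

# 家中成员手机的固定 IP仅为示例请换成你自己的
PHONES = ["192.168.1.21", "192.168.1.22", "192.168.1.23"]

def is_online(ip):
    """用一次 ping 粗略判断设备是否在线。"""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                            stdout=subprocess.DEVNULL)
    return result.returncode == 0

def turn_on_heating():
    # 占位函数:在这里调用 Nest 或其他设备的 API
    print("Turning on the heating...")

online = sum(1 for ip in PHONES if is_online(ip))
if online >= 2:      # 至少两个人在家时才打开加热
    turn_on_heating()
```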
不用选择集成已存在的物联网设备,你可以用简单的组件来做的更多。一个自制的窃贼警报器,一个自动化的鸡笼门开关,一个夜灯,一个音乐盒,一个定时的加热灯,一个自动化的备份服务器,一个打印服务器,或者任何你能想到的。
### Tor 代理和屏蔽广告
Adafruit 的 [Onion Pi][31] 是一个 [Tor][32] 代理,可以使你的网络通讯匿名,允许你使用互联网而不用担心窥探者和各种形式的监视。跟随 Adafruit 的指南来设置 Onion Pi你会得到一个舒适的匿名浏览体验。
![Onion-Pi][33]
*Onion-pi from Adafruit, Copyright, 授权使用*
![Pi-hole][34]
可以在你的网络中安装一个树莓派来拦截所有的网络交通并过滤所有广告。简单下载 [Pi-hole][35] 软件到 Pi 中,你的网络中的所有设备都将没有广告(甚至屏蔽你的移动设备应用内的广告)。
树莓派在家中有很多用法。你在家里用树莓派来干什么?你想用它干什么?
在下方评论让我们知道。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[warmfrog](https://github.com/warmfrog)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home)
[2]: https://www.raspberrypi.org/
[3]: https://kodi.tv/
[4]: https://osmc.tv/
[5]: https://libreelec.tv/
[6]: https://www.raspberrypi.org/downloads/noobs/
[7]: https://opensource.com/sites/default/files/libreelec_0.png (LibreElec )
[8]: https://opensource.com/sites/default/files/osmc.png (OSMC)
[9]: https://opensource.com/life/16/10/which-raspberry-pi-should-you-choose-your-project
[10]: mailto:pi@home.mydomain.com
[11]: https://opensource.com/sites/default/files/resize/screenshot_from_2017-04-07_15-13-01-700x380.png
[12]: http://stackoverflow.com/questions/20898384/ssh-disable-password-authentication
[13]: https://nmap.org/
[14]: http://www.pettingers.org/code/sshblack.html
[15]: https://www.fail2ban.org/wiki/index.php/Main_Page
[16]: https://www.raspberrypi.org/products/camera-module-v2/
[17]: https://opensource.com/life/15/6/raspberry-pi-camera-projects
[18]: https://www.raspberrypi.org/products/pi-noir-camera-v2/
[19]: http://picamera.readthedocs.io/
[20]: https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md
[21]: https://www.raspberrypi.org/documentation/linux/usage/cron.md
[22]: https://github.com/RZRZR/plant-cam
[23]: https://github.com/bennuttall/bett-bot
[24]: http://picamera.readthedocs.io/en/release-1.13/recipes2.html#web-streaming
[25]: https://github.com/waveform80/pistreaming
[26]: http://jsmpeg.com/
[27]: https://opensource.com/sites/default/files/tortoise.jpg (Tortoise)
[28]: https://shop.pimoroni.com/products/pan-tilt-hat
[29]: https://github.com/waveform80/pistreaming/tree/pantilthat
[30]: https://opensource.com/sites/default/files/pan-tilt.gif (Pan-tilt)
[31]: https://learn.adafruit.com/onion-pi/overview
[32]: https://www.torproject.org/
[33]: https://opensource.com/sites/default/files/onion-pi.jpg (Onion-Pi)
[34]: https://opensource.com/sites/default/files/resize/pi-hole-250x250.png (Pi-hole)
[35]: https://pi-hole.net/


@ -0,0 +1,103 @@
为 man 手册页编写解析器的备忘录
======
![](https://img.linux.net.cn/data/attachment/album/201906/11/235607fiqfqapvpzqhh8n1.jpg)
我一般都很喜欢无所事事,但有时候太无聊了也不行 —— 2015 年的一个星期天下午就是这样,我决定开始写一个开源项目来让我不那么无聊。
在我寻求创意时,我偶然发现了一个请求,要求构建一个由 [Mathias Bynens][2] 提出的“[按 Web 标准构建的 Man 手册页查看器][1]”。没有考虑太多,我开始使用 JavaScript 编写一个手册页解析器,经过大量的反复思考,最终做出了一个 [Jroff][3]。
那时候,我非常熟悉手册页这个概念,而且使用过很多次,但我知道的仅止于此,我不知道它们是如何生成的,或者是否有一个标准。在经过两年后,我有了一些关于此事的想法。
### man 手册页是如何写的
当时令我感到惊讶的第一件事是,手册页的核心只是存储在系统某处的纯文本文件(你可以使用 `manpath` 命令检查这些目录)。
此文件中不仅包含文档,还包含使用了 20 世纪 70 年代名为 `troff` 的排版系统的格式化信息。
> troff 及其 GNU 实现 groff 是处理文档的文本描述以生成适合打印的排版版本的程序。**它更像是“你所描述的即你得到的”,而不是你所见即所得的。**
>
> - 摘自 [troff.org][4]
如果你对排版格式毫不熟悉,可以将它们视为“加强版”的 Markdown就像打了类固醇的 Markdown但其灵活性带来的就是更复杂的语法
![groff-compressor][5]
`groff` 文件可以手工编写,也可以使用许多不同的工具从其他格式生成,如 Markdown、Latex、HTML 等。
为什么 `groff` 和 man 手册页绑在一起是有历史原因的,其格式[随时间有变化][6]它的血统由一系列类似命名的程序组成RUNOFF > roff > nroff > troff > groff。
但这并不一定意味着 `groff` 与手册页有多紧密的关系,它是一种通用格式,已被用于[书籍][7],甚至用于[照相排版][8]。
此外,值得注意的是 `groff` 也可以调用后处理器将其中间输出结果转换为最终格式,这对于终端显示来说不一定是 ascii 一些支持的格式是TeX DVI、HTML、Canon、HP LaserJet4 兼容格式、PostScript、utf8 等等。
### 宏
该格式的其他很酷的功能是它的可扩展性,你可以编写宏来增强其基本功能。
鉴于 *nix 系统的悠久历史,有几个可以根据你想要生成的输出而将特定功能组合在一起的宏包,例如 `man`、`mdoc`、`mom`、`ms`、`mm` 等等。
手册页通常使用 `man``mdoc` 宏包编写。
区分原生的 `groff` 命令和宏的方式是通过标准 `groff` 包大写其宏名称。对于 `man` 宏包,每个宏的名称都是大写的,如 `.PP`、`.TH`、`.SH` 等。对于 `mdoc` 宏包,只有第一个字母是大写的: `.Pp`、`.Dt`、`.Sh`。
![groff-example][9]
### 挑战
无论你是考虑编写自己的 `groff` 解析器,还是只是好奇,这些都是我发现的一些更具挑战性的问题。
#### 上下文敏感的语法
表面上,`groff` 的语法是上下文无关的,遗憾的是,因为宏描述的是主体不透明的令牌,所以包中的宏集合本身可能不会实现上下文无关的语法。
这使得我在那时无法使用解析器生成器(不管是好是坏)。
#### 嵌套的宏
`mdoc` 宏包中的大多数宏都是可调用的,这差不多意味着宏可以用作其他宏的参数,例如,你看看这个:
* 宏 `Fl`Flag会在其参数中添加破折号因此 `Fl s` 会生成 `-s`
* 宏 `Ar`Argument提供了定义参数的工具
* 宏 `Op`Optional会将其参数括在括号中因为这是将某些东西定义为可选的标准习惯用法
* 以下组合 `.Op Fl s Ar file ` 将生成 `[-s file]`,因为 `Op` 宏可以嵌套。
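为了体会这种可嵌套调用的宏是如何展开的,下面给出一个非常粗糙的 Python 草图,只对 `Op`、`Fl`、`Ar` 三个宏做了玩具式的实现(它远不是一个真正的 mdoc 解析器,只用来演示 `.Op Fl s Ar file` 如何一步步变成 `[-s file]`
```
# 玩具式的 mdoc 宏展开器:只认识 Op、Fl、Ar 三个宏,仅用于演示嵌套调用
def expand(tokens):
    if not tokens:
        return []
    head, rest = tokens[0], tokens[1:]
    if head == "Op":                      # Op把其余内容括在方括号里
        return ["[" + " ".join(expand(rest)) + "]"]
    if head == "Fl":                      # Fl给下一个词加上破折号
        inner = expand(rest)
        return ["-" + inner[0]] + inner[1:]
    if head == "Ar":                      # Ar参数这里简化为原样输出
        return expand(rest)
    return [head] + expand(rest)          # 普通文本

print(" ".join(expand(".Op Fl s Ar file".lstrip(".").split())))
# 输出:[-s file]
```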
#### 缺乏适合初学者的资源
让我感到困惑的是缺乏一个规范的、定义明确的、清晰的来源,网上有很多信息,这些信息对读者来说很重要,需要时间来掌握。
### 有趣的宏
总结一下,我会向你提供一个非常简短的宏列表,我在开发 jroff 时发现它很有趣:
`man` 宏包:
* `.TH`:用 `man` 宏包编写手册页时,你的第一个不是注释的行必须是这个宏,它接受五个参数:`title`、`section`、`date`、`source`、`manual`。
* `.BI`:粗体加斜体(特别适用于函数格式)
* `.BR`:粗体加正体(特别适用于参考其他手册页)
`mdoc` 宏包:
* `.Dd`、`.Dt`、`.Os`:类似于 `man` 宏包需要 `.TH``mdoc` 宏也需要这三个宏,需要按特定顺序使用。它们的缩写分别代表:文档日期、文档标题和操作系统。
* `.Bl`、`.It`、`.El`:这三个宏用于创建列表,它们的名称不言自明:开始列表、项目和结束列表。
--------------------------------------------------------------------------------
via: https://monades.roperzh.com/memories-writing-parser-man-pages/
作者:[Roberto Dip][a]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://monades.roperzh.com
[1]:https://github.com/h5bp/lazyweb-requests/issues/114
[2]:https://mathiasbynens.be/
[3]:jroff
[4]:https://www.troff.org/
[5]:https://user-images.githubusercontent.com/4419992/37868021-2e74027c-2f7f-11e8-894b-80829ce39435.gif
[6]:https://manpages.bsd.lv/history.html
[7]:https://rkrishnan.org/posts/2016-03-07-how-is-gopl-typeset.html
[8]:https://en.wikipedia.org/wiki/Phototypesetting
[9]:https://user-images.githubusercontent.com/4419992/37866838-e602ad78-2f6e-11e8-97a9-2a4494c766ae.jpg


@ -0,0 +1,163 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10956-1.html)
[#]: subject: (Blockchain 2.0 Explaining Smart Contracts And Its Types [Part 5])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)
区块链 2.0:智能合约及其类型(五)
======
![Explaining Smart Contracts And Its Types][1]
这是 区块链 2.0 系列的第 5 篇文章。本系列的前一篇文章探讨了我们如何[在房地产行业实现区块链][2]。本文简要探讨了区块链及相关技术领域内的<ruby>智能合约<rt>Smart Contract</rt></ruby>主题。智能合约是在区块链上验证和创建新“数据块”的基本协议,它被吹捧为该系统未来发展和应用的焦点。 然而,像所有“万灵药”一样,它不是一切的答案。我们将从基础知识中探索这个概念,以了解“智能合约”是什么以及它们不是什么。
### 不断发展的合同
这个世界建立在合同(合约)之上。在当前社会,没有合约的使用和再利用,地球上任何个人或公司都无法运作。订立、维护和执行合同的任务变得如此复杂,以至于整个司法和法律系统都必须以“合同法”的名义建立起来以支持它。事实上,大多数合同都是由一个“可信的”第三方监督,以确保最终的利益攸关者按照达成的条件得到妥善处理。有些合同甚至涉及到了第三方受益人。此类合同旨在对不是合同的活跃(或参与)方的第三方产生影响。解决和争论合同义务占据了民事诉讼所涉及的大部分法律纠纷。当然,更好的处理合同的方式对于个人和企业来说都将是天赐之物。更不用说它将以核查和证明的名义节省政府的巨大的[文书工作][7] [^1]。
本系列中的大多数文章都研究了如何利用现有的区块链技术。相比之下,这篇文章将更多地讲述对未来几年的预期。关于“智能合约”的讨论源于前一篇文章中提出的财产讨论。当前这篇文章旨在概述区块链自动执行“智能”可执行程序的能力。务实地处理这个问题意味着我们首先必须定义和探索这些“智能合约”是什么,以及它们如何适应现有的合同系统。我们将在下一篇题为“区块链 2.0:正在进行的项目”的文章中查看当前该领域正在进行的主要应用和项目。
### 定义智能合约
[本系列的第一篇文章][3]从基本的角度来看待区块链,将其看作由数据块组成的“分布式分类账本”,这些数据块是:
* 防篡改
* 不可否认(意味着每个数据块都是由某人显式创建的,并且该人不能否认相同的责任)
* 安全,且能抵御传统的网络攻击方法
* 几乎是永久性的(当然这取决于区块链协议层)
* 高度冗余,通过存在于多个网络节点或参与者系统上,其中一个节点的故障不会以任何方式影响系统的功能,并且,
* 根据应用的不同可以提供更快的处理速度。
由于每个数据实例都是安全存储和通过适当的凭证访问的,因此区块链网络可以为精确验证事实和信息提供简便的基础,而无需第三方监督。区块链 2.0 开发也使得“分布式应用程序”DApp成为可能我们将在接下来的文章中详细介绍这个术语。这些分布式应用程序要求存在网络上并在其上运行。当用户需要它们时就会调用它们并通过使用已经过审核并存储在区块链上的信息来执行它们。
上面的最后一段为智能合约的定义提供了基础。<ruby>数字商会<rt>The Chamber for Digital Commerce</rt></ruby>提供了一个许多专家都同意的智能合约定义。
> “(智能合约是一种)计算机代码,在发生指定条件时,能够根据预先指定的功能自动运行。该代码可以在分布式分类帐本上存储和处理,并将产生的任何更改写入分布式分类帐本” [^2]。
智能合约如上所述是一种简单的计算机程序,就像 “if-then” 或 “if-else if” 语句一样工作。关于其“智能”的方面来自这样一个事实,即该程序的预定义输入来自区块链分类账本,如上所述,它是一个记录信息的安全可靠的来源。如有必要,程序可以调用外部服务或来源以获取信息,以验证操作条款,并且仅在满足所有预定义条件后才执行。
必须记住与其名称所暗示的不同智能合约通常不是自治实体严格来说也不是合同。1996 年Nick Szabo 很早就提到了智能合约,他将其与接受付款并交付用户选择的产品的自动售货机进行了比较。可以在[这里][4]查看全文。此外,人们正在制定允许智能合约进入主流合同使用的法律框架,因此目前该技术的使用仅限于法律监督不那么明确和严格的领域 [^4]。
### 智能合约的主要类型
假设读者对合同和计算机编程有基本的了解,并且基于我们对智能合约的定义,我们可以将智能合约和协议粗略地分类为以下主要类别。
#### 1、智能法律合约
这大概是最明显的一种。大多数(如果不是全部)合同都具有法律效力。在不涉及太多技术问题的情况下,智能法律合约是涉及到严格的法律追索权的合同,以防参与合同的当事人不履行其交易的目的。如前所述,不同国家和地区的现行法律框架对区块链上的智能和自动化合约缺乏足够的支持,其法律地位也不明确。但是,一旦制定了法律,就可以订立智能合约,以简化目前涉及严格监管的流程,如金融和房地产市场交易、政府补贴、国际贸易等。
#### 2、DAO
<ruby>去中心化自治组织<rt>Decentralized Autonomous Organization</rt></ruby>即DAO可以粗略地定义为区块链上存在的社区。该社区可以通过一组规则来定义这些规则通过智能合约来体现并放入代码中。然后每个参与者的每一个行动都将受到这些规则的约束其任务是在程序中断的情况下执行并获得追索权。许多智能合约构成了这些规则它们协同监管和监督参与者。
名为“创世纪 DAO” 的 DAO 是由以太坊参与者于 2016 年 5 月创建。该社区旨在成为众筹和风险投资平台。在极短的时间内,他们设法筹集了惊人的 1.5 亿美元。然而,黑客在系统中发现了漏洞,并设法从众筹投资者手中窃取了价值约 5000 万美元的以太币。这次黑客破坏的后果导致以太坊区块链[分裂为两个][8],以太坊和以太坊经典。
#### 3、应用逻辑合约ALC
如果你已经听说过与区块链相结合的物联网,那么很可能它涉及到了<ruby>应用逻辑合约<rt>Application logic contract</rt></ruby>,即 ALC。此类智能合约包含特定于应用的代码这些代码可以与区块链上的其他智能合约和程序一起工作。在物联网领域它们有助于与设备进行通信并验证设备之间的通信。ALC 是每个多功能智能合约的关键部分,并且大多数都是在一个管理程序下工作。在这里引用的大多数例子中,它们到处都能找到[应用][9] [^6]。
*由于该领域还在开发中,因此目前所说的任何定义或标准最多只能说是变化而模糊的。*
### 智能合约是如何工作的?
为简化起见,让我们用个例子来说明。
约翰和彼得是两个争论足球比赛得分的人。他们对比赛结果持有相互矛盾的看法,他们都支持不同的球队(这是背景情况)。由于他们两个都需要去其他地方并且无法看完比赛,所以约翰认为如果 A 队在比赛中击败 B 队,他就*支付*给彼得 100 美元。彼得*考虑*之后*接受*了该赌注,同时明确表示他们必须接受这些条款。但是,他们没有兑现该赌注的相互信任,也没有时间和钱来指定第三方监督赌注。
假设约翰和彼得都使用像 [Etherparty][5] 这样的智能合约平台,它可以在合约谈判时自动结算赌注,他们都会将基于区块链的身份链接到该合约,并设置条款,明确表示一旦比赛结束,该程序将找出获胜方是谁,并自动将该金额从输家的账户转入获胜者的银行账户。一旦比赛结束并且媒体报道同样的结果,该程序将在互联网上搜索规定的来源,确定哪支球队获胜,将其与合约条款联系起来,在这种情况下,如果 A 队赢了,彼得将从约翰那里得到钱,也就是说将约翰的 100 美元转移到彼得的账户。执行完毕后,除非另有说明,否则智能合约将终止并在未来所有的时间内处于非活动状态。
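如果把这份赌约的逻辑抽象出来,大致就是下面这个 Python 风格的草图(这不是任何真实智能合约平台的代码,`get_match_result()`、`transfer()` 等都是为演示而虚构的函数;真正的智能合约运行在区块链上,并通过神谕获取比赛结果):
```
# 仅为演示“满足条件即自动执行”的智能合约思路而写的草图;
# 所有函数与数据都是虚构的,并非 Etherparty 或以太坊的实际 API。
def get_match_result(match):
    """虚构的神谕调用:从约定的外部数据源查询比赛结果。"""
    return "TeamA"                      # 假设 A 队获胜

def transfer(payer, payee, amount):
    """虚构的转账操作:真实场景中由区块链平台执行并记录。"""
    print(f"{payer} -> {payee}: ${amount}")

def settle_bet():
    winner = get_match_result("TeamA-vs-TeamB")
    if winner == "TeamA":               # A 队获胜,约翰付钱给彼得
        transfer("john", "peter", 100)
    else:                               # 否则彼得付钱给约翰
        transfer("peter", "john", 100)
    # 执行完毕后,该合约即告终止,不再被触发

settle_bet()
```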
抛开例子的简单不说,这种情况涉及到的是一个经典的合同,而参与者选择使用智能合约实现了相同目的。所有的智能合约基本上都遵循类似的原则,对程序进行编码,以便在预定义的参数上执行,并且只抛出预期的输出。智能合约所咨询的外部来源有时被称为 IT 世界中的<ruby>神谕<rt>Oracle</rt></ruby>。神谕是当今全球许多智能合约系统的常见部分。
在这种情况下使用智能合约使参与者可以获得以下好处:
* 它比聚在一起手动结算更快。
* 从其中删除了信任问题。
* 消除了受信任的第三方代表有关各方处理和解的必要性。
* 执行时无需任何费用。
* 在如何处理参数和敏感数据方面是安全的。
* 相关数据将永久保留在他们运行的区块链平台中,未来可以通过调用相同的函数并为其提供更多输入来设置投注。
* 随着时间的推移,假设约翰和彼得变得赌博成瘾,该程序可以帮助他们开发可靠的统计数据来衡量他们的连胜纪录。
  
现在我们知道**什么是智能合约**和**它们如何工作**,我们还没有解决**为什么我们需要它们**。
### 智能合约的需要
正如之前的例子我们重点提到过的,出于各种原因,我们需要智能合约。
#### 透明度
交易对手非常清楚所涉及的条款和条件。此外,由于程序或智能合约的执行涉及某些明确的输入,因此用户可以非常直接地核实会影响他们和合约受益人的因素。
#### 时间效率
如上所述,智能合约一旦被控制变量或用户调用所触发,就立即开始工作。由于数据是通过区块链和网络中的其它来源即时提供给系统,因此执行不需要任何时间来验证和处理信息并解决交易。例如,转移土地所有权契约,这是一个涉及手工核实大量文书工作并且需要数周时间的过程,可以在几分钟甚至几秒钟内通过智能合约程序来处理文件和相关各方。
#### 精度
由于平台基本上只是计算机代码和预定义的内容,因此不存在主观错误,所有结果都是精确的,完全没有人为错误。
#### 安全
区块链的一个固有特征是每个数据块都是安全加密的。这意味着为了实现冗余,即使数据存储在网络上的多个节点上,**也只有数据所有者才能访问以查看和使用数据**。类似地,利用区块链在过程中存储重要变量和结果,所有过程都将是完全安全和防篡改的。同样也通过按时间顺序为审计人员提供原始的、未经更改的和不可否认的数据版本,简化了审计和法规事务。
#### 信任
这个文章系列开篇说到区块链为互联网及其上运行的服务增加了急需的信任层。智能合约在任何情况下都不会在执行协议时表现出偏见或主观性,这意味着所涉及的各方对结果完全有约束力,并且可以不附带任何条件地信任该系统。这也意味着,在具有重要价值的传统合同中所需的“可信第三方”,在此处不需要。当事人之间的犯规和监督将成为过去的问题。
#### 成本效益
如示例中所强调的,使用智能合约需要最低的成本。企业通常有专门从事使其交易合法并遵守法规的行政人员。如果交易涉及多方,则重复工作是不可避免的。智能合约基本上使前者无关紧要,并且消除了重复,因为双方可以同时完成尽职调查。
### 智能合约的应用
基本上,如果两个或多个参与方使用共同的区块链平台,并就一组原则或业务逻辑达成一致,他们可以一起在区块链上创建一个智能合约,并且在没有人为干预的情况下执行。没有人可以篡改所设置的条件,如果原始代码允许,任何更改都会加上时间戳并带有编辑者的指纹,从而增加了问责制。想象一下,在更大的企业级规模上出现类似的情况,你就会明白智能合约的能力是什么。实际上2016 年的一项 **Capgemini 研究** 发现,智能合约可能在**“未来几年”** [^8] 成为商业主流。商业的应用涉及保险、金融市场、物联网、贷款、身份管理系统、托管账户、雇佣合同以及专利和版税合同等用途。像以太坊这样的区块链平台,是一个设计时就考虑了智能合约的系统,它允许个人私人用户免费使用智能合约。
通过对处理智能合约的公司的探讨,本系列的下一篇文章中将更全面地概述智能合约在当前技术问题上的应用。
### 那么,它有什么缺点呢?
这并不是说对智能合约的使用没有任何顾虑。这种担忧实际上也减缓了这方面的发展。所有区块链的防篡改性质实质上使得,如果所涉及的各方需要修改现有条款或添加新条款,在没有重大改动或法律追索的情况下几乎是不可能做到的。
其次,公有链上的活动是开放的,所有人都可以看到和观察,但交易中涉及的各方的个人身份并不总是已知的。这种匿名性造成在任何一方违约的情况下法律有罪不罚的问题,特别是因为现行法律和立法者并不完全适应现代技术。
第三,区块链和智能合约在很多方面仍然存在安全缺陷,因为其所涉及的技术仍处于发展的初期阶段。对代码和平台的这种缺乏经验最终导致了 2016 年的 DAO 事件。
所有这些都可能导致企业或公司在需要调整区块链以供其使用时需要大量的初始投资。然而,这些是最初的一次性投资,并且随之而来的是潜在的节约,这才是人们感兴趣的。
### 结论
目前的法律框架并没有真正支持一个全面的智能合约的社会,并且由于显然的原因,在不久的将来也不会支持。一个解决方案是选择**“混合”合约**,它将传统的法律文本和文件与在为此目的设计的区块链上运行的智能合约代码相结合。然而,即使是混合合约仍然很大程度上尚未得到探索,因为需要创新的立法机构才能实现这些合约。这里简要提到的应用以及更多内容将在[本系列的下一篇文章][6]中详细探讨。
[^1]: S. C. A. Chamber of Digital Commerce, “Smart contracts Is the law ready,” no. September, 2018.
[^2]: S. C. A. Chamber of Digital Commerce, “Smart contracts Is the law ready,” no. September, 2018.
[^4]: Cardozo Blockchain Project, “Smart Contracts & Legal Enforceability,” vol. 2, p. 28, 2018.
[^6]: F. Idelberger, G. Governatori, R. Riveret, and G. Sartor, “Evaluation of Logic-Based Smart Contracts for Blockchain Systems,” 2016, pp. 167183.
[^8]: B. Cant et al., “Smart Contracts in Financial Services : Getting from Hype to Reality,” Capgemini Consult., pp. 124, 2016.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
作者:[ostechnix][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/03/smart-contracts-720x340.png
[2]: https://linux.cn/article-10914-1.html
[3]: https://linux.cn/article-10650-1.html
[4]: http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart_contracts_2.html
[5]: https://etherparty.com/
[6]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/
[7]: http://www.legal-glossary.org/
[8]: https://futurism.com/the-dao-heist-undone-97-of-eth-holders-vote-for-the-hard-fork/
[9]: https://www.everestgrp.com/2016-10-types-smart-contracts-based-applications-market-insights-36573.html/


@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10939-1.html)
[#]: subject: (How to build a mobile particulate matter sensor with a Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/mobile-particulate-matter-sensor)
[#]: author: (Stephan Tetzel https://opensource.com/users/stephan)
@ -10,19 +10,19 @@
如何用树莓派搭建一个颗粒物传感器
======
用树莓派,一个廉价的传感器和一个便宜的屏幕监测空气质量
> 用树莓派、一个廉价的传感器和一个便宜的屏幕监测空气质量
![小组交流,讨论][1]
![](https://img.linux.net.cn/data/attachment/album/201906/05/005121bbveeavwgyc1i1gk.jpg)
大约一年前,我写了一篇关于如何使用树莓派和廉价传感器测量[空气质量][2]的文章。我们这几年已在学校里和私下使用了这个项目。然而它有一个缺点:由于它基于无线/有线网,因此它不是便携的。如果你的树莓派你的智能手机和电脑不在同一个网络的话,你甚至都不能访问传感器测量的数据。
大约一年前,我写了一篇关于如何使用树莓派和廉价传感器测量[空气质量][2]的文章。我们这几年已在学校里和私下使用了这个项目。然而它有一个缺点:由于它基于无线/有线网,因此它不是便携的。如果你的树莓派你的智能手机和电脑不在同一个网络的话,你甚至都不能访问传感器测量的数据。
为了弥补这一缺陷,我们给树莓派添加了一块小屏幕,这样我们就可以直接从该设备上读取数据。以下是我们如何为我们的移动细颗粒物传感器搭建并配置好屏幕。
### 为树莓派搭建好屏幕
在[亚马逊][3],阿里巴巴以及其它来源有许多可以获取的树莓派屏幕,从 ePaper 屏幕到可触控 LCD。我们选择了一个便宜的带触控功能且分辨率为320*480像素的[3.5英寸 LCD][3],可以直接插进树莓派的 GPIO 引脚。一个3.5英寸屏幕和树莓派几乎一样大,这一点不错。
在[亚马逊][3]、阿里巴巴以及其它来源有许多可以买到的树莓派屏幕,从 ePaper 屏幕到可触控 LCD。我们选择了一个便宜的带触控功能且分辨率为 320*480 像素的[3.5英寸 LCD][3],可以直接插进树莓派的 GPIO 引脚。3.5 英寸屏幕和树莓派几乎一样大,这一点不错。
当你第一次启动屏幕打开树莓派的时候,因为缺少驱动屏幕会保持白屏。你得首先为屏幕安装[合适的驱动][5]。通过 SSH 登入并执行以下命令:
当你第一次启动屏幕打开树莓派的时候,因为缺少驱动屏幕会保持白屏。你得首先为屏幕安装[合适的驱动][5]。通过 SSH 登入并执行以下命令:
```
$ rm -rf LCD-show
@ -55,22 +55,21 @@ $ sudo apt install raspberrypi-ui-mods
$ sudo apt install chromium-browser
```
需要自动登录以使测量数据在启动后直接显示;否则你将只会看到登录界面。然而自动登录并没有为树莓派用户默认设置好。你可以用 **raspi-config** 工具设置自动登录:
需要自动登录以使测量数据在启动后直接显示;否则你将只会看到登录界面。然而树莓派用户并没有默认设置好自动登录。你可以用 `raspi-config` 工具设置自动登录:
```
$ sudo raspi-config
```
在菜单中,选择:**3 Boot Options → B1 Desktop / CLI → B4 Desktop Autologin**
在菜单中,选择:“3 Boot Options → B1 Desktop / CLI → B4 Desktop Autologin”
在启动后用 Chromium 打开我们的网站这块少了一步。创建文件夹
**/home/pi/.config/lxsession/LXDE-pi/**
在启动后用 Chromium 打开我们的网站这块少了一步。创建文件夹 `/home/pi/.config/lxsession/LXDE-pi/`
```
$ mkdir -p /home/pi/config/lxsession/LXDE-pi/
```
然后在该文件夹里创建 **autostart** 文件:
然后在该文件夹里创建 `autostart` 文件:
```
$ nano /home/pi/.config/lxsession/LXDE-pi/autostart
@ -88,7 +87,7 @@ $ nano /home/pi/.config/lxsession/LXDE-pi/autostart
@chromium-browser --incognito --kiosk <http://localhost>
```
如果你想要隐藏鼠标指针,你得安装 **unclutter** 包并移除 **autostart** 文件开头的注释。
如果你想要隐藏鼠标指针,你得安装 `unclutter` 包并移除 `autostart` 文件开头的注释。
```
$ sudo apt install unclutter
@ -98,11 +97,11 @@ $ sudo apt install unclutter
我对去年的代码做了些小修改。因此如果你之前搭建过空气质量项目,确保用[原文章][2]中的指导为 AQI 网站重新下载脚本和文件。
通过添加触摸屏,你现在拥有了一个移动颗粒物传感器!我们在学校用它来检查教室里的空气质量或者进行比较测量。使用这种配置,你无需再依赖网络连接或 WLAN。你可以在任何地方使用小型测量站——你甚至可以使用移动电源以摆脱电网。
通过添加触摸屏,你现在拥有了一个便携的颗粒物传感器!我们在学校用它来检查教室里的空气质量或者进行比较测量。使用这种配置,你无需再依赖网络连接或 WLAN。你可以在任何地方使用这个小型测量站——你甚至可以使用移动电源以摆脱电网。
* * *
_这篇文章原来在[开源学校解决方案Open Scool Solutions][8]上发表获得许可重新发布。_
这篇文章原来在<ruby>[开源学校解决方案][8]<rt>Open Scool Solutions</rt></ruby>上发表,获得许可重新发布。
--------------------------------------------------------------------------------
@ -111,18 +110,18 @@ via: https://opensource.com/article/19/3/mobile-particulate-matter-sensor
作者:[Stephan Tetzel][a]
选题:[lujun9972][b]
译者:[tomjlw](https://github.com/tomjlw)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/stephan
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat)
[2]: https://opensource.com/article/18/3/how-measure-particulate-matter-raspberry-pi
[2]: https://linux.cn/article-9620-1.html
[3]: https://www.amazon.com/gp/search/ref=as_li_qf_sp_sr_tl?ie=UTF8&tag=openschoolsol-20&keywords=lcd%20raspberry&index=aps&camp=1789&creative=9325&linkCode=ur2&linkId=51d6d7676e10d6c7db203c4a8b3b529a
[4]: https://amzn.to/2CcvgpC
[5]: https://github.com/goodtft/LCD-show
[6]: https://opensource.com/article/17/1/try-raspberry-pis-pixel-os-your-pc
[6]: https://linux.cn/article-8459-1.html
[7]: https://opensource.com/sites/default/files/uploads/mobile-aqi-sensor.jpg (Mobile particulate matter sensor)
[8]: https://openschoolsolutions.org/mobile-particulate-matter-sensor/


@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10952-1.html)
[#]: subject: (5 Linux rookie mistakes)
[#]: via: (https://opensource.com/article/19/4/linux-rookie-mistakes)
[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p)
5 个 Linux 新手会犯的失误
======
> Linux 爱好者们分享了他们犯下的一些最大错误。
![](https://img.linux.net.cn/data/attachment/album/201906/09/103635akfkghwh5mp58g68.jpg)
终身学习是明智的 —— 它可以让你的思维敏捷,让你在就业市场上更具竞争力。但是有些技能比其他技能更难学,尤其是那些小菜鸟错误,当你尝试修复它们时可能会花费你很多时间,给你带来很大困扰。
以学习 [Linux][2] 为例。如果你习惯于在 Windows 或 MacOS 图形界面中工作,那么转移到 Linux要将不熟悉的命令输入到终端中可能会有很大的学习曲线。但是其回报是值得的因为数以百万计的人们已经证明了这一点。
也就是说,这趟学习之旅并不是一帆风顺的。我们让一些 Linux 爱好者回想了一下他们刚开始使用 Linux 的时候,并告诉我们他们犯下的最大错误。
“不要在进入[任何类型的命令行界面CLI工作]时就期望命令会以合理或一致的方式工作,因为这可能会导致你感到挫折。这不是因为设计选择不当 —— 虽然当你气得像要用头撞键盘时确实会有这种感觉 —— 而是反映了这些系统是历经了几代的软件和操作系统的发展而陆续添加完成的事实。顺其自然,写下或记住你需要的命令,并且(尽量不要)在[事情不是你所期望的][3]时感到沮丧。” —— [Gina Likins][4]
“在为了让事情顺利进行而简单地复制和粘贴命令之前,请首先阅读命令,至少对将要执行的操作有一个大致的了解,特别是如果有管道命令时,如果有多个管道更要特别注意。有很多破坏性的命令看起来无害 —— 直到你意识到它们能做什么(例如 `rm`、`dd`),而你不会想要意外破坏什么东西(别问我怎么知道)。” —— [Katie McLaughlin][5]
“在我的 Linux 之旅的早期,我并不知道我所处在文件系统中的位置的重要性。我正在删除一些我认为是我的主目录的文件,我输入了 `sudo rm -rf *`,然后就删除了我系统上的所有启动文件。现在,我经常使用 `pwd` 来确保我在发出这样的命令之前确认我在哪里。幸运的是,我能够使用 USB 驱动器启动被搞坏的笔记本电脑并恢复我的文件。” —— [Don Watkins][6]
“不要因为你认为‘权限很难理解’,而在你希望应用程序可以访问某些内容时,就将整个文件系统的权限重置为 [777][7]。”—— [Matthew Helmke][8]
“我从我的系统中删除一个软件包,而我没有检查它依赖的其他软件包。我只是让它删除它想删除的东西,最终导致我的一些重要程序崩溃并变得不可用。” —— [Kedar Vijay Kulkarni][9]
你在学习使用 Linux 时犯过什么错误?请在评论中分享。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/linux-rookie-mistakes
作者:[Jen Wike Huger][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
[2]: https://opensource.com/resources/linux
[3]: https://lintqueen.com/2017/07/02/learning-while-frustrated/
[4]: https://opensource.com/users/lintqueen
[5]: https://opensource.com/users/glasnt
[6]: https://opensource.com/users/don-watkins
[7]: https://www.maketecheasier.com/file-permissions-what-does-chmod-777-means/
[8]: https://twitter.com/matthewhelmke
[9]: https://opensource.com/users/kkulkarn


@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10957-1.html)
[#]: subject: (How we built a Linux desktop app with Electron)
[#]: via: (https://opensource.com/article/19/4/linux-desktop-electron)
[#]: author: (Nils Ganther https://opensource.com/users/nils-ganther)
我们是如何使用 Electron 构建 Linux 桌面应用程序的
======
> 这是借助 Electron 框架,构建一个在 Linux 桌面上原生运行的开源电子邮件服务的故事。
![document sending](https://img.linux.net.cn/data/attachment/album/201906/10/123114abz0lvbllktkulx7.jpg)
[Tutanota][2] 是一种安全的开源电子邮件服务,它可通过浏览器使用,也有 iOS 和 Android 应用。其客户端代码在 GPLv3 下发布Android 应用程序可在 [F-Droid][3] 上找到,以便每个人都可以使用完全与 Google 无关的版本。
由于 Tutanota 关注开源和 Linux 客户端开发,因此我们希望为 Linux 和其他平台发布一个桌面应用程序。作为一个小团队,我们很快就排除了为 Linux、Windows 和 MacOS 构建原生应用程序的可能性,并决定使用 [Electron][4] 来构建我们的应用程序。
对于任何想要快速交付视觉一致的跨平台应用程序的人来说Electron 是最适合的选择,尤其是如果你已经有一个 Web 应用程序,想要从浏览器 API 的束缚中摆脱出来时。Tutanota 就是这样一个案例。
Tutanota 基于 [SystemJS][5] 和 [Mithril][6],旨在为每个人提供简单、安全的电子邮件通信。 因此,它必须提供很多用户期望从电子邮件客户端获得的标准功能。
由于采用了现代 API 和标准,其中一些功能(如基本的推送通知、搜索文本和联系人以及支持双因素身份验证)很容易在浏览器中提供。其它功能(例如自动备份或无需我们的服务器中转的 IMAP 支持)需要对系统资源的限制性访问,而这正是 Electron 框架提供的功能。
虽然有人批评 Electron “只是一个基本的包装”,但它有明显的好处:
* Electron 可以使你能够快速地为 Linux、Windows 和 MacOS 桌面构造 Web 应用。事实上,大多数 Linux 桌面应用都是使用 Electron 构建的。
* Electron 可以轻松地将桌面客户端与 Web 应用程序达到同样的功能水准。
* 发布桌面应用程序后,你可以自由使用开发功能添加桌面端特定的功能,从而增强可用性和安全性。
* 最后但同样重要的是,这是让应用程序具备原生的感觉、融入用户系统,而同时保持其识别度的好方法。
  
### 满足用户的需求
Tutanota 不依靠于大笔的投资资金,而是依靠社区驱动的项目。基于越来越多的用户升级到我们的免费服务的付费计划,我们有机地发展我们的团队。倾听用户的需求不仅对我们很重要,而且对我们的成功至关重要。
提供桌面客户端是 Tutanota 用户[最想要的功能][7],我们感到自豪的是,我们现在可以为所有用户提供免费的桌面客户端测试版。(我们还实现了另一个高度要求的功能 —— [搜索加密数据][8] —— 但这是另一个主题了。)
我们喜欢为用户提供签名版本的 Tutanota 并支持浏览器中无法实现的功能,例如通过后台进程推送通知。 现在,我们计划添加更多特定于桌面的功能,例如 IMAP 支持(而不依赖于我们的服务器充当代理),自动备份和离线可用性。
我们选择 Electron 是因为它的 Chromium 和 Node.js 的组合最适合我们的小型开发团队,因为它只需要对我们的 Web 应用程序进行最小的更改。在我们开始使用时,可以将浏览器 API 用于所有功能特别有用,随着我们的进展,慢慢地用更多原生版本替换这些组件。这种方法对附件下载和通知特别方便。
### 调整安全性
我们知道有些人关注 Electron 的安全问题,但我们发现 Electron 在 Web 应用程序中微调访问的选项非常令人满意。你可以使用 Electron 的[安全文档][9]和 Luca Carettoni 的[Electron 安全清单][10]等资源,来帮助防止 Web 应用程序中不受信任的内容发生灾难性事故。
### 实现特定功能
Tutanota Web 客户端从一开始就构建了一个用于进程间通信的可靠协议。我们利用 Web Worker 线程在加密和请求数据时保持用户界面UI的响应性。当我们开始实现我们的移动应用时这就派上用场这些应用程序使用相同的协议在原生部分和 Web 视图之间进行通信。
这就是为什么当我们开始构建桌面客户端时很多用于本机推送通知、打开邮箱和使用文件系统的部分等已经存在因此只需要实现原生端Node.js
另一个便利是我们的构建过程使用 [Babel 转译器][11],它允许我们以现代 ES6 JavaScript 编写整个代码库,并在不同环境之间混合和匹配功能模块。这使我们能够快速调整基于 Electron 的桌面应用程序的代码。但是,我们也遇到了一些挑战。
### 克服挑战
虽然 Electron 允许我们很容易地与不同平台的桌面环境集成,但你不能低估投入的时间!最后,正是这些小事情占用了比我们预期更多的时间,但对完成桌面客户端项目也至关重要。
特定于平台的代码导致了大部分阻碍:
* 例如,窗口管理和托盘仍然在三个平台上以略有不同的方式处理。
* 注册 Tutanota 作为默认邮件程序并设置自动启动需要深入 Windows 注册表,同时确保以 [UAC][12] 兼容的方式提示用户进行管理员访问。
* 我们需要使用 Electron 的 API 作为快捷方式和菜单,以提供复制、粘贴、撤消和重做等标准功能。
由于用户对不同平台上的应用程序的某些(有时不直接兼容)行为的期望,此过程有点复杂。使三个版本感觉像原生的需要一些迭代,甚至需要对 Web 应用程序进行一些适度的补充,以提供类似于浏览器中的文本搜索的功能。
### 总结
我们在 Electron 方面的经验基本上是积极的,我们在不到四个月的时间内完成了该项目。尽管有一些相当耗时的功能,但我们感到惊讶的是,我们可以轻松地为 Linux 提供一个测试版的 [Tutanota 桌面客户端][13]。如果你有兴趣,可以深入了解 [GitHub][14] 上的源代码。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/linux-desktop-electron
作者:[Nils Ganther][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/nils-ganther
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ (document sending)
[2]: https://tutanota.com/
[3]: https://f-droid.org/en/packages/de.tutao.tutanota/
[4]: https://electronjs.org/
[5]: https://github.com/systemjs/systemjs
[6]: https://mithril.js.org/
[7]: https://tutanota.uservoice.com/forums/237921-general/filters/top?status_id=1177482
[8]: https://tutanota.com/blog/posts/first-search-encrypted-data/
[9]: https://electronjs.org/docs/tutorial/security
[10]: https://www.blackhat.com/docs/us-17/thursday/us-17-Carettoni-Electronegativity-A-Study-Of-Electron-Security-wp.pdf
[11]: https://babeljs.io/
[12]: https://en.wikipedia.org/wiki/User_Account_Control
[13]: https://tutanota.com/blog/posts/desktop-clients/
[14]: https://www.github.com/tutao/tutanota


@ -0,0 +1,137 @@
[#]: collector: (lujun9972)
[#]: translator: (Modrisco)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10968-1.html)
[#]: subject: (Epic Games Store is Now Available on Linux Thanks to Lutris)
[#]: via: (https://itsfoss.com/epic-games-lutris-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
有了 LutrisLinux 现在也可以启动 Epic 游戏商城
======
> 开源游戏平台 Lutris 现在使你能够在 Linux 上使用 Epic 游戏商城。我们使用 Ubuntu 19.04 版本进行了测试,以下是我们的使用体验。
[在 Linux 上玩游戏][1] 正变得越来越容易。Steam [正在开发中的][3] 特性可以帮助你实现 [在 Linux 上玩 Windows 游戏][2]。
如果说 Steam 在 Linux 上运行 Windows 游戏这一领域还是个新玩家那么 Lutris 已经在这方面耕耘多年了。
[Lutris][4] 是一款为 Linux 开发的开源游戏平台,提供诸如 Origin、Steam、战网等平台的游戏安装器。它使用 Wine 来运行 Linux 不能支持的程序。
Lutris 近期宣布你可以通过它来运行 Epic 游戏商店。
### Lutris 为 Linux 带来了 Epic 游戏
![Epic Games Store Lutris Linux][5]
[Epic 游戏商城][6] 是一个类似 Steam 的电子游戏分销平台。它目前只支持 Windows 和 macOS。
Lutris 团队付出了大量努力使 Linux 用户可以通过 Lutris 使用 Epic 游戏商城。虽然我不用 Epic 商城,但可以通过 Lutris 在 Linux 上运行 Epic 商城终归是个好消息。
> 好消息! 你现在可以通过 Lutris 安装获得 [@EpicGames][7] 商城在 Linux 下的全功能支持!没有发现任何问题。 <https://t.co/cYmd7PcYdG>[@TimSweeneyEpic][8] 可能会很喜欢 😊
>
> ![pic.twitter.com/7mt9fXt7TH][9]
>
> — Lutris Gaming (@LutrisGaming) [April 17, 2019][10]
作为一名狂热的游戏玩家和 Linux 用户,我立即得到了这个消息,并安装了 Lutris 来运行 Epic 游戏。
**备注:** 我使用 [Ubuntu 19.04][11] 来测试 Linux 环境下的游戏运行情况。
### 通过 Lutris 在 Linux 下使用 Epic 游戏商城
为了在你的 Linux 系统中安装 Epic 游戏商城,请确保你已经安装了 Wine 和 Python 3。接下来[在 Ubuntu 中安装 Wine][12],或者在任何你正在使用的 Linux 发行版上安装也都可以。然后,[从官方网站下载 Lutris][13]。
#### 安装 Epic 游戏商城
Lutris 安装成功后,直接启动它。
当我尝试时,我遇到了一个问题(当我用 GUI 启动时却没有遇到)。当我尝试在命令行输入 `lutris` 来启动时,我发现了下图所示的错误:
![][15]
感谢 Abhishek我了解到了这是一个常见问题 (你可以在 [GitHub][16] 上查看这个问题)。
总之,为了解决这个问题,我需要在命令行中输入以下命令:
```
export LC_ALL=C
```
当你遇到同样的问题时,只要你输入这个命令,就能正常启动 Lutris 了。
**注意**:每次启动 Lutris 时都必须输入这个命令。因此,最好将其添加到 `.bashrc` 文件或环境变量列表中。
上述操作完成后,只要启动并搜索 “Epic Games Store” 会显示以下图片中的内容:
![Epic Games Store in Lutris][17]
在这里,我已经安装过了,所以你将会看到“安装”选项,它会自动询问你是否需要安装需要的包。只需要继续操作就可以成功安装。就是这样,不需要任何黑科技。
#### 玩一款 Epic 游戏商城中的游戏
![Epic Games Store][18]
现在我们已经通过 Lutris 在 Linux 上安装了 Epic 游戏商城,启动它并登录你的账号就可以开始了。
但这真会奏效吗?
*是的Epic 游戏商城可以运行。* **但是并非所有游戏都能玩。**LCTT 译注:莫生气,请看文末的进一步解释!)
好吧我并没有尝试过所有内容但是我拿了一个免费的游戏Transistor —— 一款回合制 ARPG 游戏)来检查它是否工作。
![Transistor Epic Games Store][19]
很不幸,游戏没有启动。当我运行时界面显示了 “Running” 不过什么都没有发生。
到目前为止,我还不知道有什么解决方案 —— 所以如果我找到解决方案,我会尽力让你们知道最新情况。
### 总结
通过 Lutris 这样的工具使 Linux 的游戏场景得到了改善,这终归是个好消息 。不过,仍有许多工作要做。
对于在 Linux 上运行的游戏来说,无障碍运行仍然是一个挑战。其中可能就会有我遇到的这种问题,或者其它类似的。但它正朝着正确的方向发展 —— 即使还存在着一些问题。
你有什么看法吗?你是否也尝试用 Lutris 在 Linux 上启动 Epic 游戏商城?在下方评论让我们看看你的意见。
### 补充
Transistor 实际上有一个原生的 Linux 移植版。到目前为止,我从 Epic 获得的所有游戏都是如此。所以我会试着压下我的郁闷,而因为 Epic 只让你玩你通过他们的商店/启动器购买的游戏,所以在 Linux 机器上用 Lutris 玩这个原生的 Linux 游戏是不可能的。这简直愚蠢极了。Steam 有一个原生的 Linux 启动器虽然不是很理想但它可以工作。GOG 允许你从网站下载购买的内容,可以在你喜欢的任何平台上玩这些游戏。他们的启动器完全是可选的。
我对此非常恼火,因为我在我的 Epic 库中的游戏都是可以在我的笔记本电脑上运行得很好的游戏,当我坐在桌面前时,玩起来很有趣。但是因为那台桌面机是我唯一拥有的 Windows 机器……
我选择使用 Linux 时已经知道会存在兼容性问题,并且我有一个专门用于游戏的 Windows 机器,而我通过 Epic 获得的游戏都是免费的所以我在这里只是表示一下不满。但是他们两个作为最著名的竞争对手Epic 应该有在我的 Linux 机器上玩原生 Linux 移植版的机制。
--------------------------------------------------------------------------------
via: https://itsfoss.com/epic-games-lutris-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[Modrisco](https://github.com/Modrisco)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-7316-1.html
[2]: https://linux.cn/article-10061-1.html
[3]: https://linux.cn/article-10054-1.html
[4]: https://lutris.net/
[5]: https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-lutris-linux-800x450.png
[6]: https://www.epicgames.com/store/en-US/
[7]: https://twitter.com/EpicGames?ref_src=twsrc%5Etfw
[8]: https://twitter.com/TimSweeneyEpic?ref_src=twsrc%5Etfw
[9]: https://pbs.twimg.com/media/D4XkXafX4AARDkW?format=jpg&name=medium
[10]: https://twitter.com/LutrisGaming/status/1118552969816018948?ref_src=twsrc%5Etfw
[11]: https://itsfoss.com/ubuntu-19-04-release-features/
[12]: https://itsfoss.com/install-latest-wine/
[13]: https://lutris.net/downloads/
[14]: https://itsfoss.com/ubuntu-mate-entroware/
[15]: https://itsfoss.com/wp-content/uploads/2019/04/lutris-error.jpg
[16]: https://github.com/lutris/lutris/issues/660
[17]: https://itsfoss.com/wp-content/uploads/2019/04/lutris-epic-games-store-800x520.jpg
[18]: https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-800x450.jpg
[19]: https://itsfoss.com/wp-content/uploads/2019/04/transistor-game-epic-games-store-800x410.jpg
[20]: https://itsfoss.com/skpe-alpha-linux/


@ -1,26 +1,25 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: translator: (tomjlw)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10955-1.html)
[#]: subject: (How to identify same-content files on Linux)
[#]: via: (https://www.networkworld.com/article/3390204/how-to-identify-same-content-files-on-linux.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to identify same-content files on Linux
如何在 Linux 上识别同样内容的文件
======
Copies of files sometimes represent a big waste of disk space and can cause confusion if you want to make updates. Here are six commands to help you identify these files.
> 有时文件副本相当于对硬盘空间的巨大浪费,并会在你想要更新文件时造成困扰。以下是用来识别这些文件的六个命令。
![Vinoth Chandar \(CC BY 2.0\)][1]
In a recent post, we looked at [how to identify and locate files that are hard links][2] (i.e., that point to the same disk content and share inodes). In this post, we'll check out commands for finding files that have the same _content_ , but are not otherwise connected.
在最近的帖子中,我们看了[如何识别并定位硬链接的文件][2](即,指向同一硬盘内容并共享 inode。在本文中我们将查看能找到具有相同*内容*,却不相链接的文件的命令。
Hard links are helpful because they allow files to exist in multiple places in the file system while not taking up any additional disk space. Copies of files, on the other hand, sometimes represent a big waste of disk space and run some risk of causing some confusion if you want to make updates. In this post, we're going to look at multiple ways to identify these files.
硬链接很有用是因为它们能够使文件存放在文件系统内的多个地方却不会占用额外的硬盘空间。另一方面,有时文件副本相当于对硬盘空间的巨大浪费,在你想要更新文件时也会有造成困扰之虞。在本文中,我们将看一下多种识别这些文件的方式。
**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][3] ]**
### 用 diff 命令比较文件
### Comparing files with the diff command
Probably the easiest way to compare two files is to use the **diff** command. The output will show you the differences between the two files. The < and > signs indicate whether the extra lines are in the first (<) or second (>) file provided as arguments. In this example, the extra lines are in backup.html.
可能比较两个文件最简单的方法是使用 `diff` 命令。输出会显示你文件的不同之处。`<` 和 `>` 符号代表在当参数传过来的第一个(`<`)或第二个(`>`)文件中是否有额外的文字行。在这个例子中,在 `backup.html` 中有额外的文字行。
```
$ diff index.html backup.html
@ -30,18 +29,18 @@ $ diff index.html backup.html
> </pre>
```
If diff shows no output, that means the two files are the same.
如果 `diff` 没有输出那代表两个文件相同。
```
$ diff home.html index.html
$
```
The only drawbacks to diff are that it can only compare two files at a time, and you have to identify the files to compare. Some commands we will look at in this post can find the duplicate files for you.
`diff` 的唯一缺点是它一次只能比较两个文件并且你必须指定用来比较的文件,这篇帖子中的一些命令可以为你找到多个重复文件。
### Using checksums
### 使用校验和
The **cksum** (checksum) command computes checksums for files. Checksums are a mathematical reduction of the contents to a lengthy number (like 2819078353 228029). While not absolutely unique, the chance that files that are not identical in content would result in the same checksum is extremely small.
`cksum`checksum 命令计算文件的校验和。校验和是一种将文字内容转化成一个长数字例如2819078353 228029的数学简化。虽然校验和并不是完全独有的但是文件内容不同校验和却相同的概率微乎其微。
```
$ cksum *.html
@ -50,11 +49,11 @@ $ cksum *.html
4073570409 227985 index.html
```
In the example above, you can see how the second and third files yield the same checksum and can be assumed to be identical.
在上述示例中,你可以看到第二个和第三个文件产生了相同的校验和,因此可以认为这两个文件的内容是相同的。
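基于这一点,你也可以用一条简单的管道把校验和相同的文件聚在一起,方便快速发现重复项(这只是一个补充的思路示例,并非原文给出的命令):

```
$ cksum *.html | sort -n                                  # 校验和相同的文件会排在相邻的行
$ cksum *.html | awk '{print $1}' | sort -n | uniq -d     # 只打印出现多次的校验和
```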
### Using the find command
### 使用 find 命令
While the find command doesn't have an option for finding duplicate files, it can be used to search files by name or type and run the cksum command. For example:
虽然 `find` 命令本身并没有寻找重复文件的选项,但它可以用来按名字或类型搜索文件,并对每个文件运行 `cksum` 命令。例如:
```
$ find . -name "*.html" -exec cksum {} \;
@ -63,9 +62,9 @@ $ find . -name "*.html" -exec cksum {} \;
4073570409 227985 ./index.html
```
### Using the fslint command
### 使用 fslint 命令
The **fslint** command can be used to specifically find duplicate files. Note that we give it a starting location. The command can take quite some time to complete if it needs to run through a large number of files. Here's output from a very modest search. Note how it lists the duplicate files and also looks for other issues, such as empty directories and bad IDs.
`fslint` 命令可以专门用来寻找重复文件。注意我们给了它一个起始位置,如果它需要遍历相当多的文件,就要花些时间才能完成。下面是一次小规模搜索的输出,注意它在列出重复文件的同时,还会检查其它问题,比如空目录和坏 ID。
```
$ fslint .
@ -86,15 +85,15 @@ index.html
-------------------------Non Stripped executables
```
You may have to install **fslint** on your system. You will probably have to add it to your search path, as well:
你可能需要在你的系统上安装 `fslint`。你可能也需要将它加入你的命令搜索路径:
```
$ export PATH=$PATH:/usr/share/fslint/fslint
```
### Using the rdfind command
### 使用 rdfind 命令
The **rdfind** command will also look for duplicate (same content) files. The name stands for "redundant data find," and the command is able to determine, based on file dates, which files are the originals — which is helpful if you choose to delete the duplicates, as it will remove the newer files.
`rdfind` 命令也会寻找重复(内容相同的)文件。这个名字的意思是“冗余数据搜寻”,并且它能够基于文件日期判断哪个文件是原件——这在你选择删除副本时很有用,因为它会删除较新的那个文件。
```
$ rdfind ~
@ -111,7 +110,7 @@ Totally, 223 KiB can be reduced.
Now making results file results.txt
```
You can also run this command in "dryrun" (i.e., only report the changes that might otherwise be made).
你也可以在“试运行”(`dryrun`)模式下运行这个命令(换句话说,只报告原本将会做出的改动,而不真正执行)。
```
$ rdfind -dryrun true ~
@ -128,7 +127,7 @@ Removed 9 files due to unique sizes from list.2 files left.
(DRYRUN MODE) Now making results file results.txt
```
The rdfind command also provides options for things such as ignoring empty files (-ignoreempty) and following symbolic links (-followsymlinks). Check out the man page for explanations.
`rdfind` 命令还提供了诸如忽略空文件(`-ignoreempty`)和跟随符号链接(`-followsymlinks`)之类的选项。查看其 man 页面可获得详细解释。
```
-ignoreempty ignore empty files
@ -146,7 +145,7 @@ The rdfind command also provides options for things such as ignoring empty files
-n, -dryrun display what would have been done, but don't do it
```
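这些选项可以组合使用。例如,下面这条命令在“试运行”模式下忽略空文件并跟随符号链接(目录仅为示意):

```
$ rdfind -ignoreempty true -followsymlinks true -dryrun true ~
```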
Note that the rdfind command offers an option to delete duplicate files with the **-deleteduplicates true** setting. Hopefully the command's modest problem with grammar won't irritate you. ;-)
注意 `rdfind` 命令提供了 `-deleteduplicates true` 的设置选项以删除副本。希望这个命令语法上的小问题不会惹恼你。;-)
```
$ rdfind -deleteduplicates true .
@ -154,11 +153,11 @@ $ rdfind -deleteduplicates true .
Deleted 1 files. <==
```
You will likely have to install the rdfind command on your system. It's probably a good idea to experiment with it to get comfortable with how it works.
你可能需要先在系统上安装 `rdfind` 命令。先试验一下它,熟悉它的工作方式,会是个好主意。
### Using the fdupes command
### 使用 fdupes 命令
The **fdupes** command also makes it easy to identify duplicate files and provides a large number of useful options — like **-r** for recursion. In its simplest form, it groups duplicate files together like this:
`fdupes` 命令同样使得识别重复文件变得简单,它还提供了大量有用的选项——例如用于递归查找的 `-r`。在最简单的用法中,它会像这样将重复文件分组列出:
```
$ fdupes ~
@ -173,7 +172,7 @@ $ fdupes ~
/home/shs/hideme.png
```
Here's an example using recursion. Note that many of the duplicate files are important (users' .bashrc and .profile files) and should clearly not be deleted.
这是一个使用递归的例子。注意其中许多重复文件是重要的(例如用户的 `.bashrc` 和 `.profile` 文件),显然不应该被删除。
```
# fdupes -r /home
@ -204,7 +203,7 @@ Here's an example using recursion. Note that many of the duplicate files are imp
/home/shs/PNGs/Sandra_rotated.png
```
The fdupe command's many options are listed below. Use the **fdupes -h** command, or read the man page for more details.
`fdupes` 命令的许多选项列在下面。使用 `fdupes -h` 命令或者阅读其 man 页面可获取详情。
```
-r --recurse recurse
@ -229,15 +228,11 @@ The fdupe command's many options are listed below. Use the **fdupes -h** command
-h --help displays help
```
The fdupes command is another one that you're like to have to install and work with for a while to become familiar with its many options.
`fdupes` 命令是另一个你可能需要安装并使用一段时间才能熟悉其众多选项的命令。
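作为一个补充示例(目录名仅为示意,删除操作请务必小心),熟悉之后你可以把递归和删除选项组合起来使用,`fdupes` 会逐组询问你要保留哪一份:

```
$ fdupes -rd ~/PNGs
```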
### Wrap-up
### 总结
Linux systems provide a good selection of tools for locating and potentially removing duplicate files, along with options for where you want to run your search and what you want to do with duplicate files when you find them.
**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][4] ]**
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
Linux 系统提供了一系列用于定位并(可能)删除重复文件的好工具,你还可以通过相应的选项来指定搜索范围,以及决定如何处理找到的重复文件。
--------------------------------------------------------------------------------
@ -245,8 +240,8 @@ via: https://www.networkworld.com/article/3390204/how-to-identify-same-content-f
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[tomjlw](https://github.com/tomjlw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -258,3 +253,4 @@ via: https://www.networkworld.com/article/3390204/how-to-identify-same-content-f
[4]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,154 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10943-1.html)
[#]: subject: (Using Testinfra with Ansible to verify server state)
[#]: via: (https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state)
[#]: author: (Clement Verna https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert)
使用 Testinfra 和 Ansible 验证服务器状态
======
> Testinfra 是一个功能强大的库,可用于编写测试来验证基础设施的状态。另外,它与 Ansible 和 Nagios 相结合提供了一个实现基础设施即代码IaC的简单解决方案。
![Terminal command prompt on orange background][1]
根据设计,[Ansible][2] 传递机器的期望状态,以确保 Ansible 剧本或角色的内容部署到目标机器上。但是,如果你需要确保所有基础架构更改都在 Ansible 中,该怎么办?或者想随时验证服务器的状态?
[Testinfra][3] 是一个基础架构测试框架,它可以轻松编写单元测试来验证服务器的状态。它是一个 Python 库,使用强大的 [pytest][4] 测试引擎。
### 开始使用 Testinfra
可以使用 Python 包管理器(`pip`)和 Python 虚拟环境轻松安装 Testinfra。
```
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install testinfra
```
Testinfra 也可以通过 Fedora 和 CentOS 的 EPEL 仓库来安装。例如,在 CentOS 7 上,你可以使用以下命令安装它:
```
$ yum install -y epel-release
$ yum install -y python-testinfra
```
#### 一个简单的测试脚本
在 Testinfra 中编写测试很容易。使用你选择的代码编辑器,将以下内容添加到名为 `test_simple.py` 的文件中:
```
import testinfra
def test_os_release(host):
assert host.file("/etc/os-release").contains("Fedora")
def test_sshd_inactive(host):
assert host.service("sshd").is_running is False
```
默认情况下Testinfra 为测试用例提供了一个 `host` 对象,该对象能访问不同的辅助模块。例如,第一个测试使用 `file` 模块来验证主机上文件的内容,第二个测试用例使用 `service` 模块来检查 systemd 服务的状态。
要在本机运行这些测试,请执行以下命令:
```
(venv)$ pytest test_simple.py
================================ test session starts ================================
platform linux -- Python 3.7.3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
rootdir: /home/cverna/Documents/Python/testinfra
plugins: testinfra-3.0.0
collected 2 items
test_simple.py ..
================================ 2 passed in 0.05 seconds ================================
```
有关 Testinfra API 的完整列表,你可以参考[文档][5]。
### Testinfra 和 Ansible
Testinfra 支持的后端之一是 Ansible这意味着 Testinfra 可以直接使用 Ansible 的清单文件和清单中定义的一组机器来对它们进行测试。
我们使用以下清单文件作为示例:
```
[web]
app-frontend01
app-frontend02
[database]
db-backend01
```
我们希望确保我们的 Apache Web 服务器在 `app-frontend01``app-frontend02` 上运行。让我们在名为 `test_web.py` 的文件中编写测试:
```
def check_httpd_service(host):
"""Check that the httpd service is running on the host"""
assert host.service("httpd").is_running
```
要使用 Testinfra 和 Ansible 运行此测试,请使用以下命令:
```
(venv) $ pip install ansible
(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible test_web.py
```
在调用测试时,我们使用 Ansible 清单文件的 `[web]` 组作为目标计算机,并指定我们要使用 Ansible 作为连接后端。
#### 使用 Ansible 模块
Testinfra 还为 Ansible 提供了一个很好的可用于测试的 API。该 Ansible 模块能够在测试中运行 Ansible 动作,并且能够轻松检查动作的状态。
```
def check_ansible_play(host):
"""
Verify that a package is installed using Ansible
package module
"""
assert not host.ansible("package", "name=httpd state=present")["changed"]
```
默认情况下Ansible 的[检查模式][6]已启用,这意味着 Ansible 将报告在远程主机上执行动作时会发生的变化。
### Testinfra 和 Nagios
现在我们可以轻松地运行测试来验证机器的状态,我们可以使用这些测试来触发监控系统上的警报。这是捕获意外的更改的好方法。
Testinfra 提供了与 [Nagios][7] 的集成它是一种流行的监控解决方案。默认情况下Nagios 使用 [NRPE][8] 插件对远程主机进行检查,但使用 Testinfra 可以直接从 Nagios 主控节点上运行测试。
要使 Testinfra 输出与 Nagios 兼容,我们必须在触发测试时使用 `--nagios` 标志。我们还使用 `-qq` 这个 pytest 标志来启用 pytest 的静默模式,这样就不会显示所有测试细节。
```
(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible --nagios -qq test.py
TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.55 seconds
```
Testinfra 是一个功能强大的库可用于编写测试以验证基础设施的状态。另外它与 Ansible 和 Nagios 相结合提供了一个实现基础设施即代码IaC的简单解决方案。它也是在使用 [Molecule][9] 开发 Ansible 角色的过程中添加测试的关键组件。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state
作者:[Clement Verna][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
[2]: https://www.ansible.com/
[3]: https://testinfra.readthedocs.io/en/latest/
[4]: https://pytest.org/
[5]: https://testinfra.readthedocs.io/en/latest/modules.html#modules
[6]: https://docs.ansible.com/ansible/playbooks_checkmode.html
[7]: https://www.nagios.org/
[8]: https://en.wikipedia.org/wiki/Nagios#NRPE
[9]: https://github.com/ansible/molecule

View File

@ -1,98 +1,98 @@
[#]: collector: "lujun9972"
[#]: translator: "zhang5788"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "Getting Started With Docker"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-10940-1.html"
[#]: subject: "Getting Started With Docker"
[#]: via: "https://www.ostechnix.com/getting-started-with-docker/"
[#]: author: "sk https://www.ostechnix.com/author/sk/"
[#]: author: "sk https://www.ostechnix.com/author/sk/"
Docker 入门指南
======
![Getting Started With Docker][1]
在我们的上一个教程中,我们已经了解[**如何在ubuntu上安装Docker**][1],和如何在[**CentOS上安装Docker**][2]。今天我们将会了解Docker的一些基础用法。该教程包含了如何创建一个新的docker容器如何运行该容器如何从现有的docker容器中创建自己的Docker镜像等Docker 的一些基础知识,操作。所有步骤均在Ubuntu 18.04 LTS server 版本下测试通过。
在我们的上一个教程中,我们已经了解[如何在 Ubuntu 上安装 Docker][1],和如何在 [CentOS 上安装 Docker][2]。今天,我们将会了解 Docker 的一些基础用法。该教程包含了如何创建一个新的 Docker 容器,如何运行该容器,如何从现有的 Docker 容器中创建自己的 Docker 镜像等 Docker 的一些基础知识、操作。所有步骤均在 Ubuntu 18.04 LTS server 版本下测试通过。
### 入门指南
在开始指南之前不要混淆Docker镜像和Docker容器这两个概念。在之前的教程中我就解释过Docker镜像是决定Docker容器行为的一个文件Docker容器则是Docker镜像的运行态或停止态。(译者注:在`macOS`下使用docker终端时不需要加`sudo`)
在开始指南之前,不要混淆 Docker 镜像和 Docker 容器这两个概念。在之前的教程中我就解释过Docker 镜像是决定 Docker 容器行为的一个文件Docker 容器则是 Docker 镜像的运行态或停止态。LCTT 译注:在 macOS 下使用 Docker 终端时,不需要加 `sudo`
##### 1. 搜索Docker镜像
#### 1、搜索 Docker 镜像
我们可以从Docker的仓库中获取镜像例如[**Docker hub**][3], 或者自己创建镜像。这里解释一下,`Docker hub`是一个云服务器用来提供给Docker的用户们创建测试,和保存他们的镜像。
我们可以从 Docker 仓库中获取镜像,例如 [Docker hub][3]或者自己创建镜像。这里解释一下Docker hub 是一个云服务器,用来提供给 Docker 的用户们创建、测试,和保存他们的镜像。
`Docker hub`拥有成千上万个Docker 的镜像文件。你可以在这里搜索任何你想要的镜像,通过`docker search`命令
Docker hub 拥有成千上万个 Docker 镜像文件。你可以通过 `docker search`命令在这里搜索任何你想要的镜像
例如,搜索一个基于ubuntu的镜像文件,只需要运行:
例如,搜索一个基于 Ubuntu 的镜像文件,只需要运行:
```shell
$ sudo docker search ubuntu
```
**Sample output:**
示例输出:
![][5]
搜索基于CentOS的镜像运行
搜索基于 CentOS 的镜像,运行:
```shell
$ sudo docker search ubuntu
$ sudo docker search centos
```
搜索AWS的镜像运行
搜索 AWS 的镜像,运行:
```shell
$ sudo docker search aws
```
搜索`wordpress`的镜像:
搜索 WordPress 的镜像:
```shell
$ sudo docker search wordpress
```
`Docker hub`拥有几乎所有种类的镜像,包含操作系统,程序和其他任意的类型,这些你都能在`docker hub`上找到已经构建完的镜像。如果你在搜索时,无法找到你想要的镜像文件,你也可以自己构建一个,将其发布出去,或者仅供你自己使用。
Docker hub 拥有几乎所有种类的镜像,包含操作系统、程序和其他任意的类型,这些你都能在 Docker hub 上找到已经构建完的镜像。如果你在搜索时,无法找到你想要的镜像文件,你也可以自己构建一个,将其发布出去,或者仅供你自己使用。
##### 2. 下载Docker 镜像
#### 2、下载 Docker 镜像
下载`ubuntu`的镜像,你需要在终端运行以下命令:
下载 Ubuntu 的镜像,你需要在终端运行以下命令:
```shell
$ sudo docker pull ubuntu
```
这条命令将会从**Docker hub**下载最近一个版本的ubuntu镜像文件。
这条命令将会从 Docker hub 下载最近一个版本的 Ubuntu 镜像文件。
**Sample output:**
示例输出:
> ```shell
> Using default tag: latest
> latest: Pulling from library/ubuntu
> 6abc03819f3e: Pull complete
> 05731e63f211: Pull complete
> 0bd67c50d6be: Pull complete
> Digest: sha256:f08638ec7ddc90065187e7eabdfac3c96e5ff0f6b2f1762cf31a4f49b53000a5
> Status: Downloaded newer image for ubuntu:latest
> ```
```
Using default tag: latest
latest: Pulling from library/ubuntu
6abc03819f3e: Pull complete
05731e63f211: Pull complete
0bd67c50d6be: Pull complete
Digest: sha256:f08638ec7ddc90065187e7eabdfac3c96e5ff0f6b2f1762cf31a4f49b53000a5
Status: Downloaded newer image for ubuntu:latest
```
![下载docker 镜像][6]
![下载 Docker 镜像][6]
你也可以下载指定版本的ubuntu镜像。运行以下命令:
你也可以下载指定版本的 Ubuntu 镜像。运行以下命令:
```shell
$ docker pull ubuntu:18.04
```
Dokcer允许在任意的宿主机操作系统下下载任意的镜像文件并运行。
Docker 允许在任意的宿主机操作系统下,下载任意的镜像文件,并运行。
例如下载CentOS镜像
例如,下载 CentOS 镜像:
```shell
$ sudo docker pull centos
```
所有下载的镜像文件,都被保存在`/var/lib/docker`文件夹下。(译注:不同操作系统存放的文件夹并不是一致的,具体存放位置请在官方查询)
所有下载的镜像文件,都被保存在 `/var/lib/docker` 文件夹下。(LCTT 译注:不同操作系统存放的文件夹并不是一致的,具体存放位置请在官方查询)
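如果你想知道这些镜像占用了多少磁盘空间,可以用 `du` 查看一下(这只是一个补充示例,需要 root 权限):

```shell
$ sudo du -sh /var/lib/docker
```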
查看已经下载的镜像列表,可以使用以下命令:
@ -100,7 +100,7 @@ $ sudo docker pull centos
$ sudo docker images
```
**输出为:**
示例输出:
```shell
REPOSITORY TAG IMAGE ID CREATED SIZE
@ -109,17 +109,17 @@ centos latest 9f38484d220f 2 months ago
hello-world latest fce289e99eb9 4 months ago 1.84kB
```
正如你看到的那样,我已经下载了三个镜像文件:**ubuntu**, **CentOS**和**Hello-world**.
正如你看到的那样,我已经下载了三个镜像文件:`ubuntu`、`centos` 和 `hello-world`
现在,让我们继续,来看一下如何运行我们刚刚下载的镜像。
##### 3. 运行Docker镜像
#### 3、运行 Docker 镜像
运行一个容器有两种方法。我们可以使用`TAG`或者是`镜像ID`。`TAG`指的是特定的镜像快照。`镜像ID`是指镜像的唯一标识。
运行一个容器有两种方法。我们可以使用标签或者是镜像 ID。标签指的是特定的镜像快照。镜像 ID 是指镜像的唯一标识。
正如上面结果中显示,`latest`是所有镜像的一个标签。**7698f282e524**是Ubuntu docker 镜像的`镜像ID`,**9f38484d220f**是CentOS镜像的`镜像ID`**fce289e99eb9**是hello_world镜像的`镜像ID`
正如上面结果中显示,`latest` 是所有镜像的一个标签。`7698f282e524` 是 Ubuntu Docker 镜像的镜像 ID`9f38484d220f` 是 CentOS 镜像的镜像 ID`fce289e99eb9` 是 hello_world 镜像的镜像 ID。
下载完Docker镜像之后你可以通过下面的命令来使用`TAG`的方式启动:
下载完 Docker 镜像之后,你可以通过下面的命令来使用其标签来启动:
```shell
$ sudo docker run -t -i ubuntu:latest /bin/bash
@ -127,12 +127,12 @@ $ sudo docker run -t -i ubuntu:latest /bin/bash
在这条语句中:
* **-t**: 在该容器中启动一个新的终端
* **-i**: 通过容器中的标准输入流建立交互式连接
* **ubuntu:latest**:带有标签`latest`的ubuntu容器
* **/bin/bash** : 在新的容器中启动`BASH Shell`
* `-t`在该容器中启动一个新的终端
* `-i`通过容器中的标准输入流建立交互式连接
* `ubuntu:latest`:带有标签 `latest` 的 Ubuntu 容器
* `/bin/bash`:在新的容器中启动 BASH Shell
或者,你可以通过`镜像ID`来启动新的容器:
或者,你可以通过镜像 ID 来启动新的容器:
```shell
$ sudo docker run -t -i 7698f282e524 /bin/bash
@ -140,15 +140,15 @@ $ sudo docker run -t -i 7698f282e524 /bin/bash
在这条语句里:
* **7698f282e524** —`镜像ID`
* `7698f282e524` — 镜像 ID
在启动容器之后,将会自动进入容器的`shell`中(注意看命令行的提示符)。
在启动容器之后,将会自动进入容器的 shell 中(注意看命令行的提示符)。
![][7]
Docker 容器的`Shell`
*Docker 容器的 Shell*
如果想要退回到宿主机的终端在这个例子中对我来说就是退回到18.04 LTS并且不中断该容器的执行你可以按下`CTRL+P `,再按下`CTRL+Q`。现在,你就安全的返回到了你的宿主机系统中。需要注意的是,docker 容器仍然在后台运行,我们并没有中断它。
如果想要退回到宿主机的终端(在这个例子中,对我来说,就是退回到 18.04 LTS并且不中断该容器的执行你可以按下 `CTRL+P`,再按下 `CTRL+Q`。现在,你就安全的返回到了你的宿主机系统中。需要注意的是,Docker 容器仍然在后台运行,我们并没有中断它。
可以通过下面的命令来查看正在运行的容器:
@ -156,7 +156,7 @@ Docker 容器的`Shell`
$ sudo docker ps
```
**Sample output:**
示例输出:
```shell
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
@ -165,14 +165,14 @@ CONTAINER ID IMAGE COMMAND CREATED
![][8]
列出正在运行的容器
*列出正在运行的容器*
可以看到:
可以看到
* **32fc32ad0d54** `容器 ID`
* **ubuntu:latest** Docker 镜像
* `32fc32ad0d54` 容器 ID
* `ubuntu:latest` Docker 镜像
需要注意的是,**`容器ID`和Docker `镜像ID`是不同的**
需要注意的是,容器 ID 和 Docker 的镜像 ID 是不同的。
可以通过以下命令查看所有正在运行和停止运行的容器:
@ -192,13 +192,13 @@ $ sudo docker stop <container-id>
$ sudo docker stop 32fc32ad0d54
```
如果想要进入正在运行的容器中,你只需要运行
如果想要进入正在运行的容器中,你只需要运行
```shell
$ sudo docker attach 32fc32ad0d54
```
正如你看到的,**32fc32ad0d54**是一个容器的ID。当你在容器中想要退出时只需要在容器内的终端中输入命令
正如你看到的,`32fc32ad0d54` 是一个容器的 ID。当你在容器中想要退出时只需要在容器内的终端中输入命令
```shell
# exit
@ -210,46 +210,44 @@ $ sudo docker attach 32fc32ad0d54
$ sudo docker ps
```
##### 4. 构建自己的Docker镜像
#### 4、构建自己的 Docker 镜像
Docker不仅仅可以下载运行在线的容器你也可以创建你的自己的容器。
Docker 不仅仅可以下载运行在线的容器,你也可以创建你的自己的容器。
想要创建自己的Docker镜像你需要先运行一个你已经下载完的容器
想要创建自己的 Docker 镜像,你需要先运行一个你已经下载完的容器:
```shell
$ sudo docker run -t -i ubuntu:latest /bin/bash
```
现在,你运行了一个容器,并且进入了该容器。
现在,你运行了一个容器,并且进入了该容器。然后,在该容器安装任意一个软件或做任何你想做的事情。
然后,在该容器安装任意一个软件或做任何你想做的事情
例如,我们在容器中安装一个 Apache web 服务器
例如,我们在容器中安装一个**Apache web 服务器**。
当你完成所有的操作安装完所有的软件之后你可以执行以下的命令来构建你自己的Docker镜像
当你完成所有的操作,安装完所有的软件之后,你可以执行以下的命令来构建你自己的 Docker 镜像:
```shell
# apt update
# apt install apache2
```
同样的,安装和测试所有的你想要安装的软件在容器中
同样的,在容器中安装和测试你想要安装的所有软件。
当你安装完毕之后,返回的宿主机的终端。记住,不要关闭容器。想要返回到宿主机的host而不中断容器。请按下CTRL+P ,再按下CTRL+Q
当你安装完毕之后,返回宿主机的终端。记住,不要关闭容器。想要返回到宿主机而不中断容器,请按下 `CTRL+P`,再按下 `CTRL+Q`
从你的宿主机的终端中运行以下命令如寻找容器的ID
从你的宿主机的终端中,运行以下命令以寻找容器的 ID
```shell
$ sudo docker ps
```
最后从一个正在运行的容器中创建Docker镜像
最后,从一个正在运行的容器中创建 Docker 镜像:
```shell
$ sudo docker commit 3d24b3de0bfc ostechnix/ubuntu_apache
```
**输出为:**
示例输出:
```shell
sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962
@ -257,17 +255,17 @@ sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962
在这里:
* **3d24b3de0bfc** — 指ubuntu容器的ID。
* **ostechnix** — 我们创建的的名称
* **ubuntu_apache** — 我们创建的镜像
* `3d24b3de0bfc` — 指 Ubuntu 容器的 ID。
* `ostechnix` — 创建该镜像的用户的名称
* `ubuntu_apache` — 我们所创建的镜像的名称
让我们检查一下我们新创建的docker镜像
让我们检查一下我们新创建的 Docker 镜像:
```shell
$ sudo docker images
```
**输出为:**
示例输出:
```shell
REPOSITORY TAG IMAGE ID CREATED SIZE
@ -279,7 +277,7 @@ hello-world latest fce289e99eb9 4 months ago
![][9]
列出所有的docker镜像
*列出所有的 Docker 镜像*
正如你看到的,这个新的镜像就是我们刚刚在本地系统上从运行的容器上创建的。
@ -289,9 +287,9 @@ hello-world latest fce289e99eb9 4 months ago
$ sudo docker run -t -i ostechnix/ubuntu_apache /bin/bash
```
##### 5. 移除容器
#### 5、删除容器
如果你在docker上的工作已经全部完成你就可以删除哪些你不需要的容器。
如果你在 Docker 上的工作已经全部完成,你就可以删除那些你不需要的容器。
想要删除一个容器,首先,你需要停止该容器。
@ -301,14 +299,14 @@ $ sudo docker run -t -i ostechnix/ubuntu_apache /bin/bash
$ sudo docker ps
```
**输出为:**
示例输出:
```shell
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d24b3de0bfc ubuntu:latest "/bin/bash" 28 minutes ago Up 28 minutes goofy_easley
```
使用`容器ID`来停止该容器:
使用容器 ID 来停止该容器:
```shell
$ sudo docker stop 3d24b3de0bfc
@ -328,7 +326,7 @@ $ sudo docker rm 3d24b3de0bfc
$ sudo docker container prune
```
按下"**Y**",来确认你的操作
按下 `Y`,来确认你的操作:
```sehll
WARNING! This will remove all stopped containers.
@ -340,11 +338,11 @@ Deleted Containers:
Total reclaimed space: 5B
```
这个命令仅支持最新的docker。(译者注仅支持1.25及以上版本的Docker)
这个命令仅支持最新的 Docker。LCTT 译注:仅支持 1.25 及以上版本的 Docker
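如果不确定你的 Docker 是否满足这个版本要求,可以先查看一下版本号(仅作参考):

```shell
$ sudo docker --version
```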
##### 6. 删除Docker镜像
#### 6、删除 Docker 镜像
当你移除完不要的Docker容器后你也可以删除你不需要的Docker镜像。
当你删除了不要的 Docker 容器后,你也可以删除你不需要的 Docker 镜像。
列出已经下载的镜像:
@ -352,7 +350,7 @@ Total reclaimed space: 5B
$ sudo docker images
```
**输出为:**
示例输出:
```shell
REPOSITORY TAG IMAGE ID CREATED SIZE
@ -364,13 +362,13 @@ hello-world latest fce289e99eb9 4 months ago
由上面的命令可以知道,在本地的系统中存在三个镜像。
使用`镜像ID`来删除镜像。
使用镜像 ID 来删除镜像。
```shell
$ sudo docker rmi ce5aa74a48f1
```
**输出为:**
示例输出:
```shell
Untagged: ostechnix/ubuntu_apache:latest
@ -378,17 +376,17 @@ Deleted: sha256:ce5aa74a48f1e01ea312165887d30691a59caa0d99a2a4aa5116ae124f02f962
Deleted: sha256:d21c926f11a64b811dc75391bbe0191b50b8fe142419f7616b3cee70229f14cd
```
**解决问题**
#### 解决问题
Docker禁止我们删除一个还在被容器使用的镜像。
Docker 禁止我们删除一个还在被容器使用的镜像。
例如,当我试图删除Docker镜像**b72889fa879c**时,我只能获得一个错误提示:
例如,当我试图删除 Docker 镜像 `b72889fa879c` 时,我只能获得一个错误提示:
```shell
Error response from daemon: conflict: unable to delete b72889fa879c (must be forced) - image is being used by stopped container dde4dd285377
```
这是因为这个Docker镜像正在被一个容器使用。
这是因为这个 Docker 镜像正在被一个容器使用。
所以,我们来检查一个正在运行的容器:
@ -396,19 +394,19 @@ Error response from daemon: conflict: unable to delete b72889fa879c (must be for
$ sudo docker ps
```
**输出为:**
示例输出:
![][10]
注意,现在并没有正在运行的容器!!!
查看一下所有的容器(包含所有的正在运行和已经停止的容器)
查看一下所有的容器(包含所有的正在运行和已经停止的容器)
```shell
$ sudo docker ps -a
```
**输出为:**
示例输出:
![][11]
@ -420,9 +418,9 @@ $ sudo docker pa -a
$ sudo docker rm 12e892156219
```
我们仍然使用容器ID来删除这些容器。
我们仍然使用容器 ID 来删除这些容器。
当我们删除了所有使用该镜像的容器之后我们就可以删除Docker的镜像了。
当我们删除了所有使用该镜像的容器之后,我们就可以删除 Docker 的镜像了。
例如:
@ -438,19 +436,7 @@ $ sudo docker images
想要知道更多的细节,请参阅本指南末尾给出的官方资源的链接或者在评论区进行留言。
或者下载以下的关于Docker的电子书来了解更多。
* **Download** [**Free eBook: “Docker Containerization Cookbook”**][12]
* **Download** [**Free Guide: “Understanding Docker”**][13]
* **Download** [**Free Guide: “What is Docker and Why is it So Popular?”**][14]
* **Download** [**Free Guide: “Introduction to Docker”**][15]
* **Download** [**Free Guide: “Docker in Production”**][16]
这就是全部的教程了希望你可以了解Docker的一些基础用法。
这就是全部的教程了,希望你可以了解 Docker 的一些基础用法。
更多的教程马上就会到来,敬请关注。
@ -461,7 +447,7 @@ via: https://www.ostechnix.com/getting-started-with-docker/
作者:[sk][a]
选题:[lujun9972][b]
译者:[zhang5788](https://github.com/zhang5788)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -482,4 +468,4 @@ via: https://www.ostechnix.com/getting-started-with-docker/
[13]: https://ostechnix.tradepub.com/free/w_pacb32/prgm.cgi?a=1
[14]: https://ostechnix.tradepub.com/free/w_pacb31/prgm.cgi?a=1
[15]: https://ostechnix.tradepub.com/free/w_pacb29/prgm.cgi?a=1
[16]: https://ostechnix.tradepub.com/free/w_pacb28/prgm.cgi?a=1
[16]: https://ostechnix.tradepub.com/free/w_pacb28/prgm.cgi?a=1

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (chen-ni)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10961-1.html)
[#]: subject: (When IoT systems fail: The risk of having bad IoT data)
[#]: via: (https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk-of-having-bad-iot-data.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
@ -10,47 +10,45 @@
当物联网系统出现故障:使用低质量物联网数据的风险
======
伴随着物联网设备使用量的增长,这些设备产生的数据可以让消费者节约巨大的开支,也给商家带来新的机遇。但是当故障不可避免地出现的时候,会发生什么呢?
> 伴随着物联网设备使用量的增长,这些设备产生的数据可以让消费者节约巨大的开支,也给商家带来新的机遇。但是当故障不可避免地出现的时候,会发生什么呢?
![Oonal / Getty Images][1]
你可以去看任何统计数字,很明显物联网正在走进个人生活和私人生活的方方面面。这种增长虽然有不少好处,但是也带来了新的风险。一个很重要的问题是,出现问题的时候谁来负责呢?
不管你看的是什么统计数字,很明显物联网正在走进个人和私人生活的方方面面。这种增长虽然有不少好处,但是也带来了新的风险。一个很重要的问题是,出现问题的时候谁来负责呢?
也许最大的问题出在基于物联网数据进行的个性化营销以及定价策略上。[保险公司长期以来致力于寻找利用物联网数据的最佳方式][2],我去年写过家庭财产保险公司是如何开始利用物联网传感器减少水灾带来的损失的。一些公司正在研究保险公司向消费者竞标的可能性,这种业务基于智能家居数据所揭示的风险的高低。
也许最大的问题出在基于物联网数据进行的个性化营销以及定价策略上。[保险公司长期以来致力于寻找利用物联网数据的最佳方式][2],我去年写过家庭财产保险公司是如何开始利用物联网传感器减少水灾带来的损失的。一些公司正在研究保险公司竞购消费者的可能性:这种业务基于智能家居数据所揭示的风险的高低。
但是最大的进步出现在汽车保险领域。许多汽车保险公司已经可以让客户在车辆上安装追踪设备,如果数据证明他们的驾驶习惯良好就可以获取保险折扣。
**[ 延伸阅读:[保险公司终于有了一个利用智能家居物联网的好办法][3] ]**
- 延伸阅读:[保险公司终于有了一个利用智能家居物联网的好办法][3]
### **UBI 车险的崛起**
### UBI 车险的崛起
UBI基于使用的保险车险是一种“按需付费”的业务可以通过追踪速度、位置以及其他因素来评估风险并计算车险保费。到2020年预计有[5000万美国司机][4]会加入到 UBI 车险的项目中。
UBI<ruby>基于使用的保险<rt>usage-based insurance</rt></ruby>)车险是一种“按需付费”的业务,可以通过追踪速度、位置,以及其他因素来评估风险并计算车险保费。到 2020 年,预计有 [5000 万美国司机][4]会加入到 UBI 车险的项目中。
不出所料,保险公司对 UBI 车险青睐有加,因为 UBI 车险可以帮助他们更加精确地计算风险。事实上,[AIG 爱尔兰已经在尝试让国家向 25 岁以下的司机强制推行 UBI 车险][5]。并且,被认定为驾驶习惯良好的司机自然也很乐意节省一笔费用。当然也有反对的声音了,大多数是来自于隐私权倡导者,以及会因此支付更多费用的群体。
### **出了故障会发生什么?**
### 出了故障会发生什么?
但是还有一个更加令人担忧的潜在问题:当物联网设备提供的数据有错误,或者在传输过程中出了问题会发生什么?因为尽管有自动化程序错误检查等等,还是不可避免地会偶尔发生一些故障。
但是还有一个更加令人担忧的潜在问题:当物联网设备提供的数据有错误,或者在传输过程中出了问题会发生什么?因为尽管有自动化程序错误检查等等,还是不可避免地会偶尔发生一些故障。
不幸的是,这并不是一个理论上某天会给谨慎的司机不小心多扣几块钱保费的问题。这已经是一个会带来严重后果的现实问题。就像[保险行业仍然没有想清楚谁应该“拥有”面向客户的物联网设备产生的数据][6]一样,我们也不清楚谁将对这些数据所带来的问题负责。
不幸的是,这并不是一个理论上某天会给细心的司机不小心多扣几块钱保费的问题。这已经是一个会带来严重后果的现实问题。就像[保险行业仍然没有想清楚谁应该“拥有”面向客户的物联网设备产生的数据][6]一样,我们也不清楚谁将对这些数据所带来的问题负责。
计算机"故障"据说曾导致赫兹的租车被误报为被盗(虽然在这个例子中这并不是一个严格意义上的物联网问题),并且导致无辜的租车人被逮捕并扣留。结果呢?刑事指控,多年的诉讼官司,以及舆论的指责。非常强烈的舆论指责。
计算机“故障”据说曾导致赫兹的出租车辆被误报为被盗(虽然在这个例子中这并不是一个严格意义上的物联网问题),并且导致无辜的租车人被逮捕并扣留。结果呢?刑事指控,多年的诉讼官司,以及舆论的指责。非常强烈的舆论指责。
我们非常容易想象一些类似的情况,比如说一个物联网传感器出了故障,然后报告说某辆车超速了,然而事实上并没有超速。想想为这件事打官司的麻烦吧,或者想想和你的保险公司如何争执不下。
(当然,这个问题还有另外一面:消费者可能会想办法篡改他们的物联网设备上的数据,以获得更低的费率或者转移事故责任。我们同样也没有可行的办法来应对 _这个问题_ 。)
(当然,这个问题还有另外一面:消费者可能会想办法篡改他们的物联网设备上的数据,以获得更低的费率或者转移事故责任。我们同样也没有可行的办法来应对*这个问题*。)
### **政府监管是否有必要**
### 政府监管是否有必要
考虑到这些问题的潜在影响,以及所涉及公司对处理这些问题的无动于衷,我们似乎有理由猜想政府干预的必要性。
这可能是众议员 Bob Latta俄亥俄州共和党[重新引入 SMART IOT物联网现代应用、研究及趋势的现状法案][7]背后的一个动机。这项由 Latta 和众议员 Peter Welch佛蒙特州民主党领导的两党合作物联网工作组提出的[法案][8],于去年秋天通过众议院,但被参议院驳回了。商务部需要研究物联网行业的状况,并在两年后向众议院能源与商业部和参议院商务委员会报告。
这可能是美国众议员 Bob Latta俄亥俄州共和党[重新引入 SMART IOT物联网现代应用、研究及趋势的现状法案][7]背后的一个动机。这项由 Latta 和美国众议员 Peter Welch佛蒙特州民主党领导的两党合作物联网工作组提出的[法案][8],于去年秋天通过美国众议院,但被美国参议院驳回了。美国商务部需要研究物联网行业的状况,并在两年后向美国众议院能源与商业部和美国参议院商务委员会报告。
Latta 在一份声明中表示“由于预计会有数万亿美元的经济影响我们需要考虑物联网所带来的的政策机遇和挑战。SMART IoT 法案会让人们更容易理解政府在物联网政策上的做法、可以改进的地方,以及联邦政策如何影响尖端技术的研究和发明。”
这项研究受到了欢迎,但该法案甚至可能不会被通过。即便通过了,物联网在两年的等待时间里也可能会翻天覆地,让政府还是无法跟上。
加入 Network World 的[Facebook 社区][9] 和 [LinkedIn 社区][10],参与最前沿话题的讨论。
Latta 在一份声明中表示“由于预计会有数万亿美元的经济影响我们需要考虑物联网所带来的的政策机遇和挑战。SMART IoT 法案会让人们更容易理解美国政府在物联网政策上的做法、可以改进的地方,以及美国联邦政策如何影响尖端技术的研究和发明。”
这项研究受到了欢迎,但该法案甚至可能不会被通过。即便通过了,物联网在两年的等待时间里也可能会翻天覆地,让美国政府还是无法跟上。
--------------------------------------------------------------------------------
@ -59,7 +57,7 @@ via: https://www.networkworld.com/article/3396230/when-iot-systems-fail-the-risk
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[chen-ni](https://github.com/chen-ni)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,44 +1,44 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10945-1.html)
[#]: subject: (Securing telnet connections with stunnel)
[#]: via: (https://fedoramagazine.org/securing-telnet-connections-with-stunnel/)
[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/)
Securing telnet connections with stunnel
使用 stunnel 保护 telnet 连接
======
![][1]
Telnet is a client-server protocol that connects to a remote server through TCP over port 23. Telnet does not encrypt data and is considered insecure and passwords can be easily sniffed because data is sent in the clear. However there are still legacy systems that need to use it. This is where **stunnel** comes to the rescue.
Telnet 是一种客户端-服务端协议,通过 TCP 的 23 端口连接到远程服务器。Telnet 并不加密数据,因此它被认为是不安全的,因为数据是以明文形式发送的,所以密码很容易被嗅探。但是,仍有老旧系统需要使用它。这就是用到 **stunnel** 的地方。
Stunnel is designed to add SSL encryption to programs that have insecure connection protocols. This article shows you how to use it, with telnet as an example.
stunnel 旨在为使用不安全连接协议的程序增加 SSL 加密。本文将以 telnet 为例介绍如何使用它。
### Server Installation
### 服务端安装
Install stunnel along with the telnet server and client [using sudo][2]:
[使用 sudo][2] 安装 stunnel 以及 telnet 的服务端和客户端:
```
sudo dnf -y install stunnel telnet-server telnet
```
Add a firewall rule, entering your password when prompted:
添加防火墙规则,在提示时输入你的密码:
```
firewall-cmd --add-service=telnet --perm
firewall-cmd --reload
```
Next, generate an RSA private key and an SSL certificate:
接下来,生成 RSA 私钥和 SSL 证书:
```
openssl genrsa 2048 > stunnel.key
openssl req -new -key stunnel.key -x509 -days 90 -out stunnel.crt
```
You will be prompted for the following information one line at a time. When asked for _Common Name_ you must enter the correct host name or IP address, but everything else you can skip through by hitting the **Enter** key.
系统将逐行提示你输入以下信息。当询问 `Common Name`(通用名称)时,你必须输入正确的主机名或 IP 地址,其他内容都可以按回车键跳过。
```
You are about to be asked to enter information that will be
@ -57,14 +57,14 @@ Common Name (eg, your name or your server's hostname) []:
Email Address []
```
Merge the RSA key and SSL certificate into a single _.pem_ file, and copy that to the SSL certificate directory:
将 RSA 密钥和 SSL 证书合并到单个 `.pem` 文件中,并将其复制到 SSL 证书目录:
```
cat stunnel.crt stunnel.key > stunnel.pem
sudo cp stunnel.pem /etc/pki/tls/certs/
```
Now its time to define the service and the ports to use for encrypting your connection. Choose a port that is not already in use. This example uses port 450 for tunneling telnet. Edit or create the _/etc/stunnel/telnet.conf_ file:
现在可以定义服务和用于加密连接的端口了。选择尚未使用的端口。此例使用 450 端口进行隧道传输 telnet。编辑或创建 `/etc/stunnel/telnet.conf`
```
cert = /etc/pki/tls/certs/stunnel.pem
@ -80,15 +80,15 @@ accept = 450
connect = 23
```
The **accept** option is the port the server will listen to for incoming telnet requests. The **connect** option is the internal port the telnet server listens to.
`accept` 选项是服务器用来监听传入 telnet 请求的端口。`connect` 选项是 telnet 服务器内部监听的端口。
Next, make a copy of the systemd unit file that allows you to override the packaged version:
接下来,创建一个 systemd 单元文件的副本来覆盖原来的版本:
```
sudo cp /usr/lib/systemd/system/stunnel.service /etc/systemd/system
```
Edit the _/etc/systemd/system/stunnel.service_ file to add two lines. These lines create a chroot jail for the service when it starts.
编辑 `/etc/systemd/system/stunnel.service` 来添加两行。这些行在启动时为服务创建 chroot 监狱。
```
[Unit]
@ -106,49 +106,49 @@ ExecStartPre=/usr/bin/chown -R nobody:nobody /var/run/stunnel
WantedBy=multi-user.target
```
Next, configure SELinux to listen to telnet on the new port you just specified:
接下来,配置 SELinux 以在你刚刚指定的新端口上监听 telnet
```
sudo semanage port -a -t telnetd_port_t -p tcp 450
```
Finally, add a new firewall rule:
最后,添加新的防火墙规则:
```
firewall-cmd --add-port=450/tcp --perm
firewall-cmd --reload
```
Now you can enable and start telnet and stunnel.
现在你可以启用并启动 telnet 和 stunnel。
```
systemctl enable telnet.socket stunnel@telnet.service --now
```
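服务启动后,你可以顺手确认一下 stunnel 是否已经在监听前面配置的端口(这里沿用上文示例中的 450 端口,命令仅供参考):

```
sudo ss -tlnp | grep 450
```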
A note on the _systemctl_ command is in order. Systemd and the stunnel package provide an additional [template unit file][3] by default. The template lets you drop multiple configuration files for stunnel into _/etc/stunnel_ , and use the filename to start the service. For instance, if you had a _foobar.conf_ file, you could start that instance of stunnel with _systemctl start[stunnel@foobar.service][4]_ , without having to write any unit files yourself.
要注意 `systemctl` 命令是有顺序的。systemd 和 stunnel 包默认提供额外的[模板单元文件][3]。该模板允许你将 stunnel 的多个配置文件放到 `/etc/stunnel` 中,并使用文件名启动该服务。例如,如果你有一个 `foobar.conf` 文件,那么可以使用 `systemctl start stunnel@foobar.service` 启动该 stunnel 实例,而无需自己编写任何单元文件。
If you want, you can set this stunnel template service to start on boot:
如果需要,可以将此 stunnel 模板服务设置为在启动时启动:
```
systemctl enable stunnel@telnet.service
```
### Client Installation
### 客户端安装
This part of the article assumes you are logged in as a normal user ([with sudo privileges][2]) on the client system. Install stunnel and the telnet client:
本文的这部分假设你在客户端系统上以普通用户([拥有 sudo 权限][2])身份登录。安装 stunnel 和 telnet 客户端:
```
dnf -y install stunnel telnet
```
Copy the _stunnel.pem_ file from the remote server to your client _/etc/pki/tls/certs_ directory. In this example, the IP address of the remote telnet server is 192.168.1.143.
`stunnel.pem` 从远程服务器复制到客户端的 `/etc/pki/tls/certs` 目录。在此例中,远程 telnet 服务器的 IP 地址为 `192.168.1.143`
```
sudo scp myuser@192.168.1.143:/etc/pki/tls/certs/stunnel.pem
/etc/pki/tls/certs/
```
Create the _/etc/stunnel/telnet.conf_ file:
创建 `/etc/stunnel/telnet.conf`
```
cert = /etc/pki/tls/certs/stunnel.pem
@ -158,15 +158,15 @@ accept=450
connect=192.168.1.143:450
```
The **accept** option is the port that will be used for telnet sessions. The **connect** option is the IP address of your remote server and the port its listening on.
`accept` 选项是用于 telnet 会话的端口。`connect` 选项是你远程服务器的 IP 地址以及监听的端口。
Next, enable and start stunnel:
接下来,启用并启动 stunnel
```
systemctl enable stunnel@telnet.service --now
```
Test your connection. Since you have a connection established, you will telnet to _localhost_ instead of the hostname or IP address of the remote telnet server:
测试你的连接。由于有一条已建立的连接,你会 `telnet``localhost` 而不是远程 telnet 服务器的主机名或者 IP 地址。
```
[user@client ~]$ telnet localhost 450
@ -189,8 +189,8 @@ via: https://fedoramagazine.org/securing-telnet-connections-with-stunnel/
作者:[Curt Warfield][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -199,4 +199,3 @@ via: https://fedoramagazine.org/securing-telnet-connections-with-stunnel/
[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/stunnel-816x345.jpg
[2]: https://fedoramagazine.org/howto-use-sudo/
[3]: https://fedoramagazine.org/systemd-template-unit-files/
[4]: mailto:stunnel@foobar.service

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10935-1.html)
[#]: subject: (4 Ways to Run Linux Commands in Windows)
[#]: via: (https://itsfoss.com/run-linux-commands-in-windows/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
@ -10,13 +10,13 @@
在 Windows 中运行 Linux 命令的 4 种方法
======
_ **简介:想要使用 Linux 命令,但又不想离开 Windows ?以下是在 Windows 中运行 Linux bash 命令的几种方法。** _
> 想要使用 Linux 命令,但又不想离开 Windows ?以下是在 Windows 中运行 Linux bash 命令的几种方法。
如果你在课程中正在学习 shell 脚本,那么需要使用 Linux 命令来练习命令和脚本。
如果你在课程中正在学习 shell 脚本,那么需要使用 Linux 命令来练习命令和脚本。
你的学校实验室可能安装了 Linux但是你个人没有 [Linux 的笔记本][1],而是像其他人一样的 Windows 计算机。你的作业需要运行 Linux 命令,你也想想知道如何在 Windows 上运行 Bash 命令和脚本。
你的学校实验室可能安装了 Linux但是你自己没有安装了 [Linux 的笔记本电脑][1],而是像其他人一样的 Windows 计算机。你的作业需要运行 Linux 命令,你或许想知道如何在 Windows 上运行 Bash 命令和脚本。
你可以[在双启动模式下同时安装 Windows 和 Linux][2]。此方法能让你在启动计算机时选择 Linux 或 Windows。但是为了运行 Linux 命令而单独使用分区的麻烦可能不适合所有人。
你可以[在双启动模式下同时安装 Windows 和 Linux][2]。此方法能让你在启动计算机时选择 Linux 或 Windows。但是为了运行 Linux 命令而使用单独分区的麻烦可能不适合所有人。
你也可以[使用在线 Linux 终端][3],但你的作业无法保存。
@ -24,32 +24,31 @@ _ **简介:想要使用 Linux 命令,但又不想离开 Windows ?以下是
### 在 Windows 中使用 Linux 命令
![][4]
![](https://img.linux.net.cn/data/attachment/album/201906/04/093809hlz2tblfzt7mbwwl.jpg)
作为一个热心的 Linux 用户和推广者,我希望看到越来越多的人使用“真正的” Linux但我知道有时候这不是优先考虑的问题。如果你只是想练习 Linux 来通过考试,可以使用这些方法之一在 Windows 上运行 Bash 命令。
#### 1\. 在 Windows 10 上使用 Linux Bash Shell
#### 1在 Windows 10 上使用 Linux Bash Shell
你是否知道可以在 Windows 10 中运行 Linux 发行版? [Windows 的 Linux 子系统 WSL][5] 能让你在 Windows 中运行 Linux。即将推出的 WSL 版本将使用 Windows 内部的真正 Linux 内核。
你是否知道可以在 Windows 10 中运行 Linux 发行版? [Windows 的 Linux 子系统 WSL][5] 能让你在 Windows 中运行 Linux。即将推出的 WSL 版本将在 Windows 内部使用真正 Linux 内核。
此 WSL 在 Windows 上也称为 Bash它作为一个常规的 Windows 应用运行,并提供了一个命令行模式的 Linux 发行版。不要害怕命令行模式,因为你的目的是运行 Linux 命令。这就是你所需要的。
此 WSL 也称为 Bash on Windows,它作为一个常规的 Windows 应用运行,并提供了一个命令行模式的 Linux 发行版。不要害怕命令行模式,因为你的目的是运行 Linux 命令。这就是你所需要的。
![Ubuntu Linux inside Windows][6]
你可以在 Windows 应用商店中找到一些流行的 Linux 发行版,如 Ubuntu、Kali Linux、openSUSE 等。你只需像任何其他 Windows 应用一样下载和安装它。安装后,你可以运行所需的所有 Linux 命令。
![Linux distributions in Windows 10 Store][8]
请参考教程:[在 Windows 上安装 Linux bash shell][9]。
#### 2\. 使用 Git Bash 在 Windows 上运行 Bash 命令
#### 2使用 Git Bash 在 Windows 上运行 Bash 命令
你可能知道 [Git][10] 是什么。它是由 [Linux 创建者 Linus Torvalds][11] 开发的版本控制系统。
你可能知道 [Git][10] 是什么。它是由 [Linux 创建者 Linus Torvalds][11] 开发的版本控制系统。
[Git for Windows][12] 是一组工具,能让你在命令行和图形界面中使用 Git。Git for Windows 中包含的工具之一是 Git Bash。
Git Bash 为 Git 命令行提供了仿真层。除了 Git 命令Git Bash 还支持许多 Bash 程序,如 ssh、scp、cat、find 等。
Git Bash 为 Git 命令行提供了仿真层。除了 Git 命令Git Bash 还支持许多 Bash 程序,如 `ssh``scp``cat``find` 等。
![Git Bash][13]
@ -57,21 +56,21 @@ Git Bash 为 Git 命令行提供了仿真层。除了 Git 命令Git Bash 还
你可以从其网站免费下载和安装 Git for Windows 工具来在 Windows 中安装 Git Bash。
[下载 Git for Windows][12]
- [下载 Git for Windows][12]
#### 3\. 使用 Cygwin 在 Windows 中使用 Linux 命令
#### 3使用 Cygwin 在 Windows 中使用 Linux 命令
如果要在 Windows 中运行 Linux 命令,那么 Cygwin 是一个推荐的工具。Cygwin 创建于 1995 年,旨在提供一个原生运行于 Windows 中的 POSIX 兼容环境。Cygwin 是由 Red Hat 员工和许多其他志愿者维护的免费开源软件。
如果要在 Windows 中运行 Linux 命令,那么 Cygwin 是一个推荐的工具。Cygwin 创建于 1995 年,旨在提供一个原生运行于 Windows 中的 POSIX 兼容环境。Cygwin 是由 Red Hat 员工和许多其他志愿者维护的自由开源软件。
二十年来Windows 用户使用 Cygwin 来运行和练习 Linux/Bash 命令。十多年前,我甚至用 Cygwin 来学习 Linux 命令。
![Cygwin | Image Credit][14]
![Cygwin][14]
你可以从下面的官方网站下载 Cygwin。我还建议你参考这个 [Cygwin 备忘录][15]来开始使用。
[下载 Cygwin][16]
- [下载 Cygwin][16]
#### 4\. 在虚拟机中使用 Linux
#### 4在虚拟机中使用 Linux
另一种方法是使用虚拟化软件并在其中安装 Linux。这样你可以在 Windows 中安装 Linux 发行版(带有图形界面)并像常规 Windows 应用一样运行它。
@ -83,7 +82,7 @@ Git Bash 为 Git 命令行提供了仿真层。除了 Git 命令Git Bash 还
你可以按照[本教程学习如何在 VirtualBox 中安装 Linux][20]。
**总结**
### 总结
运行 Linux 命令的最佳方法是使用 Linux。当选择不安装 Linux 时,这些工具能让你在 Windows 上运行 Linux 命令。都试试看,看哪种适合你。
@ -94,7 +93,7 @@ via: https://itsfoss.com/run-linux-commands-in-windows/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,171 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10947-1.html)
[#]: subject: (A deeper dive into Linux permissions)
[#]: via: (https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
更深入地了解 Linux 权限
======
> 在 Linux 上查看文件权限时,有时你会看到的不仅仅是普通的 r、w、x 和 -。如何更清晰地了解这些字符试图告诉你什么以及这些权限如何工作?
![Sandra Henry-Stocker](https://img.linux.net.cn/data/attachment/album/201906/07/150718q09wnve6ne6v9063.jpg)
在 Linux 上查看文件权限时,有时你会看到的不仅仅是普通的 `r`、`w`、`x` 和 `-`。除了在所有者、组和其他中看到 `rwx` 之外,你可能会看到 `s` 或者 `t`,如下例所示:
```
drwxrwsrwt
```
要进一步明确的方法之一是使用 `stat` 命令查看权限。`stat` 的第四行输出以八进制和字符串格式显示文件权限:
```
$ stat /var/mail
File: /var/mail
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: 801h/2049d Inode: 1048833 Links: 2
Access: (3777/drwxrwsrwt) Uid: ( 0/ root) Gid: ( 8/ mail)
Access: 2019-05-21 19:23:15.769746004 -0400
Modify: 2019-05-21 19:03:48.226656344 -0400
Change: 2019-05-21 19:03:48.226656344 -0400
Birth: -
```
这个输出提示我们,分配给文件权限的位数超过 9 位。事实上,有 12 位。这些额外的三位提供了一种分配超出通常的读、写和执行权限的方法 - 例如,`3777`(二进制 `011111111111`)表示使用了两个额外的设置。
该值的第一个 `1` (第二位)表示 SGID设置 GID为运行文件而赋予临时权限或以该关联组的权限来使用目录。
```
011111111111
^
```
SGID 将正在使用该文件的用户作为该组成员之一而分配临时权限。
第二个 `1`(第三位)是“粘连”位。它确保*只有*文件的所有者能够删除或重命名该文件或目录。
```
011111111111
^
```
如果权限是 `7777` 而不是 `3777`,我们知道 SUID设置 UID字段也已设置。
```
111111111111
^
```
SUID 将正在使用该文件的用户作为文件拥有者分配临时权限。
至于我们上面看到的 `/var/mail` 目录,所有用户都需要访问,因此需要一些特殊值来提供它。
但现在让我们更进一步。
特殊权限位的一个常见用法是使用 `passwd` 之类的命令。如果查看 `/usr/bin/passwd` 文件,你会注意到 SUID 位已设置,它允许你更改密码(以及 `/etc/shadow` 文件的内容),即使你是以普通(非特权)用户身份运行,并且对此文件没有读取或写入权限。当然,`passwd` 命令很聪明,不允许你更改其他人的密码,除非你是以 root 身份运行或使用 `sudo`
```
$ ls -l /usr/bin/passwd
-rwsr-xr-x 1 root root 63736 Mar 22 14:32 /usr/bin/passwd
$ ls -l /etc/shadow
-rw-r----- 1 root shadow 2195 Apr 22 10:46 /etc/shadow
```
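顺带一提,如果想看看系统中还有哪些文件设置了 SUID 位,可以用 `find` 按权限进行搜索(下面的命令只是一个补充示例):

```
$ find /usr/bin -perm -4000 -type f
```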
现在,让我们看一下使用这些特殊权限可以做些什么。
### 如何分配特殊文件权限
与 Linux 命令行中的许多东西一样,你可以有不同的方法设置。 `chmod` 命令允许你以数字方式或使用字符表达式更改权限。
要以数字方式更改文件权限,你可以使用这样的命令来设置 SUID 和 SGID 位:
```
$ chmod 6775 tryme
```
或者你可以使用这样的命令:
```
$ chmod ug+s tryme <== 用于 SUID 和 SGID 权限
```
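前面提到的“粘连”位也可以用类似的方式设置,常见于所有人都可写的共享目录(下面的目录名只是示意):

```
$ chmod +t /tmp/shared      <== 设置粘连位
$ chmod 1777 /tmp/shared    <== 等价的数字方式rwxrwxrwt
```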
如果你要添加特殊权限的文件是脚本,你可能会对它不符合你的期望感到惊讶。这是一个非常简单的例子:
```
$ cat tryme
#!/bin/bash
echo I am $USER
```
即使设置了 SUID 和 SGID 位,并且 root 是文件所有者,运行脚本也不会产生你可能期望的 “I am root”。为什么因为 Linux 会忽略脚本的 SUID 和 SGID 位。
```
$ ls -l tryme
-rwsrwsrwt 1 root root 29 May 26 12:22 tryme
$ ./tryme
I am jdoe
```
另一方面,如果你对一个编译的程序之类进行类似的尝试,就像下面这个简单的 C 程序一样,你会看到不同的效果。在此示例程序中,我们提示用户输入文件名并创建它,并给文件写入权限。
```
#include <stdlib.h>
int main()
{
FILE *fp; /* file pointer*/
char fName[20];
printf("Enter the name of file to be created: ");
scanf("%s",fName);
/* create the file with write permission */
fp=fopen(fName,"w");
/* check if file was created */
if(fp==NULL)
{
printf("File not created");
exit(0);
}
printf("File created successfully\n");
return 0;
}
```
编译程序并运行该命令以使 root 用户成为所有者并设置所需权限后,你将看到它以预期的 root 权限运行 - 留下新创建的 root 为所有者的文件。当然,你必须具有 `sudo` 权限才能运行一些需要的命令。
```
$ cc -o mkfile mkfile.c <== 编译程序
$ sudo chown root:root mkfile <== 更改所有者和组为 “root”
$ sudo chmod ug+s mkfile <== 添加 SUID 和 SGID 权限
$ ./mkfile <== 运行程序
Enter the name of file to be created: empty
File created successfully
$ ls -l empty
-rw-rw-r-- 1 root root 0 May 26 13:15 empty
```
请注意,文件所有者是 root - 如果程序未以 root 权限运行,则不会发生这种情况。
权限字符串中不常见设置的位置例如rw**s**rw**s**rw**t**)可以帮助提醒我们每个位的含义。至少第一个 “s”SUID 位于所有者权限区域中,第二个 SGID 位于组权限区域中。为什么粘连位是 “t” 而不是 “s” 超出了我的理解。也许创造者想把它称为 “tacky bit”但由于这个词的不太令人喜欢的第二个定义而改变了他们的想法。无论如何额外的权限设置为 Linux 和其他 Unix 系统提供了许多额外的功能。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3397790/a-deeper-dive-into-linux-permissions.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/shs_rwsr-100797564-large.jpg
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: ( jdh8383 )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: translator: (jdh8383)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10938-1.html)
[#]: subject: (How To Check Available Security Updates On Red Hat (RHEL) And CentOS System?)
[#]: via: (https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
@ -10,35 +10,29 @@
如何在 CentOS 或 RHEL 系统上检查可用的安全更新?
======
当你更新系统时,根据你所在公司的安全策略,有时候可能只需要打上与安全相关的补丁。
![](https://img.linux.net.cn/data/attachment/album/201906/05/003907tljfmy4bnn4qj1tp.jpg)
大多数情况下,这应该是出于程序兼容性方面的考量。
当你更新系统时,根据你所在公司的安全策略,有时候可能只需要打上与安全相关的补丁。大多数情况下,这应该是出于程序兼容性方面的考量。那该怎样实践呢?有没有办法让 `yum` 只安装安全补丁呢?
那该怎样实践呢?有没有办法让 yum 只安装安全补丁呢?
答案是肯定的,可以用 `yum` 包管理器轻松实现。
答案是肯定的,可以用 yum 包管理器轻松实现
在这篇文章中,我们不但会提供所需的信息。而且,我们会介绍一些额外的命令,可以帮你获取指定安全更新的详实信息
在这篇文章中,我们不但会提供所需的信息
希望这样可以启发你去了解并修复你列表上的那些漏洞。一旦有安全漏洞被公布,就必须更新受影响的软件,这样可以降低系统中的安全风险
而且,我们会介绍一些额外的命令,可以帮你获取指定安全更新的详实信息。
希望这样可以启发你去了解并修复你列表上的那些漏洞。
一旦有安全漏洞被公布,就必须更新受影响的软件,这样可以降低系统中的安全风险。
对于 RHEL 或 CentOS 6 系统,运行下面的 **[Yum 命令][1]** 来安装 yum 安全插件。
对于 RHEL 或 CentOS 6 系统,运行下面的 [Yum 命令][1] 来安装 yum 安全插件。
```
# yum -y install yum-plugin-security
```
在 RHEL 7&8 或是 CentOS 7&8 上面,这个插件已经是 yum 的一部分了,不用单独安装。
在 RHEL 7&8 或是 CentOS 7&8 上面,这个插件已经是 `yum` 的一部分了,不用单独安装。
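这里先补充一点(下面的命令并非后文列表命令的一部分):如果你的目的就是只安装安全相关的更新,可以直接运行:

```
# yum update --security
```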
只列出全部可用的补丁包括安全Bug 修复以及产品改进),但不安装它们。
要列出全部可用的补丁包括安全、Bug 修复以及产品改进),但不安装它们:
```
# yum updateinfo list available
已加载插件: changelog, package_upload, product-id, search-disabled-repos,
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos,
: subscription-manager, verify, versionlock
RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64
RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64
@ -54,20 +48,18 @@ RHBA-2016:1048 bugfix 389-ds-base-1.3.4.0-30.el7_2.x86_64
RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64
```
要统计补丁的大约数量,运行下面的命令
要统计补丁的大约数量,运行下面的命令
```
# yum updateinfo list available | wc -l
11269
```
想列出全部可用的安全补丁但不安装。
以下命令用来展示你系统里已安装和待安装的推荐补丁。
想列出全部可用的安全补丁但不安装,以下命令用来展示你系统里已安装和待安装的推荐补丁:
```
# yum updateinfo list security all
已加载插件: changelog, package_upload, product-id, search-disabled-repos,
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos,
: subscription-manager, verify, versionlock
RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64
RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64
@ -81,13 +73,13 @@ RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64
RHSA-2018:1380 Important/Sec. 389-ds-base-1.3.7.5-21.el7_5.x86_64
RHSA-2018:2757 Moderate/Sec. 389-ds-base-1.3.7.5-28.el7_5.x86_64
RHSA-2018:3127 Moderate/Sec. 389-ds-base-1.3.8.4-15.el7.x86_64
i RHSA-2014:1031 Important/Sec. 389-ds-base-libs-1.3.1.6-26.el7_0.x86_64
RHSA-2014:1031 Important/Sec. 389-ds-base-libs-1.3.1.6-26.el7_0.x86_64
```
要显示所有待安装的安全补丁
要显示所有待安装的安全补丁
```
# yum updateinfo list security all | egrep -v "^i"
# yum updateinfo list security all | grep -v "i"
RHSA-2014:1031 Important/Sec. 389-ds-base-1.3.1.6-26.el7_0.x86_64
RHSA-2015:0416 Important/Sec. 389-ds-base-1.3.3.1-13.el7.x86_64
@ -102,14 +94,14 @@ RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64
RHSA-2018:2757 Moderate/Sec. 389-ds-base-1.3.7.5-28.el7_5.x86_64
```
要统计全部安全补丁的大致数量,运行下面的命令
要统计全部安全补丁的大致数量,运行下面的命令
```
# yum updateinfo list security all | wc -l
3522
```
下面根据已装软件列出可更新的安全补丁。这包括 bugzillasbug修复CVEs知名漏洞数据库安全更新等。
下面根据已装软件列出可更新的安全补丁。这包括 bugzillabug 修复、CVE知名漏洞数据库、安全更新等
```
# yum updateinfo list security
@ -118,7 +110,7 @@ RHBA-2016:1298 bugfix 389-ds-base-1.3.4.0-32.el7_2.x86_64
# yum updateinfo list sec
已加载插件: changelog, package_upload, product-id, search-disabled-repos,
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos,
: subscription-manager, verify, versionlock
RHSA-2018:3665 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64
@ -134,11 +126,11 @@ RHSA-2018:3665 Important/Sec. NetworkManager-wifi-1:1.12.0-8.el7_6.x86_64
RHSA-2018:3665 Important/Sec. NetworkManager-wwan-1:1.12.0-8.el7_6.x86_64
```
显示所有与安全相关的更新,并且返回一个结果来告诉你是否有可用的补丁
显示所有与安全相关的更新,并且返回一个结果来告诉你是否有可用的补丁
```
# yum --security check-update
已加载插件: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
rhel-7-server-rpms | 2.0 kB 00:00:00
--> policycoreutils-devel-2.2.5-20.el7.x86_64 from rhel-7-server-rpms excluded (updateinfo)
--> smc-raghumalayalam-fonts-6.0-7.el7.noarch from rhel-7-server-rpms excluded (updateinfo)
@ -162,7 +154,7 @@ NetworkManager-libnm.x86_64 1:1.12.0-10.el7_6 rhel-7
NetworkManager-ppp.x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms
```
列出所有可用的安全补丁,并且显示其详细信息
列出所有可用的安全补丁,并且显示其详细信息
```
# yum info-sec
@ -196,12 +188,12 @@ Description : The tzdata packages contain data files with rules for various
Severity : None
```
如果你想要知道某个更新的具体内容,可以运行下面这个命令
如果你想要知道某个更新的具体内容,可以运行下面这个命令
```
# yum updateinfo RHSA-2019:0163
已加载插件: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
rhel-7-server-rpms | 2.0 kB 00:00:00
===============================================================================
Important: kernel security, bug fix, and enhancement update
@ -243,12 +235,12 @@ Description : The kernel packages contain the Linux kernel, the core of any
updateinfo info done
```
跟之前类似,你可以只查询那些通过 CVE 释出的系统漏洞
跟之前类似,你可以只查询那些通过 CVE 释出的系统漏洞
```
# yum updateinfo list cves
已加载插件: changelog, package_upload, product-id, search-disabled-repos,
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos,
: subscription-manager, verify, versionlock
CVE-2018-15688 Important/Sec. NetworkManager-1:1.12.0-8.el7_6.x86_64
CVE-2018-15688 Important/Sec. NetworkManager-adsl-1:1.12.0-8.el7_6.x86_64
@ -260,12 +252,12 @@ CVE-2018-15688 Important/Sec. NetworkManager-ppp-1:1.12.0-8.el7_6.x86_64
CVE-2018-15688 Important/Sec. NetworkManager-team-1:1.12.0-8.el7_6.x86_64
```
你也可以查看那些跟 bug 修复相关的更新,运行下面的命令
你也可以查看那些跟 bug 修复相关的更新,运行下面的命令
```
# yum updateinfo list bugfix | less
已加载插件: changelog, package_upload, product-id, search-disabled-repos,
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos,
: subscription-manager, verify, versionlock
RHBA-2018:3349 bugfix NetworkManager-1:1.12.0-7.el7_6.x86_64
RHBA-2019:0519 bugfix NetworkManager-1:1.12.0-10.el7_6.x86_64
@ -277,11 +269,11 @@ RHBA-2018:3349 bugfix NetworkManager-config-server-1:1.12.0-7.el7_6.noarch
RHBA-2019:0519 bugfix NetworkManager-config-server-1:1.12.0-10.el7_6.noarch
```
要想得到待安装更新的摘要信息,运行这个
要想得到待安装更新的摘要信息,运行这个
```
# yum updateinfo summary
已加载插件: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
rhel-7-server-rpms | 2.0 kB 00:00:00
Updates Information Summary: updates
13 Security notice(s)
@ -311,7 +303,7 @@ via: https://www.2daygeek.com/check-list-view-find-available-security-updates-on
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[jdh8383](https://github.com/jdh8383)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,474 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10949-1.html)
[#]: subject: (How to write a good C main function)
[#]: via: (https://opensource.com/article/19/5/how-write-good-c-main-function)
[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny)
如何写好 C main 函数
======
> 学习如何构造一个 C 文件并编写一个 C main 函数来成功地处理命令行参数。
![Hand drawing out the word "code"](https://img.linux.net.cn/data/attachment/album/201906/08/211422umrzznnvmapcwuc3.jpg)
我知道,现在孩子们用 Python 和 JavaScript 编写他们的疯狂“应用程序”。但是不要这么快就否定 C 语言 —— 它能够提供很多东西,并且简洁。如果你需要速度,用 C 语言编写可能就是你的答案。如果你正在寻找稳定的职业或者想学习如何捕获[空指针解引用][2]C 语言也可能是你的答案!在本文中,我将解释如何构造一个 C 文件并编写一个 C main 函数来成功地处理命令行参数。
我:一个顽固的 Unix 系统程序员。
一个有编辑器、C 编译器,并有时间打发的人。
让我们开工吧。
### 一个无聊但正确的 C 程序
![Parody O'Reilly book cover, "Hating Other People's Code"][3]
C 程序以 `main()` 函数开头,通常保存在名为 `main.c` 的文件中。
```
/* main.c */
int main(int argc, char *argv[]) {
}
```
这个程序可以*编译*但不*干*任何事。
```
$ gcc main.c
$ ./a.out -o foo -vv
$
```
正确但无聊。
### main 函数是唯一的
`main()` 函数是开始执行时所执行的程序的第一个函数,但不是第一个执行的函数。*第一个*函数是 `_start()`,它通常由 C 运行库提供,在编译程序时自动链入。此细节高度依赖于操作系统和编译器工具链,所以我假装没有提到它。
`main()` 函数有两个参数,通常称为 `argc``argv`,并返回一个有符号整数。大多数 Unix 环境都希望程序在成功时返回 `0`(零),失败时返回 `-1`(负一)。
参数 | 名称 | 描述
---|---|---
`argc` | 参数个数 | 参数向量的个数
`argv` | 参数向量 | 字符指针数组
参数向量 `argv` 是调用你的程序的命令行的标记化表示形式。在上面的例子中,`argv` 将是以下字符串的列表:
```
argv = [ "/path/to/a.out", "-o", "foo", "-vv" ];
```
参数向量保证至少有一个字符串,即它的第一个元素 `argv[0]`,也就是执行该程序的完整路径。
### main.c 文件的剖析
当我从头开始编写 `main.c` 时,它的结构通常如下:
```
/* main.c */
/* 0 版权/许可证 */
/* 1 包含 */
/* 2 定义 */
/* 3 外部声明 */
/* 4 类型定义 */
/* 5 全局变量声明 */
/* 6 函数原型 */
int main(int argc, char *argv[]) {
/* 7 命令行解析 */
}
/* 8 函数声明 */
```
下面我将讨论这些编号的各个部分,除了编号为 0 的那部分。如果你必须把版权或许可文本放在源代码中,那就放在那里。
另一件我不想讨论的事情是注释。
```
“注释会说谎。”
- 一个愤世嫉俗但聪明又好看的程序员。
```
与其使用注释,不如使用有意义的函数名和变量名。
鉴于程序员固有的惰性,一旦添加了注释,维护负担就会增加一倍。如果更改或重构代码,则需要更新或扩充注释。随着时间的推移,代码会变得面目全非,与注释所描述的内容完全不同。
如果你必须写注释,不要写关于代码正在做*什么*,相反,写下代码*为什么*要这样写。写一些你将要在五年后读到的注释,那时你已经将这段代码忘得一干二净。世界的命运取决于你。*不要有压力。*
#### 1、包含
我添加到 `main.c` 文件的第一个东西是包含文件,它们为程序提供大量标准 C 标准库函数和变量。C 标准库做了很多事情。浏览 `/usr/include` 中的头文件,你可以了解到它们可以做些什么。
`#include` 字符串是 [C 预处理程序][4]cpp指令它会将引用的文件完整地包含在当前文件中。C 中的头文件通常以 `.h` 扩展名命名,且不应包含任何可执行代码。它只有宏、定义、类型定义、外部变量和函数原型。字符串 `<header.h>` 告诉 cpp 在系统定义的头文件路径中查找名为 `header.h` 的文件,它通常在 `/usr/include` 目录中。
```
/* main.c */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <libgen.h>
#include <errno.h>
#include <string.h>
#include <getopt.h>
#include <sys/types.h>
```
这是我默认会全局包含的最小包含集合,它将引入:
\#include 文件 | 提供的东西
---|---
stdio | 提供 `FILE`、`stdin`、`stdout`、`stderr` 和 `fprint()` 函数系列
stdlib | 提供 `malloc()`、`calloc()` 和 `realloc()`
unistd | 提供 `EXIT_FAILURE`、`EXIT_SUCCESS`
libgen | 提供 `basename()` 函数
errno | 定义外部 `errno` 变量及其可以接受的所有值
string | 提供 `memcpy()`、`memset()` 和 `strlen()` 函数系列
getopt | 提供外部 `optarg`、`opterr`、`optind` 和 `getopt()` 函数
sys/types | 类型定义快捷方式,如 `uint32_t``uint64_t`
#### 2、定义
```
/* main.c */
<...>
#define OPTSTR "vi:o:f:h"
#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]"
#define ERR_FOPEN_INPUT "fopen(input, r)"
#define ERR_FOPEN_OUTPUT "fopen(output, w)"
#define ERR_DO_THE_NEEDFUL "do_the_needful blew up"
#define DEFAULT_PROGNAME "george"
```
这些定义现在看起来没有多大意义,但这里要说明一下 `OPTSTR`:它定义了这个程序所支持的命令行开关。参考 [getopt(3)][5] man 页面,了解 `OPTSTR` 将如何影响 `getopt()` 的行为。
`USAGE_FMT` 定义了一个 `printf()` 风格的格式字符串,它用在 `usage()` 函数中。
我还喜欢将字符串常量放在文件的 `#define` 这一部分。如果需要,把它们收集在一起可以更容易地修正拼写、重用消息和国际化消息。
最后,在命名 `#define` 时全部使用大写字母,以区别变量和函数名。如果需要,可以将单词放连在一起或使用下划线分隔,只要确保它们都是大写的就行。
#### 3、外部声明
```
/* main.c */
<...>
extern int errno;
extern char *optarg;
extern int opterr, optind;
```
`extern` 声明将该名称带入当前编译单元的命名空间(即 “文件”),并允许程序访问该变量。这里我们引入了三个整数变量和一个字符指针的定义。`opt` 前缀的几个变量是由 `getopt()` 函数使用的C 标准库使用 `errno` 作为带外通信通道来传达函数可能的失败原因。
#### 4、类型定义
```
/* main.c */
<...>
typedef struct {
int verbose;
uint32_t flags;
FILE *input;
FILE *output;
} options_t;
```
在外部声明之后,我喜欢为结构、联合和枚举声明 `typedef`。命名一个 `typedef` 是一种传统习惯。我非常喜欢使用 `_t` 后缀来表示该名称是一种类型。在这个例子中,我将 `options_t` 声明为一个包含 4 个成员的 `struct`。C 是一种空格无关的编程语言,因此我使用空格将字段名排列在同一列中。我只是喜欢它看起来的样子。对于指针声明,我在名称前面加上星号,以明确它是一个指针。
#### 5、全局变量声明
```
/* main.c */
<...>
int dumb_global_variable = -11;
```
全局变量是一个坏主意,你永远不应该使用它们。但如果你必须使用全局变量,请在这里声明,并确保给它们一个默认值。说真的,*不要使用全局变量*。
#### 6、函数原型
```
/* main.c */
<...>
void usage(char *progname, int opt);
int do_the_needful(options_t *options);
```
在编写函数时,将它们添加到 `main()` 函数之后而不是之前,在这里放函数原型。早期的 C 编译器使用单遍策略,这意味着你在程序中使用的每个符号(变量或函数名称)必须在使用之前声明。现代编译器几乎都是多遍编译器,它们在生成代码之前构建一个完整的符号表,因此并不严格要求使用函数原型。但是,有时你无法选择代码要使用的编译器,所以请编写函数原型并继续这样做下去。
当然,我总是包含一个 `usage()` 函数,当 `main()` 函数不理解你从命令行传入的内容时,它会调用这个函数。
#### 7、命令行解析
```
/* main.c */
<...>
int main(int argc, char *argv[]) {
int opt;
options_t options = { 0, 0x0, stdin, stdout };
opterr = 0;
while ((opt = getopt(argc, argv, OPTSTR)) != EOF)
switch(opt) {
case 'i':
if (!(options.input = fopen(optarg, "r")) ){
perror(ERR_FOPEN_INPUT);
exit(EXIT_FAILURE);
/* NOTREACHED */
}
break;
case 'o':
if (!(options.output = fopen(optarg, "w")) ){
perror(ERR_FOPEN_OUTPUT);
exit(EXIT_FAILURE);
/* NOTREACHED */
}
break;
case 'f':
options.flags = (uint32_t )strtoul(optarg, NULL, 16);
break;
case 'v':
options.verbose += 1;
break;
case 'h':
default:
usage(basename(argv[0]), opt);
/* NOTREACHED */
break;
}
if (do_the_needful(&options) != EXIT_SUCCESS) {
perror(ERR_DO_THE_NEEDFUL);
exit(EXIT_FAILURE);
/* NOTREACHED */
}
return EXIT_SUCCESS;
}
```
好吧,代码有点多。这个 `main()` 函数的目的是收集用户提供的参数,执行最基本的输入验证,然后将收集到的参数传递给使用它们的函数。这个示例声明一个使用默认值初始化的 `options` 变量,并解析命令行,根据需要更新 `options`
`main()` 函数的核心是一个 `while` 循环,它使用 `getopt()` 来遍历 `argv`,寻找命令行选项及其参数(如果有的话)。文件前面定义的 `OPTSTR` 是驱动 `getopt()` 行为的模板。`opt` 变量接受 `getopt()` 找到的任何命令行选项的字符值,程序对检测命令行选项的响应发生在 `switch` 语句中。
如果你注意到了可能会问,为什么 `opt` 被声明为 32 位 `int`,但是预期是 8 位 `char`?事实上 `getopt()` 返回一个 `int`,当它到达 `argv` 末尾时取负值,我会使用 `EOF`*文件末尾*标记)匹配。`char` 是有符号的,但我喜欢将变量匹配到它们的函数返回值。
当检测到一个已知的命令行选项时,会发生特定的行为。在 `OPTSTR` 中指定一个以冒号结尾的参数,这些选项可以有一个参数。当一个选项有一个参数时,`argv` 中的下一个字符串可以通过外部定义的变量 `optarg` 提供给程序。我使用 `optarg` 来打开文件进行读写,或者将命令行参数从字符串转换为整数值。
这里有几个关于代码风格的要点:
* 将 `opterr` 初始化为 `0`,禁止 `getopt` 触发 `?`
* 在 `main()` 的中间使用 `exit(EXIT_FAILURE);``exit(EXIT_SUCCESS);`
* `/* NOTREACHED */` 是我喜欢的一个 lint 指令。
* 在返回 int 类型的函数末尾使用 `return EXIT_SUCCESS;`
* 显式地强制转换隐式类型。
这个程序的命令行格式,经过编译如下所示:
```
$ ./a.out -h
a.out [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]
```
事实上,在编译后 `usage()` 就会向 `stderr` 发出这样的内容。
#### 8、函数声明
```
/* main.c */
<...>
void usage(char *progname, int opt) {
fprintf(stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME);
exit(EXIT_FAILURE);
/* NOTREACHED */
}
int do_the_needful(options_t *options) {
if (!options) {
errno = EINVAL;
return EXIT_FAILURE;
}
if (!options->input || !options->output) {
errno = ENOENT;
return EXIT_FAILURE;
}
/* XXX do needful stuff */
return EXIT_SUCCESS;
}
```
我最后编写的函数不是个样板函数。在本例中,函数 `do_the_needful()` 接受一个指向 `options_t` 结构的指针。我验证 `options` 指针不为 `NULL`,然后继续验证 `input``output` 结构成员。如果其中一个测试失败,返回 `EXIT_FAILURE`,并且通过将外部全局变量 `errno` 设置为常规错误代码,我可以告知调用者常规的错误原因。调用者可以使用便捷函数 `perror()` 来根据 `errno` 的值发出便于阅读的错误消息。
函数几乎总是以某种方式验证它们的输入。如果完全验证代价很大,那么尝试执行一次并将验证后的数据视为不可变。`usage()` 函数使用 `fprintf()` 调用中的条件赋值验证 `progname` 参数。接下来 `usage()` 函数就退出了,所以我不会费心设置 `errno`,也不用操心是否使用正确的程序名。
在这里,我要避免的最大错误是解引用 `NULL` 指针。这将导致操作系统向我的进程发送一个名为 `SIGSEGV` 的特殊信号,导致不可避免的死亡。用户最不希望看到的是由 `SIGSEGV` 而导致的崩溃。最好是捕获 `NULL` 指针以发出更合适的错误消息并优雅地关闭程序。
有些人抱怨在函数体中有多个 `return` 语句,他们喋喋不休地说些“控制流的连续性”之类的东西。老实说,如果函数中间出现错误,那就应该返回这个错误条件。写一大堆嵌套的 `if` 语句只有一个 `return` 绝不是一个“好主意”™。
最后,如果你编写的函数接受四个以上的参数,请考虑将它们绑定到一个结构中,并传递一个指向该结构的指针。这使得函数签名更简单,更容易记住,并且在以后调用时不会出错。它还可以使调用函数速度稍微快一些,因为需要复制到函数堆栈中的东西更少。在实践中,只有在函数被调用数百万或数十亿次时,才会考虑这个问题。如果认为这没有意义,那也无所谓。
### 等等,你不是说没有注释吗!?!!
`do_the_needful()` 函数中,我写了一种特殊类型的注释,它被是作为占位符设计的,而不是为了说明代码:
```
/* XXX do needful stuff */
```
当你写到这里时,有时你不想停下来编写一些特别复杂的代码,你会之后再写,而不是现在。那就是我留给自己再次回来的地方。我插入一个带有 `XXX` 前缀的注释和一个描述需要做什么的简短注释。之后,当我有更多时间的时候,我会在源代码中寻找 `XXX`。使用什么前缀并不重要,只要确保它不太可能在另一个上下文环境(如函数名或变量)中出现在你代码库里。
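例如(这里的 `src/` 目录名只是一个假设),可以用 `grep` 快速找出这类标记:
```
# 递归搜索源码树,带行号列出所有 XXX 标记src/ 目录名仅为示例)
$ grep -rn 'XXX' src/
```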
### 把它们组合在一起
好吧,当你编译这个程序后,它*仍然*几乎没有任何作用。但是现在你有了一个坚实的骨架来构建你自己的命令行解析 C 程序。
```
/* main.c - the complete listing */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <libgen.h>
#include <errno.h>
#include <string.h>
#include <getopt.h>
#include <stdint.h>
#define OPTSTR "vi:o:f:h"
#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]"
#define ERR_FOPEN_INPUT "fopen(input, r)"
#define ERR_FOPEN_OUTPUT "fopen(output, w)"
#define ERR_DO_THE_NEEDFUL "do_the_needful blew up"
#define DEFAULT_PROGNAME "george"
extern int errno;
extern char *optarg;
extern int opterr, optind;
typedef struct {
int verbose;
uint32_t flags;
FILE *input;
FILE *output;
} options_t;
int dumb_global_variable = -11;
void usage(char *progname, int opt);
int do_the_needful(options_t *options);
int main(int argc, char *argv[]) {
int opt;
options_t options = { 0, 0x0, stdin, stdout };
opterr = 0;
while ((opt = getopt(argc, argv, OPTSTR)) != EOF)
switch(opt) {
case 'i':
if (!(options.input = fopen(optarg, "r")) ){
perror(ERR_FOPEN_INPUT);
exit(EXIT_FAILURE);
/* NOTREACHED */
}
break;
case 'o':
if (!(options.output = fopen(optarg, "w")) ){
perror(ERR_FOPEN_OUTPUT);
exit(EXIT_FAILURE);
/* NOTREACHED */
}
break;
case 'f':
options.flags = (uint32_t )strtoul(optarg, NULL, 16);
break;
case 'v':
options.verbose += 1;
break;
case 'h':
default:
usage(basename(argv[0]), opt);
/* NOTREACHED */
break;
}
if (do_the_needful(&options) != EXIT_SUCCESS) {
perror(ERR_DO_THE_NEEDFUL);
exit(EXIT_FAILURE);
/* NOTREACHED */
}
return EXIT_SUCCESS;
}
void usage(char *progname, int opt) {
fprintf(stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME);
exit(EXIT_FAILURE);
/* NOTREACHED */
}
int do_the_needful(options_t *options) {
if (!options) {
errno = EINVAL;
return EXIT_FAILURE;
}
if (!options->input || !options->output) {
errno = ENOENT;
return EXIT_FAILURE;
}
/* XXX do needful stuff */
return EXIT_SUCCESS;
}
```
现在,你已经准备好编写更易于维护的 C 语言。如果你有任何问题或反馈,请在评论中分享。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/how-write-good-c-main-function
作者:[Erik O'Shaughnessy][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jnyjny
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code")
[2]: https://www.owasp.org/index.php/Null_Dereference
[3]: https://opensource.com/sites/default/files/uploads/hatingotherpeoplescode-big.png (Parody O'Reilly book cover, "Hating Other People's Code")
[4]: https://en.wikipedia.org/wiki/C_preprocessor
[5]: https://linux.die.net/man/3/getopt
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/strtoul.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html

View File

@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10946-1.html)
[#]: subject: (NVMe on Linux)
[#]: via: (https://www.networkworld.com/article/3397006/nvme-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux 上的 NVMe
===============
> 如果你还没注意到,一些极速的固态磁盘技术已经可以用在 Linux 和其他操作系统上了。
![Sandra Henry-Stocker][1]
NVMe 意即<ruby>非易失性内存主机控制器接口规范<rt>non-volatile memory express</rt></ruby>它是一个主机控制器接口和存储协议用于加速企业和客户端系统以及固态驱动器SSD之间的数据传输。它通过电脑的高速 PCIe 总线工作。每当我看到这些名词时,我的感受是“羡慕”。而羡慕的原因很重要。
使用 NVMe数据传输的速度比旋转磁盘快很多。事实上NVMe 驱动器能够比 SATA SSD 快 7 倍。这比我们今天很多人使用的固态硬盘还快了 7 倍。这意味着,如果你用一个 NVMe 驱动器作为启动盘,你的系统就能启动得非常快。事实上,如今任何人购买新系统时,可能都不会考虑那些没有自带 NVMe 的,不管是服务器还是个人电脑。
### NVMe 在 Linux 下能工作吗?
是的NVMe 自 Linux 内核 3.3 版本就支持了。然而,要升级系统,通常同时需要一个 NVMe 控制器和一个 NVMe 磁盘。一些外置磁盘也行,但是要连接到系统上,需要的可不仅仅是通用的 USB 接口。
先使用下列命令检查内核版本:
```
$ uname -r
5.0.0-15-generic
```
如果你的系统已经用了 NVMe你将看到一个设备例如`/dev/nvme0`),但是只有在你安装了 NVMe 控制器的情况下才显示。如果你没有 NVMe 控制器,你可以用下列命令获取使用 NVMe 的相关信息。
```
$ modinfo nvme | head -6
filename: /lib/modules/5.0.0-15-generic/kernel/drivers/nvme/host/nvme.ko
version: 1.0
license: GPL
author: Matthew Wilcox <willy@linux.intel.com>
srcversion: AA383008D5D5895C2E60523
alias: pci:v0000106Bd00002003sv*sd*bc*sc*i*
```
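作为补充,你也可以用 `lsblk` 粗略确认系统中是否存在通过 NVMe 传输协议连接的块设备:
```
# 列出块设备及其传输类型NVMe 设备的 TRAN 列会显示 “nvme”
$ lsblk -d -o NAME,TRAN
# 或者直接查看是否存在 NVMe 设备节点
$ ls /dev/nvme*
```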
### 了解更多
如果你想了解极速的 NVMe 存储的更多细节,可在 [PCWorld][3] 获取。
规范、白皮书和其他资源可在 [NVMexpress.org][4] 获取。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3397006/nvme-on-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[warmfrog](https://github.com/warmfrog)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/nvme-100797708-large.jpg
[2]: https://www.networkworld.com/slideshow/153439/linux-best-desktop-distros-for-newbies.html#tk.nww-infsb
[3]: https://www.pcworld.com/article/2899351/everything-you-need-to-know-about-nvme.html
[4]: https://nvmexpress.org/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10953-1.html)
[#]: subject: (Why translation platforms matter)
[#]: via: (https://opensource.com/article/19/5/translation-platforms)
[#]: author: (Jean-Baptiste Holcroft https://opensource.com/users/jibec/users/annegentle/users/bcotton)
翻译平台最重要的是什么?
======
> 技术上的考虑并不是判断一个好的翻译平台的最佳方式。
![](https://img.linux.net.cn/data/attachment/album/201906/09/112224nvvkrv16qv60vwpv.jpg)
语言翻译可以使开源软件被世界各地的人们使用,这也是非开发人员参与他们喜欢的(开源)项目的好方法。有许多[翻译工具][2],你可以根据它们处理翻译所涉及的主要功能领域的能力来评估:技术交互能力、团队支持能力和翻译支持能力。
技术交互方面包括:
* 支持的文件格式
* 与开源存储库的同步
* 自动化支持工具
* 接口可能性
对团队合作(也可称为“社区活力”)的支持包括该平台如何:
* 监控变更(按译者、项目等)
* 跟进由项目推动的更新
* 显示进度状态
* 是否启用审核和验证步骤
* 协助(来自同一团队和跨语言的)翻译人员和项目维护人员之间的讨论
* 平台支持的全球通讯(新闻等)
翻译协助包括:
* 清晰、符合人体工程学的界面
* 简单几步就可以找到项目并开始工作
* 可以简单地了解到翻译和分发之间流程
* 可以使用翻译记忆库
* 词汇表丰富
前两个功能领域与源代码管理平台的能力差别不大,只有一些细微的差异。我觉得最后一个领域也主要与源代码有关。但是,它们处理的数据非常不同,翻译平台的用户通常也不如开发人员了解技术,而人数却更多。
### 我的推荐
在我看来GNOME 平台提供了最好的翻译平台,原因如下:
* 其网站包含了团队组织和翻译平台。很容易看出谁在负责以及他们在团队中的角色。一切都集中在几个屏幕之内。
* 很容易找到要处理的内容,并且你会很快意识到你必须将文件下载到本地计算机并在修改后将其发回。这个流程不是很先进,但逻辑很容易理解。
* 一旦你发回文件,平台就可以向邮件列表发送通告,以便团队知道后续步骤,并且可以全局轻松讨论翻译(而不是评论特定句子)。
* 它支持多达 297 种语言。
* 它显示了基本句子、高级菜单和文档的明确的进度百分比。
  
再加上可预测的 GNOME 发布计划,社区可以使用一切可以促进社区工作的工具。
如果我们看看 Debian 翻译团队,他们多年来一直在为 Debian LCTT 译注此处原文是“Fedora”疑为笔误翻译了难以想象的大量内容尤其是新闻我们会看到他们有一个高度依赖于规则的翻译流程完全基于电子邮件手动推送到存储库。该团队还将一切都放在流程上而不是工具上尽管这似乎需要相当高的技术能力但它已成为领先的语言社区之一并且已经运作多年。
我认为,成功的翻译平台的主要问题不是基于单一的(技术、翻译)工作的能力,而是基于如何构建和支持翻译团队的流程。这就是可持续性的原因。
生产过程是构建团队最重要的方式;通过正确地将它们组合在一起,新手很容易理解该过程是如何工作的,采用它们,并将它们解释给下一组新人。
要建立一个可持续发展的社区,首先要考虑的是支持协同工作的工具,然后是可用性。
这解释了我为什么对 [Zanata][3] 工具沮丧,从技术和界面的角度来看,这是有效的,但在帮助构建社区方面却很差。我认为翻译是一个社区驱动的过程(可能是开源软件开发中最受社区驱动的过程之一),这对我来说是一个关键问题。
* * *
本文改编自“[什么是一个好的翻译平台?][4]”,最初发表在 Jibec 期刊上,并经许可重复使用。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/translation-platforms
作者:[Jean-Baptiste Holcroft][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jibec/users/annegentle/users/bcotton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel
[2]: https://opensource.com/article/17/6/open-source-localization-tools
[3]: http://zanata.org/
[4]: https://jibecfed.fedorapeople.org/blog-hugo/en/2016/09/whats-a-good-translation-platform/

View File

@ -0,0 +1,145 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10951-1.html)
[#]: subject: (How To Verify NTP Setup (Sync) is Working or Not In Linux?)
[#]: via: (https://www.2daygeek.com/check-verify-ntp-sync-is-working-or-not-in-linux-using-ntpq-ntpstat-timedatectl/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
如何在 Linux 下确认 NTP 是否同步?
======
![](https://img.linux.net.cn/data/attachment/album/201906/08/215622oqdhiuhocsndijlu.jpg)
NTP 意即<ruby>网络时间协议<rt>Network Time Protocol</rt></ruby>它通过网络同步计算机系统之间的时钟。NTP 服务器可以使组织中的所有服务器保持同步以准确时间执行基于时间的作业。NTP 客户端会将其时钟与 NTP 服务器同步。
我们已经写了一篇关于 NTP 服务器和客户端安装和配置的文章。如果你想查看这些文章,请导航至以下链接。
* [如何在 Linux 上安装、配置 NTP 服务器和客户端?][1]
* [如何安装和配置 Chrony 作为 NTP 客户端?][2]
我假设你已经按照上述链接设置了 NTP 服务器和 NTP 客户端。现在,如何验证 NTP 设置是否正常工作呢?
Linux 中有三个命令可用于验证 NTP 同步情况。详情如下。在本文中,我们将告诉您如何使用所有这些命令验证 NTP 同步。
* `ntpq`ntpq 是一个标准的 NTP 查询程序。
* `ntpstat`:显示网络时间同步状态。
* `timedatectl`:它控制 systemd 系统中的系统时间和日期。
### 方法 1如何使用 ntpq 命令检查 NTP 状态?
`ntpq` 实用程序用于监视 NTP 守护程序 `ntpd` 的操作并确定性能。
该程序可以以交互模式运行,也可以使用命令行参数进行控制。它通过向服务器发送多个查询来打印出连接的对等项列表。如果 NTP 正常工作,你将获得类似于下面的输出。
```
# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*CentOS7.2daygee 133.243.238.163 2 u 14 64 37 0.686 0.151 16.432
```
细节:
* `-p`:打印服务器已知的对等项列表以及其状态摘要。
### 方法 2如何使用 ntpstat 命令检查 NTP 状态?
`ntpstat` 将报告在本地计算机上运行的 NTP 守护程序(`ntpd`)的同步状态。如果发现本地系统与参考时间源保持同步,则 `ntpstat` 将报告大致的时间精度。
`ntpstat` 命令根据 NTP 同步状态返回三种状态码。详情如下。
* `0`:如果时钟同步则返回 0。
* `1`:如果时钟不同步则返回 1。
* `2`:如果时钟状态不确定,则返回 2例如 ntpd 不可联系时。
```
# ntpstat
synchronised to NTP server (192.168.1.8) at stratum 3
time correct to within 508 ms
polling server every 64 s
```
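基于上面列出的返回码,你可以在 shell 脚本中判断同步状态,下面是一个简单的示意:
```
#!/bin/bash
# 根据 ntpstat 的退出码判断 NTP 同步状态
ntpstat > /dev/null 2>&1
case $? in
    0) echo "时钟已同步" ;;
    1) echo "时钟未同步" ;;
    *) echo "时钟状态不确定(可能无法联系 ntpd" ;;
esac
```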
### 方法 3如何使用 timedatectl 命令检查 NTP 状态?
[timedatectl 命令][3]用于查询和更改 systemd 系统中的系统时钟及其设置。
```
# timedatectl
# timedatectl status
Local time: Thu 2019-05-30 05:01:05 CDT
Universal time: Thu 2019-05-30 10:01:05 UTC
RTC time: Thu 2019-05-30 10:01:05
Time zone: America/Chicago (CDT, -0500)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2019-03-10 01:59:59 CST
Sun 2019-03-10 03:00:00 CDT
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2019-11-03 01:59:59 CDT
Sun 2019-11-03 01:00:00 CST
```
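顺便一提,如果输出中显示 “NTP enabled: no”,在 systemd 系统上通常可以用下面的命令开启 NTP 时间同步:
```
# 开启 systemd 的 NTP 时间同步(需要 root 权限)
$ sudo timedatectl set-ntp true
```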
### 更多技巧
Chrony 是一个 NTP 客户端的替代品。它可以更快地同步系统时钟,时间精度更高,对于不总是在线的系统尤其有用。
chronyd 较小,它使用较少的内存,只在必要时才唤醒 CPU这样可以更好地节省电能。即使网络拥塞较长时间它也能很好地运行。
你可以使用以下任何命令来检查 Chrony 状态。
检查 Chrony 跟踪状态。
```
# chronyc tracking
Reference ID : C0A80105 (CentOS7.2daygeek.com)
Stratum : 3
Ref time (UTC) : Thu Mar 28 05:57:27 2019
System time : 0.000002545 seconds slow of NTP time
Last offset : +0.001194361 seconds
RMS offset : 0.001194361 seconds
Frequency : 1.650 ppm fast
Residual freq : +184.101 ppm
Skew : 2.962 ppm
Root delay : 0.107966967 seconds
Root dispersion : 1.060455322 seconds
Update interval : 2.0 seconds
Leap status : Normal
```
运行 `sources` 命令以显示有关当前时间源的信息。
```
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* CentOS7.2daygeek.com 2 6 17 62 +36us[+1230us] +/- 1111ms
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/check-verify-ntp-sync-is-working-or-not-in-linux-using-ntpq-ntpstat-timedatectl/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10811-1.html
[2]: https://linux.cn/article-10820-1.html
[3]: https://www.2daygeek.com/change-set-time-date-and-timezone-on-linux/

View File

@ -0,0 +1,164 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10960-1.html)
[#]: subject: (Two Methods To Check Or List Installed Security Updates on Redhat (RHEL) And CentOS System)
[#]: via: (https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
在 RHEL 和 CentOS 上检查或列出已安装的安全更新的两种方法
======
![](https://img.linux.net.cn/data/attachment/album/201906/11/100735bdnjzkkmjbxbttmm.jpg)
我们过去曾写过两篇关于这个主题的文章,每篇文章都是根据不同的要求发表的。如果你想在开始之前浏览这些文章,请访问以下链接:
* [如何检查 RHEL 和 CentOS 上的可用安全更新?][1]
* [在 RHEL 和 CentOS 上安装安全更新的四种方法?][2]
这些文章与其他文章相互关联,因此,在深入研究之前,最好先阅读这些文章。
在本文中,我们将向你展示如何检查已安装的安全更新。我会介绍两种方法,你可以选择最适合你的。
此外,我还添加了一个小的 shell 脚本,它为你提供已安装的安全包计数。
运行以下命令获取系统上已安装的安全更新的列表。
```
# yum updateinfo list security installed
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos,
: subscription-manager, verify, versionlock
RHSA-2015:2315 Moderate/Sec. ModemManager-glib-1.1.0-8.git20130913.el7.x86_64
RHSA-2015:2315 Moderate/Sec. NetworkManager-1:1.0.6-27.el7.x86_64
RHSA-2016:2581 Low/Sec. NetworkManager-1:1.4.0-12.el7.x86_64
RHSA-2017:2299 Moderate/Sec. NetworkManager-1:1.8.0-9.el7.x86_64
RHSA-2015:2315 Moderate/Sec. NetworkManager-adsl-1:1.0.6-27.el7.x86_64
RHSA-2016:2581 Low/Sec. NetworkManager-adsl-1:1.4.0-12.el7.x86_64
RHSA-2017:2299 Moderate/Sec. NetworkManager-adsl-1:1.8.0-9.el7.x86_64
RHSA-2015:2315 Moderate/Sec. NetworkManager-bluetooth-1:1.0.6-27.el7.x86_64
```
要计算已安装的安全包的数量,请运行以下命令:
```
# yum updateinfo list security installed | wc -l
1046
```
仅打印已安装的安全包列表:
```
# yum updateinfo list security all | grep -w "i"
i RHSA-2015:2315 Moderate/Sec. ModemManager-glib-1.1.0-8.git20130913.el7.x86_64
i RHSA-2015:2315 Moderate/Sec. NetworkManager-1:1.0.6-27.el7.x86_64
i RHSA-2016:2581 Low/Sec. NetworkManager-1:1.4.0-12.el7.x86_64
i RHSA-2017:2299 Moderate/Sec. NetworkManager-1:1.8.0-9.el7.x86_64
i RHSA-2015:2315 Moderate/Sec. NetworkManager-adsl-1:1.0.6-27.el7.x86_64
i RHSA-2016:2581 Low/Sec. NetworkManager-adsl-1:1.4.0-12.el7.x86_64
i RHSA-2017:2299 Moderate/Sec. NetworkManager-adsl-1:1.8.0-9.el7.x86_64
i RHSA-2015:2315 Moderate/Sec. NetworkManager-bluetooth-1:1.0.6-27.el7.x86_64
i RHSA-2016:2581 Low/Sec. NetworkManager-bluetooth-1:1.4.0-12.el7.x86_64
i RHSA-2017:2299 Moderate/Sec. NetworkManager-bluetooth-1:1.8.0-9.el7.x86_64
i RHSA-2015:2315 Moderate/Sec. NetworkManager-config-server-1:1.0.6-27.el7.x86_64
i RHSA-2016:2581 Low/Sec. NetworkManager-config-server-1:1.4.0-12.el7.x86_64
i RHSA-2017:2299 Moderate/Sec. NetworkManager-config-server-1:1.8.0-9.el7.noarch
```
要计算已安装的安全包的数量,请运行以下命令:
```
# yum updateinfo list security all | grep -w "i" | wc -l
1043
```
或者,你可以检查指定包修复的漏洞列表。
在此例中,我们将检查 “openssh” 包中已修复的漏洞列表:
```
# rpm -q --changelog openssh | grep -i CVE
- Fix for CVE-2017-15906 (#1517226)
- CVE-2015-8325: privilege escalation via user's PAM environment and UseLogin=yes (#1329191)
- CVE-2016-1908: possible fallback from untrusted to trusted X11 forwarding (#1298741)
- CVE-2016-3115: missing sanitisation of input for X11 forwarding (#1317819)
- prevents CVE-2016-0777 and CVE-2016-0778
- Security fixes released with openssh-6.9 (CVE-2015-5352) (#1247864)
- only query each keyboard-interactive device once (CVE-2015-5600) (#1245971)
- add new option GSSAPIEnablek5users and disable using ~/.k5users by default CVE-2014-9278
- prevent a server from skipping SSHFP lookup - CVE-2014-2653 (#1081338)
- change default value of MaxStartups - CVE-2010-5107 (#908707)
- CVE-2010-4755
- merged cve-2007_3102 to audit patch
- fixed audit log injection problem (CVE-2007-3102)
- CVE-2006-5794 - properly detect failed key verify in monitor (#214641)
- CVE-2006-4924 - prevent DoS on deattack detector (#207957)
- CVE-2006-5051 - don't call cleanups from signal handler (#208459)
- use fork+exec instead of system in scp - CVE-2006-0225 (#168167)
```
同样,你可以通过运行以下命令来检查相应的包中是否修复了指定的漏洞:
```
# rpm -q --changelog openssh | grep -i CVE-2016-3115
- CVE-2016-3115: missing sanitisation of input for X11 forwarding (#1317819)
```
### 如何使用 Shell 脚本计算安装的安全包?
我添加了一个小的 shell 脚本,它可以帮助你计算已安装的安全包列表。
```
# vi /opt/scripts/security-check.sh
#!/bin/bash
echo "+-------------------------+"
echo "|Security Advisories Count|"
echo "+-------------------------+"
for i in Important Moderate Low
do
sec=$(yum updateinfo list security installed | grep $i | wc -l)
echo "$i: $sec"
done | column -t
echo "+-------------------------+"
```
`security-check.sh` 文件执行权限。
```
$ chmod +x security-check.sh
```
最后,运行该脚本来统计已安装的安全更新数量:
```
# sh /opt/scripts/security-check.sh
+-------------------------+
|Security Advisories Count|
+-------------------------+
Important: 480
Moderate: 410
Low: 111
+-------------------------+
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10938-1.html
[2]: https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/

View File

@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10965-1.html)
[#]: subject: (An open source bionic leg, Python data pipeline, data breach detection, and more news)
[#]: via: (https://opensource.com/article/19/6/news-june-8)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
开源新闻开源仿生腿、Python 数据管道、数据泄露检测
======
> 了解过去两周来最大的开源头条新闻。
![][1]
在本期开源新闻综述中,我们将介绍一个开源仿生腿、一个新的开源医学影像组织,麦肯锡发布的首个开源软件,以及更多!
### 使用开源推进仿生学
我们这一代人是从电视剧《六百万美元人》和《仿生女人》中学到仿生学一词的。如今,得益于[由密歇根大学和 Shirley Ryan AbilityLab 设计][2]的假肢,这种科幻(尽管基于事实)正在成为现实。
该腿采用简单、低成本的模块化设计,“旨在通过为仿生学领域的零碎研究工作提供统一的平台,提高患者的生活质量并加速科学进步”。根据首席设计师 Elliot Rouse 的说法,它将“使研究人员能够有效地解决与一系列的实验室和社区活动中控制仿生腿相关的挑战。”
你可以从[开源腿][3]网站了解有该腿的更多信息并下载该设计。
### 麦肯锡发布了一个用于构建产品级数据管道的 Python 库
咨询巨头麦肯锡公司最近发布了其[第一个开源工具][4],名为 Kedro它是一个用于创建机器学习和数据管道的 Python 库。
Kedro 使得“管理大型工作流程更加容易,并确保整个项目的代码质量始终如一”,产品经理 Yetunde Dada 说。虽然它最初是作为一种专有的工具,但麦肯锡开源了 Kedro因此“客户可以在我们离开项目后使用它 —— 这是我们回馈的方式,”工程师 Nikolaos Tsaousis 说。
如果你有兴趣了解一下,可以从 GitHub 上获取 [Kedro 的源代码][5]。
### 新联盟推进开源医学成像
一组专家和患者倡导者聚集在一起组成了[开源成像联盟][6]。该联盟旨在“通过数字成像和机器学习帮助推进特发性肺纤维化和其他间质性肺病的诊断。”
根据联盟执行董事 Elizabeth Estes 的说法,该项目旨在“协作加速诊断,帮助预后处置,最终让医生更有效地治疗患者”。为此,他们正在组织和分享“来自患者的 15,000 个匿名图像扫描和临床数据,这将作为机器学习程序的输入数据来开发算法。”
### Mozilla 发布了一种简单易用的方法,以确定你是否遭受过数据泄露
向不那么精通软件的人解释安全性始终是一项挑战无论你的技能水平如何都很难监控你的风险。Mozilla 发布了 [Firefox Monitor][7],其数据由 [Have I Been Pwned][8] 提供,它是一种查看你的任何电子邮件是否出现在重大数据泄露事件中的简单方式。你可以输入电子邮件逐个搜索,或注册他们的服务以便将来通知你。
该网站还提供了大量有用的教程,用于了解黑客的作案手法、数据泄露后该如何应对,以及如何创建强密码。请务必将该网站加入书签,以防家人在假期期间向你寻求建议。
### 其它新闻
* [想要一款去谷歌化的 Android把你的手机发送给这个人][9]
* [CockroachDB 发行版使用了非 OSI 批准的许可证,但仍然保持开源][10]
* [基础设施自动化公司 Chef 承诺开源][11]
* [俄罗斯的 Windows 替代品将获得安全升级][12]
* [使用此代码在几分钟内从 Medium 切换到你自己的博客][13]
* [开源推进联盟宣布与台湾自由软件协会建立新合作伙伴关系][14]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/news-june-8
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i
[2]: https://news.umich.edu/open-source-bionic-leg-first-of-its-kind-platform-aims-to-rapidly-advance-prosthetics/
[3]: https://opensourceleg.com/
[4]: https://www.information-age.com/kedro-mckinseys-open-source-software-tool-123482991/
[5]: https://github.com/quantumblacklabs/kedro
[6]: https://pulmonaryfibrosisnews.com/2019/05/31/international-open-source-imaging-consortium-osic-launched-to-advance-ipf-diagnosis/
[7]: https://monitor.firefox.com/
[8]: https://haveibeenpwned.com/
[9]: https://fossbytes.com/want-a-google-free-android-send-your-phone-to-this-guy/
[10]: https://www.cockroachlabs.com/blog/oss-relicensing-cockroachdb/
[11]: https://www.infoq.com/news/2019/05/chef-open-source/
[12]: https://www.nextgov.com/cybersecurity/2019/05/russias-would-be-windows-replacement-gets-security-upgrade/157330/
[13]: https://github.com/mathieudutour/medium-to-own-blog
[14]: https://opensource.org/node/994

View File

@ -0,0 +1,282 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10962-1.html)
[#]: subject: (Screen Command Examples To Manage Multiple Terminal Sessions)
[#]: via: (https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/)
[#]: author: (sk https://www.ostechnix.com/author/sk/)
screen 命令示例:管理多个终端会话
======
![Screen Command Examples To Manage Multiple Terminal Sessions](https://img.linux.net.cn/data/attachment/album/201906/11/124801th0uy0hti3y211ha.jpg)
GNU Screen 是一个终端多路复用器窗口管理器。顾名思义Screen 可以在多个交互式 shell 之间复用物理终端,因此我们可以在每个终端会话中执行不同的任务。所有的 Screen 会话都完全独立地运行各自的程序。因此,即使会话意外关闭或断开连接,在 Screen 会话内运行的程序或进程也将继续运行。例如,当通过 SSH [升级 Ubuntu][2] 服务器时,`screen` 命令将继续运行升级过程,以防 SSH 会话因任何原因而终止。
GNU Screen 允许我们轻松创建多个 Screen 会话,在不同会话之间切换,在会话之间复制文本,随时连上或脱离会话等等。它是每个 Linux 管理员应该在必要时学习和使用的重要命令行工具之一。在本简要指南中,我们将看到 `screen` 命令的基本用法以及在 Linux 中的示例。
### 安装 GNU Screen
GNU Screen 在大多数 Linux 操作系统的默认存储库中都可用。
要在 Arch Linux 上安装 GNU Screen请运行
```
$ sudo pacman -S screen
```
在 Debian、Ubuntu、Linux Mint 上:
```
$ sudo apt-get install screen
```
在 Fedora 上:
```
$ sudo dnf install screen
```
在 RHEL、CentOS 上:
```
$ sudo yum install screen
```
在 SUSE/openSUSE 上:
```
$ sudo zypper install screen
```
让我们继续看一些 `screen` 命令示例。
### 管理多个终端会话的 Screen 命令示例
在 Screen 中所有命令的默认前缀快捷方式是 `Ctrl + a`。使用 Screen 时,你需要经常使用此快捷方式。所以,要记住这个键盘快捷键。
#### 创建新的 Screen 会话
让我们创建一个新的 Screen 会话并连上它。为此,请在终端中键入以下命令:
```
screen
```
现在,在此会话中运行任何程序或进程,即使你与此会话断开连接,正在运行的进程或程序也将继续运行。
#### 从 Screen 会话脱离
要从 Screen 会话中脱离,请按 `Ctrl + a` 和 `d`。你无需同时按下这两个组合键:先按 `Ctrl + a`,然后按 `d`。从会话中脱离后,你将看到类似下面的输出。
```
[detached from 29149.pts-0.sk]
```
这里,`29149` 是 Screen ID`pts-0.sk` 是屏幕会话的名称。你可以使用 Screen ID 或相应的会话名称来连上、脱离和终止屏幕会话。
#### 创建命名会话
你还可以用你选择的任何自定义名称创建一个 Screen 会话,而不是默认用户名,如下所示。
```
screen -S ostechnix
```
上面的命令将创建一个名为 `xxxxx.ostechnix` 的新 Screen 会话,并立即连上它。要从当前会话中脱离,请按 `Ctrl + a`,然后按 `d`
当你想要查找哪些进程在哪些会话上运行时,命名会话会很有用。例如,当在会话中设置 LAMP 系统时,你可以简单地将其命名为如下所示。
```
screen -S lampstack
```
#### 创建脱离的会话
有时,你可能想要创建一个会话,但不希望自动连上该会话。在这种情况下,运行以下命令来创建名为`senthil` 的已脱离会话:
```
screen -S senthil -d -m
```
也可以缩短为:
```
screen -dmS senthil
```
上面的命令将创建一个名为 `senthil` 的会话,但不会连上它。
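脱离的会话常用于在后台执行长时间任务,下面是一个简单的示例(其中的路径仅为假设):
```
# 在名为 backup 的脱离会话中运行一个耗时的 rsync 任务
$ screen -dmS backup rsync -a /data/ /mnt/backup/
# 之后可以随时连上该会话查看进度
$ screen -r backup
```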
#### 列出屏幕会话
要列出所有正在运行的会话(连上的或脱离的),请运行:
```
screen -ls
```
示例输出:
```
There are screens on:
29700.senthil (Detached)
29415.ostechnix (Detached)
29149.pts-0.sk (Detached)
3 Sockets in /run/screens/S-sk.
```
如你所见,我有三个正在运行的会话,并且所有会话都已脱离。
#### 连上 Screen 会话
如果你想连上会话,例如 `29415.ostechnix`,只需运行:
```
screen -r 29415.ostechnix
```
或:
```
screen -r ostechnix
```
或使用 Screen ID
```
screen -r 29415
```
要验证我们是否连上到上述会话,只需列出打开的会话并检查。
```
screen -ls
```
示例输出:
```
There are screens on:
29700.senthil (Detached)
29415.ostechnix (Attached)
29149.pts-0.sk (Detached)
3 Sockets in /run/screens/S-sk.
```
如你所见,在上面的输出中,我们目前已连上到 `29415.ostechnix` 会话。要退出当前会话,请按 `ctrl + a d`
#### 创建嵌套会话
当我们运行 `screen` 命令时,它将为我们创建一个会话。但是,我们可以创建嵌套会话(会话内的会话)。
首先,创建一个新会话或连上已打开的会话。然后我将创建一个名为 `nested` 的新会话。
```
screen -S nested
```
现在,在该会话中按 `Ctrl + a` 和 `c` 创建另一个会话。只需重复此操作即可创建任意数量的嵌套 Screen 会话。每个会话都会分配一个编号,编号从 `0` 开始。
你可以按 `Ctrl + a` 和 `n` 移动到下一个会话,按 `Ctrl + a` 和 `p` 移动到上一个会话。
以下是管理嵌套会话的重要键盘快捷键列表。
* `Ctrl + a "` - 列出所有会话
* `Ctrl + a 0` - 切换到会话号 0
* `Ctrl + a n` - 切换到下一个会话
* `Ctrl + a p` - 切换到上一个会话
* `Ctrl + a S` - 将当前区域水平分割为两个区域
* `Ctrl + a |` - 将当前区域垂直分割为两个区域
* `Ctrl + a Q` - 关闭除当前会话之外的所有会话
* `Ctrl + a X` - 关闭当前会话
* `Ctrl + a \` - 终止所有会话并终止 Screen
* `Ctrl + a ?` - 显示键绑定。要退出,请按回车
  
#### 锁定会话
Screen 有一个锁定会话的选项。为此,请按 `Ctrl + a` 和 `x`,然后输入你的 Linux 密码以锁定。
```
Screen used by sk <sk> on ubuntuserver.
Password:
```
#### 记录会话
你可能希望记录 Screen 会话中的所有内容。为此,只需按 `Ctrl + a` 和 `H` 即可。
或者,你也可以使用 `-L` 参数启动新会话来启用日志记录。
```
screen -L
```
从现在开始,你在会话中做的所有活动都将记录并存储在 `$HOME` 目录中名为 `screenlog.x` 的文件中。这里,`x` 是一个数字。
你可以使用 `cat` 命令或任何文本查看器查看日志文件的内容。
![][3]
*记录 Screen 会话*
#### 终止 Screen 会话
如果不再需要会话,只需杀死它。要杀死名为 `senthil` 的脱离会话:
```
screen -r senthil -X quit
```
或:
```
screen -X -S senthil quit
```
或:
```
screen -X -S 29415 quit
```
如果没有打开的会话,你将看到以下输出:
```
$ screen -ls
No Sockets found in /run/screens/S-sk.
```
更多细节请参照 man 手册页:
```
$ man screen
```
还有一个名为 Tmux 的类似的命令行实用程序,它与 GNU Screen 执行相同的工作。要了解更多信息,请参阅以下指南。
* [Tmux 命令示例:管理多个终端会话][5]
### 资源
* [GNU Screen 主页][6]
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/
作者:[sk][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Screen-Command-Examples-720x340.jpg
[2]: https://www.ostechnix.com/how-to-upgrade-to-ubuntu-18-04-lts-desktop-and-server/
[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Log-screen-sessions.png
[4]: https://www.ostechnix.com/record-everything-terminal/
[5]: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/
[6]: https://www.gnu.org/software/screen/

View File

@ -10,7 +10,7 @@ export TSL_DIR='translated' # 已翻译
export PUB_DIR='published' # 已发布
# 定义匹配规则
export CATE_PATTERN='(talk|tech)' # 类别
export CATE_PATTERN='(talk|tech|news)' # 类别
export FILE_PATTERN='[0-9]{8} [a-zA-Z0-9_.,() -]*\.md' # 文件名
# 获取用于匹配操作的正则表达式

View File

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco to buy IoT security, management firm Sentryo)
[#]: via: (https://www.networkworld.com/article/3400847/cisco-to-buy-iot-security-management-firm-sentryo.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco to buy IoT security, management firm Sentryo
======
Buying Sentryo will give Cisco support for anomaly and real-time threat detection for the industrial internet of things.
![IDG Worldwide][1]
Looking to expand its IoT security and management offerings Cisco plans to acquire [Sentryo][2], a company based in France that offers anomaly detection and real-time threat detection for Industrial Internet of Things ([IIoT][3]) networks.
Founded in 2014, Sentryo's products include ICS CyberVision, an asset inventory, network monitoring and threat intelligence platform, and CyberVision network edge sensors, which analyze network flows.
**More on IoT:**
* [What is the IoT? How the internet of things works][4]
* [What is edge computing and how its changing the network][5]
* [Most powerful Internet of Things companies][6]
* [10 Hot IoT startups to watch][7]
* [The 6 ways to make money in IoT][8]
* [What is digital twin technology? [and why it matters]][9]
* [Blockchain, service-centric networking key to IoT success][10]
* [Getting grounded in IoT networking and security][11]
* [Building IoT-ready networks must become a priority][12]
* [What is the Industrial IoT? [And why the stakes are so high]][13]
“We have incorporated Sentryo's edge sensor and our industrial networking hardware with Cisco's IOx application framework,” wrote Rob Salvagno, Cisco vice president of Corporate Development and Cisco Investments in a [blog][14] about the proposed buy.
“We believe that connectivity is foundational to IoT projects and by unleashing the power of the network we can dramatically improve operational efficiencies and uncover new business opportunities. With the addition of Sentryo, Cisco can offer control systems engineers deeper visibility into assets to optimize, detect anomalies and secure their networks.”
Gartner [wrote][15] of Sentryo's system: “ICS CyberVision product provides visibility into its customers' OT networks in a way all OT users will understand, not just technical IT staff. With the increased focus of both hackers and regulators on industrial control systems, it is vital to have the right visibility of an organization's OT. Many OT networks not only are geographically dispersed, but also are complex and consist of hundreds of thousands of components.”
Sentryo's ICS CyberVision lets enterprises ensure continuity, resilience and safety of their industrial operations while preventing possible cyberattacks, said [Nandini Natarajan][16] , industry analyst at Frost & Sullivan. "It automatically profiles assets and communication flows using a unique 'universal OT language' in the form of tags, which describe in plain text what each asset is doing. ICS CyberVision gives anyone immediate insights into an asset's role and behaviors; it offers many different analytic views leveraging artificial intelligence algorithms to let users deep-dive into the vast amount of data a typical industrial control system can generate. Sentryo makes it easy to see important or relevant information."
In addition, Sentryo's platform uses deep packet inspection (DPI) to extract information from communications among industrial assets, Natarajan said. This DPI engine is deployed through an edge-computing architecture that can run either on Sentryo sensor appliances or on network equipment that is already installed. Thus, Sentryo can embed visibility and cybersecurity features in the industrial network rather than deploying an out-of-band monitoring network, Natarajan said.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][17] ]**
Sentryo's technology will broaden [Cisco's][18] overarching IoT plan. In January it [launched][19] a family of switches, software, developer tools and blueprints to meld IoT and industrial networking with [intent-based networking][20] (IBN) and classic IT security, monitoring and application-development support.
The new platforms can be managed by Cisco's DNA Center and Cisco IoT Field Network Director, letting customers fuse their IoT and industrial-network control with their business IT world.
DNA Center is Cisco's central management tool for enterprise networks, featuring automation capabilities, assurance setting, fabric provisioning and policy-based segmentation. It is also at the center of the company's IBN initiative, offering customers the ability to automatically implement network and policy changes on the fly and ensure data delivery. The IoT Field Network Director is software that manages multiservice networks of Cisco industrial, connected grid routers and endpoints.
Liz Centoni, senior vice president and general manager of Cisco's IoT business group said the company expects the [Sentryo technology to help][21] IoT customers in a number of ways:
Network-enabled, passive DPI capabilities to discover IoT and OT assets, and establish communication patterns between devices and systems. Sentryo's sensor is natively deployable on Cisco's IOx framework and can be built into the industrial network these devices run on instead of adding additional hardware.
As device identification and communication patterns are created, Cisco will integrate this with DNA Center and Identity Services Engine (ISE) to allow customers to easily define segmentation policy. This integration will allow OT teams to leverage IT security teams' expertise to secure their environments without risk to the operational processes.
With these IoT devices lacking modern embedded software and security capabilities, segmentation will be the key technology to allow communication from operational assets to the rightful systems, and reduce risk of cyber security incidents like we saw with [WannaCry][22] and [Norsk Hydro][23].
According to [Crunchbase][24], Sentryo has $3.5M in estimated revenue annually and it competes most closely with Cymmetria, Team8, and Indegy. The acquisition is expected to close before the end of Cisco's Q1 Fiscal Year 2020 -- October 26, 2019. Financial details of the acquisition were not detailed.
Sentryo is Cisco's second acquisition this year. It bought Singularity for its network analytics technology in January. In 2018 Cisco bought six companies including Duo security software.
Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400847/cisco-to-buy-iot-security-management-firm-sentryo.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/09/nwan_019_iiot-100771131-large.jpg
[2]: https://www.sentryo.net/
[3]: https://www.networkworld.com/article/3243928/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[4]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[5]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[6]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[7]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[8]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[9]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[10]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[11]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[12]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[13]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[14]: https://blogs.cisco.com/news/cisco-industrial-iot-news
[15]: https://www.globenewswire.com/news-release/2018/06/28/1531119/0/en/Sentryo-Named-a-Cool-Vendor-by-Gartner.html
[16]: https://www.linkedin.com/pulse/industrial-internet-things-iiot-decoded-nandini-natarajan/
[17]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[18]: https://www.cisco.com/c/dam/en_us/solutions/iot/ihs-report.pdf
[19]: https://www.networkworld.com/article/3336454/cisco-goes-after-industrial-iot.html
[20]: https://www.networkworld.com/article/3202699/what-is-intent-based-networking.html
[21]: https://blogs.cisco.com/news/securing-the-internet-of-things-cisco-announces-intent-to-acquire-sentryo
[22]: https://blogs.cisco.com/security/talos/wannacry
[23]: https://www.securityweek.com/norsk-hydro-may-have-lost-40m-first-week-after-cyberattack
[24]: https://www.crunchbase.com/organization/sentryo#section-web-traffic-by-similarweb
[25]: https://www.facebook.com/NetworkWorld/
[26]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Zorin OS Becomes Even More Awesome With Zorin 15 Release)
[#]: via: (https://itsfoss.com/zorin-os-15-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Zorin OS Becomes Even More Awesome With Zorin 15 Release
======
Zorin OS has always been known as one of the [beginner-focused Linux distros][1] out there. Yes, it may not be the most popular, but it sure is a good distribution, especially for Windows migrants.
A few years back, I remember, a friend of mine always insisted that I install [Zorin OS][2]. Personally, I didn't like the UI back then. But now that Zorin OS 15 is here, I have more reasons to get it installed as my primary OS.
Fret not, in this article, well talk about everything that you need to know.
### New Features in Zorin 15
Let's see the major changes in the latest release of Zorin. Zorin 15 is based on Ubuntu 18.04.2 and thus brings performance improvements under the hood. Other than that, there are several UI (User Interface) improvements.
#### Zorin Connect
![Zorin Connect][3]
Zorin OS 15's main highlight is Zorin Connect. If you have an Android device, you are in for a treat. Similar to [PushBullet][4], [Zorin Connect][5] integrates your phone with the desktop experience.
You get to sync your smartphone's notifications on your desktop while also being able to reply to them. Heck, you can also reply to SMS messages and view those conversations.
In addition to these, you get the following abilities:
* Share files and web links between devices
* Use your phone as a remote control for your computer
* Control media playback on your computer from your phone, and pause playback automatically when a phone call arrives
As mentioned in their [official announcement post][6], the data transmission will be on your local network and no data will be transmitted to the cloud. To access Zorin Connect, navigate your way through Zorin menu > System Tools > Zorin Connect.
[Get ZORIN CONNECT ON PLAY STORE][5]
#### New Desktop Theme (with dark mode!)
![Zorin Dark Mode][7]
I'm all in when someone mentions “Dark Mode” or “Dark Theme”. For me, this is the best thing that comes baked in with Zorin OS 15.
It's so pleasing to my eyes when I enable the dark mode on anything, you with me?
Not just a dark theme, the UI is a lot cleaner and intuitive with subtle new animations. You can find all the settings from the Zorin Appearance app built-in.
#### Adaptive Background & Night Light
You get an option to let the background adapt according to the brightness of the environment every hour of the day. Also, you can find the night mode if you don't want the blue light to stress your eyes.
#### To do app
![Todo][9]
I always wanted this to happen so that I don't have to use a separate service that offers a Linux client to add my tasks. It's good to see a built-in app with integration support for Google Tasks and Todoist.
#### There's More?
Yes! Other major changes include the support for Flatpak, a touch layout for convertible laptops, a DND mode, and some redesigned apps (Settings, Libre Office) to give you better user experience.
If you want the detailed list of changes along with the minor improvements, you can follow the [announcement post][6]. If you are already a Zorin user, you would notice that they have refreshed their website with a new look as well.
### Download Zorin OS 15
**Note** : _Direct upgrades from Zorin OS 12 to 15 without needing to re-install the operating system will be available later this year._
In case you didn't know, there are three versions of Zorin OS: Ultimate, Core, and the Lite version.
If you want to support the devs and the project while unlocking the full potential of Zorin OS, you should get the ultimate edition for $39.
If you just want the essentials, the core edition will do just fine (which you can download for free). In either case, if you have an old computer, the lite version is the one to go with.
[DOWNLOAD ZORIN OS 15][10]
**What do you think of Zorin 15?**
I'm definitely going to give it a try as my primary OS, fingers crossed. What about you? What do you think about the latest release? Feel free to let us know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/zorin-os-15-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-beginners/
[2]: https://zorinos.com/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/zorin-connect.jpg?fit=800%2C473&ssl=1
[4]: https://www.pushbullet.com/
[5]: https://play.google.com/store/apps/details?id=com.zorinos.zorin_connect&hl=en_IN
[6]: https://zoringroup.com/blog/2019/06/05/zorin-os-15-is-here-faster-easier-more-connected/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/zorin-dark-mode.jpg?fit=722%2C800&ssl=1
[8]: https://itsfoss.com/necunos-linux-smartphone/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/Todo.jpg?fit=800%2C652&ssl=1
[10]: https://zorinos.com/download/
[11]: https://itsfoss.com/ubuntu-1404-codenamed-trusty-tahr/

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Free and Open Source Trello Alternative OpenProject 9 Released)
[#]: via: (https://itsfoss.com/openproject-9-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Free and Open Source Trello Alternative OpenProject 9 Released
======
[OpenProject][1] is a collaborative open source project management software. It's an alternative to proprietary solutions like [Trello][2] and [Jira][3].
You can use it for free if it's for personal use and you set it up (and host it) on your own server. This way, you control your data.
Of course, you get access to premium features and priority help if you are a [Cloud or Enterprise edition user][4].
The OpenProject 9 release emphasizes on new board views, package list view, and work templates.
If you didn't know about this, you can give it a try. But, if you are an existing user, you should know what's new before migrating to OpenProject 9.
### What's New in OpenProject 9?
Here are some of the major changes in the latest release of OpenProject.
#### Scrum & Agile Boards
![][5]
For Cloud and Enterprise editions, there's a new [scrum][6] and [agile][7] board view. You also get to showcase your work in a [kanban-style][8] fashion, making it easier to support your agile and scrum teams.
The new board view makes it easy to know who's assigned to the task and update the status in a jiffy. You also get different board view options like basic board, status board, and version boards.
#### Work Package templates
![Work Package Template][9]
You don't have to create everything from scratch for every unique work package. So, instead, you just keep a template so that you can use it whenever you need to create a new work package. It will save a lot of time.
#### New Work Package list view
![Work Package View][10]
In the work package list, there's a subtle new addition that lets you view the avatars of the assigned people for a specific work.
#### Customizable work package view for my page
Your own page that displays what you are working on (and the progress) shouldn't always be boring. So, now you get the ability to customize it and even add a Gantt chart to visualize your work.
**Wrapping Up**
For detailed instructions on migrating and installation, you should follow the [official announcement post][12] covering all the essential details for the users.
Also, we would love to know about your experience with OpenProject 9, so let us know about it in the comments section below! If you use some other project management software, feel free to suggest it to us and the rest of your fellow It's FOSS readers.
--------------------------------------------------------------------------------
via: https://itsfoss.com/openproject-9-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.openproject.org/
[2]: https://trello.com/
[3]: https://www.atlassian.com/software/jira
[4]: https://www.openproject.org/pricing/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/open-project-9-scrum-agile.jpeg?fit=800%2C517&ssl=1
[6]: https://en.wikipedia.org/wiki/Scrum_(software_development)
[7]: https://en.wikipedia.org/wiki/Agile_software_development
[8]: https://en.wikipedia.org/wiki/Kanban
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/work-package-template.jpg?ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/work-package-view.jpg?fit=800%2C454&ssl=1
[11]: https://itsfoss.com/ubuntu-12-04-end-of-life/
[12]: https://www.openproject.org/openproject-9-new-scrum-agile-board-view/

View File

@ -1,116 +0,0 @@
18 Cyber-Security Trends Organizations Need to Brace for in 2018
======
### 18 Cyber-Security Trends Organizations Need to Brace for in 2018
Enterprises, end users and governments faced no shortage of security challenges in 2017. Some of those same challenges will continue into 2018, and there will be new problems to solve as well. Ransomware has been a concern for several years and will likely continue to be a big issue in 2018. The new year is also going to bring the formal introduction of the European Union's General Data Protection Regulation (GDPR), which will impact how organizations manage private information. A key trend that emerged in 2017 was an increasing use of artificial intelligence (AI) to help solve cyber-security challenges, and that's a trend that will continue to accelerate in 2018. What else will the new year bring? In this slide show, eWEEK presents 18 security predictions for the year ahead from 18 security experts.
### Africa Emerges as New Area for Threat Actors and Targets
"In 2018, Africa will emerge as a new focus area for cyber-threats--both targeting organizations based there and attacks originating from the continent. With its growth in technology adoption and operations and rising economy, and its increasing number of local resident threat actors, Africa has the largest potential for net-new impactful cyber events." -Steve Stone, IBM X-Force IRIS
### AI vs. AI
"2018 will see a rise in AI-based attacks as cyber-criminals begin using machine learning to spoof human behaviors. The cyber-security industry will need to tune their own AI tools to better combat the new threats. The cat and mouse game of cybercrime and security innovation will rapidly escalate to include AI-enabled tools on both sides." --Caleb Barlow, vice president of Threat Intelligence, IBM Security
### Cyber-Security as a Growth Driver
"CEOs view cyber-security as one of their top risks, but many also see it as an opportunity to innovate and find new ways to generate revenue. In 2018 and beyond, effective cyber-security measures will support companies that are transforming their security, privacy and continuity controls in an effort to grow their businesses." -Greg Bell, KMPG's Global Cyber Security Practice co-leader
### GDPR Means Good Enough Isn't Good Enough
"Too many professionals share a 'good enough' philosophy that they've adopted from their consumer mindset that they can simply upgrade and patch to comply with the latest security and compliance best practices or regulations. In 2018, with the upcoming enforcement of the EU GDPR 'respond fast' rules, organizations will quickly come to terms, and face fines, with why 'good enough' is not 'good' anymore." -Kris Lovejoy, CEO of BluVector
### Consumerization of Cyber-Security
"2018 will mark the debut of the 'consumerization of cyber-security.' This means consumers will be offered a unified, comprehensive suite of security offerings, including, in addition to antivirus and spyware protection, credit and identify abuse monitoring and identity restoration. This is a big step forward compared to what is available in one package today. McAfee Total Protection, which safeguards consumer identities in addition to providing virus and malware protection, is an early, simplified example of this. Consumers want to feel more secure." -Don Dixon, co-founder and managing director, Trident Capital Cybersecurity
### Ransomware Will Continue
"Ransomware will continue to plague organizations with 'old' attacks 'refreshed' and reused. The threat of ransomware will continue into 2018. This year we've seen ransomware wreak havoc across the globe with both WannaCry and NotPetya hitting the headlines. Threats of this type and on this scale will be a common feature of the next 12 months." -Andrew Avanessian, chief operating officer at Avecto
### More Encryption Will Be Needed
"It will become increasingly clear in the industry that HTTPS does not offer the robust security and end-to-end encryption as is commonly believed, and there will be a push to encrypt data before it is sent over HTTPS." -Darren Guccione, CEO and co-founder, Keeper Security
### Denial of Service Will Become Financially Lucrative
"Denial of service will become as financially lucrative as identity theft. Using stolen identities for new account fraud has been the major revenue driver behind breaches. However, in recent years ransomware attacks have caused as much if not more damage, as increased reliance on distributed applications and cloud services results in massive business damage when information, applications or systems are held hostage by attackers." -John Pescatore. SANS' director of emerging security trends
### Goodbye Social Security Number
"2018 is the turning point for the retirement of the Social Security number. At this point, the vast majority of SSNs are compromised, and we can no longer rely on them--nor should we have previously." -Michael Sutton, CISO, Zscaler
### Post-Quantum Cyber-Security Discussion Warms Up the Boardroom
"The uncertainty of cyber-security in a post-quantum world is percolating some circles, but 2018 is the year the discussions gain momentum in the top levels of business. As security experts grapple with preparing for a post-quantum world, top executives will begin to ask what can be done to ensure all of our connected 'things' remain secure." -Malte Pollmann, CEO of Utimaco
### Market Consolidation Is Coming
"There will be accelerated consolidation of cyber niche markets flooded with too many 'me-too' companies offering extremely similar products and services. As an example, authentication, end-point security and threat intelligence now boast a total of more than 25 competitors. Ultimately, only three to six companies in each niche can survive." -Mike Janke, co-founder of DataTribe
### Health Care Will Be a Lucrative Target
"Health records are highly valued on the black market because they are saturated with Personally Identifiable Information (PII). Health care institutions will continue to be a target as they have tighter allocations for security in their IT budgets. Also, medical devices are hard to update and often run on older operating system versions." -Larry Cashdollar, senior engineer, Security Intelligence Response Team, Akamai
### 2018: The Year of Simple Multifactor Authentication for SMBs
"Unfortunately, effective multifactor authentication (MFA) solutions have remained largely out of reach for the average small- and medium-sized business. Though enterprise multifactor technology is quite mature, it often required complex on-premises solutions and expensive hardware tokens that most small businesses couldn't afford or manage. However, the growth of SaaS and smartphones has introduced new multifactor solutions that are inexpensive and easy for small businesses to use. Next year, many SMBs will adopt these new MFA solutions to secure their more privileged accounts and users. 2018 will be the year of MFA for SMBs." -Corey Nachreiner, CTO at WatchGuard Technologies
### Automation Will Improve the IT Skills Gap
"The security skills gap is widening every year, with no signs of slowing down. To combat the skills gap and assist in the growing adoption of advanced analytics, automation will become an even higher priority for CISOs." -Haiyan Song, senior vice president of Security Markets at Splunk
### Industrial Security Gets Overdue Attention
"The high-profile attacks of 2017 acted as a wake-up call, and many plant managers now worry that they could be next. Plant manufacturers themselves will offer enhanced security. Third-party companies going on their own will stay in a niche market. The industrial security manufacturers themselves will drive a cooperation with the security industry to provide security themselves. This is because there is an awareness thing going on and impending government scrutiny. This is different from what happened in the rest of IT/IoT where security vendors just go to market by themselves as a layer on top of IT (i.e.: an antivirus on top of Windows)." -Renaud Deraison, co-founder and CTO, Tenable
### Cryptocurrencies Become the New Playground for Identity Thieves
"The rising value of cryptocurrencies will lead to greater attention from hackers and bad actors. Next year we'll see more fraud, hacks and money laundering take place across the top cryptocurrency marketplaces. This will lead to a greater focus on identity verification and, ultimately, will result in legislation focused on trader identity." -Stephen Maloney, executive vice president of Business Development &amp; Strategy, Acuant
### GDPR Compliance Will Be a Challenge
"In 2018, three quarters of companies or apps will be ruled out of compliance with GDPR and at least one major corporation will be fined to the highest extent in 2018 to set an example for others. Most companies are preparing internally by performing more security assessments and recruiting a mix of security professionals with privacy expertise and lawyers, but with the deadline quickly approaching, it's clear the bulk of businesses are woefully behind and may not be able to avoid these consequences." -Sanjay Beri, founder and CEO, Netskope
### Data Security Solidifies Its Spot in the IT Security Stack
"Many businesses are stuck in the mindset that security of networks, servers and applications is sufficient to protect their data. However, the barrage of breaches in 2017 highlights a clear disconnect between what organizations think is working and what actually works. In 2018, we expect more businesses to implement data security solutions that complement their existing network security deployments." -Jim Varner, CEO of SecurityFirst
### [Eight Cyber-Security Vendors Raise New Funding in November 2017][1]
Though the pace of funding slowed in November, multiple firms raised new venture capital to develop and improve their cyber-security products.
--------------------------------------------------------------------------------
via: http://voip.eweek.com/security/18-cyber-security-trends-organizations-need-to-brace-for-in-2018
作者:[Sean Michael Kerner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://voip.eweek.com/Authors/sean-michael-kerner
[1]:http://voip.eweek.com/security/eight-cyber-security-vendors-raise-new-funding-in-november-2017

View File

@ -1,255 +0,0 @@
A review of Virtual Labs virtualization solutions for MOOCs WebLog Pro Olivier Berger
======
### 1 Introduction
This is a memo that tries to capture some of the experience gained in the [FLIRT project][3] on the topic of Virtual Labs for MOOCs (Massive Open Online Courses).
In this memo, we try to draw an overview of some benefits and concerns with existing approaches at using virtualization techniques for running Virtual Labs, as distributions of tools made available for distant learners.
We describe 3 main technical architectures: (1) running Virtual Machine images locally on a virtual machine manager, or (2) displaying the remote execution of similar virtual machines on a IaaS cloud, and (3) the potential of connecting to the remote execution of minimized containers on a remote PaaS cloud.
We then elaborate on some perspectives for locally running ports of applications to the WebAssembly virtual machine of the modern Web browsers.
Disclaimer: This memo doesn't intend to point to extensive literature on the subject, so part of our analysis may be biased by our particular context.
### 2 Context : MOOCs
Many MOOCs (Massive Open Online Courses) include a kind of “virtual laboratory” for learners to experiment with tools, as a way to apply the knowledge, practice, and be more active in the learning process. In quite a few (technical) disciplines, this can consist in using a set of standard applications in a professional domain, which represent typical tools that would be used in real life scenarii.
Our main perspective will be that of a MOOC editor and of MOOC production teams which want to make “virtual labs” available for MOOC participants.
Such a “virtual lab” would typically contain installations of existing applications, pre-installed and configured, and loaded with scenario data in order to perform a lab.
The main constraint here is that such labs would typically be fabricated with limited software development expertise and funds[1][4]. Thus we consider here only the assembly of existing “normal” applications and discard the option of developing novel “serious games” and simulator applications for such MOOCs.
#### 2.1 The FLIRT project
The [FLIRT project][5] groups a consortium of 19 partners from industry, SMEs and academia to work on a collection of MOOCs and SPOCs for professional development in Networks and Telecommunications. Led by Institut Mines Telecom, it benefits from the funding support of the French “Investissements d'avenir” programme.
As part of the FLIRT roadmap, we're leading an “innovation task” focused on Virtual Labs in the context of the Cloud. This memo was produced as part of this task.
#### 2.2 Some challenges in virtual labs design for distant learning
Virtual Labs used in distance learning contexts require learners to use software applications autonomously, either on a personal or a professional computer. In general, the technical skills of participants may be diverse, and so may the quality (bandwidth, QoS, filtering, limitations, firewalling) of the hardware and networks they use at home or at work. It is thus very optimistic to seek a one-size-fits-all strategy.
Most of the time there's a learning curve in getting familiar with the tools which students will have to use, which poses many challenges for beginners to overcome. These tools may not be suited for beginners, but they will still be selected by the trainers as they are representative of the professional context being taught.
In theory, this usability challenge should be addressed by devising an adapted pedagogical approach, especially in a context of distance learning, so that learners can practice the labs on their own, without the presence of a tutor or professor. Or some particular prerequisite skills could be required (“please follow System Administration 101 before applying to this course”).
Unfortunately there are many cases where instructors basically just translate to a distance learning scenario lab resources that had previously been devised for in-presence learning. This leaves learners facing many challenges on their own. The only support resource is often a regular forum on the MOOC's LMS (Learning Management System).
My intuition[2][6] is that developing ad-hoc simulators for distant education would probably be more efficient and easy to use for learners. But that would require a too high investment for the designers of the courses.
In the context of MOOCs, which are mainly free to participate in, not much investment is possible in devising ad-hoc lab applications, and instructors have to rely on existing applications, tools and scenarii to deliver a cheap enough environment. Furthermore, technical or licensing constraints[3][7] may lead to selecting lab tools which may not be easy to learn, but have the great advantage of being freely redistributable[4][8].
### 3 Virtual Machines for Virtual Labs
The learners who will try unattended learning in such typical virtual labs will face difficulties in making specialized applications run. They must overcome the technical details of downloading, installing and configuring programs, before even trying to perform a particular pedagogical scenario linked to the matter studied.
To diminish these difficulties, one traditional approach for implementing labs in MOOCs has been to assemble in advance a Virtual Machine image. This already made image can then be downloaded and run with a virtual machine simulator (like [VirtualBox][9][5][10]).
The pre-loaded VM will already have everything ready for use, so that the learners dont have to install anything on their machines.
An alternative is to let learners download and install the needed software tools themselves, but this leads to so many compatibility issues or technical skill prerequisites, that this is often not advised, and mentioned only as a fallback option.
#### 3.1 Downloading and installation issues
Experience shows[2][11] that such virtual machines also bring some issues. Even if installation of every piece of software is no longer required, learners still need to be able to run the VM simulator on a wide range of diverse hardware, OSes and configurations. Even managing to download the VMs still causes many issues (lack of admin privileges, image size vs. download speed, memory or CPU load, disk space, screen configurations, firewall filtering, keyboard layout, etc.).
These problems arent generally faced by the majority of learners, but the impacted minority is not marginal either, and they generally will produce a lot of support requests for the MOOC team (usually in the forums), which needs to be anticipated by the community managers.
The use of VMs is no show stopper for most, but can be a serious problem for a minority of learners, and is then no silver bullet.
Some general usability issues may also emerge if users arent used to the look and feel of the enclosed desktop. For instance, the VM may consist of a GNU/Linux desktop, whereas users would use a Windows or Mac OS system.
#### 3.2 Fabrication issues for the VM images
On the MOOC team's side, the fabrication of a lightweight, fast, tested, license-free and easy-to-use VM image isn't necessarily easy.
Software configurations tend to rot as time passes, and maintenance may not be easy when the evolution of later MOOC editions means the virtual lab scenarii have to be maintained years later.
Ideally, this would require adopting an “industrial” process for building (and testing) the lab VMs, but this requires quite some expertise (system administration, packaging, etc.) that may or may not have been anticipated at the time of building the MOOC (unlike video editing competence, for instance).
Our experiment with the [Vagrant][12] technology [[0][13]] and Debian packaging was interesting in this respect, as it allowed us to use a well managed “script” to precisely control the build of a minimal VM image.
### 4 Virtual Labs as a Service
To overcome the difficulties in downloading and running Virtual Machines on one's local computer, we have started exploring the possibility of running these applications in a kind of Software as a Service (SaaS) context, “on the cloud”.
But not all applications typically used in MOOC labs are already available for remote execution on the cloud (unless the course deals precisely with managing email in GMail).
We have then studied the option to use such an approach not for a single application, but for a whole virtual “desktop” which would be available on the cloud.
#### 4.1 IaaS deployments
A way to achieve this goal is to deploy, on the cloud, Virtual Machine images quite similar to the ones described above, in an Infrastructure as a Service (IaaS) context[6][14], to offer access to remote desktops for every learner.
There are different technical options to achieve this goal, but a simplified description of the architecture can be seen as just running Virtual Machines on a single IaaS platform instead of on each learners computer. Access to the desktop and application interfaces is made possible with the use of Web pages (or other dedicated lightweight clients) which will display a “full screen” display of the remote desktop running for the user on the cloud VM. Under the hood, the remote display of a Linux desktop session is made with technologies like [VNC][15] and [RDP][16] connecting to a [Guacamole][17] server on the remote VM.
In the context of the FLIRT project, we have made early experiments with such an architecture. We used the CloVER solution by our partner [ProCAN][18] which provides a virtual desktops broker between [OpenEdX][19] and an [OpenStack][20] IaaS public platform.
The expected benefit is that users don't have to install anything locally, as the only tool needed is a Web browser displaying a full-screen [HTML5 canvas][21] that shows the remote desktop run by the Guacamole server on the cloud VM.
But there are still some issues with such an approach. First, the cost of operating such an infrastructure: Virtual Machines need to be hosted on an IaaS platform, and that cost of operation isn't null[7][22] for the MOOC editor, compared to the cost of VirtualBox and a VM running on the learner's side (basically zero for the MOOC editor).
Another issue, which could be more problematic, lies in the need for a reliable connection to the Internet during the whole sequence of lab execution by the learners[8][23]. Even if Guacamole is quite efficient at compressing rendering traffic, some basic connectivity is needed during the whole lab work session, preventing some mobile uses for instance.
One other potential annoyance is the delay in making a VM available to a learner (provisioning a VM) when huge VM images need to be copied inside the IaaS platform as a learner connects to the Virtual Lab activity for the first time (delays of several minutes). This may be worse if the VM image is too big (hence the need to optimize its content[9][24]).
However, the fact that all VMs are running on a platform under the control of the MOOC editor allows new kind of features for the MOOC. For instance, learners can submit results of their labs directly to the LMS without the need to upload or copy-paste results manually. This can help monitor progress or perform evaluation or grading.
The fact that their VMs run on the same platform also allows new kinds of pedagogical scenarii, as VMs of multiple learners can be interconnected, allowing cooperative activities between learners. The VM images may then need to be instrumented and deployed in particular configurations, which may require the use of a dedicated broker like CloVER to manage such scenarii.
For the records, we have yet to perform a rigorous benchmarking of such a solution in order to evaluate its benefits, or constraints given particular contexts. In FLIRT, our main focus will be in the context of SPOCs for professional training (a bit different a context than public MOOCs).
Still, this approach doesn't solve the VM fabrication issues for the MOOC staff. Installing software inside a VM, be it locally inside a VirtualBox simulator or over the cloud through a remote desktop display, makes little difference. This relies mainly on manual operations and may not be well managed in terms of process quality (reproducibility, optimization).
#### 4.2 PaaS deployments using containers
Some key issues in the IaaS context described above are the cost of operating full VMs and the long provisioning delays.
We're experimenting with new options to address these issues, through the use of [Linux containers][25] running on a PaaS (Platform as a Service) platform, instead of full-fledged Virtual Machines[10][26].
The main difference with containers, compared to Virtual Machines, lies in the reduced size of images and much lower CPU load requirements, as containers remove the need for one layer of virtualization. Also, the deduplication techniques at the heart of some virtual file-systems used by container platforms lead to really fast provisioning, avoiding the need to wait for the labs to start.
The traditional making of VMs, done by installing packages and taking a snapshot, was affordable for the regular teacher, but involved manual operations. In this respect, one other major benefit of containers is the potential for better industrialization of the virtual lab fabrication, as they are generally not assembled manually. Instead, one uses a “scripting” approach to describe which applications and which of their dependencies need to be put inside a container image. But this requires new competence from the lab creators, like learning the [Docker][27] technology (and the [OpenShift][28] PaaS, for instance), which may be quite specialized. Whereas Docker containers tend to become quite popular among Software Development faculty (through the “[devops][29]” hype), they may be a bit new to instructors in other fields.
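To give an idea of what this “scripting” approach looks like, here is a minimal, hypothetical sketch of a container image description for a lab; the base image, package name and data directory are placeholders, not the actual FLIRT lab content:

```
# Hypothetical lab image: a packet-analysis tool plus scenario data,
# described declaratively instead of being installed by hand in a VM.
FROM debian:stable-slim

# Install the lab application; the package name is only an example.
RUN apt-get update && \
    apt-get install -y --no-install-recommends tshark && \
    rm -rf /var/lib/apt/lists/*

# Labs do not need root, so create an unprivileged user.
RUN useradd --create-home learner

# Ship the pedagogical scenario data (capture files) inside the image.
COPY --chown=learner:learner captures/ /home/learner/captures/

USER learner
WORKDIR /home/learner
CMD ["bash"]
```

Such a description can be rebuilt and tested automatically, which is precisely the kind of industrialization that manual VM assembly lacks.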
The learning curve for mastering the automation of the whole container-based lab installation needs to be evaluated. There's a trade-off to consider in adopting technology like Vagrant or Docker: acquiring container/PaaS expertise vs. the quality of industrialization and optimization. The production of a MOOC should then require careful planning if one has to hire or contract a PaaS expert to set up the Virtual Labs.
We may also expect interesting pedagogical benefits. As containers are lightweight, and platforms allow to “easily” deploy multiple interlinked containers (over dedicated virtual networks), this enables the setup of more realistic scenarii, where each learner may be provided with multiple “nodes” over virtual networks (all running their individual containers). This would be particularly interesting for Computer Networks or Security teaching for instance, where each learner may have access both to client and server nodes, to study client-server protocols, for instance. This is particularly interesting for us in the context of our FLIRT project, where we produce a collection of Computer Networks courses.
Still, this mode of operation relies on good connectivity of the learners to the Cloud. For distance learning in poorly connected contexts, the PaaS architecture doesn't solve that particular issue any better than the previous IaaS architecture.
### 5 Future server-less Virtual Labs with WebAssembly
As we have seen, IaaS- or PaaS-based Virtual Labs running on the Cloud offer alternatives to installing local virtual machines on the learner's computer. But both require learners to be connected for the whole duration of the lab, as the applications are executed on remote servers, on the Cloud (either inside VMs or containers).
We have been thinking of another alternative which could allow the deployment of some Virtual Labs on the learners' local computers without the hassle of downloading and installing a Virtual Machine manager and VM image. We envision the possibility of using the infrastructure provided by modern Web browsers to run the lab applications.
At the time of writing, this architecture is still highly experimental. The main idea is to rebuild the applications needed for the Lab so that they can be run in the “generic” virtual machine present in the modern browsers, the [WebAssembly][30] and Javascript execution engine.
WebAssembly is a modern language which aims for maximum portability and, as its name hints, is a kind of assembly language for the Web platform. What is of interest for us is that WebAssembly is supported by most modern Web browsers, making it a very interesting target for portability.
Emerging toolchains allow recompiling applications written in languages like C or C++ so that they can be run on the WebAssembly virtual machine in the browser. This is interesting as it doesn't require modifying the source code of these programs. Of course, there are limitations in the kind of underlying APIs and libraries compatible with that platform, and in the sandboxing of the WebAssembly execution engine enforced by the Web browser.
Historically, WebAssembly was developed so as to allow running games written in C++ for a framework like Unity in the Web browser.
In some contexts, for instance for tools with an interactive GUI which process data retrieved from files and don't need very specific interaction with the underlying operating system, it seems possible to port these programs to WebAssembly and run them inside the Web browser.
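On the browser side, loading and calling such a recompiled module can be as simple as the following sketch, assuming a small, self-contained `lab_tool.wasm` module (a hypothetical name) that exports a `checksum` function and needs no imports; real Emscripten output usually ships with its own JavaScript glue loader instead:

```
// Load a WebAssembly module compiled from C/C++ and call one of its exports.
// "lab_tool.wasm" and its "checksum" export are hypothetical placeholders.
async function runLabTool(): Promise<void> {
  // Works for a self-contained module with no imports.
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("lab_tool.wasm")
  );
  const checksum = instance.exports.checksum as (a: number, b: number) => number;
  console.log("checksum(2, 3) =", checksum(2, 3));
}

runLabTool().catch(console.error);
```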
We have to experiment deeper with this technology to validate its potential for running Virtual Labs in the context of a Web browser.
We used a similar approach in the past, porting a relational database course lab to the Web browser for standalone execution. A real database would run in the minimal SQLite RDBMS, recompiled to JavaScript[11][31]. Instead of having to download, install and run a VM with an RDBMS, the students would only connect to a Web page, which would load the DBMS in memory and allow performing the lab's SQL queries locally, disconnected from any third-party server.
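A rough sketch of how such an in-browser database lab can be written today with sql.js (the SQLite engine compiled to JavaScript/WebAssembly); the table and queries are of course placeholders for the actual lab scenario:

```
// Run SQL entirely in the browser with sql.js (SQLite compiled to
// JavaScript/WebAssembly); no server-side RDBMS is involved.
// The table and queries are placeholder lab content.
import initSqlJs from "sql.js";

async function runSqlLab(): Promise<void> {
  const SQL = await initSqlJs();   // loads the SQLite engine in memory
  const db = new SQL.Database();   // purely in-memory database

  db.run("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);");
  db.run("INSERT INTO students (name) VALUES ('Ada'), ('Linus');");

  // exec() returns an array of result sets: [{ columns, values }]
  const result = db.exec("SELECT id, name FROM students ORDER BY id;");
  console.log(result[0].columns, result[0].values);
}

runSqlLab().catch(console.error);
```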
In a similar manner, we can think, for instance, of a lab scenario where the Internet packet inspection features of the Wireshark tool would run inside the WebAssembly virtual machine, allowing learners to dissect provided capture files directly in the Web browser, without having to install Wireshark.
We expect to publish a report on that last experiment in the future with more details and results.
### 6 Conclusion
The most promising architecture for Virtual Lab deployments seems to be the use of containers on a PaaS platform for deploying virtual desktops or virtual application GUIs available in the Web browser.
This would allow the controlled fabrication of Virtual Labs containing the exact bits needed for learners to practice while minimizing the delays.
Still, the need for always-on connectivity can be a problem.
Also, the potential for inter-networked containers allowing the kind of multi-node and collaborative scenarii we described would require a lot of expertise to develop, as well as management platforms for the MOOC operators, which aren't yet mature.
We hope to be able to report on our progress in the coming months and years on those aspects.
### 7 References
[0]
Olivier Berger, J Paul Gibson, Claire Lecocq and Christian Bac “Designing a virtual laboratory for a relational database MOOC”. International Conference on Computer Supported Education, SCITEPRESS, 23-25 may 2015, Lisbonne, Portugal, 2015, vol. 7, pp. 260-268, ISBN 978-989-758-107-6 [DOI: 10.5220/0005439702600268][1] ([preprint (HTML)][2])
### 8 Copyright
[![Creative Commons License](https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png)][45]
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][46].
### Footnotes:
[1][32] The FLIRT project also works on business model aspects of MOOC or SPOC production in the context of professional development, but the present memo starts from a minimalistic hypothesis where funding for course production is quite limited.
[2][33] research-based evidence needed
[3][34] In typical MOOCs which are free to participate, the VM should include only gratis tools, which typically means a GNU/Linux distribution loaded with applications available under free and open source licenses.
[4][35] Typically, Free and Open Source software, aka Libre Software
[5][36] VirtualBox is portable on many operating systems, making it a very popular solution for such a need
[6][37] the IaaS platform could typically be an open cloud for MOOCs or a private cloud for SPOCs (for closer monitoring of student activity or security control reasons).
[7][38] Depending on the expected use of the lab by learners, this cost may vary a lot. The size and configuration required for the included software may have an impact (hence the need to minimize the footprint of the VM images). With costs diminishing in general, this may not be a show-stopper. Refer to marketing figures of commercial IaaS offerings for accurate figures. Pay attention to additional licensing costs if the OS of the VM isn't free software, or if other licenses must be provided for every learner.
[8][39] The needs for always-on connectivity may not be a problem for professional development SPOCs where learners connect from enterprise networks for instance. It may be detrimental when MOOCs are very popular in southern countries where high bandwidth is both unreliable and expensive.
[9][40] In this respect, providing a full Linux desktop inside the VM doesn't necessarily make sense. Instead, running applications full-screen may be better, avoiding installation of whole desktop environments like Gnome or XFCE… but this has usability consequences. Careful tuning and testing are needed in any case.
[10][41] The availability of container based architectures is quite popular in the industry, but has not yet been deployed to a large scale in the context of large public MOOC hosting platforms, to our knowledge, at the time of writing. There are interesting technical challenges which the FLIRT project tries to tackle together with its partner ProCAN.
[11][42] See the corresponding paragraph [http://www-inf.it-sudparis.eu/PROSE/csedu2015/#standalone-sql-env][43] in [0][44]
--------------------------------------------------------------------------------
via: https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/
作者:[Olivier Berger][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www-public.tem-tsp.eu
[1]:http://dx.doi.org/10.5220/0005439702600268
[2]:http://www-inf.it-sudparis.eu/PROSE/csedu2015/
[3]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#org50fdc1a
[4]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.1
[5]:http://flirtmooc.wixsite.com/flirt-mooc-telecom
[6]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.2
[7]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.3
[8]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.4
[9]:http://virtualbox.org
[10]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.5
[11]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.2
[12]:https://www.vagrantup.com/
[13]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#orgde5af50
[14]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.6
[15]:https://en.wikipedia.org/wiki/Virtual_Network_Computing
[16]:https://en.wikipedia.org/wiki/Remote_Desktop_Protocol
[17]:http://guacamole.apache.org/
[18]:https://www.procan-group.com/
[19]:https://open.edx.org/
[20]:http://openstack.org/
[21]:https://en.wikipedia.org/wiki/Canvas_element
[22]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.7
[23]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.8
[24]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.9
[25]:https://www.redhat.com/en/topics/containers
[26]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.10
[27]:https://en.wikipedia.org/wiki/Docker_(software)
[28]:https://www.openshift.com/
[29]:https://en.wikipedia.org/wiki/DevOps
[30]:http://webassembly.org/
[31]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.11
[32]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.1
[33]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.2
[34]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.3
[35]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.4
[36]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.5
[37]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.6
[38]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.7
[39]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.8
[40]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.9
[41]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.10
[42]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.11
[43]:http://www-inf.it-sudparis.eu/PROSE/csedu2015/#standalone-sql-env
[44]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#orgde5af50
[45]:http://creativecommons.org/licenses/by-nc-sa/4.0/
[46]:http://creativecommons.org/licenses/by-nc-sa/4.0/

View File

@ -1,45 +0,0 @@
New Training Options Address Demand for Blockchain Skills
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/blockchain-301.png?itok=1EA-Ob6F)
Blockchain technology is transforming industries and bringing new levels of trust to contracts, payment processing, asset protection, and supply chain management. Blockchain-related jobs are the second-fastest growing in todays labor market, [according to TechCrunch][1]. But, as in the rapidly expanding field of artificial intelligence, there is a pronounced blockchain skills gap and a need for expert training resources.
### Blockchain for Business
A new training option was recently announced by The Linux Foundation. Enrollment is now open for a free training course called [Blockchain: Understanding Its Uses and Implications][2], as well as a [Blockchain for Business][2] professional certificate program. Delivered through the edX training platform, the new course and program provide a way to learn about the impact of blockchain technologies and a means to demonstrate that knowledge. Certification, in particular, can make a difference for anyone looking to work in the blockchain arena.
“In the span of only a year or two, blockchain has gone from something seen only as related to cryptocurrencies to a necessity for businesses across a wide variety of industries,” [said][3] Linux Foundation General Manager, Training & Certification Clyde Seepersad. “Providing a free introductory course designed not only for technical staff but business professionals will help improve understanding of this important technology, while offering a certificate program through edX will enable professionals from all over the world to clearly demonstrate their expertise.”
TechCrunch [also reports][4] that venture capital is rapidly flowing toward blockchain-focused startups. And, this new program is designed for business professionals who need to understand the potential or threat of blockchain to their company and industry.
“Professional Certificate programs on edX deliver career-relevant education in a flexible, affordable way, by focusing on the critical skills industry leaders and successful professionals are seeking today,” said Anant Agarwal, edX CEO and MIT Professor.
### Hyperledger Fabric
The Linux Foundation is steward to many valuable blockchain resources and includes some notable community members. In fact, a recent New York Times article — “[The People Leading the Blockchain Revolution][5]” — named Brian Behlendorf, Executive Director of The Linux Foundations [Hyperledger Project][6], one of the [top influential voices][7] in the blockchain world.
Hyperledger offers proven paths for gaining credibility and skills in the blockchain space. For example, the project offers a free course titled Introduction to Hyperledger Fabric for Developers. Fabric has emerged as a key open source toolset in the blockchain world. Through the Hyperledger project, you can also take the B9-lab Certified Hyperledger Fabric Developer course. More information on both courses is available [here][8].
“As you can imagine, someone needs to do the actual coding when companies move to experiment and replace their legacy systems with blockchain implementations,” states the Hyperledger website. “With training, you could gain serious first-mover advantage.”
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/7/new-training-options-address-demand-blockchain-skills
作者:[SAM DEAN][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/sam-dean
[1]:https://techcrunch.com/2018/02/14/blockchain-engineers-are-in-demand/
[2]:https://www.edx.org/course/understanding-blockchain-and-its-implications
[3]:https://www.linuxfoundation.org/press-release/as-demand-skyrockets-for-blockchain-expertise-the-linux-foundation-and-edx-offer-new-introductory-blockchain-course-and-blockchain-for-business-professional-certificate-program/
[4]:https://techcrunch.com/2018/05/20/with-at-least-1-3-billion-invested-globally-in-2018-vc-funding-for-blockchain-blows-past-2017-totals/
[5]:https://www.nytimes.com/2018/06/27/business/dealbook/blockchain-stars.html
[6]:https://www.hyperledger.org/
[7]:https://www.linuxfoundation.org/blog/hyperledgers-brian-behlendorf-named-as-top-blockchain-influencer-by-new-york-times/
[8]:https://www.hyperledger.org/resources/training

View File

@ -1,185 +0,0 @@
How blockchain will influence open source
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/block-quilt-chain.png?itok=mECoDbrc)
What [Satoshi Nakamoto][1] started as Bitcoin a decade ago has found a lot of followers and turned into a movement for decentralization. For some, blockchain technology is a religion that will have the same impact on humanity as the Internet has had. For others, it is hype and technology suitable only for Ponzi schemes. While blockchain is still evolving and trying to find its place, one thing is for sure: It is a disruptive technology that will fundamentally transform certain industries. And I'm betting open source will be one of them.
### The open source model
Open source is a collaborative software development and distribution model that allows people with common interests to gather and produce something that no individual can create on their own. It allows the creation of value that is bigger than the sum of its parts. Open source is enabled by distributed collaboration tools (IRC, email, git, wiki, issue trackers, etc.), distributed and protected by an open source licensing model and often governed by software foundations such as the [Apache Software Foundation][2] (ASF), [Cloud Native Computing Foundation][3] (CNCF), etc.
One interesting aspect of the open source model is the lack of financial incentives in its core. Some people believe open source work should remain detached from money and remain a free and voluntary activity driven only by intrinsic motivators (such as "common purpose" and "for the greater good”). Others believe open source work should be rewarded directly or indirectly through extrinsic motivators (such as financial incentive). While the idea of open source projects prospering only through voluntary contributions is romantic, in reality, the majority of open source contributions are done through paid development. Yes, we have a lot of voluntary contributions, but these are on a temporary basis from contributors who come and go, or for exceptionally popular projects while they are at their peak. Creating and sustaining open source projects that are useful for enterprises requires developing, documenting, testing, and bug-fixing for prolonged periods, even when the software is no longer shiny and exciting. It is a boring activity that is best motivated through financial incentives.
### Commercial open source
Software foundations such as ASF survive on donations and other income streams such as sponsorships, conference fees, etc. But those funds are primarily used to run the foundations, to provide legal protection for the projects, and to ensure there are enough servers to run builds, issue trackers, mailing lists, etc.
Similarly, CNCF has member fees and other income streams, which are used to run the foundation and provide resources for the projects. These days, most software is not built on laptops; it is run and tested on hundreds of machines on the cloud, and that requires money. Creating marketing campaigns, brand designs, distributing stickers, etc. takes money, and some foundations can assist with that as well. At its core, foundations implement the right processes to interact with users, developers, and control mechanisms and ensure distribution of available financial resources to open source projects for the common good.
If users of open source projects can donate money and the foundations can distribute it in a fair way, what is missing?
What is missing is a direct, transparent, trusted, decentralized, automated bidirectional link for transfer of value between the open source producers and the open source consumer. Currently, the link is either unidirectional or indirect:
* **Unidirectional** : A developer (think of a "developer" as any role that is involved in the production, maintenance, and distribution of software) can use their brain juice and devote time to do a contribution and share that value with all open source users. But there is no reverse link.
* **Indirect** : If there is a bug that affects a specific user/company, the options are:
* To have in-house developers fix the bug and do a pull request. That is ideal, but it is not always possible to hire in-house developers who are knowledgeable about hundreds of open source projects used daily.
* To hire a freelancer specializing in that specific open source project and pay for the services. Ideally, the freelancer is also a committer for the open source project and can directly change the project code quickly. Otherwise, the fix might not ever make it to the project.
* To approach a company providing services around the open source project. Such companies typically employ open source committers to influence and gain credibility in the community and offer products, expertise, and professional services.
The third option has been a successful [model][4] for sustaining many open source projects. Whether they provide services (training, consulting, workshops), support, packaging, open core, or SaaS, there are companies that employ hundreds of staff members who work on open source full time. There is a long [list of companies][5] that have managed to build a successful open source business model over the years, and that list is growing steadily.
The companies that back open source projects play an important role in the ecosystem: They are the catalyst between the open source projects and the users. The ones that add real value do more than just package software nicely; they can identify user needs and technology trends, and they create a full stack and even an ecosystem of open source projects to address these needs. They can take a boring project and support it for years. If there is a missing piece in the stack, they can start an open source project from scratch and build a community around it. They can acquire a closed source software company and open source the projects (here I got a little carried away, but yes, I'm talking about my employer, [Red Hat][6]).
To summarize, with the commercial open source model, projects are officially or unofficially managed and controlled by a very few individuals or companies that monetize them and give back to the ecosystem by ensuring the project is successful. It is a win-win-win for open source developers, managing companies, and end users. The alternative is inactive projects and expensive closed source software.
### Self-sustaining, decentralized open source
For a project to become part of a reputable foundation, it must conform to certain criteria. For example, ASF and CNCF require incubation and graduation processes, respectively, where apart from all the technical and formal requirements, a project must have a healthy number of active committers and users. And that is the essence of forming a sustainable open source project. Having source code on GitHub is not the same thing as having an active open source project. The latter requires committers who write the code and users who use the code, with both groups reinforcing each other continuously by exchanging value and forming an ecosystem where everybody benefits. Some project ecosystems might be tiny and short-lived, and some may consist of multiple projects and competing service providers, with very complex interactions lasting for many years. But as long as there is an exchange of value and everybody benefits from it, the project is developed, maintained, and sustained.
If you look at ASF [Attic][7], you will find projects that have reached their end of life. When a project is no longer technologically fit for its purpose, it is usually its natural end. Similarly, in the ASF [Incubator][8], you will find tons of projects that never graduated but were instead retired. Typically, these projects were not able to build a large enough community because they are too specialized or there are better alternatives available.
But there are also cases where projects with high potential and superior technology cannot sustain themselves because they cannot form or maintain a functioning ecosystem for the exchange of value. The open source model and the foundations do not provide a framework and mechanisms for developers to get paid for their work or for users to get their requests heard. There isnt a common value commitment framework for either party. As a result, some projects can sustain themselves only in the context of commercial open source, where a company acts as an intermediary and value adder between developers and users. That adds another constraint and requires a service provider company to sustain some open source projects. Ideally, users should be able to express their interest in a project and developers should be able to show their commitment to the project in a transparent and measurable way, which forms a community with common interest and intent for the exchange of value.
Imagine there is a model with mechanisms and tools that enable direct interaction between open source users and developers. This includes not only code contributions through pull requests, questions over the mailing lists, GitHub stars, and stickers on laptops, but also other ways that allow users to influence projects' destinies in a richer, more self-controlled and transparent manner.
This model could include incentives for actions such as:
* Funding open source projects directly rather than through software foundations
* Influencing the direction of projects through voting (by token holders)
* Feature requests driven by user needs
* On-time pull request merges
* Bounties for bug hunts
* Better test coverage incentives
* Up-to-date documentation rewards
* Long-term support guarantees
* Timely security fixes
* Expert assistance, support, and services
* Budget for evangelism and promotion of the projects
* Budget for regular boring activities
* Fast email and chat assistance
* Full visibility of the overall project findings, etc.
If you haven't guessed, I'm talking about using blockchain and [smart contracts][9] to allow such interactions between users and developers—smart contracts that will give power to the hand of token holders to influence projects.
![blockchain_in_open_source_ecosystem.png][11]
The usage of blockchain in the open source ecosystem
Existing channels in the open source ecosystem provide ways for users to influence projects through financial commitments to service providers or other limited means through the foundations. But the addition of blockchain-based technology to the open source ecosystem could open new channels for interaction between users and developers. I'm not saying this will replace the commercial open source model; most companies working with open source do many things that cannot be replaced by smart contracts. But smart contracts can spark a new way of bootstrapping new open source projects, giving a second life to commodity projects that are a burden to maintain. They can motivate developers to apply boring pull requests, write documentation, get tests to pass, etc., providing a direct value exchange channel between users and open source developers. Blockchain can add new channels to help open source projects grow and become self-sustaining in the long term, even when company backing is not feasible. It can create a new complementary model for self-sustaining open source projects—a win-win.
### Tokenizing open source
There are already a number of initiatives aiming to tokenize open source. Some focus only on an open source model, and some are more generic but apply to open source development as well:
* [Gitcoin][12] \- grow open source, one of the most promising ones in this area.
* [Oscoin][13] \- cryptocurrency for open source
* [Open collective][14] \- a platform for supporting open source projects.
* [FundYourselfNow][15] \- Kickstarter and ICOs for projects.
* [Kauri][16] \- support for open source project documentation.
* [Liberapay][17] \- a recurrent donations platform.
* [FundRequest][18] \- a decentralized marketplace for open source collaboration.
* [CanYa][19] \- recently acquired [Bountysource][20], now the worlds largest open source P2P bounty platform.
* [OpenGift][21] \- a new model for open source monetization.
* [Hacken][22] \- a white hat token for hackers.
* [Coinlancer][23] \- a decentralized job market.
* [CodeFund][24] \- an open source ad platform.
* [IssueHunt][25] \- a funding platform for open source maintainers and contributors.
* [District0x 1Hive][26] \- a crowdfunding and curation platform.
* [District0x Fixit][27] \- github bug bounties.
This list is varied and growing rapidly. Some of these projects will disappear, others will pivot, but a few will emerge as the [SourceForge][28], the ASF, the GitHub of the future. That doesn't necessarily mean they'll replace these platforms, but they'll complement them with token models and create a richer open source ecosystem. Every project can pick its distribution model (license), governing model (foundation), and incentive model (token). In all cases, this will pump fresh blood to the open source world.
### The future is open and decentralized
* Software is eating the world.
* Every company is a software company.
* Open source is where innovation happens.
Given that, it is clear that open source is too big to fail and too important to be controlled by a few or left to its own destiny. Open source is a shared-resource system that has value to all, and more importantly, it must be managed as such. It is only a matter of time until every company on earth will want to have a stake and a say in the open source world. Unfortunately, we don't have the tools and the habits to do it yet. Such tools would allow anybody to show their appreciation or ignorance of software projects. It would create a direct and faster feedback loop between producers and consumers, between developers and users. It will foster innovation—innovation driven by user needs and expressed through token metrics.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/8/open-source-tokenomics
作者:[Bilgin lbryam][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bibryam
[1]:https://en.wikipedia.org/wiki/Satoshi_Nakamoto
[2]:https://www.apache.org/
[3]:https://www.cncf.io/
[4]:https://medium.com/open-consensus/3-oss-business-model-progressions-dafd5837f2d
[5]:https://docs.google.com/spreadsheets/d/17nKMpi_Dh5slCqzLSFBoWMxNvWiwt2R-t4e_l7LPLhU/edit#gid=0
[6]:http://jobs.redhat.com/
[7]:https://attic.apache.org/
[8]:http://incubator.apache.org/
[9]:https://en.wikipedia.org/wiki/Smart_contract
[10]:/file/404421
[11]:https://opensource.com/sites/default/files/uploads/blockchain_in_open_source_ecosystem.png (blockchain_in_open_source_ecosystem.png)
[12]:https://gitcoin.co/
[13]:http://oscoin.io/
[14]:https://opencollective.com/opensource
[15]:https://www.fundyourselfnow.com/page/about
[16]:https://kauri.io/
[17]:https://liberapay.com/
[18]:https://fundrequest.io/
[19]:https://canya.io/
[20]:https://www.bountysource.com/
[21]:https://opengift.io/pub/
[22]:https://hacken.io/
[23]:https://www.coinlancer.com/home
[24]:https://codefund.io/
[25]:https://issuehunt.io/
[26]:https://blog.district0x.io/district-proposal-spotlight-1hive-283957f57967
[27]:https://github.com/district0x/district-proposals/issues/177
[28]:https://sourceforge.net/

View File

@ -1,116 +0,0 @@
Debian Turns 25! Here are Some Interesting Facts About Debian Linux
======
One of the oldest Linux distributions still in development, Debian has just turned 25. Let's have a look at some interesting facts about this awesome FOSS project.
### 10 Interesting facts about Debian Linux
![Interesting facts about Debian Linux][1]
The facts presented here have been collected from various sources available from the internet. They are true to my knowledge, but in case of any error, please remind me to update the article.
#### 1\. One of the oldest Linux distributions still under active development
The [Debian project][2] was announced on 16th August 1993 by Ian Murdock, Debian's founder. Like Linux creator [Linus Torvalds][3], Ian was a college student when he announced the Debian project.
![](https://farm6.staticflickr.com/5710/20006308374_7f51ae2a5c_z.jpg)
#### 2\. Some people get a tattoo while some name their project after their girlfriend's name
The project was named by combining the name of Ian and his then-girlfriend Debra Lynn. Ian and Debra got married and had three children. Debra and Ian got divorced in 2008.
#### 3\. Ian Murdock: The Maverick behind the creation of Debian project
![Debian Founder Ian Murdock][4]
Ian Murdock
[Ian Murdock][5] led the Debian project from August 1993 until March 1996. He shaped Debian into a community project based on the principles of Free Software. The [Debian Manifesto][6] and the [Debian Social Contract][7] still govern the project.
He founded a commercial Linux company called [Progeny Linux Systems][8] and worked for a number of Linux related companies such as Sun Microsystems, Linux Foundation and Docker.
Sadly, [Ian committed suicide in December 2015][9]. His contribution to Debian is certainly invaluable.
#### 4\. Debian is a community project in the true sense
Debian is a community-based project in the true sense. No one owns Debian. Debian is developed by volunteers from all over the world. It is not a commercial project backed by corporations like many other Linux distributions.
The Debian Linux distribution is composed of Free Software only. It's one of the few Linux distributions that is true to the spirit of [Free Software][10] and takes pride in being called a GNU/Linux distribution.
Debian has its own non-profit organization called [Software in the Public Interest][11] (SPI). Along with Debian, SPI supports many other open source projects financially.
#### 5\. Debian and its 3 branches
Debian has three branches or versions: Debian Stable, Debian Unstable (Sid) and Debian Testing.
Debian Stable, as the name suggests, is the stable branch where all the software and packages are well tested to give you a rock-solid stable system. Since it takes time before well-tested software lands in the stable branch, Debian Stable often contains older versions of programs, and hence people joke that Debian Stable means stale.
[Debian Unstable][12], codenamed Sid, is the version where all the development of Debian takes place. This is where new packages first land or are developed. After that, these changes are propagated to the testing version.
[Debian Testing][13] is the next release after the current stable release. If the current stable release is N, Debian testing would be the N+1 release. The packages from Debian Unstable are tested in this version. After all the new changes are well tested, Debian Testing is then promoted as the new Stable version.
#### 6\. There is no strict release schedule
There is no strict release schedule for Debian; a new stable version is released when it is ready.
#### 7\. There was no Debian 1.0 release
Debian 1.0 was never released. The CD vendor, InfoMagic, accidentally shipped a development release of Debian and entitled it 1.0 in 1996. To prevent confusion between the CD version and the actual Debian release, the Debian Project renamed its next release to “Debian 1.1”.
#### 8\. Debian releases are codenamed after Toy Story characters
![Toy Story Characters][14]
Debian releases are codenamed after the characters from Pixars hit animation movie series [Toy Story][15].
Debian 1.1 was the first release with a codename. It was named Buzz after the Toy Story character Buzz Lightyear.
It was in 1996 and [Bruce Perens][16] had taken over leadership of the Project from Ian Murdock. Bruce was working at Pixar at the time.
This trend continued and all subsequent releases have been codenamed after Toy Story characters. For example, the current stable release is Stretch, while the upcoming release has been codenamed Buster.
The unstable Debian version is codenamed Sid. This character in Toy Story is a kid with emotional problems and he enjoys breaking toys. This is symbolic in the sense that Debian Unstable might break your system with untested packages.
#### 9\. Debian also has a BSD distribution
Debian is not limited to Linux. Debian also has a distribution based on FreeBSD kernel. It is called [Debian GNU/kFreeBSD][17].
#### 10\. Google uses Debian
[Google uses Debian][18] as its in-house development platform. Earlier, Google used a customized version of Ubuntu as its development platform. Recently they opted for Debian based gLinux.
#### Happy 25th birthday Debian
![Happy 25th birthday Debian][19]
I hope you liked these little facts about Debian. Stuff like these are reasons why people love Debian.
I wish a very happy 25th birthday to Debian. Please continue to be awesome. Cheers :)
--------------------------------------------------------------------------------
via: https://itsfoss.com/debian-facts/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/Interesting-facts-about-debian.jpeg
[2]:https://www.debian.org
[3]:https://itsfoss.com/linus-torvalds-facts/
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/ian-murdock.jpg
[5]:https://en.wikipedia.org/wiki/Ian_Murdock
[6]:https://www.debian.org/doc/manuals/project-history/ap-manifesto.en.html
[7]:https://www.debian.org/social_contract
[8]:https://en.wikipedia.org/wiki/Progeny_Linux_Systems
[9]:https://itsfoss.com/ian-murdock-dies-mysteriously/
[10]:https://www.fsf.org/
[11]:https://www.spi-inc.org/
[12]:https://www.debian.org/releases/sid/
[13]:https://www.debian.org/releases/testing/
[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/toy-story-characters.jpeg
[15]:https://en.wikipedia.org/wiki/Toy_Story_(franchise)
[16]:https://perens.com/about-bruce-perens/
[17]:https://wiki.debian.org/Debian_GNU/kFreeBSD
[18]:https://itsfoss.com/goobuntu-glinux-google/
[19]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/happy-25th-birthday-Debian.jpeg

View File

@ -1,97 +0,0 @@
Interview With Peter Ganten, CEO of Univention GmbH
======
I have been asking the Univention team to share the behind-the-scenes story of [**Univention**][1] for a couple of months. Finally, today we got an interview with **Mr. Peter H. Ganten**, CEO of Univention GmbH. Despite his busy schedule, in this interview he shares what he thinks of the Univention project and its impact on the open source ecosystem, what open source developers and companies will need to do to keep thriving, and what the biggest challenges for open source projects are.
**OSTechNix: What's your background and why did you found Univention?**
**Peter Ganten:** I studied physics and psychology. In psychology I was a research assistant and coded evaluation software. I realized how important it is that results have to be disclosed in order to verify or falsify them. The same goes for the code that leads to the results. This brought me into contact with Open Source Software (OSS) and Linux.
![](https://www.ostechnix.com/wp-content/uploads/2018/10/peter-ganten-interview.jpg)
I was a kind of technical lab manager and I had the opportunity to try out a lot, which led to my book about Debian. That was still in the New Economy era, when the first business models emerged for making money with Open Source. When the bubble burst, I had a plan to turn OSS into a solid business model without venture capital but in Hanseatic business style: seriously, steadily, no bling-bling.
**What were the biggest challenges at the beginning?**
When I came from the university, the biggest challenge clearly was to gain entrepreneurial and business management knowledge. I quickly learned that it's not about Open Source software as an end in itself but always about customer value and the benefits OSS offers its customers. We all had to learn a lot.
In the beginning, we expected that Linux on the desktop would become established in a similar way as Linux on the server. However, this has not yet been proven true. The replacement has happened with Android and the iPhone. Our conclusion then was to change our offerings towards ID management and enterprise servers.
**Why does UCS matter? And for whom makes it sense to use it?**
There is cool OSS in all areas, but many organizations are not capable of combining it all and making it manageable. For the basic infrastructure (Windows desktops, users, user rights, roles, ID management, apps) we need a central instance to which groupware, CRM etc. is connected. Without Univention this would have to be laboriously assembled and maintained manually. This is possible for very large companies, but far too complex for many other organizations.
[**UCS**][2] can be used out of the box and is scalable. That's why it's becoming more and more popular: more than 10,000 organizations are already using UCS today.
**Who are your users and most important clients? What do they love most about UCS?**
The Core Edition is free of charge and used by organizations from all sectors and industries such as associations, micro-enterprises, universities or large organizations with thousands of users. In the enterprise environment, where Long Term Servicing (LTS) and professional support are particularly important, we have organizations ranging in size from 30-50 users to several thousand users. One of the target groups is the education system in Germany. UCS is used in many large cities and within their school administrations, for example in Cologne, Hannover, Bremen, Kassel and in several federal states. They are looking for manageable IT and apps for schools. That's what we offer, because we can guarantee these authorities full control over their users' identities.
Also, more and more cloud service providers and MSPs want to take UCS to deliver a selection of cloud-based app solutions.
**Is UCS 100% Open Source? If so, how can you run a profitable business selling it?**
Yes, UCS is 100% Open Source, every line, the whole code is OSS. You can download and use UCS Core Edition for **FREE!**
We know that in large, complex organizations, vendor support and liability are needed for LTS and SLAs, and we offer that with our Enterprise subscriptions and consulting services. We don't offer these in the Core Edition.
**And what are you giving back to the OS community?**
A lot. We are involved in the Debian team and co-finance the LTS maintenance for Debian. For important OS components in UCS like [**OpenLDAP**][3], Samba or KVM we co-finance the development or have co-developed them ourselves. We make it all freely available.
We are also involved on the political level in ensuring that OSS is used. We are engaged, for example, in the [**Free Software Foundation Europe (FSFE)**][4] and the [**German Open Source Business Alliance**][5], of which I am the chairman. We are working hard to make OSS more successful.
**How can I get started with UCS?**
It's easy to get started with the Core Edition, which, like the Enterprise Edition, has an App Center and can be easily installed on your own hardware or as an appliance in a virtual machine. Just [**download the Univention ISO**][6] and install it as described in the link below.
Alternatively, you can try the [**UCS Online Demo**][7] to get a first impression of Univention Corporate Server without actually installing it on your system.
**What do you think are the biggest challenges for Open Source?**
There is a certain attitude you can see over and over again even in bigger projects: OSS alone is viewed as an almost mandatory prerequisite for a good, sustainable, secure and trustworthy IT solution, but just having decided to use OSS is no guarantee for success. You have to carry out projects professionally and cooperate with the manufacturers. A danger is that in complex projects people think: “Oh, OSS is free, I'll just put it all together by myself”. But normally you do not have the know-how to successfully implement complex software solutions. You would never proceed like this with Closed Source. There people think: “Oh, the software costs $3 million, so it's okay if I have to spend another $300,000 on consultants.”
With OSS this is different. If such projects fail and leave scorched earth behind, we have to explain again and again that the failure of such projects is not due to the nature of OSS but to its poor implementation and organization in a specific project: you have to conclude reasonable contracts and involve partners as in the proprietary world, but you'll gain a better solution.
Another challenge: We must stay innovative, move forward, attract new people who are enthusiastic about working on projects. That's sometimes a challenge. For example, there are a number of proprietary cloud services that are good but lead to extremely high dependency. There are approaches to alternatives in OSS, but no suitable business models yet. So it's hard to find and fund developers. For example, I can think of Evernote and OneNote for which there is no reasonable OSS alternative.
**And what will the future bring for Univention?**
I don't have a crystal ball, but we are extremely optimistic. We see a very high growth potential in the education market. More OSS is being made in the public sector, because we have repeatedly experienced the dead ends that can be reached if we solely rely on Closed Source.
Overall, we will continue our organic growth at double-digit rates year after year.
UCS and its core functionalities of identity management, infrastructure management and app center will increasingly be offered and used from the cloud as a managed service. We will support our technology in this direction, e.g., through containers, so that a hypervisor or bare metal is not always necessary for operation.
**You have been the CEO of Univention for a long time. What keeps you motivated?**
I have been the CEO of Univention for more than 16 years now. My biggest motivation is to realize that something is moving. That we offer the better way for IT. That the people who go this way with us are excited to work with us. I go home satisfied in the evening (of course not every evening). It's totally cool to work with the team I have. It motivates and pushes me whenever I need it.
I'm a techie and nerd at heart; I enjoy dealing with technology. So I'm totally happy at this place, and I'm grateful to the world that I can do whatever I want every day. Not everyone can say that.
**Who gives you inspiration?**
My employees, the customers and the Open Source projects. The exchange with other people.
The motivation behind everything is that we want to make sure that mankind will be able to influence and change the IT that surrounds us, today and in the future, just the way we want it and think is good. We want to make a contribution to this. That is why Univention is there. That is important to us every day.
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/interview-with-peter-ganten-ceo-of-univention-gmbh/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[1]: https://www.ostechnix.com/introduction-univention-corporate-server/
[2]: https://www.univention.com/products/ucs/
[3]: https://www.ostechnix.com/redhat-and-suse-announced-to-withdraw-support-for-openldap/
[4]: https://fsfe.org/
[5]: https://osb-alliance.de/
[6]: https://www.univention.com/downloads/download-ucs/
[7]: https://www.univention.com/downloads/ucs-online-demo/

View File

@ -1,76 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why CLAs aren't good for open source)
[#]: via: (https://opensource.com/article/19/2/cla-problems)
[#]: author: (Richard Fontana https://opensource.com/users/fontana)
Why CLAs aren't good for open source
======
Few legal topics in open source are as controversial as contributor license agreements.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/write-hand_0.jpg?itok=Uw5RJD03)
Few legal topics in open source are as controversial as [contributor license agreements][1] (CLAs). Unless you count the special historical case of the [Fedora Project Contributor Agreement][2] (which I've always seen as an un-CLA), or, like [Karl Fogel][3], you classify the [DCO][4] as a [type of CLA][5], today Red Hat makes no use of CLAs for the projects it maintains.
It wasn't always so. Red Hat's earliest projects followed the traditional practice I've called "inbound=outbound," in which contributions to a project are simply provided under the project's open source license with no execution of an external, non-FOSS contract required. But in the early 2000s, Red Hat began experimenting with the use of contributor agreements. Fedora started requiring contributors to sign a CLA based on the widely adopted [Apache ICLA][6], while a Free Software Foundation-derived copyright assignment agreement and a pair of bespoke CLAs were inherited from the Cygnus and JBoss acquisitions, respectively. We even took [a few steps][7] towards adopting an Apache-style CLA across the rapidly growing set of Red Hat-led projects.
This came to an end, in large part because those of us on the Red Hat legal team heard and understood the concerns and objections raised by Red Hat engineers and the wider technical community. We went on to become de facto leaders of what some have called the anti-CLA movement, marked notably by our [opposition to Project Harmony][8] and our [efforts][9] to get OpenStack to replace its CLA with the DCO. (We [reluctantly][10] sign tolerable upstream project CLAs out of practical necessity.)
### Why CLAs are problematic
Our choice not to use CLAs is a reflection of our values as an authentic open source company with deep roots in the free software movement. Over the years, many in the open source community have explained why CLAs, and the very similar mechanism of copyright assignment, are a bad policy for open source.
One reason is the red tape problem. Normally, open source development is characterized by frictionless contribution, which is enabled by inbound=outbound without imposition of further legal ceremony or process. This makes it relatively easy for new contributors to get involved in a project, allowing more effective growth of contributor communities and driving technical innovation upstream. Frictionless contribution is a key part of the advantage open source development holds over proprietary alternatives. But frictionless contribution is negated by CLAs. Having to sign an unusual legal agreement before a contribution can be accepted creates a bureaucratic hurdle that slows down development and discourages participation. This cost persists despite the growing use of automation by CLA-using projects.
CLAs also give rise to an asymmetry of legal power among a project's participants, which also discourages the growth of strong contributor and user communities around a project. With Apache-style CLAs, the company or organization leading the project gets special rights that other contributors do not receive, while those other contributors must shoulder certain legal obligations (in addition to the red tape burden) from which the project leader is exempt. The problem of asymmetry is most severe in copyleft projects, but it is present even when the outbound license is permissive.
When assessing the arguments for and against CLAs, bear in mind that today, as in the past, the vast majority of the open source code in any product originates in projects that follow the inbound=outbound practice. The use of CLAs by a relatively small number of projects causes collateral harm to all the others by signaling that, for some reason, open source licensing is insufficient to handle contributions flowing into a project.
### The case for CLAs
Since CLAs continue to be a minority practice and originate from outside open source community culture, I believe that CLA proponents should bear the burden of explaining why they are necessary or beneficial relative to their costs. I suspect that most companies using CLAs are merely emulating peer company behavior without critical examination. CLAs have an understandable, if superficial, appeal to risk-averse lawyers who are predisposed to favor greater formality, paper, and process regardless of the business costs. Still, some arguments in favor of CLAs are often advanced and deserve consideration.
**Easy relicensing:** If administered appropriately, Apache-style CLAs give the project steward effectively unlimited power to sublicense contributions under terms of the steward's choice. This is sometimes seen as desirable because of the potential need to relicense a project under some other open source license. But the value of easy relicensing has been greatly exaggerated by pointing to a few historical cases involving major relicensing campaigns undertaken by projects with an unusually large number of past contributors (all of which were successful without the use of a CLA). There are benefits in relicensing being hard because it results in stable legal expectations around a project and encourages projects to consult their contributor communities before undertaking significant legal policy changes. In any case, most inbound=outbound open source projects never attempt to relicense during their lifetime, and for the small number that do, relicensing will be relatively painless because typically the number of past contributors to contact will not be large.
**Provenance tracking:** It is sometimes claimed that CLAs enable a project to rigorously track the provenance of contributions, which purportedly has some legal benefit. It is unclear what is achieved by the use of CLAs in this regard that is not better handled through such non-CLA means as preserving Git commit history. And the DCO would seem to be much better suited to tracking contributions, given that it is normally used on a per-commit basis, while CLAs are signed once per contributor and are administratively separate from code contributions. Moreover, provenance tracking is often described as though it were a benefit for the public, yet I know of no case where a project provides transparent, ready public access to CLA acceptance records.
**License revocation:** Some CLA advocates warn of the prospect that a contributor may someday attempt to revoke a past license grant. To the extent that the concern is about largely judgment-proof individual contributors with no corporate affiliation, it is not clear why an Apache-style CLA provides more meaningful protection against this outcome compared to the use of an open source license. And, as with so many of the legal risks raised in discussions of open source legal policy, this appears to be a phantom risk. I have heard of only a few purported attempts at license revocation over the years, all of which were resolved quickly when the contributor backed down in the face of community pressure.
**Unauthorized employee contribution:** This is a special case of the license revocation issue and has recently become a point commonly raised by CLA advocates. When an employee contributes to an upstream project, normally the employer owns the copyrights and patents for which the project needs licenses, and only certain executives are authorized to grant such licenses. Suppose an employee contributed proprietary code to a project without approval from the employer, and the employer later discovers this and demands removal of the contribution or sues the project's users. This risk of unauthorized contributions is thought to be minimized by use of something like the [Apache CCLA][11] with its representations and signature requirement, coupled with some adequate review process to ascertain that the CCLA signer likely was authorized to sign (a step which I suspect is not meaningfully undertaken by most CLA-using companies).
Based on common sense and common experience, I contend that in nearly all cases today, employee contributions are done with the actual or constructive knowledge and consent of the employer. If there were an atmosphere of high litigation risk surrounding open source software, perhaps this risk should be taken more seriously, but litigation arising out of open source projects remains remarkably uncommon.
More to the point, I know of no case where an allegation of copyright or patent infringement against an inbound=outbound project, not stemming from an alleged open source license violation, would have been prevented by use of a CLA. Patent risk, in particular, is often cited by CLA proponents when pointing to the risk of unauthorized contributions, but the patent license grants in Apache-style CLAs are, by design, quite narrow in scope. Moreover, corporate contributions to an open source project will typically be few in number, small in size (and thus easily replaceable), and likely to be discarded as time goes on.
### Alternatives
If your company does not buy into the anti-CLA case and cannot get comfortable with the simple use of inbound=outbound, there are alternatives to resorting to an asymmetric and administratively burdensome Apache-style CLA requirement. The use of the DCO as a complement to inbound=outbound addresses at least some of the concerns of risk-averse CLA advocates. If you must use a true CLA, there is no need to use the Apache model (let alone a [monstrous derivative][10] of it). Consider the non-specification core of the [Eclipse Contributor Agreement][12]—essentially the DCO wrapped inside a CLA—or the Software Freedom Conservancy's [Selenium CLA][13], which merely ceremonializes an inbound=outbound contribution policy.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/2/cla-problems
作者:[Richard Fontana][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fontana
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/18/3/cla-vs-dco-whats-difference
[2]: https://opensource.com/law/10/6/new-contributor-agreement-fedora
[3]: https://www.red-bean.com/kfogel/
[4]: https://developercertificate.org
[5]: https://producingoss.com/en/contributor-agreements.html#developer-certificate-of-origin
[6]: https://www.apache.org/licenses/icla.pdf
[7]: https://www.freeipa.org/page/Why_CLA%3F
[8]: https://opensource.com/law/11/7/trouble-harmony-part-1
[9]: https://wiki.openstack.org/wiki/OpenStackAndItsCLA
[10]: https://opensource.com/article/19/1/cla-proliferation
[11]: https://www.apache.org/licenses/cla-corporate.txt
[12]: https://www.eclipse.org/legal/ECA.php
[13]: https://docs.google.com/forms/d/e/1FAIpQLSd2FsN12NzjCs450ZmJzkJNulmRC8r8l8NYwVW5KWNX7XDiUw/viewform?hl=en_US&formkey=dFFjXzBzM1VwekFlOWFWMjFFRjJMRFE6MQ#gid=0

View File

@ -1,45 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Discuss everything Fedora)
[#]: via: (https://fedoramagazine.org/discuss-everything-fedora/)
[#]: author: (Ryan Lerch https://fedoramagazine.org/introducing-flatpak/)
Discuss everything Fedora
======
![](https://fedoramagazine.org/wp-content/uploads/2019/03/fedora-discussion-816x345.jpg)
Are you interested in how Fedora is being developed? Do you want to get involved, or see what goes into making a release? You want to check out [Fedora Discussion][1]. It is a relatively new place where members of the Fedora Community meet to discuss, ask questions, and interact. Keep reading for more information.
Note that the Fedora Discussion system is mainly aimed at contributors. If you have questions on using Fedora, check out [Ask Fedora][2] (which will be migrated in the future).
![][3]
Fedora Discussion is a forum and discussion site that uses the [Discourse open source discussion platform][4].
There are already several categories useful for Fedora users, including [Desktop][5] (covering Fedora Workstation, Fedora Silverblue, KDE, XFCE, and more) and the [Server, Cloud, and IoT][6] category. Additionally, some of the [Fedora Special Interest Groups (SIGs) have discussions as well][7]. Finally, the [Fedora Friends][8] category helps you connect with other Fedora users and Community members by providing discussions about upcoming meetups and hackfests.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/discuss-everything-fedora/
作者:[Ryan Lerch][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/introducing-flatpak/
[b]: https://github.com/lujun9972
[1]: https://discussion.fedoraproject.org/
[2]: https://ask.fedoraproject.org
[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/discussion-screenshot-1024x663.png
[4]: https://www.discourse.org/about
[5]: https://discussion.fedoraproject.org/c/desktop
[6]: https://discussion.fedoraproject.org/c/server
[7]: https://discussion.fedoraproject.org/c/sigs
[8]: https://discussion.fedoraproject.org/c/friends

View File

@ -1,143 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to save time with TiDB)
[#]: via: (https://opensource.com/article/19/3/how-save-time-tidb)
[#]: author: (Morgan Tocker https://opensource.com/users/morgo)
How to save time with TiDB
======
TiDB, an open source, MySQL-compatible, cloud-based database engine, simplifies many of MySQL database administrators' common tasks.
![Team checklist][1]
Last November, I wrote about key [differences between MySQL and TiDB][2], an open source, MySQL-compatible, cloud-based database engine, from the perspective of scaling both solutions in the cloud. In this follow-up article, I'll dive deeper into the ways [TiDB][3] streamlines and simplifies administration.
If you come from a MySQL background, you may be used to doing a lot of manual tasks that are either not required or much simpler with TiDB.
The inspiration for TiDB came from the founders' experience managing sharded MySQL at scale at some of China's largest internet companies. Since requirements for operating a large system at scale are a key concern, I'll look at some typical MySQL database administrator (DBA) tasks and how they translate to TiDB.
[![TiDB architecture][4]][5]
In [TiDB's architecture][5]:
* SQL processing is separated from data storage. The SQL processing (TiDB) and storage (TiKV) components independently scale horizontally.
* PD (Placement Driver) acts as the cluster manager and stores metadata.
* All components natively provide high availability, with PD and TiKV using the [Raft consensus algorithm][6].
  * You can access your data via either MySQL (TiDB) or Spark (TiSpark) protocols (a minimal connection sketch follows below).
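Because TiDB speaks the MySQL wire protocol, any stock MySQL client or driver can talk to it. The snippet below is a minimal connectivity sketch, not code from this article: it assumes a local TiDB instance listening on the default MySQL-protocol port 4000 and the `pymysql` Python package, and the host, credentials and database name are placeholders.
```python
# Minimal sketch: connect to TiDB through its MySQL-compatible protocol.
# Host, user, password and database are placeholder assumptions;
# port 4000 is TiDB's default MySQL-protocol port.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=4000,
                       user="root", password="", database="test")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")   # works exactly as it would on MySQL
        print(cur.fetchone()[0])
finally:
    conn.close()
```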
### Adding/fixing replication slaves
**tl;dr:** It doesn't happen in the same way as in MySQL.
Replication and redundancy of data are automatically managed by TiKV. You also don't need to worry about creating initial backups to seed replicas, as _both_ the provisioning and replication are handled for you.
Replication is also quorum-based using the Raft consensus algorithm, so you don't have to worry about the inconsistency problems surrounding failures that you do with asynchronous replication (the default in MySQL and what many users are using).
TiDB does support its own binary log, so it can be used for asynchronous replication between clusters.
### Optimizing slow queries
**tl;dr:** Still happens in TiDB
There is no real way out of optimizing slow queries that have been introduced by development teams.
As a mitigating factor, though, if you need to add breathing room to your database's capacity while you work on optimization, TiDB's architecture allows you to scale horizontally.
### Upgrades and maintenance
**tl;dr:** Still required, but generally easier
Because the TiDB server is stateless, you can roll through an upgrade and deploy new TiDB servers. Then you can remove the older TiDB servers from the load balancer pool, shutting them down once connections have drained.
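To make that drain-and-replace flow concrete, here is a schematic sketch; the `load_balancer` and `server` objects and their methods are hypothetical stand-ins for whatever load-balancer API and deployment tooling you actually use, and Ansible- or Kubernetes-based deployments automate this loop for you.
```python
# Schematic rolling upgrade of stateless TiDB servers (hypothetical APIs).
import time

def rolling_upgrade(load_balancer, tidb_servers, new_version):
    for server in tidb_servers:
        load_balancer.remove(server)            # stop routing new connections
        while server.active_connections() > 0:  # wait for existing sessions to drain
            time.sleep(5)
        server.stop()
        server.install(new_version)
        server.start()
        load_balancer.add(server)               # return the upgraded node to the pool
```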
Upgrading PD is also quite straightforward since only the PD leader actively answers requests at a time. You can perform a rolling upgrade and upgrade PD's non-leader peers one at a time, and then change the leader before upgrading the final PD server.
For TiKV, the upgrade is marginally more complex. If you want to remove a node, I recommend first setting it to be a follower on each of the regions where it is currently a leader. After that, you can bring down the node without impacting your application. If the downtime is brief, TiKV will recover from its region peers using the Raft log. After a longer downtime, it will need to re-copy data. This can all be managed for you, though, if you choose to deploy using Ansible or Kubernetes.
### Manual sharding
**tl;dr:** Not required
Manual sharding is mainly a pain on the part of the application developers, but as a DBA, you might have to get involved if the sharding is naive or has problems such as hotspots (many workloads do) that require re-balancing.
In TiDB, re-sharding or re-balancing happens automatically in the background. The PD server observes when data regions (TiKV's term for chunks of data in key-value form) get too small, too big, or too frequently accessed.
You can also explicitly configure PD to store regions on certain TiKV servers. This works really well when combined with MySQL partitioning.
### Capacity planning
**tl;dr:** Much easier
Capacity planning on a MySQL database can be a little bit hard because you need to plan your physical infrastructure requirements two to three years from now. As data grows (and the working set changes), this can be a difficult task. I wouldn't say it completely goes away in the cloud either, since changing a master server's hardware is always hard.
TiDB splits data into approximately 100MiB chunks that it distributes among TiKV servers. Because this increment is much smaller than a full server, it's much easier to move around and redistribute data. It's also possible to add new servers in smaller increments, which is easier on planning.
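As a rough illustration of how fine-grained that is, a hypothetical 1 TiB dataset (an assumed figure, not one from the article) would be split into roughly ten thousand such regions:
```python
# Back-of-the-envelope: number of ~100 MiB regions for an assumed 1 TiB dataset.
region_size_mib = 100
dataset_mib = 1 * 1024 * 1024      # 1 TiB expressed in MiB

regions = dataset_mib / region_size_mib
print(round(regions))              # ~10486 regions, each small enough to move cheaply
```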
### Scaling
**tl;dr:** Much easier
This is related to capacity planning and sharding. When we talk about scaling, many people think about very large _systems,_ but that is not exclusively how I think of the problem:
* Scaling is being able to start with something very small, without having to make huge investments upfront on the chance it could become very large.
* Scaling is also a people problem. If a system requires too much internal knowledge to operate, it can become hard to grow as an engineering organization. The barrier to entry for new hires can become very high.
Thus, by providing automatic sharding, TiDB can scale much more easily.
### Schema changes (DDL)
**tl;dr:** Mostly better
The data definition language (DDL) supported in TiDB is all online, which means it doesn't block other reads or writes to the system. It also doesn't block the replication stream.
That's the good news, but there are a couple of limitations to be aware of:
* TiDB does not currently support all DDL operations, such as changing the primary key or some "change data type" operations.
  * TiDB does not currently allow you to chain multiple DDL changes in the same command, e.g., _ALTER TABLE t1 ADD INDEX (x), ADD INDEX (y)_. You will need to break these queries up into individual DDL statements, as shown in the sketch below.
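As a minimal sketch of that workaround (not code from the article), the chained statement can simply be issued as two separate DDL statements; the connection parameters, table and index names below are placeholders and `pymysql` is assumed:
```python
# Sketch: issue chained index additions as separate DDL statements for TiDB.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=4000,
                       user="root", password="", database="test")
with conn.cursor() as cur:
    # Instead of: ALTER TABLE t1 ADD INDEX (x), ADD INDEX (y)
    cur.execute("ALTER TABLE t1 ADD INDEX idx_x (x)")
    cur.execute("ALTER TABLE t1 ADD INDEX idx_y (y)")
conn.close()
```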
This is an area that we're looking to improve in [TiDB 3.0][7].
### Creating one-off data dumps for the reporting team
**tl;dr:** May not be required
DBAs loathe manual tasks that create one-off exports of data to be consumed by another team, perhaps in an analytics tool or data warehouse.
This is often required when the types of queries being executed on the dataset are analytical. TiDB has hybrid transactional/analytical processing (HTAP) capabilities, so in many cases, these queries should work fine. If your analytics team is using Spark, you can also use the [TiSpark][8] connector to allow them to connect directly to TiKV.
This is another area we are improving with [TiFlash][7], a column store accelerator. We are also working on a plugin system to support external authentication. This will make it easier to manage access by the reporting team.
### Conclusion
In this post, I looked at some common MySQL DBA tasks and how they translate to TiDB. If you would like to learn more, check out our [TiDB Academy course][9] designed for MySQL DBAs (it's free!).
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/3/how-save-time-tidb
作者:[Morgan Tocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/morgo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://opensource.com/article/18/11/key-differences-between-mysql-and-tidb
[3]: https://github.com/pingcap/tidb
[4]: https://opensource.com/sites/default/files/uploads/tidb_architecture.png (TiDB architecture)
[5]: https://pingcap.com/docs/architecture/
[6]: https://raft.github.io/
[7]: https://pingcap.com/blog/tidb-3.0-beta-stability-at-scale/
[8]: https://github.com/pingcap/tispark
[9]: https://pingcap.com/tidb-academy/

View File

@ -1,60 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Google partners with Intel, HPE and Lenovo for hybrid cloud)
[#]: via: (https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Google partners with Intel, HPE and Lenovo for hybrid cloud
======
Google boosted its on-premises and cloud connections with Kubernetes and serverless computing.
![Ilze Lucero \(CC0\)][1]
Still struggling to get its Google Cloud business out of single-digit market share, Google this week introduced new partnerships with Lenovo and Intel to help bolster its hybrid cloud offerings, both built on Google's Kubernetes container technology.
At Google's Next '19 show this week, Intel and Google said they will collaborate on Google's Anthos with a new reference design based on the second-generation Xeon Scalable processor introduced last week and an optimized Kubernetes software stack, designed to deliver increased workload portability between public and private cloud environments.
**[ Read also:[What hybrid cloud means in practice][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
As part of the Anthos announcement, Hewlett Packard Enterprise (HPE) said it has validated Anthos on its ProLiant servers, while Lenovo has done the same for its ThinkAgile platform. This solution will enable customers to get a consistent Kubernetes experience between Google Cloud and their on-premises HPE or Lenovo servers. No official word from Dell yet, but they can't be far behind.
Users will be able to manage their Kubernetes clusters and enforce policy consistently across environments either in the public cloud or on-premises. In addition, Anthos delivers a fully integrated stack of hardened components, including OS and container runtimes that are tested and validated by Google, so customers can upgrade their clusters with confidence and minimize downtime.
### What is Google Anthos?
Google formally introduced [Anthos][4] at this year's show. Anthos, formerly Cloud Services Platform, is meant to allow users to run their containerized applications without spending time on building, managing, and operating Kubernetes clusters. It runs both on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE) and in your data center with GKE On-Prem. Anthos will also let you manage workloads running on third-party clouds such as Amazon Web Services (AWS) and Microsoft Azure.
Google also announced the beta release of Anthos Migrate, which auto-migrates virtual machines (VM) from on-premises or other clouds directly into containers in GKE with minimal effort. This allows enterprises to migrate their infrastructure in one streamlined motion, without upfront modifications to the original VMs or applications.
Intel said it will publish the production design as an Intel Select Solution, as well as a developer platform, making it available to anyone who wants it.
### Serverless environments
Google isn't stopping with Kubernetes containers; it's also pushing ahead with serverless environments. [Cloud Run][5] is Google's implementation of serverless computing, which is something of a misnomer. You still run your apps on servers; you just aren't using a dedicated physical server. It is stateless, so resources are not allocated until you actually run or use the application.
Cloud Run is a fully serverless offering that takes care of all infrastructure management, including the provisioning, configuring, scaling, and managing of servers. It automatically scales up or down within seconds, even down to zero depending on traffic, ensuring you pay only for the resources you actually use. Cloud Run can be used on GKE, offering the option to run side by side with other workloads deployed in the same cluster.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388062/google-partners-with-intel-hpe-and-lenovo-for-hybrid-cloud.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/03/cubes_blocks_squares_containers_ilze_lucero_cc0_via_unsplash_1200x800-100752172-large.jpg
[2]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://cloud.google.com/blog/topics/hybrid-cloud/new-platform-for-managing-applications-in-todays-multi-cloud-world
[5]: https://cloud.google.com/blog/products/serverless/announcing-cloud-run-the-newest-member-of-our-serverless-compute-stack
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -1,60 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HPE and Nutanix partner for hyperconverged private cloud systems)
[#]: via: (https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
HPE and Nutanix partner for hyperconverged private cloud systems
======
Both companies will sell HP ProLiant appliances with Nutanix software but to different markets.
![Hewlett Packard Enterprise][1]
Hewlett Packard Enterprise (HPE) has partnered with Nutanix to offer Nutanix's hyperconverged infrastructure (HCI) software as a managed private cloud service and on HPE-branded appliances.
As part of the deal, the two companies will be competing against each other in hardware sales, sort of. If you want the consumption model you get through HPE's GreenLake, where your usage is metered and you pay for only the time you use it (similar to the cloud), then you would get the ProLiant hardware from HPE.
If you want an appliance model where you buy the hardware outright, like in the traditional sense of server sales, you would get the same ProLiant through Nutanix.
**[ Read also:[What is hybrid cloud computing?][2] and [Multicloud mania: what to know][3] ]**
As it is, HPE GreenLake offers multiple cloud offerings to customers, including virtualization courtesy of VMware and Microsoft. With the Nutanix partnership, HPE is adding Nutanix's free Acropolis hypervisor to its offerings.
“Customers get to choose an alternative to VMware with this,” said Pradeep Kumar, senior vice president and general manager of HPE's Pointnext consultancy. “They like the Acropolis license model, since it's license-free. Then they have choice points so pricing is competitive. Some like VMware, and I think it's our job to offer them both and they can pick and choose.”
Kumar added that the whole Nutanix stack costs 15 to 18% less with Acropolis than a VMware-powered system, since customers save on the hypervisor.
The HPE-Nutanix partnership offers a fully managed hybrid cloud infrastructure delivered as a service and deployed in customers' data centers or co-location facilities. The managed private cloud service gives enterprises a hyperconverged environment in-house without having to manage the infrastructure themselves and, more importantly, without the burden of ownership. GreenLake operates more like a lease than ownership.
### HPE GreenLake's private cloud services promise to significantly reduce costs
HPE is pushing hard on GreenLake, which basically mimics cloud platform pricing models of paying for what you use rather than outright ownership. Kumar said HPE projects the consumption model will account for 30% of HPE's business in the next few years.
GreenLake makes some hefty promises. According to Nutanix-commissioned IDC research, customers will achieve a 60% reduction in the five-year cost of operations, while an HPE-commissioned Forrester report found customers benefit from 30% capex savings by eliminating the need for overprovisioning and a 90% reduction in support and professional services costs.
By shifting to an IT as a Service model, HPE claims to provide a 40% increase in productivity by reducing the support load on IT operations staff and to shorten the time to deploy IT projects by 65%.
The two new offerings from the partnership, HPE GreenLake's private cloud service running Nutanix software and the HPE-branded appliances integrated with Nutanix software, are expected to be available during the third quarter of 2019, the companies said.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3388297/hpe-and-nutanix-partner-for-hyperconverged-private-cloud-systems.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -1,76 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes)
[#]: via: (https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes
======
Cisco says unpatched vulnerabilities could lead to DoS attacks, arbitrary code execution, and takeover of devices.
![Woolzian / Getty Images][1]
Cisco this week issued 31 security advisories but directed customer attention to “critical” patches for its IOS and IOS XE Software Cluster Management and IOS software for Cisco ASR 9000 Series routers. A number of other vulnerabilities also need attention if customers are running Cisco Wireless LAN Controllers.
The [first critical patch][2] has to do with a vulnerability in the Cisco Cluster Management Protocol (CMP) processing code in Cisco IOS and Cisco IOS XE Software that could allow an unauthenticated, remote attacker to send malformed CMP-specific Telnet options while establishing a Telnet session with an affected Cisco device configured to accept Telnet connections. An exploit could allow an attacker to execute arbitrary code and obtain full control of the device or cause a reload of the affected device, Cisco said.
**[ Also see[What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
The problem has a Common Vulnerability Scoring System number of 9.8 out of 10.
According to Cisco, the Cluster Management Protocol utilizes Telnet internally as a signaling and command protocol between cluster members. The vulnerability is due to the combination of two factors:
* The failure to restrict the use of CMP-specific Telnet options only to internal, local communications between cluster members and instead accept and process such options over any Telnet connection to an affected device
* The incorrect processing of malformed CMP-specific Telnet options.
Cisco says the vulnerability can be exploited during Telnet session negotiation over either IPv4 or IPv6. This vulnerability can only be exploited through a Telnet session established _to_ the device; sending the malformed options on Telnet sessions _through_ the device will not trigger the vulnerability.
The company says there are no workarounds for this problem, but disabling Telnet as an allowed protocol for incoming connections would eliminate the exploit vector. Cisco recommends disabling Telnet and using SSH instead. Information on how to do both can be found on the [Cisco Guide to Harden Cisco IOS Devices][5]. For patch information [go here][6].
The second critical patch involves a vulnerability in the sysadmin virtual machine (VM) on Cisco's ASR 9000 carrier-class routers running Cisco IOS XR 64-bit Software that could let an unauthenticated, remote attacker access internal applications running on the sysadmin VM, Cisco said in the [advisory][7]. This vulnerability also has a CVSS rating of 9.8.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**
Cisco said the vulnerability is due to incorrect isolation of the secondary management interface from internal sysadmin applications. An attacker could exploit this vulnerability by connecting to one of the listening internal applications. A successful exploit could result in unstable conditions, including both denial of service (DoS) and remote unauthenticated access to the device, Cisco stated.
Cisco has released [free software updates][6] that address the vulnerability described in this advisory.
Lastly, Cisco wrote that [multiple vulnerabilities][9] in the administrative GUI configuration feature of Cisco Wireless LAN Controller (WLC) Software could let an authenticated, remote attacker cause the device to reload unexpectedly during device configuration when the administrator is using this GUI, causing a DoS condition on an affected device. The attacker would need to have valid administrator credentials on the device for this exploit to work, Cisco stated.
“These vulnerabilities are due to incomplete input validation for unexpected configuration options that the attacker could submit while accessing the GUI configuration menus. An attacker could exploit these vulnerabilities by authenticating to the device and submitting crafted user input when using the administrative GUI configuration feature,” Cisco stated.
“These vulnerabilities have a Security Impact Rating (SIR) of High because they could be exploited when the software fix for the Cisco Wireless LAN Controller Cross-Site Request Forgery Vulnerability is not in place,” Cisco stated. “In that case, an unauthenticated attacker who first exploits the cross-site request forgery vulnerability could perform arbitrary commands with the privileges of the administrator user by exploiting the vulnerabilities described in this advisory.”
Cisco has released [software updates][10] that address these vulnerabilities and said that there are no workarounds.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/compromised_data_security_breach_vulnerability_by_woolzian_gettyimages-475563052_2400x1600-100788413-large.jpg
[2]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20170317-cmp
[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: http://www.cisco.com/c/en/us/support/docs/ip/access-lists/13608-21.html
[6]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-asr9k-exr
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-wlc-iapp
[10]: https://www.cisco.com/c/en/us/support/web/tsd-cisco-worldwide-contacts.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -1,58 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fujitsu completes design of exascale supercomputer, promises to productize it)
[#]: via: (https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Fujitsu completes design of exascale supercomputer, promises to productize it
======
Fujitsu hopes to be the first to offer exascale supercomputing using Arm processors.
![Riken Advanced Institute for Computational Science][1]
Fujitsu and Japanese research institute Riken announced that the design for the post-K supercomputer, to be launched in 2021, is complete and that they will productize the design for sale later this year.
The K supercomputer was a massive system, built by Fujitsu and housed at the Riken Advanced Institute for Computational Science campus in Kobe, Japan, with more than 80,000 nodes and using Sparc64 VIIIfx processors, a derivative of the Sun Microsystems Sparc processor developed under a license agreement that pre-dated Oracle buying out Sun in 2010.
**[ Also read:[10 of the world's fastest supercomputers][2] ]**
It was ranked as the top supercomputer when it was launched in June 2011 with a computation speed of over 8 petaflops. And in November 2011, K became the first computer to top 10 petaflops. It was eventually surpassed as the world's fastest supercomputer by IBM's Sequoia, but even now, eight years later, it's still in the top 20 supercomputers in the world.
### What's in the Post-K supercomputer?
The new system, dubbed “Post-K,” will feature an Arm-based processor called A64FX, a high-performance CPU developed by Fujitsu, designed for exascale systems. The chip is based on the Armv8 design, which is popular in smartphones, with 48 cores plus four “assistant” cores and the ability to access up to 32GB of memory per chip.
A64FX is the first CPU to adopt the Scalable Vector Extension (SVE), an instruction set specifically designed for Arm-based supercomputers. Fujitsu claims A64FX will offer peak double-precision (64-bit) floating-point performance of over 2.7 teraflops per chip. The system will have one CPU per node and 384 nodes per rack. That comes out to one petaflop per rack.
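A quick back-of-the-envelope check of that per-rack claim, using only the figures quoted in this paragraph:
```python
# Sanity-check the quoted per-rack figure from the chip and rack numbers above.
peak_per_chip_tflops = 2.7     # claimed peak double-precision throughput per A64FX
chips_per_rack = 384           # one CPU per node, 384 nodes per rack

peak_per_rack_tflops = peak_per_chip_tflops * chips_per_rack
print(peak_per_rack_tflops)    # 1036.8 TFLOPS, i.e. just over one petaflop per rack
```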
Contrast that with Summit, the top supercomputer in the world built by IBM for the Oak Ridge National Laboratory using IBM Power9 processors and Nvidia GPUs. A Summit rack has a peak compute capacity of 864 teraflops.
Let me put it another way: IBM's Power processor and Nvidia's Tesla are about to get pwned by a derivative of the chip in your iPhone.
**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][3] ]**
Fujitsu will productize the Post-K design and sell it as the successor to the Fujitsu Supercomputer PrimeHPC FX100. The company said it is also considering measures such as developing an entry-level model that will be easy to deploy, or supplying these technologies to other vendors.
Post-K will be installed in the Riken Center for Computational Science (R-CCS), where the K computer is currently located. The system will be one of the first exascale supercomputers in the world, although the U.S. and China are certainly gunning to be first if only for bragging rights.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/riken_advanced_institute_for_computational_science_k-computer_supercomputer_1200x800-100762135-large.jpg
[2]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel follows AMDs lead (again) into single-socket Xeon servers)
[#]: via: (https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Intel follows AMDs lead (again) into single-socket Xeon servers
======
Intel's new U series of processors is aimed at the low-end market, where one processor is good enough.
![Intel][1]
I'm really starting to wonder who the leader in x86 really is these days because it seems Intel is borrowing another page out of AMD's playbook.
Intel launched a whole lot of new Xeon Scalable processors earlier this month, but they neglected to mention a unique line: the U series of single-socket processors. The folks over at Serve The Home sniffed it out first, and Intel has confirmed the existence of the line, saying only that they "didn't broadly promote them."
**[ Read also:[Intel makes a play for high-speed fiber networking for data centers][2] ]**
To backtrack a bit, AMD made a major push for [single-socket servers][3] when it launched the Epyc line of server chips. Epyc comes with up to 32 cores and multithreading, and AMD (and Dell) argued that one 32-core/64-thread processor was enough to handle many loads and a lot cheaper than a two-socket system.
The new U series isn't available in the regular Intel [ARK database][4] listing of Xeon Scalable processors, but they do show up if you search. Intel says they are looking into that. There are three processors for now, one with 24 cores and two with 20 cores.
The 24-core Intel [Xeon Gold 6212U][5] will be a counterpart to the Intel Xeon Platinum 8260, with a 2.4GHz base clock speed and a 3.9GHz turbo clock and the ability to access up to 1TB of memory. The Xeon Gold 6212U will have the same 165W TDP as the 8260 line, but with a single socket that's 165 fewer watts of power.
Also, Intel is suggesting a price of about $2,000 for the Intel Xeon Gold 6212U, a big discount over the Xeon Platinum 8260's $4,702 list price. So, that will translate into much cheaper servers.
The [Intel Xeon Gold 6210U][6] with 20 cores carries a suggested price of $1,500, has a base clock rate of 2.50GHz with turbo boost to 3.9GHz and a 150-watt TDP. Finally, there is the 20-core Intel [Xeon Gold 6209U][7] with a price of around $1,000 that is identical to the 6210U except that its base clock speed is 2.1GHz with a turbo boost of 3.9GHz and a TDP of 125 watts due to its lower clock speed.
**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][8] ]**
All of the processors support up to 1TB of DDR4-2933 memory and Intel's Optane persistent memory.
In terms of speeds and feeds, AMD has a slight advantage over Intel in the single-socket race, and Epyc 2 is rumored to be approaching completion, which will only further advance AMD's lead.
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/intel_generic_cpu_background-100760187-large.jpg
[2]: https://www.networkworld.com/article/3307852/intel-makes-a-play-for-high-speed-fiber-networking-for-data-centers.html
[3]: https://www.networkworld.com/article/3253626/amd-lands-dell-as-its-latest-epyc-server-processor-customer.html
[4]: https://ark.intel.com/content/www/us/en/ark/products/series/192283/2nd-generation-intel-xeon-scalable-processors.html
[5]: https://ark.intel.com/content/www/us/en/ark/products/192453/intel-xeon-gold-6212u-processor-35-75m-cache-2-40-ghz.html
[6]: https://ark.intel.com/content/www/us/en/ark/products/192452/intel-xeon-gold-6210u-processor-27-5m-cache-2-50-ghz.html
[7]: https://ark.intel.com/content/www/us/en/ark/products/193971/intel-xeon-gold-6209u-processor-27-5m-cache-2-10-ghz.html
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -1,69 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT roundup: VMware, Nokia beef up their IoT)
[#]: via: (https://www.networkworld.com/article/3390682/iot-roundup-vmware-nokia-beef-up-their-iot.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
IoT roundup: VMware, Nokia beef up their IoT
======
Everyone wants in on the ground floor of the internet of things, and companies including Nokia, VMware and Silicon Labs are sharpening their offerings in anticipation of further growth.
![Getty Images][1]
When attempting to understand the world of IoT, it's easy to get sidetracked by all the fascinating use cases: Automated oil and gas platforms! Connected pet feeders! Internet-enabled toilets! (Is “the Internet of Toilets” a thing yet?) But the most important IoT trend to follow may be the way that major tech vendors are vying to make large portions of the market their own.
VMware's play for a significant chunk of the IoT market is called Pulse IoT Center, and the company released version 2.0 of it this week. It follows the pattern set by other big companies getting into IoT: Leveraging their existing technological strengths and applying them to the messier, more heterodox networking environment that IoT represents.
Unsurprisingly, given that it's VMware we're talking about, there's now a SaaS option, and the company was also eager to talk up that Pulse IoT Center 2.0 has simplified device-onboarding and centralized management features.
**More about edge networking**
* [How edge networking and IoT will reshape data centers][2]
* [Edge computing best practices][3]
* [How edge computing can help secure the IoT][4]
That might sound familiar, and for good reason: companies with any kind of background in network management, from HPE/Aruba to Amazon, have been pushing to promote their system as the best framework for managing a complicated and often decentralized web of IoT devices from a single platform. By rolling features like software updates, onboarding and security into a single-pane-of-glass management console, those companies are hoping to be the organizational base for customers trying to implement IoT.
Whether they're successful or not remains to be seen. While major IT companies have been trying to capture market share by competing across multiple verticals, the operational orientation of the IoT also means that non-traditional tech vendors with expertise in particular fields (particularly industrial and automotive) are suddenly major competitors.
**Nokia spreads the IoT network wide**
As a giant carrier-equipment vendor, Nokia is an important company in the overall IoT landscape. While some types of enterprise-focused IoT are heavily localized, like connected factory floors or centrally managed office buildings, others are so geographically disparate that carrier networks are the only connectivity medium that makes sense.
The Finnish company earlier this month broadened its footprint in the IoT space, announcing that it had partnered with Nordic Telecom to create a wide-area network focused on enabling IoT and emergency services. The network, which Nokia is billing as the first mission-critical communications network, operates using LTE technology in the 410-430MHz band, a relatively low frequency that allows for better propagation and a wide effective range.
The idea is to provide a high-throughput, low-latency network option to any user on the network, whether its an emergency services provider needing high-speed video communication or an energy or industrial company with a low-delay-tolerance application.
**Silicon Labs packs more onto IoT chips**
The heart of any IoT implementation remains the SoCs that make devices intelligent in the first place, and Silicon Labs announced that it's building more muscle into its IoT-focused product lineup.
The Austin-based chipmaker said that version 2 of its Wireless Gecko platform will pack more than double the wireless connectivity range of previous entries, which could seriously ease design requirements for companies planning out IoT deployments. The chipsets support Zigbee, Thread and Bluetooth mesh networking, and are designed for line-powered IoT devices, using Arm Cortex-M33 processors for relatively strong computing capacity and high energy efficiency.
Chipset advances aren't the type of thing that will pay off immediately in terms of making IoT devices more capable, but improvements like these make designing IoT endpoints for particular uses that much easier, and new SoCs will begin to filter into line-of-business equipment over time.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3390682/iot-roundup-vmware-nokia-beef-up-their-iot.html#tk.rss_all
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home5-100768494-large.jpg
[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -1,52 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Dell EMC and Cisco renew converged infrastructure alliance)
[#]: via: (https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Dell EMC and Cisco renew converged infrastructure alliance
======
Dell EMC and Cisco renewed their agreement to collaborate on converged infrastructure (CI) products for a few more years even though the momentum is elsewhere.
![Dell EMC][1]
Dell EMC and Cisco have renewed a collaboration on converged infrastructure (CI) products that has run for more than a decade, even as the momentum shifts elsewhere. The news was announced via a [blog post][2] by Pete Manca, senior vice president for solutions engineering at Dell EMC.
The deal is centered around Dell EMC's VxBlock product line, which originally started out in 2009 as a joint venture between EMC and Cisco called VCE (Virtual Computing Environment). EMC bought out Cisco's stake in the venture before Dell bought EMC.
The devices offered UCS servers and networking from Cisco, EMC storage, and VMware virtualization software in pre-configured, integrated bundles. VCE was retired in favor of new brands, VxBlock, VxRail, and VxRack. The lineup has been pared down to one device, the VxBlock 1000.
**[ Read also:[How to plan a software-defined data-center network][3] ]**
“The newly inked agreement entails continued company alignment across multiple organizations: executive, product development, marketing, and sales,” Manca wrote in the blog post. “This means we'll continue to share product roadmaps and collaborate on strategic development activities, with Cisco investing in a range of Dell EMC sales, marketing and training initiatives to support VxBlock 1000.”
Dell EMC cites IDC research that it holds a 48% market share in converged systems, nearly 1.5 times that of any other vendor. But IDC's April edition of the Converged Systems Tracker said the CI category is on the downswing. CI sales fell 6.4% year over year, while the market for hyperconverged infrastructure (HCI) grew 57.2% year over year.
For the unfamiliar, the primary difference between converged and hyperconverged infrastructure is that CI relies on hardware building blocks, while HCI is software-defined and considered more flexible and scalable than CI and operates more like a cloud system with resources spun up and down as needed.
Despite this, Dell is committed to CI systems. Just last month it announced an update and expansion of the VxBlock 1000, including higher scalability, a broader choice of components, and the option to add new technologies. It featured updated VMware vRealize and vSphere support, the option to consolidate high-value, mission-critical workloads with new storage and data protection options and support for Cisco UCS fabric and servers.
For customers who prefer to build their own infrastructure solutions, Dell EMC introduced Ready Stack, a portfolio of validated designs with sizing, design, and deployment resources that offer VMware-based IaaS, vSphere on Dell EMC PowerEdge servers and Dell EMC Unity storage, and Microsoft Hyper-V on Dell EMC PowerEdge servers and Dell EMC Unity storage.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/dell-emc-vxblock-1000-100794721-large.jpg
[2]: https://blog.dellemc.com/en-us/dell-technologies-cisco-reaffirm-joint-commitment-converged-infrastructure/
[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -1,86 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600)
[#]: via: (https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600
======
Cisco introduced Catalyst 9600 switches that let customers automate, set policy, provide security and gain assurance across wired and wireless networks.
![Martyn Williams][1]
Few events in the tech industry are truly transformative, but Cisco's replacement of its core Catalyst 6000 family could be one of those actions for customers and the company.
Introduced in 1999, [iterations of the Catalyst 6000][2] have nestled into the core of scores of enterprise networks, with the model 6500 becoming the company's largest-selling box ever.
**Learn about edge networking**
* [How edge networking and IoT will reshape data centers][3]
* [Edge computing best practices][4]
* [How edge computing can help secure the IoT][5]
It goes without question that migrating these customers alone to the new switch, the Catalyst 9600, which the company introduced today, will be of monumental importance to Cisco as it looks to revamp and continue to dominate large campus-core deployments. The first [Catalyst 9000][6], introduced in June 2017, is already the fastest-ramping product line in Cisco's history.
“There are at least tens of thousands of Cat 6000s running in campus cores all over the world,” said [Sachin Gupta][7], senior vice president for product management at Cisco. “It is the Swiss Army knife of switches in terms of features, and we have taken great care, over two years, developing feature parity and an easy migration path for those users to the Cat 9000.”
Indeed, the 9600 brings with it Cat 6000 features such as support for MPLS, virtual switching and IPv6, while adding or bolstering support for newer items such as intent-based networking (IBN), wireless networks and security segmentation. Strategically, the 9600 helps fill out the company's revamped lineup, which includes the 9200 family of access switches, the [9500][8] aggregation switch and the [9800 wireless controller][9].
Some of the nitty-gritty details about the 9600:
* It is a purpose-built 40 Gigabit and 100 Gigabit Ethernet line of modular switches targeted for the enterprise campus with a wired switching capacity of up to 25.6 Tbps, with up to 6.4 Tbps of bandwidth per slot.
* The switch supports granular port densities that fit diverse campus needs, including nonblocking 40 Gigabit and 100 Gigabit Ethernet Quad Small Form-Factor Pluggable (QSFP+, QSFP28) and 1, 10, and 25 GE Small Form-Factor Pluggable Plus (SFP, SFP+, SFP28).
* It can be configured to support up to 48 nonblocking 100 Gigabit Ethernet QSPF28 ports with the Cisco Catalyst 9600 Series Supervisor Engine 1; Up to 96 nonblocking 40 Gigabit Ethernet QSFP+ ports with the Cisco Catalyst 9600 Series Supervisor Engine 1 and Up to 192 nonblocking 25 Gigabit/10 Gigabit Ethernet SFP28/SFP+ ports with the Cisco Catalyst 9600 Series Supervisor Engine 1.
* It supports advanced routing and infrastructure services (MPLS, Layer 2 and Layer 3 VPNs, Multicast VPN, and Network Address Translation).
* It supports Cisco Software-Defined Access capabilities (such as a host-tracking database, cross-domain connectivity, and VPN Routing and Forwarding [VRF]-aware Locator/ID Separation Protocol) and network system virtualization with Cisco StackWise virtual technology.
The 9600 series runs Cisco's IOS XE software, which now runs across all Catalyst 9000 family members. The software brings with it support for other key products such as Cisco's [DNA Center][10], which controls automation capabilities, assurance setting, fabric provisioning and policy-based segmentation for enterprise networks. What that means is that with one user interface, DNA Center, customers can automate, set policy, provide security and gain assurance across the entire wired and wireless network fabric, Gupta said.
“The 9600 is a big deal for Cisco and customers as it brings together the campus core and lets users establish standard access and usage policies across their wired and wireless environments,” said Brandon Butler, a senior research analyst with IDC. “It was important that Cisco add a powerful switch to handle the increasing amounts of traffic wireless and cloud applications are bringing to the network.”
IOS XE brings with it automated device provisioning and a wide variety of automation features, including support for the network-configuration protocols NETCONF and RESTCONF using YANG data models. The software offers near-real-time monitoring of the network, leading to quick detection and rectification of failures, Cisco says.
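NETCONF and RESTCONF are open, standards-based interfaces rather than anything Cisco-specific, so a rough sense of what YANG-modelled automation looks like can be had from a short sketch. The example below is illustrative only: the management address and credentials are placeholders, it assumes RESTCONF is enabled on the device, and it simply pulls the standard ietf-interfaces tree as defined by RFC 8040.

```
# Minimal sketch: read interface state over RESTCONF (RFC 8040) as YANG-modelled JSON.
# The address, credentials and the assumption that RESTCONF is enabled are placeholders.
import requests

DEVICE = "https://198.51.100.10"                      # hypothetical management address
HEADERS = {"Accept": "application/yang-data+json"}    # standard RESTCONF media type

def get_interfaces(device: str = DEVICE) -> dict:
    """Fetch the standard ietf-interfaces data tree from the device."""
    url = f"{device}/restconf/data/ietf-interfaces:interfaces"
    resp = requests.get(url, headers=HEADERS,
                        auth=("admin", "admin"),       # placeholder credentials
                        verify=False)                  # lab-only: skip TLS verification
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = get_interfaces()
    for intf in data["ietf-interfaces:interfaces"]["interface"]:
        print(intf["name"], "enabled:", intf.get("enabled"))
```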
The software also supports hot patching which provides fixes for critical bugs and security vulnerabilities between regular maintenance releases. This support lets customers add patches without having to wait for the next maintenance release, Cisco says.
As with the rest of the Catalyst family, the 9600 is available via subscription-based licensing. Cisco says the [base licensing package][11] includes Network Advantage licensing options that are tied to the hardware. The base licensing packages cover switching fundamentals, management automation, troubleshooting, and advanced switching features. These base licenses are perpetual.
An add-on licensing package includes the Cisco DNA Premier and Cisco DNA Advantage options. The Cisco DNA add-on licenses are available as a subscription.
IDC's Butler noted that there are competitors such as Ruckus, Aruba and Extreme that offer switches capable of melding wired and wireless environments.
The new switch is built for the next two decades of networking, Gupta said. “If any of our competitors thought they could just go in and replace the Cat 6k, they were misguided.”
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/02/170227-mwc-02759-100710709-large.jpg
[2]: https://www.networkworld.com/article/2289826/133715-The-illustrious-history-of-Cisco-s-Catalyst-LAN-switches.html
[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[6]: https://www.networkworld.com/article/3256264/cisco-ceo-we-are-still-only-on-the-front-end-of-a-new-version-of-the-network.html
[7]: https://blogs.cisco.com/enterprise/looking-forward-catalyst-9600-switch-and-9100-access-point-meraki
[8]: https://www.networkworld.com/article/3202105/cisco-brings-intent-based-networking-to-the-end-to-end-network.html
[9]: https://www.networkworld.com/article/3321000/cisco-links-wireless-wired-worlds-with-new-catalyst-9000-switches.html
[10]: https://www.networkworld.com/article/3280988/cisco/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html
[11]: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9600/software/release/16-11/release_notes/ol-16-11-9600.html#id_67835
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,121 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managed WAN and the cloud-native SD-WAN)
[#]: via: (https://www.networkworld.com/article/3398476/managed-wan-and-the-cloud-native-sd-wan.html)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)
Managed WAN and the cloud-native SD-WAN
======
The motivation for WAN transformation is clear: today organizations require improved internet access and last-mile connectivity, additional bandwidth, and a reduction in WAN costs.
![Gerd Altmann \(CC0\)][1]
In recent years, a significant number of organizations have transformed their wide area network (WAN). Many of these organizations have some kind of cloud-presence across on-premise data centers and remote site locations.
The vast majority of organizations that I have consulted with have over 10 locations. And it is common to have headquarters in both the US and Europe, along with remote site locations spanning North America, Europe, and Asia.
A WAN transformation project requires this diversity to be taken into consideration when choosing the best SD-WAN vendor to satisfy both networking and security requirements. Fundamentally, SD-WAN is not just about physical connectivity; there are many more related aspects.
**[ Related:[MPLS explained What you need to know about multi-protocol label switching][2]**
### Motivations for transforming the WAN
The motivation for WAN transformation is clear: today organizations want improved internet access and last-mile connectivity, additional bandwidth, and a reduction in WAN costs. Replacing Multiprotocol Label Switching (MPLS) with SD-WAN has of course been the main driver of the SD-WAN evolution, but it is only a single piece of the jigsaw puzzle.
Many SD-WAN vendors are quickly brought to their knees when they try to address security and gain direct internet access from remote site locations. The problem is how to ensure optimized cloud access that is secure, offers improved visibility and predictable performance, and avoids the high costs associated with MPLS. SD-WAN is not just about connecting locations; primarily, it needs to combine many other important network and security elements into one seamless worldwide experience.
According to a recent [Cato Networks][3] survey of enterprise IT managers, a staggering 85% will confront use cases in 2019 that are poorly addressed or outright ignored by SD-WAN. Examples include providing secure internet access from any location (50%) and improving visibility into, and control over, mobile access to cloud applications such as Office 365 (46%).
### Issues with traditional SD-WAN vendors
First and foremost, SD-WAN is unable to address the security challenges that arise during the WAN transformation. Such challenges include protection against malware and ransomware and the implementation of the necessary security policies. In addition, SD-WAN lacks the visibility required to police mobile users and remote site locations accessing resources in the public cloud.
To combat this, organizations have to purchase additional equipment. There has always been and will always be a high cost associated with buying such security appliances. Furthermore, the additional tools that are needed to protect the remote site locations increase the network complexity and reduce visibility. Let us not forget that this variety of physical appliances requires talented engineers for design, deployment and maintenance.
There will often be a single network cowboy. This means the network and security configuration, along with the design essentials, is stored in the mind of the engineer, not in a central database from which the knowledge can be accessed if the engineer leaves his or her employment.
The physical-appliance approach to SD-WAN makes it hard, if not impossible, to accommodate the future. If the current SD-WAN vendors continue to focus just on connecting devices with physical appliances, they will have limited ability to accommodate, for example, the coming wave of networked IoT devices. With these factors in mind, what are the available options to overcome the SD-WAN shortcomings?
One can opt for a do-it-yourself (DIY) solution or a managed service; the latter traditionally means a telco, with co-managed or self-managed offerings as improvements on that model.
### Option 1: The DIY solution
First, DIY. From my experience of trying to stitch together a global network, this is not only costly but also complex, and it is a very constrained approach to network transformation. We started with physical appliances decades ago, and they were sufficient to an extent. They worked because they suited the requirements of the time, but our environment has changed since then, and we need to accommodate those changes alongside the current requirements.
Even back in those days, we always had a breachable perimeter. The perimeter-approach to networking and security never really worked and it was just a matter of time before the bad actor would penetrate the guarded walls.
Securing a global network involves more than just firewalling the devices. A solid security perimeter requires URL filtering, anti-malware and IPS to secure the internet traffic. If you try to deploy all these functions in a single device, such as a unified threat management (UTM) appliance, you will hit scaling problems. As a result, you will be left with appliance sprawl.
Back in my early days as an engineer, I recall stitching together a global network with a mixture of security and network appliances from a variety of vendors. It was me and just two others who used to get the job done on time and for a production network, our uptime levels were superior to most.
However, it involved too many late nights, daily flights to our PoPs and of course the major changes required a forklift. A lot of work had to be done at that time, which made me want to push some or most of the work to a 3rd party.
### Option 2: The managed service solution
Today, there is a growing need for the managed service approach to SD-WAN. Notably, it simplifies the network design, deployment and maintenance activities while offloading the complexity, in line with what most CIOs are talking about today.
A managed service provides a number of benefits, such as the elimination of backhauling to centralized cloud connectors or VPN concentrators. Backhauling is never a network architect's favorite; more often than not it results in increased latency, congested links, internet chokepoints, and last-mile outages.
A managed service can also authenticate mobile users at the local communication hub rather than at a centralized point, which would increase latency. So what options are available when considering a managed service?
### Telcos: An average service level
Let's be honest: telcos have a mixed track record, and enterprises rely on them with caution. Essentially, you are building a network with third-party appliances and services, which puts the technical expertise outside of the organization.
Secondly, the telco must orchestrate, monitor and manage numerous technical domains which are likely to introduce further complexity. As a result, troubleshooting requires close coordination with the suppliers which will have an impact on the customer experience.
### Time equals money
Resolving a query could easily take two or three attempts; it's rare that you will get to the right person straight away. Even for a minor feature change, you have to open tickets. Hence, with telcos, the time required to solve a problem grows.
In addition, it takes time to make major network changes such as opening new locations, which could take up to 45 days. In the same report mentioned above, 71% of respondents were frustrated with how long telco customer service takes to resolve problems, 73% indicated that deploying new locations requires at least 15 days, and 47% claimed that “high bandwidth costs” are their biggest frustration when working with telcos.
When it comes to lead times for projects, an engineer does not care. Does a project manager care if you have an optimum network design? No, many don't; most just care about the timeframes. During my career, now spanning 18 years, I have never seen comments from any of my contacts saying “you must adhere to your project manager's timelines”.
However, in my experience, project managers have their ways, and lead times do become a big part of your daily job. So as an engineer, a 45-day lead time will certainly hit your brand hard, especially if you are an external consultant.
There is also a problem with bandwidth costs: telcos need to charge a premium to cover their complexity. There is always going to be a series of problems when working with them. Let's face it, they offer an average service level.
### Co-management and self-service management
What is needed is a managed service that brings the visibility and control of DIY. This, ultimately, opens the door to co-management and self-service management.
Co-management allows both the telco and the enterprise to make changes to the WAN. Then we have self-service management of the WAN, which gives the enterprise sole control over aspects of its network.
However, these are just sticking plasters covering up the flaws. We need a managed service that not only connects locations but also synthesizes the site connectivity, along with security, mobile access, and cloud access.
### Introducing the cloud-native approach to SD-WAN
There should be a new style of managed services that combines the best of both worlds. It should offer the uptime, predictability and reach of the best telcos along with the cost structure and versatility of cloud providers. All such requirements can be met by what is known as the cloud-native carrier.
Therefore, we should be looking for a platform that can connect and secure all the users and resources at scale, no matter where they are positioned. Eventually, such a platform will limit the costs and increase the velocity and agility.
This is what a cloud-native carrier can offer you. You could say it's a new kind of managed service, which is what enterprises are now looking for. A cloud-native carrier service brings the best of cloud services to the world of networking. This new style of managed service brings to SD-WAN the global reach, self-service, and agility of the cloud with the ability to easily migrate from MPLS.
In summary, a cloud-native carrier service will improve global connectivity to on-premises and cloud applications, enable secure branch to internet access, and both securely and optimally integrate cloud datacenters.
**This article is published as part of the IDG Contributor Network.[Want to Join?][4]**
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3398476/managed-wan-and-the-cloud-native-sd-wan.html
作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/03/network-wan-100713693-large.jpg
[2]: https://www.networkworld.com/article/2297171/sd-wan/network-security-mpls-explained.html
[3]: https://www.catonetworks.com/news/digital-transformation-survey
[4]: /contributor-network/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Moving to the Cloud? SD-WAN Matters!)
[#]: via: (https://www.networkworld.com/article/3397921/moving-to-the-cloud-sd-wan-matters.html)
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
Moving to the Cloud? SD-WAN Matters!
======
![istock][1]
This is the first in a two-part blog series that will explore how enterprises can realize the full transformation promise of the cloud by shifting to a business first networking model powered by a business-driven [SD-WAN][2]. The focus for this installment will be on automating secure IPsec connectivity and intelligently steering traffic to cloud providers.
Over the past several years we've seen a major shift in data center strategies where enterprise IT organizations are shifting applications and workloads to the cloud, whether private or public. More and more, enterprises are leveraging software-as-a-service (SaaS) applications and infrastructure-as-a-service (IaaS) cloud services from leading providers like [Amazon AWS][3], [Google Cloud][4], [Microsoft Azure][5] and [Oracle Cloud Infrastructure][6]. This represents a dramatic shift in enterprise data traffic patterns as fewer and fewer applications are hosted within the walls of the traditional corporate data center.
There are several drivers for the shift to IaaS cloud services and SaaS apps, but business agility tops the list for most enterprises. The traditional IT model for provisioning and deprovisioning applications is rigid and inflexible and is no longer able to keep pace with changing business needs.
According to [LogicMonitor's Cloud Vision 2020][7] study, more than 80 percent of enterprise workloads will run in the cloud by 2020, with more than 40 percent running on public cloud platforms. This major shift in the application consumption model is having a huge [impact on organizations and infrastructure][8]. A recent article entitled “[How Amazon Web Services is luring banks to the cloud][9],” published by CNBC, reported that some companies already have completely migrated all of their applications and IT workloads to public cloud infrastructures. An interesting fact is that while many enterprises must comply with stringent regulatory compliance mandates such as PCI-DSS or HIPAA, they still have made the move to the cloud. This tells us two things: the maturity of public cloud services and the trust these organizations have in using them are at an all-time high. Again, it is all about speed and agility without compromising performance, security and reliability.
### **Is there a direct correlation between moving to the cloud and adopting SD-WAN?**
As the cloud enables businesses to move faster, an SD-WAN architecture where top-down business intent is the driver is critical to ensuring success, especially when branch offices are geographically distributed across the globe. Traditional router-centric WAN architectures were never designed to support todays cloud consumption model for applications in the most efficient way. With a conventional router-centric WAN approach, access to applications residing in the cloud means traversing unnecessary hops, resulting in wasted bandwidth, additional cost, added latency and potentially higher packet loss. In addition, under the existing, traditional WAN model where management tends to be rigid, complex network changes can be lengthy, whether setting up new branches or troubleshooting performance issues. This leads to inefficiencies and a costly operational model. Therefore, enterprises greatly benefit from taking a business-first WAN approach toward achieving greater agility in addition to realizing substantial CAPEX and OPEX savings.
A business-driven SD-WAN platform is purpose-built to tackle the challenges inherent to the traditional router-centric model and more aptly support today's cloud consumption model. This means application policies are defined based on business intent, connecting users securely and directly to applications wherever they reside without unnecessary extra hops or security compromises. For example, if the application is hosted in the cloud and is trusted, a business-driven SD-WAN can automatically connect users to it without backhauling traffic to a POP or HQ data center. Now, in general this traffic is usually going across an internet link which, on its own, may not be secure. However, the right SD-WAN platform will have a unified stateful firewall built in for local internet breakout, allowing only branch-initiated sessions to enter the branch and providing the ability to service-chain traffic to a cloud-based security service if necessary, before forwarding it to its final destination. If the application is moved and becomes hosted by another provider, or perhaps moves back to a company's own data center, traffic must be intelligently redirected, wherever the application is being hosted. Without automation and embedded machine learning, dynamic and intelligent traffic steering is impossible.
### **A closer look at how the Silver Peak EdgeConnect™ SD-WAN edge platform addresses these challenges: **
**Automate traffic steering and connectivity to cloud providers**
An [EdgeConnect][10] virtual instance is easily spun up in any of the [leading cloud providers][11] through their respective marketplaces. For an SD-WAN to intelligently steer traffic to its destination, it requires insights into both HTTP and HTTPS traffic; it must be able to identify apps on the first packet received in order to steer traffic to the right destination in accordance with business intent. This is a critical capability because once a TCP connection is NATed with a public IP address, it cannot be switched; thus it can't be re-routed once the connection is established. So, the ability of EdgeConnect to identify, classify and automatically steer traffic to the correct destination based on the first packet, and not the second or tenth packet, will assure application SLAs, minimize the waste of expensive bandwidth and deliver the highest quality of experience.
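How a platform actually fingerprints an application on the first packet is proprietary, but the steering decision that classification feeds can be pictured as a simple policy lookup. The sketch below is a hypothetical illustration rather than Silver Peak's implementation; the policy names and match rules are invented, and it only looks at what a first packet can plausibly expose (destination address, port and, for TLS, the SNI).

```
# Hypothetical sketch of steering a flow from first-packet metadata alone.
# Policy names and match rules are invented; real platforms classify far more richly.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FirstPacket:
    dst_ip: str
    dst_port: int
    sni: Optional[str] = None          # TLS Server Name Indication, if visible

# Invented business-intent policies: trusted SaaS breaks out locally,
# private prefixes ride the data-center overlay, everything else takes the default path.
DOMAIN_POLICIES = {
    "office365.com": "direct-internet-breakout",
    "salesforce.com": "direct-internet-breakout",
}

def steer(pkt: FirstPacket) -> str:
    """Pick an overlay path before the flow is NATed and can no longer be re-routed."""
    if pkt.sni:
        for suffix, path in DOMAIN_POLICIES.items():
            if pkt.sni == suffix or pkt.sni.endswith("." + suffix):
                return path
    if pkt.dst_ip.startswith("10."):    # crude stand-in for an RFC 1918 prefix match
        return "mpls-to-datacenter"
    return "default-internet-path"

print(steer(FirstPacket("52.96.0.1", 443, sni="outlook.office365.com")))   # direct-internet-breakout
print(steer(FirstPacket("10.4.7.20", 443)))                                # mpls-to-datacenter
```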
Another critical capability is automatic performance optimization. Irrespective of which link the traffic ends up traversing based on business intent and the unique requirements of the application, EdgeConnect automatically optimizes application performance without human intervention, correcting for out-of-order packets using Packet Order Correction (POC) and compensating for high-latency conditions that can be related to distance or other issues. This is done using adaptive Forward Error Correction (FEC) and tunnel bonding, where a virtual tunnel is created, resulting in a single logical overlay across which traffic can be dynamically moved between the different paths as conditions change with each underlay WAN service. In this [lightboard video][12], Dinesh Fernando, a technical marketing engineer at Silver Peak, explains how EdgeConnect automates tunnel creation between sites and cloud providers, how it simplifies data transfers between multi-clouds, and how it improves application performance.
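Silver Peak's adaptive FEC is proprietary, but the core idea of spending a little extra bandwidth to avoid retransmissions over a lossy path can be shown with the simplest possible scheme: one XOR parity packet per block, which lets the receiver rebuild any single lost packet. The sketch below is a toy illustration, not the vendor's algorithm; real adaptive FEC varies the parity overhead with measured loss.

```
# Toy XOR-parity FEC: one parity packet per block recovers any single lost packet.
# Real adaptive FEC adjusts overhead to measured loss; this fixed scheme does not.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(block: list) -> bytes:
    """XOR all packets in a block (padded to equal length) into one parity packet."""
    size = max(len(p) for p in block)
    return reduce(xor_bytes, (p.ljust(size, b"\x00") for p in block))

def recover(received: dict, parity: bytes, block_size: int) -> dict:
    """Rebuild at most one missing packet from the survivors plus the parity packet."""
    missing = [i for i in range(block_size) if i not in received]
    if len(missing) == 1:
        survivors = (p.ljust(len(parity), b"\x00") for p in received.values())
        received[missing[0]] = reduce(xor_bytes, survivors, parity)
    return received

block = [b"pkt-0", b"pkt-1", b"pkt-2", b"pkt-3"]
parity = make_parity(block)
survivors = {0: block[0], 1: block[1], 3: block[3]}        # packet 2 lost in transit
print(recover(survivors, parity, block_size=4)[2])          # b'pkt-2'
```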
If your business is global and increasingly dependent on the cloud, the business-driven EdgeConnect SD-WAN edge platform enables seamless multi-cloud connectivity, turning the network into a business accelerant. EdgeConnect delivers:
1. A consistent deployment from the branch to the cloud, extending the reach of the SD-WAN into virtual private cloud environments
2. Multi-cloud flexibility, making it easier to initiate and distribute resources across multiple cloud providers
3. Investment protection by confidently migrating on premise IT resources to any combination of the leading public cloud platforms, knowing their cloud-hosted instances will be fully supported by EdgeConnect
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3397921/moving-to-the-cloud-sd-wan-matters.html
作者:[Rami Rammaha][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Rami-Rammaha/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/istock-899678028-100797709-large.jpg
[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[3]: https://www.silver-peak.com/company/tech-partners/cloud/aws
[4]: https://www.silver-peak.com/company/tech-partners/cloud/google-cloud
[5]: https://www.silver-peak.com/company/tech-partners/cloud/microsoft-azure
[6]: https://www.silver-peak.com/company/tech-partners/cloud/oracle-cloud
[7]: https://www.logicmonitor.com/resource/the-future-of-the-cloud-a-cloud-influencers-survey/?utm_medium=pr&utm_source=businesswire&utm_campaign=cloudsurvey
[8]: http://www.networkworld.com/article/3152024/lan-wan/in-the-age-of-digital-transformation-why-sd-wan-plays-a-key-role-in-the-transition.html
[9]: http://www.cnbc.com/2016/11/30/how-amazon-web-services-is-luring-banks-to-the-cloud.html?__source=yahoo%257cfinance%257cheadline%257cheadline%257cstory&par=yahoo&doc=104135637
[10]: https://www.silver-peak.com/products/unity-edge-connect
[11]: https://www.silver-peak.com/company/tech-partners?strategic_partner_type=69
[12]: https://www.silver-peak.com/resource-center/automate-connectivity-to-cloud-networking-with-sd-wan

View File

@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (With Cray buy, HPE rules but does not own the supercomputing market)
[#]: via: (https://www.networkworld.com/article/3397087/with-cray-buy-hpe-rules-but-does-not-own-the-supercomputing-market.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
With Cray buy, HPE rules but does not own the supercomputing market
======
In buying supercomputer vendor Cray, HPE has strengthened its high-performance-computing technology, but serious competitors remain.
![Cray Inc.][1]
Hewlett Packard Enterprise was already the leader in the high-performance computing (HPC) sector before its announced acquisition of supercomputer maker Cray earlier this month. Now it has a commanding lead, but there are still competitors to the giant.
The news that HPE would shell out $1.3 billion to buy the company came just as Cray had announced plans to build three of the biggest systems yet — all exascale, and all with the same deployment time of 2021.
Sales had been slowing for HPC systems, but our government, with its endless supply of money, came to the rescue, throwing hundreds of millions at Cray for systems to be built at Lawrence Berkeley National Laboratory, Argonne National Laboratory and Oak Ridge National Laboratory.
**[ Read also:[How to plan a software-defined data-center network][2] ]**
And HPE sees a big revenue opportunity in HPC, a market that was $2 billion in 1990 and now nearly $30 billion, according to Steve Conway, senior vice president with Hyperion Research, which follows the HPC market. HPE thinks the HPC market will grow to $35 billion by 2021, and it hopes to earn a big chunk of that pie.
“They were solidly in the lead without Cray. They were already in a significant lead over the No. 2 company, Dell. This adds to their lead and gives them access to very high end of market, especially government supercomputers that sell for $300 million to $600 million each,” said Conway.
He's not exaggerating. Earlier this month the U.S. Department of Energy announced a contract with Cray to build Frontier, an exascale supercomputer at Oak Ridge National Laboratory, sometime in 2021, with a $600 million price tag. Frontier will be powered by AMD Epyc processors and Radeon GPUs, which must have them doing backflips at AMD.
With Cray, HPE is sitting on a lot of technology for the supercomputing and even the high-end, non-HPC market. It had the ProLiant business, the bulk of server sales (and proof the Compaq acquisition wasn't such a bad idea), Integrity NonStop mission-critical servers, the SGI business it acquired in 2016, plus a variety of systems running everything from Arm to Xeon Scalable processors.
Conway thinks all of those technologies fit in different spaces, so he doubts HPE will try to consolidate any of it. All HPE has said so far is it will keep the supercomputer products it has now under the Cray business unit.
But the company is still getting something it didn't have. “It takes a certain kind of technical experience [to do HPC right] and only a few companies are able to play at that level. Before this deal, HPE was not one of them,” said Conway.
And in the process, HPE takes Cray away from its many competitors: IBM, Lenovo, Dell/EMC, Huawei (well, not so much now), Super Micro, NEC, Hitachi, Fujitsu, and Atos.
“[The acquisition] doesn't fundamentally change things because there are still enough competitors that buyers can have competitive bids. But it's gotten to be a much bigger market,” said Conway.
Cray sells a lot to government, but Conway thinks there is a new opportunity in the ever-expanding AI race. “Because HPC is indispensable at the forefront of AI, there is a new area for expanding the market,” he said.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3397087/with-cray-buy-hpe-rules-but-does-not-own-the-supercomputing-market.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/the_cray_xc30_piz_daint_system_at_the_swiss_national_supercomputing_centre_via_cray_inc_3x2_978x652-100762113-large.jpg
[2]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco security spotlights Microsoft Office 365 e-mail phishing increase)
[#]: via: (https://www.networkworld.com/article/3398925/cisco-security-spotlights-microsoft-office-365-e-mail-phishing-increase.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco security spotlights Microsoft Office 365 e-mail phishing increase
======
Cisco blog follows DHS Cybersecurity and Infrastructure Security Agency (CISA) report detailing risks around Office 365 and other cloud services
![weerapatkiatdumrong / Getty Images][1]
It's no secret that if you have a cloud-based e-mail service, fighting off the barrage of security issues has become a maddening daily routine.
The leading e-mail service in [Microsoft's Office 365][2] package seems to be getting the most attention from those attackers hellbent on stealing enterprise data or your private information via phishing attacks. Amazon and Google see their share of phishing attempts in their cloud-based services as well.
**[ Also see[What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
But attackers are crafting and launching phishing campaigns targeting Office 365 users, [wrote][5] Ben Nahorney, a threat intelligence analyst covering the threat landscape for Cisco Security, in a blog post focusing on the Office 365 phishing issue.
Nahorney cited research from security vendor [Agari Data][6], which found that over the last few quarters there has been a steady increase in the number of phishing emails impersonating Microsoft. While Microsoft has long been the most commonly impersonated brand, it now accounts for more than half of all brand impersonations seen in the last quarter.
Recently, cloud security firm Avanan wrote in its [annual phishing report][7] that one in every 99 emails is a phishing attack, using malicious links and attachments as the main vectors. “Of the phishing attacks we analyzed, 25 percent bypassed Office 365 security, a number that is likely to increase as attackers design new obfuscation methods that take advantage of zero-day vulnerabilities on the platform,” Avanan wrote.
The attackers attempt to steal a user's login credentials with the goal of taking over accounts. If successful, attackers can often log into the compromised accounts and perform a wide variety of malicious activity: spread malware, spam and phishing emails from within the internal network; carry out tailored attacks such as spear phishing and [business email compromise][8] (a long-standing business scam that uses spear phishing, social engineering, identity theft and e-mail spoofing); and target partners and customers, Nahorney wrote.
Nahorney wrote that at first glance, this may not seem very different than external email-based attacks. However, there is one critical distinction: The malicious emails sent are now coming from legitimate accounts.
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][9] ]**
“For the recipient, it's often even someone that they know, eliciting trust in a way that would not necessarily be afforded to an unknown source. To make things more complicated, attackers often leverage conversation hijacking, where they deliver their payload by replying to an email that's already located in the compromised inbox,” Nahorney stated.
The methods used by attackers to gain access to an Office 365 account are fairly straightforward, Nahorney wrote.
“The phishing campaigns usually take the form of an email from Microsoft. The email contains a request to log in, claiming the user needs to reset their password, hasn't logged in recently or that there's a problem with the account that needs their attention. A URL is included, enticing the reader to click to remedy the issue,” Nahorney wrote.
Once logged in, nefarious activities can go on unnoticed as the attacker has what look like authorized credentials.
“This gives the attacker time for reconnaissance: a chance to observe and plan additional attacks. Nor will this type of attack set off a security alert in the same way something like a brute-force attack against a webmail client will, where the attacker guesses password after password until they get in or are detected,” Nahorney stated.
Nahorney suggested the following steps customers can take to protect email:
* Use multi-factor authentication. If a login attempt requires a secondary authorization before someone is allowed access to an inbox, this will stop many attackers, even with phished credentials.
* Deploy advanced anti-phishing technologies. Some machine-learning technologies can use local identity and relationship modeling alongside behavioral analytics to spot deception-based threats.
* Run regular phishing exercises. Regular, mandated phishing exercises across the entire organization will help to train employees to recognize phishing emails, so that they don't click on malicious URLs or enter their credentials into a malicious website.
### Homeland Security flags Office 365, other cloud email services
The U.S. government, too, has been warning customers of Office 365 and other cloud-based email services that they should be on alert for security risks. The US Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) this month [issued a report targeting][10] Office 365 and other cloud services saying:
“Organizations that used a third party have had a mix of configurations that lowered their overall security posture (e.g., mailbox auditing disabled, unified audit log disabled, multi-factor authentication disabled on admin accounts). In addition, the majority of these organizations did not have a dedicated IT security team to focus on their security in the cloud. These security oversights have led to user and mailbox compromises and vulnerabilities.”
The agency also posted remediation suggestions including:
* Enable unified audit logging in the Security and Compliance Center.
* Enable mailbox auditing for each user.
* Ensure Azure AD password sync is planned for and configured correctly, prior to migrating users.
* Disable legacy email protocols, if not required, or limit their use to specific users.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3398925/cisco-security-spotlights-microsoft-office-365-e-mail-phishing-increase.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/cso_phishing_social_engineering_security_threat_by_weerapatkiatdumrong_gettyimages-489433130_3x2_2400x1600-100796450-large.jpg
[2]: https://docs.microsoft.com/en-us/office365/securitycompliance/security-roadmap
[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://blogs.cisco.com/security/office-365-phishing-threat-of-the-month
[6]: https://www.agari.com/
[7]: https://www.avanan.com/hubfs/2019-Global-Phish-Report.pdf
[8]: https://www.networkworld.com/article/3195072/fbi-ic3-vile-5b-business-e-mail-scam-continues-to-breed.html
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[10]: https://www.us-cert.gov/ncas/analysis-reports/AR19-133A
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,53 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Nvidia launches edge computing platform for AI processing)
[#]: via: (https://www.networkworld.com/article/3397841/nvidia-launches-edge-computing-platform-for-ai-processing.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Nvidia launches edge computing platform for AI processing
======
EGX platform goes to the edge to do as much processing there as possible before sending data upstream to major data centers.
![Leo Wolfert / Getty Images][1]
Nvidia is launching a new platform called EGX Platform, designed to bring real-time artificial intelligence (AI) to edge networks. The idea is to put AI computing closer to where sensors collect data before it is sent to larger data centers.
The edge serves as a buffer to data sent to data centers. It whittles down the data collected and only sends what is relevant up to major data centers for processing. This can mean discarding more than 90% of data collected, but the trick is knowing which data to keep and which to discard.
“AI is required in this data-driven world,” said Justin Boitano, senior director for enterprise and edge computing at Nvidia, on a press call last Friday. “We analyze data near the source, capture anomalies and report anomalies back to the mothership for analysis.”
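What “capture anomalies and report them back” can look like in practice is a filter running next to the sensor. The sketch below is a deliberately minimal illustration, not anything EGX-specific: it forwards only readings that drift well outside a rolling baseline and discards the rest at the edge.

```
# Minimal edge filter: forward only readings that deviate sharply from a rolling
# baseline; everything else is dropped before it leaves the site. Illustrative only.
from collections import deque
from statistics import mean, pstdev

class EdgeFilter:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history = deque(maxlen=window)   # rolling window of recent readings
        self.threshold = threshold            # deviation (in std devs) that counts as an anomaly

    def should_forward(self, reading: float) -> bool:
        baseline = list(self.history)          # snapshot before adding the new reading
        self.history.append(reading)
        if len(baseline) < 10:                 # not enough history yet; keep everything
            return True
        mu, sigma = mean(baseline), pstdev(baseline)
        return sigma > 0 and abs(reading - mu) > self.threshold * sigma

edge = EdgeFilter()
stream = [20.1, 20.3, 19.9, 20.0] * 5 + [35.7]            # steady readings, then one spike
forwarded = [x for x in stream if edge.should_forward(x)]
print(forwarded)                                           # warm-up samples plus the spike
```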
**[ Now read[20 hot jobs ambitious IT pros should shoot for][2]. ]**
Boitano said we are hitting a crossover point where there is more compute at the edge than in the cloud, because more work needs to be done there.
EGX comes from 14 server vendors in a range of form factors, combining AI with network, security and storage from Mellanox. Boitano said the servers will fit in any industry-standard rack, so they will fit into edge containers from the likes of Vapor IO and Schneider Electric.
EGX scales from Nvidia's low-power Jetson Nano processor all the way up to Nvidia T4 processors that can deliver more than 10,000 trillion operations per second (TOPS) for real-time speech recognition and other real-time AI tasks.
Nvidia is working on a software stack called Nvidia Edge Stack that can be updated constantly, and the software runs in containers, so no reboots are required, just a restart of the container. EGX runs enterprise-grade Kubernetes container platforms like Red Hat OpenShift.
Edge Stack is optimized software that includes Nvidia drivers, a CUDA Kubernetes plugin, a CUDA container runtime, CUDA-X libraries and containerized AI frameworks and applications, including TensorRT, TensorRT Inference Server and DeepStream.
The company is boasting more than 40 early adopters, including BMW Group Logistics, which uses EGX and its own Isaac robotic platforms to handle increasingly complex logistics with real-time efficiency.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3397841/nvidia-launches-edge-computing-platform-for-ai-processing.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_by_leowolfert_gettyimages-689799380_2400x1600-100788464-large.jpg
[2]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Satellite-based internet possible by year-end, says SpaceX)
[#]: via: (https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Satellite-based internet possible by year-end, says SpaceX
======
Amazon, Tesla-associated SpaceX and OneWeb are emerging as just some of the potential suppliers of a new kind of data-friendly satellite internet service that could bring broadband IoT connectivity to most places on Earth.
![Getty Images][1]
With SpaceX's successful launch of an initial array of broadband-internet-carrying satellites last week, and Amazon's surprising posting of numerous satellite engineering-related job openings on its [job board][2] this month, one might well be asking if the next-generation internet space race is finally getting going. (I first wrote about the [OneWeb satellite internet plans][3] it was concocting with Airbus four years ago.)
This new batch of satellite-driven internet systems, if they work and are eventually switched on, could provide broadband to most places, including previously internet-barren locations, such as rural areas. That would be good for high-bandwidth, low-latency remote internet of things (IoT) and increasingly important edge-server connections for verticals like oil and gas and maritime. [Data could even end up getting stored in compliance-friendly outer space, too][4]. Leaky ground-based connections could also, perhaps, become a thing of the past.
Of the principal new internet suppliers, SpaceX has gotten farthest along. That's in part because it has commercial impetus: it needed to create payload for its numerous rocket projects. The Tesla electric-car-associated company (the two firms share materials science) has not only launched its first tranche of 60 satellites for its own internet constellation, called Starlink, but also successfully launched numerous batches (making up the full constellation of 75 satellites) for Iridium's replacement, an upgraded constellation called Iridium NEXT.
[The time of 5G is almost here][5]
Potential competitor OneWeb launched its first six Airbus-built satellites in February. [It has plans for 900 more][6]. SpaceX has been approved for 4,365 more by the FCC, and Project Kuiper, as Amazons space internet project is known, wants to place 3,236 satellites in orbit, according to International Telecommunication Union filings [discovered by _GeekWire_][7] earlier this year. [Startup LeoSat, which I wrote about last year, aims to build an internet backbone constellation][8]. Facebook, too, is exploring [space-delivered internet][9].
### Why the move to space?
Laser technical progress, where data is sent in open, free space, rather than via a restrictive, land-based cable or via traditional radio paths, is partly behind this space-internet rush. “Bits travel faster in free space than in glass-fiber cable,” LeoSat explained last year. Additionally, improving microprocessor tech is also part of the mix.
One important difference from existing older-generation satellite constellations is that this new generation of internet satellites will be located in low Earth orbit (LEO). Initial Starlink satellites will be placed at about 350 miles above Earth, with later launches deployed at 710 miles.
There's an advantage to that. Traditional satellites in geostationary orbit, or GSO, have been deployed about 22,000 miles up. That extra distance versus LEO introduces latency and is one reason earlier generations of Internet satellites are plagued by slow round-trip times. Latency didn't matter when GSO was introduced in 1964, and commercial satellites, traditionally, have been pitched as one-way video links, such as are used by sporting events for broadcast, and not for data.
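For a rough sense of how much orbital altitude matters, here is a back-of-envelope calculation of the minimum round-trip signal time for the two orbits mentioned above. It is a simplified sketch: it only counts speed-of-light travel over four legs (user to satellite to gateway and back) and ignores processing, queuing and routing delays.

```
# Rough speed-of-light round-trip estimates for satellite links.
# Simplified sketch: assumes the satellite sits directly overhead and ignores
# processing delay, ground-segment routing and inter-satellite hops.
SPEED_OF_LIGHT_KM_S = 299_792  # kilometres per second

def round_trip_ms(altitude_miles: float) -> float:
    """Minimum round-trip time in milliseconds for a bent-pipe satellite link."""
    altitude_km = altitude_miles * 1.609
    # user -> satellite -> gateway, then the same path back again (4 legs)
    return 4 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

print(f"GSO (~22,000 miles): {round_trip_ms(22_000):.0f} ms")  # roughly 470 ms
print(f"LEO (~350 miles):    {round_trip_ms(350):.1f} ms")     # under 10 ms
```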
And when will we get to experience these new ISPs? “Starlink is targeted to offer service in the Northern U.S. and Canadian latitudes after six launches,” [SpaceX says on its website][10]. Each launch would deliver about 60 satellites. “SpaceX is targeting two to six launches by the end of this year.”
Global penetration of the “populated world” could be obtained after 24 launches, it thinks.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3398940/space-internet-maybe-end-of-year-says-spacex.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/network_iot_world-map_us_globe_nodes_global-100777483-large.jpg
[2]: https://www.amazon.jobs/en/teams/projectkuiper
[3]: https://www.itworld.com/article/2938652/space-based-internet-starts-to-get-serious.html
[4]: https://www.networkworld.com/article/3200242/data-should-be-stored-data-in-space-firm-says.html
[5]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[6]: https://www.airbus.com/space/telecommunications-satellites/oneweb-satellites-connection-for-people-all-over-the-globe.html
[7]: https://www.geekwire.com/2019/amazon-lists-scores-jobs-bellevue-project-kuiper-broadband-satellite-operation/
[8]: https://www.networkworld.com/article/3328645/space-data-backbone-gets-us-approval.html
[9]: https://www.networkworld.com/article/3338081/light-based-computers-to-be-5000-times-faster.html
[10]: https://www.starlink.com/
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Survey finds SD-WANs are hot, but satisfaction with telcos is not)
[#]: via: (https://www.networkworld.com/article/3398478/survey-finds-sd-wans-are-hot-but-satisfaction-with-telcos-is-not.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
Survey finds SD-WANs are hot, but satisfaction with telcos is not
======
A recent survey of over 400 IT executives by Cato Networks found that legacy telcos might be on the outside looking in for SD-WANs.
![istock][1]
This week SD-WAN vendor Cato Networks announced the results of its [Telcos and the Future of the WAN in 2019 survey][2]. The study was a mix of companies of all sizes, with 42% being enterprise-class (over 2,500 employees). More than 70% had a network with more than 10 locations, and almost a quarter (24%) had over 100 sites. All of the respondents have a cloud presence, and almost 80% have at least two data centers. The survey had good geographic diversity, with 57% of respondents coming from the U.S. and 24% from Europe.
Highlights of the survey include the following key findings:
## **SD-WANs are hot but not a panacea to all networking challenges**
The survey found that 44% of respondents have already deployed or will deploy an SD-WAN within the next 12 months. This number is up sharply from 25% when Cato ran the survey a year ago. Another 33% are considering SD-WAN but have no immediate plans to deploy. The primary drivers for the evolution of the WAN are improved internet access (46%), increased bandwidth (39%), improved last-mile availability (38%) and reduced WAN costs (37%). It's good to see cost savings drop to fourth in motivation, since there is so much more to SD-WAN.
[The time of 5G is almost here][3]
It's interesting that the majority of respondents believe SD-WAN alone can't address all challenges facing the WAN. A whopping 85% stated they would be confronting issues not addressed by SD-WAN alone. This includes secure, local internet breakout, improved visibility, and control over mobile access to cloud apps. This indicates that customers are looking for SD-WAN to be the foundation of the WAN but understand that other technologies need to be deployed as well.
## **Telco dissatisfaction is high**
The traditional telco has been a point of frustration for network professionals for years, and the survey spelled that out loud and clear. Prior to being an analyst, I held a number of corporate IT positions and found telcos to be the single most frustrating group of companies to deal with. The problem was, there was no choice. If you need MPLS services, you need a telco. The same can't be said for SD-WANs, though; businesses have more choices.
Respondents to the survey ranked telco service as “average.” It's been well documented that we are now in the customer-experience era and “good enough” service is no longer good enough. Regarding pricing, 54% gave telcos a failing grade. Although price isn't everything, this will certainly open the door to competitive SD-WAN vendors. Respondents gave the highest marks for overall experience to SaaS providers, followed by cloud computing suppliers. Global telcos scored the lowest of all vendor types.
A look deeper explains the frustration level. The network is now mission-critical for companies, but 48% stated they are able to reach the support personnel with the right expertise to solve a problem only on a second attempt. No retailer, airline, hotel or other type of company could survive this, but telco customers had no other options for years.
**[[Prepare to become a Certified Information Systems Security Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][4] ]**
Another interesting set of data points is the speed at which telcos address customer needs. Digital businesses compete on speed, but telco process is the antithesis of fast. Moves, adds and changes take at least one business day for half of the respondents. Also, 70% indicated that opening a new location takes 15 days, and 38% stated it requires 45 days or more.
## **Security is now part of SD-WAN**
The use of broadband, cloud access and other trends raises the bar on security for SD-WAN, and the survey confirmed that respondents are skeptical that SD-WANs could address these issues. Seventy percent believe SD-WANs can't address malware/ransomware, and 49% don't think SD-WAN helps with enforcing company policies on mobile users. Because of this, network professionals are forced to buy additional security tools from other vendors, but that can drive up complexity. SD-WAN vendors that have intrinsic security capabilities can use that as a point of differentiation.
## **Managed services are critical to the growth of SD-WANs**
The survey found that 75% of respondents are using some kind of managed service provider, versus only 25% using an appliance vendor. This latter number was 32% last year. I'm not surprised by this shift and expect it to continue. Legacy WANs were inefficient but straightforward to deploy. SD-WANs are highly agile and more cost-effective, but complexity has gone through the roof. Network engineers need to factor in cloud connectivity, distributed security, application performance, broadband connectivity and other issues. Managed services can help businesses enjoy the benefits of SD-WAN while masking the complexity.
Despite the desire to use an MSP, respondents don't want to give up total control. Eighty percent stated they preferred self-service or co-managed models. This further explains the shift away from telcos, since they typically work with fully managed models.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3398478/survey-finds-sd-wans-are-hot-but-satisfaction-with-telcos-is-not.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/02/istock-465661573-100750447-large.jpg
[2]: https://www.catonetworks.com/news/digital-transformation-survey/
[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HPE Synergy For Dummies)
[#]: via: (https://www.networkworld.com/article/3399618/hpe-synergy-for-dummies.html)
[#]: author: (HPE https://www.networkworld.com/author/Michael-Cooney/)
HPE Synergy For Dummies
======
![istock/venimo][1]
Business must move fast today to keep up with competitive forces. That means IT must provide an agile — anytime, anywhere, any workload — infrastructure that ensures growth, boosts productivity, enhances innovation, improves the customer experience, and reduces risk.
A composable infrastructure helps organizations achieve these important objectives that are difficult — if not impossible — to achieve via traditional means, such as the ability to do the following:
* Deploy quickly with simple flexing, scaling, and updating
* Run workloads anywhere — on physical servers, on virtual servers, or in containers
* Operate any workload upon which the business depends, without worrying about infrastructure resources or compatibility
* Ensure the infrastructure is able to provide the right service levels so the business can stay in business
In other words, IT must inherently become part of the fabric of products and services that are rapidly innovated at every company, with an anytime, anywhere, any workload infrastructure.
**The anytime paradigm**
For organizations that seek to embrace DevOps, collaboration is the cultural norm. Development and operations staff work side by side to support software across its entire life cycle, from initial idea to production support.
To provide DevOps groups — as well as other stakeholders — the IT infrastructure required at the rate at which it is demanded, enterprise IT must increase its speed, agility, and flexibility to enable people to compose and recompose resources at any time. Composable infrastructure enables this anytime paradigm.
**The anywhere ability**
Bare metal and virtualized workloads are just two application foundations that need to be supported in the modern data center. Today, containers are emerging as a compelling construct, providing significant benefits for certain kinds of workloads. Unfortunately, with traditional infrastructure approaches, IT needs to build out custom, unique infrastructure to support them, at least until an infrastructure is deployed that can seamlessly handle physical, virtual, and container-based workloads.
Each environment would need its own hardware and software and might even need its own staff members supporting it.
Composable infrastructure provides an environment that supports the ability to run physical, virtual, or containerized workloads.
**Support any workload**
Do you have a legacy on-premises application that you have to keep running? Do you have enterprise resource planning (ERP) software that currently powers your business but that will take ten years to phase out? At the same time, do you have an emerging DevOps philosophy under which you'd like to empower developers to dynamically create computing environments as a part of their development efforts?
All these things can be accomplished simultaneously on the right kind of infrastructure. Composable infrastructure enables any workload to operate as a part of the architecture.
**HPE Synergy**
HPE Synergy brings to life the architectural principles of composable infrastructure. It is a single, purpose-built platform that reduces operational complexity for workloads and increases operational velocity for applications and services.
Download a copy of the [HPE Synergy for Dummies eBook][2] to learn how to:
* Infuse the IT architecture with the ability to enable agility, flexibility, and speed
* Apply composable infrastructure concepts to support both traditional and cloud-native applications
* Deploy HPE Synergy infrastructure to revolutionize workload support in the data center
Also, you will find more information about HPE Synergy [here][3].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3399618/hpe-synergy-for-dummies.html
作者:[HPE][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/istock-1026657600-100798064-large.jpg
[2]: https://www.hpe.com/us/en/resources/integrated-systems/synergy-for-dummies.html
[3]: http://hpe.com/synergy

View File

@@ -0,0 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (True Hyperconvergence at Scale: HPE Simplivity With Composable Fabric)
[#]: via: (https://www.networkworld.com/article/3399619/true-hyperconvergence-at-scale-hpe-simplivity-with-composable-fabric.html)
[#]: author: (HPE https://www.networkworld.com/author/Michael-Cooney/)
True Hyperconvergence at Scale: HPE Simplivity With Composable Fabric
======
Many hyperconverged solutions only focus on software-defined storage. However, many networking functions and technologies can be consolidated for simplicity and scale in the data center. This video describes how HPE SimpliVity with Composable Fabric gives organizations the power to run any virtual machine anywhere, anytime. Read more about HPE SimpliVity [here][1].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3399619/true-hyperconvergence-at-scale-hpe-simplivity-with-composable-fabric.html
作者:[HPE][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://hpe.com/info/simplivity

View File

@@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT Roundup: New research on IoT security, Microsoft leans into IoT)
[#]: via: (https://www.networkworld.com/article/3398607/iot-roundup-new-research-on-iot-security-microsoft-leans-into-iot.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
IoT Roundup: New research on IoT security, Microsoft leans into IoT
======
Verizon sets up widely available narrow-band IoT service, while most Americans think IoT manufacturers should ensure their products protect personal information.
As with any technology whose use is expanding at such speed, it can be tough to track exactly what's going on in the [IoT][1] world: everything from basic usage numbers to customer attitudes to more in-depth slices of the market is constantly changing. Fortunately, the month of May brought several new pieces of research to light, which should help provide at least a partial outline of what's really happening in IoT.
### Internet of things polls
Not all of the news is good. An IPSOS Mori poll performed on behalf of the Internet Society and Consumers International (respectively, an umbrella organization for open development and Internet use and a broad-based consumer advocacy group) found that, despite the skyrocketing numbers of smart devices in circulation around the world, more than half of users in large parts of the western world don't trust those devices to safeguard their privacy.
**More on IoT:**
* [What is the IoT? How the internet of things works][2]
* [What is edge computing and how its changing the network][3]
* [Most powerful Internet of Things companies][4]
* [10 Hot IoT startups to watch][5]
* [The 6 ways to make money in IoT][6]
* [What is digital twin technology? [and why it matters]][7]
* [Blockchain, service-centric networking key to IoT success][8]
* [Getting grounded in IoT networking and security][9]
* [Building IoT-ready networks must become a priority][10]
* [What is the Industrial IoT? [And why the stakes are so high]][11]
While almost 70 percent of respondents owned connected devices, 55 percent said they didn't feel their personal information was adequately protected by manufacturers. A further 28 percent said they had avoided using connected devices (smart home, fitness tracking and similar consumer gadgetry), primarily because they were concerned over privacy issues, and a whopping 85 percent of Americans agreed with the argument that manufacturers had a responsibility to produce devices that protected personal information.
Those concerns are understandable, according to data from the Ponemon Institute, a tech-research organization. Its survey of corporate risk and security personnel, released in early May, found that there have been few concerted efforts to limit exposure to IoT-based security threats, and that those threats are sharply on the rise when compared to past years, with the percentage of organizations that had experienced a data breach related to unsecured IoT devices rising from 15 percent in fiscal 2017 to 26 percent in fiscal 2019.
Beyond a lack of organizational wherewithal to address those threats, part of the problem in some verticals is technical. Security vendor Forescout said earlier this month that its research showed 40 percent of all healthcare IT environments had more than 20 different operating systems, and more than 30 percent had more than 100, which is hardly an ideal situation for smooth patching and updating.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3398607/iot-roundup-new-research-on-iot-security-microsoft-leans-into-iot.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[2]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[5]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[6]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[7]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[8]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[9]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[10]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[11]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html

View File

@@ -0,0 +1,102 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Its time for the IoT to 'optimize for trust')
[#]: via: (https://www.networkworld.com/article/3399817/its-time-for-the-iot-to-optimize-for-trust.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
Its time for the IoT to 'optimize for trust'
======
If we can't trust the internet of things (IoT) to gather accurate data and use it appropriately, IoT adoption and innovation are likely to suffer.
![Bose][1]
One of the strengths of internet of things (IoT) technology is that it can do so many things well. From smart toothbrushes to predictive maintenance on jetliners, the IoT has more use cases than you can count. The result is that various IoT use cases require optimization for particular characteristics, from cost to speed to long life, as well as myriad others.
But in a recent post, "[How the internet of things will change advertising][2]" (which you should definitely read), the always-insightful Stacy Higginbotham tossed in a line that I can't stop thinking about: “It's crucial that the IoT optimizes for trust."
**[ Read also: Network World's [corporate guide to addressing IoT security][3] ]**
### Trust is the IoT's most important attribute
Higginbotham was talking about optimizing for trust as opposed to clicks, but really, trust is more important than just about any other value in the IoT. It's more important than bandwidth usage, more important than power usage, more important than cost, more important than reliability, and even more important than security and privacy (though they are obviously related). In fact, trust is the critical factor in almost every aspect of the IoT.
Don't believe me? Let's take a quick look at some recent developments in the field:
For one thing, IoT devices often don't take good care of the data they collect from you. Over 90% of data transactions on IoT devices are not fully encrypted, according to a new [study from security company Zscaler][4]. The [problem][5], apparently, is that many companies have large numbers of consumer-grade IoT devices on their networks. In addition, many IoT devices are attached to the companies' general networks, and if that network is breached, the IoT devices and data may also be compromised.
In some cases, ownership of IoT data can raise surprisingly serious trust concerns. According to [Kaiser Health News][6], smartphone sleep apps, as well as smart beds and smart mattress pads, gather amazingly personal information: “It knows when you go to sleep. It knows when you toss and turn. It may even be able to tell when you're having sex.” And while companies such as Sleep Number say they don't share the data they gather, their written privacy policies clearly state that they _can_.
### **Lack of trust may lead to new laws**
In California, meanwhile, "lawmakers are pushing for new privacy rules affecting smart speakers” such as the Amazon Echo. According to the _[LA Times][7]_, the idea is “to ensure that the devices don't record private conversations without permission,” requiring a specific opt-in process. Why is this an issue? Because consumers—and their elected representatives—don't trust that Amazon, or any IoT vendor, will do the right thing with the data it collects from the IoT devices it sells—perhaps because it turns out that thousands of [Amazon employees have been listening in on what Alexa users are][8] saying to their Echo devices.
The trust issues get even trickier when you consider that Amazon reportedly considered letting Alexa listen to users even without a wake word like “Alexa” or “computer,” and is reportedly working on [wearable devices designed to read human emotions][9] from listening to your voice.
“The trust has been breached,” said California Assemblyman Jordan Cunningham (R-Templeton) to the _LA Times_.
As critics of the bill ([AB 1395][10]) point out, the restrictions matter because voice assistants require this data to improve their ability to correctly understand and respond to requests.
### **Some first steps toward increasing trust**
Perhaps recognizing that the IoT needs to be optimized for trust so that we are comfortable letting it do its job, Amazon recently introduced a new Alexa voice command: “[Delete what I said today][11].”
Moves like that, while welcome, will likely not be enough.
For example, a [new United Nations report][12] suggests that “voice assistants reinforce harmful gender stereotypes” when using female-sounding voices and names like Alexa and Siri. Put simply, “Siri's female obsequiousness—and the servility expressed by so many other digital assistants projected as young women—provides a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education.” I'm not sure IoT vendors are eager—or equipped—to tackle issues like that.
**More on IoT:**
* [What is the IoT? How the internet of things works][13]
* [What is edge computing and how its changing the network][14]
* [Most powerful Internet of Things companies][15]
* [10 Hot IoT startups to watch][16]
* [The 6 ways to make money in IoT][17]
* [What is digital twin technology? [and why it matters]][18]
* [Blockchain, service-centric networking key to IoT success][19]
* [Getting grounded in IoT networking and security][20]
* [Building IoT-ready networks must become a priority][21]
* [What is the Industrial IoT? [And why the stakes are so high]][22]
Join the Network World communities on [Facebook][23] and [LinkedIn][24] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3399817/its-time-for-the-iot-to-optimize-for-trust.html
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/09/bose-sleepbuds-2-100771579-large.jpg
[2]: https://mailchi.mp/iotpodcast/stacey-on-iot-how-iot-changes-advertising?e=6bf9beb394
[3]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html
[4]: https://www.zscaler.com/blogs/research/iot-traffic-enterprise-rising-so-are-threats
[5]: https://www.csoonline.com/article/3397044/over-90-of-data-transactions-on-iot-devices-are-unencrypted.html
[6]: https://khn.org/news/a-wake-up-call-on-data-collecting-smart-beds-and-sleep-apps/
[7]: https://www.latimes.com/politics/la-pol-ca-alexa-google-home-privacy-rules-california-20190528-story.html
[8]: https://www.usatoday.com/story/tech/2019/04/11/amazon-employees-listening-alexa-customers/3434732002/
[9]: https://www.bloomberg.com/news/articles/2019-05-23/amazon-is-working-on-a-wearable-device-that-reads-human-emotions
[10]: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1395
[11]: https://venturebeat.com/2019/05/29/amazon-launches-alexa-delete-what-i-said-today-voice-command/
[12]: https://unesdoc.unesco.org/ark:/48223/pf0000367416.page=1
[13]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[14]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[15]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[16]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[17]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[18]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[19]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[20]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[21]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[22]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[23]: https://www.facebook.com/NetworkWorld/
[24]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,102 @@
[#]: collector: (lujun9972)
[#]: translator: (GraveAccent)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5G will augment Wi-Fi, not replace it)
[#]: via: (https://www.networkworld.com/article/3399978/5g-will-augment-wi-fi-not-replace-it.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
5G will augment Wi-Fi, not replace it
======
Jeff Lipton, vice president of strategy and corporate development at Aruba, adds a dose of reality to the 5G hype, discussing how it and Wi-Fi will work together and how to maximize the value of both.
![Thinkstock][1]
There's arguably no technology topic that's currently hotter than [5G][2]. It was a major theme of the most recent [Mobile World Congress][3] show and has reared its head in other events such as Enterprise Connect and almost every vendor event I attend.
Some vendors have positioned 5G as a panacea to all network problems and predict it will eradicate all other forms of networking. Views like that are obviously extreme, but I do believe that 5G will have an impact on the networking industry and is something that network engineers should be aware of.
To help bring some realism to the 5G hype, I recently interviewed Jeff Lipton, vice president of strategy and corporate development at Aruba, a Hewlett Packard company, as I know HPE has been deeply involved in the evolution of both 5G and Wi-Fi.
**[ Also read: [The time of 5G is almost here][3] ]**
### Zeus Kerravala: 5G is being touted as the "next big thing." Do you see it that way?
**Jeff Lipton:** The next big thing is connecting "things" and generating actionable insights and context from those things. 5G is one of the technologies that serve this trend. Wi-Fi 6 is another — so are edge compute, Bluetooth Low Energy (BLE), artificial intelligence (AI) and machine learning (ML). These all are important, and they each have a place.
### Do you see 5G eclipsing Wi-Fi in the enterprise?
![Jeff Lipton, VP of strategy and corporate development, Aruba][4]
**Lipton:** No. 5G, like all cellular access, is appropriate if you need macro area coverage and high-speed handoffs. But it's not ideal for most enterprise applications, where you generally don't need these capabilities. From a performance standpoint, [Wi-Fi 6][5] and 5G are roughly equal on most metrics, including throughput, latency, reliability, and connection density. Where they aren't close is economics, where Wi-Fi is far better. I don't think many customers would be willing to trade Wi-Fi for 5G unless they need macro coverage or high-speed handoffs.
### Can Wi-Fi and 5G coexist? How would an enterprise use 5G and Wi-Fi together?
**Lipton:** Wi-Fi and 5G can and should be complementary. The 5G architecture decouples the cellular core and Radio Access Network (RAN). Consequently, Wi-Fi can be the enterprise radio front end and connect tightly with a 5G core. Since the economics of Wi-Fi — especially Wi-Fi 6 — are favorable and performance is extremely good, we envision many service providers using Wi-Fi as the radio front end for their 5G systems where it makes sense, as an alternative to Distributed Antenna (DAS) and small-cell systems.
"Wi-Fi and 5G can and should be complementary." — Jeff Lipton
### If a business were considering moving to 5G only, how would this be done and how practical is it?
**Lipton:** To use 5G for primary in-building access, a customer would need to upgrade their network and virtually all of their devices. 5G provides good coverage outdoors, but cellular signals can't reliably penetrate buildings. And this problem will become worse with 5G, which partially relies on higher frequency radios. So service providers will need a way to provide indoor coverage. To provide this coverage, they propose deploying DAS or small-cell systems — paid for by the end customer. The customers would then connect their devices directly to these cellular systems and pay a service component for each device.
**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]**
There are several problems with this approach. First, DAS and small-cell systems are significantly more expensive than Wi-Fi networks. And the cost doesn't stop with the network. Every device would need to have a 5G cellular modem, which costs tens of dollars wholesale and usually over a hundred dollars to an end user. Since few, if any, MacBooks, PCs, printers or AppleTVs today have 5G modems, these devices would need to be upgraded. I don't believe many enterprises would be willing to pay this additional cost and upgrade most of their equipment for an unclear benefit.
### Are economics a factor in the 5G versus Wi-Fi debate?
**Lipton:** Economics is always a factor. Let's focus the conversation on in-building enterprise applications, since this is the use case some carriers intend to target with 5G. We've already mentioned that upgrading to 5G would require enterprises to deploy expensive DAS or small-cell systems for in-building coverage, upgrade virtually all of their equipment to contain 5G modems, and pay service contracts for each of these devices. It's also important to understand 5G cellular networks and DAS systems operate over licensed spectrum, which is analogous to a private highway. Service providers paid billions of dollars for this spectrum, and this expense needs to be monetized and embedded in service costs. So, from both deployment and lifecycle perspectives, Wi-Fi economics are favorable to 5G.
### Are there any security implications of 5G versus Wi-Fi?
**Lipton:** Cellular technologies are perceived by some to be more secure than Wi-Fi, but that's not true. LTE is relatively secure, but it also has weak points. For example, LTE is vulnerable to a range of attacks, including data interception and device tracking, according to researchers at Purdue and the University of Iowa. 5G improves upon LTE security with multiple authentication methods and better key management.
Wi-Fi security isn't standing still either and continues to advance. Of course, Wi-Fi implementations that do not follow best practices, such as those without even basic password protection, are not optimal. But those configured with proper access controls and passwords are highly secure. With new standards — specifically, WPA3 and Enhanced Open — Wi-Fi network security has improved even further.
It's also important to keep in mind that enterprises have made enormous investments in security and compliance solutions tailored to their specific needs. With cellular networks, including 5G, enterprises lose the ability to deploy their chosen security and compliance solutions, as well as most visibility into traffic flows. While future versions of 5G will offer high levels of customization with a feature called network slicing, enterprises would still lose the level of security and compliance customization they currently need and have.
### Any parting thoughts to add to the discussion around 5G versus Wi-Fi?
**Lipton:** The debate around Wi-Fi versus 5G misses the point. They each have their place, and they are in many ways complementary. The Wi-Fi and 5G markets both will grow, driven by the need to connect and analyze a growing number of things. If a customer needs macro coverage or high-speed handoffs and can pay the additional cost for these capabilities, 5G makes sense.
5G also could be a fit for certain industrial use cases where customers require physical network segmentation. But for the vast majority of enterprise customers, Wi-Fi will continue to prove its value as a reliable, secure, and cost-effective wireless access technology, as it does today.
**More about 802.11ax (Wi-Fi 6):**
* [Why 802.11ax is the next big thing in wireless][7]
* [FAQ: 802.11ax Wi-Fi][8]
* [Wi-Fi 6 (802.11ax) is coming to a router near you][9]
* [Wi-Fi 6 with OFDMA opens a world of new wireless possibilities][10]
* [802.11ax preview: Access points and routers that support Wi-Fi 6 are on tap][11]
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3399978/5g-will-augment-wi-fi-not-replace-it.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/wireless_connection_speed_connectivity_bars_cell_tower_5g_by_thinkstock-100796921-large.jpg
[2]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[4]: https://images.idgesg.net/images/article/2019/06/headshot_jlipton_aruba-100798360-small.jpg
[5]: https://www.networkworld.com/article/3215907/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture
[7]: https://www.networkworld.com/article/3215907/mobile-wireless/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[8]: https://%20https//www.networkworld.com/article/3048196/mobile-wireless/faq-802-11ax-wi-fi.html
[9]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
[10]: https://www.networkworld.com/article/3332018/wi-fi/wi-fi-6-with-ofdma-opens-a-world-of-new-wireless-possibilities.html
[11]: https://www.networkworld.com/article/3309439/mobile-wireless/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data center workloads become more complex despite promises to the contrary)
[#]: via: (https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Data center workloads become more complex despite promises to the contrary
======
The data center is shouldering a greater burden than ever, despite promises of ease and the cloud.
![gorodenkoff / Getty Images][1]
Data centers are becoming more complex and still run the majority of workloads despite the promises of simplicity of deployment through automation and hyperconverged infrastructure (HCI), not to mention how the cloud was supposed to take over workloads.
That's the finding of the Uptime Institute's latest [annual global data center survey][2] (registration required). The majority of IT loads still run on enterprise data centers even in the face of cloud adoption, putting pressure on administrators to manage workloads across the hybrid infrastructure.
**[ Learn [how server disaggregation can boost data center efficiency][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
With workloads like artificial intelligence (AI) and machine learning coming to the forefront, facilities face greater power and cooling challenges, since AI is extremely processor-intensive. That puts strain on data center administrators and power and cooling vendors alike to keep up with the growth in demand.
On top of it all, everyone is struggling to get enough staff with the right skills.
### Outages, staffing problems, lack of public cloud visibility among top concerns
Among the key findings of Uptime's report:
* The large, privately owned enterprise data center facility still forms the bedrock of corporate IT and is expected to be running half of all workloads in 2021.
* The staffing problem affecting most of the data center sector has only worsened. Sixty-one percent of respondents said they had difficulty retaining or recruiting staff, up from 55% a year earlier.
* Outages continue to cause significant problems for operators. Just over a third (34%) of all respondents had an outage or severe IT service degradation in the past year, while half (50%) had an outage or severe IT service degradation in the past three years.
* Ten percent of all respondents said their most recent significant outage cost more than $1 million.
* A lack of visibility, transparency, and accountability of public cloud services is a major concern for enterprises that have mission-critical applications. A fifth of operators surveyed said they would be more likely to put workloads in a public cloud if there were more visibility. Half of those using public cloud for mission-critical applications also said they do not have adequate visibility.
  * Improvements in data center facility energy efficiency have flattened out and even deteriorated slightly in the past two years. The average PUE (power usage effectiveness) for 2019 is 1.67; see the short example after this list.
* Rack power density is rising after a long period of flat or minor increases, causing many to rethink cooling strategies.
  * Power loss was the single biggest cause of outages, accounting for one-third of outages. Sixty percent of respondents said their data center's outage could have been prevented with better management/processes or configuration.
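As a quick illustration of what that 1.67 average means, the snippet below computes PUE for a hypothetical facility; the wattage figures are made up purely to match the reported average.

```
# PUE (power usage effectiveness) = total facility energy / IT equipment energy.
# A value of 1.0 would mean every watt reaches IT gear; these numbers are
# hypothetical and chosen only to illustrate the 1.67 survey average.
total_facility_kw = 1670.0  # IT load plus cooling, power distribution losses, lighting
it_equipment_kw = 1000.0    # power that actually reaches servers, storage and network gear

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")   # 1.67 -> 0.67 kW of overhead for every kW of IT load
```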
Traditionally data centers are improving their reliability through "rigorous attention to power, infrastructure, connectivity and on-site IT replication," the Uptime report says. The solution, though, is pricy. Data center operators are getting distributed resiliency through active-active data centers where at least two active data centers replicate data to each other. Uptime found up to 40% of those surveyed were using this method.
The Uptime survey was conducted in March and April of this year, surveying 1,100 end users in more than 50 countries and dividing them into two groups: the IT managers, owners, and operators of data centers and the suppliers, designers, and consultants that service the industry.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400086/data-center-workloads-become-more-complex-despite-promises-to-the-contrary.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/cso_cloud_computing_backups_it_engineer_data_center_server_racks_connections_by_gorodenkoff_gettyimages-943065400_3x2_2400x1600-100796535-large.jpg
[2]: https://uptimeinstitute.com/2019-data-center-industry-survey-results
[3]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Moving to the Cloud? SD-WAN Matters! Part 2)
[#]: via: (https://www.networkworld.com/article/3398488/moving-to-the-cloud-sd-wan-matters-part-2.html)
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
Moving to the Cloud? SD-WAN Matters! Part 2
======
![istock][1]
This is the second installment of the blog series exploring how enterprises can realize the full transformation promise of the cloud by shifting to a business first networking model powered by a business-driven [SD-WAN][2]. The first installment explored automating secure IPsec connectivity and intelligently steering traffic to cloud providers. We also framed the direct correlation between moving to the cloud and adopting an SD-WAN. In this blog, we will expand upon several additional challenges that can be addressed with a business-driven SD-WAN when embracing the cloud:
### Simplifying and automating security zone-based segmentation
Securing cloud-first branches requires a robust multi-level approach that addresses the following considerations:
* Restricting outside traffic coming into the branch to sessions exclusively initiated by internal users with a built-in stateful firewall, avoiding appliance sprawl and lowering operational costs; this is referred to as the app whitelist model
* Encrypting communications between end points within the SD-WAN fabric and between branch locations and public cloud instances
* Service chaining traffic to a cloud-hosted security service like [Zscaler][3] for Layer 7 inspection and analytics for internet-bound traffic
* Segmenting traffic spanning the branch, WAN and data center/cloud
* Centralizing policy orchestration and automation of zone-based firewall, VLAN and WAN overlays
A traditional device-centric WAN approach for security segmentation requires the time-consuming manual configuration of routers and/or firewalls on a device-by-device and site-by-site basis. This is not only complex and cumbersome, but it simply can't scale to 100s or 1000s of sites. Anusha Vaidyanathan, director of product management at Silver Peak, explains how to automate end-to-end zone-based segmentation, emphasizing the advantages of a business-driven approach in this [lightboard video][4].
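To make that contrast concrete, the sketch below declares zone-to-zone rules once in a central policy and lets a small function expand them per site, which is the essence of centralized orchestration versus device-by-device configuration. The policy format, zone names and function are illustrative assumptions, not Silver Peak's (or any vendor's) actual syntax.

```
# Illustrative sketch of centrally orchestrated zone-based segmentation.
# The schema and names below are hypothetical, not a real vendor's syntax.
zones = ["guest-wifi", "pos-terminals", "corporate", "data-center"]

policy = [
    {"from": "guest-wifi",    "to": "data-center", "action": "deny"},
    {"from": "pos-terminals", "to": "data-center", "action": "allow",
     "service_chain": ["cloud-security"]},   # e.g. steer via a cloud-hosted security service
    {"from": "corporate",     "to": "data-center", "action": "allow"},
]

def rules_for_site(site: str) -> list:
    """Expand the single central policy into the rules one branch site needs."""
    return [{**rule, "site": site} for rule in policy]

# The orchestrator, not an engineer, repeats this for hundreds of sites.
for site in ["branch-001", "branch-002"]:
    for rule in rules_for_site(site):
        print(rule)
```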
### Delivering the Highest Quality of Experience to IT teams
The goal for enterprise IT is enabling business agility and increasing operational efficiency. The traditional router-centric WAN approach doesn't provide the best quality of experience for IT, as management and on-going network operations are manual and time consuming, device-centric, cumbersome, error-prone and inefficient.
A business-driven SD-WAN such as the Silver Peak [Unity EdgeConnect™][5] unified SD-WAN edge platform centralizes the orchestration of business-driven policies. EdgeConnect automation, machine learning and open APIs easily integrate with third-party management tools and real-time visibility tools to deliver the highest quality of experience for IT, enabling them to reclaim nights and weekends. Manav Mishra, vice president of product management at Silver Peak, explains the latest Silver Peak innovations in this [lightboard video][6].
As enterprises become increasingly dependent on the cloud and embrace a multi-cloud strategy, they must address a number of new challenges:
* A centralized approach to securely embracing the cloud and the internet
* How to extend the on-premise data center to a public cloud and migrating workloads between private and public cloud, taking application portability into account
* Deliver consistent high application performance and availability to hosted applications whether they reside in the data center, private or public clouds or are delivered as SaaS services
* A proactive way to quickly resolve complex issues that span the data center and cloud as well as multiple WAN transport services by harnessing the power of advanced visibility and analytics tools
The business-driven EdgeConnect SD-WAN edge platform enables enterprise IT organizations to easily and consistently embrace the public cloud. Unified security and performance capabilities with automation deliver the highest quality of experience for both users and IT while lowering overall WAN expenditures.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3398488/moving-to-the-cloud-sd-wan-matters-part-2.html
作者:[Rami Rammaha][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Rami-Rammaha/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/istock-909772962-100797711-large.jpg
[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
[3]: https://www.silver-peak.com/company/tech-partners/zscaler
[4]: https://www.silver-peak.com/resource-center/how-to-create-sd-wan-security-zones-in-edgeconnect
[5]: https://www.silver-peak.com/products/unity-edge-connect
[6]: https://www.silver-peak.com/resource-center/how-to-optimize-quality-of-experience-for-it-using-sd-wan

View File

@@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco will use AI/ML to boost intent-based networking)
[#]: via: (https://www.networkworld.com/article/3400382/cisco-will-use-aiml-to-boost-intent-based-networking.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco will use AI/ML to boost intent-based networking
======
Cisco explains how artificial intelligence and machine learning fit into a feedback loop that implements and maintains desired network conditions to optimize network performance for workloads using real-time data.
![xijian / Getty Images][1]
Artificial Intelligence and machine learning are expected to be some of the big topics at next week's Cisco Live event, and the company is already talking about how those technologies will help drive the next generation of [Intent-Based Networking][2].
“Artificial intelligence will change how we manage networks, and it's a change we need,” wrote John Apostolopoulos, Cisco CTO and vice president of Enterprise Networking, in a [blog][3] about how Cisco says these technologies impact the network.
**[ Now see [7 free network tools you must have][4]. ]**
AI is the next major step for networking capabilities, and while researchers have talked in the past about how great AI would be, now the compute power and algorithms exist to make it possible, Apostolopoulos told Network World.
To understand how AI and ML can boost IBN, Cisco says it's necessary to understand four key factors an IBN environment needs: infrastructure, translation, activation and assurance.
Infrastructure can be virtual or physical and include wireless access points, switches, routers, compute and storage. “To make the infrastructure do what we want, we use the translation function to convert the intent, or what we are trying to make the network accomplish, from a person or computer into the correct network and security policies. These policies then must be activated on the network,” Apostolopoulos said.
The activation step takes the network and security policies and couples them with a deep understanding of the network infrastructure that includes both real-time and historic data about its behavior. It then activates or automates the policies across all of the network infrastructure elements, ideally optimizing for performance, reliability and security, Apostolopoulos wrote.
Finally, assurance maintains a continuous validation-and-verification loop. IBN improves on translation and assurance to form a valuable feedback loop about what's going on in the network that wasn't available before.
Apostolopoulos used the example of an international company that wanted to set up a world-wide video all-hands meeting. Everyone on the call had to have high-quality, low-latency video, and also needed the capability to send high-quality video into the call when it was time for Q&A.
“By applying machine learning and related machine reasoning, assurance can also sift through the massive amount of data related to such a global event to correctly identify if there are any problems arising. We can then get solutions to these issues and even automatically apply solutions more quickly and more reliably than before,” Apostolopoulos said.
In this case, assurance could identify that the use of WAN bandwidth to certain sites is increasing at a rate that will saturate the network paths and could proactively reroute some of the WAN flows through alternative paths to prevent congestion from occurring, Apostolopoulos wrote.
“In prior systems, this problem would typically only be recognized after the bandwidth bottleneck occurred and users experienced a drop in call quality or even lost their connection to the meeting. It would be challenging or impossible to identify the issue in real time, much less to fix it before it distracted from the experience of the meeting. Accurate and fast identification through ML and MR coupled with intelligent automation through the feedback loop is key to successful outcome.”
Apostolopoulos said AI can accelerate the path from intent into translation and activation and then examine network and behavior data in the assurance step to make sure everything is working correctly. Activation uses the insights to drive more intelligent actions for improved performance, reliability and security, creating a cycle of network optimization.
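The cycle Apostolopoulos describes can be summarized in a short conceptual sketch. This is not Cisco's API or DNA Center code; every function, metric and threshold below is a made-up illustration of the translate, activate, assure feedback loop.

```
# Conceptual sketch of an intent-based networking feedback loop.
# All names, policies and thresholds are illustrative assumptions.

def translate(intent: str) -> dict:
    """Convert a human-readable intent into network and security policies."""
    if "video" in intent:
        return {"qos_class": "video-priority", "min_bandwidth_mbps": 5}
    return {}

def activate(policies: dict, devices: list) -> None:
    """Push the policies to every infrastructure element."""
    for device in devices:
        print(f"applying {policies} to {device}")

def assure(telemetry: dict) -> list:
    """Validate observed behaviour against the intent and suggest remediations."""
    issues = []
    if telemetry["wan_utilization"] > 0.9:
        issues.append("reroute some flows to an alternate WAN path")
    return issues

devices = ["switch-1", "ap-7", "wan-router-2"]
policies = translate("global all-hands video meeting, low latency")
activate(policies, devices)

telemetry = {"wan_utilization": 0.93}   # gathered continuously from the network
for action in assure(telemetry):        # assurance feeds back into activation
    print(f"remediation: {action}")
```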
So what might an implementation of this look like? Applications that run on Cisco's DNA Center may be the central component in an IBN environment. Introduced in 2017 as the heart of its IBN initiative, [Cisco DNA Center][5] features automation capabilities, assurance setting, fabric provisioning and policy-based segmentation for enterprise networks.
“DNA Center can bring together AI and ML in a unified manner,” Apostolopoulos said. “It can store data from across the network and then customers can do AI and ML on that data.”
Central to Cisco's push is being able to gather metadata about traffic as it passes without slowing the traffic, which is accomplished through the use of ASICs in its campus and data-center switches.
“We have designed our networking gear from the ASIC, OS and software levels to gather key data via our IBN architecture, which provides unified data collection and performs algorithmic analysis across the entire network (wired, wireless, LAN, WAN, datacenter),” Apostolopoulos said. “We have a massive collection of network data, including a database of problems and associated root causes, from being the world's top enterprise network vendor over the past 20-plus years. And we have been investing for many years to create innovative network-data analysis and ML, MR, and other AI techniques to identify and solve key problems.”
Machine learning and AI can then be applied to all that data to help network operators handle everything from policy setting and network control to security.
“I also want to stress that the feedback the IT user gets from the IBN system with AI is not overwhelming telemetry data,” Apostolopoulos said. Instead it is valuable and actionable insights at scale, derived from immense data and behavioral analytics using AI.
Managing and developing new AI/ML-based applications from enormous data sets beyond what Cisco already has is a key driver behind the companys Unified Compute System (UCS) server that was rolled out last September. While the new server, the UCS C480 ML, is powerful (it includes eight Nvidia Tesla V100-32G GPUs with 128GB of DDR4 RAM, 24 SATA hard drives and more), it is the ecosystem of vendors (Cloudera, HortonWorks and others) that will end up being more important.
[Earlier this year Cisco forecast][6] that [AI and ML][7] will significantly boost network management this year.
“In 2019, companies will start to adopt Artificial Intelligence, in particular Machine Learning, to analyze the telemetry coming off networks to see these patterns, in an attempt to get ahead of issues from performance optimization, to financial efficiency, to security,” said [Anand Oswal][8], senior vice president of engineering in Ciscos Enterprise Networking Business. The pattern-matching capabilities of ML will be used to spot anomalies in network behavior that might otherwise be missed, while also de-prioritizing alerts that otherwise nag network operators but that arent critical, Oswal said.
“We will also start to use these tools to categorize and cluster device and user types, which can help us create profiles for use cases as well as spot outlier activities that could indicate security incursions,” he said.
The first application of AI in network management will be smarter alerts that simply report on activities that break normal patterns, but as the technology advances it will react to more situations autonomously. The idea is to give customers more information so they and the systems can make better network decisions. Workable tools should appear later in 2019, Oswal said.
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400382/cisco-will-use-aiml-to-boost-intent-based-networking.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/ai-vendor-relationship-management_bar-code_purple_artificial-intelligence_hand-on-virtual-screen-100795252-large.jpg
[2]: http://www.networkworld.com/cms/article/3202699
[3]: https://blogs.cisco.com/enterprise/improving-networks-with-ai
[4]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html
[5]: https://www.networkworld.com/article/3280988/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html
[6]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html
[7]: https://www.networkworld.com/article/3320978/data-center/network-operations-a-new-role-for-ai-and-ml.html
[8]: https://blogs.cisco.com/author/anandoswal
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloud adoption drives the evolution of application delivery controllers)
[#]: via: (https://www.networkworld.com/article/3400897/cloud-adoption-drives-the-evolution-of-application-delivery-controllers.html)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
Cloud adoption drives the evolution of application delivery controllers
======
Application delivery controllers (ADCs) are on the precipice of shifting from traditional hardware appliances to software form factors.
![Aramyan / Getty Images / Microsoft][1]
Migrating to a cloud computing model will obviously have an impact on the infrastructure thats deployed. This shift has already been seen in the areas of servers, storage, and networking, as those technologies have evolved to a “software-defined” model. And it appears that application delivery controllers (ADCs) are on the precipice of a similar shift.
In fact, a new ZK Research [study about cloud computing adoption and the impact on ADCs][2] found that, when looking at the deployment model, hardware appliances are the most widely deployed — with 55% having fully deployed or currently testing them and only 15% currently researching hardware. (Note: I am an employee of ZK Research.)
Juxtapose this with containerized ADCs, where only 34% have deployed or are testing but 24% are currently researching, and it shows that software in containers will outpace hardware for growth. Not surprisingly, software on bare metal and in virtual machines showed similar, although lower, “researching” numbers that support the thesis that the market is undergoing a shift from hardware to software.
**[ Read also:[How to make hybrid cloud work][3] ]**
The study, conducted in collaboration with Kemp Technologies, surveyed 203 respondents from the U.K. and U.S. The demographic split was done to understand regional differences. An equal number of midsize and large enterprises were surveyed, with 44% coming from companies with over 5,000 employees and the other 56% from companies with 300 to 5,000 people.
### Incumbency helps but isnt a fait accompli for future ADC purchases
The primary tenet of my research has always been that incumbents are threatened when markets transition, and this is something I wanted to investigate in the study. The survey asked whether buyers would consider an alternative as they evolve their applications from legacy (mode 1) to cloud-native (mode 2). The results offer a bit of good news and bad news for the incumbent providers. Only 8% said they would definitely select a new vendor, but 35% said they would not change. That means the other 57% will look at alternatives. This is sensible, as the requirements for cloud ADCs are different than ones that support traditional applications.
### IT pros want better automation capabilities
This raises the question of what features ADC buyers want for a cloud environment versus traditional ones. The survey asked specifically what features would be most appealing in future purchases, and the top response was automation, followed by central management, application analytics, on-demand scaling (which is a form of automation), and visibility.
The desire to automate was a positive sign for the evolution of buyer mindset. Just a few years ago, the mere mention of automation would have sent IT pros into a panic. The reality is that IT cant operate effectively without automation, and technology professionals are starting to understand that.
The reason automation is needed is that manual changes are holding businesses back. The survey asked how the speed of ADC changes impacts the speed at which applications are rolled out, and a whopping 60% said it creates significant or minor delays. In an era of DevOps and continuous innovation, multiple minor delays create a drag on the business and can cause it to fall behind its more agile competitors.
![][4]
### ADC upgrades and service provisioning benefit most from automation
The survey also drilled down on specific ADC tasks to see where automation would have the most impact. Respondents were asked how long certain tasks took, answering in minutes, days, weeks, or months. Shockingly, there wasnt a single task where the majority said it could be done in minutes. The closest was adding DNS entries for new virtual IP addresses (VIPs) where 46% said they could do that in minutes.
Upgrading, provisioning new load balancers, and provisioning new VIPs took the longest. Looking ahead, this foreshadows big problems. As the data center gets more disaggregated and distributed, IT will deploy more software-based ADCs in more places. Taking days, weeks, or months to perform these functions will cause the organization to fall behind.
The study clearly shows changes are in the air for the ADC market. For IT pros, I strongly recommend that as the environment shifts to the cloud, its prudent to evaluate new vendors. By all means, see what your incumbent vendor has, but look at least at two others that offer software-based solutions. Also, there should be a focus on automating as much as possible, so the primary evaluation criteria for ADCs should be how easy it is to implement automation.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400897/cloud-adoption-drives-the-evolution-of-application-delivery-controllers.html
作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/05/cw_microsoft_sharepoint_vs_onedrive_clouds_and_hands_by_aramyan_gettyimages-909772962_2400x1600-100796932-large.jpg
[2]: https://kemptechnologies.com/research-papers/adc-market-research-study-zeus-kerravala/?utm_source=zkresearch&utm_medium=referral&utm_campaign=zkresearch&utm_term=zkresearch&utm_content=zkresearch
[3]: https://www.networkworld.com/article/3119362/hybrid-cloud/how-to-make-hybrid-cloud-work.html#tk.nww-fsb
[4]: https://images.idgesg.net/images/article/2019/06/adc-survey-zk-research-100798593-large.jpg
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (For enterprise storage, persistent memory is here to stay)
[#]: via: (https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html)
[#]: author: (John Edwards )
For enterprise storage, persistent memory is here to stay
======
Persistent memory, also known as storage class memory, has tantalized data center operators for many years. A new technology promises the key to success.
![Thinkstock][1]
It's hard to remember a time when semiconductor vendors haven't promised a fast, cost-effective and reliable persistent memory technology to anxious [data center][2] operators. Now, after many years of waiting and disappointment, technology may have finally caught up with the hype to make persistent memory a practical proposition.
High-capacity persistent memory, also known as storage class memory ([SCM][3]), is fast and directly addressable like dynamic random-access memory (DRAM), yet is able to retain stored data even after its power has been switched off—intentionally or unintentionally. The technology can be used in data centers to replace cheaper, yet far slower traditional persistent storage components, such as [hard disk drives][4] (HDD) and [solid-state drives][5] (SSD).
**Learn more about enterprise storage**
* [Why NVMe over Fabric matters][6]
* [What is hyperconvergence?][7]
* [How NVMe is changing enterprise storage][8]
* [Making the right hyperconvergence choice: HCI hardware or software?][9]
Persistent memory can also be used to replace DRAM itself in some situations without imposing a significant speed penalty. In this role, persistent memory can deliver crucial operational benefits, such as lightning-fast database-server restarts during maintenance, power emergencies and other expected and unanticipated reboot situations.
Many different types of strategic operational applications and databases, particularly those that require low latency, high durability, and strong data consistency, can benefit from persistent memory. The technology also has the potential to accelerate virtual machine (VM) storage and deliver higher performance to multi-node, distributed-cloud applications.
In a sense, persistent memory marks a rebirth of core memory. "Computers in the 50s to 70s used magnetic core memory, which was direct access, non-volatile memory," says Doug Wong, a senior member of [Toshiba Memory America's][10] technical staff. "Magnetic core memory was displaced by SRAM and DRAM, which are both volatile semiconductor memories."
One of the first persistent memory devices to come to market is [Intels Optane DC][11]. Other vendors that have released persistent memory products or are planning to do so include [Samsung][12], Toshiba America Memory and [SK Hynix][13].
### Persistent memory: performance + reliability
With persistent memory, data centers have a unique opportunity to gain faster performance and lower latency without enduring massive technology disruption. "It's faster than regular solid-state NAND flash-type storage, but you're also getting the benefit that its persistent," says Greg Schulz, a senior advisory analyst at vendor-independent storage advisory firm [StorageIO.][14] "It's the best of both worlds."
Yet persistent memory offers adopters much more than speedy, reliable storage. In an ideal IT world, all of the data associated with an application would reside within DRAM to achieve maximum performance. "This is currently not practical due to limited DRAM and the fact that DRAM is volatile—data is lost when power fails," observes Scott Nelson, senior vice president and general manager of Toshiba Memory America's memory business unit.
Persistent memory transports compatible applications to an "always on" status, providing continuous access to large datasets through increased system memory capacity, says Kristie Mann, [Intels][15] director of marketing for data center memory and storage. She notes that Optane DC can supply data centers with up to three times more system memory capacity (as much as 36TB), system restarts in seconds versus minutes, 36% more virtual machines per node, and up to eight times better performance on [Apache Spark][16], a widely used open-source distributed general-purpose cluster-computing framework.
System memory currently represents 60% of total platform costs, Mann says. She observes that Optane DC persistent memory provides significant customer value by delivering 1.2x performance/dollar on key customer workloads. "This value will dramatically change memory/storage economics and accelerate the data-centric era," she predicts.
### Where will persistent memory infiltrate enterprise storage?
Persistent memory is likely to first enter the IT mainstream with minimal fanfare, serving as a high-performance caching layer for high-performance SSDs. "This could be adopted relatively quickly," Nelson observes. Yet this intermediary role promises to be merely a stepping-stone to increasingly crucial applications.
Over the next few years, persistent-memory technology will impact data centers serving enterprises across an array of sectors. "Anywhere time is money," Schulz says. "It could be financial services, but it could also be consumer-facing or sales-facing operations."
Persistent memory supercharges anything data-related that requires extreme speed at extreme scale, observes Andrew Gooding, vice president of engineering at [Aerospike][17], which delivered the first commercially available open database optimized for use with Intel Optane DC.
Machine learning is just one of many applications that stand to benefit from persistent memory. Gooding notes that ad tech firms, which rely on machine learning to understand consumers' reactions to online advertising campaigns, should find their work made much easier and more effective by persistent memory. "Theyre collecting information as users within an ad campaign browse the web," he says. "If they can read and write all that data quickly, they can then apply machine-learning algorithms and tailor specific ads for users in real time."
Meanwhile, as automakers become increasingly reliant on data insights, persistent memory promises to help them crunch numbers and refine sophisticated new technologies at breakneck speeds. "In the auto industry, manufacturers face massive data challenges in autonomous vehicles, where 20 exabytes of data needs to be processed in real time, and they're using self-training machine-learning algorithms to help with that," Gooding explains. "There are so many fields where huge amounts of data need to be processed quickly with machine-learning techniques—fraud detection, astronomy... the list goes on."
Intel, like other persistent memory vendors, expects cloud service providers to be eager adopters, targeting various types of in-memory database services. Google, for example, is applying persistent memory to big data workloads on non-relational databases from vendors such as Aerospike and [Redis Labs][18], Mann says.
High-performance computing (HPC) is yet another area where persistent memory promises to make a tremendous impact. [CERN][19], the European Organization for Nuclear Research, is using Intel's Optane DC to significantly reduce wait times for scientific computing. "The efficiency of their algorithms depends on ... persistent memory, and CERN considers it a major breakthrough that is necessary to the work they are doing," Mann observes.
### How to prepare storage infrastructure for persistent memory
Before jumping onto the persistent memory bandwagon, organizations need to carefully scrutinize their IT infrastructure to determine the precise locations of any existing data bottlenecks. This task will be primarily application-dependent, Wong notes. "If there is significant performance degradation due to delays associated with access to data stored in non-volatile storage—SSD or HDD—then an SCM tier will improve performance," he explains. Yet some applications will probably not benefit from persistent memory, such as compute-bound applications where CPU performance is the bottleneck.
Developers may need to reevaluate fundamental parts of their storage and application architectures, Gooding says. "They will need to know how to program with persistent memory," he notes. "How, for example, to make sure writes are flushed to the actual persistent memory device when necessary, as opposed to just sitting in the CPU cache."
To leverage all of persistent memory's potential benefits, significant changes may also be required in how code is designed. When moving applications from DRAM and flash to persistent memory, developers will need to consider, for instance, what happens when a program crashes and restarts. "Right now, if they write code that leaks memory, that leaked memory is recovered on restart," Gooding explains. With persistent memory, that isn't necessarily the case. "Developers need to make sure the code is designed to reconstruct a consistent state when a program restarts," he notes. "You may not realize how much your designs rely on the traditional combination of fast volatile DRAM and block storage, so it can be tricky to change your code designs for something completely new like persistent memory."
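As a rough illustration of the flush-on-write discipline Gooding describes, here is a minimal C sketch using PMDKs libpmem API; the library choice, the `/mnt/pmem/example` path and the buffer size are assumptions made for the example, not anything the article prescribes:

```
/* Minimal sketch: explicitly flushing a store to persistent memory with
 * PMDK's libpmem (an assumed toolkit; the article does not name one).
 * Build with: cc flush_sketch.c -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on a DAX-mounted persistent-memory filesystem (path assumed). */
    char *buf = pmem_map_file("/mnt/pmem/example", 4096, PMEM_FILE_CREATE,
                              0666, &mapped_len, &is_pmem);
    if (buf == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    strcpy(buf, "hello, persistent world");

    /* The store above may still sit in CPU caches; it only becomes durable
     * once it is explicitly flushed to the media. */
    if (is_pmem)
        pmem_persist(buf, mapped_len);   /* cache-line flushes plus a fence */
    else
        pmem_msync(buf, mapped_len);     /* regular file mapping: fall back to msync */

    pmem_unmap(buf, mapped_len);
    return 0;
}
```

The crash-consistency concerns mentioned above start exactly here: anything the program treats as committed has to be ordered around calls like `pmem_persist()`, because a restart will see whichever subset of stores happened to reach the media.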
Older versions of operating systems may also need to be updated to accommodate the new technology, although newer OSes are gradually becoming persistent memory aware, Schulz says. "In other words, if they detect that persistent memory is available, then they know how to utilize that either as a cache, or some other memory."
Hypervisors, such as [Hyper-V][20] and [VMware][21], now know how to leverage persistent memory to support productivity, performance and rapid restarts. By utilizing persistent memory along with the latest versions of VMware, a whole system can see an uplift in speed and also maximize the number of VMs to fit on a single host, says Ian McClarty, CEO and president of data center operator [PhoenixNAP Global IT Services][22]. "This is a great use case for companies who want to own less hardware or service providers who want to maximize hardware to virtual machine deployments."
Many key enterprise applications, particularly databases, are also becoming persistent memory aware. SQL Server and [SAPs][23] flagship [HANA][24] database management platform have both embraced persistent memory. "The SAP HANA platform is commonly used across multiple industries to process data and transactions, and then run advanced analytics ... to deliver real-time insights," Mann observes.
In terms of timing, enterprises and IT organizations should begin persistent memory planning immediately, Schulz recommends. "You should be talking with your vendors and understanding their roadmap, their plans, for not only supporting this technology, but also in what mode: as storage, as memory."
Join the Network World communities on [Facebook][25] and [LinkedIn][26] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3398988/for-enterprise-storage-persistent-memory-is-here-to-stay.html
作者:[John Edwards][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/08/file_folder_storage_sharing_thinkstock_477492571_3x2-100732889-large.jpg
[2]: https://www.networkworld.com/article/3353637/the-data-center-is-being-reimagined-not-disappearing.html
[3]: https://www.networkworld.com/article/3026720/the-next-generation-of-storage-disruption-storage-class-memory.html
[4]: https://www.networkworld.com/article/2159948/hard-disk-drives-vs--solid-state-drives--are-ssds-finally-worth-the-money-.html
[5]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
[6]: https://www.networkworld.com/article/3273583/why-nvme-over-fabric-matters.html
[7]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence
[8]: https://www.networkworld.com/article/3280991/what-is-nvme-and-how-is-it-changing-enterprise-storage.html
[9]: https://www.networkworld.com/article/3318683/making-the-right-hyperconvergence-choice-hci-hardware-or-software
[10]: https://business.toshiba-memory.com/en-us/top.html
[11]: https://www.intel.com/content/www/us/en/architecture-and-technology/optane-dc-persistent-memory.html
[12]: https://www.samsung.com/semiconductor/
[13]: https://www.skhynix.com/eng/index.jsp
[14]: https://storageio.com/
[15]: https://www.intel.com/content/www/us/en/homepage.html
[16]: https://spark.apache.org/
[17]: https://www.aerospike.com/
[18]: https://redislabs.com/
[19]: https://home.cern/
[20]: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/
[21]: https://www.vmware.com/
[22]: https://phoenixnap.com/
[23]: https://www.sap.com/index.html
[24]: https://www.sap.com/products/hana.html
[25]: https://www.facebook.com/NetworkWorld/
[26]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,89 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Juniper: Security could help drive interest in SDN)
[#]: via: (https://www.networkworld.com/article/3400739/juniper-sdn-snapshot-finds-security-legacy-network-tech-impacts-core-network-changes.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Juniper: Security could help drive interest in SDN
======
Juniper finds that enterprise interest in software-defined networking (SDN) is influenced by other factors, including artificial intelligence (AI) and machine learning (ML).
![monsitj / Getty Images][1]
Security challenges and developing artificial intelligence/machine learning (AI/ML) technologies are among the key issues driving [software-defined networking][2] (SDN) implementations, according to a new Juniper survey of 500 IT decision makers.
And SDN interest abounds: 98% of the 500 said they were already using or considering an SDN implementation. Juniper said it had [Wakefield Research][3] poll IT decision makers of companies with 500 or more employees about their SDN strategies between May 7 and May 14, 2019.
**More about SD-WAN**
* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][4]
* [How to pick an off-site data-backup method][5]
* [SD-Branch: What it is and why youll need it][6]
* [What are the options for security SD-WAN?][7]
SDN includes technologies that separate the network control plane from the forwarding plane to enable more automated provisioning and policy-based management of network resources.
IDC estimates that the worldwide data-center SDN market will be worth more than $12 billion in 2022, recording a CAGR of 18.5% during the 2017-2022 period. The market generated revenue of nearly $5.15 billion in 2017, up more than 32.2% from 2016.
There are many ideas driving the development of SDN. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources from the data center to the campus or wide area network.
While the evolution of SDN is ongoing, Junipers study pointed out an issue that was perhaps not unexpected: many users are still managing operations via the command line interface (CLI). CLI is the primary text-based user interface used for configuring, monitoring and maintaining most networked devices.
“If SDN is as attractive as it is, then why manage the network with the same legacy technology of the past?” said Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks. “If you deploy SDN and dont adjust the operational model, then it is difficult to reap all the benefits SDN can bring. Its the difference between managing devices individually, which you may have done in the past, and managing fleets of devices via SDN; it simplifies and reduces operational expenses.”
Juniper pointed to a [Gartner prediction][8] that stated “by 2020, only 30% of network operations teams will use the command line interface (CLI) as their primary interface, down from 85% at year-end 2016.” Gartner stated that poll results from a recent Gartner conference found some 71% still using CLI as the primary way to make network changes.
Gartner [wrote][9] in the past that CLI has remained the primary operational tool for mainstream network operations teams for easily the past 15-20 years but that “moving away from the CLI is a good thing for the networking industry, and while it wont disappear completely (advanced/nuanced troubleshooting for example), it will be supplanted as the main interface into networking infrastructure.”
Junipers study found that 87% of businesses are still doing most or some of their network management at the device level.
What all of this shows is that customers are obviously interested in SDN but are still grappling with the best ways to get there, Bushong said.
The Juniper study also found users interested in SDN because of the potential for a security boost.
SDN can enable a variety of security benefits. A customer can split up a network connection between an end user and the data center and have different security settings for the various types of network traffic. A network could have one public-facing, low-security network that does not touch any sensitive information. Another segment could have much more fine-grained remote-access control with software-based [firewall][10] and encryption policies on it, which allow sensitive data to traverse over it. SDN users can roll out security policies across the network from the data center to the edge much more rapidly than in traditional network environments.
“Many enterprises see security—not speed—as the biggest consequence of not making this transition in the next five years, with nearly 40 percent identifying the inability to quickly address new threats as one of their main concerns,” wrote Manoj Leelanivas, chief product officer at Juniper Networks, in a blog about the survey.
“SDN is not often associated with greater security but this makes sense when we remember this is an operational transformation. In security, the challenge lies not in identifying threats or creating solutions, but in applying these solutions to a fragmented network. Streamlining complex security operations, touching many different departments and managing multiple security solutions, is where a software-defined approach can provide the answer,” Leelanivas stated.
Some of the other key findings from Juniper included:
* **The future of AI** : The deployment of artificial intelligence is about changing the operational model, Bushong said. “The ability to more easily manage workflows over groups of devices and derive usable insights to help customers be more proactive rather than reactive is the direction we are moving. Everything will ultimately be AI-driven,” he said.
* **Automation** : While automation is often considered a threat, Juniper said its respondents see it positively within the context of SDN, with 38% reporting it will improve security and 25% that it will enhance their jobs by streamlining manual operations.
* **Flexibility** : Agility is the #1 benefit respondents considering SDN want to gain (48%), followed by improved reliability (43%) and greater simplicity (38%).
* **SD-WAN** : The majority, 54%, have rolled out or are in the process of rolling out SD-WAN, while an additional 34% have it under current consideration.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400739/juniper-sdn-snapshot-finds-security-legacy-network-tech-impacts-core-network-changes.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/sdn_software-defined-network_architecture-100791938-large.jpg
[2]: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html
[3]: https://www.wakefieldresearch.com/
[4]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[5]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[6]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[7]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[8]: https://blogs.gartner.com/andrew-lerner/2018/01/04/checking-in-on-the-death-of-the-cli/
[9]: https://blogs.gartner.com/andrew-lerner/2016/11/22/predicting-the-death-of-the-cli/
[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Self-learning sensor chips wont need networks)
[#]: via: (https://www.networkworld.com/article/3400659/self-learning-sensor-chips-wont-need-networks.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Self-learning sensor chips wont need networks
======
Scientists working on new, machine-learning networks aim to embed everything needed for artificial intelligence (AI) onto a processor, eliminating the need to transfer data to the cloud or computers.
![Jiraroj Praditcharoenkul / Getty Images][1]
Tiny, intelligent microelectronics should be used to perform as much sensor processing as possible on-chip rather than wasting resources by sending often un-needed, duplicated raw data to the cloud or computers. So say scientists behind new, machine-learning networks that aim to embed everything needed for artificial intelligence (AI) onto a processor.
“This opens the door for many new applications, starting from real-time evaluation of sensor data,” says [Fraunhofer Institute for Microelectronic Circuits and Systems][2] on its website. No delays sending unnecessary data onwards, along with speedy processing, means theoretically there is zero latency.
Plus, on-microprocessor, self-learning means the embedded, or sensor, devices can self-calibrate. They can even be “completely reconfigured to perform a totally different task afterwards,” the institute says. “An embedded system with different tasks is possible.”
**[ Also read:[What is edge computing?][3] and [How edge networking and IoT will reshape data centers][4] ]**
Much internet of things (IoT) data sent through networks is redundant and wastes resources: a temperature reading taken every 10 minutes, say, when the ambient temperature hasnt changed, is one example. In fact, one only needs to know when the temperature has changed, and maybe then only when thresholds have been met.
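As a rough sketch of that report-on-change idea, the filtering can live entirely on the device; the 0.5-degree threshold and the `send_reading()` uplink function below are illustrative assumptions, not something the article specifies:

```
/* Illustrative on-device filtering: forward a temperature reading only when
 * it differs from the last reported value by more than a threshold. */
#include <math.h>
#include <stdio.h>

#define THRESHOLD_C 0.5   /* assumed reporting threshold, in degrees Celsius */

static void send_reading(double temp_c)
{
    /* Stand-in for a real uplink; on a sensor this would drive a radio or bus. */
    printf("uplink: %.2f C\n", temp_c);
}

void on_sample(double temp_c)
{
    static double last_reported = -1000.0;   /* sentinel so the first sample is sent */

    if (fabs(temp_c - last_reported) > THRESHOLD_C) {
        send_reading(temp_c);
        last_reported = temp_c;
    }
    /* Otherwise the sample is dropped at the sensor and never crosses the network. */
}

int main(void)
{
    double samples[] = { 21.0, 21.1, 21.2, 22.0, 22.1 };
    for (int i = 0; i < 5; i++)
        on_sample(samples[i]);   /* only 21.0 and 22.0 are reported */
    return 0;
}
```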
### Neural network-on-sensor chip
The commercial German research organization says its developing a specific RISC-V microprocessor with a special hardware accelerator designed for a [brain-copying, artificial neural network (ANN) it has developed][5]. The architecture could ultimately be suitable for the condition-monitoring or predictive sensors of the kind we will likely see more of in the industrial internet of things (IIoT).
Key to Fraunhofer IMSs [Artificial Intelligence for Embedded Systems (AIfES)][6] is that the self-learning takes place at chip level rather than in the cloud or on a computer, and that it is independent of “connectivity towards a cloud or a powerful and resource-hungry processing entity.” But it still offers a “full AI mechanism, like independent learning.”
Its “decentralized AI,” says Fraunhofer IMS. “Its not focused towards big-data processing.”
Indeed, with these kinds of systems, no connection is actually required for the raw data, just for the post-analytical results, if indeed needed. Swarming can even replace that. Swarming lets sensors talk to one another, sharing relevant information without even getting a host network involved.
“It is possible to build a network from small and adaptive systems that share tasks among themselves,” Fraunhofer IMS says.
Other benefits in decentralized neural networks include that they can be more secure than the cloud. Because all processing takes place on the microprocessor, “no sensitive data needs to be transferred,” Fraunhofer IMS explains.
### Other edge computing research
The Fraunhofer researchers arent the only academics who believe entire networks become redundant with neuristor, brain-like AI chips. Binghamton University and Georgia Tech are working together on similar edge-oriented tech.
“The idea is we want to have these chips that can do all the functioning in the chip, rather than messages back and forth with some sort of large server,” Binghamton said on its website when [I wrote about the university's work last year][7].
One of the advantages of not needing a major communications link: not only do you avoid worrying about internet resilience, but you also save the energy that would be spent maintaining the link. Energy efficiency is an ambition in the sensor world — replacing batteries is time consuming, expensive, and sometimes, in the case of remote locations, extremely difficult.
Memory or storage for swaths of raw data awaiting transfer to be processed at a data center, or similar, doesnt have to be provided either — its been processed at the source, so it can be discarded.
**More about edge networking:**
* [How edge networking and IoT will reshape data centers][4]
* [Edge computing best practices][8]
* [How edge computing can help secure the IoT][9]
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3400659/self-learning-sensor-chips-wont-need-networks.html
作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_by_jiraroj_praditcharoenkul_gettyimages-902668940_2400x1600-100788458-large.jpg
[2]: https://www.ims.fraunhofer.de/en.html
[3]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[5]: https://www.ims.fraunhofer.de/en/Business_Units_and_Core_Competencies/Electronic_Assistance_Systems/News/AIfES-Artificial_Intelligence_for_Embedded_Systems.html
[6]: https://www.ims.fraunhofer.de/en/Business_Units_and_Core_Competencies/Electronic_Assistance_Systems/technologies/Artificial-Intelligence-for-Embedded-Systems-AIfES.html
[7]: https://www.networkworld.com/article/3326557/edge-chips-could-render-some-networks-useless.html
[8]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[9]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,53 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What to do when yesterdays technology wont meet todays support needs)
[#]: via: (https://www.networkworld.com/article/3399875/what-to-do-when-yesterday-s-technology-won-t-meet-today-s-support-needs.html)
[#]: author: (Anand Rajaram )
What to do when yesterdays technology wont meet todays support needs
======
![iStock][1]
You probably already know that end-user technology is exploding, and you are feeling the effects of it in your support organization every day. Remember when IT sanctioned and standardized every hardware and software instance in the workplace? Those days are long gone. Today, its the driving force of productivity that dictates what will or wont be used, and that can be hard on a support organization.
Whatever users need to do their jobs better, faster, more efficiently is what you are seeing come into the workplace. So naturally, thats what comes into your service desk too. Support organizations see all kinds of [devices, applications, systems, and equipment][2], and its adding a great deal of complexity and demand to keep up with. In fact, four of the top five factors causing support ticket volumes to rise are attributed to new and current technology.
To keep up with the steady [rise of tickets][3] and stay out in front of this surge, support organizations need to take a good, hard look at the processes and technologies they use. Yesterdays methods wont cut it. The landscape is simply changing too fast. Supporting todays users and getting them back to work fast requires an expanding set of skills and tools.
So where do you start with a new technology project? Just because a technology is new or hyped doesnt mean its right for your organization. Its important to understand your project goals and the experience you really want to create, and to match your technology choices to those goals. But dont go it alone. Talk to your teams. Get intimately familiar with how your support organization works today. Understand your customers needs at a deep level. And bring the right people to the table to cover:
* Business problem analysis: What existing business issue are stakeholders unhappy with?
* The impact of that problem: How does that issue justify making a change?
* Process automation analysis: What area(s) can technology help automate?
* Other solutions: Have you considered any other options besides technology?
With these questions answered, youre ready to entertain your technology options. Put together your “must-haves” in a requirements document and reach out to potential suppliers. During the initial information-gathering stage, assess if the supplier understands your goals and how their technology helps you meet them. To narrow the field, compare solutions side by side against your goals. Select the top two or three for more in-depth product demos before moving into product evaluations. By the time youre ready for implementation, you have empirical, practical knowledge of how the solution will perform against your business goals.
The key takeaway is this: Technology for technologys sake is just technology. But technology that drives business value is a solution. If you want a solution that drives results for your organization and your customers, its worth following a strategic selection process to match your goals with the best technology for the job.
For more insight, check out the [LogMeIn Rescue][4] and HDI webinar “[Technology and the Service Desk: Expanding Mission, Expanding Skills”][5].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3399875/what-to-do-when-yesterday-s-technology-won-t-meet-today-s-support-needs.html
作者:[Anand Rajaram][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/istock-1019006240-100798168-large.jpg
[2]: https://www.logmeinrescue.com/resources/datasheets/infographic-mobile-support-are-your-employees-getting-what-they-need?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
[3]: https://www.logmeinrescue.com/resources/analyst-reports/the-importance-of-remote-support-in-a-shift-left-world?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
[4]: https://www.logmeinrescue.com/?utm_source=idg%20media&utm_medium=display&utm_campaign=native&sfdc=
[5]: https://www.brighttalk.com/webcast/8855/312289?utm_source=LogMeIn7&utm_medium=brighttalk&utm_campaign=312289

View File

@ -1,109 +0,0 @@
Memories of writing a parser for man pages
======
I generally enjoy being bored, but sometimes enough is enough—that was the case on a Sunday afternoon in 2015 when I decided to start an open source project to overcome my boredom.
In my quest for ideas, I stumbled upon a request to build a [“Man page viewer built with web standards”][1] by [Mathias Bynens][2] and, without thinking too much, I started coding a man page parser in JavaScript, which, after a lot of back and forth, ended up being [Jroff][3].
Back then, I was familiar with manual pages as a concept and had used them a fair number of times, but that was all I knew; I had no idea how they were generated or whether there was a standard in place. Two years later, here are some thoughts on the matter.
### How man pages are written
The first thing that surprised me at the time was the notion that man pages, at their core, are just plain text files stored somewhere in the system (you can check these directories using the `manpath` command).
These files contain not only the documentation but also formatting information, using a typesetting system from the 1970s called `troff`.
> troff, and its GNU implementation groff, are programs that process a textual description of a document to produce typeset versions suitable for printing. **Its more “what you describe is what you get” rather than WYSIWYG.**
>
> — extracted from [troff.org][4]
If you are totally unfamiliar with typesetting formats, you can think of them as Markdown on steroids, but in exchange for the flexibility you have a more complex syntax:
![groff-compressor][5]
The `groff` file can be written manually, or generated from other formats such as Markdown, Latex, HTML, and so on with many different tools.
Why `groff` and man pages are tied together has to do with history: the format has [mutated over time][6], and its lineage is composed of a chain of similarly named programs: RUNOFF > roff > nroff > troff > groff.
But this doesnt necessarily mean that `groff` is strictly tied to man pages; its a general-purpose format that has been used to [write books][7] and even for [phototypesetting][8].
Moreover, its worth noting that `groff` can also call a postprocessor to convert its intermediate output to a final format, which is not necessarily ASCII for terminal display. Some of the supported formats are TeX DVI, HTML, Canon, HP LaserJet4-compatible, PostScript, UTF-8, and many more.
### Macros
Another cool feature of the format is its extensibility: you can write macros that enhance its basic functionality.
With the vast history of *nix systems, there are several macro packages that group useful macros together for specific functionalities, according to the output that you want to generate. Examples of macro packages are `man`, `mdoc`, `mom`, `ms`, `mm`, and the list goes on.
Manual pages are conventionally written using `man` and `mdoc`.
You can easily distinguish native `groff` commands from macros by the way standard `groff` packages capitalize their macro names. For `man`, each macros name is uppercased, like .PP, .TH, .SH, etc. For `mdoc`, only the first letter is uppercased: .Pp, .Dt, .Sh.
![groff-example][9]
### Challenges
Whether you are considering writing your own `groff` parser or are just curious, these are some of the problems that I found most challenging.
#### Context-sensitive grammar
Formally, `groff` has a context-free grammar. Unfortunately, since macros describe opaque bodies of tokens, the set of macros in a package may not itself implement a context-free grammar.
This kept me away (for good or bad) from the parser generators that were available at the time.
#### Nested macros
Most of the macros in `mdoc` are callable; this roughly means that macros can be used as arguments of other macros. For example, consider this (a toy expansion sketch follows the list):
* The macro `Fl` (Flag) adds a dash to its argument, so `Fl s` produces `-s`
* The macro `Ar` (Argument) provides facilities to define arguments
* The `Op` (Optional) macro wraps its argument in brackets, as this is the standard idiom to define something as optional.
* The following combination `.Op Fl s Ar file` produces `[-s file]` because `Op` macros can be nested.
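To make the nesting concrete, here is a toy expansion routine, written in C purely for illustration (jroff itself is JavaScript, and real `mdoc` handling is far more involved); it understands only `Op`, `Fl` and `Ar`, and has no error handling:

```
/* Toy illustration of why callable mdoc macros need recursive expansion:
 * a macro's arguments may themselves be macros. */
#include <stdio.h>
#include <string.h>

/* Expand tok[i..n) into out; returns the index where it stopped. */
static int expand(char **tok, int i, int n, char *out)
{
    while (i < n) {
        if (strcmp(tok[i], "Op") == 0) {          /* wrap the rest of the line in [] */
            strcat(out, "[");
            i = expand(tok, i + 1, n, out);
            strcat(out, "]");
        } else if (strcmp(tok[i], "Fl") == 0) {   /* glue a dash onto the next word */
            strcat(out, "-");
            strcat(out, tok[i + 1]);
            i += 2;
        } else if (strcmp(tok[i], "Ar") == 0) {   /* argument name, emitted as-is */
            strcat(out, tok[i + 1]);
            i += 2;
        } else {                                  /* plain word */
            strcat(out, tok[i]);
            i += 1;
        }
        if (i < n)
            strcat(out, " ");
    }
    return i;
}

int main(void)
{
    char *line[] = { "Op", "Fl", "s", "Ar", "file" };  /* .Op Fl s Ar file */
    char out[256] = "";

    expand(line, 0, 5, out);
    printf("%s\n", out);   /* prints: [-s file] */
    return 0;
}
```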
#### Lack of beginner-friendly resources
Something that really confused me was the lack of a canonical, well-defined, and clear source to look at. Theres a lot of information on the web that assumes so much about the reader that it takes time to grasp.
### Interesting macros
To wrap up, I will offer you a very short list of macros that I found interesting while developing jroff:
**man**
  * TH: when writing manual pages with `man` macros, your first line that is not a comment must be this macro; it accepts five parameters: title, section, date, source, and manual
* BI: bold alternating with italics (especially useful for function specifications)
* BR: bold alternating with Roman (especially useful for referring to other manual pages)
**mdoc**
  * .Dd, .Dt, .Os: similar to how the `man` macros require `.TH`, the `mdoc` macros require these three macros, in that particular order. Their initials stand for Document date, Document title, and Operating system.
  * .Bl, .It, .El: these three macros are used to create lists; their names are self-explanatory: Begin list, Item, and End list.
--------------------------------------------------------------------------------
via: https://monades.roperzh.com/memories-writing-parser-man-pages/
作者:[Roberto Dip][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://monades.roperzh.com
[1]:https://github.com/h5bp/lazyweb-requests/issues/114
[2]:https://mathiasbynens.be/
[3]:jroff
[4]:https://www.troff.org/
[5]:https://user-images.githubusercontent.com/4419992/37868021-2e74027c-2f7f-11e8-894b-80829ce39435.gif
[6]:https://manpages.bsd.lv/history.html
[7]:https://rkrishnan.org/posts/2016-03-07-how-is-gopl-typeset.html
[8]:https://en.wikipedia.org/wiki/Phototypesetting
[9]:https://user-images.githubusercontent.com/4419992/37866838-e602ad78-2f6e-11e8-97a9-2a4494c766ae.jpg

View File

@ -1,3 +1,5 @@
Translating by MjSeven
Manage your workstation with Ansible: Configure desktop settings
======

View File

@ -1,3 +1,4 @@
Translanting by robsean
BootISO A Simple Bash Script To Securely Create A Bootable USB Device From ISO File
======
Most of us (including me) very often create a bootable USB device from ISO file for OS installation.

View File

@ -1,185 +0,0 @@
Translating by robsean
Learning BASIC Like It's 1983
======
I was not yet alive in 1983. This is something that I occasionally regret. I am especially sorry that I did not experience the 8-bit computer era as it was happening, because I think the people that first encountered computers when they were relatively simple and constrained have a huge advantage over the rest of us.
Today, (almost) everyone knows how to use a computer, but very few people, even in the computing industry, grasp all of what is going on inside of any single machine. There are now [so many layers of software][1] doing so many different things that one struggles to identify the parts that are essential. In 1983, though, home computers were unsophisticated enough that a diligent person could learn how a particular computer worked through and through. That person is today probably less mystified than I am by all the abstractions that modern operating systems pile on top of the hardware. I expect that these layers of abstractions were easy to understand one by one as they were introduced; today, new programmers have to try to understand them all by working top to bottom and backward in time.
Many famous programmers, particularly in the video game industry, started programming games in childhood on 8-bit computers like the Apple II and the Commodore 64. John Romero, Richard Garriott, and Chris Roberts are all examples. Its easy to see how this happened. In the 8-bit computer era, many games were available only as printed BASIC listings in computer magazines and [books][2]. If you wanted to play one of those games, you had to type in the whole program by hand. Inevitably, you would get something wrong, so you would have to debug your program. By the time you got it working, you knew enough about how the program functioned to start modifying it yourself. If you were an avid gamer, you became a good programmer almost by necessity.
I also played computer games throughout my childhood. But the games I played came on CD-ROMs. I sometimes found myself having to google how to fix a crashing installer, which would involve editing the Windows Registry or something like that. This kind of minor troubleshooting may have made me comfortable enough with computers to consider studying computer science in college. But it never taught me anything crucial about how computers worked or how to control them.
Now, of course, I tell computers what to do for a living. All the same, I cant help feeling that I missed out on some fundamental insight afforded only to those that grew up programming simpler computers. What would it have been like to encounter computers for the first time in the early 1980s? How would that have been different from the experience of using a computer today?
This post is going to be a little different from the usual Two-Bit History post because Im going to try to imagine an answer to these questions.
### 1983
It was just last week that you saw [the Commodore 64 ad][3] on TV. Now that M*A*S*H was over, you were in the market for something new to do on Monday nights. This Commodore 64 thing looked even better than the Apple II that Rudys family had in their basement. Plus, the ad promised that the new computer would soon bring friends “knocking down” your door. You knew several people at school that would rather be hanging out at your house than Rudys anyway, if only they could play Zork there.
So you persuaded your parents to buy one. Your mother said that they would consider it only if having a home computer meant that you stayed away from the arcade. You reluctantly agreed. Your father thought he would start tracking the familys finances in MultiPlan, the spreadsheet program he had heard about, which is why the computer got put in the living room. A year later, though, you would be the only one still using it. You were finally allowed to put it on the desk in your bedroom, right under your Police poster.
(Your sister protested this decision, but it was 1983 and computers [werent for her][4].)
Dad picked it up from [ComputerLand][5] on the way home from work. The two of you laid the box down next to the TV and opened it. “WELCOME TO THE WORLD OF FRIENDLY COMPUTING,” said the packaging. Twenty minutes later, you werent convinced—the two of you were still trying to connect the Commodore to the TV set and wondering whether the TVs antenna cable was the 75-ohm or 300-ohm coax type. But eventually you were able to turn your TV to channel 3 and see a grainy, purple image.
![Commodore 64 startup screen][6]
`READY`, the computer reported. Your father pushed the computer toward you, indicating that you should be the first to give it a try. `HELLO`, you typed, carefully hunting for each letter. The computers response was baffling.
![Commodore 64 syntax error][7]
You tried typing in a few different words, but the response was always the same. Your father said that you had better read through the rest of the manual. That would be no mean feat—[the manual that came with the Commodore 64][8] was a small book. But that didnt bother you, because the introduction to the manual foreshadowed wonders.
The Commodore 64, it claimed, had “the most advanced picture maker in the microcomputer industry,” which would allow you “to design your own pictures in four different colors, just like the ones you see on arcade type video games.” The Commodore 64 also had “built-in music and sound effects that rival many well known music synthesizers.” All of these tools would be put in your hands, because the manual would walk you through it all:
> Just as important as all the available hardware is the fact that this USERS GUIDE will help you develop your understanding of computers. It wont tell you everything there is to know about computers, but it will refer you to a wide variety of publications for more detailed information about the topics presented. Commodore wants you to really enjoy your new COMMODORE 64. And to have fun, remember: programming is not the kind of thing you can learn in a day. Be patient with yourself as you go through the USERS GUIDE.
That night, in bed, you read through the entire first three chapters—”Setup,” “Getting Started,” and “Beginning BASIC Programming”—before finally succumbing to sleep with the manual splayed across your chest.
### Commodore BASIC
Now, its Saturday morning and youre eager to try out what youve learned. One of the first things the manual teaches you how to do is change the colors on the display. You follow the instructions, pressing `CTRL-9` to enter reverse type mode and then holding down the space bar to create long lines. You swap between colors using `CTRL-1` through `CTRL-8`, reveling in your sudden new power over the TV screen.
![Commodore 64 color bands][9]
As cool as this is, you realize it doesnt count as programming. In order to program the computer, you learned last night, you have to speak to it in a language called BASIC. To you, BASIC seems like something out of Star Wars, but BASIC is, by 1983, almost two decades old. It was invented by two Dartmouth professors, John Kemeny and Tom Kurtz, who wanted to make computing accessible to undergraduates in the social sciences and humanities. It was widely available on minicomputers and popular in college math classes. It then became standard on microcomputers after Bill Gates and Paul Allen wrote the MicroSoft BASIC interpreter for the Altair. But the manual doesnt explain any of this and you wont learn it for many years.
One of the first BASIC commands the manual suggests you try is the `PRINT` command. You type in `PRINT "COMMODORE 64"`, slowly, since it takes you a while to find the quotation mark symbol above the `2` key. You hit `RETURN` and this time, instead of complaining, the computer does exactly what you told it to do and displays “COMMODORE 64” on the next line.
Now you try using the `PRINT` command on all sorts of different things: two numbers added together, two numbers multiplied together, even several decimal numbers. You stop typing out `PRINT` and instead use `?`, since the manual has advised you that `?` is an abbreviation for `PRINT` often used by expert programmers. You feel like an expert already, but then you remember that you havent even made it to chapter three, “Beginning BASIC Programming.”
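Your screen ends up filled with little experiments along these lines (a reconstruction for illustration, not an exact transcript of that mornings typing):

```
PRINT "COMMODORE 64"
PRINT 128 + 64
PRINT 16 * 4
? 3.14 + 2.86
? 2 * 3.14159
```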
You get there soon enough. The chapter begins by prompting you to write your first real BASIC program. You type in `NEW` and hit `RETURN`, which gives you a clean slate. You then type your program in:
```
10 ?"COMMODORE 64"
20 GOTO 10
```
The 10 and the 20, the manual explains, are line numbers. They order the statements for the computer. They also allow the programmer to refer to other lines of the program in certain commands, just like youve done here with the `GOTO` command, which directs the program back to line 10. “It is good programming practice,” the manual opines, “to number lines in increments of 10—in case you need to insert some statements later on.”
You type `RUN` and stare as the screen clogs with “COMMODORE 64,” repeated over and over.
![Commodore 64 showing result of printing "Commodore 64" repeatedly][10]
Youre not certain that this isnt going to blow up your computer. It takes you a second to remember that you are supposed to hit the `RUN/STOP` key to break the loop.
The next few sections of the manual teach you about variables, which the manual tells you are like “a number of boxes within the computer that can each hold a number or a string of text characters.” Variables that end in a `%` symbol are whole numbers, while variables ending in a `$` symbol are strings of characters. All other variables are something called “floating point” variables. The manual warns you to be careful with variable names because only the first two letters of the name are actually recognized by the computer, even though nothing stops you from making a name as long as you want it to be. (This doesnt particularly bother you, but you could see how 30 years from now this might strike someone as completely insane.)
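To see the manuals point for yourself, you might type in a little throwaway program like this one (an illustration of the idea, not a listing from the manual):

```
10 A% = 7 : REM A WHOLE NUMBER VARIABLE
20 N$ = "COMMODORE" : REM A STRING VARIABLE
30 X = 3.14 : REM A FLOATING POINT VARIABLE
40 PRINT A%, N$, X
50 HIGH = 100 : HITS = 5
60 PRINT HIGH : REM PRINTS 5, NOT 100
```

When you `RUN` it, line 60 prints 5: as far as the computer is concerned, `HIGH` and `HITS` are the same variable, because only the leading `HI` counts.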
You then learn about the `IF... THEN...` and `FOR... NEXT...` constructs. With all these new tools, you feel equipped to tackle the next big challenge the manual throws at you. “If youre the ambitious type,” it goads, “type in the following program and see what happens.” The program is longer and more complicated than any you have seen so far, but youre dying to know what it does:
```
10 REM BOUNCING BALL
20 PRINT "{CLR/HOME}"
25 FOR X = 1 TO 10 : PRINT "{CRSR/DOWN}" : NEXT
30 FOR BL = 1 TO 40
40 PRINT " ●{CRSR LEFT}";:REM (● is a Shift-Q)
50 FOR TM = 1 TO 5
60 NEXT TM
70 NEXT BL
75 REM MOVE BALL RIGHT TO LEFT
80 FOR BL = 40 TO 1 STEP -1
90 PRINT " {CRSR LEFT}{CRSR LEFT}●{CRSR LEFT}";
100 FOR TM = 1 TO 5
110 NEXT TM
120 NEXT BL
130 GOTO 20
```
The program above takes advantage of one of the Commodore 64s coolest features. Non-printable command characters, when passed to the `PRINT` command as part of a string, just do the action they usually perform instead of printing to the screen. This allows you to replay arbitrary chains of commands by printing strings from within your programs.
It takes you a long time to type in the above program. You make several mistakes and have to re-enter some of the lines. But eventually you are able to type `RUN` and behold a masterpiece:
![Commodore 64 bouncing ball][11]
You think that this is a major contender for the coolest thing you have ever seen. You forget about it almost immediately though, because once youve learned about BASICs built-in functions like `RND` (which returns a random number) and `CHR$` (which returns the character matching a given number code), the manual shows you a program that many years from now will still be famous enough to be made the title of an [essay anthology][12]:
```
10 PRINT "{CLR/HOME}"
20 PRINT CHR$(205.5 + RND(1));
40 GOTO 20
```
When run, the above program produces a random maze:
![Commodore 64 maze program][13]
This is definitely the coolest thing you have ever seen.
### PEEK and POKE
Youve now made it through the first four chapters of the Commodore 64 manual, including the chapter titled “Advanced BASIC,” so youre feeling pretty proud of yourself. Youve learned a lot this Saturday morning. But this afternoon (after a quick lunch break), youre going to learn something that will make this magical machine in your living room much less mysterious.
The next chapter in the manual is titled “Advanced Color and Graphic Commands.” It starts off by revisiting the colored bars that you were able to type out first thing this morning and shows you how you can do the same thing from a program. It then teaches you how to change the background colors of the screen.
In order to do this, you need to use the BASIC `PEEK` and `POKE` commands. Those commands allow you to, respectively, examine and write to a memory address. The Commodore 64 has a main background color and a border color. Each is controlled by a specially designated memory address. You can write any color value you would like to those addresses to make the background or border that color.
The manual explains:
> Just as variables can be thought of as a representation of “boxes” within the machine where you placed your information, you can also think of some specially defined “boxes” within the computer that represent specific memory locations.
>
> The Commodore 64 looks at these memory locations to see what the screens background and border color should be, what characters are to be displayed on the screen—and where—and a host of other tasks.
You write a program to cycle through all the available combinations of background and border color:
```
10 FOR BA = 0 TO 15
20 FOR BO = 0 TO 15
30 POKE 53280, BA
40 POKE 53281, BO
50 FOR X = 1 TO 500 : NEXT X
60 NEXT BO : NEXT BA
```
While the `POKE` commands, with their big operands, looked intimidating at first, now you see that the actual value of the number doesnt matter that much. Obviously, you have to get the number right, but all the number represents is a “box” that Commodore just happened to store at address 53280. This box has a special purpose: Commodore uses it to determine what color the screens background should be.
![Commodore 64 changing background colors][14]
You think this is pretty neat. Just by writing to a special-purpose box in memory, you can control a fundamental property of the computer. You arent sure how the Commodore 64s circuitry takes the value you write in memory and changes the color of the screen, but youre okay not knowing that. At least you understand everything up to that point.
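Out of curiosity, you also try reading one of those boxes back, to convince yourself that `PEEK` really is the mirror image of `POKE` (a little experiment of your own, not one from the manual; 53280 is the same address the color program writes to):

```
POKE 53280, 2
PRINT PEEK(53280)
```

The number that comes back is just whatever is currently sitting in that box.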
### Special Boxes
You dont get through the entire manual that Saturday, since you are now starting to run out of steam. But you do eventually read all of it. In the process, you learn about many more of the Commodore 64s special-purpose boxes. There are boxes you can write to control what is on screen—one box, in fact, for every place a character might appear. In chapter six, “Sprite Graphics,” you learn about the special-purpose boxes that allow you to define images that can be moved around and even scaled up and down. In chapter seven, “Creating Sound,” you learn about the boxes you can write to in order to make your Commodore 64 sing “Michael Row the Boat Ashore.” The Commodore 64, it turns out, has very little in the way of what you would later learn is called an API. Controlling the Commodore 64 mostly involves writing to memory addresses that have been given special meaning by the circuitry.
The many years you ultimately spend writing to those special boxes stick with you. Even many decades later, when you find yourself programming a machine with an extensive graphics or sound API, you know that, behind the curtain, the API is ultimately writing to those boxes or something like them. You will sometimes wonder about younger programmers that have only ever used APIs, and wonder what they must think the API is doing for them. Maybe they think that the API is calling some other, hidden API. But then what do they think that hidden API is calling? You will pity those younger programmers, because they must be very confused indeed.
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][15] on Twitter or subscribe to the [RSS feed][16] to make sure you know when a new post is out.
Previously on TwoBitHistory…
> Have you ever wondered what a 19th-century computer program would look like translated into C?
>
> This week's post: A detailed look at how Ada Lovelace's famous program worked and what it was trying to do.<https://t.co/BizR2Zu7nt>
>
> — TwoBitHistory (@TwoBitHistory) [August 19, 2018][17]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2018/09/02/learning-basic.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/watch?v=kZRE7HIO3vk
[2]: https://en.wikipedia.org/wiki/BASIC_Computer_Games
[3]: https://www.youtube.com/watch?v=ZekAbt2o6Ms
[4]: https://www.npr.org/sections/money/2014/10/21/357629765/when-women-stopped-coding
[5]: https://www.youtube.com/watch?v=MA_XtT3VAVM
[6]: https://twobithistory.org/images/c64_startup.png
[7]: https://twobithistory.org/images/c64_error.png
[8]: ftp://www.zimmers.net/pub/cbm/c64/manuals/C64_Users_Guide.pdf
[9]: https://twobithistory.org/images/c64_colors.png
[10]: https://twobithistory.org/images/c64_print_loop.png
[11]: https://twobithistory.org/images/c64_ball.gif
[12]: http://10print.org/
[13]: https://twobithistory.org/images/c64_maze.gif
[14]: https://twobithistory.org/images/c64_background.gif
[15]: https://twitter.com/TwoBitHistory
[16]: https://twobithistory.org/feed.xml
[17]: https://twitter.com/TwoBitHistory/status/1030974776821665793?ref_src=twsrc%5Etfw

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (qfzy1233)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (luuming)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,173 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 Explaining Smart Contracts And Its Types [Part 5])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)
Blockchain 2.0 Explaining Smart Contracts And Its Types [Part 5]
======
![Explaining Smart Contracts And Its Types][1]
This is the 5th article in the **Blockchain 2.0** series. The previous article of this series explored how we can implement [**Blockchain in real estate**][2]. This post briefly explores the topic of **Smart Contracts** within the domain of blockchains and related technology. Smart contracts, which are basic protocols to verify and create new “blocks” of data on the blockchain, are touted to be a focal point for future developments and applications of the system. However, like all “cure-all” medications, they are not the answer to everything. We explore the concept from the basics to understand what “smart contracts” are and what theyre not.
### Evolving Contracts
The world is built on contracts. No individual or firm on earth can function in current society without the use and reuse of contracts. The task of creating, maintaining, and enforcing contracts has become so complicated that entire judicial and legal systems have had to be set up in the name of **“contract law”** to support it. Most contracts are in fact overseen by a “trusted” third party to make sure the stakeholders at both ends are taken care of as per the conditions agreed upon. There are contracts that even talk about a third-party beneficiary: such contracts are intended to have an effect on a third party who is not an active (or participating) party to the contract. Settling and arguing over contractual obligations takes up the bulk of the legal battles that most civil lawsuits involve. Surely a better way to take care of contracts would be a godsend for individuals and enterprises alike, not to mention the enormous paperwork it would save the government in the name of verifications and attestations[1][2].
Most posts in this series have looked at how existing blockchain tech is being leveraged today. In contrast, this post will be more about what to expect in the coming years. A natural discussion about “smart contracts” evolves from the property discussions presented in the previous post. The current post aims to provide an overview of the capabilities of blockchain to automate and carry out “smart” executable programs. Dealing with this issue pragmatically means well first have to define and explore what these “smart contracts” are and how they fit into the existing system of contracts. We look at major present-day applications and projects going on in the field in the next post, titled **“Blockchain 2.0: Ongoing Projects”**.
### Defining Smart Contracts
The [**first article of this series**][3] looked at blockchain from a fundamental point of view as a **“distributed ledger”** consisting of blocks of data that were:
* Tamper-proof
* Non-repudiable (Meaning every block of data is explicitly created by someone and that someone cannot deny any accountability of the same)
* Secure and resilient to traditional methods of cyber attack
* Almost permanent (of course this depends on the blockchain protocol overlay)
* Highly redundant: by existing on multiple network nodes or participant systems, the failure of any one of these nodes will not affect the capabilities of the system in any way, and,
* Offers faster processing depending on application.
Because every instance of data is securely stored and accessible with suitable credentials, a blockchain network can provide an easy basis for precise verification of facts and information without the need for third-party oversight. Blockchain 2.0 developments also allow for **“distributed applications”** (a term which well be looking at in detail in coming posts). Such distributed applications exist and run on the network as per requirements. Theyre called when a user needs them and executed by making use of information that has already been vetted and stored on the blockchain.
The last paragraph provides a foundation for defining smart contracts. _**The Chamber of Digital Commerce**_ provides a definition for smart contracts which many experts agree on.
_**“Computer code that, upon the occurrence of a specified condition or conditions, is capable of running automatically according to prespecified functions. The code can be stored and processed on a distributed ledger and would write any resulting change into the distributed ledger”[1].**_
Smart contracts are, as mentioned above, simple computer programs working like “if-then” or “if-else if” statements. The “smart” aspect comes from the fact that the predefined inputs for the program come from the blockchain ledger, which, as noted above, is a secure and reliable source of recorded information. The program can call upon external services or sources to get information as well, if need be, to verify the terms of operation, and will only execute once all the predefined conditions are met.
It has to be kept in mind that, unlike what the name implies, smart contracts are not usually autonomous entities, nor are they, strictly speaking, contracts. A very early mention of smart contracts was made by **Nick Szabo** in 1996, where he compared them to a vending machine accepting payment and delivering the product chosen by the user[3]. The full text can be accessed **[here][4]**. Furthermore, legal frameworks allowing the entry of smart contracts into mainstream contract use are still being developed, and as such the use of the technology is currently limited to areas where legal oversight is less explicit and stringent[4].
### Major types of smart contracts
Assuming the reader has a basic understanding of contracts and computer programming, and building on our definition of smart contracts, we can roughly classify smart contracts and protocols into the following major categories.
##### 1\. Smart legal contracts
These are presumably the most obvious kind. Most, if not all, contracts are legally enforceable. Without going into too many technicalities, a smart legal contract is one that involves strict legal recourse in case the parties involved were to not fulfill their end of the bargain. As previously mentioned, the current legal framework in different countries and contexts lacks sufficient support for smart and automated contracts on the blockchain, and their legal status is unclear. However, once the laws are made, smart contracts can be made to simplify processes which currently involve strict regulatory oversight, such as transactions in the financial and real estate markets, government subsidies, international trade, etc.
##### 2\. DAOs
**Decentralized Autonomous Organizations**, or DAOs for short, can be loosely defined as communities that exist on the blockchain. The community may be defined by a set of rules arrived at and put into code via smart contracts. Every action by every participant would then be subject to these sets of rules, with the task of enforcing them and providing recourse in case of a breach left to the program. Multitudes of smart contracts make up these rules, and they work in tandem, policing and watching over participants.
A DAO called the **Genesis DAO** was created by **Ethereum** participants in May of 2016. The community was meant to be a crowdfunding and venture capital platform. In a surprisingly short period of time they managed to raise an astounding **$150 million**. However, hacker(s) found loopholes in the system and managed to steal about **$50 million** worth of Ether from the crowdfund investors. The hack and its fallout resulted in a fork of the Ethereum blockchain into two, **Ethereum** and **Ethereum Classic** [5].
##### 3\. Application logic contracts (ALCs)
If youve heard about the internet of things in conjunction with the blockchain, chances are that the matter talked about **Application logic contracts**, or ALCs for short. Such smart contracts contain application-specific code that works in conjunction with other smart contracts and programs on the blockchain. They aid in communicating with, and validating communication between, devices (while in the domain of IoT). ALCs are a pivotal piece of every multi-function smart contract and almost always work under a managing program. They find applications everywhere in most examples cited here[6][7].
_Since development is ongoing in the area, any definition or standard, so to speak, will be fluid and vague at best for now._
### How smart contracts work
To simplify things, lets proceed by taking an example.
John and Peter are two individuals debating about the scores in a football match. They have conflicting views about the outcome with both of them supporting different teams (context). Since both of them need to go elsewhere and wont be able to finish the match then, John bets that team A will beat team B in the match and _offers_ Peter $100 in that case. Peter _considers_ and _accepts_ the bet while making it clear that they are _bound_ to the terms. However, neither of them _trusts_ each other to honour the bet and they dont have the time nor the money to appoint a _third party_ to oversee the same.
Assuming both John and Peter were to use a smart contract platform such as **[Etherparty][5]** to automatically settle the bet, theyll both, at the time of the contract negotiation, link their blockchain-based identities to the contract and set the terms, making it clear that as soon as the match is over, the program will find out who the winning side is and automatically credit the amount from the losers bank account to the winners. As soon as the match ends and media outlets report the result, the program will scour the internet for the prescribed sources, identify which team won, and relate it to the terms of the contract; in this case, since team B won, Peter gets the money from John, and after intimating both parties, the program transfers $100 from Johns account to Peters. After having executed, the smart contract will terminate and remain inactive for all time to come unless otherwise mentioned.
The simplicity of the example aside, the situation involved a classic contract (pay attention to the italicized words) and the participants chose to implement the same using a smart contract. All smart contracts basically work on a similar principle, with the program being coded to execute on predefined parameters and to spit out expected outputs only. The outside sources the smart contract consults for information are often referred to as the _oracle_ in the IT world. Oracles are a common part of many smart contract systems worldwide today.
The use of a smart contract in this situation allowed the participants the following benefits:
* It was faster than getting together and settling the bet manually.
* Removed the issue of trust from the equation.
* Eliminated the need for a trusted third party to handle the settlement on behalf of the parties involved.
* Cost nothing to execute.
* Is secure in how it handles parameters and sensitive data.
* The associated data will remain on the blockchain platform they ran it on permanently, and future bets can be placed by calling the same function and giving it added inputs.
* Gradually over time, assuming John and Peter develop gambling addictions, the program will help them develop reliable statistics to gauge their winning streaks.
Now that we know **what smart contracts are** and **how they work**, we still have yet to address **why we need them**.
### The need for smart contracts
As the previous example highlights, we need smart contracts for a variety of reasons.
##### Transparency
The terms and conditions involved are very clear to the counterparties. Furthermore, since the execution of the program or the smart contract involves certain explicit inputs, users have a very direct way of verifying the factors that would impact them and the contract beneficiaries.
##### Time Efficient
As mentioned, smart contracts go to work immediately once theyre triggered by a control variable or a user call. Since data is made available to the system instantaneously by the blockchain and from other sources in the network, the execution takes very little time to verify and process information and settle the transaction. Transferring land title deeds, for instance, a process which involves manual verification of tons of paperwork and normally takes weeks, can be processed in a matter of minutes or even seconds with the help of smart contract programs working to vet the documents and the parties involved.
##### Precision
Since the platform is basically just computer code and everything predefined, there can be no subjective errors and all the results will be precise and completely free of human errors.
##### Safety
An inherent feature of the blockchain is that every block of data is cryptographically encrypted. This means that even though the data is stored on a multitude of nodes on the network for redundancy, **only the owner of the data has access to see and use the data**. Similarly, all processes will be completely secure and tamper-proof, with the execution utilizing the blockchain for storing important variables and outcomes in the process. The same also simplifies auditing and regulatory affairs by providing auditors with a native, unchanged, and non-repudiable chronological version of the data.
##### Trust
The article series started by saying that blockchain adds a much-needed layer of trust to the internet and the services that run on it. The fact that smart contracts will under no circumstances show bias or subjectivity in executing the agreement means that parties involved are fully bound to the outcomes and can trust the system with no strings attached. This also means that the **“trusted third-party”** required in conventional contracts of significant value is not required here. Foul play between the parties involved and issues of oversight will be things of the past.
##### Cost effective
As highlighted in the example, utilizing a smart contract involves minimal costs. Enterprises usually have administrative staff who work exclusively to make sure that the transactions they undertake are legitimate and comply with regulations. If a deal involves multiple parties, duplication of this effort is unavoidable. Smart contracts essentially make the former irrelevant, and duplication is eliminated since both parties can have their due diligence done simultaneously.
### Applications of Smart Contracts
Basically, if two or more parties use a common blockchain platform and agree on a set of principles or business logic, they can come together to create a smart contract on the blockchain, and it is executed with no human intervention at all. No one can tamper with the conditions set, and any changes, if the original code allows for them, are timestamped and carry the editors fingerprint, increasing accountability. Imagine a similar situation at a much larger enterprise scale and you understand what smart contracts are capable of; in fact, a **Capgemini study** from 2016 found that smart contracts could actually be commercially mainstream **“in the early years of the next decade”** [8]. Commercial applications involve uses in insurance, financial markets, IoT, loans, identity management systems, escrow accounts, employment contracts, and patent & royalty contracts, among others. Platforms such as Ethereum, a blockchain designed with smart contracts in mind, allow individual private users to utilize smart contracts free of cost as well.
A more comprehensive overview of the applications of smart contracts on current technological problems will be presented in the next article of the series by exploring the companies that deal with it.
### So, what are the drawbacks?
This is not to say that smart contracts come with no concerns regarding their use whatsoever. Such concerns have actually slowed down development in this area as well. The tamper-proof nature of everything blockchain essentially makes it next to impossible to modify existing clauses or add new ones should the parties involved need to, without a major overhaul or legal recourse.
Secondly, even though activity on a public blockchain is open for all to see and observe, the personal identities of the parties involved in a transaction are not always known. This anonymity raises questions regarding legal impunity in case either party defaults, especially since current laws and lawmakers are not exactly accommodative of modern technology.
Thirdly, blockchains and smart contracts are still subject to security flaws in many ways because the technology, for all the interest in it, is still at a very nascent stage of development. This inexperience with the code and platform is what ultimately led to the DAO incident in 2016.
All of this is keeping aside the significant initial investment that might be needed in case an enterprise or firm needs to adopt a blockchain for its use. The fact that these are one-time initial investments that come with potential savings down the road, however, is what interests people.
### Conclusion
Current legal frameworks dont really support a full-on smart-contract-enabled society and wont in the near future due to obvious reasons. A solution is to opt for **“hybrid” contracts** that combine traditional legal texts and documents with smart contract code running on blockchains designed for the purpose[4]. However, even hybrid contracts remain largely unexplored, as innovative legislation is required to bring them to fruition. The applications briefly mentioned here and many more are explored in detail in the [**next post of the series**][6].
**References:**
* **[1] S. C. A. Chamber of Digital Commerce, “Smart contracts Is the law ready,” no. September, 2018.**
* **[2] [Legal Definition of ius quaesitum tertio][7].**
* **[3] [N. Szabo, “Nick Szabo — Smart Contracts: Building Blocks for Digital Markets,” 1996.][4]**
* **[4] Cardozo Blockchain Project, “Smart Contracts & Legal Enforceability,” vol. 2, p. 28, 2018.**
* **[5][The DAO Heist Undone: 97% of ETH Holders Vote for the Hard Fork.][8]**
* **[6] F. Idelberger, G. Governatori, R. Riveret, and G. Sartor, “Evaluation of Logic-Based Smart Contracts for Blockchain Systems,” 2016, pp. 167183.**
* **[7][Types of Smart Contracts Based on Applications | Market InsightsTM Everest Group.][9]**
* **[8] B. Cant et al., “Smart Contracts in Financial Services : Getting from Hype to Reality,” Capgemini Consult., pp. 124, 2016.**
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/03/smart-contracts-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-blockchain-in-real-estate/
[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[4]: http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart_contracts_2.html
[5]: https://etherparty.com/
[6]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/
[7]: http://www.legal-glossary.org/
[8]: https://futurism.com/the-dao-heist-undone-97-of-eth-holders-vote-for-the-hard-fork/
[9]: https://www.everestgrp.com/2016-10-types-smart-contracts-based-applications-market-insights-36573.html/

View File

@ -1,59 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HPE introduces hybrid cloud consulting business)
[#]: via: (https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
HPE introduces hybrid cloud consulting business
======
### HPE's Right Mix Advisor is designed to find a balance between on-premises and cloud systems.
![Hewlett Packard Enterprise][1]
Hybrid cloud is pretty much the de facto way to go, with only a few firms adopting a pure cloud play to replace their data center and only suicidal firms refusing to go to the cloud. But picking the right balance between on-premises and the cloud is tricky, and a mistake can be costly.
Enter Right Mix Advisor from Hewlett Packard Enterprise, a combination of consulting from HPE's Pointnext division and software tools. It draws on quite a few recent acquisitions: Right Mix Advisor also incorporates British cloud consultancy RedPixie, Amazon Web Services (AWS) specialists Cloud Technology Partners, and automated discovery capabilities from Irish startup iQuate.
Right Mix Advisor gathers data points from the companys entire enterprise, ranging from configuration management database systems (CMDBs), such as ServiceNow, to external sources, such as cloud providers. HPE says that in a recent customer engagement it scanned 9 million IP addresses across six data centers.
**[ Read also:[What is hybrid cloud computing][2]. | Learn [what you need to know about multi-cloud][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
HPE Pointnext consultants then work with the clients IT teams to analyze the data to determine the optimal configuration for workload placement. Pointnext has become HPEs main consulting outfit following its divestiture of EDS, which it acquired in 2008 but spun off in a merger with CSC to form DXC Consulting. Pointnext now has 25,000 consultants in 88 countries.
In a typical engagement, HPE claims it can deliver a concrete action plan within weeks, whereas previously businesses may have needed months to come to a conclusion using a manual process. HPE has found that migrating the right workloads to the right mix of hybrid cloud can typically result in 40 percent total cost of ownership savings.
Although HPE has thrown its weight behind AWS, that doesnt mean it doesnt support competitors. Erik Vogel, vice president of hybrid IT for HPE Pointnext, notes in the blog post announcing Right Mix Advisor that target environments could be Microsoft Azure or Azure Stack, AWS, Google or Ali Cloud.
“New service providers are popping up every day, and we see the big public cloud providers constantly producing new services and pricing models. As a result, the calculus for determining your right mix is constantly changing. If Azure, for example, offers a new service capability or a 10 percent pricing discount and it makes sense to leverage it, you want to be able to move an application seamlessly into that new environment,” he wrote.
Key to Right Mix Advisor is app migration, and Pointnext follows the 50/30/20 rule: About 50 percent of apps are suitable for migration to the cloud, and for about 30 percent, migration is not worth the effort. The remaining 20 percent should be retired.
“With HPE Right Mix Advisor, you can identify that 50 percent,” he wrote. “Rather than hand you a laundry list of 10,000 apps to start migrating, HPE Right Mix Advisor hones in on whats most impactful right now to meet your business goals the 10 things you can do on Monday morning that you can be confident will really help your business.”
HPE has already done some pilot projects with the Right Mix service and expects to expand it to include channel partners.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384919/hpe-introduces-hybrid-cloud-consulting-business.html#tk.rss_all
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/11/hpe_building-100625424-large.jpg
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -1,72 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco warns of two security patches that dont work, issues 17 new ones for IOS flaws)
[#]: via: (https://www.networkworld.com/article/3384742/cisco-warns-of-two-security-patches-that-dont-work-issues-17-new-ones-for-ios-flaws.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco warns of two security patches that dont work, issues 17 new ones for IOS flaws
======
### Cisco is issuing 17 new fixes for security problems with IOS and IOS/XE software that runs most of its routers and switches, while it has no patch yet to replace flawed patches to RV320 and RV325 routers.
![Marisa9 / Getty][1]
Cisco has dropped [17 Security advisories describing 19 vulnerabilities][2] in the software that runs most of its routers and switches, IOS and IOS/XE.
The company also announced that two previously issued patches for its RV320 and RV325 Dual Gigabit WAN VPN Routers were “incomplete” and would need to be redone and reissued.
**[ Also see[What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**
Cisco rates both those router vulnerabilities as “High” and describes the problems like this:
* [One vulnerability][5] is due to improper validation of user-supplied input. An attacker could exploit this vulnerability by sending malicious HTTP POST requests to the web-based management interface of an affected device. A successful exploit could allow the attacker to execute arbitrary commands on the underlying Linux shell as _root_.
* The [second exposure][6] is due to improper access controls for URLs. An attacker could exploit this vulnerability by connecting to an affected device via HTTP or HTTPS and requesting specific URLs. A successful exploit could allow the attacker to download the router configuration or detailed diagnostic information.
Cisco said firmware updates that address these vulnerabilities are not available and no workarounds exist, but is working on a complete fix for both.
On the IOS front, the company said six of the vulnerabilities affect both Cisco IOS Software and Cisco IOS XE Software, one of the vulnerabilities affects just Cisco IOS software and ten of the vulnerabilities affect just Cisco IOS XE software. Some of the security bugs, which are all rated as “High”, include:
* [A vulnerability][7] in the web UI of Cisco IOS XE Software could let an unauthenticated, remote attacker access sensitive configuration information.
* [A vulnerability][8] in Cisco IOS XE Software could let an authenticated, local attacker inject arbitrary commands that are executed with elevated privileges. The vulnerability is due to insufficient input validation of commands supplied by the user. An attacker could exploit this vulnerability by authenticating to a device and submitting crafted input to the affected commands.
* [A weakness][9] in the ingress traffic validation of Cisco IOS XE Software for Cisco Aggregation Services Router (ASR) 900 Route Switch Processor 3 could let an unauthenticated, adjacent attacker trigger a reload of an affected device, resulting in a denial of service (DoS) condition, Cisco said. The vulnerability exists because the software insufficiently validates ingress traffic on the ASIC used on the RSP3 platform. An attacker could exploit this vulnerability by sending a malformed OSPF version 2 message to an affected device.
* A problem in the [authorization subsystem][10] of Cisco IOS XE Software could allow an authenticated but unprivileged (level 1), remote attacker to run privileged Cisco IOS commands by using the web UI. The vulnerability is due to improper validation of user privileges of web UI users. An attacker could exploit this vulnerability by submitting a malicious payload to a specific endpoint in the web UI, Cisco said.
* A vulnerability in the [Cluster Management Protocol][11] (CMP) processing code in Cisco IOS Software and Cisco IOS XE Software could allow an unauthenticated, adjacent attacker to trigger a DoS condition on an affected device. The vulnerability is due to insufficient input validation when processing CMP management packets, Cisco said.
Cisco has released free software updates that address the vulnerabilities described in these advisories and [directs users to their software agreements][12] to find out how they can download the fixes.
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3384742/cisco-warns-of-two-security-patches-that-dont-work-issues-17-new-ones-for-ios-flaws.html#tk.rss_all
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/woman-with-hands-over-face_mistake_oops_embarrassed_shy-by-marisa9-getty-100787990-large.jpg
[2]: https://tools.cisco.com/security/center/viewErp.x?alertId=ERP-71135
[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190123-rv-inject
[6]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190123-rv-info
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-xeid
[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-xecmd
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-rsp3-ospf
[10]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-iosxe-privesc
[11]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190327-cmp-dos
[12]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world

View File

@ -1,90 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Announcing the release of Fedora 30 Beta)
[#]: via: (https://fedoramagazine.org/announcing-the-release-of-fedora-30-beta/)
[#]: author: (Ben Cotton https://fedoramagazine.org/author/bcotton/)
Announcing the release of Fedora 30 Beta
======
![][1]
The Fedora Project is pleased to announce the immediate availability of Fedora 30 Beta, the next big step on our journey to the exciting Fedora 30 release.
Download the prerelease from our Get Fedora site:
* [Get Fedora 30 Beta Workstation][2]
* [Get Fedora 30 Beta Server][3]
* [Get Fedora 30 Beta Silverblue][4]
Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:
* [Get Fedora 30 Beta Spins][5]
* [Get Fedora 30 Beta Labs][6]
* [Get Fedora 30 Beta ARM][7]
### Beta Release Highlights
#### New desktop environment options
Fedora 30 Beta includes two new options for desktop environment. [DeepinDE][8] and [Pantheon Desktop][9] join GNOME, KDE Plasma, Xfce, and others as options for users to customize their Fedora experience.
#### DNF performance improvements
All dnf repository metadata for Fedora 30 Beta is compressed with the zchunk format in addition to xz or gzip. zchunk is a new compression format designed to allow for highly efficient deltas. When Fedoras metadata is compressed using zchunk, dnf will download only the differences between any earlier copies of the metadata and the current version.
#### GNOME 3.32
Fedora 30 Workstation Beta includes GNOME 3.32, the latest version of the popular desktop environment. GNOME 3.32 features updated visual style, including the user interface, the icons, and the desktop itself. For a full list of GNOME 3.32 highlights, see the [release notes][10].
#### Other updates
Fedora 30 Beta also includes updated versions of many popular packages like Golang, the Bash shell, the GNU C Library, Python, and Perl. For a full list, see the [Change set][11] on the Fedora Wiki. In addition, many Python 2 packages are removed in preparation for Python 2 end-of-life on 2020-01-01.
#### Testing needed
Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on Freenode. As testing progresses, common issues are tracked on the [Common F30 Bugs page][12].
For tips on reporting a bug effectively, read [how to file a bug][13].
#### What is the Beta Release?
A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesnt just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora, but Linux and free software as a whole.
#### More information
For more detailed information about whats new on Fedora 30 Beta release, you can consult the [Fedora 30 Change set][11]. It contains more technical information about the new packages and improvements shipped with this release.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/announcing-the-release-of-fedora-30-beta/
作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bcotton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/f30-beta-816x345.jpg
[2]: https://getfedora.org/workstation/prerelease/
[3]: https://getfedora.org/server/prerelease/
[4]: https://silverblue.fedoraproject.org/download
[5]: https://spins.fedoraproject.org/prerelease
[6]: https://labs.fedoraproject.org/prerelease
[7]: https://arm.fedoraproject.org/prerelease
[8]: https://www.deepin.org/en/dde/
[9]: https://www.fosslinux.com/4652/pantheon-everything-you-need-to-know-about-the-elementary-os-desktop.htm
[10]: https://help.gnome.org/misc/release-notes/3.32/
[11]: https://fedoraproject.org/wiki/Releases/30/ChangeSet
[12]: https://fedoraproject.org/wiki/Common_F30_bugs
[13]: https://docs.fedoraproject.org/en-US/quick-docs/howto-file-a-bug/

View File

@ -1,70 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to rebase to Fedora 30 Beta on Silverblue)
[#]: via: (https://fedoramagazine.org/how-to-rebase-to-fedora-30-beta-on-silverblue/)
[#]: author: (Michal Konečný https://fedoramagazine.org/author/zlopez/)
How to rebase to Fedora 30 Beta on Silverblue
======
![][1]
Silverblue is [an operating system for your desktop built on Fedora][2]. Its excellent for daily use, development, and container-based workflows. It offers [numerous advantages][3] such as being able to roll back in case of any problems. If you want to test Fedora 30 on your Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert back if anything unforeseen happens.
### Switching to Fedora 30 branch
Switching to Fedora 30 on Silverblue is easy. First, check if the _30_ branch is available, which should be true now:
```
ostree remote refs fedora-workstation
```
You should see the following in the output:
```
fedora-workstation:fedora/30/x86_64/silverblue
```
Next, import the GPG key for the Fedora 30 branch. Without this step, you wont be able to rebase.
```
sudo ostree remote gpg-import fedora-workstation -k /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-30-primary
```
Next, rebase your system to the Fedora 30 branch.
```
rpm-ostree rebase fedora-workstation:fedora/30/x86_64/silverblue
```
Finally, the last thing to do is restart your computer and boot to Fedora 30.
### How to revert things back
Remember that Fedora 30s still in beta testing phase, so there could still be some issues. If anything bad happens — for instance, if you cant boot to Fedora 30 at all — its easy to go back. Just pick the previous entry in GRUB, and your system will start in its previous state before switching to Fedora 30. To make this change permanent, use the following command:
```
rpm-ostree rollback
```
Thats it. Now you know how to rebase to Fedora 30 and back. So why not test it today? 🙂
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/how-to-rebase-to-fedora-30-beta-on-silverblue/
作者:[Michal Konečný][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/zlopez/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/silverblue-f30beta-816x345.jpg
[2]: https://docs.fedoraproject.org/en-US/fedora-silverblue/
[3]: https://fedoramagazine.org/give-fedora-silverblue-a-test-drive/

View File

@ -1,54 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 Linux rookie mistakes)
[#]: via: (https://opensource.com/article/19/4/linux-rookie-mistakes)
[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p)
5 Linux rookie mistakes
======
Linux enthusiasts share some of the biggest mistakes they made.
![magnifying glass on computer screen, finding a bug in the code][1]
It's smart to learn new skills throughout your life—it keeps your mind nimble and makes you more competitive in the job market. But some skills are harder to learn than others, especially those where small rookie mistakes can cost you a lot of time and trouble when you're trying to fix them.
Take learning [Linux][2], for example. If you're used to working in a Windows or MacOS graphical interface, moving to Linux, with its unfamiliar commands typed into a terminal, can have a big learning curve. But the rewards are worth it, as the millions and millions of people who have gone before you have proven.
That said, the journey won't be without pitfalls. We asked some Linux enthusiasts to think back to when they first started using Linux and tell us about the biggest mistakes they made.
"Don't go into [any sort of command line interface (CLI) work] with an expectation that commands work in rational or consistent ways, as that is likely to lead to frustration. This is not due to poor design choices—though it can feel like it when you're banging your head against the proverbial desk—but instead reflects the fact that these systems have evolved and been added onto through generations of software and OS evolution. Go with the flow, write down or memorize the commands you need, and (try not to) get frustrated when [things aren't what you'd expect][3]." _—[Gina Likins][4]_
"As easy as it might be to just copy and paste commands to make the thing go, read the command first and at least have a general understanding of the actions that are about to be performed. Especially if there is a pipe command. Double especially if there is more than one. There are a lot of destructive commands that look innocuous until you realize what they can do (e.g., **rm** , **dd** ), and you don't want to accidentally destroy things. (Ask me how I know.)" _—[Katie McLaughlin][5]_
"Early on in my Linux journey, I wasn't as aware of the importance of knowing where you are in the filesystem. I was deleting some file in what I thought was my home directory, and I entered **sudo rm -rf *** and deleted all of the boot files on my system. Now, I frequently use **pwd** to ensure that I am where I think I am before issuing such commands. Fortunately for me, I was able to boot my wounded laptop with a USB drive and recover my files." _—[Don Watkins][6]_
"Do not reset permissions on the entire file system to [777][7] because you think 'permissions are hard to understand' and you want an application to have access to something." _—[Matthew Helmke][8]_
"I was removing a package from my system, and I did not check what other packages it was dependent upon. I just let it remove whatever it wanted and ended up causing some of my important programs to crash and become unavailable." _—[Kedar Vijay Kulkarni][9]_
What mistakes have you made while learning to use Linux? Share them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/linux-rookie-mistakes
作者:[Jen Wike Huger (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
[2]: https://opensource.com/resources/linux
[3]: https://lintqueen.com/2017/07/02/learning-while-frustrated/
[4]: https://opensource.com/users/lintqueen
[5]: https://opensource.com/users/glasnt
[6]: https://opensource.com/users/don-watkins
[7]: https://www.maketecheasier.com/file-permissions-what-does-chmod-777-means/
[8]: https://twitter.com/matthewhelmke
[9]: https://opensource.com/users/kkulkarn

View File

@ -1,69 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Microsoft/BMW IoT Open Manufacturing Platform might not be so open)
[#]: via: (https://www.networkworld.com/article/3387642/the-microsoftbmw-iot-open-manufacturing-platform-might-not-be-so-open.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
The Microsoft/BMW IoT Open Manufacturing Platform might not be so open
======
The new industrial IoT Open Manufacturing Platform from Microsoft and BMW runs only on Microsoft Azure. That could be an issue.
![Martyn Williams][1]
Last week at [Hannover Messe][2], Microsoft and German carmaker BMW announced a partnership to build a hardware and software technology framework and reference architecture for the industrial internet of things (IoT), and foster a community to spread these smart-factory solutions across the automotive and manufacturing industries.
The stated goal of the [Open Manufacturing Platform (OMP)][3]? According to the press release, it's “to drive open industrial IoT development and help grow a community to build future [Industry 4.0][4] solutions.” To make that a reality, the companies said that by the end of 2019, they plan to attract four to six partners — including manufacturers and suppliers from both inside and outside the automotive industry — and to have rolled out at least 15 use cases operating in actual production environments.
**[ Read also:[An inside look at an IIoT-powered smart factory][5] | Get regularly scheduled insights: [Sign up for Network World newsletters][6] ]**
### Complex and proprietary is bad for IoT
It sounds like a great idea, right? As the companies rightly point out, many of todays industrial IoT solutions rely on “complex, proprietary systems that create data silos and slow productivity.” Who wouldnt want to “standardize data models that enable analytics and machine learning scenarios” and “accelerate future industrial IoT developments, shorten time to value, and drive production efficiencies while addressing common industrial challenges”?
But before you get too excited, lets talk about a key word in the effort: open. As Scott Guthrie, executive vice president of Microsoft Cloud + AI Group, said in a statement, "Our commitment to building an open community will create new opportunities for collaboration across the entire manufacturing value chain."
### The Open Manufacturing Platform is open only to Microsoft Azure
However, that will happen as long as all that collaboration occurs in Microsoft Azure. Im not saying Azure isnt up to the task, but its hardly the only (or even the leading) cloud platform interested in the industrial IoT. Putting everything in Azure might be an issue to those potential OMP partners. Its an “open” question as to how many companies already invested in Amazon Web Services (AWS) or the Google Cloud Platform (GCP) will be willing to make the switch or go multi-cloud just to take advantage of the OMP.
My guess is that Microsoft and BMW won't have too much trouble meeting their initial goals for the OMP. It shouldn't be that hard to get a handful of existing Azure customers to come up with 15 use cases leveraging advances in analytics, artificial intelligence (AI), and digital feedback loops. (As an example, the companies cited the autonomous transport systems in BMW's factory in Regensburg, Germany, part of the more than 3,000 machines, robots and transport systems connected with the BMW Group's IoT platform, which — naturally — is built on Microsoft Azure's cloud.)
### Will non-Azure users jump on board the OMP?
The question is whether tying all this to a single cloud provider will hamper the effort to attract enough new companies — including companies not currently using Azure — to establish a truly viable open platform.
Perhaps [Stacey Higginbotham at Stacy on IoT put it best][7]:
> “What they really launched is a reference design for manufacturers to work from.”
That's not nothing, of course, but it's a lot less ambitious than building a new industrial IoT platform. And it may not easily fulfill the vision of a community working together to create shared solutions that benefit everyone.
**[ Now read this:[Why are IoT platforms so darn confusing?][8] ]**
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3387642/the-microsoftbmw-iot-open-manufacturing-platform-might-not-be-so-open.html#tk.rss_all
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/01/20170107_105344-100702818-large.jpg
[2]: https://www.hannovermesse.de/home
[3]: https://www.prnewswire.co.uk/news-releases/microsoft-and-the-bmw-group-launch-the-open-manufacturing-platform-859672858.html
[4]: https://en.wikipedia.org/wiki/Industry_4.0
[5]: https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html
[6]: https://www.networkworld.com/newsletters/signup.html
[7]: https://mailchi.mp/iotpodcast/stacey-on-iot-industrial-iot-reminds-me-of-apples-ecosystem?e=6bf9beb394
[8]: https://www.networkworld.com/article/3336166/why-are-iot-platforms-so-darn-confusing.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -1,101 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How we built a Linux desktop app with Electron)
[#]: via: (https://opensource.com/article/19/4/linux-desktop-electron)
[#]: author: (Nils Ganther https://opensource.com/users/nils-ganther)
How we built a Linux desktop app with Electron
======
A story of building an open source email service that runs natively on
Linux desktops, thanks to the Electron framework.
![document sending][1]
[Tutanota][2] is a secure, open source email service that's been available as an app for the browser, iOS, and Android. The client code is published under GPLv3 and the Android app is available on [F-Droid][3] to enable everyone to use a completely Google-free version.
Because Tutanota focuses on open source and develops on Linux clients, we wanted to release a desktop app for Linux and other platforms. Being a small team, we quickly ruled out building native apps for Linux, Windows, and MacOS and decided to adapt our app using [Electron][4].
Electron is the go-to choice for anyone who wants to ship visually consistent, cross-platform applications, fast—especially if there's already a web app that needs to be freed from the shackles of the browser API. Tutanota is exactly such a case.
Tutanota is based on [SystemJS][5] and [Mithril][6] and aims to offer simple, secure email communications for everybody. As such, it has to provide a lot of the standard features users expect from any email client.
Some of these features, like basic push notifications, search for text and contacts, and support for two-factor authentication are easy to offer in the browser thanks to modern APIs and standards. Other features (such as automatic backups or IMAP support without involving our servers) need less-restricted access to system resources, which is exactly what the Electron framework provides.
While some criticize Electron as "just a basic wrapper," it has obvious benefits:
* Electron enables you to adapt a web app quickly for Linux, Windows, and MacOS desktops. In fact, most Linux desktop apps are built with Electron.
* Electron enables you to easily bring the desktop client to feature parity with the web app.
* Once you've published the desktop app, you can use free development capacity to add desktop-specific features that enhance usability and security.
* And last but certainly not least, it's a great way to make the app feel native and integrated into the user's system while maintaining its identity.
### Meeting users' needs
At Tutanota, we do not rely on big investor money, rather we are a community-driven project. We grow our team organically based on the increasing number of users upgrading to our freemium service's paid plans. Listening to what users want is not only important to us, it is essential to our success.
Offering a desktop client was users' [most-wanted feature][7] in Tutanota, and we are proud that we can now offer free beta desktop clients to all of our users. (We also implemented another highly requested feature—[search on encrypted data][8]—but that's a topic for another time.)
We liked the idea of providing users with signed versions of Tutanota and enabling functions that are impossible in the browser, such as push notifications via a background process. Now we plan to add more desktop-specific features, such as IMAP support without depending on our servers to act as a proxy, automatic backups, and offline availability.
We chose Electron because its combination of Chromium and Node.js promised to be the best fit for our small development team, as it required only minimal changes to our web app. It was particularly helpful to use the browser APIs for everything as we got started, slowly replacing those components with more native versions as we progressed. This approach was especially handy with attachment downloads and notifications.
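To illustrate that approach, here is a small, hypothetical sketch (not Tutanota's actual code): a feature starts on the browser API and is only routed to a native implementation where one exists. The `nativeNotify` bridge function is an assumption for illustration.
```
// Hypothetical sketch: prefer a native notification where the Electron/node
// side exposes one, otherwise fall back to the standard browser API.
function notify(title, body) {
  if (typeof window.nativeNotify === 'function') {
    // assumed bridge function provided by the native (node) side
    window.nativeNotify(title, body);
  } else {
    // standard browser Notification API, used in the web app and as a fallback
    new Notification(title, { body: body });
  }
}
```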
### Tuning security
We were aware that some people cite security problems with Electron, but we found Electron's options for fine-tuning access in the web app quite satisfactory. You can use resources like the Electron's [security documentation][9] and Luca Carettoni's [Electron Security Checklist][10] to help prevent catastrophic mishaps with untrusted content in your web app.
### Achieving feature parity
The Tutanota web client was built from the start with a solid protocol for interprocess communication. We utilize web workers to keep user interface (UI) rendering responsive while encrypting and requesting data. This came in handy when we started implementing our mobile apps, which use the same protocol to communicate between the native part and the web view.
That's why when we started building the desktop clients, a lot of bindings for things like native push notifications, opening mailboxes, and working with the filesystem were already there, so only the native (node) side had to be implemented.
Another convenience was our build process using the [Babel transpiler][11], which allows us to write the entire codebase in modern ES6 JavaScript and mix-and-match utility modules between the different environments. This enabled us to speedily adapt the code for the Electron-based desktop apps. However, we encountered some challenges.
### Overcoming challenges
While Electron allows us to integrate with the different platforms' desktop environments pretty easily, you can't underestimate the time investment to get things just right! In the end, it was these little things that took up much more time than we expected but were also crucial to finish the desktop client project.
The places where platform-specific code was necessary caused most of the friction:
* Window management and the tray, for example, are still handled in subtly different ways on the three platforms.
* Registering Tutanota as the default mail program and setting up autostart required diving into the Windows Registry while making sure to prompt the user for admin access in a [UAC][12]-compatible way.
* We needed to use Electron's API for shortcuts and menus to offer even standard features like copy, paste, undo, and redo.
This process was complicated a bit by users' expectations of certain, sometimes not directly compatible behavior of the apps on different platforms. Making the three versions feel native required some iteration and even some modest additions to the web app to offer a text search similar to the one in the browser.
### Wrapping up
Our experience with Electron was largely positive, and we completed the project in less than four months. Despite some rather time-consuming features, we were surprised about the ease with which we could ship a beta version of the [Tutanota desktop client for Linux][13]. If you're interested, you can dive into the source code on [GitHub][14].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/linux-desktop-electron
作者:[Nils Ganther][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/nils-ganther
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ (document sending)
[2]: https://tutanota.com/
[3]: https://f-droid.org/en/packages/de.tutao.tutanota/
[4]: https://electronjs.org/
[5]: https://github.com/systemjs/systemjs
[6]: https://mithril.js.org/
[7]: https://tutanota.uservoice.com/forums/237921-general/filters/top?status_id=1177482
[8]: https://tutanota.com/blog/posts/first-search-encrypted-data/
[9]: https://electronjs.org/docs/tutorial/security
[10]: https://www.blackhat.com/docs/us-17/thursday/us-17-Carettoni-Electronegativity-A-Study-Of-Electron-Security-wp.pdf
[11]: https://babeljs.io/
[12]: https://en.wikipedia.org/wiki/User_Account_Control
[13]: https://tutanota.com/blog/posts/desktop-clients/
[14]: https://www.github.com/tutao/tutanota

View File

@ -1,135 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Modrisco)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Epic Games Store is Now Available on Linux Thanks to Lutris)
[#]: via: (https://itsfoss.com/epic-games-lutris-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Epic Games Store is Now Available on Linux Thanks to Lutris
======
_**Brief: Open Source gaming platform Lutris now enables you to use Epic Games Store on Linux. We tried it on Ubuntu 19.04 and here's our experience with it.**_
[Gaming on Linux][1] just keeps getting better. Want to [play Windows games on Linux][2]? Steam's new [in-progress feature][3] enables you to do that.
Steam might be new in the field of Windows games on Linux, but Lutris has been doing it for years.
[Lutris][4] is an open source gaming platform for Linux that provides installers for game clients like Origin, Steam, the Blizzard.net app and so on. It utilizes Wine to run software that isn't natively supported on Linux.
Lutris has recently announced that you can now use Epic Games Store using Lutris.
### Lutris brings Epic Games to Linux
![Epic Games Store Lutris Linux][5]
[Epic Games Store][6] is a digital video game distribution platform like Steam. It only supports Windows and macOS for the moment.
The Lutris team worked hard to bring Epic Games Store to Linux via Lutris. Even though I'm not a big fan of Epic Games Store, it was good to know about the support for Linux via Lutris:
> Good news! [@EpicGames][7] Store is now fully functional under Linux if you use Lutris to install it! No issues observed whatsoever. <https://t.co/cYmd7PcYdG>[@TimSweeneyEpic][8] will probably like this 😊 [pic.twitter.com/7mt9fXt7TH][9]
>
> — Lutris Gaming (@LutrisGaming) [April 17, 2019][10]
As an avid gamer and Linux user, I immediately jumped upon this news and installed Lutris to run Epic Games on it.
**Note:** _I used[Ubuntu 19.04][11] to test Epic Games store for Linux._
### Using Epic Games Store for Linux using Lutris
To install Epic Games Store on your Linux system, make sure that you have [Lutris][4] installed with its prerequisites, Wine and Python 3. So, first [install Wine on Ubuntu][12] or whichever Linux you are using, and then [download Lutris from its website][13].
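On Ubuntu, for example, installing Lutris from the command line typically looks like this (assuming the Lutris team's `ppa:lutris-team/lutris` PPA; check the Lutris website for the instructions for your distribution):
```
sudo add-apt-repository ppa:lutris-team/lutris
sudo apt update
sudo apt install lutris
```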
#### Installing Epic Games Store
Once the installation of Lutris is successful, simply launch it.
When I tried this, I encountered an error (nothing happened when I tried to launch it using the GUI). However, when I typed in “ **lutris** ” in the terminal to launch it instead, I noticed an error that looked like this:
![][15]
Thanks to Abhishek, I learned that this is a common issue (you can check that on [GitHub][16]).
So, to fix it, all I had to do was type in a command in the terminal:
```
export LC_ALL=C
```
Just copy it and enter it in your terminal if you face the same issue. Then you will be able to open Lutris.
**Note:** _You'll have to enter this command every time you launch Lutris, so it's better to add it to your .bashrc file or your list of environment variables._
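For example, you can append it to your `.bashrc` once so you don't have to remember it (assuming Bash is your shell):
```
echo 'export LC_ALL=C' >> ~/.bashrc
source ~/.bashrc
```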
Once that is done, simply launch it and search for “ **Epic Games Store** ” as shown in the image below:
![Epic Games Store in Lutris][17]
Here, I have it installed already, so you will get the option to “Install” it, and then it will automatically ask you to install the required packages it needs. You just have to proceed in order to successfully install it. That's it, no rocket science involved.
#### Playing a Game on Epic Games Store
![Epic Games Store][18]
Now that we have Epic Games store via Lutris on Linux, simply launch it and log in to your account to get started.
But, does it really work?
_Yes, the Epic Games Store does work._ **But not all of the games do.**
Well, I haven't tried everything, but I grabbed a free game (Transistor, a turn-based ARPG) to check if that works.
![Transistor Epic Games Store][19]
Unfortunately, it didn't. It says that it is “Running” when I launch it, but then nothing happens.
As of now, I'm not aware of any solutions to that, so I'll try to keep you updated if I find a fix.
**Wrapping Up**
It's good to see the gaming scene improve on Linux thanks to solutions like Lutris. However, there's still a lot of work to be done.
Getting a game to run hassle-free on Linux is still a challenge. There can be issues like the one I encountered, or similar ones. But it's going in the right direction, even if it still has issues.
What do you think of Epic Games Store on Linux via Lutris? Have you tried it yet? Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/epic-games-lutris-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[Modrisco](https://github.com/Modrisco)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-gaming-guide/
[2]: https://itsfoss.com/steam-play/
[3]: https://itsfoss.com/steam-play-proton/
[4]: https://lutris.net/
[5]: https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-lutris-linux-800x450.png
[6]: https://www.epicgames.com/store/en-US/
[7]: https://twitter.com/EpicGames?ref_src=twsrc%5Etfw
[8]: https://twitter.com/TimSweeneyEpic?ref_src=twsrc%5Etfw
[9]: https://t.co/7mt9fXt7TH
[10]: https://twitter.com/LutrisGaming/status/1118552969816018948?ref_src=twsrc%5Etfw
[11]: https://itsfoss.com/ubuntu-19-04-release-features/
[12]: https://itsfoss.com/install-latest-wine/
[13]: https://lutris.net/downloads/
[14]: https://itsfoss.com/ubuntu-mate-entroware/
[15]: https://itsfoss.com/wp-content/uploads/2019/04/lutris-error.jpg
[16]: https://github.com/lutris/lutris/issues/660
[17]: https://itsfoss.com/wp-content/uploads/2019/04/lutris-epic-games-store-800x520.jpg
[18]: https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-800x450.jpg
[19]: https://itsfoss.com/wp-content/uploads/2019/04/transistor-game-epic-games-store-800x410.jpg
[20]: https://itsfoss.com/skpe-alpha-linux/

View File

@ -1,115 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Awesome Fedora 30 is Here! Check Out the New Features)
[#]: via: (https://itsfoss.com/fedora-30/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
The Awesome Fedora 30 is Here! Check Out the New Features
======
The latest and greatest release of Fedora is here. Fedora 30 brings some visual as well as performance improvements.
Fedora releases a new version every six months and each release is supported for thirteen months.
Before you decide to download or upgrade Fedora, let's first see what's new in Fedora 30.
### New Features in Fedora 30
![Fedora 30 Release][1]
Here's what's new in the latest release of Fedora.
#### GNOME 3.32 gives a brand new look, features and performance improvement
The latest release of GNOME brings a lot of visual improvements.
GNOME 3.32 has refreshed icons and UI, and it almost looks like a brand new version of GNOME.
![Gnome 3.32 icons | Image Credit][2]
GNOME 3.32 also brings several other features like fractional scaling, permission control for each application, granular control on Night Light intensity among many other changes.
GNOME 3.32 also brings some performance improvements. You'll see faster file and app searches and smoother scrolling.
#### Improved performance for DNF
Fedora 30 will see a faster [DNF][3] (the default package manager for Fedora) thanks to the [zchunk][4] compression algorithm.
The zchunk algorithm splits a file into independent chunks. This helps in dealing with deltas (changes): when downloading a new version of a file, you only download the chunks that have changed.
With zchunk, DNF will only download the difference between the metadata of the current version and the earlier versions.
#### Fedora 30 brings two new desktop environments into the fold
Fedora already offers several desktop environment choices. Fedora 30 extends the offering with [elementary OS][5] Pantheon desktop environment and Deepin Linux [DeepinDE][6].
So now you can enjoy the looks and feel of elementary OS and Deepin Linux in Fedora. How cool is that!
#### Linux Kernel 5
Fedora 30 ships with Linux kernel 5.0.9, which has improved hardware support and some performance improvements. You may check out the [features of Linux kernel 5.0 in this article][7].
#### Updated software
You'll also get newer versions of software. Some of the major ones are:
* GCC 9.0.1
* [Bash Shell 5.0][9]
* GNU C Library 2.29
* Ruby 2.6
* Golang 1.12
* Mesa 19.0.2
* Vagrant 2.2
* JDK12
* PHP 7.3
* Fish 3.0
* Erlang 21
* Python 3.7.3
### Getting Fedora 30
If you are already using Fedora 29, you can upgrade to the latest release from your current install. You may follow this guide to learn [how to upgrade a Fedora version][10].
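In short, a command-line upgrade with the DNF system-upgrade plugin looks roughly like this (a sketch only; see the linked guide for the full, safe procedure):
```
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=30
sudo dnf system-upgrade reboot
```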
Fedora 29 users will still get updates for seven more months, so if you don't feel like upgrading, you may skip it for now. Fedora 28 users don't have that choice, because Fedora 28 reaches end of life next month, which means there will be no more security or maintenance updates. Upgrading to a newer version is no longer optional for them.
You always have the option to download the ISO of Fedora 30 and install it afresh. You can download Fedora from its official website. It's only available for 64-bit systems and the ISO is 1.9 GB in size.
[Download Fedora 30 Workstation][11]
What do you think of Fedora 30? Are you planning to upgrade or at least try it out? Do share your thoughts in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/fedora-30/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2019/04/fedora-30-release-800x450.png
[2]: https://itsfoss.com/wp-content/uploads/2019/04/gnome-3-32-icons.png
[3]: https://fedoraproject.org/wiki/DNF?rd=Dnf
[4]: https://github.com/zchunk/zchunk
[5]: https://itsfoss.com/elementary-os-juno-features/
[6]: https://www.deepin.org/en/dde/
[7]: https://itsfoss.com/linux-kernel-5/
[8]: https://itsfoss.com/nextcloud-14-release/
[9]: https://itsfoss.com/bash-5-release/
[10]: https://itsfoss.com/upgrade-fedora-version/
[11]: https://getfedora.org/en/workstation/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (bodhix)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (zhang5788)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,168 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Testinfra with Ansible to verify server state)
[#]: via: (https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state)
[#]: author: (Clement Verna https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert)
Using Testinfra with Ansible to verify server state
======
Testinfra is a powerful library for writing tests to verify an
infrastructure's state. Coupled with Ansible and Nagios, it offers a
simple solution to enforce infrastructure as code.
![Terminal command prompt on orange background][1]
By design, [Ansible][2] expresses the desired state of a machine to ensure that the content of an Ansible playbook or role is deployed to the targeted machines. But what if you need to make sure all the infrastructure changes are in Ansible? Or verify the state of a server at any time?
[Testinfra][3] is an infrastructure testing framework that makes it easy to write unit tests to verify the state of a server. It is a Python library and uses the powerful [pytest][4] test engine.
### Getting started with Testinfra
Testinfra can be easily installed using the Python package manager (pip) and a Python virtual environment.
```
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install testinfra
```
Testinfra is also available in the package repositories of Fedora and CentOS using the EPEL repository. For example, on CentOS 7 you can install it with the following commands:
```
$ yum install -y epel-release
$ yum install -y python-testinfra
```
#### A simple test script
Writing tests in Testinfra is easy. Using the code editor of your choice, add the following to a file named **test_simple.py** :
```
import testinfra


def test_os_release(host):
    assert host.file("/etc/os-release").contains("Fedora")


def test_sshd_inactive(host):
    assert host.service("sshd").is_running is False
```
By default, Testinfra provides a host object to the test case; this object gives access to different helper modules. For example, the first test uses the **file** module to verify the content of the file on the host, and the second test case uses the **service** module to check the state of a systemd service.
To run these tests on your local machine, execute the following command:
```
(venv)$ pytest test_simple.py
================================ test session starts ================================
platform linux -- Python 3.7.3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
rootdir: /home/cverna/Documents/Python/testinfra
plugins: testinfra-3.0.0
collected 2 items
test_simple.py ..
================================ 2 passed in 0.05 seconds ================================
```
For a full list of Testinfra's APIs, you can consult the [documentation][5].
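As a taste of a few of the other helper modules (a small sketch; module names as described in the documentation linked above, the package, user, and port values are only examples):
```
def test_package_user_and_port(host):
    # package, user, and socket are other helper modules provided by Testinfra
    assert host.package("httpd").is_installed
    assert host.user("apache").exists
    assert host.socket("tcp://0.0.0.0:80").is_listening
```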
### Testinfra and Ansible
One of Testinfra's supported backends is Ansible, which means Testinfra can directly use Ansible's inventory file and a group of machines defined in the inventory to run tests against them.
Let's use the following inventory file as an example:
```
[web]
app-frontend01
app-frontend02
[database]
db-backend01
```
We want to make sure that our Apache web server service is running on **app-frontend01** and **app-frontend02**. Let's write the test in a file called **test_web.py** :
```
def test_httpd_service(host):
    """Check that the httpd service is running on the host"""
    assert host.service("httpd").is_running
```
To run this test using Testinfra and Ansible, use the following command:
```
(venv) $ pip install ansible
(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible test_web.py
```
When invoking the tests, we use the Ansible inventory **[web]** group as the targeted machines and also specify that we want to use Ansible as the connection backend.
#### Using the Ansible module
Testinfra also provides a nice API to Ansible that can be used in the tests. The Ansible module enables access to run Ansible plays inside a test and makes it easy to inspect the result of the play.
```
def test_ansible_play(host):
    """
    Verify that a package is installed using Ansible's
    package module
    """
    assert not host.ansible("package", "name=httpd state=present")["changed"]
```
By default, Ansible's [Check Mode][6] is enabled, which means that Ansible will report what would change if the play were executed on the remote host.
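If you want the play to actually be applied rather than only checked, the Testinfra documentation describes a `check` keyword on the Ansible module; here is a hedged sketch (treat the exact keyword as an assumption to verify against your Testinfra version):
```
def test_ansible_package_installed(host):
    # check=False asks Ansible to apply the play instead of only reporting changes
    result = host.ansible("package", "name=httpd state=present", check=False)
    assert not result.get("failed", False)
```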
### Testinfra and Nagios
Now that we can easily run tests to validate the state of a machine, we can use those tests to trigger alerts on a monitoring system. This is a great way to catch unexpected changes.
Testinfra offers an integration with [Nagios][7], a popular monitoring solution. By default, Nagios uses the [NRPE][8] plugin to execute checks on remote hosts, but using Testinfra allows you to run the tests directly from the Nagios master.
To get a Testinfra output compatible with Nagios, we have to use the **\--nagios** flag when triggering the test. We also use the **-qq** pytest flag to enable pytest's **quiet** mode so all the test details will not be displayed.
```
(venv) $ py.test --hosts=web --ansible-inventory=inventory --connection=ansible --nagios -qq test_web.py
TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.55 seconds
```
Testinfra is a powerful library for writing tests to verify an infrastructure's state. Coupled with Ansible and Nagios, it offers a simple solution to enforce infrastructure as code. It is also a key component of adding testing during the development of your Ansible roles using [Molecule][9].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/using-testinfra-ansible-verify-server-state
作者:[Clement Verna][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/cverna/users/paulbischoff/users/dcritch/users/cobiacomm/users/wgarry155/users/kadinroob/users/koreyhilpert
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
[2]: https://www.ansible.com/
[3]: https://testinfra.readthedocs.io/en/latest/
[4]: https://pytest.org/
[5]: https://testinfra.readthedocs.io/en/latest/modules.html#modules
[6]: https://docs.ansible.com/ansible/playbooks_checkmode.html
[7]: https://www.nagios.org/
[8]: https://en.wikipedia.org/wiki/Nagios#NRPE
[9]: https://github.com/ansible/molecule

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (QiaoN)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -213,7 +213,7 @@ via: https://opensource.com/article/19/5/run-your-blog-github-pages-python
作者:[Erik O'Shaughnessy][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[QiaoN](https://github.com/QiaoN)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,490 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to write a good C main function)
[#]: via: (https://opensource.com/article/19/5/how-write-good-c-main-function)
[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny)
How to write a good C main function
======
Learn how to structure a C file and write a C main function that handles
command line arguments like a champ.
![Hand drawing out the word "code"][1]
I know, Python and JavaScript are what the kids are writing all their crazy "apps" with these days. But don't be so quick to dismiss C—it's a capable and concise language that has a lot to offer. If you need speed, writing in C could be your answer. If you are looking for job security and the opportunity to learn how to hunt down [null pointer dereferences][2], C could also be your answer! In this article, I'll explain how to structure a C file and write a C main function that handles command line arguments like a champ.
**Me** : a crusty Unix system programmer.
**You** : someone with an editor, a C compiler, and some time to kill.
_Let's do this._
### A boring but correct C program
![Parody O'Reilly book cover, "Hating Other People's Code"][3]
A C program starts with a **main()** function, usually kept in a file named **main.c**.
```
/* main.c */
int main(int argc, char *argv[]) {
}
```
This program _compiles_ but doesn't _do_ anything.
```
$ gcc main.c
$ ./a.out -o foo -vv
$
```
Correct and boring.
### Main functions are unique
The **main()** function is the first function in your program that is executed when it begins executing, but it's not the first function executed. The _first_ function is **_start()** , which is typically provided by the C runtime library, linked in automatically when your program is compiled. The details are highly dependent on the operating system and compiler toolchain, so I'm going to pretend I didn't mention it.
The **main()** function has two arguments that traditionally are called **argc** and **argv** and return a signed integer. Most Unix environments expect programs to return **0** (zero) on success and **-1** (negative one) on failure.
Argument | Name | Description
---|---|---
argc | Argument count | Length of the argument vector
argv | Argument vector | Array of character pointers
The argument vector, **argv** , is a tokenized representation of the command line that invoked your program. In the example above, **argv** would be a list of the following strings:
```
`argv = [ "/path/to/a.out", "-o", "foo", "-vv" ];`
```
The argument vector is guaranteed to always have at least one string in the first index, **argv[0]** , which is the full path to the program executed.
### Anatomy of a main.c file
When I write a **main.c** from scratch, it's usually structured like this:
```
/* main.c */
/* 0 copyright/licensing */
/* 1 includes */
/* 2 defines */
/* 3 external declarations */
/* 4 typedefs */
/* 5 global variable declarations */
/* 6 function prototypes */
int main(int argc, char *argv[]) {
/* 7 command-line parsing */
}
/* 8 function declarations */
```
I'll talk about each of these numbered sections, except for zero, below. If you have to put copyright or licensing text in your source, put it there.
Another thing I won't talk about adding to your program is comments.
```
"Comments lie."
\- A cynical but smart and good looking programmer.
```
Instead of comments, use meaningful function and variable names.
Appealing to the inherent laziness of programmers, once you add comments, you've doubled your maintenance load. If you change or refactor the code, you need to update or expand the comments. Over time, the code mutates away from anything resembling what the comments describe.
If you have to write comments, do not write about _what_ the code is doing. Instead, write about _why_ the code is doing what it's doing. Write comments that you would want to read five years from now when you've forgotten everything about this code. And the fate of the world is depending on you. _No pressure_.
#### 1\. Includes
The first things I add to a **main.c** file are includes to make a multitude of standard C library functions and variables available to my program. The standard C library does lots of things; explore header files in **/usr/include** to find out what it can do for you.
The **#include** string is a [C preprocessor][4] (cpp) directive that causes the inclusion of the referenced file, in its entirety, in the current file. Header files in C are usually named with a **.h** extension and should not contain any executable code; only macros, defines, typedefs, and external variable and function prototypes. The string **< header.h>** tells cpp to look for a file called **header.h** in the system-defined header path, usually **/usr/include**.
```
/* main.c */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <libgen.h>
#include <errno.h>
#include <string.h>
#include <getopt.h>
#include <sys/types.h>
```
This is the minimum set of global includes that I'll include by default for the following stuff:
#include File | Stuff It Provides
---|---
stdio | Supplies FILE, stdin, stdout, stderr, and the fprint() family of functions
stdlib | Supplies malloc(), calloc(), and realloc()
unistd | Supplies EXIT_FAILURE, EXIT_SUCCESS
libgen | Supplies the basename() function
errno | Defines the external errno variable and all the values it can take on
string | Supplies memcpy(), memset(), and the strlen() family of functions
getopt | Supplies external optarg, opterr, optind, and getopt() function
sys/types | Typedef shortcuts like uint32_t and uint64_t
#### 2\. Defines
```
/* main.c */
<...>
#define OPTSTR "vi:o:f:h"
#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]"
#define ERR_FOPEN_INPUT "fopen(input, r)"
#define ERR_FOPEN_OUTPUT "fopen(output, w)"
#define ERR_DO_THE_NEEDFUL "do_the_needful blew up"
#define DEFAULT_PROGNAME "george"
```
This doesn't make a lot of sense right now, but the **OPTSTR** define is where I will state what command line switches the program will recommend. Consult the [**getopt(3)**][5] man page to learn how **OPTSTR** will affect **getopt()** 's behavior.
The **USAGE_FMT** define is a **printf()** -style format string that is referenced in the **usage()** function.
I also like to gather string constants as **#defines** in this part of the file. Collecting them makes it easier to fix spelling, reuse messages, and internationalize messages, if required.
Finally, use all capital letters when naming a **#define** to distinguish it from variable and function names. You can run the words together if you want or separate words with an underscore; just make sure they're all upper case.
#### 3\. External declarations
```
/* main.c */
<...>
extern int errno;
extern char *optarg;
extern int opterr, optind;
```
An **extern** declaration brings that name into the namespace of the current compilation unit (aka "file") and allows the program to access that variable. Here we've brought in the definitions for three integer variables and a character pointer. The **opt** prefaced variables are used by the **getopt()** function, and **errno** is used as an out-of-band communication channel by the standard C library to communicate why a function might have failed.
#### 4\. Typedefs
```
/* main.c */
<...>
typedef struct {
int verbose;
uint32_t flags;
FILE *input;
FILE *output;
} options_t;
```
After external declarations, I like to declare **typedefs** for structures, unions, and enumerations. Naming a **typedef** is a religion all to itself; I strongly prefer a **_t** suffix to indicate that the name is a type. In this example, I've declared **options_t** as a **struct** with four members. C is a whitespace-neutral programming language, so I use whitespace to line up field names in the same column. I just like the way it looks. For the pointer declarations, I prepend the asterisk to the name to make it clear that it's a pointer.
#### 5\. Global variable declarations
```
/* main.c */
<...>
int dumb_global_variable = -11;
```
Global variables are a bad idea and you should never use them. But if you have to use a global variable, declare them here and be sure to give them a default value. Seriously, _don't use global variables_.
#### 6\. Function prototypes
```
/* main.c */
<...>
void usage(char *progname, int opt);
int do_the_needful(options_t *options);
```
As you write functions, adding them after the **main()** function and not before, include the function prototypes here. Early C compilers used a single-pass strategy, which meant that every symbol (variable or function name) you used in your program had to be declared before you used it. Modern compilers are nearly all multi-pass compilers that build a complete symbol table before generating code, so using function prototypes is not strictly required. However, you sometimes don't get to choose what compiler is used on your code, so write the function prototypes and drive on.
As a matter of course, I always include a **usage()** function that **main()** calls when it doesn't understand something you passed in from the command line.
#### 7\. Command line parsing
```
/* main.c */
<...>
int main(int argc, char *argv[]) {
int opt;
options_t options = { 0, 0x0, stdin, stdout };
opterr = 0;
while ((opt = getopt(argc, argv, OPTSTR)) != EOF)
switch(opt) {
case 'i':
if (!(options.input = [fopen][6](optarg, "r")) ){
[perror][7](ERR_FOPEN_INPUT);
[exit][8](EXIT_FAILURE);
/* NOTREACHED */
}
break;
case 'o':
if (!(options.output = [fopen][6](optarg, "w")) ){
[perror][7](ERR_FOPEN_OUTPUT);
[exit][8](EXIT_FAILURE);
/* NOTREACHED */
}
break;
case 'f':
options.flags = (uint32_t )[strtoul][9](optarg, NULL, 16);
break;
case 'v':
options.verbose += 1;
break;
case 'h':
default:
usage(basename(argv[0]), opt);
/* NOTREACHED */
break;
}
if (do_the_needful(&options) != EXIT_SUCCESS) {
[perror][7](ERR_DO_THE_NEEDFUL);
[exit][8](EXIT_FAILURE);
/* NOTREACHED */
}
return EXIT_SUCCESS;
}
```
OK, that's a lot. The purpose of the **main()** function is to collect the arguments that the user provides, perform minimal input validation, and then pass the collected arguments to functions that will use them. This example declares an **options** variable initialized with default values and parses the command line, updating **options** as necessary.
The guts of this **main()** function is a **while** loop that uses **getopt()** to step through **argv** looking for command line options and their arguments (if any). The **OPTSTR** **#define** earlier in the file is the template that drives **getopt()** 's behavior. The **opt** variable takes on the character value of any command line options found by **getopt()** , and the program's response to the detection of the command line option happens in the **switch** statement.
Those of you paying attention will now be questioning why **opt** is declared as a 32-bit **int** but is expected to take on an 8-bit **char**? It turns out that **getopt()** returns an **int** that takes on a negative value when it gets to the end of **argv** , which I check against **EOF** (the _End of File_ marker). A **char** is a signed quantity, but I like matching variables to their function return values.
When a known command line option is detected, option-specific behavior happens. Some options have an argument, specified in **OPTSTR** with a trailing colon. When an option has an argument, the next string in **argv** is available to the program via the externally defined variable **optarg**. I use **optarg** to open files for reading and writing or converting a command line argument from a string to an integer value.
There are a couple of points for style here:
  * Initialize **opterr** to 0, which disables **getopt** from emitting a **?**.
* Use **exit(EXIT_FAILURE);** or **exit(EXIT_SUCCESS);** in the middle of **main()**.
* **/* NOTREACHED */** is a lint directive that I like.
* Use **return EXIT_SUCCESS;** at the end of functions that return **int**.
* Explicitly cast implicit type conversions.
The command line signature for this program, if it were compiled, would look something like this:
```
$ ./a.out -h
a.out [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]
```
In fact, that's what **usage()** will emit to **stderr** once compiled.
#### 8\. Function declarations
```
/* main.c */
<...>
void usage(char *progname, int opt) {
[fprintf][10](stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME);
[exit][8](EXIT_FAILURE);
/* NOTREACHED */
}
int do_the_needful(options_t *options) {
if (!options) {
errno = EINVAL;
return EXIT_FAILURE;
}
if (!options->input || !options->output) {
errno = ENOENT;
return EXIT_FAILURE;
}
/* XXX do needful stuff */
return EXIT_SUCCESS;
}
```
Finally, I write functions that aren't boilerplate. In this example, function **do_the_needful()** accepts a pointer to an **options_t** structure. I validate that the **options** pointer is not **NULL** and then go on to validate the **input** and **output** structure members. **EXIT_FAILURE** returns if either test fails and, by setting the external global variable **errno** to a conventional error code, I signal to the caller a general reason. The convenience function **perror()** can be used by the caller to emit human-readable-ish error messages based on the value of **errno**.
Functions should almost always validate their input in some way. If full validation is expensive, try to do it once and treat the validated data as immutable. The **usage()** function validates the **progname** argument using a conditional assignment in the **fprintf()** call. The **usage()** function is going to exit anyway, so I don't bother setting **errno** or making a big stink about using a correct program name.
The big class of errors I am trying to avoid here is de-referencing a **NULL** pointer. This will cause the operating system to send a special signal to my process called **SIGSEGV**, which results in unavoidable death. The last thing users want to see is a crash due to **SIGSEGV**. It's much better to catch a **NULL** pointer in order to emit better error messages and shut down the program gracefully.
Some people complain about having multiple **return** statements in a function body. They make arguments about "continuity of control flow" and other stuff. Honestly, if something goes wrong in the middle of a function, it's a good time to return an error condition. Writing a ton of nested **if** statements to just have one return is never a "good idea."™
Finally, if you write a function that takes four or more arguments, consider bundling them in a structure and passing a pointer to the structure. This makes the function signatures simpler, making them easier to remember and not screw up when they're called later. It also makes calling the function slightly faster, since fewer things need to be copied into the function's stack frame. In practice, this will only become a consideration if the function is called millions or billions of times. Don't worry about it if that doesn't make sense.
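A minimal sketch of that idea (the `job_t` type and `run_job()` function here are made up for illustration and are not part of the program above):
```
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef struct {
    FILE     *input;
    FILE     *output;
    uint32_t  flags;
    int       verbose;
} job_t;

/* instead of: int run_job(FILE *input, FILE *output, uint32_t flags, int verbose); */
int run_job(job_t *job) {
    if (!job)
        return EXIT_FAILURE;
    /* ... do the work using job->input, job->output, job->flags ... */
    return EXIT_SUCCESS;
}
```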
### Wait, you said no comments!?!!
In the **do_the_needful()** function, I wrote a specific type of comment that is designed to be a placeholder rather than documenting the code:
```
`/* XXX do needful stuff */`
```
When you are in the zone, sometimes you don't want to stop and write some particularly gnarly bit of code. You'll come back and do it later, just not now. That's where I'll leave myself a little breadcrumb. I insert a comment with a **XXX** prefix and a short remark describing what needs to be done. Later on, when I have more time, I'll grep through source looking for **XXX**. It doesn't matter what you use, just make sure it's not likely to show up in your codebase in another context, as a function name or variable, for instance.
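Finding those breadcrumbs later is then a one-liner (the `src/` path here is just an example; point it at your own tree):
```
$ grep -rn XXX src/
```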
### Putting it all together
OK, this program _still_ does almost nothing when you compile and run it. But now you have a solid skeleton to build your own command line parsing C programs.
```
/* main.c - the complete listing */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <libgen.h>
#include <errno.h>
#include <string.h>
#include <getopt.h>
#define OPTSTR "vi:o:f:h"
#define USAGE_FMT "%s [-v] [-f hexflag] [-i inputfile] [-o outputfile] [-h]"
#define ERR_FOPEN_INPUT "fopen(input, r)"
#define ERR_FOPEN_OUTPUT "fopen(output, w)"
#define ERR_DO_THE_NEEDFUL "do_the_needful blew up"
#define DEFAULT_PROGNAME "george"
extern int errno;
extern char *optarg;
extern int opterr, optind;
typedef struct {
int verbose;
uint32_t flags;
FILE *input;
FILE *output;
} options_t;
int dumb_global_variable = -11;
void usage(char *progname, int opt);
int do_the_needful(options_t *options);
int main(int argc, char *argv[]) {
int opt;
options_t options = { 0, 0x0, stdin, stdout };
opterr = 0;
while ((opt = getopt(argc, argv, OPTSTR)) != EOF)
switch(opt) {
case 'i':
if (!(options.input = [fopen][6](optarg, "r")) ){
[perror][7](ERR_FOPEN_INPUT);
[exit][8](EXIT_FAILURE);
/* NOTREACHED */
}
break;
case 'o':
if (!(options.output = [fopen][6](optarg, "w")) ){
[perror][7](ERR_FOPEN_OUTPUT);
[exit][8](EXIT_FAILURE);
/* NOTREACHED */
}
break;
case 'f':
options.flags = (uint32_t )[strtoul][9](optarg, NULL, 16);
break;
case 'v':
options.verbose += 1;
break;
case 'h':
default:
usage(basename(argv[0]), opt);
/* NOTREACHED */
break;
}
if (do_the_needful(&options) != EXIT_SUCCESS) {
[perror][7](ERR_DO_THE_NEEDFUL);
[exit][8](EXIT_FAILURE);
/* NOTREACHED */
}
return EXIT_SUCCESS;
}
void usage(char *progname, int opt) {
[fprintf][10](stderr, USAGE_FMT, progname?progname:DEFAULT_PROGNAME);
[exit][8](EXIT_FAILURE);
/* NOTREACHED */
}
int do_the_needful(options_t *options) {
if (!options) {
errno = EINVAL;
return EXIT_FAILURE;
}
if (!options->input || !options->output) {
errno = ENOENT;
return EXIT_FAILURE;
}
/* XXX do needful stuff */
return EXIT_SUCCESS;
}
```
Now you're ready to write C that will be easier to maintain. If you have any questions or feedback, please share them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/how-write-good-c-main-function
作者:[Erik O'Shaughnessy][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jnyjny
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code")
[2]: https://www.owasp.org/index.php/Null_Dereference
[3]: https://opensource.com/sites/default/files/uploads/hatingotherpeoplescode-big.png (Parody O'Reilly book cover, "Hating Other People's Code")
[4]: https://en.wikipedia.org/wiki/C_preprocessor
[5]: https://linux.die.net/man/3/getopt
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/strtoul.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html

View File

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Quick Look at Elvish Shell)
[#]: via: (https://itsfoss.com/elvish-shell/)
[#]: author: (John Paul https://itsfoss.com/author/john/)
A Quick Look at Elvish Shell
======
Everyone who comes to this site has some knowledge (no matter how slight) of the Bash shell that comes as the default on so many systems. Over the years, there have been several attempts to create shells that solve some of Bash's shortcomings. One such shell is Elvish, which we will look at today.
### What is Elvish Shell?
![Pipelines In Elvish][1]
[Elvish][2] is more than just a shell. It is [also][3] “an expressive programming language”. It has a number of interesting features including:
* Written in Go
* Built-in file manager, inspired by the [Ranger file manager][4] (`Ctrl + N`)
* Searchable command history (`Ctrl + R`)
* History of directories visited (`Ctrl + L`)
* Powerful pipelines that support structured data, such as lists, maps, and functions
  * Includes a “standard set of control structures: conditional control with `if`, loops with `for` and `while`, and exception handling with `try`”
* Support for [third-party modules via a package manager to extend Elvish][5]
* Licensed under the BSD 2-Clause license
“Why is it named Elvish?” I hear you shout. Well, according to [their website][6], they chose their current name because:
> In roguelikes, items made by the elves have a reputation of high quality. These are usually called elven items, but “elvish” was chosen because it ends with “sh”, a long tradition of Unix shells. It also rhymes with fish, one of the shells that influenced the philosophy of Elvish.
### How to Install Elvish Shell
Elvish is available in several mainstream distributions.
Note that the software is very young. The most recent version is 0.12. According to the project's [GitHub page][3]: “Despite its pre-1.0 status, it is already suitable for most daily interactive use.”
![Elvish Control Structures][7]
#### Debian and Ubuntu
Elvish packages were introduced into Debian Buster and Ubuntu 17.10. Unfortunately, those packages are out of date and you will need to use a [PPA][8] to install the latest version. You will need to use the following commands:
```
sudo add-apt-repository ppa:zhsj/elvish
sudo apt update
sudo apt install elvish
```
#### Fedora
Elvish is not available in the main Fedora repos. You will need to add the [FZUG Repository][9] to install Elvish. To do so, you will need to use these commands:
```
sudo dnf config-manager --add-repo=http://repo.fdzh.org/FZUG/FZUG.repol
sudo dnf install elvish
```
#### Arch
Elvish is available in the [Arch User Repository][10].
I believe you know [how to change shell in Linux][11], so after installing Elvish you can switch to it and start using it.
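As a quick reminder, that usually comes down to something like this (assuming `elvish` ended up on your PATH and is listed in `/etc/shells`):
```
chsh -s $(which elvish)
```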
### Final Thoughts on Elvish Shell
Personally, I have no reason to install Elvish on any of my systems. I can get most of its features by installing a couple of small command line programs or using already installed programs.
For example, the ability to search past commands already exists in Bash and it works pretty well. If you want to improve your ability to search past commands, I would recommend installing [fzf][12] instead. Fzf uses fuzzy search, so you don't need to remember the exact command you are looking for. Fzf also allows you to preview and open files.
I do think that the fact that Elvish is also a programming language is neat, but I'll stick with Bash shell scripting until Elvish matures a little more.
Have you ever used Elvish? Do you think it would be worthwhile to install Elvish? What is your favorite Bash replacement? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][13].
--------------------------------------------------------------------------------
via: https://itsfoss.com/elvish-shell/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/pipelines-in-elvish.png?fit=800%2C421&ssl=1
[2]: https://elv.sh/
[3]: https://github.com/elves/elvish
[4]: https://ranger.github.io/
[5]: https://github.com/elves/awesome-elvish
[6]: https://elv.sh/ref/name.html
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Elvish-control-structures.png?fit=800%2C425&ssl=1
[8]: https://launchpad.net/%7Ezhsj/+archive/ubuntu/elvish
[9]: https://github.com/FZUG/repo/wiki/Add-FZUG-Repository
[10]: https://aur.archlinux.org/packages/elvish/
[11]: https://linuxhandbook.com/change-shell-linux/
[12]: https://github.com/junegunn/fzf
[13]: http://reddit.com/r/linuxusersgroup

View File

@ -0,0 +1,244 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Packit packaging in Fedora with minimal effort)
[#]: via: (https://fedoramagazine.org/packit-packaging-in-fedora-with-minimal-effort/)
[#]: author: (Petr Hracek https://fedoramagazine.org/author/phracek/)
Packit packaging in Fedora with minimal effort
======
![][1]
### What is packit
Packit ([https://packit.dev/)][2] is a CLI tool that helps you automatically maintain your upstream projects in the Fedora operating system. But what does that really mean?
As a developer, you might want to update your package in Fedora. If you've done it in the past, you know it's no easy task. If you haven't, let me reiterate: it's no easy task.
And this is exactly where packit can help: once you have your package in Fedora, you can maintain your SPEC file upstream and, with just one additional configuration file, packit will help you update your package in Fedora when you update your source code upstream.
Furthermore, packit can synchronize downstream changes to a SPEC file back into the upstream repository. This could be useful if the SPEC file of your package is changed in Fedora repositories and you would like to synchronize it into your upstream project.
Packit also provides a way to build an SRPM package based on an upstream repository checkout, which can be used for building RPM packages in COPR.
Last but not least, packit provides a status command. This command provides information about upstream and downstream repositories, like pull requests, releases, and more.
Packit also provides two other commands: _build_ and _create-update_.
The command _packit build_ performs a production build of your project in Koji, Fedora's build system. You can choose the Fedora version you want to build against using the _--dist-git-branch_ option. The command _packit create-update_ creates a Bodhi update for the specified branch using the same _--dist-git-branch_ option.
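Put together, a typical session with these commands might look like the sketch below. The exact subcommand and flag spellings can differ between packit versions (the SRPM subcommand name and the `f30` branch are assumptions here), so treat this as an illustration rather than a reference:
```
$ packit srpm                                  # build an SRPM from the current checkout
$ packit status                                # show upstream/downstream information
$ packit build --dist-git-branch f30           # production build in Koji
$ packit create-update --dist-git-branch f30   # create a Bodhi update
```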
### Installation
You can install packit on Fedora using dnf:
```
sudo dnf install -y packit
```
### Configuration
As a demonstration use case, I have selected the upstream repository of **colin** ([https://github.com/user-cont/colin)][3]. Colin is a tool to check generic rules and best practices for containers, dockerfiles, and container images.
First of all, clone **colin** git repository:
```
$ git clone https://github.com/user-cont/colin.git
$ cd colin
```
Packit expects to run in the root of your git repository.
Packit ([https://github.com/packit-service/packit/)][4] needs information about your project, which has to be stored in the upstream repository in the _.packit.yaml_ file (<https://github.com/packit-service/packit/blob/master/docs/configuration.md#projects-configuration-file>).
See colin's packit configuration file:
```
$ cat .packit.yaml
specfile_path: colin.spec
synced_files:
- .packit.yaml
- colin.spec
upstream_project_name: colin
downstream_package_name: colin
```
What do the values mean?
* _specfile_path_: a relative path to a spec file within the upstream repository (mandatory)
* _synced_files_: a list of relative paths to files in the upstream repo which are meant to be copied to dist-git during an update
* _upstream_project_name_: the name of the upstream repository (e.g. on PyPI); this is used in the %prep section
* _downstream_package_name_: the name of the package in Fedora (mandatory)
For more information, see the packit configuration documentation (<https://github.com/packit-service/packit/blob/master/docs/configuration.md>).
### What can packit do?
A prerequisite for using packit is that you are in the working directory of a git checkout of your upstream project.
Before running any packit command, you need to take several actions. These actions are mandatory for filing a PR into the upstream or downstream repositories and for having access to the Fedora dist-git repositories.
Export GitHub token taken from <https://github.com/settings/tokens>:
```
$ export GITHUB_TOKEN=<YOUR_TOKEN>
```
Obtain the Kerberos ticket needed for the Fedora Account System (FAS):
```
$ kinit <yourname>@FEDORAPROJECT.ORG
```
Export your Pagure API keys taken from <https://src.fedoraproject.org/settings#nav-api-tab>:
```
$ export PAGURE_USER_TOKEN=<PAGURE_USER_TOKEN>
```
Packit also needs a fork token to create a pull request. The token is taken from <https://src.fedoraproject.org/fork/YOU/rpms/PACKAGE/settings#apikeys-tab>
Do it by running:
```
$ export PAGURE_FORK_TOKEN=<PAGURE_FORK_TOKEN>
```
Or store these tokens in the **~/.config/packit.yaml** file:
```
$ cat ~/.config/packit.yaml
github_token: <GITHUB_TOKEN>
pagure_user_token: <PAGURE_USER_TOKEN>
pagure_fork_token: <PAGURE_FORK_TOKEN>
```
#### Propose a new upstream release in Fedora
The command for this first use case is called _**propose-update**_ (<https://github.com/jpopelka/packit/blob/master/docs/propose_update.md>). The command creates a new pull request in the Fedora dist-git repository using a selected or the latest upstream release.
```
$ packit propose-update
INFO: Running 'anitya' versioneer
Version in upstream registries is '0.3.1'.
Version in spec file is '0.3.0'.
WARNING Version in spec file is outdated
Picking version of the latest release from the upstream registry.
Checking out upstream version 0.3.1
Using 'master' dist-git branch
Copying /home/vagrant/colin/colin.spec to /tmp/tmptfwr123c/colin.spec.
Archive colin-0.3.0.tar.gz found in lookaside cache (skipping upload).
INFO: Downloading file from URL https://files.pythonhosted.org/packages/source/c/colin/colin-0.3.0.tar.gz
100%[=============================>] 3.18M eta 00:00:00
Downloaded archive: '/tmp/tmptfwr123c/colin-0.3.0.tar.gz'
About to upload to lookaside cache
won't be doing kinit, no credentials provided
PR created: https://src.fedoraproject.org/rpms/colin/pull-request/14
```
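The command can also be pointed at a specific upstream release instead of the latest one. Passing the version as a positional argument, as sketched below, is an assumption on my part, so confirm it with _packit propose-update --help_:
```
# Propose an update for a chosen release rather than the newest tag
$ packit propose-update 0.3.1
```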
Once the command finishes, you can see a PR in the Fedora Pagure instance which is based on the latest upstream release. Once you review it, it can be merged.
![][5]
#### Sync downstream changes back to the upstream repository
Another use case is to sync downstream changes into the upstream project repository.
The command for this purpose is called _**sync-from-downstream**_ (<https://github.com/jpopelka/packit/blob/master/docs/sync-from-downstream.md>). Files synced into the upstream repository are mentioned in the _packit.yaml_ configuration file under the _synced_files_ value.
```
$ packit sync-from-downstream
upstream active branch master
using "master" dist-git branch
Copying /tmp/tmplvxqtvbb/colin.spec to /home/vagrant/colin/colin.spec.
Creating remote fork-ssh with URL git@github.com:phracek/colin.git.
Pushing to remote fork-ssh using branch master-downstream-sync.
PR created: https://github.com/user-cont/colin/pull/229
```
As soon as packit finishes, you can see the latest changes taken from the Fedora dist-git repository in the upstream repository. This can be useful, e.g., when Release Engineering performs mass rebuilds and updates your SPEC file in the Fedora dist-git repository.
![][6]
#### Get the status of your upstream project
If you are a developer, you may want to get all the information about the latest releases, tags, pull requests, etc. from the upstream and the downstream repository. Packit provides the _**status**_ command for this purpose.
```
$ packit status
Downstream PRs:
ID Title URL
---- -------------------------------- ---------------------------------------------------------
14 Update to upstream release 0.3.1 https://src.fedoraproject.org//rpms/colin/pull-request/14
12 Upstream pr: 226 https://src.fedoraproject.org//rpms/colin/pull-request/12
11 Upstream pr: 226 https://src.fedoraproject.org//rpms/colin/pull-request/11
8 Upstream pr: 226 https://src.fedoraproject.org//rpms/colin/pull-request/8
Dist-git versions:
f27: 0.2.0
f28: 0.2.0
f29: 0.2.0
f30: 0.2.0
master: 0.2.0
GitHub upstream releases:
0.3.1
0.3.0
0.2.1
0.2.0
0.1.0
Latest builds:
f27: colin-0.2.0-1.fc27
f28: colin-0.3.1-1.fc28
f29: colin-0.3.1-1.fc29
f30: colin-0.3.1-2.fc30
Latest bodhi updates:
Update Karma status
------------------ ------- --------
colin-0.3.1-1.fc29 1 stable
colin-0.3.1-1.fc28 1 stable
colin-0.3.0-2.fc28 0 obsolete
```
#### Create an SRPM
The last packit use case is to generate an SRPM package based on a git checkout of your upstream project. The packit command for SRPM generation is _**srpm**_.
```
$ packit srpm
Version in spec file is '0.3.1.37.g00bb80e'.
SRPM: /home/phracek/work/colin/colin-0.3.1.37.g00bb80e-1.fc29.src.rpm
```
### Packit as a service
In the summer, the people behind packit would like to introduce packit as a service (<https://github.com/packit-service/packit-service>). In this case, the packit GitHub application will be installed into the upstream repository and packit will perform all the actions automatically, based on the events it receives from GitHub or fedmsg.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/packit-packaging-in-fedora-with-minimal-effort/
作者:[Petr Hracek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/phracek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/packit3-816x345.png
[2]: https://packit.dev/
[3]: https://github.com/user-cont/colin
[4]: https://github.com/packit-service/packit/
[5]: https://fedoramagazine.org/wp-content/uploads/2019/05/colin_pr-1024x781.png
[6]: https://fedoramagazine.org/wp-content/uploads/2019/05/colin_upstream_pr-1-1024x677.png

View File

@ -0,0 +1,145 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A short primer on assemblers, compilers, and interpreters)
[#]: via: (https://opensource.com/article/19/5/primer-assemblers-compilers-interpreters)
[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny/users/shawnhcorey/users/jnyjny/users/jnyjny)
A short primer on assemblers, compilers, and interpreters
======
A gentle introduction to the historical evolution of programming
practices.
![keyboard with connected dots][1]
In the early days of computing, hardware was expensive and programmers were cheap. In fact, programmers were so cheap they weren't even called "programmers" and were in fact usually mathematicians or electrical engineers. Early computers were used to solve complex mathematical problems quickly, so mathematicians were a natural fit for the job of "programming."
### What is a program?
First, a little background. Computers can't do anything by themselves, so they require programs to drive their behavior. Programs can be thought of as very detailed recipes that take an input and produce an output. The steps in the recipe are composed of instructions that operate on data. While that sounds complicated, you probably know how this statement works:
```
1 + 2 = 3
```
The plus sign is the "instruction" while the numbers 1 and 2 are the data. Mathematically, the equal sign indicates that both sides of an equation are "equivalent," however most computer languages use some variant of equals to mean "assignment." If a computer were executing that statement, it would store the results of the addition (the "3") somewhere in memory.
Computers know how to do math with numbers and move data around the machine's memory hierarchy. I won't say too much about memory except that it generally comes in two different flavors: fast/small and slow/big. CPU registers are very fast, very small and act as scratch pads. Main memory is typically very big and not nearly as fast as register memory. CPUs shuffle the data they are working with from main memory to registers and back again while a program executes.
### Assemblers
Computers were very expensive and people were cheap. Programmers spent endless hours translating hand-written math into computer instructions that the computer could execute. The very first computers had terrible user interfaces, some only consisting of toggle switches on the front panel. The switches represented 1s and 0s in a single "word" of memory. The programmer would configure a word, indicate where to store it, and commit the word to memory. It was time-consuming and error-prone.
![Programmers operate the ENIAC computer][2]
_Programmers[Betty Jean Jennings][3] (left) and [Fran Bilas][4] (right) operate [ENIAC's][5] main control panel._
Eventually, an [electrical engineer][6] decided his time wasn't cheap and wrote a program whose input was a "recipe" expressed in terms people could read and whose output was a computer-readable version. This was the first "assembler" and it was very controversial. The people that owned the expensive machines didn't want to "waste" compute time on a task that people were already doing, albeit slowly and with errors. Over time, people came to appreciate the speed and accuracy of the assembler versus a hand-assembled program, and the amount of "real work" done with the computer increased.
While assembler programs were a big step up from toggling bit patterns into the front panel of a machine, they were still pretty specialized. The addition example above might have looked something like this:
```
01 MOV R0, 1
02 MOV R1, 2
03 ADD R0, R1, R2
04 MOV 64, R0
05 STO R2, R0
```
Each line is a computer instruction, beginning with a shorthand name of the instruction followed by the data the instruction works on. This little program will first "move" the value 1 into a register called R0, then 2 into register R1. Line 03 adds the contents of registers R0 and R1 and stores the resulting value into register R2. Finally, lines 04 and 05 identify where the result should be stored in main memory (address 64). Managing where data is stored in memory is one of the most time-consuming and error-prone parts of writing computer programs.
### Compilers
Assembly was much better than writing computer instructions by hand; however, early programmers yearned to write programs like they were accustomed to writing mathematical formulae. This drove the development of higher-level compiled languages, some of which are historical footnotes and others are still in use today. [ALGO][7] is one such footnote, while real problems continue to be solved today with languages like [Fortran][8] and [C][9].
![Genealogy tree of ALGO and Fortran][10]
Genealogy tree of ALGO and Fortran programming languages
The introduction of these "high-level" languages allowed programmers to write their programs in simpler terms. In the C language, our addition assembly program would be written:
```
int x;
x = 1 + 2;
```
The first statement describes a piece of memory the program will use. In this case, the memory should be the size of an integer and its name is **x**. The second statement is the addition, although written "backward." A C programmer would read that as "x is assigned the result of one plus two." Notice the programmer doesn't need to say where to put **x** in memory, as the compiler takes care of that.
A new type of program called a "compiler" would turn the program written in a high-level language into an assembly language version and then run it through the assembler to produce a machine-readable version of the program. This composition of programs is often called a "toolchain," in that one program's output is sent directly to another program's input.
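You can still watch this toolchain work step by step on a modern Unix-like system. The sketch below assumes the two statements above have been saved in a file called add.c and wrapped in a main function so they form a complete program:
```
# Stop after the compiler stage and keep the generated assembly (add.s)
$ cc -S add.c

# Run the assembler on that output to produce an object file
$ as -o add.o add.s

# Link the object file into a runnable executable
$ cc -o add add.o
```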
The huge advantage of compiled languages over assembly language programs was porting from one computer model or brand to another. In the early days of computing, there was an explosion of different types of computing hardware from companies like IBM, Digital Equipment Corporation, Texas Instruments, UNIVAC, Hewlett Packard, and others. None of these computers shared much in common besides needing to be plugged into an electrical power supply. Memory and CPU architectures differed wildly, and it often took man-years to translate programs from one computer to another.
With high-level languages, the compiler toolchain only had to be ported to the new platform. Once the compiler was available, high-level language programs could be recompiled for a new computer with little or no modification. Compilation of high-level languages was truly revolutionary.
![IBM PC XT][11]
The IBM PC XT, released in 1983, is an early example of the decreasing cost of hardware.
Life became very good for programmers. It was much easier to express the problems they wanted to solve using high-level languages. The cost of computer hardware was falling dramatically due to advances in semiconductors and the invention of integrated chips. Computers were getting faster and more capable, as well as much less expensive. At some point, possibly in the late '80s, there was an inversion and programmers became more expensive than the hardware they used.
### Interpreters
Over time, a new programming model rose where a special program called an "interpreter" would read a program and turn it into computer instructions to be executed immediately. The interpreter takes the program as input and interprets it into an intermediate form, much like a compiler. Unlike a compiler, the interpreter then executes the intermediate form of the program. This happens every time an interpreted program runs, whereas a compiled program is compiled just one time and the computer executes the machine instructions "as written."
As a side note, when people say "interpreted programs are slow," this is the main source of the perceived lack of performance. Modern computers are so amazingly capable that most people can't tell the difference between compiled and interpreted programs.
Interpreted programs, sometimes called "scripts," are even easier to port to different hardware platforms. Because the script doesn't contain any machine-specific instructions, a single version of a program can run on many different computers without changes. The catch, of course, is the interpreter must be ported to the new machine to make that possible.
One example of a very popular interpreted language is [perl][12]. A complete perl expression of our addition problem would be:
```
$x = 1 + 2
```
While it looks and acts much like the C version, it lacks the variable initialization statement. There are other differences (which are beyond the scope of this article), but you can see that we can write a computer program that is very close to how a mathematician would write it by hand with pencil and paper.
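To see the interpreter at work, the same expression can be handed straight to perl on the command line; the print call is added here only so the result is visible:
```
$ perl -e '$x = 1 + 2; print "$x\n";'
3
```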
### Virtual Machines
The latest craze in programming models is the virtual machine, often abbreviated as VM. There are two flavors of virtual machine: system virtual machines and process virtual machines. Both types of VMs provide a level of abstraction from the "real" computing hardware, though they have different scopes. A system virtual machine is software that offers a substitute for the physical hardware, while a process virtual machine is designed to execute a program in a system-independent manner. So in this case, a process virtual machine (virtual machine from here on) is similar in scope to an interpreter in that a program is first compiled into an intermediate form before the virtual machine executes it.
The main difference between an interpreter and a virtual machine is the virtual machine implements an idealized CPU accessed through its virtual instruction set. This abstraction makes it possible to write front-end language tools that compile programs written in different languages and target the virtual machine. Probably the most popular and well known virtual machine is the Java Virtual Machine (JVM). The JVM was initially only for the Java programming language back in the 1990s, but it now hosts [many][13] popular computer languages: Scala, Jython, JRuby, Clojure, and Kotlin to list just a few. There are other examples that may not be common knowledge. I only recently learned that my favorite language, [Python][14], is not an interpreted language, but a [language hosted on a virtual machine][15]!
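As a quick illustration of that last point, you can ask the standard library's dis module to print the virtual machine instructions CPython generates for our addition statement. This is just a peek under the hood; the exact opcodes shown depend on your Python version:
```
# Disassemble the statement into CPython VM instructions
$ python3 -c 'import dis; dis.dis("x = 1 + 2")'
```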
Virtual machines continue the historical trend of reducing the amount of platform-specific knowledge a programmer needs to express their problem in a language that supports their domain-specific needs.
### That's a wrap
I hope you enjoy this primer on some of the less visible parts of software. Are there other topics you want me to dive into next? Let me know in the comments.
* * *
_This article was originally published on[PyBites][16] and is reprinted with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/primer-assemblers-compilers-interpreters
作者:[Erik O'Shaughnessy][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jnyjny/users/shawnhcorey/users/jnyjny/users/jnyjny
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keyboard_coding.png?itok=E0Vvam7A (keyboard with connected dots)
[2]: https://opensource.com/sites/default/files/uploads/two_women_operating_eniac.gif (Programmers operate the ENIAC computer)
[3]: https://en.wikipedia.org/wiki/Jean_Bartik (Jean Bartik)
[4]: https://en.wikipedia.org/wiki/Frances_Spence (Frances Spence)
[5]: https://en.wikipedia.org/wiki/ENIAC
[6]: https://en.wikipedia.org/wiki/Nathaniel_Rochester_%28computer_scientist%29
[7]: https://en.wikipedia.org/wiki/ALGO
[8]: https://en.wikipedia.org/wiki/Fortran
[9]: https://en.wikipedia.org/wiki/C_(programming_language)
[10]: https://opensource.com/sites/default/files/uploads/algolfortran_family-by-borkowski.png (Genealogy tree of ALGO and Fortran)
[11]: https://opensource.com/sites/default/files/uploads/639px-ibm_px_xt_color.jpg (IBM PC XT)
[12]: https://www.perl.org
[13]: https://en.wikipedia.org/wiki/List_of_JVM_languages
[14]: /resources/python
[15]: https://opensource.com/article/18/4/introduction-python-bytecode
[16]: https://pybit.es/python-interpreters.html

View File

@ -0,0 +1,484 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Creating a Source-to-Image build pipeline in OKD)
[#]: via: (https://opensource.com/article/19/5/creating-source-image-build-pipeline-okd)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
Creating a Source-to-Image build pipeline in OKD
======
S2I is an ideal way to build and compile Go applications in a repeatable
way, and it just gets better when paired with OKD BuildConfigs.
![][1]
In the first three articles in this series, we explored the general [requirements][2] of a Source-to-Image (S2I) system and [prepared][3] and [tested][4] an environment specifically for a Go (Golang) application. This S2I build is perfect for local development or maintaining a builder image with a code pipeline, but if you have access to an [OKD][5] or OpenShift cluster (or [Minishift][6]), you can set up the entire workflow using OKD BuildConfigs, not only to build and maintain the builder image but also to use the builder image to create the application image and subsequent runtime image automatically. This way, the images can be rebuilt automatically when downstream images change and can trigger OKD deploymentConfigs to redeploy applications running from these images.
### Step 1: Build the builder image in OKD
As in local S2I usage, the first step is to create the builder image to build the GoHelloWorld test application that we can reuse to compile other Go-based applications. This first build step will be a Docker build, just like before, that pulls the Dockerfile and S2I scripts from a Git repository to build the image. Therefore, those files must be committed and available in a public Git repo (or you can use the companion [GitHub repo][7] for this article).
_Note:_ OKD BuildConfigs do not require that source Git repos are public. To use a private repo, you must set up deploy keys and link the keys to a builder service account. This is not difficult, but for simplicity's sake, this exercise will use a public repository.
#### Create an image stream for the builder image
The BuildConfig will create a builder image for us to compile the GoHelloWorld app, but first, we need a place to store the image. In OKD, that place is an image stream.
An [image stream][8] and its tags are like a manifest or list of related images and image tags. It serves as an abstraction layer that allows you to reference an image, even if the image changes. Think of it as a collection of aliases that reference specific images and, as images are updated, automatically point to the new image version. The image stream is nothing except these aliases—just metadata about real images stored in a registry.
An image stream can be created with the **oc create imagestream <name>** command, or it can be created from a YAML file with **oc create -f <filename>**. Either way, a brand-new image stream is a small placeholder object that is empty until it is populated with image references, either manually (who wants to do things manually?) or with a BuildConfig.
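For reference, the manual one-liner mentioned above would look like this for our builder image (we will use the YAML approach below instead):
```
# Create an empty image stream named golang-builder in the current project
$ oc create imagestream golang-builder
```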
Our golang-builder image stream looks like this:
```
# imageStream-golang-builder.yaml
---
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  generation: 1
  name: golang-builder
spec:
  lookupPolicy:
    local: false
```
Other than a name, and a (mostly) empty spec, there is nothing really there.
_Note:_ The **lookupPolicy** has to do with allowing Kubernetes-native components to resolve image stream references since image streams are OKD-native and not a part of the Kubernetes core. This topic is out of scope for this article, but you can read more about how it works in OKD's documentation [Using Image Streams with Kubernetes Resources][9].
Create an image stream for the builder image and its progeny.
```
$ oc create -f imageStream-golang-builder.yaml
# Check the ImageStream
$ oc get imagestream golang-builder
NAME DOCKER REPO TAGS UPDATED
imagestream.image.openshift.io/golang-builder docker-registry.default.svc:5000/golang-builder/golang-builder
```
Note that the newly created image stream has no tags and has never been updated.
#### Create a BuildConfig for the builder image
In OKD, a [BuildConfig][10] describes how to build container images from a specific source, and the triggers for when they build. Don't be thrown off by the language: just as you might say you build and re-build the same image from a Dockerfile when, in reality, you have built multiple images, a BuildConfig builds and rebuilds "the same image" while, in reality, it creates multiple images. (And suddenly the reason for image streams becomes much clearer!)
Our builder image BuildConfig describes how to build and re-build our builder image(s). The BuildConfig's core is made up of four important parts:
1. Build source
2. Build strategy
3. Build output
4. Build triggers
The _build source_ (predictably) describes where the thing that runs the build comes from. The builds described by the golang-builder BuildConfig will use the Dockerfile and S2I scripts we created previously and, using the Git-type build source, clone a Git repository to get the files to do the builds.
```
source:
  type: Git
  git:
    ref: master
    uri: https://github.com/clcollins/golang-s2i.git
```
The _build strategy_ describes what the build will do with the source files from the build source. The golang-builder BuildConfig mimics the **docker build** we used previously to build our local builder image by using the Docker-type build strategy.
```
strategy:
  type: Docker
  dockerStrategy: {}
```
The **dockerStrategy** build type tells OKD to build a Docker image from the Dockerfile contained in the source specified by the build source.
The _build output_ tells the BuildConfig what to do with the resulting image. In this case, we specify the image stream we created above and a tag to give to the image. As with our local build, we're tagging it with **golang-builder:1.12** as a reference to the Go version inherited from the parent image.
```
output:
  to:
    kind: ImageStreamTag
    name: golang-builder:1.12
```
Finally, the BuildConfig defines a set of _build triggers_ —events that will cause the image to be rebuilt automatically. For this BuildConfig, a change to the BuildConfig configuration or an update to the upstream image (golang:1.12) will trigger a new build.
```
triggers:
- type: ConfigChange
- imageChange:
  type: ImageChange
```
Using the [builder image BuildConfig][11] from the GitHub repo as a reference (or just using that file), create a BuildConfig YAML file and use it to create the BuildConfig.
```
$ oc create -f buildConfig-golang-builder.yaml
# Check the BuildConfig
$ oc get bc golang-builder
NAME TYPE FROM LATEST
golang-builder Docker Git@master 1
```
Because the BuildConfig included the "ImageChange" trigger, it immediately kicks off a new build. You can check that the build was created with the **oc get builds** command.
```
# Check the Builds
$ oc get builds
NAME TYPE FROM STATUS STARTED DURATION
golang-builder-1 Docker Git@8eff001 Complete About a minute ago 13s
```
While the build is running and after it has completed, you can view its logs with **oc logs -f <build name>** and see the Docker build output as you would locally.
```
$ oc logs -f golang-builder-1-build
Step 1/11 : FROM docker.io/golang:1.12
---> 7ced090ee82e
Step 2/11 : LABEL maintainer "Chris Collins <[collins.christopher@gmail.com][12]>"
---> 7ad989b765e4
Step 3/11 : ENV CGO_ENABLED 0 GOOS linux GOCACHE /tmp STI_SCRIPTS_PATH /usr/libexec/s2i SOURCE_DIR /go/src/app
---> 2cee2ce6757d
<...>
```
If you did not include any build triggers (or did not have them in the right place), your build may not start automatically. You can manually kick off a new build with the **oc start-build** command.
```
$ oc start-build golang-builder
# Or, if you want to automatically tail the build log
$ oc start-build golang-builder --follow
```
When the build completes, the resulting image is tagged and pushed to the integrated image registry and the image stream is updated with the new image's information. Check the image stream with the **oc get imagestream** command to see that the new tag exists.
```
$ oc get imagestream golang-builder
NAME DOCKER REPO TAGS UPDATED
golang-builder docker-registry.default.svc:5000/golang-builder/golang-builder 1.12 33 seconds ago
```
### Step 2: Build the application image in OKD
Now that we have a builder image for our Golang applications created and stored within OKD, we can use this builder image to compile all of our Go apps. First on the block is the example GoHelloWorld app from our [local build example][4]. GoHelloWorld is a simple Go app that just outputs **Hello World!** when it's run.
Just as we did in the local example with the **s2i build** command, we can tell OKD to use our builder image and S2I to build the application image for GoHelloWorld, compiling the Go binary from the source code in the [GoHelloWorld GitHub repository][13]. This can be done with a BuildConfig with a **sourceStrategy** build.
#### Create an image stream for the application image
First things first, we need to create an image stream to manage the image created by the BuildConfig. The image stream is just like the golang-builder image stream, just with a different name. Create it with **oc create is** or using a YAML file from the [GitHub repo][7].
```
$ oc create -f imageStream-goHelloWorld-appimage.yaml
imagestream.image.openshift.io/go-hello-world-appimage created
```
#### Create a BuildConfig for the application image
Just as we did with the builder image BuildConfig, this BuildConfig will use the Git source option to clone our source code from the GoHelloWorld repository.
```
source:
  type: Git
  git:
    uri: https://github.com/clcollins/goHelloWorld.git
```
Instead of using a DockerStrategy build to create an image from a Dockerfile, this BuildConfig will use the sourceStrategy definition to build the image using S2I.
```
strategy:
  type: Source
  sourceStrategy:
    from:
      kind: ImageStreamTag
      name: golang-builder:1.12
```
Note the **from:** hash in sourceStrategy. This tells OKD to use the **golang-builder:1.12** image we created previously for the S2I build.
The BuildConfig will output to the new **appimage** image stream we created, and we'll include config- and image-change triggers to kick off new builds automatically if anything updates.
```
output:
  to:
    kind: ImageStreamTag
    name: go-hello-world-appimage:1.0
triggers:
- type: ConfigChange
- imageChange:
  type: ImageChange
```
Once again, create a BuildConfig or use the one from the GitHub repo.
```
$ oc create -f buildConfig-goHelloWorld-appimage.yaml
```
The new build shows up alongside the golang-builder build and, because image-change triggers are specified, the build starts immediately.
```
$ oc get builds
NAME TYPE FROM STATUS STARTED DURATION
golang-builder-1 Docker Git@8eff001 Complete 8 minutes ago 13s
go-hello-world-appimage-1 Source Git@99699a6 Running 44 seconds ago
```
If you want to watch the build logs, use the **oc logs -f** command. Once the application image build completes, it is pushed to the image stream we specified, then the new image stream tag is created.
```
$ oc get is go-hello-world-appimage
NAME DOCKER REPO TAGS UPDATED
go-hello-world-appimage docker-registry.default.svc:5000/golang-builder/go-hello-world-appimage 1.0 10 minutes ago
```
Success! The GoHelloWorld app was cloned from source into a new image and compiled and tested using our S2I scripts. We can use the image as-is but, as with our local S2I builds, we can do better and create an image with just the new Go binary in it.
### Step 3: Build the runtime image in OKD
Now that the application image has been created with a compiled Go binary for the GoHelloWorld app, we can use something called chain builds to mimic when we extracted the binary from our local application image and created a new runtime image with just the binary in it.
#### Create an image stream for the runtime image
Once again, the first step is to create an image stream image for the new runtime image.
```
# Create the ImageStream
$ oc create -f imageStream-goHelloWorld.yaml
imagestream.image.openshift.io/go-hello-world created
# Get the ImageStream
$ oc get imagestream go-hello-world
NAME DOCKER REPO TAGS UPDATED
go-hello-world docker-registry.default.svc:5000/golang-builder/go-hello-world
```
#### Chain builds
Chain builds are when one or more BuildConfigs are used to compile software or assemble artifacts for an application, and those artifacts are saved and used by a subsequent BuildConfig to generate a runtime image without re-compiling the code.
![Chain Build workflow][14]
Chain build workflow
#### Create a BuildConfig for the runtime image
The runtime BuildConfig uses the DockerStrategy build to build the image from a Dockerfile—the same thing we did with the builder image BuildConfig. This time, however, the source is not a Git source, but a Dockerfile source.
What is the Dockerfile source? It's an inline Dockerfile! Instead of cloning a repo with a Dockerfile in it and building that, we specify the Dockerfile in the BuildConfig. This is especially appropriate with our runtime Dockerfile because it's just three lines long.
```
source:
  type: Dockerfile
  dockerfile: |-
    FROM scratch
    COPY app /app
    ENTRYPOINT ["/app"]
  images:
  - from:
      kind: ImageStreamTag
      name: go-hello-world-appimage:1.0
    paths:
    - sourcePath: /go/src/app/app
      destinationDir: "."
```
Note that the Dockerfile in the Dockerfile source definition above is the same as the Dockerfile we used in the [third article][4] in this series when we built the slim GoHelloWorld image locally using the binary we extracted with the S2I **save-artifacts** script.
Something else to note: **scratch** is a reserved word in Dockerfiles. Unlike other **FROM** statements, it does not define an _actual_ image, but rather that the first layer of this image will be nothing. It is defined with **kind: DockerImage** but does not have a registry or group/namespace/project string. Learn more about this behavior in this excellent [container best practices][15] reference.
The **images** section of the Dockerfile source describes the source of the artifact(s) to be used in the build; in this case, from the appimage generated earlier. The **paths** subsection describes where to get the binary (i.e., in the **/go/src/app** directory of the app image, get the **app** binary) and where to save it (i.e., in the current working directory of the build itself: **"."** ). This allows the **COPY app /app** to grab the binary from the current working directory and add it to **/app** in the runtime image.
_Note:_ **paths** is an array of source and destination path _pairs_. Each entry in the list consists of a source and destination. In the example above, there is just one entry because there is just a single binary to copy.
The Docker strategy is then used to build the inline Dockerfile.
```
strategy:
  type: Docker
  dockerStrategy: {}
```
Once again, it is output to the image stream created earlier and includes build triggers to automatically kick off new builds.
```
output:
  to:
    kind: ImageStreamTag
    name: go-hello-world:1.0
triggers:
- type: ConfigChange
- imageChange:
  type: ImageChange
```
Create a BuildConfig YAML or use the runtime BuildConfig from the GitHub repo.
```
$ oc create -f buildConfig-goHelloWorld.yaml
buildconfig.build.openshift.io/go-hello-world created
```
If you watch the logs, you'll notice the first step is **FROM scratch** , which confirms we're adding the compiled binary to a blank image.
```
$ oc logs -f pod/go-hello-world-1-build
Step 1/5 : FROM scratch
--->
Step 2/5 : COPY app /app
---> 9e70e6c710f8
Removing intermediate container 4d0bd9cef0a7
Step 3/5 : ENTRYPOINT /app
---> Running in 7a2dfeba28ca
---> d697577910fc
<...>
```
Once the build is completed, check the image stream tag to validate that the new image was pushed to the registry and image stream was updated.
```
$ oc get imagestream go-hello-world
NAME DOCKER REPO TAGS UPDATED
go-hello-world docker-registry.default.svc:5000/golang-builder/go-hello-world 1.0 4 minutes ago
```
Make a note of the **DOCKER REPO** string for the image. It will be used in the next section to run the image.
### Did we create a tiny, binary-only image?
Finally, let's validate that we did, indeed, build a tiny image with just the binary.
Check out the image details. First, get the image's name from the image stream.
```
$ oc describe imagestream go-hello-world
Name: go-hello-world
Namespace: golang-builder
Created: 42 minutes ago
Labels: <none>
Annotations: <none>
Docker Pull Spec: docker-registry.default.svc:5000/golang-builder/go-hello-world
Image Lookup: local=false
Unique Images: 1
Tags: 1
1.0
no spec tag
* docker-registry.default.svc:5000/golang-builder/go-hello-world@sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053
13 minutes ago
```
The image is listed at the bottom, described with the SHA hash (e.g., **sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053** ; yours will be different).
Get the details of the image using the hash.
```
$ oc describe image sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053
Docker Image: docker-registry.default.svc:5000/golang-builder/go-hello-world@sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053
Name: sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053
Created: 15 minutes ago
Annotations: image.openshift.io/dockerLayersOrder=ascending
image.openshift.io/manifestBlobStored=true
openshift.io/image.managed=true
Image Size: 1.026MB
Image Created: 15 minutes ago
Author: <none>
Arch: amd64
Entrypoint: /app
Working Dir: <none>
User: <none>
Exposes Ports: <none>
Docker Labels: io.openshift.build.name=go-hello-world-1
io.openshift.build.namespace=golang-builder
Environment: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
OPENSHIFT_BUILD_NAME=go-hello-world-1
OPENSHIFT_BUILD_NAMESPACE=golang-builder
```
Notice the image size, 1.026MB, is exactly as we want. The image is a scratch image with just the binary inside it!
### Run a pod with the runtime image
Using the runtime image we just created, let's create a pod on-demand and run it and validate that it still works.
This almost never happens in Kubernetes/OKD, but we will run a pod, just a pod, by itself.
```
$ oc run -it go-hello-world --image=docker-registry.default.svc:5000/golang-builder/go-hello-world:1.0 --restart=Never
Hello World!
```
Everything is working as expected—the image runs and outputs "Hello World!" just as it did in the previous, local S2I builds.
By creating this workflow in OKD, we can use the golang-builder S2I image for any Go application. This builder image is in place and built for any other applications, and it will auto-update and rebuild itself anytime the upstream golang:1.12 image changes.
New apps can be built automatically using the S2I build by creating a chain build strategy in OKD with an appimage BuildConfig to compile the source and the runtime BuildConfig to create the final image. Using the build triggers, any change to the source code in the Git repo will trigger a rebuild through the entire pipeline, rebuilding the appimage and the runtime image automatically.
This is a great way to maintain updated images for any application. Paired with an OKD deploymentConfig with an image build trigger, long-running applications (e.g., webapps) will be automatically redeployed when new code is committed.
Source-to-Image is an ideal way to develop builder images to build and compile Go applications in a repeatable way, and it just gets better when paired with OKD BuildConfigs.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/creating-source-image-build-pipeline-okd
作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/blocks_building.png?itok=eMOT-ire
[2]: https://opensource.com/article/19/5/source-image-golang-part-1
[3]: https://opensource.com/article/19/5/source-image-golang-part-2
[4]: https://opensource.com/article/19/5/source-image-golang-part-3
[5]: https://www.okd.io/
[6]: https://github.com/minishift/minishift
[7]: https://github.com/clcollins/golang-s2i.git
[8]: https://docs.okd.io/latest/architecture/core_concepts/builds_and_image_streams.html#image-streams
[9]: https://docs.okd.io/latest/dev_guide/managing_images.html#using-is-with-k8s
[10]: https://docs.okd.io/latest/dev_guide/builds/index.html#defining-a-buildconfig
[11]: https://github.com/clcollins/golang-s2i/blob/master/okd/buildConfig-golang-builder.yaml
[12]: mailto:collins.christopher@gmail.com
[13]: https://github.com/clcollins/goHelloWorld.git
[14]: https://opensource.com/sites/default/files/uploads/chainingbuilds.png (Chain Build workflow)
[15]: http://docs.projectatomic.io/container-best-practices/#_from_scratch

View File

@ -0,0 +1,90 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Learn Python with these awesome resources)
[#]: via: (https://opensource.com/article/19/5/resources-learning-python)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
Learn Python with these awesome resources
======
Expand your Python knowledge by adding these resources to your personal
learning network.
![Book list, favorites][1]
I've been using and teaching Python for a long time now, but I'm always interested in increasing my knowledge about this practical and useful programming language. That's why I've been trying to expand my Python [personal learning network][2] (PLN), a concept that describes informal and mutually beneficial networks for sharing information.
Educators [Kelly Paredes][3] and [Sean Tibor][4] recently talked about how to build your Python PLN on their podcast, [Teaching Python][5], which I subscribed to after meeting them at [PyCon 2019][6] in Cleveland (and adding them to my Python PLN). This podcast inspired me to think more about the people in my Python PLN, including those I met recently at PyCon.
I'll share some of the places I've met members of my PLN; maybe they can become part of your Python PLN, too.
### Young Coders mentors
[Betsy Waliszewski][7], the event coordinator for the Python Foundation, is a member of my Python PLN. When we ran into each other at PyCon2019, because I'm a teacher, she recommended I check out the [Young Coders][8] workshop for kids ages 12 and up. There, I met [Katie Cunningham][9], who was running the program, which taught participants how to set up and configure a Raspberry Pi and use Python. The young students also received two books: _[Python for Kids][10]_ by Jason Briggs and _[Learn to Program with Minecraft][11]_ by Craig Richardson. I'm always looking for new ways to improve my teaching, so I quickly picked up two copies of the Minecraft book at [NoStarch Press][12]' booth at the conference. Katie is a great teacher and a prolific author with a wonderful [YouTube][13] channel full of Python training videos.
I added Katie to my PLN, along with two other people I met at the Young Coders workshop: [Nat Dunn][14] and [Sean Valentine][15]. Like Katie, they were volunteering their time to introduce young programmers to Python. Nat is the president of [Webucator][16], an IT training company that has been a sponsor of the Python Software Foundation for several years and sponsored the PyCon 2018 Education Summit. He decided to teach at Young Coders after teaching Python to his 13-year-old son and 14-year-old nephew. Sean is the director of strategic initiatives at the [Hidden Genius Project][17], a technology and leadership mentoring program for black male youth. Sean said many Hidden Genius participants "built projects using Python, so we saw [Young Coders] as a great opportunity to partner." Learning about the Hidden Genius Project has inspired me to think deeper about the implications of coding and its power to change lives.
### Open Spaces meetups
I found PyCon's [Open Spaces][18], self-organizing, impromptu hour-long meetups, just as useful as the official programmed events. One of my favorites was about the [Circuit Playground Express][19] device, which was part of our conference swag bags. I am fascinated by this device, and the Open Space provided an avenue to learn more. The organizers offered a worksheet and a [GitHub][20] repo with all the tools we needed to be successful, as well as an opportunity for hands-on learning and direction to explore this unique hardware.
This meetup whetted my appetite to learn even more about programming the Circuit Playground Express, so after PyCon, I reached out on Twitter to [Nina Zakharenko][21], who [presented a keynote][22] at the conference about programming the device. Nina has been in my Python PLN since last fall when I heard her talk at [All Things Open][23], and I recently signed up for her [Python Fundamentals][24] class to add to my learning. Nina recommended I add [Kattni Rembor][25], whose [code examples][26] are helping me learn to program with CircuitPython, to my Python PLN.
### Other resources from my PLN
I also met fellow [Opensource.com][27] Community Moderator [Moshe Zadka][28] at PyCon2019 and talked with him at length. He shared several new Python resources, including _[How to Think Like a Computer Scientist][29]_. Community Moderator [Seth Kenlon][30] is another member of my PLN; he has published many great [Python articles][31], and I recommend you follow him, too.
My Python personal learning network continues to grow each day. Besides the folks I have already mentioned, I recommend you follow [Al Sweigart][32], [Eric Matthes][33], and [Adafruit][34] because they share great content. I also recommend the book _[Make: Getting Started with Adafruit Circuit Playground Express][35]_ and [Podcast.__init__][36], a podcast all about the Python community, both of which I learned about from my PLN.
Who is in your Python PLN? Please share your favorites in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/5/resources-learning-python
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/reading_book_stars_list.png?itok=Iwa1oBOl (Book list, favorites)
[2]: https://en.wikipedia.org/wiki/Personal_learning_network
[3]: https://www.teachingpython.fm/hosts/kellypared
[4]: https://twitter.com/smtibor
[5]: https://www.teachingpython.fm/20
[6]: https://us.pycon.org/2019/
[7]: https://www.linkedin.com/in/betsywaliszewski
[8]: https://us.pycon.org/2019/events/letslearnpython/
[9]: https://www.linkedin.com/in/kcunning/
[10]: https://nostarch.com/pythonforkids
[11]: https://nostarch.com/programwithminecraft
[12]: https://nostarch.com/
[13]: https://www.youtube.com/c/KatieCunningham
[14]: https://www.linkedin.com/in/natdunn/
[15]: https://www.linkedin.com/in/sean-valentine-b370349b/
[16]: https://www.webucator.com/
[17]: http://www.hiddengeniusproject.org/
[18]: https://us.pycon.org/2019/events/open-spaces/
[19]: https://www.adafruit.com/product/3333
[20]: https://github.com/adafruit/PyCon2019
[21]: https://twitter.com/nnja
[22]: https://www.youtube.com/watch?v=35mXD40SvXM
[23]: https://allthingsopen.org/
[24]: https://frontendmasters.com/courses/python/
[25]: https://twitter.com/kattni
[26]: https://github.com/kattni/ChiPy_2018
[27]: http://Opensource.com
[28]: https://opensource.com/users/moshez
[29]: http://openbookproject.net/thinkcs/python/english3e/
[30]: https://opensource.com/users/seth
[31]: https://www.google.com/search?source=hp&ei=gVToXPq-FYXGsAW-mZ_YAw&q=site%3Aopensource.com+%22Seth+Kenlon%22+%2B+Python&oq=site%3Aopensource.com+%22Seth+Kenlon%22+%2B+Python&gs_l=psy-ab.12...627.15303..15584...1.0..0.176.2802.4j21......0....1..gws-wiz.....0..35i39j0j0i131j0i67j0i20i263.r2SAW3dxlB4
[32]: http://alsweigart.com/
[33]: https://twitter.com/ehmatthes?lang=en
[34]: https://twitter.com/adafruit
[35]: https://www.adafruit.com/product/3944
[36]: https://www.pythonpodcast.com/episodes/

View File

@ -0,0 +1,126 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use Firefox Send with ffsend in Fedora)
[#]: via: (https://fedoramagazine.org/use-firefox-send-with-ffsend-in-fedora/)
[#]: author: (Sylvia Sánchez https://fedoramagazine.org/author/lailah/)
Use Firefox Send with ffsend in Fedora
======
![][1]
_ffsend_ is the command line client of Firefox Send. This article will show how Firefox Send and _ffsend_ work. It'll also detail how it can be installed and used in Fedora.
### What are Firefox Send and ffsend?
Firefox Send is a file sharing tool from Mozilla that allows sending encrypted files to other users. You can install Send on your own server, or use the Mozilla-hosted link [send.firefox.com][2]. The hosted version officially supports files up to 1 GB, and links expire after a configurable download count (default of 1) or 24 hours, after which all the files on the Send server are deleted. This tool is still _in an experimental phase_, and therefore shouldn't be used in production or to share important or sensitive data.
While Firefox Send is the tool itself and can be used with a web interface, _ffsend_ is a command-line utility you can use with scripts and arguments. It has a wide range of configuration options and can be left working in the background without any human intervention.
### How does it work?
_ffsend_ can both upload and download files. The recipient can use either the Firefox Send web interface or another web browser to download the file. Neither Firefox Send nor _ffsend_ requires the use of Firefox.
It's important to highlight that _ffsend_ uses client-side encryption. This means that files are encrypted _before_ they're uploaded. The secret is shared as part of the link, so be careful when sharing it, because anyone with the link will be able to download the file. As an extra layer of protection, you can protect the file with a password by using the following argument:
```
ffsend password URL -p PASSWORD
```
### Other features
There are a few other features worth mentioning; here's a list, with a short example after it:
* Configurable download limit, between 1 and 20 times, before the link expires
* Built-in extract and archiving functions
* Track history of shared files
* Inspect or delete shared files
* Folders can be shared as well, either as they are or as compressed files
* Generate a QR code, for easier download on a mobile phone
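For instance, several of these options can be combined on a single upload. The flag names and the folder path below are assumptions based on the feature list above, so verify them with _ffsend upload --help_ before relying on them:
```
# Upload a folder as an archive, limit it to 5 downloads, and print a QR code
# (./reports/ is just a placeholder path)
$ ffsend upload --archive --downloads 5 --qrcode ./reports/
```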
### How to install in Fedora
While Firefox Send works in Firefox without installing anything extra, you'll need to install the CLI tool to use _ffsend_. This tool is in the official repositories, so you only need a simple _dnf_ command [with][3] _[sudo][3]_.
```
$ sudo dnf install ffsend
```
After that, you can use _ffsend_ from the terminal.
### Upload a file
Uploading a file is as simple as:
```
$ ffsend upload /etc/os-release
Upload complete
Share link: https://send.firefox.com/download/05826227d70b9a4b/#RM_HSBq6kuyeBem8Z013mg
```
The file can now easily be shared using the Share link URL.
### Downloading a file
Downloading a file is as simple as uploading.
```
$ ffsend download https://send.firefox.com/download/05826227d70b9a4b/#RM_HSBq6kuyeBem8Z013mg
Download complete
```
Before downloading a file, it might be useful to check whether the file exists and get information about it. _ffsend_ provides two handy commands for that.
```
$ ffsend exists https://send.firefox.com/download/88a6324e2a99ebb6/#YRJDh8ZDQsnZL2KZIA-PaQ
Exists: true
Password: false
$ ffsend info https://send.firefox.com/download/88a6324e2a99ebb6/#YRJDh8ZDQsnZL2KZIA-PaQ
ID: 88a6324e2a99ebb6
Downloads: 0 of 1
Expiry: 23h59m (86388s)
```
### Upload history
_ffsend_ also provides a way to check the history of the uploads made with the tool. This can be really useful if you upload a lot of files during a scripted task, for example, and you want to keep track of each file's download status.
```
$ ffsend history
LINK EXPIRY
1 https://send.firefox.com/download/#8TJ9QNw 23h59m
2 https://send.firefox.com/download/KZIA-PaQ 23h54m
```
### Delete a file
Another useful feature is the ability to delete a file.
```
ffsend delete https://send.firefox.com/download/2d9faa7f34bb1478/#phITKvaYBjCGSRI8TJ9QNw
```
Firefox Send is a great service, and the _ffsend_ tool makes it really convenient to use from the terminal. More examples and documentation are available in _ffsend_'s [GitLab repository][4].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/use-firefox-send-with-ffsend-in-fedora/
作者:[Sylvia Sánchez][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/lailah/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/firefox-send-816x345.png
[2]: http://send.firefox.com/
[3]: https://fedoramagazine.org/howto-use-sudo/
[4]: https://gitlab.com/timvisee/ffsend

View File

@ -0,0 +1,44 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How many browser tabs do you usually have open?)
[#]: via: (https://opensource.com/article/19/6/how-many-browser-tabs)
[#]: author: (Lauren Pritchett https://opensource.com/users/lauren-pritchett/users/sarahwall/users/ksonney/users/jwhitehurst)
How many browser tabs do you usually have open?
======
Plus, get a few tips for browser productivity.
![Browser of things][1]
Here's a potentially loaded question: How many browser tabs do you usually have open at one time? Do you have multiple windows, each with multiple tabs? Or are you a minimalist, and only have a couple of tabs open at once? Another option is to move a 20-tabbed browser window to a different monitor so that it is out of the way while working on a particular task. Does your approach differ between work, personal, and mobile browsers? Is your browser strategy related to your [productivity habits][2]?
### 4 tips for browser productivity
1. Know your browser shortcuts to save clicks. Whether you use Firefox or Chrome, there are plenty of keyboard shortcuts to help make switching between tabs and performing certain functions a breeze. For example, Chrome makes it easy to open up a blank Google document. Use the shortcut **"Ctrl + t"** to open a new tab, then type **"doc.new"**. The same can be done for spreadsheets, slides, and forms.
2. Organize your most frequent tasks with bookmark folders. When it's time to start a particular task, simply open all of the bookmarks in the folder **(Ctrl + click)** to check it off your list quickly.
3. Get the right browser extensions for you. There are thousands of browser extensions out there all claiming to improve productivity. Before you install, make sure you're not just adding more distractions to your screen.
4. Reduce screen time by using a timer. It doesn't matter if you use an old-fashioned egg timer or a fancy browser extension. To prevent eye strain, implement the 20/20/20 rule. Every 20 minutes, take a 20-second break from your screen and look at something 20 feet away.
Take our poll to share how many browser tabs you like to have open at once. Be sure to tell us about your favorite browser tricks in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/how-many-browser-tabs
作者:[Lauren Pritchett][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lauren-pritchett/users/sarahwall/users/ksonney/users/jwhitehurst
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things)
[2]: https://enterprisersproject.com/article/2019/1/5-time-wasting-habits-break-new-year

View File

@ -0,0 +1,214 @@
[#]: collector: (lujun9972)
[#]: translator: (runningwater)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to set up virtual environments for Python on MacOS)
[#]: via: (https://opensource.com/article/19/6/virtual-environments-python-macos)
[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg/users/moshez/users/mbbroberg/users/moshez)
How to set up virtual environments for Python on MacOS
======
Save yourself a lot of confusion by managing your virtual environments
with pyenv and virtualenvwrapper.
![][1]
If you're a Python developer and a MacOS user, one of your first tasks upon getting a new computer is to set up your Python development environment. Here is the best way to do it (although we have written about [other ways to manage Python environments on MacOS][2]).
### Preparation
First, open a terminal and enter **xcode-select --install** at its cold, uncaring prompt. Click to confirm, and you'll be all set with a basic development environment. This step is required on MacOS to set up local development utilities, including "many commonly used tools, utilities, and compilers, including make, GCC, clang, perl, svn, git, size, strip, strings, libtool, cpp, what, and many other useful commands that are usually found in default Linux installations," according to [OS X Daily][3].
Next, install [Homebrew][4] by executing the following Ruby script from the internet:
```
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```
If you, like me, have trust issues with arbitrarily running scripts from the internet, click on the script above and take a longer look to see what it does.
Once this is done, congratulations, you have an excellent package management tool in Homebrew. Naively, you might think that you next **brew install python** or something. No, haha. Homebrew will give you a version of Python, but the version you get will be out of your control if you let the tool manage your environment for you. You want [pyenv][5], "a tool for simple Python version management," that can be installed on [many operating systems][6]. Run:
```
$ brew install pyenv
```
You want pyenv to run every time you open your prompt, so include the following in your configuration files (by default on MacOS, this is **.bash_profile** in your home directory):
```
$ cd ~/
$ echo 'eval "$(pyenv init -)"' >> .bash_profile
```
By adding this line, every new terminal will initiate pyenv to manage the **PATH** environment variable in your terminal and insert the version of Python you want to run (as opposed to the first one that shows up in the environment). For more information, read "[How to set your $PATH variable in Linux][7]." Open a new terminal for the updated **.bash_profile** to take effect.
Before installing your favorite version of Python, you'll want to install a couple of helpful tools:
```
$ brew install zlib sqlite
```
The [zlib][8] compression algorithm and the [SQLite][9] database are dependencies for pyenv and often [cause build problems][10] when not configured correctly. Add these exports to your current terminal window to ensure the installation completes:
```
$ export LDFLAGS="-L/usr/local/opt/zlib/lib -L/usr/local/opt/sqlite/lib"
$ export CPPFLAGS="-I/usr/local/opt/zlib/include -I/usr/local/opt/sqlite/include"
```
Now that the preliminaries are done, it's time to install a version of Python that is fit for a modern person in the modern age:
```
$ pyenv install 3.7.3
```
Go have a cup of coffee. From beans you hand-roast. After you pick them. What I'm saying here is it's going to take some time.
### Adding virtual environments
Once it's finished, it's time to make your virtual environments pleasant to use. Without this next step, you will effectively be sharing one Python development environment for every project you work on. Using virtual environments to isolate dependency management on a per-project basis will give us more certainty and reproducibility than Python offers out of the box. For these reasons, install **virtualenvwrapper** into the Python environment:
```
$ pyenv global 3.7.3
# Be sure to keep the $() syntax in this command so it can evaluate
$ $(pyenv which python3) -m pip install virtualenvwrapper
```
Open your **.bash_profile** again and add the following to be sure it works each time you open a new terminal:
```
# We want to regularly go to our virtual environment directory
$ echo 'export WORKON_HOME=~/.virtualenvs' >> .bash_profile
# If in a given virtual environment, make a virtual environment directory
# If one does not already exist
$ echo 'mkdir -p $WORKON_HOME' >> .bash_profile
# Activate the new virtual environment by calling this script
# Note that $USER will substitute for your current user
$ echo '. ~/.pyenv/versions/3.7.3/bin/virtualenvwrapper.sh' >> .bash_profile
```
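For reference, after running the commands above, the pyenv- and virtualenvwrapper-related portion of **.bash_profile** should look roughly like this (the path assumes the 3.7.3 install from earlier):
```
# initialize pyenv so it manages PATH and the selected Python version
eval "$(pyenv init -)"

# keep all virtual environments under one directory
export WORKON_HOME=~/.virtualenvs
mkdir -p $WORKON_HOME

# load virtualenvwrapper from the pyenv-installed Python
. ~/.pyenv/versions/3.7.3/bin/virtualenvwrapper.sh
```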
Close the terminal and open a new one (or run **exec /bin/bash -l** to refresh the current terminal session), and you'll see **virtualenvwrapper** initializing the environment:
```
$ exec /bin/bash -l
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/premkproject
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postmkproject
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/initialize
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/premkvirtualenv
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postmkvirtualenv
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/prermvirtualenv
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postrmvirtualenv
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/predeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postdeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/preactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/postactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/get_env_details
```
From now on, all your work should be in a virtual environment, allowing you to use temporary environments to play around with development safely. With this toolchain, you can set up multiple projects and switch between them, depending upon what you're working on at that moment:
```
$ mkvirtualenv test1
Using base prefix '/Users/moshe/.pyenv/versions/3.7.3'
New python executable in /Users/moshe/.virtualenvs/test1/bin/python3
Also creating executable in /Users/moshe/.virtualenvs/test1/bin/python
Installing setuptools, pip, wheel...
done.
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/predeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/postdeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/preactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/postactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test1/bin/get_env_details
(test1)$ mkvirtualenv test2
Using base prefix '/Users/moshe/.pyenv/versions/3.7.3'
New python executable in /Users/moshe/.virtualenvs/test2/bin/python3
Also creating executable in /Users/moshe/.virtualenvs/test2/bin/python
Installing setuptools, pip, wheel...
done.
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/predeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/postdeactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/preactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/postactivate
virtualenvwrapper.user_scripts creating /Users/moshe/.virtualenvs/test2/bin/get_env_details
(test2)$ ls $WORKON_HOME
get_env_details postmkvirtualenv premkvirtualenv
initialize postrmvirtualenv prermvirtualenv
postactivate preactivate test1
postdeactivate predeactivate test2
postmkproject premkproject
(test2)$ workon test1
(test1)$
```
The **deactivate** command exits you from the current environment.
### Recommended practices
You may already set up your long-term projects in a directory like **~/src**. When you start working on a new project, go into this directory, add a subdirectory for the project, then use the power of Bash interpretation to name the virtual environment based on your directory name. For example, for a project named "pyfun":
```
$ mkdir -p ~/src/pyfun && cd ~/src/pyfun
$ mkvirtualenv $(basename $(pwd))
# we will see the environment initialize
(pyfun)$ workon
pyfun
test1
test2
(pyfun)$ deactivate
$
```
Whenever you want to work on this project, go back to that directory and reconnect to the virtual environment by entering:
```
$ cd ~/src/pyfun
(pyfun)$ workon .
```
Since initializing a virtual environment means taking a point-in-time copy of your Python version and the modules that are loaded, you will occasionally want to refresh the project's virtual environment, as dependencies can change dramatically. You can do this safely by deleting the virtual environment because the source code will remain unscathed:
```
$ cd ~/src/pyfun
$ rmvirtualenv $(basename $(pwd))
$ mkvirtualenv $(basename $(pwd))
```
This method of managing virtual environments with pyenv and virtualenvwrapper will save you from uncertainty about which version of Python you are running as you develop code locally. This is the simplest way to avoid confusion—especially when you're working with a larger team.
If you are just beginning to configure your Python environment, read up on how to use [Python 3 on MacOS][2]. Do you have other beginner or intermediate Python questions? Leave a comment and we will consider them for the next article.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/virtual-environments-python-macos
作者:[Matthew Broberg][a]
选题:[lujun9972][b]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mbbroberg/users/moshez/users/mbbroberg/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_snake_file_box.jpg?itok=UuDVFLX-
[2]: https://opensource.com/article/19/5/python-3-default-macos
[3]: http://osxdaily.com/2014/02/12/install-command-line-tools-mac-os-x/
[4]: https://brew.sh/
[5]: https://github.com/pyenv/pyenv
[6]: https://github.com/pyenv/pyenv/wiki
[7]: https://opensource.com/article/17/6/set-path-linux
[8]: https://zlib.net/
[9]: https://www.sqlite.org/index.html
[10]: https://github.com/pyenv/pyenv/wiki/common-build-problems#build-failed-error-the-python-zlib-extension-was-not-compiled-missing-the-zlib

View File

@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to stream music with GNOME Internet Radio)
[#]: via: (https://opensource.com/article/19/6/gnome-internet-radio)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss/users/r3bl)
How to stream music with GNOME Internet Radio
======
If you're looking for a simple, straightforward interface that gets your
streams playing, try GNOME's Internet Radio plugin.
![video editing dashboard][1]
Internet radio is a great way to listen to stations from all over the world. Like many developers, I like to turn on a station as I code. You can listen to internet radio with a media player for the terminal like [MPlayer][2] or [mpv][3], which is what I use to listen via the Linux command line. However, if you prefer using a graphical user interface (GUI), you may want to try [GNOME Internet Radio][4], a nifty plugin for the GNOME desktop. You can find it in the package manager.
![GNOME Internet Radio plugin][5]
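By the way, for the command-line option mentioned above, playing a stream with mpv is usually just a matter of passing it the stream URL. For example, using the Synthetic FM address that appears later in this article:
```
mpv https://mediaserv38.live-streams.nl:2199/tunein/syntheticfm.pls
```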
Listening to internet radio with a graphical desktop operating system generally requires you to launch an application such as [Audacious][6] or [Rhythmbox][7]. They have nice interfaces, plenty of options, and cool audio visualizers. But if you want a simple, straightforward interface that gets your streams playing, GNOME Internet Radio is for you.
After installing it, a small icon appears in your toolbar, which is where you do all your configuration and management.
![GNOME Internet Radio icons][8]
The first thing I did was go to the Settings menu. I enabled the following two options: show title notifications and show volume adjustment.
![GNOME Internet Radio Settings][9]
GNOME Internet Radio includes a few pre-configured stations, and it is really easy to add others. Just click the ( **+** ) sign. You'll need to enter a channel name, which can be anything you prefer (including the station name), and the station address. For example, I like to listen to Synthetic FM. I enter the name, e.g., "Synthetic FM," and the stream address, i.e., <https://mediaserv38.live-streams.nl:2199/tunein/syntheticfm.pls>.
Then click the star next to the stream to add it to your menu.
However you listen to music and whatever genre you choose, it is obvious—coders need their music! The GNOME Internet Radio plugin makes it simple to get your favorite internet radio station queued up.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/gnome-internet-radio
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss/users/r3bl
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
[2]: https://opensource.com/article/18/12/linux-toy-mplayer
[3]: https://mpv.io/
[4]: https://extensions.gnome.org/extension/836/internet-radio/
[5]: https://opensource.com/sites/default/files/uploads/packagemanager_s.png (GNOME Internet Radio plugin)
[6]: https://audacious-media-player.org/
[7]: https://help.gnome.org/users/rhythmbox/stable/
[8]: https://opensource.com/sites/default/files/uploads/titlebaricons.png (GNOME Internet Radio icons)
[9]: https://opensource.com/sites/default/files/uploads/gnomeinternetradio_settings.png (GNOME Internet Radio Settings)

View File

@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Aging in the open: How this community changed us)
[#]: via: (https://opensource.com/open-organization/19/6/four-year-celebration)
[#]: author: (Bryan Behrenshausen https://opensource.com/users/bbehrens)
Aging in the open: How this community changed us
======
Our community dedicated to exploring open organizational culture and
design turns four years old this week.
![Browser window with birthday hats and a cake][1]
A community will always surprise you.
That's not an easy statement for someone like me to digest. I'm not one for surprises. I revel in predictability. I thrive on consistency.
A passionate and dedicated community offers few of these comforts. Participating in something like [the open organization community at Opensource.com][2]—which [turns four years old this week][3]—means acquiescing to dynamism, to constant change. Every day brings novelty. Every correspondence is packed with possibility. Every interaction reveals undisclosed pathways.
To [a certain type of person][4] (me again), it can be downright terrifying.
But that unrelenting and genuine surprise is the [very source of a community's richness][5], its sheer abundance. If a community is the nucleus of all those reactions that catalyze innovations and breakthroughs, then unpredictability and serendipity are its fuel. I've learned to appreciate it—more accurately, perhaps, to stand in awe of it. Four years ago, when the Opensource.com team heeded [Jim Whitehurst's call][6] to build a space for others to "share your thoughts and opinions… on how you think we can all lead and work better in the future" (see the final page of _The Open Organization_ ), we had little more than a mandate, a platform, and a vision. We'd be an open organization [committed to studying, learning from, and propagating open organizations][7]. The rest was a surprise—or rather, a series of surprises:
* [Hundreds of articles, reviews, guides, and tutorials][2] on infusing open principles into organizations of all sizes across industries
* [A book series][8] spanning five volumes (with [another currently in production][9])
* A detailed, comprehensive, community-maintained [definition of the "open organization" concept][10]
* A [robust maturity model][11] for anyone seeking to understand how that definition might (or might not) support their own work
All of that—everything you see there, [and more][12]—is the work of a community that never stopped conversing, never ceased creating, never failed to outsmart itself. No one could have predicted it. No one could have [planned for it][13]. We simply do our best to keep up with it.
And after four years the work continues, more focused and impassioned than ever. Remaining involved with this bunch of writers, educators, consultants, coaches, leaders, and mentors—all united by their belief that openness is the surest source of hope for organizations struggling to address the challenges of our age—has made me appreciate the power of the utterly surprising. I'm even getting a little more comfortable with it.
That's been its gift to me. But the gifts it has given each of its participants have been equally special.
As we celebrate four years of challenges, collaboration, and camaraderie this week, let's recount those surprising gifts by hearing from some of the members:
* * *
Four years of the open organization community—congratulations to all!
My first thought was to look at the five most-read articles over the past four years. Here they are:
* [5 laws every aspiring DevOps engineer should know][14]
* [What value do you bring to your company?][15]
* [8 answers to management questions from an open point of view][16]
* [What to do when you're feeling underutilized][17]
* [What's the point of DevOps?][18]
All great articles. And then I started to think: Of all the great content over the past four years, which articles have impacted me the most?
I remembered reading several great articles about meetings and how to make them more effective. So I typed "opensource.com meetings" into my search engine, and these two wonderful articles were at the top of the results list:
* [The secret to better one-on-one meetings][19]
* [Time to rethink your team's approach to meetings][20]
Articles like that have inspired my favorite open organization management principle, which I've tried to apply and has made a huge difference: All meetings are optional.
**—Jeff Mackanic, senior director, Marketing, Red Hat**
* * *
Being a member of the "open community" has reminded me of the power of getting things done via values and shared purpose without command and control—something that seems more important than ever in today's fragmented and often abusive management world, and at a time when truth and transparency themselves are under attack.
Four years is a long journey for this kind of initiative—but there's still so much to learn and understand about what makes "open" work and what it will take to accelerate the embrace of its principles more widely through different domains of work and society. Congratulations on all you, your colleagues, partners, and other members of the community have done thus far!
**—Brook Manville, Principal, Brook Manville LLC, author of _A Company of Citizens_ and co-author of _The Harvard Business Review Leader's Handbook_**
* * *
The Open Organization Ambassador program has, in the last four years, become an inspired community of experts. We have defined what it means to be a truly open organization. We've written books, guides, articles, and other resources for learning about, understanding, and implementing open principles. We've done this while bringing open principles to other communities, and we've done this together.
For me, personally and professionally, the togetherness is the best part of this endeavor. I have learned so much from my colleagues. I'm absolutely ecstatic to be one of the idealists and activists in this community—committed to making our workplaces more equitable and open.
**—Laura Hilliger, co-founder, We Are Open Co-Op, and[Open Organization Ambassador][21]**
* * *
Finding the open organization community opened me up to knowing that there are others out there who thought as I did. I wasn't alone. My ideas on leadership and the workplace were not crazy. This sense of belonging increased once I joined the Ambassador team. Our monthly meetings are never long enough. I don't like when we have to hang up because each session is full of laughter, sharpening each other, idea exchange, and joy. Humans seek community. We search for people who share values and ideals as we do but who also push back and help us expand. This is the gift of the open organization community—expansion, growth, and lifelong friendships. Thank you to all of those who contribute their time, intellect, content, and whole self to this awesome think tank that is changing the shape of how we organize to solve problems!
**—Jen Kelchner, founder and Chief Change Architect, LDR21, and[Open Organization Ambassador][21]**
* * *
Happy fourth birthday, open organization community! Thank you for being an ever-present reminder in my life that being open is better than being closed, that listening is more fruitful than telling, and that together we can achieve more.
**—Michael Doyle, professional coach and[Open Organization Ambassador][21]**
* * *
Wow, what a journey it's been exploring the world of open organizations. We're seeing more interest now than ever before. It's amazing to see what this community has done and I'm excited to see what the future holds for open organizations and open leadership.
**—Jason Hibbets, senior community architect, Red Hat**
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/6/four-year-celebration
作者:[Bryan Behrenshausen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bbehrens
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/happy_birthday_anniversary_celebrate_hats_cake.jpg?itok=Zfsv6DE_ (Browser window with birthday hats and a cake)
[2]: https://opensource.com/open-organization
[3]: https://opensource.com/open-organization/15/5/introducing-open-organization
[4]: https://opensource.com/open-organization/18/11/design-communities-personality-types
[5]: https://opensource.com/open-organization/18/1/why-build-community-1
[6]: https://www.redhat.com/en/explore/the-open-organization-book
[7]: https://opensource.com/open-organization/resources/ambassadors-program
[8]: https://opensource.com/open-organization/resources/book-series
[9]: https://opensource.com/open-organization/19/5/educators-guide-project
[10]: https://opensource.com/open-organization/resources/open-org-definition
[11]: https://opensource.com/open-organization/resources/open-org-maturity-model
[12]: https://opensource.com/open-organization/resources
[13]: https://opensource.com/open-organization/19/2/3-misconceptions-agile
[14]: https://opensource.com/open-organization/17/5/5-devops-laws
[15]: https://opensource.com/open-organization/15/7/what-value-do-you-bring-your-company
[16]: https://opensource.com/open-organization/16/5/open-questions-and-answers-about-open-management
[17]: https://opensource.com/open-organization/17/4/feeling-underutilized
[18]: https://opensource.com/open-organization/17/5/what-is-the-point-of-DevOps
[19]: https://opensource.com/open-organization/18/5/open-one-on-one-meetings-guide
[20]: https://opensource.com/open-organization/18/3/open-approaches-meetings
[21]: https://opensource.com/open-organization/resources/meet-ambassadors

View File

@ -0,0 +1,224 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create a CentOS homelab in an hour)
[#]: via: (https://opensource.com/article/19/6/create-centos-homelab-hour)
[#]: author: (Bob Murphy https://opensource.com/users/murph)
Create a CentOS homelab in an hour
======
Set up a self-sustained set of basic Linux servers with nothing more
than a system with virtualization software, a CentOS ISO, and about an
hour of your time.
![metrics and data shown on a computer screen][1]
When working on new Linux skills (or, as I was, studying for a Linux certification), it is helpful to have a few virtual machines (VMs) available on your laptop so you can do some learning on the go.
But what happens if you are working somewhere without a good internet connection and you want to work on a web server? What about using other software that you don't already have installed? If you were depending on downloading it from the distribution's repositories, you may be out of luck. With a bit of preparation, you can set up a [homelab][2] that will allow you to install anything you need wherever you are, with or without a network connection.
The requirements are:
* A downloaded ISO file of the Linux distribution you intend to use (for example, CentOS, Red Hat, etc.)
* A host computer with virtualization. I use [Fedora][3] with [KVM][4] and [virt-manager][5], but any Linux will work similarly. You could even use Windows or Mac with virtualization, with some difference in implementation
* About an hour of time
### 1\. Create a VM for your repo host
Use virt-manager to create a VM with modest specs; 1GB RAM, one CPU, and 16GB of disk space are plenty.
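If you prefer the command line to virt-manager, a roughly equivalent VM can be created with **virt-install** (this is just a sketch; the VM name and ISO path are placeholders):
```
virt-install --name repo-host --memory 1024 --vcpus 1 \
  --disk size=16 --os-variant centos7.0 \
  --cdrom /path/to/CentOS-7-x86_64-DVD.iso
```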
Install [CentOS 7][6] on the VM.
![Installing a CentOS homelab][7]
Select your language and continue.
Click _Installation Destination_, select your local disk, mark the _Automatically Configure Partitioning_ checkbox, and click _Done_ in the upper-left corner.
Under _Software Selection_ , select _Infrastructure Server_ , mark the _FTP Server_ checkbox, and click _Done_.
![Installing a CentOS homelab][8]
Select _Network and Host Name_ , enable Ethernet in the upper-right, then click _Done_ in the upper-left corner.
Click _Begin Installation_ to start installing the OS.
You must create a root password, then you can create a user with a password as it installs.
### 2\. Start the FTP service
The next step is to start and set the FTP service to run and allow it through the firewall.
Log in with your root password, then start the FTP server:
```
systemctl start vsftpd
```
Enable it to work on every start:
```
systemctl enable vsftpd
```
Set the port as allowed through the firewall:
```
firewall-cmd --add-service=ftp --perm
```
Enable this change immediately:
```
firewall-cmd --reload
```
Get your IP address:
```
ip a
```
(it's probably **eth0**). You'll need it in a minute.
### 3\. Copy the files for your local repository
Mount the CD you installed from to your VM through your virtualization software.
Create a directory for the CD to be mounted to temporarily:
```
mkdir /root/temp
```
Mount the install CD:
```
mount /dev/cdrom /root/temp
```
Copy all the files to the FTP server directory:
```
rsync -avhP /root/temp/ /var/ftp/pub/
```
### 4\. Point the server to the local repository
Red Hat-based systems use files that end in **.repo** to identify where to get updates and new software. Those files can be found at
```
cd /etc/yum.repos.d
```
You need to get rid of the repo files that point your server to look to the CentOS repositories on the internet. I prefer to copy them to root's home directory to get them out of the way:
```
mv * ~
```
Then create a new repo file to point to your server. Use your favorite text editor to create a file named **network.repo** with the following lines (substituting the IP address you got in step 2 for _<your IP>_), then save it:
```
[network]
name=network
baseurl=ftp://192.168.122.<your IP>/pub
gpgcheck=0
```
When that's done, we can test it out with the following:
```
yum clean all; yum install ftp
```
If your FTP client installs as expected from the "network" repository, your local repo is set up!
![Installing a CentOS homelab][9]
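As an optional extra check (not part of the original steps), you can list the repositories yum now knows about; only your local "network" repository should show up:
```
yum repolist
```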
### 5\. Install a new VM with the repository you set up
Go back to the virtual machine manager, and create another VM—but this time, select _Network Install_ with a URL of:
```
ftp://192.168.122.<your IP>/pub
```
If you're using a different host OS or virtualization manager, install your VM similarly as before, and skip to the next section.
### 6\. Set the new VM to use your existing network repository
You can copy the repo file from your existing server to use here.
As in the first server example, enter:
```
cd /etc/yum.repos.d
mv * ~
```
Then:
```
scp root@192.168.122.<your IP>:/etc/yum.repos.d/network.repo /etc/yum.repos.d
```
Now you should be ready to work with your new VM and get all your software from your local repository.
Test this again:
```
yum clean all; yum install screen
```
This will install your software from your local repo server.
This setup, which gives you independence from the network with the ability to install software, can create a much more dependable environment for expanding your skills on the road.
* * *
_Bob Murphy will present this topic as well as an introduction to[GNU Screen][10] at [Southeast Linux Fest][11], June 15-16 in Charlotte, N.C._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/create-centos-homelab-hour
作者:[Bob Murphy][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/murph
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://opensource.com/article/19/3/home-lab
[3]: https://getfedora.org/
[4]: https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine
[5]: https://virt-manager.org/
[6]: https://www.centos.org/download/
[7]: https://opensource.com/sites/default/files/uploads/homelab-3b_0.png (Installing a CentOS homelab)
[8]: https://opensource.com/sites/default/files/uploads/homelab-5b.png (Installing a CentOS homelab)
[9]: https://opensource.com/sites/default/files/uploads/homelab-14b.png (Installing a CentOS homelab)
[10]: https://opensource.com/article/17/3/introduction-gnu-screen
[11]: https://southeastlinuxfest.org/

View File

@ -0,0 +1,103 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (ExamSnap Guide: 6 Excellent Resources for Microsoft 98-366: Networking Fundamentals Exam)
[#]: via: (https://www.2daygeek.com/examsnap-guide-6-excellent-resources-for-microsoft-98-366-networking-fundamentals-exam/)
[#]: author: (2daygeek http://www.2daygeek.com/author/2daygeek/)
ExamSnap Guide: 6 Excellent Resources for Microsoft 98-366: Networking Fundamentals Exam
======
The Microsoft 98-366 exam is very similar to the CompTIA Network+ certification test when it comes to its content.
It is also known as Networking Fundamentals, and its purpose is to assess your knowledge of switches, routers, OSI models, wide area and local area networks, wireless networking, and IP addressing.
Those who pass the exam earn the MTA (Microsoft Technology Associate) certificate. This certification is an ideal entry-level credential to help you begin your IT career.
### 6 Resources for Microsoft MTA 98-366 Exam
Using approved training materials is the best method to prepare for your certification exam. Most candidates are fond of shortcuts and often use PDFs and brain dumps to prepare for the test.
It is important to note that these materials need to be supplemented with other study methods; on their own, they will not help you gain the deeper knowledge that makes you perform better at work.
When you take your time to master the course contents of a certification exam, you are not only getting ready for the test but also developing your skills, expertise, and knowledge in the topics covered there.
* **[ExamSnap][1]**
Another important point to note is that you shouldnt rely only on brain dumps. Microsoft can withhold or withdraw your certification if it is discovered that you have cheated.
This may also result in the situation when a person is not allowed to earn the credentials any more. Thus, use only verified platforms such as Examsnap.
Most people tend to believe that there is no way they can get discovered. However, you need to know that Microsoft has been offering professional certification exams for years and they know what they are doing.
Now when we have established the importance of using legal materials to prepare for your certification exam, we need to highlight the top resources you can use to prepare for the Microsoft 98-366 test.
### 1\. Microsoft Video Academy
The MVA (Microsoft Video Academy) provides you with introductory lessons on the 98-366 certification exam. This is not sufficient on its own, although it is a great way to begin your preparation for the test. The materials in the videos do not cover everything you need to know before taking the exam.
In fact, it is an introductory series that is meant to lay down the foundation for your study. You will have to explore other materials to get in-depth knowledge of the topics.
These videos are available without payment, so you can easily access them and use them whenever you want.
* [Microsoft Certification Overview][2]
### 2\. Examsnap
If you have been looking for material that can help you prepare for the Microsoft Networking Fundamentals exam, look no further because you will know about Examsnap now.
It is an online platform that provides you with exam dumps and video series of various certification courses. All you need to do is to register on the website and pay a fee for you to be able to access various tools that will help you prepare for the test.
Examsnap will enable you to pass your exams with confidence by providing you with the most accurate and comprehensive preparation materials on the Internet. The platform also has training courses to help you improve your study.
Before making any payment on the site, you can complete the trial course to establish whether the site is suitable for your exam preparation needs.
### 3\. Exam 98-366: MTA Networking Fundamentals
This is a study resource that is a book. It provides you with a comprehensive approach to the various topics. It is important for you to note that this study guide is critical to your success.
It offers you more detailed material than the introductory lectures by the MVA. It is advisable that you do not focus on the sample questions in each part when using this book.
You should not concentrate on the sample questions because they are not so informative. You can make up for this shortcoming by checking out other practice test options. Overall, this book is a top resource that will contribute greatly to your certification exam success.
### 4\. Measure-Up
Measure-Up is the official Microsoft practice test provider whereby you can access different materials. You can get a lab and hands-on practice with networking software tools, which are very beneficial for your preparation. The site also has study questions that you can purchase.
### 5\. U-Certify
The U-Certify platform is a reputable organization that offers video courses that are considered to be more understandable than those offered by the MVA. In addition to video courses, the site presents flashcards at the end of the videos.
You can also access a series of practice tests, which contain several hundred study questions, on the platform. There is more content in the videos and tests that you can access the moment you subscribe. Depending on what you are looking for, you can choose to buy the tests or the videos.
### 6\. Networking Essentials Wiki
There are several posts that are linked to the Networking Essentials page on Wiki, and you can be sure that these articles are greatly detailed with information that will be helpful for your exam preparation.
It is important to note that they are not meant to be studied as the only resource materials. You should only use them as additional means for the purpose of getting more information on specific topics but not as an individual study tool.
### Conclusion
You may not be able to access all the top resources available. However, you can access some of them. In addition to the resources mentioned, there are also some others that are very good. Visit the Microsoft official website to get the list of reliable resource platforms you can use. Study and be well-prepared for the 98-366 certification exam!
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/examsnap-guide-6-excellent-resources-for-microsoft-98-366-networking-fundamentals-exam/
作者:[2daygeek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: http://www.2daygeek.com/author/2daygeek/
[b]: https://github.com/lujun9972
[1]: https://www.examsnap.com/
[2]: https://www.microsoft.com/en-us/learning/certification-overview.aspx

View File

@ -0,0 +1,174 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems?)
[#]: via: (https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems?
======
Patching Linux servers is one of the important and routine tasks of a Linux admin.
Keeping the system at the latest patch level is a must, as it protects your system against avoidable attacks.
There are three kinds of errata available in the RHEL/CentOS repositories: Security, Bug Fix, and Product Enhancement.
Now, you have two options to handle this.
Either install only security updates or all the errata packages.
We have already written an article in the past about **[how to check available security updates][1]**.
Also, you can **[check the installed security updates on your system][2]** using this link.
You can navigate to the above link, if you would like to verify available security updates before installing them.
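For a quick command-line check before you proceed (assuming your configured repositories provide updateinfo metadata), you can list the available security errata like this:
```
# yum updateinfo list security
```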
In this article, we will show you how to install security updates in multiple ways on RHEL and CentOS systems.
### 1) How To Install Entire Errata Updates In Red Hat And CentOS System?
Run the following command to download and apply all available security updates on your system.
Make a note: this command will install the latest available version of any package that has at least one security erratum.
It will also install non-security errata if they provide a more recent version of the package.
```
# yum update --security
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
RHEL7-Server-DVD | 4.3 kB 00:00:00
rhel-7-server-rpms | 2.0 kB 00:00:00
--> 1:grub2-tools-extra-2.02-0.76.el7.1.x86_64 from rhel-7-server-rpms removed (updateinfo)
--> nss-pem-1.0.3-5.el7_6.1.x86_64 from rhel-7-server-rpms removed (updateinfo)
.
35 package(s) needed (+0 related) for security, out of 115 available
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:1.12.0-6.el7 will be updated
---> Package NetworkManager.x86_64 1:1.12.0-10.el7_6 will be an update
```
Once you run the above command, it checks all the available updates and whether their dependencies are satisfied.
```
--> Finished Dependency Resolution
--> Running transaction check
---> Package kernel.x86_64 0:3.10.0-514.26.1.el7 will be erased
---> Package kernel-devel.x86_64 0:3.10.0-514.26.1.el7 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
=====================================================================================================================================================================
Package Arch Version Repository Size
=====================================================================================================================================================================
Installing:
kernel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 48 M
kernel-devel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 17 M
Updating:
NetworkManager x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms 1.7 M
NetworkManager-adsl x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms 157 k
.
Removing:
kernel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 148 M
kernel-devel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 34 M
```
If the dependencies are satisfied, you finally get a transaction summary.
The transaction summary shows how many packages will be installed, upgraded, and removed from the system.
```
Transaction Summary
=====================================================================================================================================================================
Install 2 Packages
Upgrade 33 Packages
Remove 2 Packages
Total download size: 124 M
Is this ok [y/d/N]:
```
### 2) How To Install Only Security Updates In Red Hat And CentOS System?
Run the following command to install only the packages that have a security errata.
```
# yum update-minimal --security
Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
rhel-7-server-rpms | 2.0 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:1.12.0-6.el7 will be updated
---> Package NetworkManager.x86_64 1:1.12.0-8.el7_6 will be an update
.
--> Finished Dependency Resolution
--> Running transaction check
---> Package kernel.x86_64 0:3.10.0-514.26.1.el7 will be erased
---> Package kernel-devel.x86_64 0:3.10.0-514.26.1.el7 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
=====================================================================================================================================================================
Package Arch Version Repository Size
=====================================================================================================================================================================
Installing:
kernel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 48 M
kernel-devel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 17 M
Updating:
NetworkManager x86_64 1:1.12.0-8.el7_6 rhel-7-server-rpms 1.7 M
NetworkManager-adsl x86_64 1:1.12.0-8.el7_6 rhel-7-server-rpms 157 k
.
Removing:
kernel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 148 M
kernel-devel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 34 M
Transaction Summary
=====================================================================================================================================================================
Install 2 Packages
Upgrade 33 Packages
Remove 2 Packages
Total download size: 124 M
Is this ok [y/d/N]:
```
### 3) How To Install Security Update Using CVE Reference In Red Hat And CentOS System?
If you would like to install a security update using a CVE reference, run the following command.
```
# yum update --cve <CVE-ID>
# yum update --cve CVE-2008-0947
```
### 4) How To Install Security Update Using Specific Advisory In Red Hat And CentOS System?
Run the following command, if you want to apply only a specific advisory.
```
# yum update --advisory=<advisory-ID>
# yum update --advisory=RHSA-2014:0159
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/
[2]: https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/

View File

@ -0,0 +1,178 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Shell Script To Monitor CPU Utilization And Send Email)
[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-cpu-utilization-usage-and-send-email/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Linux Shell Script To Monitor CPU Utilization And Send Email
======
There are many open source monitoring tools available to monitor Linux system performance.
They can send an email alert when the system reaches the given threshold limit.
They monitor everything such as CPU utilization, memory utilization, swap utilization, disk space utilization, and much more.
If you only have a few systems and want to monitor them, writing a small shell script can achieve this.
In this tutorial we have added two shell scripts to monitor CPU utilization on a Linux system.
When the system reaches the given threshold, they will trigger a mail to the corresponding email address.
### Method-1 : Linux Shell Script To Monitor CPU Utilization And Send an Email
If you only want to get the CPU utilization percentage through mail when the system reaches the given threshold, use the following script.
It is a very simple and straightforward one-line script.
It will trigger an email when your system reaches `80%` CPU utilization.
```
*/5 * * * * /usr/bin/cat /proc/loadavg | awk '{print $1}' | awk '{ if($1 > 80) printf("Current CPU Utilization is: %.2f%%\n", $1);}' | mail -s "High CPU Alert" [email protected]
```
**Note:** You need to change the email address to your own. Also, you can change the CPU utilization threshold value as per your requirement.
**Output:** You will be getting an email alert similar to below.
```
Current CPU Utilization is: 80.40%
```
We have added many useful shell scripts in the past. If you want to check those out, navigate to the link below.
* **[How to automate day to day activities using shell scripts?][1]**
### Method-2 : Linux Shell Script To Monitor CPU Utilization And Send an Email
If you want to get more information about the CPU utilization in the mail alert, use the following script.
It includes details of the top CPU-consuming processes, based on the top command and the ps command.
This will instantly give you an idea of what is going on in your system.
It will trigger an email when your system reaches `80%` CPU utilization.
**Note:** You need to change the email address to your own. Also, you can change the CPU utilization threshold value as per your requirement.
```
# vi /opt/scripts/cpu-alert.sh
#!/bin/bash
cpuuse=$(cat /proc/loadavg | awk '{print $1}')
if [ "$cpuuse" > 80 ]; then
SUBJECT="ATTENTION: CPU Load Is High on $(hostname) at $(date)"
MESSAGE="/tmp/Mail.out"
TO="[email protected]"
echo "CPU Current Usage is: $cpuuse%" >> $MESSAGE
echo "" >> $MESSAGE
echo "+------------------------------------------------------------------+" >> $MESSAGE
echo "Top CPU Process Using top command" >> $MESSAGE
echo "+------------------------------------------------------------------+" >> $MESSAGE
echo "$(top -bn1 | head -20)" >> $MESSAGE
echo "" >> $MESSAGE
echo "+------------------------------------------------------------------+" >> $MESSAGE
echo "Top CPU Process Using ps command" >> $MESSAGE
echo "+------------------------------------------------------------------+" >> $MESSAGE
echo "$(ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10)" >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
rm /tmp/Mail.out
fi
```
Finally add a **[cronjob][2]** to automate this. It will run every 5 minutes.
```
# crontab -e
*/5 * * * * /bin/bash /opt/scripts/cpu-alert.sh
```
**Note:** You will get an email alert about 5 minutes later, since the script is scheduled to run every 5 minutes (it is not exactly 5 minutes; it depends on the timing).
Say, for example, your system reaches the limit at 8.25; then you will get an email alert within the next 5 minutes. Hope it's clear now.
**Output:** You will be getting an email alert similar to below.
```
CPU Current Usage is: 80.51%
+------------------------------------------------------------------+
Top CPU Process Using top command
+------------------------------------------------------------------+
top - 13:23:01 up 1:43, 1 user, load average: 2.58, 2.58, 1.51
Tasks: 306 total, 3 running, 303 sleeping, 0 stopped, 0 zombie
%Cpu0 : 6.2 us, 6.2 sy, 0.0 ni, 87.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 18.8 us, 0.0 sy, 0.0 ni, 81.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 50.0 us, 37.5 sy, 0.0 ni, 12.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 5.9 us, 5.9 sy, 0.0 ni, 88.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4 : 0.0 us, 5.9 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5 : 29.4 us, 23.5 sy, 0.0 ni, 47.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu6 : 0.0 us, 5.9 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7 : 5.9 us, 0.0 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 16248588 total, 223436 free, 5816924 used, 10208228 buff/cache
KiB Swap: 17873388 total, 17871340 free, 2048 used. 7440884 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8867 daygeek 20 2743884 440420 360952 R 100.0 2.7 1:07.25 /usr/lib/virtualbox/VirtualBoxVM --comment CentOS7 --startvm 002f47b8-2af2-48f5-be1d-67b67e03514c --no-startvm-errormsgbox
9119 daygeek 20 36136 784 R 46.7 0.0 0:00.07 /usr/bin/CROND -n
1057 daygeek 20 889808 487692 461692 S 13.3 3.0 4:21.12 /usr/lib/Xorg vt2 -displayfd 3 -auth /run/user/1000/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -verbose 3
3098 daygeek 20 1929012 351412 120532 S 13.3 2.2 16:42.51 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9236 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /us+
1 root 20 188820 10144 7708 S 6.7 0.1 0:06.92 /sbin/init
818 gdm 20 199836 25120 15876 S 6.7 0.2 0:01.85 /usr/lib/Xorg vt1 -displayfd 3 -auth /run/user/120/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -verbose 3
1170 daygeek 9 -11 2676516 16516 12520 S 6.7 0.1 1:28.30 /usr/bin/pulseaudio --daemonize=no
8271 root 20 I 6.7 0:00.21 [kworker/u16:4-i915]
9117 daygeek 20 13528 4036 3144 R 6.7 0.0 0:00.01 top -bn1
+------------------------------------------------------------------+
Top CPU Process Using ps command
+------------------------------------------------------------------+
%CPU PID USER COMMAND
8.8 8522 daygeek /usr/lib/virtualbox/VirtualBox
86.2 8867 daygeek /usr/lib/virtualbox/VirtualBoxVM --comment CentOS7 --startvm 002f47b8-2af2-48f5-be1d-67b67e03514c --no-startvm-errormsgbox
76.1 8921 daygeek /usr/lib/virtualbox/VirtualBoxVM --comment Ubuntu-18.04 --startvm e8c32dbb-8b01-41b0-977a-bf28b9db1117 --no-startvm-errormsgbox
5.5 8080 daygeek /usr/bin/nautilus --gapplication-service
4.7 4575 daygeek /usr/lib/firefox/firefox -contentproc -childID 12 -isForBrowser -prefsLen 9375 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab
4.4 3511 daygeek /usr/lib/firefox/firefox -contentproc -childID 8 -isForBrowser -prefsLen 9308 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab
4.4 3190 daygeek /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -prefsLen 9237 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab
4.4 1612 daygeek /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -prefsLen 1 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab
4.2 3565 daygeek /usr/bin/../lib/notepadqq/notepadqq-bin
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/linux-shell-script-to-monitor-cpu-utilization-usage-and-send-email/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/shell-script/
[2]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/

View File

@ -0,0 +1,140 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tweaking the look of Fedora Workstation with themes)
[#]: via: (https://fedoramagazine.org/tweaking-the-look-of-fedora-workstation-with-themes/)
[#]: author: (Ryan Lerch https://fedoramagazine.org/author/ryanlerch/)
Tweaking the look of Fedora Workstation with themes
======
![][1]
Changing the theme of a desktop environment is a common way to customize your daily experience with Fedora Workstation. This article discusses the 4 different types of visual themes you can change and how to change to a new theme. Additionally, this article will cover how to install new themes from both the Fedora repositories and 3rd party theme sources.
### Theme Types
When changing the theme of Fedora Workstation, there are 4 different themes that can be changed independently of each other. This allows a user to mix and match the theme types to customize their desktop in a multitude of combinations. The 4 theme types are the **Application** (GTK) theme, the **shell** theme, the **icon** theme, and the **cursor** theme.
#### Application (GTK) themes
As the name suggests, Application themes change the styling of the applications that are displayed on a users desktop. Application themes control the style of the window borders and the window titlebar. Additionally, they also control the style of the widgets in the windows — like dropdowns, text inputs, and buttons. One point to note is that an application theme does not change the icons that are displayed in an application — this is achieved using the icon theme.
![Two application windows with two different application themes. The default Adwaita theme on the left, the Adapta theme on the right.][2]
Application themes are also known as GTK themes, as GTK ( **G** IMP **T** ool **k** it) is the underlying technology that is used to render the windows and user interface widgets in those windows on Fedora Workstation.
#### Shell Themes
Shell themes change the appearance of the GNOME Shell. The GNOME Shell is the technology that displays the top bar (and the associated widgets like drop downs), as well as the overview screen and the applications list it contains.
![Comparison of two Shell themes, with the Fedora Workstation default on top, and the Adapta shell theme on the bottom.][3]
#### Icon Themes
As the name suggests, icon themes change the icons used in the desktop. Changing the icon theme will change the icons displayed both in the Shell, and in applications.
![Comparison of two icon themes, with the Fedora 30 Workstation default Adwaita on the left, and the Yaru icon theme on the right][4]
One important item to note with icon themes is that all icon themes will not have customized icons for all application icons. Consequently, changing the icon theme will not change all the icons in the applications list in the overview.
![Comparison of two icon themes, with the Fedora 30 Workstation default Adwaita on the top, and the Yaru icon theme on the bottom][5]
#### Cursor Theme
The cursor theme allows a user to change how the mouse pointer is displayed. Most cursor themes change all the common cursors, including the pointer, drag handles and the loading cursor.
![Comparison of multiple cursors of two different cursor themes. Fedora 30 default is on the left, the Breeze Snow theme on the right.][6]
### Changing the themes
Changing themes on Fedora Workstation is a simple process. To change all 4 types of themes, use the **Tweaks** application. Tweaks is a tool used to change a range of different options in Fedora Workstation. It is not installed by default, and is installed using the Software application:
![][7]
Alternatively, install Tweaks from the command line with the command:
```
sudo dnf install gnome-tweak-tool
```
In addition to Tweaks, to change the Shell theme, the **User Themes** GNOME Shell Extension needs to be installed and enabled. [Check out this post for more details on installing extensions][8].
Next, launch Tweaks, and switch to the Appearance pane. The Themes section in the Appearance pane allows the changing of the multiple theme types. Simply choose the theme from the dropdown, and the new theme will apply automatically.
![][9]
### Installing themes
Armed with the knowledge of the types of themes, and how to change themes, it is time to install some themes. Broadly speaking, there are two ways to install new themes to your Fedora Workstation — installing theme packages from the Fedora repositories, or manually installing a theme. One point to note when installing themes, is that you may need to close and re-open the Tweaks application to make a newly installed theme appear in the dropdowns.
#### Installing from the Fedora repositories
The Fedora repositories contain a small selection of additional themes that, once installed, are available to be chosen in Tweaks. Theme packages are not available in the Software application and have to be searched for and installed via the command line. Most theme packages have a consistent naming structure, so listing available themes is pretty easy.
To find Application (GTK) themes use the command:
```
dnf search gtk | grep theme
```
To find Shell themes:
```
dnf search shell-theme
```
Icon themes:
```
dnf search icon-theme
```
Cursor themes:
```
dnf search cursor-theme
```
Once you have found a theme to install, install the theme using dnf. For example:
```
sudo dnf install numix-gtk-theme
```
#### Installing themes manually
For a wider range of themes, there are a plethora of places on the internet to find new themes to use on Fedora Workstation. Two popular places to find themes are [OpenDesktop][10] and [GNOMELook][11].
Typically, when downloading themes from these sites, the themes are encapsulated in an archive like a tar.gz or zip file. In most cases, to install these themes, simply extract the contents into the correct directory, and the theme will appear in Tweaks. Note, too, that themes can be installed either globally (this must be done using sudo) so all users on the system can use them, or just for the current user.
For Application (GTK) themes, and GNOME Shell themes, extract the archive to the **.themes/** directory in your home directory. To install for all users, extract to **/usr/share/themes/**
For Icon and Cursor themes, extract the archive to the **.icons/** directory in your home directory. To install for all users, extract to **/usr/share/icons/**
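As a minimal sketch (the archive file names below are placeholders, not real downloads), installing a theme just for the current user might look like this:
```
# Application (GTK) or GNOME Shell theme, current user only
mkdir -p ~/.themes
tar -xzf ~/Downloads/example-gtk-theme.tar.gz -C ~/.themes

# Icon or cursor theme, current user only
mkdir -p ~/.icons
tar -xzf ~/Downloads/example-icon-theme.tar.gz -C ~/.icons
```
After extracting, close and re-open Tweaks and the new theme should appear in the relevant dropdown.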
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/tweaking-the-look-of-fedora-workstation-with-themes/
作者:[Ryan Lerch][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/ryanlerch/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/themes.png-816x345.jpg
[2]: https://fedoramagazine.org/wp-content/uploads/2019/06/application-theme-1024x514.jpg
[3]: https://fedoramagazine.org/wp-content/uploads/2019/06/overview-theme-1024x649.jpg
[4]: https://fedoramagazine.org/wp-content/uploads/2019/06/icon-theme-application-1024x441.jpg
[5]: https://fedoramagazine.org/wp-content/uploads/2019/06/overview-icons-1024x637.jpg
[6]: https://fedoramagazine.org/wp-content/uploads/2019/06/cursortheme-1024x467.jpg
[7]: https://fedoramagazine.org/wp-content/uploads/2019/06/tweaks-in-software-1024x725.png
[8]: https://fedoramagazine.org/install-extensions-via-software-application/
[9]: https://fedoramagazine.org/wp-content/uploads/2019/06/tweaks-choose-themes.png
[10]: https://www.opendesktop.org/
[11]: https://www.gnome-look.org/

Some files were not shown because too many files have changed in this diff.