mirror of https://github.com/LCTT/TranslateProject.git

commit 38661ce062: Merge remote-tracking branch 'LCTT/master'
@@ -0,0 +1,251 @@
[#]: collector: (lujun9972)
[#]: translator: (messon007)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12135-1.html)
[#]: subject: (9 open source cloud native projects to consider)
[#]: via: (https://opensource.com/article/19/8/cloud-native-projects)
[#]: author: (Bryant Son https://opensource.com/users/brsonhttps://opensource.com/users/marcobravo)

值得关注的 9 个开源云原生项目
======

> 工作中用了容器?熟悉这些出自云原生计算基金会的项目吗?

![](https://img.linux.net.cn/data/attachment/album/202004/21/093427u1k8kem7al7zuhoh.jpg)

随着用容器来开发应用的实践变得流行,[云原生应用][2]也在增长。云原生应用的定义为:

> “云原生技术用于开发使用打包在容器中的服务所构建的应用程序,以微服务的形式部署,并通过敏捷的 DevOps 流程和持续交付工作流在弹性基础设施上进行管理。”

这个定义提到了构成云原生应用的不可或缺的四个元素:

1. 容器
2. 微服务
3. DevOps
4. 持续集成和持续交付(CI/CD)

尽管这些技术各有各自独特的历史,但它们之间却相辅相成,在短时间内实现了云原生应用和工具的惊人的指数级增长。这个[云原生计算基金会(CNCF)][4]信息图呈现了当今云原生应用生态的规模和广度。

![Cloud-Native Computing Foundation applications ecosystem][5]

*云原生计算基金会项目*

我想说,瞧着吧!这仅仅是一个开始。正如 NodeJS 的出现引发了无数的 JavaScript 工具的爆炸式增长一样,容器技术的普及也推动了云原生应用的指数增长。

好消息是,有几个组织负责监管并将这些技术连接在一起。其中之一是 <ruby>[开放容器倡议][6]<rt>Open Containers Initiative</rt></ruby>(OCI),它是一个轻量级的、开放的治理机构(或项目),“它是在 Linux 基金会的支持下组建的,其明确目的是创建开放的行业标准的容器格式和运行时。”另一个是 CNCF,它是“一个致力于使云原生计算普及和可持续发展的开源软件基金会”。

除了围绕云原生应用建立社区之外,CNCF 还帮助项目围绕其云原生应用建立结构化管理。CNCF 创建了成熟等级的概念(沙箱级、孵化级或毕业级),分别与下图中的“创新者”、“早期采用者”和“早期大量应用”相对应。

![CNCF project maturity levels][7]

*CNCF 项目成熟等级*

CNCF 为每个成熟等级制定了详细的[标准][8](为方便读者而列在下面)。项目须获得技术监督委员会(TOC)三分之二多数的同意,才能升级到孵化级或毕业级。

**沙箱级**

> 要想成为沙箱级项目,一个项目必须至少有两个 TOC 赞助商。有关详细过程,请参见《CNCF 沙箱指南 v1.0》。

**孵化级**

> 注:孵化级是我们期望对项目进行全面的尽职调查的起点。
>
> 要进入孵化级,项目除了满足沙箱级的要求之外还要满足:
>
> * 证明至少有三个独立的最终用户已成功将其用于生产,且 TOC 判断这些最终用户具有足够的质量和范围。
> * 提交者的数量要合理。提交者定义为具有提交权的人,即可以接受部分或全部项目贡献的人。
> * 显示出有大量持续提交和合并贡献。
> * 由于这些指标可能会根据项目的类型、范围和大小而有很大差异,所以怎样的活跃程度才算满足这些标准,由 TOC 最终裁定。

**毕业级**

> 要从沙箱或孵化级毕业,或者要使一个新项目作为已毕业项目加入,项目除了必须满足孵化级的标准外还要满足:
>
> * 有来自至少两个组织的提交者。
> * 已获得并保持了“核心基础设施计划最佳实践徽章”。
> * 已完成独立的第三方安全审核,并发布了具有与以下示例类似的范围和质量的结果(包括已解决的关键漏洞):<https://github.com/envoyproxy/envoy#security-audit>,并在毕业之前解决所有关键的漏洞。
> * 采用《CNCF 行为准则》。
> * 明确规定项目治理和提交流程。最好将其列在 `GOVERNANCE.md` 文件中,并引用显示当前提交者和荣誉提交者的 `OWNERS.md` 文件。
> * 至少要为主仓库提供一个公开的项目采用者列表(例如,`ADOPTERS.md` 文件或项目网站上的徽标)。
> * 获得 TOC 的绝大多数票,进入毕业阶段。如果项目能够表现出足够的成熟度,则可以尝试直接从沙箱级过渡到毕业级。项目可以无限期保持孵化状态,但是通常预计它们会在两年内毕业。

### 值得关注的 9 个项目

本文不可能涵盖所有的 CNCF 项目,下面我将介绍最有趣的 9 个毕业和孵化的开源项目。

名称 | 授权类型 | 简要描述
---|---|---
[Kubernetes][9] | Apache 2.0 | 容器编排平台
[Prometheus][10] | Apache 2.0 | 系统和服务监控工具
[Envoy][11] | Apache 2.0 | 边缘和服务代理
[rkt][12] | Apache 2.0 | Pod 原生的容器引擎
[Jaeger][13] | Apache 2.0 | 分布式跟踪系统
[Linkerd][14] | Apache 2.0 | 透明服务网格
[Helm][15] | Apache 2.0 | Kubernetes 包管理器
[Etcd][16] | Apache 2.0 | 分布式键值存储
[CRI-O][17] | Apache 2.0 | 专门用于 Kubernetes 的轻量级运行时环境

我也创建了视频材料来介绍这些项目。

- [video](https://youtu.be/3cDxYO2GK4w)

### 毕业项目

毕业的项目被认为是成熟的,已被许多组织采用,并且严格遵守了 CNCF 的准则。以下是三个最受欢迎的开源 CNCF 毕业项目。(请注意,其中一些描述来源于项目的网站并做了改编。)

#### Kubernetes(希腊语“舵手”)

Kubernetes!说起云原生应用,怎么能不提 Kubernetes 呢?Google 发明的 Kubernetes 无疑是最著名的容器编排平台,用于编排基于容器的应用程序,而且它还是一个开源工具。

什么是容器编排平台?通常,一个容器引擎本身可以管理几个容器。但是,当你面对数千个容器和数百个服务时,管理这些容器就变得非常复杂。这就是容器编排引擎的用武之地。容器编排引擎通过自动化容器的部署、管理、网络和可用性来帮助管理大量的容器。
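
举个直观的例子:下面是一个极简的示意脚本(假设已安装官方的 kubernetes Python 客户端库,且本机已配置好可访问某个集群的 kubeconfig),它列出集群中所有命名空间下的 Pod:

```
from kubernetes import client, config

# 加载本机 kubeconfig(假设已配置好集群访问凭据)
config.load_kube_config()

v1 = client.CoreV1Api()
# 列出所有命名空间中的 Pod,打印“命名空间/名称”
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}")
```

编排引擎的价值正在于此:成百上千个这样的 Pod 的调度、重启和扩缩容,都由 Kubernetes 自动完成。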

Docker Swarm 和 Mesosphere Marathon 也是容器编排引擎,但是可以肯定地说,Kubernetes 已经赢得了这场比赛(至少现在是这样)。Kubernetes 还催生了像 [OKD][18] 这样的容器即服务(CaaS)平台,它是 Kubernetes 的 Origin 社区发行版,并成了 [Red Hat OpenShift][19] 的一部分。

想开始学习的话,请访问 [Kubernetes GitHub 仓库][9],并从 [Kubernetes 文档][20]页面访问其文档和学习资源。

#### Prometheus(普罗米修斯)

Prometheus 是 2012 年在 SoundCloud 创建的一个开源的系统监控和告警工具。之后,许多公司和组织都采用了 Prometheus,并且该项目拥有非常活跃的开发者和用户社区。现在,它已经成为一个独立的开源项目,独立于任何公司进行维护。

![Prometheus’ architecture][21]

*Prometheus 的架构*

理解 Prometheus 的最简单方法是设想一个需要 24(小时)x 365(天)不间断运行的生产系统。没有哪个系统是完美的,虽然有减少故障的技术(称为容错系统),但是,如果出现问题,最重要的是尽快发现问题。这就是像 Prometheus 这样的监控工具的用武之地。Prometheus 不仅仅是一个容器监控工具,但它在云原生应用公司中最受欢迎。此外,其他开源监控工具(包括 [Grafana][22])也借助了 Prometheus 的数据。
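
下面是一个极简的示意(假设已通过 pip 安装了官方的 prometheus_client 库,指标名称只是举例),展示应用如何暴露一个可供 Prometheus 抓取的指标端点:

```
import random
import time

from prometheus_client import Counter, start_http_server

# 一个示例计数器指标(名称与含义均为示意)
REQUESTS = Counter('demo_requests_total', '已处理的请求总数')

if __name__ == '__main__':
    start_http_server(8000)  # 在 http://localhost:8000/metrics 暴露指标
    while True:
        REQUESTS.inc()       # 每“处理”一个请求,计数加一
        time.sleep(random.random())
```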

开始使用 Prometheus 的最佳方法是下载其 [GitHub 仓库][10]。在本地运行 Prometheus 很容易,但是你需要先安装一个容器引擎。你可以在 [Prometheus 网站][23]上查看详细的文档。

#### Envoy(使者)

Envoy(或 Envoy 代理)是专为云原生应用设计的开源的边缘代理和服务代理。Envoy 由 Lyft 创建,是用 C++ 开发的高性能分布式代理,为单一服务和应用而设计;同时,它也是为由大量微服务组成的服务网格架构而设计的通信总线和通用数据平面。Envoy 建立在 Nginx、HAProxy、硬件负载均衡器和云负载均衡器等解决方案的经验之上,它与每个应用相伴(并行)运行,以平台无关的方式提供通用特性,从而把网络抽象出来。

当基础设施中的所有服务流量都经过 Envoy 网格时,很容易就可以通过一致的可观测性来可视化问题域,调整整体性能,并在单个位置添加基础功能。基本上,Envoy 代理是一个可帮助组织为生产环境构建容错系统的服务网格工具。

服务网格应用有很多替代方案,例如 Buoyant 的 [Linkerd][24](下面会讨论)和 [Istio][25]。Istio 通过将其部署为 [Sidecar][26] 并利用 [Mixer][27] 的配置模型,实现了对 Envoy 的扩展。Envoy 的显著特性有:

* 包括所有的“<ruby>入场筹码<rt>table stakes</rt></ruby>(LCTT 译注:引申为基础必备特性)”特性(与 Istio 这样的控制平面组合时)
* 大规模带载运行时,99 分位(P99)延迟依然很低
* 以 L3/L4 过滤器架构为核心,提供开箱即用的 L7 过滤器
* 支持 gRPC 和 HTTP/2(上行/下行)
* 由 API 驱动,并支持动态配置和热重载
* 重点关注指标收集、跟踪和整体可观测性

要想理解 Envoy、验证其能力并实现其全部优势,需要丰富的生产级环境运行经验。你可以在其[详细文档][28]中或访问其 [GitHub][11] 仓库了解更多信息。

### 孵化项目

下面是六个最流行的开源的 CNCF 孵化项目。

#### rkt(火箭)

rkt,读作“rocket”,是一个 Pod 原生的容器引擎。它有一个命令行接口,用来在 Linux 上运行容器。从某种意义上讲,它和 [Podman][29]、Docker 和 CRI-O 等其他容器引擎相似。

rkt 最初是由 CoreOS(后来被 Red Hat 收购)开发的,你可以在其网站上找到详细的[文档][30],也可以在 [GitHub][12] 上访问其源代码。

#### Jaeger(贼鸥)

Jaeger 是一个开源的端到端分布式追踪系统,适用于云原生应用。在某种程度上,它是像 Prometheus 这样的监控解决方案。但它有所不同,因为其使用场景有所扩展:

* 分布式事务监控
* 性能和延时优化
* 根因分析
* 服务依赖性分析
* 分布式上下文传播

Jaeger 是一项由 Uber 打造的开源技术。你可以在其网站上找到[详细文档][31],以及在 [GitHub][13] 上找到其源码。
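
一个发送追踪数据的极简示意(假设安装了 Python 的 jaeger-client 库,且本地运行着 Jaeger agent;服务名与操作名均为举例):

```
import time

from jaeger_client import Config

# 初始化一个对所有请求都采样上报的 tracer(仅作演示)
config = Config(
    config={'sampler': {'type': 'const', 'param': 1}, 'logging': True},
    service_name='demo-service',
)
tracer = config.initialize_tracer()

with tracer.start_span('do-work') as span:
    span.set_tag('step', 'example')
    time.sleep(0.1)  # 模拟一段被追踪的工作

time.sleep(2)   # 留出时间让 span 异步上报
tracer.close()
```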

#### Linkerd

像创建了 Envoy 代理的 Lyft 一样,Buoyant 开发了开源的 Linkerd,用于生产级的服务维护。在某些方面,Linkerd 就像 Envoy 一样,因为两者都是服务网格工具,旨在提供平台级的可观测性、可靠性和安全性,而无需进行配置或代码更改。

但是,两者之间存在一些细微的差异。尽管 Envoy 和 Linkerd 都充当代理,并可以对所连接的服务进行监控上报,但是 Envoy 并不像 Linkerd 那样被设计为 Kubernetes Ingress 控制器。Linkerd 的显著特点包括:

* 支持多种平台(Docker、Kubernetes、DC/OS、Amazon ECS 或任何独立的机器)
* 内置服务发现抽象,可以将多个系统联合在一起
* 支持 gRPC、HTTP/2 和 HTTP/1.x 请求和所有的 TCP 流量

你可以在 [Linkerd 网站][32]上阅读有关它的更多信息,并在 [GitHub][14] 上访问其源码。

#### Helm(舵轮)

Helm 基本上就是 Kubernetes 的包管理器。如果你使用过 Apache Maven、Maven Nexus 或类似的服务,你就会理解 Helm 的作用。Helm 可帮助你管理 Kubernetes 应用程序。它使用“Helm Chart”来定义、安装和升级最复杂的 Kubernetes 应用程序。Helm 并不是实现此功能的唯一方法;另一个流行的概念是 [Kubernetes Operator][33],它被 Red Hat OpenShift 4 所使用。

你可以按照其文档中的[快速开始指南][34]或 [GitHub 指南][15]来试用 Helm。

#### Etcd

Etcd 是一个分布式的、可靠的键值存储,用于存储分布式系统中最关键的数据。其主要特性有:

* 定义明确的、面向用户的 API(gRPC)
* 自动 TLS,可选的客户端证书验证
* 速度快(可达每秒 10,000 次写入)
* 可靠(基于 Raft 算法实现分布式一致性)

Etcd 是 Kubernetes 和许多其他技术的默认的内置数据存储方案。也就是说,它很少独立运行或作为单独的服务运行;相反,它以集成到 Kubernetes、OKD/OpenShift 或其他服务中的形式来运作。还有一个 [etcd Operator][35] 可以用来管理其生命周期并解锁其 API 管理功能。
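
一个最简单的键值读写示意(假设安装了社区的 etcd3 客户端库,且本机运行着 etcd,监听默认端口 2379;键名只是举例):

```
import etcd3

client = etcd3.client()  # 默认连接 localhost:2379

client.put('/config/feature_x', 'enabled')         # 写入一个键
value, metadata = client.get('/config/feature_x')  # 读取该键
print(value.decode())                              # -> enabled
```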

你可以在 [etcd 文档][36]中了解更多信息,并在 [GitHub][16] 上访问其源码。

#### CRI-O

CRI-O 是 Kubernetes 容器运行时接口(CRI)的一个兼容 OCI 规范的实现。CRI-O 的功能包括:

* 使用 runc(或遵从 OCI 运行时规范的任何实现)和 OCI 运行时工具来运行容器
* 使用 containers/image 进行镜像管理
* 使用 containers/storage 来存储和管理镜像层
* 通过容器网络接口(CNI)来提供网络支持

CRI-O 提供了大量的[文档][37],包括指南、教程、文章,甚至播客,你还可以访问其 [GitHub 页面][17]。

我错过了其他有趣且开源的云原生项目吗?请在评论中提醒我。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/cloud-native-projects

作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[messon007](https://github.com/messon007)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brsonhttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e (clouds in the sky with blue pattern)
[2]: https://opensource.com/article/18/7/what-are-cloud-native-apps
[3]: https://thenewstack.io/10-key-attributes-of-cloud-native-applications/
[4]: https://www.cncf.io
[5]: https://opensource.com/sites/default/files/uploads/cncf_1.jpg (Cloud-Native Computing Foundation applications ecosystem)
[6]: https://www.opencontainers.org
[7]: https://opensource.com/sites/default/files/uploads/cncf_2.jpg (CNCF project maturity levels)
[8]: https://github.com/cncf/toc/blob/master/process/graduation_criteria.adoc
[9]: https://github.com/kubernetes/kubernetes
[10]: https://github.com/prometheus/prometheus
[11]: https://github.com/envoyproxy/envoy
[12]: https://github.com/rkt/rkt
[13]: https://github.com/jaegertracing/jaeger
[14]: https://github.com/linkerd/linkerd
[15]: https://github.com/helm/helm
[16]: https://github.com/etcd-io/etcd
[17]: https://github.com/cri-o/cri-o
[18]: https://www.okd.io/
[19]: https://www.openshift.com
[20]: https://kubernetes.io/docs/home
[21]: https://opensource.com/sites/default/files/uploads/cncf_3.jpg (Prometheus’ architecture)
[22]: https://grafana.com
[23]: https://prometheus.io/docs/introduction/overview
[24]: https://linkerd.io/
[25]: https://istio.io/
[26]: https://istio.io/docs/reference/config/networking/v1alpha3/sidecar
[27]: https://istio.io/docs/reference/config/policy-and-telemetry
[28]: https://www.envoyproxy.io/docs/envoy/latest
[29]: https://podman.io
[30]: https://coreos.com/rkt/docs/latest
[31]: https://www.jaegertracing.io/docs/1.13
[32]: https://linkerd.io/2/overview
[33]: https://coreos.com/operators
[34]: https://helm.sh/docs
[35]: https://github.com/coreos/etcd-operator
[36]: https://etcd.io/docs/v3.3.12
[37]: https://github.com/cri-o/cri-o/blob/master/awesome.md

@@ -1,18 +1,20 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12136-1.html)
[#]: subject: (A handy utility for creating Raspberry Pi SD card images)
[#]: via: (https://opensource.com/article/20/4/raspberry-pi-imager-mac)
[#]: author: (James Farrell https://opensource.com/users/jamesf)

一个方便的用于创建树莓派 SD 卡镜像的程序
======
开始在 Mac 上使用 Raspberry Pi Imager

> 开始在 Mac 上使用 Raspberry Pi Imager。

![Raspberries with pi symbol overlay][1]

有多种购买树莓派的方法,同时树莓派会取决于你从哪里购买的从而可能附带或不附带操作系统。将操作系统安装到树莓派上就是用系统镜像“刷新” SD 卡。为了使之尽可能简单,[树莓派基金会][2]引入了 Raspberry Pi Imager,你可以将其下载到所有主流平台。以下这个有用的新工具介绍。
有多种购买树莓派的方法,根据你的购买渠道的不同,可能附带或不附带操作系统。要在树莓派上安装操作系统,只需将操作系统镜像 “闪存” 到 SD 卡即可。为了使之尽可能简单,[树莓派基金会][2]推出一个 Raspberry Pi Imager 实用程序,你可以在所有主流平台上下载它。下面就来简单介绍一下这个有用的新工具。

### 安装 Imager

@@ -22,7 +24,7 @@ Mac 的安装包是常规的 DMG 镜像,它会挂载到你的桌面,然后

![Raspberry Pi Imager installer][4]

只需将可爱的树莓图标拖到 Application 文件夹,就可以完成。从 Launchpad 中调用它,你会看到一系列简单的按钮和菜单供你选择。真的不能比这更简单了:
只需将可爱的树莓图标拖到“应用”文件夹,就可以完成。从启动台中调用它,你会看到一系列简单的按钮和菜单供你选择。真的不能比这更简单了:

![Raspberry Pi Imager home screen][5]

@@ -32,13 +34,13 @@

### 安装一些镜像

我决定使用 16g 的 micro SD 卡。我选择了默认的 Raspbian 镜像,选择已连接的 USB/SD 设备,然后按下 WRITE。这是一个简短的演示:
我决定使用 16g 的 micro SD 卡。我选择了默认的 Raspbian 镜像,选择已连接的 USB/SD 设备,然后按下 “WRITE” 按钮。这是一个简短的演示:

![Raspberry Pi Imager demo][6]

我没有在此处发布完整信息。我相信它在下载镜像后写入,对于我的无线连接这花费了几分钟完成。该过程在完成之前先经过写入,然后经过验证周期。完成后,我弹出设备,并将卡插入到我的 RPi 3 中,然后按照通常的图形 Raspbian 安装向导和桌面环境进行设置。
我没有在此处发布整个操作过程。我认为它是在写入的时候下载了镜像,对于我的无线连接这花费了几分钟。该过程在完成之前要先经过写入,然后经过验证环节。完成后,我弹出设备,并将卡插入到我的树莓派 3 中,然后按照通常的图形 Raspbian 安装向导和桌面环境进行设置。

这对我来说还不够。我每天都会下载许多 Linux,今天我还在寻找更多。我回到了[树莓派下载][3]页面,并下载了 RISC OS 镜像。这个过程几乎一样容易。下载 RISCOSPi.5.24.zip 文件,将其解压缩,然后找到 ro524-1875M.img 文件。在 “Operating System” 按钮中,我选择了 “Use Custom” 并选择了所需的镜像文件。这个过程几乎是相同的。唯一真正的区别是我必须在下载目录中搜寻并选择一个镜像。文件写完后,回到树莓派 3,RISC OS 可以使用了。
这对我来说还不够。我每天都会下载许多 Linux,今天我还在寻找更多。我回到了[树莓派下载][3]页面,并下载了 RISC OS 镜像。这个过程几乎一样容易。下载 RISCOSPi.5.24.zip 文件,将其解压缩,然后找到 ro524-1875M.img 文件。在 “Operating System” 按钮中,我选择了 “Use Custom” 并选择了所需的镜像文件。这个过程几乎是相同的。唯一真正的不同是我必须在下载目录中搜寻并选择一个镜像。文件写完后,回到树莓派 3,RISC OS 可以使用了。

### 对 USB C 的抱怨

@@ -46,11 +48,11 @@ Mac 的安装包是常规的 DMG 镜像,它会挂载到你的桌面,然后

![USB C adapter][7]

是的,那是一个 USB C 到 USB A 适配器,然后是一个 USB 到 SD 卡读卡器,以及一个 SD 到 micro SD 适配器。我可能可以在网上找到一些东西来简化此过程,但这是支持我家五花八门的 Mac、Windows 和 Linux 主机的部分。够了,但我希望你能从这种混乱中得到一笑。
是的,那是一个 USB C 到 USB A 适配器,然后是一个 USB 到 SD 卡读卡器,以及一个 SD 到 micro SD 适配器。我可能可以在网上找到一些东西来简化此过程,但这些都是我手头有的部件,以支持我家五花八门的 Mac、Windows 和 Linux 主机。说到这里就不多说了,但我希望你能从这些疯狂的东西中得到一个笑点。

### 总结

新的 Raspberry Pi Imager 是一种简单有效的工具来快速烧录树莓派镜像。[BalenaEtcher][8] 是用于对可移动设备进行烧录的类似工具,但是新的 Raspberry Pi Imager 通过消除获取那些常见镜像的步骤,使普通树莓派系统安装(如 Raspbian)更加容易。
新的 Raspberry Pi Imager 是一种简单有效的工具,可以快速烧录树莓派镜像。[BalenaEtcher][8] 是用于对可移动设备进行烧录的类似工具,但是新的 Raspberry Pi Imager 通过省去了获取那些常见镜像的步骤,使普通树莓派系统安装(如 Raspbian)更加容易。

--------------------------------------------------------------------------------

@@ -59,7 +61,7 @@ via: https://opensource.com/article/20/4/raspberry-pi-imager-mac

作者:[James Farrell][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,56 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Ethernet consortium announces completion of 800GbE spec)
|
||||
[#]: via: (https://www.networkworld.com/article/3538529/ethernet-consortium-announces-completion-of-800gbe-spec.html)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
Ethernet consortium announces completion of 800GbE spec
|
||||
======
|
||||
The specification for 800GbE doubles the maximum speed of the current Ethernet standard, but also tweaks other aspects including latency.
|
||||
Martyn Williams/IDGNS
|
||||
|
||||
The industry-backed Ethernet Technology Consortium has announced the completion of a specification for 800 Gigabit Ethernet technology.
|
||||
|
||||
Based on many of the technologies used in the current top-end 400 Gigabit Ethernet protocol, the new spec is formally known as 800GBASE-R. The consortium that designed it (then known as the 25 Gigabit Ethernet Consortium) was also instrumental in developing the 25, 50, and 100 Gigabit Ethernet protocols and includes Broadcom, Cisco, Google, and Microsoft among its members.
|
||||
|
||||
**[ Now see [the hidden cause of slow internet and how to fix it][1].]**
|
||||
|
||||
The 800GbE spec adds new media access control (MAC) and physical coding sublayer (PCS) methods, which tweaks these functions to distribute data across eight physical lanes running at a native 106.25Gbps. (A lane can be a copper twisted pair or in optical cables, a strand of fiber or a wavelength.) The 800GBASE-R specification is built on two 400 GbE 2xClause PCSs to create a single MAC which operates at a combined 800Gbps.
|
||||
|
||||
And while the focus is on eight 106.25G lanes, it's not locked in. It is possible to run 16 lanes at half the speed, or 53.125Gbps.
|
||||
|
||||
The new standard offers half the latency of 400G Ethernet specification, but the new spec also cuts the forward error correction (FEC) overhead on networks running at 50 Gbps, 100 Gbps, and 200 Gbps by half, thus reducing the packet-processing load on the NIC.
|
||||
|
||||
By lowering latency this will feed the need for speed in latency-sensitive applications like [high-performance computing][2] and artificial intelligence, where lots of data needs to be moved around as fast as possible.
|
||||
|
||||
[][3]
|
||||
|
||||
Doubling from 400G to 800G wasn’t too great of a technological leap. It meant adding more lanes at the same transfer rate, with a few tweaks. But breaking a terabit, something Cisco and other networking firms have been talking about for a decade, will require a significant reworking of the technology and won’t be an easy fix.
|
||||
|
||||
It likely won’t be cheap, either. 800G works with existing hardware and 400GbE switches are not cheap, running as high as six figures. Moving past the terabit barrier with a major revision to the technology will likely be even more expensive. But for hyperscalers and HPC customers, that’s par for the course.
|
||||
|
||||
The ETC didn’t say when to expect new hardware supporting the 800G, but given its modest change to existing specs, it could appear this year, assuming the pandemic-induced shutdown doesn’t throw a monkey wrench into plans.
|
||||
|
||||
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3538529/ethernet-consortium-announces-completion-of-800gbe-spec.html
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3107744/internet/the-hidden-cause-of-slow-internet-and-how-to-fix-it.html
|
||||
[2]: https://www.networkworld.com/article/3444399/high-performance-computing-do-you-need-it.html
|
||||
[3]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
|
||||
[4]: https://www.facebook.com/NetworkWorld/
|
||||
[5]: https://www.linkedin.com/company/network-world
|

@@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How underwater Internet of Things will work)
[#]: via: (https://www.networkworld.com/article/3538393/how-underwater-internet-of-things-will-work.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

How underwater Internet of Things will work
======
Lasers will be used to power underwater devices and transmit data below the ocean's surface.
North Sea Port

More than two-thirds of the world's surface is covered by water. It plays an important role in our economic existence, including in major verticals such as oil and gas, shipping and tourism.

As the Internet of Things proliferates, questions arise as to how IoT will manifest itself underwater, given that radio waves degrade over distance in seawater, and underwater acoustic communication (which does actually work okay) is easily eavesdropped on and isn't stealthy.

To make the underwater Internet of Things happen, light is the answer, some say. Researchers at King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia, are proposing underwater optical communications. They're investigating simultaneous lightwave information and power transfer (SLIPT) configurations, which they're using to transmit energy and data to underwater electronic devices. Recently, the researchers announced a breakthrough experiment in which they were able to achieve an underwater, two-way transmission of data and power over 1.5 yards between a solar panel-equipped sensor and a receiver.

The SLIPT system will be more usable than strung wires. And in the case of human underwater equipment inspections, for example, SLIPT will be less prone to error than hand signals and less prone to audible confusion than ultrasound voice-based communicators. Remarkably, to this day, hand signals are still a common form of communication between divers.

"SLIPT can help charge devices in inaccessible locations where continuous powering is costly or not possible," said Jose Filho, a PhD student at KAUST, [in an article][1] on the school's web site.

Filho, who has been involved in developing the laser project, envisages ships or boats on the water's surface beaming optical communications to underwater vehicles or IoT sensors on the ocean floor. The lasers would simultaneously communicate with and power underwater robots and devices. Return data is relayed to the surface vessel, which then communicates to land bases or data centers via RF (radio).

Surface buoys – or even unmanned aerial vehicles (drones) flying well above turbulent waves – could be used to inject power down to the seabed surface and, at the same time, receive data, researchers believe.

The school explains that there's still much development that needs to be performed before SLIPT is operational, but it sees potential. "Underwater optical communication provides an enormous bandwidth and is useful for reliably transmitting information over several meters," co-first author Abderrahmen Trichili said in the article.

KAUST, located on the Red Sea coast, has been involved in this area of technical exploration for some years. It was involved in developing some early, record-breaking underwater data communications. In 2015 it ran a 4.8 gigabit per second, 16-QAM-OFDM transmission with a 450-nanometer laser. OFDM, or Orthogonal Frequency Division Multiplexing, splits single data streams into multiple channels to reduce interference.

Interestingly, seas and oceans are becoming increasingly important to data centers. Large swaths of the world's population are found on or near coasts, rather than inland, and we're seeing a shift towards edge-style computing that positions resources closer to sources of data. There's also a need for compute cooling, which ocean water can provide. Even wave energy as a method of powering servers means sea and data are becoming intertwined.

Microsoft launched an [undersea water-cooling data center][3] 117 feet below the water surface in 2018. Additionally, garden-hose-sized cables carry almost all global, public Internet traffic [underwater, across oceans and between continents][4]. It's not done through satellite, as many imagine.

So, this is not a brand-new synergy. Apart from the eco-monitoring drivers, one of the likeliest and most important reasons that ocean-based computing is being explored keenly is that there isn't any rent payable or jurisdictional ownership on the high seas.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3538393/how-underwater-internet-of-things-will-work.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://discovery.kaust.edu.sa/en/article/952/the-power-of-light-for-internet-of-underwater-things
[2]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[3]: https://www.networkworld.com/article/3283332/microsoft-launches-undersea-free-cooling-data-center.html
[4]: https://www.networkworld.com/article/3004465/how-vulnerable-are-the-internets-undersea-cables.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

@@ -0,0 +1,54 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT offers a way to track COVID-19 via connected thermometers)
[#]: via: (https://www.networkworld.com/article/3539058/iot-offers-a-way-to-track-covid-19-via-connected-thermometers.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

IoT offers a way to track COVID-19 via connected thermometers
======
The COVID-19 pandemic has catapulted one manufacturer of connected thermometers to national prominence, as Kinsa provides a possible window into the spread of the disease.
[Kinsa / Leaflet / OpenStreetMap / CARTO][1]

A company called Kinsa is leveraging [IoT][2] tech to create a network of connected thermometers, collecting a huge amount of anonymous health data that could offer insights into the current and future pandemics.

The company’s founder and CEO, Inder Singh, said that the ability to track fever levels across the U.S. in close to real time could be a crucial piece of information for both the public at large and for decision-makers in the healthcare sector and government.

[[Get regularly scheduled insights by signing up for Network World newsletters.]][3]

The system’s networking technology is relatively straightforward – the thermometer connects via Bluetooth to an app on the user’s phone, which reports anonymized data back to Kinsa’s cloud over the Internet. Singh emphasizes that the company only organizes data down to the county level, and asserts that identifying individuals through Kinsa’s data is more or less impossible.

“We’re not providing PII, we’re not providing identified data,” he said. “The app just guides you to the care and services you need.”

Armed with the temperature reading and some basic demographic information about the person whose temperature was taken and their other symptoms, the app can offer rudimentary guidance about whether a visit to the doctor is needed or not, and whether the user’s area is seeing unusual levels of fever.

However, the real value is in the aggregated data that Kinsa analyzes and breaks out on its [U.S. Health Weather Map][1], gleaned from the million-plus thermometers in the company’s ecosystem. The idea, according to Singh, is to provide the public with a way to make more informed decisions about their health.

“It’s very participatory,” he said. “Everyone gets the data, and everyone can respond.”

Kinsa still sells its thermometers directly to consumers, but plans are afoot for the company to collaborate more closely with local governments, health authorities and even school districts – Singh said that Kinsa is already partnering with two U.S. states (which he declined to name), and several city governments, including St. Augustine, Florida.

“Our hope is that we can figure out how to build a scalable model – we’re never gonna scale globally by just selling $20 thermometers,” he said. The goal is to become widespread enough that the product can act as a meaningful early warning system for the healthcare sector.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3539058/iot-offers-a-way-to-track-covid-19-via-connected-thermometers.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://healthweather.us/?mode=Atypical
[2]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

@@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How I use Hugo for my classroom's open source CMS)
[#]: via: (https://opensource.com/article/20/4/hugo-classroom)
[#]: author: (Peter Cheer https://opensource.com/users/petercheer)

How I use Hugo for my classroom's open source CMS
======
This open source software streamlines text editing while leaving room for customization.
![Digital hand surrounding by objects, bike, light bulb, graphs][1]

People love Markdown text with good reason—it is easy to write, easy to read, easy to edit, and it can be converted to a wide range of other text markup formats. While Markdown text is very good for content creation and manipulation, it imposes limitations on the options for content display.

If we could combine the virtues of Markdown with the power and flexibility of Cascading Style Sheets, HTML5, and JavaScript, that would be something special. One of the programs trying to do this is [Hugo][2]. Hugo was created in 2013 by Steve Francia; it is cross-platform and open source under an Apache 2.0 license, with an active developer community and a growing user base.

The basic concept is that pieces of content, such as web pages or blog posts, written in Markdown and associated with metadata, are converted into HTML and combined with templates and themes to produce a complete web site. The power and flexibility come through these themes and templates, or through changing the default behaviors of Hugo. This power comes with a degree of unavoidable complexity, but there are lots of [pre-built templates][3] available if you lack the time or inclination to make your own.

Installing Hugo on my Linux machine was quick and easy. Starting a new project is as simple as typing **hugo new site quickstart** at the command line, which creates a new project with this folder structure:

* **archetypes**: Content template files that contain preconfigured front matter metadata (date, title, draft). You can create new archetypes with custom front matter fields.
* **assets**: Stores all the files, which are processed by Hugo Pipes (e.g., CSS/Sass files). This directory is not created by default.
* **config.toml**: The default site config file.
* **content**: Where all the content Markdown files live.
* **data**: Used to store configuration files that can be used by Hugo when generating your website.
* **layouts**: Stores templates as .html files.
* **static**: Stores all the static content—images, CSS, JavaScript, etc.
* **themes**: For the Hugo theme of your choice.

The Markdown files in the content folder can be created manually or by Hugo and edited with any text editor or your Markdown creation tool of choice. If created manually, you will need to add any metadata that is needed. I prefer to use [Ghostwriter][4] for writing Markdown. Images are usually kept in a sub-folder in the static folder. Site development can proceed quickly, as Hugo includes a web server for testing and previewing.

To check your work, type **hugo server** at the command line to start the server. By default, Hugo will not publish:

* Content with a future **publishdate** value.
* Content with **draft: true** status.
* Content with a past **expirydate** value.

Running **hugo server -D** will include draft articles, and Hugo can be configured to mark all new articles as drafts. After starting the web server, you can see your work in a web browser at localhost:1313. Once the server is started, by default it will automatically reload the browser window when it detects a change to one of your files.

There are tasks Markdown cannot do that need some HTML code. Hugo recognizes this but believes in keeping Markdown code as clean, simple, and uncluttered as possible. Hugo does this with shortcodes such as **{{< youtube id="w7Ft2ymGmfc" autoplay="true" >}}**, which will embed the YouTube video with ID w7Ft2ymGmfc. There are quite a few pre-built shortcodes for common tasks, but it is also possible to create your own for particular jobs.

I work in education quite a lot and wanted to include some interactive puzzles and questions on my Hugo-generated website. To get the output looking like this:

![JClic shortcode][5]

I created the activities with an open source Java program called [JClic][6], exported them as HTML5, put that into static/activities/excel, and displayed it in an iframe.

The HTML code, which would spoil the nice clean Markdown content, looks like this:

```
<iframe
src="/activity/excel/index.html"
title="Activity"
height="400"
frameborder="0"
marginwidth="0"
marginheight="0"
scrolling="no"
style="border: 1px solid #CCC; border-width: 1px; margin-bottom: 20px; width: 100%;"
allowfullscreen="true">
</iframe>
```

The code is saved in layouts/shortcodes as **activity.html**.

This makes the shortcode placed inside my Markdown file simply **{{< activity >}}**, which is much neater.

When your project is ready, you can build it with the **hugo** command; this will create a public folder and generate the website in it. Hugo has a number of built-in deployment options for different hosting providers—basically, you deploy your site by copying the public folder to your production web server. There is a lot more to Hugo that I haven't even gotten to yet, including configuration options, importing content from other static site generators and WordPress, displaying data from JSON files, syntax highlighting of source code, and the fact that it is very fast (an advantage when working with large sites).

In many software tools, ease-of-use comes at the expense of flexibility, or vice-versa; Hugo makes a largely successful attempt at including both. For basic use with Markdown content and a pre-built theme, Hugo is easy to use and produces rapid results. Alternatively, if you have the need to alter the configuration settings or dive in and create your own themes, shortcodes, templates, or metadata schemes, that choice is open to you.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/4/hugo-classroom

作者:[Peter Cheer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/petercheer
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://gohugo.io/
[3]: https://themes.gohugo.io/
[4]: http://github.com/wereturtle/ghostwriter
[5]: https://opensource.com/sites/default/files/uploads/jclic_shortcode.png (JClic shortcode)
[6]: https://clic.xtec.cat/legacy/en/index.html
[7]: http://december.com/html/4/element/iframe.html

@@ -0,0 +1,170 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How I use Python to map the global spread of COVID-19)
[#]: via: (https://opensource.com/article/20/4/python-map-covid-19)
[#]: author: (AnuragGupta https://opensource.com/users/999anuraggupta)

How I use Python to map the global spread of COVID-19
======
Create a color-coded geographic map of the potential spread of the virus using these open source scripts.
![Globe up in the clouds][1]

The spread of disease is a real concern for a world in which global travel is commonplace. A few organizations track significant epidemics (and any pandemic), and fortunately, they publish their work as open data. The raw data can be difficult for humans to process, though, and that's why data science is so vital. For instance, it could be useful to visualize the worldwide spread of COVID-19 with Python and Pandas.

It can be hard to know where to start when you're faced with large amounts of raw data. The more you do it, however, the more patterns begin to emerge. Here's a common scenario, applied to COVID-19 data:

1. Download COVID-19 country spread daily data into a Pandas DataFrame object from GitHub. For this, you need the Python Pandas library.
2. Process and clean the downloaded data and make it suitable for visualizing. The downloaded data (as you will see for yourself) is in quite good condition. The one problem with this data is that it uses the names of countries, but it's better to use three-letter ISO 3 codes. To generate the three-letter ISO 3 codes, use a small Python library called pycountry. Having generated these codes, you can add an extra column to the DataFrame and populate it with these codes.
3. Finally, for the visualization, use the **express** module of a library called Plotly. This article uses what are called choropleth maps (available in Plotly) to visualize the worldwide spread of the disease.

### Step 1: Corona data

We will download the latest corona data from:

<https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv>

We will load the data directly into a Pandas DataFrame. Pandas provides a function, **read_csv()**, which can take a URL and return a DataFrame object, as shown below:

```
import pycountry
import plotly.express as px
import pandas as pd
URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv'
df1 = pd.read_csv(URL_DATASET)
print(df1.head(3)) # Get first 3 entries in the dataframe
print(df1.tail(3)) # Get last 3 entries in the dataframe
```

The screenshot of the output (on Jupyter) is:

![Jupyter screenshot][2]

From the output, you can see that the DataFrame (df1) has the following columns:

1. Date
2. Country
3. Confirmed
4. Recovered
5. Dead

Further, you can see that the **Date** column has entries starting from January 22 to March 31. This database is updated daily, so you will get the current values.

### Step 2: Cleaning and modifying the data frame

We need to add another column to this DataFrame, which has the three-letter ISO alpha-3 codes. To do this, I followed these steps:

1. Create a list of all countries in the database. This was required because in the **df**, in the column **Country**, each country appears once for each date. So in effect, the **Country** column has multiple entries for each country. To do this, I used the **unique().tolist()** functions.
2. Then I took a dictionary **d_country_code** (initially empty) and populated it with keys consisting of country names and values consisting of their three-letter ISO codes.
3. To generate the three-letter ISO code for a country, I used the function **pycountry.countries.search_fuzzy(country)** (see the sketch after this list). The return value of this function is a "list of **Country** objects." I passed the return value of this function to a name country_data. The first object in this list, i.e., at index 0, is the best fit. Further, this **Country** object has an attribute **alpha_3**. So, I can "access" the three-letter ISO code by using **country_data[0].alpha_3**. However, it is possible that some country names in the DataFrame may not have a corresponding ISO code (for example, disputed territories). So, for such countries, I gave an ISO code of `' '`, i.e., a blank string. Further, you need to wrap this code in a try-except block. The statement **print('could not add ISO 3 code for ->', country)** will print those countries for which the ISO 3 codes could not be found. In fact, you will find such countries shown in white in the final output.
4. Having got the three-letter ISO code for each country (or a blank string for some), I added the country name (as key) and its corresponding ISO code (as value) to the dictionary **d_country_code**. For adding these, I used the **update()** method of the Python dictionary object.
5. Having created a dictionary of country names and their codes, I added them to the DataFrame using a simple for loop.
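
To make step 3 concrete, here is a minimal sketch of the pycountry lookup in isolation (assuming pycountry is installed; the country name is just an example):

```
import pycountry

# search_fuzzy() returns a list of Country objects, best match first
country_data = pycountry.countries.search_fuzzy('India')
print(country_data[0].alpha_3)  # -> IND
```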

### Step 3: Visualizing the spread using Plotly

A choropleth map is a map composed of colored polygons. It is used to represent spatial variations of a quantity. We will use the express module of Plotly, conventionally called **px**. Here we show you how to create a choropleth map using the function **px.choropleth**.

The signature of this function is:

```
plotly.express.choropleth(data_frame=None, lat=None, lon=None, locations=None, locationmode=None, geojson=None, featureidkey=None, color=None, hover_name=None, hover_data=None, custom_data=None, animation_frame=None, animation_group=None, category_orders={}, labels={}, color_discrete_sequence=None, color_discrete_map={}, color_continuous_scale=None, range_color=None, color_continuous_midpoint=None, projection=None, scope=None, center=None, title=None, template=None, width=None, height=None)
```

The noteworthy points are that the **choropleth()** function needs the following things:

1. A geometry in the form of a **geojson** object. This is where things are a bit confusing and not clearly mentioned in its documentation. You may or may not provide a **geojson** object. If you provide a **geojson** object, then that object will be used to plot the earth features, but if you don't provide a **geojson** object, then the function will, by default, use one of the built-in geometries. (In our example here, we will use a built-in geometry, so we won't provide any value for the **geojson** argument.)
2. A pandas DataFrame object for the attribute **data_frame**. Here we provide the DataFrame, i.e., **df1**, we created earlier.
3. We will use the data of the **Confirmed** column to decide the color of each country polygon.
4. Further, we will use the **Date** column to create the **animation_frame**. Thus, as we slide across the dates, the colors of the countries will change as per the values in the **Confirmed** column.

The complete code is given below:

```
import pycountry
import plotly.express as px
import pandas as pd
# ----------- Step 1 ------------
URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv'
df1 = pd.read_csv(URL_DATASET)
# print(df1.head) # Uncomment to see what the dataframe is like
# ----------- Step 2 ------------
list_countries = df1['Country'].unique().tolist()
# print(list_countries) # Uncomment to see list of countries
d_country_code = {}  # To hold the country names and their ISO codes
for country in list_countries:
    try:
        country_data = pycountry.countries.search_fuzzy(country)
        # country_data is a list of objects of class pycountry.db.Country
        # The first item, ie at index 0 of the list, is the best fit
        # Objects of class Country have an alpha_3 attribute
        country_code = country_data[0].alpha_3
        d_country_code.update({country: country_code})
    except LookupError:
        # search_fuzzy() raises LookupError when no match is found
        print('could not add ISO 3 code for ->', country)
        # If the country could not be found, use ' ' as its ISO code
        d_country_code.update({country: ' '})

# print(d_country_code) # Uncomment to check dictionary

# create a new column iso_alpha in the df
# and fill it with the appropriate ISO 3 code
for k, v in d_country_code.items():
    df1.loc[(df1.Country == k), 'iso_alpha'] = v

# print(df1.head) # Uncomment to confirm that ISO codes were added
# ----------- Step 3 ------------
fig = px.choropleth(data_frame = df1,
                    locations= "iso_alpha",
                    color= "Confirmed",  # value in column 'Confirmed' determines color
                    hover_name= "Country",
                    color_continuous_scale= 'RdYlGn',  # color scale red, yellow, green
                    animation_frame= "Date")

fig.show()
```

The output is something like the following:

![Map][3]

You can download and run the [complete code][4].

To wrap up, here are some excellent resources on choropleth in Plotly:

* <https://github.com/plotly/plotly.py/blob/master/doc/python/choropleth-maps.md>
* <https://plotly.com/python/reference/#choropleth>

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/4/python-map-covid-19

作者:[AnuragGupta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/999anuraggupta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-globe.png?itok=_drXt4Tn (Globe up in the clouds)
[2]: https://opensource.com/sites/default/files/uploads/jupyter_screenshot.png (Jupyter screenshot)
[3]: https://opensource.com/sites/default/files/uploads/map_2.png (Map)
[4]: https://github.com/ag999git/jupyter_notebooks/blob/master/corona_spread_visualization
@ -0,0 +1,273 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to take advantage of Linux's extensive vocabulary)
|
||||
[#]: via: (https://www.networkworld.com/article/3539011/how-to-takke-advantage-of-linuxs-extensive-vocabulary.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
How to take advantage of Linux's extensive vocabulary
|
||||
======
|
||||
Linux systems don't only know a lot of words, it has commands that can help you use them by finding words that are on the tip of your tongue or fixing your typos.
|
||||
Sandra Henry-Stocker
|
||||
|
||||
While you might not think of Linux as a writing tutor, it does have some commendable language skills – at least when it comes to English. While the average American probably has a vocabulary between 20,000 and 50,000 words, Linux can claim over 100,000 words (spellings, not definitions). And you can easily put this vocabulary to work for you in a number of ways. Let’s look at how Linux can help with your word challenges.
|
||||
|
||||
### Help with finding words
|
||||
|
||||
First, let’s focus on finding words.If you use the **wc** command to count the number of words in the **/usr/share/dict/words** file on your system, you should see something like this:
|
||||
|
||||
```
|
||||
$ wc -l /usr/share/dict/words
|
||||
102402 /usr/share/dict/words
|
||||
```
|
||||
|
||||
As you can see, the **words** file on this system contains 102,402 words. So, when you’re trying to nail down just the right word and are having trouble, you stand a good chance of finding it on your system by remembering (or guessing at) some part of it. But you'll need a little help narrowing down those 102,402 words to a group worth your time to review. In this command, we’re looking for words that start with the letters “revi”.
|
||||
|
||||
[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]
|
||||
|
||||
```
|
||||
$ grep ^reviv /usr/share/dict/words
|
||||
revival
|
||||
revival's
|
||||
revivalist
|
||||
revivalist's
|
||||
revivalists
|
||||
revivals
|
||||
revive
|
||||
revived
|
||||
revives
|
||||
revivification
|
||||
revivification's
|
||||
revivified
|
||||
revivifies
|
||||
revivify
|
||||
revivifying
|
||||
reviving
|
||||
```
|
||||
|
||||
That’s sixteen words that start with the string “revi”. The **^** character represents the beginning of the word and, as you might have suspected, each word in the file is on a line by itself.
|
||||
|
||||
A good number of the words in the **/usr/share/dict/words** file are names. If you want to find words regardless of whether they're capitalized, add the **-i** (ignore case) option to your **grep** command.
|
||||
|
||||
```
|
||||
$ grep -i ^wool /usr/share/dict/words
|
||||
Woolf
|
||||
Woolf's
|
||||
Woolite
|
||||
Woolite's
|
||||
Woolongong
|
||||
Woolongong's
|
||||
Woolworth
|
||||
Woolworth's
|
||||
wool
|
||||
...
|
||||
```
|
||||
|
||||
You can also look for words that end in or contain a certain string of letters. In this next command, we look for words that contain the string “nativ” at any location.
|
||||
|
||||
```
|
||||
$ grep 'nativ' /usr/share/dict/words
|
||||
alternative
|
||||
alternative's
|
||||
alternatively
|
||||
alternatives
|
||||
imaginative
|
||||
imaginatively
|
||||
native
|
||||
native's
|
||||
natives
|
||||
nativities
|
||||
nativity
|
||||
nativity's
|
||||
nominative
|
||||
nominative's
|
||||
nominatives
|
||||
unimaginative
|
||||
```
|
||||
|
||||
In this next command, we look for words that end in “emblance”, the **$** character representing the end of the line. Only two words in the **words** file fit the bill.
|
||||
|
||||
[][2]
|
||||
|
||||
```
|
||||
$ grep 'emblance$' /usr/share/dict/words
|
||||
resemblance
|
||||
semblance
|
||||
```
|
||||
|
||||
If we, for some reason, want to find words with exactly 21 letters, we could use this command:
|
||||
|
||||
```
|
||||
$ grep '^.....................$' /usr/share/dict/words
|
||||
counterintelligence's
|
||||
electroencephalograms
|
||||
electroencephalograph
|
||||
```
|
||||
|
||||
On the other hand, making sure we've typed the correct number of dots can be tedious. This next command is little easier to manage:
|
||||
|
||||
```
|
||||
$ grep -E '^[[:alpha:]]{21}$' /usr/share/dict/words
|
||||
electroencephalograms
|
||||
electroencephalograph
|
||||
```
|
||||
|
||||
This command does the same thing:
|
||||
|
||||
```
|
||||
$ grep -E '^\w{21}$' /usr/share/dict/words
|
||||
electroencephalograms
|
||||
electroencephalograph
|
||||
```
|
||||
|
||||
The one important difference between these commands is that the one with the dots matches any string of 21 characters. The two specifying "alpha" or "\w" only match letters, so they find only two matching words.
|
||||
|
||||
Now let’s look for words that contain 20 letters (or more) in a row.
|
||||
|
||||
```
|
||||
$ grep -E '(\w{20})' /usr/share/dict/words
|
||||
Andrianampoinimerina
|
||||
Andrianampoinimerina's
|
||||
counterrevolutionaries
|
||||
counterrevolutionary
|
||||
counterrevolutionary's
|
||||
electroencephalogram
|
||||
electroencephalogram's
|
||||
electroencephalograms
|
||||
electroencephalograph
|
||||
electroencephalograph's
|
||||
electroencephalographs
|
||||
uncharacteristically
|
||||
```
|
||||
|
||||
That command returns words with apostrophes because they contain 20 letters in a row before they get to that point.
|
||||
|
||||
Next, we’ll check out words with 21 or more characters. The 1 and 20 in combination with the **v** (invert) option in this command cause **grep** to skip over words with anywhere from 1 to 20 characters.
|
||||
|
||||
```
|
||||
$ grep -vwE '\w{1,20}' /usr/share/dict/words
|
||||
counterrevolutionaries
|
||||
electroencephalograms
|
||||
electroencephalograph
|
||||
electroencephalographs
|
||||
```
|
||||
|
||||
In this next command, we look for words that start with “ex” and have four additional letters.
|
||||
|
||||
```
|
||||
$ grep '^ex.\{4\}$' /usr/share/dict/words
|
||||
exacts
|
||||
exalts
|
||||
exam's
|
||||
exceed
|
||||
excels
|
||||
except
|
||||
excess
|
||||
excise
|
||||
excite
|
||||
excuse
|
||||
…
|
||||
```
|
||||
|
||||
In case you're curious, the **words** file on this system contains 43 such words:
|
||||
|
||||
```
|
||||
$ grep '^ex.\{4\}$' /usr/share/dict/words | wc -l
|
||||
43
|
||||
```
|
||||
|
||||
To get help with spelling, you should try **aspell**. It can help you with individual words or run a spell check scan through an entire text file. In this first example, we ask **aspell** to help with a single word. It finds the word we’re after along with a couple other possibilities.
|
||||
|
||||
### Checking a word
|
||||
|
||||
```
|
||||
$ aspell -a
|
||||
@(#) International Ispell Version 3.1.20 (but really Aspell 0.60.7)
|
||||
prolifferate <== entered word
|
||||
& prolifferate 3 0: proliferate, proliferated, proliferates <== replacement options
|
||||
```
|
||||
|
||||
If **aspell** doesn’t provide a list of words, that means that the spelling you offered was correct. Here's an example:
|
||||
|
||||
```
|
||||
$ aspell -a
|
||||
@(#) International Ispell Version 3.1.20 (but really Aspell 0.60.7)
|
||||
proliferate <== entered text
|
||||
* <== no suggestions
|
||||
```
|
||||
|
||||
Typing **^C** (control-c) exits **aspell**.
|
||||
|
||||
### Checking a file
|
||||
|
||||
When checking a file with **aspell**, you get suggestions for each misspelled word. When **aspell** spots typos, it highlights the misspelled words one at a time and gives you a chance to choose from a list of properly spelled words that are similar enough to the misspelled words to be good candidates for replacing them.
|
||||
|
||||
To start checking a file, type **aspell -c** followed by the file name.
|
||||
|
||||
```
|
||||
$ aspell -c thesis
|
||||
```
|
||||
|
||||
You'll see something like this:
|
||||
|
||||
```
|
||||
This thesis focusses on …
|
||||
|
||||
1) focuses 6) Fosse's
|
||||
2) focused 7) flosses
|
||||
3) cusses 8) courses
|
||||
4) fusses 9) focus
|
||||
5) focus's 0) fuses
|
||||
i) Ignore I) Ignore all
|
||||
r) Replace R) Replace all
|
||||
a) Add l) Add Lower
|
||||
b) Abort x) Exit
|
||||
```
|
||||
|
||||
Make your selection by pressing the key listed next to the word you want (1, 2, etc.) and **aspell** will replace the misspelled word in the file and move on to the next one if there are others. Notice that you also have options to replace the word by typing another one. Press "x" when you're done.

### Help with crossword puzzles

If you’re working on a crossword puzzle and need to find a five-letter word that starts with a “d” and has a “u” as its fourth letter, you can use a command like this:

```
$ grep -i '^d..u.$' /usr/share/dict/words
datum
debug
debut
demur
donut
```

### Help with word scrambles

If you’re working on a puzzle that requires you to unscramble a string of letters into a proper word, you can offer the list of letters to **grep**, as in this example in which **grep** turns the letters “yxusonlia” into the word “anxiously”. (The pattern matches nine characters drawn from that set of letters, and the negative lookahead **(?!.*?\1)** rejects any match in which a consumed letter appears again later, so each of the nine letters is used exactly once.)

```
$ grep -P '^(?:([yxusonlia])(?!.*?\1)){9}$' /usr/share/dict/words
anxiously
```

Linux’s word skills are impressive and sometimes even fun. Whether you're hoping to find words you can't quite call to mind or get a little help cheating on word puzzles, Linux offers some clever options.

Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3539011/how-to-takke-advantage-of-linuxs-extensive-vocabulary.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -0,0 +1,255 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Python to visualize COVID-19 projections)
[#]: via: (https://opensource.com/article/20/4/python-data-covid-19)
[#]: author: (AnuragGupta https://opensource.com/users/999anuraggupta)

Using Python to visualize COVID-19 projections
======
I'll demonstrate how to create two visualizations of the spread of a virus across the globe, using open data and open source libraries.

![Colorful sound wave graph][1]

Using [Python][2] and some graphing libraries, you can project the total number of confirmed cases of COVID-19, and also display the total number of deaths for a country (this article uses India as an example) on a given date. Humans sometimes need help interpreting and processing the meaning of data, so this article also demonstrates how to create an animated horizontal bar graph for five countries, showing the variation of cases by date.

### Projecting confirmed cases and deaths for India

This is done in three steps.

#### 1\. Download data

Scientific data isn't always open, but fortunately, many modern science and healthcare organizations are eager to share information with each other and the public. Data about COVID-19 cases is available online, and it's updated frequently.

To parse the data, you first must download it: <https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv>

Load the data directly into a Pandas DataFrame. Pandas provides a function, **read_csv()**, which can take a URL and give back a DataFrame object, as shown below:

```
import pycountry
import plotly.express as px
import pandas as pd
URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv'
df1 = pd.read_csv(URL_DATASET)
print(df1.head(3))  # Get first 3 entries in the dataframe
print(df1.tail(3))  # Get last 3 entries in the dataframe
```

The top row of the data set contains column names:

1. Date
2. Country
3. Confirmed
4. Recovered
5. Deaths

The output of the **head** query includes a row index (not listed as a column) plus an entry for each column:

```
0  2020-01-22  Afghanistan  0  0  0
1  2020-01-22  Albania      0  0  0
2  2020-01-22  Algeria      0  0  0
```

The output of the **tail** query is similar but contains the tail end of the data set:

```
12597  2020-03-31  West Bank and Gaza  119  18  1
12598  2020-03-31  Zambia               35   0  0
12599  2020-03-31  Zimbabwe              8   0  1
```

From the output, you can see that the DataFrame (**df1**) has the following columns:

1. Date
2. Country
3. Confirmed
4. Recovered
5. Deaths

Further, you can see that the **Date** column has entries starting from January 22 to March 31. This database is updated daily, so you will have current values.

#### 2\. Select data for India

In this step, we will select only those rows in the DataFrame that include India. This is shown in the script below:

```
#### ----- Step 2 (Select data for India)----
df_india = df1[df1['Country'] == 'India']
print(df_india.head(3))
```

#### 3\. Plot data

Here we create a bar chart. We will put the dates on the X-axis and the number of confirmed cases and the number of deaths on the Y-axis. There are a few noteworthy things about this part of the script:

* The line of code **plt.rcParams["figure.figsize"] = 20,20** is meant only for Jupyter, so remove it if you are using some other IDE.

* Notice the line of code **ax1 = plt.gca()**. To ensure that both plots, i.e., confirmed cases as well as deaths, are drawn on the same graph, we need to give the second plot the **ax** object of the first. ("gca" stands for "get current axis.")

The complete script is given below:

```
# Author:- Anurag Gupta # email:- 999.anuraggupta@gmail.com
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd

#### ----- Step 1 (Download data)----
URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv'
df1 = pd.read_csv(URL_DATASET)
# print(df1.head(3)) # Uncomment to see the dataframe

#### ----- Step 2 (Select data for India)----
df_india = df1[df1['Country'] == 'India']
print(df_india.head(3))

#### ----- Step 3 (Plot data)----
# Increase size of plot
plt.rcParams["figure.figsize"] = 20, 20 # Remove if not on Jupyter
# Plot column 'Confirmed'
df_india.plot(kind = 'bar', x = 'Date', y = 'Confirmed', color = 'blue')

ax1 = plt.gca()
df_india.plot(kind = 'bar', x = 'Date', y = 'Deaths', color = 'red', ax = ax1)
plt.show()
```

The entire script is [available on GitHub][4].

### Creating an animated horizontal bar graph for five countries

Note for Jupyter: To run this in Jupyter as a dynamic animation rather than a static PNG, you need to add a magic command at the beginning of your cell: **%matplotlib notebook**. This keeps the figure alive instead of displaying a static PNG file and can, therefore, also show animations. If you are on another IDE, remove this line.

#### 1\. Download the data

This step is exactly the same as in the previous script, and therefore, it need not be repeated.

#### 2\. Create a list of all dates

If you examine the data you downloaded, you will notice that it has a **Date** column with a date value for each country, so the same date occurs a number of times. For the X-axis of our bar charts, we need a list of dates with only unique values. A line of code like **list_dates = df['Date'].unique()** takes care of this; the **unique()** method picks up only the unique values.

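Here is a minimal, self-contained sketch of this step (the URL and column names are the ones used throughout this article):

```
import pandas as pd

URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv'
df = pd.read_csv(URL_DATASET, usecols=['Date', 'Country', 'Confirmed'])

print(len(df['Date']))            # every (date, country) row, so each date repeats
list_dates = df['Date'].unique()  # one entry per calendar day
print(len(list_dates))            # far fewer entries: the unique dates only
```
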
#### 3\. Pick five countries and create an **ax** object

Take a list of five countries. (You can choose whichever countries you prefer, or even increase or decrease the number of countries.) I have also taken a list of five colors, one for the bars of each country. (You can change this too if you like.) One important line of code here is **fig, ax = plt.subplots(figsize=(15, 8))**, which is needed to create the **ax** object.

#### 4\. Write the callback function

If you want to do animation in Matplotlib, you need to create an object of a class called **matplotlib.animation.FuncAnimation**. The signature of this class is available online. The constructor of this class, apart from other parameters, also takes a parameter called **func**, to which you have to pass a callback function. So in this step, we will write the callback function, which is called repeatedly in order to render the animation.

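As a minimal sketch of the idea (the complete script below uses this same structure), the callback receives one frame value, in our case a date, selects the matching rows, and redraws the axes:

```
# Minimal callback sketch; assumes df, ax, list_countries, and list_colors
# already exist, as they do in the complete script below.
def plot_bar(some_date):
    df2 = df[df['Date'].eq(some_date)]              # rows for this frame's date
    ax.clear()                                      # wipe the previous frame
    df4 = df2[df2['Country'].isin(list_countries)]  # keep only our five countries
    return ax.barh(df4['Country'], df4['Confirmed'], color=list_colors)
```
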
#### 5\. Create a **FuncAnimation** object

This step was partly explained in the previous step. Our code to create an object of this class is:

```
my_anim = animation.FuncAnimation(fig = fig, func = plot_bar,
                                  frames = list_dates, blit = True,
                                  interval = 20)
```

The three important parameters to be given are:

* **fig**, which must be given the **fig** object we created earlier.
* **func**, which must be the callback function.
* **frames**, which must contain the variable on which the animation is to be done; in our case, it is the list of dates we created earlier.

#### 6\. Save the animation to an mp4 file

You can save the animation you created to an mp4 file. For this you need **ffmpeg**. You can install it using pip (**pip install ffmpeg-python**), or using conda on Jupyter (**conda install -c conda-forge ffmpeg**).

And finally, you can run the animation using **plt.show()**. Please note that on many platforms, **ffmpeg** may not work properly and may require further "tweaking."

```
%matplotlib notebook
# Author:- Anurag Gupta # email:- 999.anuraggupta@gmail.com
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from time import sleep

#### ---- Step 1:- Download data
URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv'
df = pd.read_csv(URL_DATASET, usecols = ['Date', 'Country', 'Confirmed'])
# print(df.head(3)) # uncomment this to see output

#### ---- Step 2:- Create list of all dates
list_dates = df['Date'].unique()
# print(list_dates) # Uncomment to see the dates

#### --- Step 3:- Pick 5 countries. Also create ax object
fig, ax = plt.subplots(figsize=(15, 8))
# We will animate for these 5 countries only
list_countries = ['India', 'China', 'US', 'Italy', 'Spain']
# colors for the 5 horizontal bars
list_colors = ['black', 'red', 'green', 'blue', 'yellow']

### --- Step 4:- Write the callback function
# plot_bar() is the callback function used in the FuncAnimation class object
def plot_bar(some_date):
    df2 = df[df['Date'].eq(some_date)]
    ax.clear()
    # Only take the Confirmed column, in descending order
    df3 = df2.sort_values(by = 'Confirmed', ascending = False)
    # Select the top 5 Confirmed countries
    df4 = df3[df3['Country'].isin(list_countries)]
    # print(df4) # Uncomment to see that data is only for 5 countries
    sleep(0.2)  # To slow down the animation
    # ax.barh() makes a horizontal bar plot.
    return ax.barh(df4['Country'], df4['Confirmed'], color = list_colors)

### ---- Step 5:- Create FuncAnimation object ---------
my_anim = animation.FuncAnimation(fig = fig, func = plot_bar,
                                  frames = list_dates, blit = True,
                                  interval = 20)

### --- Step 6:- Save the animation to an mp4
# Place where to save the mp4. Give your own file path instead
path_mp4 = r'C:\Python-articles\population_covid2.mp4'
# my_anim.save(path_mp4, fps = 30, extra_args = ['-vcodec', 'libx264'])
my_anim.save(filename = path_mp4, writer = 'ffmpeg',
             fps = 30,
             extra_args = ['-vcodec', 'libx264', '-pix_fmt', 'yuv420p'])
plt.show()
```

The complete script is [available on GitHub][5].

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/4/python-data-covid-19

作者:[AnuragGupta][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/999anuraggupta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/colorful_sound_wave.png?itok=jlUJG0bM (Colorful sound wave graph)
[2]: https://opensource.com/resources/python
[3]: mailto:999.anuraggupta@gmail.com
[4]: https://raw.githubusercontent.com/ag999git/jupyter_notebooks/master/corona_bar_india
[5]: https://raw.githubusercontent.com/ag999git/jupyter_notebooks/master/corona_bar_animated

@ -0,0 +1,202 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux)
[#]: via: (https://www.2daygeek.com/linux-unix-check-network-interfaces-names-nic-speed-ip-mac-address/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux
======

By default, when you set up a server, you configure the primary network interface. This is part of the build work that everyone does.

Sometimes you may need to configure an additional network interface for several reasons: network bonding/teaming, high availability, or a separate interface used for application requirements or backups.

To do so, you need to know how many interfaces your computer has, and their speed, in order to configure them.

There are many commands to check for available network interfaces, but here we only use the **ip** command. Later, we will write a separate article covering all of these tools.

In this tutorial, we will show you the available Network Interface Card (NIC) information, such as the interface name, associated IP address, MAC address, and interface speed.

### What’s the IP Command?

The **[ip command][1]** is similar to **ifconfig** and is used for assigning static IP addresses, routes, default gateways, etc.

```
# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:a0:7d:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.101/24 brd 192.168.1.101 scope global eth0
    inet6 fe80::f816:3eff:fea0:7d5a/64 scope link
       valid_lft forever preferred_lft forever
```

### What’s the ethtool Command?

The **ethtool** command is used to query or control network driver and hardware settings.

```
# ethtool eth0
```

### 1) How to Check the Available Network Interfaces on Linux Using the IP Command

When you run the **ip** command without any arguments, it returns plenty of information, but if you only need the available network interfaces, use the following customized **ip** command:

```
# ip a | awk '/state UP/{print $2}'

eth0:
eth1:
```

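Depending on your iproute2 version, the **ip** command also has a brief mode that produces a cleaner listing. This alternative isn't used in this article, and the exact output columns may vary (the names and addresses below are the ones from the example output above):

```
# ip -br link show up

lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
eth0             UP             fa:16:3e:a0:7d:5a <BROADCAST,MULTICAST,UP,LOWER_UP>
```
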
### 2) How to Check the IP Address of a Network Interface on Linux Using the IP Command

If you only want to see which IP address is assigned to which interface, use the following customized **ip** command:

```
# ip -o a show | cut -d ' ' -f 2,7
or
# ip a | grep -i inet | awk '{print $7, $2}'

lo 127.0.0.1/8
192.168.1.101/24
192.168.1.102/24
```

### 3) How to Check the Network Interface Card MAC Address on Linux Using the IP Command

If you only want to see the network interface name and the corresponding MAC address, use the following format.

To check a specific network interface's MAC address:

```
# ip link show dev eth0 | awk '/link/{print $2}'
00:00:00:55:43:5c
```

To check the MAC addresses of all network interfaces, create the following shell script:

```
# vi /opt/scripts/mac-addresses.sh

#!/bin/sh
ip a | awk '/state UP/{print $2}' | sed 's/://' | while read output;
do
    echo $output:
    ethtool -P $output
done
```

Run the shell script below to get the MAC addresses of multiple network interfaces:

```
# sh /opt/scripts/mac-addresses.sh

eth0:
Permanent address: 00:00:00:55:43:5c
eth1:
Permanent address: 00:00:00:55:43:5d
```

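If **ethtool** isn't available, the kernel exposes the same MAC information under **/sys/class/net**. Here is a sketch of an alternative loop; note that **/sys** reports the address currently in use, which can differ from the permanent address shown by **ethtool -P**:

```
# for iface in /sys/class/net/*; do echo "$(basename $iface): $(cat $iface/address)"; done
```
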
### 4) How to Check the Network Interface Port Speed on Linux Using the ethtool Command

If you want to check the network interface port speed on Linux, use the **ethtool** command.

To check the speed of a particular network interface port:

```
# ethtool eth0 | grep "Speed:"

Speed: 10000Mb/s
```

To check the port speed of all network interfaces, create the following shell script:

```
# vi /opt/scripts/port-speed.sh

#!/bin/sh
ip a | awk '/state UP/{print $2}' | sed 's/://' | while read output;
do
    echo $output:
    ethtool $output | grep "Speed:"
done
```

Run the shell script below to get the port speed of multiple network interfaces:

```
# sh /opt/scripts/port-speed.sh

eth0:
Speed: 10000Mb/s
eth1:
Speed: 10000Mb/s
```

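The port speed is also exposed under **/sys/class/net**, reported in Mb/s; this is another quick alternative not used in this article:

```
# cat /sys/class/net/eth0/speed
10000
```
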
### 5) Shell Script to Verify Network Interface Card Information

This **[shell script][2]** gathers all of the above information: network interface names, the IP addresses and MAC addresses of the network interfaces, and the speed of each network interface port.

```
# vi /opt/scripts/nic-info.sh

#!/bin/sh
hostname
echo "-------------"
for iname in $(ip a | awk '/state UP/{print $2}')
do
    echo "$iname"
    ip a | grep -A2 $iname | awk '/inet/{print $2}'
    ip a | grep -A2 $iname | awk '/link/{print $2}'
    ethtool $iname | grep "Speed:"
done
```

Run the shell script below to check the network card information:

```
# sh /opt/scripts/nic-info.sh

vps.2daygeek.com
----------------
eth0:
192.168.1.101/24
00:00:00:55:43:5c
Speed: 10000Mb/s
eth1:
192.168.1.102/24
00:00:00:55:43:5d
Speed: 10000Mb/s
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/linux-unix-check-network-interfaces-names-nic-speed-ip-mac-address/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/ip-command-configure-network-interface-usage-linux/
[2]: https://www.2daygeek.com/category/shell-script/

@ -0,0 +1,129 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Things You Should Know About Ubuntu 20.04)
[#]: via: (https://itsfoss.com/ubuntu-20-04-faq/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Things You Should Know About Ubuntu 20.04
======

The [Ubuntu 20.04 release][1] is just around the corner, and you may have a few questions and doubts regarding upgrades, installation, etc.

I hosted some Q&A sessions on various social media channels to answer the doubts of readers like you.

I am going to list these common questions about Ubuntu 20.04 with their answers. I hope it helps you clear the doubts you have. And if you still have questions, feel free to ask in the comment section below.

### Ubuntu 20.04: Your Questions Answered

![][2]

Just to clarify, some of the answers here may be influenced by my personal opinion. If you are an experienced Ubuntu user, some of the questions may sound _silly_ to you, but they are not to new Ubuntu users.

#### When will Ubuntu 20.04 be released?

Ubuntu 20.04 LTS releases on 23rd April 2020. All the participating flavors, like Kubuntu, Lubuntu, Xubuntu, Budgie, and MATE, will have their 20.04 releases available on the same day.

#### What are the system requirements for Ubuntu 20.04?

For the default GNOME version, you should have a minimum of 4 GB of RAM, a 2 GHz dual-core processor, and at least 25 GB of disk space.

Other [Ubuntu flavors][3] may have different system requirements.

#### Can I use Ubuntu 20.04 on 32-bit systems?

No, not at all. You cannot use Ubuntu 20.04 on 32-bit systems. Even if you are using 32-bit Ubuntu 18.04, you cannot upgrade to Ubuntu 20.04. There has been no ISO for 32-bit systems for the past several years.

![Error while upgrading 32-bit Ubuntu 18.04 to Ubuntu 20.04][4]

#### Can I use Wine on Ubuntu 20.04?

Yes, you can still use Wine on Ubuntu 20.04, as the 32-bit lib support is still there for the packages needed by Wine and [Steam Play][5].

#### Do I have to pay for Ubuntu 20.04 or purchase a license?

No, Ubuntu is completely free to use. You don’t have to buy a license key or activate Ubuntu like you do in Windows.

The download section of Ubuntu requests that you donate some money, but it’s up to you whether you want to give some money for developing this awesome operating system.

#### What GNOME version does it have?

Ubuntu 20.04 has GNOME 3.36.

#### Does Ubuntu 20.04 have better performance than Ubuntu 18.04?

Yes, in several aspects. Ubuntu 20.04 installs faster, and it even boots faster. I have shown the performance comparison in the video below, at the 4:40 mark.

Scrolling, window animations, and other UI elements are more fluid and give a smoother experience in GNOME 3.36.

#### How long will Ubuntu 20.04 be supported?

It is a long-term support (LTS) release, and like any LTS release, it will be supported for five years, which means that Ubuntu 20.04 will get security and maintenance updates until April 2025.

#### Will I lose data while upgrading to Ubuntu 20.04?

You can upgrade to Ubuntu 20.04 from Ubuntu 19.10 or Ubuntu 18.04. You don’t need to create a live USB and install from it. All you need is a good internet connection that can download around 1.5 GB of data.

Upgrading from an existing system doesn’t harm your files. You should have all your files as they are, and most of your existing software should have either the same or upgraded versions.

If you have used some third-party tools or [additional PPAs][6], the upgrade procedure will disable them. You can enable these additional repositories again if they are available for Ubuntu 20.04.

Upgrading takes about an hour, and after a restart, you will be logged in to the newer version.

Though your data will not be touched and you won’t lose system files and configurations, it is always a good idea to make an external backup of your important data.

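If you prefer the terminal, the upgrade itself usually comes down to refreshing your packages and then running Ubuntu's release-upgrade tool. This is a sketch of the customary commands rather than something spelled out in this FAQ:

```
$ sudo apt update && sudo apt upgrade   # bring the current release fully up to date first
$ sudo do-release-upgrade               # start the upgrade to the new release
```
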
#### When will I get to upgrade to Ubuntu 20.04?

![][7]

If you are using Ubuntu 19.10 and have the correct update settings in place (as mentioned in the earlier sections), you should be notified about upgrading to Ubuntu 20.04 within a few days of the Ubuntu 20.04 release.

For Ubuntu 18.04 users, it may take some weeks before they are officially notified of the availability of Ubuntu 20.04. You will probably get the prompt after the first point release, Ubuntu 20.04.1.

#### If I upgrade to Ubuntu 20.04, can I downgrade to 19.10 or 18.04?

No, you cannot. While upgrading to a newer version is easy, there is no option to downgrade. If you want to go back to Ubuntu 18.04, you’ll have to [install Ubuntu 18.04][8] again.

#### I am using Ubuntu 18.04 LTS. Should I Upgrade to Ubuntu 20.04 LTS?

That depends upon you. If you are impressed by the new features in Ubuntu 20.04 and want to get your hands on it, you should upgrade.

If you want a more stable system, I advise waiting for the first point release, Ubuntu 20.04.1, which will include fixes for the bugs discovered in the new release. 20.04.1 should typically arrive approximately two months after the release of Ubuntu 20.04.

In either case, I recommend upgrading to Ubuntu 20.04 sooner or later. Ubuntu 20.04 has a newer kernel, performance improvements, and, above all, newer versions of the software available in the repository.

Make a backup on an external disk, and with good internet connectivity, the upgrade should not be an issue.

#### Should I do a fresh install of Ubuntu 20.04 or upgrade to it from 18.04/19.10?

If you have a choice, make a backup of your data and do a fresh install of Ubuntu 20.04.

Upgrading to 20.04 from an existing version is a convenient option. However, in my opinion, it still keeps some traces/packages of the older version. A fresh install is always cleaner.

#### Any other questions about Ubuntu 20.04?

If you have any other doubts regarding Ubuntu 20.04, please feel free to leave a comment below. If you think some other information should be added to the list, please let me know.

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-20-04-faq/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-20-04-release-features/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu_20_04_faq.jpg?ssl=1
[3]: https://itsfoss.com/which-ubuntu-install/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-32-bit.jpg?ssl=1
[5]: https://itsfoss.com/steam-play/
[6]: https://itsfoss.com/ppa-guide/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/upgrade-ubuntu-20-04.jpg?ssl=1
[8]: https://itsfoss.com/install-ubuntu/

@ -0,0 +1,53 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ethernet consortium announces completion of 800GbE spec)
[#]: via: (https://www.networkworld.com/article/3538529/ethernet-consortium-announces-completion-of-800gbe-spec.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

以太网联盟宣布完成 800Gb 以太网规范
======
800Gb 以太网规范使当前以太网标准的最高速度提高了一倍,但同时也对包括延迟在内的其他方面进行了调整。

Martyn Williams/IDGNS

由业界支持的以太网技术联盟已宣布完成 800Gb 以太网技术规范。

新规范正式称为 800GBASE-R,它基于当前高端 400Gb 以太网协议中使用的许多技术。设计它的联盟(当时称为 25Gb 以太网联盟)在开发 25、50 和 100Gb 以太网协议方面也发挥了重要作用,其成员包括 Broadcom、Cisco、Google 和 Microsoft。

800Gb 以太网规范增加了新的媒体访问控制(MAC)和物理编码子层(PCS)方法,新规范对这些功能进行了调整,来使用 8 条 106.25Gbps 的物理通道分发数据。(通道可以是铜双绞线,也可以是光缆中的一根光纤或一个波长。)800GBASE-R 规范建立在两个 400GbE 2xClause PCS 之上,以创建一个以 800Gbps 的总速率运行的单个 MAC。

尽管主要是使用 8 条 106.25Gbps 通道,但这并不是固定的。它也可以以一半的速度(53.125Gbps)使用 16 条通道。

新标准的延迟只有 400G 以太网规范的一半;同时,新规范还将运行在 50Gbps、100Gbps 和 200Gbps 的网络上的前向纠错(FEC)开销减少了一半,从而减少了网卡上的数据包处理负担。

通过降低延迟,这将满足对延迟敏感的应用(例如[高性能计算][2]和人工智能)中对速度的需求,在这些应用中,需要尽可能快地移动大量数据。

从 400G 增加到 800G 并不是技术上大的飞跃,这意味着要以相同的传输速率添加更多通道,并进行一些调整。但是,要想突破 Tb 级(Cisco 和其他网络公司已经讨论了十年),将需要对技术进行重大修改,而且并非易事。

新技术可能也不便宜。800G 可与现有硬件一起使用,而 400Gb 以太网交换机价格不菲,高达六位数。对技术进行重大修改、越过 Tb 障碍,可能会变得更加昂贵。但是对于大客户和高性能计算客户而言,这是预料之中的。

ETC 并未透露何时会支持 800G 的新硬件,但鉴于其对现有规格的适度更改,它可能会在今年出现,前提是疫情引起的停滞不会影响它。

加入 [Facebook][4] 和 [LinkedIn][5] 上的 Network World 社区,评论热门主题。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3538529/ethernet-consortium-announces-completion-of-800gbe-spec.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[2]: https://www.networkworld.com/article/3444399/high-performance-computing-do-you-need-it.html
[3]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy)
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

@ -1,255 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (messon007)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (9 open source cloud native projects to consider)
[#]: via: (https://opensource.com/article/19/8/cloud-native-projects)
[#]: author: (Bryant Son https://opensource.com/users/brsonhttps://opensource.com/users/marcobravo)

值得考虑的 9 个开源的云原生项目
======
工作中用了容器?熟悉这些出自云原生计算基金会的项目?

![clouds in the sky with blue pattern][1]

随着用容器来开发应用的实践变得流行,[云原生应用][2]也在增长。云原生应用的定义为:

> “云原生技术被用于开发应用程序,这些应用通过将服务打包在容器中来完成构建,被部署为微服务,并通过敏捷的 DevOps 流程和持续集成工作流在弹性基础设施上管理。”

这个定义提到了构成云原生应用的 4 个元素:

1. 容器
2. 微服务
3. DevOps
4. 持续集成和持续交付(CI/CD)

尽管这些技术各有各自独特的历史,但它们相互补充,共同导致了云原生应用和工具在短时间内惊人的指数级增长。这个[云原生计算基金会(CNCF)][4]信息图呈现了当今云原生应用生态的规模和广度。

![Cloud-Native Computing Foundation applications ecosystem][5]

云原生计算基金会项目

我想说,瞧着吧!这仅仅是一个开始。正如 NodeJS 的出现引发了无休止的 JavaScript 工具的爆炸式增长一样,容器技术的普及也推动了云原生应用的指数增长。

好消息是,有几个组织负责监管这些技术并将它们融合在一起。其中之一是 [**Open Containers Initiative(OCI)**][6],它是一个轻量级的、开放的治理机构(或项目),“它是在 Linux 基金会的主持下形成的,其明确目的是创建开放的行业标准的容器格式和运行时。”另一个是 **CNCF**,“它是一个致力于使云原生计算具有通用性和可持续性的开源软件基金会”。

除了常见的围绕云原生应用建立社区之外,CNCF 还帮助项目基于其云原生应用建立结构化的管理。CNCF 创建了成熟等级的概念(沙箱级、孵化级或毕业级),分别与下图中的“创新者”、“早期采用者”和“早期大量应用”相对应。

![CNCF project maturity levels][7]

CNCF 项目成熟等级

CNCF 为每个成熟等级制定了详细的[标准][8](为方便读者而列在下面)。获得技术监督委员会(TOC)三分之二的同意才能转为孵化或毕业级。

### 沙箱级

> 要想成为沙箱级,一个项目必须至少有两个 TOC 赞助商。有关详细过程,请参见《CNCF 沙箱指南 v1.0》。

### 孵化级

> 注意:孵化级是我们期望对项目进行全面尽职调查的起点。
>
> 要进入孵化阶段,项目除了满足沙箱阶段的要求之外还要满足:
>
> * 证明至少有三个独立的最终用户已成功将其用于生产,且 TOC 判断这些最终用户具有足够的质量和范围。
> * 合入者的数量要合理。合入者定义为具有合入权的人,即可以接受对部分或全部项目贡献的人。
> * 演示有大量正在进行的提交和合并的贡献。
> * 由于这些指标可能会根据项目的类型、范围和大小而有很大差异,因此 TOC 对足以满足这些标准的活动级别拥有最终决策权。

### 毕业级

> 要从沙箱或孵化级毕业,或者要使一个新项目作为已毕业项目加入,项目除了必须满足孵化级的标准外还要满足:
>
> * 至少有两个组织的提交者。
> * 已获得并维护了“核心基础设施计划最佳实践徽章”。
> * 已完成独立的第三方安全审核,并发布了具有与以下示例类似的范围和质量的结果(包括已解决的关键漏洞):<https://github.com/envoyproxy/envoy#security-audit>,并且在毕业之前需要解决所有关键漏洞。
> * 采用 CNCF 行为准则。
> * 明确定义项目治理和提交流程。最好将其排布在 GOVERNANCE.md 文件中,并引用显示当前提交者和荣誉提交者的 OWNERS.md 文件。
> * 至少有主仓的项目采用者的公开列表(例如,ADOPTERS.md 或项目网站上的徽标)。
> * 获得 TOC 的多数票,进入毕业阶段。如果项目能够证明足够的成熟度,则可以尝试直接从沙箱过渡到毕业。项目可以无限期保持孵化状态,但是通常预计它们会在两年内毕业。

## 值得考虑的 9 个项目

本文不可能涵盖所有的 CNCF 项目,我将介绍最有趣的 9 个“已毕业和孵化中”的开源项目。

名称|授权类型|简要描述
---|---|---
[Kubernetes][9] | Apache 2.0 | 容器编排平台
[Prometheus][10] | Apache 2.0 | 系统和服务监控工具
[Envoy][11] | Apache 2.0 | 边缘和服务代理
[rkt][12] | Apache 2.0 | Pod 原生的容器引擎
[Jaeger][13] | Apache 2.0 | 分布式跟踪系统
[Linkerd][14] | Apache 2.0 | 无感服务网格
[Helm][15] | Apache 2.0 | Kubernetes 包管理器
[Etcd][16] | Apache 2.0 | 分布式键值存储
[CRI-O][17] | Apache 2.0 | 专门用于 Kubernetes 的轻量级运行时环境

我也创建了视频材料来介绍这些项目。

## 毕业项目

已毕业的项目被认为是成熟的,已被许多组织采用的,并且严格遵守了 CNCF 的准则。以下是三个最受欢迎的开源 CNCF 毕业项目。(请注意,其中一些描述来源于项目的网站并被做了改编。)

### Kubernetes

Kubernetes!我们如何在不提及 Kubernetes 的情况下谈论云原生应用程序?Google 发明的 Kubernetes 无疑是最著名的基于容器的应用程序的容器编排平台,而且它还是一个开源工具。

什么是容器编排平台?通常,一个容器引擎本身可以管理几个容器。但是,当您谈论数千个容器和数百个服务时,管理这些容器变得非常复杂。这就是容器编排引擎的用武之地。容器编排引擎通过自动化容器的部署、管理、网络和可用性来帮助管理大量的容器。

Docker Swarm 和 Mesosphere Marathon 也是容器编排引擎,但是可以肯定地说 Kubernetes 在竞争中胜出(至少现在是这样)。Kubernetes 还诞生了容器即服务(CaaS)平台,如 [OKD][18],它是 Origin 社区针对 Kubernetes 的发行版,并成了 [Red Hat OpenShift][19] 的一部分。

想开始学习请访问 [Kubernetes GitHub 仓库][9],并从 [Kubernetes 文档][20]页面访问其文档和学习资源。

### Prometheus

Prometheus 是 2012 年在 SoundCloud 上构建的一个开源系统监控和告警工具。之后,许多公司和组织都使用了 Prometheus,并且该项目拥有非常活跃的开发者和用户群体。现在,它是一个独立于公司的、独立维护的开源项目。

![Prometheus’ architecture][21]

Prometheus 的架构

理解 Prometheus 的最简单方法是可视化一个生产系统,该系统需要 24 小时 x 365 天都可以正常运行。没有哪个系统是完美的,也有减少故障的技术(称为容错系统)。但是,如果出现问题,最重要的是尽快识别它。这就是像 Prometheus 这样的监控工具的用武之地。Prometheus 不仅是容器监控工具,它在云原生应用公司中也最受欢迎。此外,其他开源监视工具,包括 [Grafana][22],都借鉴了 Prometheus。

开始使用 Prometheus 的最佳方法是下载其 [GitHub 仓库][10]。在本地运行 Prometheus 很容易,但是您需要安装一个容器引擎。您可以在 [Prometheus 网站][23]上查看详细的文档。

### Envoy

Envoy(或 Envoy 代理)是专为云原生应用设计的开源的边缘代理和服务代理。由 Lyft 创建的 Envoy 是为单一服务和应用而设计的高性能的 C++ 分布式代理,同时也是为由大量微服务组成的服务网格架构而设计的通信总线和通用数据平面。基于对 Nginx、HAProxy、硬件负载均衡器和云负载均衡器等方案了解的基础上,Envoy 与每个应用相伴(并行)运行,并对网络进行了高度抽象,最终以平台无关的方式来提供通用功能。

当基础设施中的所有服务流量都经过一个 Envoy 网格时,很容易就可以通过连贯的监测来可视化问题域,调整整体性能,并在单个位置添加基础功能。基本上,Envoy 代理是一个可帮助组织为生产环境构建容错系统的服务网格工具。

服务网格应用有很多替代方案,例如 Uber 的 [Linkerd][24](下面会讨论)和 [Istio][25]。Istio 通过将其部署为 [Sidecar][26] 并利用了 [Mixer][27] 的配置模型,实现了对 Envoy 的扩展。Envoy 的显著特性有:

* 所有的“table stakes(入场筹码,引申为基础必备特性)”特性(与控制平面(例如 Istio)组合时)
* 带载运行时 99% 数据可达到低延时
* 将 L3/L4 过滤器作为核心,支持额外的 L7 过滤器
* 支持 gRPC 和 HTTP/2(上行/下行)
* 由 API 驱动,并支持动态配置和热重载
* 重点关注指标收集、跟踪和整体可监测性

要想了解 Envoy,证实其能力并意识到其全部优势,需要丰富的在生产级环境运行的经验。您可以在[详细文档][28]或访问其 [GitHub][11] 仓库了解更多信息。

## 孵化项目

下面是六个最流行的开源 CNCF 孵化项目。

### rkt

rkt,读作 “rocket”,是一个 Pod 原生的容器引擎。它有一个命令行接口用来在 Linux 上运行容器。从某种意义上讲,它和其他容器引擎,如 [Podman][29]、Docker 和 CRI-O 相似。

rkt 最初由 CoreOS 开发(后来被 Red Hat 收购),您可以在其网站上找到详细的[文档][30],以及在 [GitHub][12] 上访问其源代码。

### Jaeger

Jaeger 是面向云原生应用的开源的端到端分布式跟踪系统。在某种程度上,它是像 Prometheus 这样的监控解决方案。但它有所不同,因为其使用场景有所扩展:

* 分布式事务监控
* 性能和延时优化
* 根因分析
* 服务的依赖分析
* 分布式上下文传播

Jaeger 是 Uber 建立的开源技术。您可以在其网站上找到[详细文档][31],以及在 GitHub 上找到其[源码][13]。

### Linkerd

像创建 Envoy 代理的 Lyft 一样,Uber 开发了 Linkerd 开源解决方案用于生产级的服务维护。在某些方面,Linkerd 就像 Envoy 一样,因为两者都是服务网格工具,旨在提供平台级的可观测性、可靠性和安全性,而无需进行配置或代码更改。

但是,两者之间存在一些细微的差异。尽管 Envoy 和 Linkerd 充当代理并可以通过所连接的服务进行上报,但是 Envoy 并不像 Linkerd 那样被设计为 Kubernetes Ingress 控制器。Linkerd 的显著功能包括:

* 支持多种平台(Docker、Kubernetes、DC/OS、Amazon ECS 或单机)
* 内置服务发现抽象,将多个系统集成在一起
* 支持 gRPC、HTTP/2 和 HTTP/1.x 请求和所有的 TCP 流量

您可以在 [Linkerd 网站][32]上阅读有关它的更多信息,并在 [GitHub][14] 上访问其源码。

### Helm

Helm 基本上是 Kubernetes 的软件包管理器。如果您使用过 Apache Maven、Maven Nexus 或类似的服务,您就会理解 Helm 的作用。Helm 可帮助您管理 Kubernetes 应用程序。它使用“Helm 图表”来定义、安装和升级最复杂的 Kubernetes 应用程序。Helm 并不是实现此功能的唯一方法;另一个流行的概念是 [Kubernetes Operators][33],它被 Red Hat OpenShift 4 所使用。

您可以按照其文档中的[快速开始指南][34]或 [GitHub 指南][15]来试用 Helm。

### Etcd

Etcd 是用于分布式系统中最关键数据的分布式的、可靠的键值存储。其主要特性有:

* 定义明确的、面向用户的 API(gRPC)
* 客户端证书验证、可选的自动 TLS
* 速度(可达每秒 10,000 次写入)
* 可靠性(使用 Raft 实现分布式)

Etcd 是 Kubernetes 和许多其他技术的默认的内置数据存储方案。也就是说,它很少独立运行或作为单独的服务运行;相反,它以集成到 Kubernetes、OKD/OpenShift 或其他服务中的形式来发挥作用。还可以用 [etcd Operator][35] 来管理其生命周期并解锁其 API 管理功能。

您可以在 [etcd 文档][36]中了解更多信息,并在 GitHub 上访问其[源码][16]。

### CRI-O

CRI-O 是 Kubernetes 运行时接口的 OCI 兼容实现。CRI-O 用于各种功能,包括:

* 使用 runc(或遵从 OCI 运行时规范的任何实现)和 OCI 运行时工具运行容器
* 使用 containers/image 库进行镜像管理
* 使用 containers/storage 库来存储和管理镜像层
* 通过容器网络接口(CNI)来提供网络支持

CRI-O 提供了大量的[文档][37],包括指南、教程、文章,甚至播客,您还可以访问其 [GitHub 页面][17]。

* * *

我错过了其他有趣且开源的云原生项目吗?请在评论中提醒我。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/8/cloud-native-projects

作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[messon007](https://github.com/messon007)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brsonhttps://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e (clouds in the sky with blue pattern)
[2]: https://opensource.com/article/18/7/what-are-cloud-native-apps
[3]: https://thenewstack.io/10-key-attributes-of-cloud-native-applications/
[4]: https://www.cncf.io
[5]: https://opensource.com/sites/default/files/uploads/cncf_1.jpg (Cloud-Native Computing Foundation applications ecosystem)
[6]: https://www.opencontainers.org
[7]: https://opensource.com/sites/default/files/uploads/cncf_2.jpg (CNCF project maturity levels)
[8]: https://github.com/cncf/toc/blob/master/process/graduation_criteria.adoc
[9]: https://github.com/kubernetes/kubernetes
[10]: https://github.com/prometheus/prometheus
[11]: https://github.com/envoyproxy/envoy
[12]: https://github.com/rkt/rkt
[13]: https://github.com/jaegertracing/jaeger
[14]: https://github.com/linkerd/linkerd
[15]: https://github.com/helm/helm
[16]: https://github.com/etcd-io/etcd
[17]: https://github.com/cri-o/cri-o
[18]: https://www.okd.io/
[19]: https://www.openshift.com
[20]: https://kubernetes.io/docs/home
[21]: https://opensource.com/sites/default/files/uploads/cncf_3.jpg (Prometheus’ architecture)
[22]: https://grafana.com
[23]: https://prometheus.io/docs/introduction/overview
[24]: https://linkerd.io/
[25]: https://istio.io/
[26]: https://istio.io/docs/reference/config/networking/v1alpha3/sidecar
[27]: https://istio.io/docs/reference/config/policy-and-telemetry
[28]: https://www.envoyproxy.io/docs/envoy/latest
[29]: https://podman.io
[30]: https://coreos.com/rkt/docs/latest
[31]: https://www.jaegertracing.io/docs/1.13
[32]: https://linkerd.io/2/overview
[33]: https://coreos.com/operators
[34]: https://helm.sh/docs
[35]: https://github.com/coreos/etcd-operator
[36]: https://etcd.io/docs/v3.3.12
[37]: https://github.com/cri-o/cri-o/blob/master/awesome.md