Mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-01-19 22:51:41 +08:00)

Merge branch 'master' of https://github.com/LCTT/TranslateProject into new

Commit: 083db84838
[#]: collector: (lujun9972)
[#]: translator: (luuming)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11004-1.html)
[#]: subject: (5 Good Open Source Speech Recognition/Speech-to-Text Systems)
[#]: via: (https://fosspost.org/lists/open-source-speech-recognition-speech-to-text)
[#]: author: (Simon James https://fosspost.org/author/simonjames)

5 Good Open Source Speech Recognition/Speech-to-Text Systems
======

![](https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/open-source-speech-recognition-speech-to-text.png?resize=1237%2C527&ssl=1)

A speech-to-text (STT) system is, as its name implies, a way of turning spoken words into text files for later use.

Speech-to-text technology is extremely useful. It can be used in many applications, such as automated transcription, writing books or other text with your own voice, or running complex analyses on the resulting text files with other tools.

In the past, speech-to-text technology was dominated by proprietary software and libraries; open source alternatives either did not exist or came with severe restrictions and no community around them. That is changing: today there are many open source speech-to-text tools and libraries you can start using right away.

Here I list five of them.

#### Project DeepSpeech

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 15 open source speech recognition][1]

This project is developed by Mozilla, the organization behind the Firefox browser. It is 100% free and open source software and, as the name implies, it uses the TensorFlow machine learning framework to do its job.

In other words, you can use it to train your own models for better results, or even to convert other languages. You can also easily integrate it into your own TensorFlow machine learning projects. Unfortunately, the project currently supports English only by default.

It also supports many programming languages, such as Python (3.6), which lets you get going in seconds:

```
pip3 install deepspeech
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
```

You can also install it via `npm`:

```
npm install deepspeech
```

- [Project homepage][2]

#### Kaldi

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 17 open source speech recognition][3]

Kaldi is an open source speech recognition software written in C++ and released under the Apache License. It runs on Windows, macOS, and Linux, and its development started in 2009.

Kaldi's main advantages over other speech recognition software are its extensibility and modularity: the community provides a wealth of third-party modules you can use for your tasks. Kaldi also supports deep neural networks and offers [excellent documentation][4] on its website.

While the code is written mainly in C++, it is wrapped by Bash and Python scripts, so if you just want basic speech-to-text conversion, you will find it easy to do through Python or Bash.

- [Project homepage][5]

#### Julius

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 19 open source speech recognition][6]

Julius is probably one of the oldest speech recognition software projects in existence. Its development started in 1991 at Kyoto University, and ownership was transferred to an independent project team in 2005.

Julius's main features include the ability to perform real-time STT, a small memory footprint (less than 64 MB for 20,000 words), the ability to output N-best word/word-graph results, the ability to run as a server unit, and more. The software was designed mainly for academic and research purposes. It is written in C and runs on Linux, Windows, macOS, and even Android (on smartphones).

It currently supports English and Japanese only. The software should be easy to install from your Linux distribution's repositories; just search for julius in your package manager. The latest version was [released][7] about a month and a half before this article was published.
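For instance, on a Debian/Ubuntu-based system the install is a one-liner; this is a minimal sketch, assuming the package is named `julius` as the article's "search your package manager" advice suggests:

```
# Install Julius from the distribution repositories (Debian/Ubuntu):
sudo apt-get update
sudo apt-get install julius

# Verify that the binary landed on the PATH:
which julius
```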
- [Project homepage][8]

#### Wav2Letter++

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 21 open source speech recognition][9]

If you are looking for something more modern, then this one is for you. Wav2Letter++ is an open source speech recognition software that was released by Facebook's AI research team about two months before this article. The code is released under a BSD license.

Facebook describes its library as "the fastest state-of-the-art speech recognition system". The ideas behind its design make it optimized for performance by default. Facebook's newer machine learning library [FlashLight][11] is used as the underlying core of Wav2Letter++.

Wav2Letter++ requires you to first build a model for the language in question in order to train the algorithm. No pre-trained model exists for any language, including English; it is just a machine-learning-driven speech-to-text tool. It is written in C++, hence the name Wav2Letter++.

- [Project homepage][12]

#### DeepSpeech2

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 23 open source speech recognition][13]

Researchers at the Chinese software giant Baidu are also working on their own speech-to-text engine, called DeepSpeech2. It is an end-to-end open source engine that uses the "PaddlePaddle" deep learning framework for English and Mandarin speech-to-text conversion. The code is released under a BSD license.

The engine can be trained on any model and in any language you want. The models are not released with the code; you will have to build them yourself, as with the other software. DeepSpeech2's source code is written in Python, so it should be easy to get familiar with if you have used Python before.

- [Project homepage][14]

### Conclusion

The speech recognition field is still dominated mainly by proprietary giants such as Google and IBM (which offer closed-source commercial services for this), but the open source alternatives are promising. These five open source speech recognition engines should get you started on building your applications, and they will keep developing over time. In a few years, we hope that open source becomes the norm for these technologies, as in other industries.

If you have any other suggestions or comments on the list, we would love to hear them below.

--------------------------------------------------------------------------------

via: https://fosspost.org/lists/open-source-speech-recognition-speech-to-text

Author: [Simon James][a]
Topic selection: [lujun9972][b]
Translator: [LuuMing](https://github.com/LuuMing)
Proofreader: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[#]: collector: (lujun9972)
[#]: translator: (ninifly)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11009-1.html)
[#]: subject: (Edge computing is in most industries’ future)
[#]: via: (https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)

Edge computing is in most industries’ future
======

> Nearly every industry can take advantage of edge computing to speed up digital transformation.

![](https://img.linux.net.cn/data/attachment/album/201906/23/231224cdl3kwedn0hw2lie.jpg)

The growth of edge computing is about to take a huge leap. [According to Gartner][2], companies today generate about 10% of their data outside a traditional data center or cloud; within the next six years, that share will rise to 75%.

That is largely driven by the need to process data coming from devices such as Internet of Things (IoT) sensors. Early adopters include:

* **Manufacturers**: Devices and sensors seem endemic to this industry, so it is no surprise that it needs faster ways to process the data they produce. A recent [Automation World][3] study found that 43% of manufacturers have deployed edge computing projects. The most common use cases include production/manufacturing data analysis and equipment data analytics.
* **Retailers**: Like most industries deeply affected by the need to digitize operations, retailers have had to innovate their customer experience. To that end, these organizations are "investing aggressively in compute power located closer to the buyer," says [Dave Johnson][4], executive vice president of the IT division at Schneider Electric. He cites examples such as augmented reality (AR) mirrors in fitting rooms, which offer different clothing options without the customer having to try the items on, and beacon-based heat maps that show in-store traffic.
* **Healthcare organizations**: With healthcare costs continually rising, this industry is ripe for innovation that improves productivity and cost efficiency. Management consulting firm [McKinsey has identified][5] at least 11 healthcare use cases that benefit patients, providers, or both. Two examples: tracking mobile medical devices, which improves nursing efficiency and helps optimize equipment use; and wearables that track users' exercise and offer wellness advice.

While these are clear use cases, the number of industries adopting edge computing will grow as the market expands.

### Getting the edge on digital transformation

The fast processing that edge computing provides fits squarely with the goals of digital transformation: improving efficiency and productivity, and speeding up time to market and the customer experience. Here are just a few of the potential applications, and the industries that edge computing will change:

**Agriculture**: Farmers and organizations already use drones to transmit field and climate conditions to irrigation equipment. Other applications may include monitoring and location tracking of workers, livestock, and equipment to improve productivity, efficiency, and costs.

**Energy**: There are many potential applications in this sector that could benefit consumers and providers alike. For example, smart meters help homeowners better manage their energy use while reducing grid operators' need for manual meter reading. Similarly, sensors on water pipes can detect leaks while providing real-time consumption data.

**Financial services**: Banks are adopting interactive ATMs that process data quickly to provide a better customer experience. At the organizational level, transaction data can be analyzed more quickly for fraudulent activity.

**Logistics**: As consumers demand faster delivery of goods and services, logistics companies will need to transform their mapping and routing capabilities to use real-time data, especially for last-mile planning and tracking. That could involve street-, package-, and car-based sensors transmitting data for processing.

All industries have the potential for transformation, thanks to edge computing. But it will depend on how they approach their computing infrastructure. Solutions for overcoming any IT obstacles can be found at [APC.com][6].

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html

Author: [Anne Taylor][a]
Topic selection: [lujun9972][b]
Translator: [ninifly](https://github.com/ninifly)
Proofreader: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-1019389496-100794424-large.jpg
[2]: https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/
[3]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
[4]: https://blog.schneider-electric.com/datacenter/2018/07/10/why-brick-and-mortar-retail-quickly-establishing-leadership-edge-computing/
[5]: https://www.mckinsey.com/industries/high-tech/our-insights/new-demand-new-markets-what-edge-computing-means-for-hardware-companies
[6]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
[#]: collector: (lujun9972)
[#]: translator: (yizhuoyan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11010-1.html)
[#]: subject: (How To Check Whether The Given Package Is Installed Or Not On Debian/Ubuntu System?)
[#]: via: (https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How To Check Whether a Given Package Is Installed on a Debian/Ubuntu System
======

![](https://img.linux.net.cn/data/attachment/album/201906/23/235541yl41p73z5jv78y8p.jpg)

We recently published an article about bulk package installation. While working on that, I also looked into how to find out which packages are installed on a system, and I found several methods. I am sharing them on our site in the hope that they help others as well.

There are many ways to check whether a package is installed. I found seven commands; pick whichever one you prefer.

They are as follows (a combined example follows this list):

* `apt-cache`: can be used to query the APT cache or package metadata.
* `apt`: a powerful tool for installing, downloading, removing, searching, and managing packages on Debian-based systems.
* `dpkg-query`: a tool for querying the dpkg database.
* `dpkg`: the package management tool for Debian-based systems.
* `which`: returns the full path of the executable that runs when you type a command in the terminal.
* `whereis`: can be used to search for the binary, source, and man page files of a given command.
* `locate`: faster than `find`, because it searches the `updatedb` database, while `find` searches the live file system.
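As a quick illustration of how these tools compose, here is a minimal sketch of a reusable shell function built on `dpkg-query` (the status string it matches is the standard dpkg "install ok installed" marker):

```
#!/bin/sh
# Return 0 if the given package is installed, non-zero otherwise.
pkg_installed() {
    dpkg-query -W -f='${Status}' "$1" 2>/dev/null | grep -q "ok installed"
}

# Usage:
if pkg_installed nano; then
    echo "nano is installed"
else
    echo "nano is NOT installed"
fi
```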
### Method 1: Using the apt-cache command

The `apt-cache` command queries the **APT cache** and **package metadata** from APT's internal database. It searches for and displays information about the given package, including whether it is installed, the package version, the source repository, and so on.

The example below clearly shows that the `nano` package is installed on the system, along with its version number.

```
# apt-cache policy nano
nano:
  Installed: 2.9.3-2
  Candidate: 2.9.3-2
  Version table:
 *** 2.9.3-2 500
        500 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
        100 /var/lib/dpkg/status
```

### Method 2: Using the apt command

`apt` is a powerful command-line tool for installing, downloading, removing, searching, and managing packages, and for querying information about them, providing convenient access to all the functionality of the `libapt-pkg` library. It also contains some rarely used command-line features related to package management.

```
# apt -qq list nano
nano/bionic,now 2.9.3-2 amd64 [installed]
```

### Method 3: Using the dpkg-query command

`dpkg-query` is a tool for showing information about packages listed in the `dpkg` database.

In the output below, the first column, `ii`, indicates that the queried package is installed.

```
# dpkg-query --list | grep -i nano
ii  nano     2.9.3-2    amd64    small, friendly text editor inspired by Pico
```

### Method 4: Using the dpkg command

`dpkg` (**d**ebian **p**ac**k**a**g**e) is a tool for installing, building, removing, and managing Debian packages, but unlike other package management systems, it cannot automatically download and install packages or their dependencies.

In the output below, the first column, `ii`, indicates that the queried package is installed.

```
# dpkg -l | grep -i nano
ii  nano     2.9.3-2    amd64    small, friendly text editor inspired by Pico
```

### Method 5: Using the which command

The `which` command returns the full path of the executable that runs when you type a command in the terminal. It is very useful when you want to create a desktop shortcut or symbolic link for an executable.

`which` only searches the directories in the current user's `PATH` environment variable, not those of all users. That means that when you are logged in to your own account, it will not search files or directories that are only in `root`'s path.

If you get output like the following for the given package or executable, it is installed; otherwise it is not.

```
# which nano
/bin/nano
```

### Method 6: Using the whereis command

The `whereis` command searches for the binary, source, and man page files of a given command.

If you get output like the following for the given package or executable, it is installed; otherwise it is not.

```
# whereis nano
nano: /bin/nano /usr/share/nano /usr/share/man/man1/nano.1.gz /usr/share/info/nano.info.gz
```

### Method 7: Using the locate command

The `locate` command is faster than `find`, because it searches the `updatedb` database, while `find` searches the live file system.

To find a given file, it uses its database rather than walking particular directory paths.

`locate` is not preinstalled on most systems; you need to install it manually.

The database `locate` uses is updated periodically by a scheduled job, although you can also update it manually.
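A minimal sketch of both steps on Debian/Ubuntu; the package name `mlocate` is an assumption (newer releases may ship `plocate` instead):

```
# Install the locate utility if it is missing, then refresh its
# database manually instead of waiting for the scheduled update:
sudo apt install mlocate
sudo updatedb
```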
If you get output like the following for the given package or executable, it is installed; otherwise it is not.

```
# locate --basename '\nano'
/usr/bin/nano
/usr/share/nano
/usr/share/doc/nano
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/

Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [yizhuoyan](https://github.com/yizhuoyan)
Proofreader: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11011-1.html)
[#]: subject: (Kubernetes is a dump truck: Here's why)
[#]: via: (https://opensource.com/article/19/6/kubernetes-dump-truck)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)

Kubernetes is a dump truck: Here's why
======

> Dump trucks are an elegant solution to a wide variety of essential business problems.

![](https://img.linux.net.cn/data/attachment/album/201906/24/012846v737bts00uwk3qd7.jpg)

This article was written on the eve of Kubernetes' birthday (Friday, June 7).

Dump trucks are elegant. Seriously, hear me out. They solve a wide variety of technical problems in an elegant way. They can move dirt, gravel, rocks, coal, construction materials, or obstacles off a road. They can even pull trailers with other heavy equipment on them. You can load a dump truck with five tons of dirt and drive it across the country. For a computer nerd like me, that is elegance.

But they are not easy to use. Driving a dump truck requires a special license. They are also not easy to equip or maintain, and there are plenty of decisions to make when buying one and keeping it serviced. Yet they move that dirt elegantly.

You know what is not elegant at moving dirt? A shiny new compact sedan. They are available everywhere, easier to drive, and easier to maintain. But they are terrible at hauling dirt: it would take 200 trips to move five tons of it, and afterwards nobody would want the car anymore.

Sure, you could buy a dump truck off the lot instead of building one yourself. But I am different: I am a nerd, and I love building things. But...

If you own a construction company, you are not going to build your own dump truck, and you are certainly not going to maintain a supply chain for rebuilding dump trucks (that is a big supply chain). But you could learn to drive one.

OK, my analogy is crude, but easy to understand. Ease of use is relative. Ease of maintenance is relative. Ease of assembly is relative. It really depends on what you are trying to do. [Kubernetes][2] is no exception.

Building Kubernetes once is not too hard. Configuring Kubernetes? Well, that is a little harder. What did you make of KubeCon, and how many new projects were announced there? Which ones are "real", and which ones should you learn? How deeply do you understand Harbor, TiKV, NATD, Vitess, and Open Policy Agent? Not to mention Envoy, eBPF, and a pile of underlying technologies in Linux? It feels a lot like building dump trucks at the dawn of the industrial revolution in 1904, working out which screws, bolts, metal, and pistons to use. (Any steampunks in here?)

Building and configuring Kubernetes, like a dump truck, is a technical problem, and it is probably not what you should be doing if you work in financial services, retail, biological research, food services, and so forth. But learning how to drive Kubernetes is definitely something you should be learning.

Kubernetes, like a dump truck, is elegant because of the wide variety of technical problems it can solve (and the ecosystem it pulls along with it). So I will leave you with a quote that one of my computer science professors told us in my first year of college: "One day, you will look at a piece of code and say to yourself, 'Now that is freaking elegant!'"

Kubernetes is elegant.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/kubernetes-dump-truck

Author: [Scott McCarty][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dump_truck_car_container_kubernetes.jpg?itok=4BdmyVGd (Dump truck with kids standing in the foreground)
[2]: https://kubernetes.io/
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11008-1.html)
[#]: subject: (Exploring /run on Linux)
[#]: via: (https://www.networkworld.com/article/3403023/exploring-run-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Exploring /run on Linux
======

> There's been a small but significant change in how Linux systems work with respect to runtime data.

![](https://img.linux.net.cn/data/attachment/album/201906/23/092816aqczi984w30j8k12.jpg)

If you haven't been paying close attention, you might not have noticed a small but significant change in how Linux systems work with respect to runtime data. A re-arrangement of how and where that data is accessible in the file system started taking hold about eight years ago. And while the change might not have been big enough of a splash to wet your socks, it provides some additional consistency in the Linux file system and is worth some exploration.

To get started, cd your way over to /run. If you use df to check it out, you'll see output like this:

```
$ df -k .
Filesystem     1K-blocks  Used  Available  Use%  Mounted on
tmpfs             609984  2604     607380    1%  /run
```

Identified as a "tmpfs" (temporary file system), we know that the files and directories in /run are not stored on disk, but only in volatile memory. They represent data kept in memory (or disk-based swap) that takes on the appearance of a mounted file system, which makes it more accessible and easier to manage.
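You can also look at the mount itself; a quick check (the mount options shown are typical, and vary by distribution):

```
$ mount | grep '^tmpfs on /run '
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
```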
/run is home to a wide assortment of data. For example, if you take a look at /run/user, you will notice a group of directories with numeric names.

```
$ ls /run/user
1000  1002  121
```

A long file listing will clarify the significance of these numbers.

```
$ ls -l
...
drwx------ 5 dory dory 120 Jun 16 16:14 1002
drwx------ 8 gdm  gdm  220 Jun 14 12:18 121
```

This allows us to see that each directory is related to a user who is currently logged in, or to the display manager, gdm. The numbers represent their UIDs. The content of each of these directories is files that are used by running processes.

The /run/user files represent only a very small portion of what you'll find in /run. There are lots of other files as well. A handful contain the process IDs of various system processes.

```
$ ls *.pid
acpid.pid  atopacctd.pid  crond.pid  rsyslogd.pid
atd.pid    atop.pid       gdm3.pid   sshd.pid
```

As shown below, the sshd.pid file listed above contains the process ID of the ssh daemon (sshd).

```
$ cat sshd.pid
...
$ ps -ef | grep sshd
...
dory 18232 18109  0 16:14 ?      00:00:00 sshd: dory@pts/1
shs  19276 10923  0 16:50 pts/0  00:00:00 grep --color=auto sshd
```

Some of the subdirectories within /run can only be accessed with root authority, such as /run/sudo. Running as root, for example, we can see some files related to real or attempted sudo usage:

```
/run/sudo/ts# ls -l
total 8
...
-rw------- 1 root shs 168 Jun 17 08:33 shs
```

In keeping with the shift to using /run, some of the old locations for runtime data are now symbolic links: /var/run is now a pointer to /run, and /var/lock a pointer to /run/lock, allowing old references to work as expected.

```
$ ls -l /var
...
drwxrwxrwt 8 root root 4096 Jun 17 00:00 tmp
drwxr-xr-x 3 root root 4096 Jan 19 12:14 www
```

While minor as far as technical changes go, the transition to using /run simply allows for a better organization of runtime data in the Linux file system.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403023/exploring-run-on-linux.html

Author: [Sandra Henry-Stocker][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11002-1.html)
[#]: subject: (Get the latest Ansible 2.8 in Fedora)
[#]: via: (https://fedoramagazine.org/get-the-latest-ansible-2-8-in-fedora/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)

Get the latest Ansible 2.8 in Fedora
======

![][1]

Ansible is one of the most popular automation engines in the world. It lets you automate almost anything, from the setup of a local system to huge swaths of platforms and applications. It is cross-platform, so you can use it with all sorts of operating systems. Read on for how to get the latest Ansible in Fedora, some of its changes and improvements, and how to put it to use.

### Releases and features

Ansible 2.8 was recently released with many fixes, features, and enhancements, and it was available in Fedora 29 and 30, as well as EPEL, just a few days later. The follow-up version 2.8.1 came out two weeks ago, and again the new version was available in Fedora within days.

Installation from the official repositories is very easy [with sudo][2]:

```
$ sudo dnf -y install ansible
```

The 2.8 release has a long list of changes, which you can read about in the [porting guide for 2.8][3]. It includes some goodies, such as *Python interpreter discovery*: Ansible 2.8 now tries to figure out which Python is preferred on the platform it runs on, and falls back to a list of alternatives if that fails. You can still set the Python interpreter explicitly with the `ansible_python_interpreter` variable, however.
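For instance, you can pin the interpreter for a single ad-hoc run; a minimal sketch, where the `all` host pattern and the `hosts` inventory file are placeholders:

```
# Force a specific Python interpreter for one ad-hoc run:
ansible all -i hosts -m ping -e ansible_python_interpreter=/usr/bin/python3
```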
Another change makes Ansible more consistent across platforms. Since `sudo` is specific to UNIX/Linux and other platforms don't have it, `become` is now used in more places, including in the command-line switches. For example, `--ask-sudo-pass` has become `--ask-become-pass`, and the prompt is now `BECOME password:`.
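In practice, this just means adjusting the flag on your playbook runs; `site.yml` below is a placeholder playbook name:

```
# Prompt for the privilege-escalation password with the new switch:
ansible-playbook site.yml --ask-become-pass
```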
There are many other features in the 2.8 and 2.8.1 releases. For all the details, check out the [official changelog on GitHub][4].

We have also covered this topic on Fedora Magazine before:

- [Using Ansible to set up a workstation][5]

Give Ansible a try and tell us what you think. An important part of this is keeping your Fedora systems up to date. Happy automating!

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/get-the-latest-ansible-2-8-in-fedora/

Author: [Paul W. Frields][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11007-1.html)
[#]: subject: (Bash Script to Monitor Memory Usage on Linux)
[#]: via: (https://www.2daygeek.com/linux-bash-script-to-monitor-memory-utilization-usage-and-send-email/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

Bash Script to Monitor Memory Usage on Linux
======

![](https://img.linux.net.cn/data/attachment/album/201906/23/085446setqkshf5zk0tn2x.jpg)

There are many open source monitoring tools on the market for monitoring the performance of Linux systems. They can send an email alert when the system reaches a specified threshold, and they can monitor everything: CPU utilization, memory utilization, swap utilization, disk space utilization, and more.

If you only have a few systems to watch, writing a small shell script can make the task very simple.

In this tutorial we include two shell scripts that monitor memory utilization on a Linux system. Each sends an email to a specified address when the system reaches a given threshold.

### Method 1: A Linux Bash script to monitor memory utilization and send an email

Use the following script if you just want the current memory utilization percentage mailed to you when the system crosses the threshold.

It is a very simple and straightforward one-liner; in most cases, this is the method I prefer.

It triggers an email when your system reaches 80% memory utilization.

```
*/5 * * * * /usr/bin/free | awk '/Mem/{printf("RAM Usage: %.2f%\n"), $3/$2*100}' | awk '{print $3}' | awk '{ if($1 > 80) print $0;}' | mail -s "High Memory Alert" 2daygeek@gmail.com
```

**Note:** Replace our email address with your own. You can also change the memory utilization threshold to suit your needs.

**Output:** You will receive an email alert similar to the one below.

```
High Memory Alert: 80.40%
```

We have added many useful shell scripts in the past. If you want to check them out, go to the link below.

* [How to automate day-to-day activities with shell scripts?][1]

### Method 2: A Linux Bash script to monitor memory utilization and send an email

Use the following script if you want more information about memory utilization in the alert mail. It includes the top memory-consuming processes, with details based on the `top` and `ps` commands.

This gives you an instant picture of how your system is behaving.

It triggers an email when your system reaches 80% memory utilization.

**Note:** Replace our email address with your own. You can also change the memory utilization threshold to suit your needs.

```
# vi /opt/scripts/memory-alert.sh

#!/bin/sh
ramusage=$(free | awk '/Mem/{printf("%.2f", $3/$2*100)}')

# Do the threshold test in awk: inside [ ], ">" is a redirection rather
# than a numeric comparison, and the value is a decimal anyway.
# The threshold is 80 (%), matching the text above.
if awk -v u="$ramusage" 'BEGIN{exit !(u > 80)}'; then

SUBJECT="ATTENTION: Memory Utilization is High on $(hostname) at $(date)"
MESSAGE="/tmp/Mail.out"
TO="2daygeek@gmail.com"
  echo "Memory Current Usage is: $ramusage%" >> $MESSAGE
  echo "" >> $MESSAGE
  echo "------------------------------------------------------------------" >> $MESSAGE
  echo "Top Memory Consuming Process Using top command" >> $MESSAGE
  echo "------------------------------------------------------------------" >> $MESSAGE
  echo "$(top -b -o +%MEM | head -n 20)" >> $MESSAGE
  echo "" >> $MESSAGE
  echo "------------------------------------------------------------------" >> $MESSAGE
  echo "Top Memory Consuming Process Using ps command" >> $MESSAGE
  echo "------------------------------------------------------------------" >> $MESSAGE
  echo "$(ps -eo pid,ppid,%mem,%cpu,cmd --sort=-%mem | head)" >> $MESSAGE
  mail -s "$SUBJECT" "$TO" < $MESSAGE
  rm /tmp/Mail.out
fi
```

Finally, add a [cron job][2] to automate this. It will run every 5 minutes.

```
# crontab -e
*/5 * * * * /bin/bash /opt/scripts/memory-alert.sh
```
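Before relying on cron, it is worth running the check once by hand as a smoke test; no mail is sent unless the threshold is exceeded:

```
# Run the check once manually:
sudo sh /opt/scripts/memory-alert.sh
```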
**Note:** Since the script is scheduled to run every 5 minutes, you will get the email alert within (at most) 5 minutes of the event, depending on the timing.

Say your system reaches the given limit at, for example, 80.25%: you will then get the email alert within 5 minutes. Hopefully that is clear now.

**Output:** You will receive an email alert similar to the one below.

```
Memory Current Usage is: 80.71%

+------------------------------------------------------------------+
Top Memory Consuming Process Using top command
+------------------------------------------------------------------+
top - 12:00:58 up 5 days, 9:03, 1 user, load average: 1.82, 2.60, 2.83
Tasks: 314 total, 1 running, 313 sleeping, 0 stopped, 0 zombie
%Cpu0 : 8.3 us, 12.5 sy, 0.0 ni, 75.0 id, 0.0 wa, 0.0 hi, 4.2 si, 0.0 st
%Cpu1 : 13.6 us, 4.5 sy, 0.0 ni, 81.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 21.7 us, 21.7 sy, 0.0 ni, 56.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 13.6 us, 9.1 sy, 0.0 ni, 77.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4 : 17.4 us, 8.7 sy, 0.0 ni, 73.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5 : 20.8 us, 4.2 sy, 0.0 ni, 70.8 id, 0.0 wa, 0.0 hi, 4.2 si, 0.0 st
%Cpu6 : 9.1 us, 0.0 sy, 0.0 ni, 90.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7 : 17.4 us, 4.3 sy, 0.0 ni, 78.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 16248588 total, 5015964 free, 6453404 used, 4779220 buff/cache
KiB Swap: 17873388 total, 16928620 free, 944768 used. 6423008 avail Mem

  PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17163 daygeek 20 2033204 487736 282888 S 10.0 3.0 8:26.07 /usr/lib/firefox/firefox -contentproc -childID 15 -isForBrowser -prefsLen 9408 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /u+
 1121 daygeek 20 4191388 419180 100552 S 5.0 2.6 126:02.84 /usr/bin/gnome-shell
 1902 daygeek 20 1701644 327216 82536 S 20.0 2.0 153:27.92 /opt/google/chrome/chrome
 2969 daygeek 20 1051116 324656 92388 S 15.0 2.0 149:38.09 /opt/google/chrome/chrome --type=renderer --field-trial-handle=10346122902703263820,11905758137655502112,131072 --service-pipe-token=1339861+
 1068 daygeek 20 1104856 309552 278072 S 5.0 1.9 143:47.42 /usr/lib/Xorg vt2 -displayfd 3 -auth /run/user/1000/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -verbose 3
27246 daygeek 20 907344 265600 108276 S 30.0 1.6 10:42.80 /opt/google/chrome/chrome --type=renderer --field-trial-handle=10346122902703263820,11905758137655502112,131072 --service-pipe-token=8587368+

+------------------------------------------------------------------+
Top Memory Consuming Process Using ps command
+------------------------------------------------------------------+
  PID  PPID %MEM %CPU CMD
 8223     1  6.4  6.8 /usr/lib/firefox/firefox --new-window
13948  1121  6.3  1.2 /usr/bin/../lib/notepadqq/notepadqq-bin
 8671  8223  4.4  7.5 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 6999 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
17163  8223  3.0  0.6 /usr/lib/firefox/firefox -contentproc -childID 15 -isForBrowser -prefsLen 9408 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
 1121  1078  2.5  1.6 /usr/bin/gnome-shell
17937  8223  2.5  0.8 /usr/lib/firefox/firefox -contentproc -childID 16 -isForBrowser -prefsLen 9410 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
 8499  8223  2.2  0.6 /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -prefsLen 6635 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
 8306  8223  2.2  0.8 /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -prefsLen 1 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
 9198  8223  2.1  0.6 /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -prefsLen 8604 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/linux-bash-script-to-monitor-memory-utilization-usage-and-send-email/

Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/shell-script/
[2]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (BitTorrent Client Deluge 2.0 Released: Here’s What’s New)
[#]: via: (https://itsfoss.com/deluge-2-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

BitTorrent Client Deluge 2.0 Released: Here’s What’s New
======

You probably already know that [Deluge][1] is one of the [best torrent clients available for Linux users][2]. However, the last stable release was almost two years back.

Even though it was in active development, a major stable release wasn't there – until recently. The latest version as we write this happens to be 2.0.2. So, if you haven't downloaded the latest stable version yet – do try it out.

In either case, if you're curious, let us talk about what's new.

![Deluge][3]

### Major improvements in Deluge 2.0

The new release introduces multi-user support – a much-needed addition.

In addition to that, there have been several performance improvements to handle more torrents with faster loading times.

Also, with version 2.0, Deluge moved to Python 3, with only minimal support left for Python 2.7. Even the user interface migrated from the GTK UI to GTK3.

As per the release notes, there are several more significant additions and improvements, which include:

* Multi-user support.
* Performance updates to handle thousands of torrents with faster loading times.
* A new Console UI which emulates the GTK/Web UIs.
* GTK UI migrated to GTK3 with UI improvements and additions.
* Magnet pre-fetching to allow file selection when adding a torrent.
* Full support for the libtorrent 1.2 release.
* Language switching support.
* Improved documentation hosted on ReadTheDocs.
* An AutoAdd plugin that replaces the built-in functionality.

### How to install or upgrade to Deluge 2.0

![][4]

You should follow the official [installation guide][5] (using the PPA or PyPI) for any Linux distro.
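If you go the PyPI route, the sketch below is roughly what the guide describes; the `deluge` package name on PyPI is an assumption here, so check the guide for your distro's PPA route and any build dependencies:

```
# Install Deluge 2.0 from PyPI into the current user's environment:
pip install --user deluge
```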
However, if you are upgrading, you should go through the note mentioned in the release announcement:

"Deluge 2.0 is not compatible with Deluge 1.x clients or daemons, so these will require upgrading too. Also, third-party Python scripts may not be compatible if they directly connect to the Deluge client, and will need migrating."

So, they insist that you always make a backup of your [config][6] before a major version upgrade, to guard against data loss.
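A minimal sketch of such a backup, assuming the default config location `~/.config/deluge` described in the Deluge FAQ:

```
# Copy the whole Deluge config directory aside before upgrading:
cp -a ~/.config/deluge ~/.config/deluge-backup-$(date +%F)
```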
And, if you are the author of a plugin, you will need to upgrade it to make it compatible with the new release.

Direct-download packages are not yet available for Windows and Mac OS; however, the release note mentions that they are being worked on.

As an alternative, you can install them manually by following the [installation guide][5] in the updated official documentation.

**Wrapping Up**

What do you think about the latest stable release? Do you use Deluge as your BitTorrent client, or do you find something else to be a better alternative?

Let us know your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/deluge-2-release/

Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://dev.deluge-torrent.org/
[2]: https://itsfoss.com/best-torrent-ubuntu/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/deluge.jpg?fit=800%2C410&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/Deluge-2-release.png?resize=800%2C450&ssl=1
[5]: https://deluge.readthedocs.io/en/latest/intro/01-install.html
[6]: https://dev.deluge-torrent.org/wiki/Faq#WheredoesDelugestoreitssettingsconfig
[7]: https://itsfoss.com/snap-store/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Codethink open sources part of onboarding process)
[#]: via: (https://opensource.com/article/19/6/codethink-onboarding-process)
[#]: author: (Laurence Urhegyi https://opensource.com/users/laurence-urhegyi)

Codethink open sources part of onboarding process
======

In other words, how to Git going in FOSS.

![Teacher or learner?][1]

Here at [Codethink][2], we've recently focused our energy on enhancing the onboarding process we use for all new starters at the company. As we grow steadily in size, it's important that we have a well-defined approach to both welcoming new employees into the company and introducing them to the organization's culture.

As part of this overall onboarding effort, we've created [_How to Git going in FOSS_][3]: an introductory guide to the world of free and open source software (FOSS) and some of the common technologies, practices, and principles associated with it.

This guide was initially aimed at work-experience students and summer interns. However, the document is equally applicable to anyone who is new to free and open source software, regardless of their prior experience with software or IT in general. _How to Git going in FOSS_ is hosted on GitLab and consists of several repositories, each designed as a self-guided walk-through.

Our guide begins with a general introduction to FOSS, including explanations of the history of GNU/Linux, how to use [Git][4] (as well as Git hosting services such as GitLab), and how to use a text editor. The document then moves on to exercises that show the reader how to apply some of the things they've just learned.

_How to Git going in FOSS_ is fully public and available for anyone to try. If you're new to FOSS, or know someone who is, please have a read-through and see what you think. If you have any feedback, feel free to raise an issue on GitLab. And, of course, we also welcome contributions; we're keen to keep improving the guide however possible. One future improvement we plan to make is an additional exercise that is more complex than the existing two, such as one introducing the reader to [Continuous Integration][5].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/codethink-onboarding-process

Author: [Laurence Urhegyi][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/laurence-urhegyi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G (Teacher or learner?)
[2]: https://www.codethink.co.uk/about.html
[3]: https://gitlab.com/ct-starter-guide
[4]: https://git-scm.com
[5]: https://en.wikipedia.org/wiki/Continuous_integration
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloudflare's random number generator, robotics data visualization, npm token scanning, and more news)
[#]: via: (https://opensource.com/article/19/6/news-june-22)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)

Cloudflare's random number generator, robotics data visualization, npm token scanning, and more news
======

Catch up on the biggest open source headlines from the past two weeks.

![Weekly news roundup with TV][1]

In this edition of our open source news roundup, we take a look at Cloudflare's open source random number generator, more open source robotics data, new npm functionality, and more!

### Cloudflare announces open source random number generator project

Is there such a thing as a truly random number? Internet security and services provider Cloudflare thinks so. To prove it, the company has formed [The League of Entropy][2], an open source project to create a generator for random numbers.

The League consists of Cloudflare and "five other organisations — predominantly universities and security companies." They share random numbers using an open source tool called [Drand][3] (short for Distributed Randomness Beacon Daemon). The numbers are then "composited into one random number" on the basis that "several random numbers are more random than one random number." While the League's random number generator isn't intended "for any kind of password or cryptographic seed generation," Cloudflare's CEO Matthew Prince points out that if "you need a way of having a known random source, this is a really valuable tool."

### Cruise open sources robotics data analysis tool

Projects involved in creating self-driving vehicles generate petabytes of data, and with amounts of data that large comes the challenge of analyzing it quickly and effectively. To make the task easier, General Motors subsidiary Cruise has made its Webviz data visualization tool "[freely available to developers][4] in need of a modular robotics analysis solution."

Webviz "takes as input any bag file (the message format used by the popular Robot Operating System) and outputs charts and graphs." It "contains a collection of general panels (which visualize data) applicable to most robotics developers," said Esther Weon, a software engineer at Cruise. The company also plans to "release a public API that'll allow developers to build custom panels themselves."

The code for Webviz is [available on GitHub][5], where you can download or contribute to the project.

### npm provides more security

The team behind npm, the site providing JavaScript package hosting, has started a new collaboration with GitHub to automatically scan packages for exposed tokens that could give hackers access that doesn't belong to them. The project includes a handy automatic revocation of leaked credentials if they are still valid, which could drastically reduce vulnerabilities in the JavaScript community. For instructions on how to participate, see the [original article][6].

Note that this news was found via the [Changelog news][7].

### Better end-of-life tracking via open source

A new project, [endoflife.date][8], aims to overcome the complexity of end-of-life (EOL) announcements for software. It's part tracker, part public statement of what good documentation looks like for software. As the README states: "The reason this site exists is because this information is very often hidden away. If you're releasing something on a regular basis:

1. List only supported releases.
2. Give EoL dates/policy if possible.
3. Hide unsupported releases behind a few extra clicks.
4. Mention security/active release difference if needed."

Check out the [source code][9] for more information.

### In other news

* [Medicine needs to embrace open source][10]
* [Using geospatial data to create safer roads][11]
* [Embracing open source could be a big competitive advantage for businesses][12]

_Thanks, as always, to Opensource.com staff members and moderators for their help this week._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/news-june-22

Author: [Scott Nesbitt][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
[2]: https://thenextweb.com/dd/2019/06/17/cloudflares-new-open-source-project-helps-anyone-obtain-truly-random-numbers/
[3]: https://github.com/dedis/drand
[4]: https://venturebeat.com/2019/06/18/cruise-open-sources-webview-a-tool-for-robotics-data-analysis/
[5]: https://github.com/cruise-automation/webviz
[6]: https://blog.npmjs.org/post/185680936500/protecting-package-publishers-npm-token-security
[7]: https://changelog.com/news/npm-token-scanning-extending-to-github-NAoe
[8]: https://endoflife.date/
[9]: https://github.com/captn3m0/endoflife.date
[10]: https://www.zdnet.com/article/medicine-needs-to-embrace-open-source/
[11]: https://itbrief.co.nz/story/using-geospatial-data-to-create-safer-roads
[12]: https://www.fastcompany.com/90364152/embracing-open-source-could-be-a-big-competitive-advantage-for-businesses
@ -0,0 +1,82 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (17 predictions about 5G networks and devices)
|
||||
[#]: via: (https://www.networkworld.com/article/3403358/17-predictions-about-5g-networks-and-devices.html)
|
||||
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
|
||||
|
||||
17 predictions about 5G networks and devices
|
||||
======
|
||||
Not surprisingly, the new Ericsson Mobility Report is bullish on the potential of 5G technology. Here’s a quick look at the most important numbers.
|
||||
![Vertigo3D / Getty Images][1]
|
||||
|
||||
_“As market after market switches on 5G, we are at a truly momentous point in time. No previous generation of mobile technology has had the potential to drive economic growth to the extent that 5G promises. It goes beyond connecting people to fully realizing the Internet of Things (IoT) and the Fourth Industrial Revolution.”_ —The opening paragraph of the [June 2019 Ericsson Mobility Report][2]
|
||||
|
||||
Almost every significant technology advancement now goes through what [Gartner calls the “hype cycle.”][3] These days, Everyone expects new technologies to be met with gushing optimism and dreamy visions of how it’s going to change the world in the blink of an eye. After a while, we all come to expect the vendors and the press to go overboard with excitement, at least until reality and disappointment set in when things don’t pan out exactly as expected.
|
||||
|
||||
**[ Also read:[The time of 5G is almost here][4] ]**
|
||||
|
||||
Even with all that in mind, though, Ericsson’s whole-hearted embrace of 5G in its Internet Mobility Report is impressive. The optimism is backed up by lots of numbers, but they can be hard to tease out of the 36-document. So, let’s recap some of the most important top-line predictions (with my comments at the end).
|
||||
|
||||

### Worldwide 5G growth projections

1. “More than 10 million 5G subscriptions are projected worldwide by the end of 2019.”
2. “[We] now expect there to be 1.9 billion 5G subscriptions for enhanced mobile broadband by the end of 2024. This will account for over 20 percent of all mobile subscriptions at that time. The peak of LTE subscriptions is projected for 2022, at around 5.3 billion subscriptions, with the number declining slowly thereafter.”
3. “In 2024, 5G networks will carry 35 percent of mobile data traffic globally.”
4. “5G can cover up to 65 percent of the world’s population in 2024.”
5. “NB-IoT and Cat-M technologies will account for close to 45 percent of cellular IoT connections in 2024.”
6. “By the end of 2024, nearly 35 percent of cellular IoT connections will be Broadband IoT, with 4G connecting the majority.” But 5G connections will support more advanced use cases.
7. “Despite challenging 5G timelines, device suppliers are expected to be ready with different band and architecture support in a range of devices during 2019.”
8. “Spectrum sharing … chipsets are currently in development and are anticipated to be in 5G commercial devices in late 2019.”
9. “[VoLTE][5] is the foundation for enabling voice and communication services on 5G devices. Subscriptions are expected to reach 2.1 billion by the end of 2019. … The number of VoLTE subscriptions is projected to reach 5.9 billion by the end of 2024, accounting for more than 85 percent of combined LTE and 5G subscriptions.”

![][6]

### Regional 5G projections

1. “In North America, … service providers have already launched commercial 5G services, both for fixed wireless access and mobile. … By the end of 2024, we anticipate close to 270 million 5G subscriptions in the region, accounting for more than 60 percent of mobile subscriptions.”
2. “In Western Europe … the momentum for 5G in the region was highlighted by the first commercial launch in April. By the end of 2024, 5G is expected to account for around 40 percent of mobile subscriptions.”
3. “In Central and Eastern Europe, … the first 5G subscriptions are expected in 2019, and will make up 15 percent of subscriptions in 2024.”
4. “In North East Asia, … the region’s 5G subscription penetration is projected to reach 47 percent [by the end of 2024].”
5. “[In India,] 5G subscriptions are expected to become available in 2022 and will represent 6 percent of mobile subscriptions at the end of 2024.”
6. “In the Middle East and North Africa, we anticipate commercial 5G deployments with leading communications service providers during 2019, and significant volumes in 2021. … Around 60 million 5G subscriptions are forecast for the end of 2024, representing 3 percent of total mobile subscriptions.”
7. “Initial 5G commercial devices are expected in the [South East Asia and Oceania] region during the first half of 2019. By the end of 2024, it is anticipated that almost 12 percent of subscriptions in the region will be for 5G.”
8. “In Latin America … the first 5G deployments will be possible in the 3.5GHz band during 2019. Argentina, Brazil, Chile, Colombia, and Mexico are anticipated to be the first countries in the region to deploy 5G, with increased subscription uptake forecast from 2020. By the end of 2024, 5G is set to make up 7 percent of mobile subscriptions.”

### Is 5G really so inevitable?

Considered individually, these predictions all seem perfectly reasonable. Heck, 10 million 5G subscriptions is only a drop in the global bucket. And rumors are already flying that Apple’s next round of iPhones will include 5G capability. Also, 2024 is still five years in the future, so why wouldn’t the faster connections drive impressive traffic stats? Similarly, North America and North East Asia will experience the fastest 5G penetration.

But when you look at them all together, these numbers project a sense of 5G inevitability that could well be premature. It will take a _lot_ of spending, by a lot of different parties—carriers, chip makers, equipment vendors, phone manufacturers, and consumers—to make this kind of growth a reality.

I’m not saying 5G won’t take over the world. I’m just saying that when so many things have to happen in a relatively short time, there are a lot of opportunities for the train to jump the tracks. Don’t be surprised if it takes longer than expected for 5G to turn into the worldwide default standard Ericsson—and everyone else—seems to think it will inevitably become.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403358/17-predictions-about-5g-networks-and-devices.html

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/5g_wireless_technology_network_connections_by_credit-vertigo3d_gettyimages-1043302218_3x2-100787550-large.jpg
[2]: https://www.ericsson.com/assets/local/mobility-report/documents/2019/ericsson-mobility-report-june-2019.pdf
[3]: https://www.gartner.com/en/research/methodologies/gartner-hype-cycle
[4]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[5]: https://www.gsma.com/futurenetworks/technology/volte/
[6]: https://images.idgesg.net/images/article/2019/06/ericsson-mobility-report-june-2019-graph-100799481-large.jpg
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

@ -0,0 +1,144 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why your workplace arguments aren't as effective as you'd like)
[#]: via: (https://opensource.com/open-organization/19/6/barriers-productive-arguments)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland/users/ron-mcfarland)

Why your workplace arguments aren't as effective as you'd like
======
Open organizations rely on open conversations. These common barriers to productive argument often get in the way.
![Arrows pointing different directions][1]

Transparent, frank, and often contentious arguments are part of life in an open organization. But how can we be sure those conversations are _productive_—not _destructive_?

This is the second installment of a two-part series on how to argue and actually achieve something. In the [first article][2], I mentioned what arguments are (and are not), according to author Sinnott-Armstrong in his book _Think Again: How to Reason and Argue._ I also offered some suggestions for making arguments as productive as possible.

In this article, I'll examine three barriers to productive arguments that Sinnott-Armstrong elaborates in his book: incivility, polarization, and language issues. Finally, I'll explain his suggestions for addressing those barriers.

### Incivility

"Incivility" has become a social concern in recent years. Consider this: As a tactic in arguments, incivility _can_ have an effect in certain situations—and that's why it's a common strategy. Sinnott-Armstrong notes that incivility:

* **Attracts attention:** Incivility draws people's attention in one direction, sometimes to misdirect attention from or outright obscure other issues. It redirects people's attention toward shocking statements. Incivility, exaggeration, and extremism can increase the size of an audience.
* **Energizes:** Sinnott-Armstrong writes that seeing someone being uncivil on a topic of interest can generate energy from a state of powerlessness.
* **Stimulates memory:** Forgetting shocking statements is difficult; they stick in our memory more easily than statements that are less surprising to us.
* **Excites the powerless:** The groups most likely to believe and invest in someone being uncivil are those that feel they're powerless and being treated unfairly.

Unfortunately, incivility as a tactic in arguments has its costs. One such cost is polarization.

### Polarization

Sinnott-Armstrong writes about six forms of polarization:

* **Distance:** If two people's or groups' views are far apart on some relevant scale, with significant disagreements and little common ground, then they're polarized.
* **Differences:** If two people or groups have fewer values and beliefs _in common_ than they _don't have in common_, then they're polarized.
* **Antagonism:** Groups are more polarized the more they feel hatred, disdain, fear, or other negative emotions toward other people or groups.
* **Incivility:** Groups tend to be more polarized when they talk more negatively about people of the other groups.
* **Rigidity:** Groups tend to be more polarized when they treat their values as indisputable and will not compromise.
* **Gridlock:** Groups tend to be more polarized when they're unable to cooperate and work together toward common goals.

And I'll add one more form of polarization to Sinnott-Armstrong's list:

* **Non-disclosure:** Groups tend to be more polarized when one or both of the groups refuses to share valid, verifiable information—or when they distract each other with useless or irrelevant information. One of the ways people polarize is by not talking to each other and withholding information. Similarly, they talk about subjects that distract from the issue at hand. Some issues are difficult to talk about, but doing so allows solutions to be explored.

### Language issues

Language issues can be argument-stoppers, Sinnott-Armstrong says. In particular, he outlines the following language-related barriers to productive argument.

* **Guarding:** Using words like "all" can make a statement unbelievable; words like "sometimes" can make a statement too vague.
* **Assuring:** Simply stating "trust me, I know what I'm talking about," without offering evidence that this is the case, can impede arguments.
* **Evaluating:** Offering an evaluation of something—like saying "It is good"—without any supporting reasoning.
* **Discounting:** This involves anticipating what the other person will say and attempting to weaken it as much as possible by framing an argument in a negative way. (Contrast these two sentences, for example: "Ramona is smart but boring" and "Ramona is boring but smart." The difference is subtle, but you'd probably want to spend less time with Ramona if you heard the first statement about her than if you heard the second.)

Identifying discussion-stoppers like these can help you avoid shutting down a discussion that would otherwise achieve beneficial outcomes. In addition, Sinnott-Armstrong specifically draws readers' attention to two other language problems that can kill productive debates: vagueness and ambiguity.

* **Vagueness:** This occurs when a word or sentence is not precise enough, leaving many ways to interpret its true meaning and intent, which leads to confusion. Consider the sentence "It is big." "It" must be defined if it's not already obvious to everyone in the conversation. And a word like "big" must be clarified through comparison to something that everyone has agreed upon.
* **Ambiguity:** This occurs when a sentence could have two distinct meanings. For example: "Police killed man with axe." Who was holding the axe, the man or the police? "My neighbor had a friend for dinner." Did your neighbor invite a friend to share a meal—or did she eat her friend?

### Overcoming barriers

To help readers avoid these common roadblocks to productive arguments, Sinnott-Armstrong recommends a simple, four-step process for evaluating another person's argument.

1. **Observation:** First, observe a stated opinion and its related evidence to determine the precise nature of the claim. This might require you to ask some questions for clarification (you'll remember I employed this technique when arguing with my belligerent uncle, which I described [in the first article of this series][2]).
2. **Hypothesis:** Develop a hypothesis about the argument. In this case, the hypothesis should be an inference based on generally acceptable standards (for more on the structure of arguments themselves, also see [the first part of this series][2]).
3. **Comparison:** Compare that hypothesis with others and evaluate which is more accurate. More important issues will require you to conduct more comparisons. In other cases, premises are so obvious that no further explanation is required.
4. **Conclusion:** From the comparison analysis, reach a conclusion about whether your hypothesis about a competing argument is correct.

In many cases, the question is not whether a particular claim is _correct_ or _incorrect_, but whether it is _believable._ So Sinnott-Armstrong also offers a four-step "believability test" for evaluating claims of this type.

1. **Expertise:** Does the person presenting the argument have authority in an appropriate field? Being a specialist in one field doesn't necessarily make that person an expert in another.
2. **Motive:** Would self-interest or other personal motives compel a person to withhold information or make false statements? It might be wise to seek confirmation of one's statements from a totally separate, independent authority.
3. **Sources:** Are the sources the person offers as evidence of a claim recognized experts? Do those sources have expertise on the specific issue addressed?
4. **Agreement:** Is there agreement among many experts within the same specialty?

### Let's argue

When I was a university student, I would usually sit toward the front of the classroom. When I didn't understand something, I would start asking questions for clarification. Everyone else in the class would just sit silently, saying nothing. After class, however, other students would come up to me and thank me for asking those questions—because everyone else in the room was confused, too.

Clarification is a powerful act—not just in the classroom, but during arguments anywhere. Building an organizational culture in which people feel empowered to ask for clarification is critical for productive arguments (I've [given presentations on this topic][3] before). If members have the courage to clarify premises, and they can do so in an environment where others don't think they're being belligerent, then this might be the key to a successful and productive argument.

If you really want to strengthen your ability to argue, find someone who totally disagrees with you but wants to learn and understand your beliefs. Then, practice some of Sinnott-Armstrong's suggestions. Arguing productively will enhance [transparency, inclusivity, and collaboration][4] in your organization—leading to a more open culture.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/6/barriers-productive-arguments

作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ron-mcfarland/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/directions-arrows.png?itok=EE3lFewZ (Arrows pointing different directions)
[2]: https://opensource.com/open-organization/19/5/productive-arguments
[3]: https://www.slideshare.net/RonMcFarland1/argue-successfully-achieve-something
[4]: https://opensource.com/open-organization/resources/open-org-definition

@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco issues critical security warnings on SD-WAN, DNA Center)
[#]: via: (https://www.networkworld.com/article/3403349/cisco-issues-critical-security-warnings-on-sd-wan-dna-center.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco issues critical security warnings on SD-WAN, DNA Center
======
Vulnerabilities in Cisco's SD-WAN and DNA Center software top a list of nearly 30 security advisories issued by the company.
![zajcsik \(CC0\)][1]

Cisco has released two critical warnings about security issues with its SD-WAN and DNA Center software packages.

The worst, with a Common Vulnerability Scoring System (CVSS) rating of 9.3 out of 10, is a vulnerability in its [Digital Network Architecture][2] (DNA) Center software that could let an unauthenticated attacker connect an unauthorized network device to the subnet designated for cluster services.

**More about SD-WAN**

* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][3]
* [How to pick an off-site data-backup method][4]
* [SD-Branch: What it is and why you’ll need it][5]
* [What are the options for securing SD-WAN?][6]

A successful exploit could let an attacker reach internal services that are not hardened for external access, Cisco [stated][7]. The vulnerability is due to insufficient access restriction on ports necessary for system operation; the company discovered the issue during internal security testing.

Cisco DNA Center gives IT teams the ability to control access through policies using Software-Defined Access, automatically provision through Cisco DNA Automation, virtualize devices through Cisco Network Functions Virtualization (NFV), and lower security risks through segmentation and Encrypted Traffic Analysis.

This vulnerability affects Cisco DNA Center Software releases prior to 1.3; it is fixed in release 1.3 and later.

Cisco wrote that system updates are available from the Cisco cloud but not from the [Software Center][8] on Cisco.com. To upgrade to a fixed release of Cisco DNA Center Software, administrators can use the “System Updates” feature of the software.

A second critical warning – with a CVSS score of 7.8 – is a weakness in the command-line interface of the Cisco SD-WAN Solution that could let an authenticated local attacker elevate lower-level privileges to the root user on an affected device.

Cisco [wrote][9] that the vulnerability is due to insufficient authorization enforcement. An attacker could exploit this vulnerability by authenticating to the targeted device and executing commands that could lead to elevated privileges. A successful exploit could let the attacker make configuration changes to the system as the root user, the company stated.

This vulnerability affects a range of Cisco products running a release of the Cisco SD-WAN Solution prior to Releases 18.3.6, 18.4.1, and 19.1.0, including:

* vBond Orchestrator Software
* vEdge 100 Series Routers
* vEdge 1000 Series Routers
* vEdge 2000 Series Routers
* vEdge 5000 Series Routers
* vEdge Cloud Router Platform
* vManage Network Management Software
* vSmart Controller Software

Cisco said it has released free [software updates][10] that address the vulnerability described in this advisory. Cisco wrote that it fixed this vulnerability in Release 18.4.1 of the Cisco SD-WAN Solution.

The two critical warnings were included in a dump of [nearly 30 security advisories][11].

There were two other “High”-impact warnings involving the SD-WAN software.

One, a vulnerability in the vManage web-based UI (Web UI) of the Cisco SD-WAN Solution, could let an authenticated, remote attacker gain elevated privileges on an affected vManage device, Cisco [wrote][12].

The vulnerability is due to a failure to properly authorize certain user actions in the device configuration. An attacker could exploit this vulnerability by logging in to the vManage Web UI and sending crafted HTTP requests to vManage. A successful exploit could let attackers gain elevated privileges and make changes to the configuration that they would not normally be authorized to make, Cisco stated.

Another vulnerability in the vManage web-based UI could let an authenticated, remote attacker inject arbitrary commands that are executed with root privileges.

This exposure is due to insufficient input validation, Cisco [wrote][13]. An attacker could exploit this vulnerability by authenticating to the device and submitting crafted input to the vManage Web UI.

Both vulnerabilities affect Cisco vManage Network Management Software that is running a release of the Cisco SD-WAN Solution prior to Release 18.4.0, and Cisco has released free [software updates][10] to correct them.

Other high-rated vulnerabilities Cisco disclosed included:

* A [vulnerability][14] in the Cisco Discovery Protocol (CDP) implementation for the Cisco TelePresence Codec (TC) and Collaboration Endpoint (CE) Software could allow an unauthenticated, adjacent attacker to inject arbitrary shell commands that are executed by the device.
* A [weakness][15] in the internal packet-processing functionality of the Cisco StarOS operating system running on virtual platforms could allow an unauthenticated, remote attacker to cause an affected device to stop processing traffic, resulting in a denial of service (DoS) condition.
* A [vulnerability][16] in the web-based management interface of the Cisco RV110W Wireless-N VPN Firewall, Cisco RV130W Wireless-N Multifunction VPN Router, and Cisco RV215W Wireless-N VPN Router could allow an unauthenticated, remote attacker to cause a reload of an affected device, resulting in a denial of service (DoS) condition.

Cisco has [released software][10] fixes for those advisories as well.

Join the Network World communities on [Facebook][17] and [LinkedIn][18] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403349/cisco-issues-critical-security-warnings-on-sd-wan-dna-center.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/04/lightning_storm_night_gyorgy_karoly_toth_aka_zajcsik_cc0_via_pixabay_1200x800-100754504-large.jpg
[2]: https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html
[3]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[4]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[5]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[6]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-dnac-bypass
[8]: https://software.cisco.com/download/home
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-sdwan-privesca
[10]: https://tools.cisco.com/security/center/resources/security_vulnerability_policy.html#fixes
[11]: https://tools.cisco.com/security/center/publicationListing.x?product=Cisco&sort=-day_sir&limit=50#~Vulnerabilities
[12]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-sdwan-privilescal
[13]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-sdwan-cmdinj
[14]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-tele-shell-inj
[15]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-staros-asr-dos
[16]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-rvrouters-dos
[17]: https://www.facebook.com/NetworkWorld/
[18]: https://www.linkedin.com/company/network-world

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (With Tableau, SaaS king Salesforce becomes a hybrid cloud company)
[#]: via: (https://www.networkworld.com/article/3403442/with-tableau-saas-king-salesforce-becomes-a-hybrid-cloud-company.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

With Tableau, SaaS king Salesforce becomes a hybrid cloud company
======
Once dismissive of software, Salesforce acknowledges the inevitability of the hybrid cloud.
![Martyn Williams/IDGNS][1]

I remember a time when people at Salesforce events would hand out pins that read “Software” inside a red circle with a slash through it. The High Priest of SaaS (a.k.a. CEO Marc Benioff) was so adamant against installed, on-premises software that his keynotes were always comical.

Now, Salesforce is prepared to [spend $15.7 billion to acquire Tableau Software][2], the leader in on-premises data analytics.

On the hell-freezes-over scale, this is up there with Microsoft embracing Linux or Apple PR people returning a phone call. Well, we know at least one of those has happened.

**[ Also read:[Hybrid Cloud: The time for adoption is upon us][3] | Stay in the know: [Subscribe and get daily newsletter updates][4] ]**

So, why would a company that is so steeped in the cloud, so anti-on-premises software, make such a massive purchase?

Partly it is because Benioff and company are finally coming to the same conclusion as most everyone else: The hybrid cloud, a mix of on-premises systems and public cloud, is the wave of the future, and pure cloud plays are in the minority.

The reality is that data is hybrid and does not sit in a single location, and Salesforce is finally acknowledging this, said Tim Crawford, president of Avoa, a strategic CIO advisory firm.

“I see the acquisition of Tableau by Salesforce as less about getting into the on-prem game as it is a reality of the world we live in. Salesforce needed a solid analytics tool that went well beyond their existing capability. Tableau was that tool,” he said.

**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][5] ]**

Salesforce also understands that it needs a better understanding of customers and the data insights that drive customer decisions. That data is both on-prem and in the cloud, Crawford noted. It is in Salesforce, other solutions, and the myriad of Excel spreadsheets spread across employee systems. Tableau crosses the hybrid boundaries and brings a straightforward way to visualize data.

Salesforce had analytics features as part of its SaaS platform, but they were geared around its own platform, whereas Tableau is widely used and supports all manner of analytics.

“There’s a huge overlap between Tableau customers and Salesforce customers,” Crawford said. “The data is everywhere in the enterprise, not just in Salesforce. Salesforce does a great job with its own data, but Tableau does great with data in a lot of places because it’s not tied to one platform. So, it opens up where the data comes from and the insights you get from the data.”

Crawford said that once the deal is done and Tableau is under some deeper pockets, the organization may be able to innovate faster or do things it was unable to do before. That hardly indicates Tableau was struggling, though. It pulled in [$1.16 billion in revenue][6] in 2018.

Crawford also expects Salesforce to push Tableau to open up new possibilities for customer insights by unlocking customer data inside and outside of Salesforce. One challenge for the two companies is to maintain that neutrality so that they don’t lose the ability to use Tableau for non-customer-centric activities.

“It’s a beautiful way to visualize large sets of data that have nothing to do with customer centricity,” he said.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403442/with-tableau-saas-king-salesforce-becomes-a-hybrid-cloud-company.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/09/150914-salesforce-dreamforce-2-100614575-large.jpg
[2]: https://www.cio.com/article/3402026/how-salesforces-tableau-acquisition-will-impact-it.html
[3]: http://www.networkworld.com/article/2172875/cloud-computing/hybrid-cloud--the-year-of-adoption-is-upon-us.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[6]: https://www.geekwire.com/2019/tableau-hits-841m-annual-recurring-revenue-41-transition-subscription-model-continues/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Carrier services help expand healthcare, with 5G in the offing)
[#]: via: (https://www.networkworld.com/article/3403366/carrier-services-help-expand-healthcare-with-5g-in-the-offing.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Carrier services help expand healthcare, with 5G in the offing
======
Many telehealth initiatives tap into wireless networking supplied by service providers that may start offering services such as CBRS (Citizens Broadband Radio Service) and 5G to support remote medical care.
![Thinkstock][1]

There are connectivity options aplenty for most types of [IoT][2] deployment, but the idea of simply handing the networking part of the equation off to a national licensed wireless carrier could be the best one for certain kinds of deployments in the medical field.

Telehealth systems, for example, are still a relatively new facet of modern medicine, but they’re already among the most important applications that use carrier networks to deliver care. One such system is operated by the University of Mississippi Medical Center, for the treatment and education of diabetes patients.

**[More on wireless:[The time of 5G is almost here][3]]**

**[ Now read[20 hot jobs ambitious IT pros should shoot for][4]. ]**

Greg Hall is the director of IT at UMMC’s center for telehealth. He said that the remote patient monitoring system is relatively simple by design – diabetes patients receive a tablet computer that they can use to input and track their blood sugar levels, alert clinicians to symptoms like nerve pain or foot sores, and even videoconference with their doctors directly. The tablet connects via Verizon, AT&T or CSpire – depending on who’s got the best coverage in a given area – back to UMMC’s servers.

According to Hall, there are multiple advantages to using carrier connectivity instead of unlicensed technology (i.e. purpose-built [Wi-Fi][5] or similar) to connect patients – some of whom live in remote parts of the state – to their caregivers.

“We weren’t expecting everyone who uses the service to have Wi-Fi,” he said, “and they can take their tablet with them if they’re traveling.”

The system serves about 250 patients in Mississippi, up from roughly 175 in the 2015 pilot program that got the effort off the ground. Nor is it strictly limited to diabetes care – Hall said that it’s already been extended to patients suffering from chronic obstructive pulmonary disease and asthma, and even used for prenatal care, with further expansion in the offing.

“The goal of our program isn’t just the monitoring piece, but also the education piece, teaching a person to live with their [condition] and thrive,” he said.

It hasn’t all been smooth sailing. One issue was caused by the natural foliage of the area, as dense stands of pine trees can cause transmission problems, thanks to their needles being a particularly troublesome length that interferes with 2.5GHz wireless signals. But Hall said that the team has been able to install signal boosters or repeaters to overcome that obstacle.

Neurologist Dr. Allen Gee’s practice in Wyoming attempts to address a similar issue – far-flung patients with medical needs that might not be addressed by the sparse local-care options. From his main office in Cody, he said, he can cover half the state via telepresence, using a purpose-built system that is based on cellular-data connectivity from TCT, Spectrum and AT&T, as well as remote audiovisual equipment and a link to electronic health records stored in distant locations. That allows him to receive patient data, audio/visual information and even imaging diagnostics remotely. Some specialists in the state are able to fly to those remote locations; others are not.

While Gee’s preference is to meet with patients in person, that’s just not always possible, he said.

“Medical specialists don’t get paid for windshield time,” he noted. “Being able to transfer information from an EHR facilitates the process of learning about the patient.”

### 5G is coming

According to Alan Stewart-Brown, vice president at infrastructure management vendor Opengear, there’s a lot to like about current carrier networks for medical use – particularly wide coverage and a lack of interference – but there are bigger things to come.

“We have customers that have equipment in ambulances for instance, where they’re livestreaming patients’ vital signs to consoles that doctors can monitor,” he said. “They’re using carrier 4G for that right now and it works well enough, but there are limitations, namely latency, which you don’t get on [5G][6].”

Beyond the simple fact of increased throughput and lower latency, widespread 5G deployments could open a wide array of new possibilities for medical technology, mostly involving real-time, very-high-definition video streaming. These include medical VR, remote surgery and the like.

“The process you use to do things like real-time video – right now on a 4G network, that may or may not have a delay,” said Stewart-Brown. “Once you can get rid of the delay, the possibilities are endless as to what you can use the technology for.”

### Citizens band

Ron Malenfant, chief architect for service provider IoT at Cisco, agreed that the future of 5G for medical IoT is bright, but said that the actual applications of the technology have to be carefully thought out.

“The use cases need to be worked on,” he said. “The innovative [companies] are starting to say ‘OK, what does 5G mean to me’ and starting to plan use cases.”

One area that the carriers themselves have been eyeing recently is the CBRS band of radio frequencies, which sits around 3.5GHz. It’s what’s referred to as “lightly licensed” spectrum, in that parts of it are used for things like CB radio and other parts are the domain of the U.S. armed forces, and it could be used to build private networks for institutional users like hospitals, instead of deploying small but expensive 4G cells. The idea is that the institutions would be able to lease those frequencies for their specific area from the carrier directly for private LTE/CBRS networks and, eventually, 5G, Malenfant said.

There’s also the issue, of course, that there are still a huge number of unknowns around 5G, which isn’t expected to supplant LTE in the U.S. for at least another year or so. The medical field’s stiff regulatory requirements could also prove a stumbling block for the adoption of newer wireless technology.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403366/carrier-services-help-expand-healthcare-with-5g-in-the-offing.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/stethoscope_mobile_healthcare_ipad_tablet_doctor_patient-100765655-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[4]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[5]: https://www.networkworld.com/article/3238664/80211-wi-fi-standards-and-speeds-explained.html
[6]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Several deals solidify the hybrid cloud’s status as the cloud of choice)
[#]: via: (https://www.networkworld.com/article/3403354/several-deals-solidify-the-hybrid-clouds-status-as-the-cloud-of-choice.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Several deals solidify the hybrid cloud’s status as the cloud of choice
======
On-premises and cloud connections are being built by all the top vendors to bridge legacy and modern systems, creating hybrid cloud environments.
![Getty Images][1]

The hybrid cloud market is expected to grow from $38.27 billion in 2017 to $97.64 billion by 2023, at a compound annual growth rate (CAGR) of 17.0% during the forecast period, according to Markets and Markets.

The research firm said the hybrid cloud is rapidly becoming a leading cloud solution, as it provides various benefits, such as cost, efficiency, agility, mobility, and elasticity. One of the many reasons is the need for interoperability standards between cloud services and existing systems.

Unless you are a startup company and can be born in the cloud, you have legacy data systems that need to be bridged, which is where the hybrid cloud comes in.

So, in very short order we’ve seen a bunch of new alliances involving the old and new guard, reiterating that the need for hybrid solutions remains strong.

**[ Read also:[What hybrid cloud means in practice][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**

### HPE/Google

In April, Hewlett Packard Enterprise (HPE) and Google announced a deal in which HPE introduced a variety of server solutions for Google Cloud’s Anthos, along with a consumption-based model for the validated HPE on-premises infrastructure that is integrated with Anthos.

Following up on that, the two just announced a strategic partnership to create a hybrid cloud for containers by combining HPE’s on-premises infrastructure, Cloud Data Services, and GreenLake consumption model with Anthos. This allows for:

* Bi-directional data mobility and consistent data services between on-premises and cloud
* Application workload mobility to move containerized app workloads across on-premises and multi-cloud environments
* Multi-cloud flexibility, offering the choice of HPE Cloud Volumes and Anthos for what works best for the workload
* Unified hybrid management through Anthos, so customers can get a unified and consistent view of their applications and workloads regardless of where they reside
* Charging as a service via HPE GreenLake

### IBM/Cisco

This is a furthering of an already existing partnership between IBM and Cisco designed to deliver a common and secure developer experience across on-premises and public cloud environments for building modern applications.

[Cisco said it will support IBM Cloud Private][4], an on-premises container application development platform, on Cisco HyperFlex and HyperFlex Edge hyperconverged infrastructure. This includes support for IBM Cloud Pak for Applications. IBM Cloud Paks deliver enterprise-ready containerized software solutions and developer tools for building apps and then easily moving them to any cloud—public or private.

This architecture delivers a common and secure Kubernetes experience across on-premises (including edge) and public cloud environments. IBM’s Multicloud Manager covers monitoring and management of clusters and container-based applications running from on-premises to the edge, while Cisco’s Virtual Application Centric Infrastructure (ACI) will allow customers to extend their network fabric from on-premises to the IBM Cloud.

### IBM/Equinix

Equinix expanded its collaboration with IBM Cloud to bring private and scalable connectivity to global enterprises via Equinix Cloud Exchange Fabric (ECX Fabric). This provides private connectivity to IBM Cloud, including Direct Link Exchange, Direct Link Dedicated and Direct Link Dedicated Hosting, that is secure and scalable.

ECX Fabric is an on-demand, SDN-enabled interconnection service that allows any business to connect between its own distributed infrastructure and any other company’s distributed infrastructure, including cloud providers. Direct Link provides IBM customers with a connection between their network and IBM Cloud. So ECX Fabric provides IBM customers with a secure and scalable network connection to the IBM Cloud service.

At the same time, ECX Fabric provides secure connections to other cloud providers, and most customers prefer a multi-vendor approach to avoid vendor lock-in.

“Each of the partnerships focus on two things: 1) supporting a hybrid-cloud platform for their existing customers by reducing the friction to leveraging each solution and 2) leveraging the unique strength that each company brings. Each of the solutions are unique and would be unlikely to compete directly with other partnerships,” said Tim Crawford, president of Avoa, an IT consultancy.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403354/several-deals-solidify-the-hybrid-clouds-status-as-the-cloud-of-choice.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/cloud_hand_plus_sign_private-100787051-large.jpg
[2]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3403363/cisco-connects-with-ibm-in-to-simplify-hybrid-cloud-deployment.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (ninifly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -0,0 +1,123 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VSCodium: 100% Open Source Version of Microsoft VS Code)
[#]: via: (https://itsfoss.com/vscodium/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

VSCodium: 100% Open Source Version of Microsoft VS Code
======

_**Brief: VSCodium is a fork of Microsoft’s popular Visual Studio Code editor. It’s identical to VS Code, with the single biggest difference being that, unlike VS Code, VSCodium doesn’t track your usage data.**_

Microsoft’s [Visual Studio Code][1] is an excellent editor not only for web developers but also for other programmers. Due to its features, it’s considered one of the best open source code editors.

Yes, it’s one of the many open source products from Microsoft. You can [easily install Visual Studio Code in Linux][2] thanks to the ready-to-use binaries in the form of DEB, RPM and Snap packages.

And there is a problem which might not be an issue for a regular user but is significant to an open source purist.

The ready-to-use binaries Microsoft provides are not open source.

Confused? Let me explain.

The source code of VS Code is open sourced under the MIT license. You can access it on [GitHub][3]. However, the [installation files that Microsoft has created contain proprietary telemetry/tracking][4].

This tracking basically collects usage data and sends it to Microsoft to ‘help improve their products and services’. Telemetry reporting is common with software products these days. Even [Ubuntu does that, but with more transparency][5].

You can [disable the telemetry in VS Code][6], but can you trust Microsoft completely? If the answer is no, then what are your options?

You can build it from the source code and thus keep everything open source. But [installing from source code][7] is not always the prettiest option, especially in today’s world when we are so used to having binaries.

Another option is to use VSCodium!

### VSCodium: 100% open source form of Visual Studio Code

![][8]

[VSCodium][9] is a fork of Microsoft’s Visual Studio Code. This project’s sole aim is to provide you with ready-to-use binaries without Microsoft’s telemetry code.

This solves the problem where you want to use VS Code without the proprietary code from Microsoft but are not comfortable building it from source.

Since [VSCodium is a fork of VS Code][11], it looks and functions exactly the same as VS Code.

Here’s a screenshot of the first run of VS Code and VSCodium side by side in Ubuntu. Can you distinguish one from the other?

![Can you guess which is VSCode and VSCodium?][12]

If you have not been able to distinguish between the two, look at the bottom.

![That’s Microsoft][13]

Apart from this and the logos of the two applications, there is no other noticeable difference.

![VSCodium and VS Code in GNOME Menu][14]

#### Installing VSCodium on Linux

While VSCodium is available in some distributions like Parrot OS, you’ll have to add additional repositories in other Linux distributions.

On Ubuntu and Debian-based distributions, you can use the following commands to install VSCodium.

First, add the GPG key of the repository:

```
wget -qO - https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg | sudo apt-key add -
```

And then add the repository itself:

```
echo 'deb https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/repos/debs/ vscodium main' | sudo tee --append /etc/apt/sources.list.d/vscodium.list
```

Now update your system and install VSCodium:

```
sudo apt update && sudo apt install codium
```
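
Once the installation finishes, a quick smoke test from the terminal confirms everything is wired up. This is a minimal check, assuming the package puts the `codium` launcher on your PATH (which the repository package is meant to do):

```
# Print the installed VSCodium version to confirm the binary is available
codium --version

# Open the current directory in VSCodium, analogous to `code .` in VS Code
codium .
```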

You can find the [installation instructions for other distributions on its page][15]. You should also read the [instructions about migrating from VS Code to VSCodium][16].

**What do you think of VSCodium?**

Personally, I like the concept of VSCodium. To use the cliché, the project has its heart in the right place. I think Linux distributions committed to open source may even start including it in their official repositories.

What do you think? Is it worth switching to VSCodium, or would you rather opt out of the telemetry and continue using VS Code?

And please, no “I use Vim” comments :D

--------------------------------------------------------------------------------

via: https://itsfoss.com/vscodium/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://code.visualstudio.com/
[2]: https://itsfoss.com/install-visual-studio-code-ubuntu/
[3]: https://github.com/Microsoft/vscode
[4]: https://github.com/Microsoft/vscode/issues/60#issuecomment-161792005
[5]: https://itsfoss.com/ubuntu-data-collection-stats/
[6]: https://code.visualstudio.com/docs/supporting/faq#_how-to-disable-telemetry-reporting
[7]: https://itsfoss.com/install-software-from-source-code/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/vscodium.png?resize=800%2C450&ssl=1
[9]: https://vscodium.com/
[11]: https://github.com/VSCodium/vscodium
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/vscodium-vs-vscode.png?resize=800%2C450&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/microsoft-vscode-tracking.png?resize=800%2C259&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/vscodium-and-vscode.jpg?resize=800%2C220&ssl=1
[15]: https://vscodium.com/#install
[16]: https://vscodium.com/#migrate

@ -0,0 +1,247 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-1)
[#]: via: (https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-part1/)
[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)

Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-1
======

As **Docker** usage and adoption grows faster and faster, monitoring **Docker container** images becomes more challenging. With multiple Docker container images being created day by day, monitoring them is very important. There are already some built-in tools and technologies, but configuring them is a little complex. As microservices-based architecture becomes the de-facto standard in the coming days, learning such a tool adds one more weapon to your tool-set.

Based on the above scenarios, the need for a lightweight and robust tool was growing. **Portainer.io** (the latest version is 1.20.2) addresses this: the tool is very lightweight (only 2-3 commands are needed to configure it) and has become popular among Docker users.

**This tool has advantages over other tools; some of these are as below**:

* Lightweight (only 2-3 commands are required to install and run this tool; the installation image is also only around 26-30 MB in size)
* Robust and easy to use
* Can be used for Docker monitoring and builds
* Provides a detailed overview of your Docker environments
* Allows you to manage your containers, images, networks and volumes
* Simple to deploy – this requires just one Docker command (which can be run from anywhere)
* The complete Docker-container environment can be monitored easily

**Portainer is also equipped with**:

* Community support
* Enterprise support
* Professional services (along with partner OEM services)

**Functionality and features of the Portainer tool are**:

1. It comes with a nice dashboard that is easy to use and monitor.
2. Many built-in templates for ease of operation and creation
3. Support services (OEM, enterprise level)
4. Monitoring of containers, images, networks, volumes and configuration at almost real time.
5. Also includes Docker Swarm monitoring
6. User management with many fancy capabilities

**Read Also:[How to Install Docker CE on Ubuntu 16.04 / 18.04 LTS System][1]**

### How to install and configure Portainer.io on Ubuntu Linux / RHEL / CentOS

**Note:** This installation is done on Ubuntu 18.04, but the installation on RHEL & CentOS would be the same. We assume Docker CE is already installed on your system.

```
root@linuxtechi:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic
root@linuxtechi:~$
```

Create the volume for Portainer:

```
root@linuxtechi:~$ sudo docker volume create portainer_data
portainer_data
root@linuxtechi:~$
```
Launch and start the Portainer container using the following docker command:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
|
||||
Unable to find image 'portainer/portainer:latest' locally
|
||||
latest: Pulling from portainer/portainer
|
||||
d1e017099d17: Pull complete
|
||||
0b1e707a06d2: Pull complete
|
||||
Digest: sha256:d6cc2c20c0af38d8d557ab994c419c799a10fe825e4aa57fea2e2e507a13747d
|
||||
Status: Downloaded newer image for portainer/portainer:latest
|
||||
35286de9f2e21d197309575bb52b5599fec24d4f373cc27210d98abc60244107
|
||||
root@linuxtechi:~$
|
||||
```
|
||||
|
||||
Once the installation is complete, point your browser at port 9000 on the Docker host where Portainer is running.
|
||||
|
||||
**Note:** If an OS firewall is enabled on your Docker host, make sure port 9000 is allowed, otherwise the GUI will not come up. Example firewall commands are sketched below.
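For example, on an Ubuntu host running UFW, or a RHEL/CentOS host running firewalld, opening port 9000 might look like the following sketch (adjust it to whichever firewall your host actually uses):

```
# Ubuntu / Debian host with UFW
root@linuxtechi:~$ sudo ufw allow 9000/tcp

# RHEL / CentOS host with firewalld
root@linuxtechi:~$ sudo firewall-cmd --permanent --add-port=9000/tcp
root@linuxtechi:~$ sudo firewall-cmd --reload
```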
|
||||
|
||||
In my case, the IP address of my Docker host/engine is "192.168.1.16", so the URL is:
|
||||
|
||||
<http://192.168.1.16:9000>
|
||||
|
||||
[![Portainer-Login-User-Name-Password][2]][3]
|
||||
|
||||
Please make sure that you enter an 8-character (or longer) password. Leave the user as "admin" and then click "Create user".
|
||||
|
||||
Now the following screen appears; on it, select the "Local" rectangle box.
|
||||
|
||||
[![Connect-Portainer-Local-Docker][4]][5]
|
||||
|
||||
Click on “Connect”
|
||||
|
||||
A nice GUI appears, with the home screen for the admin user shown below:
|
||||
|
||||
[![Portainer-io-Docker-Monitor-Dashboard][6]][7]
|
||||
|
||||
Portainer is now ready to manage your Docker containers, and it can also be used for container monitoring.
|
||||
|
||||
### Bring-up container image on Portainer tool
|
||||
|
||||
[![Portainer-Endpoints][8]][9]
|
||||
|
||||
Now check the present status: two container images are already running, and if you create one more, it appears instantly.
|
||||
|
||||
From your command line, kick-start one or two containers as below:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo docker run --name test -it debian
|
||||
Unable to find image 'debian:latest' locally
|
||||
latest: Pulling from library/debian
|
||||
e79bb959ec00: Pull complete
|
||||
Digest: sha256:724b0fbbda7fda6372ffed586670573c59e07a48c86d606bab05db118abe0ef5
|
||||
Status: Downloaded newer image for debian:latest
|
||||
root@linuxtechi:/#
|
||||
```
|
||||
|
||||
Now click the Refresh button in the Portainer GUI (an "Are you sure?" message appears; click "continue"), and you will see 3 container images, as highlighted below:
|
||||
|
||||
[![Portainer-io-new-container-image][10]][11]
|
||||
|
||||
Click on "**containers**" (circled in red above); the next window appears with the "**Dashboard Endpoint summary**":
|
||||
|
||||
[![Portainer-io-Docker-Container-Dash][12]][13]
|
||||
|
||||
On this page, click on "**Containers**" as highlighted in red. Now you are ready to monitor your container images.
|
||||
|
||||
### Simple Docker container image monitoring
|
||||
|
||||
From the above step, a fancy, nice-looking "Container List" page appears, as below:
|
||||
|
||||
[![Portainer-Container-List][14]][15]
|
||||
|
||||
All the containers can be controlled from here (stopped, started, etc.).
|
||||
|
||||
**1)** Now, from this page, stop the "test" container we started earlier (this was the debian image):
|
||||
|
||||
To do this, select the check box in front of this container and click the Stop button above it:
|
||||
|
||||
[![Stop-Container-Portainer-io-dashboard][16]][17]
|
||||
|
||||
From the command line, you can see that this container has now stopped (exited):
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo docker container ls -a
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
d45902e717c0 debian "bash" 21 minutes ago Exited (0) 49 seconds ago test
|
||||
08b96eddbae9 centos:7 "/bin/bash" About an hour ago Exited (137) 9 minutes ago mycontainer2
|
||||
35286de9f2e2 portainer/portainer "/portainer" 2 hours ago Up About an hour 0.0.0.0:9000->9000/tcp compassionate_benz
|
||||
root@linuxtechi:~$
|
||||
```
|
||||
|
||||
**2)** Now start the stopped containers (test and mycontainer2) from the Portainer GUI.
|
||||
|
||||
Select the check box in front of the stopped containers, and then click on Start.
|
||||
|
||||
[![Start-Containers-Portainer-GUI][18]][19]
|
||||
|
||||
You will get a quick message saying "**Container successfully started**", and the containers show a running state:
|
||||
|
||||
[![Conatiner-Started-successfully-Portainer-GUI][20]][21]
|
||||
|
||||
### Exploring various other options and features, step by step
|
||||
|
||||
**1)** Click on "**Images**", which is highlighted; you will get the window below:
|
||||
|
||||
[![Docker-Container-Images-Portainer-GUI][22]][23]
|
||||
|
||||
This is the list of available container images, though some may not be running. These images can be imported, exported or uploaded to various locations; the screenshot below shows the same:
|
||||
|
||||
[![Upload-Docker-Container-Image-Portainer-GUI][24]][25]
|
||||
|
||||
**2)** Click on "**Volumes**", which is highlighted; you will get the window below:
|
||||
|
||||
[![Volume-list-Portainer-io-gui][26]][27]
|
||||
|
||||
**3)** Volumes can be added easily with the following option: click on the "add volume" button, and the window below appears.
|
||||
|
||||
Provide the name as “ **myvol** ” in the name box and click on “ **create the volume** ” button.
|
||||
|
||||
[![Volume-Creation-Portainer-io-gui][28]][29]
|
||||
|
||||
The newly created volume appears as below (in unused state):
|
||||
|
||||
[![Volume-unused-Portainer-io-gui][30]][31]
|
||||
|
||||
#### Conclusion:
|
||||
|
||||
As the installation steps, configuration and exploration above show, Portainer.io is an easy-to-use and good-looking tool. It provides multiple features and options for building and monitoring Docker containers. As explained, this is a very lightweight tool, so it doesn't add any overhead to the host system. The next set of options will be explored in part 2 of this series.
|
||||
|
||||
Read Also: **[Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-2][32]**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-part1/
|
||||
|
||||
作者:[Shashidhar Soppin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/shashidhar/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/how-to-setup-docker-on-ubuntu-server-16-04/
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Login-User-Name-Password-1024x681.jpg
|
||||
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Login-User-Name-Password.jpg
|
||||
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Connect-Portainer-Local-Docker-1024x538.jpg
|
||||
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Connect-Portainer-Local-Docker.jpg
|
||||
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Monitor-Dashboard-1024x544.jpg
|
||||
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Monitor-Dashboard.jpg
|
||||
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Endpoints-1024x252.jpg
|
||||
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Endpoints.jpg
|
||||
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-new-container-image-1024x544.jpg
|
||||
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-new-container-image.jpg
|
||||
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Container-Dash-1024x544.jpg
|
||||
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Container-Dash.jpg
|
||||
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Container-List-1024x538.jpg
|
||||
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Container-List.jpg
|
||||
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Stop-Container-Portainer-io-dashboard-1024x447.jpg
|
||||
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Stop-Container-Portainer-io-dashboard.jpg
|
||||
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Start-Containers-Portainer-GUI-1024x449.jpg
|
||||
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Start-Containers-Portainer-GUI.jpg
|
||||
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Conatiner-Started-successfully-Portainer-GUI-1024x538.jpg
|
||||
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Conatiner-Started-successfully-Portainer-GUI.jpg
|
||||
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker-Container-Images-Portainer-GUI-1024x544.jpg
|
||||
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker-Container-Images-Portainer-GUI.jpg
|
||||
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Upload-Docker-Container-Image-Portainer-GUI-1024x544.jpg
|
||||
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Upload-Docker-Container-Image-Portainer-GUI.jpg
|
||||
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-list-Portainer-io-gui-1024x544.jpg
|
||||
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-list-Portainer-io-gui.jpg
|
||||
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Creation-Portainer-io-gui-1024x544.jpg
|
||||
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Creation-Portainer-io-gui.jpg
|
||||
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-unused-Portainer-io-gui-1024x544.jpg
|
||||
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-unused-Portainer-io-gui.jpg
|
||||
[32]: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/
|
@ -0,0 +1,207 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Fedora 30 Workstation Installation Guide with Screenshots)
|
||||
[#]: via: (https://www.linuxtechi.com/fedora-30-workstation-installation-guide/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
Fedora 30 Workstation Installation Guide with Screenshots
|
||||
======
|
||||
|
||||
If you are a **Fedora distribution** lover who always tries out the latest on Fedora Workstation and Servers, then here is good news: Fedora has released its latest OS edition, **Fedora 30**, for Workstation and Server. One of the important updates in Fedora 30 over its previous release is that it has introduced **Fedora CoreOS** as a replacement for Fedora Atomic Host.
|
||||
|
||||
Some other noticeable updates in Fedora 30 are listed below:
|
||||
|
||||
  * Updated GNOME desktop 3.32
|
||||
* New Linux Kernel 5.0.9
|
||||
* Updated Bash Version 5.0, PHP 7.3 & GCC 9
|
||||
  * Updated Python 3.7.3, JDK 12, Ruby 2.6, Mesa 19.0.2 and Golang 1.12
|
||||
* Improved DNF (Default Package Manager)
|
||||
|
||||
|
||||
|
||||
In this article we will walk through the Fedora 30 Workstation installation steps for a laptop or desktop.
|
||||
|
||||
**Following are the minimum system requirements for Fedora 30 Workstation:**
|
||||
|
||||
* 1GHz Processor (Recommended 2 GHz Dual Core processor)
|
||||
* 2 GB RAM
|
||||
* 15 GB unallocated Hard Disk
|
||||
* Bootable Media (USB / DVD)
|
||||
  * Internet Connection (Optional)
|
||||
|
||||
|
||||
|
||||
Let's jump into the installation steps.
|
||||
|
||||
### Step:1) Download Fedora 30 Workstation ISO File
|
||||
|
||||
Download the Fedora 30 Workstation ISO file onto your system from its official web site:
|
||||
|
||||
<https://getfedora.org/en/workstation/download/>
|
||||
|
||||
Once the ISO file is downloaded, burn it either to a USB drive or a DVD and make it bootable; see the sketch below.
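If you are already on a Linux machine, one common way to write the ISO to a USB stick is with `dd`, sketched here. `/dev/sdX` is a placeholder for your USB device, and the ISO file name is just an example; double-check the device name with `lsblk` first, because `dd` overwrites the target.

```
# identify the USB stick (e.g. /dev/sdb), then write the image to it
$ lsblk
$ sudo dd if=Fedora-Workstation-Live-x86_64-30-1.2.iso of=/dev/sdX bs=4M status=progress conv=fsync
```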
|
||||
|
||||
### Step:2) Boot Your Target System with Bootable media (USB Drive or DVD)
|
||||
|
||||
Reboot your target machine (i.e. the machine where you want to install Fedora 30), and set the boot medium to USB or DVD in the BIOS settings so the system boots from the bootable media.
|
||||
|
||||
### Step:3) Choose Start Fedora-Workstation-30 Live
|
||||
|
||||
When the system boots from the bootable media, we get the following screen; to begin installation on your system's hard disk, choose "**Start Fedora-Workstation-30 Live**".
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Start-Fedora-Workstation-30-Live.jpg>
|
||||
|
||||
### Step:4) Select Install to Hard Drive Option
|
||||
|
||||
Select the "**Install to Hard Drive**" option to install Fedora 30 on your system's hard disk. You can also try Fedora on your system without installing it; for that, select the "**Try Fedora**" option.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Fedora-30-Install-hard-drive.jpg>
|
||||
|
||||
### Step:5) Choose appropriate language for your Fedora 30 Installation
|
||||
|
||||
In this step, choose the language that will be used during the Fedora 30 installation:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Language-Fedora30-Installation.jpg>
|
||||
|
||||
Click on Continue
|
||||
|
||||
### Step:6) Choose Installation destination and partition Scheme
|
||||
|
||||
In the next window we are presented with the following screen; here we choose our installation destination, meaning the hard disk on which we will install:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Installation-Destination-Fedora-30.jpg>
|
||||
|
||||
On the next screen we see the locally available hard disks; select the disk that suits your installation, and then choose how you want to create partitions on it from the Storage Configuration tab.
|
||||
|
||||
If you choose the "**Automatic**" partition scheme, the installer will create the necessary partitions for your system automatically, but if you want to create your own customized partition scheme, choose the "**Custom**" option:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Custom-Partition-Fedora-30-installation.jpg>
|
||||
|
||||
Click on Done
|
||||
|
||||
In this article I will demonstrate how to create [**LVM**][1]-based custom partitions; in my case I have around 40 GB of unallocated hard drive space, so I will be creating the following partitions on it:
|
||||
|
||||
* /boot = 2 GB (ext4 file system)
|
||||
* /home = 15 GB (ext4 file system)
|
||||
* /var = 10 GB (ext4 file system)
|
||||
* / = 10 GB (ext4 file system)
|
||||
* Swap = 2 GB
|
||||
|
||||
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/LVM-Partition-MountPoint-Fedora-30-Installation.jpg>
|
||||
|
||||
Select "**LVM**" as the partitioning scheme and then click on the plus (+) symbol.
|
||||
|
||||
Specify the mount point as /boot and the partition size as 2 GB, and then click on "Add mount point".
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/boot-partiton-fedora30-installation.jpg>
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/boot-standard-parttion-fedora-30.jpg>
|
||||
|
||||
Now create the next partition, /home, of size 15 GB: click on the + symbol.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/home-partition-fedora-30-installation.jpg>
|
||||
|
||||
Click on “ **Add mount point** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Modify-Volume-Group-Fedora30-Installation.jpg>
|
||||
|
||||
As you might have noticed, the /home partition was created as an LVM partition under the default volume group. If you wish to change the default volume group name, click on the "**Modify**" option from the Volume Group tab.
|
||||
|
||||
Enter the volume group name you want to set and then click on Save. From now on, all the LVM partitions will be part of the fedora30 volume group.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Group-Fedora-30-Installation.jpg>
|
||||
|
||||
Similarly, create the next two partitions, **/var** and **/**, each of size 10 GB:
|
||||
|
||||
**/var partition:**
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/var-partition-fedora30-installation.jpg>
|
||||
|
||||
**/ (slash) partition:**
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/slash-partition-fedora30-installation.jpg>
|
||||
|
||||
Now create the last partition, swap, of size 2 GB:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Swap-partition-fedora30-installation.jpg>
|
||||
|
||||
In the next window, click on Done
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Done-After-Parttions-Creation-Fedora30.jpg>
|
||||
|
||||
In the next screen, choose “ **Accept Changes** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Accept-Changes-Fedora30-Installation.jpg>
|
||||
|
||||
Now we get the Installation Summary window; here you can also change the time zone to suit your installation, and then click on "**Begin Installation**".
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Begin-Installation-Fedora30-Installation.jpg>
|
||||
|
||||
### Step:7) Fedora 30 Installation started
|
||||
|
||||
In this step we can see that the Fedora 30 installation has started and is in progress:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Fedora-30-Installation-Progress.jpg>
|
||||
|
||||
Once the Installation is completed, you will be prompted to restart your system
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Fedora30-Installation-Completed-Screen.jpg>
|
||||
|
||||
Click on Quit and reboot your system.
|
||||
|
||||
Don't forget to change the boot medium in the BIOS settings so that your system boots from the hard disk.
|
||||
|
||||
### Step:8) Welcome message and login Screen after reboot
|
||||
|
||||
When we reboot the Fedora 30 system for the first time after a successful installation, we get the welcome screen below:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Welcome-Screen-After-Fedora30-Installation.jpg>
|
||||
|
||||
Click on Next
|
||||
|
||||
On the next screen you can sync your online accounts, or you can skip this:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Online-Accounts-Sync-Fedora30.jpg>
|
||||
|
||||
In the next window you will be required to specify a local account (user name) and its password; this account will later be used to log in to the system:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Local-Account-Fedora30-Installation.jpg>
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Local-Account-password-fedora30.jpg>
|
||||
|
||||
Click on Next
|
||||
|
||||
And finally, we will get below screen which confirms that we are ready to use Fedora 30,
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Reday-to-use-fedora30-message.jpg>
|
||||
|
||||
Click on “ **Start Using Fedora** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Gnome-Desktop-Screen-Fedora30.jpg>
|
||||
|
||||
The GNOME desktop screen above confirms that we have successfully installed Fedora 30 Workstation; now explore it and have fun 😊
|
||||
|
||||
In Fedora 30 Workstation, if you want to install any packages or software from the command line, use the DNF command, for example as shown below.
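For instance, installing a package such as the `htop` utility (a hypothetical choice; substitute any package you need) is a one-liner:

```
$ sudo dnf install htop
```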
|
||||
|
||||
Read More On: **[26 DNF Command Examples for Package Management in Fedora Linux][2]**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/fedora-30-workstation-installation-guide/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/lvm-good-way-to-utilize-disks-space/
|
||||
[2]: https://www.linuxtechi.com/dnf-command-examples-rpm-management-fedora-linux/
|
521
sources/tech/20190507 Prefer table driven tests.md
Normal file
@ -0,0 +1,521 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Prefer table driven tests)
|
||||
[#]: via: (https://dave.cheney.net/2019/05/07/prefer-table-driven-tests)
|
||||
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
|
||||
|
||||
Prefer table driven tests
|
||||
======
|
||||
|
||||
I’m a big fan of testing, specifically [unit testing][1] and TDD ([done correctly][2], of course). A practice that has grown around Go projects is the idea of a table driven test. This post explores the how and why of writing a table driven test.
|
||||
|
||||
Let’s say we have a function that splits strings:
|
||||
|
||||
```
|
||||
// Split slices s into all substrings separated by sep and
|
||||
// returns a slice of the substrings between those separators.
|
||||
func Split(s, sep string) []string {
|
||||
var result []string
|
||||
i := strings.Index(s, sep)
|
||||
for i > -1 {
|
||||
result = append(result, s[:i])
|
||||
s = s[i+len(sep):]
|
||||
i = strings.Index(s, sep)
|
||||
}
|
||||
return append(result, s)
|
||||
}
|
||||
```
|
||||
|
||||
In Go, unit tests are just regular Go functions (with a few rules) so we write a unit test for this function starting with a file in the same directory, with the same package name, `split`.
|
||||
|
||||
```
|
||||
package split
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestSplit(t *testing.T) {
|
||||
got := Split("a/b/c", "/")
|
||||
want := []string{"a", "b", "c"}
|
||||
if !reflect.DeepEqual(want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", want, got)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Tests are just regular Go functions with a few rules:
|
||||
|
||||
1. The name of the test function must start with `Test`.
|
||||
2. The test function must take one argument of type `*testing.T`. A `*testing.T` is a type injected by the testing package itself, to provide ways to print, skip, and fail the test.
|
||||
|
||||
|
||||
|
||||
In our test we call `Split` with some inputs, then compare it to the result we expected.
|
||||
|
||||
### Code coverage
|
||||
|
||||
The next question is, what is the coverage of this package? Luckily the go tool has built-in coverage support. We can invoke it like this:
|
||||
|
||||
```
|
||||
% go test -coverprofile=c.out
|
||||
PASS
|
||||
coverage: 100.0% of statements
|
||||
ok split 0.010s
|
||||
```
|
||||
|
||||
Which tells us we have 100% statement coverage, which isn't really surprising; there's only one branch in this code.
|
||||
|
||||
If we want to dig into the coverage report, the go tool has several options to print it. We can use `go tool cover -func` to break down the coverage per function:
|
||||
|
||||
```
|
||||
% go tool cover -func=c.out
|
||||
split/split.go:8: Split 100.0%
|
||||
total: (statements) 100.0%
|
||||
```
|
||||
|
||||
Which isn’t that exciting as we only have one function in this package, but I’m sure you’ll find more exciting packages to test.
|
||||
|
||||
#### Spray some .bashrc on that
|
||||
|
||||
This pair of commands is so useful for me that I have a shell alias which runs the test coverage and the report in one command:
|
||||
|
||||
```
|
||||
cover () {
|
||||
local t=$(mktemp -t cover)
|
||||
go test $COVERFLAGS -coverprofile=$t $@ \
|
||||
&& go tool cover -func=$t \
|
||||
&& unlink $t
|
||||
}
|
||||
```
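With that function sourced from your `.bashrc`, a run might look like this (hypothetical invocations; any extra arguments are passed straight through to `go test`):

```
% cover                        # coverage report for the current package
% COVERFLAGS=-race cover ./...   # hypothetical: race detector, all packages
```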
|
||||
|
||||
### Going beyond 100% coverage
|
||||
|
||||
So, we wrote one test case, got 100% coverage, but this isn’t really the end of the story. We have good branch coverage but we probably need to test some of the boundary conditions. For example, what happens if we try to split it on comma?
|
||||
|
||||
```
|
||||
func TestSplitWrongSep(t *testing.T) {
|
||||
got := Split("a/b/c", ",")
|
||||
want := []string{"a/b/c"}
|
||||
if !reflect.DeepEqual(want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", want, got)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Or, what happens if there are no separators in the source string?
|
||||
|
||||
```
|
||||
func TestSplitNoSep(t *testing.T) {
|
||||
got := Split("abc", "/")
|
||||
want := []string{"abc"}
|
||||
if !reflect.DeepEqual(want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", want, got)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
We're starting to build a set of test cases that exercise boundary conditions. This is good.
|
||||
|
||||
### Introducing table driven tests
|
||||
|
||||
However there is a lot of duplication in our tests. For each test case only the input, the expected output, and name of the test case change. Everything else is boilerplate. What we'd like is to set up all the inputs and expected outputs and feed them to a single test harness. This is a great time to introduce table driven testing.
|
||||
|
||||
```
|
||||
func TestSplit(t *testing.T) {
|
||||
type test struct {
|
||||
input string
|
||||
sep string
|
||||
want []string
|
||||
}
|
||||
|
||||
tests := []test{
|
||||
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
{input: "abc", sep: "/", want: []string{"abc"}},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
got := Split(tc.input, tc.sep)
|
||||
if !reflect.DeepEqual(tc.want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", tc.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
We declare a structure to hold our test inputs and expected outputs. This is our table. The `tests` structure is usually a local declaration because we want to reuse this name for other tests in this package.
|
||||
|
||||
In fact, we don’t even need to give the type a name, we can use an anonymous struct literal to reduce the boilerplate like this:
|
||||
|
||||
```
|
||||
func TestSplit(t *testing.T) {
|
||||
tests := []struct {
|
||||
input string
|
||||
sep string
|
||||
want []string
|
||||
}{
|
||||
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
{input: "abc", sep: "/", want: []string{"abc"}},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
got := Split(tc.input, tc.sep)
|
||||
if !reflect.DeepEqual(tc.want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", tc.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Now, adding a new test is a straightforward matter; simply add another row to the `tests` structure. For example, what will happen if our input string has a trailing separator?
|
||||
|
||||
```
|
||||
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
{input: "abc", sep: "/", want: []string{"abc"}},
|
||||
{input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}}, // trailing sep
|
||||
```
|
||||
|
||||
But, when we run `go test`, we get
|
||||
|
||||
```
|
||||
% go test
|
||||
--- FAIL: TestSplit (0.00s)
|
||||
split_test.go:24: expected: [a b c], got: [a b c ]
|
||||
```
|
||||
|
||||
Putting aside the test failure, there are a few problems to talk about.
|
||||
|
||||
The first is that by rewriting each test from a function to a row in a table we've lost the name of the failing test. We added a comment in the test file to call out this case, but we don't have access to that comment in the `go test` output.
|
||||
|
||||
There are a few ways to resolve this. You’ll see a mix of styles in use in Go code bases because the table testing idiom is evolving as people continue to experiment with the form.
|
||||
|
||||
### Enumerating test cases
|
||||
|
||||
As tests are stored in a slice we can print out the index of the test case in the failure message:
|
||||
|
||||
```
|
||||
func TestSplit(t *testing.T) {
|
||||
tests := []struct {
|
||||
input string
|
||||
sep string
|
||||
want []string
|
||||
}{
|
||||
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
{input: "abc", sep: "/", want: []string{"abc"}},
|
||||
{input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
|
||||
}
|
||||
|
||||
for i, tc := range tests {
|
||||
got := Split(tc.input, tc.sep)
|
||||
if !reflect.DeepEqual(tc.want, got) {
|
||||
t.Fatalf("test %d: expected: %v, got: %v", i+1, tc.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Now when we run `go test` we get this
|
||||
|
||||
```
|
||||
% go test
|
||||
--- FAIL: TestSplit (0.00s)
|
||||
split_test.go:24: test 4: expected: [a b c], got: [a b c ]
|
||||
```
|
||||
|
||||
Which is a little better. Now we know that the fourth test is failing, although we have to do a little bit of fudging because slice indexing—and range iteration—is zero based. This requires consistency across your test cases; if some use zero base reporting and others use one based, it’s going to be confusing. And, if the list of test cases is long, it could be difficult to count braces to figure out exactly which fixture constitutes test case number four.
|
||||
|
||||
### Give your test cases names
|
||||
|
||||
Another common pattern is to include a name field in the test fixture.
|
||||
|
||||
```
|
||||
func TestSplit(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
input string
|
||||
sep string
|
||||
want []string
|
||||
}{
|
||||
{name: "simple", input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
{name: "wrong sep", input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
{name: "no sep", input: "abc", sep: "/", want: []string{"abc"}},
|
||||
{name: "trailing sep", input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
got := Split(tc.input, tc.sep)
|
||||
if !reflect.DeepEqual(tc.want, got) {
|
||||
t.Fatalf("%s: expected: %v, got: %v", tc.name, tc.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Now when the test fails we have a descriptive name for what the test was doing. We no longer have to try to figure it out from the output; also, we now have a string we can search on.
|
||||
|
||||
```
|
||||
% go test
|
||||
--- FAIL: TestSplit (0.00s)
|
||||
split_test.go:25: trailing sep: expected: [a b c], got: [a b c ]
|
||||
```
|
||||
|
||||
We can DRY this up even more using a map literal syntax:
|
||||
|
||||
```
|
||||
func TestSplit(t *testing.T) {
|
||||
tests := map[string]struct {
|
||||
input string
|
||||
sep string
|
||||
want []string
|
||||
}{
|
||||
"simple": {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
"wrong sep": {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
"no sep": {input: "abc", sep: "/", want: []string{"abc"}},
|
||||
"trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
|
||||
}
|
||||
|
||||
for name, tc := range tests {
|
||||
got := Split(tc.input, tc.sep)
|
||||
if !reflect.DeepEqual(tc.want, got) {
|
||||
t.Fatalf("%s: expected: %v, got: %v", name, tc.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Using a map literal syntax we define our test cases not as a slice of structs, but as a map of test names to test fixtures. There's also a side benefit of using a map that is going to potentially improve the utility of our tests.
|
||||
|
||||
Map iteration order is _undefined_.[1] This means each time we run `go test`, our tests are going to be potentially run in a different order.
|
||||
|
||||
This is super useful for spotting conditions where tests pass when run in statement order, but not otherwise. If you find that happens, you probably have some global state that is being mutated by one test, with subsequent tests depending on that modification.
|
||||
|
||||
### Introducing sub tests
|
||||
|
||||
Before we fix the failing test there are a few other issues to address in our table driven test harness.
|
||||
|
||||
The first is that we're calling `t.Fatalf` when one of the test cases fails. This means after the first failing test case we stop testing the other cases. Because test cases are run in an undefined order, if there is a test failure, it would be nice to know whether it was the only failure or just the first.
|
||||
|
||||
The testing package would do this for us if we went to the effort of writing out each test case as its own function, but that's quite verbose. The good news is that since Go 1.7 a new feature lets us do this easily for table driven tests. It's called [sub tests][3].
|
||||
|
||||
```
|
||||
func TestSplit(t *testing.T) {
|
||||
tests := map[string]struct {
|
||||
input string
|
||||
sep string
|
||||
want []string
|
||||
}{
|
||||
"simple": {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
"wrong sep": {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
"no sep": {input: "abc", sep: "/", want: []string{"abc"}},
|
||||
"trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
|
||||
}
|
||||
|
||||
for name, tc := range tests {
|
||||
t.Run(name, func(t *testing.T) {
|
||||
got := Split(tc.input, tc.sep)
|
||||
if !reflect.DeepEqual(tc.want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", tc.want, got)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
As each sub test now has a name we get that name automatically printed out in any test runs.
|
||||
|
||||
```
|
||||
% go test
|
||||
--- FAIL: TestSplit (0.00s)
|
||||
--- FAIL: TestSplit/trailing_sep (0.00s)
|
||||
split_test.go:25: expected: [a b c], got: [a b c ]
|
||||
```
|
||||
|
||||
Each subtest is its own anonymous function, therefore we can use `t.Fatalf`, `t.Skipf`, and all the other `testing.T` helpers, while retaining the compactness of a table driven test.
|
||||
|
||||
#### Individual sub test cases can be executed directly
|
||||
|
||||
Because sub tests have a name, you can run a selection of sub tests by name using the `go test -run` flag.
|
||||
|
||||
```
|
||||
% go test -run=.*/trailing -v
|
||||
=== RUN TestSplit
|
||||
=== RUN TestSplit/trailing_sep
|
||||
--- FAIL: TestSplit (0.00s)
|
||||
--- FAIL: TestSplit/trailing_sep (0.00s)
|
||||
split_test.go:25: expected: [a b c], got: [a b c ]
|
||||
```
|
||||
|
||||
### Comparing what we got with what we wanted
|
||||
|
||||
Now we’re ready to fix the test case. Let’s look at the error.
|
||||
|
||||
```
|
||||
--- FAIL: TestSplit (0.00s)
|
||||
--- FAIL: TestSplit/trailing_sep (0.00s)
|
||||
split_test.go:25: expected: [a b c], got: [a b c ]
|
||||
```
|
||||
|
||||
Can you spot the problem? Clearly the slices are different, that's what `reflect.DeepEqual` is upset about. But spotting the actual difference isn't easy; you have to spot that extra space after `c`. This might look manageable in this simple example, but it is anything but easy when you're comparing two complicated, deeply nested gRPC structures.
|
||||
|
||||
We can improve the output if we switch to the `%#v` syntax to view the value as a Go(ish) declaration:
|
||||
|
||||
```
|
||||
got := Split(tc.input, tc.sep)
|
||||
if !reflect.DeepEqual(tc.want, got) {
|
||||
t.Fatalf("expected: %#v, got: %#v", tc.want, got)
|
||||
}
|
||||
```
|
||||
|
||||
Now when we run our test it’s clear that the problem is there is an extra blank element in the slice.
|
||||
|
||||
```
|
||||
% go test
|
||||
--- FAIL: TestSplit (0.00s)
|
||||
--- FAIL: TestSplit/trailing_sep (0.00s)
|
||||
split_test.go:25: expected: []string{"a", "b", "c"}, got: []string{"a", "b", "c", ""}
|
||||
```
|
||||
|
||||
But before we go to fix our test failure I want to talk a little bit more about choosing the right way to present test failures. Our `Split` function is simple, it takes a primitive string and returns a slice of strings, but what if it worked with structs, or worse, pointers to structs?
|
||||
|
||||
Here is an example where `%#v` does not work as well:
|
||||
|
||||
```
|
||||
package main

import "fmt"

func main() {
|
||||
type T struct {
|
||||
I int
|
||||
}
|
||||
x := []*T{{1}, {2}, {3}}
|
||||
y := []*T{{1}, {2}, {4}}
|
||||
fmt.Printf("%v %v\n", x, y)
|
||||
fmt.Printf("%#v %#v\n", x, y)
|
||||
}
|
||||
```
|
||||
|
||||
The first `fmt.Printf` prints the unhelpful, but expected, slice of addresses; `[0xc000096000 0xc000096008 0xc000096010] [0xc000096018 0xc000096020 0xc000096028]`. However our `%#v` version doesn't fare any better, printing a slice of addresses cast to `*main.T`; `[]*main.T{(*main.T)(0xc000096000), (*main.T)(0xc000096008), (*main.T)(0xc000096010)} []*main.T{(*main.T)(0xc000096018), (*main.T)(0xc000096020), (*main.T)(0xc000096028)}`
|
||||
|
||||
Because of the limitations in using any `fmt.Printf` verb, I want to introduce the [go-cmp][4] library from Google.
|
||||
|
||||
The goal of the cmp library is specifically to compare two values. This is similar to `reflect.DeepEqual`, but it has more capabilities. Using the cmp package you can, of course, write:
|
||||
|
||||
```
|
||||
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
|
||||
type T struct {
|
||||
I int
|
||||
}
|
||||
x := []*T{{1}, {2}, {3}}
|
||||
y := []*T{{1}, {2}, {4}}
|
||||
fmt.Println(cmp.Equal(x, y)) // false
|
||||
}
|
||||
```
|
||||
|
||||
But far more useful for us in our test function is the `cmp.Diff` function, which will produce a textual description of what is different between the two values, recursively.
|
||||
|
||||
```
|
||||
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
|
||||
type T struct {
|
||||
I int
|
||||
}
|
||||
x := []*T{{1}, {2}, {3}}
|
||||
y := []*T{{1}, {2}, {4}}
|
||||
diff := cmp.Diff(x, y)
|
||||
fmt.Printf(diff)
|
||||
}
|
||||
```
|
||||
|
||||
Which instead produces:
|
||||
|
||||
```
|
||||
% go run
|
||||
{[]*main.T}[2].I:
|
||||
-: 3
|
||||
+: 4
|
||||
```
|
||||
|
||||
Telling us that at element 2 of the slice of `T`s the `I` field was expected to be 3, but was actually 4.
|
||||
|
||||
Putting this all together we have our table driven go-cmp test:
|
||||
|
||||
```
|
||||
func TestSplit(t *testing.T) {
|
||||
tests := map[string]struct {
|
||||
input string
|
||||
sep string
|
||||
want []string
|
||||
}{
|
||||
"simple": {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
"wrong sep": {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
"no sep": {input: "abc", sep: "/", want: []string{"abc"}},
|
||||
"trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
|
||||
}
|
||||
|
||||
for name, tc := range tests {
|
||||
t.Run(name, func(t *testing.T) {
|
||||
got := Split(tc.input, tc.sep)
|
||||
diff := cmp.Diff(tc.want, got)
|
||||
if diff != "" {
|
||||
t.Fatalf(diff)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Running this we get
|
||||
|
||||
```
|
||||
% go test
|
||||
--- FAIL: TestSplit (0.00s)
|
||||
--- FAIL: TestSplit/trailing_sep (0.00s)
|
||||
split_test.go:27: {[]string}[?->3]:
|
||||
-: <non-existent>
|
||||
+: ""
|
||||
FAIL
|
||||
exit status 1
|
||||
FAIL split 0.006s
|
||||
```
|
||||
|
||||
Using `cmp.Diff` our test harness isn't just telling us that what we got and what we wanted were different. It's telling us that the slices are of different lengths: the third index in the fixture shouldn't exist, but in the actual output we got an empty string, "". From here, fixing the test failure is straightforward; one possible fix is sketched below.
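For completeness, here is one way the fix could look; this is a sketch rather than necessarily the author's final version. The idea is that if the remainder of the input is empty once the loop finishes, the input ended in a separator, so we skip appending that final empty element.

```
// Split slices s into all substrings separated by sep and
// returns a slice of the substrings between those separators.
func Split(s, sep string) []string {
	var result []string
	i := strings.Index(s, sep)
	for i > -1 {
		result = append(result, s[:i])
		s = s[i+len(sep):]
		i = strings.Index(s, sep)
	}
	if s == "" {
		// a trailing separator was consumed; don't append the empty remainder
		return result
	}
	return append(result, s)
}
```

With this change in place, all four sub tests in the table pass.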
|
||||
|
||||
1. Please don’t email me to argue that map iteration order is _random_. [It’s not][5].
|
||||
|
||||
|
||||
|
||||
#### Related posts:
|
||||
|
||||
1. [Writing table driven tests in Go][6]
|
||||
2. [Internets of Interest #7: Ian Cooper on Test Driven Development][7]
|
||||
3. [Automatically run your package’s tests with inotifywait][8]
|
||||
4. [How to write benchmarks in Go][9]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests
|
||||
|
||||
作者:[Dave Cheney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://dave.cheney.net/author/davecheney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://dave.cheney.net/2019/04/03/absolute-unit-test
|
||||
[2]: https://www.youtube.com/watch?v=EZ05e7EMOLM
|
||||
[3]: https://blog.golang.org/subtests
|
||||
[4]: https://github.com/google/go-cmp
|
||||
[5]: https://golang.org/ref/spec#For_statements
|
||||
[6]: https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go (Writing table driven tests in Go)
|
||||
[7]: https://dave.cheney.net/2018/10/15/internets-of-interest-7-ian-cooper-on-test-driven-development (Internets of Interest #7: Ian Cooper on Test Driven Development)
|
||||
[8]: https://dave.cheney.net/2016/06/21/automatically-run-your-packages-tests-with-inotifywait (Automatically run your package’s tests with inotifywait)
|
||||
[9]: https://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go (How to write benchmarks in Go)
|
@ -0,0 +1,256 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots)
|
||||
[#]: via: (https://www.linuxtechi.com/rhel-8-installation-steps-screenshots/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots
|
||||
======
|
||||
|
||||
Red Hat released its most awaited OS, **RHEL 8**, on 7th May 2019. RHEL 8 is based on the **Fedora 28** distribution and Linux **kernel version 4.18**. One of the important key features in RHEL 8 is the introduction of "**Application Streams**", which allow developer tools, frameworks and languages to be updated frequently without impacting the core resources of the base OS. In other words, application streams help to segregate user-space packages from the OS kernel space.
|
||||
|
||||
Apart from this, there are many other new features in RHEL 8, such as:
|
||||
|
||||
* XFS File system supports copy-on-write of file extents
|
||||
* Introduction of Stratis filesystem, Buildah, Podman, and Skopeo
|
||||
* Yum utility is based on DNF
|
||||
  * Chrony replaces NTP.
|
||||
* Cockpit is the default Web Console tool for Server management.
|
||||
* OpenSSL 1.1.1 & TLS 1.3 support
|
||||
* PHP 7.2
|
||||
* iptables replaced by nftables
|
||||
|
||||
|
||||
|
||||
### Minimum System Requirements for RHEL 8:
|
||||
|
||||
* 4 GB RAM
|
||||
* 20 GB unallocated disk space
|
||||
* 64-bit x86 or ARM System
|
||||
|
||||
|
||||
|
||||
**Note:** RHEL 8 supports the following architectures:
|
||||
|
||||
* AMD or Intel x86 64-bit
|
||||
* 64-bit ARM
|
||||
* IBM Power Systems, Little Endian & IBM Z
|
||||
|
||||
|
||||
|
||||
In this article we will demonstrate how to install RHEL 8 step by step with screenshots.
|
||||
|
||||
### RHEL 8 Installation Steps with Screenshots
|
||||
|
||||
### Step:1) Download RHEL 8.0 ISO file
|
||||
|
||||
Download the RHEL 8 ISO file from its official web site:
|
||||
|
||||
<https://access.redhat.com/downloads/>
|
||||
|
||||
I am assuming you have an active subscription; if not, register yourself for an evaluation and then download the ISO file.
|
||||
|
||||
### Step:2) Create Installation bootable media (USB or DVD)
|
||||
|
||||
Once you have downloaded the RHEL 8 ISO file, make it bootable by burning it either to a USB drive or a DVD. Reboot the target system where you want to install RHEL 8, then go to its BIOS settings and set the boot medium to USB or DVD.
|
||||
|
||||
### Step:3) Choose “Install Red Hat Enterprise Linux 8.0” option
|
||||
|
||||
When the system boots up from the installation media (USB or DVD), we get the following screen. Choose "**Install Red Hat Enterprise Linux 8.0**" and hit Enter.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Install-RHEL8.jpg>
|
||||
|
||||
### Step:4) Choose your preferred language for RHEL 8 installation
|
||||
|
||||
In this step, choose the language that you want to use for the RHEL 8 installation, making a selection that suits your setup.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Language-RHEL8-Installation.jpg>
|
||||
|
||||
Click on Continue
|
||||
|
||||
### Step:5) Preparing RHEL 8 Installation
|
||||
|
||||
In this step we will decide the installation destination for RHEL 8; apart from this, we can configure the following:
|
||||
|
||||
* Time Zone
|
||||
* Kdump (enabled/disabled)
|
||||
* Software Selection (Packages)
|
||||
* Networking and Hostname
|
||||
* Security Policies & System purpose
|
||||
|
||||
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Installation-summary-rhel8.jpg>
|
||||
|
||||
By default, the installer will automatically pick a time zone and enable **kdump**. If you wish to change the time zone, click on the "**Time & Date**" option, set your preferred time zone, and then click on Done.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/timezone-rhel8-installation.jpg>
|
||||
|
||||
To configure the IP address and hostname, click on the "**Network & Hostname**" option on the installation summary screen.
|
||||
|
||||
If your system is connected to a switch or modem, it will try to get an IP from the DHCP server; otherwise we can configure the IP manually.
|
||||
|
||||
Enter the hostname that you want to set and then click on "**Apply**". Once you are done with the IP address and hostname configuration, click on "Done".
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Network-Hostname-RHEL8-Installation.jpg>
|
||||
|
||||
To define the installation disk and partition scheme for RHEL 8, click on “ **Installation Destination** ” option,
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Installation-Disk-RHEL8-Installation.jpg>
|
||||
|
||||
Click on Done
|
||||
|
||||
As we can see, I have around 60 GB of free disk space on the sda drive; I will be creating the following customized LVM-based partitions on this disk:
|
||||
|
||||
* /boot = 2GB (xfs file system)
|
||||
* / = 20 GB (xfs file system)
|
||||
* /var = 10 GB (xfs file system)
|
||||
* /home = 15 GB (xfs file system)
|
||||
* /tmp = 5 GB (xfs file system)
|
||||
  * Swap = 2 GB
|
||||
|
||||
|
||||
|
||||
**Note:** If you don't want to create partitions manually, select the "**Automatic**" option from the Storage Configuration tab.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Create-New-Partition-RHEL8-Installation.jpg>
|
||||
|
||||
Let's create our first partition, /boot, of size 2 GB. Select LVM as the mount point partitioning scheme and then click on the plus (+) symbol:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/boot-partition-rhel8-installation.jpg>
|
||||
|
||||
Click on “ **Add mount point** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Boot-partition-details-rhel8-installation.jpg>
|
||||
|
||||
To create the next partition, / of size 20 GB, click on the + symbol and specify the details as shown below:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/slash-partition-rhel8-installation.jpg>
|
||||
|
||||
Click on “Add mount point”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/slash-root-partition-details-rhel8-installation.jpg>
|
||||
|
||||
As we can see, the installer has created the volume group "**rhel_rhel8**". If you want to change this name, click on the Modify option, specify the desired name, and then click on Save.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Change-VolumeGroup-RHEL8-Installation.jpg>
|
||||
|
||||
From now on, all partitions will be part of the volume group (renamed "**VolGrp**" in my case).
|
||||
|
||||
Similarly, create the next three partitions, **/home**, **/var** and **/tmp**, of size 15 GB, 10 GB and 5 GB respectively.
|
||||
|
||||
**/home partition:**
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/home-partition-rhel8-installation.jpg>
|
||||
|
||||
**/var partition:**
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/var-partition-rhel8-installation.jpg>
|
||||
|
||||
**/tmp partition:**
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/tmp-partition-rhel8-installation.jpg>
|
||||
|
||||
Now, finally, create the last partition, swap, of size 2 GB:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Swap-Partition-RHEL8-Installation.jpg>
|
||||
|
||||
Click on “Add mount point”
|
||||
|
||||
Once you are done creating partitions, click on Done on the next screen; an example is shown below:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Done-after-partition-creation-rhel8-installation.jpg>
|
||||
|
||||
In the next window, choose “ **Accept Changes** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Accept-Changes-RHEL8-Installation.jpg>
|
||||
|
||||
### Step:6) Select Software Packages and Choose Security Policy and System purpose
|
||||
|
||||
After accepting the changes in the above step, we will be redirected to the installation summary window.
|
||||
|
||||
By default, the installer selects "**Server with GUI**" for the software packages; if you want to change it, click on the "**Software Selection**" option and choose your preferred "**Basic Environment**":
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Software-Selection-RHEL8-Installation.jpg>
|
||||
|
||||
Click on Done
|
||||
|
||||
If you want to set security policies during the installation, choose the required profile from the Security Policies option; otherwise you can leave it as it is.
|
||||
|
||||
From the "**System Purpose**" option, specify the Role, Red Hat Service Level Agreement and Usage, though you can leave this option as it is.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/System-role-agreement-usage-rhel8-installation.jpg>
|
||||
|
||||
Click on Done to proceed further.
|
||||
|
||||
### Step:7) Choose “Begin Installation” option to start installation
|
||||
|
||||
From the Installation summary window click on “Begin Installation” option to start the installation,
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Begin-Installation-RHEL8-Installation.jpg>
|
||||
|
||||
As we can see below, the RHEL 8 installation has started and is in progress:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/RHEL8-Installation-Progress.jpg>
|
||||
|
||||
Set the root password,
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Root-Password-RHEL8.jpg>
|
||||
|
||||
Specify the local user details, such as the full name, user name and password:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/LocalUser-Details-RHEL8-Installation.jpg>
|
||||
|
||||
Once the installation is completed, the installer will prompt us to reboot the system:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/RHEL8-Installation-Completed-Message.jpg>
|
||||
|
||||
Click on "Reboot" to restart your system, and don't forget to change the boot medium in the BIOS settings so that the system boots from the hard disk.
|
||||
|
||||
### Step:8) Initial Setup after installation
|
||||
|
||||
When the system reboots for the first time after a successful installation, we get the window below, where we need to accept the license (EULA):
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Accept-EULA-RHEL8-Installation.jpg>
|
||||
|
||||
Click on Done,
|
||||
|
||||
In the next Screen click on “ **Finish Configuration** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Finish-Configuration-RHEL8-Installation.jpg>
|
||||
|
||||
### Step:9) Login Screen of RHEL 8 Server after Installation
|
||||
|
||||
As we installed RHEL 8 Server with a GUI, we get the login screen below; use the same user name and password that we created during the installation.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Login-Screen-RHEL8.jpg>
|
||||
|
||||
After logging in we get a couple of welcome screens; follow the on-screen instructions, and then finally we get the following screen:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Ready-to-Use-RHEL8.jpg>
|
||||
|
||||
Click on “Start Using Red Hat Enterprise Linux”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/GNOME-Desktop-RHEL8-Server.jpg>
|
||||
|
||||
This confirms that we have successfully installed RHEL 8, and that's all for this article. We will be writing more articles on RHEL 8 in the near future; until then, please share your feedback and comments on this article.
|
||||
|
||||
Read Also: **[How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File][1]**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/rhel-8-installation-steps-screenshots/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/
|
@ -0,0 +1,164 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File)
|
||||
[#]: via: (https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File
|
||||
======
|
||||
|
||||
Red Hat has recently released its most awaited operating system, **RHEL 8**. If you have installed RHEL 8 Server on your system and are wondering how to set up a local yum or dnf repository using the installation DVD or ISO file, then follow the steps and procedure below.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Setup-Local-Repo-RHEL8.jpg>
|
||||
|
||||
In RHEL 8, we have two package repositories:
|
||||
|
||||
* BaseOS
|
||||
* Application Stream
|
||||
|
||||
|
||||
|
||||
The BaseOS repository has all the underlying OS packages, whereas the Application Stream repository has all the application-related packages, developer tools, databases, etc. Using the Application Stream repository, we can have multiple versions of the same application or database, for example as shown below.
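Once the AppStream repository is configured (Step 3 below), you can browse those parallel versions with DNF's module support; for example (an illustrative query, and the exact list of streams depends on your installation media):

```
[root@linuxtechi ~]# dnf module list php
```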
|
||||
|
||||
### Step:1) Mount RHEL 8 ISO file / Installation DVD
|
||||
|
||||
To mount the RHEL 8 ISO file inside your RHEL 8 server, use the following mount command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# mount -o loop rhel-8.0-x86_64-dvd.iso /opt/
|
||||
```
|
||||
|
||||
**Note:** I am assuming you have already copied the RHEL 8 ISO file onto your system.
|
||||
|
||||
In case you have the RHEL 8 installation DVD, use the mount command below to mount it:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# mount /dev/sr0 /opt
|
||||
```
|
||||
|
||||
### Step:2) Copy media.repo file from mounted directory to /etc/yum.repos.d/
|
||||
|
||||
In our case the RHEL 8 installation DVD or ISO file is mounted under the /opt folder; use the cp command to copy the media.repo file to the /etc/yum.repos.d/ directory:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# cp -v /opt/media.repo /etc/yum.repos.d/rhel8.repo
|
||||
'/opt/media.repo' -> '/etc/yum.repos.d/rhel8.repo'
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Set "644" permissions on "**/etc/yum.repos.d/rhel8.repo**":
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# chmod 644 /etc/yum.repos.d/rhel8.repo
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
### Step:3) Add repository entries in “/etc/yum.repos.d/rhel8.repo” file
|
||||
|
||||
By default, the **rhel8.repo** file will have the following content:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/default-rhel8-repo-file.jpg>
|
||||
|
||||
Edit the rhel8.repo file and add the following contents:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# vi /etc/yum.repos.d/rhel8.repo
|
||||
[InstallMedia-BaseOS]
|
||||
name=Red Hat Enterprise Linux 8 - BaseOS
|
||||
metadata_expire=-1
|
||||
gpgcheck=1
|
||||
enabled=1
|
||||
baseurl=file:///opt/BaseOS/
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
|
||||
|
||||
[InstallMedia-AppStream]
|
||||
name=Red Hat Enterprise Linux 8 - AppStream
|
||||
metadata_expire=-1
|
||||
gpgcheck=1
|
||||
enabled=1
|
||||
baseurl=file:///opt/AppStream/
|
||||
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
|
||||
```
|
||||
|
||||
rhel8.repo should look like the above once we add the content. In case you have mounted the installation DVD or ISO on a different folder, then change the location and folder name in the baseurl line for both repositories, and leave the rest of the parameters as they are; see the example below.
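For example, if the media were mounted under /mnt instead of /opt (a hypothetical alternative mount point), only the baseurl lines would change:

```
# Hypothetical alternative mount point: /mnt instead of /opt
baseurl=file:///mnt/BaseOS/
baseurl=file:///mnt/AppStream/
```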
|
||||
|
||||
### Step:4) Clean Yum / DNF and Subscription Manager Cache
|
||||
|
||||
Use the following command to clear the yum/dnf and subscription manager caches:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# dnf clean all
|
||||
[root@linuxtechi ~]# subscription-manager clean
|
||||
All local data removed
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
### Step:5) Verify whether Yum / DNF is getting packages from Local Repo
|
||||
|
||||
Use the dnf or yum repolist command to verify whether these commands are getting packages from the local repositories or not:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# dnf repolist
|
||||
Updating Subscription Management repositories.
|
||||
Unable to read consumer identity
|
||||
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
|
||||
Last metadata expiration check: 1:32:44 ago on Sat 11 May 2019 08:48:24 AM BST.
|
||||
repo id repo name status
|
||||
InstallMedia-AppStream Red Hat Enterprise Linux 8 - AppStream 4,672
|
||||
InstallMedia-BaseOS Red Hat Enterprise Linux 8 - BaseOS 1,658
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
**Note:** You can use either the dnf or the yum command; if you use the yum command, its request is redirected to DNF itself, because in RHEL 8 yum is based on DNF.
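You can verify this yourself; on a typical RHEL 8 system, yum is simply a symbolic link to dnf:

```
[root@linuxtechi ~]# ls -l /usr/bin/yum
# on RHEL 8 this typically resolves to: /usr/bin/yum -> dnf-3
```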
|
||||
|
||||
If you look at the above command output carefully, you will notice the warning message "**This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register**". If you want to suppress or prevent this message while running the dnf/yum command, then edit the file "/etc/yum/pluginconf.d/subscription-manager.conf" and change the parameter "enabled=1" to "enabled=0":
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# vi /etc/yum/pluginconf.d/subscription-manager.conf
|
||||
[main]
|
||||
enabled=0
|
||||
```
|
||||
|
||||
Save and exit the file.
|
||||
|
||||
### Step:6) Installing packages using DNF / Yum
|
||||
|
||||
Let's assume we want to install the nginx web server; then run the below dnf command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# dnf install nginx
|
||||
```
|
||||
|
||||
![][1]
|
||||
|
||||
Similarly, if you want to install the **LEMP** stack on your RHEL 8 system, use the following dnf command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# dnf install nginx mariadb php -y
|
||||
```
|
||||
|
||||
[![][2]][3]
|
||||
|
||||
This confirms that we have successfully configured a local yum/dnf repository on our RHEL 8 server using the installation DVD or ISO file.
|
||||
|
||||
In case these steps help you technically, please do share your feedback and comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/05/dnf-install-nginx-rhel8-1024x376.jpg
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/LEMP-Stack-Install-RHEL8-1024x540.jpg
|
||||
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/LEMP-Stack-Install-RHEL8.jpg
|
@ -1,141 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Check Whether The Given Package Is Installed Or Not On Debian/Ubuntu System?)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How To Check Whether The Given Package Is Installed Or Not On Debian/Ubuntu System?
|
||||
======
|
||||
|
||||
We have recently published an article about bulk package installation.
|
||||
|
||||
While doing that, I struggled to get the installed package information, so I did a small Google search and found a few methods for it.
|
||||
|
||||
I would like to share it on our website so that it will be helpful for others too.
|
||||
|
||||
There are numerous ways we can achieve this.
|
||||
|
||||
I have added seven ways to achieve this. You can choose the method you prefer.
|
||||
|
||||
Those methods are listed below.
|
||||
|
||||
* **`apt-cache Command:`** apt-cache command is used to query the APT cache or package metadata.
|
||||
* **`apt Command:`** APT is a powerful command-line tool for installing, downloading, removing, searching and managing packages on Debian based systems.
|
||||
* **`dpkg-query Command:`** dpkg-query is a tool to query the dpkg database.
|
||||
* **`dpkg Command:`** dpkg is a package manager for Debian based systems.
|
||||
* **`which Command:`** The which command returns the full path of the executable that would have been executed when the command is entered in the terminal.
|
||||
* **`whereis Command:`** The whereis command is used to search for the binary, source, and man page files for a given command.
|
||||
* **`locate Command:`** locate command works faster than the find command because it uses updatedb database, whereas the find command searches in the real system.
|
||||
|
||||
|
||||
|
||||
### Method-1 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using apt-cache Command?
|
||||
|
||||
apt-cache command is used to query the APT cache or package metadata from APT’s internal database.
|
||||
|
||||
It will search for and display information about the given package: whether the package is installed or not, the installed package version, and source repository information.
|
||||
|
||||
The below output clearly shows that the `nano` package is already installed in the system, since the Installed field shows the installed version of the nano package.
|
||||
|
||||
```
|
||||
# apt-cache policy nano
|
||||
nano:
|
||||
Installed: 2.9.3-2
|
||||
Candidate: 2.9.3-2
|
||||
Version table:
|
||||
*** 2.9.3-2 500
|
||||
500 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
|
||||
100 /var/lib/dpkg/status
|
||||
```
|
||||
|
||||
### Method-2 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using apt Command?
|
||||
|
||||
APT is a powerful command-line tool for installing, downloading, removing, searching, and managing packages, as well as querying information about them, providing low-level access to all features of the libapt-pkg library. It contains some less frequently used command-line utilities related to package management.
|
||||
|
||||
```
|
||||
# apt -qq list nano
|
||||
nano/bionic,now 2.9.3-2 amd64 [installed]
|
||||
```
|
||||
|
||||
### Method-3 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using dpkg-query Command?
|
||||
|
||||
dpkg-query is a tool to show information about packages listed in the dpkg database.
|
||||
|
||||
In the below output, the first column shows `ii`, which means the given package is already installed in the system.
|
||||
|
||||
```
|
||||
# dpkg-query --list | grep -i nano
|
||||
ii nano 2.9.3-2 amd64 small, friendly text editor inspired by Pico
|
||||
```
|
||||
|
||||
### Method-4 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using dpkg Command?
|
||||
|
||||
dpkg stands for Debian Package; it is a tool to install, build, remove, and manage Debian packages, but unlike other package management systems, it cannot automatically download and install packages or their dependencies.
|
||||
|
||||
In the below output, the first column shows `ii`, which means the given package is already installed in the system.
|
||||
|
||||
```
|
||||
# dpkg -l | grep -i nano
|
||||
ii nano 2.9.3-2 amd64 small, friendly text editor inspired by Pico
|
||||
```
|
||||
|
||||
### Method-5 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using which Command?
|
||||
|
||||
The which command returns the full path of the executable that would have been executed when the command is entered in the terminal.
|
||||
|
||||
It’s very useful when you want to create a desktop shortcut or symbolic link for executable files.
|
||||
|
||||
The which command searches the directories listed in the current user's PATH environment variable, not those of all users. That is, when you are logged in to your own account, you cannot search for files or directories that belong only to the root user.
|
||||
|
||||
If the following output shows the given package's binary or executable file location, then the given package is already installed in the system. If not, the package is not installed.
|
||||
|
||||
```
|
||||
# which nano
|
||||
/bin/nano
|
||||
```
|
||||
|
||||
### Method-6 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using whereis Command?
|
||||
|
||||
The whereis command is used to search for the binary, source, and man page files for a given command.
|
||||
|
||||
If the following output shows the given package's binary or executable file location, then the given package is already installed in the system. If not, the package is not installed.
|
||||
|
||||
```
|
||||
# whereis nano
|
||||
nano: /bin/nano /usr/share/nano /usr/share/man/man1/nano.1.gz /usr/share/info/nano.info.gz
|
||||
```
|
||||
|
||||
### Method-7 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using locate Command?
|
||||
|
||||
The locate command works faster than the find command because it uses the updatedb database, whereas the find command searches the real filesystem.
|
||||
|
||||
It uses a database rather than hunting individual directory paths to get a given file.
|
||||
|
||||
The locate command isn't pre-installed in most distributions, so use your distribution's package manager to install it.
|
||||
|
||||
The database is updated regularly through cron, but we can also update it manually; see the example below.
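A minimal sketch of both operations, assuming a Debian/Ubuntu system where the mlocate package provides the locate command (the `#` below is the root prompt, as in the earlier blocks):

```
# apt install mlocate
# updatedb
```

The first command installs locate via the mlocate package, and the second refreshes the locate database manually instead of waiting for the cron job.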
|
||||
|
||||
If the following output shows the given package's binary or executable file location, then the given package is already installed in the system. If not, the package is not installed.
|
||||
|
||||
```
|
||||
# locate --basename '\nano'
|
||||
/usr/bin/nano
|
||||
/usr/share/nano
|
||||
/usr/share/doc/nano
|
||||
```
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
93
sources/tech/20190514 Why bother writing tests at all.md
Normal file
93
sources/tech/20190514 Why bother writing tests at all.md
Normal file
@ -0,0 +1,93 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Why bother writing tests at all?)
|
||||
[#]: via: (https://dave.cheney.net/2019/05/14/why-bother-writing-tests-at-all)
|
||||
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
|
||||
|
||||
Why bother writing tests at all?
|
||||
======
|
||||
|
||||
In previous posts and presentations I talked about [how to test][1], and [when to test][2]. To conclude this series, I'm going to ask the question, _why test at all?_
|
||||
|
||||
### Even if you don’t, someone _will_ test your software
|
||||
|
||||
I’m sure no-one reading this post thinks that software should be delivered without being tested first. Even if that were true, your customers are going to test it, or at least use it. If nothing else, it would be good to discover any issues with the code before your customers do. If not for the reputation of your company, at least for your professional pride.
|
||||
|
||||
So, if we agree that software should be tested, the question becomes: _who_ should do that testing?
|
||||
|
||||
### The majority of testing should be performed by development teams
|
||||
|
||||
I argue that the majority of the testing should be done by development groups. Moreover, testing should be automated, and thus the majority of these tests should be unit style tests.
|
||||
|
||||
To be clear, I am _not_ saying you shouldn’t write integration, functional, or end to end tests. I’m also _not_ saying that you shouldn’t have a QA group, or integration test engineers. However at a recent software conference, in a room of over 1,000 engineers, nobody raised their hand when I asked if they considered themselves in a pure quality assurance role.
|
||||
|
||||
You might argue that the audience was self selecting, that QA engineers did not feel a software conference was relevant–or welcoming–to them. However, I think this proves my point: the days of [one developer to one test engineer][3] are gone and not coming back.
|
||||
|
||||
If development teams aren’t writing the majority of tests, who is?
|
||||
|
||||
### Manual testing should not be the majority of your testing because manual testing is O(n)
|
||||
|
||||
Thus, if individual contributors are expected to test the software they write, why do we need to automate it? Why is a manual testing plan not good enough?
|
||||
|
||||
Manual testing of software or manual verification of a defect is not sufficient because it does not scale. As the number of manual tests grows, engineers are tempted to skip them or only execute the scenarios they _think_ could be affected. Manual testing is expensive in terms of time, thus dollars, and it is boring. 99.9% of the tests that passed last time are _expected_ to pass again. Manual testing is looking for a needle in a haystack, except you don't stop when you find the first needle.
|
||||
|
||||
This means that your first response when given a bug to fix or a feature to implement should be to write a failing test. This doesn't need to be a unit test, but it should be an automated test. Once you've fixed the bug, or added the feature, you now have the test case to prove it worked–and you can check them in together.
|
||||
|
||||
### Tests are the critical component that ensure you can always ship your master branch
|
||||
|
||||
As a development team, you are judged on your ability to deliver working software to the business. No, seriously, the business couldn't care less about OOP vs FP, CI/CD, table tennis or limited run La Croix.
|
||||
|
||||
Your superpower is that, at any time, anyone on the team should be confident that the master branch of your code is shippable. This means at any time they can deliver a release of your software to the business and the business can recoup its investment in your development R&D.
|
||||
|
||||
I cannot emphasise this enough. If you want the non-technical parts of the business to believe you are heroes, you must never create a situation where you say "well, we can't release right now because we're in the middle of an important refactoring. It'll be a few weeks. We hope."
|
||||
|
||||
Again, I’m not saying you cannot refactor, but at every stage your product must be shippable. Your tests have to pass. It may not have all the desired features, but the features that are there should work as described on the tin.
|
||||
|
||||
### Tests lock in behaviour
|
||||
|
||||
Your tests are the contract about what your software does and does not do. Unit tests should lock in the behaviour of the package’s API. Integration tests do the same for complex interactions. Tests describe, in code, what the program promises to do.
|
||||
|
||||
If there is a unit test for each input permutation, you have defined the contract for what the code will do _in code_ , not documentation. This is a contract anyone on your team can assert by simply running the tests. At any stage you _know_ with a high degree of confidence that the behaviour people relied on before your change continues to function after your change.
|
||||
|
||||
### Tests give you confidence to change someone else’s code
|
||||
|
||||
Lastly, and this is the biggest one, for programmers working on a piece of code that has been through many hands. Tests give you the confidence to make changes.
|
||||
|
||||
Even though we’ve never met, something I know about you, the reader, is you will eventually leave your current employer. Maybe you’ll be moving on to a new role, or perhaps a promotion, perhaps you’ll move cities, or follow your partner overseas. Whatever the reason, the succession of the maintenance of programs you write is key.
|
||||
|
||||
If people cannot maintain our code then as you and I move from job to job we’ll leave behind programs which cannot be maintained. This goes beyond advocacy for a language or tool. Programs which cannot be changed, programs which are too hard to onboard new developers, or programs which feel like career digression to work on them will reach only one end state–they are a dead end. They represent a balance sheet loss for the business. They will be replaced.
|
||||
|
||||
If you worry about who will maintain your code after you’re gone, write good tests.
|
||||
|
||||
#### Related posts:
|
||||
|
||||
1. [Writing table driven tests in Go][4]
|
||||
2. [Prefer table driven tests][5]
|
||||
3. [Automatically run your package’s tests with inotifywait][6]
|
||||
4. [The value of TDD][7]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://dave.cheney.net/2019/05/14/why-bother-writing-tests-at-all
|
||||
|
||||
作者:[Dave Cheney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://dave.cheney.net/author/davecheney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests
|
||||
[2]: https://dave.cheney.net/paste/absolute-unit-test-london-gophers.pdf
|
||||
[3]: https://docs.microsoft.com/en-us/azure/devops/learn/devops-at-microsoft/evolving-test-practices-microsoft
|
||||
[4]: https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go (Writing table driven tests in Go)
|
||||
[5]: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests (Prefer table driven tests)
|
||||
[6]: https://dave.cheney.net/2016/06/21/automatically-run-your-packages-tests-with-inotifywait (Automatically run your package’s tests with inotifywait)
|
||||
[7]: https://dave.cheney.net/2016/04/11/the-value-of-tdd (The value of TDD)
|
@ -0,0 +1,244 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-2)
|
||||
[#]: via: (https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/)
|
||||
[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)
|
||||
|
||||
Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-2
|
||||
======
|
||||
|
||||
As a continuation of Part-1, this Part-2 covers the remaining features of Portainer, as explained below.
|
||||
|
||||
### Monitoring docker container images
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]$ docker ps -a
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
9ab9aa72f015 ubuntu "/bin/bash" 14 seconds ago Exited (0) 12 seconds ago suspicious_shannon
|
||||
305369d3b2bb centos "/bin/bash" 24 seconds ago Exited (0) 22 seconds ago admiring_mestorf
|
||||
9a669f3dc4f6 portainer/portainer "/portainer" 7 minutes ago Up 7 minutes 0.0.0.0:9000->9000/tcp trusting_keller
|
||||
```
|
||||
|
||||
All the exited and currently running docker containers, including Portainer itself (which runs as a docker container), are displayed. The below screenshot from the Portainer GUI displays the same.
|
||||
|
||||
[![Docker_status][1]][2]
|
||||
|
||||
### Monitoring events
|
||||
|
||||
Click on the “Events” option from the portainer webpage as shown below.
|
||||
|
||||
Various events that are generated based on docker-container activity are captured and displayed on this page.
|
||||
|
||||
[![Container-Events-Poratiner-GUI][3]][4]
|
||||
|
||||
Now let's check and validate how the "**Events**" section works. Create a new redis container as explained below, then check the docker ps -a status at the docker command line:
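For example, a redis container can be started with a plain docker run in detached mode, using the official redis image:

```
# Start a redis container in the background from the official image
[root@linuxtechi ~]$ docker run -d redis
```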
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]$ docker ps -a
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
cdbfbef59c31 redis "docker-entrypoint.s…" About a minute ago Up About a minute 6379/tcp angry_varahamihira
|
||||
9ab9aa72f015 ubuntu "/bin/bash" 10 minutes ago Exited (0) 10 minutes ago suspicious_shannon
|
||||
305369d3b2bb centos "/bin/bash" 11 minutes ago Exited (0) 11 minutes ago admiring_mestorf
|
||||
9a669f3dc4f6 portainer/portainer "/portainer" 17 minutes ago Up 17 minutes 0.0.0.0:9000->9000/tcp trusting_keller
|
||||
```
|
||||
|
||||
Click "Event List" at the top to refresh the events list:
|
||||
|
||||
[![events_updated][5]][6]
|
||||
|
||||
Now the events page is also updated with this change.
|
||||
|
||||
### Host status
|
||||
|
||||
Below is a screenshot of Portainer displaying the host status. It is a simple window showing basic info like "CPU", "hostname", "OS info", etc. of the host Linux machine. Instead of logging into the host command line, this page provides very useful info at a quick glance.
|
||||
|
||||
[![Host-names-Portainer][7]][8]
|
||||
|
||||
### Dashboard in Portainer
|
||||
|
||||
Until now we have seen various features of Portainer under the "**Local**" section. Now jump to the "**Dashboard**" section of the selected Docker endpoint.
|
||||
|
||||
When the "**EndPoint**" option is clicked in the Portainer GUI, the following window appears:
|
||||
|
||||
[![End_Point_Settings][9]][10]
|
||||
|
||||
This dashboard has many statuses and options for the selected Docker host.
|
||||
|
||||
**1) Stacks:** Clicking on this option provides the status of any stacks, if present. Since there are no stacks, this displays zero.
|
||||
|
||||
**2) Images:** Clicking on this option lists the container images that are available on the host. It will display all the live and exited container images.
|
||||
|
||||
[![Docker-Container-Images-Portainer][11]][12]
|
||||
|
||||
For example, create one more "**Nginx**" container and refresh this list to see the updates:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]$ sudo docker run nginx
|
||||
Unable to find image 'nginx:latest' locally
|
||||
latest: Pulling from library/nginx
|
||||
27833a3ba0a5: Pull complete
|
||||
ea005e36e544: Pull complete
|
||||
d172c7f0578d: Pull complete
|
||||
Digest: sha256:e71b1bf4281f25533cf15e6e5f9be4dac74d2328152edf7ecde23abc54e16c1c
|
||||
Status: Downloaded newer image for nginx:latest
|
||||
```
|
||||
|
||||
The following is the screenshot after refresh:
|
||||
|
||||
[![Nginx_Image_creation][13]][14]
|
||||
|
||||
Once the Nginx container is stopped/killed, its docker image will be moved to the unused status.
|
||||
|
||||
**Note:** One can see that all the image details here are very clear, with memory usage, creation date, and time. Compared to the command-line option, maintaining and monitoring containers from here is very easy.
|
||||
|
||||
**3) Networks:** This option is used for network operations, like assigning an IP address, creating subnets, providing an IP address range, and access control (admin and normal user). The following window provides the details of the various options possible. Based on your needs, these options can be explored further.
|
||||
|
||||
[![Conatiner-Network-Portainer][15]][16]
|
||||
|
||||
Once all the various networking parameters are entered, the "**create network**" button is clicked to create the network.
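For comparison, the same operation can also be done from the docker command line; a minimal sketch, where "demo-net" and the subnet are hypothetical example values:

```
# Create a user-defined bridge network with a custom subnet
# ("demo-net" and 172.25.0.0/16 are hypothetical example values)
[root@linuxtechi ~]$ docker network create --driver bridge --subnet 172.25.0.0/16 demo-net
```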
|
||||
|
||||
**4) Containers:** (click on Containers) This option provides the container statuses. The list gives details on running and stopped containers. This output is similar to the docker ps command.
|
||||
|
||||
[![Containers-Status-Portainer][17]][18]
|
||||
|
||||
From this window itself, containers can be stopped and started as the need arises, by ticking the checkbox and selecting the buttons above. One example is provided below.
|
||||
|
||||
For example, both the "CentOS" and "Ubuntu" containers, which are in the stopped state, are now started by selecting the checkboxes and hitting the "Start" button.
|
||||
|
||||
[![start_containers1][19]][20]
|
||||
|
||||
[![start_containers2][21]][22]
|
||||
|
||||
**Note:** Since both are bare Linux base images with no long-running process, they will not stay started; Portainer starts them and they exit shortly afterwards. Try "Nginx" instead and you can see it coming to "running" status.
|
||||
|
||||
[![start_containers3][23]][24]
|
||||
|
||||
**5) Volumes:** Described in Part-1 of this Portainer article.
|
||||
|
||||
### Settings option in Portainer
|
||||
|
||||
Until now we have seen the various features of Portainer under the "**Local**" section. Now jump to the "**Settings**" section.
|
||||
|
||||
When the "Settings" option is clicked in the Portainer GUI, the following further configuration options are available:
|
||||
|
||||
**1) Extensions:** This is a simple Portainer CE subscription process. The details and uses can be seen in the attached window. This is mainly used for maintaining the license and subscription of the respective version.
|
||||
|
||||
[![Extensions][25]][26]
|
||||
|
||||
**2) Users:** This option is used for adding users, with or without administrative privileges. The following example demonstrates this.
|
||||
|
||||
Enter the selected username, "shashi" in this case, and your choice of password, and hit the "**Create User**" button below:
|
||||
|
||||
[![create_user_portainer][27]][28]
|
||||
|
||||
[![create_user2_portainer][29]][30]
|
||||
|
||||
[![Internal-user-Portainer][31]][32]
|
||||
|
||||
Similarly, the just-created user "shashi" can be removed by selecting the checkbox and hitting the "Remove" button.
|
||||
|
||||
[![user_remove_portainer][33]][34]
|
||||
|
||||
**3) Endpoints:** This option is used for endpoint management. Endpoints can be added and removed as shown in the attached windows.
|
||||
|
||||
[![Endpoint-Portainer-GUI][35]][36]
|
||||
|
||||
The new endpoint "shashi" is created using the various default parameters, as shown below:
|
||||
|
||||
[![Endpoint2-Portainer-GUI][37]][38]
|
||||
|
||||
Similarly, this endpoint can be removed by ticking the checkbox and hitting the "Remove" button.
|
||||
|
||||
**4) Registries:** This option is used for registry management. As Docker Hub hosts a registry of various images, this feature can be used for similar purposes.
|
||||
|
||||
[![Registry-Portainer-GUI][39]][40]
|
||||
|
||||
With the default options, the "shashi-registry" can be created:
|
||||
|
||||
[![Registry2-Portainer-GUI][41]][42]
|
||||
|
||||
Similarly this can be removed if not required.
|
||||
|
||||
**5) Settings:** This option covers the following various settings:
|
||||
|
||||
* Setting up the snapshot interval
|
||||
* Using a custom logo
|
||||
* Creating external templates
|
||||
* Security features, like disabling/enabling bind mounts for non-admins, disabling/enabling privileges for non-admins, and enabling host management features
|
||||
|
||||
|
||||
|
||||
The following screenshot shows some options enabled and disabled for demonstration purposes. Once done, hit the "Save Settings" button to save all these options.
|
||||
|
||||
[![Portainer-GUI-Settings][43]][44]
|
||||
|
||||
Now one more option pops up for "Authentication settings", offering LDAP, Internal, or OAuth extension, as shown below:
|
||||
|
||||
[![Authentication-Portainer-GUI-Settings][45]][46]
|
||||
|
||||
Based on what level of security we want for our environment, the respective option is chosen.
|
||||
|
||||
That's all from this article. I hope these Portainer GUI articles help you to manage and monitor containers more efficiently. Please do share your feedback and comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/
|
||||
|
||||
作者:[Shashidhar Soppin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/shashidhar/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker_status-1024x423.jpg
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker_status.jpg
|
||||
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Events-1024x404.jpg
|
||||
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Events.jpg
|
||||
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/events_updated-1024x414.jpg
|
||||
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/events_updated.jpg
|
||||
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Host_names-1024x408.jpg
|
||||
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Host_names.jpg
|
||||
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/05/End_Point_Settings-1024x471.jpg
|
||||
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/05/End_Point_Settings.jpg
|
||||
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Images-1024x398.jpg
|
||||
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Images.jpg
|
||||
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Nginx_Image_creation-1024x439.jpg
|
||||
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Nginx_Image_creation.jpg
|
||||
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Network-1024x463.jpg
|
||||
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Network.jpg
|
||||
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Containers-1024x364.jpg
|
||||
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Containers.jpg
|
||||
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers1-1024x432.jpg
|
||||
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers1.jpg
|
||||
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers2-1024x307.jpg
|
||||
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers2.jpg
|
||||
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers3-1024x435.jpg
|
||||
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers3.jpg
|
||||
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Extensions-1024x421.jpg
|
||||
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Extensions.jpg
|
||||
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user-1024x350.jpg
|
||||
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user.jpg
|
||||
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user2-1024x372.jpg
|
||||
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user2.jpg
|
||||
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Internal-user-Portainer-1024x257.jpg
|
||||
[32]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Internal-user-Portainer.jpg
|
||||
[33]: https://www.linuxtechi.com/wp-content/uploads/2019/05/user_remove-1024x318.jpg
|
||||
[34]: https://www.linuxtechi.com/wp-content/uploads/2019/05/user_remove.jpg
|
||||
[35]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint-1024x349.jpg
|
||||
[36]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint.jpg
|
||||
[37]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint2-1024x379.jpg
|
||||
[38]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint2.jpg
|
||||
[39]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry-1024x420.jpg
|
||||
[40]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry.jpg
|
||||
[41]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry2-1024x409.jpg
|
||||
[42]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry2.jpg
|
||||
[43]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-GUI-Settings-1024x418.jpg
|
||||
[44]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-GUI-Settings.jpg
|
||||
[45]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Authentication-Portainer-GUI-Settings-1024x344.jpg
|
||||
[46]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Authentication-Portainer-GUI-Settings.jpg
|
65
sources/tech/20190519 The three Rs of remote work.md
Normal file
65
sources/tech/20190519 The three Rs of remote work.md
Normal file
@ -0,0 +1,65 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The three Rs of remote work)
|
||||
[#]: via: (https://dave.cheney.net/2019/05/19/the-three-rs-of-remote-work)
|
||||
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
|
||||
|
||||
The three Rs of remote work
|
||||
======
|
||||
|
||||
I started working remotely in 2012. Since then I've worked for big companies and small, organisations with outstanding remote working cultures, and others that would probably have difficulty spelling the word without predictive text. I broadly classify my experiences into three tiers:
|
||||
|
||||
### Little r remote
|
||||
|
||||
The first kind of remote work I call _little r_ remote.
|
||||
|
||||
Your company has an office, but it's not convenient or you don't want to work from there. It could be the commute is too long, or it's in the next town over, or perhaps a short plane flight away. Sometimes you might go into the office for a day or two a week, and should something serious arise you could join your co-workers onsite for an extended period of time.
|
||||
|
||||
If you often hear people say they are going to work from home to get some work done, that’s little r remote.
|
||||
|
||||
### Big R remote
|
||||
|
||||
The next category I call _Big R_ remote. Big R remote differs mainly from little r remote by the tyranny of distance. It's not impossible to visit your co-workers in person, but it is inconvenient. Meeting face to face requires a day's flying. Passports and border crossings are frequently involved. The expense and distance necessitate week-long sprints and commensurate periods of jetlag recuperation.
|
||||
|
||||
Because of timezone differences meetings must be prearranged and periods of overlap closely guarded. Communication becomes less spontaneous and care must be taken to avoid committing to unsustainable working hours.
|
||||
|
||||
### Gothic ℜ remote
|
||||
|
||||
The final category is basically Big R remote working on hard mode. Everything that was hard about Big R remote (timezones, travel schedules, public holidays, daylight savings, video call latency, cultural and language barriers) is multiplied for each remote worker.
|
||||
|
||||
In person meetings are so rare that without a focus on written asynchronous communication progress can repeatedly stall for days, if not weeks, as miscommunication leads to disillusionment and loss of trust.
|
||||
|
||||
In my experience, for knowledge workers, little r remote work offers many benefits over [the open office hell scape][1] du jour. Big R remote takes a serious commitment by all parties, and if you are the first employee in that category you will bear most of the cost of making Big R remote work for you.
|
||||
|
||||
Gothic ℜ remote working should probably be avoided unless all those involved have many years of working in that style _and_ the employer is committed to restructuring the company as a remote first organisation. It is not possible to succeed in a Gothic ℜ remote role without a culture of written communication and asynchronous decision making mandated, _and consistently enforced,_ by the leaders of the company.
|
||||
|
||||
#### Related posts:
|
||||
|
||||
1. [How to dial remote SSL/TLS services in Go][2]
|
||||
2. [How does the go build command work ?][3]
|
||||
3. [Why Slack is inappropriate for open source communications][4]
|
||||
4. [The office coffee model of concurrent garbage collection][5]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://dave.cheney.net/2019/05/19/the-three-rs-of-remote-work
|
||||
|
||||
作者:[Dave Cheney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://dave.cheney.net/author/davecheney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://twitter.com/davecheney/status/761693088666357760
|
||||
[2]: https://dave.cheney.net/2010/10/05/how-to-dial-remote-ssltls-services-in-go (How to dial remote SSL/TLS services in Go)
|
||||
[3]: https://dave.cheney.net/2013/10/15/how-does-the-go-build-command-work (How does the go build command work ?)
|
||||
[4]: https://dave.cheney.net/2017/04/11/why-slack-is-inappropriate-for-open-source-communications (Why Slack is inappropriate for open source communications)
|
||||
[5]: https://dave.cheney.net/2018/12/28/the-office-coffee-model-of-concurrent-garbage-collection (The office coffee model of concurrent garbage collection)
|
@ -0,0 +1,193 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Download and Use Ansible Galaxy Roles in Ansible Playbook)
|
||||
[#]: via: (https://www.linuxtechi.com/use-ansible-galaxy-roles-ansible-playbook/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
How to Download and Use Ansible Galaxy Roles in Ansible Playbook
|
||||
======
|
||||
|
||||
**Ansible** is the tool of choice these days if you must manage multiple devices, be it Linux, Windows, Mac, network devices, VMware, and lots more. What makes Ansible popular is its agentless architecture and granular control. If you have worked with Python or have experience with **YAML**, you will feel at home with Ansible. To see how you can install [Ansible][1], click here.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Download-Use-Ansible-Galaxy-Roles.jpg>
|
||||
|
||||
Ansible core modules will let you manage almost anything, should you wish to write the playbooks; however, often someone has already written a role for the problem you are trying to solve. Let's take an example: you wish to manage NTP clients on your Linux machines. You have two choices: either write a role yourself which can be applied to the nodes, or use **ansible-galaxy** to download an existing role someone has already written and tested for you. Ansible Galaxy has roles for almost all domains, and these cater to different problems. You can visit <https://galaxy.ansible.com/> to get an idea of the domains and the popular roles it has. Each role published on the Galaxy repository is thoroughly tested and has been rated by its users, so you get an idea of how much other people who have used it liked it.
|
||||
|
||||
To keep moving with the NTP idea, here is how you can search for and install an NTP role from Galaxy.
|
||||
|
||||
Firstly, let's run ansible-galaxy with the help flag to check what options it gives us:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ansible-galaxy --help
|
||||
```
|
||||
|
||||
![ansible-galaxy-help][2]
|
||||
|
||||
As you can see from the output above, some interesting options are shown. Since we are looking for a role to manage NTP clients, let's try the search option to see how good it is at finding what we are looking for:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ansible-galaxy search ntp
|
||||
```
|
||||
|
||||
Here is the truncated output of the command above.
|
||||
|
||||
![ansible-galaxy-search][3]
|
||||
|
||||
It found 341 matches based on our search. As you can see from the output above, many of these roles are not even related to NTP, which means our search needs some refinement; however, it has managed to pull in some NTP roles. Let's dig deeper to see what these roles are. But before that, let me tell you about the naming convention being followed here: the name of a role is always preceded by the author's name, so that it is easy to segregate roles with the same name. So, if you have written an NTP role and published it to the Galaxy repo, it does not get mixed up with someone else's role of the same name.
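As a side note, the search can usually be narrowed down; a minimal sketch, assuming your ansible-galaxy version supports the --author filter (here using bennojoy, the author we pick below):

```
# Narrow the search down to NTP roles by a specific author
[root@linuxtechi ~]# ansible-galaxy search ntp --author bennojoy
```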
|
||||
|
||||
With that out of the way, let's continue with our job of installing an NTP role for our Linux machines. Let's try **bennojoy.ntp** for this example, but before using it we need to figure out a couple of things: is this role compatible with the version of Ansible I am running, and what is the license status of this role? To figure these out, let's run the below ansible-galaxy command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ansible-galaxy info bennojoy.ntp
|
||||
```
|
||||
|
||||
![ansible-galaxy-info][4]
|
||||
|
||||
OK, so this says the minimum Ansible version is 1.4 and the license is BSD. Let's download it:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ansible-galaxy install bennojoy.ntp
|
||||
- downloading role 'ntp', owned by bennojoy
|
||||
- downloading role from https://github.com/bennojoy/ntp/archive/master.tar.gz
|
||||
- extracting bennojoy.ntp to /etc/ansible/roles/bennojoy.ntp
|
||||
- bennojoy.ntp (master) was installed successfully
|
||||
[root@linuxtechi ~]# ansible-galaxy list
|
||||
- bennojoy.ntp, master
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Let’s find the newly installed role.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# cd /etc/ansible/roles/bennojoy.ntp/
|
||||
[root@linuxtechi bennojoy.ntp]# ls -l
|
||||
total 4
|
||||
drwxr-xr-x. 2 root root 21 May 21 22:38 defaults
|
||||
drwxr-xr-x. 2 root root 21 May 21 22:38 handlers
|
||||
drwxr-xr-x. 2 root root 48 May 21 22:38 meta
|
||||
-rw-rw-r--. 1 root root 1328 Apr 20 2016 README.md
|
||||
drwxr-xr-x. 2 root root 21 May 21 22:38 tasks
|
||||
drwxr-xr-x. 2 root root 24 May 21 22:38 templates
|
||||
drwxr-xr-x. 2 root root 55 May 21 22:38 vars
|
||||
[root@linuxtechi bennojoy.ntp]#
|
||||
```
|
||||
|
||||
I am going to run this newly downloaded role on my Elasticsearch CentOS node. Here is my hosts file:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# cat hosts
|
||||
[CentOS]
|
||||
elastic7-01 ansible_host=192.168.1.15 ansible_port=22 ansible_user=linuxtechi
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Let's try to ping the node using the below ansible ping module:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ansible -m ping -i hosts elastic7-01
|
||||
elastic7-01 | SUCCESS => {
|
||||
"changed": false,
|
||||
"ping": "pong"
|
||||
}
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Here is what the current ntp.conf looks like on the elastic node:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# head -30 /etc/ntp.conf
|
||||
```
|
||||
|
||||
![Current-ntp-conf][5]
|
||||
|
||||
Since I am in India, let's add the server **in.pool.ntp.org** to ntp.conf. To do that, I have to edit the variables in the defaults directory of the role:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# vi /etc/ansible/roles/bennojoy.ntp/defaults/main.yml
|
||||
```
|
||||
|
||||
Change the NTP server address in the "ntp_server" parameter; after updating, it should look like below:
|
||||
|
||||
![Update-ansible-ntp-role][6]
|
||||
|
||||
The last thing now is to create the playbook which will call this role:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# vi ntpsite.yaml
|
||||
---
|
||||
- name: Configure NTP on CentOS/RHEL/Debian System
|
||||
become: true
|
||||
hosts: all
|
||||
roles:
|
||||
- {role: bennojoy.ntp}
|
||||
```
|
||||
|
||||
Save and exit the file.
|
||||
|
||||
We are ready to run this role now. Use the below command to run the ntp playbook:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ansible-playbook -i hosts ntpsite.yaml
|
||||
```
|
||||
|
||||
The output of the above ntp ansible playbook should be something like below:
|
||||
|
||||
![ansible-playbook-output][7]
|
||||
|
||||
Let's check the updated file now. Go to the elastic node and view the contents of the ntp.conf file:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# cat /etc/ntp.conf
|
||||
#Ansible managed
|
||||
|
||||
driftfile /var/lib/ntp/drift
|
||||
server in.pool.ntp.org
|
||||
|
||||
restrict -4 default kod notrap nomodify nopeer noquery
|
||||
restrict -6 default kod notrap nomodify nopeer noquery
|
||||
restrict 127.0.0.1
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Just in case you do not find a role fulfilling your requirements, ansible-galaxy can help you create the directory structure for your custom roles. This keeps your playbooks, along with the variables, handlers, templates, etc., assembled in a standardized file structure. Let's create our own role; it's always a good practice to let ansible-galaxy create the structure for you:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ansible-galaxy init pk.backup
|
||||
- pk.backup was created successfully
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Verify the structure of your role using the tree command:
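A minimal sketch of the command, assuming the role was created in the current directory; the comment lists the standard top-level layout that ansible-galaxy init generates:

```
[root@linuxtechi ~]# tree pk.backup
# typical top-level layout: defaults/ files/ handlers/ meta/
# README.md tasks/ templates/ tests/ vars/
```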
|
||||
|
||||
![createing-roles-ansible-galaxy][8]
|
||||
|
||||
Let me quickly explain what each of these directories and files is for; each of them serves a purpose.
|
||||
|
||||
The very first one is the **defaults** directory, which contains files with variables that take the lowest precedence; if the same variables are assigned in the vars directory, those will take precedence over defaults. The **handlers** directory hosts the handlers. The **files** and **templates** directories keep any files your role may need to copy and the **Jinja templates** to be used in playbooks, respectively. The **tasks** directory is where the task files of the role are kept. The vars directory consists of all the files that host the variables used in the role. The tests directory consists of a sample inventory and test playbooks which can be used to test the role. The **meta** directory consists of any dependencies on other roles, along with the authorship information.
|
||||
|
||||
Finally, the **README.md** file simply consists of some general information, like the description and the minimum version of Ansible this role is compatible with.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/use-ansible-galaxy-roles-ansible-playbook/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/install-and-use-ansible-in-centos-7/
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-help-1024x294.jpg
|
||||
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-search-1024x552.jpg
|
||||
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-info-1024x557.jpg
|
||||
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Current-ntp-conf.jpg
|
||||
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Update-ansible-ntp-role.jpg
|
||||
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-playbook-output-1024x376.jpg
|
||||
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/createing-roles-ansible-galaxy.jpg
|
@ -0,0 +1,200 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server)
|
||||
[#]: via: (https://www.linuxtechi.com/install-lemp-stack-fedora-30-server/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server
|
||||
======
|
||||
|
||||
In this article, we'll be looking at how to install the **LEMP** stack on Fedora 30 Server. LEMP stands for:
|
||||
|
||||
* L -> Linux
|
||||
* E -> Nginx
|
||||
* M -> MariaDB
|
||||
* P -> PHP
|
||||
|
||||
|
||||
|
||||
I am assuming **[Fedora 30][1]** is already installed on your system.
|
||||
|
||||
![LEMP-Stack-Fedora30][2]
|
||||
|
||||
LEMP is a collection of powerful software installed on a Linux server to provide a popular development platform for building websites. LEMP is a variation of LAMP wherein, instead of **Apache**, **EngineX (Nginx)** is used, and **MariaDB** is used in place of **MySQL**. This how-to guide is a collection of separate guides to install Nginx, MariaDB, and PHP.
|
||||
|
||||
### Install Nginx, PHP 7.3 and PHP-FPM on Fedora 30 Server
|
||||
|
||||
Let’s take a look at how to install Nginx and PHP along with PHP FPM on Fedora 30 Server.
|
||||
|
||||
### Step 1) Switch to root user
|
||||
|
||||
The first step in installing Nginx on your system is to switch to the root user. Use the following command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]$ sudo -i
|
||||
[sudo] password for pkumar:
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
### Step 2) Install Nginx, PHP 7.3 and PHP FPM using dnf command
|
||||
|
||||
Install Nginx using the following dnf command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# dnf install nginx php php-fpm php-common -y
|
||||
```
|
||||
|
||||
### Step 3) Install Additional PHP modules
|
||||
|
||||
The default installation of PHP comes with only the basic and most needed modules installed. If you need additional modules like GD, XML support for PHP, the command-line interface, Zend OPcache features, etc., you can always choose your packages and install everything in one go. See the sample command below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# sudo dnf install php-opcache php-pecl-apcu php-cli php-pear php-pdo php-pecl-mongodb php-pecl-redis php-pecl-memcache php-pecl-memcached php-gd php-mbstring php-mcrypt php-xml -y
|
||||
```
|
||||
|
||||
### Step 4) Start & Enable Nginx and PHP-fpm Service
|
||||
|
||||
Start and enable the Nginx service using the following command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# systemctl start nginx && systemctl enable nginx
|
||||
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Use the following command to start and enable the PHP-FPM service:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# systemctl start php-fpm && systemctl enable php-fpm
|
||||
Created symlink /etc/systemd/system/multi-user.target.wants/php-fpm.service → /usr/lib/systemd/system/php-fpm.service.
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
**Verify the Nginx (web server) and PHP installation.**
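Before opening the browser, a quick command-line check can confirm what was installed; a small sketch:

```
# Confirm the installed versions and service status from the command line
[root@linuxtechi ~]# nginx -v
[root@linuxtechi ~]# php -v
[root@linuxtechi ~]# systemctl status nginx php-fpm
```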
|
||||
|
||||
**Note:** In case the OS firewall is enabled and running on your Fedora 30 system, then allow ports 80 and 443 using the below commands:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# firewall-cmd --permanent --add-service=http
|
||||
success
|
||||
[root@linuxtechi ~]#
|
||||
[root@linuxtechi ~]# firewall-cmd --permanent --add-service=https
|
||||
success
|
||||
[root@linuxtechi ~]# firewall-cmd --reload
|
||||
success
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Open the web browser, type the following URL: http://<Your-Server-IP>
|
||||
|
||||
[![Test-Page-HTTP-Server-Fedora-30][3]][4]
|
||||
|
||||
The above screen confirms that NGINX has been installed successfully.
|
||||
|
||||
Now let's verify the PHP installation. Create a test PHP page (info.php) using the below command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# echo "<?php phpinfo(); ?>" > /usr/share/nginx/html/info.php
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Type the following URL in the web browser:
|
||||
|
||||
http://<Your-Server-IP>/info.php
|
||||
|
||||
[![Php-info-page-fedora30][5]][6]
|
||||
|
||||
The above page confirms that PHP 7.3.5 has been installed successfully. Now let's install the MariaDB database server.
|
||||
|
||||
### Install MariaDB on Fedora 30
|
||||
|
||||
MariaDB is a great replacement for MySQL, as it works much like MySQL and is compatible with MySQL commands too. Let's look at the steps to install MariaDB on Fedora 30 Server.
|
||||
|
||||
### Step 1) Switch to Root User
|
||||
|
||||
The first step in installing MariaDB on your system is to switch to the root user, or you can use a local user who has root privileges. Use the command below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# sudo -i
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
### Step 2) Install latest version of MariaDB (10.3) using dnf command
|
||||
|
||||
Use the following command to install MariaDB on Fedora 30 Server
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# dnf install mariadb-server -y
|
||||
```
|
||||
|
||||
### Step 3) Start and enable MariaDB Service
|
||||
|
||||
Once MariaDB is installed successfully in step 2), the next step is to start and enable the MariaDB service. Use the following command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# systemctl start mariadb.service ; systemctl enable mariadb.service
|
||||
```
|
||||
|
||||
### Step 4) Secure MariaDB Installation
|
||||
|
||||
When we install the MariaDB server, by default there is no root password, and anonymous users are created in the database. So, to secure the MariaDB installation, run the below "mysql_secure_installation" command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# mysql_secure_installation
|
||||
```
|
||||
|
||||
Next you will be prompted with some questions; just answer them as shown below:
|
||||
|
||||
![Secure-MariaDB-Installation-Part1][7]
|
||||
|
||||
![Secure-MariaDB-Installation-Part2][8]
|
||||
|
||||
### Step 5) Test MariaDB Installation
|
||||
|
||||
Once installed, you can always test whether MariaDB is working successfully on the server. Use the following command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# mysql -u root -p
|
||||
Enter password:
|
||||
```
|
||||
|
||||
Next you will be prompted for a password. Enter the same password that you set during the MariaDB secure installation, and then you can see the MariaDB welcome screen:
|
||||
|
||||
```
|
||||
Welcome to the MariaDB monitor. Commands end with ; or \g.
|
||||
Your MariaDB connection id is 17
|
||||
Server version: 10.3.12-MariaDB MariaDB Server
|
||||
|
||||
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
|
||||
|
||||
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
|
||||
|
||||
MariaDB [(none)]>
|
||||
```
|
||||
|
||||
And finally, we've completed all the steps to install LEMP (Linux, Nginx, MariaDB, and PHP) on your server successfully. Please post all your comments and suggestions in the feedback section below and we'll respond at the earliest.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/install-lemp-stack-fedora-30-server/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/fedora-30-workstation-installation-guide/
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/06/LEMP-Stack-Fedora30.jpg
|
||||
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Test-Page-HTTP-Server-Fedora-30-1024x732.jpg
|
||||
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Test-Page-HTTP-Server-Fedora-30.jpg
|
||||
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Php-info-page-fedora30-1024x732.jpg
|
||||
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Php-info-page-fedora30.jpg
|
||||
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Secure-MariaDB-Installation-Part1.jpg
|
||||
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Secure-MariaDB-Installation-Part2.jpg
|
@ -0,0 +1,75 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to navigate the Kubernetes learning curve)
|
||||
[#]: via: (https://opensource.com/article/19/6/kubernetes-learning-curve)
|
||||
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux/users/fatherlinux)
|
||||
|
||||
How to navigate the Kubernetes learning curve
|
||||
======
|
||||
Kubernetes is like a dump truck. It's elegant for solving the problems
|
||||
it's designed for, but you have to master the learning curve first.
|
||||
![Dump truck rounding a turn in the road][1]
|
||||
|
||||
In _[Kubernetes is a dump truck][2]_, I talked about how a tool can be elegant for the problem it was designed to solve—once you learn how to use it. In part 2 of this series, I'm going a little deeper into the Kubernetes learning curve.
|
||||
|
||||
The journey to [Kubernetes][3] often starts with running one container on one host. You quickly discover how easy it is to run new versions of software, how easy it is to share that software with others, and how easy it is for those users to run it the way you intended.
|
||||
|
||||
But then you need
|
||||
|
||||
* Two containers
|
||||
* Two hosts
|
||||
|
||||
|
||||
|
||||
It's easy to fire up one web server on port 80 with a container, but what happens when you need to fire up a second container on port 80? What happens when you are building a production environment and you need the containerized web server to fail over to a second host? The short answer, in either case, is you have to move into container orchestration.
|
||||
|
||||
Inevitably, when you start to handle the two containers or two hosts problem, you'll introduce complexity and, hence, a learning curve. The two services (a more generalized version of a container) / two hosts problem has been around for a long time and has always introduced complexity.
|
||||
|
||||
Historically, this would have involved load balancers, clustering software, and even clustered file systems. Configuration logic for every service is embedded in every system (load balancers, cluster software, and file systems). Running 60 or 70 services, clustered, behind load balancers is complex. Adding another new service is also complex. Worse, decommissioning a service is a nightmare. Thinking back on my days of troubleshooting production MySQL and Apache servers with logic embedded in three, four, or five different places, all in different formats, still makes my head hurt.
|
||||
|
||||
Kubernetes elegantly solves all these problems with one piece of software:
|
||||
|
||||
1. Two services (containers): Check
|
||||
2. Two servers (high availability): Check
|
||||
3. Single source of configuration: Check
|
||||
4. Standard configuration format: Check
|
||||
5. Networking: Check
|
||||
6. Storage: Check
|
||||
7. Dependencies (what services talk to what databases): Check
|
||||
8. Easy provisioning: Check
|
||||
9. Easy de-provisioning: Check (perhaps Kubernetes' _most_ powerful piece)
|
||||
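To make that checklist concrete, here is a minimal sketch of provisioning, scaling, and de-provisioning a web server with the kubectl CLI; the deployment name my-nginx is just an example:

```
# Provision: one command creates the deployment, another exposes it as a service
kubectl create deployment my-nginx --image=nginx
kubectl expose deployment my-nginx --port=80

# Scale it out for high availability
kubectl scale deployment my-nginx --replicas=3

# De-provision: the whole stack goes away just as cleanly
kubectl delete service my-nginx
kubectl delete deployment my-nginx
```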
|
||||
|
||||
|
||||
Wait, it's starting to look like Kubernetes is pretty elegant and pretty powerful. _It is._ You can model an entire miniature IT universe in Kubernetes.
|
||||
|
||||
![Kubernetes business model][4]
|
||||
|
||||
So yes, there is a learning curve when starting to use a giant dump truck (or any professional equipment). There's also a learning curve to use Kubernetes, but it's worth it because you can solve so many problems with one tool. If you are apprehensive about the learning curve, think through all the underlying networking, storage, and security problems in IT infrastructure and envision their solutions today—they're not easier. Especially when you introduce more and more services, faster and faster. Velocity is the goal nowadays, so give special consideration to the provisioning and de-provisioning problem.
|
||||
|
||||
But don't confuse the learning curve for building or equipping Kubernetes (picking the right mud flaps for your dump truck can be hard, LOL) with the learning curve for using it. Learning to build your own Kubernetes with so many different choices at so many different layers (container engine, logging, monitoring, service mesh, storage, networking), and then maintaining updated selections of each component every six months, might not be worth the investment—but learning to use it is absolutely worth it.
|
||||
|
||||
I eat, sleep, and breathe Kubernetes and containers every day, and even I struggle to keep track of all the major new projects announced literally almost every day. But there isn't a day that I'm not excited about the operational benefits of having a single tool to model an entire IT miniverse. Also, remember Kubernetes has matured a ton and will continue to do so. Like Linux and OpenStack before it, the interfaces and de facto projects at each layer will mature and become easier to select.
|
||||
|
||||
In the third article in this series, I'll dig into what you need to know before you drive your Kubernetes "truck."
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/6/kubernetes-learning-curve
|
||||
|
||||
作者:[Scott McCarty][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/fatherlinux/users/fatherlinux
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dumptruck_car_vehicle_storage_container_road.jpg?itok=TWK0CbX_ (Dump truck rounding a turn in the road)
|
||||
[2]: https://opensource.com/article/19/6/kubernetes-dump-truck
|
||||
[3]: https://kubernetes.io/
|
||||
[4]: https://opensource.com/sites/default/files/uploads/developer_native_experience_-_mapped_to_traditional_1.png (Kubernetes business model)
|
@ -0,0 +1,227 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to set ulimit and file descriptors limit on Linux Servers)
|
||||
[#]: via: (https://www.linuxtechi.com/set-ulimit-file-descriptors-limit-linux-servers/)
|
||||
[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)
|
||||
|
||||
How to set ulimit and file descriptors limit on Linux Servers
|
||||
======
|
||||
|
||||
**Introduction:** Hitting the limit on the number of open files has become a common challenge in production environments. As more Java-based and Apache-based applications are installed and configured, they can open very large numbers of files and file descriptors. If this exceeds the configured default limit, you may face access control problems and failures to open files, and many production environments grind to a halt because of this.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/ulimit-number-openfiles-linux-server.jpg>
|
||||
|
||||
Luckily, every Linux-based server provides the “ **ulimit** ” command, with which you can view and set the open file limits and related configuration details. The command is equipped with many options, and in combination they let you control the number of open files. Step-by-step commands with examples are explained in detail below.
|
||||
|
||||
### How to check the current open file limit on a Linux system
|
||||
|
||||
To get the system-wide open file limit on a Linux server, execute the following command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# cat /proc/sys/fs/file-max
|
||||
146013
|
||||
```
|
||||
|
||||
The above number shows that the system can allocate a maximum of 146013 file handles. Note that this is a system-wide kernel limit, not a per-user or per-session one.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# cat /proc/sys/fs/file-max
|
||||
149219
|
||||
[root@linuxtechi ~]# cat /proc/sys/fs/file-max
|
||||
73906
|
||||
```
|
||||
|
||||
This clearly indicates that individual Linux installations set different open file limits, based on the distribution defaults, the available memory, and the applications running on the respective systems.
|
||||
|
||||
### The ulimit command
|
||||
|
||||
As the name suggests, ulimit (user limit) is used to display and set resource limits for the logged-in user. When we run the ulimit command with the -a option, it prints all resource limits for the logged-in user. Now let’s run “ **ulimit -a** ” on Ubuntu / Debian and CentOS systems,
|
||||
|
||||
**Ubuntu / Debian System** ,
|
||||
|
||||
```
|
||||
root@linuxtechi ~}$ ulimit -a
|
||||
core file size (blocks, -c) 0
|
||||
data seg size (kbytes, -d) unlimited
|
||||
scheduling priority (-e) 0
|
||||
file size (blocks, -f) unlimited
|
||||
pending signals (-i) 5731
|
||||
max locked memory (kbytes, -l) 64
|
||||
max memory size (kbytes, -m) unlimited
|
||||
open files (-n) 1024
|
||||
pipe size (512 bytes, -p) 8
|
||||
POSIX message queues (bytes, -q) 819200
|
||||
real-time priority (-r) 0
|
||||
stack size (kbytes, -s) 8192
|
||||
cpu time (seconds, -t) unlimited
|
||||
max user processes (-u) 5731
|
||||
virtual memory (kbytes, -v) unlimited
|
||||
file locks (-x) unlimited
|
||||
```
|
||||
|
||||
**CentOS System**
|
||||
|
||||
```
|
||||
root@linuxtechi ~}$ ulimit -a
|
||||
core file size (blocks, -c) 0
|
||||
data seg size (kbytes, -d) unlimited
|
||||
scheduling priority (-e) 0
|
||||
file size (blocks, -f) unlimited
|
||||
pending signals (-i) 5901
|
||||
max locked memory (kbytes, -l) 64
|
||||
max memory size (kbytes, -m) unlimited
|
||||
open files (-n) 1024
|
||||
pipe size (512 bytes, -p) 8
|
||||
POSIX message queues (bytes, -q) 819200
|
||||
real-time priority (-r) 0
|
||||
stack size (kbytes, -s) 8192
|
||||
cpu time (seconds, -t) unlimited
|
||||
max user processes (-u) 5901
|
||||
virtual memory (kbytes, -v) unlimited
|
||||
file locks (-x) unlimited
|
||||
```
|
||||
|
||||
As can be seen here, different systems have different limits set. All these limits can be configured/changed using the “ulimit” command.
|
||||
|
||||
To display an individual resource limit, pass the corresponding option to the ulimit command; some of the options are listed below (a short usage sketch follows this list):
|
||||
|
||||
* ulimit -n –> It will display the limit on the number of open files
|
||||
* ulimit -c –> It will display the maximum core file size
|
||||
* ulimit -u –> It will display the maximum user process limit for the logged in user.
|
||||
* ulimit -f –> It will display the maximum file size that the user can have.
|
||||
* ulimit -m –> It will display the maximum memory size for the logged in user.
|
||||
* ulimit -v –> It will display the maximum virtual memory size limit
|
||||
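For example, here is a minimal sketch of querying and raising the soft open files limit for the current shell session (an unprivileged user can only raise the soft limit up to the hard limit):

```
root@linuxtechi ~}$ ulimit -Sn        # current soft limit on open files
1024
root@linuxtechi ~}$ ulimit -n 2048    # raise it, for this shell session only
root@linuxtechi ~}$ ulimit -Sn
2048
```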
|
||||
|
||||
|
||||
Use the commands below to check the hard and soft limits on the number of open files for the logged-in user:
|
||||
|
||||
```
|
||||
root@linuxtechi ~}$ ulimit -Hn
|
||||
1048576
|
||||
root@linuxtechi ~}$ ulimit -Sn
|
||||
1024
|
||||
```
|
||||
|
||||
### How to fix the problem when the maximum open file limit is reached
|
||||
|
||||
Let’s assume our Linux server has reached the maximum open file limit and we want to extend that limit system-wide, for example setting 100000 as the limit on the number of open files.
|
||||
|
||||
Use the sysctl command to pass the fs.file-max parameter to the kernel on the fly; execute the following command as the root user:
|
||||
|
||||
```
|
||||
root@linuxtechi~]# sysctl -w fs.file-max=100000
|
||||
fs.file-max = 100000
|
||||
```
|
||||
|
||||
The above change will stay active only until the next reboot, so to make it persistent across reboots, edit the file **/etc/sysctl.conf** and add the same parameter:
|
||||
|
||||
```
|
||||
root@linuxtechi~]# vi /etc/sysctl.conf
|
||||
fs.file-max = 100000
|
||||
```
|
||||
|
||||
Save and exit the file.
|
||||
|
||||
Run the following command to bring the above changes into effect immediately, without logging out or rebooting:
|
||||
|
||||
```
|
||||
root@linuxtechi~]# sysctl -p
|
||||
```
|
||||
|
||||
Now verify whether the new changes are in effect:
|
||||
|
||||
```
|
||||
root@linuxtechi~]# cat /proc/sys/fs/file-max
|
||||
100000
|
||||
```
|
||||
|
||||
Use the command below to find out how many file descriptors are currently being utilized:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# more /proc/sys/fs/file-nr
|
||||
1216 0 100000
|
||||
```
|
||||
|
||||
The three fields show the number of allocated file handles, the number of allocated but unused handles, and the system-wide maximum. Note: the “ **sysctl -p** ” command is used to commit the changes without a reboot or logout.
|
||||
|
||||
### Set user-level resource limits via the limits.conf file
|
||||
|
||||
The “ **/etc/sysctl.conf** ” file is used to set resource limits system-wide, but if you want to set resource limits for a specific user such as oracle, mariadb or apache, this can be achieved via the “ **/etc/security/limits.conf** ” file.
|
||||
|
||||
A sample limits.conf is shown below; each entry follows the format “<domain> <type> <item> <value>”,
|
||||
|
||||
```
|
||||
root@linuxtechi~]# cat /etc/security/limits.conf
|
||||
```
|
||||
|
||||
![Limits-conf-linux-part1][1]
|
||||
|
||||
![Limits-conf-linux-part2][2]
|
||||
|
||||
Let’s assume we want to set hard and soft limits on the number of open files for the linuxtechi user, and hard and soft limits on the number of processes for the oracle user. Edit the file “/etc/security/limits.conf” and add the following lines:
|
||||
|
||||
```
|
||||
# hard limit for max opened files for linuxtechi user
|
||||
linuxtechi hard nofile 4096
|
||||
# soft limit for max opened files for linuxtechi user
|
||||
linuxtechi soft nofile 1024
|
||||
|
||||
# hard limit for max number of process for oracle user
|
||||
oracle hard nproc 8096
|
||||
# soft limit for max number of process for oracle user
|
||||
oracle soft nproc 4096
|
||||
```
|
||||
|
||||
Save & exit the file.
|
||||
|
||||
**Note:** If you want to put a resource limit on a group instead of individual users, that is also possible via the limits.conf file: in place of the user name, type **@ <Group_Name>** and keep the rest of the fields the same. An example is shown below:
|
||||
|
||||
```
|
||||
# hard limit for max opened files for sysadmin group
|
||||
@sysadmin hard nofile 4096
|
||||
# soft limit for max opened files for sysadmin group
|
||||
@sysadmin soft nofile 1024
|
||||
```
|
||||
|
||||
Verify whether the new changes are in effect:
|
||||
|
||||
```
|
||||
~]# su - linuxtechi
|
||||
~]$ ulimit -n -H
|
||||
4096
|
||||
~]$ ulimit -n -S
|
||||
1024
|
||||
|
||||
~]# su - oracle
|
||||
~]$ ulimit -H -u
|
||||
8096
|
||||
~]$ ulimit -S -u
|
||||
4096
|
||||
```
|
||||
|
||||
Note: Another widely used command is “[ **lsof**][3]”, which is used to find out how many files are currently open. This command is very helpful for admins.
|
||||
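For example, a rough count of currently open files (lsof can list the same file more than once, so treat the figure as an estimate):

```
[root@linuxtechi ~]# lsof | wc -l
[root@linuxtechi ~]# lsof -p <PID> | wc -l     # open files for a single process
```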
|
||||
**Conclusion:**
|
||||
|
||||
As mentioned in the introduction, the “ulimit” command is very powerful and helps you configure limits so that applications install and run smoothly, without bottlenecks. It is key to fixing many open file limit issues on Linux-based servers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/set-ulimit-file-descriptors-limit-linux-servers/
|
||||
|
||||
作者:[Shashidhar Soppin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/shashidhar/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Limits-conf-linux-part1-1024x677.jpg
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Limits-conf-linux-part2-1024x443.jpg
|
||||
[3]: https://www.linuxtechi.com/lsof-command-examples-linux-geeks/
|
281
sources/tech/20190610 Constant Time.md
Normal file
@ -0,0 +1,281 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Constant Time)
|
||||
[#]: via: (https://dave.cheney.net/2019/06/10/constant-time)
|
||||
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
|
||||
|
||||
Constant Time
|
||||
======
|
||||
|
||||
This essay is derived from my [dotGo 2019 presentation][1] about my favourite feature in Go.
|
||||
|
||||
* * *
|
||||
|
||||
Many years ago Rob Pike remarked,
|
||||
|
||||
> “Numbers are just numbers, you’ll never see `0x80ULL` in a `.go` source file”.
|
||||
|
||||
—Rob Pike, [The Go Programming Language][2]
|
||||
|
||||
Beyond this pithy observation lies the fascinating world of Go’s constants. Something that is perhaps taken for granted because, as Rob noted, in Go numbers–constants–just work.
|
||||
In this post I intend to show you a few things that perhaps you didn’t know about Go’s `const` keyword.
|
||||
|
||||
## What’s so great about constants?
|
||||
|
||||
To kick things off, why are constants good? Three things spring to mind:
|
||||
|
||||
* _Immutability_. Constants are one of the few ways we have in Go to express immutability to the compiler.
|
||||
* _Clarity_. Constants give us a way to extract magic numbers from our code, giving them names and semantic meaning.
|
||||
* _Performance_. The ability to express to the compiler that something will not change is key as it unlocks optimisations such as constant folding, constant propagation, branch and dead code elimination.
|
||||
|
||||
|
||||
|
||||
But these are generic use cases for constants; they apply to any language. Let’s talk about some of the properties of Go’s constants.
|
||||
|
||||
### A Challenge
|
||||
|
||||
To introduce the power of Go’s constants let’s try a little challenge: declare a _constant_ whose value is the number of bits in the natural machine word.
|
||||
|
||||
We can’t use `unsafe.SizeOf` as it is not a constant expression. We could use a build tag and laboriously record the natural word size of each Go platform, or we could do something like this:
|
||||
|
||||
```
|
||||
const uintSize = 32 << (^uint(0) >> 32 & 1)
|
||||
```
|
||||
|
||||
There are many versions of this expression in Go codebases. They all work roughly the same way. If we’re on a 64 bit platform then the bitwise complement (Go’s unary ^) of the number zero–all zero bits–is a number with all bits set, sixty four of them to be exact.
|
||||
|
||||
```
|
||||
1111111111111111111111111111111111111111111111111111111111111111
|
||||
```
|
||||
|
||||
If we shift that value thirty two bits to the right, we get another value with thirty two ones in it.
|
||||
|
||||
```
|
||||
0000000000000000000000000000000011111111111111111111111111111111
|
||||
```
|
||||
|
||||
ANDing that with a number with one bit in the final position gives us the same thing, `1`,
|
||||
|
||||
```
|
||||
0000000000000000000000000000000011111111111111111111111111111111 & 1 = 1
|
||||
```
|
||||
|
||||
Finally we shift the number thirty two one place to the left, giving us 64.
|
||||
|
||||
```
|
||||
32 << 1 = 64
|
||||
```
|
||||
|
||||
This expression is an example of a _constant expression_. All of these operations happen at compile time and the result of the expression is itself a constant. If you look in the runtime package, in particular the garbage collector, you’ll see how constant expressions are used to set up complex invariants based on the word size of the machine the code is compiled on.
|
||||
|
||||
So, this is a neat party trick, but most compilers will do this kind of constant folding at compile time for you. Let’s step it up a notch.
|
||||
|
||||
## Constants are values
|
||||
|
||||
In Go, constants are values and each value has a type. In Go, user defined types can declare their own methods. Thus, a constant value can have a method set. If you’re surprised by this, let me show you an example that you probably use every day.
|
||||
|
||||
```
|
||||
const timeout = 500 * time.Millisecond
|
||||
fmt.Println("The timeout is", timeout) // 500ms
|
||||
```
|
||||
|
||||
In the example the untyped literal constant `500` is multiplied by `time.Millisecond`, itself a constant of type `time.Duration`. The rule for assignments in Go is that, unless otherwise declared, the type on the left hand side of the assignment operator is inferred from the type on the right. `500` is an untyped constant so it is converted to a `time.Duration` then multiplied with the constant `time.Millisecond`.
|
||||
|
||||
Thus `timeout` is a constant of type `time.Duration` which holds the value `500000000`.
|
||||
Why then does `fmt.Println` print `500ms`, not `500000000`?
|
||||
|
||||
The answer is `time.Duration` has a `String` method. Thus any `time.Duration` value, even a constant, knows how to pretty print itself.
|
||||
|
||||
Now we know that constant values are typed, and because types can declare methods, we can derive that _constant values can fulfil interfaces_. In fact we just saw an example of this. `fmt.Println` doesn’t assert that a value has a `String` method, it asserts the value implements the `Stringer` interface.
|
||||
|
||||
Let’s talk a little about how we can use this property to make our Go code better, and to do that I’m going to take a brief digression into the Singleton pattern.
|
||||
|
||||
## Singletons
|
||||
|
||||
I’m generally not a fan of the singleton pattern, in Go or any language. Singletons complicate testing and create unnecessary coupling between packages. I feel the singleton pattern is often used _not_ to create a singular instance of a thing, but instead to create a place to coordinate registration. `net/http.DefaultServeMux` is a good example of this pattern.
|
||||
|
||||
```
|
||||
package http
|
||||
|
||||
// DefaultServeMux is the default ServeMux used by Serve.
|
||||
var DefaultServeMux = &defaultServeMux
|
||||
|
||||
var defaultServeMux ServeMux
|
||||
```
|
||||
|
||||
There is nothing singular about `http.defaultServeMux`, nothing prevents you from creating another `ServeMux`. In fact the `http` package provides a helper that will create as many `ServeMux`‘s as you want.
|
||||
|
||||
```
|
||||
// NewServeMux allocates and returns a new ServeMux.
|
||||
func NewServeMux() *ServeMux { return new(ServeMux) }
|
||||
```
|
||||
|
||||
`http.DefaultServeMux` is not a singleton. Nevertheless there is a case for things which are truly singletons because they can only represent a single thing. A good example of this is the file descriptors of a process; 0, 1, and 2 which represent stdin, stdout, and stderr respectively.
|
||||
|
||||
It doesn’t matter what names you give them, `1` is always stdout, and there can only ever be one file descriptor `1`. Thus these two operations are identical:
|
||||
|
||||
```
|
||||
fmt.Fprintf(os.Stdout, "Hello dotGo\n")
|
||||
syscall.Write(1, []byte("Hello dotGo\n"))
|
||||
```
|
||||
|
||||
So let’s look at how the `os` package defines `Stdin`, `Stdout`, and `Stderr`:
|
||||
|
||||
```
|
||||
package os
|
||||
|
||||
var (
|
||||
Stdin = NewFile(uintptr(syscall.Stdin), "/dev/stdin")
|
||||
Stdout = NewFile(uintptr(syscall.Stdout), "/dev/stdout")
|
||||
Stderr = NewFile(uintptr(syscall.Stderr), "/dev/stderr")
|
||||
)
|
||||
```
|
||||
|
||||
There are a few problems with this declaration. Firstly their type is `*os.File` not the respective `io.Reader` or `io.Writer` interfaces. People have long complained that this makes replacing them with alternatives problematic. However the notion of replacing these variables is precisely the point of this digression. Can you safely change the value of `os.Stdout` once your program is running without causing a data race?
|
||||
|
||||
I argue that, in the general case, you cannot. In general, if something is unsafe to do, as programmers we shouldn’t let our users think that it is safe, [lest they begin to depend on that behaviour][3].
|
||||
|
||||
Could we change the definition of `os.Stdout` and friends so that they retain the observable behaviour of reading and writing, but remain immutable? It turns out, we can do this easily with constants.
|
||||
|
||||
```
|
||||
type readfd int
|
||||
|
||||
func (r readfd) Read(buf []byte) (int, error) {
|
||||
return syscall.Read(int(r), buf)
|
||||
}
|
||||
|
||||
type writefd int
|
||||
|
||||
func (w writefd) Write(buf []byte) (int, error) {
|
||||
return syscall.Write(int(w), buf)
|
||||
}
|
||||
|
||||
const (
|
||||
Stdin = readfd(0)
|
||||
Stdout = writefd(1)
|
||||
Stderr = writefd(2)
|
||||
)
|
||||
|
||||
func main() {
|
||||
fmt.Fprintf(Stdout, "Hello world")
|
||||
}
|
||||
```
|
||||
|
||||
In fact this change causes only one compilation failure in the standard library.
|
||||
|
||||
## Sentinel error values
|
||||
|
||||
Another case of things which look like constants but really aren’t, are sentinel error values. `io.EOF`, `sql.ErrNoRows`, `crypto/x509.ErrUnsupportedAlgorithm`, and so on are all examples of sentinel error values. They all fall into a category of _expected_ errors, and because they are expected, you’re expected to check for them.
|
||||
|
||||
To compare the error you have with the one you were expecting, you need to import the package that defines that error. Because, by definition, sentinel errors are exported public variables, any code that imports, for example, the `io` package could change the value of `io.EOF`.
|
||||
|
||||
```
|
||||
package nelson
|
||||
|
||||
import "io"
|
||||
|
||||
func init() {
|
||||
io.EOF = nil // haha!
|
||||
}
|
||||
```
|
||||
|
||||
I’ll say that again. If I know the name of `io.EOF` I can import the package that declares it, which I must if I want to compare it to my error, and thus I could change `io.EOF`‘s value. Historically convention and a bit of dumb luck discourages people from writing code that does this, but technically there is nothing to prevent you from doing so.
|
||||
|
||||
Replacing `io.EOF` is probably going to be detected almost immediately. But replacing a less frequently used sentinel error may cause some interesting side effects:
|
||||
|
||||
```
|
||||
package innocent
|
||||
|
||||
import "crypto/rsa"
|
||||
|
||||
func init() {
|
||||
rsa.ErrVerification = nil // 🤔
|
||||
}
|
||||
```
|
||||
|
||||
If you were hoping the race detector will spot this subterfuge, I suggest you talk to the folks writing testing frameworks who replace `os.Stdout` without it triggering the race detector.
|
||||
|
||||
## Fungibility
|
||||
|
||||
I want to digress for a moment to talk about _the_ most important property of constants. Constants aren’t just immutable; it’s not enough that we cannot overwrite their declaration.
|
||||
Constants are _fungible_. This is a tremendously important property that doesn’t get nearly enough attention.
|
||||
|
||||
Fungible means identical. Money is a great example of fungibility. If you were to lend me 10 bucks, and I later pay you back, the fact that you gave me a 10 dollar note and I returned to you 10 one dollar bills, with respect to its operation as a financial instrument, is irrelevant. Things which are fungible are by definition equal and equality is a powerful property we can leverage for our programs.
|
||||
|
||||
```
|
||||
var myEOF = errors.New("EOF") // io/io.go line 38
|
||||
fmt.Println(myEOF == io.EOF) // false
|
||||
```
|
||||
|
||||
Putting aside the effect of malicious actors in your code base the key design challenge with sentinel errors is they behave like _singletons_ , not _constants_. Even if we follow the exact procedure used by the `io` package to create our own EOF value, `myEOF` and `io.EOF` are not equal. `myEOF` and `io.EOF` are not fungible, they cannot be interchanged. Programs can spot the difference.
|
||||
|
||||
When you combine the lack of immutability, the lack of fungibility, the lack of equality, you have a set of weird behaviours stemming from the fact that sentinel error values in Go are not constant expressions. But what if they were?
|
||||
|
||||
## Constant errors
|
||||
|
||||
Ideally a sentinel error value should behave as a constant. It should be immutable and fungible. Let’s recap how the built in `error` interface works in Go.
|
||||
|
||||
```
|
||||
type error interface {
|
||||
Error() string
|
||||
}
|
||||
```
|
||||
|
||||
Any type with an `Error() string` method fulfils the `error` interface. This includes user defined types, it includes types derived from primitives like string, and it includes constant strings. With that background, consider this error implementation:
|
||||
|
||||
```
|
||||
type Error string
|
||||
|
||||
func (e Error) Error() string {
|
||||
return string(e)
|
||||
}
|
||||
```
|
||||
|
||||
We can use this error type as a constant expression:
|
||||
|
||||
```
|
||||
const err = Error("EOF")
|
||||
```
|
||||
|
||||
By contrast, `errors.errorString` is a struct, and a compact struct literal initialiser is not a constant expression, so it cannot be used:
|
||||
|
||||
```
|
||||
const err2 = errors.errorString{"EOF"} // doesn't compile
|
||||
```
|
||||
|
||||
As constants of this `Error` type are not variables, they are immutable.
|
||||
|
||||
```
|
||||
const err = Error("EOF")
|
||||
err = Error("not EOF") // doesn't compile
|
||||
```
|
||||
|
||||
Additionally, two constant strings are always equal if their contents are equal:
|
||||
|
||||
```
|
||||
const str1 = "EOF"
|
||||
const str2 = "EOF"
|
||||
fmt.Println(str1 == str2) // true
|
||||
```
|
||||
|
||||
which means two constants of a type derived from string with the same contents are also equal.
|
||||
|
||||
```
|
||||
type Error string
|
||||
|
||||
const err1 = Error("EOF")
|
||||
const err2 = Error("EOF")
|
||||
fmt.Println(err1 == err2) // true
|
||||
|
||||
```
|
||||
Said another way, equal constant `Error` values are the same, in the way that the literal constant `1` is the same as every other literal constant `1`.
|
||||
|
||||
Now we have all the pieces we need to make sentinel errors, like `io.EOF`, and `rsa.ErrVerification`, immutable, fungible, constant expressions.
|
||||
```
|
||||
|
||||
% git diff
|
||||
diff --git a/src/io/io.go b/src/io/io.go
|
||||
inde
|
@ -0,0 +1,204 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Step by Step Zorin OS 15 Installation Guide with Screenshots)
|
||||
[#]: via: (https://www.linuxtechi.com/zorin-os-15-installation-guide-screenshots/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
Step by Step Zorin OS 15 Installation Guide with Screenshots
|
||||
======
|
||||
|
||||
Good news for all the Zorin users out there! Zorin has launched the latest version (Zorin OS 15) of its Ubuntu-based Linux distro. This version is based on Ubuntu 18.04.2. Since its initial launch in July 2009, it is estimated that this popular distribution has reached more than 17 million downloads. Zorin is renowned for creating a distribution for beginner-level users, and the all-new Zorin OS 15 comes packed with a lot of goodies that will surely make Zorin OS lovers happy. Let’s see some of the major enhancements made in the latest version.
|
||||
|
||||
### New Features of Zorin OS 15
|
||||
|
||||
Zorin OS has always amazed users with a fresh set of features with every release, and Zorin OS 15 is no exception, as it comes with a lot of new features, outlined below:
|
||||
|
||||
**Enhanced User Experience**
|
||||
|
||||
The moment you look at Zorin OS 15, you may wonder whether it is a Linux distro at all, because it looks a lot like Windows. According to Zorin, this is deliberate: it wants Windows users to move to Linux in a more familiar, user-friendly manner. It features a Windows-like Start menu, quick app launchers, a traditional taskbar section, a system tray, etc.
|
||||
|
||||
**Zorin Connect**
|
||||
|
||||
Another major highlight of Zorin OS 15 is the ability to integrate your Android Smartphones seamlessly with your desktop using the Zorin Connect application. With your phone connected, you can share music, videos and other files between your phone and desktop. You can even use your phone as a mouse to control the desktop. You can also easily control the media playback in your desktop from your phone itself. Quickly reply to all your messages and notifications sent to your phone from your desktop.
|
||||
|
||||
**New GTK Theme**
|
||||
|
||||
Zorin OS 15 ships with an all new GTK theme that has been exclusively built for this distro, and the theme is available in 6 different colors along with the hugely popular dark theme. Another highlight is that the OS automatically detects the time of the day and changes the desktop theme accordingly. For example, around sunset it switches to a dark theme, whereas in the morning it switches back to a bright theme automatically.
|
||||
|
||||
**Other New Features:**
|
||||
|
||||
Zorin OS 15 comes packed with a lot of new features including:
|
||||
|
||||
* Compatible with Thunderbolt 3.0 devices
|
||||
* Supports color emojis
|
||||
* Comes with an upgraded Linux Kernel 4.18
|
||||
* Customized settings available for application menu and task bar
|
||||
* System font changed to Inter
|
||||
* Supports renaming bulk files
|
||||
|
||||
|
||||
|
||||
### Minimum system requirements for Zorin OS 15 (Core):
|
||||
|
||||
* Dual Core 64-bit (1GHZ)
|
||||
* 2 GB RAM
|
||||
* 10 GB free disk space
|
||||
* Internet Connection Optional
|
||||
* Display (800×600)
|
||||
|
||||
|
||||
|
||||
### Step by Step Guide to Install Zorin OS 15 (Core)
|
||||
|
||||
Before you start installing Zorin OS 15, ensure you have a copy of the Zorin OS 15 ISO downloaded on your system. If not, download it from the official [Zorin OS 15][1] website. Remember, this Linux distribution is available in 4 versions:
|
||||
|
||||
* Ultimate (Paid Version)
|
||||
* Core (Free Version)
|
||||
* Lite (Free Version)
|
||||
* Education (Free Version)
|
||||
|
||||
|
||||
|
||||
Note: In this article I will demonstrate the installation steps for Zorin OS 15 Core.
|
||||
|
||||
### Step 1) Create Zorin OS 15 Bootable USB Disk
|
||||
|
||||
Once you have downloaded Zorin OS 15, copy the ISO onto a USB disk and create a bootable disk (a command-line sketch follows below). Change your system settings to boot from the USB disk and restart your system. Once your system restarts, you will see the screen shown below. Click “ **Install or Try Zorin OS** ”
|
||||
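If you are preparing the USB disk from an existing Linux machine, a minimal sketch looks like the following; the ISO filename and the /dev/sdb device name are assumptions, so verify the device with lsblk first, since dd will overwrite it completely:

```
$ lsblk                                   # identify the USB disk (assumed to be /dev/sdb here)
$ sudo dd if=Zorin-OS-15-Core-64-bit.iso of=/dev/sdb bs=4M status=progress conv=fsync
```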
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-Zorin-OS15-option.jpg>
|
||||
|
||||
### Step 2) Choose Install Zorin OS
|
||||
|
||||
On the next screen, you will be given the option either to install Zorin OS 15 or to try Zorin OS. Click “ **Install Zorin OS** ” to continue the installation process.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-Zorin-OS-15-on-System.jpg>
|
||||
|
||||
### Step 3) Choose Keyboard Layout
|
||||
|
||||
The next step is to choose your keyboard layout. By default, English (US) is selected; if you want a different layout, choose it and click “ **Continue** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Choose-Keyboard-Layout-Zorinos-15.jpg>
|
||||
|
||||
### Step 4) Download Updates and Other Software
|
||||
|
||||
On the next screen, you will be asked whether you want to download updates while installing Zorin OS and whether to install third-party applications. If your system is connected to the internet you can select both of these options, but doing so increases the installation time considerably. If you don’t want to install updates and third-party software during the installation, untick both options and click “Continue”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-Updates-third-party-softwares-Zorin-OS15-Installation.jpg>
|
||||
|
||||
### Step 5) Choose Zorin OS 15 Installation Method
|
||||
|
||||
If you are new to Linux, want a fresh installation, and don’t want to customize partitions, choose the option “ **Erase disk and install Zorin OS** ”
|
||||
|
||||
If you want to create a custom partition layout for Zorin OS, choose “ **Something else** ”. In this tutorial I will demonstrate how to create a custom partition scheme for the Zorin OS 15 installation.
|
||||
|
||||
So, choose the “ **Something else** ” option and then click on Continue
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Choose-Something-else-option-Zorin-OS15-Installation.jpg>
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Disk-for-Zorin-OS15-Installation.jpg>
|
||||
|
||||
As we can see, we have around 42 GB of disk space available for Zorin OS. We will be creating the following partitions:
|
||||
|
||||
* /boot = 2 GB (ext4 file system)
|
||||
* /home = 20 GB (ext4 file system)
|
||||
* / = 10 GB (ext4 file system)
|
||||
* /var = 7 GB (ext4 file system)
|
||||
* Swap = 2 GB (swap area)
|
||||
|
||||
|
||||
|
||||
To start creating partitions, first click on “ **New Partition Table** ”; the installer will warn that it is going to create an empty partition table. Click on Continue.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/create-empty-partition-zorin-os15-installation.jpg>
|
||||
|
||||
On the next screen we will see that we now have 42 GB of free space on the disk (/dev/sda), so let’s create our first partition, /boot.
|
||||
|
||||
Select the free space, then click on the + symbol, and specify the partition size as 2048 MB, the file system type as ext4, and the mount point as /boot.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/boot-partiton-during-zorin-os15-installation.jpg>
|
||||
|
||||
Click on OK
|
||||
|
||||
Now create our next partition /home of size 20 GB (20480 MB),
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/home-partition-zorin-os15-installation.jpg>
|
||||
|
||||
Similarly, create the next two partitions, / and /var, of size 10 GB and 7 GB respectively.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/slash-partition-zorin-os15-installation.jpg>
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/var-partition-zorin-os15-installation.jpg>
|
||||
|
||||
Let’s create our last partition, a swap area of size 2 GB
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/swap-partition-Zorin-OS15-Installation.jpg>
|
||||
|
||||
Click on OK
|
||||
|
||||
Choose the “ **Install Now** ” option in the next window.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-now-option-zorin-os15.jpg>
|
||||
|
||||
In the next window, choose “Continue” to write the changes to disk and proceed with the installation.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Write-Changes-to-disk-zorin-os15.jpg>
|
||||
|
||||
### Step 6) Choose Your Preferred Location
|
||||
|
||||
In the next screen, you will be asked to choose your location and click “Continue”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/TimeZone-Zorin-OS15-Installation.jpg>
|
||||
|
||||
### Step 7) Provide User Credentials
|
||||
|
||||
On the next screen, you’ll be asked to enter your user credentials, including your name, computer name,
|
||||
|
||||
username, and password. Once you are done, click “Continue” to proceed with the installation process.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/User-Credentails-During-Zorin-OS15-Installation.jpg>
|
||||
|
||||
### Step 8) Installing Zorin OS 15
|
||||
|
||||
Once you click Continue, you can see that Zorin OS 15 starts installing; it may take some time to complete the installation process.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Installation-Progress-Zorin-OS15.jpg>
|
||||
|
||||
### Step 9) Restart your system after Successful Installation
|
||||
|
||||
Once the installation process is complete, the installer will ask you to restart your computer. Hit “ **Restart Now** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Zorin-OS15-Installation-Completed.jpg>
|
||||
|
||||
### Step 10) Login to Zorin OS 15
|
||||
|
||||
Once the system restarts, you will be asked to log in to the system using the login credentials provided earlier.
|
||||
|
||||
Note: Don’t forget to change the boot medium back in the BIOS so that the system boots from the hard disk.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Login-Screen-Zorin-OS15.jpg>
|
||||
|
||||
### Step 11) Zorin OS 15 Welcome Screen
|
||||
|
||||
Once your login is successful, you can see the Zorin OS 15 welcome screen. Now you can start exploring all the incredible features of Zorin OS 15.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Desktop-Screen-Zorin-OS15.jpg>
|
||||
|
||||
That’s all from this tutorial; please do share your feedback and comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/zorin-os-15-installation-guide-screenshots/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://zorinos.com/download/
|
@ -1,171 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Modrisco)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to send email from the Linux command line)
|
||||
[#]: via: (https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
How to send email from the Linux command line
|
||||
======
|
||||
Linux offers several commands that allow you to send email from the command line. Here's a look at some that offer interesting options.
|
||||
![Molnia/iStock][1]
|
||||
|
||||
There are several ways to send email from the Linux command line. Some are very simple and others more complicated, but offer some very useful features. The choice depends on what you want to do -– whether you want to get a quick message off to a co-worker or send a more complicated message with an attachment to a large group of people. Here's a look at some of the options:
|
||||
|
||||
### mail
|
||||
|
||||
The easiest way to send a simple message from the Linux command line is to use the **mail** command. Maybe you need to remind your boss that you're leaving a little early that day. You could use a command like this one:
|
||||
|
||||
```
|
||||
$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" myboss
|
||||
```
|
||||
|
||||
**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
|
||||
|
||||
Another option is to grab your message text from a file that contains the content you want to send:
|
||||
|
||||
```
|
||||
$ mail -s "Reminder:Leaving early" myboss < reason4leaving
|
||||
```
|
||||
|
||||
In both cases, the -s option allows you to provide a subject line for your message.
|
||||
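Depending on which mail implementation is installed (bsd-mailx and heirloom mailx both support these flags), you can also add carbon-copy recipients with -c, for example:

```
$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" -c teamlead myboss
```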
|
||||
### sendmail
|
||||
|
||||
Using **sendmail**, you can send a quick message (with no subject) using a command like this (replacing "recip" with your intended recipient):
|
||||
|
||||
```
|
||||
$ echo "leaving now" | sendmail recip
|
||||
```
|
||||
|
||||
You can send just a subject line (with no message content) with a command like this:
|
||||
|
||||
```
|
||||
$ echo "Subject: leaving now" | sendmail recip
|
||||
```
|
||||
|
||||
You can also use sendmail on the command line to send a message complete with a subject line. However, when using this approach, you would add your subject line to the file you intend to send as in this example file:
|
||||
|
||||
```
|
||||
Subject: Requested lyrics
|
||||
I would just like to say that, in my opinion, longer hair and other flamboyant
|
||||
affectations of appearance are nothing more ...
|
||||
```
|
||||
|
||||
Then you would send the file like this (where the lyrics file contains your subject line and text):
|
||||
|
||||
```
|
||||
$ sendmail recip < lyrics
|
||||
```
|
||||
|
||||
Sendmail can be quite verbose in its output. If you're desperately curious and want to see the interchange between the sending and receiving systems, add the -v (verbose) option:
|
||||
|
||||
```
|
||||
$ sendmail -v recip@emailsite.com < lyrics
|
||||
```
|
||||
|
||||
### mutt
|
||||
|
||||
An especially nice tool for command line emailing is the **mutt** command, though you will likely have to install it first. Mutt has a convenient advantage in that it can allow you to include attachments.
|
||||
|
||||
To use mutt to send a quick message:
|
||||
|
||||
```
|
||||
$ echo "Please check last night's backups" | mutt -s "backup check" recip
|
||||
```
|
||||
|
||||
To get content from a file:
|
||||
|
||||
```
|
||||
$ mutt -s "Agenda" recip < agenda
|
||||
```
|
||||
|
||||
To add an attachment with mutt, use the -a option. You can even add more than one – as shown in this command:
|
||||
|
||||
```
|
||||
$ mutt -s "Agenda" recip -a agenda -a speakers < msg
|
||||
```
|
||||
|
||||
In the command above, the "msg" file includes content for the email. If you don't have any additional content to provide, you can do this instead:
|
||||
|
||||
```
|
||||
$ echo "" | mutt -s "Agenda" recip -a agenda -a speakers
|
||||
```
|
||||
|
||||
The other useful option that you have with mutt is that it provides a way to send carbon copies (using the -c option) and blind carbon copies (using the -b option).
|
||||
|
||||
```
|
||||
$ mutt -s "Minutes from last meeting" recip@somesite.com -c myboss < mins
|
||||
```
|
||||
|
||||
### telnet
|
||||
|
||||
If you want to get deep into the details of sending email, you can use **telnet** to carry on the email exchange operation, but you'll need to, as they say, "learn the lingo." Mail servers expect a sequence of commands that include things like introducing yourself ( **EHLO** command), providing the email sender ( **MAIL FROM** command), specifying the email recipient ( **RCPT TO** command), and then adding the message ( **DATA** ) and ending the message with a "." as the only character on the line. Not every email server will respond to these requests. This approach is generally used only for troubleshooting.
|
||||
|
||||
```
|
||||
$ telnet emailsite.org 25
|
||||
Trying 192.168.0.12...
|
||||
Connected to emailsite.
|
||||
Escape character is '^]'.
|
||||
220 localhost ESMTP Sendmail 8.15.2/8.15.2/Debian-12; Wed, 12 Jun 2019 16:32:13 -0400; (No UCE/UBE) logging access from: mysite(OK)-mysite [192.168.0.12]
|
||||
EHLO mysite.org <== introduce yourself
|
||||
250-localhost Hello mysite [127.0.0.1], pleased to meet you
|
||||
250-ENHANCEDSTATUSCODES
|
||||
250-PIPELINING
|
||||
250-EXPN
|
||||
250-VERB
|
||||
250-8BITMIME
|
||||
250-SIZE
|
||||
250-DSN
|
||||
250-ETRN
|
||||
250-AUTH DIGEST-MD5 CRAM-MD5
|
||||
250-DELIVERBY
|
||||
250 HELP
|
||||
MAIL FROM: me@mysite.org <== specify sender
|
||||
250 2.1.0 shs@mysite.org... Sender ok
|
||||
RCPT TO: recip <== specify recipient
|
||||
250 2.1.5 recip... Recipient ok
|
||||
DATA <== start message
|
||||
354 Enter mail, end with "." on a line by itself
|
||||
This is a test message. Please deliver it for me.
|
||||
. <== end message
|
||||
250 2.0.0 x5CKWDds029287 Message accepted for delivery
|
||||
quit <== end exchange
|
||||
```
|
||||
|
||||
### Sending email to multiple recipients
|
||||
|
||||
If you want to send email from the Linux command line to a large group of recipients, you can always use a loop to make the job easier as in this example using mutt.
|
||||
|
||||
```
|
||||
$ for recip in `cat recips`
|
||||
do
|
||||
mutt -s "Minutes from May meeting" $recip < May_minutes
|
||||
done
|
||||
```
|
||||
|
||||
### Wrap-up
|
||||
|
||||
There are quite a few ways to send email from the Linux command line, and some of these tools provide a number of useful options.
|
||||
|
||||
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/Modrisco)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2017/08/email_image_blue-100732096-large.jpg
|
||||
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
|
||||
[3]: https://www.facebook.com/NetworkWorld/
|
||||
[4]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,173 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers)
|
||||
[#]: via: (https://www.linuxtechi.com/vlan-tagged-nic-ethernet-card-centos-rhel-servers/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers
|
||||
======
|
||||
|
||||
There are some scenarios where we want to assign multiple IPs from different **VLANs** on the same Ethernet card (NIC) on Linux servers ( **CentOS** / **RHEL** ). This can be done by enabling VLAN tagged interfaces. But for this to happen, we must first make sure that multiple VLANs are attached to the port on the switch; in other words, we should configure a trunk port by mapping multiple VLANs to it on the switch.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/VLAN-Tagged-NIC-Linux-Server.jpg>
|
||||
|
||||
Let’s assume we have a Linux server with two Ethernet cards (enp0s3 & enp0s8): the first NIC ( **enp0s3** ) will be used for data traffic and the second NIC ( **enp0s8** ) will be used for control / management traffic. For data traffic I will be using multiple VLANs (that is, I will assign multiple IPs from different VLANs to the data traffic Ethernet card).
|
||||
|
||||
I am assuming the switch port connected to my server’s data NIC is configured as a trunk port, with multiple VLANs mapped to it.
|
||||
|
||||
The following VLANs are mapped to the data traffic Ethernet card (NIC):
|
||||
|
||||
* VLAN ID (200), VLAN N/W = 172.168.10.0/24
|
||||
* VLAN ID (300), VLAN N/W = 172.168.20.0/24
|
||||
|
||||
|
||||
|
||||
To use VLAN tagged interfaces on CentOS 7 / RHEL 7 / CentOS 8 / RHEL 8 systems, the [kernel module][1] **8021q** must be loaded.
|
||||
|
||||
Use the following command to load the kernel module “8021q”
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# lsmod | grep -i 8021q
|
||||
[root@linuxtechi ~]# modprobe --first-time 8021q
|
||||
[root@linuxtechi ~]# lsmod | grep -i 8021q
|
||||
8021q 29022 0
|
||||
garp 14384 1 8021q
|
||||
mrp 18542 1 8021q
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Use the modinfo command below to display information about the kernel module “8021q”:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# modinfo 8021q
|
||||
filename: /lib/modules/3.10.0-327.el7.x86_64/kernel/net/8021q/8021q.ko
|
||||
version: 1.8
|
||||
license: GPL
|
||||
alias: rtnl-link-vlan
|
||||
rhelversion: 7.2
|
||||
srcversion: 2E63BD725D9DC11C7DA6190
|
||||
depends: mrp,garp
|
||||
intree: Y
|
||||
vermagic: 3.10.0-327.el7.x86_64 SMP mod_unload modversions
|
||||
signer: CentOS Linux kernel signing key
|
||||
sig_key: 79:AD:88:6A:11:3C:A0:22:35:26:33:6C:0F:82:5B:8A:94:29:6A:B3
|
||||
sig_hashalgo: sha256
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
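Note that a module loaded with modprobe does not survive a reboot. On systemd-based systems such as CentOS/RHEL 7 and 8, a common way (sketched below) to load it automatically at boot is a modules-load.d entry:

```
[root@linuxtechi ~]# echo "8021q" > /etc/modules-load.d/8021q.conf
```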
|
||||
Now tag (or map) VLANs 200 and 300 to the NIC enp0s3 using the [ip command][2]:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ip link add link enp0s3 name enp0s3.200 type vlan id 200
|
||||
```
|
||||
|
||||
Bring up the interface using the ip command below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ip link set dev enp0s3.200 up
|
||||
```
|
||||
|
||||
Similarly, map VLAN 300 to the NIC enp0s3:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ip link add link enp0s3 name enp0s3.300 type vlan id 300
|
||||
[root@linuxtechi ~]# ip link set dev enp0s3.300 up
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Now view the tagged interface status using the ip command:
|
||||
|
||||
[![tagged-interface-ip-command][3]][4]
|
||||
|
||||
Now we can assign IP addresses to the tagged interfaces from their respective VLANs using the ip commands below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ip addr add 172.168.10.51/24 dev enp0s3.200
|
||||
[root@linuxtechi ~]# ip addr add 172.168.20.51/24 dev enp0s3.300
|
||||
```
|
||||
|
||||
Use the ip command below to check whether the IPs have been assigned to the tagged interfaces.
|
||||
|
||||
![ip-address-tagged-nic][5]
|
||||
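You can also inspect the VLAN details of a tagged interface directly; the output below is trimmed to the relevant line:

```
[root@linuxtechi ~]# ip -d link show enp0s3.200 | grep vlan
    vlan protocol 802.1Q id 200 <REORDER_HDR>
```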
|
||||
All the above changes made via ip commands are not persistent across a reboot; these tagged interfaces will not be available after a reboot or a network service restart.
|
||||
|
||||
So, to make the tagged interfaces persistent across reboots, use the interface **ifcfg files**.
|
||||
|
||||
Edit interface (enp0s3) file “ **/etc/sysconfig/network-scripts/ifcfg-enp0s3** ” and add the following content,
|
||||
|
||||
Note: Replace the interface name with the one that suits your environment.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
|
||||
TYPE=Ethernet
|
||||
DEVICE=enp0s3
|
||||
BOOTPROTO=none
|
||||
ONBOOT=yes
|
||||
```
|
||||
|
||||
Save & exit the file
|
||||
|
||||
Create tagged interface file for VLAN id 200 as “ **/etc/sysconfig/network-scripts/ifcfg-enp0s3.200** ” and add the following contents to it.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3.200
|
||||
DEVICE=enp0s3.200
|
||||
BOOTPROTO=none
|
||||
ONBOOT=yes
|
||||
IPADDR=172.168.10.51
|
||||
PREFIX=24
|
||||
NETWORK=172.168.10.0
|
||||
VLAN=yes
|
||||
```
|
||||
|
||||
Save & exit the file
|
||||
|
||||
Similarly, create the interface file for VLAN id 300 as “/etc/sysconfig/network-scripts/ifcfg-enp0s3.300” and add the following contents to it:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3.300
|
||||
DEVICE=enp0s3.300
|
||||
BOOTPROTO=none
|
||||
ONBOOT=yes
|
||||
IPADDR=172.168.20.51
|
||||
PREFIX=24
|
||||
NETWORK=172.168.20.0
|
||||
VLAN=yes
|
||||
```
|
||||
|
||||
Save and exit the file, and then restart the network service using the command below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# systemctl restart network
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Now verify that the tagged interfaces are configured and up and running, using the ip command:
|
||||
|
||||
![tagged-interface-status-ip-command-linux-server][6]
|
||||
|
||||
That’s all from this article. I hope you now have an idea of how to configure and enable VLAN tagged interfaces on CentOS 7/8 and RHEL 7/8 servers. Please do share your feedback and comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/vlan-tagged-nic-ethernet-card-centos-rhel-servers/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/how-to-manage-kernel-modules-in-linux/
|
||||
[2]: https://www.linuxtechi.com/ip-command-examples-for-linux-users/
|
||||
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-ip-command-1024x444.jpg
|
||||
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-ip-command.jpg
|
||||
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/ip-address-tagged-nic-1024x343.jpg
|
||||
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-status-ip-command-linux-server-1024x656.jpg
|
118
sources/tech/20190618 A beginner-s guide to Linux permissions.md
Normal file
@ -0,0 +1,118 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A beginner's guide to Linux permissions)
[#]: via: (https://opensource.com/article/19/6/understanding-linux-permissions)
[#]: author: (Bryant Son https://opensource.com/users/brson/users/greg-p/users/tj)

A beginner's guide to Linux permissions
======
Linux security permissions designate who can do what with a file or directory.

![Hand putting a Linux file folder into a drawer][1]

One of the main benefits of Linux systems is that they are known to be less prone to security vulnerabilities and exploits than other systems. Linux definitely gives users more flexibility and granular control over its file systems' security permissions. This may imply that it's critical for Linux users to understand security permissions. That isn't necessarily true, but it's still wise for beginning users to understand the basics of Linux permissions.

### View Linux security permissions

To start learning about Linux permissions, imagine we have a newly created directory called **PermissionDemo**. Run **cd** into the directory and use the **ls -l** command to view the Linux security permissions. If you want to sort the contents by time modified, add the **-t** option:

```
ls -lt
```

Since there are no files inside this new directory, this command returns nothing.

![No output from ls -l command][2]

To learn more about the **ls** command's options, access its man page by entering **man ls** on the command line.

![ls man page][3]

Now, let's create two files with empty content, **cat.txt** and **dog.txt**; this is easy to do using the **touch** command. Let's also create an empty directory called **Pets** with the **mkdir** command. We can use the **ls -l** command again to see the permissions for these new files; a sketch of the same steps follows the screenshot below.

![Creating new files and directory][4]
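
If you prefer to type along rather than read the screenshot, a minimal sketch of those steps (using the file and directory names from above) looks like this:

```
touch cat.txt dog.txt   # create two empty files
mkdir Pets              # create an empty directory
ls -l                   # list permissions for the new entries
```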

We need to pay attention to two sections of output from this command.

### Who has permission?

The first thing to examine indicates _who_ has permission to access the file/directory. Note the section highlighted in the red box below. The first column refers to the _user_ who has access, while the second column refers to the _group_ that has access.

![Output from ls -l command][5]

There are three main types of users: **user**, **group**, and **other** (essentially neither a user nor a group). There is one more: **all**, which means practically everyone.

![User types][6]

Because we are using **root** as the user, we can access any file or directory because **root** is the superuser. However, this is generally not the case, and you will probably be restricted to your username. A list of all users is stored in the **/etc/passwd** file.

![/etc/passwd file][7]

Groups are maintained in the **/etc/group** file.

![/etc/group file][8]

### What permissions do they have?

The other section of the output from **ls -l** that we need to pay attention to relates to enforcing permissions. Above, we confirmed that the owner and group of the files dog.txt and cat.txt and the directory Pets we created belong to the **root** account. We can use that information about who owns what to enforce permissions for the different user ownership types, as highlighted in the red box below.

![Enforcing permissions for different user ownership types][9]

We can dissect each line into five bits of information. The first part indicates whether it is a file or a directory; files are labeled with a **-** (hyphen), and directories are labeled with **d**. The next three parts refer to permissions for **user**, **group**, and **other**, respectively. The last part is a flag for the [**access-control list**][10] (ACL), a list of permissions for an object.

![Different Linux permissions][11]

Linux permission levels can be identified with letters or numbers. There are three privilege types:

  * **read:** r or 4
  * **write:** w or 2
  * **execute:** x or 1

![Privilege types][12]

The presence of each letter symbol (**r**, **w**, or **x**) means that the permission exists, while **-** indicates it does not. In the example below, the file is readable and writeable by the owner, only readable if the user belongs to the group, and readable and executable by anyone else. Converted to numeric notation, this is 645 (see the image below for an explanation of how this is calculated).

![Permission type example][13]
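
To see the two notations side by side in practice, here is a small sketch using **chmod** (the file name is just an example):

```
touch example.txt
chmod 645 example.txt             # numeric: 6=rw-, 4=r--, 5=r-x
ls -l example.txt                 # shows -rw-r--r-x

chmod u=rw,g=r,o=rx example.txt   # symbolic form of the same 645
```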

Here are a few more examples:

![Permission type examples][14]

Test your knowledge by going through the following exercises.

![Permission type examples][15]

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/understanding-linux-permissions

作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brson/users/greg-p/users/tj
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/sites/default/files/uploads/1_3.jpg (No output from ls -l command)
[3]: https://opensource.com/sites/default/files/uploads/1_man.jpg (ls man page)
[4]: https://opensource.com/sites/default/files/uploads/2_6.jpg (Creating new files and directory)
[5]: https://opensource.com/sites/default/files/uploads/3_2.jpg (Output from ls -l command)
[6]: https://opensource.com/sites/default/files/uploads/4_0.jpg (User types)
[7]: https://opensource.com/sites/default/files/uploads/linuxpermissions_4_passwd.jpg (/etc/passwd file)
[8]: https://opensource.com/sites/default/files/uploads/linuxpermissions_4_group.jpg (/etc/group file)
[9]: https://opensource.com/sites/default/files/uploads/linuxpermissions_5.jpg (Enforcing permissions for different user ownership types)
[10]: https://en.wikipedia.org/wiki/Access-control_list
[11]: https://opensource.com/sites/default/files/uploads/linuxpermissions_6.jpg (Different Linux permissions)
[12]: https://opensource.com/sites/default/files/uploads/linuxpermissions_7.jpg (Privilege types)
[13]: https://opensource.com/sites/default/files/uploads/linuxpermissions_8.jpg (Permission type example)
[14]: https://opensource.com/sites/default/files/uploads/linuxpermissions_9.jpg (Permission type examples)
[15]: https://opensource.com/sites/default/files/uploads/linuxpermissions_10.jpg (Permission type examples)
@ -0,0 +1,231 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use MapTool to build an interactive dungeon RPG)
[#]: via: (https://opensource.com/article/19/6/how-use-maptools)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

How to use MapTool to build an interactive dungeon RPG
======
By using MapTool, most of a game master's work is done well before a role-playing game begins.

![][1]

In my previous article on MapTool, I explained how to download, install, and configure your own private, [open source virtual tabletop][2] so you and your friends can play a role-playing game (RPG) together. [MapTool][3] is a complex application with lots of features, and this article demonstrates how a game master (GM) can make the most of it.

### Update JavaFX

MapTool requires JavaFX, but Java maintainers recently stopped bundling it in Java downloads. This means that, even if you have Java installed, you might not have JavaFX installed.

Some Linux distributions have a JavaFX package available, so if you try to run MapTool and get an error about JavaFX, download the latest self-contained version:

  * For [Ubuntu and other Debian-based systems][4]
  * For [Fedora and Red Hat-based systems][5]

### Build a campaign

The top-level file in MapTool is a campaign (.cmpgn) file. A campaign can contain all of the maps required by the game you're running. As your players progress through the campaign, everyone changes to the appropriate map and plays.

For that to go smoothly, you must do a little prep work.

First, you need the digital equivalents of miniatures: _tokens_ in MapTool terminology. Tokens are available from various sites, but the most prolific is [immortalnights.com/tokensite][6]. If you're still just trying out virtual tabletops and aren't ready to invest in digital art yet, you can get a stunning collection of starter tokens from immortalnights.com for $0.

You can add starter content to MapTool quickly and easily using its built-in resource importer. Go to the **File** menu and select **Add Resource to Library**.

In the **Add Resource to Library** dialogue box, select the RPTools tab, located at the bottom-left. This lists all the free art packs available from the RPTools server, tokens and maps alike. Click to download and import.

![Add Resource to Library dialogue][7]

You can import assets you already have on your computer by selecting files from the file system, using the same dialogue box.

MapTool resources appear in the Library panel. If your MapTool window has no Library panel, select **Library** in the **Window** menu to add one.

### Gather your maps

The next step in preparing for your game is to gather maps. Depending on what you're playing, that might mean you need to draw your maps, purchase a map pack, or just open a map bundled with a game module. If all you need is a generic dungeon, you can also download free maps from within MapTool's **Add Resource to Library**.

If you have a set of maps you intend to use often, you can import them as resources. If you are building a campaign you intend to use just once, you can quickly add any PNG or JPEG file as a **New Map** in the **Map** menu.

![Creating a new map][8]

Set the **Background** to a texture that roughly matches your map or to a neutral color.

Set the **Map** to your map graphics file.

Give your new map a unique **Name**. The map name is visible to your players, so keep it free of spoilers.

To switch between maps, click the **Select Map** button in the top-right corner of the MapTool window, and choose the map name in the drop-down menu that appears.

![Select a map][9]

Before you let your players loose on your map, you still have some important prep work to do.

### Adjust the grid size

Since most RPGs govern how far players can move during their turn, especially during combat, game maps are designed to a specific scale. The most common scale is one map square for every five feet. Most maps you download already have a grid drawn on them; if you're designing a map, you should draw on graph paper to keep your scale consistent. Whether your map graphic has a grid or not, MapTool doesn't know about it, but you can adjust the digital grid overlay so that your player tokens are constrained to squares along the grid.

MapTool doesn't show the grid by default, so go to the **Map** menu and select **Adjust grid**. This displays MapTool's grid lines, and your goal is to make MapTool's grid line up with the grid drawn onto your map graphic. If your map graphic doesn't have a grid, it may indicate its scale; a common scale is one inch per five feet, and you can usually assume 72 pixels is one inch (on a 72 DPI screen). While adjusting the grid, you can change the color of the grid lines for your own reference. Set the cell size in pixels, then click and drag to align MapTool's grid to your map's grid.

![Adjusting the grid][10]

If your map has no grid and you want the grid to remain visible after you adjust it, go to the **View** menu and select **Show Grid**.

### Add players and NPCs

To add a player character (PC), non-player character (NPC), or monster to your map, find an appropriate token in your **Library** panel, then drag and drop one onto your map. In the **New Token** dialogue box that appears, give the token a name and set it as an NPC or a PC, then click the OK button.

![Adding a player character to the map][11]

Once a token is on the map, try moving it to see how its movements are constrained to the grid you've designated. Make sure **Interaction Tools**, located in the toolbar just under the **File** menu, is selected.

![A token moving within the grid][12]

Each token added to a map has its own set of properties, including the direction it's facing, a light source, player ownership, conditions (such as incapacitated, prone, dead, and so on), and even class attributes. You can set as many or as few of these as you want, but at the very least you should right-click on each token and assign it ownership. Your players must be logged into your MapTool server for tokens to be assigned to them, but you can assign yourself NPCs and monsters in advance.

The right-click menu provides access to all important token-related functions, including setting which direction it's facing, setting a health bar and health value, a copy and paste function (enabling you and your players to move tokens from map to map), and much more.

![The token menu unlocks great, arcane power][13]

### Activate fog-of-war effects

If you're using maps exclusively to coordinate combat, you may not need a fog-of-war effect. But if you're using maps to help your players visualize a dungeon they're exploring, you probably don't want them to see the whole map before they've made significant moves, like opening locked doors or braving a decaying bridge over a pit of hot lava.

The fog-of-war effect is an invaluable tool for the GM, and it's essential to set it up early so that your players don't accidentally get a sneak peek at all the horrors your dungeon holds for them.

To activate fog-of-war on a map, go to the **Map** menu and select **Fog-of-War**. This blackens the entire screen for your players, so your next step is to reveal some portion of the map so that your players aren't faced with total darkness when they switch to the map. Fog-of-war is a subtractive process; it starts 100% dark, and as the players progress, you reveal new portions of the map using the fog-of-war drawing tools available in the **FOG** toolbar, just under the **View** menu.

You can reveal sections of the map in rectangle blocks, ovals, polygons, diamonds, and freehand shapes. Once you've selected the shape, click and release on the map, drag to define an area to reveal, and then click again.

![Fog-of-war as experienced by a player][14]

If you're accidentally overzealous with what you reveal, you have two ways to reverse what you've done: You can manually draw new fog, or you can reset all fog. The quicker method is to reset all fog with **Ctrl+Shift+A**. The more elegant solution is to press **Shift**, then click and release, drag to define an area of fog, and then click again. Instead of exposing an area of the map, this restores fog.

### Add lighting effects

Fog-of-war mimics the natural phenomenon of not being able to see areas of the world other than where you are, but lighting effects mimic the visibility player characters might experience in light and dark. For games like Pathfinder and Dungeons and Dragons 5e, visibility is governed by light sources matched against light conditions.

First, activate lighting by clicking on the **Map** menu, selecting **Vision**, and then choosing either Daylight or Night. Now lighting effects are active, but none of your players have light sources, so they have no visibility.

To assign light sources to players, right-click on the appropriate token and choose **Light Source**. Definitions exist for the D20 system (candle, lantern, torch, and so on) and in generic measurements.

With lighting effects active, players can expose portions of fog-of-war as their light sources get closer to unexposed fog. That's a great effect, but it doesn't make much sense when players can illuminate the next room right through a solid wall. To prevent that, you have to help MapTool differentiate between empty space and solid objects.

#### Define solid objects

Defining walls and other solid objects through which light should not pass is easier than it sounds. MapTool's **Vision Blocking Layer** (VBL) tools are basic and built to minimize prep time. There are several basic shapes available, including a basic rectangle and an oval. Draw these shapes over all the solid walls, doors, pillars, and other obstructions, and you have instant rudimentary physics.

![Setting up obstructions][15]

Now your players can move around the map with light sources without seeing what lurks in the shadows of a nearby pillar or behind an innocent-looking door… until it's too late!

![Lighting effects][16]

### Track initiative

Eventually, your players are going to stumble on something that wants to kill them, and that means combat. In most RPG systems, combat is played in rounds, with the order of turns decided by an _initiative_ roll. During combat, each player (in order of their initiative roll, from greatest to lowest) tries to defeat their foe, ideally dealing enough damage until their foe is left with no health points (HP). It's usually the most paperwork a GM has to do during a game because it involves tracking whose turn it is, how much damage each monster has taken, what amount of damage each monster's attack deals, what special abilities each monster has, and more. Luckily, MapTool can help with that—and better yet, you can extend it with a custom macro to do even more.

MapTool's basic initiative panel helps you keep track of whose turn it is and how many rounds have transpired so far. To view the initiative panel, go to the **Window** menu and select **Initiative**.

To add characters to the initiative order, right-click a token and select **Add To Initiative**. As you add each, the token and its label appear in the initiative panel in the order that you add them. If you make a mistake or someone holds their action and changes the initiative order, click and drag the tokens in the initiative panel to reorder them.

During combat, click the **Next** button in the top-left of the initiative panel to progress to the next character. As long as you use the **Next** button, the **Round** counter increments, helping you track how many rounds the combat has lasted (which is helpful when you have spells or effects that last only for a specific number of rounds).

Tracking combat order is helpful, but it's even better to track health points. Your players should be tracking their own health, but since everyone's staring at the same screen, it doesn't hurt to track it publicly in one place. An HP property and a graphical health bar (which you can activate) are assigned to each token, so that's all the infrastructure you need to track HP in MapTool, but doing it manually takes a lot of clicking around. Since MapTool can be extended with macros, it's trivial to bring all these components together for a smooth GM experience.

The first step is to activate graphical health bars for your tokens. To do this, right-click on each token and select **Edit**. In the **Edit Token** dialog box, click on the **State** tab and deselect the radio button next to **Hide**.

![Don't hide the health bar][17]

Do this for each token whose health you want to expose.

#### Write a macro

Macros have access to all token properties, so each token's HP can be tracked by reading and writing whatever value exists in the token's HP property. The graphical health bar, however, bases its state on a percentage, so for the health bars to be meaningful, your tokens also must have some value that represents 100% of their HP.

Go to the **Edit** menu and select **Campaign Properties** to globally add properties to tokens. In the **Campaign Properties** window, select the **Token Properties** tab and then click the **Basic** category in the left column. Under ***@HP**, add ***@MaxHP** and click the **Update** button. Click the **OK** button to close the window.

![Adding a property to all tokens][18]

Now right-click a token and select **Edit**. In the **Edit Token** window, select the **State** tab and enter a value for the token's maximum HP (from the player's character sheet).

To create a new macro, reveal the **Campaign** panel in the **Window** menu.

In the **Campaign** panel, right-click and select **Add New Macro**. A button labeled **New** appears in the panel. Right-click on the **New** button and select **Edit**.

Enter this code in the macro editor window:

```
[h:status = input(
  "hpAmount|0|Points",
  "hpType|Damage,Healing|Damage or heal?|RADIO|SELECT=0")]
[h:abort(status)]

[if(hpType == 0),CODE: {
  [h:HP = HP - hpAmount]
  [h:bar.Health = HP / MaxHP]
  [r:token.name] takes [r:hpAmount] damage.};
{
  [h:diff = MaxHP - HP]
  [h:HP = min(HP+hpAmount, MaxHP)]
  [h:bar.Health = HP / MaxHP]
  [r:token.name] gains [r:min(diff,hpAmount)] HP. };]
```

You can find full documentation of the functions available in MapTool macros, and their syntax, in the [RPTools wiki][19].

In the **Details** tab, enable **Include Label** and **Apply to Selected Tokens**, and leave all other values at their defaults. Give your macro a better name than **New**, such as **HPTracker**, then click **Apply** and **OK**.

![Macro editing][20]

Your campaign now has a new ability!

Select a token and click your **HPTracker** button. Enter the number of points to deduct from the token, click **OK**, and watch the health bar change to reflect the token's new state.

It may seem like a simple change, but in the heat of battle, this is a GM's greatest weapon.

### During the game

There's obviously a lot you can do with MapTool, but with a little prep work, most of your work is done well before you start playing. You can even create a template campaign by creating an empty campaign with only the macros and settings you want, so all you have to do is import maps and stat out tokens.

During the game, your workflow is mostly about revealing areas from fog-of-war and managing combat. The players can manage their own tokens, and your prep work takes care of everything else.

MapTool makes digital gaming easy and fun, and most importantly, it keeps it open source and self-contained. Level up today by learning MapTool and using it for your games.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/how-use-maptools

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice-keys_0.jpg?itok=PGEs3ZXa
[2]: https://opensource.com/article/18/5/maptool
[3]: https://github.com/RPTools/maptool
[4]: https://github.com/RPTools/maptool/releases
[5]: https://klaatu.fedorapeople.org/RPTools/maptool/
[6]: https://immortalnights.com/tokensite/
[7]: https://opensource.com/sites/default/files/uploads/maptool-resources.png (Add Resource to Library dialogue)
[8]: https://opensource.com/sites/default/files/uploads/map-properties.png (Creating a new map)
[9]: https://opensource.com/sites/default/files/uploads/map-select.jpg (Select a map)
[10]: https://opensource.com/sites/default/files/uploads/grid-adjust.jpg (Adjusting the grid)
[11]: https://opensource.com/sites/default/files/uploads/token-new.png (Adding a player character to the map)
[12]: https://opensource.com/sites/default/files/uploads/token-move.jpg (A token moving within the grid)
[13]: https://opensource.com/sites/default/files/uploads/token-menu.jpg (The token menu unlocks great, arcane power)
[14]: https://opensource.com/sites/default/files/uploads/fog-of-war.jpg (Fog-of-war as experienced by a player)
[15]: https://opensource.com/sites/default/files/uploads/vbl.jpg (Setting up obstructions)
[16]: https://opensource.com/sites/default/files/uploads/map-light.jpg (Lighting effects)
[17]: https://opensource.com/sites/default/files/uploads/token-edit.jpg (Don't hide the health bar)
[18]: https://opensource.com/sites/default/files/uploads/campaign-properties.jpg (Adding a property to all tokens)
[19]: https://lmwcs.com/rptools/wiki/Main_Page
[20]: https://opensource.com/sites/default/files/uploads/macro-detail.jpg (Macro editing)
@ -0,0 +1,342 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with OpenSSL: Cryptography basics)
[#]: via: (https://opensource.com/article/19/6/cryptography-basics-openssl-part-1)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen)

Getting started with OpenSSL: Cryptography basics
======
Need a primer on cryptography basics, especially regarding OpenSSL? Read on.

![A lock on the side of a building][1]

This article is the first of two on cryptography basics using [OpenSSL][2], a production-grade library and toolkit popular on Linux and other systems. (To install the most recent version of OpenSSL, see [here][3].) OpenSSL utilities are available at the command line, and programs can call functions from the OpenSSL libraries. The sample program for this article is in C, the source language for the OpenSSL libraries.

The two articles in this series cover—collectively—cryptographic hashes, digital signatures, encryption and decryption, and digital certificates. You can find the code and command-line examples in a ZIP file from [my website][4].

Let's start with a review of the SSL in the OpenSSL name.

### A quick history

[Secure Socket Layer (SSL)][5] is a cryptographic protocol that [Netscape][6] released in 1995. This protocol layer can sit atop HTTP, thereby providing the _S_ for _secure_ in HTTPS. The SSL protocol provides various security services, including two that are central in HTTPS:

  * Peer authentication (aka mutual challenge): Each side of a connection authenticates the identity of the other side. If Alice and Bob are to exchange messages over SSL, then each first authenticates the identity of the other.
  * Confidentiality: A sender encrypts messages before sending these over a channel. The receiver then decrypts each received message. This process safeguards network conversations. Even if eavesdropper Eve intercepts an encrypted message from Alice to Bob (a _man-in-the-middle_ attack), Eve finds it computationally infeasible to decrypt this message.

These two key SSL services, in turn, are tied to others that get less attention. For example, SSL supports message integrity, which assures that a received message is the same as the one sent. This feature is implemented with hash functions, which likewise come with the OpenSSL toolkit.

SSL is versioned (e.g., SSLv2 and SSLv3), and in 1999 Transport Layer Security (TLS) emerged as a similar protocol based upon SSLv3. TLSv1 and SSLv3 are alike, but not enough so to work together. Nonetheless, it is common to refer to SSL/TLS as if they are one and the same protocol. For example, OpenSSL functions often have SSL in the name even when TLS rather than SSL is in play. Furthermore, every invocation of an OpenSSL command-line utility begins with the term **openssl**.

The documentation for OpenSSL is spotty beyond the **man** pages, which become unwieldy given how big the OpenSSL toolkit is. Command-line and code examples are one way to bring the main topics into focus together. Let's start with a familiar example—accessing a web site with HTTPS—and use this example to pick apart the cryptographic pieces of interest.

### An HTTPS client

The **client** program shown here connects over HTTPS to Google:

```
/* compilation: gcc -o client client.c -lssl -lcrypto */

#include <stdio.h>
#include <stdlib.h>
#include <openssl/bio.h> /* BasicInput/Output streams */
#include <openssl/err.h> /* errors */
#include <openssl/ssl.h> /* core library */

#define BuffSize 1024

void report_and_exit(const char* msg) {
  [perror][7](msg);
  ERR_print_errors_fp(stderr);
  [exit][8](-1);
}

void init_ssl() {
  SSL_load_error_strings();
  SSL_library_init();
}

void cleanup(SSL_CTX* ctx, BIO* bio) {
  SSL_CTX_free(ctx);
  BIO_free_all(bio);
}

void secure_connect(const char* hostname) {
  char name[BuffSize];
  char request[BuffSize];
  char response[BuffSize];

  const SSL_METHOD* method = TLSv1_2_client_method();
  if (NULL == method) report_and_exit("TLSv1_2_client_method...");

  SSL_CTX* ctx = SSL_CTX_new(method);
  if (NULL == ctx) report_and_exit("SSL_CTX_new...");

  BIO* bio = BIO_new_ssl_connect(ctx);
  if (NULL == bio) report_and_exit("BIO_new_ssl_connect...");

  SSL* ssl = NULL;

  /* link bio channel, SSL session, and server endpoint */

  [sprintf][9](name, "%s:%s", hostname, "https");
  BIO_get_ssl(bio, &ssl);                  /* session */
  SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);  /* robustness */
  BIO_set_conn_hostname(bio, name);        /* prepare to connect */

  /* try to connect */
  if (BIO_do_connect(bio) <= 0) {
    cleanup(ctx, bio);
    report_and_exit("BIO_do_connect...");
  }

  /* verify truststore, check cert */
  if (!SSL_CTX_load_verify_locations(ctx,
        "/etc/ssl/certs/ca-certificates.crt", /* truststore */
        "/etc/ssl/certs/"))                   /* more truststore */
    report_and_exit("SSL_CTX_load_verify_locations...");

  long verify_flag = SSL_get_verify_result(ssl);
  if (verify_flag != X509_V_OK)
    [fprintf][10](stderr,
            "##### Certificate verification error (%i) but continuing...\n",
            (int) verify_flag);

  /* now fetch the homepage as sample data */
  [sprintf][9](request,
          "GET / HTTP/1.1\x0D\x0AHost: %s\x0D\x0A\x43onnection: Close\x0D\x0A\x0D\x0A",
          hostname);
  BIO_puts(bio, request);

  /* read HTTP response from server and print to stdout */
  while (1) {
    [memset][11](response, '\0', sizeof(response));
    int n = BIO_read(bio, response, BuffSize);
    if (n <= 0) break; /* 0 is end-of-stream, < 0 is an error */
    [puts][12](response);
  }

  cleanup(ctx, bio);
}

int main() {
  init_ssl();

  const char* hostname = "www.google.com:443";
  [fprintf][10](stderr, "Trying an HTTPS connection to %s...\n", hostname);
  secure_connect(hostname);

  return 0;
}
```

This program can be compiled and executed from the command line (note the lowercase L in **-lssl** and **-lcrypto**):

**gcc -o client client.c -lssl -lcrypto**
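
To actually build and run it, the OpenSSL development headers must be installed; on Debian-family systems the package is typically libssl-dev (the package name is an assumption for your distribution). A minimal session:

```
sudo apt-get install libssl-dev         # Debian/Ubuntu; package name may differ elsewhere
gcc -o client client.c -lssl -lcrypto   # compile and link against OpenSSL
./client                                # writes the fetched Google homepage to stdout
```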

This program tries to open a secure connection to the web site [www.google.com][13]. As part of the TLS handshake with the Google web server, the **client** program receives one or more digital certificates, which the program tries (but, on my system, fails) to verify. Nonetheless, the **client** program goes on to fetch the Google homepage through the secure channel. This program depends on the security artifacts mentioned earlier, although only a digital certificate stands out in the code. The other artifacts remain behind the scenes and are clarified later in detail.

Generally, a client program in C or C++ that opened an HTTP (non-secure) channel would use constructs such as a _file descriptor_ for a _network socket_, which is an endpoint in a connection between two processes (e.g., the client program and the Google web server). A file descriptor, in turn, is a non-negative integer value that identifies, within a program, any file-like construct that the program opens. Such a program also would use a structure to specify details about the web server's address.

None of these relatively low-level constructs occurs in the client program, as the OpenSSL library wraps the socket infrastructure and address specification in high-level security constructs. The result is a straightforward API. Here's a first look at the security details in the example **client** program.

  * The program begins by loading the relevant OpenSSL libraries, with my function **init_ssl** making two calls into OpenSSL:

    **SSL_library_init(); SSL_load_error_strings();**

  * The next initialization step tries to get a security _context_, a framework of information required to establish and maintain a secure channel to the web server. **TLS 1.2** is used in the example, as shown in this call to an OpenSSL library function:

    **const SSL_METHOD* method = TLSv1_2_client_method(); /* TLS 1.2 */**

    If the call succeeds, then the **method** pointer is passed to the library function that creates the context of type **SSL_CTX**:

    **SSL_CTX* ctx = SSL_CTX_new(method);**

    The **client** program checks for errors on each of these critical library calls, and then the program terminates if either call fails.

  * Two other OpenSSL artifacts now come into play: a security session of type **SSL**, which manages the secure connection from start to finish, and a secured stream of type **BIO** (Basic Input/Output), which is used to communicate with the web server. The **BIO** stream is generated with this call:

    **BIO* bio = BIO_new_ssl_connect(ctx);**

    Note that the all-important context is the argument. The **BIO** type is the OpenSSL wrapper for the **FILE** type in C. This wrapper secures the input and output streams between the **client** program and Google's web server.

  * With the **SSL_CTX** and **BIO** in hand, the program then links these together in an **SSL** session. Three library calls do the work:

    **BIO_get_ssl(bio, &ssl); /* get a TLS session */**

    **SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* for robustness */**

    **BIO_set_conn_hostname(bio, name); /* prepare to connect to Google */**

    The secure connection itself is established through this call:

    **BIO_do_connect(bio);**

    If this last call does not succeed, the **client** program terminates; otherwise, the connection is ready to support a confidential conversation between the **client** program and the Google web server.

During the handshake with the web server, the **client** program receives one or more digital certificates that authenticate the server's identity. However, the **client** program does not send a certificate of its own, which means that the authentication is one-way. (Web servers typically are configured _not_ to expect a client certificate.) Despite the failed verification of the web server's certificate, the **client** program continues by fetching the Google homepage through the secure channel to the web server.

Why does the attempt to verify a Google certificate fail? A typical OpenSSL installation has the directory **/etc/ssl/certs**, which includes the **ca-certificates.crt** file. The directory and the file together contain digital certificates that OpenSSL trusts out of the box and accordingly constitute a _truststore_. The truststore can be updated as needed, in particular, to include newly trusted certificates and to remove ones no longer trusted.

The client program receives three certificates from the Google web server, but the OpenSSL truststore on my machine does not contain exact matches. As presently written, the **client** program does not pursue the matter by, for example, verifying the digital signature on a Google certificate (a signature that vouches for the certificate). If that signature were trusted, then the certificate containing it should be trusted as well. Nonetheless, the client program goes on to fetch and then to print Google's homepage. The next section gets into more detail.
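
If you want to check a certificate against the truststore by hand, the **openssl verify** utility does exactly that. A minimal sketch, assuming a server certificate has been saved as cert.pem (the file name is an assumption):

```
openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt cert.pem
```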

### The hidden security pieces in the client program

Let's start with the visible security artifact in the client example—the digital certificate—and consider how other security artifacts relate to it. The dominant layout standard for a digital certificate is X509, and a production-grade certificate is issued by a certificate authority (CA) such as [Verisign][14].

A digital certificate contains various pieces of information (e.g., activation and expiration dates, and a domain name for the owner), including the issuer's identity and _digital signature_, which is an encrypted _cryptographic hash_ value. A certificate also has an unencrypted hash value that serves as its identifying _fingerprint_.

A hash value results from mapping an arbitrary number of bits to a fixed-length digest. What the bits represent (an accounting report, a novel, or maybe a digital movie) is irrelevant. For example, the Message Digest version 5 (MD5) hash algorithm maps input bits of whatever length to a 128-bit hash value, whereas the SHA1 (Secure Hash Algorithm version 1) algorithm maps input bits to a 160-bit value. Different input bits result in different—indeed, statistically unique—hash values. The next article goes into further detail and focuses on what makes a hash function _cryptographic_.
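
The **openssl dgst** utility computes such digests at the command line; a quick sketch (any file will do as input):

```
openssl dgst -md5  somefile.txt    # 128-bit MD5 digest
openssl dgst -sha1 somefile.txt    # 160-bit SHA1 digest
```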

Digital certificates differ in type (e.g., _root_, _intermediate_, and _end-entity_ certificates) and form a hierarchy that reflects these types. As the name suggests, a _root_ certificate sits atop the hierarchy, and the certificates under it inherit whatever trust the root certificate has. The OpenSSL libraries and most modern programming languages have an X509 type together with functions that deal with such certificates. The certificate from Google has an X509 format, and the **client** program checks whether this certificate is **X509_V_OK**.

X509 certificates are based upon public-key infrastructure (PKI), which includes algorithms—RSA is the dominant one—for generating _key pairs_: a public key and its paired private key. A public key is an identity: [Amazon's][15] public key identifies it, and my public key identifies me. A private key is meant to be kept secret by its owner.

The keys in a pair have standard uses. A public key can be used to encrypt a message, and the private key from the same pair can then be used to decrypt the message. A private key also can be used to sign a document or other electronic artifact (e.g., a program or an email), and the public key from the pair can then be used to verify the signature. The following two examples fill in some details.
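
Generating such an RSA pair with the openssl utility looks roughly like this (the file names are examples):

```
# generate a 2048-bit RSA private key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem

# extract the matching public key for distribution
openssl pkey -in key.pem -pubout -out pub.pem
```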

In the first example, Alice distributes her public key to the world, including Bob. Bob then encrypts a message with Alice's public key, sending the encrypted message to Alice. The message encrypted with Alice's public key is decrypted with her private key, which (by assumption) she alone has, like so:

```
             +------------------+  encrypted msg  +-------------------+
Bob's msg--->|Alice's public key|---------------->|Alice's private key|---> Bob's msg
             +------------------+                 +-------------------+
```

Decrypting the message without Alice's private key is possible in principle, but infeasible in practice given a sound cryptographic key-pair system such as RSA.

Now, for the second example, consider signing a document to certify its authenticity. The signature algorithm uses a private key from a pair to process a cryptographic hash of the document to be signed:

```
                    +-------------------+
Hash of document--->|Alice's private key|--->Alice's digital signature of the document
                    +-------------------+
```

Assume that Alice digitally signs a contract sent to Bob. Bob then can use Alice's public key from the key pair to verify the signature:

```
                                             +------------------+
Alice's digital signature of the document--->|Alice's public key|--->verified or not
                                             +------------------+
```

It is infeasible to forge Alice's signature without Alice's private key: hence, it is in Alice's interest to keep her private key secret.
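
With a key pair like the one generated earlier, both halves of this sketch can be done with **openssl dgst**; again, the file names are just examples:

```
# Alice signs: hash the contract with SHA256, then sign the hash with her private key
openssl dgst -sha256 -sign key.pem -out contract.sig contract.txt

# Bob verifies the signature with Alice's public key
openssl dgst -sha256 -verify pub.pem -signature contract.sig contract.txt
```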

None of these security pieces, except for digital certificates, is explicit in the **client** program. The next article fills in the details with examples that use the OpenSSL utilities and library functions.

### OpenSSL from the command line

In the meantime, let's take a look at OpenSSL command-line utilities: in particular, a utility to inspect the certificates from a web server during the TLS handshake. Invoking the OpenSSL utilities begins with the **openssl** command and then adds a combination of arguments and flags to specify the desired operation.

Consider this command:

**openssl list-cipher-algorithms**

The output is a list of associated algorithms that make up a _cipher suite_. Here's the start of the list, with comments to clarify the acronyms:

```
AES-128-CBC             ## Advanced Encryption Standard, Cipher Block Chaining
AES-128-CBC-HMAC-SHA1   ## Hash-based Message Authentication Code with SHA1 hashes
AES-128-CBC-HMAC-SHA256 ## ditto, but SHA256 rather than SHA1
...
```

The next command, using the argument **s_client**, opens a secure connection to **[www.google.com][13]** and prints screens full of information about this connection:

**openssl s_client -connect [www.google.com:443][16] -showcerts**

The port number 443 is the standard one used by web servers for receiving HTTPS rather than HTTP connections. (For HTTP, the standard port is 80.) The network address **[www.google.com:443][16]** also occurs in the **client** program's code. If the attempted connection succeeds, the three digital certificates from Google are displayed together with information about the secure session, the cipher suite in play, and related items. For example, here is a slice of output from near the start, which announces that a _certificate chain_ is forthcoming. The encoding for the certificates is base64:

```
Certificate chain
 0 s:/C=US/ST=California/L=Mountain View/O=Google LLC/CN=www.google.com
   i:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
-----BEGIN CERTIFICATE-----
MIIEijCCA3KgAwIBAgIQdCea9tmy/T6rK/dDD1isujANBgkqhkiG9w0BAQsFADBU
MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMSUw
...
```
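
The **s_client** output can also be piped into other openssl subcommands; for instance, this one-liner (a sketch, not part of the original article) pulls out just the subject and validity dates of the server's leaf certificate:

```
openssl s_client -connect www.google.com:443 -showcerts </dev/null \
  | openssl x509 -noout -subject -dates
```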

A major web site such as Google usually sends multiple certificates for authentication.

The output ends with summary information about the TLS session, including specifics on the cipher suite:

```
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: A2BBF0E4991E6BBBC318774EEE37CFCB23095CC7640FFC752448D07C7F438573
...
```

The protocol **TLS 1.2** is used in the **client** program, and the **Session-ID** uniquely identifies the connection between the **openssl** utility and the Google web server. The **Cipher** entry can be parsed as follows:

  * **ECDHE** (Elliptic Curve Diffie Hellman Ephemeral) is an effective and efficient algorithm for managing the TLS handshake. In particular, ECDHE solves the _key-distribution problem_ by ensuring that both parties in a connection (e.g., the client program and the Google web server) use the same encryption/decryption key, which is known as the _session key_. The follow-up article digs into the details.

  * **RSA** (Rivest Shamir Adleman) is the dominant public-key cryptosystem and is named after the three academics who first described the system in the late 1970s. The key pairs in play are generated with the RSA algorithm.

  * **AES128** (Advanced Encryption Standard) is a _block cipher_ that encrypts and decrypts blocks of bits. (The alternative is a _stream cipher_, which encrypts and decrypts bits one at a time.) The cipher is _symmetric_ in that the same key is used to encrypt and to decrypt, which raises the key-distribution problem in the first place. AES supports key sizes of 128 (used here), 192, and 256 bits: the larger the key, the better the protection.

    Key sizes for symmetric cryptosystems such as AES are, in general, smaller than those for asymmetric (key-pair based) systems such as RSA. For example, a 1024-bit RSA key is relatively small, whereas a 256-bit key is currently the largest for AES.

  * **GCM** (Galois Counter Mode) handles the repeated application of a cipher (in this case, AES128) during a secured conversation. AES128 blocks are only 128 bits in size, and a secure conversation is likely to consist of multiple AES128 blocks from one side to the other. GCM is efficient and commonly paired with AES128.

  * **SHA256** (Secure Hash Algorithm 256 bits) is the cryptographic hash algorithm in play. The hash values produced are 256 bits in size, although even larger values are possible with SHA.

Cipher suites are in continual development. Not so long ago, for example, Google used the RC4 stream cipher (Ron's Cipher version 4, named after Ron Rivest of RSA). RC4 now has known vulnerabilities, which presumably accounts, at least in part, for Google's switch to AES128.

### Wrapping up

This first look at OpenSSL, through a secure C web client and various command-line examples, has brought to the fore a handful of topics in need of more clarification. [The next article gets into the details][17], starting with cryptographic hashes and ending with a fuller discussion of how digital certificates address the key distribution challenge.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/cryptography-basics-openssl-part-1

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://www.openssl.org/
[3]: https://www.howtoforge.com/tutorial/how-to-install-openssl-from-source-on-linux/
[4]: http://condor.depaul.edu/mkalin
[5]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[6]: https://en.wikipedia.org/wiki/Netscape
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/sprintf.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/memset.html
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
[13]: http://www.google.com
[14]: https://www.verisign.com
[15]: https://www.amazon.com
[16]: http://www.google.com:443
[17]: https://opensource.com/article/19/6/cryptography-basics-openssl-part-2
68
sources/tech/20190619 Leading in the Python community.md
Normal file
@ -0,0 +1,68 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Leading in the Python community)
[#]: via: (https://opensource.com/article/19/6/naomi-ceder-python-software-foundation)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

Leading in the Python community
======
A chat with Naomi Ceder, current Python Software Foundation board chair.

![Hands together around the word trust][1]

Like many other leaders in the open source software world, [Naomi Ceder][2], board chair of the [Python Software Foundation][3] (PSF), took a non-traditional path into the Python world. As the title of her 2017 [keynote][4] at PyCon España explains, she came for the language and stayed for the community. In a recent conversation with her, she shared how she became a Python community leader and offered some insight into what makes Python special.

### From teaching to coding

Naomi began her career in the Classics; she earned a PhD in Latin and Ancient Greek with a minor in Indo-European Linguistics, as she says, "several decades ago." While teaching Latin at a private school, she began tinkering with computers, learning to code and to take machines apart to do upgrades and repairs. She started working with open source software in 1995 with [Yggdrasil Linux][5] and helped launch the Fort Wayne, Indiana, [Linux User Group][6].

A teacher at heart, Naomi believes teaching coding in middle and high school is essential because, by the time most people get to college, they are already convinced that coding and technology careers are not for them. Starting earlier can help increase the supply of technical talent and the diversity and breadth of experience in our talent pools to meet the industry's needs, she says.

Somewhere around 2001, she decided to switch from studying human languages to researching computer languages, as well as teaching computer classes and managing the school's IT. Her interest in Python was sparked at Linux World 2001 when she attended PSF president Guido van Rossum's day-long tutorial on Python. Back then, it was an obscure language, but she liked it so well that she began teaching Python and using it to track student records and do sysadmin duties at her school.

### Leading the Python community

Naomi says, "Community is the key factor behind Python's success. The whole idea behind open source software is sharing. Few people really want to just sit alone, writing code, and staring at their screens. The real satisfaction comes in trading ideas and building something with others."

She started giving talks at the first [PyCon][7] in 2003 and has been a consistent attendee and leader since then. She has organized birds-of-a-feather sessions and founded the PyCon and PyCon UK poster sessions, the education summit, and the Spanish language track, [Charlas][8].

She is also the author of _[The Quick Python Book][9]_ and co-founded [Trans*Code][10], "the UK's only hack event series focused solely on drawing attention to transgender issues and opportunities." Naomi says, "As technology offers growing opportunities, being sure these opportunities are equally accessible to traditionally marginalized groups grows ever more important."

### Contributing through the PSF

As board chair of the PSF, Naomi contributes actively to the organization's work to support the Python language and the people working with it. In addition to sponsoring PyCon, the PSF funds grants for meetups, conferences, and workshops around the world. In 2018, the organization gave almost $335,000 in grants, most of them in the $500 to $5,000 range.

The PSF's short-term goals are to become a sustainable, stable, and mature non-profit organization with professional staff. Its long-term goals include developing resources that offer meaningful support to development efforts for Python and expanding the organization's support for educational efforts in Python around the world.

This work depends on having financial support from the community. Naomi says the PSF's "largest current source of funding is PyCon. To ensure the PSF's sustainability, we are also focusing on [sponsorships][11] from companies using Python, which is our fastest-growing segment." Supporting memberships are $99 per year, and [donations and fundraisers][12] also help sustain the organization's work.

You can learn much more about the PSF's work in its [Annual Report][13].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/naomi-ceder-python-software-foundation

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_HighTrust_1110_A.png?itok=EF5Tmcdk (Hands together around the word trust)
[2]: https://www.naomiceder.tech/pages/about/
[3]: https://www.python.org/psf/
[4]: https://www.youtube.com/watch?v=ayQK6app_wA
[5]: https://en.wikipedia.org/wiki/Yggdrasil_Linux/GNU/X
[6]: http://fortwaynelinux.org/about
[7]: http://pycon.org/
[8]: https://twitter.com/pyconcharlas?lang=en
[9]: https://www.manning.com/books/the-quick-python-book-third-edition
[10]: https://www.trans.tech/
[11]: https://www.python.org/psf/sponsorship/
[12]: https://www.python.org/psf/donations/
[13]: https://www.python.org/psf/annual-report/2019/

sources/tech/20190620 How to SSH into a running container.md

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to SSH into a running container)
[#]: via: (https://opensource.com/article/19/6/how-ssh-running-container)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/bcotton)

How to SSH into a running container
======

SSH is probably not the best way to run commands in a container; try this instead.

![cubes coming together to create a larger cube][1]

Containers have shifted the way we think about virtualization. You may remember the days (or you may still be living them) when a virtual machine was the full stack, from virtualized BIOS, operating system, and kernel up to each virtualized network interface controller (NIC). You logged into the virtual box just as you would your own workstation. It was a very direct and simple analogy.

And then containers came along, [starting with LXC][2] and culminating in the Open Container Initiative ([OCI][3]), and that's when things got complicated.

### Idempotency

In the world of containers, the "virtual machine" is only mostly virtual. Everything that doesn't need to be virtualized is borrowed from the host machine. Furthermore, the container itself is usually meant to be ephemeral and idempotent, so it stores no persistent data, and its state is defined by configuration files on the host machine.

If you're used to the old ways of virtual machines, then you naturally expect to log into a virtual machine in order to interact with it. But containers are ephemeral, so anything you do in a container is forgotten, by design, should the container need to be restarted or respawned.

The commands controlling your container infrastructure (such as **oc**, **crictl**, **lxc**, and **docker**) provide an interface to run important commands to restart services, view logs, confirm the existence and permissions modes of an important file, and so on. You should use the tools provided by your container infrastructure to interact with your application, or else edit configuration files and relaunch. That's what containers are designed to do.

For instance, the open source forum software [Discourse][4] is officially distributed as a container image. The Discourse software is _stateless_, so its installation is self-contained within **/var/discourse**. As long as you have a backup of **/var/discourse**, you can always restore the forum by relaunching the container. The container holds no persistent data, and its configuration file is **/var/discourse/containers/app.yml**.
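
Here is a minimal sketch of that backup-and-relaunch workflow. The paths follow the article; the **launcher** helper script ships with the standard Discourse container install, and the backup file name is illustrative:

```
# Archive the self-contained Discourse directory
sudo tar czf /backups/discourse-backup.tar.gz /var/discourse

# On a fresh host: restore the directory, then relaunch the container
# from its configuration (no data lives inside the container itself)
sudo tar xzf /backups/discourse-backup.tar.gz -C /
cd /var/discourse && sudo ./launcher rebuild app
```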

Were you to log into the container and edit any of the files it contains, all changes would be lost if the container had to be restarted.

LXC containers you're building from scratch are more flexible, with configuration files (in a location defined by you) passed to the container when you launch it.

A build system like [Jenkins][5] usually has a default configuration file, such as **jenkins.yaml**, providing instructions for a base container image that exists only to build and run tests on source code. After the builds are done, the container goes away.
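
The same throwaway pattern is easy to demonstrate outside of Jenkins. A generic sketch of a container that exists only for one test run (the image and test command are placeholders, not Jenkins' own invocation):

```
# Run a test suite in a disposable container; --rm deletes the
# container as soon as the command exits, leaving no state behind
docker run --rm -v "$PWD":/src -w /src python:3.7 python -m pytest
```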

Now that you know you don't need SSH to interact with your containers, here's an overview of what tools are available (and some notes about using SSH in spite of all the fancy tools that make it redundant).

### OpenShift web console

[OpenShift 4][6] offers an open source toolchain for container creation and maintenance, including an interactive web console.

When you log into your web console, navigate to your project overview and click the **Applications** tab for a list of pods. Select a (running) pod to open the application's **Details** panel.

![Pod details in OpenShift][7]

Click the **Terminal** tab at the top of the **Details** panel to open an interactive shell in your container.

![A terminal in a running container][8]

If you prefer a browser-based experience for Kubernetes management, you can learn more through interactive lessons available at [learn.openshift.com][9].

### OpenShift oc

If you prefer a command-line interface experience, you can use the **oc** command to interact with containers from the terminal.

First, get a list of running pods (or refer to the web console for a list of active pods). To get that list, enter:

```
$ oc get pods
```

You can view the logs of a resource (a pod, build, or container). By default, **oc logs** returns the logs from the first container in the pod you specify. To select a single container, add the **--container** option:

```
$ oc logs --follow=true example-1-e1337 --container app
```

You can also view logs from all containers in a pod with:

```
$ oc logs --follow=true example-1-e1337 --all-containers
```

#### Execute commands

You can execute commands remotely with:

```
$ oc exec example-1-e1337 --container app hostname
example.local
```

This is similar to running SSH non-interactively: you get to run the command you want to run without an interactive shell taking over your environment.

#### Remote shell

You can attach to a running container. This still does _not_ open a shell in the container, but it does run commands directly. For example:

```
$ oc attach example-1-e1337 --container app
```

If you need a true interactive shell in a container, you can open a remote shell with the **oc rsh** command as long as the container includes a shell. By default, **oc rsh** launches **/bin/sh**:

```
$ oc rsh example-1-e1337 --container app
```

### Kubernetes

If you're using Kubernetes directly, you can use the **kubectl exec** command to run a Bash shell in your pod.

First, confirm that your pod is running:

```
$ kubectl get pods
```

As long as the pod containing your application is listed, you can use the **exec** command to launch a shell in the container. Using the name **example-pod** as the pod name, enter:

```
$ kubectl exec --stdin=false --tty=false \
  example-pod -- /bin/bash
root@example.local:/# ls
bin   core  etc   lib    root  srv
boot  dev   home  lib64  sbin  tmp  var
```

### Docker

The **docker** command is similar to **kubectl**. With the **dockerd** daemon running, get the name of the running container (you may have to use **sudo** to escalate privileges if you're not in the appropriate group):

```
$ docker ps
CONTAINER ID    IMAGE     COMMAND        NAME
678ac5cca78e    centos    "/bin/bash"    example-centos
```

Using the container name, you can run a command in the container:

```
$ docker exec example-centos cat /etc/os-release
CentOS Linux release 7.6
NAME="CentOS Linux"
VERSION="7"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
[...]
```

Or you can launch a Bash shell for an interactive session:

```
$ docker exec -it example-centos /bin/bash
```

### Containers and appliances

The important thing to remember when dealing with the cloud is that containers are essentially runtimes rather than virtual machines. While they have much in common with a Linux system (because they _are_ a Linux system!), they rarely translate directly to the commands and workflow you may have developed on your Linux workstation. However, like appliances, containers have an interface to help you develop, maintain, and monitor them, so get familiar with the front-end commands and services until you're happily interacting with them just as easily as you interact with virtual (or bare-metal) machines. Soon, you'll wonder why everything isn't developed to be ephemeral.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/how-ssh-running-container

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth/users/bcotton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://opensource.com/article/18/11/behind-scenes-linux-containers
[3]: https://www.opencontainers.org/
[4]: http://discourse.org
[5]: http://jenkins.io
[6]: https://www.openshift.com/learn/get-started
[7]: https://opensource.com/sites/default/files/uploads/openshift-pod-access.jpg (Pod details in OpenShift)
[8]: https://opensource.com/sites/default/files/uploads/openshift-pod-terminal.jpg (A terminal in a running container)
[9]: http://learn.openshift.com

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use OpenSSL: Hashes, digital signatures, and more)
[#]: via: (https://opensource.com/article/19/6/cryptography-basics-openssl-part-2)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)

How to use OpenSSL: Hashes, digital signatures, and more
======

Dig deeper into the details of cryptography with OpenSSL: Hashes, digital signatures, digital certificates, and more

![A person working.][1]

The [first article in this series][2] introduced hashes, encryption/decryption, digital signatures, and digital certificates through the OpenSSL libraries and command-line utilities. This second article drills down into the details. Let’s begin with hashes, which are ubiquitous in computing, and consider what makes a hash function _cryptographic_.

### Cryptographic hashes

The download page for the OpenSSL source code (<https://www.openssl.org/source/>) contains a table with recent versions. Each version comes with two hash values: 160-bit SHA1 and 256-bit SHA256. These values can be used to verify that the downloaded file matches the original in the repository: The downloader recomputes the hash values locally on the downloaded file and then compares the results against the originals. Modern systems have utilities for computing such hashes. Linux, for instance, has **md5sum** and **sha256sum**. OpenSSL itself provides similar command-line utilities.

Hashes are used in many areas of computing. For example, the Bitcoin blockchain uses SHA256 hash values as block identifiers. To mine a Bitcoin is to generate a SHA256 hash value that falls below a specified threshold, which means a hash value with at least N leading zeroes. (The value of N can go up or down depending on how productive the mining is at a particular time.) As a point of interest, today’s miners are hardware clusters designed for generating SHA256 hashes in parallel. During a peak time in 2018, Bitcoin miners worldwide generated about 75 million terahashes per second—yet another incomprehensible number.

Network protocols use hash values as well—often under the name **checksum**—to support message integrity; that is, to assure that a received message is the same as the one sent. The message sender computes the message’s checksum and sends the results along with the message. The receiver recomputes the checksum when the message arrives. If the sent and the recomputed checksum do not match, then something happened to the message in transit, or to the sent checksum, or to both. In this case, the message and its checksum should be sent again, or at least an error condition should be raised. (Low-level network protocols such as UDP do not bother with checksums.)

Other examples of hashes are familiar. Consider a website that requires users to authenticate with a password, which the user enters in their browser. Their password is then sent, encrypted, from the browser to the server via an HTTPS connection to the server. Once the password arrives at the server, it's decrypted for a database table lookup.

What should be stored in this lookup table? Storing the passwords themselves is risky. It’s far less risky to store a hash generated from a password, perhaps with some _salt_ (extra bits) added to taste before the hash value is computed. Your password may be sent to the web server, but the site can assure you that the password is not stored there.
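
As a quick illustration of salting at the command line, here is a minimal sketch (the salt handling is deliberately simplified; a real system would store a random per-user salt and use a slow password-hashing function rather than a bare SHA256):

```
% salt=$(openssl rand -hex 8)            # 8 random bytes, hex-encoded
% echo -n "${salt}foobar" | openssl dgst -sha256
(stdin)= <hash value differs on every run, because the salt does>
```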

Hash values also occur in various areas of security. For example, hash-based message authentication code ([HMAC][3]) uses a hash value and a secret cryptographic key to authenticate a message sent over a network. HMAC codes, which are lightweight and easy to use in programs, are popular in web services. An X509 digital certificate includes a hash value known as the _fingerprint_, which can facilitate certificate verification. An in-memory truststore could be implemented as a lookup table keyed on such fingerprints—as a _hash map_, which supports constant-time lookups. The fingerprint from an incoming certificate can be compared against the truststore keys for a match.
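
OpenSSL’s **dgst** utility can compute an HMAC directly with the **-hmac** option. A short sketch, with a placeholder key and file name:

```
% openssl dgst -sha256 -hmac "my-secret-key" message.txt
HMAC-SHA256(message.txt)= <value that depends on both the file and the key>
```

Only a party that knows the same key can recompute and verify this value.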

What special property should a _cryptographic hash function_ have? It should be _one-way_, which means very difficult to invert. A cryptographic hash function should be relatively straightforward to compute, but computing its inverse—the function that maps the hash value back to the input bitstring—should be computationally intractable. Here is a depiction, with **chf** as a cryptographic hash function and my password **foobar** as the sample input:

```
        +---+
foobar—>|chf|—>hash value ## straightforward
        +---+
```

By contrast, the inverse operation is infeasible:

```
            +-----------+
hash value—>|chf inverse|—>foobar ## intractable
            +-----------+
```

Recall, for example, the SHA256 hash function. For an input bitstring of any length N > 0, this function generates a fixed-length hash value of 256 bits; hence, this hash value does not reveal even the input bitstring’s length N, let alone the value of each bit in the string. By the way, SHA256 is not susceptible to a [_length extension attack_][4]. The only effective way to reverse engineer a computed SHA256 hash value back to the input bitstring is through a brute-force search, which means trying every possible input bitstring until a match with the target hash value is found. Such a search is infeasible on a sound cryptographic hash function such as SHA256.

Now, a final review point is in order. Cryptographic hash values are statistically rather than unconditionally unique, which means that it is unlikely but not impossible for two different input bitstrings to yield the same hash value—a _collision_. The [_birthday problem_][5] offers a nicely counter-intuitive example of collisions. There is extensive research on various hash algorithms’ _collision resistance_. For example, MD5 (128-bit hash values) has a breakdown in collision resistance after roughly 2^21 hashes. For SHA1 (160-bit hash values), the breakdown starts at about 2^61 hashes.

A good estimate of the breakdown in collision resistance for SHA256 is not yet in hand. This fact is not surprising. SHA256 has a range of 2^256 distinct hash values, a number whose decimal representation has a whopping 78 digits! So, can collisions occur with SHA256 hashing? Of course, but they are extremely unlikely.

In the command-line examples that follow, two input files are used as bitstring sources: **hashIn1.txt** and **hashIn2.txt**. The first file contains **abc** and the second contains **1a2b3c**.

These files contain text for readability, but binary files could be used instead.

Using the Linux **sha256sum** utility on these two files at the command line—with the percent sign (**%**) as the prompt—produces the following hash values (in hex):

```
% sha256sum hashIn1.txt
9e83e05bbf9b5db17ac0deec3b7ce6cba983f6dc50531c7a919f28d5fb3696c3 hashIn1.txt

% sha256sum hashIn2.txt
3eaac518777682bf4e8840dd012c0b104c2e16009083877675f00e995906ed13 hashIn2.txt
```

The OpenSSL hashing counterparts yield the same results, as expected:

```
% openssl dgst -sha256 hashIn1.txt
SHA256(hashIn1.txt)= 9e83e05bbf9b5db17ac0deec3b7ce6cba983f6dc50531c7a919f28d5fb3696c3

% openssl dgst -sha256 hashIn2.txt
SHA256(hashIn2.txt)= 3eaac518777682bf4e8840dd012c0b104c2e16009083877675f00e995906ed13
```

This examination of cryptographic hash functions sets up a closer look at digital signatures and their relationship to key pairs.

### Digital signatures

As the name suggests, a digital signature can be attached to a document or some other electronic artifact (e.g., a program) to vouch for its authenticity. Such a signature is thus analogous to a hand-written signature on a paper document. To verify the digital signature is to confirm two things. First, that the vouched-for artifact has not changed since the signature was attached because it is based, in part, on a cryptographic _hash_ of the document. Second, that the signature belongs to the person (e.g., Alice) who alone has access to the private key in a pair. By the way, digitally signing code (source or compiled) has become a common practice among programmers.

Let’s walk through how a digital signature is created. As mentioned before, there is no digital signature without a public and private key pair. When using OpenSSL to create these keys, there are two separate commands: one to create a private key, and another to extract the matching public key from the private one. These key pairs are encoded in base64, and their sizes can be specified during this process.

The private key consists of numeric values, two of which (a _modulus_ and an _exponent_) make up the public key. Although the private key file contains the public key, the extracted public key does _not_ reveal the value of the corresponding private key.

The resulting file with the private key thus contains the full key pair. Extracting the public key into its own file is practical because the two keys have distinct uses, but this extraction also minimizes the danger that the private key might be publicized by accident.

Next, the pair’s private key is used to process a hash value for the target artifact (e.g., an email), thereby creating the signature. On the other end, the receiver’s system uses the pair’s public key to verify the signature attached to the artifact.

Now for an example. To begin, generate a 2048-bit RSA key pair with OpenSSL:

**openssl genpkey -out privkey.pem -algorithm rsa 2048**

We can drop the **-algorithm rsa** flag in this example because **genpkey** defaults to the type RSA. The file’s name (**privkey.pem**) is arbitrary, but the Privacy Enhanced Mail (PEM) extension **pem** is customary for the default PEM format. (OpenSSL has commands to convert among formats if needed.) If a larger key size (e.g., 4096) is in order, then the last argument of **2048** could be changed to **4096**. These sizes are conventionally powers of two.

Here’s a slice of the resulting **privkey.pem** file, which is in base64:

```
-----BEGIN PRIVATE KEY-----
MIICdgIBADANBgkqhkiG9w0BAQEFAASCAmAwggJcAgEAAoGBANnlAh4jSKgcNj/Z
JF4J4WdhkljP2R+TXVGuKVRtPkGAiLWE4BDbgsyKVLfs2EdjKL1U+/qtfhYsqhkK
…
-----END PRIVATE KEY-----
```
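
To see the numeric values just mentioned (the modulus and the exponents), OpenSSL can pretty-print the key. A quick check, assuming the **privkey.pem** file created above (output abbreviated):

```
% openssl rsa -in privkey.pem -text -noout
Private-Key: (2048 bit)
modulus: ...
publicExponent: 65537 (0x10001)
privateExponent: ...
```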

The next command then extracts the pair’s public key from the private one:

**openssl rsa -in privkey.pem -outform PEM -pubout -out pubkey.pem**

The resulting **pubkey.pem** file is small enough to show here in full:

```
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDZ5QIeI0ioHDY/2SReCeFnYZJY
z9kfk11RrilUbT5BgIi1hOAQ24LMilS37NhHYyi9VPv6rX4WLKoZCmkeYaWk/TR5
4nbH1E/AkniwRoXpeh5VncwWMuMsL5qPWGY8fuuTE27GhwqBiKQGBOmU+MYlZonO
O0xnAKpAvysMy7G7qQIDAQAB
-----END PUBLIC KEY-----
```

Now, with the key pair at hand, the digital signing is easy—in this case with the source file **client.c** as the artifact to be signed:

**openssl dgst -sha256 -sign privkey.pem -out sign.sha256 client.c**

The digest for the **client.c** source file is SHA256, and the private key resides in the **privkey.pem** file created earlier. The resulting binary signature file is **sign.sha256**, an arbitrary name. To get a readable (if base64) version of this file, the follow-up command is:

**openssl enc -base64 -in sign.sha256 -out sign.sha256.base64**

The file **sign.sha256.base64** now contains:

```
h+e+3UPx++KKSlWKIk34fQ1g91XKHOGFRmjc0ZHPEyyjP6/lJ05SfjpAJxAPm075
VNfFwysvqRGmL0jkp/TTdwnDTwt756Ej4X3OwAVeYM7i5DCcjVsQf5+h7JycHKlM
o/Jd3kUIWUkZ8+Lk0ZwzNzhKJu6LM5KWtL+MhJ2DpVc=
```

Or, the executable file **client** could be signed instead, and the resulting base64-encoded signature would differ as expected:

```
VMVImPgVLKHxVBapJ8DgLNJUKb98GbXgehRPD8o0ImADhLqlEKVy0HKRm/51m9IX
xRAN7DoL4Q3uuVmWWi749Vampong/uT5qjgVNTnRt9jON112fzchgEoMb8CHNsCT
XIMdyaPtnJZdLALw6rwMM55MoLamSc6M/MV1OrJnk/g=
```

The final step in this process is to verify the digital signature with the public key. The hash used to sign the artifact (in this case, the executable **client** program) should be recomputed as an essential step in the verification since the verification process should indicate whether the artifact has changed since being signed.

There are two OpenSSL commands used for this purpose. The first decodes the base64 signature:

**openssl enc -base64 -d -in sign.sha256.base64 -out sign.sha256**

The second verifies the signature:

**openssl dgst -sha256 -verify pubkey.pem -signature sign.sha256 client**

The output from this second command is, as it should be:

```
Verified OK
```

To understand what happens when verification fails, a short but useful exercise is to replace the executable **client** file in the last OpenSSL command with the source file **client.c** and then try to verify. Another exercise is to change the **client** program, however slightly, and try again.
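
For instance, pointing the verification at **client.c** instead of **client** should produce output along these lines (the exact wording can vary by OpenSSL version):

```
% openssl dgst -sha256 -verify pubkey.pem -signature sign.sha256 client.c
Verification Failure
```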

### Digital certificates

A digital certificate brings together the pieces analyzed so far: hash values, key pairs, digital signatures, and encryption/decryption. The first step toward a production-grade certificate is to create a certificate signing request (CSR), which is then sent to a certificate authority (CA). To do this for the example with OpenSSL, run:

**openssl req -out myserver.csr -new -newkey rsa:4096 -nodes -keyout myserverkey.pem**

This example generates a CSR document and stores the document in the file **myserver.csr** (base64 text). The purpose here is this: the CSR document requests that the CA vouch for the identity associated with the specified domain name—the common name (CN) in CA-speak.

A new key pair also is generated by this command, although an existing pair could be used. Note that the use of **server** in names such as **myserver.csr** and **myserverkey.pem** hints at the typical use of digital certificates: as vouchers for the identity of a web server associated with a domain such as [www.google.com][6].

The same command, however, creates a CSR regardless of how the digital certificate might be used. It also starts an interactive question/answer session that prompts for relevant information about the domain name to link with the requester’s digital certificate. This interactive session can be short-circuited by providing the essentials as part of the command, with backslashes as continuations across line breaks. The **-subj** flag introduces the required information:

```
% openssl req -new \
  -newkey rsa:2048 -nodes -keyout privkeyDC.pem \
  -out myserver.csr \
  -subj "/C=US/ST=Illinois/L=Chicago/O=Faulty Consulting/OU=IT/CN=myserver.com"
```

The resulting CSR document can be inspected and verified before being sent to a CA. This process creates the digital certificate with the desired format (e.g., X509), signature, validity dates, and so on:

**openssl req -text -in myserver.csr -noout -verify**

Here’s a slice of the output:

```
verify OK
Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: C=US, ST=Illinois, L=Chicago, O=Faulty Consulting, OU=IT, CN=myserver.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:ba:36:fb:57:17:65:bc:40:30:96:1b:6e:de:73:
                    …
                Exponent: 65537 (0x10001)
        Attributes:
            a0:00
    Signature Algorithm: sha256WithRSAEncryption
    …
```

### A self-signed certificate

During the development of an HTTPS web site, it is convenient to have a digital certificate on hand without going through the CA process. A self-signed certificate fills the bill during the HTTPS handshake’s authentication phase, although any modern browser warns that such a certificate is worthless. Continuing the example, the OpenSSL command for a self-signed certificate—valid for a year and with an RSA public key—is:

**openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 -keyout myserver.pem -out myserver.crt**

The OpenSSL command below presents a readable version of the generated certificate:

**openssl x509 -in myserver.crt -text -noout**

Here’s part of the output for the self-signed certificate:

```
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 13951598013130016090 (0xc19e087965a9055a)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=Illinois, L=Chicago, O=Faulty Consulting, OU=IT, CN=myserver.com
        Validity
            Not Before: Apr 11 17:22:18 2019 GMT
            Not After : Apr 10 17:22:18 2020 GMT
        Subject: C=US, ST=Illinois, L=Chicago, O=Faulty Consulting, OU=IT, CN=myserver.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:ba:36:fb:57:17:65:bc:40:30:96:1b:6e:de:73:
                    …
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                3A:32:EF:3D:EB:DF:65:E5:A8:96:D7:D7:16:2C:1B:29:AF:46:C4:91
            X509v3 Authority Key Identifier:
                keyid:3A:32:EF:3D:EB:DF:65:E5:A8:96:D7:D7:16:2C:1B:29:AF:46:C4:91

            X509v3 Basic Constraints:
                CA:TRUE
    Signature Algorithm: sha256WithRSAEncryption
         3a:eb:8d:09:53:3b:5c:2e:48:ed:14:ce:f9:20:01:4e:90:c9:
         ...
```

As mentioned earlier, an RSA private key contains values from which the public key is generated. However, a given public key does _not_ give away the matching private key. For an introduction to the underlying mathematics, see <https://simple.wikipedia.org/wiki/RSA_algorithm>.

There is an important correspondence between a digital certificate and the key pair used to generate the certificate, even if the certificate is only self-signed:

  * The digital certificate contains the _exponent_ and _modulus_ values that make up the public key. These values are part of the key pair in the originally-generated PEM file, in this case, the file **myserver.pem**.
  * The exponent is almost always 65,537 (as in this case) and so can be ignored.
  * The modulus from the key pair should match the modulus from the digital certificate.

The modulus is a large value and, for readability, can be hashed. Here are two OpenSSL commands that check for the same modulus, thereby confirming that the digital certificate is based upon the key pair in the PEM file:

```
% openssl x509 -noout -modulus -in myserver.crt | openssl sha1 ## modulus from CRT
(stdin)= 364d21d5e53a59d482395b1885aa2c3a5d2e3769

% openssl rsa -noout -modulus -in myserver.pem | openssl sha1 ## modulus from PEM
(stdin)= 364d21d5e53a59d482395b1885aa2c3a5d2e3769
```

The resulting hash values match, thereby confirming that the digital certificate is based upon the specified key pair.

### Back to the key distribution problem

Let’s return to an issue raised at the end of Part 1: the TLS handshake between the **client** program and the Google web server. There are various handshake protocols, and even the Diffie-Hellman version at work in the **client** example offers wiggle room. Nonetheless, the **client** example follows a common pattern.

To start, during the TLS handshake, the **client** program and the web server agree on a cipher suite, which consists of the algorithms to use. In this case, the suite is **ECDHE-RSA-AES128-GCM-SHA256**.
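
To see a negotiated suite for yourself, OpenSSL’s **s_client** utility can talk to a live server. A quick check (output trimmed; the suite a given server reports today may differ from the one above):

```
% echo | openssl s_client -connect www.google.com:443 2>/dev/null | grep 'Cipher'
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Cipher    : ECDHE-RSA-AES128-GCM-SHA256
```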

The two elements of interest now are the RSA key-pair algorithm and the AES128 block cipher used for encrypting and decrypting messages if the handshake succeeds. Regarding encryption/decryption, this process comes in two flavors: symmetric and asymmetric. In the symmetric flavor, the _same_ key is used to encrypt and decrypt, which raises the _key distribution problem_ in the first place: How is the key to be distributed securely to both parties? In the asymmetric flavor, one key is used to encrypt (in this case, the RSA public key) but a different key is used to decrypt (in this case, the RSA private key from the same pair).

The **client** program has the Google web server’s public key from an authenticating certificate, and the web server has the private key from the same pair. Accordingly, the **client** program can send an encrypted message to the web server, which alone can readily decrypt this message.

In the TLS situation, the symmetric approach has two significant advantages:

  * In the interaction between the **client** program and the Google web server, the authentication is one-way. The Google web server sends three certificates to the **client** program, but the **client** program does not send a certificate to the web server; hence, the web server has no public key from the client and can’t encrypt messages to the client.
  * Symmetric encryption/decryption with AES128 is nearly a _thousand times faster_ than the asymmetric alternative using RSA keys.

The TLS handshake combines the two flavors of encryption/decryption in a clever way. During the handshake, the **client** program generates random bits known as the pre-master secret (PMS). Then the **client** program encrypts the PMS with the server’s public key and sends the encrypted PMS to the server, which in turn decrypts the PMS message with its private key from the RSA pair:

```
              +-------------------+  encrypted PMS  +--------------------+
client PMS--->|server’s public key|---------------->|server’s private key|--->server PMS
              +-------------------+                 +--------------------+
```

At the end of this process, the **client** program and the Google web server now have the same PMS bits. Each side uses these bits to generate a _master secret_ and, in short order, a symmetric encryption/decryption key known as the _session key_. There are now two distinct but identical session keys, one on each side of the connection. In the **client** example, the session key is of the AES128 variety. Once generated on both the **client** program’s and Google web server’s sides, the session key on each side keeps the conversation between the two sides confidential. A handshake protocol such as Diffie-Hellman allows the entire PMS process to be repeated if either side (e.g., the **client** program) or the other (in this case, the Google web server) calls for a restart of the handshake.

### Wrapping up

The OpenSSL operations illustrated at the command line are available, too, through the API for the underlying libraries. These two articles have emphasized the utilities to keep the examples short and to focus on the cryptographic topics. If you have an interest in security issues, OpenSSL is a fine place to start—and to stay.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/cryptography-basics-openssl-part-2

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)
[2]: https://opensource.com/article/19/6/cryptography-basics-openssl-part-1
[3]: https://en.wikipedia.org/wiki/HMAC
[4]: https://en.wikipedia.org/wiki/Length_extension_attack
[5]: https://en.wikipedia.org/wiki/Birthday_problem
[6]: http://www.google.com

sources/tech/20190620 You can-t buy DevOps.md

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (You can't buy DevOps)
[#]: via: (https://opensource.com/article/19/6/you-cant-buy-devops)
[#]: author: (Julie Gunderson https://opensource.com/users/juliegund)

You can't buy DevOps
======

But plenty of people are happy to sell it to you. Here's why it's not for sale.

![Coffee shop photo][1]

![DevOps price tag graphic][2]

Making a move to [DevOps][3] can be a daunting undertaking, with many organizations not knowing the right place to start. I recently had some fun taking a few "DevOps assessments" to see what solutions they offered. I varied my answers—from an organization that fully embraces DevOps to one at the beginning of the journey. Some of the assessments provided real value, linking me back to articles on culture and methodologies, while others merely offered me a tool promising to bring all my DevOps dreams into reality.

Tools are absolutely essential to the DevOps journey; for instance, tools can continuously deliver, automate, or monitor your environment. However, **DevOps is not a product**, and tools alone will not enable the processes necessary to realize the full value of DevOps. People are what matter most; you can't do DevOps without building the people, mindset, and culture first.

### Don't 'win' at DevOps; become a champion

As a DevOps advocate at PagerDuty, I am proud to be a part of an organization with a strong commitment to DevOps methodologies, well beyond just "checking the boxes" of tool adoption.

I recently had a conversation with PagerDuty CEO Jennifer Tejada about being a winner versus a champion. She talked about how winning is fantastic—you get a trophy, a title, or maybe even a few million dollars (if it's the lottery). However, in the big picture, winning is all about short-term goals, while being a champion means focusing on long-term successes or outcomes. This got me thinking about how to apply this principle to organizations embracing DevOps.

One of my favorite examples of DevOps tooling is XebiaLabs' [Periodic Table of DevOps Tools][4]:

[![Periodic Table of DevOps][5]][4]

(Click table for interactive version.)

The table shows that numerous tools fit into DevOps. However, too many times, I have heard about organizations "transforming to DevOps" by purchasing tools. While tooling is an essential part of the DevOps journey, a tool alone does not create a DevOps environment. You have to consider all the factors that make a DevOps team function well: collaboration, breaking down silos, defined processes, ownership, and automation, along with continuous improvement/continuous delivery.

Deciding to purchase tooling is a great step in the right direction; what is more important is to define the "why" or the end goal behind decisions first. This brings us back to the mentality of a champion; look at Olympic gold medalist Michael Phelps, for example. Phelps is the most decorated Olympian of all time and holds 39 world records. To achieve these accomplishments, Phelps didn't stop at one, two, or even 20 wins; he aimed to be a champion. This was all done through commitment, practice, and focusing on the desired end state.

### DevOps defined

There are hundreds of definitions for DevOps, but almost everyone can agree on the core tenet outlined in the [State of DevOps Report][6]:

> "DevOps is a set of principles aimed at building culture and processes to help teams work more efficiently and deliver better software faster."

You can't change culture and processes with a credit card. Tooling can enable an organization to collaborate better or automate or continuously deliver; however, without the right mindset and adoption, a tool's full capability may not be achievable.

For example, one of my former colleagues heard how amazing Slack is for teams transforming to DevOps by opening up channels for collaboration. He convinced his manager that Slack would solve all of their communication woes. However, six months into the Slack adoption, most teams were still using Skype, including the manager. Slack ended up being more of a place to talk about brewing beer than a tool to bring the product to market faster. The issue was not Slack; it was the lack of buy-in from the team and organization and knowledge around the product's full functionality.

Purchasing a tool can definitely be a win for a team, but purchasing a tool is not purchasing DevOps. Making tooling and best practices work for the team and achieving short- and long-term goals are where our conversation around being a champion comes up. This brings us back to the why, the overall and far-reaching goal for the team or organization. Once you identify the goal, how do you get buy-in from key stakeholders? After you achieve buy-in, how do you implement the solution?

### Organizational change

[![Change management comic by Randy Glasbergen][7]][8]

Change is hard for many organizations and individuals; moreover, meaningful change does not happen overnight. It is important to understand how people and organizations process change. In the [Kotter 8-Step Process for Leading Change][9], it's about articulating the need for a change, creating urgency around the why, then starting small and finding and developing internal champions, _before_ trying to prove wins or, in this case, purchasing a tool.

If people in an organization are not aware of a problem or that there's a better way of operating, it will be hard to get the buy-in necessary and motivate team members to adopt new ideas and take action. People may be perfectly content with the current state; perhaps the processes in place are adequate or, at a minimum, the current state is a known factor. However, for the overall team to function well and achieve its shared goal in a faster, more agile way, new mechanisms must be put into place first.

![Kotter 8-Step Process for Leading Change][10]

### How to be a DevOps champion

Being a champion in the DevOps world means going beyond the win and delving deeper into the team/organizational structure and culture, thereby identifying outlying issues beyond tools, and then working with others to embrace the right change that leads to defined results. Go back to the beginning and define the end goal. Here are a few sample questions you can ask to get started:

  * What are your core values?
  * Why are you trying to become a more agile company or team?
  * What obstacles is your team or organization facing?
  * What will the tool or process accomplish?
  * How are people communicating and collaborating?
  * Are there silos and why?
  * How are you championing the customer?
  * Are employees empowered?

After defining the end state, find other like-minded individuals to be part of your champion team, and don't lose sight of what you are trying to accomplish. When making any change, make sure to start small, e.g., with one team or a test environment. By starting small and building on the wins, internal champions will start creating themselves.

Remember, companies are happy and eager to try to sell you DevOps, but at the end of the day, DevOps is not a product. It is a fully embraced methodology and mindset of automation, collaboration, people, and processes.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/you-cant-buy-devops

作者:[Julie Gunderson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/juliegund
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee-shop-devops.png?itok=CPefJZJL (Coffee shop photo)
[2]: https://opensource.com/sites/default/files/uploads/devops-pricetag.jpg (DevOps price tag graphic)
[3]: https://opensource.com/resources/devops
[4]: https://xebialabs.com/periodic-table-of-devops-tools/
[5]: https://opensource.com/sites/default/files/uploads/periodic-table-of-devops-tools.png (Periodic Table of DevOps)
[6]: https://puppet.com/resources/whitepaper/state-of-devops-report
[7]: https://opensource.com/sites/default/files/uploads/cartoon.png (Change management comic by Randy Glasbergen)
[8]: https://images.app.goo.gl/JiMaWAenNkLcmkZJ9
[9]: https://www.kotterinc.com/8-steps-process-for-leading-change/
[10]: https://opensource.com/sites/default/files/uploads/kotter-process.png (Kotter 8-Step Process for Leading Change)

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 infrastructure performance and scaling tools you should be using)
[#]: via: (https://opensource.com/article/19/6/performance-scaling-tools)
[#]: author: (Pradeep Surisetty, Peter Portante https://opensource.com/users/psuriset/users/aakarsh/users/portante/users/anaga)

7 infrastructure performance and scaling tools you should be using
======

These open source tools will help you feel confident in your infrastructure's performance as it scales up.

![Several images of graphs.][1]

[Sysadmins][2], [site reliability engineers][3] (SREs), and cloud operators all too often struggle to feel confident in their infrastructure as it scales up. Also too often, they think the only way to solve their challenges is to write a tool for in-house use. Fortunately, there are options. There are many open source tools available to test an infrastructure's performance. Here are my favorites.

### Pbench

Pbench is a performance testing harness to make executing benchmarks and performance tools easier and more convenient. In short, it:

  * Excels at running micro-benchmarks on large scales of hosts (bare-metal, virtual machines, containers, etc.) while automating a potentially large set of benchmark parameters
  * Focuses on installing, configuring, and executing benchmark code and performance tools and not on provisioning or orchestrating the testbed (e.g., OpenStack, RHEV, RHEL, Docker, etc.)
  * Is designed to work in concert with provisioning tools like BrowBeat or Ansible playbooks

Pbench's [documentation][4] includes installation and user guides, and the code is [maintained on GitHub][5], where the team welcomes contributions and issues.

### Ripsaw

Baselining is a critical aspect of infrastructure reliability. Ripsaw is a performance benchmark Operator for launching workloads on Kubernetes. It deploys as a Kubernetes Operator that then deploys common workloads, including specific applications (e.g., Couchbase) or general performance tests (e.g., Uperf) to measure and establish a performance baseline.

Ripsaw is [maintained on GitHub][6]. You can also find its maintainers on the [Kubernetes Slack][7], where they are active contributors.

### OpenShift Scale

The collection of tools in OpenShift Scale, OpenShift's open source solution for performance testing, does everything from spinning up OpenShift on OpenStack installations (TripleO Install and ShiftStack Install) to installing on Amazon Web Services (AWS) to providing containerized tooling, like running Pbench on your cluster or doing cluster limits testing, network tests, storage tests, metric tests with Prometheus, logging, and concurrent build testing.

Scale's CI suite is flexible enough to both add workloads and include your workloads when deploying to Azure or anywhere else you might run. You can see the full suite of tools [on GitHub][8].

### Browbeat

[Browbeat][9] calls itself "a performance tuning and analysis tool for OpenStack." You can use it to analyze and tune the deployment of your workloads. It also automates the deployment of standard monitoring and data analysis tools like Grafana and Graphite. Browbeat is [maintained on GitHub][10].

### Smallfile

Smallfile is a filesystem workload generator targeted for scale-out, distributed storage. It has been used to test a number of open filesystem technologies, including GlusterFS, CephFS, Network File System (NFS), Server Message Block (SMB), and OpenStack Cinder volumes. It is [maintained on GitHub][11].

### Ceph Benchmarking Tool

Ceph Benchmarking Tool (CBT) is a testing harness that can automate tasks for testing [Ceph][12] cluster performance. It records system metrics with collectl, and it can collect more information with tools including perf, blktrace, and valgrind. CBT can also do advanced testing that includes automated object storage daemon outages, erasure-coded pools, and cache-tier configurations.

Contributors have extended CBT to use [Pbench monitoring tools and Ansible][13] and to run the [Smallfile benchmark][14]. A separate Grafana visualization dashboard uses Elasticsearch data generated by [Automated Ceph Test][15].

### satperf

Satellite-performance (satperf) is a set of Ansible playbooks and helper scripts to deploy Satellite 6 environments and measure the performance of selected actions, such as concurrent registrations, remote execution, Puppet operations, repository synchronizations and promotions, and more. You can find satperf [on GitHub][16].

### Conclusion

Sysadmins, SREs, and cloud operators face a wide variety of challenges as they work to scale their infrastructure, but luckily there is also a wide variety of tools to help them get past those common issues. Any of these seven tools should help you get started testing your infrastructure's performance as it scales.

Are there other open source performance and scaling tools that should be on this list? Add your favorites in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/performance-scaling-tools

作者:[Pradeep Surisetty, Peter Portante][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/psuriset/users/aakarsh/users/portante/users/anaga
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf (Several images of graphs.)
[2]: /16/12/yearbook-10-open-source-sysadmin-tools
[3]: /article/19/5/life-performance-engineer
[4]: https://distributed-system-analysis.github.io/pbench/
[5]: https://github.com/distributed-system-analysis/pbench
[6]: https://github.com/cloud-bulldozer/ripsaw
[7]: https://github.com/cloud-bulldozer/ripsaw#community
[8]: https://github.com/openshift-scale
[9]: https://browbeatproject.org/
[10]: https://github.com/cloud-bulldozer/browbeat
[11]: https://github.com/distributed-system-analysis/smallfile
[12]: https://ceph.com/
[13]: https://github.com/acalhounRH/cbt
[14]: https://nuget.pkg.github.com/bengland2/cbt/tree/smallfile
[15]: https://github.com/acalhounRH/automated_ceph_test
[16]: https://github.com/redhat-performance/satperf
@ -0,0 +1,97 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The state of open source translation tools for contributors to your project)
|
||||
[#]: via: (https://opensource.com/article/19/6/translation-platforms-matter)
|
||||
[#]: author: (Jean-Baptiste Holcroft https://opensource.com/users/jibec/users/jibec)
|
||||
|
||||
The state of open source translation tools for contributors to your project
|
||||
======
|
||||
There are almost 100 languages with more than 10 million speakers. How
|
||||
many of your active contributors speak one?
|
||||
![Team of people around the world][1]
|
||||
|
||||
In the world of free software, many people speak English: It is the **one** language. English helps us cross borders to meet others. However, this language is also a barrier for the majority of people.
|
||||
|
||||
Some master it while others don't. Complex English terms are, in general, a barrier to the understanding and propagation of knowledge. Whenever you use an uncommon English word, ask yourself about your real mastery of what you are explaining, and the unintentional barriers you build in the process.
|
||||
|
||||
_“If you talk to a man in a language he understands, that goes to his head. If you talk to him in his language, that goes to his heart.”_ — Nelson Mandela
|
||||
|
||||
We are 7 billion humans, and less than 400 million of us are English natives. The wonders done day after day by free/libre open source contributors deserve to reach the hearts of the [6.6 billion people][2] for whom English is not their mother tongue. In this day and age, we have the technology to help translate all types of content: websites, documentation, software, and even sounds and images. Even if I do not translate of all of these media personally, I do not know of any real limits. The only prerequisite for getting this content translated is both the willingness of the creators and the collective will of the users, customers, and—in the case of free software—the contributors.
|
||||
|
||||
### Why successful translation requires real tooling
|
||||
|
||||
Some projects are stuck in the stone ages and require translators to use [Git][3], [Mercurial][4], or other development tools. These tools don’t meet the needs of translation communities. Let’s help these projects evolve, as discussed in the section "A call for action."
|
||||
|
||||
Other projects have integrated translation platforms, which are key tools for linguistic diversity and existence. These tools understand the needs of translators and serve as a bridge to the development world. They make translation contribution easy, and keep those doing the translations motivated over time.
|
||||
|
||||
This aspect is important: There are almost 100 languages with more than 10 million speakers. Do you really believe that your project can have an active contributor for each of these languages? Unless you are a huge organization, like Mozilla or LibreOffice, there is no chance. The translators who help you also help two, ten, or a hundred other projects. They need tools to be effective, such as [translation memories][5], progress reports, alerts, ways to collaborate, and knowing that what they do is useful.
|
||||
|
||||
### Translation platforms are in trouble
|
||||
|
||||
However, the translation platforms distributed as free software are disappearing in favor of closed platforms. These platforms set their rules and efforts according to what will bring them the most profit.
|
||||
|
||||
Linguistic and cultural diversity does not bring money: It opens doors and allows local development. It emancipates populations and can ensure the survival of certain cultures and languages. In the 21st century, is your culture really alive if it does not exist in cyberspace?
|
||||
|
||||
The short history of translation platforms is not pretty:
|
||||
|
||||
* In 2011, Transifex ceased to be open when they decided to no longer publish their source code.
|
||||
* Since September 2017, the [Pootle][6] project seems to have stalled.
|
||||
* In October 2018, the [Zanata][7] project shut down because it had not succeeded in building a community of technical contributors capable of taking over when corporate funding was halted.
|
||||
|
||||
|
||||
|
||||
In particular, the [Fedora Project][8]—which I work closely with—has ridden the roller coaster from Transifex to Zanata and is now facing another move and more challenges.
|
||||
|
||||
Two significant platforms remain:
|
||||
|
||||
* [Pontoon][9]: Dedicated to the Mozilla use case (large community, common project).
|
||||
* [Weblate][10]: A general-purpose platform created by developer [Michal Čihař][11].
|
||||
|
||||
|
||||
|
||||
These two tools are of high quality and are technically up-to-date, but Mozilla’s Pontoon is not designed to appeal to the greatest number of people. This project is dedicated to the specific challenges Mozilla faces.
|
||||
|
||||
### A call for action
|
||||
|
||||
There is an urgent need for large communities to share resources to perpetuate Weblate as free software and promote its adoption. Support is also needed for other tools, such as [po4a][12], the [Translate Toolkit][13], and even our old friend [gettext][14]. Will we accept a sword of Damocles hanging over our heads? Will we continue to consume years of free work without giving a cent in return? Or will we take the lead in bringing security to our communities?
|
||||
|
||||
**What you can do as a contributor**: Promote Weblate as an open source translation platform, and help your beloved project use it. [Hosting is free for open source projects][15].
|
||||
|
||||
**What you can do as a developer**: Make sure all of your project’s content can be translated into any language. Think about this issue from the beginning, as not all tools provide the same internationalization features.
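For example, with the classic gettext workflow, strings are marked as translatable in the source code and then extracted into a template catalog that translators (or a platform like Weblate) can work from. Here is a minimal sketch, assuming a Python project with sources under `src/`; the paths and the French locale are illustrative:

```
# Mark strings in the source as _("..."), then extract them into a template:
xgettext --language=Python --keyword=_ -o po/messages.pot src/*.py

# Create a French catalog from the template for translators to fill in:
msginit --locale=fr --input=po/messages.pot -o po/fr.po

# Later, merge newly added strings into the existing catalog:
msgmerge --update po/fr.po po/messages.pot
```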
|
||||
|
||||
**What you can do as an entity with a budget**: Whether you’re a company or just part of the community, pay for the support, hosting, or development of the tools you use. Even if the amount is symbolic, doing this will lower the risks. In particular, [here is the info for Weblate][16]. (Note: I’m not involved with the Weblate project other than bug reports and translation.)
|
||||
|
||||
**What to do if you’re a language enthusiast**: Contact me to help create an open source language organization to promote our tools and their usage, and find money to fund them.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/6/translation-platforms-matter
|
||||
|
||||
作者:[Jean-Baptiste Holcroft][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jibec/users/jibec
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_global_people_gis_location.png?itok=Rl2IKo12 (Team of people around the world)
|
||||
[2]: https://www.ethnologue.com/statistics/size
|
||||
[3]: https://git-scm.com
|
||||
[4]: https://www.mercurial-scm.org
|
||||
[5]: https://en.wikipedia.org/wiki/Translation_memory
|
||||
[6]: http://pootle.translatehouse.org
|
||||
[7]: http://zanata.org
|
||||
[8]: https://getfedora.org
|
||||
[9]: https://github.com/mozilla/pontoon/
|
||||
[10]: https://weblate.org
|
||||
[11]: https://cihar.com
|
||||
[12]: https://po4a.org
|
||||
[13]: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/
|
||||
[14]: https://www.gnu.org/software/gettext/
|
||||
[15]: http://hosted.weblate.org/
|
||||
[16]: https://weblate.org/en/hosting/
|
@ -0,0 +1,307 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Three Ways to Lock and Unlock User Account in Linux)
|
||||
[#]: via: (https://www.2daygeek.com/lock-unlock-disable-enable-user-account-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Three Ways to Lock and Unlock User Account in Linux
|
||||
======
|
||||
|
||||
If a password policy is already implemented in your organization, then you don’t need to look for these options.
|
||||
|
||||
However, if you have set up a lockout period of, say, 24 hours, you might need to unlock a user’s account manually before that period expires.
|
||||
|
||||
This tutorial will help you manually lock and unlock user accounts in Linux.
|
||||
|
||||
This can be done in three ways, using the following two Linux commands.
|
||||
|
||||
* **`passwd`:** The passwd command is used to update a user’s authentication tokens. This task is achieved by calling Linux-PAM and the libuser API.
|
||||
* **`usermod`:** The usermod command is used to modify or update a given user’s account information, for example to add a user to a specific group.
|
||||
|
||||
|
||||
|
||||
To demonstrate this, we will use the `daygeek` user account. Let’s see how to do it, step by step.
|
||||
|
||||
Note that you have to use the user account that you need to lock or unlock instead of ours.
|
||||
|
||||
You can check whether the given user account exists in the system by using the `id` command. Yes, my account exists in the system:
|
||||
|
||||
```
|
||||
# id daygeek
|
||||
|
||||
uid=2240(daygeek) gid=2243(daygeek) groups=2243(daygeek),2244(ladmin)
|
||||
```
|
||||
|
||||
### Method-1: How to Lock, Unlock, and Check the Status of a Given User Account in Linux Using the passwd Command?
|
||||
|
||||
The passwd command is one of the commands most frequently used by Linux administrators.
|
||||
|
||||
It is used to update a user’s authentication tokens in the `/etc/shadow` file.
|
||||
|
||||
Run the passwd command with the `-l` switch to lock the given user account.
|
||||
|
||||
```
|
||||
# passwd -l daygeek
|
||||
|
||||
Locking password for user daygeek.
|
||||
passwd: Success
|
||||
```
|
||||
|
||||
You can check the locked status of the account either with the passwd command or by grepping the given user name from the `/etc/shadow` file.
|
||||
|
||||
Checking the lock status of the user account using the passwd command:
|
||||
|
||||
```
|
||||
# passwd -S daygeek
|
||||
or
|
||||
# passwd --status daygeek
|
||||
|
||||
daygeek LK 2019-05-30 7 90 7 -1 (Password locked.)
|
||||
```
|
||||
|
||||
This will output short status information about the password for the given account:
|
||||
|
||||
* **`LK:`**` ` Password locked
|
||||
* **`NP:`**` ` No password
|
||||
* **`PS:`**` ` Password set
|
||||
|
||||
|
||||
|
||||
Checking the lock status of the user account using the `/etc/shadow` file. Two exclamation marks are added in front of the password hash if the account is locked:
|
||||
|
||||
```
|
||||
# grep daygeek /etc/shadow
|
||||
|
||||
daygeek:!!$6$tGvVUhEY$PIkpI43HPaEoRrNJSRpM3H0YWOsqTqXCxtER6rak5PMaAoyQohrXNB0YoFCmAuh406n8XOvBBldvMy9trmIV00:18047:7:90:7:::
|
||||
```
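For reference, the colon-separated fields of a `/etc/shadow` entry (see `man 5 shadow`), annotated against the example above:

```
# daygeek : !!$6$...   -> login name : password hash ("!!" marks it locked)
# 18047                -> date of last password change (days since 1970-01-01)
# 7 : 90 : 7           -> minimum age : maximum age : warning period (in days)
# (empty) : (empty)    -> inactivity period : account expiration date (unset here)
```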
|
||||
|
||||
Run the passwd command with the `-u` switch to unlock the given user account.
|
||||
|
||||
```
|
||||
# passwd -u daygeek
|
||||
|
||||
Unlocking password for user daygeek.
|
||||
passwd: Success
|
||||
```
|
||||
|
||||
### Method-2: How to Lock, Unlock, and Check the Status of a Given User Account in Linux Using the usermod Command?
|
||||
|
||||
The usermod command is likewise frequently used by Linux administrators.
|
||||
|
||||
The usermod command is used to modify or update a given user’s account information, for example to add a user to a specific group.
|
||||
|
||||
Run the usermod command with the `-L` switch to lock the given user account.
|
||||
|
||||
```
|
||||
# usermod --lock daygeek
|
||||
or
|
||||
# usermod -L daygeek
|
||||
```
|
||||
|
||||
You can check the locked status of the account either with the passwd command or by grepping the given user name from the `/etc/shadow` file.
|
||||
|
||||
Checking the lock status of the user account using the passwd command:
|
||||
|
||||
```
|
||||
# passwd -S daygeek
|
||||
or
|
||||
# passwd --status daygeek
|
||||
|
||||
daygeek LK 2019-05-30 7 90 7 -1 (Password locked.)
|
||||
```
|
||||
|
||||
This will output short status information about the password for the given account:
|
||||
|
||||
* **`LK:`**` ` Password locked
|
||||
* **`NP:`**` ` No password
|
||||
* **`PS:`**` ` Password set
|
||||
|
||||
|
||||
|
||||
Checking the lock status of the user account using the `/etc/shadow` file. Two exclamation marks are added in front of the password hash if the account is locked:
|
||||
|
||||
```
|
||||
# grep daygeek /etc/shadow
|
||||
|
||||
daygeek:!!$6$tGvVUhEY$PIkpI43HPaEoRrNJSRpM3H0YWOsqTqXCxtER6rak5PMaAoyQohrXNB0YoFCmAuh406n8XOvBBldvMy9trmIV00:18047:7:90:7:::
|
||||
```
|
||||
|
||||
Run the usermod command with the `-U` switch to unlock the given user account.
|
||||
|
||||
```
|
||||
# usermod --unlock daygeek
|
||||
or
|
||||
# usermod -U daygeek
|
||||
```
|
||||
|
||||
### Method-3: How to Disable and Enable SSH Access for a Given User Account in Linux Using the usermod Command?
|
||||
|
||||
The usermod command is likewise frequently used by Linux administrators.
|
||||
|
||||
The usermod command is used to modify or update a given user’s account information, for example to add a user to a specific group.
|
||||
|
||||
Alternatively, this can be done by assigning the `nologin` shell to the given user. To do so, run the command below.
|
||||
|
||||
```
|
||||
# usermod -s /sbin/nologin daygeek
|
||||
```
|
||||
|
||||
You can check the disabled user account’s details by grepping the given user name from the `/etc/passwd` file:
|
||||
|
||||
```
|
||||
# grep daygeek /etc/passwd
|
||||
|
||||
daygeek:x:2240:2243::/home/daygeek:/sbin/nologin
|
||||
```
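With the `nologin` shell in place, interactive logins are refused. As an illustration, an SSH attempt now typically fails with a message along these lines (the `testserver` host name is hypothetical):

```
$ ssh daygeek@testserver
This account is currently not available.
Connection to testserver closed.
```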
|
||||
|
||||
We can re-enable the user’s SSH access by assigning the old shell back:
|
||||
|
||||
```
|
||||
# usermod -s /bin/bash daygeek
|
||||
```
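One caveat worth noting: locking only the password (Method-1 and Method-2) usually does not stop logins that authenticate with SSH keys, whereas changing the shell as shown above does. Another common way to disable an account entirely, not covered by the commands above, is to expire it with `chage`:

```
# Disable the account by setting its expiry date to the distant past:
chage -E 0 daygeek

# Re-enable it by removing the expiry date:
chage -E -1 daygeek
```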
|
||||
|
||||
### How to Lock, Unlock, and Check the Status of Multiple User Accounts in Linux Using a Shell Script?
|
||||
|
||||
If you would like to lock/unlock more than one account, then you need a small script.
|
||||
|
||||
Yes, we can write a small shell script to perform this task.
|
||||
|
||||
Create the user list, with each user on a separate line:
|
||||
|
||||
```
|
||||
$ cat user-lists.txt
|
||||
|
||||
u1
|
||||
u2
|
||||
u3
|
||||
u4
|
||||
u5
|
||||
```
|
||||
|
||||
Use the following shell script to lock multiple user accounts in Linux.
|
||||
|
||||
```
|
||||
# vi user-lock.sh
|
||||
|
||||
#!/bin/bash
|
||||
for user in $(cat user-lists.txt)
|
||||
do
|
||||
passwd -l "$user"
|
||||
done
|
||||
```
|
||||
|
||||
Set executable permission on the `user-lock.sh` file:
|
||||
|
||||
```
|
||||
# chmod +x user-lock.sh
|
||||
```
|
||||
|
||||
Finally, run the script:
|
||||
|
||||
```
|
||||
# sh user-lock.sh
|
||||
|
||||
Locking password for user u1.
|
||||
passwd: Success
|
||||
Locking password for user u2.
|
||||
passwd: Success
|
||||
Locking password for user u3.
|
||||
passwd: Success
|
||||
Locking password for user u4.
|
||||
passwd: Success
|
||||
Locking password for user u5.
|
||||
passwd: Success
|
||||
```
|
||||
|
||||
Use the following shell script to check the lock status of these user accounts in Linux.
|
||||
|
||||
```
|
||||
# vi user-lock-status.sh
|
||||
|
||||
#!/bin/bash
|
||||
for user in $(cat user-lists.txt)
|
||||
do
|
||||
passwd -S "$user"
|
||||
done
|
||||
```
|
||||
|
||||
Set executable permission on the `user-lock-status.sh` file:
|
||||
|
||||
```
|
||||
# chmod +x user-lock-status.sh
|
||||
```
|
||||
|
||||
Finally, run the script:
|
||||
|
||||
```
|
||||
# sh user-lock-status.sh
|
||||
|
||||
u1 LK 2019-06-10 0 99999 7 -1 (Password locked.)
|
||||
u2 LK 2019-06-10 0 99999 7 -1 (Password locked.)
|
||||
u3 LK 2019-06-10 0 99999 7 -1 (Password locked.)
|
||||
u4 LK 2019-06-10 0 99999 7 -1 (Password locked.)
|
||||
u5 LK 2019-06-10 0 99999 7 -1 (Password locked.)
|
||||
```
|
||||
|
||||
Use the following shell script to unlock multiple user accounts in Linux.
|
||||
|
||||
```
|
||||
# vi user-unlock.sh
|
||||
|
||||
#!/bin/bash
|
||||
for user in $(cat user-lists.txt)
|
||||
do
|
||||
passwd -u "$user"
|
||||
done
|
||||
```
|
||||
|
||||
Set executable permission on the `user-unlock.sh` file:
|
||||
|
||||
```
|
||||
# chmod +x user-unlock.sh
|
||||
```
|
||||
|
||||
Finally, run the script:
|
||||
|
||||
```
|
||||
# sh user-unlock.sh
|
||||
|
||||
Unlocking password for user u1.
|
||||
passwd: Success
|
||||
Unlocking password for user u2.
|
||||
passwd: Success
|
||||
Unlocking password for user u3.
|
||||
passwd: Success
|
||||
Unlocking password for user u4.
|
||||
passwd: Success
|
||||
Unlocking password for user u5.
|
||||
passwd: Success
|
||||
```
|
||||
|
||||
Run the same `user-lock-status.sh` shell script to verify that these user accounts have been unlocked:
|
||||
|
||||
```
|
||||
# sh user-lock-status.sh
|
||||
|
||||
u1 PS 2019-06-10 0 99999 7 -1 (Password set, SHA512 crypt.)
|
||||
u2 PS 2019-06-10 0 99999 7 -1 (Password set, SHA512 crypt.)
|
||||
u3 PS 2019-06-10 0 99999 7 -1 (Password set, SHA512 crypt.)
|
||||
u4 PS 2019-06-10 0 99999 7 -1 (Password set, SHA512 crypt.)
|
||||
u5 PS 2019-06-10 0 99999 7 -1 (Password set, SHA512 crypt.)
|
||||
```
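If you prefer a single script for all three operations, the lock, status, and unlock loops above can be folded into one. Below is a minimal sketch; the `user-ctl.sh` name and its `lock|unlock|status` argument are our own additions, not part of the original commands:

```
#!/bin/bash
# user-ctl.sh - lock, unlock, or report the users listed in user-lists.txt

case "$1" in
    lock)   opt=-l ;;
    unlock) opt=-u ;;
    status) opt=-S ;;
    *) echo "Usage: sh user-ctl.sh {lock|unlock|status}"; exit 1 ;;
esac

while read -r user
do
    passwd "$opt" "$user"
done < user-lists.txt
```

For example, `sh user-ctl.sh status` prints the password status of every listed account.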
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/lock-unlock-disable-enable-user-account-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
@ -0,0 +1,168 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Modrisco)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to send email from the Linux command line)
|
||||
[#]: via: (https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
如何用 Linux 命令行发电子邮件
|
||||
======
|
||||
Linux 提供了几个可以让你通过终端发送电子邮件的命令,下面来展示一些有趣的方法。
|
||||
|
||||
![Molnia/iStock][1]
|
||||
|
||||
Linux 可以用多种方式通过命令行发送电子邮件。有一些方法十分简单,有一些相对会复杂一些,不过仍旧提供了很多有用的特性。选择哪一种方式取决于你想要什么 —— 向同事快速发送消息,还是向一批人群发带有附件的更复杂的信息。接下来看一看几种可行方案:
|
||||
|
||||
### mail
|
||||
|
||||
发送一条简单消息最便捷的 Linux 命令是 `mail`。假设你需要提醒老板你今天得早点走,你可以使用这样的一条命令:
|
||||
|
||||
```
|
||||
$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" myboss
|
||||
```
|
||||
|
||||
另一种方式是从一个文件中提取出你想要发送的文本信息:
|
||||
|
||||
```
|
||||
$ mail -s "Reminder:Leaving early" myboss < reason4leaving
|
||||
```
|
||||
|
||||
在以上两种情况中,你都可以通过 `-s` 选项来为邮件添加标题。
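顺便一提,不少 `mail`/`mailx` 实现还支持用 `-c` 选项添加抄送收件人。下面是一个简单示例(`myteam` 这个收件人仅为演示假设):

```
$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" -c myteam myboss
```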
|
||||
|
||||
### sendmail
|
||||
|
||||
使用 `sendmail` 命令可以快速发送一封不包含标题的消息(用目标收件人替换 `recip`):
|
||||
|
||||
```
|
||||
$ echo "leaving now" | sendmail recip
|
||||
```
|
||||
|
||||
你可以用这条命令发送一条只有标题,没有内容的信息:
|
||||
|
||||
```
|
||||
$ echo "Subject: leaving now" | sendmail recip
|
||||
```
|
||||
|
||||
你也可以用 `sendmail` 发送一条包含一条标题行的完整信息。不过使用这个方法时,你的标题行会被添加到要发送的文件中,如下例所示:
|
||||
|
||||
```
|
||||
Subject: Requested lyrics
|
||||
I would just like to say that, in my opinion, longer hair and other flamboyant
|
||||
affectations of appearance are nothing more ...
|
||||
```
|
||||
|
||||
你也可以直接发送这样的文件(`lyrics` 文件包含标题和正文):
|
||||
|
||||
```
|
||||
$ sendmail recip < lyrics
|
||||
```
|
||||
|
||||
`sendmail` 的输出可能会很冗长。如果你感到好奇并希望查看发送系统和接收系统之间的交互,请添加 `-v`(verbose)选项:
|
||||
|
||||
```
|
||||
$ sendmail -v recip@emailsite.com < lyrics
|
||||
```
|
||||
|
||||
### mutt
|
||||
|
||||
`mutt` 是通过命令行发送邮件的一个很好的工具,在使用前你需要安装它。 `mutt` 的一个很方便的优势就是它允许你在邮件中添加附件。
|
||||
|
||||
使用 `mutt` 发送一条快速信息:
|
||||
|
||||
```
|
||||
$ echo "Please check last night's backups" | mutt -s "backup check" recip
|
||||
```
|
||||
|
||||
从文件中获取内容:
|
||||
|
||||
```
|
||||
$ mutt -s "Agenda" recip < agenda
|
||||
```
|
||||
|
||||
使用 `-a` 选项在 `mutt` 中添加附件。你甚至可以添加不止一个附件 —— 如下一条命令所示:
|
||||
|
||||
```
|
||||
$ mutt -s "Agenda" recip -a agenda -a speakers < msg
|
||||
```
|
||||
|
||||
在以上的命令中,`msg` 文件包含了邮件中的正文。如果你没有其他补充的内容,你可以这样来代替之前的命令:
|
||||
|
||||
```
|
||||
$ echo "" | mutt -s "Agenda" recip -a agenda -a speakers
|
||||
```
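需要注意的是,在较新版本的 `mutt` 中,一般要求把附件选项放在收件人之前,并用 `--` 把附件列表与收件人分隔开。基于上面同样的文件,写法大致如下:

```
$ mutt -s "Agenda" -a agenda -a speakers -- recip < msg
```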
|
||||
|
||||
`mutt` 另一个有用的功能是可以添加抄送(`-c`)和密送(`-b`)。
|
||||
|
||||
```
|
||||
$ mutt -s "Minutes from last meeting" recip@somesite.com -c myboss < mins
|
||||
```
|
||||
|
||||
### telnet
|
||||
|
||||
如果你想深入了解发送电子邮件的细节,你可以使用 `telnet` 来手动进行电子邮件交互操作。但正如人们所说,你需要先“学会行话”。邮件服务器期望收到一系列命令,其中包括自我介绍(`EHLO` 命令)、提供发件人(`MAIL FROM` 命令)、指定收件人(`RCPT TO` 命令),然后发送消息正文(`DATA`),并以单独一行的 `.` 结束消息。并非所有的电子邮件服务器都会响应这些请求,此方法通常仅用于故障排除。
|
||||
|
||||
```
|
||||
$ telnet emailsite.org 25
|
||||
Trying 192.168.0.12...
|
||||
Connected to emailsite.
|
||||
Escape character is '^]'.
|
||||
220 localhost ESMTP Sendmail 8.15.2/8.15.2/Debian-12; Wed, 12 Jun 2019 16:32:13 -0400; (No UCE/UBE) logging access from: mysite(OK)-mysite [192.168.0.12]
|
||||
EHLO mysite.org <== introduce yourself
|
||||
250-localhost Hello mysite [127.0.0.1], pleased to meet you
|
||||
250-ENHANCEDSTATUSCODES
|
||||
250-PIPELINING
|
||||
250-EXPN
|
||||
250-VERB
|
||||
250-8BITMIME
|
||||
250-SIZE
|
||||
250-DSN
|
||||
250-ETRN
|
||||
250-AUTH DIGEST-MD5 CRAM-MD5
|
||||
250-DELIVERBY
|
||||
250 HELP
|
||||
MAIL FROM: me@mysite.org <== specify sender
|
||||
250 2.1.0 shs@mysite.org... Sender ok
|
||||
RCPT TO: recip <== specify recipient
|
||||
250 2.1.5 recip... Recipient ok
|
||||
DATA <== start message
|
||||
354 Enter mail, end with "." on a line by itself
|
||||
This is a test message. Please deliver it for me.
|
||||
. <== end message
|
||||
250 2.0.0 x5CKWDds029287 Message accepted for delivery
|
||||
quit <== end exchange
|
||||
```
|
||||
|
||||
### 向多个收件人发送电子邮件
|
||||
|
||||
如果你希望通过 Linux 命令行向一大组收件人发送电子邮件,你可以使用一个循环来帮助你完成任务,如下面应用在 `mutt` 中的例子:
|
||||
|
||||
```
|
||||
$ for recip in `cat recips`
|
||||
do
|
||||
mutt -s "Minutes from May meeting" $recip < May_minutes
|
||||
done
|
||||
```
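如果收件人清单中可能混入多余的空白字符,用 `while read` 逐行读取会更稳妥一些。下面是一个基于同样假设(收件人保存在 `recips` 文件中,每行一个)的写法:

```
$ while read -r recip
do
    mutt -s "Minutes from May meeting" "$recip" < May_minutes
done < recips
```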
|
||||
|
||||
### 总结
|
||||
|
||||
有很多方法可以从 Linux 命令行发送电子邮件。有些工具提供了相当多的选项。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Modrisco](https://github.com/Modrisco)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2017/08/email_image_blue-100732096-large.jpg
|
||||
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
|
||||
[3]: https://www.facebook.com/NetworkWorld/
|
||||
[4]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,92 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wahailin)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Open Source Slack Alternative Mattermost Gets $50M Funding)
|
||||
[#]: via: (https://itsfoss.com/mattermost-funding/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Slack 开源替代品 Mattermost 获得 5000 万美元融资
|
||||
======
|
||||
|
||||
[Mattermost][1],作为 [Slack][2] 的开源替代品,获得了 5000 万美元的 B 轮融资。这个消息极其令人振奋。
|
||||
|
||||
[Slack][3] 是一个基于云的团队内部沟通协作软件。企业、创业公司、甚至全球化的开源项目都在使用 Slack 进行同事及项目成员间的沟通。
|
||||
|
||||
[Slack 在 2019 年 6 月的估值为 200 亿美元][4],由此可见其在科技行业的巨大影响,当然也就有更多产品想与之竞争。
|
||||
|
||||
### 5000 万美元开源项目
|
||||
|
||||
![][5]
|
||||
|
||||
就我个人而言,我之前并不了解 Mattermost 这个产品,但 [VentureBeat][6] 对这则新闻的报道激发了我的好奇心。这次融资由 [Y Combinator][7] 与一家新的投资方 BattleVentures 牵头,现有投资者 Redpoint 和 S28 Capital 共同加入。
|
||||
|
||||
|
||||
在[公告][8]中,他们也提到:
|
||||
|
||||
> 今天的公告中,Mattermost 成为了 YC 历次 B 轮投资中投资额最高的项目。
|
||||
|
||||
下面是摘自 VentureBeat 的报道,你可以从中了解到一些细节:
|
||||
|
||||
> 本次资本注入,是继 2017 年 2 月的种子轮 350 万融资和今年 2 月份的 2000 万 A 轮融资之后进行的,并使得这家总部位于美国加州帕罗奥图(Palo Alto)的公司融资总额达到了约 7000 万美元。
|
||||
|
||||
如果你对他们的规划感兴趣,可以阅读[官方公告][8]。
|
||||
|
||||
尽管听起来很不错,但可能你并不知道 Mattermost 是什么。所以我们先来作个简单了解:
|
||||
|
||||
### Mattermost 快览
|
||||
|
||||
![Mattermost][9]
|
||||
|
||||
前面已经提到,Mattermost 是 Slack 的开源替代品。
|
||||
|
||||
乍一看,它几乎照搬了 Slack 的界面外观。没错,这正是关键所在:你将拥有一个自己乐于使用的软件的开源方案。
|
||||
|
||||
它甚至集成了一些流行的 DevOps 工具,如 Git、Bots 和 CI/CD。除了这些功能外,它还关注安全性和隐私。
|
||||
|
||||
同样,和 Slack 类似,它支持和多种应用程序与服务的集成。
|
||||
|
||||
听起来很有前景?我也这么认为。
|
||||
|
||||
#### 定价:企业版和团队版
|
||||
|
||||
如果你想让 Mattermost 为你托管该服务(或获得优先支持),应选择企业版。但如果你想自行托管、不想付费,可以下载[团队版][11],并将其安装到基于 Linux 的云服务器或 VPS 上。
|
||||
|
||||
当然,我们不会在此对定价进行深入探究。我确实想提及的一点是:企业版并不昂贵。
|
||||
|
||||
![][12]
|
||||
|
||||
**总结**
|
||||
|
||||
Mattermost 无疑相当出色。有了 5000 万美元巨额资金的注入,对于那些正在寻找一个安全、并能提供高效团队协作支持的开源通讯平台的用户来说,Mattermost 很可能成为下一个备受瞩目的产品。
|
||||
|
||||
你觉得这条新闻怎么样?对你来说有价值吗?你是否已了解 Mattermost 是 Slack 的替代品?
|
||||
|
||||
请在下面的评论中给出你的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/mattermost-funding/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wahailin](https://github.com/wahailin)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://mattermost.com/
|
||||
[2]: https://itsfoss.com/slack-use-linux/
|
||||
[3]: https://slack.com
|
||||
[4]: https://www.ft.com/content/98747b36-9368-11e9-aea1-2b1d33ac3271
|
||||
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/mattermost-wallpaper.png?resize=800%2C450&ssl=1
|
||||
[6]: https://venturebeat.com/2019/06/19/mattermost-raises-50-million-to-advance-its-open-source-slack-alternative/
|
||||
[7]: https://www.ycombinator.com/
|
||||
[8]: https://mattermost.com/blog/yc-leads-50m-series-b-in-mattermost-as-open-source-slack-alternative/
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/mattermost-screenshot.jpg?fit=800%2C497&ssl=1
|
||||
[10]: https://itsfoss.com/zettlr-markdown-editor/
|
||||
[11]: https://mattermost.com/download/
|
||||
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/mattermost-enterprise-plan.jpg?fit=800%2C325&ssl=1
|