Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-06-23 08:19:00 +08:00
commit 23b66b67d7
30 changed files with 4758 additions and 30 deletions


@ -1,23 +1,22 @@
[#]: collector: (lujun9972)
[#]: translator: (luuming)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11004-1.html)
[#]: subject: (5 Good Open Source Speech Recognition/Speech-to-Text Systems)
[#]: via: (https://fosspost.org/lists/open-source-speech-recognition-speech-to-text)
[#]: author: (Simon James https://fosspost.org/author/simonjames)
5 款不错的开源语音识别/语音文字转换系统
======
![](https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/open-source-speech-recognition-speech-to-text.png?resize=1237%2C527&ssl=1)
<ruby>语音文字转换<rt>speech-to-text</rt></ruby>(STT)系统就像它名字所蕴含的意思那样,是一种将说出的单词转换为文本文件以供后续用途的方式。
语音文字转换技术非常有用。它可以用到许多应用中,例如自动转录,使用自己的声音写书籍或文本,用生成的文本文件和其他工具做复杂的分析等。
在过去,语音文字转换技术以专有软件和库为主导,要么没有开源替代品,要么有着严格的限制,也没有社区。这一点正在发生改变,当今有许多开源语音文字转换工具和库可以让你随时使用。
这里我列出了 5 个。
@ -27,74 +26,74 @@
![5 Good Open Source Speech Recognition/Speech-to-Text Systems 15 open source speech recognition][1]
该项目由 Firefox 浏览器的开发组织 Mozilla 团队开发。它是 100% 的自由开源软件,其名字暗示了它使用 TensorFlow 机器学习框架实现其功能。
换句话说,你可以用它训练自己的模型获得更好的效果,甚至可以用它转换其它的语言。你也可以轻松的将它集成到自己的 Tensorflow 机器学习项目中。可惜的是,项目当前默认仅支持英语。
它也支持许多编程语言,例如 Python(3.6)。可以让你在数秒之内完成工作:
```
pip3 install deepspeech
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
```
你也可以通过 `npm` 安装它:
```
npm install deepspeech
```
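前面的 `deepspeech` 命令假设当前目录下已经有包含预训练英语模型文件(`output_graph.pbmm`、`alphabet.txt` 等)的 `models/` 目录。下面是一个获取这些文件的最简示例(注意:其中的版本号和下载地址仅为示例,请以项目发布页上的实际版本为准):

```
# 从项目的 GitHub 发布页下载预训练模型(版本号仅为示例)
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz
# 解压后得到 output_graph.pbmm、alphabet.txt 等模型文件
tar xvf deepspeech-0.5.1-models.tar.gz
```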
- [项目主页][2]
#### Kaldi
![5 Good Open Source Speech Recognition/Speech-to-Text Systems 17 open source speech recognition][3]
Kaldi 是一个用 C++ 写的开源语音识别软件,并且在 Apache 公共许可证下发布。它可以运行在 Windows、macOS 和 Linux 上。它的开发始于 2009 年。
Kaldi 超过其他语音识别软件的主要特点是可扩展和模块化。社区提供了大量的可以用来完成你的任务的第三方模块。Kaldi 也支持深度神经网络,并且在它的网站上提供了[出色的文档][4]。
虽然代码主要由 C++ 完成,但它通过 Bash 和 Python 脚本进行了封装。因此,如果你仅仅想使用基本的语音到文字转换功能,你就会发现通过 Python 或 Bash 能够轻易的实现。
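下面是一个快速上手 Kaldi 的最简示例(假设编译依赖已经就绪;具体构建步骤请以官方文档为准。`yesno` 是源码树中自带的一个微型示例):

```
# 获取源码并编译(具体步骤以官方文档为准)
git clone https://github.com/kaldi-asr/kaldi
cd kaldi/tools && make && cd ../src && ./configure && make
# 运行自带的 yes/no 微型示例,验证安装是否可用
cd ../egs/yesno/s5 && ./run.sh
```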
- [项目主页][5]
#### Julius
![5 Good Open Source Speech Recognition/Speech-to-Text Systems 19 open source speech recognition][6]
它可能是有史以来最古老的语音识别软件之一。它的开发始于 1991 年的京都大学,之后在 2005 年将所有权转移到了一个独立的项目组。
Julius 的主要特点包括了执行实时 STT 的能力,低内存占用(20000 单词少于 64 MB),能够输出<ruby>最优词<rt>N-best word</rt></ruby>和<ruby>词图<rt>Word-graph</rt></ruby>,能够作为服务器单元运行等等。这款软件主要为学术和研究所设计。由 C 语言写成,并且可以运行在 Linux、Windows、macOS 甚至 Android(在智能手机上)。
它当前仅支持英语和日语。该软件应该能够从 Linux 发行版的仓库中轻松安装,只要在软件包管理器中搜索 julius 即可。最新的版本[发布][7]于本文发布前大约一个半月。
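例如,在 Debian/Ubuntu 系的发行版上,大致可以这样安装(软件包名以你的发行版仓库中的实际名称为准):

```
# 在 Debian/Ubuntu 上安装 julius(包名以发行版仓库为准)
sudo apt update && sudo apt install julius
```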
- [项目主页][8]
#### Wav2Letter++
![5 Good Open Source Speech Recognition/Speech-to-Text Systems 21 open source speech recognition][9]
如果你在寻找一个更加时髦的,那么这款一定适合。Wav2Letter++ 是一款由 Facebook 的 AI 研究团队于本文发布前 2 个月发布的开源语音识别软件。代码在 BSD 许可下发布。
Facebook 描述它的库是“最快、<ruby>最先进<rt>state-of-the-art</rt></ruby>的语音识别系统”。构建它时的理念使其默认针对性能进行了优化。Facebook 最新的机器学习库 [FlashLight][11] 也被用作 Wav2Letter++ 的底层核心。
Wav2Letter++ 需要你先为所描述的语言建立一个模型来训练算法。没有任何一种语言(包括英语)的预训练模型,它仅仅是个机器学习驱动的语音文字转换工具。它用 C++ 写成,因此命名为 Wav2Letter++。
- [项目主页][12]
#### DeepSpeech2
![5 Good Open Source Speech Recognition/Speech-to-Text Systems 23 open source speech recognition][13]
中国软件巨头百度的研究人员也在开发他们自己的语音文字转换引擎,叫做“DeepSpeech2”。它是一个端对端的开源引擎,使用“PaddlePaddle”深度学习框架进行英语或汉语的文字转换。代码在 BSD 许可下发布。
该引擎可以在你想用的任何模型和任何语言上训练。模型并未随代码一同发布。你要像其他软件那样自己建立模型。DeepSpeech2 的源代码由 Python 写成,如果你使用过 Python,就会非常容易上手。
- [项目主页][14]
### 总结
语音识别领域仍然主要由专有软件巨头所占据,比如 Google 和 IBM(它们为此提供了闭源商业服务),但是开源同类软件很有前途。这 5 款开源语音识别引擎应当能够帮助你构建应用,随着时间推移,它们会不断地发展。在几年之后,我们希望开源成为这些技术中的常态,就像其他行业那样。
如果你对清单有其他的建议或评论,我们很乐意在下面听到。
@ -104,8 +103,8 @@ via: https://fosspost.org/lists/open-source-speech-recognition-speech-to-text
作者:[Simon James][a]
选题:[lujun9972][b]
译者:[LuuMing](https://github.com/LuuMing)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (BitTorrent Client Deluge 2.0 Released: Here's What's New)
[#]: via: (https://itsfoss.com/deluge-2-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
BitTorrent Client Deluge 2.0 Released: Here's What's New
======
You probably already know that [Deluge][1] is one of the [best Torrent clients available for Linux users][2]. However, the last stable release was almost two years back.
Even though it was in active development, a major stable release wasn't there until recently. The latest version, as we write this, happens to be 2.0.2. So, if you haven't downloaded the latest stable version, do try it out.
In either case, if you're curious, let us talk about what's new.
![Deluge][3]
### Major improvements in Deluge 2.0
The new release introduces multi-user support, which was a much-needed addition (a daemon setup sketch follows the feature list below).
In addition to that, there have been several performance improvements to handle more torrents with faster loading times.
Also, with version 2.0, Deluge uses Python 3, with minimal support for Python 2.7. Even for the user interface, they migrated the GTK UI to GTK3.
As per the release notes, there are several more significant additions/improvements, which include:
* Multi-user support.
* Performance updates to handle thousands of torrents with faster loading times.
* A New Console UI which emulates GTK/Web UIs.
* GTK UI migrated to GTK3 with UI improvements and additions.
* Magnet pre-fetching to allow file selection when adding a torrent.
* Full support for the libtorrent 1.2 release.
* Language switching support.
* Improved documentation hosted on ReadTheDocs.
* AutoAdd plugin replaces built-in functionality.
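The multi-user support works through Deluge's daemon/thin-client model. Here is a rough sketch of setting it up (the command names come from Deluge's standard tooling; the username, password, and default port are illustrative assumptions, so check the official docs):

```
# start the Deluge daemon (listens on port 58846 by default)
deluged
# add a user to the daemon's auth file ("username:password:level")
echo "alice:s3cret:10" >> ~/.config/deluge/auth
# connect to the daemon from the console UI as that user
deluge-console "connect 127.0.0.1:58846 alice s3cret"
```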
### How to install or upgrade to Deluge 2.0
![][4]
You should follow the official [installation guide][5] (using PPA or PyPI) for any Linux distro. However, if you are upgrading, you should go through the note mentioned in the release notes:
“_Deluge 2.0 is not compatible with Deluge 1.x clients or daemons so these will require upgrading too. Also, third-party Python scripts may not be compatible if they directly connect to the Deluge client and will need migrating._”
So, they insist that you always make a backup of your [config][6] before a major version upgrade, to guard against data loss.
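A minimal backup sketch before upgrading, assuming the default config location on Linux:

```
# back up the Deluge 1.x config directory before upgrading
cp -a ~/.config/deluge ~/.config/deluge-1.x-backup
```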
And, if you are the author of a plugin, you need to upgrade it to make it compatible with the new release.
Direct download app packages are not yet available for Windows and macOS. However, the release notes mention that they are being worked on.
As an alternative, you can install them manually by following the [installation guide][5] in the updated official documentation.
**Wrapping Up**
What do you think about the latest stable release? Do you utilize Deluge as your BitTorrent client? Or do you find something else as a better alternative?
Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/deluge-2-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://dev.deluge-torrent.org/
[2]: https://itsfoss.com/best-torrent-ubuntu/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/deluge.jpg?fit=800%2C410&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/Deluge-2-release.png?resize=800%2C450&ssl=1
[5]: https://deluge.readthedocs.io/en/latest/intro/01-install.html
[6]: https://dev.deluge-torrent.org/wiki/Faq#WheredoesDelugestoreitssettingsconfig
[7]: https://itsfoss.com/snap-store/


@ -0,0 +1,42 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Codethink open sources part of onboarding process)
[#]: via: (https://opensource.com/article/19/6/codethink-onboarding-process)
[#]: author: (Laurence Urhegyi https://opensource.com/users/laurence-urhegyi)
Codethink open sources part of onboarding process
======
In other words, how to Git going in FOSS.
![Teacher or learner?][1]
Here at [Codethink][2], we've recently focused our energy into enhancing the onboarding process we use for all new starters at the company. As we grow steadily in size, it's important that we have a well-defined approach to both welcoming new employees into the company, and introducing them to the organization's culture.
As part of this overall onboarding effort, we've created [_How to Git going in FOSS_][3]: an introductory guide to the world of free and open source software (FOSS), and some of the common technologies, practices, and principles associated with free and open source software.
This guide was initially aimed at work experience students and summer interns. However, the document is in fact equally applicable to anyone who is new to free and open source software, no matter their prior experience in software or IT in general. _How to Git going in FOSS_ is hosted on GitLab and consists of several repositories, each designed to be a self-guided walk-through.
Our guide begins with a general introduction to FOSS, including explanations of the history of GNU/Linux, how to use [Git][4] (as well as Git hosting services such as GitLab), and how to use a text editor. The document then moves on to exercises that show the reader how to implement some of the things they've just learned.
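For a flavor of the kind of first Git steps such a guide typically walks through, here is a generic sketch (not Codethink's actual exercises; the repository URL is purely illustrative):

```
# clone a practice repository and record a first change on a branch
git clone https://gitlab.com/example/git-practice.git
cd git-practice
git checkout -b my-first-change
echo "Hello, FOSS" > hello.txt
git add hello.txt
git commit -m "Add a greeting file"
# inspect the commit you just made
git log --oneline -1
```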
_How to Git going in FOSS_ is fully public and available for anyone to try. If you're new to FOSS or know someone who is, then please have a read-through, and see what you think. If you have any feedback, feel free to raise an issue on GitLab. And, of course, we also welcome contributions. We're keen to keep improving the guide however possible. One future improvement we plan to make is an additional exercise that is more complex than the existing two, such as potentially introducing the reader to [Continuous Integration][5].
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/codethink-onboarding-process
作者:[Laurence Urhegyi][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/laurence-urhegyi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G (Teacher or learner?)
[2]: https://www.codethink.co.uk/about.html
[3]: https://gitlab.com/ct-starter-guide
[4]: https://git-scm.com
[5]: https://en.wikipedia.org/wiki/Continuous_integration


@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloudflare's random number generator, robotics data visualization, npm token scanning, and more news)
[#]: via: (https://opensource.com/article/19/6/news-june-22)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
Cloudflare's random number generator, robotics data visualization, npm token scanning, and more news
======
Catch up on the biggest open source headlines from the past two weeks.
![Weekly news roundup with TV][1]
In this edition of our open source news roundup, we take a look at Cloudflare's open source random number generator, more open source robotics data, new npm functionality, and more!
### Cloudflare announces open source random number generator project
Is there such a thing as a truly random number? Internet security and services provider Cloudflare thinks so. To prove it, the company has formed [The League of Entropy][2], an open source project to create a generator for random numbers.
The League consists of Cloudflare and "five other organisations — predominantly universities and security companies." They share random numbers, using an open source tool called [Drand][3] (short for Distributed Randomness Beacon Daemon). The numbers are then "composited into one random number" on the basis that "several random numbers are more random than one random number." While the League's random number generator isn't intended "for any kind of password or cryptographic seed generation," Cloudflare's CEO Matthew Prince points out that if "you need a way of having a known random source, this is a really valuable tool."
### Cruise open sources robotics data analysis tool
Projects involved in creating self-driving vehicles generate petabytes of data. And with amounts of data that large comes the challenge of quickly and effectively analyzing it. To make the task easier, General Motors subsidiary Cruise has made its Webviz data visualization tool "[freely available to developers][4] in need of a modular robotics analysis solution."
Webviz "takes as input any bag file (the message format used by the popular Robot Operating System) and outputs charts and graphs." It "contains a collection of general panels (which visualize data) applicable to most robotics developers," said Esther Weon, a software engineer at Cruise. The company also plans to "release a public API thatll allow developers to build custom panels themselves."
The code for Webviz is [available on GitHub][5], where you can download or contribute to the project.
### npm provides more security
The team behind npm, the site providing JavaScript package hosting, has a new collaboration with GitHub to automatically scan for exposed tokens that could give hackers access that doesn't belong to them. The project includes a handy automatic revocation of leaked credentials if they are still valid. This could drastically reduce vulnerabilities in the JavaScript community. For instructions on how to participate, see the [original article][6].
Note that this news was found via the [Changelog news][7].
### Better end of life tracking via open source
A new project, [endoflife.date][8], aims to overcome the complexity of end of life (EOL) announcements for software. It's part tracker, part public announcement on what good documentation looks like for software. As the README states: "The reason this site exists is because this information is very often hidden away. If you're releasing something on a regular basis:
1. List only supported releases.
2. Give EoL dates/policy if possible.
3. Hide unsupported releases behind a few extra clicks.
4. Mention security/active release difference if needed."
Check out the [source code][9] for more information.
### In other news
* [Medicine needs to embrace open source][10]
* [Using geospatial data to create safer roads][11]
* [Embracing open source could be a big competitive advantage for businesses][12]
_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/news-june-22
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
[2]: https://thenextweb.com/dd/2019/06/17/cloudflares-new-open-source-project-helps-anyone-obtain-truly-random-numbers/
[3]: https://github.com/dedis/drand
[4]: https://venturebeat.com/2019/06/18/cruise-open-sources-webview-a-tool-for-robotics-data-analysis/
[5]: https://github.com/cruise-automation/webviz
[6]: https://blog.npmjs.org/post/185680936500/protecting-package-publishers-npm-token-security
[7]: https://changelog.com/news/npm-token-scanning-extending-to-github-NAoe
[8]: https://endoflife.date/
[9]: https://github.com/captn3m0/endoflife.date
[10]: https://www.zdnet.com/article/medicine-needs-to-embrace-open-source/
[11]: https://itbrief.co.nz/story/using-geospatial-data-to-create-safer-roads
[12]: https://www.fastcompany.com/90364152/embracing-open-source-could-be-a-big-competitive-advantage-for-businesses


@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (17 predictions about 5G networks and devices)
[#]: via: (https://www.networkworld.com/article/3403358/17-predictions-about-5g-networks-and-devices.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
17 predictions about 5G networks and devices
======
Not surprisingly, the new Ericsson Mobility Report is bullish on the potential of 5G technology. Here's a quick look at the most important numbers.
![Vertigo3D / Getty Images][1]
_“As market after market switches on 5G, we are at a truly momentous point in time. No previous generation of mobile technology has had the potential to drive economic growth to the extent that 5G promises. It goes beyond connecting people to fully realizing the Internet of Things (IoT) and the Fourth Industrial Revolution.”_ —The opening paragraph of the [June 2019 Ericsson Mobility Report][2]
Almost every significant technology advancement now goes through what [Gartner calls the “hype cycle.”][3] These days, everyone expects new technologies to be met with gushing optimism and dreamy visions of how they're going to change the world in the blink of an eye. After a while, we all come to expect the vendors and the press to go overboard with excitement, at least until reality and disappointment set in when things don't pan out exactly as expected.
**[ Also read:[The time of 5G is almost here][4] ]**
Even with all that in mind, though, Ericsson's whole-hearted embrace of 5G in its Mobility Report is impressive. The optimism is backed up by lots of numbers, but they can be hard to tease out of the 36-page document. So, let's recap some of the most important top-line predictions (with my comments at the end).
### Worldwide 5G growth projections
1. “More than 10 million 5G subscriptions are projected worldwide by the end of 2019.”
2. “[We] now expect there to be 1.9 billion 5G subscriptions for enhanced mobile broadband by the end of 2024. This will account for over 20 percent of all mobile subscriptions at that time. The peak of LTE subscriptions is projected for 2022, at around 5.3 billion subscriptions, with the number declining slowly thereafter.”
3. “In 2024, 5G networks will carry 35 percent of mobile data traffic globally.”
4. “5G can cover up to 65 percent of the world's population in 2024.”
5. “NB-IoT and Cat-M technologies will account for close to 45 percent of cellular IoT connections in 2024.”
6. “By the end of 2024, nearly 35 percent of cellular IoT connections will be Broadband IoT, with 4G connecting the majority.” But 5G connections will support more advanced use cases.
7. “Despite challenging 5G timelines, device suppliers are expected to be ready with different band and architecture support in a range of devices during 2019.”
8. “Spectrum sharing … chipsets are currently in development and are anticipated to be in 5G commercial devices in late 2019."
9. “[VoLTE][5] is the foundation for enabling voice and communication services on 5G devices. Subscriptions are expected to reach 2.1 billion by the end of 2019. … The number of VoLTE subscriptions is projected to reach 5.9 billion by the end of 2024, accounting for more than 85 percent of combined LTE and 5G subscriptions.”
![][6]
### Regional 5G projections
1. “In North America, … service providers have already launched commercial 5G services, both for fixed wireless access and mobile. … By the end of 2024, we anticipate close to 270 million 5G subscriptions in the region, accounting for more than 60 percent of mobile subscriptions.”
2. “In Western Europe … The momentum for 5G in the region was highlighted by the first commercial launch in April. By the end of 2024, 5G is expected to account for around 40 percent of mobile subscriptions.”
3. “In Central and Eastern Europe, … The first 5G subscriptions are expected in 2019, and will make up 15 percent of subscriptions in 2024.”
4. “In North East Asia, … the region's 5G subscription penetration is projected to reach 47 percent [by the end of 2024].”
5. “[In India,] 5G subscriptions are expected to become available in 2022 and will represent 6 percent of mobile subscriptions at the end of 2024.”
6. “In the Middle East and North Africa, we anticipate commercial 5G deployments with leading communications service providers during 2019, and significant volumes in 2021. … Around 60 million 5G subscriptions are forecast for the end of 2024, representing 3 percent of total mobile subscriptions.”
7. “Initial 5G commercial devices are expected in the [South East Asia and Oceania] region during the first half of 2019. By the end of 2024, it is anticipated that almost 12 percent of subscriptions in the region will be for 5G.”
8. “In Latin America … the first 5G deployments will be possible in the 3.5GHz band during 2019. Argentina, Brazil, Chile, Colombia, and Mexico are anticipated to be the first countries in the region to deploy 5G, with increased subscription uptake forecast from 2020. By the end of 2024, 5G is set to make up 7 percent of mobile subscriptions.”
### Is 5G really so inevitable?
Considered individually, these predictions all seem perfectly reasonable. Heck, 10 million 5G subscriptions is only a drop in the global bucket. And rumors are already flying that Apple's next round of iPhones will include 5G capability. Also, 2024 is still five years in the future, so why wouldn't the faster connections drive impressive traffic stats? Similarly, it seems plausible that North America and North East Asia will experience the fastest 5G penetration.
But when you look at them all together, these numbers project a sense of 5G inevitability that could well be premature. It will take a _lot_ of spending, by a lot of different parties—carriers, chip makers, equipment vendors, phone manufacturers, and consumers—to make this kind of growth a reality.
I'm not saying 5G won't take over the world. I'm just saying that when so many things have to happen in a relatively short time, there are a lot of opportunities for the train to jump the tracks. Don't be surprised if it takes longer than expected for 5G to turn into the worldwide default standard Ericsson—and everyone else—seems to think it will inevitably become.
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3403358/17-predictions-about-5g-networks-and-devices.html
作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/5g_wireless_technology_network_connections_by_credit-vertigo3d_gettyimages-1043302218_3x2-100787550-large.jpg
[2]: https://www.ericsson.com/assets/local/mobility-report/documents/2019/ericsson-mobility-report-june-2019.pdf
[3]: https://www.gartner.com/en/research/methodologies/gartner-hype-cycle
[4]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[5]: https://www.gsma.com/futurenetworks/technology/volte/
[6]: https://images.idgesg.net/images/article/2019/06/ericsson-mobility-report-june-2019-graph-100799481-large.jpg
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world


@ -0,0 +1,144 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why your workplace arguments aren't as effective as you'd like)
[#]: via: (https://opensource.com/open-organization/19/6/barriers-productive-arguments)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland/users/ron-mcfarland)
Why your workplace arguments aren't as effective as you'd like
======
Open organizations rely on open conversations. These common barriers to
productive argument often get in the way.
![Arrows pointing different directions][1]
Transparent, frank, and often contentious arguments are part of life in an open organization. But how can we be sure those conversations are _productive_ —not _destructive_?
This is the second installment of a two-part series on how to argue and actually achieve something. In the [first article][2], I mentioned what arguments are (and are not), according to author Sinnott-Armstrong in his book _Think Again: How to Reason and Argue._ I also offered some suggestions for making arguments as productive as possible.
In this article, I'll examine three barriers to productive arguments that Sinnott-Armstrong elaborates in his book: incivility, polarization, and language issues. Finally, I'll explain his suggestions for addressing those barriers.
### Incivility
"Incivility" has become a social concern in recent years. Consider this: As a tactic in arguments, incivility _can_ have an effect in certain situations—and that's why it's a common strategy. Sinnott-Armstrong notes that incivility:
* **Attracts attention:** Incivility draws people's attention in one direction, sometimes to misdirect attention from or outright obscure other issues. It redirects people's attention to shocking statements. Incivility, exaggeration, and extremism can increase the size of an audience.
* **Energizes:** Sinnott-Armstrong writes that seeing someone being uncivil on a topic of interest can generate energy from a state of powerlessness.
* **Stimulates memory:** Forgetting shocking statements is difficult; they stick in our memory more easily than statements that are less surprising to us.
* **Excites the powerless:** The groups most likely to believe and invest in someone being uncivil are those that feel they're powerless and being treated unfairly.
Unfortunately, incivility as a tactic in arguments has its costs. One such cost is polarization.
### Polarization
Sinnott-Armstrong writes about five forms of polarization:
* **Distance:** If two people's or groups' views are far apart according to some relevant scale, and they have significant disagreements and little common ground, then they're polarized.
* **Differences:** If two people or groups have fewer values and beliefs _in common_ than they _don't have in common_, then they're polarized.
* **Antagonism:** Groups are more polarized the more they feel hatred, disdain, fear, or other negative emotions toward other people or groups.
* **Incivility:** Groups tend to be more polarized when they talk more negatively about people of the other groups.
* **Rigidity:** Groups tend to be more polarized when they treat their values as indisputable and will not compromise.
* **Gridlock:** Groups tend to be more polarized when they're unable to cooperate and work together toward common goals.
And I'll add one more form of polarization to Sinnott-Armstrong's list:
* **Non-disclosure:** Groups tend to be more polarized when one or both of the groups refuses to share valid, verifiable information—or when they distract each other with useless or irrelevant information. One of the ways people polarize is by not talking to each other and withholding information. Similarly, they talk about subjects that distract from the issue at hand. Some issues are difficult to talk about, but by discussing them, solutions can be explored.
### Language issues
Identifying discussion-stoppers like these can help you avoid shutting down a discussion that would otherwise achieve beneficial outcomes.
Language issues can be argument-stoppers, Sinnott-Armstrong says. In particular, he outlines the following language-related barriers to productive argument.
* **Guarding:** Using words like "all" can make a statement unbelievable; words like "sometimes" can make a statement too vague.
* **Assuring:** Simply stating "trust me, I know what I'm talking about," without offering evidence that this is the case, can impede arguments.
* **Evaluating:** Offering an evaluation of something—like saying "It is good"―without any supporting reasoning.
* **Discounting:** This involves anticipating what the other person will say and attempting to weaken it as much as possible by framing an argument in a negative way. (Contrast these two sentences, for example: "Ramona is smart but boring" and "Ramona is boring but smart." The difference is subtle, but you'd probably want to spend less time with Ramona if you heard the first statement about her than if you heard the second.)
Identifying discussion-stoppers like these can help you avoid shutting down a discussion that would otherwise achieve beneficial outcomes. In addition, Sinnott-Armstrong specifically draws readers' attention to two other language problems that can kill productive debates: vagueness and ambiguity.
* **Vagueness:** This occurs when a word or sentence is not precise enough, leaving many ways to interpret its true meaning and intent, which leads to confusion. Consider the sentence "It is big." "It" must be defined if it's not already obvious to everyone in the conversation. And a word like "big" must be clarified through comparison to something that everyone has agreed upon.
* **Ambiguity:** This occurs when a sentence could have two distinct meanings. For example: "Police killed man with axe." Who was holding the axe, the man or the police? "My neighbor had a friend for dinner." Did your neighbor invite a friend to share a meal—or did she eat her friend?
### Overcoming barriers
To help readers avoid these common roadblocks to productive arguments, Sinnott-Armstrong recommends a simple, four-step process for evaluating another person's argument.
1. **Observation:** First, observe a stated opinion and its related evidence to determine the precise nature of the claim. This might require you to ask some questions for clarification (you'll remember I employed this technique when arguing with my belligerent uncle, which I described [in the first article of this series][2]).
2. **Hypothesis:** Develop some hypothesis about the argument. In this case, the hypothesis should be an inference based on generally acceptable standards (for more on the structure of arguments themselves, also see [the first part of this series][2]).
3. **Comparison:** Compare that hypothesis with others and evaluate which is more accurate. More important issues will require you to conduct more comparisons. In other cases, premises are so obvious that no further explanation is required.
4. **Conclusion:** From the comparison analysis, reach a conclusion about whether your hypothesis about a competing argument is correct.
In many cases, the question is not whether a particular claim is _correct_ or _incorrect_ , but whether it is _believable._ So Sinnott-Armstrong also offers a four-step "believability test" for evaluating claims of this type.
1. **Expertise:** Does the person presenting the argument have authority in an appropriate field? Being a specialist in one field doesn't necessarily make that person an expert in another.
2. **Motive:** Would self-interest or other personal motives compel a person to withhold information or make false statements? To verify someone's statements, it might be wise to seek out a totally separate, independent authority for confirmation.
3. **Sources:** Are the sources the person offers as evidence of a claim recognized experts? Do those sources have the expertise on the specific issue addressed?
4. **Agreement:** Is there agreement among many experts within the same specialty?
### Let's argue
If you really want to strengthen your ability to argue, find someone that totally disagrees with you but wants to learn and understand your beliefs.
When I was a university student, I would usually sit toward the front of the classroom. When I didn't understand something, I would start asking questions for clarification. Everyone else in the class would just sit silently, saying nothing. After class, however, other students would come up to me and thank me for asking those questions—because everyone else in the room was confused, too.
Clarification is a powerful act—not just in the classroom, but during arguments anywhere. Building an organizational culture in which people feel empowered to ask for clarification is critical for productive arguments (I've [given presentations on this topic][3] before). If members have the courage to clarify premises, and they can do so in an environment where others don't think they're being belligerent, then this might be the key to a successful and productive argument.
If you really want to strengthen your ability to argue, find someone that totally disagrees with you but wants to learn and understand your beliefs. Then, practice some of Sinnott-Armstrong's suggestions. Arguing productively will enhance [transparency, inclusivity, and collaboration][4] in your organization—leading to a more open culture.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/6/barriers-productive-arguments
作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ron-mcfarland/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/directions-arrows.png?itok=EE3lFewZ (Arrows pointing different directions)
[2]: https://opensource.com/open-organization/19/5/productive-arguments
[3]: https://www.slideshare.net/RonMcFarland1/argue-successfully-achieve-something
[4]: https://opensource.com/open-organization/resources/open-org-definition


@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco issues critical security warnings on SD-WAN, DNA Center)
[#]: via: (https://www.networkworld.com/article/3403349/cisco-issues-critical-security-warnings-on-sd-wan-dna-center.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Cisco issues critical security warnings on SD-WAN, DNA Center
======
Vulnerabilities to Cisco's SD-WAN and DNA Center software top a list of nearly 30 security advisories issued by the company.
![zajcsik \(CC0\)][1]
Cisco has released two critical warnings about security issues with its SD-WAN and DNA Center software packages.
The worse, with a Common Vulnerability Scoring System rating of 9.3 out of 10, is a vulnerability in its [Digital Network Architecture][2] (DNA) Center software that could let an unauthenticated attacker connect an unauthorized network device to the subnet designated for cluster services.
**More about SD-WAN**
* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][3]
* [How to pick an off-site data-backup method][4]
* [SD-Branch: What it is and why you'll need it][5]
* [What are the options for securing SD-WAN?][6]
A successful exploit could let an attacker reach internal services that are not hardened for external access, Cisco [stated][7]. The vulnerability is due to insufficient access restriction on ports necessary for system operation, and the company discovered the issue during internal security testing, Cisco stated.
Cisco DNA Center gives IT teams the ability to control access through policies using Software-Defined Access, automatically provision through Cisco DNA Automation, virtualize devices through Cisco Network Functions Virtualization (NFV), and lower security risks through segmentation and Encrypted Traffic Analysis.
This vulnerability affects Cisco DNA Center Software releases prior to 1.3, and it is fixed in version 1.3 and releases after that.
Cisco wrote that system updates are available from the Cisco cloud but not from the [Software Center][8] on Cisco.com. To upgrade to a fixed release of Cisco DNA Center Software, administrators can use the “System Updates” feature of the software.
A second critical warning, with a CVSS score of 7.8, is a weakness in the command-line interface of the Cisco SD-WAN Solution that could let an authenticated local attacker elevate lower-level privileges to the root user on an affected device.
Cisco [wrote][9] that the vulnerability is due to insufficient authorization enforcement. An attacker could exploit this vulnerability by authenticating to the targeted device and executing commands that could lead to elevated privileges. A successful exploit could let the attacker make configuration changes to the system as the root user, the company stated.
This vulnerability affects a range of Cisco products running a release of the Cisco SD-WAN Solution prior to Releases 18.3.6, 18.4.1, and 19.1.0 including:
* vBond Orchestrator Software
* vEdge 100 Series Routers
* vEdge 1000 Series Routers
* vEdge 2000 Series Routers
* vEdge 5000 Series Routers
* vEdge Cloud Router Platform
* vManage Network Management Software
* vSmart Controller Software
Cisco said it has released free [software updates][10] that address the vulnerability described in this advisory. Cisco wrote that it fixed this vulnerability in Release 18.4.1 of the Cisco SD-WAN Solution.
The two critical warnings were included in a dump of [nearly 30 security advisories][11].
There were two other “High” impact rated warnings involving the SD-WAN software.
One, a vulnerability in the vManage web-based UI (Web UI) of the Cisco SD-WAN Solution could let an authenticated, remote attacker gain elevated privileges on an affected vManage device, Cisco [wrote][12].
The vulnerability is due to a failure to properly authorize certain user actions in the device configuration. An attacker could exploit this vulnerability by logging in to the vManage Web UI and sending crafted HTTP requests to vManage. A successful exploit could let attackers gain elevated privileges and make changes to the configuration that they would not normally be authorized to make, Cisco stated.
Another vulnerability in the vManage web-based UI could let an authenticated, remote attacker inject arbitrary commands that are executed with root privileges.
This exposure is due to insufficient input validation, Cisco [wrote][13]. An attacker could exploit this vulnerability by authenticating to the device and submitting crafted input to the vManage Web UI.
Both vulnerabilities affect Cisco vManage Network Management Software that is running a release of the Cisco SD-WAN Solution prior to Release 18.4.0 and Cisco has released free [software updates][10] to correct them.
Other high-rated vulnerabilities Cisco disclosed included:
* A [vulnerability][14] in the Cisco Discovery Protocol (CDP) implementation for the Cisco TelePresence Codec (TC) and Collaboration Endpoint (CE) Software could allow an unauthenticated, adjacent attacker to inject arbitrary shell commands that are executed by the device.
* A [weakness][15] in the internal packet-processing functionality of the Cisco StarOS operating system running on virtual platforms could allow an unauthenticated, remote attacker to cause an affected device to stop processing traffic, resulting in a denial of service (DoS) condition.
* A [vulnerability][16] in the web-based management interface of the Cisco RV110W Wireless-N VPN Firewall, Cisco RV130W Wireless-N Multifunction VPN Router, and Cisco RV215W Wireless-N VPN Router could allow an unauthenticated, remote attacker to cause a reload of an affected device, resulting in a denial of service (DoS) condition.
Cisco has [released software][10] fixes for those advisories as well.
Join the Network World communities on [Facebook][17] and [LinkedIn][18] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3403349/cisco-issues-critical-security-warnings-on-sd-wan-dna-center.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/04/lightning_storm_night_gyorgy_karoly_toth_aka_zajcsik_cc0_via_pixabay_1200x800-100754504-large.jpg
[2]: https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html
[3]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[4]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[5]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[6]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-dnac-bypass
[8]: https://software.cisco.com/download/home
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-sdwan-privesca
[10]: https://tools.cisco.com/security/center/resources/security_vulnerability_policy.html#fixes
[11]: https://tools.cisco.com/security/center/publicationListing.x?product=Cisco&sort=-day_sir&limit=50#~Vulnerabilities
[12]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-sdwan-privilescal
[13]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-sdwan-cmdinj
[14]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-tele-shell-inj
[15]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-staros-asr-dos
[16]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-rvrouters-dos
[17]: https://www.facebook.com/NetworkWorld/
[18]: https://www.linkedin.com/company/network-world


@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (With Tableau, SaaS king Salesforce becomes a hybrid cloud company)
[#]: via: (https://www.networkworld.com/article/3403442/with-tableau-saas-king-salesforce-becomes-a-hybrid-cloud-company.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
With Tableau, SaaS king Salesforce becomes a hybrid cloud company
======
Once dismissive of software, Salesforce acknowledges the inevitability of the hybrid cloud.
![Martyn Williams/IDGNS][1]
I remember a time when people at Salesforce events would hand out pins that read “Software” inside a red circle with a slash through it. The High Priest of SaaS (a.k.a. CEO Marc Benioff) was so adamant against installed, on-premises software that his keynotes were always comical.
Now, Salesforce is prepared to [spend $15.7 billion to acquire Tableau Software][2], the leader in on-premises data analytics.
On the hell-freezes-over scale, this is up there with Microsoft embracing Linux or Apple PR people returning a phone call. Well, we know at least one of those has happened.
**[ Also read:[Hybrid Cloud: The time for adoption is upon us][3] | Stay in the know: [Subscribe and get daily newsletter updates][4] ]**
So, why would a company that is so steeped in the cloud, so anti-on-premises software, make such a massive purchase?
Partly it is because Benioff and company are finally coming to the same conclusion as most everyone else: The hybrid cloud, a mix of on-premises systems and public cloud, is the wave of the future, and pure cloud plays are in the minority.
The reality is that data is hybrid and does not sit in a single location, and Salesforce is finally acknowledging this, said Tim Crawford, president of Avoa, a strategic CIO advisory firm.
“I see the acquisition of Tableau by Salesforce as less about getting into the on-prem game as it is a reality of the world we live in. Salesforce needed a solid analytics tool that went well beyond their existing capability. Tableau was that tool,” he said.
Salesforce also understands that it needs a better grasp of customers and the data insights that drive customer decisions. That data is both on-prem and in the cloud, Crawford noted. It is in Salesforce, other solutions, and the myriad of Excel spreadsheets spread across employee systems. Tableau crosses the hybrid boundaries and brings a straightforward way to visualize data.
Salesforce had analytics features as part of its SaaS platform, but they were geared around its own platform, whereas everyone uses Tableau, and Tableau supports all manner of analytics.
“There's a huge overlap between Tableau customers and Salesforce customers,” Crawford said. “The data is everywhere in the enterprise, not just in Salesforce. Salesforce does a great job with its own data, but Tableau does great with data in a lot of places because it's not tied to one platform. So, it opens up where the data comes from and the insights you get from the data.”
Crawford said that once the deal is done and Tableau is under some deeper pockets, the organization may be able to innovate faster or do things they were unable to do prior. That hardly indicates Tableau was struggling, though. It pulled in [$1.16 billion in revenue][6] in 2018.
Crawford also expects Salesforce to push Tableau to open up new possibilities for customer insights by unlocking customer data inside and outside of Salesforce. One challenge for the two companies is to maintain that neutrality so that they don't lose the ability to use Tableau for non-customer-centric activities.
“It's a beautiful way to visualize large sets of data that have nothing to do with customer centricity,” he said.
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3403442/with-tableau-saas-king-salesforce-becomes-a-hybrid-cloud-company.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/09/150914-salesforce-dreamforce-2-100614575-large.jpg
[2]: https://www.cio.com/article/3402026/how-salesforces-tableau-acquisition-will-impact-it.html
[3]: http://www.networkworld.com/article/2172875/cloud-computing/hybrid-cloud--the-year-of-adoption-is-upon-us.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[6]: https://www.geekwire.com/2019/tableau-hits-841m-annual-recurring-revenue-41-transition-subscription-model-continues/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (ninifly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -0,0 +1,123 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VSCodium: 100% Open Source Version of Microsoft VS Code)
[#]: via: (https://itsfoss.com/vscodium/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
VSCodium: 100% Open Source Version of Microsoft VS Code
======
_**Brief: VSCodium is a fork of Microsoft's popular Visual Studio Code editor. It's identical to VS Code with the single biggest difference that unlike VS Code, VSCodium doesn't track your usage data.**_
Microsoft's [Visual Studio Code][1] is an excellent editor not only for web developers but also for other programmers. Due to its features, it's considered one of the best open source code editors.
Yes, it's one of the many open source products from Microsoft. You can [easily install Visual Studio Code in Linux][2] thanks to the ready-to-use binaries in the form of DEB, RPM and Snap packages.
And there is a problem, which might not be an issue for a regular user but is significant to an open source purist.
The ready-to-use binaries Microsoft provides are not open source.
Confused? Let me explain.
The source code of VS Code is open sourced under the MIT license. You can access it on [GitHub][3]. However, the [installation files that Microsoft has created contain proprietary telemetry/tracking][4].
This tracking basically collects usage data and sends it to Microsoft to help improve their products and services. Telemetry reporting is common with software products these days. Even [Ubuntu does that but with more transparency][5].
You can [disable the telemetry in VS Code][6] but can you trust Microsoft completely? If the answer is no, then what are your options?
You can build it from the source code and thus keep everything open source. But [installing from source code][7] is not always the prettiest option, especially in today's world when we are so used to having binaries.
Another option is to use VSCodium!
### VSCodium: 100% open source form of Visual Studio Code
![][8]
[VSCodium][9] is a fork of Microsoft's Visual Studio Code. This project's sole aim is to provide you with ready-to-use binaries without Microsoft's telemetry code.
This solves the problem where you want to use VS Code without the proprietary code from Microsoft but you are not comfortable with building it from the source.
Since [VSCodium is a fork of VS Code][11], it looks and functions exactly the same as VS Code.
Here's a screenshot of the first run of VS Code and VSCodium side by side in Ubuntu. Can you distinguish one from another?
![Can you guess which is VSCode and VSCodium?][12]
If you have not been able to distinguish between the two, look at the bottom.
![That's Microsoft][13]
Apart from this and the logo of the two applications, there is no other noticeable difference.
![VSCodium and VS Code in GNOME Menu][14]
#### Installing VSCodium on Linux
While VSCodium is available in some distributions like Parrot OS, you'll have to add additional repositories in other Linux distributions.
On Ubuntu and Debian based distributions, you can use the following commands to install VSCodium.
First, add the GPG key of the repository:
```
wget -qO - https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg | sudo apt-key add -
```
And then add the repository itself:
```
echo 'deb https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/repos/debs/ vscodium main' | sudo tee --append /etc/apt/sources.list.d/vscodium.list
```
Now update your system and install VSCodium:
```
sudo apt update && sudo apt install codium
```
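To quickly verify the installation, you can check the editor from the terminal (a minimal check, assuming the `codium` command mirrors VS Code's `code` command-line interface):

```
# print the installed version, then launch the editor
codium --version
codium
```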
You can find the [installation instructions for other distributions on its page][15]. You should also read the [instructions about migrating from VS Code to VSCodium][16].
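As a rough sketch of what that migration involves (the paths below are assumptions based on VS Code's defaults on Linux; follow the official instructions linked above for the authoritative steps):

```
# copy user settings and extensions from VS Code to VSCodium (paths assumed)
mkdir -p ~/.config/VSCodium ~/.vscode-oss
cp -r ~/.config/Code/User ~/.config/VSCodium/User
cp -r ~/.vscode/extensions ~/.vscode-oss/extensions
```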
**What do you think of VSCodium?**
Personally, I like the concept of VSCodium. To use the cliché, the project has its heart in the right place. I think Linux distributions committed to open source may even start including it in their official repositories.
What do you think? Is it worth switching to VSCodium or would you rather opt out of the telemetry and continue using VS Code?
And please, no “I use Vim” comments :D
--------------------------------------------------------------------------------
via: https://itsfoss.com/vscodium/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://code.visualstudio.com/
[2]: https://itsfoss.com/install-visual-studio-code-ubuntu/
[3]: https://github.com/Microsoft/vscode
[4]: https://github.com/Microsoft/vscode/issues/60#issuecomment-161792005
[5]: https://itsfoss.com/ubuntu-data-collection-stats/
[6]: https://code.visualstudio.com/docs/supporting/faq#_how-to-disable-telemetry-reporting
[7]: https://itsfoss.com/install-software-from-source-code/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/vscodium.png?resize=800%2C450&ssl=1
[9]: https://vscodium.com/
[10]: https://itsfoss.com/suitecrm-ondemand/
[11]: https://github.com/VSCodium/vscodium
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/vscodium-vs-vscode.png?resize=800%2C450&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/microsoft-vscode-tracking.png?resize=800%2C259&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/vscodium-and-vscode.jpg?resize=800%2C220&ssl=1
[15]: https://vscodium.com/#install
[16]: https://vscodium.com/#migrate


@ -0,0 +1,247 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Monitor and Manage Docker Containers with Portainer.io (GUI tool) Part-1)
[#]: via: (https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-part1/)
[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)
Monitor and Manage Docker Containers with Portainer.io (GUI tool) Part-1
======
As **Docker** usage and adoption grows faster and faster, monitoring **Docker container** images is becoming more challenging. As multiple Docker container images are created day by day, monitoring them is very important. There are already some built-in tools and technologies, but configuring them is a little complex. As microservices-based architecture becomes the de-facto standard in the coming days, learning such a tool adds one more weapon to your tool-set.
Based on the above scenarios, the need for a lightweight and robust tool was growing, and **Portainer.io** addressed this. The tool (latest version is 1.20.2) is very lightweight (one can configure it with only 2-3 commands) and has become popular among Docker users.
**This tool has advantages over other tools; some of these are as below:**
* Light weight (requires only 2-3 commands to be required to run to install this tool) {Also installation image is only around 26-30MB of size)
* Robust and easy to use
* Can be used for Docker monitor and Build
* This tool provides us a detailed overview of your Docker environments
* This tool allows us to manage your containers, images, networks and volumes.
* Portainer is simple to deploy this requires just one Docker command (can be run from anywhere.)
* Complete Docker-container environment can be monitored easily
**Portainer also comes with:**
* Community support
* Enterprise support
* Professional services (along with partner OEM services)
**The functionality and features of the Portainer tool are:**
1. A nice dashboard, easy to use and monitor
2. Many built-in templates for ease of operation and creation
3. Support services (OEM, enterprise level)
4. Monitoring of containers, images, networks, volumes and configuration in near real-time
5. Docker Swarm monitoring
6. User management with many fancy capabilities
**Read Also :[How to Install Docker CE on Ubuntu 16.04 / 18.04 LTS System][1]**
### How to install and configure Portainer.io on Ubuntu Linux / RHEL / CentOS
**Note:** This installation was done on Ubuntu 18.04, but the installation on RHEL & CentOS would be the same. We are assuming Docker CE is already installed on your system.
```
root@linuxtechi:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic
root@linuxtechi:~$
```
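Before going further, it does not hurt to confirm that Docker CE itself is installed and running; a quick sanity check (output will differ on your system):

```
sudo docker --version
sudo systemctl is-active docker
```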
Create the volume for Portainer:
```
root@linuxtechi:~$ sudo docker volume create portainer_data
portainer_data
root@linuxtechi:~$
```
Launch the Portainer container using the docker command below:
```
root@linuxtechi:~$ sudo docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
Unable to find image 'portainer/portainer:latest' locally
latest: Pulling from portainer/portainer
d1e017099d17: Pull complete
0b1e707a06d2: Pull complete
Digest: sha256:d6cc2c20c0af38d8d557ab994c419c799a10fe825e4aa57fea2e2e507a13747d
Status: Downloaded newer image for portainer/portainer:latest
35286de9f2e21d197309575bb52b5599fec24d4f373cc27210d98abc60244107
root@linuxtechi:~$
```
Once the installation is complete, open your browser and point it at port 9000 on the Docker host where Portainer is running.
**Note:** If the OS firewall is enabled on your Docker host, make sure port 9000 is allowed, otherwise the GUI will not come up.
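For example, on an Ubuntu host using ufw, or on RHEL/CentOS with firewalld, the port can be opened roughly like this (a sketch; adjust for whichever firewall is actually active on your host):

```
# Ubuntu (ufw)
sudo ufw allow 9000/tcp

# RHEL / CentOS (firewalld)
sudo firewall-cmd --add-port=9000/tcp --permanent
sudo firewall-cmd --reload
```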
In my case, the IP address of my Docker host/engine is “192.168.1.16”, so the URL will be:
<http://192.168.1.16:9000>
[![Portainer-Login-User-Name-Password][2]][3]
Please make sure that you enter a password of at least 8 characters. Leave the user name as “admin” and then click “Create user”.
The following screen now appears; on it, select the “Local” box.
[![Connect-Portainer-Local-Docker][4]][5]
Click on “Connect”.
A nice GUI appears with the admin user’s home screen, as below:
[![Portainer-io-Docker-Monitor-Dashboard][6]][7]
Now Portainer is ready to launch and manage your Docker containers, and it can also be used for container monitoring.
### Bring-up container image on Portainer tool
[![Portainer-Endpoints][8]][9]
Now check the present status: two container images are already running, and if you create one more, it will appear instantly.
From your command line, kick-start one or two containers as below:
```
root@linuxtechi:~$ sudo docker run --name test -it debian
Unable to find image 'debian:latest' locally
latest: Pulling from library/debian
e79bb959ec00: Pull complete
Digest: sha256:724b0fbbda7fda6372ffed586670573c59e07a48c86d606bab05db118abe0ef5
Status: Downloaded newer image for debian:latest
root@linuxtechi:/#
```
Now click the Refresh button in the Portainer GUI (an “Are you sure?” message appears; click “continue”); you will now see 3 container images, as highlighted below:
[![Portainer-io-new-container-image][10]][11]
Click on “**containers**” (circled in red above); the next window appears with the “**Dashboard Endpoint summary**”:
[![Portainer-io-Docker-Container-Dash][12]][13]
On this page, click on “**Containers**” as highlighted in red. Now you are ready to monitor your container images.
### Simple Docker container image monitoring
Following the step above, a fancy and nice-looking “Container List” page appears, as below:
[![Portainer-Container-List][14]][15]
All the container images can be controlled from here (stopped, started, etc.).
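If you ever need to do the same from the command line, the equivalent Docker CLI calls look like this (using the “test” container from the example above):

```
sudo docker stop test     # stop the container
sudo docker start test    # start it again
```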
**1)** Now, from this page, stop the “test” container we started earlier (this was the Debian image).
To do this, select the check box in front of this image and click the Stop button above:
[![Stop-Container-Portainer-io-dashboard][16]][17]
From the command line you will see that this container has now stopped or exited:
```
root@linuxtechi:~$ sudo docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d45902e717c0 debian "bash" 21 minutes ago Exited (0) 49 seconds ago test
08b96eddbae9 centos:7 "/bin/bash" About an hour ago Exited (137) 9 minutes ago mycontainer2
35286de9f2e2 portainer/portainer "/portainer" 2 hours ago Up About an hour 0.0.0.0:9000->9000/tcp compassionate_benz
root@linuxtechi:~$
```
**2)** Now start the stopped containers (test & mycontainer2) from the Portainer GUI:
Select the check box in front of the stopped containers, and then click on Start.
[![Start-Containers-Portainer-GUI][18]][19]
You will get a quick window saying “**Container successfully started**”, with the containers in the running state:
[![Conatiner-Started-successfully-Portainer-GUI][20]][21]
### Various other options and features, explored step by step
**1)** Click on “**Images**” (highlighted); you will get the window below:
[![Docker-Container-Images-Portainer-GUI][22]][23]
This is the list of available container images, though some may not be running. These images can be imported, exported or uploaded to various locations; the screenshot below shows the same:
[![Upload-Docker-Container-Image-Portainer-GUI][24]][25]
**2)** Click on “**Volumes**” (highlighted); you will get the window below:
[![Volume-list-Portainer-io-gui][26]][27]
**3)** Volumes can be added easily with the following option: click on the “add volume” button, and the window below appears.
Provide the name “**myvol**” in the name box and click on the “**create the volume**” button.
[![Volume-Creation-Portainer-io-gui][28]][29]
The newly created volume appears as below (in the unused state):
[![Volume-unused-Portainer-io-gui][30]][31]
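As a cross-check, the newly created volume should also be visible from the Docker CLI; the inspect output shows its mount point on the host:

```
sudo docker volume ls
sudo docker volume inspect myvol
```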
#### Conclusion:
As the installation steps, configuration, and exploration of the various options above show, Portainer.io is an easy and good-looking tool. It provides multiple features and options to explore for building and monitoring Docker containers. As explained, it is a very lightweight tool, so it doesn’t add any load to the host system. The next set of options will be explored in part 2 of this series.
Read Also: **[Monitor and Manage Docker Containers with Portainer.io (GUI tool) Part-2][32]**
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-part1/
作者:[Shashidhar Soppin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/shashidhar/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/how-to-setup-docker-on-ubuntu-server-16-04/
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Login-User-Name-Password-1024x681.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Login-User-Name-Password.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Connect-Portainer-Local-Docker-1024x538.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Connect-Portainer-Local-Docker.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Monitor-Dashboard-1024x544.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Monitor-Dashboard.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Endpoints-1024x252.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Endpoints.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-new-container-image-1024x544.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-new-container-image.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Container-Dash-1024x544.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Container-Dash.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Container-List-1024x538.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Container-List.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Stop-Container-Portainer-io-dashboard-1024x447.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Stop-Container-Portainer-io-dashboard.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Start-Containers-Portainer-GUI-1024x449.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Start-Containers-Portainer-GUI.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Conatiner-Started-successfully-Portainer-GUI-1024x538.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Conatiner-Started-successfully-Portainer-GUI.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker-Container-Images-Portainer-GUI-1024x544.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker-Container-Images-Portainer-GUI.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Upload-Docker-Container-Image-Portainer-GUI-1024x544.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Upload-Docker-Container-Image-Portainer-GUI.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-list-Portainer-io-gui-1024x544.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-list-Portainer-io-gui.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Creation-Portainer-io-gui-1024x544.jpg
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Creation-Portainer-io-gui.jpg
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-unused-Portainer-io-gui-1024x544.jpg
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-unused-Portainer-io-gui.jpg
[32]: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/

@@ -0,0 +1,207 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora 30 Workstation Installation Guide with Screenshots)
[#]: via: (https://www.linuxtechi.com/fedora-30-workstation-installation-guide/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
Fedora 30 Workstation Installation Guide with Screenshots
======
If you are a **Fedora distribution** lover and always try things out on Fedora Workstation and Server, then there is good news for you, as Fedora has released its latest OS edition, **Fedora 30**, for both Workstation and Server. One of the important updates in Fedora 30 over its previous release is that it has introduced **Fedora CoreOS** as a replacement for Fedora Atomic Host.
Some other noticeable updates in Fedora 30 are listed beneath:
* Updated Desktop Gnome 3.32
* New Linux Kernel 5.0.9
* Updated Bash Version 5.0, PHP 7.3 & GCC 9
* Updated Python 3.7.3, JDK 12, Ruby 2.6, Mesa 19.0.2 and Golang 1.12
* Improved DNF (Default Package Manager)
In this article we will walk through the Fedora 30 Workstation installation steps for a laptop or desktop.
**Following are the minimum system requirements for Fedora 30 Workstation:**
* 1GHz Processor (Recommended 2 GHz Dual Core processor)
* 2 GB RAM
* 15 GB unallocated Hard Disk
* Bootable Media (USB / DVD)
* Internet Connection (Optional)
Let’s jump into the installation steps.
### Step:1) Download Fedora 30 Workstation ISO File
Download the Fedora 30 Workstation ISO file onto your system from its official web site:
<https://getfedora.org/en/workstation/download/>
Once the ISO file is downloaded, burn it to either a USB drive or a DVD and make it bootable.
### Step:2) Boot Your Target System with Bootable media (USB Drive or DVD)
Reboot your target machine (i.e. the machine where you want to install Fedora 30) and set the boot medium to USB or DVD in the BIOS settings so the system boots from the bootable media.
### Step:3) Choose Start Fedora-Workstation-30 Live
When the system boots from the bootable media, we will get the following screen. To begin installation on your system’s hard disk, choose “**Start Fedora-Workstation-30 Live**”:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Start-Fedora-Workstation-30-Live.jpg>
### Step:4) Select Install to Hard Drive Option
Select the “**Install to Hard Drive**” option to install Fedora 30 on your system’s hard disk. You can also try Fedora on your system without installing it; for that, select the “**Try Fedora**” option:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Fedora-30-Install-hard-drive.jpg>
### Step:5) Choose appropriate language for your Fedora 30 Installation
In this step, choose the language that will be used during the Fedora 30 installation:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Language-Fedora30-Installation.jpg>
Click on Continue
### Step:6) Choose Installation destination and partition Scheme
In the next window we will be presented with the following screen. Here we choose our installation destination, i.e. the hard disk on which we will do the installation:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Installation-Destination-Fedora-30.jpg>
In the next screen we will see the locally available hard disks. Select the disk that suits your installation, and then choose how you want to create partitions on it from the storage configuration tab.
If you choose the “**Automatic**” partition scheme, the installer will create the necessary partitions for your system automatically, but if you want to create your own customized partition scheme, choose the “**Custom**” option:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Custom-Partition-Fedora-30-installation.jpg>
Click on Done
In this article I will demonstrate how to create [**LVM**][1]-based custom partitions. In my case, I have around 40 GB of unallocated hard drive space, so I will be creating the following partitions on it (a shell-level way to verify the final layout is shown after the list):
* /boot = 2 GB (ext4 file system)
* /home = 15 GB (ext4 file system)
* /var = 10 GB (ext4 file system)
* / = 10 GB (ext4 file system)
* Swap = 2 GB
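Once the installation is finished, this layout can be confirmed from a terminal; a post-install sanity check (the volume group name will match whatever you choose below):

```
lsblk        # block devices and mount points
sudo lvs     # logical volumes in the volume group
```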
<https://www.linuxtechi.com/wp-content/uploads/2019/05/LVM-Partition-MountPoint-Fedora-30-Installation.jpg>
Select “**LVM**” as the partitioning scheme and then click on the plus (+) symbol.
Specify the mount point as /boot and the partition size as 2 GB, and then click on “Add mount point”:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/boot-partiton-fedora30-installation.jpg>
<https://www.linuxtechi.com/wp-content/uploads/2019/05/boot-standard-parttion-fedora-30.jpg>
Now create the next partition, /home, of size 15 GB; click on the + symbol:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/home-partition-fedora-30-installation.jpg>
Click on “ **Add mount point**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Modify-Volume-Group-Fedora30-Installation.jpg>
As you may have noticed, the /home partition was created as an LVM partition under the default volume group. If you wish to change the default volume group name, click on the “**Modify**” option in the Volume Group tab.
Enter the volume group name you want to set and then click on Save. From now on, all the LVM partitions will be part of the fedora30 volume group.
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Group-Fedora-30-Installation.jpg>
Similarly create the next two partitions, **/var** and **/**, of size 10 GB each:
**/var partition:**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/var-partition-fedora30-installation.jpg>
**/ (slash) partition:**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/slash-partition-fedora30-installation.jpg>
Now create the last partition as swap, of size 2 GB:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Swap-partition-fedora30-installation.jpg>
In the next window, click on Done
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Done-After-Parttions-Creation-Fedora30.jpg>
In the next screen, choose “ **Accept Changes**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Accept-Changes-Fedora30-Installation.jpg>
Now we will get the Installation Summary window. Here you can also change the time zone to suit your installation, and then click on “**Begin Installation**”:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Begin-Installation-Fedora30-Installation.jpg>
### Step:7) Fedora 30 Installation started
In this step we can see that the Fedora 30 installation has started and is in progress:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Fedora-30-Installation-Progress.jpg>
Once the Installation is completed, you will be prompted to restart your system
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Fedora30-Installation-Completed-Screen.jpg>
Click on Quit and reboot your system.
Don’t forget to change the boot medium in the BIOS settings so your system boots from the hard disk.
### Step:8) Welcome message and login Screen after reboot
When we reboot the Fedora 30 system for the first time after a successful installation, we will get the welcome screen below:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Welcome-Screen-After-Fedora30-Installation.jpg>
Click on Next
In the next screen you can sync your online accounts, or else you can skip this:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Online-Accounts-Sync-Fedora30.jpg>
In the next window you will be required to specify a local account (user name) and its password; this account will later be used to log in to the system:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Local-Account-Fedora30-Installation.jpg>
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Local-Account-password-fedora30.jpg>
Click on Next
And finally, we will get the screen below, which confirms that we are ready to use Fedora 30:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Reday-to-use-fedora30-message.jpg>
Click on “ **Start Using Fedora**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Gnome-Desktop-Screen-Fedora30.jpg>
The GNOME desktop screen above confirms that we have successfully installed Fedora 30 Workstation; now explore it and have fun 😊
In Fedora 30 Workstation, if you want to install any packages or software from the command line, use the DNF command.
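A few typical invocations (the package name here is just an illustration):

```
sudo dnf search editor    # search the repositories
sudo dnf install htop     # install a package
sudo dnf update           # update the whole system
```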
Read More On: **[26 DNF Command Examples for Package Management in Fedora Linux][2]**
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/fedora-30-workstation-installation-guide/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/lvm-good-way-to-utilize-disks-space/
[2]: https://www.linuxtechi.com/dnf-command-examples-rpm-management-fedora-linux/

@@ -0,0 +1,521 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Prefer table driven tests)
[#]: via: (https://dave.cheney.net/2019/05/07/prefer-table-driven-tests)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
Prefer table driven tests
======
I’m a big fan of testing, specifically [unit testing][1] and TDD ([done correctly][2], of course). A practice that has grown around Go projects is the idea of a table driven test. This post explores the how and why of writing a table driven test.
Let’s say we have a function that splits strings:
```
// Split slices s into all substrings separated by sep and
// returns a slice of the substrings between those separators.
func Split(s, sep string) []string {
var result []string
i := strings.Index(s, sep)
for i > -1 {
result = append(result, s[:i])
s = s[i+len(sep):]
i = strings.Index(s, sep)
}
return append(result, s)
}
```
In Go, unit tests are just regular Go functions (with a few rules), so we write a unit test for this function starting with a file in the same directory, with the same package name, `split`.
```
package split
import (
"reflect"
"testing"
)
func TestSplit(t *testing.T) {
got := Split("a/b/c", "/")
want := []string{"a", "b", "c"}
if !reflect.DeepEqual(want, got) {
t.Fatalf("expected: %v, got: %v", want, got)
}
}
```
Tests are just regular Go functions with a few rules:
1. The name of the test function must start with `Test`.
2. The test function must take one argument of type `*testing.T`. A `*testing.T` is a type injected by the testing package itself, to provide ways to print, skip, and fail the test.
In our test we call `Split` with some inputs, then compare it to the result we expected.
### Code coverage
The next question is: what is the coverage of this package? Luckily the go tool has built-in branch coverage. We can invoke it like this:
```
% go test -coverprofile=c.out
PASS
coverage: 100.0% of statements
ok split 0.010s
```
Which tells us we have 100% branch coverage, which isn’t really surprising; there’s only one branch in this code.
If we want to dig into the coverage report, the go tool has several options to print it. We can use `go tool cover -func` to break down the coverage per function:
```
% go tool cover -func=c.out
split/split.go:8: Split 100.0%
total: (statements) 100.0%
```
Which isn’t that exciting, as we only have one function in this package, but I’m sure you’ll find more exciting packages to test.
#### Spray some .bashrc on that
This pair of commands is so useful for me that I have a shell alias which runs the test coverage and the report in one command:
```
cover () {
local t=$(mktemp -t cover)
go test $COVERFLAGS -coverprofile=$t $@ \
&& go tool cover -func=$t \
&& unlink $t
}
```
### Going beyond 100% coverage
So, we wrote one test case and got 100% coverage, but this isn’t really the end of the story. We have good branch coverage but we probably need to test some of the boundary conditions. For example, what happens if we try to split on a comma?
```
func TestSplitWrongSep(t *testing.T) {
got := Split("a/b/c", ",")
want := []string{"a/b/c"}
if !reflect.DeepEqual(want, got) {
t.Fatalf("expected: %v, got: %v", want, got)
}
}
```
Or, what happens if there are no separators in the source string?
```
func TestSplitNoSep(t *testing.T) {
got := Split("abc", "/")
want := []string{"abc"}
if !reflect.DeepEqual(want, got) {
t.Fatalf("expected: %v, got: %v", want, got)
}
}
```
We’re starting to build a set of test cases that exercise boundary conditions. This is good.
### Introducing table driven tests
However, there is a lot of duplication in our tests. For each test case only the input, the expected output, and the name of the test case change. Everything else is boilerplate. What we’d like is to set up all the inputs and expected outputs and feed them to a single test harness. This is a great time to introduce table driven testing.
```
func TestSplit(t *testing.T) {
type test struct {
input string
sep string
want []string
}
tests := []test{
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
{input: "abc", sep: "/", want: []string{"abc"}},
}
for _, tc := range tests {
got := Split(tc.input, tc.sep)
if !reflect.DeepEqual(tc.want, got) {
t.Fatalf("expected: %v, got: %v", tc.want, got)
}
}
}
```
We declare a structure to hold our test inputs and expected outputs. This is our table. The `tests` structure is usually a local declaration because we want to reuse this name for other tests in this package.
In fact, we don’t even need to give the type a name; we can use an anonymous struct literal to reduce the boilerplate, like this:
```
func TestSplit(t *testing.T) {
tests := []struct {
input string
sep string
want []string
}{
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
{input: "abc", sep: "/", want: []string{"abc"}},
}
for _, tc := range tests {
got := Split(tc.input, tc.sep)
if !reflect.DeepEqual(tc.want, got) {
t.Fatalf("expected: %v, got: %v", tc.want, got)
}
}
}
```
Now, adding a new test is a straightforward matter; simply add another line to the `tests` structure. For example, what will happen if our input string has a trailing separator?
```
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
{input: "abc", sep: "/", want: []string{"abc"}},
{input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}}, // trailing sep
```
But, when we run `go test`, we get
```
% go test
--- FAIL: TestSplit (0.00s)
split_test.go:24: expected: [a b c], got: [a b c ]
```
Putting aside the test failure, there are a few problems to talk about.
The first is that by rewriting each test from a function to a row in a table we’ve lost the name of the failing test. We added a comment in the test file to call out this case, but we don’t have access to that comment in the `go test` output.
There are a few ways to resolve this. You’ll see a mix of styles in use in Go code bases, because the table testing idiom is evolving as people continue to experiment with the form.
### Enumerating test cases
As tests are stored in a slice we can print out the index of the test case in the failure message:
```
func TestSplit(t *testing.T) {
tests := []struct {
input string
sep string
want []string
}{
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
{input: "abc", sep: "/", want: []string{"abc"}},
{input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
}
for i, tc := range tests {
got := Split(tc.input, tc.sep)
if !reflect.DeepEqual(tc.want, got) {
t.Fatalf("test %d: expected: %v, got: %v", i+1, tc.want, got)
}
}
}
```
Now when we run `go test` we get this
```
% go test
--- FAIL: TestSplit (0.00s)
split_test.go:24: test 4: expected: [a b c], got: [a b c ]
```
Which is a little better. Now we know that the fourth test is failing, although we have to do a little bit of fudging because slice indexing (and range iteration) is zero based. This requires consistency across your test cases; if some use zero-based reporting and others use one-based, it’s going to be confusing. And, if the list of test cases is long, it could be difficult to count braces to figure out exactly which fixture constitutes test case number four.
### Give your test cases names
Another common pattern is to include a name field in the test fixture.
```
func TestSplit(t *testing.T) {
tests := []struct {
name string
input string
sep string
want []string
}{
{name: "simple", input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
{name: "wrong sep", input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
{name: "no sep", input: "abc", sep: "/", want: []string{"abc"}},
{name: "trailing sep", input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
}
for _, tc := range tests {
got := Split(tc.input, tc.sep)
if !reflect.DeepEqual(tc.want, got) {
t.Fatalf("%s: expected: %v, got: %v", tc.name, tc.want, got)
}
}
}
```
Now when the test fails we have a descriptive name for what the test was doing. We no longer have to try to figure it out from the output; also, we now have a string we can search on.
```
% go test
--- FAIL: TestSplit (0.00s)
split_test.go:25: trailing sep: expected: [a b c], got: [a b c ]
```
We can dry this up even more using a map literal syntax:
```
func TestSplit(t *testing.T) {
tests := map[string]struct {
input string
sep string
want []string
}{
"simple": {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
"wrong sep": {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
"no sep": {input: "abc", sep: "/", want: []string{"abc"}},
"trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
}
for name, tc := range tests {
got := Split(tc.input, tc.sep)
if !reflect.DeepEqual(tc.want, got) {
t.Fatalf("%s: expected: %v, got: %v", name, tc.want, got)
}
}
}
```
Using map literal syntax we define our test cases not as a slice of structs, but as a map of test names to test fixtures. There’s also a side benefit of using a map that is going to potentially improve the utility of our tests.
Map iteration order is _undefined_ (see note 1 at the end of this post). This means each time we run `go test`, our tests are potentially going to run in a different order.
This is super useful for spotting conditions where tests pass when run in statement order, but not otherwise. If you find that happens, you probably have some global state that is being mutated by one test, with subsequent tests depending on that modification.
### Introducing sub tests
Before we fix the failing test there are a few other issues to address in our table driven test harness.
The first is that we’re calling `t.Fatalf` when one of the test cases fails. This means that after the first failing test case we stop testing the other cases. Because test cases are run in an undefined order, if there is a test failure, it would be nice to know whether it was the only failure or just the first.
The testing package would do this for us if we went to the effort of writing out each test case as its own function, but that’s quite verbose. The good news is that since Go 1.7 a new feature was added that lets us do this easily for table driven tests. They’re called [sub tests][3].
```
func TestSplit(t *testing.T) {
tests := map[string]struct {
input string
sep string
want []string
}{
"simple": {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
"wrong sep": {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
"no sep": {input: "abc", sep: "/", want: []string{"abc"}},
"trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
}
for name, tc := range tests {
t.Run(name, func(t *testing.T) {
got := Split(tc.input, tc.sep)
if !reflect.DeepEqual(tc.want, got) {
t.Fatalf("expected: %v, got: %v", tc.want, got)
}
})
}
}
```
As each sub test now has a name we get that name automatically printed out in any test runs.
```
% go test
--- FAIL: TestSplit (0.00s)
--- FAIL: TestSplit/trailing_sep (0.00s)
split_test.go:25: expected: [a b c], got: [a b c ]
```
Each subtest is its own anonymous function, therefore we can use `t.Fatalf`, `t.Skipf`, and all the other `testing.T` helpers, while retaining the compactness of a table driven test.
#### Individual sub test cases can be executed directly
Because sub tests have a name, you can run a selection of sub tests by name using the `go test -run` flag.
```
% go test -run=.*/trailing -v
=== RUN TestSplit
=== RUN TestSplit/trailing_sep
--- FAIL: TestSplit (0.00s)
--- FAIL: TestSplit/trailing_sep (0.00s)
split_test.go:25: expected: [a b c], got: [a b c ]
```
### Comparing what we got with what we wanted
Now we’re ready to fix the test case. Let’s look at the error.
```
--- FAIL: TestSplit (0.00s)
--- FAIL: TestSplit/trailing_sep (0.00s)
split_test.go:25: expected: [a b c], got: [a b c ]
```
Can you spot the problem? Clearly the slices are different, that’s what `reflect.DeepEqual` is upset about. But spotting the actual difference isn’t easy; you have to spot that extra space after `c`. This might look simple in this example, but it is anything but when you’re comparing two complicated, deeply nested gRPC structures.
We can improve the output if we switch to the `%#v` syntax to view the value as a Go(ish) declaration:
```
got := Split(tc.input, tc.sep)
if !reflect.DeepEqual(tc.want, got) {
t.Fatalf("expected: %#v, got: %#v", tc.want, got)
}
```
Now when we run our test it’s clear that the problem is an extra blank element in the slice.
```
% go test
--- FAIL: TestSplit (0.00s)
--- FAIL: TestSplit/trailing_sep (0.00s)
split_test.go:25: expected: []string{"a", "b", "c"}, got: []string{"a", "b", "c", ""}
```
But before we fix our test failure, I want to talk a little bit more about choosing the right way to present test failures. Our `Split` function is simple: it takes a primitive string and returns a slice of strings, but what if it worked with structs, or worse, pointers to structs?
Here is an example where `%#v` does not work as well:
```
func main() {
type T struct {
I int
}
x := []*T{{1}, {2}, {3}}
y := []*T{{1}, {2}, {4}}
fmt.Printf("%v %v\n", x, y)
fmt.Printf("%#v %#v\n", x, y)
}
```
The first `fmt.Printf` prints the unhelpful, but expected, slice of addresses; `[0xc000096000 0xc000096008 0xc000096010] [0xc000096018 0xc000096020 0xc000096028]`. However our `%#v` version doesn’t fare any better, printing a slice of addresses cast to `*main.T`; `[]*main.T{(*main.T)(0xc000096000), (*main.T)(0xc000096008), (*main.T)(0xc000096010)} []*main.T{(*main.T)(0xc000096018), (*main.T)(0xc000096020), (*main.T)(0xc000096028)}`.
Because of the limitations of any single `fmt.Printf` verb, I want to introduce the [go-cmp][4] library from Google.
The goal of the cmp library is specifically to compare two values. This is similar to `reflect.DeepEqual`, but it has more capabilities. Using the cmp package you can, of course, write:
```
func main() {
type T struct {
I int
}
x := []*T{{1}, {2}, {3}}
y := []*T{{1}, {2}, {4}}
fmt.Println(cmp.Equal(x, y)) // false
}
```
But far more useful for us with our test function is the `cmp.Diff` function which will produce a textual description of what is different between the two values, recursively.
```
func main() {
type T struct {
I int
}
x := []*T{{1}, {2}, {3}}
y := []*T{{1}, {2}, {4}}
diff := cmp.Diff(x, y)
fmt.Printf(diff)
}
```
Which instead produces:
```
% go run
{[]*main.T}[2].I:
-: 3
+: 4
```
This tells us that at element 2 of the slice of `T`s, the `I` field was expected to be 3, but was actually 4.
Putting this all together we have our table driven go-cmp test:
```
func TestSplit(t *testing.T) {
tests := map[string]struct {
input string
sep string
want []string
}{
"simple": {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
"wrong sep": {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
"no sep": {input: "abc", sep: "/", want: []string{"abc"}},
"trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
}
for name, tc := range tests {
t.Run(name, func(t *testing.T) {
got := Split(tc.input, tc.sep)
diff := cmp.Diff(tc.want, got)
if diff != "" {
t.Fatalf(diff)
}
})
}
}
```
Running this we get
```
% go test
--- FAIL: TestSplit (0.00s)
--- FAIL: TestSplit/trailing_sep (0.00s)
split_test.go:27: {[]string}[?->3]:
-: <non-existent>
+: ""
FAIL
exit status 1
FAIL split 0.006s
```
Using `cmp.Diff` our test harness isn’t just telling us that what we got and what we wanted were different. Our test is telling us that the strings are different lengths: the third index in the fixture shouldn’t exist, but in the actual output we got an extra empty string, “”. From here, fixing the test failure is straightforward.
1. Please don’t email me to argue that map iteration order is _random_. [It’s not][5].
#### Related posts:
1. [Writing table driven tests in Go][6]
2. [Internets of Interest #7: Ian Cooper on Test Driven Development][7]
3. [Automatically run your packages tests with inotifywait][8]
4. [How to write benchmarks in Go][9]
--------------------------------------------------------------------------------
via: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests
作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://dave.cheney.net/2019/04/03/absolute-unit-test
[2]: https://www.youtube.com/watch?v=EZ05e7EMOLM
[3]: https://blog.golang.org/subtests
[4]: https://github.com/google/go-cmp
[5]: https://golang.org/ref/spec#For_statements
[6]: https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go (Writing table driven tests in Go)
[7]: https://dave.cheney.net/2018/10/15/internets-of-interest-7-ian-cooper-on-test-driven-development (Internets of Interest #7: Ian Cooper on Test Driven Development)
[8]: https://dave.cheney.net/2016/06/21/automatically-run-your-packages-tests-with-inotifywait (Automatically run your packages tests with inotifywait)
[9]: https://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go (How to write benchmarks in Go)

@@ -0,0 +1,256 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots)
[#]: via: (https://www.linuxtechi.com/rhel-8-installation-steps-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots
======
Red Hat released its most awaited OS, **RHEL 8**, on 7th May 2019. RHEL 8 is based on the **Fedora 28** distribution and Linux **kernel version 4.18**. One of the important key features in RHEL 8 is the introduction of “**Application Streams**”, which allows developer tools, frameworks and languages to be updated frequently without impacting the core resources of the base OS. In other words, application streams help to segregate user-space packages from the OS kernel space.
Apart from this, there are many new features in RHEL 8, such as:
* XFS File system supports copy-on-write of file extents
* Introduction of Stratis filesystem, Buildah, Podman, and Skopeo
* Yum utility is based on DNF
* Chrony replaces NTP
* Cockpit is the default web console tool for server management (see the snippet after this list)
* OpenSSL 1.1.1 & TLS 1.3 support
* PHP 7.2
* iptables replaced by nftables
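As a quick aside on Cockpit: on an installed RHEL 8 system the web console is enabled through its systemd socket, roughly like this (a minimal sketch, assuming the cockpit package is present):

```
sudo systemctl enable --now cockpit.socket
# then browse to https://<server-ip>:9090
```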
### Minimum System Requirements for RHEL 8:
* 4 GB RAM
* 20 GB unallocated disk space
* 64-bit x86 or ARM System
**Note:** RHEL 8 supports the following architectures:
* AMD or Intel x86 64-bit
* 64-bit ARM
* IBM Power Systems, Little Endian & IBM Z
In this article we will demonstrate how to install RHEL 8 step by step with screenshots.
### RHEL 8 Installation Steps with Screenshots
### Step:1) Download RHEL 8.0 ISO file
Download RHEL 8 iso file from its official web site,
<https://access.redhat.com/downloads/>
I am assuming you have an active subscription; if not, register yourself for an evaluation and then download the ISO file.
### Step:2) Create Installation bootable media (USB or DVD)
Once you have downloaded the RHEL 8 ISO file, make it bootable by burning it either to a USB drive or a DVD. Reboot the target system where you want to install RHEL 8, then go to its BIOS settings and set the boot medium to USB or DVD.
### Step:3) Choose “Install Red Hat Enterprise Linux 8.0” option
When the system boots up from the installation media (USB or DVD), we will get the following screen. Choose “**Install Red Hat Enterprise Linux 8.0**” and hit Enter:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Install-RHEL8.jpg>
### Step:4) Choose your preferred language for RHEL 8 installation
In this step, choose the language you want to use for the RHEL 8 installation:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Language-RHEL8-Installation.jpg>
Click on Continue
### Step:5) Preparing RHEL 8 Installation
In this step we will decide the installation destination for RHEL 8; apart from this, we can configure the following:
* Time Zone
* Kdump (enabled/disabled)
* Software Selection (Packages)
* Networking and Hostname
* Security Policies & System purpose
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Installation-summary-rhel8.jpg>
By default, the installer will automatically pick a time zone and enable **kdump**. If you wish to change the time zone, click on the “**Time & Date**” option, set your preferred time zone, and then click on Done:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/timezone-rhel8-installation.jpg>
To configure the IP address and hostname, click on the “**Network & Hostname**” option on the installation summary screen.
If your system is connected to a switch or modem, it will try to get an IP from a DHCP server; otherwise we can configure the IP manually.
Enter the hostname that you want to set and then click on “**Apply**”. Once you are done with the IP address and hostname configuration, click on “Done”:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Network-Hostname-RHEL8-Installation.jpg>
To define the installation disk and partition scheme for RHEL 8, click on the “**Installation Destination**” option:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Installation-Disk-RHEL8-Installation.jpg>
Click on Done
As we can see, I have around 60 GB of free disk space on the sda drive; I will be creating the following customized LVM-based partitions on this disk:
* /boot = 2GB (xfs file system)
* / = 20 GB (xfs file system)
* /var = 10 GB (xfs file system)
* /home = 15 GB (xfs file system)
* /tmp = 5 GB (xfs file system)
* Swap = 2 GB
**Note:** If you don’t want to create manual partitions, select the “**Automatic**” option from the Storage Configuration tab:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Create-New-Partition-RHEL8-Installation.jpg>
Let’s create our first partition, /boot, of size 2 GB. Select LVM as the mount point partitioning scheme and then click on the + (plus) symbol:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/boot-partition-rhel8-installation.jpg>
Click on “ **Add mount point**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Boot-partition-details-rhel8-installation.jpg>
To create the next partition, / of size 20 GB, click on the + symbol and specify the details as shown below:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/slash-partition-rhel8-installation.jpg>
Click on “Add mount point”
<https://www.linuxtechi.com/wp-content/uploads/2019/05/slash-root-partition-details-rhel8-installation.jpg>
As we can see, the installer has created the volume group “**rhel_rhel8**”. If you want to change this name, click on the Modify option, specify the desired name, and then click on Save:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Change-VolumeGroup-RHEL8-Installation.jpg>
From now on, all partitions will be part of the volume group “**VolGrp**”.
Similarly create the next three partitions, **/home**, **/var** and **/tmp**, of size 15 GB, 10 GB and 5 GB respectively:
**/home partition:**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/home-partition-rhel8-installation.jpg>
**/var partition:**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/var-partition-rhel8-installation.jpg>
**/tmp partition:**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/tmp-partition-rhel8-installation.jpg>
Now, finally, create the last partition as swap, of size 2 GB:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Swap-Partition-RHEL8-Installation.jpg>
Click on “Add mount point”
Once you are done with the partition creation, click on Done on the next screen; an example is shown below:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Done-after-partition-creation-rhel8-installation.jpg>
In the next window, choose “ **Accept Changes**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Accept-Changes-RHEL8-Installation.jpg>
### Step:6) Select Software Packages and Choose Security Policy and System purpose
After accepting the changes in the step above, we will be redirected to the installation summary window.
By default, the installer will select “**Server with GUI**” as the software package set. If you want to change it, click on the “**Software Selection**” option and choose your preferred “**Basic Environment**”:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Software-Selection-RHEL8-Installation.jpg>
Click on Done
If you want to set security policies during the installation, choose the required profile from the Security Policies option; otherwise you can leave it as it is.
From the “**System Purpose**” option, specify the role, Red Hat service level agreement and usage, though you can also leave this option as it is:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/System-role-agreement-usage-rhel8-installation.jpg>
Click on Done to proceed further.
### Step:7) Choose “Begin Installation” option to start installation
From the Installation summary window click on “Begin Installation” option to start the installation,
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Begin-Installation-RHEL8-Installation.jpg>
As we can see below, the RHEL 8 installation has started and is in progress:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/RHEL8-Installation-Progress.jpg>
Set the root password,
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Root-Password-RHEL8.jpg>
Specify the local user details, such as the full name, user name and password:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/LocalUser-Details-RHEL8-Installation.jpg>
Once the installation is completed, the installer will prompt us to reboot the system:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/RHEL8-Installation-Completed-Message.jpg>
Click on “Reboot” to restart your system, and don’t forget to change the boot medium in the BIOS settings so that the system boots from the hard disk.
### Step:8) Initial Setup after installation
When the system reboots for the first time after a successful installation, we will get the window below, where we need to accept the license (EULA):
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Accept-EULA-RHEL8-Installation.jpg>
Click on Done,
In the next Screen click on “ **Finish Configuration**
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Finish-Configuration-RHEL8-Installation.jpg>
### Step:9) Login Screen of RHEL 8 Server after Installation
As we installed RHEL 8 Server with GUI, we will get the login screen below. Use the same user name and password that we created during the installation:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Login-Screen-RHEL8.jpg>
After logging in we will get a couple of welcome screens; follow the on-screen instructions, and then finally we will get the following screen:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Ready-to-Use-RHEL8.jpg>
Click on “Start Using Red Hat Enterprise Linux”
<https://www.linuxtechi.com/wp-content/uploads/2019/05/GNOME-Desktop-RHEL8-Server.jpg>
This confirms that we have successfully installed RHEL 8; that’s all for this article. We will be writing more articles on RHEL 8 in the near future; till then, please do share your feedback and comments on this article.
Read Also: **[How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File][1]**
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/rhel-8-installation-steps-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/

@@ -0,0 +1,164 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File)
[#]: via: (https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File
======
Recently Red Hat released its most awaited operating system, “**RHEL 8**”. In case you have installed RHEL 8 Server on your system and are wondering how to set up a local yum or dnf repository using the installation DVD or ISO file, refer to the steps and procedure below.
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Setup-Local-Repo-RHEL8.jpg>
In RHEL 8, we have two package repositories:
* BaseOS
* Application Stream
The BaseOS repository has all the underlying OS packages, whereas the Application Stream repository has all the application-related packages, developer tools, databases etc. Using the Application Stream repository, we can have multiple versions of the same application and database (see the example just below).
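Once the repositories are configured (steps 1-3 below), you can see these application streams for yourself; a quick illustration using PHP (any module name works here):

```
dnf module list php
```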
### Step:1) Mount RHEL 8 ISO file / Installation DVD
To mount the RHEL 8 ISO file inside your RHEL 8 server, use the mount command below:
```
[root@linuxtechi ~]# mount -o loop rhel-8.0-x86_64-dvd.iso /opt/
```
**Note:** I am assuming you have already copied the RHEL 8 ISO file onto your system.
In case you have the RHEL 8 installation DVD, use the mount command below to mount it:
```
[root@linuxtechi ~]# mount /dev/sr0 /opt
```
### Step:2) Copy media.repo file from mounted directory to /etc/yum.repos.d/
In our case the RHEL 8 installation DVD or ISO file is mounted under the /opt folder. Use the cp command to copy the media.repo file to the /etc/yum.repos.d/ directory:
```
[root@linuxtechi ~]# cp -v /opt/media.repo /etc/yum.repos.d/rhel8.repo
'/opt/media.repo' -> '/etc/yum.repos.d/rhel8.repo'
[root@linuxtechi ~]#
```
Set “644” permissions on “**/etc/yum.repos.d/rhel8.repo**”:
```
[root@linuxtechi ~]# chmod 644 /etc/yum.repos.d/rhel8.repo
[root@linuxtechi ~]#
```
### Step:3) Add repository entries in “/etc/yum.repos.d/rhel8.repo” file
By default, the **rhel8.repo** file will have the following content:
<https://www.linuxtechi.com/wp-content/uploads/2019/05/default-rhel8-repo-file.jpg>
Edit the rhel8.repo file and add the following content:
```
[root@linuxtechi ~]# vi /etc/yum.repos.d/rhel8.repo
[InstallMedia-BaseOS]
name=Red Hat Enterprise Linux 8 - BaseOS
metadata_expire=-1
gpgcheck=1
enabled=1
baseurl=file:///opt/BaseOS/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[InstallMedia-AppStream]
name=Red Hat Enterprise Linux 8 - AppStream
metadata_expire=-1
gpgcheck=1
enabled=1
baseurl=file:///opt/AppStream/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
```
The rhel8.repo file should look like the above once we add the content. In case you have mounted the installation DVD or ISO on a different folder, change the location and folder name in the baseurl line for both repositories, and leave the rest of the parameters as they are.
### Step:4) Clean Yum / DNF and Subscription Manager Cache
Use the following commands to clear the yum/dnf and subscription manager caches:
```
[root@linuxtechi ~]# dnf clean all
[root@linuxtechi ~]# subscription-manager clean
All local data removed
[root@linuxtechi ~]#
```
### Step:5) Verify whether Yum / DNF is getting packages from Local Repo
Use the dnf or yum repolist command to verify whether these commands are getting packages from the local repositories:
```
[root@linuxtechi ~]# dnf repolist
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 1:32:44 ago on Sat 11 May 2019 08:48:24 AM BST.
repo id repo name status
InstallMedia-AppStream Red Hat Enterprise Linux 8 - AppStream 4,672
InstallMedia-BaseOS Red Hat Enterprise Linux 8 - BaseOS 1,658
[root@linuxtechi ~]#
```
**Note:** You can use either the dnf or the yum command; if you use the yum command, the request is redirected to DNF itself, because in RHEL 8 yum is based on DNF.
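You can even see this redirection on disk: on a typical RHEL 8 install, yum is just a symlink to the DNF binary (the exact target name may vary between releases):

```
ls -l /usr/bin/yum
# lrwxrwxrwx. 1 root root 5 ... /usr/bin/yum -> dnf-3
```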
If you looked at the above command output carefully, you will have noticed the warning message “**This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register**”. If you want to suppress this message while running dnf/yum commands, edit the file “/etc/yum/pluginconf.d/subscription-manager.conf” and change the parameter “enabled=1” to “enabled=0”:
```
[root@linuxtechi ~]# vi /etc/yum/pluginconf.d/subscription-manager.conf
[main]
enabled=0
```
Save and exit the file.
### Step:6) Installing packages using DNF / Yum
Let’s assume we want to install the nginx web server; run the dnf command below:
```
[root@linuxtechi ~]# dnf install nginx
```
![][1]
Similarly, if you want to install the **LEMP** stack on your RHEL 8 system, use the following dnf command:
```
[root@linuxtechi ~]# dnf install nginx mariadb php -y
```
[![][2]][3]
This confirms that we have successfully configured a local yum/dnf repository on our RHEL 8 server using the installation DVD or ISO file.
In case these steps helped you, please do share your feedback and comments.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/05/dnf-install-nginx-rhel8-1024x376.jpg
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/LEMP-Stack-Install-RHEL8-1024x540.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/LEMP-Stack-Install-RHEL8.jpg

@@ -0,0 +1,93 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why bother writing tests at all?)
[#]: via: (https://dave.cheney.net/2019/05/14/why-bother-writing-tests-at-all)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
Why bother writing tests at all?
======
In previous posts and presentations I talked about [how to test][1], and [when to test][2]. To conclude this series of posts, I’m going to ask the question: _why test at all?_
### Even if you dont, someone _will_ test your software
I’m sure no-one reading this post thinks that software should be delivered without being tested first. Even if that were true, your customers are going to test it, or at least use it. If nothing else, it would be good to discover any issues with the code before your customers do. If not for the reputation of your company, at least for your professional pride.
So, if we agree that software should be tested, the question becomes: _who_ should do that testing?
### The majority of testing should be performed by development teams
I argue that the majority of the testing should be done by development groups. Moreover, testing should be automated, and thus the majority of these tests should be unit style tests.
To be clear, I am _not_ saying you shouldn’t write integration, functional, or end-to-end tests. I’m also _not_ saying that you shouldn’t have a QA group, or integration test engineers. However, at a recent software conference, in a room of over 1,000 engineers, nobody raised their hand when I asked if they considered themselves in a pure quality assurance role.
You might argue that the audience was self selecting, that QA engineers did not feel a software conference was relevant, or welcoming, to them. However, I think this proves my point: the days of [one developer to one test engineer][3] are gone and not coming back.
If development teams aren’t writing the majority of tests, who is?
### Manual testing should not be the majority of your testing because manual testing is O(n)
Thus, if individual contributors are expected to test the software they write, why do we need to automate it? Why is a manual testing plan not good enough?
Manual testing of software or manual verification of a defect is not sufficient because it does not scale. As the number of manual tests grows, engineers are tempted to skip them or only execute the scenarios they _think_ could be affected. Manual testing is expensive in terms of time, and thus dollars, and it is boring. 99.9% of the tests that passed last time are _expected_ to pass again. Manual testing is looking for a needle in a haystack, except you don’t stop when you find the first needle.
This means that your first response when given a bug to fix or a feature to implement should be to write a failing test. This doesn’t need to be a unit test, but it should be an automated test. Once you’ve fixed the bug, or added the feature, you now have the test case to prove it worked, and you can check them in together.
### Tests are the critical component that ensures you can always ship your master branch
As a development team, you are judged on your ability to deliver working software to the business. No, seriously, the business couldn't care less about OOP vs. FP, CI/CD, table tennis, or limited-run La Croix.
Your superpower is that, at any time, anyone on the team should be confident that the master branch of your code is shippable. This means that at any time they can deliver a release of your software to the business, and the business can recoup its investment in your development R&D.
I cannot emphasise this enough. If you want the non-technical parts of the business to believe you are heroes, you must never create a situation where you say “well, we can't release right now because we're in the middle of an important refactoring. It'll be a few weeks. We hope.”
Again, I'm not saying you cannot refactor, but at every stage your product must be shippable. Your tests have to pass. It may not have all the desired features, but the features that are there should work as described on the tin.
### Tests lock in behaviour
Your tests are the contract about what your software does and does not do. Unit tests should lock in the behaviour of the package's API. Integration tests do the same for complex interactions. Tests describe, in code, what the program promises to do.
If there is a unit test for each input permutation, you have defined the contract for what the code will do _in code_, not documentation. This is a contract anyone on your team can assert by simply running the tests. At any stage you _know_ with a high degree of confidence that the behaviour people relied on before your change continues to function after your change.
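A sketch of what locking in behaviour per input permutation can look like; `Slugify` and its cases are invented for illustration, in the spirit of the table driven tests linked below:

```
package slug

import (
	"strings"
	"testing"
)

// Slugify lowercases s, trims surrounding space, and replaces the
// remaining spaces with hyphens.
func Slugify(s string) string {
	s = strings.TrimSpace(strings.ToLower(s))
	return strings.ReplaceAll(s, " ", "-")
}

// TestSlugify records the contract in code: one case per class of
// input the package promises to handle.
func TestSlugify(t *testing.T) {
	tests := []struct{ in, want string }{
		{"Hello World", "hello-world"}, // spaces become hyphens
		{"  padded  ", "padded"},       // surrounding space is dropped
		{"", ""},                       // empty input stays empty
	}
	for _, tc := range tests {
		if got := Slugify(tc.in); got != tc.want {
			t.Errorf("Slugify(%q) = %q, want %q", tc.in, got, tc.want)
		}
	}
}
```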
### Tests give you confidence to change someone else's code
Lastly, and this is the biggest one for programmers working on a piece of code that has been through many hands: tests give you the confidence to make changes.
Even though we've never met, something I know about you, the reader, is that you will eventually leave your current employer. Maybe you'll be moving on to a new role, or perhaps a promotion, perhaps you'll move cities, or follow your partner overseas. Whatever the reason, the succession of the maintenance of the programs you write is key.
If people cannot maintain our code, then as you and I move from job to job we'll leave behind programs which cannot be maintained. This goes beyond advocacy for a language or tool. Programs which cannot be changed, programs which are too hard for new developers to get started on, or programs which feel like a career digression to work on will reach only one end state: they are a dead end. They represent a balance sheet loss for the business. They will be replaced.
If you worry about who will maintain your code after you're gone, write good tests.
#### Related posts:
1. [Writing table driven tests in Go][4]
2. [Prefer table driven tests][5]
3. [Automatically run your package's tests with inotifywait][6]
4. [The value of TDD][7]
--------------------------------------------------------------------------------
via: https://dave.cheney.net/2019/05/14/why-bother-writing-tests-at-all
作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests
[2]: https://dave.cheney.net/paste/absolute-unit-test-london-gophers.pdf
[3]: https://docs.microsoft.com/en-us/azure/devops/learn/devops-at-microsoft/evolving-test-practices-microsoft
[4]: https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go (Writing table driven tests in Go)
[5]: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests (Prefer table driven tests)
[6]: https://dave.cheney.net/2016/06/21/automatically-run-your-packages-tests-with-inotifywait (Automatically run your packages tests with inotifywait)
[7]: https://dave.cheney.net/2016/04/11/the-value-of-tdd (The value of TDD)

View File

@ -0,0 +1,244 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Monitor and Manage Docker Containers with Portainer.io (GUI tool) Part-2)
[#]: via: (https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/)
[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)
Monitor and Manage Docker Containers with Portainer.io (GUI tool) Part-2
======
As a continuation of Part-1, this Part-2 covers the remaining features of Portainer, as explained below.
### Monitoring docker container images
```
[root@linuxtechi ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9ab9aa72f015 ubuntu "/bin/bash" 14 seconds ago Exited (0) 12 seconds ago suspicious_shannon
305369d3b2bb centos "/bin/bash" 24 seconds ago Exited (0) 22 seconds ago admiring_mestorf
9a669f3dc4f6 portainer/portainer "/portainer" 7 minutes ago Up 7 minutes 0.0.0.0:9000->9000/tcp trusting_keller
```
Including Portainer itself (which runs as a docker container), all the exited and currently running containers are displayed. The screenshot below from the Portainer GUI shows the same.
[![Docker_status][1]][2]
### Monitoring events
Click on the “Events” option on the Portainer web page, as shown below.
Various events that are generated by docker container activity are captured and displayed on this page.
[![Container-Events-Poratiner-GUI][3]][4]
Now let's check and validate how the “**Events**” section works. Create a new docker container from the redis image as explained below, then check the `docker ps -a` status at the docker command line:
```
[root@linuxtechi ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cdbfbef59c31 redis "docker-entrypoint.s…" About a minute ago Up About a minute 6379/tcp angry_varahamihira
9ab9aa72f015 ubuntu "/bin/bash" 10 minutes ago Exited (0) 10 minutes ago suspicious_shannon
305369d3b2bb centos "/bin/bash" 11 minutes ago Exited (0) 11 minutes ago admiring_mestorf
9a669f3dc4f6 portainer/portainer "/portainer" 17 minutes ago Up 17 minutes 0.0.0.0:9000->9000/tcp trusting_keller
```
Click “Event List” at the top to refresh the events list.
[![events_updated][5]][6]
The events page is now updated with this change.
### Host status
Below is a screenshot of Portainer displaying the host status. It is a simple window showing basic information such as the CPU, hostname, and OS info of the host Linux machine. Instead of logging into the host command line, this page provides very useful information at a quick glance.
[![Host-names-Portainer][7]][8]
### Dashboard in Portainer
Until now we have seen various features of Portainer under the “**Local**” section. Now jump to the “**Dashboard**” section of the selected Docker endpoint.
When the “**Endpoint**” option is clicked in the Portainer GUI, the following window appears:
[![End_Point_Settings][9]][10]
This Dashboard has many statuses and options for the containers on the host.
**1) Stacks:** Clicking this option shows the status of any stacks. Since there are none here, it displays zero.
**2) Images:** Clicking this option lists the container images available on the host, including those used by both live and exited containers.
[![Docker-Container-Images-Portainer][11]][12]
For example, create one more “**Nginx**” container and refresh this list to see the update:
```
[root@linuxtechi ~]$ sudo docker run nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
27833a3ba0a5: Pull complete
ea005e36e544: Pull complete
d172c7f0578d: Pull complete
Digest: sha256:e71b1bf4281f25533cf15e6e5f9be4dac74d2328152edf7ecde23abc54e16c1c
Status: Downloaded newer image for nginx:latest
```
The following is the image after refresh,
[![Nginx_Image_creation][13]][14]
Once the Nginx container is stopped/killed, its image will be moved to the unused status.
**Note:** All the image details shown here are very clear, with size, creation date, and time. Compared to the command-line option, maintaining and monitoring containers from here is very easy.
**3) Networks:** This option is used for network operations, like assigning IP addresses, creating subnets, providing an IP address range, and access control (admin and normal users). The following window shows the details of the various options possible; based on your needs, these options can be explored further.
[![Conatiner-Network-Portainer][15]][16]
Once all the various networking parameters are entered, click the “**create network**” button to create the network.
**4) Containers:** Clicking this option shows the container statuses. The list provides details on both running and stopped containers, similar to the output of the `docker ps` command.
[![Containers-Status-Portainer][17]][18]
From this window, containers can be stopped and started as the need arises by checking the checkbox and selecting the buttons above. One example is provided below.
For example, the “CentOS” and “Ubuntu” containers, which are in the stopped state, are started by selecting their checkboxes and hitting the “Start” button.
[![start_containers1][19]][20]
[![start_containers2][21]][22]
**Note:** Since both are base OS images whose default command exits immediately, they will not stay started; Portainer starts them and they stop again right after. Try “Nginx” instead and you will see it come to the “running” status.
[![start_containers3][23]][24]
**5) Volumes:** Described in Part-1 of the Portainer article.
### Setting option in Portainer
Until now we have seen various features of Portainer under the “**Local**” section. Now jump to the “**Settings**” section.
When the “Settings” option is clicked in the Portainer GUI, the following further configuration options are available:
**1) Extensions**: This is a simple Portainer CE subscription process. The details and uses can be seen in the attached window. This is mainly used for maintaining the license and subscription of the respective version.
[![Extensions][25]][26]
**2) Users:** This option is used for adding users, with or without administrative privileges. The following example shows this.
Enter the selected user name, “shashi” in this case, and a password of your choice, then hit the “**Create User**” button below.
[![create_user_portainer][27]][28]
[![create_user2_portainer][29]][30]
[![Internal-user-Portainer][31]][32]
Similarly, the just-created user “shashi” can be removed by selecting the checkbox and hitting the “Remove” button.
[![user_remove_portainer][33]][34]
**3) Endpoints:** This option is used for endpoint management. Endpoints can be added and removed as shown in the attached windows.
[![Endpoint-Portainer-GUI][35]][36]
The new endpoint “shashi” is created using the various default parameters as shown below,
[![Endpoint2-Portainer-GUI][37]][38]
Similarly, this endpoint can be removed by clicking the checkbox and hitting the “Remove” button.
**4) Registries:** This option is used for registry management. As Docker Hub hosts a registry of various images, this feature can be used for similar purposes.
[![Registry-Portainer-GUI][39]][40]
With the default options the “shashi-registry” can be created.
[![Registry2-Portainer-GUI][41]][42]
Similarly this can be removed if not required.
**5) Settings:** This option is used for the following various options,
* Setting-up snapshot interval
* For using custom logo
* To create external templates
* Security features, like disabling/enabling bind mounts for non-admins, disabling/enabling privileges for non-admins, and enabling host management features
The following screenshot shows some options enabled and some disabled for demonstration purposes. Once done, hit the “Save Settings” button to save these options.
[![Portainer-GUI-Settings][43]][44]
There is one more option for “Authentication settings”, offering the LDAP, Internal, or OAuth extension, as shown below.
[![Authentication-Portainer-GUI-Settings][45]][46]
Choose the respective option based on what level of security you want for your environment.
That's all for this article. I hope these Portainer GUI articles help you manage and monitor containers more efficiently. Please do share your feedback and comments.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/
作者:[Shashidhar Soppin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/shashidhar/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker_status-1024x423.jpg
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker_status.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Events-1024x404.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Events.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/events_updated-1024x414.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/events_updated.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Host_names-1024x408.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Host_names.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/05/End_Point_Settings-1024x471.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/05/End_Point_Settings.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Images-1024x398.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Images.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Nginx_Image_creation-1024x439.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Nginx_Image_creation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Network-1024x463.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Network.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Containers-1024x364.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Containers.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers1-1024x432.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers1.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers2-1024x307.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers2.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers3-1024x435.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers3.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Extensions-1024x421.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Extensions.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user-1024x350.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user.jpg
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user2-1024x372.jpg
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user2.jpg
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Internal-user-Portainer-1024x257.jpg
[32]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Internal-user-Portainer.jpg
[33]: https://www.linuxtechi.com/wp-content/uploads/2019/05/user_remove-1024x318.jpg
[34]: https://www.linuxtechi.com/wp-content/uploads/2019/05/user_remove.jpg
[35]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint-1024x349.jpg
[36]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint.jpg
[37]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint2-1024x379.jpg
[38]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint2.jpg
[39]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry-1024x420.jpg
[40]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry.jpg
[41]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry2-1024x409.jpg
[42]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry2.jpg
[43]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-GUI-Settings-1024x418.jpg
[44]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-GUI-Settings.jpg
[45]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Authentication-Portainer-GUI-Settings-1024x344.jpg
[46]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Authentication-Portainer-GUI-Settings.jpg

View File

@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The three Rs of remote work)
[#]: via: (https://dave.cheney.net/2019/05/19/the-three-rs-of-remote-work)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
The three Rs of remote work
======
I started working remotely in 2012. Since then I've worked for big companies and small, organisations with outstanding remote working cultures, and others that probably would have difficulty spelling the word without predictive text. I broadly classify my experiences into three tiers:
### Little r remote
The first kind of remote work I call _little r_ remote.
Your company has an office, but it's not convenient or you don't want to work from there. It could be that the commute is too long, or it's in the next town over, or perhaps a short plane flight away. Sometimes you might go into the office for a day or two a week, and should something serious arise you could join your co-workers onsite for an extended period of time.
If you often hear people say they are going to work from home to get some work done, that's little r remote.
### Big R remote
The next category I call _Big R_ remote. Big R remote differs from little r remote mainly by the tyranny of distance. It's not impossible to visit your co-workers in person, but it is inconvenient. Meeting face to face requires a day's flying. Passports and border crossings are frequently involved. The expense and distance necessitate week-long sprints and commensurate periods of jetlag recuperation.
Because of timezone differences, meetings must be prearranged and periods of overlap closely guarded. Communication becomes less spontaneous and care must be taken to avoid committing to unsustainable working hours.
### Gothic remote
The final category is basically Big R remote working on hard mode. Everything that was hard about Big R remote (timezones, travel schedules, public holidays, daylight saving, video call latency, cultural and language barriers) is multiplied for each remote worker.
In-person meetings are so rare that, without a focus on written asynchronous communication, progress can repeatedly stall for days, if not weeks, as miscommunication leads to disillusionment and loss of trust.
In my experience, for knowledge workers, little r remote work offers many benefits over [the open office hell scape][1] du jour. Big R remote takes a serious commitment by all parties, and if you are the first employee in that category you will bear most of the cost of making Big R remote work for you.
Gothic remote working should probably be avoided unless all those involved have many years of working in that style _and_ the employer is committed to restructuring the company as a remote-first organisation. It is not possible to succeed in a Gothic remote role without a culture of written communication and asynchronous decision making mandated, _and consistently enforced,_ by the leaders of the company.
#### Related posts:
1. [How to dial remote SSL/TLS services in Go][2]
2. [How does the go build command work ?][3]
3. [Why Slack is inappropriate for open source communications][4]
4. [The office coffee model of concurrent garbage collection][5]
--------------------------------------------------------------------------------
via: https://dave.cheney.net/2019/05/19/the-three-rs-of-remote-work
作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://twitter.com/davecheney/status/761693088666357760
[2]: https://dave.cheney.net/2010/10/05/how-to-dial-remote-ssltls-services-in-go (How to dial remote SSL/TLS services in Go)
[3]: https://dave.cheney.net/2013/10/15/how-does-the-go-build-command-work (How does the go build command work ?)
[4]: https://dave.cheney.net/2017/04/11/why-slack-is-inappropriate-for-open-source-communications (Why Slack is inappropriate for open source communications)
[5]: https://dave.cheney.net/2018/12/28/the-office-coffee-model-of-concurrent-garbage-collection (The office coffee model of concurrent garbage collection)

View File

@ -0,0 +1,193 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Download and Use Ansible Galaxy Roles in Ansible Playbook)
[#]: via: (https://www.linuxtechi.com/use-ansible-galaxy-roles-ansible-playbook/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Download and Use Ansible Galaxy Roles in Ansible Playbook
======
**Ansible** is the tool of choice these days if you must manage multiple devices, be they Linux, Windows, Mac, network devices, VMware, and a lot more. What makes Ansible popular is its agentless architecture and granular control. If you have worked with Python or have experience with **YAML**, you will feel at home with Ansible. To see how you can install [Ansible][1], click here.
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Download-Use-Ansible-Galaxy-Roles.jpg>
Ansible core modules will let you manage almost anything, should you wish to write the playbooks. However, often someone has already written a role for the problem you are trying to solve. Let's take an example: you wish to manage NTP clients on your Linux machines. You have two choices: either write a role yourself which can be applied to the nodes, or use **ansible-galaxy** to download an existing role that someone has already written and tested for you. Ansible Galaxy has roles for almost all domains, and these cater to different problems. You can visit <https://galaxy.ansible.com/> to get an idea of the domains and the popular roles it has. Each role published on the Galaxy repository is thoroughly tested and has been rated by users, so you get an idea of how other people who have used it liked it.
To keep moving with the NTP idea, here is how you can search and install an NTP role from galaxy.
First, let's run ansible-galaxy with the help flag to check what options it gives us:
```
[root@linuxtechi ~]# ansible-galaxy --help
```
![ansible-galaxy-help][2]
As you can see from the output above, some interesting options are shown. Since we are looking for a role to manage NTP clients, let's try the search option to see how good it is at finding what we are looking for:
```
[root@linuxtechi ~]# ansible-galaxy search ntp
```
Here is the truncated output of the command above.
![ansible-galaxy-search][3]
It found 341 matches based on our search. As you can see from the output above, many of these roles are not even related to NTP, which means our search needs some refinement; however, it has managed to pull in some NTP roles. Let's dig deeper to see what these roles are. But before that, let me tell you the naming convention being followed here. The name of a role is always preceded by the author's name, so that it is easy to segregate roles with the same name. So, if you have written an NTP role and published it to the Galaxy repository, it does not get mixed up with someone else's repository with the same name.
With that out of the way, let's continue with our job of installing an NTP role for our Linux machines. Let's try **bennojoy.ntp** for this example, but before using it we need to figure out a couple of things: is this role compatible with the version of Ansible I am running, and what is the license status of this role? To figure these out, let's run the below ansible-galaxy command:
```
[root@linuxtechi ~]# ansible-galaxy info bennojoy.ntp
```
![ansible-galaxy-info][4]
OK, so this says the minimum version is 1.4 and the license is BSD. Let's download it:
```
[root@linuxtechi ~]# ansible-galaxy install bennojoy.ntp
- downloading role 'ntp', owned by bennojoy
- downloading role from https://github.com/bennojoy/ntp/archive/master.tar.gz
- extracting bennojoy.ntp to /etc/ansible/roles/bennojoy.ntp
- bennojoy.ntp (master) was installed successfully
[root@linuxtechi ~]# ansible-galaxy list
- bennojoy.ntp, master
[root@linuxtechi ~]#
```
Let's find the newly installed role:
```
[root@linuxtechi ~]# cd /etc/ansible/roles/bennojoy.ntp/
[root@linuxtechi bennojoy.ntp]# ls -l
total 4
drwxr-xr-x. 2 root root 21 May 21 22:38 defaults
drwxr-xr-x. 2 root root 21 May 21 22:38 handlers
drwxr-xr-x. 2 root root 48 May 21 22:38 meta
-rw-rw-r--. 1 root root 1328 Apr 20 2016 README.md
drwxr-xr-x. 2 root root 21 May 21 22:38 tasks
drwxr-xr-x. 2 root root 24 May 21 22:38 templates
drwxr-xr-x. 2 root root 55 May 21 22:38 vars
[root@linuxtechi bennojoy.ntp]#
```
I am going to run this newly downloaded role on my Elasticsearch CentOS node. Here is my hosts file:
```
[root@linuxtechi ~]# cat hosts
[CentOS]
elastic7-01 ansible_host=192.168.1.15 ansible_port=22 ansible_user=linuxtechi
[root@linuxtechi ~]#
```
Let's try to ping the node using the Ansible ping module:
```
[root@linuxtechi ~]# ansible -m ping -i hosts elastic7-01
elastic7-01 | SUCCESS => {
"changed": false,
"ping": "pong"
}
[root@linuxtechi ~]#
```
Here is what the current ntp.conf looks like on the elastic node:
```
[root@linuxtechi ~]# head -30 /etc/ntp.conf
```
![Current-ntp-conf][5]
Since I am in India, let's add the server **in.pool.ntp.org** to ntp.conf. To do so, I have to edit the variables in the defaults directory of the role:
```
[root@linuxtechi ~]# vi /etc/ansible/roles/bennojoy.ntp/defaults/main.yml
```
Change the NTP server address in the “ntp_server” parameter; after updating, it should look like below.
![Update-ansible-ntp-role][6]
The last thing now is to create the playbook which calls this role:
```
[root@linuxtechi ~]# vi ntpsite.yaml
---
- name: Configure NTP on CentOS/RHEL/Debian System
  become: true
  hosts: all
  roles:
    - { role: bennojoy.ntp }
```
Save and exit the file.
We are ready to run this role now. Use the below command to run the NTP playbook:
```
[root@linuxtechi ~]# ansible-playbook -i hosts ntpsite.yaml
```
The output of the above NTP playbook should be something like below:
![ansible-playbook-output][7]
Let's check the updated file now. Go to the elastic node and view the contents of the ntp.conf file:
```
[root@linuxtechi ~]# cat /etc/ntp.conf
#Ansible managed
driftfile /var/lib/ntp/drift
server in.pool.ntp.org
restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery
restrict 127.0.0.1
[root@linuxtechi ~]#
```
Just in case you do not find a role fulfilling your requirements, ansible-galaxy can help you create a directory structure for your custom roles. This helps keep your playbooks, along with the variables, handlers, templates, etc., assembled in a standardized file structure. Let's create our own role; it is always good practice to let ansible-galaxy create the structure for you:
```
[root@linuxtechi ~]# ansible-galaxy init pk.backup
- pk.backup was created successfully
[root@linuxtechi ~]#
```
Verify the structure of your role using the tree command:
![createing-roles-ansible-galaxy][8]
Let me quickly explain what each of these directories and files is for; each serves a purpose.
The very first one is the **defaults** directory, which contains files with variables that take the lowest precedence; if the same variables are assigned in the **vars** directory, they take precedence over the defaults. The **handlers** directory hosts the handlers. The **files** and **templates** directories keep any files your role may need to copy and the **Jinja templates** to be used in playbooks, respectively. The **tasks** directory is where the playbooks containing your tasks are kept. The **vars** directory consists of the files that host the variables used in the role. The **tests** directory contains a sample inventory and test playbooks which can be used to test the role. The **meta** directory holds any dependencies on other roles, along with the authorship information.
Finally, the **README.md** file simply contains some general information, like the description and the minimum version of Ansible this role is compatible with.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/use-ansible-galaxy-roles-ansible-playbook/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/install-and-use-ansible-in-centos-7/
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-help-1024x294.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-search-1024x552.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-info-1024x557.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Current-ntp-conf.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Update-ansible-ntp-role.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-playbook-output-1024x376.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/createing-roles-ansible-galaxy.jpg

View File

@ -0,0 +1,200 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server)
[#]: via: (https://www.linuxtechi.com/install-lemp-stack-fedora-30-server/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server
======
In this article, we'll be looking at how to install the **LEMP** stack on a Fedora 30 server. LEMP stands for:
* L -> Linux
* E -> Nginx
* M -> MariaDB
* P -> PHP
I am assuming **[Fedora 30][1]** is already installed on your system.
![LEMP-Stack-Fedora30][2]
LEMP is a collection of powerful software installed on a Linux server to help build websites and web applications. LEMP is a variation of LAMP wherein, instead of **Apache**, **EngineX (Nginx)** is used, and **MariaDB** is used in place of **MySQL**. This how-to guide is a collection of separate guides to install Nginx, MariaDB, and PHP.
### Install Nginx, PHP 7.3 and PHP-FPM on Fedora 30 Server
Let's take a look at how to install Nginx and PHP, along with PHP-FPM, on a Fedora 30 server.
### Step 1) Switch to root user
The first step in installing Nginx on your system is to switch to the root user. Use the following command:
```
[root@linuxtechi ~]$ sudo -i
[sudo] password for pkumar:
[root@linuxtechi ~]#
```
### Step 2) Install Nginx, PHP 7.3 and PHP FPM using dnf command
Install Nginx, PHP 7.3, and PHP-FPM using the following dnf command:
```
[root@linuxtechi ~]# dnf install nginx php php-fpm php-common -y
```
### Step 3) Install Additional PHP modules
The default installation of PHP only comes with the basic and most needed modules installed. If you need additional modules like GD, XML support for PHP, the command-line interface, or Zend OPcache features, you can always choose your packages and install everything in one go. See the sample command below:
```
[root@linuxtechi ~]# sudo dnf install php-opcache php-pecl-apcu php-cli php-pear php-pdo php-pecl-mongodb php-pecl-redis php-pecl-memcache php-pecl-memcached php-gd php-mbstring php-mcrypt php-xml -y
```
### Step 4) Start & Enable Nginx and PHP-fpm Service
Start and enable the Nginx service using the following command:
```
[root@linuxtechi ~]# systemctl start nginx && systemctl enable nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@linuxtechi ~]#
```
Use the following command to start and enable the PHP-FPM service:
```
[root@linuxtechi ~]# systemctl start php-fpm && systemctl enable php-fpm
Created symlink /etc/systemd/system/multi-user.target.wants/php-fpm.service → /usr/lib/systemd/system/php-fpm.service.
[root@linuxtechi ~]#
```
**Verify the Nginx (web server) and PHP installation.**
**Note:** In case the OS firewall is enabled and running on your Fedora 30 system, allow ports 80 and 443 using the beneath commands:
```
[root@linuxtechi ~]# firewall-cmd --permanent --add-service=http
success
[root@linuxtechi ~]#
[root@linuxtechi ~]# firewall-cmd --permanent --add-service=https
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
```
Open the web browser, type the following URL: http://<Your-Server-IP>
[![Test-Page-HTTP-Server-Fedora-30][3]][4]
The above screen confirms that NGINX has been installed successfully.
Now let's verify the PHP installation. Create a test PHP page (info.php) using the beneath command:
```
[root@linuxtechi ~]# echo "<?php phpinfo(); ?>" > /usr/share/nginx/html/info.php
[root@linuxtechi ~]#
```
Type the following URL in the web browser,
http://<Your-Server-IP>/info.php
[![Php-info-page-fedora30][5]][6]
The above page confirms that PHP 7.3.5 has been installed successfully. Now let's install the MariaDB database server.
### Install MariaDB on Fedora 30
MariaDB is a great replacement for MySQL, as it works much like MySQL and is compatible with MySQL commands too. Let's look at the steps to install MariaDB on a Fedora 30 server.
### Step 1) Switch to Root User
The first step in installing MariaDB on your system is to switch to the root user, or you can use a local user who has root privileges. Use the following command:
```
[root@linuxtechi ~]# sudo -i
[root@linuxtechi ~]#
```
### Step 2) Install latest version of MariaDB (10.3) using dnf command
Use the following command to install MariaDB on a Fedora 30 server:
```
[root@linuxtechi ~]# dnf install mariadb-server -y
```
### Step 3) Start and enable MariaDB Service
Once MariaDB is installed successfully in step 2, the next step is to start and enable the MariaDB service. Use the following command:
```
[root@linuxtechi ~]# systemctl start mariadb.service ; systemctl enable mariadb.service
```
### Step 4) Secure MariaDB Installation
When we install the MariaDB server, there is no root password by default, and anonymous users are created in the database. So, to secure the MariaDB installation, run the beneath “mysql_secure_installation” command:
```
[root@linuxtechi ~]# mysql_secure_installation
```
Next you will be prompted with some questions; just answer them as shown below:
![Secure-MariaDB-Installation-Part1][7]
![Secure-MariaDB-Installation-Part2][8]
### Step 5) Test MariaDB Installation
Once you have installed it, you can always test whether MariaDB is working successfully on the server. Use the following command:
```
[root@linuxtechi ~]# mysql -u root -p
Enter password:
```
Next you will be prompted for a password. Enter the same password that you set during the MariaDB secure installation, and you will see the MariaDB welcome screen.
```
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 17
Server version: 10.3.12-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
```
And finally, we've completed everything needed to install LEMP (Linux, Nginx, MariaDB, and PHP) on your server successfully. Please post all your comments and suggestions in the feedback section below and we'll respond back at the earliest.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-lemp-stack-fedora-30-server/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/fedora-30-workstation-installation-guide/
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/06/LEMP-Stack-Fedora30.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Test-Page-HTTP-Server-Fedora-30-1024x732.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Test-Page-HTTP-Server-Fedora-30.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Php-info-page-fedora30-1024x732.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Php-info-page-fedora30.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Secure-MariaDB-Installation-Part1.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Secure-MariaDB-Installation-Part2.jpg

View File

@ -0,0 +1,52 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Kubernetes is a dump truck: Here's why)
[#]: via: (https://opensource.com/article/19/6/kubernetes-dump-truck)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)
Kubernetes is a dump truck: Here's why
======
Dump trucks are an elegant solution to a wide range of essential
business problems.
![Dump truck with kids standing in the foreground][1]
As we approach Kubernetes' anniversary on Friday, June 7 this week, let's start with this.
Dump trucks are elegant. Seriously, stay with me for a minute. They solve a wide array of technical problems in an elegant way. They can move dirt, gravel, rocks, coal, construction material, or road barricades. They can even pull trailers with other pieces of heavy equipment on them. You can load a dump truck with five tons of dirt and drive across the country with it. For a nerd like me, that's elegance.
But, they're not easy to use. Dump trucks require a special driver's license. They're also not easy to equip or maintain. There are a ton of options when you buy a dump truck and a lot of maintenance. But, they're elegant for moving dirt.
You know what's not elegant for moving dirt? A late-model, compact sedan. They're way easier to buy. Easier to drive. Easier to maintain. But, they're terrible at carrying dirt. It would take 200 trips to carry five tons of dirt, and nobody would want the car after that.
Alright, you're sold on using a dump truck, but you want to build it yourself. I get it. I'm a nerd and I love building things. But…
If you owned a construction company, you wouldn't build your own dump trucks. You definitely wouldn't maintain the supply chain to rebuild dump trucks (that's a big supply chain). But you would learn to drive one.
OK, my analogy is crude but easy to understand. Ease of use is relative. Ease of maintenance is relative. Ease of configuration is relative. It really depends on what you are trying to do. [Kubernetes][2] is no different.
Building Kubernetes once isn't too hard. Equipping Kubernetes? Well, that gets harder. What did you think of KubeCon? How many new projects were announced? Which ones are "real"? Which ones should you learn? How deeply do you understand Harbor, TiKV, NATD, Vitess, Open Policy Agent? Not to mention Envoy, eBPF, and a bunch of the underlying technologies in Linux? It feels a lot like building dump trucks in 1904 with the industrial revolution in full swing. Figuring out what screws, bolts, metal, and pistons to use. (Steampunk, anyone?)
Building and equipping Kubernetes, like a dump truck, is a technical problem you probably shouldn't be tackling if you are in financial services, retail, biological research, food services, and so forth. But, learning how to drive Kubernetes is definitely something you should be learning.
Kubernetes, like a dump truck, is elegant for the wide variety of technical problems it can solve (and the ecosystem it drags along). So, I'll leave you with a quote that one of my computer science professors told us in my first year of college. She said, "one day, you will look at a piece of code and say to yourself, 'now that's elegant!'"
Kubernetes is elegant.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/kubernetes-dump-truck
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dump_truck_car_container_kubernetes.jpg?itok=4BdmyVGd (Dump truck with kids standing in the foreground)
[2]: https://kubernetes.io/

View File

@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to navigate the Kubernetes learning curve)
[#]: via: (https://opensource.com/article/19/6/kubernetes-learning-curve)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux/users/fatherlinux)
How to navigate the Kubernetes learning curve
======
Kubernetes is like a dump truck. It's elegant for solving the problems
it's designed for, but you have to master the learning curve first.
![Dump truck rounding a turn in the road][1]
In _[Kubernetes is a dump truck][2]_, I talked about how a tool can be elegant for the problem it was designed to solve, once you learn how to use it. In part 2 of this series, I'm going a little deeper into the Kubernetes learning curve.
The journey to [Kubernetes][3] often starts with running one container on one host. You quickly discover how easy it is to run new versions of software, how easy it is to share that software with others, and how easy it is for those users to run it the way you intended.
But then you need:
* Two containers
* Two hosts
It's easy to fire up one web server on port 80 with a container, but what happens when you need to fire up a second container on port 80? What happens when you are building a production environment and you need the containerized web server to fail over to a second host? The short answer, in either case, is you have to move into container orchestration.
Inevitably, when you start to handle the two containers or two hosts problem, you'll introduce complexity and, hence, a learning curve. The two services (a more generalized version of a container) / two hosts problem has been around for a long time and has always introduced complexity.
Historically, this would have involved load balancers, clustering software, and even clustered file systems. Configuration logic for every service is embedded in every system (load balancers, cluster software, and file systems). Running 60 or 70 services, clustered, behind load balancers is complex. Adding another new service is also complex. Worse, decommissioning a service is a nightmare. Thinking back on my days of troubleshooting production MySQL and Apache servers with logic embedded in three, four, or five different places, all in different formats, still makes my head hurt.
Kubernetes elegantly solves all these problems with one piece of software:
1. Two services (containers): Check
2. Two servers (high availability): Check
3. Single source of configuration: Check
4. Standard configuration format: Check
5. Networking: Check
6. Storage: Check
7. Dependencies (what services talk to what databases): Check
8. Easy provisioning: Check
9. Easy de-provisioning: Check (perhaps Kubernetes' _most_ powerful piece)
Wait, it's starting to look like Kubernetes is pretty elegant and pretty powerful. _It is._ You can model an entire miniature IT universe in Kubernetes.
![Kubernetes business model][4]
So yes, there is a learning curve when starting to use a giant dump truck (or any professional equipment). There's also a learning curve to use Kubernetes, but it's worth it because you can solve so many problems with one tool. If you are apprehensive about the learning curve, think through all the underlying networking, storage, and security problems in IT infrastructure and envision their solutions today—they're not easier. Especially when you introduce more and more services, faster and faster. Velocity is the goal nowadays, so give special consideration to the provisioning and de-provisioning problem.
But don't confuse the learning curve for building or equipping Kubernetes (picking the right mud flaps for your dump truck can be hard, LOL) with the learning curve for using it. Learning to build your own Kubernetes with so many different choices at so many different layers (container engine, logging, monitoring, service mesh, storage, networking), and then maintaining updated selections of each component every six months, might not be worth the investment—but learning to use it is absolutely worth it.
I eat, sleep, and breathe Kubernetes and containers every day, and even I struggle to keep track of all the major new projects announced literally almost every day. But there isn't a day that I'm not excited about the operational benefits of having a single tool to model an entire IT miniverse. Also, remember Kubernetes has matured a ton and will continue to do so. Like Linux and OpenStack before it, the interfaces and de facto projects at each layer will mature and become easier to select.
In the third article in this series, I'll dig into what you need to know before you drive your Kubernetes "truck."
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/kubernetes-learning-curve
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fatherlinux/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dumptruck_car_vehicle_storage_container_road.jpg?itok=TWK0CbX_ (Dump truck rounding a turn in the road)
[2]: https://opensource.com/article/19/6/kubernetes-dump-truck
[3]: https://kubernetes.io/
[4]: https://opensource.com/sites/default/files/uploads/developer_native_experience_-_mapped_to_traditional_1.png (Kubernetes business model)

View File

@ -0,0 +1,227 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to set ulimit and file descriptors limit on Linux Servers)
[#]: via: (https://www.linuxtechi.com/set-ulimit-file-descriptors-limit-linux-servers/)
[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)
How to set ulimit and file descriptors limit on Linux Servers
======
**Introduction:** Challenges like too many open files have become common in production environments nowadays. Since many Java-based and Apache-based applications get installed and configured, they may lead to too many open files, file descriptors, etc. If this exceeds the default limit that is set, then one may face access control problems and difficulties opening files. Many production environments come to a standstill because of this.
<https://www.linuxtechi.com/wp-content/uploads/2019/06/ulimit-number-openfiles-linux-server.jpg>
Luckily, we have the “**ulimit**” command on any Linux-based server, by which one can see or set the open files limit and its configuration details. This command is equipped with many options, and with these combinations one can set the number of open files. The following are step-by-step commands with examples, explained in detail.
### To see the present open file limit on any Linux system
To get the open file limit on any Linux server, execute the following command:
```
[root@linuxtechi ~]# cat /proc/sys/fs/file-max
146013
```
The above number shows that the kernel will allocate a maximum of 146013 file handles system-wide.
```
[root@linuxtechi ~]# cat /proc/sys/fs/file-max
149219
[root@linuxtechi ~]# cat /proc/sys/fs/file-max
73906
```
This clearly indicates that individual Linux systems have different limits on the number of open files, based on the dependencies and applications which are running on the respective systems.
### The ulimit command
As the name suggests, ulimit (user limit) is used to display and set resource limits for the logged-in user. When we run the ulimit command with the -a option, it will print all resource limits for the logged-in user. Now let's run “**ulimit -a**” on Ubuntu/Debian and CentOS systems.
**Ubuntu / Debian System** ,
```
[root@linuxtechi ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 5731
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 5731
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
```
**CentOS System**
```
[root@linuxtechi ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 5901
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 5901
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
```
As can be seen here, different OSes have different limits set. All these limits can be configured/changed using the “ulimit” command.
To display an individual resource limit, pass the corresponding parameter to the ulimit command; some of the parameters are listed below:
* ulimit -n > It will display the limit on the number of open files
* ulimit -c > It will display the size of core files
* ulimit -u > It will display the maximum user process limit for the logged-in user
* ulimit -f > It will display the maximum file size that the user can have
* ulimit -m > It will display the maximum memory size for the logged-in user
* ulimit -v > It will display the maximum virtual memory size limit
Use the below commands to check the hard and soft limits on the number of open files for the logged-in user:
```
[root@linuxtechi ~]$ ulimit -Hn
1048576
[root@linuxtechi ~]$ ulimit -Sn
1024
```
### How to fix the problem when the limit on the maximum number of files is reached
Let's assume our Linux server has reached the limit on the maximum number of open files and we want to extend that limit system-wide; for example, we want to set 100000 as the limit on the number of open files.
Use the sysctl command to pass the fs.file-max parameter to the kernel on the fly; execute the beneath command as the root user:
```
root@linuxtechi~]# sysctl -w fs.file-max=100000
fs.file-max = 100000
```
The above change will be active only until the next reboot, so to make it persistent across reboots, edit the file **/etc/sysctl.conf** and add the same parameter:
```
root@linuxtechi~]# vi /etc/sysctl.conf
fs.file-max = 100000
```
Save and exit the file.
Run the beneath command to make the above changes take effect immediately, without logging out or rebooting:
```
root@linuxtechi~]# sysctl -p
```
Now verify whether the new changes are in effect:
```
root@linuxtechi~]# cat /proc/sys/fs/file-max
100000
```
Use the below command to find out how many file descriptors are currently being utilized:
```
[root@linuxtechi ~]# more /proc/sys/fs/file-nr
1216 0 100000
```
**Note:** The “**sysctl -p**” command is used to commit the changes without a reboot or logout.
### Set user-level resource limits via the limits.conf file
The “**/etc/sysctl.conf**” file is used to set resource limits system-wide, but if you want to set resource limits for a specific user like oracle, mariadb, or apache, this can be achieved via the “**/etc/security/limits.conf**” file.
A sample limits.conf is shown below:
```
[root@linuxtechi ~]# cat /etc/security/limits.conf
```
![Limits-conf-linux-part1][1]
![Limits-conf-linux-part2][2]
Let's assume we want to set hard and soft limits on the number of open files for the linuxtechi user, and hard and soft limits on the number of processes for the oracle user. Edit the file “/etc/security/limits.conf” and add the following lines:
```
# hard limit for max opened files for linuxtechi user
linuxtechi hard nofile 4096
# soft limit for max opened files for linuxtechi user
linuxtechi soft nofile 1024
# hard limit for max number of process for oracle user
oracle hard nproc 8096
# soft limit for max number of process for oracle user
oracle soft nproc 4096
```
Save & exit the file.
**Note:** In case you want to put a resource limit on a group instead of users, this is also possible via the limits.conf file: in place of the user name, type **@<Group_Name>**, and the rest of the items stay the same. An example is shown below:
```
# hard limit for max opened files for sysadmin group
@sysadmin hard nofile 4096
# soft limit for max opened files for sysadmin group
@sysadmin soft nofile 1024
```
Verify whether the new changes are in effect:
```
~]# su - linuxtechi
~]$ ulimit -n -H
4096
~]$ ulimit -n -S
1024
~]# su - oracle
~]$ ulimit -H -u
8096
~]$ ulimit -S -u
4096
```
Note: Another majorly used command is “[**lsof**][3]”, which is used for finding out how many files are currently open. This command is very helpful for admins.
**Conclusion:**
As mentioned in the introduction section, the “ulimit” command is very powerful and helps one configure limits and make sure application installations go smoothly without any bottlenecks. This command helps in fixing many of the open file limitations on Linux-based servers.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/set-ulimit-file-descriptors-limit-linux-servers/
作者:[Shashidhar Soppin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/shashidhar/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Limits-conf-linux-part1-1024x677.jpg
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Limits-conf-linux-part2-1024x443.jpg
[3]: https://www.linuxtechi.com/lsof-command-examples-linux-geeks/

View File

@ -0,0 +1,281 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Constant Time)
[#]: via: (https://dave.cheney.net/2019/06/10/constant-time)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
Constant Time
======
This essay is derived from my [dotGo 2019 presentation][1] about my favourite feature in Go.
* * *
Many years ago Rob Pike remarked,
> “Numbers are just numbers, you'll never see `0x80ULL` in a `.go` source file”.
—Rob Pike, [The Go Programming Language][2]
Beyond this pithy observation lies the fascinating world of Go's constants. Something that is perhaps taken for granted because, as Rob noted, in Go numbers, constants in particular, just work.
In this post I intend to show you a few things that perhaps you didn't know about Go's `const` keyword.
## What's so great about constants?
To kick things off, why are constants good? Three things spring to mind:
* _Immutability_. Constants are one of the few ways we have in Go to express immutability to the compiler.
* _Clarity_. Constants give us a way to extract magic numbers from our code, giving them names and semantic meaning.
* _Performance_. The ability to express to the compiler that something will not change is key as it unlocks optimisations such as constant folding, constant propagation, branch and dead code elimination.
But these are generic use cases for constants; they apply to any language. Let's talk about some of the properties of Go's constants.
### A Challenge
To introduce the power of Go's constants, let's try a little challenge: declare a _constant_ whose value is the number of bits in the natural machine word.
We can't use `unsafe.Sizeof` as it is not a constant expression. We could use a build tag and laboriously record the natural word size of each Go platform, or we could do something like this:
```
const uintSize = 32 << (^uint(0) >> 32 & 1)
```
There are many versions of this expression in Go codebases. They all work roughly the same way. If we're on a 64-bit platform, then the exclusive or of the number zero (all zero bits) is a number with all bits set, sixty-four of them to be exact.
```
1111111111111111111111111111111111111111111111111111111111111111
```
If we shift that value thirty two bits to the right, we get another value with thirty two ones in it.
```
0000000000000000000000000000000011111111111111111111111111111111
```
ANDing that with a number that has one bit in the final position gives us the same thing, `1`:
```
0000000000000000000000000000000011111111111111111111111111111111 & 1 = 1
```
Finally we shift the number thirty-two one place to the left, giving us 64.
```
32 << 1 = 64
```
This expression is an example of a _constant expression_. All of these operations happen at compile time and the result of the expression is itself a constant. If you look in the runtime package, in particular the garbage collector, you'll see how constant expressions are used to set up complex invariants based on the word size of the machine the code is compiled on.
So, this is a neat party trick, but most compilers will do this kind of constant folding at compile time for you. Let's step it up a notch.
## Constants are values
In Go, constants are values and each value has a type. In Go, user defined types can declare their own methods. Thus, a constant value can have a method set. If you're surprised by this, let me show you an example that you probably use every day.
```
const timeout = 500 * time.Millisecond
fmt.Println("The timeout is", timeout) // 500ms
```
In the example the untyped literal constant `500` is multiplied by `time.Millisecond`, itself a constant of type `time.Duration`. The rule for assignments in Go is that, unless otherwise declared, the type on the left hand side of the assignment operator is inferred from the type on the right. `500` is an untyped constant, so it is converted to a `time.Duration` and then multiplied with the constant `time.Millisecond`.
Thus `timeout` is a constant of type `time.Duration` which holds the value `500000000`.
Why then does `fmt.Println` print `500ms`, not `500000000`?
The answer is `time.Duration` has a `String` method. Thus any `time.Duration` value, even a constant, knows how to pretty print itself.
Now we know that constant values are typed, and because types can declare methods, we can derive that _constant values can fulfil interfaces_. In fact we just saw an example of this. `fmt.Println` doesn't assert that a value has a `String` method, it asserts the value implements the `Stringer` interface.
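To make this concrete, here is a small sketch of my own (the `Weekday` type is an illustration, not something from the original essay): a constant of a user defined type satisfying `fmt.Stringer`.
```
package main

import "fmt"

// Weekday is a user defined type; its constants carry the method set below.
type Weekday int

// String makes every Weekday value, constants included, a fmt.Stringer.
func (d Weekday) String() string {
    return [...]string{"Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"}[d]
}

const Tuesday Weekday = 2

func main() {
    fmt.Println("Today is", Tuesday) // prints: Today is Tuesday
}
```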
Let's talk a little about how we can use this property to make our Go code better, and to do that I'm going to take a brief digression into the singleton pattern.
## Singletons
I'm generally not a fan of the singleton pattern, in Go or any language. Singletons complicate testing and create unnecessary coupling between packages. I feel the singleton pattern is often used _not_ to create a singular instance of a thing, but instead to create a place to coordinate registration. `net/http.DefaultServeMux` is a good example of this pattern.
```
package http
// DefaultServeMux is the default ServeMux used by Serve.
var DefaultServeMux = &defaultServeMux
var defaultServeMux ServeMux
```
There is nothing singular about `http.defaultServeMux`; nothing prevents you from creating another `ServeMux`. In fact the `http` package provides a helper that will create as many `ServeMux`s as you want.
```
// NewServeMux allocates and returns a new ServeMux.
func NewServeMux() *ServeMux { return new(ServeMux) }
```
`http.DefaultServeMux` is not a singleton. Nevertheless, there is a case for things which are truly singletons because they can only represent a single thing. A good example of this is the file descriptors of a process: 0, 1, and 2, which represent stdin, stdout, and stderr respectively.
It doesn't matter what names you give them, `1` is always stdout, and there can only ever be one file descriptor `1`. Thus these two operations are identical:
```
fmt.Fprintf(os.Stdout, "Hello dotGo\n")
syscall.Write(1, []byte("Hello dotGo\n"))
```
So lets look at how the `os` package defines `Stdin`, `Stdout`, and `Stderr`:
```
package os
var (
Stdin = NewFile(uintptr(syscall.Stdin), "/dev/stdin")
Stdout = NewFile(uintptr(syscall.Stdout), "/dev/stdout")
Stderr = NewFile(uintptr(syscall.Stderr), "/dev/stderr")
)
```
There are a few problems with this declaration. Firstly their type is `*os.File` not the respective `io.Reader` or `io.Writer` interfaces. People have long complained that this makes replacing them with alternatives problematic. However the notion of replacing these variables is precisely the point of this digression. Can you safely change the value of `os.Stdout` once your program is running without causing a data race?
I argue that, in the general case, you cannot. In general, if something is unsafe to do, as programmers we shouldn't let our users think that it is safe, [lest they begin to depend on that behaviour][3].
Could we change the definition of `os.Stdout` and friends so that they retain the observable behaviour of reading and writing, but remain immutable? It turns out, we can do this easily with constants.
```
package main

import (
    "fmt"
    "syscall"
)

type readfd int
func (r readfd) Read(buf []byte) (int, error) {
return syscall.Read(int(r), buf)
}
type writefd int
func (w writefd) Write(buf []byte) (int, error) {
return syscall.Write(int(w), buf)
}
const (
Stdin = readfd(0)
Stdout = writefd(1)
Stderr = writefd(2)
)
func main() {
fmt.Fprintf(Stdout, "Hello world")
}
```
In fact this change causes only one compilation failure in the standard library.
## Sentinel error values
Another case of things which look like constants but really aren't is sentinel error values. `io.EOF`, `sql.ErrNoRows`, `crypto/x509.ErrUnsupportedAlgorithm`, and so on are all examples of sentinel error values. They all fall into a category of _expected_ errors, and because they are expected, you're expected to check for them.
To compare the error you have with the one you were expecting, you need to import the package that defines that error. Because, by definition, sentinel errors are exported public variables, any code that imports, for example, the `io` package could change the value of `io.EOF`.
```
package nelson
import "io"
func init() {
io.EOF = nil // haha!
}
```
I'll say that again. If I know the name of `io.EOF` I can import the package that declares it, which I must if I want to compare it to my error, and thus I could change `io.EOF`'s value. Historically, convention and a bit of dumb luck have discouraged people from writing code that does this, but technically there is nothing to prevent you from doing so.
Replacing `io.EOF` is probably going to be detected almost immediately. But replacing a less frequently used sentinel error may cause some interesting side effects:
```
package innocent
import "crypto/rsa"
func init() {
rsa.ErrVerification = nil // 🤔
}
```
If you were hoping the race detector will spot this subterfuge, I suggest you talk to the folks writing testing frameworks who replace `os.Stdout` without it triggering the race detector.
## Fungibility
I want to digress for a moment to talk about _the_ most important property of constants. Constants aren't just immutable; it's not enough that we cannot overwrite their declaration.
Constants are _fungible_. This is a tremendously important property that doesn't get nearly enough attention.
Fungible means identical. Money is a great example of fungibility. If you were to lend me 10 bucks, and I later pay you back, the fact that you gave me a 10 dollar note and I returned to you 10 one dollar bills, with respect to its operation as a financial instrument, is irrelevant. Things which are fungible are by definition equal and equality is a powerful property we can leverage for our programs.
```
var myEOF = errors.New("EOF") // io/io.go line 38
fmt.Println(myEOF == io.EOF) // false
```
Putting aside the effect of malicious actors in your code base, the key design challenge with sentinel errors is that they behave like _singletons_, not _constants_. Even if we follow the exact procedure used by the `io` package to create our own EOF value, `myEOF` and `io.EOF` are not equal. `myEOF` and `io.EOF` are not fungible; they cannot be interchanged. Programs can spot the difference.
When you combine the lack of immutability, the lack of fungibility, and the lack of equality, you have a set of weird behaviours stemming from the fact that sentinel error values in Go are not constant expressions. But what if they were?
## Constant errors
Ideally a sentinel error value should behave as a constant. It should be immutable and fungible. Let's recap how the built-in `error` interface works in Go.
```
type error interface {
Error() string
}
```
Any type with an `Error() string` method fulfils the `error` interface. This includes user defined types, it includes types derived from primitives like string, and it includes constant strings. With that background, consider this error implementation:
```
type Error string
func (e Error) Error() string {
return string(e)
}
```
We can use this error type as a constant expression:
```
const err = Error("EOF")
```
Unlike `errors.errorString`, which is a struct, a compact struct literal initialiser is not a constant expression and cannot be used.
```
const err2 = errors.errorString{"EOF"} // doesn't compile
```
As constants of this `Error` type are not variables, they are immutable.
```
const err = Error("EOF")
err = Error("not EOF") // doesn't compile
```
Additionally, two constant strings are always equal if their contents are equal:
```
const str1 = "EOF"
const str2 = "EOF"
fmt.Println(str1 == str2) // true
```
which means two constants of a type derived from string with the same contents are also equal.
```
type Error string
const err1 = Error("EOF")
const err2 = Error("EOF")
fmt.Println(err1 == err2) // true
```
Said another way, equal constant `Error` values are the same, in the way that the literal constant `1` is the same as every other literal constant `1`.
Now we have all the pieces we need to make sentinel errors, like `io.EOF` and `rsa.ErrVerification`, immutable, fungible, constant expressions.
```
% git diff
diff --git a/src/io/io.go b/src/io/io.go
inde
```
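The patch is truncated above. As a purely hypothetical sketch of the idea (this is not the actual diff), the declaration in the `io` package could look something like this:
```
package io

// errString is an error implementation derived from string, so its
// values can appear in constant expressions.
type errString string

func (e errString) Error() string { return string(e) }

// EOF is now immutable and fungible: every errString("EOF") is equal.
const EOF = errString("EOF")
```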
@ -0,0 +1,204 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Step by Step Zorin OS 15 Installation Guide with Screenshots)
[#]: via: (https://www.linuxtechi.com/zorin-os-15-installation-guide-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
Step by Step Zorin OS 15 Installation Guide with Screenshots
======
Good news for all the Zorin users out there! Zorin has launched the latest version (Zorin OS 15) of its Ubuntu-based Linux distro. This version is based on Ubuntu 18.04.2. Since its launch in July 2009, this popular distribution is estimated to have reached more than 17 million downloads. Zorin is renowned for creating a distribution for beginner-level users, and the all-new Zorin OS 15 comes packed with a lot of goodies that will surely make Zorin OS lovers happy. Let's see some of the major enhancements made in the latest version.
### New Features of Zorin OS 15
Zorin OS has always amazed users with a fresh set of features in every release, and Zorin OS 15 is no exception, as it comes with a lot of new features, outlined below:
**Enhanced User Experience**
The moment you look at Zorin OS 15, you may wonder whether it is a Linux distro at all, because it looks more like a Windows OS. According to Zorin, it wants Windows users to move to Linux in a more user-friendly manner. It features a Windows-like Start menu, quick app launchers, a traditional taskbar section, a system tray, etc.
**Zorin Connect**
Another major highlight of Zorin OS 15 is the ability to integrate your Android smartphone seamlessly with your desktop using the Zorin Connect application. With your phone connected, you can share music, videos, and other files between your phone and desktop. You can even use your phone as a mouse to control the desktop, and you can easily control media playback on your desktop from your phone itself. You can also quickly reply from your desktop to all the messages and notifications sent to your phone.
**New GTK Theme**
Zorin OS 15 ships with an all-new GTK theme that has been exclusively built for this distro; the theme is available in 6 different colors along with the hugely popular dark theme. Another highlight is that the OS automatically detects the time of day and changes the desktop theme accordingly: for example, during sunset it switches to a dark theme, whereas in the morning it automatically switches to a bright theme.
**Other New Features:**
Zorin OS 15 comes packed with a lot of new features including:
* Compatible with Thunderbolt 3.0 devices
* Supports color emojis
* Comes with an upgraded Linux Kernel 4.18
* Customized settings available for application menu and task bar
* System font changed to Inter
* Supports renaming bulk files
### Minimum system requirements for Zorin OS 15 (Core):
* Dual Core 64-bit (1GHZ)
* 2 GB RAM
* 10 GB free disk space
* Internet Connection Optional
* Display (800×600)
### Step by Step Guide to Install Zorin OS 15 (Core)
Before you start installing Zorin OS 15, ensure you have a copy of the Zorin OS 15 ISO downloaded on your system. If not, download it from the official [Zorin OS 15][1] website. Remember, this Linux distribution is available in 4 versions, including:
* Ultimate (Paid Version)
* Core (Free Version)
* Lite (Free Version)
* Education (Free Version)
Note: In this article I will demonstrate the installation steps for Zorin OS 15 Core.
### Step 1) Create Zorin OS 15 Bootable USB Disk
Once you have downloaded Zorin OS 15, copy the ISO to a USB disk and make it bootable (a sketch with dd is shown below). Change your system settings to boot from the USB disk and restart your system. Once the system restarts, you will see the screen shown below. Click “**Install or Try Zorin OS**”
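For reference, on an existing Linux system the ISO can be written to a USB stick with `dd`. This is only a sketch: the ISO filename and the device name /dev/sdb are assumptions, so verify the device with `lsblk` first.
```
# WARNING: double-check the target device; dd overwrites it completely
sudo dd if=Zorin-OS-15-Core-64-bit.iso of=/dev/sdb bs=4M status=progress oflag=sync
```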
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-Zorin-OS15-option.jpg>
### Step 2) Choose Install Zorin OS
In the next screen, you will be shown the option either to install Zorin OS 15 or to try it. Click “**Install Zorin OS**” to continue the installation process.
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-Zorin-OS-15-on-System.jpg>
### Step 3) Choose Keyboard Layout
The next step is to choose your keyboard layout. By default, English (US) is selected; if you want a different layout, choose it and click “**Continue**”
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Choose-Keyboard-Layout-Zorinos-15.jpg>
### Step 4) Download Updates and Other Software
In the next screen, you will be asked whether you want to download updates while installing Zorin OS and whether to install other third-party applications. If your system is connected to the internet you can select both of these options, but doing so increases the installation time considerably. If you don't want to install updates and third-party software during the installation, untick both options and click “Continue”
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-Updates-third-party-softwares-Zorin-OS15-Installation.jpg>
### Step 5) Choose Zorin OS 15 Installation Method
If you are new to Linux, want a fresh installation, and don't want to customize partitions, then choose the option “**Erase disk and install Zorin OS**”
If you want to create custom partitions for Zorin OS, then choose “**Something else**”. In this tutorial I will demonstrate how to create a custom partition scheme for the Zorin OS 15 installation,
so choose the “**Something else**” option and then click on Continue.
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Choose-Something-else-option-Zorin-OS15-Installation.jpg>
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Disk-for-Zorin-OS15-Installation.jpg>
As we can see, we have around 42 GB of disk space available for Zorin OS. We will be creating the following partitions:
* /boot = 2 GB (ext4 file system)
* /home = 20 GB (ext4 file system)
* / = 10 GB (ext4 file system)
* /var = 7 GB (ext4 file system)
* Swap = 2 GB (swap area)
To start creating partitions, first click on “**New Partition Table**”; it will warn that it is going to create an empty partition table, so click on Continue.
<https://www.linuxtechi.com/wp-content/uploads/2019/06/create-empty-partition-zorin-os15-installation.jpg>
In the next screen we will see that we now have 42 GB of free space on the disk (/dev/sda), so let's create our first partition, /boot:
Select the free space, click on the + symbol, and then specify the partition size as 2048 MB, the file system type as ext4, and the mount point as /boot,
<https://www.linuxtechi.com/wp-content/uploads/2019/06/boot-partiton-during-zorin-os15-installation.jpg>
Click on OK
Now create our next partition /home of size 20 GB (20480 MB),
<https://www.linuxtechi.com/wp-content/uploads/2019/06/home-partition-zorin-os15-installation.jpg>
Similarly, create the next two partitions, / and /var, of size 10 GB and 7 GB respectively,
<https://www.linuxtechi.com/wp-content/uploads/2019/06/slash-partition-zorin-os15-installation.jpg>
<https://www.linuxtechi.com/wp-content/uploads/2019/06/var-partition-zorin-os15-installation.jpg>
Let's create our last partition as swap, of size 2 GB
<https://www.linuxtechi.com/wp-content/uploads/2019/06/swap-partition-Zorin-OS15-Installation.jpg>
Click on OK
Choose the “**Install Now**” option in the next window,
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-now-option-zorin-os15.jpg>
In the next window, choose “Continue” to write the changes to disk and to proceed with the installation
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Write-Changes-to-disk-zorin-os15.jpg>
### Step 6) Choose Your Preferred Location
In the next screen, you will be asked to choose your location and click “Continue”
<https://www.linuxtechi.com/wp-content/uploads/2019/06/TimeZone-Zorin-OS15-Installation.jpg>
### Step 7) Provide User Credentials
In the next screen, you'll be asked to enter the user credentials, including your name, computer name, username, and password. Once you are done, click “Continue” to proceed with the installation process.
<https://www.linuxtechi.com/wp-content/uploads/2019/06/User-Credentails-During-Zorin-OS15-Installation.jpg>
### Step 8) Installing Zorin OS 15
Once you click continue, Zorin OS 15 starts installing; it may take some time to complete the installation process.
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Installation-Progress-Zorin-OS15.jpg>
### Step 9) Restart your system after Successful Installation
Once the installation process is completed, it will ask you to restart your computer. Hit “**Restart Now**”
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Zorin-OS15-Installation-Completed.jpg>
### Step 10) Login to Zorin OS 15
Once the system restarts, you will be asked to log in to the system using the credentials provided earlier.
Note: Don't forget to change the boot medium in the BIOS so that the system boots from the disk.
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Login-Screen-Zorin-OS15.jpg>
### Step 11) Zorin OS 15 Welcome Screen
Once your login is successful, you can see the Zorin OS 15 welcome screen. Now you can start exploring all the incredible features of Zorin OS 15.
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Desktop-Screen-Zorin-OS15.jpg>
That's all for this tutorial; please do share your feedback and comments.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/zorin-os-15-installation-guide-screenshots/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://zorinos.com/download/
@ -0,0 +1,173 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers)
[#]: via: (https://www.linuxtechi.com/vlan-tagged-nic-ethernet-card-centos-rhel-servers/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers
======
There are some scenarios where we want to assign multiple IPs from different **VLANs** on the same Ethernet card (NIC) on Linux servers (**CentOS** / **RHEL**). This can be done by enabling a VLAN tagged interface. But for this to happen, we must first make sure that multiple VLANs are attached to the port on the switch; in other words, we must configure a trunk port by adding multiple VLANs to the switch port.
<https://www.linuxtechi.com/wp-content/uploads/2019/06/VLAN-Tagged-NIC-Linux-Server.jpg>
Let's assume we have a Linux server with two Ethernet cards (enp0s3 & enp0s8): the first NIC (**enp0s3**) will be used for data traffic and the second NIC (**enp0s8**) will be used for control / management traffic. For data traffic I will be using multiple VLANs (that is, I will assign multiple IPs from different VLANs on the data traffic Ethernet card).
I am assuming the switch port connected to my server's data NIC is configured as a trunk port, with multiple VLANs mapped to it.
Following are the VLANs which are mapped to the data traffic Ethernet card (NIC):
* VLAN ID (200), VLAN N/W = 172.168.10.0/24
* VLAN ID (300), VLAN N/W = 172.168.20.0/24
To use VLAN tagged interface on CentOS 7 / RHEL 7 / CentOS 8 /RHEL 8 systems, [kernel module][1] **8021q** must be loaded.
Use the following command to load the kernel module “8021q”
```
[root@linuxtechi ~]# lsmod | grep -i 8021q
[root@linuxtechi ~]# modprobe --first-time 8021q
[root@linuxtechi ~]# lsmod | grep -i 8021q
8021q 29022 0
garp 14384 1 8021q
mrp 18542 1 8021q
[root@linuxtechi ~]#
```
Use the modinfo command below to display information about the kernel module “8021q”:
```
[root@linuxtechi ~]# modinfo 8021q
filename: /lib/modules/3.10.0-327.el7.x86_64/kernel/net/8021q/8021q.ko
version: 1.8
license: GPL
alias: rtnl-link-vlan
rhelversion: 7.2
srcversion: 2E63BD725D9DC11C7DA6190
depends: mrp,garp
intree: Y
vermagic: 3.10.0-327.el7.x86_64 SMP mod_unload modversions
signer: CentOS Linux kernel signing key
sig_key: 79:AD:88:6A:11:3C:A0:22:35:26:33:6C:0F:82:5B:8A:94:29:6A:B3
sig_hashalgo: sha256
[root@linuxtechi ~]#
```
Now tag (or map) VLANs 200 and 300 to the NIC enp0s3 using the [ip command][2]. First, VLAN 200:
```
[root@linuxtechi ~]# ip link add link enp0s3 name enp0s3.200 type vlan id 200
```
Bring up the interface using the ip command below:
```
[root@linuxtechi ~]# ip link set dev enp0s3.200 up
```
Similarly, map VLAN 300 to the NIC enp0s3 and bring it up:
```
[root@linuxtechi ~]# ip link add link enp0s3 name enp0s3.300 type vlan id 300
[root@linuxtechi ~]# ip link set dev enp0s3.300 up
[root@linuxtechi ~]#
```
Now view the status of the tagged interfaces using the ip command:
[![tagged-interface-ip-command][3]][4]
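The screenshot above corresponds to output along the lines of what the commands below produce; the `-d` (details) flag additionally prints the VLAN id of each tagged interface:
```
[root@linuxtechi ~]# ip -d link show enp0s3.200
[root@linuxtechi ~]# ip -d link show enp0s3.300
```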
Now we can assign IP addresses to the tagged interfaces from their respective VLANs using the ip commands below,
```
[root@linuxtechi ~]# ip addr add 172.168.10.51/24 dev enp0s3.200
[root@linuxtechi ~]# ip addr add 172.168.20.51/24 dev enp0s3.300
```
Use the ip command below to see whether the IPs have been assigned to the tagged interfaces:
![ip-address-tagged-nic][5]
All of the above changes made via ip commands will not persist across a reboot; the tagged interfaces will not be available after a reboot or a network service restart.
So, to make the tagged interfaces persistent across reboots, use interface **ifcfg files**.
Edit interface (enp0s3) file “ **/etc/sysconfig/network-scripts/ifcfg-enp0s3** ” and add the following content,
Note: Replace the interface name with whatever suits your environment,
```
[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
TYPE=Ethernet
DEVICE=enp0s3
BOOTPROTO=none
ONBOOT=yes
```
Save & exit the file
Create tagged interface file for VLAN id 200 as “ **/etc/sysconfig/network-scripts/ifcfg-enp0s3.200** ” and add the following contents to it.
```
[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3.200
DEVICE=enp0s3.200
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.168.10.51
PREFIX=24
NETWORK=172.168.10.0
VLAN=yes
```
Save & exit the file
Similarly create interface file for VLAN id 300 as “/etc/sysconfig/network-scripts/ifcfg-enp0s3.300” and add the following contents to it
```
[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3.300
DEVICE=enp0s3.300
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.168.20.51
PREFIX=24
NETWORK=172.168.20.0
VLAN=yes
```
Save and exit the file, and then restart the network services using the command below,
```
[root@linuxtechi ~]# systemctl restart network
[root@linuxtechi ~]#
```
Now verify whether the tagged interfaces are configured and up and running, using the ip command:
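For example, commands along these lines should now show both tagged interfaces in the UP state with their IP addresses assigned:
```
[root@linuxtechi ~]# ip addr show enp0s3.200
[root@linuxtechi ~]# ip addr show enp0s3.300
```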
![tagged-interface-status-ip-command-linux-server][6]
That's all from this article. I hope you got an idea of how to configure and enable a VLAN tagged interface on CentOS 7 / 8 and RHEL 7 / 8 servers. Please do share your feedback and comments.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/vlan-tagged-nic-ethernet-card-centos-rhel-servers/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/how-to-manage-kernel-modules-in-linux/
[2]: https://www.linuxtechi.com/ip-command-examples-for-linux-users/
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-ip-command-1024x444.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-ip-command.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/ip-address-tagged-nic-1024x343.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-status-ip-command-linux-server-1024x656.jpg
@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A beginner's guide to Linux permissions)
[#]: via: (https://opensource.com/article/19/6/understanding-linux-permissions)
[#]: author: (Bryant Son https://opensource.com/users/brson/users/greg-p/users/tj)
A beginner's guide to Linux permissions
======
Linux security permissions designate who can do what with a file or
directory.
![Hand putting a Linux file folder into a drawer][1]
One of the main benefits of Linux systems is that they are known to be less prone to security vulnerabilities and exploits than other systems. Linux definitely gives users more flexibility and granular controls over its file systems' security permissions. This may imply that it's critical for Linux users to understand security permissions. That isn't necessarily true, but it's still wise for beginning users to understand the basics of Linux permissions.
### View Linux security permissions
To start learning about Linux permissions, imagine we have a newly created directory called **PermissionDemo**. Run **cd** inside the directory and use the **ls -l** command to view the Linux security permissions. If you want to sort them by time modified, add the **-t** option.
```
ls -lt
```
Since there are no files inside this new directory, this command returns nothing.
![No output from ls -l command][2]
To learn more about the **ls** option, access its man page by entering **man ls** on the command line.
![ls man page][3]
Now, let's create two files: **cat.txt** and **dog.txt** with empty content; this is easy to do using the **touch** command. Let's also create an empty directory called **Pets** with the **mkdir** command. We can use the **ls -l** command again to see the permissions for these new files.
![Creating new files and directory][4]
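The commands behind that screenshot are straightforward; as a quick sketch:
```
touch cat.txt dog.txt   # create two empty files
mkdir Pets              # create an empty directory
ls -l                   # list the permissions of the new entries
```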
We need to pay attention to two sections of output from this command.
### Who has permission?
The first thing to examine indicates _who_ has permission to access the file/directory. Note the section highlighted in the red box below. The first column refers to the _user_ who has access, while the second column refers to the _group_ that has access.
![Output from -ls command][5]
There are three main types of users: **user**, **group**, and **other** (essentially neither a user nor a group). There is one more: **all**, which means practically everyone.
![User types][6]
Because we are using **root** as the user, we can access any file or directory because **root** is the superuser. However, this is generally not the case, and you will probably be restricted to your username. A list of all users is stored in the **/etc/passwd** file.
![/etc/passwd file][7]
Groups are maintained in the **/etc/group** file.
![/etc/passwd file][8]
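A quick way to peek at both databases from the command line (a sketch):
```
head -3 /etc/passwd   # a few user entries: name:x:UID:GID:comment:home:shell
head -3 /etc/group    # a few group entries: name:x:GID:member-list
```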
### What permissions do they have?
The other section of the output from **ls -l** that we need to pay attention to relates to enforcing permissions. Above, we confirmed that the owner and group permissions for the files dog.txt and cat.txt and the directory Pets we created belong to the **root** account. We can use that information about who owns what to enforce permissions for the different user ownership types, as highlighted in the red box below.
![Enforcing permissions for different user ownership types][9]
We can dissect each line into five bits of information. The first part indicates whether it is a file or a directory; files are labeled with a **-** (hyphen), and directories are labeled with **d**. The next three parts refer to permissions for **user**, **group**, and **other**, respectively. The last part is a flag for the [**access-control list**][10] (ACL), a list of permissions for an object.
![Different Linux permissions][11]
Linux permission levels can be identified with letters or numbers. There are three privilege types:
  * **read**: r or 4
  * **write**: w or 2
  * **executable**: x or 1
![Privilege types][12]
The presence of each letter symbol (**r**, **w**, or **x**) means that the permission exists, while **-** indicates it does not. In the example below, the file is readable and writable by the owner, only readable by users in the group, and readable and executable by anyone else. Converted to numeric notation, this would be 645 (see the image below for an explanation of how this is calculated).
![Permission type example][13]
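If you want to try the conversion yourself, `chmod` accepts the octal form directly; a small sketch using the dog.txt file created earlier:
```
chmod 645 dog.txt   # rw- for user (6), r-- for group (4), r-x for other (5)
ls -l dog.txt       # should now show -rw-r--r-x
```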
Here are a few more examples:
![Permission type examples][14]
Test your knowledge by going through the following exercises.
![Permission type examples][15]
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/understanding-linux-permissions
作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson/users/greg-p/users/tj
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/sites/default/files/uploads/1_3.jpg (No output from ls -l command)
[3]: https://opensource.com/sites/default/files/uploads/1_man.jpg (ls man page)
[4]: https://opensource.com/sites/default/files/uploads/2_6.jpg (Creating new files and directory)
[5]: https://opensource.com/sites/default/files/uploads/3_2.jpg (Output from -ls command)
[6]: https://opensource.com/sites/default/files/uploads/4_0.jpg (User types)
[7]: https://opensource.com/sites/default/files/uploads/linuxpermissions_4_passwd.jpg (/etc/passwd file)
[8]: https://opensource.com/sites/default/files/uploads/linuxpermissions_4_group.jpg (/etc/passwd file)
[9]: https://opensource.com/sites/default/files/uploads/linuxpermissions_5.jpg (Enforcing permissions for different user ownership types)
[10]: https://en.wikipedia.org/wiki/Access-control_list
[11]: https://opensource.com/sites/default/files/uploads/linuxpermissions_6.jpg (Different Linux permissions)
[12]: https://opensource.com/sites/default/files/uploads/linuxpermissions_7.jpg (Privilege types)
[13]: https://opensource.com/sites/default/files/uploads/linuxpermissions_8.jpg (Permission type example)
[14]: https://opensource.com/sites/default/files/uploads/linuxpermissions_9.jpg (Permission type examples)
[15]: https://opensource.com/sites/default/files/uploads/linuxpermissions_10.jpg (Permission type examples)
@ -0,0 +1,231 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use MapTool to build an interactive dungeon RPG)
[#]: via: (https://opensource.com/article/19/6/how-use-maptools)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to use MapTool to build an interactive dungeon RPG
======
By using MapTool, most of a game master's work is done well before a
role-playing game begins.
![][1]
In my previous article on MapTool, I explained how to download, install, and configure your own private, [open source virtual tabletop][2] so you and your friends can play a role-playing game (RPG) together. [MapTool][3] is a complex application with lots of features, and this article demonstrates how a game master (GM) can make the most of it.
### Update JavaFX
MapTool requires JavaFX, but Java maintainers recently stopped bundling it in Java downloads. This means that, even if you have Java installed, you might not have JavaFX installed.
Some Linux distributions have a JavaFX package available, so if you try to run MapTool and get an error about JavaFX, download the latest self-contained version:
* For [Ubuntu and other Debian-based systems][4]
* For [Fedora and Red Hat-based systems][5]
### Build a campaign
The top-level file in MapTool is a campaign (.cmpgn) file. A campaign can contain all of the maps required by the game you're running. As your players progress through the campaign, everyone changes to the appropriate map and plays.
For that to go smoothly, you must do a little prep work.
First, you need the digital equivalents of miniatures: _tokens_ in MapTool terminology. Tokens are available from various sites, but the most prolific is [immortalnights.com/tokensite][6]. If you're still just trying out virtual tabletops and aren't ready to invest in digital art yet, you can get a stunning collection of starter tokens from immortalnights.com for $0.
You can add starter content to MapTool quickly and easily using its built-in resource importer. Go to the **File** menu and select **Add Resource to Library**.
In the **Add Resource to Library** dialogue box, select the RPTools tab, located at the bottom-left. This lists all the free art packs available from the RPTools server, tokens and maps alike. Click to download and import.
![Add Resource to Library dialogue][7]
You can import assets you already have on your computer by selecting files from the file system, using the same dialogue box.
MapTool resources appear in the Library panel. If your MapTool window has no Library panel, select **Library** in the **Window** menu to add one.
### Gather your maps
The next step in preparing for your game is to gather maps. Depending on what you're playing, that might mean you need to draw your maps, purchase a map pack, or just open a map bundled with a game module. If all you need is a generic dungeon, you can also download free maps from within MapTool's **Add Resource to Library**.
If you have a set of maps you intend to use often, you can import them as resources. If you are building a campaign you intend to use just once, you can quickly add any PNG or JPEG file as a **New Map** in the **Map** menu.
![Creating a new map][8]
Set the **Background** to a texture that roughly matches your map or to a neutral color.
Set the **Map** to your map graphics file.
Give your new map a unique **Name**. The map name is visible to your players, so keep it free of spoilers.
To switch between maps, click the **Select Map** button in the top-right corner of the MapTool window, and choose the map name in the drop-down menu that appears.
![Select a map][9]
Before you let your players loose on your map, you still have some important prep work to do.
### Adjust the grid size
Since most RPGs govern how far players can move during their turn, especially during combat, game maps are designed to a specific scale. The most common scale is one map square for every five feet. Most maps you download already have a grid drawn on them; if you're designing a map, you should draw on graph paper to keep your scale consistent. Whether your map graphic has a grid or not, MapTool doesn't know about it, but you can adjust the digital grid overlay so that your player tokens are constrained into squares along the grid.
MapTool doesn't show the grid by default, so go to the **Map** menu and select **Adjust grid**. This displays MapTool's grid lines, and your goal is to make MapTool's grid line up with the grid drawn onto your map graphic. If your map graphic doesn't have a grid, it may indicate its scale; a common scale is one inch per five feet, and you can usually assume 72 pixels is one inch (on a 72 DPI screen). While adjusting the grid, you can change the color of the grid lines for your own reference. Set the cell size in pixels. Click and drag to align MapTool's grid to your map's grid.
![Adjusting the grid][10]
If your map has no grid and you want the grid to remain visible after you adjust it, go to the **View** menu and select **Show Grid**.
### Add players and NPCs
To add a player character (PC), non-player character (NPC), or monster to your map, find an appropriate token in your **Library** panel, then drag and drop one onto your map. In the **New Token** dialogue box that appears, give the token a name and set it as an NPC or a PC, then click the OK button.
![Adding a player character to the map][11]
Once a token is on the map, try moving it to see how its movements are constrained to the grid you've designated. Make sure **Interaction Tools** , located in the toolbar just under the **File** menu, is selected.
![A token moving within the grid][12]
Each token added to a map has its own set of properties, including the direction it's facing, a light source, player ownership, conditions (such as incapacitated, prone, dead, and so on), and even class attributes. You can set as many or as few of these as you want, but at the very least you should right-click on each token and assign it ownership. Your players must be logged into your MapTool server for tokens to be assigned to them, but you can assign yourself NPCs and monsters in advance.
The right-click menu provides access to all important token-related functions, including setting which direction it's facing, setting a health bar and health value, a copy and paste function (enabling you and your players to move tokens from map to map), and much more.
![The token menu unlocks great, arcane power][13]
### Activate fog-of-war effects
If you're using maps exclusively to coordinate combat, you may not need a fog-of-war effect. But if you're using maps to help your players visualize a dungeon they're exploring, you probably don't want them to see the whole map before they've made significant moves, like opening locked doors or braving a decaying bridge over a pit of hot lava.
The fog-of-war effect is an invaluable tool for the GM, and it's essential to set it up early so that your players don't accidentally get a sneak peek at all the horrors your dungeon holds for them.
To activate fog-of-war on a map, go to the **Map** and select **Fog-of-War**. This blackens the entire screen for your players, so your next step is to reveal some portion of the map so that your players aren't faced with total darkness when they switch to the map. Fog-of-war is a subtractive process; it starts 100% dark, and as the players progress, you reveal new portions of the map using fog-of-war drawing tools available in the **FOG** toolbar, just under the **View** menu.
You can reveal sections of the map in rectangle blocks, ovals, polygons, diamonds, and freehand shapes. Once you've selected the shape, click and release on the map, drag to define an area to reveal, and then click again.
![Fog-of-war as experienced by a player][14]
If you're accidentally overzealous with what you reveal, you have two ways to reverse what you've done: You can manually draw new fog, or you can reset all fog. The quicker method is to reset all fog with **Ctrl+Shift+A**. The more elegant solution is to press **Shift** , then click and release, draw an area of fog, and then click again. Instead of exposing an area of the map, it restores fog.
### Add lighting effects
Fog-of-war mimics the natural phenomenon of not being able to see areas of the world other than where you are, but lighting effects mimic the visibility player characters might experience in light and dark. For games like Pathfinder and Dungeons and Dragons 5e, visibility is governed by light sources matched against light conditions.
First, activate lighting by clicking on the **Map** menu, selecting **Vision** , and then choosing either Daylight or Night. Now lighting effects are active, but none of your players have light sources, so they have no visibility.
To assign light sources to players, right-click on the appropriate token and choose **Light Source**. Definitions exist for the D20 system (candle, lantern, torch, and so on) and in generic measurements.
With lighting effects active, players can expose portions of fog-of-war as their light sources get closer to unexposed fog. That's a great effect, but it doesn't make much sense when players can illuminate the next room right through a solid wall. To prevent that, you have to help MapTool differentiate between empty space and solid objects.
#### Define solid objects
Defining walls and other solid objects through which light should not pass is easier than it sounds. MapTool's **Vision Blocking Layer** (VBL) tools are basic and built to minimize prep time. There are several basic shapes available, including a basic rectangle and an oval. Draw these shapes over all the solid walls, doors, pillars, and other obstructions, and you have instant rudimentary physics.
![Setting up obstructions][15]
Now your players can move around the map with light sources without seeing what lurks in the shadows of a nearby pillar or behind an innocent-looking door… until it's too late!
![Lighting effects][16]
### Track initiative
Eventually, your players are going to stumble on something that wants to kill them, and that means combat. In most RPG systems, combat is played in rounds, with the order of turns decided by an _initiative_ roll. During combat, each player (in order of their initiative roll, from greatest to lowest) tries to defeat their foe, ideally dealing enough damage until their foe is left with no health points (HP). It's usually the most paperwork a GM has to do during a game because it involves tracking whose turn it is, how much damage each monster has taken, what amount of damage each monster's attack deals, what special abilities each monster has, and more. Luckily, MapTool can help with that—and better yet, you can extend it with a custom macro to do even more.
MapTool's basic initiative panel helps you keep track of whose turn it is and how many rounds have transpired so far. To view the initiative panel, go to the **Window** menu and select **Initiative**.
To add characters to the initiative order, right-click a token and select **Add To Initiative**. As you add each, the token and its label appear in the initiative panel in the order that you add them. If you make a mistake or someone holds their action and changes the initiative order, click and drag the tokens in the initiative panel to reorder them.
During combat, click the **Next** button in the top-left of the initiative panel to progress to the next character. As long as you use the **Next** button, the **Round** counter increments, helping you track how many rounds the combat has lasted (which is helpful when you have spells or effects that last only for a specific number of rounds).
Tracking combat order is helpful, but it's even better to track health points. Your players should be tracking their own health, but since everyone's staring at the same screen, it doesn't hurt to track it publicly in one place. An HP property and a graphical health bar (which you can activate) are assigned to each token, so that's all the infrastructure you need to track HP in MapTool, but doing it manually takes a lot of clicking around. Since MapTool can be extended with macros, it's trivial to bring all these components together for a smooth GM experience.
The first step is to activate graphical health bars for your tokens. To do this, right-click on each token and select **Edit**. In the **Edit Token** dialog box, click on the **State** tab and deselect the radio button next to **Hide**.
![Don't hide the health bar][17]
Do this for each token whose health you want to expose.
#### Write a macro
Macros have access to all token properties, so each token's HP can be tracked by reading and writing whatever value exists in the token's HP property. The graphical health bar, however, bases its state on a percentage, so for the health bars to be meaningful, your tokens also must have some value that represents 100% of its HP.
Go to the **Edit** menu and select **Campaign Properties** to globally add properties to tokens. In the **Campaign Properties** window, select the **Token Properties** tab and then click the **Basic** category in the left column. Under ***@HP** , add ***@MaxHP** and click the **Update** button. Click the **OK** button to close the window.
![Adding a property to all tokens][18]
Now right-click a token and select **Edit**. In the **Edit Token** window, select the **State** tab and enter a value for the token's maximum HP (from the player's character sheet).
To create a new macro, reveal the **Campaign** panel in the **Window** menu.
In the **Campaign** panel, right-click and select **Add New Macro**. A button labeled **New** appears in the panel. Right-click on the **New** button and select **Edit**.
Enter this code in the macro editor window:
```
[h:status = input(
"hpAmount|0|Points",
"hpType|Damage,Healing|Damage or heal?|RADIO|SELECT=0")]
[h:abort(status)]
[if(hpType == 0),CODE: {
[h:HP = HP - hpAmount]
[h:bar.Health = HP / MaxHP]
[r:token.name] takes [r:hpAmount] damage.};
{
[h:diff = MaxHP - HP]
[h:HP = min(HP+hpAmount, MaxHP)]
[h:bar.Health = HP / MaxHP]
[r:token.name] gains [r:min(diff,hpAmount)] HP. };]
```
You can find full documentation of functions available in MapTool macros and their syntax from the [RPTools wiki][19].
In the **Details** tab, enable **Include Label** and **Apply to Selected Tokens** , and leave all other values at their default. Give your macro a better name than **New** , such as **HPTracker** , then click **Apply** and **OK**.
![Macro editing][20]
Your campaign now has a new ability!
Select a token and click your **HPTracker** button. Enter the number of points to deduct from the token, click **OK** , and watch the health bar change to reflect the token's new state.
It may seem like a simple change, but in the heat of battle, this is a GM's greatest weapon.
### During the game
There's obviously a lot you can do with MapTool, but with a little prep work, most of your work is done well before you start playing. You can even create a template campaign by creating an empty campaign with only the macros and settings you want, so all you have to do is import maps and stat out tokens.
During the game, your workflow is mostly about revealing areas from fog-of-war and managing combat. The players can manage their own tokens, and your prep work takes care of everything else.
MapTool makes digital gaming easy and fun, and most importantly, it keeps it open source and self-contained. Level-up today by learning MapTool and using it for your games.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/how-use-maptools
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice-keys_0.jpg?itok=PGEs3ZXa
[2]: https://opensource.com/article/18/5/maptool
[3]: https://github.com/RPTools/maptool
[4]: https://github.com/RPTools/maptool/releases
[5]: https://klaatu.fedorapeople.org/RPTools/maptool/
[6]: https://immortalnights.com/tokensite/
[7]: https://opensource.com/sites/default/files/uploads/maptool-resources.png (Add Resource to Library dialogue)
[8]: https://opensource.com/sites/default/files/uploads/map-properties.png (Creating a new map)
[9]: https://opensource.com/sites/default/files/uploads/map-select.jpg (Select a map)
[10]: https://opensource.com/sites/default/files/uploads/grid-adjust.jpg (Adjusting the grid)
[11]: https://opensource.com/sites/default/files/uploads/token-new.png (Adding a player character to the map)
[12]: https://opensource.com/sites/default/files/uploads/token-move.jpg (A token moving within the grid)
[13]: https://opensource.com/sites/default/files/uploads/token-menu.jpg (The token menu unlocks great, arcane power)
[14]: https://opensource.com/sites/default/files/uploads/fog-of-war.jpg (Fog-of-war as experienced by a player)
[15]: https://opensource.com/sites/default/files/uploads/vbl.jpg (Setting up obstructions)
[16]: https://opensource.com/sites/default/files/uploads/map-light.jpg (Lighting effects)
[17]: https://opensource.com/sites/default/files/uploads/token-edit.jpg (Don't hide the health bar)
[18]: https://opensource.com/sites/default/files/uploads/campaign-properties.jpg (Adding a property to all tokens)
[19]: https://lmwcs.com/rptools/wiki/Main_Page
[20]: https://opensource.com/sites/default/files/uploads/macro-detail.jpg (Macro editing)
@ -0,0 +1,342 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with OpenSSL: Cryptography basics)
[#]: via: (https://opensource.com/article/19/6/cryptography-basics-openssl-part-1)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen)
Getting started with OpenSSL: Cryptography basics
======
Need a primer on cryptography basics, especially regarding OpenSSL? Read
on.
![A lock on the side of a building][1]
This article is the first of two on cryptography basics using [OpenSSL][2], a production-grade library and toolkit popular on Linux and other systems. (To install the most recent version of OpenSSL, see [here][3].) OpenSSL utilities are available at the command line, and programs can call functions from the OpenSSL libraries. The sample program for this article is in C, the source language for the OpenSSL libraries.
The two articles in this series cover—collectively—cryptographic hashes, digital signatures, encryption and decryption, and digital certificates. You can find the code and command-line examples in a ZIP file from [my website][4].
Let's start with a review of the SSL in the OpenSSL name.
### A quick history
[Secure Socket Layer (SSL)][5] is a cryptographic protocol that [Netscape][6] released in 1995. This protocol layer can sit atop HTTP, thereby providing the _S_ for _secure_ in HTTPS. The SSL protocol provides various security services, including two that are central in HTTPS:
* Peer authentication (aka mutual challenge): Each side of a connection authenticates the identity of the other side. If Alice and Bob are to exchange messages over SSL, then each first authenticates the identity of the other.
* Confidentiality: A sender encrypts messages before sending these over a channel. The receiver then decrypts each received message. This process safeguards network conversations. Even if eavesdropper Eve intercepts an encrypted message from Alice to Bob (a _man-in-the-middle_ attack), Eve finds it computationally infeasible to decrypt this message.
These two key SSL services, in turn, are tied to others that get less attention. For example, SSL supports message integrity, which assures that a received message is the same as the one sent. This feature is implemented with hash functions, which likewise come with the OpenSSL toolkit.
SSL is versioned (e.g., SSLv2 and SSLv3), and in 1999 Transport Layer Security (TLS) emerged as a similar protocol based upon SSLv3. TLSv1 and SSLv3 are alike, but not enough so to work together. Nonetheless, it is common to refer to SSL/TLS as if they are one and the same protocol. For example, OpenSSL functions often have SSL in the name even when TLS rather than SSL is in play. Furthermore, calling OpenSSL command-line utilities begins with the term **openssl**.
The documentation for OpenSSL is spotty beyond the **man** pages, which become unwieldy given how big the OpenSSL toolkit is. Command-line and code examples are one way to bring the main topics into focus together. Let's start with a familiar example—accessing a web site with HTTPS—and use this example to pick apart the cryptographic pieces of interest.
### An HTTPS client
The **client** program shown here connects over HTTPS to Google:
```
/* compilation: gcc -o client client.c -lssl -lcrypto */
#include <stdio.h>
#include <stdlib.h>
#include <string.h> /* memset */
#include <openssl/bio.h> /* BasicInput/Output streams */
#include <openssl/err.h> /* errors */
#include <openssl/ssl.h> /* core library */
#define BuffSize 1024
void report_and_exit(const char* msg) {
  perror(msg);
  ERR_print_errors_fp(stderr);
  exit(-1);
}
void init_ssl() {
  SSL_load_error_strings();
  SSL_library_init();
}
void cleanup(SSL_CTX* ctx, BIO* bio) {
  SSL_CTX_free(ctx);
  BIO_free_all(bio);
}
void secure_connect(const char* hostname) {
  char name[BuffSize];
  char request[BuffSize];
  char response[BuffSize];
  const SSL_METHOD* method = TLSv1_2_client_method();
  if (NULL == method) report_and_exit("TLSv1_2_client_method...");
  SSL_CTX* ctx = SSL_CTX_new(method);
  if (NULL == ctx) report_and_exit("SSL_CTX_new...");
  BIO* bio = BIO_new_ssl_connect(ctx);
  if (NULL == bio) report_and_exit("BIO_new_ssl_connect...");
  SSL* ssl = NULL;
  /* link bio channel, SSL session, and server endpoint */
  sprintf(name, "%s:%s", hostname, "https");
  BIO_get_ssl(bio, &ssl); /* session */
  SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* robustness */
  BIO_set_conn_hostname(bio, name); /* prepare to connect */
  /* try to connect */
  if (BIO_do_connect(bio) <= 0) {
    cleanup(ctx, bio);
    report_and_exit("BIO_do_connect...");
  }
  /* verify truststore, check cert */
  if (!SSL_CTX_load_verify_locations(ctx,
                                      "/etc/ssl/certs/ca-certificates.crt", /* truststore */
                                      "/etc/ssl/certs/")) /* more truststore */
    report_and_exit("SSL_CTX_load_verify_locations...");
  long verify_flag = SSL_get_verify_result(ssl);
  if (verify_flag != X509_V_OK)
    fprintf(stderr,
            "##### Certificate verification error (%i) but continuing...\n",
            (int) verify_flag);
  /* now fetch the homepage as sample data */
  sprintf(request,
          "GET / HTTP/1.1\x0D\x0AHost: %s\x0D\x0A\x43onnection: Close\x0D\x0A\x0D\x0A",
          hostname);
  BIO_puts(bio, request);
  /* read HTTP response from server and print to stdout */
  while (1) {
    memset(response, '\0', sizeof(response));
    int n = BIO_read(bio, response, BuffSize);
    if (n <= 0) break; /* 0 is end-of-stream, < 0 is an error */
    puts(response);
  }
  cleanup(ctx, bio);
}
int main() {
  init_ssl();
  const char* hostname = "www.google.com:443";
  [fprintf][10](stderr, "Trying an HTTPS connection to %s...\n", hostname);
  secure_connect(hostname);
return 0;
}
```
This program can be compiled and executed from the command line (note the lowercase L in **-lssl** and **-lcrypto**):
**gcc -o client client.c -lssl -lcrypto**
This program tries to open a secure connection to the web site [www.google.com][13]. As part of the TLS handshake with the Google web server, the **client** program receives one or more digital certificates, which the program tries (but, on my system, fails) to verify. Nonetheless, the **client** program goes on to fetch the Google homepage through the secure channel. This program depends on the security artifacts mentioned earlier, although only a digital certificate stands out in the code. The other artifacts remain behind the scenes and are clarified later in detail.
Generally, a client program in C or C++ that opened an HTTP (non-secure) channel would use constructs such as a _file descriptor_ for a _network socket_, which is an endpoint in a connection between two processes (e.g., the client program and the Google web server). A file descriptor, in turn, is a non-negative integer value that identifies, within a program, any file-like construct that the program opens. Such a program also would use a structure to specify details about the web server's address.
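For contrast, here is a minimal sketch of those low-level constructs: a plain, non-secure TCP connection opened through a socket file descriptor. This is my illustration, not part of the article's program, and the helper name **plain_connect** is made up for the occasion:
```
/* plain_connect.c: a sketch of the constructs that the OpenSSL wrapper hides */
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int plain_connect(const char* host) { /* hypothetical helper, for illustration */
  struct addrinfo hints, *res;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;     /* IPv4 or IPv6 */
  hints.ai_socktype = SOCK_STREAM; /* TCP */
  /* the address-specification structure mentioned above */
  if (getaddrinfo(host, "http", &hints, &res) != 0) return -1;
  /* the file descriptor: a non-negative integer identifying the socket */
  int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
  if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
    close(fd); /* connection failed: release the descriptor */
    fd = -1;
  }
  freeaddrinfo(res);
  return fd;
}

int main() {
  int fd = plain_connect("www.google.com"); /* same host as the article's example */
  if (fd >= 0) close(fd);
  return fd >= 0 ? 0 : 1;
}
```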
None of these relatively low-level constructs occurs in the client program, as the OpenSSL library wraps the socket infrastructure and address specification in high-level security constructs. The result is a straightforward API. Here's a first look at the security details in the example **client** program.
* The program begins by loading the relevant OpenSSL libraries, with my function **init_ssl** making two calls into OpenSSL:
**SSL_load_error_strings(); SSL_library_init();**
* The next initialization step tries to get a security _context_, a framework of information required to establish and maintain a secure channel to the web server. **TLS 1.2** is used in the example, as shown in this call to an OpenSSL library function:
**const SSL_METHOD* method = TLSv1_2_client_method(); /* TLS 1.2 */**
If the call succeeds, then the **method** pointer is passed to the library function that creates the context of type **SSL_CTX**:
**SSL_CTX* ctx = SSL_CTX_new(method);**
The **client** program checks for errors on each of these critical library calls, and then the program terminates if either call fails.
* Two other OpenSSL artifacts now come into play: a security session of type **SSL**, which manages the secure connection from start to finish; and a secured stream of type **BIO** (Basic Input/Output), which is used to communicate with the web server. The **BIO** stream is generated with this call:
**BIO* bio = BIO_new_ssl_connect(ctx);**
Note that the all-important context is the argument. The **BIO** type is the OpenSSL wrapper for the **FILE** type in C. This wrapper secures the input and output streams between the **client** program and Google's web server.
* With the **SSL_CTX** and **BIO** in hand, the program then links these together in an **SSL** session. Three library calls do the work:
**BIO_get_ssl(bio, &ssl); /* get a TLS session */**
**SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* for robustness */**
**BIO_set_conn_hostname(bio, name); /* prepare to connect to Google */**
The secure connection itself is established through this call:
**BIO_do_connect(bio);**
If this last call does not succeed, the **client** program terminates; otherwise, the connection is ready to support a confidential conversation between the **client** program and the Google web server.
During the handshake with the web server, the **client** program receives one or more digital certificates that authenticate the server's identity. However, the **client** program does not send a certificate of its own, which means that the authentication is one-way. (Web servers typically are configured _not_ to expect a client certificate.) Despite the failed verification of the web server's certificate, the **client** program continues by fetching the Google homepage through the secure channel to the web server.
Why does the attempt to verify a Google certificate fail? A typical OpenSSL installation has the directory **/etc/ssl/certs**, which includes the **ca-certificates.crt** file. The directory and the file together contain digital certificates that OpenSSL trusts out of the box and accordingly constitute a _truststore_. The truststore can be updated as needed, in particular, to include newly trusted certificates and to remove ones no longer trusted.
The client program receives three certificates from the Google web server, but the OpenSSL truststore on my machine does not contain exact matches. As presently written, the **client** program does not pursue the matter by, for example, verifying the digital signature on a Google certificate (a signature that vouches for the certificate). If that signature were trusted, then the certificate containing it should be trusted as well. Nonetheless, the client program goes on to fetch and then to print Google's homepage. The next section gets into more detail.
### The hidden security pieces in the client program
Let's start with the visible security artifact in the client example, the digital certificate, and consider how other security artifacts relate to it. The dominant layout standard for a digital certificate is X509, and a production-grade certificate is issued by a certificate authority (CA) such as [Verisign][14].
A digital certificate contains various pieces of information (e.g., activation and expiration dates, and a domain name for the owner), including the issuer's identity and _digital signature_, which is an encrypted _cryptographic hash_ value. A certificate also has an unencrypted hash value that serves as its identifying _fingerprint_.
A hash value results from mapping an arbitrary number of bits to a fixed-length digest. What the bits represent (an accounting report, a novel, or maybe a digital movie) is irrelevant. For example, the Message Digest version 5 (MD5) hash algorithm maps input bits of whatever length to a 128-bit hash value, whereas the SHA1 (Secure Hash Algorithm version 1) algorithm maps input bits to a 160-bit value. Different input bits result in different—indeed, statistically unique—hash values. The next article goes into further detail and focuses on what makes a hash function _cryptographic_.
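As a quick illustration of the fixed-length property (my sketch, not code from the article; it assumes the same OpenSSL development setup used to build the **client** program), the one-shot **EVP_Digest** library call maps a buffer of any length to a digest whose size depends only on the chosen algorithm; with **EVP_sha256()** the digest is always 32 bytes:
```
/* digest_demo.c: compilation: gcc -o digest_demo digest_demo.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main() {
  const char* inputs[] = { "short", "a considerably longer input string" };
  for (int i = 0; i < 2; i++) {
    unsigned char digest[EVP_MAX_MD_SIZE]; /* big enough for any supported hash */
    unsigned int len = 0;
    /* one-shot hash of an arbitrary-length buffer */
    if (!EVP_Digest(inputs[i], strlen(inputs[i]), digest, &len, EVP_sha256(), NULL))
      return 1;
    printf("%-36s -> %u bytes: ", inputs[i], len); /* len is 32 for SHA256 */
    for (unsigned int j = 0; j < len; j++) printf("%02x", digest[j]);
    printf("\n");
  }
  return 0;
}
```
Whether the input is five characters or five gigabytes, the printed digest has the same length; only its value changes.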
Digital certificates differ in type (e.g., _root_, _intermediate_, and _end-entity_ certificates) and form a hierarchy that reflects these types. As the name suggests, a _root_ certificate sits atop the hierarchy, and the certificates under it inherit whatever trust the root certificate has. The OpenSSL libraries and most modern programming languages have an X509 type together with functions that deal with such certificates. The certificate from Google has an X509 format, and the **client** program checks whether verification of this certificate yields **X509_V_OK**.
X509 certificates are based upon public-key infrastructure (PKI), which includes algorithms (RSA is the dominant one) for generating _key pairs_: a public key and its paired private key. A public key is an identity: [Amazon's][15] public key identifies it, and my public key identifies me. A private key is meant to be kept secret by its owner.
The keys in a pair have standard uses. A public key can be used to encrypt a message, and the private key from the same pair can then be used to decrypt the message. A private key also can be used to sign a document or other electronic artifact (e.g., a program or an email), and the public key from the pair can then be used to verify the signature. The following two examples fill in some details.
In the first example, Alice distributes her public key to the world, including Bob. Bob then encrypts a message with Alice's public key, sending the encrypted message to Alice. The message encrypted with Alice's public key is decrypted with her private key, which (by assumption) she alone has, like so:
```
             +------------------+ encrypted msg  +-------------------+
Bob's msg--->|Alice's public key|--------------->|Alice's private key|---> Bob's msg
             +------------------+                +-------------------+
```
Decrypting the message without Alice's private key is possible in principle, but infeasible in practice, given a sound cryptographic key-pair system such as RSA.
Now, for the second example, consider signing a document to certify its authenticity. The signature algorithm uses a private key from a pair to process a cryptographic hash of the document to be signed:
```
                    +-------------------+
Hash of document--->|Alice's private key|--->Alice's digital signature of the document
                    +-------------------+
```
Assume that Alice digitally signs a contract sent to Bob. Bob then can use Alice's public key from the key pair to verify the signature:
```
                                             +------------------+
Alice's digital signature of the document--->|Alice's public key|--->verified or not
                                             +------------------+
```
It is infeasible to forge Alice's signature without Alice's private key: hence, it is in Alice's interest to keep her private key secret.
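To make the two diagrams concrete, here is a compact sketch of the sign-then-verify flow. It is my code rather than the article's, it assumes OpenSSL 1.1 or later (for **EVP_MD_CTX_new** and the **EVP_PKEY** key-generation calls), and a freshly generated RSA pair stands in for Alice's:
```
/* sign_verify.c: compilation: gcc -o sign_verify sign_verify.c -lcrypto */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/rsa.h>

int main() {
  const unsigned char doc[] = "a contract from Alice to Bob";

  /* generate a 2048-bit RSA key pair to play the role of Alice's pair */
  EVP_PKEY* key = NULL;
  EVP_PKEY_CTX* kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_RSA, NULL);
  if (!kctx || EVP_PKEY_keygen_init(kctx) <= 0 ||
      EVP_PKEY_CTX_set_rsa_keygen_bits(kctx, 2048) <= 0 ||
      EVP_PKEY_keygen(kctx, &key) <= 0) return 1;

  /* sign: hash the document with SHA256, then process the hash with the private key */
  EVP_MD_CTX* sctx = EVP_MD_CTX_new();
  size_t siglen = 0;
  if (EVP_DigestSignInit(sctx, NULL, EVP_sha256(), NULL, key) <= 0 ||
      EVP_DigestSignUpdate(sctx, doc, sizeof(doc)) <= 0 ||
      EVP_DigestSignFinal(sctx, NULL, &siglen) <= 0) return 1; /* query length */
  unsigned char* sig = malloc(siglen);
  if (EVP_DigestSignFinal(sctx, sig, &siglen) <= 0) return 1;

  /* verify: the public half of the same pair checks the signature */
  EVP_MD_CTX* vctx = EVP_MD_CTX_new();
  if (EVP_DigestVerifyInit(vctx, NULL, EVP_sha256(), NULL, key) <= 0 ||
      EVP_DigestVerifyUpdate(vctx, doc, sizeof(doc)) <= 0) return 1;
  puts(EVP_DigestVerifyFinal(vctx, sig, siglen) == 1 ? "verified" : "not verified");

  EVP_MD_CTX_free(vctx); EVP_MD_CTX_free(sctx);
  free(sig); EVP_PKEY_free(key); EVP_PKEY_CTX_free(kctx);
  return 0;
}
```
If any byte of the document changes between signing and verifying, **EVP_DigestVerifyFinal** reports failure, which is exactly the tamper-evidence that a digital signature provides.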
None of these security pieces, except for digital certificates, is explicit in the **client** program. The next article fills in the details with examples that use the OpenSSL utilities and library functions.
### OpenSSL from the command line
In the meantime, let's take a look at OpenSSL command-line utilities: in particular, a utility to inspect the certificates from a web server during the TLS handshake. Invoking the OpenSSL utilities begins with the **openssl** command and then adds a combination of arguments and flags to specify the desired operation.
Consider this command:
**openssl list-cipher-algorithms**
The output is a list of the algorithms that can serve as components of a _cipher suite_. Here's the start of the list, with comments to clarify the acronyms:
```
AES-128-CBC ## Advanced Encryption Standard, Cipher Block Chaining
AES-128-CBC-HMAC-SHA1 ## Hash-based Message Authentication Code with SHA1 hashes
AES-128-CBC-HMAC-SHA256 ## ditto, but SHA256 rather than SHA1
...
```
The next command, using the argument **s_client**, opens a secure connection to **[www.google.com][13]** and prints screens full of information about this connection:
**openssl s_client -connect [www.google.com:443][16] -showcerts**
The port number 443 is the standard one used by web servers for receiving HTTPS rather than HTTP connections. (For HTTP, the standard port is 80.) The network address **[www.google.com:443][16]** also occurs in the **client** program's code. If the attempted connection succeeds, the three digital certificates from Google are displayed together with information about the secure session, the cipher suite in play, and related items. For example, here is a slice of output from near the start, which announces that a _certificate chain_ is forthcoming. The encoding for the certificates is base64:
```
Certificate chain
 0 s:/C=US/ST=California/L=Mountain View/O=Google LLC/CN=www.google.com
 i:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
-----BEGIN CERTIFICATE-----
MIIEijCCA3KgAwIBAgIQdCea9tmy/T6rK/dDD1isujANBgkqhkiG9w0BAQsFADBU
MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMSUw
...
```
A major web site such as Google usually sends multiple certificates for authentication.
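The base64 blocks are not meant for human eyes, but the **openssl x509** subcommand can decode one into readable fields. Here is a sketch of a pipeline that feeds the first certificate printed by **s_client** into the decoder:
```
openssl s_client -connect www.google.com:443 -showcerts </dev/null 2>/dev/null |
openssl x509 -noout -text
```
The decoded text includes the issuer, the validity dates, the subject, and the server's public key, the pieces discussed in the previous section.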
The output ends with summary information about the TLS session, including specifics on the cipher suite:
```
SSL-Session:
    Protocol : TLSv1.2
    Cipher : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: A2BBF0E4991E6BBBC318774EEE37CFCB23095CC7640FFC752448D07C7F438573
...
```
The protocol **TLS 1.2** is used in the **client** program, and the **Session-ID** uniquely identifies the connection between the **openssl** utility and the Google web server. The **Cipher** entry can be parsed as follows (a command-line cross-check of the parse appears after this list):
* **ECDHE** (Elliptic Curve Diffie-Hellman Ephemeral) is an effective and efficient algorithm for managing the TLS handshake. In particular, ECDHE solves the _key-distribution problem_ by ensuring that both parties in a connection (e.g., the client program and the Google web server) use the same encryption/decryption key, which is known as the _session key_. The follow-up article digs into the details.
* **RSA** (Rivest-Shamir-Adleman) is the dominant public-key cryptosystem, named after the three academics who first described the system in the late 1970s. The key pairs in play are generated with the RSA algorithm.
* **AES128** (Advanced Encryption Standard) is a _block cipher_ that encrypts and decrypts blocks of bits. (The alternative is a _stream cipher_, which encrypts and decrypts bits one at a time.) The cipher is _symmetric_ in that the same key is used to encrypt and to decrypt, which raises the key-distribution problem in the first place. AES supports key sizes of 128 (used here), 192, and 256 bits: the larger the key, the better the protection.
Key sizes for symmetric cryptosystems such as AES are, in general, smaller than those for asymmetric (key-pair based) systems such as RSA. For example, a 1024-bit RSA key is relatively small, whereas a 256-bit key is currently the largest for AES.
* **GCM** (Galois Counter Mode) handles the repeated application of a cipher (in this case, AES128) during a secured conversation. AES128 blocks are only 128 bits in size, and a secure conversation is likely to consist of multiple AES128 blocks from one side to the other. GCM is efficient and commonly paired with AES128.
* **SHA256** (Secure Hash Algorithm 256 bits) is the cryptographic hash algorithm in play. The hash values produced are 256 bits in size, although even larger values are possible with SHA.
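As promised above the list, the **openssl ciphers** utility can cross-check this breakdown by describing a suite by name; a sketch:
```
openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256'
```
The verbose output lists, for each matching suite, the protocol version together with its key-exchange (**Kx**), authentication (**Au**), encryption (**Enc**), and message-authentication (**Mac**) components, mirroring the parse above.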
Cipher suites are in continual development. Not so long ago, for example, Google used the RC4 stream cipher (Ron's Cipher version 4, after Ron Rivest from RSA). RC4 now has known vulnerabilities, which presumably accounts, at least in part, for Google's switch to AES128.
### Wrapping up
This first look at OpenSSL, through a secure C web client and various command-line examples, has brought to the fore a handful of topics in need of more clarification. [The next article gets into the details][17], starting with cryptographic hashes and ending with a fuller discussion of how digital certificates address the key distribution challenge.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/cryptography-basics-openssl-part-1
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://www.openssl.org/
[3]: https://www.howtoforge.com/tutorial/how-to-install-openssl-from-source-on-linux/
[4]: http://condor.depaul.edu/mkalin
[5]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[6]: https://en.wikipedia.org/wiki/Netscape
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/sprintf.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/memset.html
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
[13]: http://www.google.com
[14]: https://www.verisign.com
[15]: https://www.amazon.com
[16]: http://www.google.com:443
[17]: https://opensource.com/article/19/6/cryptography-basics-openssl-part-2
@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source Slack Alternative Mattermost Gets $50M Funding)
[#]: via: (https://itsfoss.com/mattermost-funding/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Open Source Slack Alternative Mattermost Gets $50M Funding
======
[Mattermost][1], which presents itself as an open source alternative to [Slack][2], has raised $50M in Series B funding. This is definitely something to get excited about.
[Slack][3] is cloud-based team collaboration software that is mainly used for internal team communication. Enterprises, startups, and even open source projects worldwide use it to interact with colleagues and project members. Slack is free to use with limited features, while the paid enterprise version offers premium features.
As of June 2019, [Slack is valued at $20 billion][4]. You can guess the kind of impact it has made in the tech industry, and certainly more products are trying to compete with Slack.
### $50 million for an open source project
![][5]
Personally, I was not aware of Mattermost. But when [VentureBeat][6] reported the story, it made me curious. The funding round was led by [Y Combinator's][7] Continuity along with a new investor, Battery Ventures, and joined by the existing investors Redpoint and S28 Capital.
With the [announcement][8], they also mentioned:
> With today's announcement, Mattermost becomes YC's largest ever Series B investment, and more importantly, their largest open source investment to date.
To give you some specifics, here's what VentureBeat mentioned:
> The capital infusion follows a $20 million series A in February and a $3.5 million seed round in February 2017 and brings the Palo Alto, California-based company's total raised to roughly $70 million.
If you are curious about their plans, you should go through their [official announcement post][8].
Even though it all sounds good, what is Mattermost? Maybe you didn't know about it until now. So, let us take a brief look at it:
### A quick look at Mattermost
![Mattermost][9]
As mentioned, it is an open source Slack alternative.
At first glance, it almost resembles the look and feel of Slack. Well, that's the point here: you get an open source solution that you're comfortable using.
It even integrates with some popular DevOps tools like Git, bots, and CI/CD. In addition to this functionality, it focuses on security and privacy as well.
Also, similar to Slack, it supports integration with multiple apps and services.
Sounds promising? I think so.
#### Pricing: Enterprise Edition vs Team Edition
If you want them (Mattermost) to host it (or want priority support), you should opt for the Enterprise edition. However, if you want to host it without spending a penny, you can download the [Team edition][11] and install it on your Linux-based cloud/VPS server.
Of course, we are not here to review it in-depth. However, I do want to mention that the enterprise edition is quite affordable.
![][12]
### Wrapping Up
Mattermost is definitely impressive. And, with a whopping $50M in funding, it may become the next big thing in the open source community for users on the lookout for a secure and open source messaging platform with efficient team collaboration support.
What do you think about this news? Is it something exciting for you? Were you already aware of Mattermost as a Slack alternative?
Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/mattermost-funding/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://mattermost.com/
[2]: https://itsfoss.com/slack-use-linux/
[3]: https://slack.com
[4]: https://www.ft.com/content/98747b36-9368-11e9-aea1-2b1d33ac3271
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/mattermost-wallpaper.png?resize=800%2C450&ssl=1
[6]: https://venturebeat.com/2019/06/19/mattermost-raises-50-million-to-advance-its-open-source-slack-alternative/
[7]: https://www.ycombinator.com/
[8]: https://mattermost.com/blog/yc-leads-50m-series-b-in-mattermost-as-open-source-slack-alternative/
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/mattermost-screenshot.jpg?fit=800%2C497&ssl=1
[10]: https://itsfoss.com/zettlr-markdown-editor/
[11]: https://mattermost.com/download/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/mattermost-enterprise-plan.jpg?fit=800%2C325&ssl=1