mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-01-28 23:20:10 +08:00
commit c70fb52ef3
[#]: collector: (lujun9972)
[#]: translator: (luuming)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11004-1.html)
[#]: subject: (5 Good Open Source Speech Recognition/Speech-to-Text Systems)
[#]: via: (https://fosspost.org/lists/open-source-speech-recognition-speech-to-text)
[#]: author: (Simon James https://fosspost.org/author/simonjames)

5 款不错的开源语音识别/语音文字转换系统
======

![](https://i0.wp.com/fosspost.org/wp-content/uploads/2019/02/open-source-speech-recognition-speech-to-text.png?resize=1237%2C527&ssl=1)

<ruby>语音文字转换<rt>speech-to-text</rt></ruby>(STT)系统就像它名字所蕴含的意思那样,是一种将说出的单词转换为文本文件以供后续使用的技术。

语音文字转换技术非常有用。它可以用在许多应用中,例如自动转录、用自己的声音撰写书籍或文稿、用生成的文本文件和其他工具做复杂的分析等。

在过去,语音文字转换技术以专有软件和库为主导,要么没有开源替代品,要么有着严格的限制、没有社区。这一点正在发生改变,当今有许多开源语音文字转换工具和库可以让你随时使用。

这里我列出了 5 个。

#### Project DeepSpeech

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 15 open source speech recognition][1]

该项目由 Firefox 浏览器的开发组织 Mozilla 团队开发。它是 100% 的自由开源软件,正如其名字所暗示的那样,它使用 TensorFlow 机器学习框架实现功能。

换句话说,你可以用它训练自己的模型来获得更好的效果,甚至可以用它来转换其它的语言。你也可以轻松地将它集成到自己的 TensorFlow 机器学习项目中。可惜的是,项目当前默认仅支持英语。

它也支持许多编程语言,例如 Python(3.6),可以让你在数秒之内开始使用:

```
pip3 install deepspeech
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
```
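
上面的命令假设 `models/` 目录下已经有了训练好的模型文件。下面是一个获取 Mozilla 官方预训练英语模型的示意脚本,其中的版本号和下载地址只是假设的示例,实际请以项目发布页为准:

```
# 从 DeepSpeech 的 GitHub 发布页下载预训练英语模型(版本号仅为示例)
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz
# 解压后即可得到上面命令用到的 output_graph.pbmm、alphabet.txt、lm.binary 和 trie 文件
tar -xzf deepspeech-0.5.1-models.tar.gz
mv deepspeech-0.5.1-models models
```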

你也可以通过 `npm` 安装它:

```
npm install deepspeech
```

- [项目主页][2]

#### Kaldi

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 17 open source speech recognition][3]

Kaldi 是一个用 C++ 编写的开源语音识别软件,在 Apache 许可证下发布,可以运行在 Windows、macOS 和 Linux 上。它的开发始于 2009 年。

Kaldi 超过其他语音识别软件的主要特点是可扩展和模块化。社区提供了大量可以用来完成你的任务的第三方模块。Kaldi 也支持深度神经网络,并且在它的网站上提供了[出色的文档][4]。

虽然代码主要由 C++ 完成,但它通过 Bash 和 Python 脚本进行了封装。因此,如果你仅仅想使用基本的语音文字转换功能,就会发现通过 Python 或 Bash 能够轻易地实现。

- [项目主页][5]

#### Julius

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 19 open source speech recognition][6]

它可能是有史以来最古老的语音识别软件之一。它的开发始于 1991 年的京都大学,之后在 2005 年将所有权转移到了一个独立的项目组。

Julius 的主要特点包括:能够执行实时的 STT、较低的内存占用(2 万单词少于 64 MB)、能够输出<ruby>最优词<rt>N-best word</rt></ruby>和<ruby>词图<rt>Word-graph</rt></ruby>、能够作为服务器单元运行等等。这款软件主要为学术和研究所设计,由 C 语言写成,可以运行在 Linux、Windows、macOS 甚至 Android(智能手机)上。

它当前仅支持英语和日语。它应该能够从 Linux 发行版的仓库中轻松安装,只要在软件包管理器中搜索 julius 即可。最新的版本[发布][7]于本文发表前大约一个半月。

- [项目主页][8]

#### Wav2Letter++

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 21 open source speech recognition][9]

如果你在寻找一个更加时髦的工具,那么这款一定适合。Wav2Letter++ 是一款由 Facebook 的 AI 研究团队于本文发表前 2 个月发布的开源语音识别软件,代码在 BSD 许可证下发布。

Facebook 描述它的库是“最快、<ruby>最先进<rt>state-of-the-art</rt></ruby>的语音识别系统”。构建它时的理念使其默认就针对性能进行了优化。Facebook 最新的机器学习库 [FlashLight][11] 也被用作 Wav2Letter++ 的底层核心。

Wav2Letter++ 需要你先为所用的语言建立一个模型来训练算法。它没有提供任何一种语言(包括英语)的预训练模型,它仅仅是一个机器学习驱动的语音文字转换工具。它用 C++ 写成,因此被命名为 Wav2Letter++。

- [项目主页][12]

#### DeepSpeech2

![5 Good Open Source Speech Recognition/Speech-to-Text Systems 23 open source speech recognition][13]

中国软件巨头百度的研究人员也在开发他们自己的语音文字转换引擎,叫做“DeepSpeech2”。它是一个端到端的开源引擎,使用“PaddlePaddle”深度学习框架进行英语或汉语的文字转换,代码在 BSD 许可证下发布。

该引擎可以在你想用的任何模型和任何语言上训练。模型并未随代码一同发布,你要像其他软件那样自己建立模型。DeepSpeech2 的源代码由 Python 写成,如果你用过 Python,上手会非常容易。

- [项目主页][14]

### 总结

语音识别领域仍然主要由专有软件巨头所占据,比如 Google 和 IBM(它们为此提供了闭源的商业服务),但是开源的同类软件很有前途。这 5 款开源语音识别引擎应当能够帮助你构建应用,随着时间推移,它们会不断地发展。在几年之后,我们希望开源成为这些技术中的常态,就像其他行业那样。

如果你对这份清单有其他的建议或评论,我们很乐意在下面的评论区看到。

--------------------------------------------------------------------------------

via: https://fosspost.org/lists/open-source-speech-recognition-speech-to-text

作者:[Simon James][a]
选题:[lujun9972][b]
译者:[LuuMing](https://github.com/LuuMing)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11013-1.html)
[#]: subject: (Blockchain 2.0 – Ongoing Projects (The State Of Smart Contracts Now) [Part 6])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)

![The State Of Smart Contracts Now][1]

继续我们[前面的关于智能合约的文章][2],这篇文章旨在讨论智能合约的发展形势,重点介绍目前正在该领域进行开发的一些项目和公司。如本系列前一篇文章中讨论的,智能合约是在区块链网络上存在并执行的程序。我们探讨了智能合约的工作原理以及它们优于传统数字平台的原因。这里描述的公司分布于各种各样的行业中,但是大多涉及身份管理系统、金融服务、众筹系统等,因为这些是被认为最适合切换到基于区块链的数据库系统的领域。

### 开放平台

诸如 [Counterparty][8] 和 Solidity(以太坊)等平台是完全公用的构建模块,开发者可以以之创建自己的智能合约。大量的开发人员参与此类项目,使这些项目成为开发智能合约、设计自己的加密货币令牌系统,以及创建区块链运行协议的事实标准。许多值得称赞的项目都来源于它们。摩根大通派生自以太坊的 [Quorum][9] 就是一个例子,瑞波是另一个例子。

### 管理金融交易

通过互联网转账加密货币被吹捧为在未来几年会成为常态。与此相关的不足之处是:

* 身份和钱包地址是匿名的。如果接收方不履行交易,则付款人没有任何第一追索权。
* 发生错误交易时(如果有),也无法追踪。

### 金融服务

小额融资和小额保险项目的发展将改善世界上大多数贫穷或没有银行账户的人的银行金融服务。据估计,社会中较贫穷的“无银行账户”人群可以为银行和机构增加 3800 亿美元的收入 [^5]。这一金额要远远超过银行切换到区块链分布式账本技术(DLT)后预期可以节省的运营费用。

位于美国中西部的 BankQu Inc. 的口号是“通过身份获得尊严”。他们的平台允许个人建立自己的数字身份记录,其所有交易将在区块链上实时审查和处理。在底层代码上记录并为其用户构建唯一的在线标识,从而实现超快速的交易和结算。BankQu 案例研究探讨了他们如何以这种方式帮助个人和公司,可以在[这里][3]看到。

[Stratumn][12] 正在帮助保险公司通过自动化以前由人工微观管理的任务来提供更好的保险服务。通过自动化、端到端的可追溯性和高效的数据隐私方法,他们彻底改变了保险索赔的结算方式。改善客户体验以及显著降低成本,为客户和相关公司带来了双赢局面。

法国保险公司 [AXA][14] 目前正在试行类似的项目。其产品 [fizzy][13] 允许用户以少量费用订阅其服务并输入他们的航班详细信息。如果航班延误或遇到其他问题,该程序会自动搜索在线数据库,检查保险条款并将保险金额记入用户的帐户。这样,用户或客户就无需在手动检查条款后再提出索赔;而且就长期而言,一旦这样的系统成为主流,还能增强航空公司的责任心。

### 跟踪所有权

理论上可以利用 DLT 中带时间戳的数据块来跟踪媒体从创建到最终用户消费的全过程。Peertracks 公司和 Mycelia 公司目前正在帮助音乐家发布内容,而不必担心其内容被盗或被滥用。他们帮助艺术家直接向粉丝和客户销售,同时获得工作报酬,而无需通过权利机构和唱片公司 [^9]。

### 身份管理平台

[Share & Charge][18]([Slock.It][19])是一家欧洲的区块链初创公司。他们的移动应用程序允许房主和其他个人投入资金建立充电站,与其他正在寻找快速充电的人分享他们的资源。这不仅使业主能够收回他们的一些投资,而且还允许电动汽车司机在其附近获得更多的充电点,从而让供应商以方便的方式满足需求。一旦“客户”完成对其车辆的充电,相关的硬件就会创建一个由数据组成的安全时间戳块,并且在该平台上工作的智能合约会自动将相应的金额记入所有者账户。该平台会记录所有此类交易的踪迹并保持适当的安全验证。有兴趣的读者可以看一下[这里][6],从技术角度了解他们的产品。该公司的平台将逐步使用户能够与有需要的个人分享其他产品和服务,并从中获得被动收入。

以上我们看到的这些公司,连同一份简短的正在进行中的项目清单,都在利用智能合约和区块链数据库系统。诸如此类的平台有助于构建一个安全的“盒子”,其中的信息仅能由用户自己、其上的代码或智能合约访问。信息基于触发器被实时审查、检查,算法由系统执行。这样的平台将人为监督降到最低,是在安全数字自动化方面朝着正确方向迈出的急需的一步,此前从未在这样的规模上考虑过这一点。

下一篇文章将阐述不同类型的区块链。单击以下链接以了解有关此主题的更多信息。

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/

作者:[ostechnix][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: (ninifly)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11009-1.html)
[#]: subject: (Edge computing is in most industries’ future)
[#]: via: (https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)

边缘计算是大多数行业的未来
======

> 几乎每个行业都可以利用边缘计算来加速数字化转型。

![](https://img.linux.net.cn/data/attachment/album/201906/23/231224cdl3kwedn0hw2lie.jpg)

边缘计算的发展将取得一次巨大的飞跃。[据 Gartner 数据][2],现在公司有 10% 的数据是在传统数据中心或云之外生成的。但在未来六年内,这一比例将升至 75%。

这很大程度上是出于处理来自设备的数据的需要,比如物联网(IoT)数据传感器。早期采用这一技术的行业包括:

* **制造商**:设备与传感器似乎是这个行业特有的,因此需要为其产生的数据找到更快速的处理方法也就不足为奇。[Automation World][3] 最近的一份研究发现,43% 的制造商已经部署了边缘计算项目,最常见的用途包括生产/制造数据分析与设备数据分析。
* **零售商**:与大多数深受数字化运营需求影响的产业一样,零售商也不得不革新其客户体验。为此,这些组织“正在积极投资贴近于买家的计算能力”,施耐德电气公司 IT 部门执行副总裁 [Dave Johnson][4] 如是说。他列举了一些例子,例如试衣间里的增强现实(AR)镜子,可以提供不同的服装选择,而不用顾客逐一试穿;又如用于店内导航的基于信标的热图。
* **医疗保健机构**:随着医疗保健成本的不断上升,这一行业已经准备好在提高生产能力与成本效率方面进行创新。管理咨询公司[麦肯锡已经确定][5]了至少 11 个有益于患者、医疗机构或两者的医疗保健用例。举两个例子:跟踪移动医疗设备的位置,以提高护理效率并帮助优化设备使用;跟踪用户锻炼情况并提供健康建议的可穿戴设备。

虽然以上这些是最明显的用例,但随着边缘计算市场的扩大,采用它的行业也会增加。

### 数字化转型的优势

边缘计算的快速处理能力完全契合数字化转型的目标:提高效率与生产能力,加速产品上市,改善客户体验。以下是一些有潜力的应用,以及将被边缘计算改变的行业:

**农业**:农民和农业组织已经在使用无人机将农田和气候状况的数据传给灌溉设备。其他的应用可能包括对工人、牲畜和设备的监测与位置跟踪,从而改善生产能力、效率和成本。

**能源**:在这一领域有许多潜在的应用,可以使消费者与供应商都受益。例如,智能电表有助于业主更好地管理能源使用,同时减少电网运营商对手动抄表的需求。同样,水管上的传感器能够监测到漏水,并提供实时漏水数据。

**金融服务**:银行正在采用能够快速处理数据以提供更好用户体验的交互式 ATM 机。在管理层面,可以更快速地分析交易数据中的欺诈行为。

**物流**:由于消费者需要更快速地获得商品和服务,物流公司将需要改造其地图和寻路功能以获取实时数据,尤其是在最后一公里的计划和跟踪方面。这可能涉及对来自街道、包裹及汽车的传感器数据的传输和处理。

得益于边缘计算,所有行业都有转型的潜力。但是,这将取决于他们如何处理计算基础设施。可以在 [APC.com][6] 找到克服各种 IT 阻碍的解决方案。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html

作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[ninifly](https://github.com/ninifly)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-1019389496-100794424-large.jpg
[2]: https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/
[3]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
[4]: https://blog.schneider-electric.com/datacenter/2018/07/10/why-brick-and-mortar-retail-quickly-establishing-leadership-edge-computing/
[5]: https://www.mckinsey.com/industries/high-tech/our-insights/new-demand-new-markets-what-edge-computing-means-for-hardware-companies
[6]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
[#]: collector: (lujun9972)
[#]: translator: (chen-ni)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11000-1.html)
[#]: subject: (Why startups should release their code as open source)
[#]: via: (https://opensource.com/article/19/5/startups-release-code)
[#]: author: (Clément Flipo https://opensource.com/users/cl%C3%A9ment-flipo)

为什么初创公司应该将代码开源
======

> Dokit 曾经怀疑将自己的知识开源可能是一个失败的商业决策,然而正是这个选择奠定了它的成功。

![open source button on keyboard][1]

回想一个项目开展最初期的细节并不是一件容易的事情,但这有时候可以帮助你更清晰地理解这个项目。如果让我来说,关于 [Dokit][2] 这个用来创建用户手册和文档的平台的最早的想法来自我的童年。小时候我家里都是 Meccano(LCTT 译注:一种类似乐高的拼装玩具)和飞机模型之类的玩具,对于我来说,游戏中很重要的一部分就是动手制作,把独立的零件组装在一起来创造一个新的东西。我父亲在一家 DIY 公司工作,所以家里到处都是与建筑、修理以及使用说明书有关的东西。小的时候父母还让我参加了童子军,在那里我们制作桌子和帐篷,还有泥巴做的烧烤炉,这些事情都培养了我在共同学习中感受到的乐趣,就像我在开源活动中感受到的一样。

在童年学到的修理东西和回收产品的本领后来成为了我工作的一部分。后来我决心要用线上的方式,还原这种在家里或者小组里学习如何制作和修理东西时的那种非常棒的感觉。Dokit 就从这个想法中诞生了。

### 创业初期

事情并非一帆风顺,在我们的公司于 2017 年成立之后,我很快就意识到那些最庞大、最值得奋斗的目标一般来说也总是最困难的。如果想要实现我们的计划 —— 彻底改变[老式的说明书和用户手册的编写和发行方式][3],并且在这个细分市场(我们非常清楚这一点)里取得最大的影响力 —— 那么确立一个主导任务就十分关键,它关乎项目的组织方式。我们据此做出了第一个重要决策:首先[在短时间内使用一个已有的开源框架 MediaWiki 制作产品原型来验证我们的想法][4],然后将我们的全部代码都作为开源项目发布。

当时 [MediaWiki][5] 已经在正常运作了,事后看来,这一点让我们的决策变得容易了许多。这个平台已经拥有我们设想的最小可用产品(MVP)所需要的 90% 的功能,并且在全世界范围内有 15000 名活跃的开发者。MediaWiki 因为是维基百科的驱动引擎而小有名气,如果没有来自它的支持,事情对我们来说无疑会困难很多。还有一个许多公司都在使用的文档平台 Confluence 也有一些不错的功能,但是最终在这两者之间做出选择还是很容易的。

出于对社区的信赖,我们把自己平台的初始版本完全放在了 GitHub 上。我们甚至还没有真正开始进行推广,就已经可以看到世界各地的创客们开始使用我们的平台,这种令人激动的感觉似乎说明我们的选择是正确的。尽管[创客以及 Fablab 运动][6](LCTT 译注:Fablab 是一种向个人提供包括 3D 打印在内的电子化制造服务的小型工坊)都在鼓励用户积极分享说明材料,并且在 [Fablab 章程][7] 中也写明了这一点,但现实中像模像样的文档还是不太多见。

人们喜欢使用我们这个平台的首要原因是它可以解决一个非常实在的问题:一个本来还不错的项目,却配上了非常糟糕的文档 —— 其实这个项目本来可以变得更好的。对我们来说,这有点儿像是在修复创客及 DIY 社区里的一个裂缝。在我们的平台发布后的一年之内,Fablabs、[Wikifab][8]、[Open Source Ecology][9]、[Les Petits Debrouillards][10]、[Ademe][11] 以及 [Low-Tech Lab][12] 都在他们的服务器上安装了我们的工具,用来制作逐步引导的教程。

甚至在我们还没有发新闻稿之前,我们的其中一个用户 Wikifab 就开始在全国性媒体上收到“DIY 界的维基百科”这样的称赞了。短短两年之内,我们看到有数百个社区在他们自己的 Dokit 上开展了项目,从有意思的、搞笑的,到那种很正式的产品手册都有。这种社区的力量正是我们想要驾驭的,并且有这么多的项目 —— 从风力涡轮机到宠物喂食器 —— 都在使用我们创建的平台编写非常有吸引力的产品手册,这件事情真的令我们赞叹不已。

### 项目开源

回头看看前两年的成功,很明显选择开源是我们能迅速取得成果的关键因素。最有价值的事情就是在开源项目中获得反馈的能力了。如果一段代码无法正常运行,[会有人立刻告诉我们][14]。既然可以从这些已经在使用你提供的服务的人那里学到这么多东西,为什么还需要等着和顾问们开会呢?

社区对我们这个项目的关注程度也反映出了这个市场的潜力(包括利润上的潜力)。[巴黎有一个非常好的、成长迅速的开发者社区][15](LCTT 译注:Dokit 是一家设立在巴黎的公司),但是开源将我们从一个只有数千当地人的小池子里带到了全世界数百万的开发者身边,他们都将成为我们的创作中的一部分。与此同时,代码的开放性也让我们的用户和客户更加放心,因为即使我们这个公司不在了,代码仍然会存续下去。

如果说上面这些都是在我们之前对开源的预期之中的话,其实这一路上也有不少惊喜。因为开源,我们获得了更多的客户、声望以及精准推广,这种推广本来以我们有限的预算是负担不起的,现在却不需要我们支付费用。我们发现开源代码还改善了我们的招聘流程,因为在雇佣之前就可以通过我们的代码来测试候选人,并且被雇佣之后的入职过程也会更加顺利。

开发者在完全公开的情况下写代码,既会让人有一点不自在,也会带来一种同侪协作的气氛,这对我们提升产品质量很有帮助。人们可以互相发表意见和反馈,并且因为工作都是完全公开的,人们似乎会尽可能地想做到最好。为了不断优化、不断重构 Dokit 的运行方式,我们明白未来应该在对社区的支持上做得更好。

在创业初期,我们对将自己的知识免费分发出去这件事还是非常担心的。事实证明正好相反 —— 正是开源让我们能够迅速构建起一个可持续的初创企业。Dokit 平台的设计初衷是通过社区的支持,让它的用户有信心去构建、组装、修理和创造全新的发明。事后看来,我们用开源的方式去构建了 Dokit 这个平台,这和 Dokit 本身想做的其实正好是同一件事情。

如同修理或者组装一件实体产品一样,只有当你对自己的方法有信心的时候,事情才会越来越顺利。现在,在我们创业的第三个年头,我们开始注意到全世界对这个领域的兴趣在增加,因为它迎合了出于不断变化的居家和生活方式的需求而[想要使用或重复利用以及组装产品的新一代客户][16]。我们正是在通过线上社区的支持,创造一个让大家能够在自己动手做东西的时候感到更加有信心的平台。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/startups-release-code

作者:[Clément Flipo][a]
选题:[lujun9972][b]
译者:[chen-ni](https://github.com/chen-ni)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: (yizhuoyan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11010-1.html)
[#]: subject: (How To Check Whether The Given Package Is Installed Or Not On Debian/Ubuntu System?)
[#]: via: (https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

如何在 Debian/Ubuntu 系统中检查程序包是否安装?
======

![](https://img.linux.net.cn/data/attachment/album/201906/23/235541yl41p73z5jv78y8p.jpg)

我们近期发布了一篇关于批量安装程序包的文章。与此同时,关于如何获取系统上已安装的程序包的信息,我也做了些调查,找到了一些方法。我会把这些方法分享在我们的网站上,希望能帮助到其他人。

有很多种方法可以检查程序包是否已安装。我找到了 7 种命令,你可以从中选择你喜欢的使用:

* `apt-cache`:可用于查询 APT 缓存或程序包的元数据。
* `apt`:在基于 Debian 的系统中安装、下载、删除、搜索和管理软件包的强大工具。
* `dpkg-query`:一个查询 dpkg 数据库的工具。
* `dpkg`:基于 Debian 的系统的包管理工具。
* `which`:返回在终端中输入命令时执行的可执行文件的全路径。
* `whereis`:可用于搜索指定命令的二进制文件、源码文件和帮助文件。
* `locate`:比 `find` 命令快,因为它使用 `updatedb` 数据库搜索,而 `find` 命令则在实际文件系统中搜索。

### 方法一、使用 apt-cache 命令

`apt-cache` 命令用于从 APT 内部数据库中查询 **APT 缓存**和**包的元数据**,它会搜索并显示指定包的信息,包括是否安装、程序包版本、源码仓库信息等。

下面的示例清楚地显示 `nano` 包已经在系统中安装了,以及对应安装的版本号。

```
# apt-cache policy nano
nano:
Installed: 2.9.3-2
Candidate: 2.9.3-2
Version table:
*** 2.9.3-2 500
500 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
100 /var/lib/dpkg/status
```
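
如果要在脚本里利用这个输出做判断,可以基于输出中的 `Installed:` 字段写一个小的条件检查。下面是一个示意写法,它假设未安装的包会显示 “Installed: (none)”:

```
# 根据 apt-cache policy 的 “Installed:” 字段判断包是否已安装
if apt-cache policy nano | grep -q 'Installed: (none)'; then
    echo "nano 未安装"
else
    echo "nano 已安装"
fi
```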

### 方法二、使用 apt 命令

`apt` 是一个功能强大的命令行工具,可用于安装、下载、删除、搜索、管理程序包,以及查询程序包的信息,可以看作是对 `libapt-pkg` 库全部功能的上层访问。它还包含一些与包管理相关但很少用到的命令行功能。

```
# apt -qq list nano
nano/bionic,now 2.9.3-2 amd64 [installed]
```

### 方法三、使用 dpkg-query 命令

`dpkg-query` 是显示 `dpkg` 数据库中程序包信息列表的一个工具。

下面示例输出的第一列为 `ii`,表示所查询的程序包已安装。

```
# dpkg-query --list | grep -i nano
ii nano 2.9.3-2 amd64 small, friendly text editor inspired by Pico
```
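
除了列表方式,`dpkg-query` 还可以直接输出某个包的状态字段,这在脚本中更方便。下面是一个示意用法,已安装的包会输出 “install ok installed”:

```
# 直接查询包的安装状态字段
dpkg-query -W -f='${Status}\n' nano
```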

### 方法四、使用 dpkg 命令

`dpkg`(**d**ebian **p**ac**k**a**g**e)是一个安装、构建、删除和管理 Debian 软件包的工具,但和其他包管理系统不同的是,它不能自动下载和安装包或包依赖。

下面示例输出的第一列为 `ii`,表示所查询的包已安装。

```
# dpkg -l | grep -i nano
ii nano 2.9.3-2 amd64 small, friendly text editor inspired by Pico
```

### 方法五、使用 which 命令

`which` 命令返回在终端中输入命令时执行的可执行文件的全路径。这在你想要给可执行文件创建桌面快捷方式或符号链接时非常有用。

`which` 命令仅在当前用户 `PATH` 环境变量配置的目录列表中搜索,而不是在所有用户的目录中搜索。这意味着,当你以自己的账号登录时,它不会搜索仅属于 `root` 用户的文件或目录。

如果对于指定的程序包或可执行文件有如下输出的路径,则表示已安装了,否则没有。

```
# which nano
/bin/nano
```

### 方法六、使用 whereis 命令

`whereis` 命令用于搜索指定命令对应的程序二进制文件、源码文件以及帮助文件等。

如果对于指定的程序包或可执行文件有如下输出的路径,则表示已安装了,否则没有。

```
# whereis nano
nano: /bin/nano /usr/share/nano /usr/share/man/man1/nano.1.gz /usr/share/info/nano.info.gz
```

### 方法七、使用 locate 命令

`locate` 命令比 `find` 命令快,因为它在 `updatedb` 数据库中搜索,而 `find` 命令在实际文件系统中进行搜索。

对于要查找的指定文件,它使用数据库而不是在特定目录路径中搜索。

大多数系统默认没有预装 `locate` 命令,需要手动安装。

`locate` 使用的数据库会由定时任务定期更新。当然,我们也可以手动更新。

如果对于指定的程序包或可执行文件有如下输出的路径,则表示已安装了,否则没有。

```
# locate --basename '\nano'
/usr/bin/nano
/usr/share/nano
/usr/share/doc/nano
```
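
如果想把上面的思路封装起来反复使用,可以写一个小的 shell 函数。下面是一个基于 `dpkg-query` 状态输出的示意写法:

```
# 判断某个包是否已安装:已安装时函数返回 0
is_installed() {
    dpkg-query -W -f='${Status}' "$1" 2>/dev/null | grep -q "ok installed"
}

if is_installed nano; then
    echo "nano 已安装"
else
    echo "nano 未安装"
fi
```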

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[yizhuoyan](https://github.com/yizhuoyan)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10997-1.html)
[#]: subject: (5 GNOME keyboard shortcuts to be more productive)
[#]: via: (https://fedoramagazine.org/5-gnome-keyboard-shortcuts-to-be-more-productive/)
[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)

### GNOME 活动概述

可以使用键盘上的 `Super` 键轻松打开活动概述(`Super` 键通常有一个标识,比如 Windows 徽标……)。这在启动应用程序时非常有用。例如,使用键序列 `Super + f i r + Enter` 可以轻松启动 Firefox Web 浏览器。

![][3]

在 GNOME 中,通知提供在消息托盘中,这也是日历和世界时钟出现的地方。要使用键盘打开消息托盘,请使用 `Super + m` 快捷键;要关闭消息托盘,只需再次使用相同的快捷键。

![][4]

### 在 GNOME 中管理工作空间

### 同一个应用的多个窗口

使用活动概述启动应用程序非常有效。但是,如果试图从已经运行的应用程序打开一个新窗口,就只会将焦点转移到已经打开的窗口上。要创建一个新窗口,不要简单地按 `Enter` 启动应用程序,而是使用 `Ctrl + Enter`。

例如,要使用应用程序概述启动终端的第二个实例,可以按 `Super + t e r + (Ctrl + Enter)`。

![][7]

然后你可以使用 `` Super + ` `` 在同一个应用程序的窗口之间切换。

![][8]

如图所示,当用键盘控制时,GNOME Shell 是一个非常强大的桌面环境。学习使用这些快捷键并训练肌肉记忆以减少对鼠标的依赖,将为你提供更好的用户体验,并在使用 GNOME 时提高你的工作效率。有关其他有用的快捷键,请查看 [GNOME wiki 上的此页面][9]。
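
如果你想在终端里查看(或修改)这些快捷键在系统中的实际绑定,也可以使用 `gsettings` 查询。下面是一个示意,其中的 schema 和键名以常见的 GNOME 配置为准,不同版本可能略有差异:

```
# 列出窗口管理器相关的快捷键绑定
gsettings list-recursively org.gnome.desktop.wm.keybindings | grep -i switch

# 查看“在同一应用的窗口间切换”当前绑定的按键
gsettings get org.gnome.desktop.wm.keybindings switch-group
```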

* * *

*图片来自 [1AmFcS][10],[Unsplash][11]*

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/5-gnome-keyboard-shortcuts-to-be-more-productive/

作者:[Clément Verna][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11015-1.html)
[#]: subject: (Learn Python with these awesome resources)
[#]: via: (https://opensource.com/article/19/5/resources-learning-python)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

学习 Python 的精品 PLN 资源
======

> 将这些资源加入你自己的个人学习网络,以拓展你的 Python 知识。

![](https://img.linux.net.cn/data/attachment/album/201906/25/002706hrx0d3dfrxeid3nj.jpg)

我使用和教授 Python 已有很长时间了,但我总是乐于增进自己对这门实用语言的了解。这就是为什么我一直试着拓展我的 Python <ruby>[个人学习网络][2]<rt>personal learning network</rt></ruby>(PLN)——这个概念描述的是一个用于分享信息的、非正式的互惠型网络。

教育工作者 [Kelly Paredes][3] 和 [Sean Tibor][4] 最近在他们的播客 [Teaching Python][5] 上谈到了如何搭建 Python PLN。我在克利夫兰的 [PyCon 2019][6] 遇到他们之后就订阅了这个播客(并把他们加入到了我的 Python PLN 当中)。这个播客也激发了我对自己的 Python PLN 中的人的思考,包括那些我最近在 PyCon 遇到的人们。

我会分享一些我找到 PLN 成员的地方;可能它们也可以成为你的 Python PLN 的一部分。

### Young Coders 导师

Python 软件基金会的活动协调者 [Betsy Waliszewski][7] 是我的 Python PLN 中的一员。当我们在 PyCon 2019 见面时,因为我是个老师,她推荐我看看为十二岁及以上的孩子打造的 [Young Coders][8] 工作室。在那里我遇到了负责这个计划的 [Katie Cunningham][9],这个工作室会教参与者如何搭建和配置树莓派并完成 Python 项目。年轻学生还会收到两本书:Jason Briggs 的《[Python for Kids][10]》和 Craig Richardson 的《[Learn to Program with Minecraft][11]》。我一直在寻找提升教学水平的新方式,因此我在该会议的 [NoStarch Press][12] 展台迅速拿到了两本关于 Minecraft 的书。Katie 是一名优秀的教师,也是一名多产的作家,她在 [YouTube][13] 上有一个精彩的频道,里面全是 Python 培训视频。

我把 Katie 与我在 Young Coders 工作室碰到的另外两个人加入了我的 PLN:[Nat Dunn][14] 和 [Sean Valentine][15]。像 Katie 一样,他们自愿花时间把 Python 介绍给青年程序员们。Nat 是 [Webucator][16] 的总裁,这是一家 IT 培训公司,多年来一直是 Python 软件基金会的赞助商,并赞助了 PyCon 2018 教育峰会。在把 Python 教给他 13 岁的儿子和 14 岁的侄子之后,他决定在 Young Coders 任教。Sean 是 [Hidden Genius 项目][17]的战略计划总监,这是一个为黑人男性青年打造的技术及领导力教导项目。Sean 说,许多 Hidden Genius 参与者“用 Python 打造项目,因此我们认为 [Young Coders] 是一个很好的合作机会”。了解 Hidden Genius 项目激发了我更深层次地思考编程的未来以及其改变生活的力量。

### Open Spaces 聚会

我发现 PyCon 的 [Open Spaces][18] —— 一种一小时左右的自组织即兴聚会 —— 跟正式的会议活动一样有用。我的最爱之一是关于 [Circuit Playground Express][19] 设备的聚会,这个设备是我们会议礼包的一部分。我很喜欢这个设备,而这个 Open Space 提供了学习它的一条捷径。组织者提供了工作表和一个 [Github][20] 仓库,其中包含我们上手所需要的所有工具,也提供了动手实践的机会以及探索这个独特硬件的方向。

这次聚会激起了我进一步了解 Circuit Playground Express 编程的兴趣,因此在 PyCon 之后,我在 Twitter 上联系到了在会议上就该设备的编程发表了主旨演讲的 [Nina Zakharenko][21]。自从去年秋天我在 [All Things Open][23] 上听过她的演讲后,她就在我的 Python PLN 里了。我最近报名参加了她的 [Python 基础][24]课程,以加深我的学习。Nina 还推荐我将 [Kattni Rembor][25] 加入我的 Python PLN,其[示例代码][26]正在帮助我学习用 CircuitPython 编程。

### 我的 PLN 中的其他资源

我在 PyCon 2019 也遇见了 [Opensource.com][27] 社区版主 [Moshe Zadka][28],并和他进行了长谈。他分享了几个新的 Python 资源,包括《[如何像计算机科学家一样思考][29]》。社区版主 [Seth Kenlon][30] 是我的 PLN 中的另一名成员;他发表了许多优秀的 [Python 文章][31],我也推荐你关注他。

我的 Python PLN 每天都在持续扩大。除了前面提到的,我同样推荐你关注 [Al Sweigart][32]、[Eric Matthes][33] 以及 [Adafruit][34],他们分享的都是优质内容。我也推荐《[制作:由 Adafruit Circuit Playground Express 开始][35]》这本书,以及关于 Python 社区的播客《[Podcast.\_\_init\_\_][36]》。这两个都是我从我的 PLN 中了解到的。

谁在你的 Python PLN 中?请在留言区分享你的最爱。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/resources-learning-python

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[tomjlw](https://github.com/tomjlw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/reading_book_stars_list.png?itok=Iwa1oBOl (Book list, favorites)
[2]: https://en.wikipedia.org/wiki/Personal_learning_network
[3]: https://www.teachingpython.fm/hosts/kellypared
[4]: https://twitter.com/smtibor
[5]: https://www.teachingpython.fm/20
[6]: https://us.pycon.org/2019/
[7]: https://www.linkedin.com/in/betsywaliszewski
[8]: https://us.pycon.org/2019/events/letslearnpython/
[9]: https://www.linkedin.com/in/kcunning/
[10]: https://nostarch.com/pythonforkids
[11]: https://nostarch.com/programwithminecraft
[12]: https://nostarch.com/
[13]: https://www.youtube.com/c/KatieCunningham
[14]: https://www.linkedin.com/in/natdunn/
[15]: https://www.linkedin.com/in/sean-valentine-b370349b/
[16]: https://www.webucator.com/
[17]: http://www.hiddengeniusproject.org/
[18]: https://us.pycon.org/2019/events/open-spaces/
[19]: https://www.adafruit.com/product/3333
[20]: https://github.com/adafruit/PyCon2019
[21]: https://twitter.com/nnja
[22]: https://www.youtube.com/watch?v=35mXD40SvXM
[23]: https://allthingsopen.org/
[24]: https://frontendmasters.com/courses/python/
[25]: https://twitter.com/kattni
[26]: https://github.com/kattni/ChiPy_2018
[27]: http://Opensource.com
[28]: https://opensource.com/users/moshez
[29]: http://openbookproject.net/thinkcs/python/english3e/
[30]: https://opensource.com/users/seth
[31]: https://www.google.com/search?source=hp&ei=gVToXPq-FYXGsAW-mZ_YAw&q=site%3Aopensource.com+%22Seth+Kenlon%22+%2B+Python&oq=site%3Aopensource.com+%22Seth+Kenlon%22+%2B+Python&gs_l=psy-ab.12...627.15303..15584...1.0..0.176.2802.4j21......0....1..gws-wiz.....0..35i39j0j0i131j0i67j0i20i263.r2SAW3dxlB4
[32]: http://alsweigart.com/
[33]: https://twitter.com/ehmatthes?lang=en
[34]: https://twitter.com/adafruit
[35]: https://www.adafruit.com/product/3944
[36]: https://www.pythonpodcast.com/episodes/
published/20190604 Kubernetes is a dump truck- Here-s why.md

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11011-1.html)
[#]: subject: (Kubernetes is a dump truck: Here's why)
[#]: via: (https://opensource.com/article/19/6/kubernetes-dump-truck)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux)

为什么说 Kubernetes 是一辆翻斗车
======

> 翻斗车是解决各种基本业务问题的优雅方案。

![](https://img.linux.net.cn/data/attachment/album/201906/24/012846v737bts00uwk3qd7.jpg)

这篇文章写于 Kubernetes 的生日(6 月 7 日星期五)前夕。

翻斗车很优雅。说真的,不信你听我说。它们以优雅的方式解决了各种各样的技术问题:可以搬动泥土、砾石、岩石、煤炭、建筑材料或道路上的障碍,甚至可以拉动拖车及其上面的其他重型设备。你可以给一辆翻斗车装上五吨泥土,然后开着它横越全国。对于像我这样的电脑极客来说,这就是优雅。

但是,它们并不容易使用。驾驶翻斗车需要特殊的驾驶执照,装配和维护它们也不容易,购买翻斗车和做各种保养时要做很多选择。但是,它们可以优雅地搬动那些泥土。

你知道搬动泥土有什么不优雅的地方吗?假如你有一款新型的紧凑型轿车,它们到处可以买到,易于驾驶、更易于维护。但是,用它们来装泥土就很糟糕:这需要跑 200 趟才能运走 5 吨泥土,而且之后没人再会想要这辆车了。

好吧,你可以直接买一辆现成的翻斗车,而不是自己造一辆。但是我不同,我是个极客,我喜欢自己造东西。但……

如果你拥有一家建筑公司,你就不会想着自己造一辆翻斗车,你肯定也不会维持一条重造翻斗车的供应链(这可是一条很大的供应链)。但你可以学会驾驶一辆。

好吧,我的这个比喻很粗糙,但很容易理解。易用性是相对的,易于维护是相对的,易于装配也是相对的。这实际上取决于你想要做什么。[Kubernetes][2] 也不例外。

一次性构建 Kubernetes 并不太难,但把 Kubernetes 配置好呢?好吧,这就稍微难一些了。你如何看待 KubeCon?它们又宣布了多少新项目?哪些是“真实的”?而你应该学习哪些?你对 Harbour、TikV、NATD、Vitess、<ruby>开放策略代理<rt>Open Policy Agent</rt></ruby>有多深入的了解?更不用说 Envoy、eBPF 和 Linux 中的一系列底层技术了?这就像是 1904 年工业革命时代建造翻斗车一样,你要弄清楚所使用的螺钉、螺栓、金属和活塞。(在座有蒸汽朋克爱好者吗?)

像翻斗车一样构造和配置 Kubernetes 是一个技术问题;如果你从事金融服务、零售、生物研究、食品服务等行业,这可能不是你应该做的事情。但是,学习如何驾驶 Kubernetes 肯定是你应该学习的东西。

Kubernetes 就像一辆翻斗车,因其可以解决的各种技术问题(以及它所拖带的生态系统)而优雅。所以,我送给你一句话,这是我的一位计算机科学教授在我大学第一年时告诉我们的,她说:“有一天,你会看到一段代码,并对自己说,‘真特么优雅!’”

Kubernetes 很优雅。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/kubernetes-dump-truck

作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dump_truck_car_container_kubernetes.jpg?itok=4BdmyVGd (Dump truck with kids standing in the foreground)
[2]: https://kubernetes.io/
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10999-1.html)
[#]: subject: (Installing alternative versions of RPMs in Fedora)
[#]: via: (https://fedoramagazine.org/installing-alternative-rpm-versions-in-fedora/)
[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)

在 Fedora 中安装替代版本的 RPM 包
======

![][1]

<ruby>[模块化][2]<rt>Modularity</rt></ruby>使 Fedora 能够在仓库中提供替代版本的 RPM 软件包。每个 Fedora 版本都可以原生地构建应用程序、语言运行时和其他工具的多个不同版本。

Fedora Magazine 大约一年前就介绍过 [Fedora 28 服务器版的模块化][3]。那时,它只是一个带有附加内容的可选仓库,并且明确只支持服务器版。到目前为止,它已经发生了很多变化,现在**模块化是 Fedora 发行版的核心部分**,一些软件包已完全变成了模块。在编写本文时,Fedora 30 的 49,464 个二进制 RPM 软件包中有 1,119 个(2.26%)来自模块([关于这个数字的更多信息][4])。

### 模块化基础知识

同一软件包的众多版本会让人难以应付(并且难以管理),因此软件包被分组为**模块**,一个模块可以代表一个应用程序、一个语言运行时或任何其他合理的组合。

模块通常有多个**流**,流通常代表软件的主要版本。仓库中可以同时提供多个流,但在给定系统上,每个模块只能启用并安装其中一个流。

为了不让用户面对太多选择,每个 Fedora 版本都有一组**默认流**,因此只需要在需要不同版本时做出决定。

最后,为了简化安装,可以根据用例使用预定义的<ruby>配置<rt>profile</rt></ruby>选择性地安装模块。例如,数据库模块可以按客户端、服务端或两者同时的配置安装。

### 实际使用模块化

当你在 Fedora 系统上安装 RPM 软件包时,它很可能来自某个模块流。你可能没有注意到的原因之一是,模块化的核心原则之一就是在你探究之前保持不可见。

让我们比较以下两种情况。首先,安装流行的 i3 平铺窗口管理器,然后安装极简的 dwm 窗口管理器:

```
$ sudo dnf install i3
...
Done!
```

正如所料,上面的命令会在系统上安装 i3 包及其依赖项,这里没有其他事情发生。但另一个会怎么样?

```
$ sudo dnf install dwm
...
Enabling module streams:
dwm 6.1
...
Done!
```

感觉是一样的,但后台发生了一些事情:它启用了默认的 dwm 模块流(6.1),并且安装了模块中的 dwm 包。

为了保持透明,输出中有一条关于模块自动启用的消息。但除此之外,用户不需要了解模块化的任何信息,就可以按照他们一贯的方式使用系统。

但如果他们想使用模块化的方式呢?让我们看看如何安装不同版本的 dwm。

使用以下命令查看可用的模块流:

```
$ sudo dnf module list
...
dwm latest ...
dwm 6.0 ...
dwm 6.1 [d] ...
dwm 6.2 ...
...
Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
```

输出显示 dwm 模块有四个流,其中 6.1 是默认流。
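
如果想在启用某个流之前进一步查看它包含的内容,可以用 `dnf module info` 查询。下面是一个示意用法,输出内容会因版本而异:

```
# 查看 dwm 模块 6.2 流的详细信息,包括其软件包和可用的配置
$ sudo dnf module info dwm:6.2
```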

要安装不同版本的 dwm 包,例如 6.2 流中的版本,需要先启用该流,然后安装软件包,使用以下两个命令:

```
$ sudo dnf module enable dwm:6.2
...
Enabling module streams:
dwm 6.2
...
Done!
$ sudo dnf install dwm
...
Done!
```

最后,让我们看看配置的用法,以 PostgreSQL 为例。

```
$ sudo dnf module list
...
postgresql 9.6 client, server ...
postgresql 10 client, server ...
postgresql 11 client, server ...
...
```

要安装 PostgreSQL 11 服务端,使用以下命令:

```
$ sudo dnf module install postgresql:11/server
```

请注意,这条命令无需事先单独启用流,并且通过指定配置,一条命令就完成了模块的启用和安装。

多个配置可以同时安装。要添加客户端工具,使用下面的命令:

```
$ sudo dnf module install postgresql:11/client
```
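
如果之后想从一个流切换到另一个流(例如从 PostgreSQL 10 切换到 11),大致流程如下。这只是一个示意;切换流前请务必备份数据,具体步骤以下文提到的官方文档为准:

```
# 重置该模块当前启用的流,再启用目标流
$ sudo dnf module reset postgresql
$ sudo dnf module enable postgresql:11
# 将已安装的软件包同步为新流中的版本
$ sudo dnf distro-sync
```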

还有许多其他带有多个流的模块可供选择。在编写本文时,Fedora 30 中有 83 个模块流,包括两个版本的 MariaDB、三个版本的 Node.js、两个版本的 Ruby 等等。

有关完整的命令集(包括从一个流切换到另一个流),请参阅[模块化的官方用户文档][5]。

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/installing-alternative-rpm-versions-in-fedora/

作者:[Adam Šamalík][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/asamalik/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/modularity-f30-816x345.jpg
[2]: https://docs.pagure.org/modularity
[3]: https://linux.cn/article-10479-1.html
[4]: https://blog.samalik.com/2019/06/12/counting-modularity-packages.html
[5]: https://docs.fedoraproject.org/en-US/modularity/using-modules/
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10998-1.html)
[#]: subject: (Open hardware for musicians and music lovers: Headphone, amps, and more)
[#]: via: (https://opensource.com/article/19/6/hardware-music)
[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg)

音乐家和音乐爱好者的开放硬件:耳机、放大器等
======

> 从 3D 打印乐器到无线播放声音的设备,有很多通过开放硬件项目来奏乐的方法。

![][1]

这个世界到处都是很棒的[开源音乐播放器][2],但为什么只把开源用在播放音乐上呢?你还可以使用开源硬件来奏乐。本文中描述的所有工具都经过了[开源硬件协会][3](OSHWA)的认证,这意味着你可以自由地构建它们、重新组合它们,或者用它们做任何其他事情。

### 开源乐器

当你想奏乐时,乐器始终是一个好的起点。如果你更喜欢传统的乐器,那么 [F-F-Fiddle][4] 可能适合你。

![F-f-fiddle][5]

F-F-Fiddle 是一款全尺寸电子小提琴,你可以使用标准的桌面 3D 打印机制作(采用[熔丝制造][6])。如果你想眼见为真,这里有一个 F-F-Fiddle 的视频:https://img.linux.net.cn/static/video/The%20F-F-Fiddle-8NDWVcJJS2Y.mp4

如果你已经精通小提琴,想尝试一些更具异国情调的东西?<ruby>[开源的特雷门琴][7]<rt>Open Theremin</rt></ruby>怎么样?

![Open Theremin][8]

与所有特雷门琴一样,开源特雷门琴可让你在不触碰乐器的情况下演奏音乐。当然,它特别擅长为你的下一个科幻视频或太空主题派对制作[令人毛骨悚然的太空声音][9]。

[Waft][10] 的操作与之类似,也可以远程控制声音。它使用[激光雷达][11]来测量手与传感器的距离。看看这个:https://img.linux.net.cn/static/video/Waft%20Prototype%2012-Feb-2017-203705197.mp4

Waft 是特雷门琴吗?我不确定算不算,特雷门琴高手可以在下面的评论里发表一下看法。

如果特雷门琴对你来说太熟悉了,[SIGNUM][12] 可能就是你想要的。用其开发人员的话说,SIGNUM 通过将不可见的无线通信转换为可听信号,“揭示加密的信息代码和人/机通信的语言”。

![SIGNUM][13]

这是演示:https://img.linux.net.cn/static/video/SIGNUM_Portable%20Analog%20Instrumentation%20Amplifier-142831757.mp4

### 输入

无论你使用什么乐器,都需要将它接到某些东西上。如果你想要连接到树莓派,请尝试 [AudioSense-Pi][14],它允许你一次将多个输入和输出连接到你的树莓派。

![AudioSense-Pi][15]

### 耳机

制作这些音乐很棒,但你还需要考虑如何听它。幸运的是,[EQ-1 耳机][18]是开源的,并且支持 3D 打印。

![EQ-1 headphones][19]

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/hardware-music

作者:[Michael Weinberg][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10995-1.html)
[#]: subject: (Ubuntu Kylin: The Official Chinese Version of Ubuntu)
[#]: via: (https://itsfoss.com/ubuntu-kylin/)
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)

优麒麟:Ubuntu 的官方中文版本
======

> 让我们来看看国外是如何看优麒麟的。

[Ubuntu 有几个官方特色版本][1],优麒麟(Ubuntu Kylin)是其中之一。在这篇文章中,你将了解到优麒麟:它是什么、它为什么被创建、它有什么特色。

麒麟操作系统最初由中国[国防科技大学][2]的研究人员于 2001 年开发。名字来源于[麒麟][3],一种中国神话中的神兽。

麒麟操作系统的第一个版本基于 [FreeBSD][4],计划供中国军方和其它政府组织使用。麒麟 3.0 完全基于 Linux 内核,并且在 2010 年 12 月发布了一个称为 [NeoKylin][5] 的版本。

在 2013 年,[Canonical][6](Ubuntu 的母公司)与中华人民共和国的[工业和信息化部][7]达成共识,共同创建和发布一个面向中国市场的基于 Ubuntu 的操作系统。

![Ubuntu Kylin][8]

### 优麒麟是什么?

根据上述 2013 年的共识,优麒麟现在是 Ubuntu 的官方中国版本。它不仅仅是语言上的本地化,事实上,它致力于服务中国市场,就像 Ubuntu 服务全球市场一样。

[优麒麟][9]的第一个版本随 Ubuntu 13.04 一同到来。像 Ubuntu 一样,优麒麟也有 LTS(长期支持)版本和非 LTS 版本。

当前,优麒麟 19.04 LTS 采用了 [UKUI][10] 桌面环境,修改了启动动画、登录/锁屏程序和操作系统主题。为了给用户提供更友好的体验,它还修复了一些错误,带有文件预览、定时注销等功能,并集成了最新的 [WPS 办公组件][11]和[搜狗][12]输入法。

- [https://youtu.be/kZPtFMWsyv4](https://youtu.be/kZPtFMWsyv4)

银河麒麟 4.0.2 是一个基于优麒麟 16.04 LTS 的社区版本,包含一些有长期稳定支持的第三方应用程序。它非常适合服务器和日常桌面办公使用,欢迎开发者[下载][13]。麒麟论坛会积极听取用户的反馈,并解决发现的问题。

#### UKUI:优麒麟的桌面环境

![Ubuntu Kylin 19.04 with UKUI Desktop][15]

[UKUI][16] 由优麒麟开发小组设计和开发,有一些非常好的特色和预装软件:

* 类似 Windows 的交互功能,带来更友好的用户体验。安装向导易于使用,用户可以快速上手优麒麟。
* 控制中心加入了针对主题和窗口的新设置;开始菜单、任务栏、文件管理器、窗口管理器等组件也得到了更新。
* 在 Ubuntu 和 Debian 存储库上都可用,为 Debian/Ubuntu 发行版及其全球衍生版的用户提供了一个新的可选桌面环境。
* 新的登录和锁屏程序,更稳定、功能更多。
* 包括一个用于反馈问题的实用程序。

#### 麒麟软件中心

![Kylin Software Center][17]

麒麟有一个类似 Ubuntu 软件中心的软件中心,称为优麒麟软件中心。它是优麒麟软件商店的一部分,该商店也包含优麒麟开发者平台和优麒麟软件仓库。它界面简单、功能强大,同时支持 Ubuntu 和优麒麟的软件仓库,特别适合快速安装优麒麟小组开发的中文特色软件!

#### 优客:一系列的工具

优麒麟还有一系列被命名为“优客”的工具。在麒麟开始菜单中输入 “Youker” 将唤出麒麟助手。就像在 Windows 系统中一样,按下键盘上的 “Windows” 键,将打开麒麟开始菜单。

![Kylin Assistant][18]

其它麒麟品牌的应用程序包括麒麟影音(播放器)、麒麟刻录、优客天气、优客 Fcitx 输入法等,它们可以更好地支持办公工作和个人娱乐。

![Kylin Video][19]

#### 特别专注于中文

通过与金山软件合作,优麒麟开发者也致力于 Linux 版本的搜狗拼音输入法、快盘和优麒麟版本的金山 WPS,并解决了智能拼音、云存储和办公应用程序方面的问题。[拼音][20]是中文字符的拉丁化系统,使用这个系统,用户可以用英文键盘输入,而在屏幕上显示中文字符。

#### 有趣的事实:优麒麟运行在中国超级计算机上

![Tianhe-2 Supercomputer. Photo by O01326 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=45399546][22]

众所周知,[世界上最快的 500 强超级计算机都在运行 Linux][23]。中国的超级计算机[天河一号][24]和[天河二号][25]都使用优麒麟的 64 位版本,致力于高性能的[并行计算][26]优化、电源管理和高性能的[虚拟化计算][27]。

### 总结

我希望你喜欢这篇优麒麟世界的介绍。你可以从它的[官方网站][28]获得优麒麟 19.04,或基于 Ubuntu 16.04 的社区版本(银河麒麟)。

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-kylin/

作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/which-ubuntu-install/
[2]: https://english.nudt.edu.cn
[3]: https://www.thoughtco.com/what-is-a-qilin-195005
[4]: https://itsfoss.com/freebsd-12-release/
[5]: https://thehackernews.com/2015/09/neokylin-china-linux-os.html
[6]: https://www.canonical.com/
[7]: http://english.gov.cn/state_council/2014/08/23/content_281474983035940.htm
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/Ubuntu-Kylin.jpeg?resize=800%2C450&ssl=1
[9]: http://www.ubuntukylin.com/
[10]: http://ukui.org
[11]: https://www.wps.com/
[12]: https://en.wikipedia.org/wiki/Sogou_Pinyin
[13]: http://www.ubuntukylin.com/downloads/show.php?lang=en&id=122
[14]: https://itsfoss.com/solve-ubuntu-error-failed-to-download-repository-information-check-your-internet-connection/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/ubuntu-Kylin-19-04-desktop.jpg?resize=800%2C450&ssl=1
[16]: http://www.ukui.org/
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-software-center.jpg?resize=800%2C496&ssl=1
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-assistant.jpg?resize=800%2C535&ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-video.jpg?resize=800%2C533&ssl=1
[20]: https://en.wikipedia.org/wiki/Pinyin
[21]: https://itsfoss.com/remove-old-kernels-ubuntu/
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/tianhe-2.jpg?resize=800%2C600&ssl=1
[23]: https://itsfoss.com/linux-runs-top-supercomputers/
[24]: https://en.wikipedia.org/wiki/Tianhe-1
[25]: https://en.wikipedia.org/wiki/Tianhe-2
[26]: https://en.wikipedia.org/wiki/Parallel_computing
[27]: https://computer.howstuffworks.com/how-virtual-computing-works.htm
[28]: http://www.ubuntukylin.com
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11008-1.html)
[#]: subject: (Exploring /run on Linux)
[#]: via: (https://www.networkworld.com/article/3403023/exploring-run-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

探索 Linux 上的 /run
======

> Linux 系统在运行时数据方面的工作方式发生了微小但重大的变化。

![](https://img.linux.net.cn/data/attachment/album/201906/23/092816aqczi984w30j8k12.jpg)

如果你没有密切关注,你可能没有注意到 Linux 系统在运行时数据方面的工作方式发生了一个小而重大的变化:运行时数据在文件系统中的存放位置和访问方式在大约八年前就开始了重新调整。虽然这种变化可能不足以让你的袜子变湿,但它给 Linux 文件系统带来了更多的一致性,值得探索一番。

要开始,请进入 `/run` 目录。如果你用 `df` 来检查它,你会看到这样的输出:

```
$ df -k .
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 609984 2604 607380 1% /run
```

它被识别为 “tmpfs”(临时文件系统),因此我们知道 `/run` 中的文件和目录没有存储在磁盘上,而只存储在易失性内存中。它们表示保存在内存(或基于磁盘的交换空间)中的数据,以挂载的文件系统的形式呈现,使其更易于访问和管理。

`/run` 是各种各样数据的家园。例如,如果你查看 `/run/user`,你会注意到一组带有数字名称的目录。

```
$ ls /run/user
1000 1002 121
```

使用长文件列表可以弄清这些数字的含义。

```
$ ls -l
drwx------ 5 dory dory 120 Jun 16 16:14 1002
drwx------ 8 gdm gdm 220 Jun 14 12:18 121
```

我们可以看到,每个目录都与一个当前登录的用户或显示管理器 gdm 相关。这些数字代表他们的 UID,而每个目录的内容则是运行中的进程所使用的文件。
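
可以用 `id` 命令验证这些目录名确实就是对应用户的 UID(用户名沿用上面的示例):

```
# 查询用户的 UID,应与 /run/user 下的目录名一致
$ id -u dory
1002
```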

`/run/user` 下的文件只是你在 `/run` 中能找到的一小部分。还有很多其他文件,其中一些包含了各种系统进程的进程 ID。

```
$ ls *.pid
acpid.pid atopacctd.pid crond.pid rsyslogd.pid
atd.pid atop.pid gdm3.pid sshd.pid
```

如下所示,上面列出的 `sshd.pid` 文件包含 ssh 守护程序(`sshd`)的进程 ID。

```
$ cat sshd.pid
...
$ ps -ef | grep sshd
...
dory 18232 18109 0 16:14 ? 00:00:00 sshd: dory@pts/1
shs 19276 10923 0 16:50 pts/0 00:00:00 grep --color=auto sshd
```

`/run` 中的某些子目录只能以 root 权限访问,例如 `/run/sudo`。例如,以 root 身份运行时,我们可以看到一些与真实的或尝试的 `sudo` 使用相关的文件:

```
/run/sudo/ts# ls -l
total 8
-rw------- 1 root shs 168 Jun 17 08:33 shs
```

为了与 `/run` 的这种转变保持一致,一些运行时数据的旧位置现在成了符号链接:`/var/run` 现在指向 `/run`,`/var/lock` 指向 `/run/lock`,这保证了旧的引用仍能按预期工作。
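
可以用 `readlink` 直接确认这两个符号链接的指向:

```
# 确认旧路径确实指向 /run 下的新位置
$ readlink -f /var/run
/run
$ readlink -f /var/lock
/run/lock
```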

```
$ ls -l /var
...
drwxrwxrwt 8 root root 4096 Jun 17 00:00 tmp
drwxr-xr-x 3 root root 4096 Jan 19 12:14 www
```

虽然这只是一个很小的技术性变化,但转而使用 `/run`,让运行时数据在 Linux 文件系统中得到了更好的组织。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403023/exploring-run-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
published/20190619 Get the latest Ansible 2.8 in Fedora.md

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11002-1.html)
[#]: subject: (Get the latest Ansible 2.8 in Fedora)
[#]: via: (https://fedoramagazine.org/get-the-latest-ansible-2-8-in-fedora/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)

在 Fedora 中获取最新的 Ansible 2.8
======

![][1]

Ansible 是世界上最受欢迎的自动化引擎之一。它能让你自动化几乎任何事情,从本地系统的设置到大量的平台和应用。它是跨平台的,因此你可以将其用于各种操作系统。请继续阅读,了解如何在 Fedora 中获取最新的 Ansible、它的一些更改和改进,以及如何使用它。

### 发布版本和功能

Ansible 2.8 最近发布了,其中包含许多修复、功能和增强。仅仅几天之后,它就可以在 Fedora 29 和 30 以及 EPEL 中获取了。两周前发布的后续版本 2.8.1 同样在几天内就可以在 Fedora 中获取了。

[使用 sudo][2] 就能够非常容易地从官方仓库安装:

```
$ sudo dnf -y install ansible
```

2.8 版本有一个很长的更新列表,你可以在 [2.8 的迁移指南][3]中阅读查看。但其中包含了一些好东西,比如 *Python 解释器发现功能*:Ansible 2.8 现在会试图找出哪个 Python 是它所运行的平台的首选版本,如果失败,则使用后备列表。当然,你仍然可以使用变量 `ansible_python_interpreter` 来显式设置 Python 解释器。
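
如果自动发现的结果不合适,可以在清单(inventory)里针对主机显式指定解释器。下面是一个简单的示意,其中的主机名与解释器路径均为假设的例子:

```
# inventory.ini 片段:为一台主机强制使用 /usr/bin/python3
[servers]
web1.example.com ansible_python_interpreter=/usr/bin/python3
```

然后可以用一条 ad-hoc 命令验证连通性和解释器是否正常工作:

```
$ ansible -i inventory.ini servers -m ping
```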

另一个变化使 Ansible 在各个平台上更加一致。由于 `sudo` 专用于 UNIX/Linux,而其他平台并没有,因此现在在更多地方使用 `become`。这包括了命令行开关,例如,`--ask-sudo-pass` 已变成了 `--ask-become-pass`,提示符也变成了 `BECOME password:`。

2.8 和 2.8.1 版本中还有许多其他功能。有关所有细节,请查看 [GitHub 上的官方更新日志][4]。

### 使用 Ansible

也许你不确定 Ansible 是否真的能用得上。别担心,你不是唯一一个这样想的人,因为它看起来太强大了。但事实证明,它并不难上手,在家里的几台电脑(甚至一台电脑)上配置起来都很容易。

我们之前在 Fedora Magazine 中也讨论过这个话题:

- [使用 Ansible 设置工作站][5]

试试看 Ansible,然后在评论中说说你的想法。很重要的一点是让 Fedora 保持最新版本。自动化快乐!

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/get-the-latest-ansible-2-8-in-fedora/

作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/ansible28-816x345.jpg
[2]: https://fedoramagazine.org/howto-use-sudo/
[3]: https://docs.ansible.com/ansible/latest/porting_guides/porting_guide_2.8.html
[4]: https://github.com/ansible/ansible/blob/stable-2.8/changelogs/CHANGELOG-v2.8.rst
[5]: https://fedoramagazine.org/using-ansible-setup-workstation/
@ -0,0 +1,151 @@

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11007-1.html)
[#]: subject: (Bash Script to Monitor Memory Usage on Linux)
[#]: via: (https://www.2daygeek.com/linux-bash-script-to-monitor-memory-utilization-usage-and-send-email/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

用 Bash 脚本监控 Linux 上的内存使用情况
======

![](https://img.linux.net.cn/data/attachment/album/201906/23/085446setqkshf5zk0tn2x.jpg)

目前市场上有许多开源监控工具可用于监控 Linux 系统的性能。当系统达到指定的阈值限制时,它可以发送电子邮件警报。它可以监视 CPU 利用率、内存利用率、交换利用率、磁盘空间利用率等所有内容。

如果你只有很少的系统并且想要监视它们,那么编写一个小的 shell 脚本可以使你的任务变得非常简单。

在本教程中,我们添加了两个 shell 脚本来监视 Linux 系统上的内存利用率。当系统达到给定阈值时,它将给特定电子邮件地址发邮件。

### 方法-1:用 Linux Bash 脚本监视内存利用率并发送电子邮件

如果只想在系统达到给定阈值时通过邮件获取当前内存利用率百分比,请使用以下脚本。

这是个非常简单直接的单行脚本。在大多数情况下,我更喜欢使用这种方法。

当你的系统达到内存利用率的 80% 时,它将触发一封电子邮件。
```
# 注意:crontab 中的 % 为特殊字符,需要用 \ 转义
*/5 * * * * /usr/bin/free | awk '/Mem/{printf("\%.2f\n", $3/$2*100)}' | awk '{ if ($1 > 80) print "High Memory Alert: "$1"\%" }' | mail -s "High Memory Alert" 2daygeek@gmail.com
```
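
在写入 crontab 之前,可以先在终端中单独验证计算内存使用率的那段管道(终端中无需转义 `%`,输出数值仅为示意):

```
$ free | awk '/Mem/{printf("%.2f\n", $3/$2*100)}'
42.37
```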

**注意:**你需要更改电子邮件地址而不是使用我们的电子邮件地址。此外,你可以根据你的要求更改内存利用率阈值。

**输出:**你将收到类似下面的电子邮件提醒。

```
High Memory Alert: 80.40%
```

我们过去添加了许多有用的 shell 脚本。如果要查看这些内容,请访问以下链接。

* [如何使用 shell 脚本自动执行日常活动?][1]

### 方法-2:用 Linux Bash 脚本监视内存利用率并发送电子邮件

如果想在邮件警报中获取有关内存利用率的更多信息,请使用以下脚本,它会基于 `top` 命令和 `ps` 命令给出内存利用率最高的进程的详细信息。

这将立即让你了解系统的运行情况。

当你的系统达到内存利用率的 “80%” 时,它将触发一封电子邮件。

**注意:**你需要更改电子邮件地址而不是使用我们的电子邮件地址。此外,你可以根据你的要求更改内存利用率阈值。
```
# vi /opt/scripts/memory-alert.sh

#!/bin/sh
# 计算当前内存使用率(百分比,保留两位小数)
ramusage=$(free | awk '/Mem/{printf("%.2f", $3/$2*100)}')

# 当内存使用率超过 80% 时发送邮件(比较时取整数部分)
if [ "${ramusage%.*}" -ge 80 ]; then

SUBJECT="ATTENTION: Memory Utilization is High on $(hostname) at $(date)"
MESSAGE="/tmp/Mail.out"
TO="2daygeek@gmail.com"
echo "Memory Current Usage is: $ramusage%" >> $MESSAGE
echo "" >> $MESSAGE
echo "------------------------------------------------------------------" >> $MESSAGE
echo "Top Memory Consuming Process Using top command" >> $MESSAGE
echo "------------------------------------------------------------------" >> $MESSAGE
echo "$(top -b -o +%MEM | head -n 20)" >> $MESSAGE
echo "" >> $MESSAGE
echo "------------------------------------------------------------------" >> $MESSAGE
echo "Top Memory Consuming Process Using ps command" >> $MESSAGE
echo "------------------------------------------------------------------" >> $MESSAGE
echo "$(ps -eo pid,ppid,%mem,%cpu,cmd --sort=-%mem | head)" >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
rm -f $MESSAGE
fi
```
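
保存脚本后,可以先手动运行一次进行测试(假设系统中已配置好可用的 `mail` 命令):

```
$ sudo sh /opt/scripts/memory-alert.sh
```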

最后添加一个 [cron 任务][2] 来自动执行此操作。它将每 5 分钟运行一次。

```
# crontab -e
*/5 * * * * /bin/bash /opt/scripts/memory-alert.sh
```

**注意:**由于脚本计划每 5 分钟运行一次,因此你将在最多 5 分钟后收到电子邮件提醒(不一定正好是 5 分钟,取决于具体时间)。

比如说,如果你的系统在 8:25 达到给定的阈值,那么你将在 5 分钟内收到电子邮件警报。希望现在说清楚了。

**输出:**你将收到类似下面的电子邮件提醒。

```
Memory Current Usage is: 80.71%

+------------------------------------------------------------------+
Top Memory Consuming Process Using top command
+------------------------------------------------------------------+
top - 12:00:58 up 5 days, 9:03, 1 user, load average: 1.82, 2.60, 2.83
Tasks: 314 total, 1 running, 313 sleeping, 0 stopped, 0 zombie
%Cpu0 : 8.3 us, 12.5 sy, 0.0 ni, 75.0 id, 0.0 wa, 0.0 hi, 4.2 si, 0.0 st
%Cpu1 : 13.6 us, 4.5 sy, 0.0 ni, 81.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 21.7 us, 21.7 sy, 0.0 ni, 56.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 13.6 us, 9.1 sy, 0.0 ni, 77.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4 : 17.4 us, 8.7 sy, 0.0 ni, 73.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5 : 20.8 us, 4.2 sy, 0.0 ni, 70.8 id, 0.0 wa, 0.0 hi, 4.2 si, 0.0 st
%Cpu6 : 9.1 us, 0.0 sy, 0.0 ni, 90.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7 : 17.4 us, 4.3 sy, 0.0 ni, 78.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 16248588 total, 5015964 free, 6453404 used, 4779220 buff/cache
KiB Swap: 17873388 total, 16928620 free, 944768 used. 6423008 avail Mem

 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17163 daygeek 20 0 2033204 487736 282888 S 10.0 3.0 8:26.07 /usr/lib/firefox/firefox -contentproc -childID 15 -isForBrowser -prefsLen 9408 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /u+
 1121 daygeek 20 0 4191388 419180 100552 S 5.0 2.6 126:02.84 /usr/bin/gnome-shell
 1902 daygeek 20 0 1701644 327216 82536 S 20.0 2.0 153:27.92 /opt/google/chrome/chrome
 2969 daygeek 20 0 1051116 324656 92388 S 15.0 2.0 149:38.09 /opt/google/chrome/chrome --type=renderer --field-trial-handle=10346122902703263820,11905758137655502112,131072 --service-pipe-token=1339861+
 1068 daygeek 20 0 1104856 309552 278072 S 5.0 1.9 143:47.42 /usr/lib/Xorg vt2 -displayfd 3 -auth /run/user/1000/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -verbose 3
27246 daygeek 20 0 907344 265600 108276 S 30.0 1.6 10:42.80 /opt/google/chrome/chrome --type=renderer --field-trial-handle=10346122902703263820,11905758137655502112,131072 --service-pipe-token=8587368+

+------------------------------------------------------------------+
Top Memory Consuming Process Using ps command
+------------------------------------------------------------------+
 PID PPID %MEM %CPU CMD
 8223 1 6.4 6.8 /usr/lib/firefox/firefox --new-window
13948 1121 6.3 1.2 /usr/bin/../lib/notepadqq/notepadqq-bin
 8671 8223 4.4 7.5 /usr/lib/firefox/firefox -contentproc -childID 5 -isForBrowser -prefsLen 6999 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
17163 8223 3.0 0.6 /usr/lib/firefox/firefox -contentproc -childID 15 -isForBrowser -prefsLen 9408 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
 1121 1078 2.5 1.6 /usr/bin/gnome-shell
17937 8223 2.5 0.8 /usr/lib/firefox/firefox -contentproc -childID 16 -isForBrowser -prefsLen 9410 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
 8499 8223 2.2 0.6 /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -prefsLen 6635 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
 8306 8223 2.2 0.8 /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -prefsLen 1 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
 9198 8223 2.1 0.6 /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -prefsLen 8604 -prefMapSize 184979 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 8223 true tab
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/linux-bash-script-to-monitor-memory-utilization-usage-and-send-email/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/shell-script/
[2]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/
@ -0,0 +1,91 @@

[#]: collector: (lujun9972)
[#]: translator: (wahailin)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11014-1.html)
[#]: subject: (Open Source Slack Alternative Mattermost Gets $50M Funding)
[#]: via: (https://itsfoss.com/mattermost-funding/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Slack 的开源替代品 Mattermost 获得 5000 万美元融资
======

[Mattermost][1],作为 [Slack][2] 的开源替代品,获得了 5000 万美元的 B 轮融资。这个消息极其令人振奋。

[Slack][3] 是一个基于云的团队内部沟通协作软件。企业、初创企业、甚至全球化的开源项目都在使用 Slack 进行同事及项目成员间的沟通。

[Slack 在 2019 年 6 月的估值为 200 亿美元][4],由此可见其在科技行业的巨大影响,当然也就有更多产品想与之竞争。

### 5000 万美元开源项目

![][5]

就我个人而言,我此前并不了解 Mattermost 这个产品。但 [VentureBeat][6] 对这则新闻的报道,激发了我的好奇心。这次融资由 [Y Combinator][7] 的 Continuity 与一家新的投资方 BattleVentures 领投,现有投资者 Redpoint 和 S28 Capital 共同跟投。

在[公告][8]中,他们也提到:

> 在今天的公告中,Mattermost 成为了 YC 有史以来规模最大的 B 轮投资项目,更重要的是,它是 YC 迄今为止投资额最高的开源项目。

下面是摘自 VentureBeat 的报道,你可以从中了解到一些细节:

> 本次资本注入,是继 2017 年 2 月的 350 万美元种子轮融资和今年 2 月份的 2000 万美元 A 轮融资之后进行的,并使得这家总部位于美国加州<ruby>帕罗奥图<rt>Palo Alto</rt></ruby>的公司融资总额达到了约 7000 万美元。

如果你对他们的规划感兴趣,可以阅读[官方公告][8]。

尽管听起来很不错,但可能你并不知道 Mattermost 是什么。所以我们先来作个简单了解:

### Mattermost 快览

![Mattermost][9]

前面已经提到,Mattermost 是 Slack 的开源替代品。

乍一看,它几乎照搬了 Slack 的界面外观。没错,这正是关键所在:你将获得一个可以轻松上手使用的开源解决方案。

它甚至集成了一些流行的 DevOps 工具,如 Git、自动机器人和 CI/CD。除了这些功能外,它还关注安全性和隐私。

同样,和 Slack 类似,它支持和多种应用程序与服务的集成。

听起来很有前景?我也这么认为。
#### 定价:企业版和团队版

如果你希望由 Mattermost 托管该服务(或获得优先支持),应选择其企业版。但如果你不想使用付费托管,可以下载[团队版][11],并将其安装到基于 Linux 的云服务器或 VPS 服务器上。

当然,我们不会在此进行深入探究。我确实想在此提及的是,企业版并不昂贵。

![][12]

### 总结

Mattermost 无疑相当出色。有了 5000 万美元巨额资金的注入,对于那些正在寻找安全、高效的开源团队协作通讯平台的用户来说,Mattermost 很可能会成为开源社区的重要组成部分。

你觉得这条新闻怎么样?对你来说有价值吗?你是否已了解 Mattermost 是 Slack 的替代品?

请在下面的评论中给出你的想法。
--------------------------------------------------------------------------------

via: https://itsfoss.com/mattermost-funding/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wahailin](https://github.com/wahailin)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://mattermost.com/
[2]: https://itsfoss.com/slack-use-linux/
[3]: https://slack.com
[4]: https://www.ft.com/content/98747b36-9368-11e9-aea1-2b1d33ac3271
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/mattermost-wallpaper.png?resize=800%2C450&ssl=1
[6]: https://venturebeat.com/2019/06/19/mattermost-raises-50-million-to-advance-its-open-source-slack-alternative/
[7]: https://www.ycombinator.com/
[8]: https://mattermost.com/blog/yc-leads-50m-series-b-in-mattermost-as-open-source-slack-alternative/
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/mattermost-screenshot.jpg?fit=800%2C497&ssl=1
[10]: https://itsfoss.com/zettlr-markdown-editor/
[11]: https://mattermost.com/download/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/mattermost-enterprise-plan.jpg?fit=800%2C325&ssl=1
@ -0,0 +1,88 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (BitTorrent Client Deluge 2.0 Released: Here’s What’s New)
[#]: via: (https://itsfoss.com/deluge-2-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

BitTorrent Client Deluge 2.0 Released: Here’s What’s New
======

You probably already know that [Deluge][1] is one of the [best torrent clients available for Linux users][2]. However, the last stable release was almost two years back.

Even though it was in active development, a major stable release wasn’t there – until recently. The latest version, as we write this, happens to be 2.0.2. So, if you haven’t downloaded the latest stable version yet – do try it out.

In either case, if you’re curious, let us talk about what’s new.

![Deluge][3]
### Major improvements in Deluge 2.0

The new release introduces multi-user support – a much-needed addition.

In addition to that, there have been several performance improvements to handle more torrents with faster loading times.

Also, with version 2.0, Deluge moved to Python 3, keeping only minimal support for Python 2.7. The user interface also migrated from GTK to GTK3.

As per the release notes, there are several more significant additions/improvements, which include:

  * Multi-user support.
  * Performance updates to handle thousands of torrents with faster loading times.
  * A new console UI which emulates the GTK/Web UIs.
  * GTK UI migrated to GTK3 with UI improvements and additions.
  * Magnet pre-fetching to allow file selection when adding a torrent.
  * Full support for the libtorrent 1.2 release.
  * Language switching support.
  * Improved documentation hosted on ReadTheDocs.
  * AutoAdd plugin replaces built-in functionality.
### How to install or upgrade to Deluge 2.0

![][4]

You should follow the official [installation guide][5] (using PPA or PyPi) for any Linux distro. However, if you are upgrading, you should go through the note mentioned in the release notes:

> “Deluge 2.0 is not compatible with Deluge 1.x clients or daemons, so these will require upgrading too. Also, third-party Python scripts may not be compatible if they directly connect to the Deluge client and will need migrating.”

So, they insist that you always make a backup of your [config][6] before a major version upgrade, to guard against data loss.
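
Backing up the config is a one-liner; the sketch below assumes the default per-user config location (`~/.config/deluge`) on Linux, and the backup directory name is a hypothetical choice:

```
# copy the Deluge 1.x config somewhere safe before upgrading
cp -a ~/.config/deluge ~/deluge-1.x-config-backup
```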

And, if you are the author of a plugin, you need to upgrade it to make it compatible with the new release.

Direct download packages are not yet available for Windows and macOS. However, the release notes mention that they are being worked on.

As an alternative, you can install them manually by following the [installation guide][5] in the updated official documentation.

**Wrapping Up**

What do you think about the latest stable release? Do you utilize Deluge as your BitTorrent client? Or do you find something else to be a better alternative?

Let us know your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/deluge-2-release/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://dev.deluge-torrent.org/
[2]: https://itsfoss.com/best-torrent-ubuntu/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/deluge.jpg?fit=800%2C410&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/Deluge-2-release.png?resize=800%2C450&ssl=1
[5]: https://deluge.readthedocs.io/en/latest/intro/01-install.html
[6]: https://dev.deluge-torrent.org/wiki/Faq#WheredoesDelugestoreitssettingsconfig
[7]: https://itsfoss.com/snap-store/
@ -0,0 +1,42 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Codethink open sources part of onboarding process)
[#]: via: (https://opensource.com/article/19/6/codethink-onboarding-process)
[#]: author: (Laurence Urhegyi https://opensource.com/users/laurence-urhegyi)

Codethink open sources part of onboarding process
======
In other words, how to Git going in FOSS.
![Teacher or learner?][1]

Here at [Codethink][2], we’ve recently focused our energy into enhancing the onboarding process we use for all new starters at the company. As we grow steadily in size, it’s important that we have a well-defined approach to both welcoming new employees into the company, and introducing them to the organization’s culture.

As part of this overall onboarding effort, we’ve created [_How to Git going in FOSS_][3]: an introductory guide to the world of free and open source software (FOSS), and some of the common technologies, practices, and principles associated with free and open source software.

This guide was initially aimed at work experience students and summer interns. However, the document is in fact equally applicable to anyone who is new to free and open source software, no matter their prior experience in software or IT in general. _How to Git going in FOSS_ is hosted on GitLab and consists of several repositories, each designed to be a self-guided walkthrough.

Our guide begins with a general introduction to FOSS, including explanations of the history of GNU/Linux, how to use [Git][4] (as well as Git hosting services such as GitLab), and how to use a text editor. The document then moves on to exercises that show the reader how to implement some of the things they’ve just learned.

_How to Git going in FOSS_ is fully public and available for anyone to try. If you’re new to FOSS or know someone who is, then please have a read-through, and see what you think. If you have any feedback, feel free to raise an issue on GitLab. And, of course, we also welcome contributions. We’re keen to keep improving the guide however possible. One future improvement we plan to make is an additional exercise that is more complex than the existing two, such as potentially introducing the reader to [Continuous Integration][5].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/codethink-onboarding-process

作者:[Laurence Urhegyi][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/laurence-urhegyi
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-teacher-learner.png?itok=rMJqBN5G (Teacher or learner?)
[2]: https://www.codethink.co.uk/about.html
[3]: https://gitlab.com/ct-starter-guide
[4]: https://git-scm.com
[5]: https://en.wikipedia.org/wiki/Continuous_integration
@ -0,0 +1,84 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cloudflare's random number generator, robotics data visualization, npm token scanning, and more news)
[#]: via: (https://opensource.com/article/19/6/news-june-22)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)

Cloudflare's random number generator, robotics data visualization, npm token scanning, and more news
======
Catch up on the biggest open source headlines from the past two weeks.
![Weekly news roundup with TV][1]

In this edition of our open source news roundup, we take a look at Cloudflare's open source random number generator, more open source robotics data, new npm functionality, and more!

### Cloudflare announces open source random number generator project

Is there such a thing as a truly random number? Internet security and services provider Cloudflare thinks so. To prove it, the company has formed [The League of Entropy][2], an open source project to create a generator for random numbers.

The League consists of Cloudflare and "five other organisations — predominantly universities and security companies." They share random numbers, using an open source tool called [Drand][3] (short for Distributed Randomness Beacon Daemon). The numbers are then "composited into one random number" on the basis that "several random numbers are more random than one random number." While the League's random number generator isn't intended "for any kind of password or cryptographic seed generation," Cloudflare's CEO Matthew Prince points out that if "you need a way of having a known random source, this is a really valuable tool."

### Cruise open sources robotics data analysis tool

Projects involved in creating self-driving vehicles generate petabytes of data. And with amounts of data that large comes the challenge of quickly and effectively analyzing it. To make the task easier, General Motors subsidiary Cruise has made its Webviz data visualization tool "[freely available to developers][4] in need of a modular robotics analysis solution."

Webviz "takes as input any bag file (the message format used by the popular Robot Operating System) and outputs charts and graphs." It "contains a collection of general panels (which visualize data) applicable to most robotics developers," said Esther Weon, a software engineer at Cruise. The company also plans to "release a public API that’ll allow developers to build custom panels themselves."

The code for Webviz is [available on GitHub][5], where you can download or contribute to the project.

### npm provides more security

The team behind npm, the site providing JavaScript package hosting, has a new collaboration with GitHub to automatically scan for exposed tokens that could give hackers access that doesn't belong to them. The project includes handy automatic revocation of leaked credentials if they are still valid. This could drastically reduce vulnerabilities in the JavaScript community. For instructions on how to participate, see the [original article][6].

Note that this news was found via the [Changelog news][7].

### Better end of life tracking via open source

A new project, [endoflife.date][8], aims to overcome the complexity of end of life (EOL) announcements for software. It's part tracker, part public announcement on what good documentation looks like for software. As the README states: "The reason this site exists is because this information is very often hidden away. If you're releasing something on a regular basis:

  1. List only supported releases.
  2. Give EoL dates/policy if possible.
  3. Hide unsupported releases behind a few extra clicks.
  4. Mention security/active release difference if needed."

Check out the [source code][9] for more information.

### In other news

  * [Medicine needs to embrace open source][10]
  * [Using geospatial data to create safer roads][11]
  * [Embracing open source could be a big competitive advantage for businesses][12]

_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/news-june-22

作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i (Weekly news roundup with TV)
[2]: https://thenextweb.com/dd/2019/06/17/cloudflares-new-open-source-project-helps-anyone-obtain-truly-random-numbers/
[3]: https://github.com/dedis/drand
[4]: https://venturebeat.com/2019/06/18/cruise-open-sources-webview-a-tool-for-robotics-data-analysis/
[5]: https://github.com/cruise-automation/webviz
[6]: https://blog.npmjs.org/post/185680936500/protecting-package-publishers-npm-token-security
[7]: https://changelog.com/news/npm-token-scanning-extending-to-github-NAoe
[8]: https://endoflife.date/
[9]: https://github.com/captn3m0/endoflife.date
[10]: https://www.zdnet.com/article/medicine-needs-to-embrace-open-source/
[11]: https://itbrief.co.nz/story/using-geospatial-data-to-create-safer-roads
[12]: https://www.fastcompany.com/90364152/embracing-open-source-could-be-a-big-competitive-advantage-for-businesses
@ -0,0 +1,199 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An Ubuntu User’s Review Of Dell XPS 13 Ubuntu Edition)
[#]: via: (https://itsfoss.com/dell-xps-13-ubuntu-review)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

An Ubuntu User’s Review Of Dell XPS 13 Ubuntu Edition
======

_**Brief: Sharing my impressions and experience of the Dell XPS 13 Kaby Lake Ubuntu edition after using it for over three months.**_

During the Black Friday sale last year, I bit the bullet and ordered myself a [Dell XPS 13][1] with the new [Intel Kaby Lake processor][2]. It got delivered in the second week of December and if you [follow It’s FOSS on Facebook][3], you might have seen the [live unboxing][4].

Though I was tempted to review the Dell XPS 13 Ubuntu edition almost immediately, I knew it wouldn’t be fair. A brand new system will, of course, feel good and work smoothly.

But that’s not the real experience. The real experience of any system comes after weeks, if not months, of use. That’s the reason I held myself back and waited three months to review the Dell XPS Kaby Lake Ubuntu edition.

### Dell XPS 13 Ubuntu Edition Review

Before we see what’s hot and what’s not in the latest version of the Dell XPS 13, I should tell you that I was using an Acer R13 ultrabook before this. So I may compare the new Dell system with the older Acer one.

![Dell XPS 13 Ubuntu Edition System Settings][5]

The Dell XPS 13 has several versions based on the processor. The one I am reviewing is the Dell XPS 13 MLK (9360). It has an i5-7200U 7th generation processor. Since I hardly used the touch screen on the Acer Aspire R13, I chose to go with the non-touch version of the XPS. This decision also saved me a couple of hundred euros.

It has 8 GB of LPDDR3 1866MHz RAM and a 256 GB PCIe SSD. Graphics is Intel HD. On the connectivity side, it’s got Killer 1535 Wi-Fi 802.11ac 2×2 and Bluetooth 4.1. The screen is InfinityEdge Full HD (1920 x 1080).

Now that you know what kind of hardware we’ve got here, let’s see what works and what sucks.
#### Look and feel

![Dell XPS 13 Kaby Lake Ubuntu Edition][6]

At 13.3″, the Dell XPS 13 looks even smaller than a regular 13.3″ laptop, thanks to its nearly non-existent bezel, the specialty of the InfinityEdge display. It is light as a feather, weighing just under 1.23 kg.

The outer surface is metallic, not very shiny but a decent aluminum look. On the interior, the palm rest is made of carbon fiber, which is very comfortable to rest your palms on. Unlike the MacBook Air, which uses metallic palm rests, the carbon fiber ones are friendlier, especially in winter.

It is almost a centimeter and a half thick at its thickest part (around the hinges). This too adds to the elegance of the XPS 13.

Overall, the Dell XPS 13 has a compact, elegant body.

#### Keyboard and touchpad

The keyboard and touchpad blend well with the carbon fiber interior. The keys are smooth, with springs underneath (perhaps), and give a rich feel while typing. All of the important keys are present and are not tiny in size, something you might be worried about considering the overall tiny size of the XPS 13.

Oh, and the keyboard has backlight support, which adds to the rich feel of this expensive laptop.

While the keyboard is a great experience, the same cannot be said about the touchpad. In fact, the touchpad is the weakest part, and it mars the otherwise good experience of the XPS 13.

The touchpad feels cheap because it makes an irritating sound when you tap on the right side, as if it’s hollow underneath. This is [something that has been noticed in earlier versions of the XPS 13][7] but hasn’t been given enough consideration to be fixed. This is something you do not expect from a product at such a price.

Also, touchpad scrolling on websites is hideous. It is also not suitable for pixel-level work because fine adjustments are difficult to make.

#### Ports

The Dell XPS 13 has two USB 3.0 ports, one of them with PowerShare. If you did not know, [USB 3.0 PowerShare][8] ports allow you to charge external devices even when your system is turned off.

![Dell XPS 13 Kaby Lake Ubuntu edition ports][9]

It also has a [Thunderbolt][10] port (which doubles as a [USB Type-C port][11]). It doesn’t have an HDMI port, Ethernet port, or VGA port. However, all three can be used via the Thunderbolt port and external adapters (sold separately).

![Dell XPS 13 Kaby Lake Ubuntu edition ports][12]

It also has an SD card reader and a headphone jack. In addition to all these, there is an [anti-theft slot][13] (a common security practice in enterprises).
#### Display

The model I have packs 1920x1080 pixels. It’s full HD and the display quality is on par. It perfectly displays high-definition pictures and 1080p video files.

I cannot compare it with the [QHD model][14] as I never used it. But considering that there is not enough 4K content for now, a full HD display should be sufficient for the next few years.

#### Sound

Compared to the Acer R13, the XPS 13 has better audio quality. Even the maximum volume is louder than that of the Acer R13. The dual speakers give a nice stereo effect.

#### Webcam

The weirdest part of this Dell XPS 13 review comes now. We have all become accustomed to seeing the webcam in the top-middle position on any laptop. But this is not the case here.

The XPS 13 puts the webcam in the bottom left corner of the laptop. This is done to keep the bezel as thin as possible. But it creates a problem.

![Image captured with laptop screen at 90 degrees][15]

When you video chat with someone, it is natural to look straight ahead. With a top-middle webcam, your face is in direct line with the camera. But with the webcam in the bottom left, it looks like those weird accidental selfies you take with the front camera of your smartphone. Heck, people on the other side might see the inside of your nostrils.

#### Battery

Battery life is the strongest point of the Dell XPS 13. While Dell claims an astounding 21-hour battery life, in my experience it smoothly gives 8-10 hours. This is when I watch movies, browse the internet, and do other regular stuff.

There is one strange thing that I noticed, though. It charges pretty quickly up to 90%, but the charging slows down afterward. And it almost never goes beyond 98%.

The battery indicator turns red when the battery status falls below 30%, and it starts displaying notifications if the battery goes below 10%. There is a small light indicator under the touchpad that turns yellow when the battery is low and turns white when the charger is plugged in.

#### Overheating

I have previously written about ways to [reduce laptop overheating in Linux][16]. Thankfully, so far, I haven’t needed to employ those tricks.

The Dell XPS 13 remains surprisingly cool when you are using it on battery, even over long runs. The bottom does get a little warm when you use it while charging.

Overall, the XPS 13 manages overheating very well.
#### The Ubuntu experience with Dell XPS 13

So far we have seen fairly generic aspects of the Dell XPS 13. Let’s talk about how good a Linux laptop it is.

Until now, I used to manually [install Linux on Windows laptops][17]. This is the first Linux laptop I ever bought. I would also like to mention the awesome first-boot animation of Dell’s Ubuntu laptop. Here’s a YouTube video of the same:

One thing I would like to mention here is that Dell never displays Ubuntu laptops on its website. You’ll have to search the website for Ubuntu, then you’ll see the Ubuntu editions. Also, the Ubuntu edition is cheaper by just 50 euros in comparison to its Windows counterpart, whereas I was expecting it to be at least 100 euros less.

Despite being an Ubuntu-preloaded laptop, the super key still comes with the Windows logo on it. It’s trivial, but I would have loved to see the Ubuntu logo on it.

Now talking about the Ubuntu experience, the first thing I noticed was that there were no hardware issues. Even the function and media keys work perfectly in Ubuntu, which is a pleasant surprise.

Dell has also added its own repository to the software sources to provide some Dell-specific tools. You can see the footprints of Dell in the entire system.

You might be interested to see how Dell partitioned the 256 GB of disk space. Let me show that to you.

![Default disk partition by Dell][18]

As you can see, there is 524 MB reserved for [EFI][19]. Then there is a 3.2 GB partition, perhaps for the factory restore image.

Dell is using a 17 GB swap partition, which is more than double the RAM size. It seems Dell didn’t put enough thought here because this is simply a waste of disk space, in my opinion. I would not have used [more than 11 GB of swap][20] here.
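
If you want to inspect the partitions and swap on your own machine, a couple of standard commands will do (output varies per system):

```
# list block devices and their mount points
$ lsblk
# show active swap areas and their sizes
$ swapon --show
```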

As I mentioned before, Dell adds a “restore to factory settings” option in the Grub menu. This is a nice little feature to have.

One thing which I don’t like in the XPS 13 Ubuntu edition is the long boot time. It takes an entire 23 seconds to reach the login screen after pressing the power button. I would expect it to be faster considering that it uses a PCIe SSD.
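
If you are curious where your own boot time goes, systemd ships a profiling tool (shown here as a sketch; the exact output depends on your system):

```
# total boot time, split by firmware/loader/kernel/userspace
$ systemd-analyze
# the slowest services during startup
$ systemd-analyze blame | head -n 5
```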

If it interests you, the XPS 13 had the Chromium and Google Chrome browsers installed by default instead of Firefox.

As far as my experience goes, I am fairly impressed with the Dell XPS 13 Ubuntu edition. It gives a smooth Ubuntu experience. The laptop seems to be a part of Ubuntu. Though it is an expensive laptop, I would say it is definitely worth the money.

To summarize, let’s see the good, the bad, and the ugly of the Dell XPS 13 Ubuntu edition.

#### The Good

  * Ultralight weight
  * Compact
  * Keyboard
  * Carbon fiber palm rest
  * Full hardware support for Ubuntu
  * Factory restore option for Ubuntu
  * Nice display and sound quality
  * Good battery life

#### The bad

  * Poor touchpad
  * A little pricey
  * Long boot time for an SSD-powered laptop
  * Windows key still present :P

#### The ugly

  * Weird webcam placement

How did you like the **Dell XPS 13 Ubuntu edition review** from an Ubuntu user’s point of view? Do you find it good enough to spend over a thousand bucks on? Do share your views in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/dell-xps-13-ubuntu-review

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://amzn.to/2ImVkCV
[2]: http://www.techradar.com/news/computing-components/processors/kaby-lake-intel-core-processor-7th-gen-cpu-news-rumors-and-release-date-1325782
[3]: https://www.facebook.com/itsfoss/
[4]: https://www.facebook.com/itsfoss/videos/810293905778045/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/02/Dell-XPS-13-Ubuntu-Edition-spec.jpg?resize=540%2C337&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/03/Dell-XPS-13-Ubuntu-review.jpeg?resize=800%2C600&ssl=1
[7]: https://www.youtube.com/watch?v=Yt5SkI0c3lM
[8]: http://www.dell.com/support/article/fr/fr/frbsdt1/SLN155147/usb-powershare-feature?lang=EN
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/03/Dell-Ubuntu-XPS-13-Kaby-Lake-ports-1.jpg?resize=800%2C435&ssl=1
[10]: https://en.wikipedia.org/wiki/Thunderbolt_(interface)
[11]: https://en.wikipedia.org/wiki/USB-C
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/03/Dell-Ubuntu-XPS-13-Kaby-Lake-ports-2.jpg?resize=800%2C325&ssl=1
[13]: http://accessories.euro.dell.com/sna/productdetail.aspx?c=ie&l=en&s=dhs&cs=iedhs1&sku=461-10169
[14]: https://recombu.com/mobile/article/quad-hd-vs-qhd-vs-4k-ultra-hd-what-does-it-all-mean_M20472.html
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/03/Dell-XPS-13-webcam-issue.jpg?resize=800%2C450&ssl=1
[16]: https://itsfoss.com/reduce-overheating-laptops-linux/
[17]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/03/Dell-XPS-13-Ubuntu-Edition-disk-partition.jpeg?resize=800%2C448&ssl=1
[19]: https://en.wikipedia.org/wiki/EFI_system_partition
[20]: https://itsfoss.com/swap-size/
129
sources/talk/20190331 Codecademy vs. The BBC Micro.md
Normal file
@ -0,0 +1,129 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Codecademy vs. The BBC Micro)
|
||||
[#]: via: (https://twobithistory.org/2019/03/31/bbc-micro.html)
|
||||
[#]: author: (Two-Bit History https://twobithistory.org)
|
||||
|
||||
Codecademy vs. The BBC Micro
|
||||
======
|
||||
|
||||
In the late 1970s, the computer, which for decades had been a mysterious, hulking machine that only did the bidding of corporate overlords, suddenly became something the average person could buy and take home. An enthusiastic minority saw how great this was and rushed to get a computer of their own. For many more people, the arrival of the microcomputer triggered helpless anxiety about the future. An ad from a magazine at the time promised that a home computer would “give your child an unfair advantage in school.” It showed a boy in a smart blazer and tie eagerly raising his hand to answer a question, while behind him his dim-witted classmates look on sullenly. The ad and others like it implied that the world was changing quickly and, if you did not immediately learn how to use one of these intimidating new devices, you and your family would be left behind.
|
||||
|
||||
In the UK, this anxiety metastasized into concern at the highest levels of government about the competitiveness of the nation. The 1970s had been, on the whole, an underwhelming decade for Great Britain. Both inflation and unemployment had been high. Meanwhile, a series of strikes put London through blackout after blackout. A government report from 1979 fretted that a failure to keep up with trends in computing technology would “add another factor to our poor industrial performance.”1 The country already seemed to be behind in the computing arena—all the great computer companies were American, while integrated circuits were being assembled in Japan and Taiwan.
|
||||
|
||||
In an audacious move, the BBC, a public service broadcaster funded by the government, decided that it would solve Britain’s national competitiveness problems by helping Britons everywhere overcome their aversion to computers. It launched the _Computer Literacy Project_ , a multi-pronged educational effort that involved several TV series, a few books, a network of support groups, and a specially built microcomputer known as the BBC Micro. The project was so successful that, by 1983, an editor for BYTE Magazine wrote, “compared to the US, proportionally more of Britain’s population is interested in microcomputers.”2 The editor marveled that there were more people at the Fifth Personal Computer World Show in the UK than had been to that year’s West Coast Computer Faire. Over a sixth of Great Britain watched an episode in the first series produced for the _Computer Literacy Project_ and 1.5 million BBC Micros were ultimately sold.3
|
||||
|
||||
[An archive][1] containing every TV series produced and all the materials published for the _Computer Literacy Project_ was put on the web last year. I’ve had a huge amount of fun watching the TV series and trying to imagine what it would have been like to learn about computing in the early 1980s. But what’s turned out to be more interesting is how computing was _taught_. Today, we still worry about technology leaving people behind. Wealthy tech entrepreneurs and governments spend lots of money trying to teach kids “to code.” We have websites like Codecademy that make use of new technologies to teach coding interactively. One would assume that this approach is more effective than a goofy ’80s TV series. But is it?
|
||||
|
||||
### The Computer Literacy Project
|
||||
|
||||
The microcomputer revolution began in 1975 with the release of [the Altair 8800][2]. Only two years later, the Apple II, TRS-80, and Commodore PET had all been released. Sales of the new computers exploded. In 1978, the BBC explored the dramatic societal changes these new machines were sure to bring in a documentary called “Now the Chips Are Down.”
|
||||
|
||||
The documentary was alarming. Within the first five minutes, the narrator explains that microelectronics will “totally revolutionize our way of life.” As eerie synthesizer music plays, and green pulses of electricity dance around a magnified microprocessor on screen, the narrator argues that the new chips are why “Japan is abandoning its ship building, and why our children will grow up without jobs to go to.” The documentary goes on to explore how robots are being used to automate car assembly and how the European watch industry has lost out to digital watch manufacturers in the United States. It castigates the British government for not doing more to prepare the country for a future of mass unemployment.
|
||||
|
||||
The documentary was supposedly shown to the British Cabinet.4 Several government agencies, including the Department of Industry and the Manpower Services Commission, became interested in trying to raise awareness about computers among the British public. The Manpower Services Commission provided funds for a team from the BBC’s education division to travel to Japan, the United States, and other countries on a fact-finding trip. This research team produced a report that cataloged the ways in which microelectronics would indeed mean major changes for industrial manufacturing, labor relations, and office work. In late 1979, it was decided that the BBC should make a ten-part TV series that would help regular Britons “learn how to use and control computers and not feel dominated by them.”5 The project eventually became a multimedia endeavor similar to the _Adult Literacy Project_ , an earlier BBC undertaking involving both a TV series and supplemental courses that helped two million people improve their reading.
|
||||
|
||||
The producers behind the _Computer Literacy Project_ were keen for the TV series to feature “hands-on” examples that viewers could try on their own if they had a microcomputer at home. These examples would have to be in BASIC, since that was the language (really the entire shell) used on almost all microcomputers. But the producers faced a thorny problem: Microcomputer manufacturers all had their own dialects of BASIC, so no matter which dialect they picked, they would inevitably alienate some large fraction of their audience. The only real solution was to create a new BASIC—BBC BASIC—and a microcomputer to go along with it. Members of the British public would be able to buy the new microcomputer and follow along without worrying about differences in software or hardware.
|
||||
|
||||
The TV producers and presenters at the BBC were not capable of building a microcomputer on their own. So they put together a specification for the computer they had in mind and invited British microcomputer companies to propose a new machine that met the requirements. The specification called for a relatively powerful computer because the BBC producers felt that the machine should be able to run real, useful applications. Technical consultants for the _Computer Literacy Project_ also suggested that, if it had to be a BASIC dialect that was going to be taught to the entire nation, then it had better be a good one. (They may not have phrased it exactly that way, but I bet that’s what they were thinking.) BBC BASIC would make up for some of BASIC’s usual shortcomings by allowing for recursion and local variables.6
|
||||
|
||||
The BBC eventually decided that a Cambridge-based company called Acorn Computers would make the BBC Micro. In choosing Acorn, the BBC passed over a proposal from Clive Sinclair, who ran a company called Sinclair Research. Sinclair Research had brought mass-market microcomputing to the UK in 1980 with the Sinclair ZX80. Sinclair’s new computer, the ZX81, was cheap but not powerful enough for the BBC’s purposes. Acorn’s new prototype computer, known internally as the Proton, would be more expensive but more powerful and expandable. The BBC was impressed. The Proton was never marketed or sold as the Proton because it was instead released in December 1981 as the BBC Micro, also affectionately called “The Beeb.” You could get a 16k version for £235 and a 32k version for £335.
|
||||
|
||||
In 1980, Acorn was an underdog in the British computing industry. But the BBC Micro helped establish the company’s legacy. Today, the world’s most popular microprocessor instruction set is the ARM architecture. “ARM” now stands for “Advanced RISC Machine,” but originally it stood for “Acorn RISC Machine.” ARM Holdings, the company behind the architecture, was spun out from Acorn in 1990.
|
||||
|
||||
![Picture of the BBC Micro.][3] _A bad picture of a BBC Micro, taken by me at the Computer History Museum
|
||||
in Mountain View, California._
|
||||
|
||||
### The Computer Programme
|
||||
|
||||
A dozen different TV series were eventually produced as part of the _Computer Literacy Project_ , but the first of them was a ten-part series known as _The Computer Programme_. The series was broadcast over ten weeks at the beginning of 1982. A million people watched each week-night broadcast of the show; a quarter million watched the reruns on Sunday and Monday afternoon.
|
||||
|
||||
The show was hosted by two presenters, Chris Serle and Ian McNaught-Davis. Serle plays the neophyte while McNaught-Davis, who had professional experience programming mainframe computers, plays the expert. This was an inspired setup. It made for [awkward transitions][4]—Serle often goes directly from a conversation with McNaught-Davis to a bit of walk-and-talk narration delivered to the camera, and you can’t help but wonder whether McNaught-Davis is still standing there out of frame or what. But it meant that Serle could voice the concerns that the audience would surely have. He can look intimidated by a screenful of BASIC and can ask questions like, “What do all these dollar signs mean?” At several points during the show, Serle and McNaught-Davis sit down in front of a computer and essentially pair program, with McNaught-Davis providing hints here and there while Serle tries to figure it out. It would have been much less relatable if the show had been presented by a single, all-knowing narrator.
|
||||
|
||||
The show also made an effort to demonstrate the many practical applications of computing in the lives of regular people. By the early 1980s, the home computer had already begun to be associated with young boys and video games. The producers behind _The Computer Programme_ sought to avoid interviewing “impressively competent youngsters,” as that was likely “to increase the anxieties of older viewers,” a demographic that the show was trying to attract to computing.7 In the first episode of the series, Gill Nevill, the show’s “on location” reporter, interviews a woman that has bought a Commodore PET to help manage her sweet shop. The woman (her name is Phyllis) looks to be 60-something years old, yet she has no trouble using the computer to do her accounting and has even started using her PET to do computer work for other businesses, which sounds like the beginning of a promising freelance career. Phyllis says that she wouldn’t mind if the computer work grew to replace her sweet shop business since she enjoys the computer work more. This interview could instead have been an interview with a teenager about how he had modified _Breakout_ to be faster and more challenging. But that would have been encouraging to almost nobody. On the other hand, if Phyllis, of all people, can use a computer, then surely you can too.
|
||||
|
||||
While the show features lots of BASIC programming, what it really wants to teach its audience is how computing works in general. The show explains these general principles with analogies. In the second episode, there is an extended discussion of the Jacquard loom, which accomplishes two things. First, it illustrates that computers are not based only on magical technology invented yesterday—some of the foundational principles of computing go back two hundred years and are about as simple as the idea that you can punch holes in card to control a weaving machine. Second, the interlacing of warp and weft threads is used to demonstrate how a binary choice (does the weft thread go above or below the warp thread?) is enough, when repeated over and over, to produce enormous variation. This segues, of course, into a discussion of how information can be stored using binary digits.
|
||||
|
||||
Later in the show there is a section about a steam organ that plays music encoded in a long, segmented roll of punched card. This time the analogy is used to explain subroutines in BASIC. Serle and McNaught-Davis lay out the whole roll of punched card on the floor in the studio, then point out the segments where it looks like a refrain is being repeated. McNaught-Davis explains that a subroutine is what you would get if you cut out those repeated segments of card and somehow added an instruction to go back to the original segment that played the refrain for the first time. This is a brilliant explanation and probably one that stuck around in people’s minds for a long time afterward.
|
||||
|
||||
I’ve picked out only a few examples, but I think in general the show excels at demystifying computers by explaining the principles that computers rely on to function. The show could instead have focused on teaching BASIC, but it did not. This, it turns out, was very much a conscious choice. In a retrospective written in 1983, John Radcliffe, the executive producer of the _Computer Literacy Project_, wrote the following:

> If computers were going to be as important as we believed, some genuine understanding of this new subject would be important for everyone, almost as important perhaps as the capacity to read and write. Early ideas, both here and in America, had concentrated on programming as the main route to computer literacy. However, as our thinking progressed, although we recognized the value of “hands-on” experience on personal micros, we began to place less emphasis on programming and more on wider understanding, on relating micros to larger machines, encouraging people to gain experience with a range of applications programs and high-level languages, and relating these to experience in the real world of industry and commerce…. Our belief was that once people had grasped these principles, at their simplest, they would be able to move further forward into the subject.

Later, Radcliffe writes, in a similar vein:

> There had been much debate about the main explanatory thrust of the series. One school of thought had argued that it was particularly important for the programmes to give advice on the practical details of learning to use a micro. But we had concluded that if the series was to have any sustained educational value, it had to be a way into the real world of computing, through an explanation of computing principles. This would need to be achieved by a combination of studio demonstration on micros, explanation of principles by analogy, and illustration on film of real-life examples of practical applications. Not only micros, but mini computers and mainframes would be shown.

I love this, particularly the part about mini-computers and mainframes. The producers behind _The Computer Programme_ aimed to help Britons get situated: Where had computing been, and where was it going? What can computers do now, and what might they do in the future? Learning some BASIC was part of answering those questions, but knowing BASIC alone was not seen as enough to make someone computer literate.

### Computer Literacy Today

If you google “learn to code,” the first result you see is a link to Codecademy’s website. If there is a modern equivalent to the _Computer Literacy Project_, something with the same reach and similar aims, then it is Codecademy.

“Learn to code” is Codecademy’s tagline. I don’t think I’m the first person to point this out—in fact, I probably read this somewhere and I’m now ripping it off—but there’s something revealing about using the word “code” instead of “program.” It suggests that the important thing you are learning is how to decode the code, how to look at a screen’s worth of Python and not have your eyes glaze over. I can understand why to the average person this seems like the main hurdle to becoming a professional programmer. Professional programmers spend all day looking at computer monitors covered in gobbledygook, so, if I want to become a professional programmer, I better make sure I can decipher the gobbledygook. But dealing with syntax is not the most challenging part of being a programmer, and it quickly becomes almost irrelevant in the face of much bigger obstacles. Also, armed only with knowledge of a programming language’s syntax, you may be able to _read_ code but you won’t be able to _write_ code to solve a novel problem.

I recently went through Codecademy’s “Code Foundations” course, which is the course that the site recommends you take if you are interested in programming (as opposed to web development or data science) and have never done any programming before. There are a few lessons in there about the history of computer science, but they are perfunctory and poorly researched. (Thank heavens for [this noble internet vigilante][5], who pointed out a particularly egregious error.) The main focus of the course is teaching you about the common structural elements of programming languages: variables, functions, control flow, loops. In other words, the course focuses on what you would need to know to start seeing patterns in the gobbledygook.

To be fair to Codecademy, they offer other courses that look meatier. But even courses such as their “Computer Science Path” course focus almost exclusively on programming and concepts that can be represented in programs. One might argue that this is the whole point—Codecademy’s main feature is that it gives you little interactive programming lessons with automated feedback. There also just isn’t enough room to cover more because there is only so much you can stuff into somebody’s brain in a little automated lesson. But the producers at the BBC tasked with kicking off the _Computer Literacy Project_ also had this problem; they recognized that they were limited by their medium and that “the amount of learning that would take place as a result of the television programmes themselves would be limited.”8 With similar constraints on the volume of information they could convey, they chose to emphasize general principles over learning BASIC. Couldn’t Codecademy replace a lesson or two with an interactive visualization of a Jacquard loom weaving together warp and weft threads?

I’m banging the drum for “general principles” loudly now, so let me just explain what I think they are and why they are important. There’s a book by J. Clark Scott about computers called _But How Do It Know?_ The title comes from the anecdote that opens the book. A salesman is explaining to a group of people that a thermos can keep hot food hot and cold food cold. A member of the audience, astounded by this new invention, asks, “But how do it know?” The joke of course is that the thermos is not perceiving the temperature of the food and then making a decision—the thermos is just constructed so that cold food inevitably stays cold and hot food inevitably stays hot. People anthropomorphize computers in the same way, believing that computers are digital brains that somehow “choose” to do one thing or another based on the code they are fed. But learning a few things about how computers work, even at a rudimentary level, takes the homunculus out of the machine. That’s why the Jacquard loom is such a good go-to illustration. It may at first seem like an incredible device. It reads punch cards and somehow “knows” to weave the right pattern! The reality is mundane: Each row of holes corresponds to a thread, and where there is a hole in that row the corresponding thread gets lifted. Understanding this may not help you do anything new with computers, but it will give you the confidence that you are not dealing with something magical. We should impart this sense of confidence to beginners as soon as we can.
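
(In that spirit, here is a minimal Python sketch of the loom’s logic. It is my own toy illustration, not anything from the show or from Scott’s book; the `CARDS` data and the `weave` function are invented for the example. Each punch card is just a row of booleans, and a hole simply lifts the corresponding warp thread.)

```
# Toy Jacquard loom: each punch card is a row of hole positions (True = hole).
CARDS = [
    (True, False, True, False, True, False, True, False),
    (False, True, False, True, False, True, False, True),
]

def weave(cards, passes=8):
    """Print one line of 'fabric' per pass of the weft, cycling through the cards."""
    for i in range(passes):
        row = cards[i % len(cards)]
        # A hole ('#') lifts the corresponding warp thread; no hole ('.') leaves it down.
        print("".join("#" if hole else "." for hole in row))

weave(CARDS)  # weaves a simple alternating pattern
```

Swap in different cards and the same loop weaves a different design; nothing in the machine ever “decides” anything.
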
Alas, it’s possible that the real problem is that nobody wants to learn about the Jacquard loom. Judging by how Codecademy emphasizes the professional applications of what it teaches, many people probably start using Codecademy because they believe it will help them “level up” their careers. They believe, not unreasonably, that the primary challenge will be understanding the gobbledygook, so they want to “learn to code.” And they want to do it as quickly as possible, in the hour or two they have each night between dinner and collapsing into bed. Codecademy, which after all is a business, gives these people what they are looking for—not some roundabout explanation involving a machine invented in the 18th century.

The _Computer Literacy Project_, on the other hand, is what a bunch of producers and civil servants at the BBC thought would be the best way to educate the nation about computing. I admit that it is a bit elitist to suggest we should laud this group of people for teaching the masses what they were incapable of seeking out on their own. But I can’t help but think they got it right. Lots of people first learned about computing using a BBC Micro, and many of these people went on to become successful software developers or game designers. [As I’ve written before][6], I suspect learning about computing at a time when computers were relatively simple was a huge advantage. But perhaps another advantage these people had is shows like _The Computer Programme_, which strove to teach not just programming but also how and why computers can run programs at all. After watching _The Computer Programme_, you may not understand all the gobbledygook on a computer screen, but you don’t really need to because you know that, whatever the “code” looks like, the computer is always doing the same basic thing. After a course or two on Codecademy, you understand some flavors of gobbledygook, but to you a computer is just a magical machine that somehow turns gobbledygook into running software. That isn’t computer literacy.

_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][7] on Twitter or subscribe to the [RSS feed][8] to make sure you know when a new post is out._

_Previously on TwoBitHistory…_

> FINALLY some new damn content, amirite?
>
> Wanted to write an article about how Simula brought us object-oriented programming. It did that, but early Simula also flirted with a different vision for how OOP would work. Wrote about that instead! <https://t.co/AYIWRRceI6>
>
> — TwoBitHistory (@TwoBitHistory) [February 1, 2019][9]

1. Robert Albury and David Allen, Microelectronics, report (1979). ↩

2. Gregg Williams, “Microcomputing, British Style”, Byte Magazine, 40, January 1983, accessed on March 31, 2019, <https://archive.org/stream/byte-magazine-1983-01/1983_01_BYTE_08-01_Looking_Ahead#page/n41/mode/2up>. ↩

3. John Radcliffe, “Toward Computer Literacy,” Computer Literacy Project Archive, 42, accessed March 31, 2019, [https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/media/Towards Computer Literacy.pdf][10]. ↩

4. David Allen, “About the Computer Literacy Project,” Computer Literacy Project Archive, accessed March 31, 2019, <https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/history>. ↩

5. Ibid. ↩

6. Williams, 51. ↩

7. Radcliffe, 11. ↩

8. Radcliffe, 5. ↩

--------------------------------------------------------------------------------

via: https://twobithistory.org/2019/03/31/bbc-micro.html

作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/
[2]: /2018/07/22/dawn-of-the-microcomputer.html
[3]: /images/beeb.jpg
[4]: https://twitter.com/TwoBitHistory/status/1112372000742404098
[5]: https://twitter.com/TwoBitHistory/status/1111305774939234304
[6]: /2018/09/02/learning-basic.html
[7]: https://twitter.com/TwoBitHistory
[8]: https://twobithistory.org/feed.xml
[9]: https://twitter.com/TwoBitHistory/status/1091148050221944832?ref_src=twsrc%5Etfw
[10]: https://computer-literacy-project.pilots.bbcconnectedstudio.co.uk/media/Towards%20Computer%20Literacy.pdf

@ -1,63 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (ninifly )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Edge computing is in most industries’ future)
[#]: via: (https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)

Edge computing is in most industries’ future
======
Nearly every industry can take advantage of edge computing in the journey to speed digital transformation efforts
![iStock][1]

The growth of edge computing is about to take a huge leap. Right now, companies are generating about 10% of their data outside a traditional data center or cloud. But within the next six years, that will increase to 75%, [according to Gartner][2].

That’s largely down to the need to process data emanating from devices, such as Internet of Things (IoT) sensors. Early adopters include:

* **Manufacturers:** Devices and sensors seem endemic to this industry, so it’s no surprise to see the need to find faster processing methods for the data produced. A recent [_Automation World_][3] survey found that 43% of manufacturers have deployed edge projects. Most popular use cases have included production/manufacturing data analysis and equipment data analytics.
* **Retailers:** Like most industries deeply affected by the need to digitize operations, retailers are being forced to innovate their customer experiences. To that end, these organizations are “investing aggressively in compute power located closer to the buyer,” [writes Dave Johnson][4], executive vice president of the IT division at Schneider Electric. He cites examples such as augmented-reality mirrors in fitting rooms that offer different clothing options without the consumer having to try on the items, and beacon-based heat maps that show in-store traffic.
* **Healthcare organizations:** As healthcare costs continue to escalate, this industry is ripe for innovation that improves productivity and cost efficiencies. Management consulting firm [McKinsey & Co. has identified][5] at least 11 healthcare use cases that benefit patients, the facility, or both. Two examples: tracking mobile medical devices for nursing efficiency as well as optimization of equipment, and wearable devices that track user exercise and offer wellness advice.

While these are strong use cases, as the edge computing market grows, so too will the number of industries adopting it.

**Getting the edge on digital transformation**

Faster processing at the edge fits perfectly into the objectives and goals of digital transformation — improving efficiencies, productivity, speed to market, and the customer experience. Here are just a few of the potential applications and industries that will be changed by edge computing:

**Agriculture:** Farmers and organizations already use drones to transmit field and climate conditions to watering equipment. Other applications might include monitoring and location tracking of workers, livestock, and equipment to improve productivity, efficiencies, and costs.

**Energy:** There are multiple potential applications in this sector that could benefit both consumers and providers. For example, smart meters help homeowners better manage energy use while reducing grid operators’ need for manual meter reading. Similarly, sensors on water pipes would detect leaks, while providing real-time consumption data.

**Financial services:** Banks are adopting interactive ATMs that quickly process data to provide better customer experiences. At the organizational level, transactional data can be more quickly analyzed for fraudulent activity.

**Logistics:** As consumers demand faster delivery of goods and services, logistics companies will need to transform mapping and routing capabilities to get real-time data, especially in terms of last-mile planning and tracking. That could involve street-, package-, and car-based sensors transmitting data for processing.

All industries have the potential for transformation, thanks to edge computing. But it will depend on how they address their computing infrastructure. Discover how to overcome any IT obstacles at [APC.com][6].

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html#tk.rss_all

作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-1019389496-100794424-large.jpg
[2]: https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/
[3]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
[4]: https://blog.schneider-electric.com/datacenter/2018/07/10/why-brick-and-mortar-retail-quickly-establishing-leadership-edge-computing/
[5]: https://www.mckinsey.com/industries/high-tech/our-insights/new-demand-new-markets-what-edge-computing-means-for-hardware-companies
[6]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp

@ -0,0 +1,82 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (17 predictions about 5G networks and devices)
[#]: via: (https://www.networkworld.com/article/3403358/17-predictions-about-5g-networks-and-devices.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

17 predictions about 5G networks and devices
======
Not surprisingly, the new Ericsson Mobility Report is bullish on the potential of 5G technology. Here’s a quick look at the most important numbers.
![Vertigo3D / Getty Images][1]

_“As market after market switches on 5G, we are at a truly momentous point in time. No previous generation of mobile technology has had the potential to drive economic growth to the extent that 5G promises. It goes beyond connecting people to fully realizing the Internet of Things (IoT) and the Fourth Industrial Revolution.”_ —The opening paragraph of the [June 2019 Ericsson Mobility Report][2]

Almost every significant technology advancement now goes through what [Gartner calls the “hype cycle.”][3] These days, everyone expects new technologies to be met with gushing optimism and dreamy visions of how they’re going to change the world in the blink of an eye. After a while, we all come to expect the vendors and the press to go overboard with excitement, at least until reality and disappointment set in when things don’t pan out exactly as expected.

**[ Also read: [The time of 5G is almost here][4] ]**

Even with all that in mind, though, Ericsson’s whole-hearted embrace of 5G in its Mobility Report is impressive. The optimism is backed up by lots of numbers, but they can be hard to tease out of the 36-page document. So, let’s recap some of the most important top-line predictions (with my comments at the end).

### Worldwide 5G growth projections

1. “More than 10 million 5G subscriptions are projected worldwide by the end of 2019.”
2. “[We] now expect there to be 1.9 billion 5G subscriptions for enhanced mobile broadband by the end of 2024. This will account for over 20 percent of all mobile subscriptions at that time. The peak of LTE subscriptions is projected for 2022, at around 5.3 billion subscriptions, with the number declining slowly thereafter.”
3. “In 2024, 5G networks will carry 35 percent of mobile data traffic globally.”
4. “5G can cover up to 65 percent of the world’s population in 2024.”
5. “NB-IoT and Cat-M technologies will account for close to 45 percent of cellular IoT connections in 2024.”
6. “By the end of 2024, nearly 35 percent of cellular IoT connections will be Broadband IoT, with 4G connecting the majority.” But 5G connections will support more advanced use cases.
7. “Despite challenging 5G timelines, device suppliers are expected to be ready with different band and architecture support in a range of devices during 2019.”
8. “Spectrum sharing … chipsets are currently in development and are anticipated to be in 5G commercial devices in late 2019.”
9. “[VoLTE][5] is the foundation for enabling voice and communication services on 5G devices. Subscriptions are expected to reach 2.1 billion by the end of 2019. … The number of VoLTE subscriptions is projected to reach 5.9 billion by the end of 2024, accounting for more than 85 percent of combined LTE and 5G subscriptions.”

![][6]

### Regional 5G projections

1. “In North America, … service providers have already launched commercial 5G services, both for fixed wireless access and mobile. … By the end of 2024, we anticipate close to 270 million 5G subscriptions in the region, accounting for more than 60 percent of mobile subscriptions.”
2. “In Western Europe … The momentum for 5G in the region was highlighted by the first commercial launch in April. By the end of 2024, 5G is expected to account for around 40 percent of mobile subscriptions.”
3. “In Central and Eastern Europe, … The first 5G subscriptions are expected in 2019, and will make up 15 percent of subscriptions in 2024.”
4. “In North East Asia, … the region’s 5G subscription penetration is projected to reach 47 percent [by the end of 2024].”
5. “[In India,] 5G subscriptions are expected to become available in 2022 and will represent 6 percent of mobile subscriptions at the end of 2024.”
6. “In the Middle East and North Africa, we anticipate commercial 5G deployments with leading communications service providers during 2019, and significant volumes in 2021. … Around 60 million 5G subscriptions are forecast for the end of 2024, representing 3 percent of total mobile subscriptions.”
7. “Initial 5G commercial devices are expected in the [South East Asia and Oceania] region during the first half of 2019. By the end of 2024, it is anticipated that almost 12 percent of subscriptions in the region will be for 5G.”
8. “In Latin America … the first 5G deployments will be possible in the 3.5GHz band during 2019. Argentina, Brazil, Chile, Colombia, and Mexico are anticipated to be the first countries in the region to deploy 5G, with increased subscription uptake forecast from 2020. By the end of 2024, 5G is set to make up 7 percent of mobile subscriptions.”

### Is 5G really so inevitable?

Considered individually, these predictions all seem perfectly reasonable. Heck, 10 million 5G subscriptions is only a drop in the global bucket. And rumors are already flying that Apple’s next round of iPhones will include 5G capability. Also, 2024 is still five years in the future, so why wouldn’t the faster connections drive impressive traffic stats? Similarly, North America and North East Asia will experience the fastest 5G penetration.

But when you look at them all together, these numbers project a sense of 5G inevitability that could well be premature. It will take a _lot_ of spending, by a lot of different parties—carriers, chip makers, equipment vendors, phone manufacturers, and consumers—to make this kind of growth a reality.

I’m not saying 5G won’t take over the world. I’m just saying that when so many things have to happen in a relatively short time, there are a lot of opportunities for the train to jump the tracks. Don’t be surprised if it takes longer than expected for 5G to turn into the worldwide default standard Ericsson—and everyone else—seems to think it will inevitably become.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403358/17-predictions-about-5g-networks-and-devices.html

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/5g_wireless_technology_network_connections_by_credit-vertigo3d_gettyimages-1043302218_3x2-100787550-large.jpg
[2]: https://www.ericsson.com/assets/local/mobility-report/documents/2019/ericsson-mobility-report-june-2019.pdf
[3]: https://www.gartner.com/en/research/methodologies/gartner-hype-cycle
[4]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[5]: https://www.gsma.com/futurenetworks/technology/volte/
[6]: https://images.idgesg.net/images/article/2019/06/ericsson-mobility-report-june-2019-graph-100799481-large.jpg
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

@ -0,0 +1,144 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why your workplace arguments aren't as effective as you'd like)
[#]: via: (https://opensource.com/open-organization/19/6/barriers-productive-arguments)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland/users/ron-mcfarland)

Why your workplace arguments aren't as effective as you'd like
======
Open organizations rely on open conversations. These common barriers to productive argument often get in the way.
![Arrows pointing different directions][1]

Transparent, frank, and often contentious arguments are part of life in an open organization. But how can we be sure those conversations are _productive_—not _destructive_?

This is the second installment of a two-part series on how to argue and actually achieve something. In the [first article][2], I mentioned what arguments are (and are not), according to author Sinnott-Armstrong in his book _Think Again: How to Reason and Argue._ I also offered some suggestions for making arguments as productive as possible.

In this article, I'll examine three barriers to productive arguments that Sinnott-Armstrong elaborates in his book: incivility, polarization, and language issues. Finally, I'll explain his suggestions for addressing those barriers.

### Incivility

"Incivility" has become a social concern in recent years. Consider this: As a tactic in arguments, incivility _can_ have an effect in certain situations—and that's why it's a common strategy. Sinnott-Armstrong notes that incivility:

* **Attracts attention:** Incivility draws people's attention in one direction, sometimes to misdirect attention from or outright obscure other issues. It redirects people's attention to shocking statements. Incivility, exaggeration, and extremism can increase the size of an audience.
* **Energizes:** Sinnott-Armstrong writes that seeing someone being uncivil on a topic of interest can generate energy from a state of powerlessness.
* **Stimulates memory:** Forgetting shocking statements is difficult; they stick in our memory more easily than statements that are less surprising to us.
* **Excites the powerless:** The groups most likely to believe and invest in someone being uncivil are those that feel they're powerless and being treated unfairly.

Unfortunately, incivility as a tactic in arguments has its costs. One such cost is polarization.

### Polarization

Sinnott-Armstrong writes about six forms of polarization:

* **Distance:** If two people's or groups' views are far apart according to some relevant scale, and they have significant disagreements and little common ground, then they're polarized.
* **Differences:** If two people or groups have fewer values and beliefs _in common_ than they _don't have in common_, then they're polarized.
* **Antagonism:** Groups are more polarized the more they feel hatred, disdain, fear, or other negative emotions toward other people or groups.
* **Incivility:** Groups tend to be more polarized when they talk more negatively about people of the other groups.
* **Rigidity:** Groups tend to be more polarized when they treat their values as indisputable and will not compromise.
* **Gridlock:** Groups tend to be more polarized when they're unable to cooperate and work together toward common goals.

And I'll add one more form of polarization to Sinnott-Armstrong's list:

* **Non-disclosure:** Groups tend to be more polarized when one or both of the groups refuses to share valid, verifiable information—or when they distract each other with useless or irrelevant information. One of the ways people polarize is by not talking to each other and withholding information. Similarly, they talk about subjects that distract from the issue at hand. Some issues are difficult to talk about, but discussing them allows solutions to be explored.

### Language issues

Language issues can be argument-stoppers, Sinnott-Armstrong says. In particular, he outlines the following language-related barriers to productive argument.

* **Guarding:** Using words like "all" can make a statement unbelievable; words like "sometimes" can make a statement too vague.
* **Assuring:** Simply stating "trust me, I know what I'm talking about," without offering evidence that this is the case, can impede arguments.
* **Evaluating:** Offering an evaluation of something—like saying "It is good"—without any supporting reasoning.
* **Discounting:** This involves anticipating what the other person will say and attempting to weaken it as much as possible by framing an argument in a negative way. (Contrast these two sentences, for example: "Ramona is smart but boring" and "Ramona is boring but smart." The difference is subtle, but you'd probably want to spend less time with Ramona if you heard the first statement about her than if you heard the second.)

Identifying discussion-stoppers like these can help you avoid shutting down a discussion that would otherwise achieve beneficial outcomes. In addition, Sinnott-Armstrong specifically draws readers' attention to two other language problems that can kill productive debates: vagueness and ambiguity.

* **Vagueness:** This occurs when a word or sentence is not precise enough, leaving many ways to interpret its true meaning and intent, which leads to confusion. Consider the sentence "It is big." "It" must be defined if it's not already obvious to everyone in the conversation. And a word like "big" must be clarified through comparison to something that everyone has agreed upon.
* **Ambiguity:** This occurs when a sentence could have two distinct meanings. For example: "Police killed man with axe." Who was holding the axe, the man or the police? "My neighbor had a friend for dinner." Did your neighbor invite a friend to share a meal—or did she eat her friend?

### Overcoming barriers

To help readers avoid these common roadblocks to productive arguments, Sinnott-Armstrong recommends a simple, four-step process for evaluating another person's argument.

1. **Observation:** First, observe a stated opinion and its related evidence to determine the precise nature of the claim. This might require you to ask some questions for clarification (you'll remember I employed this technique when arguing with my belligerent uncle, which I described [in the first article of this series][2]).
2. **Hypothesis:** Develop a hypothesis about the argument. In this case, the hypothesis should be an inference based on generally acceptable standards (for more on the structure of arguments themselves, also see [the first part of this series][2]).
3. **Comparison:** Compare that hypothesis with others and evaluate which is more accurate. More important issues will require you to conduct more comparisons. In other cases, premises are so obvious that no further explanation is required.
4. **Conclusion:** From the comparison analysis, reach a conclusion about whether your hypothesis about a competing argument is correct.

In many cases, the question is not whether a particular claim is _correct_ or _incorrect_, but whether it is _believable._ So Sinnott-Armstrong also offers a four-step "believability test" for evaluating claims of this type.

1. **Expertise:** Does the person presenting the argument have authority in an appropriate field? Being a specialist in one field doesn't necessarily make that person an expert in another.
2. **Motive:** Would self-interest or other personal motives compel a person to withhold information or make false statements? To confirm one's statements, it might be wise to seek a totally separate, independent authority for confirmation.
3. **Sources:** Are the sources the person offers as evidence of a claim recognized experts? Do those sources have the expertise on the specific issue addressed?
4. **Agreement:** Is there agreement among many experts within the same specialty?

### Let's argue

When I was a university student, I would usually sit toward the front of the classroom. When I didn't understand something, I would start asking questions for clarification. Everyone else in the class would just sit silently, saying nothing. After class, however, other students would come up to me and thank me for asking those questions—because everyone else in the room was confused, too.

Clarification is a powerful act—not just in the classroom, but during arguments anywhere. Building an organizational culture in which people feel empowered to ask for clarification is critical for productive arguments (I've [given presentations on this topic][3] before). If members have the courage to clarify premises, and they can do so in an environment where others don't think they're being belligerent, then this might be the key to a successful and productive argument.

If you really want to strengthen your ability to argue, find someone that totally disagrees with you but wants to learn and understand your beliefs. Then, practice some of Sinnott-Armstrong's suggestions. Arguing productively will enhance [transparency, inclusivity, and collaboration][4] in your organization—leading to a more open culture.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/6/barriers-productive-arguments

作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ron-mcfarland/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/directions-arrows.png?itok=EE3lFewZ (Arrows pointing different directions)
[2]: https://opensource.com/open-organization/19/5/productive-arguments
[3]: https://www.slideshare.net/RonMcFarland1/argue-successfully-achieve-something
[4]: https://opensource.com/open-organization/resources/open-org-definition

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco connects with IBM in to simplify hybrid cloud deployment)
[#]: via: (https://www.networkworld.com/article/3403363/cisco-connects-with-ibm-in-to-simplify-hybrid-cloud-deployment.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco connects with IBM to simplify hybrid cloud deployment
======
Cisco and IBM are working to develop a hybrid-cloud architecture that melds Cisco’s data-center, networking and analytics platforms with IBM’s cloud offerings.
![Ilze Lucero \(CC0\)][1]

Cisco and IBM said the companies would meld their [data-center][2] and cloud technologies to help customers more easily and securely build and support on-premises and [hybrid-cloud][3] applications.

Cisco, IBM Cloud and IBM Global Technology Services (the professional services business of IBM) said they will work to develop a hybrid-cloud architecture that melds Cisco’s data-center, networking and analytics platforms with IBM’s cloud offerings. IBM's contribution includes a heavy emphasis on Kubernetes-based offerings such as Cloud Foundry and Cloud Private, as well as a catalog of [IBM enterprise software][4] such as WebSphere and open source software such as OpenWhisk, Knative, Istio and Prometheus.

**[ Read also: [How to plan a software-defined data-center network][5] ]**

Cisco said customers deploying its Virtual Application Centric Infrastructure (ACI) technologies can now extend that network fabric from on-premises to the IBM Cloud. ACI is Cisco’s [software-defined networking (SDN)][6] data-center package, but it also delivers the company’s Intent-Based Networking technology, which brings customers the ability to automatically implement network and policy changes on the fly and ensure data delivery.

[IBM said Cisco ACI Virtual Pod][7] (vPOD) software can now run on IBM Cloud bare-metal servers. “vPOD consists of virtual spines and leafs and supports up to eight instances of ACI Virtual Edge. These elements are often deployed on VMware services on the IBM Cloud to support hybrid deployments from on-premises environments to the IBM Cloud,” the company stated.

“Through a new relationship with IBM’s Global Technology Services team, customers can implement Virtual ACI on their IBM Cloud,” Cisco’s Kaustubh Das, vice president of strategy and product development, wrote in a [blog][8] about the agreement. “Virtual ACI is a software-only solution that you can deploy wherever you have at least two servers on which you can run the VMware ESXi hypervisor. In the future, the ability to deploy IBM Cloud Pak for Applications in a Cisco ACI environment will also be supported,” he stated.

IBM’s prepackaged Cloud Paks include a secured Kubernetes container and containerized IBM middleware designed to let customers quickly spin up enterprise-ready containers, Big Blue said.

Additionally, IBM said it would add support for its IBM Cloud Private, which manages Kubernetes and other containers, on Cisco HyperFlex and HyperFlex Edge hyperconverged infrastructure (HCI) systems. HyperFlex is Cisco's HCI that offers computing, networking and storage resources in a single system. The package can be managed via Cisco’s Intersight software-as-a-service cloud management platform that offers a central dashboard of HyperFlex operations.

IBM said it was adding HyperFlex support to its IBM Cloud Pak for Applications as well.

The Paks include IBM Multicloud Manager, which is a Kubernetes-based platform that runs on the company’s [IBM Cloud Private][9] platform and lets customers manage and integrate workloads on clouds from other providers such as Amazon, Red Hat and Microsoft.

At the heart of Multicloud Manager is a dashboard interface for managing thousands of Kubernetes applications and huge volumes of data regardless of where in the organization they are located.

The idea is that Multicloud Manager lets operations and development teams get visibility of Kubernetes applications and components across the different clouds and clusters via a single control pane.

“With IBM Multicloud Manager, enterprises can have a single place to manage multiple clusters running across multiple on-premises, public and private cloud environments, providing consistent visibility, governance and automation from on-premises to the edge,” wrote IBM’s Evaristus Mainsah, general manager of IBM Cloud Private Ecosystem, in a [blog][7] about the relationship.

Distributed workloads can be pushed out and managed directly at the device at a much larger scale across multiple public clouds and on-premises locations. Visibility, compliance and governance are provided with extended MCM capabilities that will be available at the lightweight device layer, with a connection back to the central server/gateway, Mainsah stated.

In addition, Cisco’s AppDynamics can be tied in to monitor infrastructure and business performance, Cisco stated. Cisco recently added [AppDynamics for Kubernetes][10], which Cisco said will reduce the time it takes to identify and troubleshoot performance issues across Kubernetes clusters.

The companies said the hybrid-cloud architecture they envision will help reduce the complexity of setting up and managing hybrid-cloud environments.

Cisco and IBM are both aggressively pursuing cloud customers. Cisco [ramped up][11] its own cloud presence in 2018 with all manner of support stemming from an [agreement with Amazon Web Services][12] (AWS) that will offer enterprise customers an integrated platform to help them more simply build, secure and connect [Kubernetes][13] clusters across private [data centers][14] and the AWS cloud.

Cisco and Google in [April expanded their joint cloud-development][15] activities to help customers more easily build secure multicloud and hybrid applications everywhere from on-premises data centers to public clouds.

IBM is waiting to close [its $34 billion Red Hat deal][16] that it expects will give it a huge presence in the hotly contested hybrid-cloud arena and increase its inroads to competitors – Google, Amazon and Microsoft among others. Gartner says that market will be worth $240 billion by next year.

Join the Network World communities on [Facebook][17] and [LinkedIn][18] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403363/cisco-connects-with-ibm-in-to-simplify-hybrid-cloud-deployment.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/03/cubes_blocks_squares_containers_ilze_lucero_cc0_via_unsplash_1200x800-100752172-large.jpg
[2]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[3]: https://www.networkworld.com/article/3233132/what-is-hybrid-cloud-computing.html
[4]: https://www.networkworld.com/article/3340043/ibm-marries-on-premises-private-and-public-cloud-data.html
[5]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[6]: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html
[7]: https://www.ibm.com/blogs/cloud-computing/2019/06/18/ibm-cisco-collaborating-hybrid-cloud-modern-enterprise/
[8]: https://blogs.cisco.com/datacenter/cisco-and-ibm-cloud-announce-hybrid-cloud-partnership
[9]: https://www.ibm.com/cloud/private
[10]: https://blog.appdynamics.com/product/kubernetes-monitoring-with-appdynamics/
[11]: https://www.networkworld.com/article/3322937/lan-wan/what-will-be-hot-for-cisco-in-2019.html?nsdr=true
[12]: https://www.networkworld.com/article/3319782/cloud-computing/cisco-aws-marriage-simplifies-hybrid-cloud-app-development.html?nsdr=true
[13]: https://www.networkworld.com/article/3269848/cloud-computing/cisco-embraces-kubernetes-pushing-container-software-into-mainstream.html
[14]: https://www.networkworld.com/article/3223692/data-center/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[15]: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html
[16]: https://www.networkworld.com/article/3316960/ibm-says-buying-red-hat-makes-it-the-biggest-in-hybrid-cloud.html
[17]: https://www.facebook.com/NetworkWorld/
[18]: https://www.linkedin.com/company/network-world

@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco issues critical security warnings on SD-WAN, DNA Center)
[#]: via: (https://www.networkworld.com/article/3403349/cisco-issues-critical-security-warnings-on-sd-wan-dna-center.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco issues critical security warnings on SD-WAN, DNA Center
======
Vulnerabilities in Cisco's SD-WAN and DNA Center software top a list of nearly 30 security advisories issued by the company.
![zajcsik \(CC0\)][1]

Cisco has released two critical warnings about security issues with its SD-WAN and DNA Center software packages.

The worse of the two, with a Common Vulnerability Scoring System (CVSS) rating of 9.3 out of 10, is a vulnerability in its [Digital Network Architecture][2] (DNA) Center software that could let an unauthenticated attacker connect an unauthorized network device to the subnet designated for cluster services.

**More about SD-WAN**

* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][3]
* [How to pick an off-site data-backup method][4]
* [SD-Branch: What it is and why you’ll need it][5]
* [What are the options for securing SD-WAN?][6]

A successful exploit could let an attacker reach internal services that are not hardened for external access, Cisco [stated][7]. The vulnerability is due to insufficient access restriction on ports necessary for system operation, and the company discovered the issue during internal security testing, Cisco stated.

Cisco DNA Center gives IT teams the ability to control access through policies using Software-Defined Access, automatically provision through Cisco DNA Automation, virtualize devices through Cisco Network Functions Virtualization (NFV), and lower security risks through segmentation and Encrypted Traffic Analysis.

This vulnerability affects Cisco DNA Center Software releases prior to 1.3, and it is fixed in version 1.3 and releases after that.

Cisco wrote that system updates are available from the Cisco cloud but not from the [Software Center][8] on Cisco.com. To upgrade to a fixed release of Cisco DNA Center Software, administrators can use the “System Updates” feature of the software.

A second critical warning – with a CVSS score of 7.8 – is a weakness in the command-line interface of the Cisco SD-WAN Solution that could let an authenticated local attacker elevate lower-level privileges to the root user on an affected device.

Cisco [wrote][9] that the vulnerability is due to insufficient authorization enforcement. An attacker could exploit this vulnerability by authenticating to the targeted device and executing commands that could lead to elevated privileges. A successful exploit could let the attacker make configuration changes to the system as the root user, the company stated.

This vulnerability affects a range of Cisco products running a release of the Cisco SD-WAN Solution prior to Releases 18.3.6, 18.4.1, and 19.1.0, including:

* vBond Orchestrator Software
* vEdge 100 Series Routers
* vEdge 1000 Series Routers
* vEdge 2000 Series Routers
* vEdge 5000 Series Routers
* vEdge Cloud Router Platform
* vManage Network Management Software
* vSmart Controller Software

Cisco said it has released free [software updates][10] that address the vulnerability described in this advisory. Cisco wrote that it fixed this vulnerability in Release 18.4.1 of the Cisco SD-WAN Solution.

The two critical warnings were included in a dump of [nearly 30 security advisories][11].

There were two other “High” impact-rated warnings involving the SD-WAN software.

One, a vulnerability in the vManage web-based UI (Web UI) of the Cisco SD-WAN Solution, could let an authenticated, remote attacker gain elevated privileges on an affected vManage device, Cisco [wrote][12].

The vulnerability is due to a failure to properly authorize certain user actions in the device configuration. An attacker could exploit this vulnerability by logging in to the vManage Web UI and sending crafted HTTP requests to vManage. A successful exploit could let attackers gain elevated privileges and make changes to the configuration that they would not normally be authorized to make, Cisco stated.

Another vulnerability in the vManage web-based UI could let an authenticated, remote attacker inject arbitrary commands that are executed with root privileges.

This exposure is due to insufficient input validation, Cisco [wrote][13]. An attacker could exploit this vulnerability by authenticating to the device and submitting crafted input to the vManage Web UI.

Both vulnerabilities affect Cisco vManage Network Management Software that is running a release of the Cisco SD-WAN Solution prior to Release 18.4.0, and Cisco has released free [software updates][10] to correct them.

Other high-rated vulnerabilities Cisco disclosed included:

* A [vulnerability][14] in the Cisco Discovery Protocol (CDP) implementation for the Cisco TelePresence Codec (TC) and Collaboration Endpoint (CE) Software could allow an unauthenticated, adjacent attacker to inject arbitrary shell commands that are executed by the device.
* A [weakness][15] in the internal packet-processing functionality of the Cisco StarOS operating system running on virtual platforms could allow an unauthenticated, remote attacker to cause an affected device to stop processing traffic, resulting in a denial of service (DoS) condition.
* A [vulnerability][16] in the web-based management interface of the Cisco RV110W Wireless-N VPN Firewall, Cisco RV130W Wireless-N Multifunction VPN Router, and Cisco RV215W Wireless-N VPN Router could allow an unauthenticated, remote attacker to cause a reload of an affected device, resulting in a denial of service (DoS) condition.

Cisco has [released software][10] fixes for those advisories as well.

Join the Network World communities on [Facebook][17] and [LinkedIn][18] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403349/cisco-issues-critical-security-warnings-on-sd-wan-dna-center.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/04/lightning_storm_night_gyorgy_karoly_toth_aka_zajcsik_cc0_via_pixabay_1200x800-100754504-large.jpg
[2]: https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html
[3]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[4]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[5]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[6]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-dnac-bypass
[8]: https://software.cisco.com/download/home
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-sdwan-privesca
[10]: https://tools.cisco.com/security/center/resources/security_vulnerability_policy.html#fixes
[11]: https://tools.cisco.com/security/center/publicationListing.x?product=Cisco&sort=-day_sir&limit=50#~Vulnerabilities
[12]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-sdwan-privilescal
[13]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-sdwan-cmdinj
[14]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-tele-shell-inj
[15]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-staros-asr-dos
[16]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190619-rvrouters-dos
[17]: https://www.facebook.com/NetworkWorld/
[18]: https://www.linkedin.com/company/network-world

@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (With Tableau, SaaS king Salesforce becomes a hybrid cloud company)
[#]: via: (https://www.networkworld.com/article/3403442/with-tableau-saas-king-salesforce-becomes-a-hybrid-cloud-company.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

With Tableau, SaaS king Salesforce becomes a hybrid cloud company
======
Once dismissive of software, Salesforce acknowledges the inevitability of the hybrid cloud.
![Martyn Williams/IDGNS][1]

I remember a time when people at Salesforce events would hand out pins that read “Software” inside a red circle with a slash through it. The High Priest of SaaS (a.k.a. CEO Marc Benioff) was so adamantly opposed to installed, on-premises software that his keynotes were always comical.

Now, Salesforce is prepared to [spend $15.7 billion to acquire Tableau Software][2], the leader in on-premises data analytics.

On the hell-freezes-over scale, this is up there with Microsoft embracing Linux or Apple PR people returning a phone call. Well, we know at least one of those has happened.

**[ Also read: [Hybrid Cloud: The time for adoption is upon us][3] | Stay in the know: [Subscribe and get daily newsletter updates][4] ]**

So, why would a company that is so steeped in the cloud, so anti-on-premises software, make such a massive purchase?

Partly it is because Benioff and company are finally coming to the same conclusion as most everyone else: The hybrid cloud, a mix of on-premises systems and public cloud, is the wave of the future, and pure cloud plays are in the minority.

The reality is that data is hybrid and does not sit in a single location, and Salesforce is finally acknowledging this, said Tim Crawford, president of Avoa, a strategic CIO advisory firm.

“I see the acquisition of Tableau by Salesforce as less about getting into the on-prem game as it is a reality of the world we live in. Salesforce needed a solid analytics tool that went well beyond their existing capability. Tableau was that tool,” he said.

**[ [Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][5] ]**

Salesforce also understands that it needs a better understanding of customers and the data insights that drive customer decisions. That data is both on-prem and in the cloud, Crawford noted. It is in Salesforce, other solutions, and the myriad of Excel spreadsheets spread across employee systems. Tableau crosses the hybrid boundaries and brings a straightforward way to visualize data.

Salesforce had analytics features as part of its SaaS platform, but they were geared around its own platform, whereas everyone uses Tableau and Tableau supports all manner of analytics.

“There’s a huge overlap between Tableau customers and Salesforce customers,” Crawford said. “The data is everywhere in the enterprise, not just in Salesforce. Salesforce does a great job with its own data, but Tableau does great with data in a lot of places because it’s not tied to one platform. So, it opens up where the data comes from and the insights you get from the data.”

Crawford said that once the deal is done and Tableau is under some deeper pockets, the organization may be able to innovate faster or do things it was unable to do prior. That hardly indicates Tableau was struggling, though. It pulled in [$1.16 billion in revenue][6] in 2018.

Crawford also expects Salesforce to push Tableau to open up new possibilities for customer insights by unlocking customer data inside and outside of Salesforce. One challenge for the two companies is to maintain that neutrality so that they don’t lose the ability to use Tableau for non-customer-centric activities.

“It’s a beautiful way to visualize large sets of data that have nothing to do with customer centricity,” he said.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403442/with-tableau-saas-king-salesforce-becomes-a-hybrid-cloud-company.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2015/09/150914-salesforce-dreamforce-2-100614575-large.jpg
[2]: https://www.cio.com/article/3402026/how-salesforces-tableau-acquisition-will-impact-it.html
[3]: http://www.networkworld.com/article/2172875/cloud-computing/hybrid-cloud--the-year-of-adoption-is-upon-us.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start
[6]: https://www.geekwire.com/2019/tableau-hits-841m-annual-recurring-revenue-41-transition-subscription-model-continues/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Carrier services help expand healthcare, with 5G in the offing)
[#]: via: (https://www.networkworld.com/article/3403366/carrier-services-help-expand-healthcare-with-5g-in-the-offing.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Carrier services help expand healthcare, with 5G in the offing
======
Many telehealth initiatives tap into wireless networking supplied by service providers that may start offering services such as Citizens Band (CBRS) and 5G to support remote medical care.
![Thinkstock][1]

There are connectivity options aplenty for most types of [IoT][2] deployment, but the idea of simply handing the networking part of the equation off to a national licensed wireless carrier could be the best one for certain kinds of deployments in the medical field.

Telehealth systems, for example, are still a relatively new facet of modern medicine, but they’re already among the most important applications that use carrier networks to deliver care. One such system is operated by the University of Mississippi Medical Center, for the treatment and education of diabetes patients.

**[More on wireless: [The time of 5G is almost here][3]]**

**[ Now read [20 hot jobs ambitious IT pros should shoot for][4]. ]**

Greg Hall is the director of IT at UMMC’s center for telehealth. He said that the remote patient monitoring system is relatively simple by design – diabetes patients receive a tablet computer that they can use to input and track their blood sugar levels, alert clinicians to symptoms like nerve pain or foot sores, and even videoconference with their doctors directly. The tablet connects via Verizon, AT&T or CSpire – depending on who’s got the best coverage in a given area – back to UMMC’s servers.

According to Hall, there are multiple advantages to using carrier connectivity instead of unlicensed technology (i.e., purpose-built [Wi-Fi][5] or the like) to connect patients – some of whom live in remote parts of the state – to their caregivers.

“We weren’t expecting everyone who uses the service to have Wi-Fi,” he said, “and they can take their tablet with them if they’re traveling.”

The system serves about 250 patients in Mississippi, up from roughly 175 in the 2015 pilot program that got the effort off the ground. Nor is it strictly limited to diabetes care – Hall said that it’s already been extended to patients suffering from chronic obstructive pulmonary disease and asthma, and even used for prenatal care, with further expansion in the offing.

“The goal of our program isn’t just the monitoring piece, but also the education piece, teaching a person to live with their [condition] and thrive,” he said.

It hasn’t all been smooth sailing. One issue was caused by the natural foliage of the area, as dense areas of pine trees can cause transmission problems, thanks to their needles being a particularly troublesome length that interferes with 2.5GHz wireless signals. But Hall said that the team has been able to install signal boosters or repeaters to overcome that obstacle.

Neurologist Dr. Allen Gee’s practice in Wyoming attempts to address a similar issue – far-flung patients with medical needs that might not be addressed by the sparse local-care options. From his main office in Cody, he said, he can cover half the state via telepresence, using a purpose-built system based on cellular-data connectivity from TCT, Spectrum and AT&T, as well as remote audiovisual equipment and a link to electronic health records stored in distant locations. That allows him to receive patient data, audio/visual information and even imaging diagnostics remotely. Some specialists in the state are able to fly to those remote locations; others are not.

While Gee’s preference is to meet with patients in person, that’s just not always possible, he said.

“Medical specialists don’t get paid for windshield time,” he noted. “Being able to transfer information from an EHR facilitates the process of learning about the patient.”

### 5G is coming

According to Alan Stewart-Brown, vice president at infrastructure management vendor Opengear, there’s a lot to like about current carrier networks for medical use – particularly wide coverage and a lack of interference – but there are bigger things to come.

“We have customers that have equipment in ambulances for instance, where they’re livestreaming patients’ vital signs to consoles that doctors can monitor,” he said. “They’re using carrier 4G for that right now and it works well enough, but there are limitations, namely latency, which you don’t get on [5G][6].”

Beyond the simple fact of increased throughput and lower latency, widespread 5G deployments could open a wide array of new possibilities for medical technology, mostly involving real-time, very-high-definition video streaming. These include medical VR, remote surgery and the like.

“The process you use to do things like real-time video – right now on a 4G network, that may or may not have a delay,” said Stewart-Brown. “Once you can get rid of the delay, the possibilities are endless as to what you can use the technology for.”

### Citizens band

Ron Malenfant, chief architect for service provider IoT at Cisco, agreed that the future of 5G for medical IoT is bright, but said that the actual applications of the technology have to be carefully thought out.

“The use cases need to be worked on,” he said. “The innovative [companies] are starting to say ‘OK, what does 5G mean to me’ and starting to plan use cases.”

One area that the carriers themselves have been eyeing recently is the CBRS band of radio frequencies, which sits around 3.5GHz. It’s what’s referred to as “lightly licensed” spectrum, in that parts of it are used for things like CB radio and other parts are the domain of the U.S. armed forces, and it could be used to build private networks for institutional users like hospitals, instead of deploying small but expensive 4G cells. The idea is that institutions would be able to lease those frequencies for their specific area directly from the carrier for private LTE/CBRS networks and, eventually, 5G, Malenfant said.

There’s also the issue, of course, that there is still a huge number of unknowns around 5G, which isn’t expected to supplant LTE in the U.S. for at least another year or so. The medical field’s stiff regulatory requirements could also prove a stumbling block for the adoption of newer wireless technology.

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403366/carrier-services-help-expand-healthcare-with-5g-in-the-offing.html

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/stethoscope_mobile_healthcare_ipad_tablet_doctor_patient-100765655-large.jpg
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[3]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[4]: https://www.networkworld.com/article/3276025/careers/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[5]: https://www.networkworld.com/article/3238664/80211-wi-fi-standards-and-speeds-explained.html
[6]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cracks appear in Intel’s grip on supercomputing)
[#]: via: (https://www.networkworld.com/article/3403443/cracks-appear-in-intels-grip-on-supercomputing.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Cracks appear in Intel’s grip on supercomputing
======
New competitors threaten Intel’s dominance in the high-performance computing (HPC) world, and we’re not even talking about AMD (yet).
![Randy Wong/LLNL][1]

It’s June, so it’s that time again for the twice-yearly Top 500 supercomputer list, where bragging rights are established or, in most cases, reaffirmed. The list constantly shifts as new trends appear, and one of them might be a break in Intel’s dominance.

[Supercomputers in the top 10 list][2] include a lot of IBM Power-based systems, and almost all run Nvidia GPUs. But there’s more going on than that.

For starters, an ARM supercomputer has shown up, at #156. [Astra][3] at Sandia National Laboratories is an HPE system running Cavium (now Marvell) ThunderX2 processors. It debuted on the list at #204 last November, but thanks to upgrades, it has moved up the list. It won’t be the last ARM server to show up, either.

**[ Also see: [10 of the world's fastest supercomputers][2] | Get daily insights: [Sign up for Network World newsletters][4] ]**

Second is the appearance of four Nvidia DGX servers, with the [DGX SuperPOD][5] ranking the highest at #22. [DGX systems][6] are basically compact GPU boxes with a Xeon just to boot the thing. The GPUs do all the heavy lifting.

AMD hasn’t shown up yet with its Epyc processors, but it will, given that Cray is building Epyc-based systems for the government.

This signals a breaking up of the hold Intel has had on the high-performance computing (HPC) market for a long time, said Ashish Nadkarni, group vice president in IDC's worldwide infrastructure practice. “The Intel hold has already been broken up by all the accelerators in the supercomputing space. The more accelerators they use, the less need they have for Xeons. They can go with other processors that do justice to those accelerators,” he told me.

With so much work in HPC and artificial intelligence (AI) being done by GPUs, the x86 processor becomes just a boot processor in a way. I wasn’t kidding about the DGX box. It’s got one Xeon and eight Tesla GPUs. And the Xeon is an E5, a midrange part.

**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][7] ]**

“They don’t need high-end Xeons in servers any more, although there’s a lot of supercomputers that just use CPUs. The fact is there are so many options now,” said Nadkarni. One example of an all-CPU system is [Frontera][8], a Dell-based system at the Texas Advanced Computing Center in Austin.

The top two computers, Sierra and Summit, both run IBM Power9 RISC processors, as well as Nvidia GPUs. All told, Nvidia is in 125 of the 500 supercomputers, including five of the top 10, the fastest computer in the world, the fastest in Europe (Piz Daint) and the fastest in Japan (ABCI).

Lenovo was the top hardware provider, beating out Dell, HPE, and IBM combined. That’s because of its large presence in its native China. Nadkarni said Lenovo, which acquired the IBM x86 server business in 2014, has benefitted from the IBM installed base, which has continued to want the same tech from Lenovo under new ownership.

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403443/cracks-appear-in-intels-grip-on-supercomputing.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/10/sierra875x500-100778404-large.jpg
[2]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[3]: https://www.top500.org/system/179565
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.top500.org/system/179691
[6]: https://www.networkworld.com/article/3196088/nvidias-new-volta-based-dgx-1-supercomputer-puts-400-servers-in-a-box.html
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[8]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html#slide7
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Several deals solidify the hybrid cloud’s status as the cloud of choice)
[#]: via: (https://www.networkworld.com/article/3403354/several-deals-solidify-the-hybrid-clouds-status-as-the-cloud-of-choice.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Several deals solidify the hybrid cloud’s status as the cloud of choice
======
On-premises and cloud connections are being built by all the top vendors to bridge legacy and modern systems, creating hybrid cloud environments.
![Getty Images][1]

The hybrid cloud market is expected to grow from $38.27 billion in 2017 to $97.64 billion by 2023, at a compound annual growth rate (CAGR) of 17.0% during the forecast period, according to Markets and Markets.

The research firm said the hybrid cloud is rapidly becoming a leading cloud solution, as it provides various benefits, such as cost, efficiency, agility, mobility, and elasticity. One of the many reasons is the need for interoperability standards between cloud services and existing systems.

Unless you are a startup that was born in the cloud, you have legacy data systems that need to be bridged, which is where the hybrid cloud comes in.

So, in very short order we’ve seen a bunch of new alliances involving the old and new guard, reiterating that the need for hybrid solutions remains strong.

**[ Read also: [What hybrid cloud means in practice][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**

### HPE/Google

In April, Hewlett Packard Enterprise (HPE) and Google announced a deal in which HPE introduced a variety of server solutions for Google Cloud’s Anthos, along with a consumption-based model for the validated HPE on-premises infrastructure that is integrated with Anthos.

Following up on that, the two just announced a strategic partnership to create a hybrid cloud for containers by combining HPE’s on-premises infrastructure, Cloud Data Services, and GreenLake consumption model with Anthos. This allows for:

* Bi-directional data mobility and consistent data services between on-premises and cloud
* Application workload mobility to move containerized app workloads across on-premises and multi-cloud environments
* Multi-cloud flexibility, offering the choice of HPE Cloud Volumes or Anthos, whichever works best for the workload
* Unified hybrid management through Anthos, so customers can get a unified and consistent view of their applications and workloads regardless of where they reside
* Charging as a service via HPE GreenLake

### IBM/Cisco

This furthers an existing partnership between IBM and Cisco designed to deliver a common and secure developer experience across on-premises and public cloud environments for building modern applications.

[Cisco said it will support IBM Cloud Private][4], an on-premises container application development platform, on Cisco HyperFlex and HyperFlex Edge hyperconverged infrastructure. This includes support for IBM Cloud Pak for Applications. IBM Cloud Paks deliver enterprise-ready containerized software solutions and developer tools for building apps and then easily moving them to any cloud—public or private.

This architecture delivers a common and secure Kubernetes experience across on-premises (including edge) and public cloud environments. IBM’s Multicloud Manager covers monitoring and management of clusters and container-based applications running from on-premises to the edge, while Cisco’s Virtual Application Centric Infrastructure (ACI) will allow customers to extend their network fabric from on-premises to the IBM Cloud.

### IBM/Equinix

Equinix expanded its collaboration with IBM Cloud to bring private and scalable connectivity to global enterprises via Equinix Cloud Exchange Fabric (ECX Fabric). This provides private connectivity to IBM Cloud, including Direct Link Exchange, Direct Link Dedicated and Direct Link Dedicated Hosting, that is secure and scalable.

ECX Fabric is an on-demand, SDN-enabled interconnection service that allows any business to connect between its own distributed infrastructure and any other company’s distributed infrastructure, including cloud providers. Direct Link provides IBM customers with a connection between their network and IBM Cloud. So ECX Fabric provides IBM customers with a secure and scalable network connection to the IBM Cloud service.

At the same time, ECX Fabric provides secure connections to other cloud providers, and most customers prefer a multi-vendor approach to avoid vendor lock-in.

“Each of the partnerships focus on two things: 1) supporting a hybrid-cloud platform for their existing customers by reducing the friction to leveraging each solution and 2) leveraging the unique strength that each company brings. Each of the solutions are unique and would be unlikely to compete directly with other partnerships,” said Tim Crawford, president of Avoa, an IT consultancy.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3403354/several-deals-solidify-the-hybrid-clouds-status-as-the-cloud-of-choice.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/cloud_hand_plus_sign_private-100787051-large.jpg
[2]: https://www.networkworld.com/article/3249495/what-hybrid-cloud-mean-practice
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3403363/cisco-connects-with-ibm-in-to-simplify-hybrid-cloud-deployment.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (ninifly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The office coffee model of concurrent garbage collection)
[#]: via: (https://dave.cheney.net/2018/12/28/the-office-coffee-model-of-concurrent-garbage-collection)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

The office coffee model of concurrent garbage collection
======

Garbage collection is a field with its own terminology. Concepts like _mutators_, _card marking_, and _write barriers_ create a hurdle to understanding how garbage collectors work. Here’s an analogy to explain the operations of a concurrent garbage collector using everyday items found in the workplace.

Before we discuss the operation of _concurrent_ garbage collection, let’s introduce the dramatis personae. In offices around the world you’ll find one of these:

![][1]

In the workplace coffee is a natural resource. Employees visit the break room and fill their cups as required. That is, until the point someone goes to fill their cup only to discover the pot is _empty_!

Immediately the office is thrown into chaos. Meetings are called. Investigations are held. The perpetrator who took the last cup without refilling the machine is found and [reprimanded][2]. Despite many passive-aggressive notes, the situation keeps happening, thus a committee is formed to decide if a larger coffee pot should be requisitioned. Once the coffee maker is again full, office productivity slowly returns to normal.

This is the model of _stop the world_ garbage collection. The various parts of your program proceed through their day consuming memory, or in our analogy coffee, without a care about the next allocation that needs to be made. Eventually one unlucky attempt to allocate memory is made only to find the heap, or the coffee pot, exhausted, triggering a stop the world garbage collection.

* * *

Down the road at a more enlightened workplace, management have adopted a different strategy for mitigating their break room’s coffee problems. Their policy is simple: if the pot is more than half full, fill your cup and be on your way. However, if the pot is less than half full, _before_ filling your cup, you must add a little coffee and a little water to the top of the machine. In this way, by the time the next person arrives for their re-up, the level in the pot will hopefully have risen higher than when the first person found it.

This policy does come at a cost to office productivity. Rather than filling their cup and hoping for the best, each worker may, depending on the aggregate level of consumption in the office, have to spend a little time refilling the percolator and topping up the water. However, this is time spent by a person who was already heading to the break room. It costs a few extra minutes to maintain the coffee machine, but does not impact their officemates who aren’t in need of caffeination. If several people take a break at the same time, they will all find the level in the pot below the half way mark and all proceed to top up the coffee maker–the more consumption, the greater the rate the machine will be refilled, although this takes a little longer as the break room becomes congested.

This is the model of _concurrent garbage collection_ as practiced by the Go runtime (and probably other language runtimes with concurrent collectors). Rather than each heap allocation proceeding blindly until the heap is exhausted, leading to a long stop the world pause, concurrent collection algorithms spread the work of walking the heap to find memory which is no longer reachable over the parts of the program allocating memory. In this way the parts of the program which allocate memory each pay a small cost–in terms of latency–for those allocations rather than the whole program being forced to halt when the heap is exhausted.

Lastly, in keeping with the office coffee model, if the rate of coffee consumption in the office is so high that management discovers that their staff are always in the break room trying desperately to refill the coffee machine, it’s time to invest in a machine with a bigger pot–or in garbage collection terms, grow the heap.
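
To make the analogy concrete, here is a minimal, hypothetical Go sketch (mine, not from the original post). `debug.SetGCPercent` sets how far the heap may grow past the live set before the next collection cycle begins, which is the size of the pot in this analogy, and `runtime.ReadMemStats` reports how many cycles ran and how little of that time actually stopped the world. Running it with `GODEBUG=gctrace=1` prints a summary line per cycle.

```
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func main() {
	// The size of the coffee pot: with the default of 100, the heap may
	// grow 100% over the live set before the next concurrent cycle begins.
	debug.SetGCPercent(100)

	// Drink a lot of coffee: allocate 1 MiB "cups" and periodically drop
	// all references so the memory becomes garbage for the collector.
	var cups [][]byte
	for i := 0; i < 1000; i++ {
		cups = append(cups, make([]byte, 1<<20))
		if i%100 == 99 {
			cups = nil
		}
	}

	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	fmt.Printf("GC cycles: %d, total stop-the-world pause: %dns\n",
		ms.NumGC, ms.PauseTotalNs)
}
```

Even under constant allocation, the reported pause total stays small, because the marking work happens alongside the allocating code, just as the pot is topped up by whoever was already on their way to the break room.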

### Related posts:

1. [Visualising the Go garbage collector][3]
2. [A whirlwind tour of Go’s runtime environment variables][4]
3. [Why is a Goroutine’s stack infinite ?][5]
4. [Introducing Go 2.0][6]

--------------------------------------------------------------------------------

via: https://dave.cheney.net/2018/12/28/the-office-coffee-model-of-concurrent-garbage-collection

作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://dave.cheney.net/wp-content/uploads/2018/12/20181204175004_79256.jpg
[2]: https://www.youtube.com/watch?v=ww86iaucd2A
[3]: https://dave.cheney.net/2014/07/11/visualising-the-go-garbage-collector (Visualising the Go garbage collector)
[4]: https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables (A whirlwind tour of Go’s runtime environment variables)
[5]: https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite (Why is a Goroutine’s stack infinite ?)
[6]: https://dave.cheney.net/2016/10/25/introducing-go-2-0 (Introducing Go 2.0)
@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Avoid package names like base, util, or common)
[#]: via: (https://dave.cheney.net/2019/01/08/avoid-package-names-like-base-util-or-common)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

Avoid package names like base, util, or common
======

Writing a good Go package starts with its name. Think of your package’s name as an elevator pitch: you have to describe what it does using just one word.

A common cause of poor package names is _utility packages_. These are packages where helpers and utility code congeal. Because these packages contain an assortment of unrelated functions, their utility is hard to describe in terms of what the package _provides_. This often leads to a package’s name being derived from what the package _contains_—utilities.

Package names like `utils` or `helpers` are commonly found in projects which have developed deep package hierarchies and want to share helper functions without introducing import loops. Extracting utility functions to a new package breaks the import loop, but because the package stems from a design problem in the project, its name doesn’t reflect its purpose, only its function in breaking the import cycle.

> [A little] duplication is far cheaper than the wrong abstraction.

— [Sandi Metz][1]

My recommendation to improve the name of `utils` or `helpers` packages is to analyse where they are imported and move the relevant functions into the calling package. Even if this results in some code duplication, this is preferable to introducing an import dependency between two packages. In the case where utility functions are used in many places, prefer multiple packages, each focused on a single aspect with a correspondingly descriptive name.
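
To make that concrete, here is a hypothetical sketch of the refactoring; the `retry` and `billing` names and the `Do` helper are invented for this example, not taken from any real project:

```
// Before: utils/utils.go held ParseAmount, Retry, Clamp, ... -- an
// assortment of unrelated helpers named for what the package contained.
//
// After: ParseAmount moves into the billing package that calls it, and
// the retry helper becomes its own package, named for what it provides.
package retry

import "time"

// Do calls fn up to attempts times, sleeping for delay between
// failures, and returns the last error if every attempt fails.
func Do(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return err
}
```

A call like `retry.Do(3, time.Second, fetch)` reads naturally at the call site, which is the test this post proposes: the package name describes what the package provides.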

Packages with names like `base` or `common` are often found when functionality common to two or more related facilities, for example common types between a client and server or a server and its mock, has been refactored into a separate package. Instead, the solution is to reduce the number of packages by combining client, server, and common code into a single package named after the facility the package provides.

For example, the `net/http` package does not have `client` and `server` packages; instead it has `client.go` and `server.go` files, each holding their respective types, and `transport.go` holds the common message transport code used by both HTTP clients and servers.

Name your packages after what they _provide_, not what they _contain_.

### Related posts:

1. [Simple profiling package moved, updated][2]
2. [The package level logger anti pattern][3]
3. [How to include C code in your Go package][4]
4. [Why I think Go package management is important][5]

--------------------------------------------------------------------------------

via: https://dave.cheney.net/2019/01/08/avoid-package-names-like-base-util-or-common

作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction
[2]: https://dave.cheney.net/2014/10/22/simple-profiling-package-moved-updated (Simple profiling package moved, updated)
[3]: https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern (The package level logger anti pattern)
[4]: https://dave.cheney.net/2013/09/07/how-to-include-c-code-in-your-go-package (How to include C code in your Go package)
[5]: https://dave.cheney.net/2013/10/10/why-i-think-go-package-management-is-important (Why I think Go package management is important)
@ -1,170 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (qfzy1233)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Top 5 Linux Distributions for Productivity)
[#]: via: (https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)

Top 5 Linux Distributions for Productivity
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_main.jpg?itok=2IKyg_7_)

I have to confess, this particular topic is a tough one to address. Why? First off, Linux is a productive operating system by design. Thanks to an incredibly reliable and stable platform, getting work done is easy. Second, to gauge effectiveness, you have to consider what type of work you need a productivity boost for. General office work? Development? School? Data mining? Human resources? You see how this question can get somewhat complicated.

That doesn’t mean, however, that some distributions aren’t able to do a better job of configuring and presenting that underlying operating system into an efficient platform for getting work done. Quite the contrary. Some distributions do a much better job of “getting out of the way,” so you don’t find yourself in a work-related hole, having to dig yourself out and catch up before the end of day. These distributions help strip away the complexity that can be found in Linux, thereby making your workflow painless.

Let’s take a look at the distros I consider to be your best bet for productivity. To help make sense of this, I’ve divided them into categories of productivity. That task itself was challenging, because everyone’s productivity varies. For the purposes of this list, however, I’ll look at:

* General Productivity: For those who just need to work efficiently on multiple tasks.

* Graphic Design: For those who work with the creation and manipulation of graphic images.

* Development: For those who use their Linux desktops for programming.

* Administration: For those who need a distribution to facilitate their system administration tasks.

* Education: For those who need a desktop distribution to make them more productive in an educational environment.

Yes, there are more categories to be had, many of which can get very niche-y, but these five should fill most of your needs.

### General Productivity

For general productivity, you won’t get much more efficient than [Ubuntu][1]. The primary reason for choosing Ubuntu for this category is the seamless integration of apps, services, and desktop. You might be wondering why I didn’t choose Linux Mint instead. Because Ubuntu now defaults to the GNOME desktop, it gains the added advantage of GNOME Extensions (Figure 1).

![GNOME Clipboard][3]

Figure 1: The GNOME Clipboard Indicator extension in action.

[Used with permission][4]

These extensions go a very long way to aid in boosting productivity (so Ubuntu gets the nod over Mint). But Ubuntu didn’t just accept a vanilla GNOME desktop. Instead, it tweaked the desktop to make it slightly more efficient and user-friendly out of the box. And because Ubuntu contains just the right mixture of default, out-of-the-box apps (that just work), it makes for a nearly perfect platform for productivity.

Whether you need to write a paper, work on a spreadsheet, code a new app, work on your company website, create marketing images, administer a server or network, or manage human resources from within your company HR tool, Ubuntu has you covered. The Ubuntu desktop distribution also doesn’t require the user to jump through many hoops to get things working … it simply works (and quite well). Finally, thanks to its Debian base, Ubuntu makes installing third-party apps incredibly easy.

Although Ubuntu tends to be the go-to for nearly every list of “top distributions for X,” it’s very hard to argue against this particular distribution topping the list of general productivity distributions.

### Graphic Design

If you’re looking to up your graphic design productivity, you can’t go wrong with [Fedora Design Suite][5]. This Fedora respin was created by the team responsible for all Fedora-related artwork. Although the default selection of apps isn’t a massive collection of tools, those it does include are geared specifically for the creation and manipulation of images.

With apps like GIMP, Inkscape, Darktable, Krita, Entangle, Blender, Pitivi, Scribus, and more (Figure 2), you’ll find everything you need to get your image editing jobs done and done well. But Fedora Design Suite doesn’t end there. This desktop platform also includes a bevy of tutorials that cover countless subjects for many of the installed applications. For anyone trying to be as productive as possible, this is some seriously handy information to have at the ready. I will say, however, that the tutorial entry in the GNOME Favorites is nothing more than a link to [this page][6].

![Fedora Design Suite Favorites][8]

Figure 2: The Fedora Design Suite Favorites menu includes plenty of tools for getting your graphic design on.

[Used with permission][4]

Those who work with a digital camera will certainly appreciate the inclusion of the Entangle app, which allows you to control your DSLR from the desktop.

### Development

Nearly all Linux distributions are great platforms for programmers. However, one particular distribution stands out above the rest as one of the most productive tools you’ll find for the task. That OS comes from [System76][9], and it’s called [Pop!_OS][10]. Pop!_OS is tailored specifically for creators, but not of the artistic type. Instead, Pop!_OS is geared toward creators who specialize in developing, programming, and making. If you need an environment that is not only perfectly suited to your development work but also includes a desktop that’s sure to get out of your way, you won’t find a better option than Pop!_OS (Figure 3).

What might surprise you (given how “young” this operating system is) is that Pop!_OS is also one of the most stable GNOME-based platforms you’ll ever use. This means Pop!_OS isn’t just for creators and makers, but for anyone looking for a solid operating system. One thing that many users will greatly appreciate about Pop!_OS is that you can download an ISO specifically for your video hardware. If you have Intel hardware, [download][10] the version for Intel/AMD. If your graphics card is NVIDIA, download that specific release. Either way, you are sure to get a solid platform on which to create your masterpiece.

![Pop!_OS][12]

Figure 3: The Pop!_OS take on GNOME Overview.

[Used with permission][4]

Interestingly enough, with Pop!_OS you won’t find much in the way of pre-installed development tools: there’s no included IDE and few other dev tools. You can, however, find all the development tools you need in the Pop Shop.

### Administration

If you’re looking for one of the most productive distributions for admin tasks, look no further than [Debian][13]. Why? Because Debian is not only incredibly reliable, it’s one of those distributions that gets out of your way better than most others. Debian is the perfect combination of ease of use and unlimited possibility. On top of that, because this is the distribution on which so many others are based, you can bet that if there’s an admin tool you need for a task, it’s available for Debian. Of course, we’re talking about general admin tasks, which means most of the time you’ll be using a terminal window to SSH into your servers (Figure 4) or a browser to work with web-based GUI tools on your network. Why bother with a desktop that’s going to add layers of complexity (such as SELinux in Fedora, or YaST in openSUSE)? Instead, choose simplicity.

![Debian][15]

Figure 4: SSH’ing into a remote server on Debian.

[Used with permission][4]

And because you can select which desktop you want (from GNOME, Xfce, KDE, Cinnamon, MATE, and LXDE), you can be sure to have the interface that best matches your work habits.

### Education

If you are a teacher or student, or otherwise involved in education, you need the right tools to be productive. Once upon a time, there existed the likes of Edubuntu, a distribution that never failed to be listed near the top of education-related lists. However, that distro hasn’t been updated since it was based on Ubuntu 14.04. Fortunately, there’s a new education-based distribution ready to take that title, based on openSUSE. This spin is called [openSUSE:Education-Li-f-e][16] (Linux For Education - Figure 5), and is based on openSUSE Leap 42.1 (so it is slightly out of date).

openSUSE:Education-Li-f-e includes tools like:

* Brain Workshop - A dual n-back brain exercise

* GCompris - An educational software suite for young children

* gElemental - A periodic table viewer

* iGNUit - A general-purpose flash card program

* Little Wizard - A development environment for children, based on Pascal

* Stellarium - An astronomical sky simulator

* TuxMath - A math tutor game

* TuxPaint - A drawing program for young children

* TuxType - An educational typing tutor for children

* wxMaxima - A cross-platform GUI for the Maxima computer algebra system

* Inkscape - A vector graphics program

* GIMP - A graphic image manipulation program

* Pencil - A GUI prototyping tool

* Hugin - A panorama photo stitching and HDR merging program

![Education][18]

Figure 5: The openSUSE:Education-Li-f-e distro has plenty of tools to help you be productive in or for school.

[Used with permission][4]

Also included with openSUSE:Education-Li-f-e is the [KIWI-LTSP Server][19]. The KIWI-LTSP Server is a flexible, cost-effective solution aimed at empowering schools, businesses, and organizations all over the world to easily install and deploy desktop workstations. Although this might not directly aid students in being more productive, it certainly enables educational institutions to be more productive in deploying desktops for students to use. For more information on setting up KIWI-LTSP, check out the openSUSE [KIWI-LTSP quick start guide][20].

Learn more about Linux through the free [“Introduction to Linux”][21] course from The Linux Foundation and edX.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity

作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://www.ubuntu.com/
[2]: /files/images/productivity1jpg
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_1.jpg?itok=yxez3X1w (GNOME Clipboard)
[4]: /licenses/category/used-permission
[5]: https://labs.fedoraproject.org/en/design-suite/
[6]: https://fedoraproject.org/wiki/Design_Suite/Tutorials
[7]: /files/images/productivity2jpg
[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_2.jpg?itok=ke0b8qyH (Fedora Design Suite Favorites)
[9]: https://system76.com/
[10]: https://system76.com/pop
[11]: /files/images/productivity3jpg-0
[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_3_0.jpg?itok=8UkCUfsD (Pop!_OS)
[13]: https://www.debian.org/
[14]: /files/images/productivity4jpg
[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_4.jpg?itok=c9yD3Xw2 (Debian)
[16]: https://en.opensuse.org/openSUSE:Education-Li-f-e
[17]: /files/images/productivity5jpg
[18]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_5.jpg?itok=oAFtV8nT (Education)
[19]: https://en.opensuse.org/Portal:KIWI-LTSP
[20]: https://en.opensuse.org/SDB:KIWI-LTSP_quick_start
[21]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
@ -0,0 +1,204 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Eliminate error handling by eliminating errors)
[#]: via: (https://dave.cheney.net/2019/01/27/eliminate-error-handling-by-eliminating-errors)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

Eliminate error handling by eliminating errors
======

Go 2 aims to reduce the overhead of [error handling][1], but do you know what is better than an improved syntax for handling errors? Not needing to handle errors at all. Now, I’m not saying “delete your error handling code”; instead, I’m suggesting you change your code so you don’t have as many errors to handle.

This article draws inspiration from a chapter in John Ousterhout’s _[A Philosophy of Software Design][2]_, “Define Errors Out of Existence”. I’m going to try to apply his advice to Go.

* * *

Here’s a function to count the number of lines in a file,

```
func CountLines(r io.Reader) (int, error) {
	var (
		br    = bufio.NewReader(r)
		lines int
		err   error
	)

	for {
		_, err = br.ReadString('\n')
		lines++
		if err != nil {
			break
		}
	}

	if err != io.EOF {
		return 0, err
	}
	return lines, nil
}
```

We construct a `bufio.Reader`, then sit in a loop calling the `ReadString` method, incrementing a counter until we reach the end of the file, then we return the number of lines read. That’s the code we _wanted_ to write; instead, `CountLines` is made more complicated by its error handling. For example, there is this strange construction:

```
_, err = br.ReadString('\n')
lines++
if err != nil {
	break
}
```

We increment the count of lines _before_ checking the error—that looks odd. The reason we have to write it this way is that `ReadString` will return an error if it encounters an end-of-file—`io.EOF`—before hitting a newline character. This can happen if there is no trailing newline.

To address this problem, we rearrange the logic to increment the line count, then see if we need to exit the loop.<sup>1</sup>

But we’re not done checking errors yet. `ReadString` will return `io.EOF` when it hits the end of the file. This is expected: `ReadString` needs some way of saying _stop, there is nothing more to read_. So before we return the error to the caller of `CountLines`, we need to check if the error was _not_ `io.EOF`, and in that case propagate it up; otherwise we return `nil` to say that everything worked fine. This is why the final line of the function is not simply

```
return lines, err
```

I think this is a good example of Russ Cox’s [observation that error handling can obscure the operation of the function][3]. Let’s look at an improved version.

```
func CountLines(r io.Reader) (int, error) {
	sc := bufio.NewScanner(r)
	lines := 0

	for sc.Scan() {
		lines++
	}

	return lines, sc.Err()
}
```

This improved version switches from using `bufio.Reader` to `bufio.Scanner`. Under the hood, `bufio.Scanner` uses `bufio.Reader`, adding a layer of abstraction which helps remove the error handling that obscured the operation of our previous version of `CountLines`.<sup>2</sup>

The method `sc.Scan()` returns `true` if the scanner _has_ matched a line of text and _has not_ encountered an error. So, the body of our `for` loop will be called only when there is a line of text in the scanner’s buffer. This means our revised `CountLines` correctly handles the case where there is no trailing newline, and it also correctly handles the case where the file is empty.

Secondly, as `sc.Scan` returns `false` once an error is encountered, our `for` loop will exit when the end of the file is reached or an error occurs. The `bufio.Scanner` type memoises the first error it encounters, and we recover that error once we’ve exited the loop using the `sc.Err()` method.

Lastly, `bufio.Scanner` takes care of handling `io.EOF` and will convert it to a `nil` if the end of the file was reached without encountering another error.
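
As a quick check of both edge cases, here is a small usage sketch (mine, not from the original post), assuming the improved `CountLines` above is in the same package:

```
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Note: no trailing newline after the final line.
	lines, err := CountLines(strings.NewReader("one\ntwo\nthree"))
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(lines) // 3

	// The empty input is handled correctly too.
	lines, _ = CountLines(strings.NewReader(""))
	fmt.Println(lines) // 0
}
```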

* * *

My second example is inspired by Rob Pike’s _[Errors are values][4]_ blog post.

When dealing with opening, writing and closing files, the error handling is present but not overwhelming, as the operations can be encapsulated in helpers like `ioutil.ReadFile` and `ioutil.WriteFile`. However, when dealing with low-level network protocols it often becomes necessary to build the response directly using I/O primitives, so the error handling can become repetitive. Consider this fragment of an HTTP server which is constructing an HTTP/1.1 response.

```
type Header struct {
	Key, Value string
}

type Status struct {
	Code   int
	Reason string
}

func WriteResponse(w io.Writer, st Status, headers []Header, body io.Reader) error {
	_, err := fmt.Fprintf(w, "HTTP/1.1 %d %s\r\n", st.Code, st.Reason)
	if err != nil {
		return err
	}

	for _, h := range headers {
		_, err := fmt.Fprintf(w, "%s: %s\r\n", h.Key, h.Value)
		if err != nil {
			return err
		}
	}

	if _, err := fmt.Fprint(w, "\r\n"); err != nil {
		return err
	}

	_, err = io.Copy(w, body)
	return err
}
```

First we construct the status line using `fmt.Fprintf`, and check the error. Then for each header we write the header key and value, checking the error each time. Lastly we terminate the header section with an additional `\r\n`, check the error, and copy the response body to the client. Finally, although we don’t need to check the error from `io.Copy`, we do need to translate it from the two-return-value form that `io.Copy` returns into the single return value that `WriteResponse` expects.

Not only is this a lot of repetitive work, each operation—fundamentally writing bytes to an `io.Writer`—has a different form of error handling. But we can make it easier on ourselves by introducing a small wrapper type.

```
type errWriter struct {
	io.Writer
	err error
}

func (e *errWriter) Write(buf []byte) (int, error) {
	if e.err != nil {
		return 0, e.err
	}

	var n int
	n, e.err = e.Writer.Write(buf)
	return n, nil
}
```

`errWriter` fulfils the `io.Writer` contract, so it can be used to wrap an existing `io.Writer`. `errWriter` passes writes through to its underlying writer until an error is detected. From that point on, it discards any writes and returns the previous error.

```
func WriteResponse(w io.Writer, st Status, headers []Header, body io.Reader) error {
	ew := &errWriter{Writer: w}
	fmt.Fprintf(ew, "HTTP/1.1 %d %s\r\n", st.Code, st.Reason)

	for _, h := range headers {
		fmt.Fprintf(ew, "%s: %s\r\n", h.Key, h.Value)
	}

	fmt.Fprint(ew, "\r\n")
	io.Copy(ew, body)

	return ew.err
}
```

Applying `errWriter` to `WriteResponse` dramatically improves the clarity of the code. Each of the operations no longer needs to bracket itself with an error check. Reporting the error is moved to the end of the function by inspecting the `ew.err` field, avoiding the annoying translation from `io.Copy`’s return values.
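
To round the example out, here is a small usage sketch (mine, not from the original post), assuming the `Header`, `Status`, and `WriteResponse` declarations above are in the same package. Note the single error check, no matter how many writes happen inside:

```
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	headers := []Header{
		{Key: "Content-Type", Value: "text/plain"},
		{Key: "Connection", Value: "close"},
	}
	body := strings.NewReader("hello\n")

	// One call, one error check: the errWriter inside WriteResponse
	// has already collapsed the per-write checks.
	if err := WriteResponse(os.Stdout, Status{Code: 200, Reason: "OK"}, headers, body); err != nil {
		log.Fatal(err)
	}
}
```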

* * *

When you find yourself faced with overbearing error handling, try to extract some of the operations into a helper type.

1. This logic _still_ isn’t correct; can you spot the bug?
2. `bufio.Scanner` can scan for any pattern; by default it looks for newlines.

### Related posts:

1. [Error handling vs. exceptions redux][5]
2. [Stack traces and the errors package][6]
3. [Subcommand handling in Go][7]
4. [Constant errors][8]

--------------------------------------------------------------------------------

via: https://dave.cheney.net/2019/01/27/eliminate-error-handling-by-eliminating-errors

作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://go.googlesource.com/proposal/+/master/design/go2draft-error-handling-overview.md
[2]: https://www.amazon.com/Philosophy-Software-Design-John-Ousterhout/dp/1732102201
[3]: https://www.youtube.com/watch?v=6wIP3rO6On8
[4]: https://blog.golang.org/errors-are-values
[5]: https://dave.cheney.net/2014/11/04/error-handling-vs-exceptions-redux (Error handling vs. exceptions redux)
[6]: https://dave.cheney.net/2016/06/12/stack-traces-and-the-errors-package (Stack traces and the errors package)
[7]: https://dave.cheney.net/2013/11/07/subcommand-handling-in-go (Subcommand handling in Go)
[8]: https://dave.cheney.net/2016/04/07/constant-errors (Constant errors)
@ -0,0 +1,83 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (You shouldn’t name your variables after their types for the same reason you wouldn’t name your pets “dog” or “cat”)
|
||||
[#]: via: (https://dave.cheney.net/2019/01/29/you-shouldnt-name-your-variables-after-their-types-for-the-same-reason-you-wouldnt-name-your-pets-dog-or-cat)
|
||||
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
|
||||
|
||||
You shouldn’t name your variables after their types for the same reason you wouldn’t name your pets “dog” or “cat”
|
||||
======
|
||||
|
||||
The name of a variable should describe its contents, not the _type_ of the contents. Consider this example:
|
||||
|
||||
```
|
||||
var usersMap map[string]*User
|
||||
```
|
||||
|
||||
What are some good properties of this declaration? We can see that it’s a map, and it has something to do with the `*User` type, so that’s probably good. But `usersMap` _is_ a map and Go, being a statically typed language, won’t let us accidentally use a map where a different type is required, so the `Map` suffix as a safety precaution is redundant.
|
||||
|
||||
Now, consider what happens if we declare other variables using this pattern:
|
||||
|
||||
```
|
||||
var (
|
||||
companiesMap map[string]*Company
|
||||
productsMap map[string]*Products
|
||||
)
|
||||
```
|
||||
|
||||
Now we have three map type variables in scope, `usersMap`, `companiesMap`, and `productsMap`, all mapping `string`s to different `struct` types. We know they are maps, and we also know that their declarations prevent us from using one in place of another—the compiler will throw an error if we try to use `companiesMap` where the code is expecting a `map[string]*User`. In this situation it’s clear that the `Map` suffix does not improve the clarity of the code, its just extra boilerplate to type.
|
||||
|
||||
My suggestion is avoid any suffix that resembles the _type_ of the variable. Said another way, if `users` isn’t descriptive enough, then `usersMap` won’t be either.
|
||||
|
||||
This advice also applies to function parameters. For example:
|
||||
|
||||
```
|
||||
type Config struct {
|
||||
//
|
||||
}
|
||||
|
||||
func WriteConfig(w io.Writer, config *Config)
|
||||
```
|
||||
|
||||
Naming the `*Config` parameter `config` is redundant. We know it’s a pointer to a `Config`, it says so right there in the declaration. Instead consider if `conf` will do, or maybe just `c` if the lifetime of the variable is short enough.
|
||||
|
||||
This advice is more than just a desire for brevity. If there is more than one `*Config` in scope at any one time, calling them `config1` and `config2` is less descriptive than calling them `original` and `updated`. The latter are less likely to be accidentally transposed—something the compiler won’t catch—while the former differ only in a one-character suffix.
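As a minimal sketch of this advice (the `Merge` helper and the `Timeout` field are invented here purely for illustration):

```
package config

import "time"

type Config struct {
	Timeout time.Duration
}

// Merge overlays updated onto original. The parameter names describe
// each value's role; config1 and config2 would tell the reader nothing.
func Merge(original, updated *Config) *Config {
	merged := *original
	if updated.Timeout != 0 {
		merged.Timeout = updated.Timeout
	}
	return &merged
}
```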
|
||||
|
||||
Finally, don’t let package names steal good variable names. The name of an imported identifier includes its package name. For example, the `Context` type in the `context` package will be known as `context.Context` when imported into another package. This makes it impossible to use `context` as a variable or type name, unless of course you rename the import, but that’s throwing good after bad. This is why the local declaration for `context.Context` types is traditionally `ctx`, e.g.:
|
||||
|
||||
```
|
||||
func WriteLog(ctx context.Context, message string)
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
A variable’s name should be independent of its type. You shouldn’t name your variables after their types for the same reason you wouldn’t name your pets “dog” or “cat”. You shouldn’t include the name of your type in the name of your variable for the same reason.
|
||||
|
||||
### Related posts:
|
||||
|
||||
1. [On declaring variables][1]
|
||||
2. [Go, without package scoped variables][2]
|
||||
3. [A whirlwind tour of Go’s runtime environment variables][3]
|
||||
4. [Declaration scopes in Go][4]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://dave.cheney.net/2019/01/29/you-shouldnt-name-your-variables-after-their-types-for-the-same-reason-you-wouldnt-name-your-pets-dog-or-cat
|
||||
|
||||
作者:[Dave Cheney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://dave.cheney.net/author/davecheney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://dave.cheney.net/2014/05/24/on-declaring-variables (On declaring variables)
|
||||
[2]: https://dave.cheney.net/2017/06/11/go-without-package-scoped-variables (Go, without package scoped variables)
|
||||
[3]: https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables (A whirlwind tour of Go’s runtime environment variables)
|
||||
[4]: https://dave.cheney.net/2016/12/15/declaration-scopes-in-go (Declaration scopes in Go)
|
64
sources/tech/20190218 Talk, then code.md
Normal file
@ -0,0 +1,64 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Talk, then code)
|
||||
[#]: via: (https://dave.cheney.net/2019/02/18/talk-then-code)
|
||||
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
|
||||
|
||||
Talk, then code
|
||||
======
|
||||
|
||||
The open source projects that I contribute to follow a philosophy which I describe as _talk, then code_. I think this is generally a good way to develop software and I want to spend a little time talking about the benefits of this methodology.
|
||||
|
||||
### Avoiding hurt feelings
|
||||
|
||||
The most important reason for discussing the change you want to make is that it avoids hurt feelings. Often I see a contributor work hard in isolation on a pull request, only to find their work rejected. This can be for a bunch of reasons: the PR is too large, the PR doesn’t follow the local style, the PR fixes an issue which wasn’t important to the project or was recently fixed indirectly, and many more.
|
||||
|
||||
The underlying cause of all these issues is a lack of communication. The goal of the _talk, then code_ philosophy is not to impede or frustrate, but to ensure that a feature lands correctly the first time, without incurring significant maintenance debt, and that neither the author of the change nor the reviewer has to carry the emotional burden of dealing with hurt feelings when a change appears out of the blue with an implicit “well, I’ve done the work, all you have to do is merge it, right?”
|
||||
|
||||
### What does discussion look like?
|
||||
|
||||
Every new feature or bug fix should be discussed with the maintainer(s) of the project before work commences. It’s fine to experiment privately, but do not send a change without discussing it first.
|
||||
|
||||
The definition of _talk_ for simple changes can be as little as a design sketch in a GitHub issue. If your PR fixes a bug, you should link to the bug it fixes. If there isn’t one, you should raise a bug and wait for the maintainers to acknowledge it before sending a PR. This might seem a little backward – who wouldn’t want a bug fixed? – but consider that the bug could be a misunderstanding of how the software works, or it could be a symptom of a larger problem that needs further investigation.
|
||||
|
||||
For more complicated changes, especially feature requests, I recommend that a design document be circulated and agreed upon before sending code. This doesn’t have to be a full-blown document; a sketch in an issue may be sufficient. The key is to reach agreement using words before locking it in stone with code.
|
||||
|
||||
In all cases you shouldn’t proceed to send code until there is a positive agreement from the maintainer that the approach is one they are happy with. A pull request is for life, not just for Christmas.
|
||||
|
||||
### Code review, not design by committee
|
||||
|
||||
A code review is not the place for arguments about design. This is for two reasons. First, most code review tools are not suitable for long comment threads; GitHub’s PR interface is very bad at this, and Gerrit is better, but few teams have the admins to maintain a Gerrit instance. More importantly, disagreement at the code review stage suggests there wasn’t agreement on how the change should be implemented.
|
||||
|
||||
* * *
|
||||
|
||||
Talk about what you want to code, then code what you talked about. Please don’t do it the other way around.
|
||||
|
||||
### Related posts:
|
||||
|
||||
1. [How to include C code in your Go package][1]
|
||||
2. [Let’s talk about logging][2]
|
||||
3. [The value of TDD][3]
|
||||
4. [Suggestions for contributing to an Open Source project][4]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://dave.cheney.net/2019/02/18/talk-then-code
|
||||
|
||||
作者:[Dave Cheney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://dave.cheney.net/author/davecheney
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://dave.cheney.net/2013/09/07/how-to-include-c-code-in-your-go-package (How to include C code in your Go package)
|
||||
[2]: https://dave.cheney.net/2015/11/05/lets-talk-about-logging (Let’s talk about logging)
|
||||
[3]: https://dave.cheney.net/2016/04/11/the-value-of-tdd (The value of TDD)
|
||||
[4]: https://dave.cheney.net/2016/03/12/suggestions-for-contributing-to-an-open-source-project (Suggestions for contributing to an Open Source project)
|
@ -1,84 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Emacs for (even more of) the win)
|
||||
[#]: via: (https://so.nwalsh.com/2019/03/01/emacs)
|
||||
[#]: author: (Norman Walsh https://so.nwalsh.com)
|
||||
|
||||
Emacs for (even more of) the win
|
||||
======
|
||||
|
||||
I use Emacs every day. I rarely notice it. But when I do, it usually brings me joy.
|
||||
|
||||
>If you are a professional writer…Emacs outshines all other editing software in approximately the same way that the noonday sun does the stars. It is not just bigger and brighter; it simply makes everything else vanish.
|
||||
|
||||
I’ve been using [Emacs][1] for well over twenty years. I use it for writing almost anything and everything (I edit Scala and Java in [IntelliJ][2]). I read my email in it. If it can be done in Emacs, that’s where I prefer to do it.
|
||||
|
||||
Although I’ve used Emacs for literally decades, I realized around the new year that very little about my use of Emacs had changed in the past decade or more. New editing modes had come along, of course; I’d picked up a package or two; and I did adopt [Helm][3] a few years ago. But mostly it just did all the heavy lifting that I required of it, day in and day out, without complaining or getting in my way. On the one hand, that’s a testament to how good it is. On the other hand, that’s an invitation to dig in and see what I’ve missed.
|
||||
|
||||
At about the same time, I resolved to improve several aspects of my work life:
|
||||
|
||||
* **Better meeting management.** I’m the lead on a couple of projects at work and those projects have meetings, both regularly scheduled and ad hoc; some of them I run, some I only attend.
|
||||
|
||||
I realized I’d become sloppy about my participation in meetings. It’s all too easy to sit in a room where a meeting is going on while actually reading email and working on other items. (I strongly oppose the “no laptops” rule in meetings, but that’s a topic for another day.)
|
||||
|
||||
There are a couple of problems with sloppy participation. First, it’s disrespectful to the person who convened the meeting and the other participants. That’s actually sufficient reason not to do it, but I think there’s another problem: it disguises the cost of meetings.
|
||||
|
||||
If you’re in a meeting but also answering your email and maybe fixing a bug, then that meeting didn’t cost anything (or as much). If meetings are cheap, then there will be more of them.
|
||||
|
||||
I want fewer, shorter meetings. I don’t want to disguise their cost; I want them to be perceived as damned expensive and to be avoided unless absolutely necessary.
|
||||
|
||||
Sometimes, they are absolutely necessary. And I appreciate that a quick meeting can sometimes resolve an issue quickly. But if I have ten short meetings a day, let’s not pretend that I’m getting anything else productive accomplished.
|
||||
|
||||
I resolved to take notes at all the meetings I attend. I’m not offering to take minutes, necessarily, but I am taking minutes of a sort. It keeps me focused on the meeting and not catching up on other things.
|
||||
|
||||
* **Better time management.** There are lots and lots of things that I need or want to do, both professionally and personally. I’ve historically kept track of some of them in issue lists, some in saved email threads (in Emacs and [Gmail][4], for slightly different types of reminders), in my calendar, on “todo lists” of various sorts on my phone, and on little scraps of paper. And probably other places as well.
|
||||
|
||||
I resolved to keep them all in one place. Not because I think there’s one place that’s uniformly best or better, but because I hope to accomplish two things. First, by having them all in one place, I hope to be able to develop a better and more holistic view of where I’m putting my energies. Second, because I want to develop a habit (“a settled or regular tendency or practice, especially one that is hard to give up”) of recording, tracking, and preserving them.
|
||||
|
||||
* **Better accountability.** If you work in certain science or engineering disciplines, you will have developed the habit of keeping a [lab notebook][5]. Alas, I did not. But I resolved to do so.
|
||||
|
||||
I’m not interested in the legal aspects that encourage bound pages or scribing only in permanent marker. What I’m interested in is developing the habit of keeping a record. My goal is to have a place to jot down ideas and design sketches and the like. If I have sudden inspiration or if I think of an edge case that isn’t in the test suite, I want my instinct to be to write it in my journal instead of scribbling it on a scrap of paper or promising myself that I’ll remember it.
|
||||
|
||||
|
||||
|
||||
|
||||
This confluence of resolutions led me quickly and more-or-less directly to [Org][6]. There is a large, active, and loyal community of Org users. I’ve played with it in the past (I even [wrote about it][7], at least in passing, a couple of years ago) and I tinkered long enough to [integrate MarkLogic][8] into it. (Boy has that paid off in the last week or two!)
|
||||
|
||||
But I never used it.
|
||||
|
||||
I am now using it. I take minutes in it, I record all of my todo items in it, and I keep a journal in it. I’m not sure there’s much value in me attempting to wax eloquent about it or enumerate all its features, you’ll find plenty of either with a quick web search.
|
||||
|
||||
If you use Emacs, you should be using Org. If you don’t use Emacs, I’m confident you wouldn’t be the first person who started because of Org. It does a lot. It takes a little time to learn your way around and remember the shortcuts, but I think it’s worth it. (And if you carry an [iOS][9] device in your pocket, I recommend [beorg][10] for recording items while you’re on the go.)
|
||||
|
||||
Naturally, I worked out how to [get XML out of it][11] (“worked out” sure is a funny way to spell “hacked together in elisp”). And from there, how to turn it back into the markup my weblog expects (and do so at the push of a button in Emacs, of course). So this is the first posting written in Org. It won’t be the last.
|
||||
|
||||
P.S. Happy birthday [little weblog][12].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://so.nwalsh.com/2019/03/01/emacs
|
||||
|
||||
作者:[Norman Walsh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://so.nwalsh.com
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Emacs
|
||||
[2]: https://en.wikipedia.org/wiki/IntelliJ_IDEA
|
||||
[3]: https://emacs-helm.github.io/helm/
|
||||
[4]: https://en.wikipedia.org/wiki/Gmail
|
||||
[5]: https://en.wikipedia.org/wiki/Lab_notebook
|
||||
[6]: https://en.wikipedia.org/wiki/Org-mode
|
||||
[7]: https://www.balisage.net/Proceedings/vol17/html/Walsh01/BalisageVol17-Walsh01.html
|
||||
[8]: https://github.com/ndw/ob-ml-marklogic/
|
||||
[9]: https://en.wikipedia.org/wiki/IOS
|
||||
[10]: https://beorgapp.com/
|
||||
[11]: https://github.com/ndw/org-to-xml
|
||||
[12]: https://so.nwalsh.com/2017/03/01/helloWorld
|
@ -1,187 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Create a Custom System Tray Indicator For Your Tasks on Linux)
|
||||
[#]: via: (https://fosspost.org/tutorials/custom-system-tray-icon-indicator-linux)
|
||||
[#]: author: (M.Hanny Sabbagh https://fosspost.org/author/mhsabbagh)
|
||||
|
||||
Create a Custom System Tray Indicator For Your Tasks on Linux
|
||||
======
|
||||
|
||||
System tray icons are still considered an amazing piece of functionality today. By just right-clicking on the icon and selecting the action you would like to take, you can ease your life a lot and save many unnecessary clicks on a daily basis.
|
||||
|
||||
When talking about useful system tray icons, examples like Skype, Dropbox and VLC do come to mind:
|
||||
|
||||
![Create a Custom System Tray Indicator For Your Tasks on Linux 11][1]
|
||||
|
||||
However, system tray icons can actually be quite a lot more useful if you simply build one yourself for your own needs. In this tutorial, we’ll explain how to do that in very simple steps.
|
||||
|
||||
### Prerequisites
|
||||
|
||||
We are going to build a custom system tray indicator using Python. Python is probably installed by default on all the major Linux distributions, so just check that it’s there (version 2.7). Additionally, we’ll need the gir1.2-appindicator3-0.1 package installed. It’s the library that allows us to easily create system tray indicators.
|
||||
|
||||
To install it on Ubuntu/Mint/Debian:
|
||||
|
||||
```
|
||||
sudo apt-get install gir1.2-appindicator3-0.1
|
||||
```
|
||||
|
||||
On Fedora:
|
||||
|
||||
```
|
||||
sudo dnf install libappindicator-gtk3
|
||||
```
|
||||
|
||||
For other distributions, just search for any packages containing appindicator.
|
||||
|
||||
On GNOME Shell, system tray icons were removed starting from 3.26. You’ll need to install the [following extension][2] (or possibly another extension) to re-enable the feature on your desktop. Otherwise, you won’t be able to see the indicator we are going to create here.
|
||||
|
||||
### Basic Code
|
||||
|
||||
Here’s the basic code of the indicator:
|
||||
|
||||
```
|
||||
#!/usr/bin/python
|
||||
import os
|
||||
import gi
# Pin the GObject introspection versions before importing (newer PyGObject requires this).
gi.require_version('Gtk', '3.0')
gi.require_version('AppIndicator3', '0.1')
from gi.repository import Gtk as gtk, AppIndicator3 as appindicator
|
||||
|
||||
def main():
|
||||
indicator = appindicator.Indicator.new("customtray", "semi-starred-symbolic", appindicator.IndicatorCategory.APPLICATION_STATUS)
|
||||
indicator.set_status(appindicator.IndicatorStatus.ACTIVE)
|
||||
indicator.set_menu(menu())
|
||||
gtk.main()
|
||||
|
||||
def menu():
|
||||
menu = gtk.Menu()
|
||||
|
||||
command_one = gtk.MenuItem('My Notes')
|
||||
command_one.connect('activate', note)
|
||||
menu.append(command_one)
|
||||
|
||||
exittray = gtk.MenuItem('Exit Tray')
|
||||
exittray.connect('activate', quit)
|
||||
menu.append(exittray)
|
||||
|
||||
menu.show_all()
|
||||
return menu
|
||||
|
||||
def note(_):
|
||||
os.system("gedit $HOME/Documents/notes.txt")
|
||||
|
||||
def quit(_):
|
||||
gtk.main_quit()
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
```
|
||||
|
||||
We’ll explain how the code works later. But for now, just save it in a text file under the name tray.py, and run it using Python:
|
||||
|
||||
```
|
||||
python tray.py
|
||||
```
|
||||
|
||||
You’ll see the indicator working as follows:
|
||||
|
||||
![Create a Custom System Tray Indicator For Your Tasks on Linux 13][3]
|
||||
|
||||
Now, to explain how we did the magic:
|
||||
|
||||
* The first lines of the code are nothing more than specifying the Python interpreter path and importing the libraries we are going to use in our indicator.
|
||||
|
||||
* `def main()`: This is the main function of the indicator. Under it we write the code to initialize and build the indicator.
|
||||
|
||||
* `indicator = appindicator.Indicator.new("customtray", "semi-starred-symbolic", appindicator.IndicatorCategory.APPLICATION_STATUS)`: Here we create a new indicator and call it `customtray`. This unique name ensures that the system doesn’t mix it up with other indicators that may be running. We also used the `semi-starred-symbolic` icon name as the default icon for our indicator. You could change this to anything else; say `firefox` (if you want the Firefox icon to be used for the indicator), or any other icon name you like. The last part, regarding `APPLICATION_STATUS`, is just ordinary code for the categorization/scope of the indicator.
|
||||
|
||||
* `indicator.set_status(appindicator.IndicatorStatus.ACTIVE)` : This line just turns the indicator on.
|
||||
|
||||
* `indicator.set_menu(menu())` : Here, we are saying that we want to use the `menu()` function (which we’ll define later) for creating the menu items of our indicator. This is important so that when you click on the indicator, you can see a list of possible actions to take.
|
||||
|
||||
* `gtk.main()` : Just run the main GTK loop.
|
||||
|
||||
* Under `menu()` you’ll see that we are creating the actions/items we want to provide using our indicator. `command_one = gtk.MenuItem(‘My Notes’)` simply initializes the first menu item with the text “My notes”, and then `command_one.connect(‘activate’, note)` connects the `activate` signal of that menu item to the `note()` function defined later; In other words, we are telling our system here: “When this menu item is clicked, run the note() function”. Finally, `menu.append(command_one)` adds that menu item to the list.
|
||||
|
||||
* The lines regarding `exittray` are just for creating an exit menu item to close the indicator any time you want.
|
||||
|
||||
* `menu.show_all()` and `return menu` are just ordinary code for returning the menu list to the indicator.
|
||||
|
||||
* Under `note(_)` you’ll see the code that is executed when the “My Notes” menu item is clicked. Here we just wrote `os.system("gedit $HOME/Documents/notes.txt")`; the `os.system` function allows us to run shell commands from inside Python, so here we wrote a command to open a file called `notes.txt` under the `Documents` folder in our home directory using the `gedit` editor. This, for example, can be your daily note-taking program from now on! (A non-blocking alternative is sketched below.)
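One caveat worth noting: `os.system` waits for the launched command to finish, so the GTK main loop is blocked while gedit stays open. A minimal non-blocking sketch using Python’s standard `subprocess` module instead (same file path as above):

```
import os
import subprocess

def note(_):
    # Popen returns immediately, so the tray menu stays responsive.
    subprocess.Popen(["gedit", os.path.expanduser("~/Documents/notes.txt")])
```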
|
||||
|
||||
### Adding your Needed Tasks
|
||||
|
||||
There are only 2 things you need to touch in the code:
|
||||
|
||||
1. Define a new menu item under `menu()` for your desired task.
|
||||
|
||||
2. Create a new function to run a specific action when that menu item is clicked.
|
||||
|
||||
|
||||
So, let’s say that you want to create a new menu item which, when clicked, plays a specific video/audio file on your hard disk using VLC. To do it, simply add the following 3 lines at line 17:
|
||||
|
||||
```
|
||||
command_two = gtk.MenuItem('Play video/audio')
|
||||
command_two.connect('activate', play)
|
||||
menu.append(command_two)
|
||||
```
|
||||
|
||||
And the following lines in line 30:
|
||||
|
||||
```
|
||||
def play(_):
|
||||
os.system("vlc /home/<username>/Videos/somevideo.mp4")
|
||||
```
|
||||
|
||||
Replace /home/<username>/Videos/somevideo.mp4 with the path to the video/audio file you want. Now save the file and run the indicator again:
|
||||
|
||||
```
|
||||
python tray.py
|
||||
```
|
||||
|
||||
This is how you’ll see it now:
|
||||
|
||||
![Create a Custom System Tray Indicator For Your Tasks on Linux 15][4]
|
||||
|
||||
And when you click on the newly-created menu item, VLC will start playing!
|
||||
|
||||
To create other items/tasks, simply repeat the steps. Just be careful to replace command_two with another name, like command_three, so that the variables don’t clash. Then define new separate functions, as we did with the play(_) function.
|
||||
|
||||
The possibilities are endless from here. For example, I use this approach to fetch some data from the web (using the urllib2 library) and display it whenever I like. I also use it to play an mp3 file in the background using the mpg123 command, and I define another menu item that runs killall mpg123 to stop playing that audio whenever I want. CS:GO on Steam, for example, takes a huge amount of time to exit (the window doesn’t close automatically), so as a workaround I simply minimize the window and click on a menu item I created which executes killall -9 csgo_linux64. A sketch of such menu items follows.
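As a sketch of those audio items (the function names and the file path are illustrative, and it assumes mpg123 is installed):

```
def play_audio(_):
    # The trailing '&' backgrounds mpg123 so the tray stays responsive.
    os.system("mpg123 $HOME/Music/song.mp3 &")

def stop_audio(_):
    os.system("killall mpg123")

# Wire them up inside menu(), exactly like the earlier items:
#   item = gtk.MenuItem('Play audio')
#   item.connect('activate', play_audio)
#   menu.append(item)
```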
|
||||
|
||||
You can use this indicator for anything: updating your system packages, running some other scripts whenever you want… literally anything.
|
||||
|
||||
### Autostart on Boot
|
||||
|
||||
We want our system tray indicator to start automatically on boot; we don’t want to run it manually each time. To do that, simply add the following command to your startup applications (after replacing the path to the tray.py file with yours):
|
||||
|
||||
```
|
||||
nohup python /home/<username>/tray.py &
|
||||
```
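Alternatively, if your desktop follows the XDG autostart convention, a sketch of an autostart entry saved as ~/.config/autostart/tray.desktop (the Exec path is illustrative):

```
[Desktop Entry]
Type=Application
Name=Custom Tray Indicator
Exec=python /home/<username>/tray.py
```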
|
||||
|
||||
The next time you reboot your system, the indicator will start automatically!
|
||||
|
||||
### Conclusion
|
||||
|
||||
You now know how to create your own system tray indicator for any task you may want. This method should save you a lot of time, depending on the nature and number of tasks you need to run on a daily basis. Some users may prefer creating aliases on the command line, but that requires you to always open a terminal window or have a drop-down terminal emulator available, while here the system tray indicator is always running and available to you.
|
||||
|
||||
Have you used this method to run your tasks before? We’d love to hear your thoughts.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fosspost.org/tutorials/custom-system-tray-icon-indicator-linux
|
||||
|
||||
作者:[M.Hanny Sabbagh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fosspost.org/author/mhsabbagh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/Screenshot-at-2019-02-28-0808.png?resize=407%2C345&ssl=1 (Create a Custom System Tray Indicator For Your Tasks on Linux 12)
|
||||
[2]: https://extensions.gnome.org/extension/1031/topicons/
|
||||
[3]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/03/Screenshot-at-2019-03-02-1041.png?resize=434%2C140&ssl=1 (Create a Custom System Tray Indicator For Your Tasks on Linux 14)
|
||||
[4]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/03/Screenshot-at-2019-03-02-1141.png?resize=440%2C149&ssl=1 (Create a Custom System Tray Indicator For Your Tasks on Linux 16)
|
@ -0,0 +1,247 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-1)
|
||||
[#]: via: (https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-part1/)
|
||||
[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)
|
||||
|
||||
Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-1
|
||||
======
|
||||
|
||||
As **Docker** usage and adoption grow faster and faster, monitoring **Docker container** images is becoming more challenging. With multiple Docker container images being created day by day, monitoring them is very important. There are already some built-in tools and technologies, but configuring them is a little complex. As microservices-based architecture becomes the de-facto standard in the coming days, learning such a tool adds one more weapon to your tool-set.
|
||||
|
||||
Given these scenarios, the need for a lightweight and robust tool was growing. **Portainer.io** (latest version 1.20.2) addresses this: the tool is very lightweight (one can configure it with just 2-3 commands) and has become popular among Docker users.
|
||||
|
||||
**This tool has advantages over other tools; some of these are as follows:**
|
||||
|
||||
* Lightweight (only 2-3 commands are required to install this tool; the installation image is also only around 26-30 MB in size)
|
||||
* Robust and easy to use
|
||||
* Can be used for Docker monitoring and builds
|
||||
* Provides a detailed overview of your Docker environments
|
||||
* Allows you to manage your containers, images, networks and volumes
|
||||
* Simple to deploy – it requires just one Docker command (which can be run from anywhere)
|
||||
* The complete Docker container environment can be monitored easily
|
||||
|
||||
|
||||
|
||||
**Portainer is also equipped with:**
|
||||
|
||||
* Community support
|
||||
* Enterprise support
|
||||
* Professional services (along with partner OEM services)
|
||||
|
||||
|
||||
|
||||
**The functionality and features of the Portainer tool are:**
|
||||
|
||||
1. It comes with a nice dashboard that is easy to use and monitor.
|
||||
2. Many built-in templates for ease of operation and creation
|
||||
3. Support of services (OEM, Enterprise level)
|
||||
4. Monitoring of containers, images, networks, volumes and configuration in near real-time.
|
||||
5. Includes Docker Swarm monitoring
|
||||
6. User management with many fancy capabilities
|
||||
|
||||
|
||||
|
||||
**Read also: [How to Install Docker CE on Ubuntu 16.04 / 18.04 LTS System][1]**
|
||||
|
||||
### How to install and configure Portainer.io on Ubuntu Linux / RHEL / CentOS
|
||||
|
||||
**Note:** This installation was done on Ubuntu 18.04, but the installation on RHEL & CentOS would be the same. We assume Docker CE is already installed on your system.
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ lsb_release -a
|
||||
No LSB modules are available.
|
||||
Distributor ID: Ubuntu
|
||||
Description: Ubuntu 18.04 LTS
|
||||
Release: 18.04
|
||||
Codename: bionic
|
||||
root@linuxtechi:~$
|
||||
```
|
||||
|
||||
Create the volume for Portainer:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo docker volume create portainer_data
|
||||
portainer_data
|
||||
root@linuxtechi:~$
|
||||
```
|
||||
|
||||
Launch and start the Portainer container using the following docker command:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
|
||||
Unable to find image 'portainer/portainer:latest' locally
|
||||
latest: Pulling from portainer/portainer
|
||||
d1e017099d17: Pull complete
|
||||
0b1e707a06d2: Pull complete
|
||||
Digest: sha256:d6cc2c20c0af38d8d557ab994c419c799a10fe825e4aa57fea2e2e507a13747d
|
||||
Status: Downloaded newer image for portainer/portainer:latest
|
||||
35286de9f2e21d197309575bb52b5599fec24d4f373cc27210d98abc60244107
|
||||
root@linuxtechi:~$
|
||||
```
|
||||
|
||||
Once the installation is complete, point your browser at port 9000 on the host / Docker engine where Portainer is running.
|
||||
|
||||
**Note:** If the OS firewall is enabled on your Docker host, make sure port 9000 is allowed, otherwise the GUI will not come up.
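For example, on a host running firewalld, opening the port could look like this (a sketch; adapt it for ufw or plain iptables):

```
sudo firewall-cmd --permanent --add-port=9000/tcp
sudo firewall-cmd --reload
```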
|
||||
|
||||
In my case, the IP address of my Docker host / engine is “192.168.1.16”, so the URL will be:
|
||||
|
||||
<http://192.168.1.16:9000>
|
||||
|
||||
[![Portainer-Login-User-Name-Password][2]][3]
|
||||
|
||||
Make sure you enter a password of at least 8 characters, leave the user name as admin, and then click “Create user”.
|
||||
|
||||
On the following screen, select the “Local” box.
|
||||
|
||||
[![Connect-Portainer-Local-Docker][4]][5]
|
||||
|
||||
Click on “Connect”
|
||||
|
||||
A nice GUI home screen appears, with admin as the logged-in user:
|
||||
|
||||
[![Portainer-io-Docker-Monitor-Dashboard][6]][7]
|
||||
|
||||
Portainer is now ready to launch and manage your Docker containers, and it can also be used for container monitoring.
|
||||
|
||||
### Bring-up container image on Portainer tool
|
||||
|
||||
[![Portainer-Endpoints][8]][9]
|
||||
|
||||
Now check the current status: two containers are already running, and if you create one more it appears instantly.
|
||||
|
||||
From your command line, kick-start one or two containers as below:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo docker run --name test -it debian
|
||||
Unable to find image 'debian:latest' locally
|
||||
latest: Pulling from library/debian
|
||||
e79bb959ec00: Pull complete
|
||||
Digest: sha256:724b0fbbda7fda6372ffed586670573c59e07a48c86d606bab05db118abe0ef5
|
||||
Status: Downloaded newer image for debian:latest
|
||||
root@linuxtechi:/#
|
||||
```
|
||||
|
||||
Now click the Refresh button in the Portainer GUI (an “Are you sure?” message appears; click “continue”), and you will see 3 containers highlighted as below:
|
||||
|
||||
[![Portainer-io-new-container-image][10]][11]
|
||||
|
||||
Click on “ **containers** ” (circled in red above); the next window appears with the “ **Dashboard Endpoint summary** ”:
|
||||
|
||||
[![Portainer-io-Docker-Container-Dash][12]][13]
|
||||
|
||||
On this page, click on “ **Containers** ” as highlighted in red. Now you are ready to monitor your containers.
|
||||
|
||||
### Simple Docker container image monitoring
|
||||
|
||||
After the above step, a fancy, nice-looking “Container List” page appears, as below:
|
||||
|
||||
[![Portainer-Container-List][14]][15]
|
||||
|
||||
All the containers can be controlled from here (stop, start, etc.).
|
||||
|
||||
**1)** Now, from this page, stop the “test” container we started earlier (the debian image):
|
||||
|
||||
To do this, select the check box in front of this container and click the Stop button above:
|
||||
|
||||
[![Stop-Container-Portainer-io-dashboard][16]][17]
|
||||
|
||||
From the command line, you can see that this container has now stopped or exited:
|
||||
|
||||
```
|
||||
root@linuxtechi:~$ sudo docker container ls -a
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
d45902e717c0 debian "bash" 21 minutes ago Exited (0) 49 seconds ago test
|
||||
08b96eddbae9 centos:7 "/bin/bash" About an hour ago Exited (137) 9 minutes ago mycontainer2
|
||||
35286de9f2e2 portainer/portainer "/portainer" 2 hours ago Up About an hour 0.0.0.0:9000->9000/tcp compassionate_benz
|
||||
root@linuxtechi:~$
|
||||
```
|
||||
|
||||
**2)** Now start the stopped containers (test & mycontainer2) from the Portainer GUI.
|
||||
|
||||
Select the check box in front of the stopped containers, then click on Start:
|
||||
|
||||
[![Start-Containers-Portainer-GUI][18]][19]
|
||||
|
||||
You will get a quick message saying “ **Container successfully started** ”, and the containers show a running state:
|
||||
|
||||
[![Conatiner-Started-successfully-Portainer-GUI][20]][21]
|
||||
|
||||
### Exploring various other options and features step by step
|
||||
|
||||
**1)** Click on “ **Images** ” as highlighted; you will get the window below:
|
||||
|
||||
[![Docker-Container-Images-Portainer-GUI][22]][23]
|
||||
|
||||
This is the list of available container images, some of which may not be running. These images can be imported, exported or uploaded to various locations; the screenshot below shows the same:
|
||||
|
||||
[![Upload-Docker-Container-Image-Portainer-GUI][24]][25]
|
||||
|
||||
**2)** Click on “ **Volumes** ” as highlighted; you will get the window below:
|
||||
|
||||
[![Volume-list-Portainer-io-gui][26]][27]
|
||||
|
||||
**3)** Volumes can be added easily: click on the “add volume” button and the window below appears.
|
||||
|
||||
Provide the name as “ **myvol** ” in the name box and click on “ **create the volume** ” button.
|
||||
|
||||
[![Volume-Creation-Portainer-io-gui][28]][29]
|
||||
|
||||
The newly created volume appears as below (in the unused state):
|
||||
|
||||
[![Volume-unused-Portainer-io-gui][30]][31]
|
||||
|
||||
#### Conclusion:
|
||||
|
||||
As the installation steps and configuration above show, and from playing around with the various options, you can see how easy and good-looking the Portainer.io tool is. It provides multiple features and options for building and monitoring Docker containers. As explained, it is a very lightweight tool, so it doesn’t add any overhead to the host system. The next set of options will be explored in part 2 of this series.
|
||||
|
||||
Read Also: **[Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-2][32]**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-part1/
|
||||
|
||||
作者:[Shashidhar Soppin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/shashidhar/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/how-to-setup-docker-on-ubuntu-server-16-04/
|
||||
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Login-User-Name-Password-1024x681.jpg
|
||||
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Login-User-Name-Password.jpg
|
||||
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Connect-Portainer-Local-Docker-1024x538.jpg
|
||||
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Connect-Portainer-Local-Docker.jpg
|
||||
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Monitor-Dashboard-1024x544.jpg
|
||||
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Monitor-Dashboard.jpg
|
||||
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Endpoints-1024x252.jpg
|
||||
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Endpoints.jpg
|
||||
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-new-container-image-1024x544.jpg
|
||||
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-new-container-image.jpg
|
||||
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Container-Dash-1024x544.jpg
|
||||
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Container-Dash.jpg
|
||||
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Container-List-1024x538.jpg
|
||||
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Container-List.jpg
|
||||
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Stop-Container-Portainer-io-dashboard-1024x447.jpg
|
||||
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Stop-Container-Portainer-io-dashboard.jpg
|
||||
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Start-Containers-Portainer-GUI-1024x449.jpg
|
||||
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Start-Containers-Portainer-GUI.jpg
|
||||
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Conatiner-Started-successfully-Portainer-GUI-1024x538.jpg
|
||||
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Conatiner-Started-successfully-Portainer-GUI.jpg
|
||||
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker-Container-Images-Portainer-GUI-1024x544.jpg
|
||||
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker-Container-Images-Portainer-GUI.jpg
|
||||
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Upload-Docker-Container-Image-Portainer-GUI-1024x544.jpg
|
||||
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Upload-Docker-Container-Image-Portainer-GUI.jpg
|
||||
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-list-Portainer-io-gui-1024x544.jpg
|
||||
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-list-Portainer-io-gui.jpg
|
||||
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Creation-Portainer-io-gui-1024x544.jpg
|
||||
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Creation-Portainer-io-gui.jpg
|
||||
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-unused-Portainer-io-gui-1024x544.jpg
|
||||
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-unused-Portainer-io-gui.jpg
|
||||
[32]: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/
|
@ -0,0 +1,207 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Fedora 30 Workstation Installation Guide with Screenshots)
|
||||
[#]: via: (https://www.linuxtechi.com/fedora-30-workstation-installation-guide/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
Fedora 30 Workstation Installation Guide with Screenshots
|
||||
======
|
||||
|
||||
If you are a **Fedora distribution** lover and always like to try things out on Fedora Workstation and Server, then there is good news for you: Fedora has released its latest OS edition, **Fedora 30**, for both Workstation and Server. One of the important updates in Fedora 30 over its previous release is that it has introduced **Fedora CoreOS** as a replacement for Fedora Atomic Host.
|
||||
|
||||
Some other notable updates in Fedora 30 are listed below:
|
||||
|
||||
* Updated GNOME desktop 3.32
|
||||
* New Linux Kernel 5.0.9
|
||||
* Updated Bash Version 5.0, PHP 7.3 & GCC 9
|
||||
* Updated Python 3.7.3, JDK 12, Ruby 2.6, Mesa 19.0.2 and Golang 1.12
|
||||
* Improved DNF (Default Package Manager)
|
||||
|
||||
|
||||
|
||||
In this article we will walk through the Fedora 30 Workstation installation steps for a laptop or desktop.
|
||||
|
||||
**The following are the minimum system requirements for Fedora 30 Workstation:**
|
||||
|
||||
* 1 GHz processor (2 GHz dual-core processor recommended)
|
||||
* 2 GB RAM
|
||||
* 15 GB unallocated Hard Disk
|
||||
* Bootable Media (USB / DVD)
|
||||
* Internet connection (optional)
|
||||
|
||||
|
||||
|
||||
Let’s jump into the installation steps.
|
||||
|
||||
### Step:1) Download Fedora 30 Workstation ISO File
|
||||
|
||||
Download the Fedora 30 Workstation ISO file onto your system from its official web site:
|
||||
|
||||
<https://getfedora.org/en/workstation/download/>
|
||||
|
||||
Once the ISO file is downloaded, burn it to either a USB drive or a DVD and make it bootable.
|
||||
|
||||
### Step:2) Boot Your Target System with Bootable media (USB Drive or DVD)
|
||||
|
||||
Reboot your target machine (i.e. the machine where you want to install Fedora 30) and set the boot medium to USB or DVD in the BIOS settings so the system boots from the bootable media.
|
||||
|
||||
### Step:3) Choose Start Fedora-Workstation-30 Live
|
||||
|
||||
When the system boots from the bootable media, we get the following screen. To begin installation on your system’s hard disk, choose “ **Start Fedora-Workstation-30 Live** ”:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Start-Fedora-Workstation-30-Live.jpg>
|
||||
|
||||
### Step:4) Select Install to Hard Drive Option
|
||||
|
||||
Select the “ **Install to Hard Drive** ” option to install Fedora 30 on your system’s hard disk. You can also try Fedora on your system without installing it; for that, select the “ **Try Fedora** ” option:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Fedora-30-Install-hard-drive.jpg>
|
||||
|
||||
### Step:5) Choose appropriate language for your Fedora 30 Installation
|
||||
|
||||
In this step, choose the language that will be used during the Fedora 30 installation:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Language-Fedora30-Installation.jpg>
|
||||
|
||||
Click on Continue
|
||||
|
||||
### Step:6) Choose Installation destination and partition Scheme
|
||||
|
||||
In the next window we are presented with the following screen. Here we choose our installation destination, i.e. the hard disk on which we will install:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Installation-Destination-Fedora-30.jpg>
|
||||
|
||||
On the next screen we see the locally available hard disks. Select the disk that suits your installation and then choose how you want to create partitions on it from the storage configuration tab.
|
||||
|
||||
If you choose the “ **Automatic** ” partition scheme, the installer will create the necessary partitions for your system automatically, but if you want to create your own customized partition scheme, choose the “ **Custom** ” option:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Custom-Partition-Fedora-30-installation.jpg>
|
||||
|
||||
Click on Done
|
||||
|
||||
In this article I will demonstrate how to create [**LVM**][1]-based custom partitions. In my case I have around 40 GB of unallocated disk space, so I will be creating the following partitions on it:
|
||||
|
||||
* /boot = 2 GB (ext4 file system)
|
||||
* /home = 15 GB (ext4 file system)
|
||||
* /var = 10 GB (ext4 file system)
|
||||
* / = 10 GB (ext4 file system)
|
||||
* Swap = 2 GB
|
||||
|
||||
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/LVM-Partition-MountPoint-Fedora-30-Installation.jpg>
|
||||
|
||||
Select “ **LVM** ” as the partitioning scheme and then click on the plus (+) symbol.
|
||||
|
||||
Specify the mount point as /boot and the partition size as 2 GB, then click on “Add mount point”:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/boot-partiton-fedora30-installation.jpg>
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/boot-standard-parttion-fedora-30.jpg>
|
||||
|
||||
Now create the next partition, /home, of size 15 GB. Click on the + symbol:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/home-partition-fedora-30-installation.jpg>
|
||||
|
||||
Click on “ **Add mount point** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Modify-Volume-Group-Fedora30-Installation.jpg>
|
||||
|
||||
As you might have noticed, the /home partition is created as an LVM partition under the default volume group. If you wish to change the default volume group name, click on the “ **Modify** ” option in the Volume Group tab.
|
||||
|
||||
Enter the volume group name you want to set and then click on Save. From now on, all the LVM partitions will be part of the fedora30 volume group:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Group-Fedora-30-Installation.jpg>
|
||||
|
||||
Similarly, create the next two partitions, **/var** and **/**, of 10 GB each:
|
||||
|
||||
**/var partition:**
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/var-partition-fedora30-installation.jpg>
|
||||
|
||||
**/ (slash) partition:**
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/slash-partition-fedora30-installation.jpg>
|
||||
|
||||
Now create the last partition, swap, of size 2 GB:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Swap-partition-fedora30-installation.jpg>
|
||||
|
||||
In the next window, click on Done
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Done-After-Parttions-Creation-Fedora30.jpg>
|
||||
|
||||
In the next screen, choose “ **Accept Changes** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Accept-Changes-Fedora30-Installation.jpg>
|
||||
|
||||
Now we get the Installation Summary window. Here you can also change the time zone to suit your installation, and then click on “ **Begin Installation** ”:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Begin-Installation-Fedora30-Installation.jpg>
|
||||
|
||||
### Step:7) Fedora 30 Installation started
|
||||
|
||||
In this step we can see that the Fedora 30 installation has started and is in progress:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Fedora-30-Installation-Progress.jpg>
|
||||
|
||||
Once the installation is completed, you will be prompted to restart your system:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Fedora30-Installation-Completed-Screen.jpg>
|
||||
|
||||
Click on Quit and reboot your system.
|
||||
|
||||
Don’t forget to change the boot medium in the BIOS settings so your system boots from the hard disk.
|
||||
|
||||
### Step:8) Welcome message and login Screen after reboot
|
||||
|
||||
When we reboot the Fedora 30 system for the first time after a successful installation, we get the welcome screen below:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Welcome-Screen-After-Fedora30-Installation.jpg>
|
||||
|
||||
Click on Next
|
||||
|
||||
On the next screen you can sync your online accounts, or you can skip this step:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Online-Accounts-Sync-Fedora30.jpg>
|
||||
|
||||
In the next window you will be required to specify a local account (user name) and its password; this account will later be used to log in to the system:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Local-Account-Fedora30-Installation.jpg>
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Local-Account-password-fedora30.jpg>
|
||||
|
||||
Click on Next
|
||||
|
||||
And finally, we get the screen below, which confirms that we are ready to use Fedora 30:
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Reday-to-use-fedora30-message.jpg>
|
||||
|
||||
Click on “ **Start Using Fedora** ”
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/05/Gnome-Desktop-Screen-Fedora30.jpg>
|
||||
|
||||
The GNOME desktop screen above confirms that we have successfully installed Fedora 30 Workstation. Now explore it and have fun 😊
|
||||
|
||||
In Fedora 30 Workstation, if you want to install any packages or software from the command line, use the DNF command.
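For example (htop here is just an illustrative package name):

```
sudo dnf install htop      # install a package
sudo dnf search terminal   # search package names and summaries
sudo dnf update            # update all installed packages
```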
|
||||
|
||||
Read More On: **[26 DNF Command Examples for Package Management in Fedora Linux][2]**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/fedora-30-workstation-installation-guide/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/lvm-good-way-to-utilize-disks-space/
|
||||
[2]: https://www.linuxtechi.com/dnf-command-examples-rpm-management-fedora-linux/
|
521
sources/tech/20190507 Prefer table driven tests.md
Normal file
@ -0,0 +1,521 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Prefer table driven tests)
|
||||
[#]: via: (https://dave.cheney.net/2019/05/07/prefer-table-driven-tests)
|
||||
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)
|
||||
|
||||
Prefer table driven tests
|
||||
======
|
||||
|
||||
I’m a big fan of testing, specifically [unit testing][1] and TDD ([done correctly][2], of course). A practice that has grown around Go projects is the idea of a table driven test. This post explores the how and why of writing a table driven test.
|
||||
|
||||
Let’s say we have a function that splits strings:
|
||||
|
||||
```
|
||||
// Split slices s into all substrings separated by sep and
|
||||
// returns a slice of the substrings between those separators.
|
||||
func Split(s, sep string) []string {
|
||||
var result []string
|
||||
i := strings.Index(s, sep)
|
||||
for i > -1 {
|
||||
result = append(result, s[:i])
|
||||
s = s[i+len(sep):]
|
||||
i = strings.Index(s, sep)
|
||||
}
|
||||
return append(result, s)
|
||||
}
|
||||
```
|
||||
|
||||
In Go, unit tests are just regular Go functions (with a few rules), so we write a unit test for this function starting with a file in the same directory, with the same package name, `split`.
|
||||
|
||||
```
|
||||
package split
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestSplit(t *testing.T) {
|
||||
got := Split("a/b/c", "/")
|
||||
want := []string{"a", "b", "c"}
|
||||
if !reflect.DeepEqual(want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", want, got)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Tests are just regular Go functions with a few rules:
|
||||
|
||||
1. The name of the test function must start with `Test`.
|
||||
2. The test function must take one argument of type `*testing.T`. A `*testing.T` is a type injected by the testing package itself, to provide ways to print, skip, and fail the test.
|
||||
|
||||
|
||||
|
||||
In our test we call `Split` with some inputs, then compare the result to the one we expected.
|
||||
|
||||
### Code coverage
|
||||
|
||||
The next question is: what is the coverage of this package? Luckily the go tool has built-in coverage support. We can invoke it like this:
|
||||
|
||||
```
|
||||
% go test -coverprofile=c.out
|
||||
PASS
|
||||
coverage: 100.0% of statements
|
||||
ok split 0.010s
|
||||
```
|
||||
|
||||
Which tells us we have 100% statement coverage, which isn’t really surprising as there’s only one branch in this code.
|
||||
|
||||
If we want to dig in to the coverage report the go tool has several options to print the coverage report. We can use `go tool cover -func` to break down the coverage per function:
|
||||
|
||||
```
|
||||
% go tool cover -func=c.out
|
||||
split/split.go:8: Split 100.0%
|
||||
total: (statements) 100.0%
|
||||
```
|
||||
|
||||
Which isn’t that exciting as we only have one function in this package, but I’m sure you’ll find more exciting packages to test.
|
||||
|
||||
#### Spray some .bashrc on that
|
||||
|
||||
This pair of commands is so useful for me I have a shell alias which runs the test coverage and the report in one command:
|
||||
|
||||
```
|
||||
cover () {
|
||||
local t=$(mktemp -t cover)
|
||||
go test $COVERFLAGS -coverprofile=$t $@ \
|
||||
&& go tool cover -func=$t \
|
||||
&& unlink $t
|
||||
}
|
||||
```
|
||||
|
||||
### Going beyond 100% coverage
|
||||
|
||||
So, we wrote one test case and got 100% coverage, but this isn’t really the end of the story. We have good branch coverage but we probably need to test some of the boundary conditions. For example, what happens if we try to split on a comma?
|
||||
|
||||
```
|
||||
func TestSplitWrongSep(t *testing.T) {
|
||||
got := Split("a/b/c", ",")
|
||||
want := []string{"a/b/c"}
|
||||
if !reflect.DeepEqual(want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", want, got)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Or, what happens if there are no separators in the source string?
|
||||
|
||||
```
|
||||
func TestSplitNoSep(t *testing.T) {
|
||||
got := Split("abc", "/")
|
||||
want := []string{"abc"}
|
||||
if !reflect.DeepEqual(want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", want, got)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
We’re starting to build a set of test cases that exercise boundary conditions. This is good.
|
||||
|
||||
### Introducing table driven tests
|
||||
|
||||
However, there is a lot of duplication in our tests. For each test case only the input, the expected output, and the name of the test case change; everything else is boilerplate. What we’d like is to set up all the inputs and expected outputs and feed them to a single test harness. This is a great time to introduce table driven testing.
|
||||
|
||||
```
|
||||
func TestSplit(t *testing.T) {
|
||||
type test struct {
|
||||
input string
|
||||
sep string
|
||||
want []string
|
||||
}
|
||||
|
||||
tests := []test{
|
||||
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
{input: "abc", sep: "/", want: []string{"abc"}},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
got := Split(tc.input, tc.sep)
|
||||
if !reflect.DeepEqual(tc.want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", tc.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
We declare a structure to hold our test inputs and expected outputs. This is our table. The `tests` structure is usually a local declaration because we want to reuse this name for other tests in this package.
|
||||
|
||||
In fact, we don’t even need to give the type a name; we can use an anonymous struct literal to reduce the boilerplate, like this:
|
||||
|
||||
```
|
||||
func TestSplit(t *testing.T) {
|
||||
tests := []struct {
|
||||
input string
|
||||
sep string
|
||||
want []string
|
||||
}{
|
||||
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
{input: "abc", sep: "/", want: []string{"abc"}},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
got := Split(tc.input, tc.sep)
|
||||
if !reflect.DeepEqual(tc.want, got) {
|
||||
t.Fatalf("expected: %v, got: %v", tc.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Now, adding a new test is a straightforward matter; simply add another row to the `tests` structure. For example, what will happen if our input string has a trailing separator?
|
||||
|
||||
```
|
||||
{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
|
||||
{input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
|
||||
{input: "abc", sep: "/", want: []string{"abc"}},
|
||||
{input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}}, // trailing sep
|
||||
```
|
||||
|
||||
But, when we run `go test`, we get
|
||||
|
||||
```
|
||||
% go test
|
||||
--- FAIL: TestSplit (0.00s)
|
||||
split_test.go:24: expected: [a b c], got: [a b c ]
|
||||
```

Putting aside the test failure, there are a few problems to talk about.

The first is that by rewriting each test from a function to a row in a table we’ve lost the name of the failing test. We added a comment in the test file to call out this case, but we don’t have access to that comment in the `go test` output.

There are a few ways to resolve this. You’ll see a mix of styles in use in Go code bases because the table testing idiom is evolving as people continue to experiment with the form.

### Enumerating test cases

As tests are stored in a slice, we can print out the index of the test case in the failure message:

```
func TestSplit(t *testing.T) {
    tests := []struct {
        input string
        sep   string
        want  []string
    }{
        {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        {input: "abc", sep: "/", want: []string{"abc"}},
        {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
    }

    for i, tc := range tests {
        got := Split(tc.input, tc.sep)
        if !reflect.DeepEqual(tc.want, got) {
            t.Fatalf("test %d: expected: %v, got: %v", i+1, tc.want, got)
        }
    }
}
```

Now when we run `go test` we get this:

```
% go test
--- FAIL: TestSplit (0.00s)
    split_test.go:24: test 4: expected: [a b c], got: [a b c ]
```

Which is a little better. Now we know that the fourth test is failing, although we had to do a little bit of fudging because slice indexing (and range iteration) is zero based. This requires consistency across your test cases; if some use zero-based reporting and others use one-based, it’s going to be confusing. And, if the list of test cases is long, it could be difficult to count braces to figure out exactly which fixture constitutes test case number four.

### Give your test cases names

Another common pattern is to include a name field in the test fixture.

```
func TestSplit(t *testing.T) {
    tests := []struct {
        name  string
        input string
        sep   string
        want  []string
    }{
        {name: "simple", input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        {name: "wrong sep", input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        {name: "no sep", input: "abc", sep: "/", want: []string{"abc"}},
        {name: "trailing sep", input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
    }

    for _, tc := range tests {
        got := Split(tc.input, tc.sep)
        if !reflect.DeepEqual(tc.want, got) {
            t.Fatalf("%s: expected: %v, got: %v", tc.name, tc.want, got)
        }
    }
}
```

Now when the test fails we have a descriptive name for what the test was doing. We no longer have to try to figure it out from the output; we also now have a string we can search on.

```
% go test
--- FAIL: TestSplit (0.00s)
    split_test.go:25: trailing sep: expected: [a b c], got: [a b c ]
```

We can dry this up even more using a map literal syntax:

```
func TestSplit(t *testing.T) {
    tests := map[string]struct {
        input string
        sep   string
        want  []string
    }{
        "simple":       {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        "wrong sep":    {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        "no sep":       {input: "abc", sep: "/", want: []string{"abc"}},
        "trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
    }

    for name, tc := range tests {
        got := Split(tc.input, tc.sep)
        if !reflect.DeepEqual(tc.want, got) {
            t.Fatalf("%s: expected: %v, got: %v", name, tc.want, got)
        }
    }
}
```

Using map literal syntax we define our test cases not as a slice of structs, but as a map of test names to test fixtures. There’s also a side benefit of using a map that is going to potentially improve the utility of our tests.

Map iteration order is _undefined_.¹ This means each time we run `go test`, our tests are potentially going to run in a different order.

This is super useful for spotting conditions where tests pass when run in statement order, but not otherwise. If you find that happens, you probably have some global state that is being mutated by one test, with subsequent tests depending on that modification.
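
To make that concrete, here is a deliberately contrived sketch of the failure mode (the `count` variable and both case names are hypothetical); it uses the sub tests introduced in the next section, and the `"check"` case only passes when the map happens to iterate `"increment"` first:

```go
var count int // hypothetical package-level state shared by the cases

func TestCounter(t *testing.T) {
    tests := map[string]func(t *testing.T){
        "increment": func(t *testing.T) { count++ },
        "check": func(t *testing.T) {
            // Passes only if "increment" happened to run first,
            // which map iteration order does not guarantee.
            if count != 1 {
                t.Fatalf("count = %d, want 1", count)
            }
        },
    }
    for name, fn := range tests {
        t.Run(name, fn)
    }
}
```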

### Introducing sub tests

Before we fix the failing test there are a few other issues to address in our table driven test harness.

The first is that we’re calling `t.Fatalf` when one of the test cases fails. This means that after the first failing test case we stop testing the other cases. Because test cases are run in an undefined order, if there is a test failure, it would be nice to know if it was the only failure or just the first.

The testing package would do this for us if we went to the effort of writing out each test case as its own function, but that’s quite verbose. The good news is that since Go 1.7 a feature has been available that lets us do this easily for table driven tests. They’re called [sub tests][3].

```
func TestSplit(t *testing.T) {
    tests := map[string]struct {
        input string
        sep   string
        want  []string
    }{
        "simple":       {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        "wrong sep":    {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        "no sep":       {input: "abc", sep: "/", want: []string{"abc"}},
        "trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
    }

    for name, tc := range tests {
        t.Run(name, func(t *testing.T) {
            got := Split(tc.input, tc.sep)
            if !reflect.DeepEqual(tc.want, got) {
                t.Fatalf("expected: %v, got: %v", tc.want, got)
            }
        })
    }
}
```

As each sub test now has a name, we get that name automatically printed out in any test runs.

```
% go test
--- FAIL: TestSplit (0.00s)
    --- FAIL: TestSplit/trailing_sep (0.00s)
        split_test.go:25: expected: [a b c], got: [a b c ]
```

Each subtest is its own anonymous function, therefore we can use `t.Fatalf`, `t.Skipf`, and all the other `testing.T` helpers, while retaining the compactness of a table driven test.
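
A related refinement, and an observation of mine rather than something the post itself covers: because each case is now its own function, you can also mark the sub tests parallel. If you do, re-declare the loop variable inside the loop; before Go 1.22 the closures would otherwise all see the final iteration’s value. A sketch:

```go
for name, tc := range tests {
    tc := tc // capture the range variable for the parallel closure (pre-Go 1.22)
    t.Run(name, func(t *testing.T) {
        t.Parallel() // run this case concurrently with its siblings
        got := Split(tc.input, tc.sep)
        if !reflect.DeepEqual(tc.want, got) {
            t.Fatalf("expected: %v, got: %v", tc.want, got)
        }
    })
}
```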

#### Individual sub test cases can be executed directly

Because sub tests have a name, you can run a selection of sub tests by name using the `go test -run` flag.

```
% go test -run=.*/trailing -v
=== RUN   TestSplit
=== RUN   TestSplit/trailing_sep
--- FAIL: TestSplit (0.00s)
    --- FAIL: TestSplit/trailing_sep (0.00s)
        split_test.go:25: expected: [a b c], got: [a b c ]
```

### Comparing what we got with what we wanted

Now we’re ready to fix the test case. Let’s look at the error.

```
--- FAIL: TestSplit (0.00s)
    --- FAIL: TestSplit/trailing_sep (0.00s)
        split_test.go:25: expected: [a b c], got: [a b c ]
```

Can you spot the problem? Clearly the slices are different; that’s what `reflect.DeepEqual` is upset about. But spotting the actual difference isn’t easy; you have to spot that extra space after `c`. This might look simple in this simple example, but it is anything but simple when you’re comparing two complicated, deeply nested gRPC structures.

We can improve the output if we switch to the `%#v` syntax to view the value as a Go(ish) declaration:

```
got := Split(tc.input, tc.sep)
if !reflect.DeepEqual(tc.want, got) {
    t.Fatalf("expected: %#v, got: %#v", tc.want, got)
}
```

Now when we run our test it’s clear that the problem is that there is an extra blank element in the slice.

```
% go test
--- FAIL: TestSplit (0.00s)
    --- FAIL: TestSplit/trailing_sep (0.00s)
        split_test.go:25: expected: []string{"a", "b", "c"}, got: []string{"a", "b", "c", ""}
```

But before we go to fix our test failure, I want to talk a little bit more about choosing the right way to present test failures. Our `Split` function is simple; it takes a primitive string and returns a slice of strings. But what if it worked with structs, or worse, pointers to structs?

Here is an example where `%#v` does not work as well (the `package` and `import` lines are added so the snippet compiles on its own):

```
package main

import "fmt"

func main() {
    type T struct {
        I int
    }
    x := []*T{{1}, {2}, {3}}
    y := []*T{{1}, {2}, {4}}
    fmt.Printf("%v %v\n", x, y)
    fmt.Printf("%#v %#v\n", x, y)
}
```

The first `fmt.Printf` prints the unhelpful, but expected, slice of addresses: `[0xc000096000 0xc000096008 0xc000096010] [0xc000096018 0xc000096020 0xc000096028]`. However, our `%#v` version doesn’t fare any better, printing a slice of addresses cast to `*main.T`: `[]*main.T{(*main.T)(0xc000096000), (*main.T)(0xc000096008), (*main.T)(0xc000096010)} []*main.T{(*main.T)(0xc000096018), (*main.T)(0xc000096020), (*main.T)(0xc000096028)}`

Because of the limitations in using any `fmt.Printf` verb, I want to introduce the [go-cmp][4] library from Google.

The goal of the cmp library is specifically to compare two values. This is similar to `reflect.DeepEqual`, but it has more capabilities. Using the cmp package you can, of course, write:

```
package main

import (
    "fmt"

    "github.com/google/go-cmp/cmp"
)

func main() {
    type T struct {
        I int
    }
    x := []*T{{1}, {2}, {3}}
    y := []*T{{1}, {2}, {4}}
    fmt.Println(cmp.Equal(x, y)) // false
}
```

But far more useful for us with our test function is the `cmp.Diff` function, which will produce a textual description of what is different between the two values, recursively.

```
package main

import (
    "fmt"

    "github.com/google/go-cmp/cmp"
)

func main() {
    type T struct {
        I int
    }
    x := []*T{{1}, {2}, {3}}
    y := []*T{{1}, {2}, {4}}
    diff := cmp.Diff(x, y)
    fmt.Print(diff) // Print, not Printf: the diff may contain % signs
}
```

Which instead produces:

```
% go run
{[]*main.T}[2].I:
	-: 3
	+: 4
```

Telling us that at element 2 of the slice of `T`s the `I` field was expected to be 3, but was actually 4.

Putting this all together, we have our table driven go-cmp test:

```
func TestSplit(t *testing.T) {
    tests := map[string]struct {
        input string
        sep   string
        want  []string
    }{
        "simple":       {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}},
        "wrong sep":    {input: "a/b/c", sep: ",", want: []string{"a/b/c"}},
        "no sep":       {input: "abc", sep: "/", want: []string{"abc"}},
        "trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}},
    }

    for name, tc := range tests {
        t.Run(name, func(t *testing.T) {
            got := Split(tc.input, tc.sep)
            diff := cmp.Diff(tc.want, got)
            if diff != "" {
                t.Fatal(diff) // Fatal, not Fatalf: the diff is not a format string
            }
        })
    }
}
```

Running this we get

```
% go test
--- FAIL: TestSplit (0.00s)
    --- FAIL: TestSplit/trailing_sep (0.00s)
        split_test.go:27: {[]string}[?->3]:
        	-: <non-existent>
        	+: ""
FAIL
exit status 1
FAIL    split   0.006s
```

Using `cmp.Diff`, our test harness isn’t just telling us that what we got and what we wanted were different. Our test is telling us that the slices are different lengths: the third index in the fixture shouldn’t exist, but in the actual output we got an empty string, “”. From here, fixing the test failure is straightforward.
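
As a sketch of where that fix leads (assuming the `Split` implementation from earlier in the article, which always appends the remainder of the string, and with `strings` imported), we simply skip the empty remainder a trailing separator leaves behind:

```go
func Split(s, sep string) []string {
    var result []string
    i := strings.Index(s, sep)
    for i > -1 {
        result = append(result, s[:i])
        s = s[i+len(sep):]
        i = strings.Index(s, sep)
    }
    if s != "" { // skip the empty remainder produced by a trailing sep
        result = append(result, s)
    }
    return result
}
```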

1. Please don’t email me to argue that map iteration order is _random_. [It’s not][5].

#### Related posts:

1. [Writing table driven tests in Go][6]
2. [Internets of Interest #7: Ian Cooper on Test Driven Development][7]
3. [Automatically run your package’s tests with inotifywait][8]
4. [How to write benchmarks in Go][9]
--------------------------------------------------------------------------------

via: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests

作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://dave.cheney.net/2019/04/03/absolute-unit-test
[2]: https://www.youtube.com/watch?v=EZ05e7EMOLM
[3]: https://blog.golang.org/subtests
[4]: https://github.com/google/go-cmp
[5]: https://golang.org/ref/spec#For_statements
[6]: https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go (Writing table driven tests in Go)
[7]: https://dave.cheney.net/2018/10/15/internets-of-interest-7-ian-cooper-on-test-driven-development (Internets of Interest #7: Ian Cooper on Test Driven Development)
[8]: https://dave.cheney.net/2016/06/21/automatically-run-your-packages-tests-with-inotifywait (Automatically run your package’s tests with inotifywait)
[9]: https://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go (How to write benchmarks in Go)
@ -0,0 +1,256 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots)
[#]: via: (https://www.linuxtechi.com/rhel-8-installation-steps-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots
======

Red Hat released its much awaited OS **RHEL 8** on 7th May 2019. RHEL 8 is based on the **Fedora 28** distribution and Linux **kernel version 4.18**. One of the key features in RHEL 8 is the introduction of “**Application Streams**”, which allow developer tools, frameworks and languages to be updated frequently without impacting the core resources of the base OS. In other words, application streams help to segregate the user-space packages from the OS kernel space.

Apart from this, there are many other new features in RHEL 8, like:

* The XFS file system supports copy-on-write of file extents
* Introduction of Stratis filesystem, Buildah, Podman, and Skopeo
* The Yum utility is based on DNF
* Chrony replaces NTP
* Cockpit is the default web console tool for server management
* OpenSSL 1.1.1 & TLS 1.3 support
* PHP 7.2
* iptables replaced by nftables

### Minimum System Requirements for RHEL 8:

* 4 GB RAM
* 20 GB unallocated disk space
* 64-bit x86 or ARM system

**Note:** RHEL 8 supports the following architectures:

* AMD or Intel x86 64-bit
* 64-bit ARM
* IBM Power Systems, Little Endian & IBM Z

In this article we will demonstrate how to install RHEL 8 step by step with screenshots.

### RHEL 8 Installation Steps with Screenshots

### Step:1) Download RHEL 8.0 ISO file

Download the RHEL 8 ISO file from its official web site,

<https://access.redhat.com/downloads/>

I am assuming you have an active subscription; if not, register yourself for an evaluation and then download the ISO file.

### Step:2) Create installation bootable media (USB or DVD)

Once you have downloaded the RHEL 8 ISO file, make it bootable by burning it either onto a USB drive or a DVD. Reboot the target system where you want to install RHEL 8, then go to its BIOS settings and set the boot medium to USB or DVD.
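
If you are preparing the USB stick from an existing Linux system, one common approach is `dd`. Note that `/dev/sdX` below is a placeholder for your actual USB device, and `dd` will destroy whatever is on it, so double-check the device name first:

```
# dd if=rhel-8.0-x86_64-dvd.iso of=/dev/sdX bs=4M status=progress && sync
```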

### Step:3) Choose “Install Red Hat Enterprise Linux 8.0” option

When the system boots up with the installation media (USB or DVD), we will get the following screen; choose “**Install Red Hat Enterprise Linux 8.0**” and hit enter,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Install-RHEL8.jpg>

### Step:4) Choose your preferred language for RHEL 8 installation

In this step, you need to choose the language that you want to use for the RHEL 8 installation, so make a selection that suits your setup.

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Language-RHEL8-Installation.jpg>

Click on Continue

### Step:5) Preparing RHEL 8 Installation

In this step we will decide the installation destination for RHEL 8; apart from this, we can configure the following:

* Time Zone
* Kdump (enabled/disabled)
* Software Selection (Packages)
* Networking and Hostname
* Security Policies & System Purpose

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Installation-summary-rhel8.jpg>

By default, the installer will automatically pick a time zone and will enable **kdump**; if you wish to change the time zone, click on the “**Time & Date**” option, set your preferred time zone and then click on Done.

<https://www.linuxtechi.com/wp-content/uploads/2019/05/timezone-rhel8-installation.jpg>

To configure the IP address and hostname, click on the “**Network & Hostname**” option from the installation summary screen,

If your system is connected to a switch or modem, it will try to get an IP from the DHCP server; otherwise we can configure the IP manually.

Mention the hostname that you want to set and then click on “**Apply**”. Once you are done with the IP address and hostname configuration, click on “Done”

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Network-Hostname-RHEL8-Installation.jpg>

To define the installation disk and partition scheme for RHEL 8, click on the “**Installation Destination**” option,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Installation-Disk-RHEL8-Installation.jpg>

Click on Done

As we can see, I have around 60 GB of free disk space on the sda drive; I will be creating the following customized LVM-based partitions on this disk:

* /boot = 2 GB (xfs file system)
* / = 20 GB (xfs file system)
* /var = 10 GB (xfs file system)
* /home = 15 GB (xfs file system)
* /tmp = 5 GB (xfs file system)
* Swap = 2 GB

**Note:** If you don’t want to create manual partitions, select the “**Automatic**” option from the Storage Configuration tab.

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Create-New-Partition-RHEL8-Installation.jpg>

Let’s create our first partition, /boot, of size 2 GB. Select LVM as the mount point partitioning scheme and then click on the + “plus” symbol,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/boot-partition-rhel8-installation.jpg>

Click on “**Add mount point**”

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Boot-partition-details-rhel8-installation.jpg>

To create the next partition, / of size 20 GB, click on the + symbol and specify the details as shown below,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/slash-partition-rhel8-installation.jpg>

Click on “Add mount point”

<https://www.linuxtechi.com/wp-content/uploads/2019/05/slash-root-partition-details-rhel8-installation.jpg>

As we can see, the installer has created the volume group as “**rhel_rhel8**”; if you want to change this name, click on the Modify option, specify the desired name and then click on Save.

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Change-VolumeGroup-RHEL8-Installation.jpg>

From now on, all the partitions we create will be part of the volume group “**VolGrp**”

Similarly, create the next three partitions **/home**, **/var** and **/tmp**, of size 15 GB, 10 GB and 5 GB respectively.

**/home partition:**

<https://www.linuxtechi.com/wp-content/uploads/2019/05/home-partition-rhel8-installation.jpg>

**/var partition:**

<https://www.linuxtechi.com/wp-content/uploads/2019/05/var-partition-rhel8-installation.jpg>

**/tmp partition:**

<https://www.linuxtechi.com/wp-content/uploads/2019/05/tmp-partition-rhel8-installation.jpg>

Now finally create the last partition, swap, of size 2 GB,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Swap-Partition-RHEL8-Installation.jpg>

Click on “Add mount point”

Once you are done with the partition creation, click on Done on the next screen; an example is shown below,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Choose-Done-after-partition-creation-rhel8-installation.jpg>

In the next window, choose “**Accept Changes**”

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Accept-Changes-RHEL8-Installation.jpg>
### Step:6) Select Software Packages and Choose Security Policy and System Purpose

After accepting the changes in the above step, we will be redirected to the installation summary window.

By default, the installer will select “**Server with GUI**” as the software package set; if you want to change it, click on the “**Software Selection**” option and choose your preferred “**Basic Environment**”

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Software-Selection-RHEL8-Installation.jpg>

Click on Done

If you want to set security policies during the installation, choose the required profile from the Security Policies option; otherwise you can leave it as it is.

From the “**System Purpose**” option, specify the Role, Red Hat Service Level Agreement and Usage. You can also leave this option as it is.

<https://www.linuxtechi.com/wp-content/uploads/2019/05/System-role-agreement-usage-rhel8-installation.jpg>

Click on Done to proceed further.

### Step:7) Choose “Begin Installation” option to start installation

From the installation summary window, click on the “Begin Installation” option to start the installation,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Begin-Installation-RHEL8-Installation.jpg>

As we can see below, the RHEL 8 installation has started and is in progress,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/RHEL8-Installation-Progress.jpg>

Set the root password,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Root-Password-RHEL8.jpg>

Specify the local user details, like the full name, user name and password,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/LocalUser-Details-RHEL8-Installation.jpg>

Once the installation is completed, the installer will prompt us to reboot the system,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/RHEL8-Installation-Completed-Message.jpg>

Click on “Reboot” to restart your system, and don’t forget to change the boot medium back in the BIOS settings so that the system boots up from the hard disk.

### Step:8) Initial Setup after installation

When the system is rebooted for the first time after the successful installation, we will get the window below, where we need to accept the license (EULA),

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Accept-EULA-RHEL8-Installation.jpg>

Click on Done,

In the next screen, click on “**Finish Configuration**”

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Finish-Configuration-RHEL8-Installation.jpg>

### Step:9) Login Screen of RHEL 8 Server after Installation

As we installed RHEL 8 Server with a GUI, we will get the login screen below; use the same user name and password that we created during the installation.

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Login-Screen-RHEL8.jpg>

After login we will get a couple of welcome screens; follow the on-screen instructions, and finally we will get the following screen,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Ready-to-Use-RHEL8.jpg>

Click on “Start Using Red Hat Enterprise Linux”

<https://www.linuxtechi.com/wp-content/uploads/2019/05/GNOME-Desktop-RHEL8-Server.jpg>

This confirms that we have successfully installed RHEL 8; that’s all from this article. We will be writing more articles on RHEL 8 in the near future; till then, please do share your feedback and comments on this article.

Read Also: **[How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File][1]**

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/rhel-8-installation-steps-screenshots/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/
@ -0,0 +1,164 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File)
[#]: via: (https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File
======

Recently Red Hat released its much awaited operating system “**RHEL 8**”. In case you have installed RHEL 8 Server on your system and are wondering how to set up a local yum or dnf repository using the installation DVD or ISO file, refer to the steps and procedure below.

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Setup-Local-Repo-RHEL8.jpg>

In RHEL 8, we have two package repositories:

* BaseOS
* AppStream

The BaseOS repository has all the underlying OS packages, whereas the AppStream repository has all the application-related packages, developer tools, databases, etc. Using the AppStream repository, we can have multiple versions of the same application or database.

### Step:1) Mount RHEL 8 ISO file / Installation DVD

To mount the RHEL 8 ISO file inside your RHEL 8 server, use the mount command below,

```
[root@linuxtechi ~]# mount -o loop rhel-8.0-x86_64-dvd.iso /opt/
```

**Note:** I am assuming you have already copied the RHEL 8 ISO file onto your system.

In case you have the RHEL 8 installation DVD, use the mount command below to mount it,

```
[root@linuxtechi ~]# mount /dev/sr0 /opt
```

### Step:2) Copy media.repo file from mounted directory to /etc/yum.repos.d/

In our case the RHEL 8 installation DVD or ISO file is mounted under the /opt folder; use the cp command to copy the media.repo file to the /etc/yum.repos.d/ directory,

```
[root@linuxtechi ~]# cp -v /opt/media.repo /etc/yum.repos.d/rhel8.repo
'/opt/media.repo' -> '/etc/yum.repos.d/rhel8.repo'
[root@linuxtechi ~]#
```

Set “644” permission on “**/etc/yum.repos.d/rhel8.repo**”

```
[root@linuxtechi ~]# chmod 644 /etc/yum.repos.d/rhel8.repo
[root@linuxtechi ~]#
```
### Step:3) Add repository entries in “/etc/yum.repos.d/rhel8.repo” file

By default, the **rhel8.repo** file will have the following content,

<https://www.linuxtechi.com/wp-content/uploads/2019/05/default-rhel8-repo-file.jpg>

Edit the rhel8.repo file and add the following contents,

```
[root@linuxtechi ~]# vi /etc/yum.repos.d/rhel8.repo
[InstallMedia-BaseOS]
name=Red Hat Enterprise Linux 8 - BaseOS
metadata_expire=-1
gpgcheck=1
enabled=1
baseurl=file:///opt/BaseOS/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[InstallMedia-AppStream]
name=Red Hat Enterprise Linux 8 - AppStream
metadata_expire=-1
gpgcheck=1
enabled=1
baseurl=file:///opt/AppStream/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
```

rhel8.repo should look like the above once we add the content. In case you have mounted the installation DVD or ISO on a different folder, change the location and folder name in the baseurl line for both repositories and leave the rest of the parameters as they are.
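
Keep in mind that a loop mount like this does not survive a reboot. If you want the repository to persist, one option (an illustrative sketch; adjust the ISO path to wherever you keep the file) is to add an entry to /etc/fstab:

```
/root/rhel-8.0-x86_64-dvd.iso   /opt   iso9660   loop,ro   0 0
```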

### Step:4) Clean Yum / DNF and Subscription Manager Cache

Use the following commands to clear the yum/dnf and subscription manager caches,

```
[root@linuxtechi ~]# dnf clean all
[root@linuxtechi ~]# subscription-manager clean
All local data removed
[root@linuxtechi ~]#
```

### Step:5) Verify whether Yum / DNF is getting packages from Local Repo

Use the dnf or yum repolist command to verify whether these commands are getting packages from the local repositories,

```
[root@linuxtechi ~]# dnf repolist
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Last metadata expiration check: 1:32:44 ago on Sat 11 May 2019 08:48:24 AM BST.
repo id                  repo name                                 status
InstallMedia-AppStream   Red Hat Enterprise Linux 8 - AppStream    4,672
InstallMedia-BaseOS      Red Hat Enterprise Linux 8 - BaseOS       1,658
[root@linuxtechi ~]#
```

**Note:** You can use either the dnf or the yum command; if you use the yum command, its request is redirected to DNF itself, because in RHEL 8 yum is based on DNF.
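
You can see this redirection for yourself: on a stock RHEL 8 install, `yum` is a symbolic link into DNF (the output below is abbreviated and illustrative):

```
[root@linuxtechi ~]# ls -l /usr/bin/yum
lrwxrwxrwx. 1 root root 5 ... /usr/bin/yum -> dnf-3
```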

If you looked at the above command output carefully, you will have noticed the warning message “**This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register**”. If you want to suppress or prevent this message while running dnf/yum commands, edit the file “/etc/yum/pluginconf.d/subscription-manager.conf” and change the parameter “enabled=1” to “enabled=0”,

```
[root@linuxtechi ~]# vi /etc/yum/pluginconf.d/subscription-manager.conf
[main]
enabled=0
```

Save and exit the file.

### Step:6) Installing packages using DNF / Yum

Let’s assume we want to install the nginx web server; run the dnf command below,

```
[root@linuxtechi ~]# dnf install nginx
```

![][1]

Similarly, if you want to install the **LEMP** stack on your RHEL 8 system, use the following dnf command,

```
[root@linuxtechi ~]# dnf install nginx mariadb php -y
```

[![][2]][3]

This confirms that we have successfully configured a local yum/dnf repository on our RHEL 8 server using the installation DVD or ISO file.

In case these steps help you technically, please do share your feedback and comments.

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/05/dnf-install-nginx-rhel8-1024x376.jpg
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/LEMP-Stack-Install-RHEL8-1024x540.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/LEMP-Stack-Install-RHEL8.jpg
@ -1,141 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Check Whether The Given Package Is Installed Or Not On Debian/Ubuntu System?)
[#]: via: (https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How To Check Whether The Given Package Is Installed Or Not On Debian/Ubuntu System?
======

We recently published an article about bulk package installation.

While doing that, I struggled to get the installed package information, so I did a quick Google search and found a few methods for it.

I would like to share them on our website so that they will be helpful for others too.

There are numerous ways we can achieve this.

I have added seven ways to achieve it; you can choose the method you prefer.

Those methods are listed below.

* **`apt-cache Command:`** The apt-cache command is used to query the APT cache or package metadata.
* **`apt Command:`** APT is a powerful command-line tool for installing, downloading, removing, searching and managing packages on Debian-based systems.
* **`dpkg-query Command:`** dpkg-query is a tool to query the dpkg database.
* **`dpkg Command:`** dpkg is a package manager for Debian-based systems.
* **`which Command:`** The which command returns the full path of the executable that would have been executed when the command was entered in the terminal.
* **`whereis Command:`** The whereis command is used to search for the binary, source, and man page files of a given command.
* **`locate Command:`** The locate command works faster than the find command because it uses the updatedb database, whereas the find command searches the real file system.
### Method-1 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using apt-cache Command?

The apt-cache command is used to query the APT cache or package metadata from APT’s internal database.

It will search for and display information about the given package: whether the package is installed or not, the installed package version, and source repository information.

The output below clearly shows that the `nano` package has already been installed on the system, since the Installed field shows the installed version of the nano package.

```
# apt-cache policy nano
nano:
  Installed: 2.9.3-2
  Candidate: 2.9.3-2
  Version table:
 *** 2.9.3-2 500
        500 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
        100 /var/lib/dpkg/status
```
### Method-2 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using apt Command?

APT is a powerful command-line tool for installing, downloading, removing, searching and managing packages, as well as querying information about packages, as low-level access to all features of the libapt-pkg library. It also contains some less-used command-line utilities related to package management.

```
# apt -qq list nano
nano/bionic,now 2.9.3-2 amd64 [installed]
```
### Method-3 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using dpkg-query Command?

dpkg-query is a tool to show information about packages listed in the dpkg database.

In the output below, the first column showing `ii` means the given package has already been installed on the system.

```
# dpkg-query --list | grep -i nano
ii  nano    2.9.3-2    amd64    small, friendly text editor inspired by Pico
```

### Method-4 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using dpkg Command?

DPKG, which stands for Debian Package, is a tool to install, build, remove and manage Debian packages, but unlike other package management systems, it cannot automatically download and install packages or their dependencies.

In the output below, the first column showing `ii` means the given package has already been installed on the system.

```
# dpkg -l | grep -i nano
ii  nano    2.9.3-2    amd64    small, friendly text editor inspired by Pico
```
### Method-5 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using which Command?

The which command returns the full path of the executable that would have been executed when the command was entered in the terminal.

It’s very useful when you want to create a desktop shortcut or symbolic link for executable files.

The which command searches the directories listed in the current user’s PATH environment variable, not those of all users. That is, when you are logged into your own account, you can’t search for files that are only on the root user’s PATH.

If the following output shows the given package’s binary or executable file location, then the given package has already been installed on the system. If not, the package is not installed.

```
# which nano
/bin/nano
```

### Method-6 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using whereis Command?

The whereis command is used to search for the binary, source, and man page files of a given command.

If the following output shows the given package’s binary or executable file location, then the given package has already been installed on the system. If not, the package is not installed.

```
# whereis nano
nano: /bin/nano /usr/share/nano /usr/share/man/man1/nano.1.gz /usr/share/info/nano.info.gz
```
### Method-7 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using locate Command?

The locate command works faster than the find command because it uses the updatedb database, whereas the find command searches the real file system.

It uses a database rather than hunting through individual directory paths to find a given file.

The locate command doesn’t come pre-installed on most distributions, so use your distribution’s package manager to install it.

The database is updated regularly through cron, but we can also update it manually.

If the following output shows the given package’s binary or executable file location, then the given package has already been installed on the system. If not, the package is not installed.

```
# locate --basename '\nano'
/usr/bin/nano
/usr/share/nano
/usr/share/doc/nano
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
93	sources/tech/20190514 Why bother writing tests at all.md	Normal file
@ -0,0 +1,93 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why bother writing tests at all?)
[#]: via: (https://dave.cheney.net/2019/05/14/why-bother-writing-tests-at-all)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

Why bother writing tests at all?
======

In previous posts and presentations I talked about [how to test][1], and [when to test][2]. To conclude this series, I’m going to ask the question, _why test at all?_

### Even if you don’t, someone _will_ test your software

I’m sure no-one reading this post thinks that software should be delivered without being tested first. Even if that were true, your customers are going to test it, or at least use it. If nothing else, it would be good to discover any issues with the code before your customers do; if not for the reputation of your company, at least for your professional pride.

So, if we agree that software should be tested, the question becomes: _who_ should do that testing?
### The majority of testing should be performed by development teams

I argue that the majority of testing should be done by development groups. Moreover, testing should be automated, and thus the majority of these tests should be unit style tests.

To be clear, I am _not_ saying you shouldn’t write integration, functional, or end to end tests. I’m also _not_ saying that you shouldn’t have a QA group, or integration test engineers. However, at a recent software conference, in a room of over 1,000 engineers, nobody raised their hand when I asked if they considered themselves in a pure quality assurance role.

You might argue that the audience was self selecting, that QA engineers did not feel a software conference was relevant, or welcoming, to them. However, I think this proves my point: the days of [one developer to one test engineer][3] are gone and not coming back.

If development teams aren’t writing the majority of tests, who is?

### Manual testing should not be the majority of your testing because manual testing is O(n)

Thus, if individual contributors are expected to test the software they write, why do we need to automate it? Why is a manual testing plan not good enough?

Manual testing of software, or manual verification of a defect, is not sufficient because it does not scale. As the number of manual tests grows, engineers are tempted to skip them or only execute the scenarios they _think_ could be affected. Manual testing is expensive in terms of time, and thus dollars, and it is boring; 99.9% of the tests that passed last time are _expected_ to pass again. Manual testing is looking for a needle in a haystack, except you don’t stop when you find the first needle.

This means that your first response when given a bug to fix or a feature to implement should be to write a failing test. This doesn’t need to be a unit test, but it should be an automated test. Once you’ve fixed the bug, or added the feature, you now have the test case to prove it worked, and you can check them in together.
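
For example, if the bug report says that a (hypothetical) `Parse` function blows up on empty input, the first commit is a test that fails today and passes once the fix lands; every name in this sketch is illustrative:

```go
func TestParseEmptyInput(t *testing.T) {
    _, err := Parse("") // Parse stands in for the function under repair
    if err == nil {
        t.Fatal("expected an error for empty input, got nil")
    }
}
```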

### Tests are the critical component that ensure you can always ship your master branch

As a development team, you are judged on your ability to deliver working software to the business. No, seriously, the business couldn’t care less about OOP vs FP, CI/CD, table tennis or limited-run La Croix.

Your super power is that, at any time, anyone on the team should be confident that the master branch of your code is shippable. This means that at any time they can deliver a release of your software to the business, and the business can recoup its investment in your development R&D.

I cannot emphasise this enough. If you want the non-technical parts of the business to believe you are heroes, you must never create a situation where you say “well, we can’t release right now because we’re in the middle of an important refactoring. It’ll be a few weeks. We hope.”

Again, I’m not saying you cannot refactor, but at every stage your product must be shippable. Your tests have to pass. It may not have all the desired features, but the features that are there should work as described on the tin.

### Tests lock in behaviour

Your tests are the contract about what your software does and does not do. Unit tests should lock in the behaviour of the package’s API. Integration tests do the same for complex interactions. Tests describe, in code, what the program promises to do.

If there is a unit test for each input permutation, you have defined the contract for what the code will do _in code_, not documentation. This is a contract anyone on your team can assert by simply running the tests. At any stage you _know_ with a high degree of confidence that the behaviour people relied on before your change continues to function after your change.

### Tests give you confidence to change someone else’s code

Lastly, and this is the biggest one: for programmers working on a piece of code that has been through many hands, tests give you the confidence to make changes.

Even though we’ve never met, something I know about you, the reader, is that you will eventually leave your current employer. Maybe you’ll be moving on to a new role, or perhaps a promotion, perhaps you’ll move cities, or follow your partner overseas. Whatever the reason, the succession of the maintenance of the programs you write is key.

If people cannot maintain our code, then as you and I move from job to job we’ll leave behind programs which cannot be maintained. This goes beyond advocacy for a language or tool. Programs which cannot be changed, programs which are too hard to onboard new developers onto, or programs which feel like a career digression to work on will reach only one end state: they are a dead end. They represent a balance sheet loss for the business. They will be replaced.

If you worry about who will maintain your code after you’re gone, write good tests.

#### Related posts:

1. [Writing table driven tests in Go][4]
2. [Prefer table driven tests][5]
3. [Automatically run your package’s tests with inotifywait][6]
4. [The value of TDD][7]

--------------------------------------------------------------------------------

via: https://dave.cheney.net/2019/05/14/why-bother-writing-tests-at-all

作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests
[2]: https://dave.cheney.net/paste/absolute-unit-test-london-gophers.pdf
[3]: https://docs.microsoft.com/en-us/azure/devops/learn/devops-at-microsoft/evolving-test-practices-microsoft
[4]: https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go (Writing table driven tests in Go)
[5]: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests (Prefer table driven tests)
[6]: https://dave.cheney.net/2016/06/21/automatically-run-your-packages-tests-with-inotifywait (Automatically run your package’s tests with inotifywait)
[7]: https://dave.cheney.net/2016/04/11/the-value-of-tdd (The value of TDD)
@ -0,0 +1,244 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-2)
[#]: via: (https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/)
[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)

Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-2
======

As a continuation of Part-1, this Part-2 covers the remaining features of Portainer, as explained below.

### Monitoring docker container images

```
root@linuxtechi ~}$ docker ps -a
CONTAINER ID        IMAGE                 COMMAND        CREATED          STATUS                      PORTS                    NAMES
9ab9aa72f015        ubuntu                "/bin/bash"    14 seconds ago   Exited (0) 12 seconds ago                            suspicious_shannon
305369d3b2bb        centos                "/bin/bash"    24 seconds ago   Exited (0) 22 seconds ago                            admiring_mestorf
9a669f3dc4f6        portainer/portainer   "/portainer"   7 minutes ago    Up 7 minutes                0.0.0.0:9000->9000/tcp   trusting_keller
```

Including portainer (which is itself a docker container image), all the exited and currently running docker images are displayed. The screenshot below, from the Portainer GUI, displays the same.

[![Docker_status][1]][2]
### Monitoring events

Click on the “Events” option on the portainer webpage, as shown below.

Various events that are generated and created based on docker container activity are captured and displayed on this page.

[![Container-Events-Poratiner-GUI][3]][4]

Now, to check and validate how the “**Events**” section works, create a new docker container from the redis image as explained below, and check the docker ps -a status at the docker command line.

```
root@linuxtechi ~}$ docker ps -a
CONTAINER ID        IMAGE                 COMMAND                  CREATED              STATUS                      PORTS                    NAMES
cdbfbef59c31        redis                 "docker-entrypoint.s…"   About a minute ago   Up About a minute           6379/tcp                 angry_varahamihira
9ab9aa72f015        ubuntu                "/bin/bash"              10 minutes ago       Exited (0) 10 minutes ago                            suspicious_shannon
305369d3b2bb        centos                "/bin/bash"              11 minutes ago       Exited (0) 11 minutes ago                            admiring_mestorf
9a669f3dc4f6        portainer/portainer   "/portainer"             17 minutes ago       Up 17 minutes               0.0.0.0:9000->9000/tcp   trusting_keller
```

Click “Event List” at the top to refresh the events list,

[![events_updated][5]][6]

Now the events page is also updated with this change.
### Host status

Below is a screenshot of portainer displaying the host status. This is a simple window that shows basic info like the CPU, hostname, OS info, etc. of the host Linux machine. Instead of logging into the host command line, this page provides very useful info at a quick glance.

[![Host-names-Portainer][7]][8]

### Dashboard in Portainer

Until now we have seen various features of portainer under the “**Local**” section. Now jump to the “**Dashboard**” section of the selected Docker container image.

When the “**EndPoint**” option is clicked in the Portainer GUI, the following window appears,

[![End_Point_Settings][9]][10]

This dashboard has many statuses and options for a host container image.

**1) Stacks:** Clicking on this option provides the status of any stacks, if present. Since there are no stacks here, it displays zero.

**2) Images:** Clicking on this option provides the host’s container images. It will display all the live and exited container images,

[![Docker-Container-Images-Portainer][11]][12]
For example, create one more “**Nginx**” container and refresh this list to see the update.

```
root@linuxtechi ~}$ sudo docker run nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
27833a3ba0a5: Pull complete
ea005e36e544: Pull complete
d172c7f0578d: Pull complete
Digest: sha256:e71b1bf4281f25533cf15e6e5f9be4dac74d2328152edf7ecde23abc54e16c1c
Status: Downloaded newer image for nginx:latest
```

The following is the image list after the refresh,

[![Nginx_Image_creation][13]][14]

Once the Nginx container is stopped/killed, the docker container image will be moved to unused status.

**Note:** One can see that all the image details here are very clear, with memory usage, creation date and time. Compared to the command-line option, maintaining and monitoring containers from here will be very easy.

**3) Networks:** This option is used for network operations, like assigning an IP address, creating subnets, providing an IP address range, and access control (admin and normal user). The following window provides the details of the various options possible. Based on your needs, these options can be explored further.

[![Conatiner-Network-Portainer][15]][16]

Once all the various networking parameters are entered, click the “**create network**” button to create the network.
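
For reference, the command-line equivalent of this screen is `docker network create`; the driver, subnet and network name below are illustrative values, not ones taken from the screenshots:

```
root@linuxtechi ~}$ docker network create --driver bridge --subnet 172.20.0.0/16 shashi-net
```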

**4) Container:** (click on Container) This option will provide the container status. This list provides details on running and stopped containers. The output is similar to the docker ps command option.

[![Containers-Status-Portainer][17]][18]

From this window, containers can be stopped and started as the need arises by ticking the check box and selecting the buttons above. One example is provided below,

Example: both the “CentOS” and “Ubuntu” containers, which are in the stopped state, are started now by selecting the check boxes and hitting the “Start” button.

[![start_containers1][19]][20]

[![start_containers2][21]][22]

**Note:** Since both are plain Linux base images with no long-running process, they will not stay started; Portainer starts them, and they stop again shortly after. Try “Nginx” instead and you will see it coming to “running” status.

[![start_containers3][23]][24]

**5) Volume:** Described in Part-I of the Portainer article.
### Setting option in Portainer
|
||||
|
||||
Until now we have seen various features of portainer based under “ **Local”** section. Now jump on to the “ **Setting”** section of the selected Docker Container image.
|
||||
|
||||
When “Settings” option is clicked in the GUI of Portainer, the following further configuration options are available,
|
||||
|
||||
**1) Extensions** : This is a simple Portainer CE subscription process. The details and uses can be seen from the attached window. This is mainly used for maintaining the license and subscription of the respective version.
|
||||
|
||||
[![Extensions][25]][26]
|
||||
|
||||
**2) Users:** This option is used for adding “users” with or without administrative privileges. Following example provides the same.
|
||||
|
||||
Enter the selected user name “shashi” in this case and your choice of password and hit “ **Create User** ” button below.
|
||||
|
||||
[![create_user_portainer][27]][28]

[![create_user2_portainer][29]][30]

[![Internal-user-Portainer][31]][32]

Similarly, the just-created user "shashi" can be removed by selecting the check box and hitting the "Remove" button.

[![user_remove_portainer][33]][34]
**3) Endpoints:** This option is used for endpoint management. Endpoints can be added and removed as shown in the attached windows.

[![Endpoint-Portainer-GUI][35]][36]

The new endpoint "shashi" is created using the various default parameters as shown below.

[![Endpoint2-Portainer-GUI][37]][38]

Similarly, this endpoint can be removed by clicking the check box and hitting the "Remove" button.
**4) Registries:** This option is used for registry management. As Docker Hub is a registry hosting various images, this feature can be used for connecting to similar registries.

[![Registry-Portainer-GUI][39]][40]

With the default options, the "shashi-registry" can be created.

[![Registry2-Portainer-GUI][41]][42]

Similarly, this can be removed if not required.
**5) Settings:** This option is used for the following various options:

  * Setting up the snapshot interval
  * Using a custom logo
  * Creating external templates
  * Security features, like disabling/enabling bind mounts for non-admins, disabling/enabling privileges for non-admins, and enabling host management features
The following screenshot shows some options enabled and others disabled for demonstration purposes. Once done, hit the "Save Settings" button to save all these options.

[![Portainer-GUI-Settings][43]][44]

One more option appears for "Authentication settings", covering the LDAP, Internal or OAuth extension, as shown below.

[![Authentication-Portainer-GUI-Settings][45]][46]

Based on what level of security features we want for our environment, the respective option is chosen.

That's all from this article. I hope these Portainer GUI articles help you to manage and monitor containers more efficiently. Please do share your feedback and comments.
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/

作者:[Shashidhar Soppin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/shashidhar/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker_status-1024x423.jpg
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker_status.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Events-1024x404.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Events.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/events_updated-1024x414.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/events_updated.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Host_names-1024x408.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Host_names.jpg
[9]: https://www.linuxtechi.com/wp-content/uploads/2019/05/End_Point_Settings-1024x471.jpg
[10]: https://www.linuxtechi.com/wp-content/uploads/2019/05/End_Point_Settings.jpg
[11]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Images-1024x398.jpg
[12]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Images.jpg
[13]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Nginx_Image_creation-1024x439.jpg
[14]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Nginx_Image_creation.jpg
[15]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Network-1024x463.jpg
[16]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Network.jpg
[17]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Containers-1024x364.jpg
[18]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Containers.jpg
[19]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers1-1024x432.jpg
[20]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers1.jpg
[21]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers2-1024x307.jpg
[22]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers2.jpg
[23]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers3-1024x435.jpg
[24]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers3.jpg
[25]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Extensions-1024x421.jpg
[26]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Extensions.jpg
[27]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user-1024x350.jpg
[28]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user.jpg
[29]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user2-1024x372.jpg
[30]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user2.jpg
[31]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Internal-user-Portainer-1024x257.jpg
[32]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Internal-user-Portainer.jpg
[33]: https://www.linuxtechi.com/wp-content/uploads/2019/05/user_remove-1024x318.jpg
[34]: https://www.linuxtechi.com/wp-content/uploads/2019/05/user_remove.jpg
[35]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint-1024x349.jpg
[36]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint.jpg
[37]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint2-1024x379.jpg
[38]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint2.jpg
[39]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry-1024x420.jpg
[40]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry.jpg
[41]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry2-1024x409.jpg
[42]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry2.jpg
[43]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-GUI-Settings-1024x418.jpg
[44]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-GUI-Settings.jpg
[45]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Authentication-Portainer-GUI-Settings-1024x344.jpg
[46]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Authentication-Portainer-GUI-Settings.jpg
sources/tech/20190519 The three Rs of remote work.md
@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The three Rs of remote work)
[#]: via: (https://dave.cheney.net/2019/05/19/the-three-rs-of-remote-work)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

The three Rs of remote work
======
I started working remotely in 2012. Since then I’ve worked for big companies and small, organisations with outstanding remote working cultures, and others that probably would have difficulty spelling the word without predictive text. I broadly classify my experiences into three tiers:
### Little r remote

The first kind of remote work I call _little r_ remote.

Your company has an office, but it’s not convenient or you don’t want to work from there. It could be that the commute is too long, or it’s in the next town over, or perhaps a short plane flight away. Sometimes you might go into the office for a day or two a week, and should something serious arise you could join your co-workers onsite for an extended period of time.

If you often hear people say they are going to work from home to get some work done, that’s little r remote.
### Big R remote

The next category I call _Big R_ remote. Big R remote differs from little r remote mainly by the tyranny of distance. It’s not impossible to visit your co-workers in person, but it is inconvenient. Meeting face to face requires a day’s flying. Passports and border crossings are frequently involved. The expense and distance necessitate week-long sprints and commensurate periods of jetlag recuperation.

Because of timezone differences, meetings must be prearranged and periods of overlap closely guarded. Communication becomes less spontaneous and care must be taken to avoid committing to unsustainable working hours.
### Gothic ℜ remote

The final category is basically Big R remote working on hard mode. Everything that was hard about Big R remote (timezones, travel schedules, public holidays, daylight savings, video call latency, cultural and language barriers) is multiplied for each remote worker.

In-person meetings are so rare that, without a focus on written asynchronous communication, progress can repeatedly stall for days, if not weeks, as miscommunication leads to disillusionment and loss of trust.
In my experience, for knowledge workers, little r remote work offers many benefits over [the open office hell scape][1] du jour. Big R remote takes a serious commitment by all parties, and if you are the first employee in that category you will bear most of the cost of making Big R remote work for you.

Gothic ℜ remote working should probably be avoided unless all those involved have many years of working in that style _and_ the employer is committed to restructuring the company as a remote-first organisation. It is not possible to succeed in a Gothic ℜ remote role without a culture of written communication and asynchronous decision making mandated, _and consistently enforced,_ by the leaders of the company.
#### Related posts:

  1. [How to dial remote SSL/TLS services in Go][2]
  2. [How does the go build command work ?][3]
  3. [Why Slack is inappropriate for open source communications][4]
  4. [The office coffee model of concurrent garbage collection][5]
--------------------------------------------------------------------------------

via: https://dave.cheney.net/2019/05/19/the-three-rs-of-remote-work

作者:[Dave Cheney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://dave.cheney.net/author/davecheney
[b]: https://github.com/lujun9972
[1]: https://twitter.com/davecheney/status/761693088666357760
[2]: https://dave.cheney.net/2010/10/05/how-to-dial-remote-ssltls-services-in-go (How to dial remote SSL/TLS services in Go)
[3]: https://dave.cheney.net/2013/10/15/how-does-the-go-build-command-work (How does the go build command work ?)
[4]: https://dave.cheney.net/2017/04/11/why-slack-is-inappropriate-for-open-source-communications (Why Slack is inappropriate for open source communications)
[5]: https://dave.cheney.net/2018/12/28/the-office-coffee-model-of-concurrent-garbage-collection (The office coffee model of concurrent garbage collection)
@ -0,0 +1,193 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Download and Use Ansible Galaxy Roles in Ansible Playbook)
[#]: via: (https://www.linuxtechi.com/use-ansible-galaxy-roles-ansible-playbook/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

How to Download and Use Ansible Galaxy Roles in Ansible Playbook
======
**Ansible** is the tool of choice these days if you must manage multiple devices, be it Linux, Windows, Mac, network devices, VMware and lots more. What makes Ansible popular is its agentless architecture and granular control. If you have worked with Python or have experience with **YAML**, you will feel at home with Ansible. To see how you can install [Ansible][1], click here.

<https://www.linuxtechi.com/wp-content/uploads/2019/05/Download-Use-Ansible-Galaxy-Roles.jpg>

Ansible core modules will let you manage almost anything should you wish to write playbooks; however, often someone has already written a role for the problem you are trying to solve. Let’s take an example: you wish to manage NTP clients on your Linux machines. You have two choices: either write a role yourself which can be applied to the nodes, or use **ansible-galaxy** to download an existing role someone has already written and tested for you. Ansible Galaxy has roles for almost all domains, and they cater to different problems. You can visit <https://galaxy.ansible.com/> to get an idea of the domains and the popular roles it has. Each role published on the Galaxy repository is thoroughly tested and has been rated by its users, so you get an idea of how much other people who have used it liked it.

To keep moving with the NTP idea, here is how you can search for and install an NTP role from Galaxy.

Firstly, let’s run ansible-galaxy with the help flag to check what options it gives us:
```
[root@linuxtechi ~]# ansible-galaxy --help
```
![ansible-galaxy-help][2]

As you can see from the output above, there are some interesting options. Since we are looking for a role to manage NTP clients, let’s try the search option to see how good it is at finding what we are looking for:
```
[root@linuxtechi ~]# ansible-galaxy search ntp
```
Here is the truncated output of the command above.

![ansible-galaxy-search][3]

It found 341 matches based on our search. As you can see from the output above, many of these roles are not even related to NTP, which means our search needs some refinement; however, it has managed to pull up some NTP roles. Let’s dig deeper to see what these roles are. But before that, let me tell you the naming convention being followed here: the name of a role is always preceded by its author's name, so that it is easy to segregate roles with the same name. So, if you have written an NTP role and published it to the Galaxy repo, it does not get mixed up with someone else's repo of the same name.

With that out of the way, let’s continue with our job of installing an NTP role for our Linux machines. Let’s try **bennojoy.ntp** for this example, but before using it we need to figure out a couple of things: is this role compatible with the version of Ansible I am running, and what is the license status of this role? To figure these out, let’s run the below ansible-galaxy command:
```
[root@linuxtechi ~]# ansible-galaxy info bennojoy.ntp
```
![ansible-galaxy-info][4]

OK, so this says the minimum Ansible version is 1.4 and the license is BSD. Let’s download it:
```
[root@linuxtechi ~]# ansible-galaxy install bennojoy.ntp
- downloading role 'ntp', owned by bennojoy
- downloading role from https://github.com/bennojoy/ntp/archive/master.tar.gz
- extracting bennojoy.ntp to /etc/ansible/roles/bennojoy.ntp
- bennojoy.ntp (master) was installed successfully
[root@linuxtechi ~]# ansible-galaxy list
- bennojoy.ntp, master
[root@linuxtechi ~]#
```
Let’s find the newly installed role.
```
[root@linuxtechi ~]# cd /etc/ansible/roles/bennojoy.ntp/
[root@linuxtechi bennojoy.ntp]# ls -l
total 4
drwxr-xr-x. 2 root root   21 May 21 22:38 defaults
drwxr-xr-x. 2 root root   21 May 21 22:38 handlers
drwxr-xr-x. 2 root root   48 May 21 22:38 meta
-rw-rw-r--. 1 root root 1328 Apr 20  2016 README.md
drwxr-xr-x. 2 root root   21 May 21 22:38 tasks
drwxr-xr-x. 2 root root   24 May 21 22:38 templates
drwxr-xr-x. 2 root root   55 May 21 22:38 vars
[root@linuxtechi bennojoy.ntp]#
```
I am going to run this newly downloaded role on my Elasticsearch CentOS node. Here is my hosts file:
```
[root@linuxtechi ~]# cat hosts
[CentOS]
elastic7-01 ansible_host=192.168.1.15 ansible_port=22 ansible_user=linuxtechi
[root@linuxtechi ~]#
```
Let’s try to ping the node using the Ansible ping module:
```
[root@linuxtechi ~]# ansible -m ping -i hosts elastic7-01
elastic7-01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
[root@linuxtechi ~]#
```
Here is what the current ntp.conf looks like on the elastic node:
```
[root@linuxtechi ~]# head -30 /etc/ntp.conf
```
![Current-ntp-conf][5]

Since I am in India, let’s add the server **in.pool.ntp.org** to ntp.conf. To do this, I have to edit the variables in the defaults directory of the role:
```
[root@linuxtechi ~]# vi /etc/ansible/roles/bennojoy.ntp/defaults/main.yml
```
Change the NTP server address in the "ntp_server" parameter; after updating, it should look like below.

![Update-ansible-ntp-role][6]

The last thing now is to create my playbook, which will call this role:
```
[root@linuxtechi ~]# vi ntpsite.yaml
---
- name: Configure NTP on CentOS/RHEL/Debian System
  become: true
  hosts: all
  roles:
    - { role: bennojoy.ntp }
```
Save and exit the file.

We are ready to run this role now. Use the below command to run the NTP playbook:
```
[root@linuxtechi ~]# ansible-playbook -i hosts ntpsite.yaml
```
The output of the above NTP Ansible playbook should be something like below:

![ansible-playbook-output][7]

Let’s check the updated file now. Go to the elastic node and view the contents of the ntp.conf file:
```
[root@linuxtechi ~]# cat /etc/ntp.conf
#Ansible managed

driftfile /var/lib/ntp/drift
server in.pool.ntp.org

restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery
restrict 127.0.0.1
[root@linuxtechi ~]#
```
Just in case you do not find a role fulfilling your requirement, ansible-galaxy can help you create a directory structure for your custom roles. This keeps your playbooks, along with the variables, handlers, templates, etc., assembled in a standardized file structure. Let’s create our own role; it is always a good practice to let ansible-galaxy create the structure for you:
```
[root@linuxtechi ~]# ansible-galaxy init pk.backup
- pk.backup was created successfully
[root@linuxtechi ~]#
```
Verify the structure of your role using the tree command:

![createing-roles-ansible-galaxy][8]

Let me quickly explain what each of these directories and files is for; each of them serves a purpose.

The very first one is the **defaults** directory, which contains files with variables that take the lowest precedence; if the same variables are assigned in the **vars** directory, they will take precedence over defaults. The **handlers** directory hosts the handlers. The **files** and **templates** directories keep any files your role may need to copy and any **Jinja templates** to be used in playbooks, respectively. The **tasks** directory is where your playbooks containing the tasks are kept. The **vars** directory consists of all the files that host the variables used in the role. The **tests** directory consists of a sample inventory and test playbooks which can be used to test the role. The **meta** directory consists of any dependencies on other roles, along with the authorship information.

Finally, the **README.md** file simply consists of some general information like the description and the minimum version of Ansible this role is compatible with.
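To make the layout concrete, here is a minimal sketch of what the tasks file of the new skeleton role could contain. The path comes from the structure ansible-galaxy just generated; the tasks themselves and the backup_dest variable are purely illustrative, not part of any published role:

```
# /etc/ansible/roles/pk.backup/tasks/main.yml
---
# backup_dest is an illustrative variable; define it in defaults/main.yml
- name: Create the backup destination directory
  file:
    path: "{{ backup_dest | default('/var/backups') }}"
    state: directory
    mode: '0750'

- name: Archive /etc into the backup destination
  archive:
    path: /etc
    dest: "{{ backup_dest | default('/var/backups') }}/etc-backup.tar.gz"
```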
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/use-ansible-galaxy-roles-ansible-playbook/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/install-and-use-ansible-in-centos-7/
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-help-1024x294.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-search-1024x552.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-info-1024x557.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Current-ntp-conf.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Update-ansible-ntp-role.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-playbook-output-1024x376.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/createing-roles-ansible-galaxy.jpg
@ -1,3 +1,4 @@
Translating by name1e5s
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
@ -1,90 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Learn Python with these awesome resources)
[#]: via: (https://opensource.com/article/19/5/resources-learning-python)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

Learn Python with these awesome resources
======
Expand your Python knowledge by adding these resources to your personal
learning network.
![Book list, favorites][1]

I've been using and teaching Python for a long time now, but I'm always interested in increasing my knowledge about this practical and useful programming language. That's why I've been trying to expand my Python [personal learning network][2] (PLN), a concept that describes informal and mutually beneficial networks for sharing information.

Educators [Kelly Paredes][3] and [Sean Tibor][4] recently talked about how to build your Python PLN on their podcast, [Teaching Python][5], which I subscribed to after meeting them at [PyCon 2019][6] in Cleveland (and adding them to my Python PLN). This podcast inspired me to think more about the people in my Python PLN, including those I met recently at PyCon.

I'll share some of the places I've met members of my PLN; maybe they can become part of your Python PLN, too.
### Young Coders mentors

[Betsy Waliszewski][7], the event coordinator for the Python Foundation, is a member of my Python PLN. When we ran into each other at PyCon2019, because I'm a teacher, she recommended I check out the [Young Coders][8] workshop for kids ages 12 and up. There, I met [Katie Cunningham][9], who was running the program, which taught participants how to set up and configure a Raspberry Pi and use Python. The young students also received two books: _[Python for Kids][10]_ by Jason Briggs and _[Learn to Program with Minecraft][11]_ by Craig Richardson. I'm always looking for new ways to improve my teaching, so I quickly picked up two copies of the Minecraft book at [NoStarch Press][12]' booth at the conference. Katie is a great teacher and a prolific author with a wonderful [YouTube][13] channel full of Python training videos.

I added Katie to my PLN, along with two other people I met at the Young Coders workshop: [Nat Dunn][14] and [Sean Valentine][15]. Like Katie, they were volunteering their time to introduce young programmers to Python. Nat is the president of [Webucator][16], an IT training company that has been a sponsor of the Python Software Foundation for several years and sponsored the PyCon 2018 Education Summit. He decided to teach at Young Coders after teaching Python to his 13-year-old son and 14-year-old nephew. Sean is the director of strategic initiatives at the [Hidden Genius Project][17], a technology and leadership mentoring program for black male youth. Sean said many Hidden Genius participants "built projects using Python, so we saw [Young Coders] as a great opportunity to partner." Learning about the Hidden Genius Project has inspired me to think deeper about the implications of coding and its power to change lives.
### Open Spaces meetups

I found PyCon's [Open Spaces][18], self-organizing, impromptu hour-long meetups, just as useful as the official programmed events. One of my favorites was about the [Circuit Playground Express][19] device, which was part of our conference swag bags. I am fascinated by this device, and the Open Space provided an avenue to learn more. The organizers offered a worksheet and a [GitHub][20] repo with all the tools we needed to be successful, as well as an opportunity for hands-on learning and direction to explore this unique hardware.

This meetup whetted my appetite to learn even more about programming the Circuit Playground Express, so after PyCon, I reached out on Twitter to [Nina Zakharenko][21], who [presented a keynote][22] at the conference about programming the device. Nina has been in my Python PLN since last fall when I heard her talk at [All Things Open][23], and I recently signed up for her [Python Fundamentals][24] class to add to my learning. Nina recommended I add [Kattni Rembor][25], whose [code examples][26] are helping me learn to program with CircuitPython, to my Python PLN.
### Other resources from my PLN

I also met fellow [Opensource.com][27] Community Moderator [Moshe Zadka][28] at PyCon2019 and talked with him at length. He shared several new Python resources, including _[How to Think Like a Computer Scientist][29]_. Community Moderator [Seth Kenlon][30] is another member of my PLN; he has published many great [Python articles][31], and I recommend you follow him, too.

My Python personal learning network continues to grow each day. Besides the folks I have already mentioned, I recommend you follow [Al Sweigart][32], [Eric Matthes][33], and [Adafruit][34] because they share great content. I also recommend the book _[Make: Getting Started with Adafruit Circuit Playground Express][35]_ and [Podcast.__init__][36], a podcast all about the Python community, both of which I learned about from my PLN.

Who is in your Python PLN? Please share your favorites in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/resources-learning-python

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/reading_book_stars_list.png?itok=Iwa1oBOl (Book list, favorites)
[2]: https://en.wikipedia.org/wiki/Personal_learning_network
[3]: https://www.teachingpython.fm/hosts/kellypared
[4]: https://twitter.com/smtibor
[5]: https://www.teachingpython.fm/20
[6]: https://us.pycon.org/2019/
[7]: https://www.linkedin.com/in/betsywaliszewski
[8]: https://us.pycon.org/2019/events/letslearnpython/
[9]: https://www.linkedin.com/in/kcunning/
[10]: https://nostarch.com/pythonforkids
[11]: https://nostarch.com/programwithminecraft
[12]: https://nostarch.com/
[13]: https://www.youtube.com/c/KatieCunningham
[14]: https://www.linkedin.com/in/natdunn/
[15]: https://www.linkedin.com/in/sean-valentine-b370349b/
[16]: https://www.webucator.com/
[17]: http://www.hiddengeniusproject.org/
[18]: https://us.pycon.org/2019/events/open-spaces/
[19]: https://www.adafruit.com/product/3333
[20]: https://github.com/adafruit/PyCon2019
[21]: https://twitter.com/nnja
[22]: https://www.youtube.com/watch?v=35mXD40SvXM
[23]: https://allthingsopen.org/
[24]: https://frontendmasters.com/courses/python/
[25]: https://twitter.com/kattni
[26]: https://github.com/kattni/ChiPy_2018
[27]: http://Opensource.com
[28]: https://opensource.com/users/moshez
[29]: http://openbookproject.net/thinkcs/python/english3e/
[30]: https://opensource.com/users/seth
[31]: https://www.google.com/search?source=hp&ei=gVToXPq-FYXGsAW-mZ_YAw&q=site%3Aopensource.com+%22Seth+Kenlon%22+%2B+Python&oq=site%3Aopensource.com+%22Seth+Kenlon%22+%2B+Python&gs_l=psy-ab.12...627.15303..15584...1.0..0.176.2802.4j21......0....1..gws-wiz.....0..35i39j0j0i131j0i67j0i20i263.r2SAW3dxlB4
[32]: http://alsweigart.com/
[33]: https://twitter.com/ehmatthes?lang=en
[34]: https://twitter.com/adafruit
[35]: https://www.adafruit.com/product/3944
[36]: https://www.pythonpodcast.com/episodes/
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -0,0 +1,200 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server)
[#]: via: (https://www.linuxtechi.com/install-lemp-stack-fedora-30-server/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server
======
In this article, we’ll be looking at how to install the **LEMP** stack on a Fedora 30 Server. LEMP stands for:

  * L -> Linux
  * E -> Nginx (EngineX)
  * M -> MariaDB
  * P -> PHP

I am assuming **[Fedora 30][1]** is already installed on your system.

![LEMP-Stack-Fedora30][2]

LEMP is a collection of powerful software installed on a Linux server to provide a popular development platform for building websites. LEMP is a variation of LAMP wherein **Nginx (EngineX)** is used instead of **Apache**, and **MariaDB** is used in place of **MySQL**. This how-to guide is a collection of separate guides to install Nginx, MariaDB and PHP.

### Install Nginx, PHP 7.3 and PHP-FPM on Fedora 30 Server

Let’s take a look at how to install Nginx and PHP along with PHP-FPM on a Fedora 30 Server.

### Step 1) Switch to root user

The first step in installing Nginx on your system is to switch to the root user. Use the following command:
```
[root@linuxtechi ~]$ sudo -i
[sudo] password for pkumar:
[root@linuxtechi ~]#
```
### Step 2) Install Nginx, PHP 7.3 and PHP-FPM using the dnf command

Install Nginx, PHP and PHP-FPM using the following dnf command:
```
[root@linuxtechi ~]# dnf install nginx php php-fpm php-common -y
```
### Step 3) Install additional PHP modules

The default installation of PHP only comes with the basic and most needed modules installed. If you need additional modules like GD, XML support for PHP, the command-line interface, Zend OPCache features, etc., you can always choose your packages and install everything in one go. See the sample command below:
```
[root@linuxtechi ~]# sudo dnf install php-opcache php-pecl-apcu php-cli php-pear php-pdo php-pecl-mongodb php-pecl-redis php-pecl-memcache php-pecl-memcached php-gd php-mbstring php-mcrypt php-xml -y
```
### Step 4) Start & enable the Nginx and PHP-FPM services

Start and enable the Nginx service using the following command:
```
[root@linuxtechi ~]# systemctl start nginx && systemctl enable nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@linuxtechi ~]#
```
Use the following command to start and enable the PHP-FPM service:
```
[root@linuxtechi ~]# systemctl start php-fpm && systemctl enable php-fpm
Created symlink /etc/systemd/system/multi-user.target.wants/php-fpm.service → /usr/lib/systemd/system/php-fpm.service.
[root@linuxtechi ~]#
```
**Verify the Nginx (web server) and PHP installation.**

**Note:** In case the OS firewall is enabled and running on your Fedora 30 system, allow ports 80 and 443 using the following commands:
```
[root@linuxtechi ~]# firewall-cmd --permanent --add-service=http
success
[root@linuxtechi ~]#
[root@linuxtechi ~]# firewall-cmd --permanent --add-service=https
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#
```
Open a web browser and type the following URL: http://<Your-Server-IP>

[![Test-Page-HTTP-Server-Fedora-30][3]][4]

The above screen confirms that Nginx has been installed successfully.

Now let’s verify the PHP installation. Create a test PHP page (info.php) using the following command:
```
[root@linuxtechi ~]# echo "<?php phpinfo(); ?>" > /usr/share/nginx/html/info.php
[root@linuxtechi ~]#
```
Type the following URL in the web browser:

http://<Your-Server-IP>/info.php

[![Php-info-page-fedora30][5]][6]

The above page confirms that PHP 7.3.5 has been installed successfully.
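One housekeeping step worth adding here (my suggestion, not part of the original steps): once verified, remove the test page, since phpinfo() exposes configuration details:

```
[root@linuxtechi ~]# rm -f /usr/share/nginx/html/info.php
```

Now let’s install the MariaDB database server.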
### Install MariaDB on Fedora 30

MariaDB is a great replacement for MySQL, as it works much like MySQL and is also compatible with MySQL commands. Let’s look at the steps to install MariaDB on a Fedora 30 Server.

### Step 1) Switch to root user

The first step in installing MariaDB on your system is to switch to the root user, or you can use a local user who has root privileges. Use the following command:
```
[root@linuxtechi ~]$ sudo -i
[root@linuxtechi ~]#
```
### Step 2) Install the latest version of MariaDB (10.3) using the dnf command

Use the following command to install MariaDB on the Fedora 30 Server:
```
[root@linuxtechi ~]# dnf install mariadb-server -y
```
### Step 3) Start and enable the MariaDB service

Once MariaDB is installed successfully in step 2), the next step is to start and enable the MariaDB service. Use the following command:
```
[root@linuxtechi ~]# systemctl start mariadb.service ; systemctl enable mariadb.service
```
### Step 4) Secure the MariaDB installation

When we install the MariaDB server, there is no root password by default, and anonymous users are also created in the database. So, to secure the MariaDB installation, run the following "mysql_secure_installation" command:
```
[root@linuxtechi ~]# mysql_secure_installation
```
Next you will be prompted with some questions; just answer them as shown below:

![Secure-MariaDB-Installation-Part1][7]

![Secure-MariaDB-Installation-Part2][8]

### Step 5) Test the MariaDB installation
Once installed, you can always test whether MariaDB was successfully installed on the server. Use the following command:
```
[root@linuxtechi ~]# mysql -u root -p
Enter password:
```
Next you will be prompted for a password. Enter the same password that you set during the MariaDB secure installation, and you will see the MariaDB welcome screen.
```
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 17
Server version: 10.3.12-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
```
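As a quick functional test, you could create a database and a dedicated user from this prompt; the names webapp and webuser and the password below are placeholders I chose for illustration:

```
MariaDB [(none)]> CREATE DATABASE webapp;
MariaDB [(none)]> CREATE USER 'webuser'@'localhost' IDENTIFIED BY 'Str0ngPassw0rd';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON webapp.* TO 'webuser'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> EXIT;
```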
And finally, we’ve completed everything needed to install LEMP (Linux, Nginx, MariaDB and PHP) on your server successfully. Please post all your comments and suggestions in the feedback section below and we’ll respond back at the earliest.
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/install-lemp-stack-fedora-30-server/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/fedora-30-workstation-installation-guide/
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/06/LEMP-Stack-Fedora30.jpg
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Test-Page-HTTP-Server-Fedora-30-1024x732.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Test-Page-HTTP-Server-Fedora-30.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Php-info-page-fedora30-1024x732.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Php-info-page-fedora30.jpg
[7]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Secure-MariaDB-Installation-Part1.jpg
[8]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Secure-MariaDB-Installation-Part2.jpg
@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to navigate the Kubernetes learning curve)
[#]: via: (https://opensource.com/article/19/6/kubernetes-learning-curve)
[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux/users/fatherlinux)

How to navigate the Kubernetes learning curve
======
Kubernetes is like a dump truck. It's elegant for solving the problems
it's designed for, but you have to master the learning curve first.
![Dump truck rounding a turn in the road][1]
In _[Kubernetes is a dump truck][2]_, I talked about how a tool can be elegant for the problem it was designed to solve—once you learn how to use it. In part 2 of this series, I'm going a little deeper into the Kubernetes learning curve.

The journey to [Kubernetes][3] often starts with running one container on one host. You quickly discover how easy it is to run new versions of software, how easy it is to share that software with others, and how easy it is for those users to run it the way you intended.

But then you need:

  * Two containers
  * Two hosts
It's easy to fire up one web server on port 80 with a container, but what happens when you need to fire up a second container on port 80? What happens when you are building a production environment and you need the containerized web server to fail over to a second host? The short answer, in either case, is that you have to move into container orchestration.

Inevitably, when you start to handle the two containers or two hosts problem, you'll introduce complexity and, hence, a learning curve. The two services (a more generalized version of a container) / two hosts problem has been around for a long time and has always introduced complexity.

Historically, this would have involved load balancers, clustering software, and even clustered file systems. Configuration logic for every service is embedded in every system (load balancers, cluster software, and file systems). Running 60 or 70 services, clustered, behind load balancers is complex. Adding another new service is also complex. Worse, decommissioning a service is a nightmare. Thinking back on my days of troubleshooting production MySQL and Apache servers with logic embedded in three, four, or five different places, all in different formats, still makes my head hurt.
Kubernetes elegantly solves all these problems with one piece of software:

  1. Two services (containers): Check
  2. Two servers (high availability): Check
  3. Single source of configuration: Check
  4. Standard configuration format: Check
  5. Networking: Check
  6. Storage: Check
  7. Dependencies (what services talk to what databases): Check
  8. Easy provisioning: Check
  9. Easy de-provisioning: Check (perhaps Kubernetes' _most_ powerful piece)
Wait, it's starting to look like Kubernetes is pretty elegant and pretty powerful. _It is._ You can model an entire miniature IT universe in Kubernetes.

![Kubernetes business model][4]
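To make "single source of configuration, standard format" concrete, here is a minimal sketch of such a declaration; the name, image, and port are illustrative choices of mine, not from the article:

```
# deployment.yaml - one service, two replicas, declared in one place
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                # two containers, spread across hosts when possible
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
```

Provisioning is `kubectl apply -f deployment.yaml`, and de-provisioning, the part called out above as perhaps Kubernetes' most powerful piece, is simply `kubectl delete -f deployment.yaml`.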
So yes, there is a learning curve when starting to use a giant dump truck (or any professional equipment). There's also a learning curve to use Kubernetes, but it's worth it because you can solve so many problems with one tool. If you are apprehensive about the learning curve, think through all the underlying networking, storage, and security problems in IT infrastructure and envision their solutions today—they're not easier. Especially when you introduce more and more services, faster and faster. Velocity is the goal nowadays, so give special consideration to the provisioning and de-provisioning problem.

But don't confuse the learning curve for building or equipping Kubernetes (picking the right mud flaps for your dump truck can be hard, LOL) with the learning curve for using it. Learning to build your own Kubernetes with so many different choices at so many different layers (container engine, logging, monitoring, service mesh, storage, networking), and then maintaining updated selections of each component every six months, might not be worth the investment—but learning to use it is absolutely worth it.

I eat, sleep, and breathe Kubernetes and containers every day, and even I struggle to keep track of all the major new projects announced literally almost every day. But there isn't a day that I'm not excited about the operational benefits of having a single tool to model an entire IT miniverse. Also, remember Kubernetes has matured a ton and will continue to do so. Like Linux and OpenStack before it, the interfaces and de facto projects at each layer will mature and become easier to select.

In the third article in this series, I'll dig into what you need to know before you drive your Kubernetes "truck."
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/kubernetes-learning-curve

作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/fatherlinux/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dumptruck_car_vehicle_storage_container_road.jpg?itok=TWK0CbX_ (Dump truck rounding a turn in the road)
[2]: https://opensource.com/article/19/6/kubernetes-dump-truck
[3]: https://kubernetes.io/
[4]: https://opensource.com/sites/default/files/uploads/developer_native_experience_-_mapped_to_traditional_1.png (Kubernetes business model)
@ -0,0 +1,227 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to set ulimit and file descriptors limit on Linux Servers)
[#]: via: (https://www.linuxtechi.com/set-ulimit-file-descriptors-limit-linux-servers/)
[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/)

How to set ulimit and file descriptors limit on Linux Servers
======
**Introduction:** Challenges like a large number of open files have become common in production environments nowadays. Since many applications, Java based, Apache based and others, get installed and configured, they may lead to too many open files and file descriptors. If this exceeds the default limit that is set, then one may face access control problems and file opening failures. Many production environments come to a standstill because of this.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/ulimit-number-openfiles-linux-server.jpg>

Luckily, we have the "**ulimit**" command on any Linux based server, by which one can view, set and get the open files configuration details. This command is equipped with many options, and with these combinations one can set the number of open files. Following are step-by-step commands with examples, explained in detail.

### To view the present open file limit in a Linux system

To get the open file limit on any Linux server, execute the following command:
```
[root@linuxtechi ~]# cat /proc/sys/fs/file-max
146013
```
The number above shows that this system can allocate a maximum of 146013 file handles; note that /proc/sys/fs/file-max is a kernel-wide limit for the whole system, not a per-login-session one. Running the same command on two other systems gives different values:
```
[root@linuxtechi ~]# cat /proc/sys/fs/file-max
149219
[root@linuxtechi ~]# cat /proc/sys/fs/file-max
73906
```
This clearly indicates that individual Linux systems have different limits on the number of open files, based on the dependencies and applications running on the respective systems.

### The ulimit command

As the name suggests, ulimit (user limit) is used to display and set resource limits for the logged-in user. When we run the ulimit command with the -a option, it will print all resource limits for the logged-in user. Now let’s run "**ulimit -a**" on Ubuntu / Debian and CentOS systems.

**Ubuntu / Debian System:**
```
[root@linuxtechi ~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 5731
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 5731
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```
**CentOS System:**
```
[root@linuxtechi ~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 5901
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 5901
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```
As can be seen here, different operating systems have different limits set. All these limits can be configured/changed using the "ulimit" command.

To display an individual resource limit, pass the corresponding parameter to the ulimit command. Some of the parameters are listed below (a usage example follows the list):

  * ulimit -n –> It will display the number of open files limit
  * ulimit -c –> It will display the size of core files
  * ulimit -u –> It will display the maximum user process limit for the logged in user.
  * ulimit -f –> It will display the maximum file size that the user can have.
  * ulimit -m –> It will display the maximum memory size for the logged in user.
  * ulimit -v –> It will display the maximum virtual memory size limit
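As a minimal sketch of using these switches to change a limit for the current shell session only (the value 4096 is an arbitrary example and cannot exceed the hard limit):

```
# check the current soft limit on open files
ulimit -Sn

# raise the soft limit for this shell and its children;
# the change lasts only until the session ends
ulimit -n 4096
```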
Use the below commands to check the hard and soft limits on the number of open files for the logged-in user:
```
[root@linuxtechi ~]$ ulimit -Hn
1048576
[root@linuxtechi ~]$ ulimit -Sn
1024
```
### How to fix the problem when the maximum open files limit is reached?

Let’s assume our Linux server has reached the limit on the maximum number of open files and we want to extend that limit system-wide; for example, we want to set 100000 as the limit on the number of open files.

Use the sysctl command to pass the fs.file-max parameter to the kernel on the fly. Execute the following command as the root user:
```
[root@linuxtechi ~]# sysctl -w fs.file-max=100000
fs.file-max = 100000
```
The above change will be active only until the next reboot, so to make it persistent across reboots, edit the file **/etc/sysctl.conf** and add the same parameter:
```
[root@linuxtechi ~]# vi /etc/sysctl.conf
fs.file-max = 100000
```
Save and exit the file.

Run the following command to make the above changes take effect immediately, without logging out or rebooting:
```
[root@linuxtechi ~]# sysctl -p
```
Now verify whether the new changes are in effect:
```
[root@linuxtechi ~]# cat /proc/sys/fs/file-max
100000
```
Use the below command to find out how many file descriptors are currently being utilized:
```
[root@linuxtechi ~]# more /proc/sys/fs/file-nr
1216    0       100000
```
The three fields are the number of allocated file handles, the number of allocated-but-unused handles, and the system-wide maximum. **Note:** The "**sysctl -p**" command is used to commit the changes without a reboot or logout.
### Set user-level resource limits via the limits.conf file

The "**/etc/sysctl.conf**" file is used to set resource limits system-wide, but if you want to set resource limits for a specific user like oracle, mariadb or apache, this can be achieved via the "**/etc/security/limits.conf**" file.

A sample limits.conf is shown below:
```
[root@linuxtechi ~]# cat /etc/security/limits.conf
```
![Limits-conf-linux-part1][1]

![Limits-conf-linux-part2][2]

Let’s assume we want to set hard and soft limits on the number of open files for the linuxtechi user, and hard and soft limits on the number of processes for the oracle user. Edit the file "/etc/security/limits.conf" and add the following lines:
```
# hard limit for max opened files for linuxtechi user
linuxtechi       hard    nofile          4096
# soft limit for max opened files for linuxtechi user
linuxtechi       soft    nofile          1024

# hard limit for max number of processes for oracle user
oracle           hard    nproc           8096
# soft limit for max number of processes for oracle user
oracle           soft    nproc           4096
```
Save & exit the file.

**Note:** In case you want to put a resource limit on a group instead of users, this is also possible via the limits.conf file: in place of the user name, type **@<Group_Name>**, and the rest of the items remain the same. An example is shown below:
```
# hard limit for max opened files for sysadmin group
@sysadmin        hard    nofile          4096
# soft limit for max opened files for sysadmin group
@sysadmin        soft    nofile          1024
```
Verify whether the new changes are in effect:
```
~]# su - linuxtechi
~]$ ulimit -n -H
4096
~]$ ulimit -n -S
1024

~]# su - oracle
~]$ ulimit -H -u
8096
~]$ ulimit -S -u
4096
```
Note: Another majorly used command is "[**lsof**][3]", which is used for finding out how many files are currently open. This command is very helpful for admins.
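For instance, a couple of typical lsof invocations; the user name here is just an example:

```
# count the files currently opened by the user linuxtechi
lsof -u linuxtechi | wc -l

# list processes holding files open under /var/log
lsof +D /var/log
```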
**Conclusion:**

As mentioned in the introduction section, the "ulimit" command is very powerful and helps one to configure limits and make sure application installations run smoothly without bottlenecks. This command helps in fixing many open-file limitations on Linux based servers.
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/set-ulimit-file-descriptors-limit-linux-servers/

作者:[Shashidhar Soppin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/shashidhar/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Limits-conf-linux-part1-1024x677.jpg
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Limits-conf-linux-part2-1024x443.jpg
[3]: https://www.linuxtechi.com/lsof-command-examples-linux-geeks/
sources/tech/20190610 Constant Time.md
@ -0,0 +1,281 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Constant Time)
[#]: via: (https://dave.cheney.net/2019/06/10/constant-time)
[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney)

Constant Time
======
This essay is derived from my [dotGo 2019 presentation][1] about my favourite feature in Go.

* * *

Many years ago Rob Pike remarked,

> “Numbers are just numbers, you’ll never see `0x80ULL` in a `.go` source file”.

—Rob Pike, [The Go Programming Language][2]

Beyond this pithy observation lies the fascinating world of Go’s constants. Something that is perhaps taken for granted because, as Rob noted, Go’s numbers–constants–just work.
In this post I intend to show you a few things that perhaps you didn’t know about Go’s `const` keyword.
## What’s so great about constants?

To kick things off, why are constants good? Three things spring to mind:

  * _Immutability_. Constants are one of the few ways we have in Go to express immutability to the compiler.
  * _Clarity_. Constants give us a way to extract magic numbers from our code, giving them names and semantic meaning.
  * _Performance_. The ability to express to the compiler that something will not change is key as it unlocks optimisations such as constant folding, constant propagation, branch and dead code elimination.

But these are generic use cases for constants, they apply to any language. Let’s talk about some of the properties of Go’s constants.
### A Challenge

To introduce the power of Go’s constants let’s try a little challenge: declare a _constant_ whose value is the number of bits in the natural machine word.

We can’t use `unsafe.SizeOf` as it is not a constant expression. We could use a build tag and laboriously record the natural word size of each Go platform, or we could do something like this:

```
const uintSize = 32 << (^uint(0) >> 32 & 1)
```

There are many versions of this expression in Go codebases. They all work roughly the same way. If we’re on a 64 bit platform then the complement of the number zero, all zero bits, is a number with all bits set, sixty four of them to be exact.

```
1111111111111111111111111111111111111111111111111111111111111111
```

If we shift that value thirty two bits to the right, we get another value with thirty two ones in it.

```
0000000000000000000000000000000011111111111111111111111111111111
```

Anding that with a number with one bit in the final position gives us the same thing, `1`:

```
0000000000000000000000000000000011111111111111111111111111111111 & 1 = 1
```

Finally we shift the number thirty two one place to the left, giving us 64.

```
32 << 1 = 64
```

This expression is an example of a _constant expression_. All of these operations happen at compile time and the result of the expression is itself a constant. If you look in the runtime package, in particular the garbage collector, you’ll see how constant expressions are used to set up complex invariants based on the word size of the machine the code is compiled on.
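If you want to convince yourself, here is a quick self-contained check (my addition, not from the essay); it prints the word size the constant evaluates to on your machine:

```
package main

import "fmt"

// uintSize is 32 on a 32-bit platform and 64 on a 64-bit one,
// computed entirely at compile time.
const uintSize = 32 << (^uint(0) >> 32 & 1)

func main() {
	fmt.Println(uintSize) // 64 on a 64-bit platform
}
```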
So, this is a neat party trick, but most compilers will do this kind of constant folding at compile time for you. Let’s step it up a notch.

## Constants are values

In Go, constants are values and each value has a type. In Go, user defined types can declare their own methods. Thus, a constant value can have a method set. If you’re surprised by this, let me show you an example that you probably use every day.

```
const timeout = 500 * time.Millisecond
fmt.Println("The timeout is", timeout) // 500ms
```

In the example the untyped literal constant `500` is multiplied by `time.Millisecond`, itself a constant of type `time.Duration`. The rule for assignments in Go is that, unless otherwise declared, the type on the left hand side of the assignment operator is inferred from the type on the right. `500` is an untyped constant, so it is converted to a `time.Duration` then multiplied with the constant `time.Millisecond`.

Thus `timeout` is a constant of type `time.Duration` which holds the value `500000000`.
Why then does `fmt.Println` print `500ms`, not `500000000`?

The answer is `time.Duration` has a `String` method. Thus any `time.Duration` value, even a constant, knows how to pretty print itself.

Now we know that constant values are typed, and because types can declare methods, we can derive that _constant values can fulfil interfaces_. In fact we just saw an example of this. `fmt.Println` doesn’t assert that a value has a `String` method, it asserts the value implements the `Stringer` interface.
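To make that concrete, here is a minimal sketch of my own; the `Level` type and its constants are invented for the illustration, but the mechanism is exactly the one `time.Duration` uses:

```
package main

import "fmt"

// Level is a user defined type derived from int.
type Level int

const (
	Low  Level = iota // 0
	High              // 1
)

// String satisfies fmt.Stringer, so even a constant Level
// knows how to pretty print itself.
func (l Level) String() string {
	if l == High {
		return "high"
	}
	return "low"
}

func main() {
	fmt.Println(High) // prints "high", via the Stringer interface
}
```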
Let’s talk a little about how we can use this property to make our Go code better, and to do that I’m going to take a brief digression into the Singleton pattern.

## Singletons

I’m generally not a fan of the singleton pattern, in Go or any language. Singletons complicate testing and create unnecessary coupling between packages. I feel the singleton pattern is often used _not_ to create a singular instance of a thing, but instead to create a place to coordinate registration. `net/http.DefaultServeMux` is a good example of this pattern.

```
package http

// DefaultServeMux is the default ServeMux used by Serve.
var DefaultServeMux = &defaultServeMux

var defaultServeMux ServeMux
```

There is nothing singular about `http.DefaultServeMux`; nothing prevents you from creating another `ServeMux`. In fact the `http` package provides a helper that will create as many `ServeMux`es as you want.

```
// NewServeMux allocates and returns a new ServeMux.
func NewServeMux() *ServeMux { return new(ServeMux) }
```

`http.DefaultServeMux` is not a singleton. Nevertheless there is a case for things which are truly singletons because they can only represent a single thing. A good example of this is the file descriptors of a process: 0, 1, and 2, which represent stdin, stdout, and stderr respectively.

It doesn’t matter what names you give them; `1` is always stdout, and there can only ever be one file descriptor `1`. Thus these two operations are identical:

```
fmt.Fprintf(os.Stdout, "Hello dotGo\n")
syscall.Write(1, []byte("Hello dotGo\n"))
```

So let’s look at how the `os` package defines `Stdin`, `Stdout`, and `Stderr`:

```
package os

var (
        Stdin  = NewFile(uintptr(syscall.Stdin), "/dev/stdin")
        Stdout = NewFile(uintptr(syscall.Stdout), "/dev/stdout")
        Stderr = NewFile(uintptr(syscall.Stderr), "/dev/stderr")
)
```

There are a few problems with this declaration. Firstly their type is `*os.File`, not the respective `io.Reader` or `io.Writer` interfaces. People have long complained that this makes replacing them with alternatives problematic. However the notion of replacing these variables is precisely the point of this digression. Can you safely change the value of `os.Stdout` once your program is running without causing a data race?

I argue that, in the general case, you cannot. In general, if something is unsafe to do, as programmers we shouldn’t let our users think that it is safe, [lest they begin to depend on that behaviour][3].

Could we change the definition of `os.Stdout` and friends so that they retain the observable behaviour of reading and writing, but remain immutable? It turns out, we can do this easily with constants.

```
type readfd int

// Read reads directly from the underlying file descriptor.
func (r readfd) Read(buf []byte) (int, error) {
        return syscall.Read(int(r), buf)
}

type writefd int

// Write writes directly to the underlying file descriptor.
func (w writefd) Write(buf []byte) (int, error) {
        return syscall.Write(int(w), buf)
}

const (
        Stdin  = readfd(0)
        Stdout = writefd(1)
        Stderr = writefd(2)
)

func main() {
        fmt.Fprintf(Stdout, "Hello world")
}
```

In fact this change causes only one compilation failure in the standard library.
## Sentinel error values

Another case of things which look like constants but really aren’t is sentinel error values. `io.EOF`, `sql.ErrNoRows`, `crypto/x509.ErrUnsupportedAlgorithm`, and so on are all examples of sentinel error values. They all fall into a category of _expected_ errors, and because they are expected, you’re expected to check for them.

To compare the error you have with the one you were expecting, you need to import the package that defines that error. Because, by definition, sentinel errors are exported public variables, any code that imports, for example, the `io` package could change the value of `io.EOF`.

```
package nelson

import "io"

func init() {
        io.EOF = nil // haha!
}
```

I’ll say that again. If I know the name of `io.EOF` I can import the package that declares it, which I must if I want to compare it to my error, and thus I could change `io.EOF`’s value. Historically, convention and a bit of dumb luck have discouraged people from writing code that does this, but technically there is nothing to prevent you from doing so.

Replacing `io.EOF` is probably going to be detected almost immediately. But replacing a less frequently used sentinel error may cause some interesting side effects:

```
package innocent

import "crypto/rsa"

func init() {
        rsa.ErrVerification = nil // 🤔
}
```

If you were hoping the race detector would spot this subterfuge, I suggest you talk to the folks writing testing frameworks who replace `os.Stdout` without it triggering the race detector.
## Fungibility

I want to digress for a moment to talk about _the_ most important property of constants. Constants aren’t just immutable; it’s not enough that we cannot overwrite their declaration.
Constants are _fungible_. This is a tremendously important property that doesn’t get nearly enough attention.

Fungible means interchangeable: one instance is identical to another. Money is a great example of fungibility. If you were to lend me 10 bucks, and I later pay you back, the fact that you gave me a 10 dollar note and I returned to you 10 one dollar bills is, with respect to its operation as a financial instrument, irrelevant. Things which are fungible are by definition equal, and equality is a powerful property we can leverage for our programs.

```
var myEOF = errors.New("EOF") // io/io.go line 38
fmt.Println(myEOF == io.EOF)  // false
```

Putting aside the effect of malicious actors in your code base, the key design challenge with sentinel errors is that they behave like _singletons_, not _constants_. Even if we follow the exact procedure used by the `io` package to create our own EOF value, `myEOF` and `io.EOF` are not equal. `myEOF` and `io.EOF` are not fungible, they cannot be interchanged. Programs can spot the difference.

When you combine the lack of immutability, the lack of fungibility, and the lack of equality, you have a set of weird behaviours stemming from the fact that sentinel error values in Go are not constant expressions. But what if they were?
## Constant errors

Ideally a sentinel error value should behave as a constant. It should be immutable and fungible. Let’s recap how the built in `error` interface works in Go.

```
type error interface {
        Error() string
}
```

Any type with an `Error() string` method fulfils the `error` interface. This includes user defined types, it includes types derived from primitives like string, and it includes constant strings. With that background, consider this error implementation:

```
type Error string

func (e Error) Error() string {
        return string(e)
}
```

We can use this error type as a constant expression:

```
const err = Error("EOF")
```

Unlike our `Error` type, `errors.errorString` is a struct, and a struct literal initialiser is not a constant expression, so it cannot be used:

```
const err2 = errors.errorString{"EOF"} // doesn't compile
```

As constants of this `Error` type are not variables, they are immutable.

```
const err = Error("EOF")
err = Error("not EOF") // doesn't compile
```

Additionally, two constant strings are always equal if their contents are equal:

```
const str1 = "EOF"
const str2 = "EOF"
fmt.Println(str1 == str2) // true
```

which means two constants of a type derived from string with the same contents are also equal.

```
type Error string

const err1 = Error("EOF")
const err2 = Error("EOF")
fmt.Println(err1 == err2) // true
```

Said another way, equal constant `Error` values are the same, in the way that the literal constant `1` is the same as every other literal constant `1`.

Now we have all the pieces we need to make sentinel errors, like `io.EOF`, and `rsa.ErrVerification`, immutable, fungible, constant expressions.

```
% git diff
diff --git a/src/io/io.go b/src/io/io.go
inde
```
@ -0,0 +1,204 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Step by Step Zorin OS 15 Installation Guide with Screenshots)
[#]: via: (https://www.linuxtechi.com/zorin-os-15-installation-guide-screenshots/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

Step by Step Zorin OS 15 Installation Guide with Screenshots
======

Good news for all the Zorin users out there! Zorin has launched Zorin OS 15, the latest version of its Ubuntu based Linux distro. This version is based on Ubuntu 18.04.2. Since its launch in July 2009, it is estimated that this popular distribution has reached more than 17 million downloads. Zorin is renowned for creating a distribution for beginner level users, and the all new Zorin OS 15 comes packed with a lot of goodies that will surely make Zorin OS lovers happy. Let’s see some of the major enhancements made in the latest version.

### New Features of Zorin OS 15

Zorin OS has always amazed users with a different set of features in every release, and Zorin OS 15 is no exception: it comes with a lot of new features, as outlined below.

**Enhanced User Experience**

The moment you look at Zorin OS 15, you may ask whether it is really a Linux distro, because it looks more like a Windows OS. According to Zorin, it wants Windows users to move to Linux in a more user-friendly manner. It features a Windows-like start menu, quick app launchers, a traditional taskbar section, system tray, etc.

**Zorin Connect**

Another major highlight of Zorin OS 15 is the ability to integrate your Android smartphone seamlessly with your desktop using the Zorin Connect application. With your phone connected, you can share music, videos and other files between your phone and desktop. You can even use your phone as a mouse to control the desktop. You can also easily control media playback on your desktop from your phone itself, and quickly reply to all messages and notifications sent to your phone from your desktop.

**New GTK Theme**

Zorin OS 15 ships with an all new GTK theme that has been exclusively built for this distro, and the theme is available in 6 different colors along with the hugely popular dark theme. Another highlight is that the OS automatically detects the time of day and changes the desktop theme accordingly. Say, for example, during sunset it switches to a dark theme, whereas in the morning it switches to a bright theme automatically.

**Other New Features:**

Zorin OS 15 comes packed with a lot of new features, including:

  * Compatible with Thunderbolt 3.0 devices
  * Supports color emojis
  * Comes with an upgraded Linux kernel 4.18
  * Customized settings available for application menu and task bar
  * System font changed to Inter
  * Supports renaming bulk files

### Minimum system requirements for Zorin OS 15 (Core):

  * Dual Core 64-bit (1GHz) processor
  * 2 GB RAM
  * 10 GB free disk space
  * Internet connection (optional)
  * Display (800×600)

### Step by Step Guide to Install Zorin OS 15 (Core)

Before you start installing Zorin OS 15, ensure you have a copy of Zorin OS 15 downloaded on your system. If not, download it from the official [Zorin OS 15][1] website. Remember this Linux distribution is available in 4 versions, including:

  * Ultimate (Paid Version)
  * Core (Free Version)
  * Lite (Free Version)
  * Education (Free Version)

Note: In this article I will demonstrate the Zorin OS 15 Core installation steps.

### Step 1) Create Zorin OS 15 Bootable USB Disk

Once you have downloaded Zorin OS 15, copy the ISO onto a USB disk and create a bootable disk. Change your system settings to boot from the USB disk and restart your system. Once you restart your system, you will see the screen shown below. Click “**Install or Try Zorin OS**”.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-Zorin-OS15-option.jpg>
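If you are preparing the USB stick from an existing Linux machine, one common approach is `dd` (a sketch of my own, not from the article; the ISO filename and `/dev/sdX` are placeholders, and `dd` overwrites the target device, so double-check the device name first):

```
# write the ISO to the USB stick (replace the filename and /dev/sdX)
$ sudo dd if=Zorin-OS-15-Core-64-bit.iso of=/dev/sdX bs=4M status=progress conv=fsync
```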
### Step 2) Choose Install Zorin OS

On the next screen, you will be given the option to either install Zorin OS 15 or try Zorin OS. Click “**Install Zorin OS**” to continue the installation process.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-Zorin-OS-15-on-System.jpg>

### Step 3) Choose Keyboard Layout

The next step is to choose your keyboard layout. By default, English (US) is selected; if you want a different layout, choose it, then click “**Continue**”.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Choose-Keyboard-Layout-Zorinos-15.jpg>

### Step 4) Download Updates and Other Software

On the next screen, you will be asked whether to download updates while installing Zorin OS and whether to install third party applications. If your system is connected to the internet you can select both of these options, but doing so increases the installation time considerably. If you don’t want to install updates and third party software during the installation, untick both options and click “Continue”.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-Updates-third-party-softwares-Zorin-OS15-Installation.jpg>

### Step 5) Choose Zorin OS 15 Installation Method

If you are new to Linux, want a fresh installation, and don’t want to customize partitions, then choose the option “**Erase disk and install Zorin OS**”.

If you want to create custom partitions for Zorin OS, choose “**Something else**”. In this tutorial I will demonstrate how to create a custom partition scheme for the Zorin OS 15 installation.

So, choose the “**Something else**” option and then click on Continue.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Choose-Something-else-option-Zorin-OS15-Installation.jpg>

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Disk-for-Zorin-OS15-Installation.jpg>

As we can see, we have around 42 GB of disk available for Zorin OS. We will be creating the following partitions:

  * /boot = 2 GB (ext4 file system)
  * /home = 20 GB (ext4 file system)
  * / = 10 GB (ext4 file system)
  * /var = 7 GB (ext4 file system)
  * Swap = 2 GB (swap area)

To start creating partitions, first click on “**New Partition Table**”; it will warn that it is going to create an empty partition table. Click on Continue.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/create-empty-partition-zorin-os15-installation.jpg>

On the next screen we will see that we now have 42 GB of free space on the disk (/dev/sda), so let’s create our first partition, /boot.

Select the free space, then click on the + symbol and specify the partition size as 2048 MB, the file system type as ext4, and the mount point as /boot.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/boot-partiton-during-zorin-os15-installation.jpg>

Click on OK.

Now create our next partition, /home, of size 20 GB (20480 MB):

<https://www.linuxtechi.com/wp-content/uploads/2019/06/home-partition-zorin-os15-installation.jpg>

Similarly create our next two partitions, / and /var, of size 10 GB and 7 GB respectively:

<https://www.linuxtechi.com/wp-content/uploads/2019/06/slash-partition-zorin-os15-installation.jpg>

<https://www.linuxtechi.com/wp-content/uploads/2019/06/var-partition-zorin-os15-installation.jpg>

Let’s create our last partition, swap, of size 2 GB:

<https://www.linuxtechi.com/wp-content/uploads/2019/06/swap-partition-Zorin-OS15-Installation.jpg>

Click on OK.

Choose the “**Install Now**” option in the next window:

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Install-now-option-zorin-os15.jpg>

In the next window, choose “Continue” to write the changes to disk and proceed with the installation.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Write-Changes-to-disk-zorin-os15.jpg>
### Step 6) Choose Your Preferred Location

On the next screen, you will be asked to choose your location; then click “Continue”.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/TimeZone-Zorin-OS15-Installation.jpg>

### Step 7) Provide User Credentials

On the next screen, you’ll be asked to enter user credentials, including your name, computer name, username, and password. Once you are done, click “Continue” to proceed with the installation process.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/User-Credentails-During-Zorin-OS15-Installation.jpg>

### Step 8) Installing Zorin OS 15

Once you click Continue, you can see that Zorin OS 15 starts installing; it may take some time to complete the installation process.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Installation-Progress-Zorin-OS15.jpg>

### Step 9) Restart your system after Successful Installation

Once the installation process is completed, it will ask you to restart your computer. Hit “**Restart Now**”.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Zorin-OS15-Installation-Completed.jpg>

### Step 10) Login to Zorin OS 15

Once the system restarts, you will be asked to log in to the system using the login credentials provided earlier.

Note: Don’t forget to change the boot medium back in the BIOS so that the system boots from disk.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Login-Screen-Zorin-OS15.jpg>

### Step 11) Zorin OS 15 Welcome Screen

Once your login is successful, you can see the Zorin OS 15 welcome screen. Now you can start exploring all the incredible features of Zorin OS 15.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Desktop-Screen-Zorin-OS15.jpg>
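As an optional sanity check of my own (assuming the `lsb_release` utility is present on the installed system), you can confirm the installed release from a terminal:

```
$ lsb_release -a
```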
That’s all from this tutorial. Please do share your feedback and comments.

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/zorin-os-15-installation-guide-screenshots/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://zorinos.com/download/
@ -1,133 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installing alternative versions of RPMs in Fedora)
[#]: via: (https://fedoramagazine.org/installing-alternative-rpm-versions-in-fedora/)
[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)

Installing alternative versions of RPMs in Fedora
======

![][1]

[Modularity][2] enables Fedora to provide alternative versions of RPM packages in the repositories. Several different applications, language runtimes, and tools are available in multiple versions, built natively for each Fedora release.

The Fedora Magazine already covered [Modularity in Fedora 28 Server Edition][3] about a year ago. Back then, it was just an optional repository with additional content, and as the title hints, only available to the Server Edition. A lot has changed since then, and now **Modularity is a core part of the Fedora distribution**. And some packages have moved to modules completely. At the time of writing, out of the 49,464 binary RPM packages in Fedora 30, 1,119 (2.26%) come from a module ([more about the numbers][4]).

### Modularity basics

Because having too many packages in multiple versions could feel overwhelming (and hard to manage), packages are grouped into **modules** that represent an application, a language runtime, or any other sensible group.

Modules often come in multiple **streams**, usually representing a major version of the software. Streams are available in parallel, but only one stream of each module can be installed on a given system.

And not to overwhelm users with too many choices, each Fedora release comes with a set of **defaults**, so decisions only need to be made when desired.

Finally, to simplify installation, modules can be optionally installed using pre-defined **profiles** based on a use case. A database module, for example, could be installed as a client, a server, or both.

### Modularity in practice

When you install an RPM package on your Fedora system, chances are it comes from a module stream. The reason why you might not have noticed is one of the core principles of Modularity: remaining invisible until there is a reason to know about it.

Let’s compare the following two situations. First, installing the popular _i3_ tiling window manager, and second, installing the minimalist _dwm_ window manager:

```
$ sudo dnf install i3
...
Done!
```

As expected, the above command installs the _i3_ package and its dependencies on the system. Nothing else happened here. But what about the other one?

```
$ sudo dnf install dwm
...
Enabling module streams:
 dwm 6.1
...
Done!
```

It feels the same, but something happened in the background: the default _dwm_ module stream (_6.1_) got enabled, and the _dwm_ package from the module got installed.

To be transparent, there is a message about the module auto-enablement in the output. But other than that, the user doesn’t need to know anything about Modularity in order to use their system the way they always did.

But what if they do? Let’s see how a different version of _dwm_ could have been installed instead.

Use the following command to see what module streams are available:

```
$ sudo dnf module list
...
dwm latest ...
dwm 6.0 ...
dwm 6.1 [d] ...
dwm 6.2 ...
...
Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
```

The output shows there are four streams of the _dwm_ module, _6.1_ being the default.

To install the _dwm_ package in a different version, from the _6.2_ stream for example, enable the stream and then install the package by using the two following commands:

```
$ sudo dnf module enable dwm:6.2
...
Enabling module streams:
 dwm 6.2
...
Done!
$ sudo dnf install dwm
...
Done!
```

Finally, let’s have a look at profiles, with PostgreSQL as an example.

```
$ sudo dnf module list
...
postgresql 9.6 client, server ...
postgresql 10 client, server ...
postgresql 11 client, server ...
...
```

To install PostgreSQL 11 as a server, use the following command:

```
$ sudo dnf module install postgresql:11/server
```

Note that, apart from enabling, modules can be installed with a single command when a profile is specified.

It is possible to install multiple profiles at once. To add the client tools, use the following command:

```
$ sudo dnf module install postgresql:11/client
```

There are many other modules with multiple streams available to choose from. At the time of writing, there were 83 module streams in Fedora 30. That includes two versions of MariaDB, three versions of Node.js, two versions of Ruby, and many more.

Please refer to the [official user documentation for Modularity][5] for a complete set of commands, including switching from one stream to another.
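For a rough idea of what switching streams involves, here is a sketch of my own built from the `dnf module` subcommands used above plus `reset`; treat the linked documentation as authoritative:

```
$ sudo dnf module reset dwm        # forget the currently enabled stream
$ sudo dnf module enable dwm:6.2   # enable the stream you want instead
$ sudo dnf distro-sync             # sync installed packages to the new stream
```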
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/installing-alternative-rpm-versions-in-fedora/

作者:[Adam Šamalík][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/asamalik/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/modularity-f30-816x345.jpg
[2]: https://docs.pagure.org/modularity
[3]: https://fedoramagazine.org/modularity-fedora-28-server-edition/
[4]: https://blog.samalik.com/2019/06/12/counting-modularity-packages.html
[5]: https://docs.fedoraproject.org/en-US/modularity/using-modules/
@ -1,171 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Modrisco)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to send email from the Linux command line)
[#]: via: (https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

How to send email from the Linux command line
======
Linux offers several commands that allow you to send email from the command line. Here’s a look at some that offer interesting options.
![Molnia/iStock][1]

There are several ways to send email from the Linux command line. Some are very simple and others more complicated, but they offer some very useful features. The choice depends on what you want to do: whether you want to get a quick message off to a co-worker or send a more complicated message with an attachment to a large group of people. Here’s a look at some of the options:

### mail

The easiest way to send a simple message from the Linux command line is to use the **mail** command. Maybe you need to remind your boss that you’re leaving a little early that day. You could use a command like this one:

```
$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" myboss
```

**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**

Another option is to grab your message text from a file that contains the content you want to send:

```
$ mail -s "Reminder:Leaving early" myboss < reason4leaving
```

In both cases, the -s option allows you to provide a subject line for your message.

### sendmail

Using **sendmail**, you can send a quick message (with no subject) using a command like this (replacing “recip” with your intended recipient):

```
$ echo "leaving now" | sendmail recip
```

You can send just a subject line (with no message content) with a command like this:

```
$ echo "Subject: leaving now" | sendmail recip
```

You can also use sendmail on the command line to send a message complete with a subject line. However, when using this approach, you would add your subject line to the file you intend to send, as in this example file:

```
Subject: Requested lyrics
I would just like to say that, in my opinion, longer hair and other flamboyant
affectations of appearance are nothing more ...
```

Then you would send the file like this (where the lyrics file contains your subject line and text):

```
$ sendmail recip < lyrics
```

Sendmail can be quite verbose in its output. If you’re desperately curious and want to see the interchange between the sending and receiving systems, add the -v (verbose) option:

```
$ sendmail -v recip@emailsite.com < lyrics
```

### mutt

An especially nice tool for command line emailing is the **mutt** command, though you will likely have to install it first. Mutt has a convenient advantage in that it allows you to include attachments.

To use mutt to send a quick message:

```
$ echo "Please check last night's backups" | mutt -s "backup check" recip
```

To get content from a file:

```
$ mutt -s "Agenda" recip < agenda
```

To add an attachment with mutt, use the -a option. You can even add more than one, as shown in this command:

```
$ mutt -s "Agenda" recip -a agenda -a speakers < msg
```

In the command above, the “msg” file includes content for the email. If you don’t have any additional content to provide, you can do this instead:

```
$ echo "" | mutt -s "Agenda" recip -a agenda -a speakers
```

Another useful feature of mutt is that it provides a way to send carbon copies (using the -c option) and blind carbon copies (using the -b option).

```
$ mutt -s "Minutes from last meeting" recip@somesite.com -c myboss < mins
```
### telnet

If you want to get deep into the details of sending email, you can use **telnet** to carry on the email exchange operation, but you’ll need to, as they say, “learn the lingo.” Mail servers expect a sequence of commands that include things like introducing yourself (**EHLO** command), providing the email sender (**MAIL FROM** command), specifying the email recipient (**RCPT TO** command), and then adding the message (**DATA**) and ending the message with a “.” as the only character on the line. Not every email server will respond to these requests. This approach is generally used only for troubleshooting.

```
$ telnet emailsite.org 25
Trying 192.168.0.12...
Connected to emailsite.
Escape character is '^]'.
220 localhost ESMTP Sendmail 8.15.2/8.15.2/Debian-12; Wed, 12 Jun 2019 16:32:13 -0400; (No UCE/UBE) logging access from: mysite(OK)-mysite [192.168.0.12]
EHLO mysite.org          <== introduce yourself
250-localhost Hello mysite [127.0.0.1], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-EXPN
250-VERB
250-8BITMIME
250-SIZE
250-DSN
250-ETRN
250-AUTH DIGEST-MD5 CRAM-MD5
250-DELIVERBY
250 HELP
MAIL FROM: me@mysite.org <== specify sender
250 2.1.0 shs@mysite.org... Sender ok
RCPT TO: recip           <== specify recipient
250 2.1.5 recip... Recipient ok
DATA                     <== start message
354 Enter mail, end with "." on a line by itself
This is a test message. Please deliver it for me.
.                        <== end message
250 2.0.0 x5CKWDds029287 Message accepted for delivery
quit                     <== end exchange
```

### Sending email to multiple recipients

If you want to send email from the Linux command line to a large group of recipients, you can always use a loop to make the job easier, as in this example using mutt:

```
$ for recip in `cat recips`
do
    mutt -s "Minutes from May meeting" $recip < May_minutes
done
```
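A small variation of my own on the same idea: a `while read` loop processes the recipient list one line at a time and quotes the address, which holds up better if an entry contains unexpected characters:

```
$ while read -r recip
do
    mutt -s "Minutes from May meeting" "$recip" < May_minutes
done < recips
```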
### Wrap-up

There are quite a few ways to send email from the Linux command line. Some tools provide quite a few options.

Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/Modrisco)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/08/email_image_blue-100732096-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world
@ -1,94 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Personal assistant with Mycroft and Fedora)
[#]: via: (https://fedoramagazine.org/personal-assistant-with-mycroft-and-fedora/)
[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)

Personal assistant with Mycroft and Fedora
======

![][1]

Looking for an open source personal assistant? [Mycroft][2] lets you run an open source service that gives you better control of your data.

### Install Mycroft on Fedora

Mycroft is currently not available in the official package collection, but it can be easily installed from the project source. The first step is to download the source from Mycroft’s GitHub repository.

```
$ git clone https://github.com/MycroftAI/mycroft-core.git
```

Mycroft is a Python application, and the project provides a script that takes care of creating a virtual environment before installing Mycroft and its dependencies.

```
$ cd mycroft-core
$ ./dev_setup.sh
```

The installation script prompts the user to guide the installation process. It is recommended to run the stable version and get automatic updates.

When prompted to install the Mimic text-to-speech engine locally, answer No: as described in the installation process, building it can take a long time, and Mimic is available as an RPM package in Fedora, so it can be installed using dnf instead.

```
$ sudo dnf install mimic
```

### Starting Mycroft

After the installation is complete, the Mycroft services can be started using the following script.

```
$ ./start-mycroft.sh all
```

In order to start using Mycroft, the device running the service needs to be registered. To do that, an account is needed; it can be created at <https://home.mycroft.ai/>.

Once the account is created, it is possible to add a new device at [https://account.mycroft.ai/devices][3]. Adding a new device requires a pairing code that will be spoken to you by your device after starting all the services.

![][4]

The device is now ready to be used.

### Using Mycroft

Mycroft provides a set of [skills][5] that are enabled by default or can be downloaded from the [Marketplace][5]. To start, you can simply ask Mycroft how it is doing, or what the weather is.

```
Hey Mycroft, how are you?

Hey Mycroft, what's the weather like?
```

If you are interested in how things work, the _start-mycroft.sh_ script provides a _cli_ option that lets you interact with the services using the command line. It also displays logs, which is really useful for debugging.
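For example, to drop into that command line client instead of starting everything in the background:

```
$ ./start-mycroft.sh cli
```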
Mycroft is always trying to learn new skills, and there are many ways to help by [contributing][6] to the Mycroft community.

* * *

Photo by [Przemyslaw Marczynski][7] on [Unsplash][8]

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/personal-assistant-with-mycroft-and-fedora/

作者:[Clément Verna][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/cverna/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/mycroft-816x345.jpg
[2]: https://mycroft.ai/
[3]: https://account.mycroft.ai/devices
[4]: https://fedoramagazine.org/wp-content/uploads/2019/06/Screenshot_2019-06-14-Account.png
[5]: https://market.mycroft.ai/skills
[6]: https://mycroft.ai/contribute/
[7]: https://unsplash.com/@pemmax?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[8]: https://unsplash.com/search/photos/ai?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
@ -0,0 +1,173 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers)
[#]: via: (https://www.linuxtechi.com/vlan-tagged-nic-ethernet-card-centos-rhel-servers/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers
======

There are some scenarios where we want to assign multiple IPs from different **VLANs** on the same Ethernet card (NIC) on Linux servers (**CentOS** / **RHEL**). This can be done by enabling a VLAN tagged interface. But for this to happen, we must first make sure that multiple VLANs are attached to the port on the switch; in other words, the switch port must be configured as a trunk port carrying those VLANs.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/VLAN-Tagged-NIC-Linux-Server.jpg>

Let’s assume we have a Linux server with two Ethernet cards (enp0s3 & enp0s8); the first NIC (**enp0s3**) will be used for data traffic and the second NIC (**enp0s8**) will be used for control / management traffic. For data traffic I will be using multiple VLANs (that is, I will assign multiple IPs from different VLANs on the data traffic Ethernet card).

I am assuming the switch port connected to my server’s data NIC is configured as a trunk port with multiple VLANs mapped to it.

The following VLANs are mapped to the data traffic Ethernet card (NIC):

  * VLAN ID (200), VLAN N/W = 172.168.10.0/24
  * VLAN ID (300), VLAN N/W = 172.168.20.0/24

To use a VLAN tagged interface on CentOS 7 / RHEL 7 / CentOS 8 / RHEL 8 systems, the [kernel module][1] **8021q** must be loaded.

Use the following commands to load the kernel module “8021q”:

```
[root@linuxtechi ~]# lsmod | grep -i 8021q
[root@linuxtechi ~]# modprobe --first-time 8021q
[root@linuxtechi ~]# lsmod | grep -i 8021q
8021q                  29022  0
garp                   14384  1 8021q
mrp                    18542  1 8021q
[root@linuxtechi ~]#
```

Use the modinfo command below to display information about the kernel module “8021q”:

```
[root@linuxtechi ~]# modinfo 8021q
filename:       /lib/modules/3.10.0-327.el7.x86_64/kernel/net/8021q/8021q.ko
version:        1.8
license:        GPL
alias:          rtnl-link-vlan
rhelversion:    7.2
srcversion:     2E63BD725D9DC11C7DA6190
depends:        mrp,garp
intree:         Y
vermagic:       3.10.0-327.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        79:AD:88:6A:11:3C:A0:22:35:26:33:6C:0F:82:5B:8A:94:29:6A:B3
sig_hashalgo:   sha256
[root@linuxtechi ~]#
```
Now tag (or map) VLAN 200 to the NIC enp0s3 using the [ip command][2]:

```
[root@linuxtechi ~]# ip link add link enp0s3 name enp0s3.200 type vlan id 200
```

Bring up the interface using the ip command below:

```
[root@linuxtechi ~]# ip link set dev enp0s3.200 up
```

Similarly, map VLAN 300 to the NIC enp0s3:

```
[root@linuxtechi ~]# ip link add link enp0s3 name enp0s3.300 type vlan id 300
[root@linuxtechi ~]# ip link set dev enp0s3.300 up
[root@linuxtechi ~]#
```

Now view the tagged interface status using the ip command:

[![tagged-interface-ip-command][3]][4]

Now we can assign IP addresses to the tagged interfaces from their respective VLANs using the ip commands below:

```
[root@linuxtechi ~]# ip addr add 172.168.10.51/24 dev enp0s3.200
[root@linuxtechi ~]# ip addr add 172.168.20.51/24 dev enp0s3.300
```

Use the ip command to see whether the IPs are assigned to the tagged interfaces:

![ip-address-tagged-nic][5]
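The commands behind that screenshot are presumably along these lines (my assumption, using the interface names from above):

```
[root@linuxtechi ~]# ip addr show enp0s3.200
[root@linuxtechi ~]# ip addr show enp0s3.300
```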
All the above changes made via ip commands will not persist across a reboot. These tagged interfaces will not be available after a reboot or after a network service restart.

So, to make the tagged interfaces persistent across reboots, use interface **ifcfg files**.

Edit the interface (enp0s3) file “**/etc/sysconfig/network-scripts/ifcfg-enp0s3**” and add the following content.

Note: Replace the interface name with the one that suits your environment.

```
[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
TYPE=Ethernet
DEVICE=enp0s3
BOOTPROTO=none
ONBOOT=yes
```

Save & exit the file.

Create the tagged interface file for VLAN ID 200 as “**/etc/sysconfig/network-scripts/ifcfg-enp0s3.200**” and add the following contents to it.

```
[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3.200
DEVICE=enp0s3.200
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.168.10.51
PREFIX=24
NETWORK=172.168.10.0
VLAN=yes
```

Save & exit the file.

Similarly, create the interface file for VLAN ID 300 as “/etc/sysconfig/network-scripts/ifcfg-enp0s3.300” and add the following contents to it:

```
[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3.300
DEVICE=enp0s3.300
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.168.20.51
PREFIX=24
NETWORK=172.168.20.0
VLAN=yes
```

Save and exit the file, and then restart the network services using the command below:

```
[root@linuxtechi ~]# systemctl restart network
[root@linuxtechi ~]#
```

Now verify whether the tagged interfaces are configured, up, and running using the ip command:

![tagged-interface-status-ip-command-linux-server][6]

That’s all from this article. I hope you got an idea of how to configure and enable a VLAN tagged interface on CentOS 7 / 8 and RHEL 7 / 8 servers. Please do share your feedback and comments.

--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/vlan-tagged-nic-ethernet-card-centos-rhel-servers/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/how-to-manage-kernel-modules-in-linux/
[2]: https://www.linuxtechi.com/ip-command-examples-for-linux-users/
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-ip-command-1024x444.jpg
[4]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-ip-command.jpg
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/ip-address-tagged-nic-1024x343.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-status-ip-command-linux-server-1024x656.jpg
118 sources/tech/20190618 A beginner-s guide to Linux permissions.md Normal file
@ -0,0 +1,118 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A beginner's guide to Linux permissions)
[#]: via: (https://opensource.com/article/19/6/understanding-linux-permissions)
[#]: author: (Bryant Son https://opensource.com/users/brson/users/greg-p/users/tj)

A beginner's guide to Linux permissions
======
Linux security permissions designate who can do what with a file or directory.
![Hand putting a Linux file folder into a drawer][1]

One of the main benefits of Linux systems is that they are known to be less prone to security vulnerabilities and exploits than other systems. Linux definitely gives users more flexibility and granular control over its file systems' security permissions. This may imply that it's critical for Linux users to understand security permissions. That isn't necessarily true, but it's still wise for beginning users to understand the basics of Linux permissions.

### View Linux security permissions

To start learning about Linux permissions, imagine we have a newly created directory called **PermissionDemo**. Run **cd** into the directory and use the **ls -l** command to view the Linux security permissions. If you want to sort the entries by time modified, add the **-t** option:

```
ls -lt
```

Since there are no files inside this new directory, this command returns nothing.

![No output from ls -l command][2]

To learn more about the **ls** options, access its man page by entering **man ls** on the command line.

![ls man page][3]

Now, let's create two files, **cat.txt** and **dog.txt**, with empty content; this is easy to do using the **touch** command. Let's also create an empty directory called **Pets** with the **mkdir** command. We can use the **ls -l** command again to see the permissions for these new files.

![Creating new files and directory][4]
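Spelled out, the screenshot above corresponds to commands like these (using the names from the text):

```
touch cat.txt dog.txt
mkdir Pets
ls -l
```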
We need to pay attention to two sections of output from this command.

### Who has permission?

The first thing to examine indicates _who_ has permission to access the file/directory. Note the section highlighted in the red box below. The first column refers to the _user_ who has access, while the second column refers to the _group_ that has access.

![Output from -ls command][5]

There are three main types of users: **user**, **group**, and **other** (essentially neither a user nor a group). There is one more: **all**, which means practically everyone.

![User types][6]

Because we are using **root** as the user, we can access any file or directory because **root** is the superuser. However, this is generally not the case, and you will probably be restricted to your username. A list of all users is stored in the **/etc/passwd** file.

![/etc/passwd file][7]

Groups are maintained in the **/etc/group** file.

![/etc/group file][8]

### What permissions do they have?

The other section of the output from **ls -l** that we need to pay attention to relates to enforcing permissions. Above, we confirmed that the owner and group permissions for the files dog.txt and cat.txt and the directory Pets we created belong to the **root** account. We can use that information about who owns what to enforce permissions for the different user ownership types, as highlighted in the red box below.

![Enforcing permissions for different user ownership types][9]

We can dissect each line into five bits of information. The first part indicates whether it is a file or a directory; files are labeled with a **-** (hyphen), and directories are labeled with **d**. The next three parts refer to permissions for **user**, **group**, and **other**, respectively. The last part is a flag for the [**access-control list**][10] (ACL), a list of permissions for an object.

![Different Linux permissions][11]

Linux permission levels can be identified with letters or numbers. There are three privilege types:

  * **read**: r or 4
  * **write**: w or 2
  * **execute**: x or 1

![Privilege types][12]

The presence of each letter symbol (**r**, **w**, or **x**) means that the permission exists, while **-** indicates it does not. In the example below, the file is readable and writeable by the owner, only readable if the user belongs to the group, and readable and executable by anyone else. Converted to numeric notation, this would be 645 (see the image below for an explanation of how this is calculated).

![Permission type example][13]
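As a small illustration of my own using the files created earlier, here is how **chmod** could set that 645 mode, numerically or symbolically:

```
chmod 645 dog.txt            # rw-r--r-x, the numeric form
chmod u=rw,g=r,o=rx dog.txt  # the same permissions, symbolic form
```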
Here are a few more examples:

![Permission type examples][14]

Test your knowledge by going through the following exercises.

![Permission type examples][15]

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/understanding-linux-permissions

作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brson/users/greg-p/users/tj
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/sites/default/files/uploads/1_3.jpg (No output from ls -l command)
[3]: https://opensource.com/sites/default/files/uploads/1_man.jpg (ls man page)
[4]: https://opensource.com/sites/default/files/uploads/2_6.jpg (Creating new files and directory)
[5]: https://opensource.com/sites/default/files/uploads/3_2.jpg (Output from -ls command)
[6]: https://opensource.com/sites/default/files/uploads/4_0.jpg (User types)
[7]: https://opensource.com/sites/default/files/uploads/linuxpermissions_4_passwd.jpg (/etc/passwd file)
[8]: https://opensource.com/sites/default/files/uploads/linuxpermissions_4_group.jpg (/etc/group file)
[9]: https://opensource.com/sites/default/files/uploads/linuxpermissions_5.jpg (Enforcing permissions for different user ownership types)
[10]: https://en.wikipedia.org/wiki/Access-control_list
[11]: https://opensource.com/sites/default/files/uploads/linuxpermissions_6.jpg (Different Linux permissions)
[12]: https://opensource.com/sites/default/files/uploads/linuxpermissions_7.jpg (Privilege types)
[13]: https://opensource.com/sites/default/files/uploads/linuxpermissions_8.jpg (Permission type example)
[14]: https://opensource.com/sites/default/files/uploads/linuxpermissions_9.jpg (Permission type examples)
[15]: https://opensource.com/sites/default/files/uploads/linuxpermissions_10.jpg (Permission type examples)
@ -0,0 +1,231 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use MapTool to build an interactive dungeon RPG)
[#]: via: (https://opensource.com/article/19/6/how-use-maptools)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to use MapTool to build an interactive dungeon RPG
======
By using MapTool, most of a game master's work is done well before a role-playing game begins.

![][1]
In my previous article on MapTool, I explained how to download, install, and configure your own private, [open source virtual tabletop][2] so you and your friends can play a role-playing game (RPG) together. [MapTool][3] is a complex application with lots of features, and this article demonstrates how a game master (GM) can make the most of it.

### Update JavaFX

MapTool requires JavaFX, but Java maintainers recently stopped bundling it in Java downloads. This means that, even if you have Java installed, you might not have JavaFX installed.

Some Linux distributions have a JavaFX package available; if you try to run MapTool and get an error about JavaFX, download the latest self-contained version:

  * For [Ubuntu and other Debian-based systems][4]
  * For [Fedora and Red Hat-based systems][5]

### Build a campaign

The top-level file in MapTool is a campaign (.cmpgn) file. A campaign can contain all of the maps required by the game you're running. As your players progress through the campaign, everyone changes to the appropriate map and plays.

For that to go smoothly, you must do a little prep work.

First, you need the digital equivalents of miniatures: _tokens_ in MapTool terminology. Tokens are available from various sites, but the most prolific is [immortalnights.com/tokensite][6]. If you're still just trying out virtual tabletops and aren't ready to invest in digital art yet, you can get a stunning collection of starter tokens from immortalnights.com for $0.

You can add starter content to MapTool quickly and easily using its built-in resource importer. Go to the **File** menu and select **Add Resource to Library**.

In the **Add Resource to Library** dialogue box, select the RPTools tab, located at the bottom-left. This lists all the free art packs available from the RPTools server, tokens and maps alike. Click to download and import.

![Add Resource to Library dialogue][7]
You can import assets you already have on your computer by selecting files from the file system, using the same dialogue box.

MapTool resources appear in the Library panel. If your MapTool window has no Library panel, select **Library** in the **Window** menu to add one.

### Gather your maps

The next step in preparing for your game is to gather maps. Depending on what you're playing, that might mean you need to draw your maps, purchase a map pack, or just open a map bundled with a game module. If all you need is a generic dungeon, you can also download free maps from within MapTool's **Add Resource to Library**.

If you have a set of maps you intend to use often, you can import them as resources. If you are building a campaign you intend to use just once, you can quickly add any PNG or JPEG file as a **New Map** in the **Map** menu.

![Creating a new map][8]

Set the **Background** to a texture that roughly matches your map or to a neutral color.

Set the **Map** to your map graphics file.

Give your new map a unique **Name**. The map name is visible to your players, so keep it free of spoilers.

To switch between maps, click the **Select Map** button in the top-right corner of the MapTool window, and choose the map name in the drop-down menu that appears.

![Select a map][9]

Before you let your players loose on your map, you still have some important prep work to do.

### Adjust the grid size

Since most RPGs govern how far players can move during their turn, especially during combat, game maps are designed to a specific scale. The most common scale is one map square for every five feet. Most maps you download already have a grid drawn on them; if you're designing a map, you should draw on graph paper to keep your scale consistent. Whether your map graphic has a grid or not, MapTool doesn't know about it, but you can adjust the digital grid overlay so that your player tokens are constrained into squares along the grid.
MapTool doesn't show the grid by default, so go to the **Map** menu and select **Adjust grid**. This displays MapTool's grid lines, and your goal is to make MapTool's grid line up with the grid drawn onto your map graphic. If your map graphic doesn't have a grid, it may indicate its scale; a common scale is one inch per five feet, and you can usually assume 72 pixels is one inch (on a 72 DPI screen). While adjusting the grid, you can change the color of the grid lines for your own reference. Set the cell size in pixels, then click and drag to align MapTool's grid to your map's grid.

![Adjusting the grid][10]

If your map has no grid and you want the grid to remain visible after you adjust it, go to the **View** menu and select **Show Grid**.
### Add players and NPCs

To add a player character (PC), non-player character (NPC), or monster to your map, find an appropriate token in your **Library** panel, then drag and drop one onto your map. In the **New Token** dialogue box that appears, give the token a name and set it as an NPC or a PC, then click the OK button.

![Adding a player character to the map][11]

Once a token is on the map, try moving it to see how its movements are constrained to the grid you've designated. Make sure **Interaction Tools**, located in the toolbar just under the **File** menu, is selected.

![A token moving within the grid][12]

Each token added to a map has its own set of properties, including the direction it's facing, a light source, player ownership, conditions (such as incapacitated, prone, dead, and so on), and even class attributes. You can set as many or as few of these as you want, but at the very least you should right-click on each token and assign it ownership. Your players must be logged into your MapTool server for tokens to be assigned to them, but you can assign yourself NPCs and monsters in advance.

The right-click menu provides access to all important token-related functions, including setting which direction it's facing, setting a health bar and health value, a copy and paste function (enabling you and your players to move tokens from map to map), and much more.

![The token menu unlocks great, arcane power][13]
### Activate fog-of-war effects

If you're using maps exclusively to coordinate combat, you may not need a fog-of-war effect. But if you're using maps to help your players visualize a dungeon they're exploring, you probably don't want them to see the whole map before they've made significant moves, like opening locked doors or braving a decaying bridge over a pit of hot lava.

The fog-of-war effect is an invaluable tool for the GM, and it's essential to set it up early so that your players don't accidentally get a sneak peek at all the horrors your dungeon holds for them.

To activate fog-of-war on a map, go to the **Map** menu and select **Fog-of-War**. This blackens the entire screen for your players, so your next step is to reveal some portion of the map so that your players aren't faced with total darkness when they switch to the map. Fog-of-war is a subtractive process; it starts 100% dark, and as the players progress, you reveal new portions of the map using fog-of-war drawing tools available in the **FOG** toolbar, just under the **View** menu.

You can reveal sections of the map in rectangle blocks, ovals, polygons, diamonds, and freehand shapes. Once you've selected the shape, click and release on the map, drag to define an area to reveal, and then click again.

![Fog-of-war as experienced by a player][14]

If you're accidentally overzealous with what you reveal, you have two ways to reverse what you've done: you can manually draw new fog, or you can reset all fog. The quicker method is to reset all fog with **Ctrl+Shift+A**. The more elegant solution is to press **Shift**, then click and release, drag to define an area of fog, and then click again. Instead of exposing an area of the map, it restores fog.
### Add lighting effects

Fog-of-war mimics the natural phenomenon of not being able to see areas of the world other than where you are, but lighting effects mimic the visibility player characters might experience in light and dark. For games like Pathfinder and Dungeons and Dragons 5e, visibility is governed by light sources matched against light conditions.

First, activate lighting by clicking on the **Map** menu, selecting **Vision**, and then choosing either Daylight or Night. Now lighting effects are active, but none of your players have light sources, so they have no visibility.

To assign light sources to players, right-click on the appropriate token and choose **Light Source**. Definitions exist for the D20 system (candle, lantern, torch, and so on) and in generic measurements.

With lighting effects active, players can expose portions of fog-of-war as their light sources get closer to unexposed fog. That's a great effect, but it doesn't make much sense when players can illuminate the next room right through a solid wall. To prevent that, you have to help MapTool differentiate between empty space and solid objects.

#### Define solid objects

Defining walls and other solid objects through which light should not pass is easier than it sounds. MapTool's **Vision Blocking Layer** (VBL) tools are basic and built to minimize prep time. Several basic shapes are available, including a rectangle and an oval. Draw these shapes over all the solid walls, doors, pillars, and other obstructions, and you have instant rudimentary physics.

![Setting up obstructions][15]

Now your players can move around the map with light sources without seeing what lurks in the shadows of a nearby pillar or behind an innocent-looking door… until it's too late!

![Lighting effects][16]
### Track initiative

Eventually, your players are going to stumble on something that wants to kill them, and that means combat. In most RPG systems, combat is played in rounds, with the order of turns decided by an _initiative_ roll. During combat, each player (in order of their initiative roll, from greatest to lowest) tries to defeat their foe, ideally dealing enough damage to leave their foe with no health points (HP). It's usually the most paperwork a GM has to do during a game because it involves tracking whose turn it is, how much damage each monster has taken, what amount of damage each monster's attack deals, what special abilities each monster has, and more. Luckily, MapTool can help with that—and better yet, you can extend it with a custom macro to do even more.

MapTool's basic initiative panel helps you keep track of whose turn it is and how many rounds have transpired so far. To view the initiative panel, go to the **Window** menu and select **Initiative**.

To add characters to the initiative order, right-click a token and select **Add To Initiative**. As you add each, the token and its label appear in the initiative panel in the order that you add them. If you make a mistake or someone holds their action and changes the initiative order, click and drag the tokens in the initiative panel to reorder them.

During combat, click the **Next** button in the top-left of the initiative panel to progress to the next character. As long as you use the **Next** button, the **Round** counter increments, helping you track how many rounds the combat has lasted (which is helpful when you have spells or effects that last only for a specific number of rounds).

Tracking combat order is helpful, but it's even better to track health points. Your players should be tracking their own health, but since everyone's staring at the same screen, it doesn't hurt to track it publicly in one place. An HP property and a graphical health bar (which you can activate) are assigned to each token, so that's all the infrastructure you need to track HP in MapTool, but doing it manually takes a lot of clicking around. Since MapTool can be extended with macros, it's trivial to bring all these components together for a smooth GM experience.

The first step is to activate graphical health bars for your tokens. To do this, right-click on each token and select **Edit**. In the **Edit Token** dialog box, click on the **State** tab and deselect the radio button next to **Hide**.

![Don't hide the health bar][17]

Do this for each token whose health you want to expose.
#### Write a macro

Macros have access to all token properties, so each token's HP can be tracked by reading and writing whatever value exists in the token's HP property. The graphical health bar, however, bases its state on a percentage, so for the health bars to be meaningful, your tokens also must have some value that represents 100% of their HP.

Go to the **Edit** menu and select **Campaign Properties** to globally add properties to tokens. In the **Campaign Properties** window, select the **Token Properties** tab and then click the **Basic** category in the left column. Under **\*@HP**, add **\*@MaxHP** and click the **Update** button. Click the **OK** button to close the window.

![Adding a property to all tokens][18]

Now right-click a token and select **Edit**. In the **Edit Token** window, select the **State** tab and enter a value for the token's maximum HP (from the player's character sheet).

To create a new macro, reveal the **Campaign** panel in the **Window** menu.

In the **Campaign** panel, right-click and select **Add New Macro**. A button labeled **New** appears in the panel. Right-click on the **New** button and select **Edit**.

Enter this code in the macro editor window:
```
[h:status = input(
"hpAmount|0|Points",
"hpType|Damage,Healing|Damage or heal?|RADIO|SELECT=0")]
[h:abort(status)]

[if(hpType == 0),CODE: {
[h:HP = HP - hpAmount]
[h:bar.Health = HP / MaxHP]
[r:token.name] takes [r:hpAmount] damage.};
{
[h:diff = MaxHP - HP]
[h:HP = min(HP+hpAmount, MaxHP)]
[h:bar.Health = HP / MaxHP]
[r:token.name] gains [r:min(diff,hpAmount)] HP. };]
```
You can find full documentation of the functions available in MapTool macros, and their syntax, on the [RPTools wiki][19]. (Broadly, statements prefixed with **h:** run hidden, without printing output to the chat, while **r:** statements print their result.)

In the **Details** tab, enable **Include Label** and **Apply to Selected Tokens**, and leave all other values at their default. Give your macro a better name than **New**, such as **HPTracker**, then click **Apply** and **OK**.

![Macro editing][20]

Your campaign now has a new ability!

Select a token and click your **HPTracker** button. Enter the number of points to deduct from the token, click **OK**, and watch the health bar change to reflect the token's new state.

It may seem like a simple change, but in the heat of battle, this is a GM's greatest weapon.
### During the game

There's obviously a lot you can do with MapTool, but with a little prep work, most of your work is done well before you start playing. You can even create a template campaign by creating an empty campaign with only the macros and settings you want, so all you have to do is import maps and stat out tokens.

During the game, your workflow is mostly about revealing areas from fog-of-war and managing combat. The players can manage their own tokens, and your prep work takes care of everything else.

MapTool makes digital gaming easy and fun, and most importantly, it keeps it open source and self-contained. Level-up today by learning MapTool and using it for your games.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/how-use-maptools

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice-keys_0.jpg?itok=PGEs3ZXa
[2]: https://opensource.com/article/18/5/maptool
[3]: https://github.com/RPTools/maptool
[4]: https://github.com/RPTools/maptool/releases
[5]: https://klaatu.fedorapeople.org/RPTools/maptool/
[6]: https://immortalnights.com/tokensite/
[7]: https://opensource.com/sites/default/files/uploads/maptool-resources.png (Add Resource to Library dialogue)
[8]: https://opensource.com/sites/default/files/uploads/map-properties.png (Creating a new map)
[9]: https://opensource.com/sites/default/files/uploads/map-select.jpg (Select a map)
[10]: https://opensource.com/sites/default/files/uploads/grid-adjust.jpg (Adjusting the grid)
[11]: https://opensource.com/sites/default/files/uploads/token-new.png (Adding a player character to the map)
[12]: https://opensource.com/sites/default/files/uploads/token-move.jpg (A token moving within the grid)
[13]: https://opensource.com/sites/default/files/uploads/token-menu.jpg (The token menu unlocks great, arcane power)
[14]: https://opensource.com/sites/default/files/uploads/fog-of-war.jpg (Fog-of-war as experienced by a player)
[15]: https://opensource.com/sites/default/files/uploads/vbl.jpg (Setting up obstructions)
[16]: https://opensource.com/sites/default/files/uploads/map-light.jpg (Lighting effects)
[17]: https://opensource.com/sites/default/files/uploads/token-edit.jpg (Don't hide the health bar)
[18]: https://opensource.com/sites/default/files/uploads/campaign-properties.jpg (Adding a property to all tokens)
[19]: https://lmwcs.com/rptools/wiki/Main_Page
[20]: https://opensource.com/sites/default/files/uploads/macro-detail.jpg (Macro editing)
@ -0,0 +1,327 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (11 Free and Open Source Video Editing Software)
[#]: via: (https://itsfoss.com/open-source-video-editors/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
11 Free and Open Source Video Editing Software
======

We’ve already covered the [top video editors for Linux][1]. That list included some non-open source software as well, which prompted us to write this article featuring only open source video editors. We’ve also mentioned the platforms each editor supports, so this list is helpful even if you are not using Linux.

### Top Free and Open Source Video Editors

![Best Open Source Video Editors][2]

Just for your information, this is not a ranking: the editors are listed in no specific order. I have not covered installation procedures, but you can find that information on each project’s website.
#### 1\. Kdenlive

![][3]

**Key Features:**

  * Multi-track video editing
  * All kinds of audio/video formats supported with the help of the FFmpeg libraries
  * 2D title maker
  * Customizable interface and shortcuts
  * Proxy editing to make things faster
  * Automatic backup
  * Timeline preview
  * Keyframeable effects
  * Audiometer, histogram, waveform, etc.

**Platforms available on:** Linux, macOS and Windows.

Kdenlive is a free and open source video editor available for Windows, macOS, and Linux distros.

If you are on a Mac, you will have to manually compile and install it. However, if you are on Windows, you can download the EXE file and should have no issues installing it.

[Kdenlive][4]
#### 2\. LiVES

![][5]

**Key Features:**

  * Frame and sample accurate editing
  * Edit video in real-time
  * Can be controlled using MIDI, keyboard, or joystick
  * Multi-track support
  * VJ keyboard control during playback
  * Plugins supported
  * Compatible with various effects frameworks: projectM, LADSPA audio, and so on

**Platforms available on:** Linux and macOS. Support for Windows will be added soon.

LiVES is an interesting open source video editor. You can find the code on [GitHub][6]. It is currently available for **Linux and macOS Leopard**. It will soon be available for Windows (hopefully by the end of 2019).

[LiVES][7]
#### 3\. OpenShot

![][8]

**Key Features:**

  * Almost all video/audio formats supported
  * Keyframe animation framework
  * Multi-track support
  * Desktop integration (drag and drop support)
  * Video transitions with real-time previews
  * 3D animated titles and effects
  * Advanced timeline with drag/drop support, panning, scrolling, zooming, and snapping

**Platforms available on:** Linux, macOS and Windows.

OpenShot is quite a popular video editor, and it is open source as well. Unlike the others, OpenShot offers a DMG installer for macOS, so you don’t have to compile and install it manually.

If you are a fan of open source solutions and you own a Mac, OpenShot seems like a very good option.

[OpenShot][9]
#### 4\. VidCutter

![][10]

**Key Features:**

  * Keyframes viewer
  * Cut, split, and add different clips
  * Major audio/video formats supported

**Platforms available on:** Linux, macOS and Windows.

VidCutter is an open source video editor for basic tasks. It does not offer a plethora of features – but it works for all the common tasks like clipping or cutting. It’s under active development as well.

For Linux, it is available on Flathub as well. And, for Windows and macOS, you get EXE and DMG packages in the latest releases.

[VidCutter][12]
#### 5\. Shotcut

![][13]

**Key Features:**

  * Supports almost all major audio/video formats with the help of the FFmpeg libraries
  * Multiple dockable/undockable panels
  * Intuitive UI
  * JACK transport sync
  * Stereo, mono, and 5.1 surround support
  * Waveform, histogram, etc.
  * Easy to use with dual monitors
  * Portable version available

**Platforms available on:** Linux, macOS and Windows.

Shotcut is yet another popular open source video editor available across multiple platforms. It features a nice interface to work on.

When it comes to features, it offers almost everything that you would ever need (from color correction to adding transitions). It also provides a portable version for Windows – which is impressive.

[Shotcut][14]
#### 6\. Flowblade

![][15]

**Key Features:**

  * Advanced timeline control
  * Multi-track editing
  * [G’mic][16] tool
  * All major audio/video formats supported with the help of the FFmpeg libraries

**Platforms available on:** Linux

Flowblade is an intuitive open source video editor available only for Linux. Yes, it is a bummer that we do not have cross-platform support for this.

However, if you are using a Linux distro, you can either download the .deb file and get it installed or use the source code on GitHub.

[Flowblade][17]
#### 7\. Avidemux

![][18]

**Key Features:**

  * Trim
  * Cut
  * Filter support
  * Major video formats supported

**Platforms available on:** Linux, BSD, macOS and Windows.

If you are looking for a basic cross-platform open source video editor – this is one of our recommendations. You just get the ability to cut, save, add a filter, and perform some other basic editing tasks. Its official [SourceForge page][19] might look like it has been abandoned, but it is in active development.

[Avidemux][19]
#### 8\. Pitivi

![][20]

**Key Features:**

  * All major video formats supported using the [GStreamer Multimedia Framework][21]
  * Advanced timeline independent of frame rate
  * Animated effects and transitions
  * Audio waveforms
  * Real-time trimming previews

**Platforms available on:** Linux

Pitivi is yet another open source video editor that is available only for Linux. The UI is user-friendly, and the features offered will help you perform some advanced edits as well.

You can install it using Flatpak or look for the source code on its official website. It should be available in the repositories of most distributions as well.

[Pitivi][22]
#### 9\. Blender

![][23]

**Key Features:**

  * VFX
  * Modeling tools
  * Animation and rigging tools
  * Draw in 2D or 3D

**Platforms available on:** Linux, macOS and Windows.

Blender is an advanced 3D creation suite, and it is surprising that you get all those powerful abilities for free (and open source).

Of course, Blender is not a solution for every user – however, it is definitely one of the best open source tools available for Windows, macOS, and Linux. You can also find it on [Steam][25] to install it.

[Blender][26]
#### 10\. Cinelerra

![][27]

**Key Features:**

  * Advanced timeline
  * Motion tracking support
  * Video stabilization
  * Audio mastering
  * Color correction

**Platforms available on:** Linux

Cinelerra is a quite popular open source video editor. However, it has several branches (in other words – different versions). I am not sure if that is a good thing – but you get different features (and abilities) on each of them.

Cinelerra GG, CV, CVE, and HV are variants catering to users with different preferences. Personally, I would recommend checking out [Cinelerra GG][28].

[Cinelerra][29]
#### 11\. NATRON

![][30]

**Key Features:**

  * VFX
  * Powerful tracker
  * Keying tools for production needs
  * Shadertoy and G’mic tools
  * OpenFX plugin support

**Platforms available on:** Linux, macOS and Windows.

If you are into VFX and motion graphics, NATRON is a good alternative to Blender. Of course, in order to compare them for your usage, you will have to give it a try yourself.

An installer is available for Windows and a DMG package for Mac, so it is quite easy to get it installed. You can always head to its [GitHub page][31] for more information.

[Natron][32]
**Wrapping Up**

So, now that you know about some of the most popular open source video editors available out there – what do you think about them?

Are they good enough for professional requirements? Or did we miss any of your favorite open source video editors that deserved a mention?

I am not an expert video editor, and my experience is limited to simple editing tasks for YouTube videos. If you are an experienced, professional video editor, I would like to hear your opinion on how well these open source video editors work for experts.

Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------

via: https://itsfoss.com/open-source-video-editors/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-video-editing-software-linux/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/best-open-source-video-editors-800x450.png?resize=800%2C450&ssl=1
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg?ssl=1
[4]: https://kdenlive.org/en/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/lives-video-editor.jpg?fit=800%2C600&ssl=1
[6]: https://github.com/salsaman/LiVES
[7]: http://lives-video.com/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg?ssl=1
[9]: https://www.openshot.org/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/vidcutter.jpg?fit=800%2C585&ssl=1
[11]: https://itsfoss.com/install-windows-desktop-widgets-linux/
[12]: https://github.com/ozmartian/vidcutter
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/06/shotcut-video-editor-linux.jpg?resize=800%2C503&ssl=1
[14]: https://shotcut.org/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg?ssl=1
[16]: https://gmic.eu/
[17]: https://jliljebl.github.io/flowblade/index.html
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/avidemux.jpg?resize=800%2C697&ssl=1
[19]: http://avidemux.sourceforge.net/
[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/pitvi.jpg?resize=800%2C464&ssl=1
[21]: https://en.wikipedia.org/wiki/GStreamer
[22]: http://www.pitivi.org/
[23]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg?ssl=1
[24]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[25]: https://store.steampowered.com/app/365670/Blender/
[26]: https://www.blender.org/
[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/06/cinelerra-screenshot.jpeg?ssl=1
[28]: https://www.cinelerra-gg.org/
[29]: http://cinelerra.org/
[30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/natron.jpg?fit=800%2C481&ssl=1
[31]: https://github.com/NatronGitHub/Natron
[32]: https://natrongithub.github.io/
@ -0,0 +1,342 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with OpenSSL: Cryptography basics)
[#]: via: (https://opensource.com/article/19/6/cryptography-basics-openssl-part-1)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen)
Getting started with OpenSSL: Cryptography basics
======
Need a primer on cryptography basics, especially regarding OpenSSL? Read on.

![A lock on the side of a building][1]
This article is the first of two on cryptography basics using [OpenSSL][2], a production-grade library and toolkit popular on Linux and other systems. (To install the most recent version of OpenSSL, see [here][3].) OpenSSL utilities are available at the command line, and programs can call functions from the OpenSSL libraries. The sample program for this article is in C, the source language for the OpenSSL libraries.

The two articles in this series cover—collectively—cryptographic hashes, digital signatures, encryption and decryption, and digital certificates. You can find the code and command-line examples in a ZIP file from [my website][4].

Let’s start with a review of the SSL in the OpenSSL name.
### A quick history

[Secure Socket Layer (SSL)][5] is a cryptographic protocol that [Netscape][6] released in 1995. This protocol layer can sit atop HTTP, thereby providing the _S_ for _secure_ in HTTPS. The SSL protocol provides various security services, including two that are central in HTTPS:

  * Peer authentication (aka mutual challenge): Each side of a connection authenticates the identity of the other side. If Alice and Bob are to exchange messages over SSL, then each first authenticates the identity of the other.
  * Confidentiality: A sender encrypts messages before sending these over a channel. The receiver then decrypts each received message. This process safeguards network conversations. Even if eavesdropper Eve intercepts an encrypted message from Alice to Bob (a _man-in-the-middle_ attack), Eve finds it computationally infeasible to decrypt this message.

These two key SSL services, in turn, are tied to others that get less attention. For example, SSL supports message integrity, which assures that a received message is the same as the one sent. This feature is implemented with hash functions, which likewise come with the OpenSSL toolkit.

SSL is versioned (e.g., SSLv2 and SSLv3), and in 1999 Transport Layer Security (TLS) emerged as a similar protocol based upon SSLv3. TLSv1 and SSLv3 are alike, but not enough so to work together. Nonetheless, it is common to refer to SSL/TLS as if they are one and the same protocol. For example, OpenSSL functions often have SSL in the name even when TLS rather than SSL is in play. Furthermore, calling OpenSSL command-line utilities begins with the term **openssl**.

The documentation for OpenSSL is spotty beyond the **man** pages, which become unwieldy given how big the OpenSSL toolkit is. Command-line and code examples are one way to bring the main topics into focus together. Let’s start with a familiar example—accessing a web site with HTTPS—and use this example to pick apart the cryptographic pieces of interest.
### An HTTPS client

The **client** program shown here connects over HTTPS to Google:

```
/* compilation: gcc -o client client.c -lssl -lcrypto */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>      /* memset */

#include <openssl/bio.h> /* BasicInput/Output streams */
#include <openssl/err.h> /* errors */
#include <openssl/ssl.h> /* core library */

#define BuffSize 1024

void report_and_exit(const char* msg) {
  perror(msg);
  ERR_print_errors_fp(stderr);
  exit(-1);
}

void init_ssl() {
  SSL_load_error_strings();
  SSL_library_init();
}

void cleanup(SSL_CTX* ctx, BIO* bio) {
  SSL_CTX_free(ctx);
  BIO_free_all(bio);
}

void secure_connect(const char* hostname) {
  char name[BuffSize];
  char request[BuffSize];
  char response[BuffSize];

  const SSL_METHOD* method = TLSv1_2_client_method();
  if (NULL == method) report_and_exit("TLSv1_2_client_method...");

  SSL_CTX* ctx = SSL_CTX_new(method);
  if (NULL == ctx) report_and_exit("SSL_CTX_new...");

  BIO* bio = BIO_new_ssl_connect(ctx);
  if (NULL == bio) report_and_exit("BIO_new_ssl_connect...");

  SSL* ssl = NULL;

  /* link bio channel, SSL session, and server endpoint */
  sprintf(name, "%s:%s", hostname, "https");
  BIO_get_ssl(bio, &ssl);                 /* session */
  SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* robustness */
  BIO_set_conn_hostname(bio, name);       /* prepare to connect */

  /* try to connect */
  if (BIO_do_connect(bio) <= 0) {
    cleanup(ctx, bio);
    report_and_exit("BIO_do_connect...");
  }

  /* verify truststore, check cert */
  if (!SSL_CTX_load_verify_locations(ctx,
                                     "/etc/ssl/certs/ca-certificates.crt", /* truststore */
                                     "/etc/ssl/certs/"))                   /* more truststore */
    report_and_exit("SSL_CTX_load_verify_locations...");

  long verify_flag = SSL_get_verify_result(ssl);
  if (verify_flag != X509_V_OK)
    fprintf(stderr,
            "##### Certificate verification error (%i) but continuing...\n",
            (int) verify_flag);

  /* now fetch the homepage as sample data */
  sprintf(request,
          "GET / HTTP/1.1\x0D\x0AHost: %s\x0D\x0A\x43onnection: Close\x0D\x0A\x0D\x0A",
          hostname);
  BIO_puts(bio, request);

  /* read HTTP response from server and print to stdout */
  while (1) {
    memset(response, '\0', sizeof(response));
    int n = BIO_read(bio, response, BuffSize);
    if (n <= 0) break; /* 0 is end-of-stream, < 0 is an error */
    puts(response);
  }

  cleanup(ctx, bio);
}

int main() {
  init_ssl();

  const char* hostname = "www.google.com:443";
  fprintf(stderr, "Trying an HTTPS connection to %s...\n", hostname);
  secure_connect(hostname);

  return 0;
}
```
This program can be compiled and executed from the command line (note the lowercase L in **-lssl** and **-lcrypto**):

**gcc -o client client.c -lssl -lcrypto**
This program tries to open a secure connection to the web site [www.google.com][13]. As part of the TLS handshake with the Google web server, the **client** program receives one or more digital certificates, which the program tries (but, on my system, fails) to verify. Nonetheless, the **client** program goes on to fetch the Google homepage through the secure channel. This program depends on the security artifacts mentioned earlier, although only a digital certificate stands out in the code. The other artifacts remain behind the scenes and are clarified later in detail.

Generally, a client program in C or C++ that opened an HTTP (non-secure) channel would use constructs such as a _file descriptor_ for a _network socket_, which is an endpoint in a connection between two processes (e.g., the client program and the Google web server). A file descriptor, in turn, is a non-negative integer value that identifies, within a program, any file-like construct that the program opens. Such a program also would use a structure to specify details about the web server’s address.

None of these relatively low-level constructs occurs in the client program, as the OpenSSL library wraps the socket infrastructure and address specification in high-level security constructs. The result is a straightforward API. Here’s a first look at the security details in the example **client** program.
  * The program begins by loading the relevant OpenSSL libraries, with my function **init_ssl** making two calls into OpenSSL:

    **SSL_library_init(); SSL_load_error_strings();**

  * The next initialization step tries to get a security _context_, a framework of information required to establish and maintain a secure channel to the web server. **TLS 1.2** is used in the example, as shown in this call to an OpenSSL library function:

    **const SSL_METHOD* method = TLSv1_2_client_method(); /* TLS 1.2 */**

    If the call succeeds, then the **method** pointer is passed to the library function that creates the context of type **SSL_CTX**:

    **SSL_CTX* ctx = SSL_CTX_new(method);**

    The **client** program checks for errors on each of these critical library calls, and then the program terminates if either call fails.

  * Two other OpenSSL artifacts now come into play: a security session of type **SSL**, which manages the secure connection from start to finish; and a secured stream of type **BIO** (Basic Input/Output), which is used to communicate with the web server. The **BIO** stream is generated with this call:

    **BIO* bio = BIO_new_ssl_connect(ctx);**

    Note that the all-important context is the argument. The **BIO** type is the OpenSSL wrapper for the **FILE** type in C. This wrapper secures the input and output streams between the **client** program and Google's web server.

  * With the **SSL_CTX** and **BIO** in hand, the program then links these together in an **SSL** session. Three library calls do the work:

    **BIO_get_ssl(bio, &ssl); /* get a TLS session */**
    **SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* for robustness */**
    **BIO_set_conn_hostname(bio, name); /* prepare to connect to Google */**

    The secure connection itself is established through this call:

    **BIO_do_connect(bio);**

    If this last call does not succeed, the **client** program terminates; otherwise, the connection is ready to support a confidential conversation between the **client** program and the Google web server.
During the handshake with the web server, the **client** program receives one or more digital certificates that authenticate the server’s identity. However, the **client** program does not send a certificate of its own, which means that the authentication is one-way. (Web servers typically are configured _not_ to expect a client certificate.) Despite the failed verification of the web server’s certificate, the **client** program continues by fetching the Google homepage through the secure channel to the web server.

Why does the attempt to verify a Google certificate fail? A typical OpenSSL installation has the directory **/etc/ssl/certs**, which includes the **ca-certificates.crt** file. The directory and the file together contain digital certificates that OpenSSL trusts out of the box and accordingly constitute a _truststore_. The truststore can be updated as needed, in particular, to include newly trusted certificates and to remove ones no longer trusted.

The client program receives three certificates from the Google web server, but the OpenSSL truststore on my machine does not contain exact matches. As presently written, the **client** program does not pursue the matter by, for example, verifying the digital signature on a Google certificate (a signature that vouches for the certificate). If that signature were trusted, then the certificate containing it should be trusted as well. Nonetheless, the client program goes on to fetch and then to print Google’s homepage. The next section gets into more detail.
### The hidden security pieces in the client program

Let’s start with the visible security artifact in the client example—the digital certificate—and consider how other security artifacts relate to it. The dominant layout standard for a digital certificate is X509, and a production-grade certificate is issued by a certificate authority (CA) such as [Verisign][14].

A digital certificate contains various pieces of information (e.g., activation and expiration dates, and a domain name for the owner), including the issuer’s identity and _digital signature_, which is an encrypted _cryptographic hash_ value. A certificate also has an unencrypted hash value that serves as its identifying _fingerprint_.
A hash value results from mapping an arbitrary number of bits to a fixed-length digest. What the bits represent (an accounting report, a novel, or maybe a digital movie) is irrelevant. For example, the Message Digest version 5 (MD5) hash algorithm maps input bits of whatever length to a 128-bit hash value, whereas the SHA1 (Secure Hash Algorithm version 1) algorithm maps input bits to a 160-bit value. Different input bits result in different—indeed, statistically unique—hash values. The next article goes into further detail and focuses on what makes a hash function _cryptographic_.
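To see these fixed-length digests firsthand, you can use the **openssl dgst** utility. Here is a minimal command-line sketch; the file name report.txt is illustrative:

```
# Hash the same input with two different algorithms;
# each produces a fixed-length digest regardless of the input's size
openssl dgst -md5 report.txt   # 128-bit digest
openssl dgst -sha1 report.txt  # 160-bit digest

# Changing even one byte of the input yields a radically different digest
echo "." >> report.txt
openssl dgst -sha1 report.txt
```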
Digital certificates differ in type (e.g., _root_, _intermediate_, and _end-entity_ certificates) and form a hierarchy that reflects these types. As the name suggests, a _root_ certificate sits atop the hierarchy, and the certificates under it inherit whatever trust the root certificate has. The OpenSSL libraries and most modern programming languages have an X509 type together with functions that deal with such certificates. The certificate from Google has an X509 format, and the **client** program checks whether this certificate is **X509_V_OK**.

X509 certificates are based upon public-key infrastructure (PKI), which includes algorithms—RSA is the dominant one—for generating _key pairs_: a public key and its paired private key. A public key is an identity: [Amazon’s][15] public key identifies it, and my public key identifies me. A private key is meant to be kept secret by its owner.
The keys in a pair have standard uses. A public key can be used to encrypt a message, and the private key from the same pair can then be used to decrypt the message. A private key also can be used to sign a document or other electronic artifact (e.g., a program or an email), and the public key from the pair can then be used to verify the signature. The following two examples fill in some details.

In the first example, Alice distributes her public key to the world, including Bob. Bob then encrypts a message with Alice’s public key, sending the encrypted message to Alice. The message encrypted with Alice’s public key is decrypted with her private key, which (by assumption) she alone has, like so:

```
             +------------------+ encrypted msg  +-------------------+
Bob's msg--->|Alice's public key|--------------->|Alice's private key|---> Bob's msg
             +------------------+                +-------------------+
```
Decrypting the message without Alice’s private key is possible in principle, but infeasible in practice given a sound cryptographic key-pair system such as RSA.
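This first example can be sketched at the command line with the **openssl** utilities. The file names are illustrative, and **rsautl** is the RSA utility in the OpenSSL 1.x line:

```
# Alice generates a 2048-bit RSA key pair and extracts the public key
openssl genrsa -out alice_private.pem 2048
openssl rsa -in alice_private.pem -pubout -out alice_public.pem

# Bob encrypts a short message with Alice's public key
echo "Meet at noon" > msg.txt
openssl rsautl -encrypt -pubin -inkey alice_public.pem -in msg.txt -out msg.enc

# Only Alice's private key recovers the plaintext
openssl rsautl -decrypt -inkey alice_private.pem -in msg.enc
```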
Now, for the second example, consider signing a document to certify its authenticity. The signature algorithm uses a private key from a pair to process a cryptographic hash of the document to be signed:

```
                    +-------------------+
Hash of document--->|Alice's private key|--->Alice's digital signature of the document
                    +-------------------+
```

Assume that Alice digitally signs a contract sent to Bob. Bob then can use Alice’s public key from the key pair to verify the signature:

```
                                             +------------------+
Alice's digital signature of the document--->|Alice's public key|--->verified or not
                                             +------------------+
```
It is infeasible to forge Alice’s signature without Alice’s private key: hence, it is in Alice’s interest to keep her private key secret.
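Signing and verification also can be sketched with **openssl dgst**, reusing the illustrative key pair generated above:

```
# Alice signs a SHA256 hash of the contract with her private key
openssl dgst -sha256 -sign alice_private.pem -out contract.sig contract.txt

# Bob verifies the signature with Alice's public key;
# "Verified OK" is printed on success
openssl dgst -sha256 -verify alice_public.pem -signature contract.sig contract.txt
```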
None of these security pieces, except for digital certificates, is explicit in the **client** program. The next article fills in the details with examples that use the OpenSSL utilities and library functions.

### OpenSSL from the command line

In the meantime, let’s take a look at OpenSSL command-line utilities: in particular, a utility to inspect the certificates from a web server during the TLS handshake. Invoking the OpenSSL utilities begins with the **openssl** command and then adds a combination of arguments and flags to specify the desired operation.

Consider this command:

**openssl list-cipher-algorithms**

The output is a list of associated algorithms that make up a _cipher suite_. Here’s the start of the list, with comments to clarify the acronyms:

```
AES-128-CBC             ## Advanced Encryption Standard, Cipher Block Chaining
AES-128-CBC-HMAC-SHA1   ## Hash-based Message Authentication Code with SHA1 hashes
AES-128-CBC-HMAC-SHA256 ## ditto, but SHA256 rather than SHA1
...
```
The next command, using the argument **s_client**, opens a secure connection to **[www.google.com][13]** and prints screens full of information about this connection:

**openssl s_client -connect [www.google.com:443][16] -showcerts**

The port number 443 is the standard one used by web servers for receiving HTTPS rather than HTTP connections. (For HTTP, the standard port is 80.) The network address **[www.google.com:443][16]** also occurs in the **client** program's code. If the attempted connection succeeds, the three digital certificates from Google are displayed together with information about the secure session, the cipher suite in play, and related items. For example, here is a slice of output from near the start, which announces that a _certificate chain_ is forthcoming. The encoding for the certificates is base64:

```
Certificate chain
 0 s:/C=US/ST=California/L=Mountain View/O=Google LLC/CN=www.google.com
   i:/C=US/O=Google Trust Services/CN=Google Internet Authority G3
-----BEGIN CERTIFICATE-----
MIIEijCCA3KgAwIBAgIQdCea9tmy/T6rK/dDD1isujANBgkqhkiG9w0BAQsFADBU
MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMSUw
...
```

A major web site such as Google usually sends multiple certificates for authentication.

The output ends with summary information about the TLS session, including specifics on the cipher suite:
```
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: A2BBF0E4991E6BBBC318774EEE37CFCB23095CC7640FFC752448D07C7F438573
...
```
The protocol **TLS 1.2** is used in the **client** program, and the **Session-ID** uniquely identifies the connection between the **openssl** utility and the Google web server. The **Cipher** entry can be parsed as follows (a command-line check follows the list):

  * **ECDHE** (Elliptic Curve Diffie Hellman Ephemeral) is an effective and efficient algorithm for managing the TLS handshake. In particular, ECDHE solves the _key-distribution problem_ by ensuring that both parties in a connection (e.g., the client program and the Google web server) use the same encryption/decryption key, which is known as the _session key_. The follow-up article digs into the details.

  * **RSA** (Rivest Shamir Adleman) is the dominant public-key cryptosystem, named after the three academics who first described the system in the late 1970s. The key pairs in play are generated with the RSA algorithm.

  * **AES128** (Advanced Encryption Standard) is a _block cipher_ that encrypts and decrypts blocks of bits. (The alternative is a _stream cipher_, which encrypts and decrypts bits one at a time.) The cipher is _symmetric_ in that the same key is used to encrypt and to decrypt, which raises the key-distribution problem in the first place. AES supports key sizes of 128 (used here), 192, and 256 bits: the larger the key, the better the protection.

    Key sizes for symmetric cryptosystems such as AES are, in general, smaller than those for asymmetric (key-pair based) systems such as RSA. For example, a 1024-bit RSA key is relatively small, whereas a 256-bit key is currently the largest for AES.

  * **GCM** (Galois Counter Mode) handles the repeated application of a cipher (in this case, AES128) during a secured conversation. AES128 blocks are only 128 bits in size, and a secure conversation is likely to consist of multiple AES128 blocks from one side to the other. GCM is efficient and commonly paired with AES128.

  * **SHA256** (Secure Hash Algorithm 256 bits) is the cryptographic hash algorithm in play. The hash values produced are 256 bits in size, although even larger values are possible with SHA.
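As a quick check of this breakdown, the **openssl ciphers** utility can decompose a suite name into roughly these components. The output line in the comment below is abridged from a typical run:

```
openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256'
# ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD
```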
Cipher suites are in continual development. Not so long ago, for example, Google used the RC4 stream cipher (Ron’s Cipher version 4, after Ron Rivest of RSA). RC4 now has known vulnerabilities, which presumably accounts, at least in part, for Google’s switch to AES128.
### Wrapping up

This first look at OpenSSL, through a secure C web client and various command-line examples, has brought to the fore a handful of topics in need of more clarification. [The next article gets into the details][17], starting with cryptographic hashes and ending with a fuller discussion of how digital certificates address the key-distribution challenge.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/cryptography-basics-openssl-part-1

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://www.openssl.org/
[3]: https://www.howtoforge.com/tutorial/how-to-install-openssl-from-source-on-linux/
[4]: http://condor.depaul.edu/mkalin
[5]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[6]: https://en.wikipedia.org/wiki/Netscape
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/sprintf.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/memset.html
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
[13]: http://www.google.com
[14]: https://www.verisign.com
[15]: https://www.amazon.com
[16]: http://www.google.com:443
[17]: https://opensource.com/article/19/6/cryptography-basics-openssl-part-2
68
sources/tech/20190619 Leading in the Python community.md
Normal file
@ -0,0 +1,68 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Leading in the Python community)
[#]: via: (https://opensource.com/article/19/6/naomi-ceder-python-software-foundation)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

Leading in the Python community
======

A chat with Naomi Ceder, current Python Software Foundation board chair.

![Hands together around the word trust][1]

Like many other leaders in the open source software world, [Naomi Ceder][2], board chair of the [Python Software Foundation][3] (PSF), took a non-traditional path into the Python world. As the title of her 2017 [keynote][4] at PyCon España explains, she came for the language and stayed for the community. In a recent conversation with her, she shared how she became a Python community leader and offered some insight into what makes Python special.

### From teaching to coding

Naomi began her career in the Classics; she earned a PhD in Latin and Ancient Greek with a minor in Indo-European Linguistics, as she says, "several decades ago." While teaching Latin at a private school, she began tinkering with computers, learning to code and to take machines apart to do upgrades and repairs. She started working with open source software in 1995 with [Yggdrasil Linux][5] and helped launch the Fort Wayne, Indiana, [Linux User Group][6].

A teacher at heart, Naomi believes teaching coding in middle and high school is essential because, by the time most people get to college, they are already convinced that coding and technology careers are not for them. Starting earlier can help increase the supply of technical talent and the diversity and breadth of experience in our talent pools to meet the industry's needs, she says.

Somewhere around 2001, she decided to switch from studying human languages to researching computer languages, as well as teaching computer classes and managing the school's IT. Her interest in Python was sparked at Linux World 2001 when she attended PSF president Guido van Rossum's day-long tutorial on Python. Back then, it was an obscure language, but she liked it so well that she began teaching Python and using it to track student records and do sysadmin duties at her school.

### Leading the Python community

Naomi says, "community is the key factor behind Python's success. The whole idea behind open source software is sharing. Few people really want to just sit alone, writing code, and staring at their screens. The real satisfaction comes in trading ideas and building something with others."

She started giving talks at the first [PyCon][7] in 2003 and has been a consistent attendee and leader since then. She has organized birds-of-a-feather sessions and founded the PyCon and PyCon UK poster sessions, the education summit, and the Spanish language track, [Charlas][8].

She is also the author of _[The Quick Python Book][9]_ and co-founded [Trans*Code][10], "the UK's only hack event series focused solely on drawing attention to transgender issues and opportunities." Naomi says, "as technology offers growing opportunities, being sure these opportunities are equally accessible to traditionally marginalized groups grows ever more important."

### Contributing through the PSF

As board chair of the PSF, Naomi contributes actively to the organization's work to support the Python language and the people working with it. In addition to sponsoring PyCon, the PSF funds grants for meetups, conferences, and workshops around the world. In 2018, the organization gave almost $335,000 in grants, most of them in the $500 to $5,000 range.

The PSF's short-term goals are to become a sustainable, stable, and mature non-profit organization with professional staff. Its long-term goals include developing resources that offer meaningful support to development efforts for Python and expanding the organization's support for educational efforts in Python around the world.

This work depends on having financial support from the community. Naomi says the PSF's "largest current source of funding is PyCon. To ensure the PSF's sustainability, we are also focusing on [sponsorships][11] from companies using Python, which is our fastest-growing segment." Supporting memberships are $99 per year, and [donations and fundraisers][12] also help sustain the organization's work.

You can learn much more about the PSF's work in its [Annual Report][13].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/naomi-ceder-python-software-foundation

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_HighTrust_1110_A.png?itok=EF5Tmcdk (Hands together around the word trust)
[2]: https://www.naomiceder.tech/pages/about/
[3]: https://www.python.org/psf/
[4]: https://www.youtube.com/watch?v=ayQK6app_wA
[5]: https://en.wikipedia.org/wiki/Yggdrasil_Linux/GNU/X
[6]: http://fortwaynelinux.org/about
[7]: http://pycon.org/
[8]: https://twitter.com/pyconcharlas?lang=en
[9]: https://www.manning.com/books/the-quick-python-book-third-edition
[10]: https://www.trans.tech/
[11]: https://www.python.org/psf/sponsorship/
[12]: https://www.python.org/psf/donations/
[13]: https://www.python.org/psf/annual-report/2019/

185
sources/tech/20190620 How to SSH into a running container.md
Normal file
@ -0,0 +1,185 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to SSH into a running container)
[#]: via: (https://opensource.com/article/19/6/how-ssh-running-container)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/bcotton)

How to SSH into a running container
======

SSH is probably not the best way to run commands in a container; try this instead.

![cubes coming together to create a larger cube][1]

Containers have shifted the way we think about virtualization. You may remember the days (or you may still be living them) when a virtual machine was the full stack, from virtualized BIOS, operating system, and kernel up to each virtualized network interface controller (NIC). You logged into the virtual box just as you would your own workstation. It was a very direct and simple analogy.

And then containers came along, [starting with LXC][2] and culminating in the Open Container Initiative ([OCI][3]), and that's when things got complicated.

### Idempotency

In the world of containers, the "virtual machine" is only mostly virtual. Everything that doesn't need to be virtualized is borrowed from the host machine. Furthermore, the container itself is usually meant to be ephemeral and idempotent, so it stores no persistent data, and its state is defined by configuration files on the host machine.

If you're used to the old ways of virtual machines, then you naturally expect to log into a virtual machine in order to interact with it. But containers are ephemeral, so anything you do in a container is forgotten, by design, should the container need to be restarted or respawned.

The commands controlling your container infrastructure (such as **oc**, **crictl**, **lxc**, and **docker**) provide an interface to run important commands to restart services, view logs, confirm the existence and permissions modes of an important file, and so on. You should use the tools provided by your container infrastructure to interact with your application, or else edit configuration files and relaunch. That's what containers are designed to do.

For instance, the open source forum software [Discourse][4] is officially distributed as a container image. The Discourse software is _stateless_, so its installation is self-contained within **/var/discourse**. As long as you have a backup of **/var/discourse**, you can always restore the forum by relaunching the container. The container holds no persistent data, and its configuration file is **/var/discourse/containers/app.yml**.

Were you to log into the container and edit any of the files it contains, all changes would be lost if the container had to be restarted.
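
To see this ephemerality in action, here is a minimal sketch using the **docker** CLI; it assumes the **centos** image used later in this article, but any image with a shell would do, and the exact error wording may differ:

```
$ docker run --name demo -d centos sleep 1000   # start a throwaway container
$ docker exec demo touch /tmp/scratch           # create a file inside it
$ docker rm -f demo                             # destroy the container
$ docker run --name demo -d centos sleep 1000   # respawn from the same image
$ docker exec demo ls /tmp/scratch              # the file is gone
ls: cannot access '/tmp/scratch': No such file or directory
$ docker rm -f demo                             # clean up
```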

LXC containers you're building from scratch are more flexible, with configuration files (in a location defined by you) passed to the container when you launch it.

A build system like [Jenkins][5] usually has a default configuration file, such as **jenkins.yaml**, providing instructions for a base container image that exists only to build and run tests on source code. After the builds are done, the container goes away.

Now that you know you don't need SSH to interact with your containers, here's an overview of what tools are available (and some notes about using SSH in spite of all the fancy tools that make it redundant).

### OpenShift web console

[OpenShift 4][6] offers an open source toolchain for container creation and maintenance, including an interactive web console.

When you log into your web console, navigate to your project overview and click the **Applications** tab for a list of pods. Select a (running) pod to open the application's **Details** panel.

![Pod details in OpenShift][7]

Click the **Terminal** tab at the top of the **Details** panel to open an interactive shell in your container.

![A terminal in a running container][8]

If you prefer a browser-based experience for Kubernetes management, you can learn more through interactive lessons available at [learn.openshift.com][9].

### OpenShift oc

If you prefer a command-line interface experience, you can use the **oc** command to interact with containers from the terminal.

First, get a list of running pods (or refer to the web console for a list of active pods). To get that list, enter:

```
$ oc get pods
```

You can view the logs of a resource (a pod, build, or container). By default, **oc logs** returns the logs from the first container in the pod you specify. To select a single container, add the **\--container** option:

```
$ oc logs --follow=true example-1-e1337 --container app
```

You can also view logs from all containers in a pod with:

```
$ oc logs --follow=true example-1-e1337 --all-containers
```

#### Execute commands

You can execute commands remotely with:

```
$ oc exec example-1-e1337 --container app hostname
example.local
```

This is similar to running SSH non-interactively: you get to run the command you want to run without an interactive shell taking over your environment.

#### Remote shell

You can attach to a running container. This still does _not_ open a shell in the container, but it does run commands directly. For example:

```
$ oc attach example-1-e1337 --container app
```

If you need a true interactive shell in a container, you can open a remote shell with the **oc rsh** command as long as the container includes a shell. By default, **oc rsh** launches **/bin/sh**:

```
$ oc rsh example-1-e1337 --container app
```

### Kubernetes

If you're using Kubernetes directly, you can use the **kubectl exec** command to run a Bash shell in your pod.

First, confirm that your pod is running:

```
$ kubectl get pods
```

As long as the pod containing your application is listed, you can use the **exec** command to launch a shell in the container. Using the name **example-pod** as the pod name, enter:

```
$ kubectl exec --stdin --tty example-pod -- /bin/bash
root@example.local:/# ls
bin core etc lib root srv
boot dev home lib64 sbin tmp var
```

### Docker

The **docker** command is similar to **kubectl**. With the **dockerd** daemon running, get the name of the running container (you may have to use **sudo** to escalate privileges if you're not in the appropriate group):

```
$ docker ps
CONTAINER ID    IMAGE     COMMAND        NAME
678ac5cca78e    centos    "/bin/bash"    example-centos
```

Using the container name, you can run a command in the container:

```
$ docker exec example-centos cat /etc/os-release
CentOS Linux release 7.6
NAME="CentOS Linux"
VERSION="7"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
[...]
```

Or you can launch a Bash shell for an interactive session:

```
$ docker exec -it example-centos /bin/bash
```

### Containers and appliances

The important thing to remember when dealing with the cloud is that containers are essentially runtimes rather than virtual machines. While they have much in common with a Linux system (because they _are_ a Linux system!), they rarely translate directly to the commands and workflow you may have developed on your Linux workstation. However, like appliances, containers have an interface to help you develop, maintain, and monitor them, so get familiar with the front-end commands and services until you're happily interacting with them just as easily as you interact with virtual (or bare-metal) machines. Soon, you'll wonder why everything isn't developed to be ephemeral.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/how-ssh-running-container

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth/users/bcotton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://opensource.com/article/18/11/behind-scenes-linux-containers
[3]: https://www.opencontainers.org/
[4]: http://discourse.org
[5]: http://jenkins.io
[6]: https://www.openshift.com/learn/get-started
[7]: https://opensource.com/sites/default/files/uploads/openshift-pod-access.jpg (Pod details in OpenShift)
[8]: https://opensource.com/sites/default/files/uploads/openshift-pod-terminal.jpg (A terminal in a running container)
[9]: http://learn.openshift.com
@ -0,0 +1,337 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use OpenSSL: Hashes, digital signatures, and more)
[#]: via: (https://opensource.com/article/19/6/cryptography-basics-openssl-part-2)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)

How to use OpenSSL: Hashes, digital signatures, and more
======

Dig deeper into the details of cryptography with OpenSSL: Hashes, digital signatures, digital certificates, and more

![A person working.][1]

The [first article in this series][2] introduced hashes, encryption/decryption, digital signatures, and digital certificates through the OpenSSL libraries and command-line utilities. This second article drills down into the details. Let’s begin with hashes, which are ubiquitous in computing, and consider what makes a hash function _cryptographic_.

### Cryptographic hashes

The download page for the OpenSSL source code (<https://www.openssl.org/source/>) contains a table with recent versions. Each version comes with two hash values: 160-bit SHA1 and 256-bit SHA256. These values can be used to verify that the downloaded file matches the original in the repository: The downloader recomputes the hash values locally on the downloaded file and then compares the results against the originals. Modern systems have utilities for computing such hashes. Linux, for instance, has **md5sum** and **sha256sum**. OpenSSL itself provides similar command-line utilities.

Hashes are used in many areas of computing. For example, the Bitcoin blockchain uses SHA256 hash values as block identifiers. To mine a Bitcoin is to generate a SHA256 hash value that falls below a specified threshold, which means a hash value with at least N leading zeroes. (The value of N can go up or down depending on how productive the mining is at a particular time.) As a point of interest, today’s miners are hardware clusters designed for generating SHA256 hashes in parallel. During a peak time in 2018, Bitcoin miners worldwide generated about 75 million terahashes per second—yet another incomprehensible number.

Network protocols use hash values as well—often under the name **checksum**—to support message integrity; that is, to assure that a received message is the same as the one sent. The message sender computes the message’s checksum and sends the results along with the message. The receiver recomputes the checksum when the message arrives. If the sent and the recomputed checksum do not match, then something happened to the message in transit, or to the sent checksum, or to both. In this case, the message and its checksum should be sent again, or at least an error condition should be raised. (Some low-level network protocols, such as UDP over IPv4, treat the checksum as optional.)

Other examples of hashes are familiar. Consider a website that requires users to authenticate with a password, which the user enters in their browser. Their password is then sent, encrypted, from the browser to the server via an HTTPS connection to the server. Once the password arrives at the server, it's decrypted for a database table lookup.

What should be stored in this lookup table? Storing the passwords themselves is risky. It’s far less risky to store a hash generated from a password, perhaps with some _salt_ (extra bits) added to taste before the hash value is computed. Your password may be sent to the web server, but the site can assure you that the password is not stored there.
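
As an illustration (not from the original article), OpenSSL can generate such a salted password hash at the command line. This sketch assumes OpenSSL 1.1.1 or later, which supports the SHA512-based **-6** option; the salt value here is arbitrary, and the hash itself is elided:

```
% openssl passwd -6 -salt D3adB33f foobar
$6$D3adB33f$...   ## crypt(3) format: the $6$ marker, the salt, then the salted hash
```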

Hash values also occur in various areas of security. For example, hash-based message authentication code ([HMAC][3]) uses a hash value and a secret cryptographic key to authenticate a message sent over a network. HMAC codes, which are lightweight and easy to use in programs, are popular in web services. An X509 digital certificate includes a hash value known as the _fingerprint_, which can facilitate certificate verification. An in-memory truststore could be implemented as a lookup table keyed on such fingerprints—as a _hash map_, which supports constant-time lookups. The fingerprint from an incoming certificate can be compared against the truststore keys for a match.
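
The **openssl dgst** utility used throughout this article can also produce an HMAC. A quick sketch, with a made-up key and file name; the 256-bit code is elided, and the label varies by OpenSSL version:

```
% openssl dgst -sha256 -hmac "my-secret-key" message.txt
HMAC-SHA256(message.txt)= ...   ## tied to both the file’s contents and the key
```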

What special property should a _cryptographic hash function_ have? It should be _one-way_, which means very difficult to invert. A cryptographic hash function should be relatively straightforward to compute, but computing its inverse—the function that maps the hash value back to the input bitstring—should be computationally intractable. Here is a depiction, with **chf** as a cryptographic hash function and my password **foobar** as the sample input:

```
        +---+
foobar—>|chf|—>hash value ## straightforward
        +---+
```

By contrast, the inverse operation is infeasible:

```
            +-----------+
hash value—>|chf inverse|—>foobar ## intractable
            +-----------+
```

Recall, for example, the SHA256 hash function. For an input bitstring of any length N > 0, this function generates a fixed-length hash value of 256 bits; hence, this hash value does not reveal even the input bitstring’s length N, let alone the value of each bit in the string. By the way, SHA256 is not susceptible to a [_length extension attack_][4]. The only effective way to reverse engineer a computed SHA256 hash value back to the input bitstring is through a brute-force search, which means trying every possible input bitstring until a match with the target hash value is found. Such a search is infeasible on a sound cryptographic hash function such as SHA256.

Now, a final review point is in order. Cryptographic hash values are statistically rather than unconditionally unique, which means that it is unlikely but not impossible for two different input bitstrings to yield the same hash value—a _collision_. The [_birthday problem_][5] offers a nicely counter-intuitive example of collisions. There is extensive research on various hash algorithms’ _collision resistance_. For example, MD5 (128-bit hash values) has a breakdown in collision resistance after roughly 2^21 hashes. For SHA1 (160-bit hash values), the breakdown starts at about 2^61 hashes.

A good estimate of the breakdown in collision resistance for SHA256 is not yet in hand. This fact is not surprising. SHA256 has a range of 2^256 distinct hash values, a number whose decimal representation has a whopping 78 digits! So, can collisions occur with SHA256 hashing? Of course, but they are extremely unlikely.
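
A quick command-line sketch (not in the original article) illustrates why brute force is the only way in: changing a single character of the input yields an unrelated hash value, the so-called avalanche effect. The first value below is the well-known SHA256 test vector for **abc**; the second is elided, but it shares no discernible pattern with the first:

```
% echo -n abc | openssl dgst -sha256
(stdin)= ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
% echo -n abd | openssl dgst -sha256
(stdin)= ...   ## completely different from the hash of "abc"
```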

In the command-line examples that follow, two input files are used as bitstring sources: **hashIn1.txt** and **hashIn2.txt**. The first file contains **abc** and the second contains **1a2b3c**.

These files contain text for readability, but binary files could be used instead.

Using the Linux **sha256sum** utility on these two files at the command line—with the percent sign (**%**) as the prompt—produces the following hash values (in hex):

```
% sha256sum hashIn1.txt
9e83e05bbf9b5db17ac0deec3b7ce6cba983f6dc50531c7a919f28d5fb3696c3 hashIn1.txt

% sha256sum hashIn2.txt
3eaac518777682bf4e8840dd012c0b104c2e16009083877675f00e995906ed13 hashIn2.txt
```

The OpenSSL hashing counterparts yield the same results, as expected:

```
% openssl dgst -sha256 hashIn1.txt
SHA256(hashIn1.txt)= 9e83e05bbf9b5db17ac0deec3b7ce6cba983f6dc50531c7a919f28d5fb3696c3

% openssl dgst -sha256 hashIn2.txt
SHA256(hashIn2.txt)= 3eaac518777682bf4e8840dd012c0b104c2e16009083877675f00e995906ed13
```

This examination of cryptographic hash functions sets up a closer look at digital signatures and their relationship to key pairs.

### Digital signatures

As the name suggests, a digital signature can be attached to a document or some other electronic artifact (e.g., a program) to vouch for its authenticity. Such a signature is thus analogous to a hand-written signature on a paper document. To verify the digital signature is to confirm two things. First, that the vouched-for artifact has not changed since the signature was attached because it is based, in part, on a cryptographic _hash_ of the document. Second, that the signature belongs to the person (e.g., Alice) who alone has access to the private key in a pair. By the way, digitally signing code (source or compiled) has become a common practice among programmers.

Let’s walk through how a digital signature is created. As mentioned before, there is no digital signature without a public and private key pair. When using OpenSSL to create these keys, there are two separate commands: one to create a private key, and another to extract the matching public key from the private one. These key pairs are encoded in base64, and their sizes can be specified during this process.

The private key consists of numeric values, two of which (a _modulus_ and an _exponent_) make up the public key. Although the private key file contains the public key, the extracted public key does _not_ reveal the value of the corresponding private key.

The resulting file with the private key thus contains the full key pair. Extracting the public key into its own file is practical because the two keys have distinct uses, but this extraction also minimizes the danger that the private key might be publicized by accident.

Next, the pair’s private key is used to process a hash value for the target artifact (e.g., an email), thereby creating the signature. On the other end, the receiver’s system uses the pair’s public key to verify the signature attached to the artifact.

Now for an example. To begin, generate a 2048-bit RSA key pair with OpenSSL:

**openssl genpkey -out privkey.pem -algorithm RSA -pkeyopt rsa_keygen_bits:2048**

The **-algorithm RSA** flag specifies the key-pair type, and the **-pkeyopt rsa_keygen_bits:2048** option sets the key size. The file’s name (**privkey.pem**) is arbitrary, but the Privacy Enhanced Mail (PEM) extension **pem** is customary for the default PEM format. (OpenSSL has commands to convert among formats if needed.) If a larger key size (e.g., 4096) is in order, then the option could be changed to **rsa_keygen_bits:4096**. These sizes are typically powers of two.

Here’s a slice of the resulting **privkey.pem** file, which is in base64:

```
-----BEGIN PRIVATE KEY-----
MIICdgIBADANBgkqhkiG9w0BAQEFAASCAmAwggJcAgEAAoGBANnlAh4jSKgcNj/Z
JF4J4WdhkljP2R+TXVGuKVRtPkGAiLWE4BDbgsyKVLfs2EdjKL1U+/qtfhYsqhkK
…
-----END PRIVATE KEY-----
```

The next command then extracts the pair’s public key from the private one:

**openssl rsa -in privkey.pem -outform PEM -pubout -out pubkey.pem**

The resulting **pubkey.pem** file is small enough to show here in full:

```
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDZ5QIeI0ioHDY/2SReCeFnYZJY
z9kfk11RrilUbT5BgIi1hOAQ24LMilS37NhHYyi9VPv6rX4WLKoZCmkeYaWk/TR5
4nbH1E/AkniwRoXpeh5VncwWMuMsL5qPWGY8fuuTE27GhwqBiKQGBOmU+MYlZonO
O0xnAKpAvysMy7G7qQIDAQAB
-----END PUBLIC KEY-----
```

Now, with the key pair at hand, the digital signing is easy—in this case with the source file **client.c** as the artifact to be signed:

**openssl dgst -sha256 -sign privkey.pem -out sign.sha256 client.c**

The digest for the **client.c** source file is SHA256, and the private key resides in the **privkey.pem** file created earlier. The resulting binary signature file is **sign.sha256**, an arbitrary name. To get a readable (if base64) version of this file, the follow-up command is:

**openssl enc -base64 -in sign.sha256 -out sign.sha256.base64**

The file **sign.sha256.base64** now contains:

```
h+e+3UPx++KKSlWKIk34fQ1g91XKHOGFRmjc0ZHPEyyjP6/lJ05SfjpAJxAPm075
VNfFwysvqRGmL0jkp/TTdwnDTwt756Ej4X3OwAVeYM7i5DCcjVsQf5+h7JycHKlM
o/Jd3kUIWUkZ8+Lk0ZwzNzhKJu6LM5KWtL+MhJ2DpVc=
```

Or, the executable file **client** could be signed instead, and the resulting base64-encoded signature would differ as expected:

```
VMVImPgVLKHxVBapJ8DgLNJUKb98GbXgehRPD8o0ImADhLqlEKVy0HKRm/51m9IX
xRAN7DoL4Q3uuVmWWi749Vampong/uT5qjgVNTnRt9jON112fzchgEoMb8CHNsCT
XIMdyaPtnJZdLALw6rwMM55MoLamSc6M/MV1OrJnk/g=
```

The final step in this process is to verify the digital signature with the public key. The hash used to sign the artifact (in this case, the executable **client** program) should be recomputed as an essential step in the verification since the verification process should indicate whether the artifact has changed since being signed.

There are two OpenSSL commands used for this purpose. The first decodes the base64 signature:

**openssl enc -base64 -d -in sign.sha256.base64 -out sign.sha256**

The second verifies the signature:

**openssl dgst -sha256 -verify pubkey.pem -signature sign.sha256 client**

The output from this second command is, as it should be:

```
Verified OK
```

To understand what happens when verification fails, a short but useful exercise is to replace the executable **client** file in the last OpenSSL command with the source file **client.c** and then try to verify. Another exercise is to change the **client** program, however slightly, and try again.
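
For instance, verifying the signature of the executable against the source file should produce something like the following; the exact wording can vary across OpenSSL versions, and the command's exit status is nonzero:

```
% openssl dgst -sha256 -verify pubkey.pem -signature sign.sha256 client.c
Verification Failure
```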

### Digital certificates

A digital certificate brings together the pieces analyzed so far: hash values, key pairs, digital signatures, and encryption/decryption. The first step toward a production-grade certificate is to create a certificate signing request (CSR), which is then sent to a certificate authority (CA). To do this for the example with OpenSSL, run:

**openssl req -out myserver.csr -new -newkey rsa:4096 -nodes -keyout myserverkey.pem**

This example generates a CSR document and stores the document in the file **myserver.csr** (base64 text). The purpose here is this: the CSR document requests that the CA vouch for the identity associated with the specified domain name—the common name (CN) in CA-speak.

A new key pair also is generated by this command, although an existing pair could be used. Note that the use of **server** in names such as **myserver.csr** and **myserverkey.pem** hints at the typical use of digital certificates: as vouchers for the identity of a web server associated with a domain such as [www.google.com][6].

The same command, however, creates a CSR regardless of how the digital certificate might be used. It also starts an interactive question/answer session that prompts for relevant information about the domain name to link with the requester’s digital certificate. This interactive session can be short-circuited by providing the essentials as part of the command, with backslashes as continuations across line breaks. The **-subj** flag introduces the required information:

```
% openssl req -new \
  -newkey rsa:2048 -nodes -keyout privkeyDC.pem \
  -out myserver.csr \
  -subj "/C=US/ST=Illinois/L=Chicago/O=Faulty Consulting/OU=IT/CN=myserver.com"
```

The resulting CSR document can be inspected and verified before being sent to a CA; the CA's process then creates the digital certificate with the desired format (e.g., X509), signature, validity dates, and so on. Here is the command to inspect and verify the CSR:

**openssl req -text -in myserver.csr -noout -verify**

Here’s a slice of the output:

```
verify OK
Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: C=US, ST=Illinois, L=Chicago, O=Faulty Consulting, OU=IT, CN=myserver.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:ba:36:fb:57:17:65:bc:40:30:96:1b:6e:de:73:
                    …
                Exponent: 65537 (0x10001)
        Attributes:
            a0:00
    Signature Algorithm: sha256WithRSAEncryption
…
```

### A self-signed certificate

During the development of an HTTPS web site, it is convenient to have a digital certificate on hand without going through the CA process. A self-signed certificate fills the bill during the HTTPS handshake’s authentication phase, although any modern browser warns that such a certificate is worthless. Continuing the example, the OpenSSL command for a self-signed certificate—valid for a year and with an RSA public key—is:

**openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 -keyout myserver.pem -out myserver.crt**

The OpenSSL command below presents a readable version of the generated certificate:

**openssl x509 -in myserver.crt -text -noout**

Here’s part of the output for the self-signed certificate:

```
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 13951598013130016090 (0xc19e087965a9055a)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=Illinois, L=Chicago, O=Faulty Consulting, OU=IT, CN=myserver.com
        Validity
            Not Before: Apr 11 17:22:18 2019 GMT
            Not After : Apr 10 17:22:18 2020 GMT
        Subject: C=US, ST=Illinois, L=Chicago, O=Faulty Consulting, OU=IT, CN=myserver.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:ba:36:fb:57:17:65:bc:40:30:96:1b:6e:de:73:
                    …
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                3A:32:EF:3D:EB:DF:65:E5:A8:96:D7:D7:16:2C:1B:29:AF:46:C4:91
            X509v3 Authority Key Identifier:
                keyid:3A:32:EF:3D:EB:DF:65:E5:A8:96:D7:D7:16:2C:1B:29:AF:46:C4:91

            X509v3 Basic Constraints:
                CA:TRUE
    Signature Algorithm: sha256WithRSAEncryption
         3a:eb:8d:09:53:3b:5c:2e:48:ed:14:ce:f9:20:01:4e:90:c9:
         ...
```

As mentioned earlier, an RSA private key contains values from which the public key is generated. However, a given public key does _not_ give away the matching private key. For an introduction to the underlying mathematics, see <https://simple.wikipedia.org/wiki/RSA_algorithm>.

There is an important correspondence between a digital certificate and the key pair used to generate the certificate, even if the certificate is only self-signed:

* The digital certificate contains the _exponent_ and _modulus_ values that make up the public key. These values are part of the key pair in the originally-generated PEM file, in this case, the file **myserver.pem**.
* The exponent is almost always 65,537 (as in this case) and so can be ignored.
* The modulus from the key pair should match the modulus from the digital certificate.

The modulus is a large value and, for readability, can be hashed. Here are two OpenSSL commands that check for the same modulus, thereby confirming that the digital certificate is based upon the key pair in the PEM file:

```
% openssl x509 -noout -modulus -in myserver.crt | openssl sha1 ## modulus from CRT
(stdin)= 364d21d5e53a59d482395b1885aa2c3a5d2e3769

% openssl rsa -noout -modulus -in myserver.pem | openssl sha1 ## modulus from PEM
(stdin)= 364d21d5e53a59d482395b1885aa2c3a5d2e3769
```

The resulting hash values match, thereby confirming that the digital certificate is based upon the specified key pair.

### Back to the key distribution problem

Let’s return to an issue raised at the end of Part 1: the TLS handshake between the **client** program and the Google web server. There are various handshake protocols, and even the Diffie-Hellman version at work in the **client** example offers wiggle room. Nonetheless, the **client** example follows a common pattern.

To start, during the TLS handshake, the **client** program and the web server agree on a cipher suite, which consists of the algorithms to use. In this case, the suite is **ECDHE-RSA-AES128-GCM-SHA256**.
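
One way to watch this negotiation is with the **openssl s_client** utility from Part 1. In this sketch, the grep filter and the reported suite reflect a TLSv1.2 session; the details depend on the OpenSSL version in use and on what the server currently offers:

```
% openssl s_client -connect www.google.com:443 < /dev/null 2>/dev/null | grep "Cipher is"
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
```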

The two elements of interest now are the RSA key-pair algorithm and the AES128 block cipher used for encrypting and decrypting messages if the handshake succeeds. Regarding encryption/decryption, this process comes in two flavors: symmetric and asymmetric. In the symmetric flavor, the _same_ key is used to encrypt and decrypt, which raises the _key distribution problem_ in the first place: How is the key to be distributed securely to both parties? In the asymmetric flavor, one key is used to encrypt (in this case, the RSA public key) but a different key is used to decrypt (in this case, the RSA private key from the same pair).

The **client** program has the Google web server’s public key from an authenticating certificate, and the web server has the private key from the same pair. Accordingly, the **client** program can send an encrypted message to the web server, which alone can readily decrypt this message.

In the TLS situation, the symmetric approach has two significant advantages:

* In the interaction between the **client** program and the Google web server, the authentication is one-way. The Google web server sends three certificates to the **client** program, but the **client** program does not send a certificate to the web server; hence, the web server has no public key from the client and can’t encrypt messages to the client.
* Symmetric encryption/decryption with AES128 is nearly a _thousand times faster_ than the asymmetric alternative using RSA keys.

The TLS handshake combines the two flavors of encryption/decryption in a clever way. During the handshake, the **client** program generates random bits known as the pre-master secret (PMS). Then the **client** program encrypts the PMS with the server’s public key and sends the encrypted PMS to the server, which in turn decrypts the PMS message with its private key from the RSA pair:

```
              +-------------------+  encrypted PMS  +--------------------+
client PMS--->|server’s public key|---------------->|server’s private key|--->server PMS
              +-------------------+                 +--------------------+
```

At the end of this process, the **client** program and the Google web server now have the same PMS bits. Each side uses these bits to generate a _master secret_ and, in short order, a symmetric encryption/decryption key known as the _session key_. There are now two distinct but identical session keys, one on each side of the connection. In the **client** example, the session key is of the AES128 variety. Once generated on both the **client** program’s and Google web server’s sides, the session key on each side keeps the conversation between the two sides confidential. A handshake protocol such as Diffie-Hellman allows the entire PMS process to be repeated if either side (e.g., the **client** program) or the other (in this case, the Google web server) calls for a restart of the handshake.

### Wrapping up

The OpenSSL operations illustrated at the command line are available, too, through the API for the underlying libraries. These two articles have emphasized the utilities to keep the examples short and to focus on the cryptographic topics. If you have an interest in security issues, OpenSSL is a fine place to start—and to stay.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/cryptography-basics-openssl-part-2

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.)
[2]: https://opensource.com/article/19/6/cryptography-basics-openssl-part-1
[3]: https://en.wikipedia.org/wiki/HMAC
[4]: https://en.wikipedia.org/wiki/Length_extension_attack
[5]: https://en.wikipedia.org/wiki/Birthday_problem
[6]: http://www.google.com

103
sources/tech/20190620 You can-t buy DevOps.md
Normal file
@ -0,0 +1,103 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (You can't buy DevOps)
[#]: via: (https://opensource.com/article/19/6/you-cant-buy-devops)
[#]: author: (Julie Gunderson https://opensource.com/users/juliegund)

You can't buy DevOps
======

But plenty of people are happy to sell it to you. Here's why it's not for sale.

![Coffee shop photo][1]

![DevOps price tag graphic][2]

Making a move to [DevOps][3] can be a daunting undertaking, with many organizations not knowing the right place to start. I recently had some fun taking a few "DevOps assessments" to see what solutions they offered. I varied my answers—from an organization that fully embraces DevOps to one at the beginning of the journey. Some of the assessments provided real value, linking me back to articles on culture and methodologies, while others merely offered me a tool promising to bring all my DevOps dreams into reality.

Tools are absolutely essential to the DevOps journey; for instance, tools can continuously deliver, automate, or monitor your environment. However, **DevOps is not a product**, and tools alone will not enable the processes necessary to realize the full value of DevOps. People are what matter most; you can't do DevOps without building the people, mindset, and culture first.

### Don't 'win' at DevOps; become a champion

As a DevOps advocate at PagerDuty, I am proud to be a part of an organization with a strong commitment to DevOps methodologies, well beyond just "checking the boxes" of tool adoption.

I recently had a conversation with PagerDuty CEO Jennifer Tejada about being a winner versus a champion. She talked about how winning is fantastic—you get a trophy, a title, or maybe even a few million dollars (if it's the lottery). However, in the big picture, winning is all about short-term goals, while being a champion means focusing on long-term successes or outcomes. This got me thinking about how to apply this principle to organizations embracing DevOps.

One of my favorite examples of DevOps tooling is XebiaLabs' [Periodic Table of DevOps Tools][4]:

[![Periodic Table of DevOps][5]][4]

(Click table for interactive version.)

The table shows that numerous tools fit into DevOps. However, too many times, I have heard about organizations "transforming to DevOps" by purchasing tools. While tooling is an essential part of the DevOps journey, a tool alone does not create a DevOps environment. You have to consider all the factors that make a DevOps team function well: collaboration, breaking down silos, defined processes, ownership, and automation, along with continuous improvement/continuous delivery.

Deciding to purchase tooling is a great step in the right direction; what is more important is to define the "why" or the end goal behind decisions first. Which brings us back to the mentality of a champion; look at Olympic gold medalist Michael Phelps, for example. Phelps is the most decorated Olympian of all time and holds 39 world records. To achieve these accomplishments, Phelps didn't stop at one, two, or even 20 wins; he aimed to be a champion. This was all done through commitment, practice, and focusing on the desired end state.

### DevOps defined

There are hundreds of definitions for DevOps, but almost everyone can agree on the core tenet outlined in the [State of DevOps Report][6]:

> "DevOps is a set of principles aimed at building culture and processes to help teams work more efficiently and deliver better software faster."

You can't change culture and processes with a credit card. Tooling can enable an organization to collaborate better or automate or continuously deliver; however, without the right mindset and adoption, a tool's full capability may not be achievable.

For example, one of my former colleagues heard how amazing Slack is for teams transforming to DevOps by opening up channels for collaboration. He convinced his manager that Slack would solve all of their communication woes. However, six months into the Slack adoption, most teams were still using Skype, including the manager. Slack ended up being more of a place to talk about brewing beer than a tool to bring the product to market faster. The issue was not Slack; it was the lack of buy-in from the team and organization and knowledge around the product's full functionality.

Purchasing a tool can definitely be a win for a team, but purchasing a tool is not purchasing DevOps. Making tooling and best practices work for the team and achieving short- and long-term goals are where our conversation around being a champion comes up. This brings us back to the why, the overall and far-reaching goal for the team or organization. Once you identify the goal, how do you get buy-in from key stakeholders? After you achieve buy-in, how do you implement the solution?

### Organizational change

[![Change management comic by Randy Glasbergen][7]][8]

Change is hard for many organizations and individuals; moreover, meaningful change does not happen overnight. It is important to understand how people and organizations process change. In the [Kotter 8-Step Process for Leading Change][9], it's about articulating the need for a change, creating urgency around the why, then starting small and finding and developing internal champions, _before_ trying to prove wins or, in this case, purchasing a tool.

If people in an organization are not aware of a problem or that there's a better way of operating, it will be hard to get the buy-in necessary and motivate team members to adopt new ideas and take action. People may be perfectly content with the current state; perhaps the processes in place are adequate or, at a minimum, the current state is a known factor. However, for the overall team to function well and achieve its shared goal in a faster, more agile way, new mechanisms must be put into place first.

![Kotter 8-Step Process for Leading Change][10]

### How to be a DevOps champion

Being a champion in the DevOps world means going beyond the win and delving deeper into the team/organizational structure and culture, thereby identifying outlying issues beyond tools, and then working with others to embrace the right change that leads to defined results. Go back to the beginning and define the end goal. Here are a few sample questions you can ask to get started:

* What are your core values?
* Why are you trying to become a more agile company or team?
* What obstacles is your team or organization facing?
* What will the tool or process accomplish?
* How are people communicating and collaborating?
* Are there silos and why?
* How are you championing the customer?
* Are employees empowered?

After defining the end state, find other like-minded individuals to be part of your champion team, and don't lose sight of what you are trying to accomplish. When making any change, make sure to start small, e.g., with one team or a test environment. By starting small and building on the wins, internal champions will start creating themselves.

Remember, companies are happy and eager to try to sell you DevOps, but at the end of the day, DevOps is not a product. It is a fully embraced methodology and mindset of automation, collaboration, people, and processes.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/you-cant-buy-devops

作者:[Julie Gunderson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/juliegund
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee-shop-devops.png?itok=CPefJZJL (Coffee shop photo)
[2]: https://opensource.com/sites/default/files/uploads/devops-pricetag.jpg (DevOps price tag graphic)
[3]: https://opensource.com/resources/devops
[4]: https://xebialabs.com/periodic-table-of-devops-tools/
[5]: https://opensource.com/sites/default/files/uploads/periodic-table-of-devops-tools.png (Periodic Table of DevOps)
[6]: https://puppet.com/resources/whitepaper/state-of-devops-report
[7]: https://opensource.com/sites/default/files/uploads/cartoon.png (Change management comic by Randy Glasbergen)
[8]: https://images.app.goo.gl/JiMaWAenNkLcmkZJ9
[9]: https://www.kotterinc.com/8-steps-process-for-leading-change/
[10]: https://opensource.com/sites/default/files/uploads/kotter-process.png (Kotter 8-Step Process for Leading Change)
@ -0,0 +1,94 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 infrastructure performance and scaling tools you should be using)
[#]: via: (https://opensource.com/article/19/6/performance-scaling-tools)
[#]: author: (Pradeep Surisetty, Peter Portante https://opensource.com/users/psuriset/users/aakarsh/users/portante/users/anaga)

7 infrastructure performance and scaling tools you should be using
======

These open source tools will help you feel confident in your infrastructure's performance as it scales up.

![Several images of graphs.][1]

[Sysadmins][2], [site reliability engineers][3] (SREs), and cloud operators all too often struggle to feel confident in their infrastructure as it scales up. Also too often, they think the only way to solve their challenges is to write a tool for in-house use. Fortunately, there are options. There are many open source tools available to test an infrastructure's performance. Here are my favorites.

### Pbench

Pbench is a performance testing harness to make executing benchmarks and performance tools easier and more convenient. In short, it:

* Excels at running micro-benchmarks on large scales of hosts (bare-metal, virtual machines, containers, etc.) while automating a potentially large set of benchmark parameters
* Focuses on installing, configuring, and executing benchmark code and performance tools and not on provisioning or orchestrating the testbed (e.g., OpenStack, RHEV, RHEL, Docker, etc.)
* Is designed to work in concert with provisioning tools like BrowBeat or Ansible playbooks

Pbench's [documentation][4] includes installation and user guides, and the code is [maintained on GitHub][5], where the team welcomes contributions and issues.

### Ripsaw

Baselining is a critical aspect of infrastructure reliability. Ripsaw is a performance benchmark Operator for launching workloads on Kubernetes. It deploys as a Kubernetes Operator that then deploys common workloads, including specific applications (e.g., Couchbase) or general performance tests (e.g., Uperf) to measure and establish a performance baseline.

Ripsaw is [maintained on GitHub][6]. You can also find its maintainers on the [Kubernetes Slack][7], where they are active contributors.

### OpenShift Scale

The collection of tools in OpenShift Scale, OpenShift's open source solution for performance testing, does everything from spinning up OpenShift on OpenStack installations (TripleO Install and ShiftStack Install) to installing on Amazon Web Services (AWS) to providing containerized tooling, like running Pbench on your cluster or doing cluster limits testing, network tests, storage tests, metric tests with Prometheus, logging, and concurrent build testing.

Scale's CI suite is flexible enough to both add workloads and include your workloads when deploying to Azure or anywhere else you might run. You can see the full suite of tools [on GitHub][8].

### Browbeat

[Browbeat][9] calls itself "a performance tuning and analysis tool for OpenStack." You can use it to analyze and tune the deployment of your workloads. It also automates the deployment of standard monitoring and data analysis tools like Grafana and Graphite. Browbeat is [maintained on GitHub][10].

### Smallfile

Smallfile is a filesystem workload generator targeted for scale-out, distributed storage. It has been used to test a number of open filesystem technologies, including GlusterFS, CephFS, Network File System (NFS), Server Message Block (SMB), and OpenStack Cinder volumes. It is [maintained on GitHub][11].

### Ceph Benchmarking Tool

Ceph Benchmarking Tool (CBT) is a testing harness that can automate tasks for testing [Ceph][12] cluster performance. It records system metrics with collectl, and it can collect more information with tools including perf, blktrace, and valgrind. CBT can also do advanced testing that includes automated object storage daemon outages, erasure-coded pools, and cache-tier configurations.

Contributors have extended CBT to use [Pbench monitoring tools and Ansible][13] and to run the [Smallfile benchmark][14]. A separate Grafana visualization dashboard uses Elasticsearch data generated by [Automated Ceph Test][15].

### satperf

Satellite-performance (satperf) is a set of Ansible playbooks and helper scripts to deploy Satellite 6 environments and measure the performance of selected actions, such as concurrent registrations, remote execution, Puppet operations, repository synchronizations and promotions, and more. You can find Satperf [on GitHub][16].

### Conclusion

Sysadmins, SREs, and cloud operators face a wide variety of challenges as they work to scale their infrastructure, but luckily there is also a wide variety of tools to help them get past those common issues. Any of these seven tools should help you get started testing your infrastructure's performance as it scales.

Are there other open source performance and scaling tools that should be on this list? Add your favorites in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/performance-scaling-tools

作者:[Pradeep Surisetty, Peter Portante][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/psuriset/users/aakarsh/users/portante/users/anaga
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf (Several images of graphs.)
[2]: /16/12/yearbook-10-open-source-sysadmin-tools
[3]: /article/19/5/life-performance-engineer
[4]: https://distributed-system-analysis.github.io/pbench/
[5]: https://github.com/distributed-system-analysis/pbench
[6]: https://github.com/cloud-bulldozer/ripsaw
[7]: https://github.com/cloud-bulldozer/ripsaw#community
[8]: https://github.com/openshift-scale
[9]: https://browbeatproject.org/
[10]: https://github.com/cloud-bulldozer/browbeat
[11]: https://github.com/distributed-system-analysis/smallfile
[12]: https://ceph.com/
[13]: https://github.com/acalhounRH/cbt
[14]: https://nuget.pkg.github.com/bengland2/cbt/tree/smallfile
[15]: https://github.com/acalhounRH/automated_ceph_test
[16]: https://github.com/redhat-performance/satperf

@ -0,0 +1,97 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The state of open source translation tools for contributors to your project)
|
||||
[#]: via: (https://opensource.com/article/19/6/translation-platforms-matter)
|
||||
[#]: author: (Jean-Baptiste Holcroft https://opensource.com/users/jibec/users/jibec)
|
||||
|
||||
The state of open source translation tools for contributors to your project
|
||||
======
|
||||
There are almost 100 languages with more than 10 million speakers. How
|
||||
many of your active contributors speak one?
|
||||
![Team of people around the world][1]
|
||||
|
||||
In the world of free software, many people speak English: It is the **one** language. English helps us cross borders to meet others. However, this language is also a barrier for the majority of people.
|
||||
|
||||
Some master it while others don't. Complex English terms are, in general, a barrier to the understanding and propagation of knowledge. Whenever you use an uncommon English word, ask yourself about your real mastery of what you are explaining, and the unintentional barriers you build in the process.
|
||||
|
||||
_“If you talk to a man in a language he understands, that goes to his head. If you talk to him in his language, that goes to his heart.”_ — Nelson Mandela
|
||||
|
||||
We are 7 billion humans, and less than 400 million of us are English natives. The wonders done day after day by free/libre open source contributors deserve to reach the hearts of the [6.6 billion people][2] for whom English is not their mother tongue. In this day and age, we have the technology to help translate all types of content: websites, documentation, software, and even sounds and images. Even if I do not translate all of these media personally, I do not know of any real limits. The only prerequisite for getting this content translated is both the willingness of the creators and the collective will of the users, customers, and—in the case of free software—the contributors.
|
||||
|
||||
### Why successful translation requires real tooling
|
||||
|
||||
Some projects are stuck in the stone ages and require translators to use [Git][3], [Mercurial][4], or other development tools. These tools don’t meet the needs of translation communities. Let’s help these projects evolve, as discussed in the section "A call for action."
|
||||
|
||||
Other projects have integrated translation platforms, which are key tools for linguistic diversity and existence. These tools understand the needs of translators and serve as a bridge to the development world. They make translation contribution easy, and keep those doing the translations motivated over time.
|
||||
|
||||
This aspect is important: There are almost 100 languages with more than 10 million speakers. Do you really believe that your project can have an active contributor for each of these languages? Unless you are a huge organization, like Mozilla or LibreOffice, there is no chance. The translators who help you also help two, ten, or a hundred other projects. They need tools to be effective, such as [translation memories][5], progress reports, alerts, ways to collaborate, and knowing that what they do is useful.
|
||||
|
||||
### Translation platforms are in trouble
|
||||
|
||||
However, the translation platforms distributed as free software are disappearing in favor of closed platforms. These platforms set their rules and efforts according to what will bring them the most profit.
|
||||
|
||||
Linguistic and cultural diversity does not bring money: It opens doors and allows local development. It emancipates populations and can ensure the survival of certain cultures and languages. In the 21st century, is your culture really alive if it does not exist in cyberspace?
|
||||
|
||||
The short history of translation platforms is not pretty:
|
||||
|
||||
* In 2011, Transifex ceased to be open when they decided to no longer publish their source code.
|
||||
* Since September 2017, the [Pootle][6] project seems to have stalled.
|
||||
* In October 2018, the [Zanata][7] project shut down because it had not succeeded in building a community of technical contributors capable of taking over when corporate funding was halted.
|
||||
|
||||
|
||||
|
||||
In particular, the [Fedora Project][8]—which I work closely with—has ridden the roller coaster from Transifex to Zanata and is now facing another move and more challenges.
|
||||
|
||||
Two significant platforms remain:
|
||||
|
||||
* [Pontoon][9]: Dedicated to the Mozilla use case (large community, common project).
|
||||
  * [Weblate][10]: A generic-purpose platform created by developer [Michal Čihař][11].
|
||||
|
||||
|
||||
|
||||
These two tools are of high quality and are technically up-to-date, but Mozilla’s Pontoon is not designed to appeal to the greatest number of people. This project is dedicated to the specific challenges Mozilla faces.
|
||||
|
||||
### A call for action
|
||||
|
||||
There is an urgent need for large communities to share resources to perpetuate Weblate as free software and promote its adoption. Support is also needed for other tools, such as [po4a][12], the [Translate Toolkit][13], and even our old friend [gettext][14]. Will we accept a sword of Damocles hanging over our heads? Will we continue to consume years of free work without giving a cent in return? Or will we take the lead in bringing security to our communities?
|
||||
|
||||
**What you can do as a contributor**: Promote Weblate as an open source translation platform, and help your beloved project use it. [Hosting is free for open source projects][15].
|
||||
|
||||
**What you can do as a developer**: Make sure all of your project’s content can be translated into any language. Think about this issue from the beginning, as all tools don’t provide the same internationalization features.
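As a quick illustration of that last point, here is roughly what the classic gettext extraction-and-compilation cycle (mentioned above) looks like from the command line. This is only a sketch: the file names and the `fr` locale are placeholders, and your build system may wrap these steps for you.

```
# extract translatable strings from the source into a template (.pot)
xgettext --language=Python --output=po/myapp.pot src/myapp.py

# create a new French catalog from the template
msginit --input=po/myapp.pot --locale=fr --output=po/fr.po

# compile the translated catalog into the binary form the program loads at runtime
msgfmt po/fr.po --output-file=locale/fr/LC_MESSAGES/myapp.mo
```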
|
||||
|
||||
**What you can do as an entity with a budget**: Whether you’re a company or just part of the community, pay for the support, hosting, or development of the tools you use. Even if the amount is symbolic, doing this will lower the risks. In particular, [here is the info for Weblate][16]. (Note: I’m not involved with the Weblate project other than bug reports and translation.)
|
||||
|
||||
**What to do if you’re a language enthusiast**: Contact me to help create an open source language organization to promote our tools and their usage, and find money to fund them.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/6/translation-platforms-matter
|
||||
|
||||
作者:[Jean-Baptiste Holcroft][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jibec/users/jibec
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_global_people_gis_location.png?itok=Rl2IKo12 (Team of people around the world)
|
||||
[2]: https://www.ethnologue.com/statistics/size
|
||||
[3]: https://git-scm.com
|
||||
[4]: https://www.mercurial-scm.org
|
||||
[5]: https://en.wikipedia.org/wiki/Translation_memory
|
||||
[6]: http://pootle.translatehouse.org
|
||||
[7]: http://zanata.org
|
||||
[8]: https://getfedora.org
|
||||
[9]: https://github.com/mozilla/pontoon/
|
||||
[10]: https://weblate.org
|
||||
[11]: https://cihar.com
|
||||
[12]: https://po4a.org
|
||||
[13]: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/
|
||||
[14]: https://www.gnu.org/software/gettext/
|
||||
[15]: http://hosted.weblate.org/
|
||||
[16]: https://weblate.org/en/hosting/
|
@ -0,0 +1,307 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Three Ways to Lock and Unlock User Account in Linux)
|
||||
[#]: via: (https://www.2daygeek.com/lock-unlock-disable-enable-user-account-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Three Ways to Lock and Unlock User Account in Linux
|
||||
======
|
||||
|
||||
If a password policy is already implemented in your organization, then you may not need to look for these options.

However, if you have set up a lock period of, say, 24 hours, you might still need to unlock a user's account manually.

This tutorial will help you manually lock and unlock user accounts in Linux.

This can be done using the following two Linux commands, in three ways.

  * **`passwd:`** The passwd command is used to update a user's authentication tokens. This task is achieved by calling the Linux-PAM and Libuser APIs.
  * **`usermod:`** The usermod command is used to modify or update a given user's account information, such as adding a user to a specific group.

To experiment with this, we chose the `daygeek` user account. Let's see how to do it, step by step.

Make a note: use the user account you actually need to lock or unlock instead of ours.

You can check whether a given user account is available in the system by using the `id` command. Yes, my account is available in the system.
```
# id daygeek

uid=2240(daygeek) gid=2243(daygeek) groups=2243(daygeek),2244(ladmin)
```
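By the way, `getent` is another common way to look up an account. Unlike reading `/etc/passwd` directly, it also resolves accounts served by LDAP or other NSS sources, and it prints the same `passwd`-style line:

```
# getent passwd daygeek
```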
### Method-1: How To Lock, Unlock and Check Status of the Given User Account in Linux Using passwd Command?
The passwd command is one of the commands most frequently used by Linux administrators.

It updates a user's authentication tokens in the `/etc/shadow` file.

Run the passwd command with the `-l` switch to lock the given user account.
```
# passwd -l daygeek

Locking password for user daygeek.
passwd: Success
```
You can check the locked account status either with the passwd command or by grepping the given user name from the /etc/shadow file.

Checking the user account's locked status using the passwd command:
```
# passwd -S daygeek
or
# passwd --status daygeek

daygeek LK 2019-05-30 7 90 7 -1 (Password locked.)
```
This outputs short information about the status of the password for the given account.

  * **`LK:`** Password locked
  * **`NP:`** No password
  * **`PS:`** Password set

Checking the locked user account status using the `/etc/shadow` file. Two exclamation marks are added in front of the password hash if the account is locked.
```
# grep daygeek /etc/shadow

daygeek:!!$6$tGvVUhEY$PIkpI43HPaEoRrNJSRpM3H0YWOsqTqXCxtER6rak5PMaAoyQohrXNB0YoFCmAuh406n8XOvBBldvMy9trmIV00:18047:7:90:7:::
```
Run the passwd command with the `-u` switch to unlock the given user account.
```
# passwd -u daygeek

Unlocking password for user daygeek.
passwd: Success
```
### Method-2: How To Lock, Unlock and Check Status of the Given User Account in Linux Using usermod Command?
The usermod command is also frequently used by Linux administrators.

It modifies or updates a given user's account information, such as adding a user to a specific group.

Run the usermod command with the `-L` switch to lock the given user account.
```
# usermod --lock daygeek
or
# usermod -L daygeek
```
You can check the locked account status either with the passwd command or by grepping the given user name from the /etc/shadow file.

Checking the user account's locked status using the passwd command:
```
# passwd -S daygeek
or
# passwd --status daygeek

daygeek LK 2019-05-30 7 90 7 -1 (Password locked.)
```
This outputs short information about the status of the password for the given account.

  * **`LK:`** Password locked
  * **`NP:`** No password
  * **`PS:`** Password set

Checking the locked user account status using the /etc/shadow file. Two exclamation marks are added in front of the password hash if the account is locked.
```
# grep daygeek /etc/shadow

daygeek:!!$6$tGvVUhEY$PIkpI43HPaEoRrNJSRpM3H0YWOsqTqXCxtER6rak5PMaAoyQohrXNB0YoFCmAuh406n8XOvBBldvMy9trmIV00:18047:7:90:7:::
```
Run the usermod command with the `-U` switch to unlock the given user account.
```
# usermod --unlock daygeek
or
# usermod -U daygeek
```
### Method-3: How To Disable, Enable SSH Access To the Given User Account in Linux Using usermod Command?
The usermod command can also be used here.

This method works by changing the user's login shell: assigning the `nologin` shell to a user disables interactive logins, including over SSH. To do so, run the command below.
```
# usermod -s /sbin/nologin daygeek
```
You can check the changed user account details by grepping the given user name from the /etc/passwd file.
```
# grep daygeek /etc/passwd

daygeek:x:2240:2243::/home/daygeek:/sbin/nologin
```
We can re-enable the user's SSH access by assigning the old shell back.
```
# usermod -s /bin/bash daygeek
```
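One caveat worth noting: locking a password or changing the shell does not terminate sessions that are already open. If you also need to end a user's active sessions, `pkill` can do that (use it with care; this kills every process owned by the user):

```
# pkill -KILL -u daygeek
```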
### How To Lock, Unlock and Check Status of Multiple User Account in Linux Using Shell Script?
If you would like to lock or unlock more than one account, you need a script.

Yes, we can write a small shell script to perform this. To do so, follow the steps below.

First, create the user list. Each user should be on a separate line.
```
$ cat user-lists.txt

u1
u2
u3
u4
u5
```
Use the following shell script to lock multiple user accounts in Linux.
```
# user-lock.sh

#!/bin/bash
for user in `cat user-lists.txt`
do
    passwd -l $user
done
```
Set executable permission on the `user-lock.sh` file.

```
# chmod +x user-lock.sh
```
Finally, run the script to achieve this.
```
# sh user-lock.sh

Locking password for user u1.
passwd: Success
Locking password for user u2.
passwd: Success
Locking password for user u3.
passwd: Success
Locking password for user u4.
passwd: Success
Locking password for user u5.
passwd: Success
```
Use the following shell script to check the locked user accounts in Linux.
```
# vi user-lock-status.sh

#!/bin/bash
for user in `cat user-lists.txt`
do
    passwd -S $user
done
```
Set executable permission on the `user-lock-status.sh` file.

```
# chmod +x user-lock-status.sh
```
Finally, run the script to achieve this.
```
# sh user-lock-status.sh

u1 LK 2019-06-10 0 99999 7 -1 (Password locked.)
u2 LK 2019-06-10 0 99999 7 -1 (Password locked.)
u3 LK 2019-06-10 0 99999 7 -1 (Password locked.)
u4 LK 2019-06-10 0 99999 7 -1 (Password locked.)
u5 LK 2019-06-10 0 99999 7 -1 (Password locked.)
```
Use the following shell script to unlock multiple user accounts in Linux.
```
# user-unlock.sh

#!/bin/bash
for user in `cat user-lists.txt`
do
    passwd -u $user
done
```
Set executable permission on the `user-unlock.sh` file.

```
# chmod +x user-unlock.sh
```
Finally, run the script to achieve this.
```
# sh user-unlock.sh

Unlocking password for user u1.
passwd: Success
Unlocking password for user u2.
passwd: Success
Unlocking password for user u3.
passwd: Success
Unlocking password for user u4.
passwd: Success
Unlocking password for user u5.
passwd: Success
```
Run the same `user-lock-status.sh` script to check that these locked user accounts were unlocked in Linux.
```
# sh user-lock-status.sh

u1 PS 2019-06-10 0 99999 7 -1 (Password set, SHA512 crypt.)
u2 PS 2019-06-10 0 99999 7 -1 (Password set, SHA512 crypt.)
u3 PS 2019-06-10 0 99999 7 -1 (Password set, SHA512 crypt.)
u4 PS 2019-06-10 0 99999 7 -1 (Password set, SHA512 crypt.)
u5 PS 2019-06-10 0 99999 7 -1 (Password set, SHA512 crypt.)
```
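As a small extension of the three scripts above, they can be folded into a single script that takes the action as an argument. This is just a sketch; it assumes the same `user-lists.txt` file.

```
# vi user-manage.sh

#!/bin/bash
# usage: sh user-manage.sh lock|unlock|status
action=$1
for user in `cat user-lists.txt`
do
    case $action in
        lock)   passwd -l $user ;;
        unlock) passwd -u $user ;;
        status) passwd -S $user ;;
    esac
done
```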
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/lock-unlock-disable-enable-user-account-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
@ -0,0 +1,169 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "qfzy1233"
|
||||
[#]: reviewer: " "
|
||||
[#]: publisher: " "
|
||||
[#]: url: " "
|
||||
[#]: subject: "Top 5 Linux Distributions for Productivity"
|
||||
[#]: via: "https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity"
|
||||
[#]: author: "Jack Wallen https://www.linux.com/users/jlwallen"
|
||||
|
||||
五个最具生产力的 Linux 发行版
|
||||
======
|
||||
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_main.jpg?itok=2IKyg_7_)
|
||||
|
||||
必须承认的是,这样一个热门的话题其实很难被总结出来。为什么呢?首先,Linux 在设计层面就是一种有生产力的操作系统。由于它极强的可靠性和稳定的平台,使得工作的开展变得简易化。其次为了衡量工作的效率,你需要考虑到哪项工作需要得到生产力方面的助推。是普通办公?开发类工作?学校事务?数据挖掘?或者是人力资源?你可以看到这一问题变得复杂起来了。
|
||||
|
||||
然而,这并不意味着某些发行版不能把底层操作系统配置和呈现得更适合高效地完成工作。恰恰相反,有些发行版能帮你远离干扰、化繁为简,让你不必在工期结束之前拼命赶进度,从而减少你工作流程中的痛点。
|
||||
|
||||
让我们来看一下这些发行版并为你找出适合你的最佳选择。为了更具条理,我按照生产力诉求把他们分成了几类。这项任务本身也是一种挑战,因为每个人在生产力提升上的需要是千差万别的。然而,我所关注的是下列的几项:
|
||||
|
||||
* 普通生产力: 适于从事复杂工作并希望提升工作效率。
|
||||
|
||||
* 平面设计: 适于从事设计创造和图像处理的人们。
|
||||
|
||||
* 开发: 适于那些使用 Linux 桌面发行版来进行编程工作。
|
||||
|
||||
* 管理人员: 适于那些需要某些版本来促进一体化的管理任务的人员。
|
||||
|
||||
  * 教育: 适于那些需要桌面发行版助力他们在教育领域更有创造力的人们。
|
||||
|
||||
|
||||
诚然,有很多很多类别的发行版可供挑选,其中的很多可能用起来十分得心应手,但这五种或许是你最为需要的。
|
||||
|
||||
|
||||
### 普通生产力
|
||||
|
||||
对普通的生产力诉求来说,你不会找到比 [Ubuntu][1] 更为高效的了。在这个类别中首推 Ubuntu 最基础的原因是因为它实现了桌面操作系统、软件、服务的无缝集成。你可能会问为什么我不选择同类别的 Linux Mint 呢?因为 Ubuntu 现在默认的的桌面环境为 GNOME 桌面,而它拥有 GNOME 许多扩展程序的优势的加成(图 1)。
|
||||
|
||||
|
||||
![GNOME Clipboard][3]
|
||||
|
||||
图 1:运行中的 GNOME 桌面的剪切板管理工具。
|
||||
|
||||
[经许可使用][4]
|
||||
|
||||
这些扩展程序在提升生产力方面做了很多努力(所以 Ubuntu 比 Linux Mint 获得了更多的认可)。但是 Ubuntu 不仅仅支持 vanilla 版本的 GNOME 桌面。事实上,他们致力于将它改进的更为轻量化、更为高效、以及用户友好度更高、开箱即用。总而言之,由于 Ubuntu 正确的融合了多种特性,开箱即用,完善的软件支持(仅对工作方面而言),这些特性使它几乎成为了生产力领域最为完美的一个平台。
|
||||
|
||||
不管你是要写一篇文档、制作一张电子表格、编写一个新的软件、开发公司的网站、设计商用的图形、管理一个服务器或网络,抑或是在你的公司内从事人力资源管理工作,Ubuntu 都可以满足你的需求。Ubuntu 桌面发行版也并不要求你耗费很大的精力才能开展工作……它只是纯粹地工作(并且十分优秀)。最后,得益于 Debian 的基础,在 Ubuntu 上安装第三方软件十分简便。
|
||||
|
||||
很难反对这一发行版在生产力发行版列表中独占鳌头,毕竟 Ubuntu 几乎已经成为所有“顶级发行版”列表的榜首。
|
||||
|
||||
### 平面设计
|
||||
|
||||
如果你正在寻求提升你的平面设计效率,你不能错过[Fedora设计套件][5]。这一 Fedora 的衍生版是由负责 Fedora 艺术类项目的团队亲自操刀制作的。虽然默认选择的应用程序并不是一个庞大的工具集合,但它所包含的工具都是创建和处理图像专用的。
|
||||
|
||||
有了GIMP、Inkscape、Darktable、Krita、Entangle、Blender、Pitivi、Scribus等应用程序(图 2),您将发现完成图像编辑工作所需要的一切都已经准备好了,而且准备得很好。但是Fedora设计套件并不仅限于此。这个桌面平台还包括一堆教程,涵盖了许多已安装的应用程序。对于任何想要尽可能提高效率的人来说,这将是一些非常有用的信息。不过,我要说的是,GNOME Favorites中的教程不过是[此页][6]链接的内容。
|
||||
|
||||
![Fedora Design Suite Favorites][8]
|
||||
|
||||
图 2: Fedora Design Suite Favorites菜单包含了许多工具,可以让您用于图形设计。
|
||||
|
||||
[经许可使用][4]
|
||||
|
||||
那些使用数码相机的用户肯定会喜欢 Entangle 应用程序,它可以让你在电脑上控制单反相机。
|
||||
|
||||
### 开发人员
|
||||
|
||||
几乎所有的Linux发行版对于程序员来说都是很好的编程平台。然而,有一种特定的发行版脱颖而出,并超越了其他发行版,它将是您见过的用于编程类最有效率的工具之一。这个操作系统来自[System76][9](译注:一家美国的计算机制造商),名为[Pop!_OS][10]。Pop!_OS是专门为创作者定制的,但不是针对艺术类。相反,Pop!_OS面向专门从事开发、编程和软件制作的程序员。如果您需要一个既能完美的胜任开发平台又包含桌面操作系统的开发环境,Pop!_OS 将会是您的不二选择。 (图 3)
|
||||
|
||||
可能会让您感到惊讶(考虑到这个操作系统是多么“年轻”)的是,Pop!_OS 也是您使用过的基于 GNOME 平台的最稳定的系统之一。这意味着 Pop!_OS 不只是为创造者和制造者准备的,也是为任何想要一个可靠的操作系统的人准备的。你可以下载针对你的硬件的专门 ISO 文件,这一点是许多用户十分欣赏的。如果你用的是 Intel 或 AMD 硬件,[下载][10]对应的版本;如果您的显卡是 NVIDIA,请下载该特定版本。不管怎样,您肯定会得到针对自己平台特殊定制的稳定版本。
|
||||
|
||||
![Pop!_OS][12]
|
||||
|
||||
图 3: 装有 GNOME 桌面的 Pop!_OS 一览。
|
||||
|
||||
[经许可使用][4]
|
||||
|
||||
有趣的是,在 Pop!_OS 中,您不会找到太多预装的开发工具,也不会找到 IDE 或许多其他开发工具。但是,您可以在 Pop 商店中找到所需的所有开发工具。
|
||||
|
||||
### 管理人员
|
||||
|
||||
如果你正在寻找最适合管理工作的高生产力发行版,[Debian][13] 将会是你的不二之选。为什么这么说呢?因为 Debian 不仅拥有无与伦比的可靠性,它还是最不会妨碍你工作的发行版之一。Debian 是易用性和无限可能性的完美结合。最重要的是,因为它是许多其他发行版的基础,所以可以打赌的是,如果你需要某个管理任务的工具,那么它一定支持 Debian 系统。当然,我们讨论的是一般管理任务,这意味着大多数时候你需要使用终端窗口 SSH 连接到服务器(图 4),或者在浏览器上使用基于 Web 的 GUI 工具。既然如此,为什么还要使用一个可能带来额外复杂性的系统(比如 Fedora 中的 SELinux 或 openSUSE 中的 YaST)呢?所以,应选择更为简洁易用的那一种。
|
||||
![Debian][15]
|
||||
|
||||
图 4: 在 Debian 系统上通过SSH 连接到远程服务器。
|
||||
|
||||
[经授权使用][4]
|
||||
|
||||
你可以选择你想要的不同的桌面(包括GNOME, Xfce, KDE, Cinnamon, MATE, LXDE),确保你所使用的桌面外观最适合你的工作习惯。
|
||||
|
||||
### 教育
|
||||
|
||||
如果你是一名老师或者学生,抑或是其他从事教育相关工作的人士,你需要适当的工具来变得更具创造力。之前有 Edubuntu 这样的发行版,它位列教育类发行版排名的前列。然而,自 Ubuntu 14.04 版之后这一发行版就再也没有更新。还好,现在有一款新的基于 openSUSE 的教育发行版有望摘得桂冠。这一发行版叫做 [openSUSE:Education-Li-f-e][16](Linux For Education,见图 5),它基于 openSUSE Leap 42.1(所以它可能稍微有一点过时)。
|
||||
|
||||
openSUSE:Education-Li-f-e 包含了以下工具:
|
||||
|
||||
* Brain Workshop(大脑工坊) - 一种基于 dual n-back 模式的大脑训练软件(译注:dual n-back 训练是一种科学的智力训练方法,可以改善人的工作记忆和流体智力)
|
||||
|
||||
* GCompris - 一种针对青少年的教育软件包
|
||||
|
||||
* gElemental - 一款元素周期表查看工具
|
||||
|
||||
* iGNUit - 一款通用的记忆卡片工具
|
||||
|
||||
* Little Wizard - 基于 Pascal 语言的少儿编程开发环境
|
||||
|
||||
* Stellarium - 天文模拟器
|
||||
|
||||
* TuxMath - 数学入门游戏
|
||||
|
||||
* TuxPaint - 一款少儿绘画软件
|
||||
|
||||
* TuxType - 一款为少儿准备的打字入门软件
|
||||
|
||||
* wxMaxima - 一个跨平台的计算机代数系统
|
||||
|
||||
* Inkscape - 矢量图形编辑软件
|
||||
|
||||
* GIMP - 图像处理软件(译注:被誉为 Linux 上的 PhotoShop)
|
||||
|
||||
* Pencil - GUI 模型制作工具
|
||||
|
||||
* Hugin - 全景照片拼接及 HDR 效果混合软件
|
||||
|
||||
|
||||
![Education][18]
|
||||
|
||||
图 5: openSUSE:Education-Li-f-e 发行版拥有大量的工具可以帮你在学校中变得更为高效。
|
||||
|
||||
[经许可使用][4]
|
||||
|
||||
openSUSE:Education-Li-f-e 中还集成了 [KIWI-LTSP Server][19]。KIWI-LTSP 服务器是一个灵活的、高性价比的解决方案,旨在让全世界的学校、企业和组织能够轻松地安装和部署桌面工作站。虽然这可能不会直接帮助学生变得更具创造力,但它肯定会使教育机构在部署供学生使用的桌面时更有效率。有关配置 KIWI-LTSP 的更多信息,请查看 openSUSE 的 [KIWI-LTSP 快速入门指南][20]。
|
||||
|
||||
通过 Linux 基金会和 edX 的免费["入门介绍"][21]课程来了解更多关于 Linux 的知识。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[qfzy1233](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/jlwallen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ubuntu.com/
|
||||
[2]: /files/images/productivity1jpg
|
||||
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_1.jpg?itok=yxez3X1w "GNOME Clipboard"
|
||||
[4]: /licenses/category/used-permission
|
||||
[5]: https://labs.fedoraproject.org/en/design-suite/
|
||||
[6]: https://fedoraproject.org/wiki/Design_Suite/Tutorials
|
||||
[7]: /files/images/productivity2jpg
|
||||
[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_2.jpg?itok=ke0b8qyH "Fedora Design Suite Favorites"
|
||||
[9]: https://system76.com/
|
||||
[10]: https://system76.com/pop
|
||||
[11]: /files/images/productivity3jpg-0
|
||||
[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_3_0.jpg?itok=8UkCUfsD "Pop!_OS"
|
||||
[13]: https://www.debian.org/
|
||||
[14]: /files/images/productivity4jpg
|
||||
[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_4.jpg?itok=c9yD3Xw2 "Debian"
|
||||
[16]: https://en.opensuse.org/openSUSE:Education-Li-f-e
|
||||
[17]: /files/images/productivity5jpg
|
||||
[18]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_5.jpg?itok=oAFtV8nT "Education"
|
||||
[19]: https://en.opensuse.org/Portal:KIWI-LTSP
|
||||
[20]: https://en.opensuse.org/SDB:KIWI-LTSP_quick_start
|
||||
[21]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
|
83
translated/tech/20190301 Emacs for (even more of) the win.md
Normal file
83
translated/tech/20190301 Emacs for (even more of) the win.md
Normal file
@ -0,0 +1,83 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (oneforalone)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Emacs for (even more of) the win)
|
||||
[#]: via: (https://so.nwalsh.com/2019/03/01/emacs)
|
||||
[#]: author: (Norman Walsh https://so.nwalsh.com)
|
||||
|
||||
Emacs 的胜利(或是更多)
|
||||
======
|
||||
|
||||
我天天用 Emacs,却很少注意到它。而每当我注意到它时,它总能给我带来很多乐趣。
|
||||
|
||||
>如果你是个职业作家……Emacs 与其它的编辑器相比就如皓日与群星一样。它不仅更大、更亮,还轻而易举就让其他所有的东西都消失了。
|
||||
|
||||
我用 [Emacs][1] 已有二十多年了。我用它来写几乎所有的东西(Scala 和 Java 我用 [IntelliJ][2])。看邮件的话我是能在 Emacs 里看就在里面看。
|
||||
|
||||
尽管我用 Emacs 已有数十年,我在新年前后才意识到,在过去10年或更长时间里,我对 Emacs 的使用几乎没有什么变化。当然,新的编辑模式出现了,我就会选一两个插件,几年前我确实是用了 [Helm][3],但大多数时候,它只是完成了我需要的所有繁重工作,日复一日,没有抱怨,也没有妨碍我。一方面,这证明了它有多好。另一方面,这是一个邀请,让我深入挖掘,看看我错过了什么。
|
||||
|
||||
于此同时,我也决定从以下几方面改进我的工作方式:
|
||||
|
||||
  * **更好的议程管理** 我在工作中负责几个项目,这些项目有定期和临时的会议;有些是我主持的,有些我只需参加。
|
||||
|
||||
我意识到我对开会变得草率起来了。人坐在会议室里,实际上却在阅读电子邮件、处理其他事情,这太容易发生了。(我强烈反对在会议中“禁止携带笔记本电脑”的规定,但这就是另一个话题了。)
|
||||
|
||||
草率地开会有几个问题。首先,这是对主持会议的人和其他参与者的不尊重。这本身其实就是不该这么做的充分理由,但我还意识到另一个问题:它掩盖了会议的成本。
|
||||
|
||||
如果你在开会,但同时还要回复电子邮件,也许还要改 bug,那么这个会议就不需要花费任何东西(或同样多的钱)。如果会议成本低廉,那么会议数量将会更多。
|
||||
|
||||
我想要少点、短些的会议。我不想忽视它们的成本,我想让开会变得很有价值,除非绝对必要,否则就可以避免。
|
||||
|
||||
有时,开会是很有必要的。而且我认为一个简短的会能够很快的解决问题。但是,如果我一天有十个短会的话,那还是不要说我做了些有成果的事吧。
|
||||
|
||||
我决定在我参加的所有的会上做笔记。我并不是说一定要做会议记录,而是我在做某种会议记录。这会让我把注意力集中在开会上,而忽略其他事。
|
||||
|
||||
* **更好的时间管理** 我有很多要做和想做的事,或工作的或私人的。之前,我有在问题清单和邮件进程(Emacs 和 [Gmail][4] 中,用于一些稍微不同的提醒)、日历、手机上各种各样的“待办事项列表”和小纸片上记录过它们。可能还有其他地方。
|
||||
|
||||
我决定把它们放在一起。不是说我认为放在一个地方就一定最好,而是我想达成两件事。首先,把它们都放在一个地方,我能对自己把精力花在哪里有一个更好、更全面的认识。第二,是因为我想养成一个习惯(习惯:固定的或有规律的倾向或行为,尤指难以放弃的):记录、跟踪并保存它们。
|
||||
|
||||
  * **更好的笔记** 如果你在某些科学或工程领域工作,你就会养成记笔记的习惯。唉,我没有。但我决定这么做。
|
||||
|
||||
我对法律上鼓励装订页面或做永久标记并不感兴趣。我感兴趣的是养成做记录的习惯。我的目标是有一个地方记下想法和设计草图等。如果我突然有了灵感,或者我想到了一个不在测试套件中的边缘案例,我希望我的本能是把它写在我的日志中,而不是草草写在一张小纸片上,或者向自己保证我会记住它。
|
||||
|
||||
|
||||
|
||||
这些决心让我很快就或多或少地转向了 [Org][6]。Org 有一个庞大、活跃而忠诚的用户社区。我以前也用过它(顺带一提,至少在几年前我就[写过][7]关于它的文章),也花了很长一段时间将 [MarkLogic 集成][8]到其中。(天哪,这在过去的一两个星期里得到了回报!)
|
||||
|
||||
但我从没真正用过 Org。
|
||||
|
||||
我现在正在用它。只花了几分钟,我就把所有要做的事情都记录了下来,还记了日记。我不确定试图界定它的范围或列举它的所有特性有多大价值,你可以通过网页搜索快速找到很多相关内容。
|
||||
|
||||
如果你用 Emacs,那你也应该用 Org。如果没用过Emacs,我相信你不会是第一个因 Org 而使用 Emacs 的人。Org 可以做很多。它需要一点时间来学习你的方法和快捷键,但我认为这是值得的。(如果你的口袋中有一台 [iOS][9] 设备,我推荐你在忙的时候使用 [beorg][10] 来记录。)
|
||||
|
||||
当然,我还琢磨出了如何[将 XML 从其中提取出来][11](“琢磨出”其实是“用 elisp 来编程”的一种有趣的说法),然后将它转换回我的 weblog 所期望的标记(当然,在 Emacs 中按一个按键就可以做到)。这是第一篇用 Org 写的帖子,也不会是最后一篇。
|
||||
|
||||
附注:生日快乐,[小博客][12]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://so.nwalsh.com/2019/03/01/emacs
|
||||
|
||||
作者:[Norman Walsh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[oneforalone](https://github.com/oneforalone)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://so.nwalsh.com
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Emacs
|
||||
[2]: https://en.wikipedia.org/wiki/IntelliJ_IDEA
|
||||
[3]: https://emacs-helm.github.io/helm/
|
||||
[4]: https://en.wikipedia.org/wiki/Gmail
|
||||
[5]: https://en.wikipedia.org/wiki/Lab_notebook
|
||||
[6]: https://en.wikipedia.org/wiki/Org-mode
|
||||
[7]: https://www.balisage.net/Proceedings/vol17/html/Walsh01/BalisageVol17-Walsh01.html
|
||||
[8]: https://github.com/ndw/ob-ml-marklogic/
|
||||
[9]: https://en.wikipedia.org/wiki/IOS
|
||||
[10]: https://beorgapp.com/
|
||||
[11]: https://github.com/ndw/org-to-xml
|
||||
[12]: https://so.nwalsh.com/2017/03/01/helloWorld
|
@ -0,0 +1,187 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Create a Custom System Tray Indicator For Your Tasks on Linux)
|
||||
[#]: via: (https://fosspost.org/tutorials/custom-system-tray-icon-indicator-linux)
|
||||
[#]: author: (M.Hanny Sabbagh https://fosspost.org/author/mhsabbagh)
|
||||
|
||||
在 Linux 上为你的任务创建一个自定义的系统托盘指示器
|
||||
======
|
||||
|
||||
时至今日系统托盘图标依然很有用。只需要右击图标,然后选择想要的动作即可,你可以大幅简化你的生活并且减少日常行为中的大量无用的点击。
|
||||
|
||||
一说到有用的系统托盘图标,我们很容易就想到 Skype,Dropbox 和 VLC:
|
||||
|
||||
![Create a Custom System Tray Indicator For Your Tasks on Linux 11][1]
|
||||
|
||||
然而系统托盘图标实际上要更有用得多; 你可以根据自己的需求创建自己的系统托盘图标。本指导将会教你通过简单的几个步骤来实现这一目的。
|
||||
|
||||
### 前置条件
|
||||
|
||||
我们将要用 Python 来实现一个自定义的系统托盘指示器。Python 默认在所有主流的 Linux 发行版中都有安装,因此你只需要确认一下它确实已经装好了(版本为 2.7)。另外,我们还需要安装好 gir1.2-appindicator3 包。该库能够让我们很容易地创建系统托盘指示器。
|
||||
|
||||
在 Ubuntu/Mint/Debian 上安装:
|
||||
|
||||
```
sudo apt-get install gir1.2-appindicator3
```
|
||||
|
||||
在 Fedora 上安装:
|
||||
|
||||
```
sudo dnf install libappindicator-gtk3
```
|
||||
|
||||
对于其他发行版,只需要搜索包含 appindicator 的包就行了。
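例如,在 Arch 系的发行版上,可以先用 pacman 搜索一下(具体的包名以你的发行版的搜索结果为准):

```
pacman -Ss appindicator
```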
|
||||
|
||||
从 GNOME Shell 3.26 开始,系统托盘图标被删除了。你需要安装[这个扩展][2](或者其他扩展)来为桌面启用该功能,否则你无法看到我们创建的指示器。
|
||||
|
||||
### 基础代码
|
||||
|
||||
下面是指示器的基础代码:
|
||||
|
||||
```
#!/usr/bin/python
import os
from gi.repository import Gtk as gtk, AppIndicator3 as appindicator

def main():
    indicator = appindicator.Indicator.new("customtray", "semi-starred-symbolic", appindicator.IndicatorCategory.APPLICATION_STATUS)
    indicator.set_status(appindicator.IndicatorStatus.ACTIVE)
    indicator.set_menu(menu())
    gtk.main()

def menu():
    menu = gtk.Menu()

    command_one = gtk.MenuItem('My Notes')
    command_one.connect('activate', note)
    menu.append(command_one)

    exittray = gtk.MenuItem('Exit Tray')
    exittray.connect('activate', quit)
    menu.append(exittray)

    menu.show_all()
    return menu

def note(_):
    os.system("gedit $HOME/Documents/notes.txt")

def quit(_):
    gtk.main_quit()

if __name__ == "__main__":
    main()
```
|
||||
|
||||
我们待会会解释一下代码是怎么工作的。但是现在,让我们将该文本保存为 tray.py,然后使用 Python 运行之:
|
||||
|
||||
```
python tray.py
```
|
||||
|
||||
我们会看到指示器运行起来了,如下图所示:
|
||||
|
||||
![Create a Custom System Tray Indicator For Your Tasks on Linux 13][3]
|
||||
|
||||
现在,让我们解释一下魔术的原理:
|
||||
|
||||
* 前三行代码仅仅用来指明 Python 的路径并且导入需要的库。
|
||||
|
||||
* def main() : 此为指示器的主函数。该函数的代码用来初始化并创建指示器。
|
||||
|
||||
  * indicator = appindicator.Indicator.new(“customtray”, “semi-starred-symbolic”, appindicator.IndicatorCategory.APPLICATION_STATUS) : 这里我们指明创建一个名为 `customtray` 的指示器。这是指示器的唯一名称,这样系统就不会与其他运行中的指示器搞混。同时我们使用名为 `semi-starred-symbolic` 的图标作为指示器的默认图标。你可以将之改成任何其他值,比如 `firefox`(如果你希望该指示器使用 Firefox 的图标),或任何其他希望的图标名。最后,与 `APPLICATION_STATUS` 相关的部分是指明指示器类别/范围的常规代码。
|
||||
|
||||
* `indicator.set_status(appindicator.IndicatorStatus.ACTIVE)` : 这一行激活指示器。
|
||||
|
||||
  * `indicator.set_menu(menu())` : 这里我们指定使用 `menu()` 函数(我们会在后面定义)来为我们的指示器创建菜单项。这很重要,它让你在右击指示器后能看到一个可执行动作的列表。
|
||||
|
||||
* `gtk.main()` : 运行 GTK 主循环。
|
||||
|
||||
* 在 `menu()` 中我们定义了想要指示器提供的行为或项目。`command_one = gtk.MenuItem(‘My Notes’)` 仅仅使用文本 “My notes” 来初始化第一个菜单项,接下来 `command_one.connect(‘activate’,note)` 将菜单的 `activate` 信号与后面定义的 `note()` 函数相连接; 换句话说,我们告诉我们的系统:“当该菜单项被点击,运行 note() 函数”。最后,`menu.append(command_one)` 将菜单项添加到列表中。
|
||||
|
||||
* `exittray` 相关的行是为了创建一个退出的菜单项让你在想要的时候关闭指示器。
|
||||
|
||||
* `menu.show_all()` 以及 `return menu` 只是返回菜单项给指示器的常规代码。
|
||||
|
||||
* 在 `note(_)` 下面是点击 “My Notes” 菜单项时需要执行的代码。这里只是 `os.system(“gedit $HOME/Documents/notes.txt”)` 这一句话; `os.system` 函数允许你在 Python 中运行 shell 命令,因此这里我们写了一行命令来使用 `gedit` 打开 home 目录下 `Documents` 目录中名为 `notes.txt` 的文件。例如,这个可以称为你今后的日常笔记程序了!
|
||||
|
||||
### 添加你所需要的任务
|
||||
|
||||
你只需要修改代码中的两块地方:
|
||||
|
||||
1、在 `menu()` 中为你想要的任务定义新的菜单项。

2、创建一个新的函数,在该菜单项被点击时执行特定的行为。
|
||||
|
||||
|
||||
所以,比如说你想要创建一个新菜单项,点击后会使用 VLC 播放硬盘中某个特定的视频/音频文件?要做到这一点,只需要在第 17 行处添加下面三行内容:
|
||||
|
||||
```
command_two = gtk.MenuItem('Play video/audio')
command_two.connect('activate', play)
menu.append(command_two)
```
|
||||
|
||||
然后在第 30 行添加下面内容:
|
||||
|
||||
```
def play(_):
    os.system("vlc /home/<username>/Videos/somevideo.mp4")
```
|
||||
|
||||
将 /home/<username>/Videos/somevideo.mp4 替换成你想要播放的视频/音频文件路径。现在保存该文件然后再次运行该指示器:
|
||||
|
||||
```
python tray.py
```
|
||||
|
||||
你将会看到:
|
||||
|
||||
![Create a Custom System Tray Indicator For Your Tasks on Linux 15][4]
|
||||
|
||||
而且当你点击新创建的菜单项时,VLC 会开始播放!
|
||||
|
||||
要创建其他项目/任务,只需要重复上面的步骤即可。但是要小心,需要为 command_two 换一个名字,比如 command_three,这样变量之间才不会产生冲突。然后定义新函数,就像 play(_) 函数那样。
|
||||
|
||||
从这里开始的可能性是无穷的; 比如我用这种方法来从网上获取数据(使用 urllib2 库) 并显示出来。我也用它来在后台使用 mpg123 命令播放 mp3 文件,而且我还定义了另一个菜单项来杀掉所有的 mpg123 来随时停止播放音频。比如 Steam 上的 CS:GO 退出很费时间(窗口并不会自动关闭),因此,作为一个变通的方法,我只是最小化窗口然后点击某个自建的菜单项,它会执行 killall -9 csgo_linux64 命令。
|
||||
|
||||
你可以使用这个指示器来做任何事情:升级系统包,运行其他脚本。字面上的任何事情。
|
||||
|
||||
### 自动启动
|
||||
|
||||
我们希望系统托盘指示器能在系统启动后自动启动,而不用每次都手工运行。要做到这一点,只需要在自启动应用程序中添加下面命令即可(但是你需要将 tray.py 的路径替换成你自己的路径):
|
||||
|
||||
```
nohup python /home/<username>/tray.py &
```
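另一种做法是使用标准的 XDG 自动启动机制,即创建一个 .desktop 文件。下面只是一个示意:文件名可以任取,Exec 中的路径需要换成你自己的 tray.py 路径。

```
# 创建自动启动目录并写入一个 .desktop 条目(文件名仅为示例)
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/tray.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Custom Tray Indicator
Exec=python /home/<username>/tray.py
EOF
```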
|
||||
|
||||
下次重启系统,指示器会在系统启动后自动开始工作了!
|
||||
|
||||
### 结论
|
||||
|
||||
你现在知道了如何为你想要的任务创建自己的系统托盘指示器了。根据每天需要运行的任务的性质和数量,此方法可以节省大量时间。有些人偏爱从命令行创建别名,但是这需要你每次都打开终端窗口或者需要有一个可用的下拉式终端仿真器,而这里,这个系统托盘指示器一直在工作,随时可用。
|
||||
|
||||
你以前用过这个方法来运行你的任务吗?很想听听你的想法。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fosspost.org/tutorials/custom-system-tray-icon-indicator-linux
|
||||
|
||||
作者:[M.Hanny Sabbagh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fosspost.org/author/mhsabbagh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/02/Screenshot-at-2019-02-28-0808.png?resize=407%2C345&ssl=1 (Create a Custom System Tray Indicator For Your Tasks on Linux 12)
|
||||
[2]: https://extensions.gnome.org/extension/1031/topicons/
|
||||
[3]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/03/Screenshot-at-2019-03-02-1041.png?resize=434%2C140&ssl=1 (Create a Custom System Tray Indicator For Your Tasks on Linux 14)
|
||||
[4]: https://i2.wp.com/fosspost.org/wp-content/uploads/2019/03/Screenshot-at-2019-03-02-1141.png?resize=440%2C149&ssl=1 (Create a Custom System Tray Indicator For Your Tasks on Linux 16)
|
@ -0,0 +1,118 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (VSCodium: 100% Open Source Version of Microsoft VS Code)
|
||||
[#]: via: (https://itsfoss.com/vscodium/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
VSCodium:Microsoft VS Code 的 100% 开源版本
|
||||
======
|
||||
|
||||
_ **简介:VSCodium 是微软流行的 Visual Studio Code 编辑器的一个分支。它与 VS Code 完全相同,唯一不同的是,VSCodium 不跟踪你的使用数据。** _
|
||||
|
||||
微软的 [Visual Studio Code][1] 是一个出色的编辑器,不仅对于 Web 开发人员,也适合其他程序员。由于它的功能,它被认为是最好的开源代码编辑器之一。
|
||||
|
||||
是的,它是微软众多开源产品之一。因为有 DEB、RPM 和 Snap 包形式的二进制文件,你可以[在 Linux 中轻松安装 Visual Studio Code][2]。
|
||||
|
||||
但它存在一个问题,对于普通用户而言可能不是问题,但对于纯粹开源主义者而言是重要的。
|
||||
|
||||
Microsoft 提供的即用二进制文件不是开源的。
|
||||
|
||||
感到困惑么?让我解释下。
|
||||
|
||||
VS Code 的源码是在 MIT 许可下开源的。你可以在 [GitHub][3] 上访问它。但是,[Microsoft 创建的安装包含专有的跟踪程序][4]。
|
||||
|
||||
此跟踪基本上用来收集使用数据并将其发送给 Microsoft 以“帮助改进其产品和服务”。如今,远程报告在软件产品中很常见。即使 [Ubuntu 也这样做,但它透明度更高][5]。
|
||||
|
||||
你可以[在 VS Code 中禁用远程报告][6],但是你能完全信任微软吗?如果答案是否定的,那你有什么选择?
|
||||
|
||||
你可以从源代码构建它,从而保持所有开源。但是[从源代码安装][7]并不总是如今最好的选择,因为我们习惯于使用二进制文件。
|
||||
|
||||
另一种选择是使用 VSCodium!
|
||||
|
||||
### VSCodium:100% 开源形式的 Visual Studio Code
|
||||
|
||||
![][8]
|
||||
|
||||
[VSCodium][9] 是微软 Visual Studio Code 的一个分支。该项目的唯一目的是为你提供现成的二进制文件,而没有 Microsoft 的远程收集代码。
|
||||
|
||||
这解决了一个问题:你想在没有 Microsoft 专有代码的情况下使用 VS Code,但又不习惯从源代码构建它。
|
||||
|
||||
由于 [VSCodium 是 VS Code 的一个分支][11],它的外观和功能与 VS Code 完全相同。
|
||||
|
||||
这是 Ubuntu 中第一次运行 VS Code 和 VSCodium 的截图。你能分辨出来吗?
|
||||
|
||||
![Can you guess which is VSCode and VSCodium?][12]
|
||||
|
||||
如果你无法区分这两者,请看下面。
|
||||
|
||||
![That’s Microsoft][13]
|
||||
|
||||
除此之外,还有两个应用的 logo,没有其他明显的区别。
|
||||
|
||||
![VSCodium and VS Code in GNOME Menu][14]
|
||||
|
||||
#### 在 Linux 上安装 VSCodium
|
||||
|
||||
虽然 VSCodium 存在于某些发行版(如 Parrot OS)中,但你必须在其他 Linux 发行版中添加额外的仓库。
|
||||
|
||||
在基于 Ubuntu 和 Debian 的发行版上,你可以使用以下命令安装 VSCodium。
|
||||
|
||||
首先,添加仓库的 GPG 密钥:
|
||||
|
||||
```
wget -qO - https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg | sudo apt-key add -
```
|
||||
|
||||
然后添加仓库:
|
||||
|
||||
```
echo 'deb https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/repos/debs/ vscodium main' | sudo tee --append /etc/apt/sources.list.d/vscodium.list
```
|
||||
|
||||
现在更新你的系统并安装 VSCodium:
|
||||
|
||||
```
sudo apt update && sudo apt install codium
```
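安装完成后,可以用下面的命令快速确认一下(这里假设安装后的可执行文件名与软件包同名,即 codium):

```
codium --version
```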
|
||||
|
||||
你可以在它的页面上找到[其他发行版的安装说明][15]。你还应该阅读[有关从 VS Code 迁移到 VSCodium 的说明][16]。
|
||||
|
||||
**你如何看待 VSCodium?**
|
||||
|
||||
就个人而言,我喜欢 VSCodium 的概念。说的老套一点,它的初心是好的。我认为,致力于开源的 Linux 发行版甚至可能开始将其包含在官方仓库中。
|
||||
|
||||
你怎么看?是否值得切换到 VSCodium 或者你选择关闭远程报告并继续使用 VS Code?
|
||||
|
||||
请不要出现“我使用 Vim” 的评论 :D
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/vscodium/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://code.visualstudio.com/
|
||||
[2]: https://itsfoss.com/install-visual-studio-code-ubuntu/
|
||||
[3]: https://github.com/Microsoft/vscode
|
||||
[4]: https://github.com/Microsoft/vscode/issues/60#issuecomment-161792005
|
||||
[5]: https://itsfoss.com/ubuntu-data-collection-stats/
|
||||
[6]: https://code.visualstudio.com/docs/supporting/faq#_how-to-disable-telemetry-reporting
|
||||
[7]: https://itsfoss.com/install-software-from-source-code/
|
||||
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/vscodium.png?resize=800%2C450&ssl=1
|
||||
[9]: https://vscodium.com/
|
||||
[11]: https://github.com/VSCodium/vscodium
|
||||
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/vscodium-vs-vscode.png?resize=800%2C450&ssl=1
|
||||
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/microsoft-vscode-tracking.png?resize=800%2C259&ssl=1
|
||||
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/vscodium-and-vscode.jpg?resize=800%2C220&ssl=1
|
||||
[15]: https://vscodium.com/#install
|
||||
[16]: https://vscodium.com/#migrate
|
@ -7,28 +7,28 @@
|
||||
[#]: via: (https://www.2daygeek.com/linux-remove-delete-unwanted-junk-files-free-up-space-ubuntu-mint-debian/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
5 Easy Ways To Free Up Space (Remove Unwanted or Junk Files) on Ubuntu
|
||||
5种简单的方法来在 Ubuntu 上释放空间(移除不想要的或没有用的文件)
|
||||
======
|
||||
|
||||
Most of us may perform this action whenever we fall into out of disk space on system.
|
||||
我们中的大多数人可能在系统磁盘存储不足的情况下执行这个操作。
|
||||
|
||||
Most of us may perform this action whenever we are running out of space on Linux system
|
||||
我们中的大多数人可能在 Linux 系统磁盘存储不足的情况下执行这个操作。
|
||||
|
||||
It should be performed frequently, to make space for installing a new application and dealing with other files.
|
||||
它应该被经常执行,来为安装一个新的应用程序和处理其它文件弥补磁盘存储空间。
|
||||
|
||||
Housekeeping is one of the routine task of Linux administrator, which allow them to maintain the disk utilization is in under threshold.
|
||||
日常清理是 Linux 管理员的例行任务之一,以便将磁盘使用率维持在阈值之下。
|
||||
|
||||
There are several ways we can clean up our system space.
|
||||
这里有一些我们可以清理我们系统空间的方法。
|
||||
|
||||
There is no need to clean up your system when you have TB of storage capacity.
|
||||
当你有 TB 级存储容量时,不需要清理你的系统。
|
||||
|
||||
But if your have limited space then freeing up disk space becomes a necessity.
|
||||
但是,如果你的空间有限,那么释放磁盘空间就变得不可避免。
|
||||
|
||||
In this article, I’ll show you some of the easiest or simple ways to clean up your Ubuntu system and get more space.
|
||||
在这篇文章中,我将向你展示一些最容易的或简单的方法来清理你的 Ubuntu 系统,获得更多空间。
|
||||
|
||||
### How To Check Free Space On Ubuntu Systems?
|
||||
### 在 Ubuntu 系统上如何检查可用的空间?
|
||||
|
||||
Use **[df Command][1]** to check current disk utilization on your system.
|
||||
在你的系统上使用 **[df 命令][1]** 来检查当前磁盘利用率。
|
||||
|
||||
```
|
||||
$ df -h
|
||||
@ -41,18 +41,18 @@ tmpfs 5.0M 4.0K 5.0M 1% /run/lock
|
||||
tmpfs 997M 0 997M 0% /sys/fs/cgroup
|
||||
```
|
||||
|
||||
GUI users can use “Disk Usage Analyzer tool” to view current usage.
|
||||
图形界面用户可以使用“磁盘利用率分析器工具”来查看当前利用率。
|
||||
[![][2]![][2]][3]
|
||||
|
||||
### 1) Remove The Packages That Are No Longer Required
|
||||
### 1) 移除不再需要的软件包
|
||||
|
||||
The following command removes the dependency libs and packages that are no longer required by the system.
|
||||
下面的命令移除系统不再需要依赖的库和软件包。
|
||||
|
||||
These packages were installed automatically to satisfy the dependencies of an installed package.
|
||||
这些软件包是为满足某个已安装软件包的依赖关系而被自动安装的。
|
||||
|
||||
Also, it removes old Linux kernels that were installed in the system.
|
||||
同样,它移除安装在系统中的旧的 Linux 内核。
|
||||
|
||||
It removes orphaned packages which are not longer needed from the system, but not purges them.
|
||||
它移除不再被系统需要的孤立的软件包,但是不清除它们。
|
||||
|
||||
```
|
||||
$ sudo apt-get autoremove
|
||||
@ -71,7 +71,7 @@ After this operation, 189 MB disk space will be freed.
|
||||
Do you want to continue? [Y/n]
|
||||
```
|
||||
|
||||
To purge them, use the `--purge` option together with the command for that.
|
||||
为清除它们,与命令一起使用 `--purge` 选项。
|
||||
|
||||
```
|
||||
$ sudo apt-get autoremove --purge
|
||||
@ -90,67 +90,67 @@ After this operation, 189 MB disk space will be freed.
|
||||
Do you want to continue? [Y/n]
|
||||
```
|
||||
|
||||
### 2) Empty The Trash Can
|
||||
### 2) 清空回收站
|
||||
|
||||
There might a be chance, that you may have a large amount of useless data residing in your trash can.
|
||||
很有可能,你的回收站中存放着大量无用的数据。
|
||||
|
||||
It takes up your system space. This is one of the best way to clear up those and get some free space on your system.
|
||||
它们会占用你的系统空间。清空回收站是清理这些无用数据、为系统腾出空间的最好方法之一。
|
||||
|
||||
To clean up this, simple use the file manager to empty your trash can.
|
||||
为清理这些,简单地使用文件管理器来清空你的回收站。
|
||||
[![][2]![][2]][4]
|
||||
|
||||
### 3) Clean up the APT cache
|
||||
### 3) 清理 APT 缓存文件
|
||||
|
||||
Ubuntu uses **[APT Command][5]** (Advanced Package Tool) for package management like installing, removing, searching, etc,.
|
||||
Ubuntu 使用 **[APT 命令][5]** (高级软件包工具)用于软件包管理,像:安装,移除,搜索等等。
|
||||
|
||||
By default every Linux operating system keeps a cache of downloaded and installed packages on their respective directory.
|
||||
默认情况下,每个 Linux 操作系统都会在各自的目录中保留已下载和已安装软件包的缓存。
|
||||
|
||||
Ubuntu also does the same, it keeps every updates it downloads and installs in a cache on your disk.
|
||||
Ubuntu 也做同样的事,它以缓存的形式在你的磁盘上保留它下载和安装的每次更新。
|
||||
|
||||
Ubuntu system keeps a cache of DEB packages in /var/cache/apt/archives directory.
|
||||
Ubuntu 在 /var/cache/apt/archives 目录中保留 DEB 软件包的缓存文件。
|
||||
|
||||
Over time, this cache can quickly grow and hold a lot of space on your system.
|
||||
随着时间推移,这些缓存可能快速增长,并在你的系统上占有很多空间。
|
||||
|
||||
Run the following command to check the current utilization of APT cache.
|
||||
运行下面的命令来检查当前 APT 缓存文件的使用率。
|
||||
|
||||
```
$ sudo du -sh /var/cache/apt
147M /var/cache/apt
```
|
||||
|
||||
It cleans obsolete deb-packages. I mean to say, less than clean.
|
||||
它只清理过时的 deb 软件包。也就是说,清理的内容比 clean 少。
|
||||
|
||||
```
$ sudo apt-get autoclean
```
|
||||
|
||||
It removes all packages kept in the apt cache.
|
||||
它移除所有在 apt 缓存中的软件包。
|
||||
|
||||
```
$ sudo apt-get clean
```
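清理完成后,可以再次运行前面用过的 du 命令,确认缓存占用确实下降了:

```
$ sudo du -sh /var/cache/apt
```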
|
||||
|
||||
### 4) Uninstall the unused applications
|
||||
### 4) 卸载不使用的应用程序
|
||||
|
||||
I would request you to check the installed packages and games on your system and delete them if you are using rarely.
|
||||
我建议你检查一下系统上安装的软件包和游戏,如果很少使用,就删除它们。
|
||||
|
||||
This can be easily done via “Ubuntu Software Center”.
|
||||
这可以通过“Ubuntu 软件中心”轻松完成。
|
||||
[![][2]![][2]][6]
|
||||
|
||||
### 5) Clean up the thumbnail cache
|
||||
### 5) 清理缩略图缓存
|
||||
|
||||
The cache folder is a place where programs stored data they may need again, it is kept for speed but is not essential to keep. It can be generated again or downloaded again.
|
||||
缓存文件夹是程序存储它们可能再次需要的数据的地方,它是为速度保留的,而不是必需保留的。它可以被再次生成或再次下载。
|
||||
|
||||
If it’s really filling up your hard drive then you can delete things without worrying.
|
||||
假如它真的填满你的硬盘,那么你可以删除一些东西而不用担心。
|
||||
|
||||
Run the following command to check the current utilization of APT cache.
|
||||
运行下面的命令来检查当前缩略图缓存的使用量。
|
||||
|
||||
```
|
||||
$ du -sh ~/.cache/thumbnails/
|
||||
412K /home/daygeek/.cache/thumbnails/
|
||||
```
|
||||
|
||||
Run the following command to delete them permanently from your system.
|
||||
运行下面的命令来从你的系统中永久地删除它们。
|
||||
|
||||
```
|
||||
$ rm -rf ~/.cache/thumbnails/*
|
||||
@ -162,7 +162,7 @@ via: https://www.2daygeek.com/linux-remove-delete-unwanted-junk-files-free-up-sp
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,125 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Ubuntu Kylin: The Official Chinese Version of Ubuntu)
|
||||
[#]: via: (https://itsfoss.com/ubuntu-kylin/)
|
||||
[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/)
|
||||
|
||||
Ubuntu Kylin: Ubuntu 的官方中文版本
|
||||
======
|
||||
|
||||
[_**Ubuntu 有几个官方特色版本**_][1] _**并且 Kylin 是它们中的一个。在这篇文章中,你将学习 Ubuntu Kylin,它是什么,它为什么被创建,它提供的特色是什么。**_
|
||||
|
||||
Kylin 最初由中华人民共和国的[国防科技大学][2]的院士在2001年开发。名字来源于 [Qilin][3],一种来自中国神话的神兽。
|
||||
|
||||
Kylin 的第一个版本基于 [FreeBSD][4],计划用于中国军方和其它政府组织。Kylin 3.0 完全基于 Linux 内核,并且在2010年12月发布一个称为 [NeoKylin][5] 的版本。
|
||||
|
||||
在 2013 年,[Canonical][6](Ubuntu 的母公司)与中华人民共和国的[工业和信息化部][7]达成共识,共同创建和发布一个针对中国市场的基于 Ubuntu 的特色操作系统。
|
||||
|
||||
![Ubuntu Kylin][8]
|
||||
|
||||
### Ubuntu Kylin 是什么?
|
||||
|
||||
根据上述2013年的共识,Ubuntu Kylin 现在是 Ubuntu 的官方中国版本。它不仅仅是语言本地化。事实上,它决心服务中国市场,像 Ubuntu 服务全球市场一样。
|
||||
|
||||
[Ubuntu Kylin][9] 的第一个版本与 Ubuntu 13.04 一起到来。像 Ubuntu 一样,Kylin 也有 LTS (长期支持)和非 LTS 版本。
|
||||
|
||||
当前,Ubuntu Kylin 19.04 LTS 采用了 [UKUI][10] 桌面环境,带有修改过的启动动画、登录/锁屏程序和操作系统主题。为给用户提供更友好的体验,它修复了许多错误,提供文件预览、定时注销等功能,并集成了最新的 [WPS 办公组件][11]和 [Sogou][12] 输入法。
|
||||
|
||||
Kylin 4.0.2 是一个基于 Ubuntu Kylin 16.04 LTS 的社区版本。它包含一些提供长期稳定支持的第三方应用程序,非常适合服务器和日常桌面办公使用,欢迎开发者[下载][13]。Kylin 论坛会积极收集用户的反馈并解决问题,以找到相应的解决方案。
|
||||
|
||||
|
||||
|
||||
#### UKUI:Ubuntu Kylin 的桌面环境
|
||||
|
||||
![Ubuntu Kylin 19.04 with UKUI Desktop][15]
|
||||
|
||||
[UKUI][16] 由 Ubuntu Kylin 开发小组设计和开发,有一些非常好的特色和预装软件:
|
||||
|
||||
  * 类似 Windows 的交互功能,带来更友好的用户体验。安装向导是用户友好的,所以用户可以快速上手 Ubuntu Kylin。
|
||||
* 控制中心有新的主题和窗口设置。更新像开始菜单,任务栏,文件管理器,窗口管理器和其它的组件。
|
||||
  * 在 Ubuntu 和 Debian 存储库中都单独可用,为 Debian/Ubuntu 发行版及其全球衍生版的用户提供一个新的独立桌面环境。
|
||||
* 新的登录和锁定程序,它更稳定和具有很多功能。
|
||||
* 包括一个反馈问题的实用的反馈程序。
|
||||
|
||||
|
||||
|
||||
#### Kylin 软件中心
|
||||
|
||||
![Kylin Software Center][17]
|
||||
|
||||
Kylin 有一个软件中心,类似于 Ubuntu 软件中心,被称为 Ubuntu Kylin 软件中心。它是 Ubuntu Kylin 软件商店的一部分,后者也包含 Ubuntu Kylin 开发者平台和 Ubuntu Kylin 存储库。它具有简单的用户界面,并且功能强大。它同时支持 Ubuntu 和 Ubuntu Kylin 存储库,特别适用于快速安装由 Ubuntu Kylin 小组开发的中文特色软件!
|
||||
|
||||
#### Youker: 一系列的工具
|
||||
|
||||
Ubuntu Kylin 还有一系列被命名为 Youker 的工具。在 Kylin 开始菜单中输入 “Youker” 就会调出 Kylin 助手。如果你像在 Windows 上一样按下键盘上的 “Windows” 键,你会得到同样的响应:它将启动 Kylin 开始菜单。
|
||||
|
||||
![Kylin Assistant][18]
|
||||
|
||||
其它 Kylin 品牌的应用程序包括 Kylin 影音(播放器),Kylin 刻录,Youker 天气,Youker 企鹅,它们更好地支持办公工作和个人娱乐。
|
||||
|
||||
![Kylin Video][19]
|
||||
|
||||
#### 特别专注于中文
|
||||
|
||||
通过与金山(Kingsoft)的合作,Ubuntu Kylin 开发者也在开发 Linux 版本的搜狗拼音输入法、快盘和 Ubuntu Kylin 版本的金山 WPS,并致力于解决智能拼音、云存储和办公应用等需求。[Pinyin][20](拼音)是中文字符的拉丁化系统。使用这个系统,用户用英文键盘输入,而屏幕上将显示中文字符。
|
||||
|
||||
|
||||
|
||||
#### 有趣的事实:Ubuntu Kylin 运行在中国超级计算机上
|
||||
|
||||
![Tianhe-2 Supercomputer. Photo by O01326 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=45399546][22]
|
||||
|
||||
众所周知,[世界前 500 强最快的超级计算机都运行着 Linux][23]。中国超级计算机[天河-1][24]和[天河-2][25]都使用 Kylin Linux 的 64 位版本,致力于高性能[并行计算][26]优化、电源管理和高性能[虚拟化计算][27]。
|
||||
|
||||
#### 总结
|
||||
|
||||
我希望你喜欢这篇 Ubuntu Kylin 世界的介绍。你可以从它的[官方网站][28]获得 Ubuntu Kylin 19.04 或基于 Ubuntu 16.04 的社区版本。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/ubuntu-kylin/
|
||||
|
||||
作者:[Avimanyu Bandyopadhyay][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/avimanyu/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/which-ubuntu-install/
|
||||
[2]: https://english.nudt.edu.cn
|
||||
[3]: https://www.thoughtco.com/what-is-a-qilin-195005
|
||||
[4]: https://itsfoss.com/freebsd-12-release/
|
||||
[5]: https://thehackernews.com/2015/09/neokylin-china-linux-os.html
|
||||
[6]: https://www.canonical.com/
|
||||
[7]: http://english.gov.cn/state_council/2014/08/23/content_281474983035940.htm
|
||||
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/Ubuntu-Kylin.jpeg?resize=800%2C450&ssl=1
|
||||
[9]: http://www.ubuntukylin.com/
|
||||
[10]: http://ukui.org
|
||||
[11]: https://www.wps.com/
|
||||
[12]: https://en.wikipedia.org/wiki/Sogou_Pinyin
|
||||
[13]: http://www.ubuntukylin.com/downloads/show.php?lang=en&id=122
|
||||
[14]: https://itsfoss.com/solve-ubuntu-error-failed-to-download-repository-information-check-your-internet-connection/
|
||||
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/ubuntu-Kylin-19-04-desktop.jpg?resize=800%2C450&ssl=1
|
||||
[16]: http://www.ukui.org/
|
||||
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-software-center.jpg?resize=800%2C496&ssl=1
|
||||
[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-assistant.jpg?resize=800%2C535&ssl=1
|
||||
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/kylin-video.jpg?resize=800%2C533&ssl=1
|
||||
[20]: https://en.wikipedia.org/wiki/Pinyin
|
||||
[21]: https://itsfoss.com/remove-old-kernels-ubuntu/
|
||||
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/tianhe-2.jpg?resize=800%2C600&ssl=1
|
||||
[23]: https://itsfoss.com/linux-runs-top-supercomputers/
|
||||
[24]: https://en.wikipedia.org/wiki/Tianhe-1
|
||||
[25]: https://en.wikipedia.org/wiki/Tianhe-2
|
||||
[26]: https://en.wikipedia.org/wiki/Parallel_computing
|
||||
[27]: https://computer.howstuffworks.com/how-virtual-computing-works.htm
|
||||
[28]: http://www.ubuntukylin.com
|
@ -0,0 +1,168 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Modrisco)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to send email from the Linux command line)
|
||||
[#]: via: (https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
如何用 Linux 命令行发电子邮件
|
||||
======
|
||||
Linux 提供了几种命令允许您通过终端发送电子邮件,下面来展示一些有趣的方法。
|
||||
|
||||
![Molnia/iStock][1]
|
||||
|
||||
Linux 可以用多种方式通过命令行发送电子邮件。有一些方法十分简单,有一些相对会复杂一些,不过仍旧提供了很多有用的特性。选择哪一种方式取决于你想要什么 —— 向同事快速发送消息,还是向一批人群发带有附件的更复杂的信息。接下来看一看几种可行方案:
|
||||
|
||||
### mail
|
||||
|
||||
发送一条简单消息最便捷的 Linux 命令是 `mail`。假设你需要提醒老板你今天得早点走,你可以使用这样的一条命令:
|
||||
|
||||
```
$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" myboss
```
|
||||
|
||||
另一种方式是从一个文件中提取出你想要发送的文本信息:
|
||||
|
||||
```
$ mail -s "Reminder:Leaving early" myboss < reason4leaving
```
|
||||
|
||||
在以上两种情况中,你都可以通过 -s 来为邮件添加标题。
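顺带一提,mail 也允许一次发送给多个收件人,只需在命令末尾依次列出(这里的 myteammate 仅为示例收件人):

```
$ echo "Reminder: Leaving at 4 PM today" | mail -s "early departure" myboss myteammate
```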
|
||||
|
||||
### sendmail
|
||||
|
||||
使用 `sendmail` 命令可以发送一封不包含标题的快信。(用目标收件人替换 `recip`):
|
||||
|
||||
```
|
||||
$ echo "leaving now" | sendmail recip
|
||||
```
|
||||
|
||||
你可以用这条命令发送一条只有标题,没有内容的信息:
|
||||
|
||||
```
|
||||
$ echo "Subject: leaving now" | sendmail recip
|
||||
```
|
||||
|
||||
你也可以用 `sendmail` 发送一条包含一条标题行的完整信息。不过使用这个方法时,你的标题行会被添加到要发送的文件中,如下例所示:
|
||||
|
||||
```
|
||||
Subject: Requested lyrics
|
||||
I would just like to say that, in my opinion, longer hair and other flamboyant
|
||||
affectations of appearance are nothing more ...
|
||||
```
|
||||
|
||||
你也可以发送这样的文件(lyric 文件包含标题和正文):
|
||||
|
||||
```
|
||||
$ sendmail recip < lyrics
|
||||
```
|
||||
|
||||
`sendmail` 的输出可能会很冗长。如果你感到好奇并希望查看发送系统和接收系统之间的交互,请添加 `-v`(verbose)选项。
|
||||
|
||||
```
|
||||
$ sendmail -v recip@emailsite.com < lyrics
|
||||
```
|
||||
|
||||
### mutt

`mutt` 是通过命令行发送邮件的一个很好的工具，在使用前你需要先安装它。`mutt` 的一个很方便的优势就是它允许你在邮件中添加附件。

使用 `mutt` 发送一条快速信息：

```
$ echo "Please check last night's backups" | mutt -s "backup check" recip
```

从文件中获取内容：

```
$ mutt -s "Agenda" recip < agenda
```

使用 `-a` 选项在 `mutt` 中添加附件。你甚至可以添加不止一个附件，如下一条命令所示：

```
$ mutt -s "Agenda" recip -a agenda -a speakers < msg
```

在以上的命令中，`msg` 文件包含了邮件的正文。如果你没有其他要补充的内容，可以用下面的命令来代替：

```
$ echo "" | mutt -s "Agenda" recip -a agenda -a speakers
```
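需要注意的是，在较新版本的 `mutt` 中，`-a` 选项必须放在收件人之前，并用 `--` 把附件列表和收件人隔开。如果上面的写法在你的系统上报错，可以试试下面这种形式（仅为示意，文件名沿用上文的假设）：

```
$ mutt -s "Agenda" -a agenda -a speakers -- recip < msg
```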
`mutt` 另一个有用的功能是可以添加抄送（`-c`）和密送（`-b`）收件人：

```
$ mutt -s "Minutes from last meeting" recip@somesite.com -c myboss < mins
```
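密送的用法与此类似。下面这个示意命令同时添加了抄送和密送收件人（其中 `auditor` 这个地址是为举例而假设的）：

```
$ mutt -s "Minutes from last meeting" recip@somesite.com -c myboss -b auditor < mins
```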
### telnet

如果你想深入了解发送电子邮件的细节，可以使用 `telnet` 来手动进行电子邮件交互。但正如人们所说的那样，你需要先“学会行话”。邮件服务器期望收到一系列命令，其中包括自我介绍（`EHLO` 命令）、提供发件人（`MAIL FROM` 命令）、指定收件人（`RCPT TO` 命令），然后添加消息（`DATA`）并以单独一行的 `.` 结束消息。并不是所有的电子邮件服务器都会响应这些请求。此方法通常仅用于故障排除。
```
$ telnet emailsite.org 25
Trying 192.168.0.12...
Connected to emailsite.
Escape character is '^]'.
220 localhost ESMTP Sendmail 8.15.2/8.15.2/Debian-12; Wed, 12 Jun 2019 16:32:13 -0400; (No UCE/UBE) logging access from: mysite(OK)-mysite [192.168.0.12]
EHLO mysite.org <== introduce yourself
250-localhost Hello mysite [127.0.0.1], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-EXPN
250-VERB
250-8BITMIME
250-SIZE
250-DSN
250-ETRN
250-AUTH DIGEST-MD5 CRAM-MD5
250-DELIVERBY
250 HELP
MAIL FROM: me@mysite.org <== specify sender
250 2.1.0 shs@mysite.org... Sender ok
RCPT TO: recip <== specify recipient
250 2.1.5 recip... Recipient ok
DATA <== start message
354 Enter mail, end with "." on a line by itself
This is a test message. Please deliver it for me.
. <== end message
250 2.0.0 x5CKWDds029287 Message accepted for delivery
quit <== end exchange
```
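如果你更想把这样的 SMTP 会话放进脚本里，而不是手动敲命令，也可以考虑用 `curl` 内建的 SMTP 支持来完成类似的事情。下面是一个示意写法（服务器与邮箱地址沿用上文的假设，`msg.txt` 是一个假设的消息文件，内含 `To:`、`Subject:` 等头部和正文）：

```
$ curl --url 'smtp://emailsite.org:25' \
       --mail-from 'me@mysite.org' \
       --mail-rcpt 'recip@emailsite.org' \
       --upload-file msg.txt
```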
### 向多个收件人发送电子邮件

如果你希望通过 Linux 命令行向一大组收件人发送电子邮件，你可以使用一个循环来帮助你完成任务，如下面应用在 `mutt` 中的例子：

```
$ for recip in `cat recips`
do
mutt -s "Minutes from May meeting" $recip < May_minutes
done
```
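如果 `recips` 列表很长，或者你想避免 shell 分词带来的问题，也可以用 `while read` 逐行读取收件人。下面是一个示意写法（假设 `recips` 文件每行一个地址）：

```
$ while read -r recip
do
mutt -s "Minutes from May meeting" "$recip" < May_minutes
done < recips
```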
### 总结

有很多方法可以从 Linux 命令行发送电子邮件。有些工具提供了相当多的选项。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3402027/how-to-send-email-from-the-linux-command-line.html

作者：[Sandra Henry-Stocker][a]
选题：[lujun9972][b]
译者：[Modrisco](https://github.com/Modrisco)
校对：[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2017/08/email_image_blue-100732096-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world
@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Personal assistant with Mycroft and Fedora)
[#]: via: (https://fedoramagazine.org/personal-assistant-with-mycroft-and-fedora/)
[#]: author: (Clément Verna https://fedoramagazine.org/author/cverna/)
在 Fedora 中使用私人助理 Mycroft
======

![][1]

还在找开源的私人助理么？[Mycroft][2] 让你运行一个开源的服务，从而更好地掌控自己的数据。

### 在 Fedora 上安装 Mycroft

Mycroft 目前还不在官方软件包集合中，但可以轻松地从源码安装。第一步是从 Mycroft 的 GitHub 仓库下载源码：

```
$ git clone https://github.com/MycroftAI/mycroft-core.git
```

Mycroft 是一个 Python 应用，它提供了一个脚本，用于创建虚拟环境并在其中安装 Mycroft 及其依赖项：

```
$ cd mycroft-core
$ ./dev_setup.sh
```

安装脚本会通过一系列提示引导用户完成安装过程。建议选择运行稳定版本并启用自动更新。

当提示是否在本地安装 Mimic 文字转语音引擎时，请回答“否”。因为根据安装过程中的描述，这可能需要很长时间，而且 Mimic 有适用于 Fedora 的 rpm 包，可以直接用 `dnf` 安装：

```
$ sudo dnf install mimic
```

### 开始使用 Mycroft

安装完成后，可以使用以下脚本启动 Mycroft 服务：

```
$ ./start-mycroft.sh all
```

要开始使用 Mycroft，需要注册运行该服务的设备。这需要一个帐户，可以在 <https://home.mycroft.ai/> 中创建。

创建帐户后，可以在 [https://account.mycroft.ai/devices][3] 中添加新设备。添加新设备需要一个配对码，在所有服务启动之后，你的设备会告诉你这个配对码。

![][4]

现在就可以使用该设备了。

### 使用 Mycroft

Mycroft 自带一组默认启用的[技能][5]，你也可以从[技能市场][5]下载更多。刚开始，你可以简单地向 Mycroft 问好，或者问问天气如何：

```
Hey Mycroft, how are you ?

Hey Mycroft, what's the weather like ?
```

如果你对它的工作原理感兴趣，_start-mycroft.sh_ 脚本提供了一个命令行（`cli`）选项，可以让你通过命令行与 Mycroft 交互，它同时还会显示对调试有用的信息。
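例如，可以这样进入命令行交互模式（基于上述脚本选项的一个示意用法；`stop-mycroft.sh` 是源码仓库中自带的停止脚本，这里一并列出以供参考）：

```
$ ./start-mycroft.sh cli    # 在终端里直接与 Mycroft 对话，并查看调试信息
$ ./stop-mycroft.sh         # 使用完毕后停止 Mycroft 服务
```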
Mycroft 一直在学习新的技能，并且有很多方法可以为 Mycroft 社区做出[贡献][6]。

* * *

由 [Przemyslaw Marczynski][7] 摄影，发布于 [Unsplash][8]

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/personal-assistant-with-mycroft-and-fedora/

作者：[Clément Verna][a]
选题：[lujun9972][b]
译者：[geekpi](https://github.com/geekpi)
校对：[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译，[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/cverna/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/mycroft-816x345.jpg
[2]: https://mycroft.ai/
[3]: https://account.mycroft.ai/devices
[4]: https://fedoramagazine.org/wp-content/uploads/2019/06/Screenshot_2019-06-14-Account.png
[5]: https://market.mycroft.ai/skills
[6]: https://mycroft.ai/contribute/
[7]: https://unsplash.com/@pemmax?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[8]: https://unsplash.com/search/photos/ai?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText