Merge pull request #4 from LCTT/master

Update Feb 4
萌新阿岩 2020-02-04 06:51:18 -08:00 committed by GitHub
commit 266c122a46
31 changed files with 2471 additions and 726 deletions


@ -0,0 +1,149 @@
分布式跟踪系统的四大功能模块如何协同工作
======
> 了解分布式跟踪中的主要体系结构决策,以及各部分如何组合在一起。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/touch-tracing.jpg?itok=rOmsY-nU)
早在十年前,认真研究过分布式跟踪的基本上只有学者和一小部分大型互联网公司中的人。而对于如今任何采用微服务的组织来说,它已经是必备能力。理由也很明确:微服务通常会以让人意想不到的方式出错,而分布式跟踪则是描述和诊断这些错误的最好方法。
也就是说,一旦你准备将分布式跟踪集成到你自己的应用程序中,你将很快意识到对于不同的人来说“<ruby>分布式跟踪<rt>Distributed Tracing</rt></ruby>”一词意味着不同的事物。此外,跟踪生态系统里挤满了具有相似内容的重叠项目。本文介绍了分布式跟踪系统中四个(可能)独立的功能模块,并描述了它们间将如何协同工作。
### 分布式跟踪:一种思维模型
大多数用于跟踪的思维模型来源于 [Google 的 Dapper 论文][1]。[OpenTracing][2] 使用相似的术语,因此,我们从该项目借用了以下术语:
![Tracing][3]
* <ruby>跟踪<rt>Trace</rt></ruby>:对一个事务在分布式系统中运行过程的描述。
* <ruby>跨度<rt>Span</rt></ruby>:一种命名的定时操作,表示工作流的一部分。跨度可接受键值对标签以及附加到特定跨度实例的细粒度的、带有时间戳的结构化日志。
* <ruby>跨度上下文<rt>Span context</rt></ruby>:当分布式事务经由网络或消息总线从一个服务传递到另一个服务时,随之携带的跟踪信息。跨度上下文包含跟踪标识符、跨度标识符,以及跟踪系统需要传播到下游服务的任何其他数据。
如果你想要深入研究这种思维模型的细节,请仔细参照 [OpenTracing 技术规范][4]。
### 四大功能模块
从应用层分布式跟踪系统的观点来看,现代软件系统架构如下图所示:
![Tracing][5]
现代软件系统的组件可分为三类:
* **应用程序和业务逻辑**:你的代码。
* **广泛共享库**:他人的代码。
* **广泛共享服务**:他人的基础架构。
这三类组件有着不同的需求,而这些需求驱动着负责监控应用程序的分布式跟踪系统的设计。最终的设计可以分解为四个重要的部分:
* <ruby>跟踪检测 API<rt>A tracing instrumentation API</rt></ruby>:修饰应用程序代码
* <ruby>线路协议<rt>Wire protocol</rt></ruby>:规定在 RPC 请求中与应用程序数据一同发送哪些内容
* <ruby>数据协议<rt>Data protocol</rt></ruby>:规定以异步方式(带外)向你的分析系统发送哪些内容
* <ruby>分析系统<rt>Analysis system</rt></ruby>:用于处理跟踪数据的数据库和交互式用户界面
为了更深入地解释这个概念,我们将深入研究驱动该设计的细节。如果你只需要我的一些建议,请跳转至下方的“四个方面的解决方案”一节。
### 需求,细节和解释
应用程序代码、共享库以及共享式服务在操作上有显著的差别,这种差别严重影响了对它们进行检测的方式和要求。
#### 检测应用程序代码和业务逻辑
在任何特定的微服务中,由微服务开发者编写的大部分代码是应用程序或者业务逻辑。这部分代码定义了特定领域的操作;通常,正是其中那些特殊、独一无二的逻辑,首先证明了创建一个新微服务的合理性。基本上按照定义,**该代码通常不会在多个服务中共享或者以其他方式出现。**
也就是说,你仍需了解它,这也意味着需要以某种方式对它进行检测。一些监控和跟踪分析系统使用<ruby>黑盒代理<rt>black-box agents</rt></ruby>自动检测代码,另一些系统则更倾向于使用显式的白盒检测工具。对于后者,抽象的跟踪 API 为微服务的应用程序代码带来了许多实用的优势:
* 抽象 API 允许你在不重写检测代码的前提下更换新的监控系统。你可能想要更换云服务提供商、供应商或监控技术,而一大堆不可移植的检测代码会为这一过程增加相当大的开销和麻烦。
* 事实证明,除了生产监控之外,该工具还有其他有趣的用途。现有的项目使用相同的跟踪工具来驱动测试工具、分布式调试器、“混沌工程”故障注入器和其他元应用程序。
* 但更重要的是:如果这些应用程序组件被提取到共享库中,又该怎么办呢?这就引出了下面的内容:
#### 检测共享库
在大多数应用程序中出现的实用程序代码(处理网络请求、数据库调用、磁盘写操作、线程、并发管理等)通常情况下是通用的,而非特别应用于某个特定应用程序。这些代码会被打包成库和框架,而后就可以被装载到许多的微服务上并且被部署到多种不同的环境中。
其真正的不同之处在于:对于共享代码,其他人才是使用者,而大多数使用者有着不同的依赖关系和操作风格。如果你尝试为这些共享代码编写检测,将会注意到几个常见的问题:
* 你需要一个 API 来编写检测代码。然而,你的库并不知道你正在使用哪个分析系统。可选的分析系统有很多,而运行在同一个应用程序中的所有库不能做出互不兼容的选择。
* 由于这些包封装了所有网络处理代码,因此从请求报头注入和提取跨度上下文的任务往往落在 RPC 库身上。然而,必须有办法告知共享库每个应用程序正在使用哪种跟踪协议。
* 最后,你不想强加给用户相互冲突的依赖项。大多数用户有着不同的依赖关系和操作风格。即使他们都使用 gRPC,所绑定的 gRPC 版本是否相同?因此,你的库附带的任何用于跟踪的检测 API 都必须是零依赖的。
**因此,一个(a)没有依赖关系、(b)与线路协议无关、(c)可适配各种流行供应商和分析系统的抽象 API,应该是检测共享库代码的基本要求。**
#### 检测共享式服务
最后,有时整个服务(或微服务集合体)的通用性足以使许多独立的应用程序使用它们。这种共享式服务通常由第三方托管和管理,例如缓存服务器、消息队列以及数据库。
从应用程序开发者的角度来看,重要的是要明白:共享式服务本质上是黑盒子。你无法把自己的检测代码注入到共享式服务中;恰恰相反,托管服务通常会运行它自己的监控方案。
### 四个方面的解决方案
因此,抽象的跟踪应用程序接口将会帮助库发出数据并且注入/抽取跨度上下文。标准的线路协议将会帮助黑盒服务相互连接,而标准的数据格式将会帮助分离的分析系统合并其中的数据。让我们来看一下部分有希望解决这些问题的方案。
#### 跟踪 API:OpenTracing 项目
如你所见,我们需要一个跟踪 API 来检测应用程序代码。为了将这种工具扩展到大多数进行跨度上下文注入和提取的共享库中,则必须以某种关键方式对 API 进行抽象。
[OpenTracing][2] 项目主要就是为了解决库开发者的这些问题。OpenTracing 是一个与供应商无关的跟踪 API,它没有依赖,并且迅速得到了许多监控系统的支持。这意味着,如果库内置了原生的 OpenTracing 检测,那么当应用程序启动时接入监控系统,跟踪就会自动启用。
就个人而言,作为一个已经编写、发布和运维开源软件十多年的人,能在 OpenTracing 项目上工作并最终解决这个可观察性的难题,令我十分满意。
除了 API 之外,OpenTracing 项目还维护着一个不断增长的检测插件列表,其中一些可以在[这里][6]找到。如果你想参与进来,无论是提供一个检测插件、为你自己的开源库添加原生检测,还是仅仅想问个问题,都可以通过 [Gitter][7] 向我们打招呼。
#### 线路协议:HTTP 报头 trace-context
为了监控系统能进行互操作,以及减轻从一个监控系统切换为另外一个时带来的迁移问题,需要标准的线路协议来传播跨度上下文。
[w3c 分布式跟踪上下文社区小组][8]在努力制定此标准。目前的重点是制定一系列标准的 HTTP 报头。该规范的最新草案可以在[此处][9]找到。如果你对此小组有任何的疑问,[邮件列表][10]和[Gitter 聊天室][11]是很好的解惑地点。
(LCTT 译注:本文原文发表于 2018 年 5 月,可能现在社区已有不同进展)
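为了更直观一些,下面用 `curl` 给出一个在 HTTP 请求中携带跨度上下文报头调用下游服务的简单示意。其中的报头名称、取值格式和 URL 都只是假设,请以 w3c trace-context 规范的最终版本为准:

```
# 示意:把跨度上下文作为 HTTP 报头传给下游服务(报头名称与取值格式均为假设)
curl -H "traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01" \
     https://downstream.example.com/api/orders
```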
#### 数据协议 (还未出现!!)
对于黑盒服务,在无法安装跟踪程序或无法与程序进行交互的情况下,需要使用数据协议从系统中导出数据。
目前,这种数据格式和协议的开发工作尚处在初级阶段,并且大多在 w3c 分布式跟踪上下文工作组的范围内进行。需要特别关注的是如何在标准数据模式中定义更高级别的概念,例如 RPC 调用、数据库语句等,这将允许跟踪系统对可用的数据类型做出假设。OpenTracing 项目也通过定义一套[标准标签集][12]来解决这一问题。计划是让这两项工作的成果能够相互配合。
注意,当前还有一个折中方案。对于那些由应用程序开发者运维、但不希望重新编译或以其他方式修改代码的“网络设备”,动态链接可以派上用场。主要的例子就是服务网格和代理,比如 Envoy 或者 NGINX。针对这种情况,可将兼容 OpenTracing 的跟踪器编译为共享对象,然后在运行时动态链接到可执行文件中。目前 [C++ OpenTracing API][13] 提供了该选项,而 Java 的 OpenTracing [跟踪器解析器][14]也在开发中。
这些解决方案适用于支持动态链接、并由应用程序开发者部署的服务。但从长远来看,标准的数据协议可以更广泛地解决该问题。
#### 分析系统:从跟踪数据中提取有见解的服务
最后不得不提的是,现在已有很多跟踪监控解决方案可选。可以在[此处][15]找到已知与 OpenTracing 兼容的监控系统列表,但除此之外仍有更多的选择。我鼓励你自己做些研究,也希望你在比较各种方案时,本文提供的框架能派上用场。除了根据监控系统的操作特性对其进行评估外(更不用提你是否喜欢其 UI 和功能),还要确保你考虑到了上述三个重要方面、它们对你的相对重要性,以及你感兴趣的跟踪系统如何为它们提供解决方案。
### 结论
最后,每个部分的重要性在很大程度上取决于你是谁以及正在建立什么样的系统。举个例子,开源库的作者对 OpenTracing API 非常感兴趣,而服务开发者对 trace-context 规范更感兴趣。当有人说一部分比另一部分重要时,他们的意思通常是“一部分对我来说比另一部分重要”。
然而,事实是:分布式跟踪已经成为监控现代系统所必不可少的事物。在为这些系统设计构建模块时,“尽可能解耦”的老办法仍然适用。在构建像分布式监控系统这样的跨系统的系统时,干净地解耦组件是保持灵活性和前向兼容性的最佳方式。
感谢你的阅读!现在,当你准备在自己的应用程序中实现跟踪时,你已有一份指南,可以了解人们谈论的是哪个部分,以及这些部分如何协同工作。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/distributed-tracing
作者:[Ted Young][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[chenmu-kk](https://github.com/chenmu-kk)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/tedsuo
[1]:https://research.google.com/pubs/pub36356.html
[2]:http://opentracing.io/
[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing1_0.png?itok=dvDTX0JJ (Tracing)
[4]:https://github.com/opentracing/specification/blob/master/specification.md
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/tracing2_0.png?itok=yokjNLZk (Tracing)
[6]:https://github.com/opentracing-contrib/
[7]:https://gitter.im/opentracing/public
[8]:https://www.w3.org/community/trace-context/
[9]:https://w3c.github.io/distributed-tracing/report-trace-context.html
[10]:http://lists.w3.org/Archives/Public/public-trace-context/
[11]:https://gitter.im/TraceContext/Lobby
[12]:https://github.com/opentracing/specification/blob/master/semantic_conventions.md
[13]:https://github.com/opentracing/opentracing-cpp
[14]:https://github.com/opentracing-contrib/java-tracerresolver
[15]:http://opentracing.io/documentation/pages/supported-tracers
[16]:https://events.linuxfoundation.org/kubecon-eu-2018/
[17]:https://events.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2018/


@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11841-1.html)
[#]: subject: (My favorite Bash hacks)
[#]: via: (https://opensource.com/article/20/1/bash-scripts-aliases)
[#]: author: (Katie McLaughlin https://opensource.com/users/glasnt)
@ -14,13 +14,13 @@
![bash logo on green background][1]
要是你整天使用计算机,如果能找到可重复的命令并标记它们,以便以后轻松使用那就太棒了。它们全都在那里,藏在 `~/.bashrc` 中(或 [zsh 用户][2]的 `~/.zshrc` 中),等待着改善你的生活!
要是你整天使用计算机,如果能找到需要重复执行的命令并记下它们以便以后轻松使用那就太棒了。它们全都在那里,藏在 `~/.bashrc` 中(或 [zsh 用户][2]的 `~/.zshrc` 中),等待着改善你的生活!
在本文中,我分享了我最喜欢的这些辅助命令,它们可以帮助我避免一些遗忘的事情,也希望可以帮助到你,以及为你解决一些越来越头疼的问题。
在本文中,我分享了我最喜欢的这些助手命令,对于我经常遗忘的事情,它们很有用,也希望这可以帮助到你,以及为你解决一些经常头疼的问题。
### 完事一声
### 完事一声
当我执行一个需要长时间运行的命令时,我经常采用多任务的方式,然后必须回过去检查该操作是否已完成。 然而通过有用的 `say`,现在就不用再这样了(这是在 MacOS 上;更改为本地环境等效的方式):
当我执行一个需要长时间运行的命令时,我经常采用多任务的方式,然后就必须回头去检查该操作是否已完成。然而通过有用的 `say` 命令,现在就不用再这样了(这是在 MacOS 上;请根据你的本地环境更改为等效的方式):
```
function looooooooong {
@ -36,11 +36,11 @@ function looooooooong {
}
```
这个命令会记命令的开始和结束时间,计算所需的分钟数,并说出调用的命令、花费的时间和退出码。当简单的控制台铃声无法使用时,我发现这个超级有用。
这个命令会记命令的开始和结束时间,计算所需的分钟数,并出调用的命令、花费的时间和退出码。当简单的控制台铃声无法使用时,我发现这个超级有用。
### 安装小助手
我在小时候就开始使用 Ubuntu而我需要学习的第一件事是如何安装软件包。我曾经添加的第一个别名之一是它的助手(根据当天的流行梗命名的):
我在小时候就开始使用 Ubuntu而我需要学习的第一件事是如何安装软件包。我曾经首先添加的别名之一是它的助手(根据当天的流行梗命名的):
```
alias canhas="sudo apt-get install -y"
@ -48,7 +48,7 @@ alias canhas="sudo apt-get install -y"
### GPG 签名
有时候,我必须在没有扩展程序或应用程序的情况下给电子邮件签署 [GPG][3] 签名,我会跳到命令行并使用以下令人讨厌的别名:
有时候,我必须在没有 GPG 扩展程序或应用程序的情况下给电子邮件签署 [GPG][3] 签名,我会跳到命令行并使用以下令人讨厌的别名:
```
alias gibson="gpg --encrypt --sign --armor"
@ -57,7 +57,7 @@ alias ungibson="gpg --decrypt"
### Docker
Docker 命令很多,但是 Docker compose 命令更多。我曾经使用这些别名来忘记 `--rm` 标志,但是现在不再使用这些有用的别名了:
Docker 的子命令很多,但是 Docker compose 的更多。我过去老是忘记加 `--rm` 标志,但有了这些有用的别名之后就不会了:
```
alias dc="docker-compose"
@ -65,15 +65,15 @@ alias dcr="docker-compose run --rm"
alias dcb="docker-compose run --rm --build"
```
### Google Cloud 的 gcurl 辅助程序
### Google Cloud 的 gcurl 助手
对于我来说Google Cloud 是一个相对较新的东西,而它有[极多的文档][4]。gcurl 是一个别名,可确保在用带有身份验证标头的本地 curl 命令连接 Google Cloud API 时,可以获得所有正确的标头。
对于我来说Google Cloud 是一个相对较新的东西,而它有[极多的文档][4]。`gcurl` 是一个别名,可确保在用带有身份验证标头的本地 `curl` 命令连接 Google Cloud API 时,可以获得所有正确的标头。
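它的一种常见定义方式大致如下(仅为示意,具体请以你所参考的 Google Cloud 文档为准):

```
# 在 curl 的基础上自动带上访问令牌和 JSON 报头
alias gcurl='curl --header "Authorization: Bearer $(gcloud auth print-access-token)" --header "Content-Type: application/json"'
```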
### Git 和 ~/.gitignore
我工作中用 Git 很多,因此我有一个专门的部分来介绍 Git 的辅助程序
我工作中用 Git 很多,因此我有一个专门的部分来介绍 Git 助手
我最有用的辅助程序之一是我用来克隆 GitHub 存储库的助手。你不必运行:
我最有用的助手之一是我用来克隆 GitHub 存储库的。你不必运行:
```
git clone git@github.com:org/repo /Users/glasnt/git/org/repo
@ -90,14 +90,13 @@ clone(){
}
```
即使每次进入 `~/.bashrc` 文件时,我总是会忘记和傻笑,我也有一个“刷新上游”命令:
尽管我总是忘记它的存在,而且每次进到 `~/.bashrc` 文件里看到它时都会傻笑,但我还有一个“刷新上游”命令:
```
alias yoink="git checkout master && git fetch upstream master && git merge upstream/master"
```
给 Git 人的另一个辅助程序是全局忽略文件。在你的 `git config --global --list` 中,你应该看到一个 `core.excludesfile`。如果没有,请[创建一个][6],然后将你总是放到各个 `.gitignore` 文件中的内容填满它。作为 MacOS 上的 Python 开发人员,对我来说,这写内容是:
给 Git 一族的另一个助手是全局忽略文件。在你的 `git config --global --list` 中,你应该看到一个 `core.excludesfile`。如果没有,请[创建一个][6],然后将你总是放到各个 `.gitignore` 文件中的内容填满它。作为 MacOS 上的 Python 开发人员,对我来说,这些内容是:
```
.DS_Store     # macOS clutter
@ -107,11 +106,11 @@ __pycache__   # ... or source
*.swp         # ... nor any files open in vim
```
你可以在 [Gitignore.io][7] 或 GitHub 上的 [Gitignore 存储库][8]上找到其他建议。
你可以在 [Gitignore.io][7] 或 GitHub 上的 [Gitignore 存储库][8]上找到其他建议。
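顺带一提,如果你的 `git config --global --list` 里还没有 `core.excludesfile`,可以用类似下面的命令创建并启用一个全局忽略文件(文件路径仅为示例):

```
touch ~/.gitignore_global
git config --global core.excludesfile ~/.gitignore_global
```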
### 轮到你了
你最喜欢的辅助程序命令是什么?请在评论中分享。
你最喜欢的助手命令是什么?请在评论中分享。
--------------------------------------------------------------------------------
@ -120,7 +119,7 @@ via: https://opensource.com/article/20/1/bash-scripts-aliases
作者:[Katie McLaughlin][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,41 +1,42 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11846-1.html)
[#]: subject: (Keep a journal of your activities with this Python program)
[#]: via: (https://opensource.com/article/20/1/python-journal)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用这个 Python 程序记录你的活动
======
Jrnl 可以创建可搜索、带时间戳、可导出、加密的(如果需要)的日常活动日志。在我们的 20 个使用开源提升生产力的系列的第八篇文章中了解更多。
![Writing in a notebook][1]
> jrnl 可以创建可搜索、带时间戳、可导出、加密的(如果需要)的日常活动日志。在我们的 20 个使用开源提升生产力的系列的第八篇文章中了解更多。
![](https://img.linux.net.cn/data/attachment/album/202002/03/105455tx03zo2pu7woyusp.jpg)
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 jrnl 记录日志
在我的公司,许多人会在下班之前发送一个“一天结束”的状态。在有着许多项目和全球化的团队里,这是一个分享你已完成、未完成以及你需要哪些帮助的一个很好的方式。但有时候我太忙了,以至于我忘了做了什么。这时候就需要记录日志了。
在我的公司,许多人会在下班之前在 Slack 上发送一个“一天结束”的状态。在有着许多项目和全球化的团队里,这是一个分享你已完成、未完成以及你需要哪些帮助的一个很好的方式。但有时候我太忙了,以至于我忘了做了什么。这时候就需要记录日志了。
![jrnl][2]
打开一个文本编辑器并在你做一些事的时候添加一行很容易。但是在需要找出你在什么时候做的笔记,或者要快速提取相关的行时会有挑战。幸运的是,[jrnl][3] 可以提供帮助
打开一个文本编辑器并在你做一些事的时候添加一行很容易。但是在需要找出你在什么时候做的笔记,或者要快速提取相关的行时会有挑战。幸运的是,[jrnl][3] 可以提供帮助。
Jrnl 能让你在命令行中快速输入条目、搜索过去的条目并导出为 HTML 和 Markdown 等富文本格式。你可以有多个日志,这意味着你可以将工作条目与私有条目分开。它将条目存储为纯文本,因此即使 jrnl 停止工作,数据也不会丢失。
jrnl 能让你在命令行中快速输入条目、搜索过去的条目并导出为 HTML 和 Markdown 等富文本格式。你可以有多个日志,这意味着你可以将工作条目与私有条目分开。它将条目存储为纯文本,因此即使 jrnl 停止工作,数据也不会丢失。
由于 jrnl 是一个 Python 程序,最简单的安装方法是使用 **pip3 install jrnl**。这将确保你获得最新和最好的版本。第一次运行它会询问一些问题,接下来就能正常使用。
由于 jrnl 是一个 Python 程序,最简单的安装方法是使用 `pip3 install jrnl`。这将确保你获得最新和最好的版本。第一次运行它会询问一些问题,接下来就能正常使用。
![jrnl's first run][4]
现在,每当你需要做笔记或记录日志时,只需输入 **jrnl &lt;some text&gt;**,它将带有时间戳的记录保存到默认文件中。你可以使用 **jrnl -on YYYY-MM-DD** 搜索特定日期条目,**jrnl -from YYYY-MM-DD** 搜索在那日期之后的条目,以及用 **jrnl -to YYYY-MM-DD** 搜索自那日期之后的条目。搜索词可以与 **-and** 参数结合使用,允许像 **jrnl -from 2019-01-01 -and -to 2019-12-31** 这类搜索。
现在,每当你需要做笔记或记录日志时,只需输入 `jrnl <some text>`,它将带有时间戳的记录保存到默认文件中。你可以使用 `jrnl -on YYYY-MM-DD` 搜索特定日期条目,`jrnl -from YYYY-MM-DD` 搜索在那日期之后的条目,以及用 `jrnl -to YYYY-MM-DD` 搜索到那日期的条目。搜索词可以与 `-and` 参数结合使用,允许像 `jrnl -from 2019-01-01 -and -to 2019-12-31` 这类搜索。
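结合上面的语法,一个最小的使用示意如下(日期与内容均为虚构):

```
jrnl "和全球团队同步了项目进度"                # 新增一条带时间戳的记录
jrnl -on 2020-01-15                            # 查看某一天的条目
jrnl -from 2020-01-01 -and -to 2020-01-31      # 查看某个日期范围内的条目
```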
你还可以使用 **\--edit** 标志编辑日志中的条目。开始之前,通过编辑文件 **~/.config/jrnl/jrnl.yaml** 来设置默认编辑器。你还可以指定使用什么日志文件、用于标签的特殊字符以及一些其他选项。现在,重要的是设置编辑器。我使用 Vimjrnl 的文档中有一些使用其他编辑器如 VSCode 和 Sublime Text 的[有用提示][5]
你还可以使用 `--edit` 标志编辑日志中的条目。开始之前,通过编辑文件 `~/.config/jrnl/jrnl.yaml` 来设置默认编辑器。你还可以指定日志使用什么文件、用于标签的特殊字符以及一些其他选项。现在,重要的是设置编辑器。我使用 Vimjrnl 的文档中有一些使用其他编辑器如 VSCode 和 Sublime Text 的[有用提示][5]
![Example jrnl config file][6]
Jrnl 还可以加密日志文件。通过设置全局 **encrypt** 变量,你将告诉 jrnl 加密你定义的所有日志。还可在配置文件中的每个文件中设置 **encrypt: true** 来加密文件。
jrnl 还可以加密日志文件。通过设置全局 `encrypt` 变量,你将告诉 jrnl 加密你定义的所有日志。还可在配置文件中的针对文件设置 `encrypt: true` 来加密文件。
```
journals:
@ -46,7 +47,7 @@ journals:
    encrypt: true
```
如果日志尚未加密,系统将提示你输入对它任何操作的密码。日志文件将加密保存在磁盘上,免受窥探。[jrnl 文档][7] 中包含其工作原理、使用哪些加密方式等的更多信息。
如果日志尚未加密,系统将提示你输入对它进行任何操作的密码。日志文件将加密保存在磁盘上,免受窥探。[jrnl 文档][7] 中包含其工作原理、使用哪些加密方式等的更多信息。
![Encrypted jrnl file][8]
@ -59,7 +60,7 @@ via: https://opensource.com/article/20/1/python-journal
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,32 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (mengxinayan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11849-1.html)
[#]: subject: (Showing memory usage in Linux by process and user)
[#]: via: (https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
按照进程和用户查看Linux系统中的内存使用情况
查看 Linux 系统中进程和用户的内存使用情况
======
有一些命令可以用来检查 Linux 系统中的内存使用情况,下面是一些更好的命令
[Fancycrave][1] [(CC0)][2]
有许多工具可以查看 Linux 系统中的内存使用情况。一些命令被广泛使用,比如 **free**, **ps** 。而另一些命令允许通过多种方式展示系统的性能统计信息,比如 **top** 。在这篇文章中,我们将介绍一些命令以帮助你确定当前占用着最多内存资源的用户或者进程。
> 有一些命令可以用来检查 Linux 系统中的内存使用情况,下面是一些更好的命令。
![Fancycrave][1]
有许多工具可以查看 Linux 系统中的内存使用情况。一些命令被广泛使用,比如 `free`、`ps`。而另一些命令允许通过多种方式展示系统的性能统计信息,比如 `top`。在这篇文章中,我们将介绍一些命令以帮助你确定当前占用着最多内存资源的用户或者进程。
下面是一些按照进程查看内存使用情况的命令:
### 使用 top
### 按照进程查看内存使用情况
**top** 是最好的查看内存使用情况的命令之一。为了查看哪个进程使用着最多的内存,一个简单的办法就是启动 **top** ,然后按下 **shift+m** 这样便可以查看按照内存占用百分比从高到底排列的进程。当你按下了 **shift+m** ,你的 top 应该会得到类似于下面这样的输出结果:
#### 使用 top
`top` 是最好的查看内存使用情况的命令之一。为了查看哪个进程使用着最多的内存,一个简单的办法就是启动 `top`,然后按下 `shift+m`,这样便可以查看按照内存占用百分比从高到底排列的进程。当你按下了 `shift+m` ,你的 `top` 应该会得到类似于下面这样的输出结果:
```
$top
@ -54,11 +50,11 @@ MiB Swap: 2048.0 total, 2045.7 free, 2.2 used. 3053.5 avail Mem
2373 root 20 0 150408 57000 9924 S 0.3 0.9 10:15.35 nessusd
```
注意 **%MEM** 排序。 列表的大小取决于你的窗口大小,但是占据着最多的内存的进程将会显示在列表的顶端。
注意 `%MEM` 排序。列表的大小取决于你的窗口大小,但是占据着最多的内存的进程将会显示在列表的顶端。
### 使用 ps
#### 使用 ps
**ps** 命令中的一列用来展示每个进程的内存使用情况。为了展示和查看哪个进程使用着最多的内存,你可以将 **ps** 命令的结果传递给 **sort** 命令。下面是一个有用的演示
`ps` 命令中的一列用来展示每个进程的内存使用情况。为了展示和查看哪个进程使用着最多的内存,你可以将 `ps` 命令的结果传递给 `sort` 命令。下面是一个有用的示例
```
$ ps aux | sort -rnk 4 | head -5
@ -69,7 +65,7 @@ nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefo
nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4
```
在上面的例子中(文中已截断),sort 命令使用了 **-r** 选项(反转),**-n** 选项(数字值),**-k** 选项(关键字),使 sort 命令对 ps 命令的结果按照第四列(内存使用情况)中的数字逆序进行排列并输出。如果我们首先显示 **ps** 命令的标题,那么将会便于查看。
在上面的例子中(文中已截断),`sort` 命令使用了 `-r` 选项(反转)、`-n` 选项(数字值)、`-k` 选项(关键字),使 `sort` 命令对 `ps` 命令的结果按照第四列(内存使用情况)中的数字逆序进行排列并输出。如果我们首先显示 `ps` 命令的标题,那么将会便于查看。
```
$ ps aux | head -1; ps aux | sort -rnk 4 | head -5
@ -81,7 +77,7 @@ nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefo
nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4
```
如果你喜欢这个命令,你可以用下面的命令为他指定一个别名,如果你想一直使用它,不要忘记把该命令添加到你的 ~/.bashrc 文件中。
如果你喜欢这个命令,你可以用下面的命令为它指定一个别名,如果你想一直使用它,不要忘记把它添加到你的 `~/.bashrc` 文件中。
```
$ alias mem-by-proc="ps aux | head -1; ps aux | sort -rnk 4"
@ -89,11 +85,13 @@ $ alias mem-by-proc="ps aux | head -1; ps aux | sort -rnk 4"
下面是一些根据用户查看内存使用情况的命令:
### 使用 top
### 按用户查看内存使用情况
#### 使用 top
按照用户检查内存使用情况会更复杂一些,因为你需要找到一种方法把用户所拥有的所有进程统计为单一的内存使用量。
如果你只想查看单个用户进程使用情况, **top** 命令可以采用与上文中同样的方法进行使用。只需要添加 -U 选项并在其后面指定你要查看的用户名,然后按下 **shift+m** 便可以按照内存使用有多到少进行查看。
如果你只想查看单个用户进程使用情况,`top` 命令可以采用与上文中同样的方法进行使用。只需要添加 `-U` 选项并在其后面指定你要查看的用户名,然后按下 `shift+m` 便可以按照内存使用由多到少进行查看。
```
$ top -U nemo
@ -115,9 +113,9 @@ MiB Swap: 2048.0 total, 2042.7 free, 5.2 used. 2812.0 avail Mem
32533 nemo 20 0 2389088 102532 76808 S 0.0 1.7 0:01.79 WebExtensions
```
### 使用 ps
#### 使用 ps
你依旧可以使用 **ps** 命令通过内存使用情况来排列某个用户的进程。在这个例子中,我们将使用 **grep** 命令来筛选得到某个用户的所有进程。
你依旧可以使用 `ps` 命令通过内存使用情况来排列某个用户的进程。在这个例子中,我们将使用 `grep` 命令来筛选得到某个用户的所有进程。
```
$ ps aux | head -1; ps aux | grep ^nemo| sort -rnk 4 | more
@ -131,7 +129,7 @@ nemo 29527 3.9 3.7 2736924 227448 ? Ssl 08:50 4:11 /usr/bin/gnome-
```
### 使用 ps 和其他命令的搭配
如果你想比较某个用户与其他用户内存使用情况将会比较复杂。在这种情况中,创建并排序一个按照用户总的内存使用量是一个不错的技术,但是它需要做一些更多的工作,并涉及到许多命令。在下面的脚本中,我们使用 **ps aux | grep -v COMMAND | awk '{print $1}' | sort -u** 命令得到了用户列表。其中包含了系统用户比如 **syslog** 。我们对每个任务使用 **awk** 命令以收集每个用户总的内存使用情况。在最后一步中,我们展示每个用户总的内存使用量(按照从大到小的顺序)。
如果你想比较某个用户与其他用户的内存使用情况,将会更复杂一些。在这种情况下,按用户汇总内存使用量并排序是一个不错的方法,但它需要做更多的工作,并涉及到许多命令。在下面的脚本中,我们使用 `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u` 命令得到了用户列表,其中包含了系统用户,比如 `syslog`。我们对每个用户使用 `awk` 命令来汇总其总的内存使用情况。在最后一步中,我们按照从大到小的顺序展示每个用户总的内存使用量。
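下面先给出一个按照这一思路实现的简单示意脚本(并非原文脚本,输出格式为假设,仅供参考):

```
#!/bin/bash
# 按用户汇总内存占用的示意脚本
for user in $(ps aux | grep -v COMMAND | awk '{print $1}' | sort -u)
do
    # 将该用户所有进程的 %MEM(第 4 列)加总
    ps aux | awk -v u="$user" '$1 == u { sum += $4 } END { printf "%-12s %6.1f%%\n", u, sum }'
done | sort -rnk 2
```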
```
#!/bin/bash
@ -171,8 +169,6 @@ $ ./show_user_mem_usage
在 Linux 中有许多方法可以报告内存使用情况。通过一些精心设计的工具和命令,可以找出占用内存最多的进程或者用户。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html
@ -180,13 +176,13 @@ via: https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/37LPYOkEE2o
[1]: https://images.idgesg.net/images/article/2018/06/chips_processors_memory_cards_by_fancycrave_cc0_via_unsplash_1200x800-100760955-large.jpg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.facebook.com/NetworkWorld/


@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IBM's CEO Virginia Rometty to be replaced by its cloud, Red Hat chiefs)
[#]: via: (https://www.networkworld.com/article/3518795/ibms-ceo-virginia-rometty-to-be-replaced-by-its-cloud-red-hat-chiefs.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
IBM's CEO Virginia Rometty to be replaced by its cloud, Red Hat chiefs
======
IBM cloud leader Arvind Krishna and Red Hat CEO Jim Whitehurst, to take reins from long-time CEO
IBM
If anyone was still wondering how serious IBM is about being a major cloud player, that question was resoundingly answered this week when it named its current cloud and cognitive-software leader Arvind Krishna and Red Hat CEO Jim Whitehurst to be CEO and president, respectively, replacing long-time CEO Virginia Rometty.
Krishna, 57, was a principal architect of IBM's $34 billion acquisition of Red Hat last year and is currently IBM's senior vice president of Cloud and Cognitive Software, which has become the company's palpable future.
The [Red Hat acquisition][2] not only made Big Blue a bigger open-source and enterprise-software player, but mostly it got IBM into the lucrative hybrid-cloud business, targeting huge cloud competitors Google, Amazon and Microsoft, among others. Gartner says that market will be worth $240 billion by next year.
In its most recent financial call, IBM talked up the successes of its cloud and Red Hat growth. For example, its total cloud revenue of $6.8 billion was up 21% year over year, and Red Hat's normalized revenue was up 24%, eclipsing $1 billion in a quarter for the first time, IBM stated.
“The next chapter of cloud will be driven by mission-critical workloads managed in a hybrid, multi-cloud environment. This will be based on a foundation of Linux, with containers and Kubernetes. This quarter we had strong performance in RHEL and OpenShift,”  said Jim Kavanaugh IBM senior vice president and chief financial officer (a full transcript of that financial call is available from Seeking Alpha [here][4].)  “As we look forward, the largest hybrid-cloud opportunity is in services, advising clients on architectural choices, moving workloads, building new applications and of course managing them.”
In announcing the leadership transition, which will occur April 6, [Rometty wrote of Krishna][5]: "He is a brilliant technologist who has played a significant role in developing our key technologies such as artificial intelligence, cloud, quantum computing and blockchain. He is also a superb operational leader, able to win today while building the business of tomorrow."
Under Rometty, who was named CEO in 2012, IBM has acquired 65 companies, reinvented more than 50% of IBM's portfolio, built a $21 billion hybrid-cloud business and established IBM's position in AI, quantum computing and blockchain, IBM stated. Rometty will remain as executive chairman until the end of 2020 and then retire after some 40 years at the company. 
Meanwhile, Rometty had this to say about Whitehurst, 52, who is currently IBM senior vice president and CEO of Red Hat: "Jim is also a seasoned leader who has positioned Red Hat as the world's leading provider of open-source enterprise IT software solutions and services, and has been quickly expanding the reach and benefit of that technology to an even wider audience as part of IBM.”
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518795/ibms-ceo-virginia-rometty-to-be-replaced-by-its-cloud-red-hat-chiefs.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/article/3317517/the-ibm-red-hat-deal-what-it-means-for-enterprises.html
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://seekingalpha.com/article/4318204-international-business-machines-corporation-ibm-q4-2019-results-earnings-call-transcript
[5]: https://newsroom.ibm.com/2020-01-30-Arvind-Krishna-Elected-IBM-Chief-Executive-Officer
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world


@ -0,0 +1,80 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 open governance questions every project needs to answer)
[#]: via: (https://opensource.com/article/20/2/open-source-projects-governance)
[#]: author: (Gordon Haff https://opensource.com/users/ghaff)
6 open governance questions every project needs to answer
======
Open governance insights from Chris Aniszczyk, VP of Developer Relations
at the Linux Foundation.
![Two government buildings][1]
When we think about what needs to be in place for an open source project to function, one of the first things to come to mind is probably a license. For one thing, absent an approved [Open Source Initiative (OSI) license][2], a project isn't truly open source in the minds of many. Furthermore, the choice to use a copyleft license like the GNU General Public License (GPL) or a permissive license like Massachusetts Institute of Technology (MIT) can affect the sort of community that grows up around and uses the project.
However, Chris Aniszczyk, VP of Developer Relations at the Linux Foundation, argues that it's equally important to consider the **open governance of a project** because the license itself doesn't actually tell you how the project is governed.
These are some of the questions that Aniszczyk argues need to be answered. He adds that answering these questions before disputes arise, and answering them in a way that's viewed as open and fair to all participants, leads to projects that tend to be more successful long term, especially as they grow in size.
### 6 open governance questions for every project
1. Who makes the decisions?
2. How are maintainers added?
3. Who owns the rights to the domain?
4. Who owns the rights to the trademarks?
5. How are those things governed?
6. Who owns how the build system works?
However, while all of these questions should be considered, there isn't one correct way of answering them. Different projects—and foundations hosting projects—take different approaches, whether to accommodate the requirements of a particular community or just for historical reasons.
The latter is often the case when a project uses something often called the Benevolent Dictator for Life (BDFL) model, in which one person—usually the project's founder—generally has the final say on major project decisions. Many projects end up here by default—perhaps most notably the Linux kernel. However, Red Hat's Joe Brockmeier observed to me that it's mostly considered an anti-pattern at this point. "While a few BDFL-driven projects have succeeded to do well, others have stumbled with that approach," he says.
Aniszczyk observes that "foundations have different sets of bylaws, charters, and how they're structured, and there are fascinating differences between these organizations. Like Apache is very famous for the Apache Way, and that's how they expect projects to operate. They very much have guardrails about how releases are done. [It's] kind of an incubator process where every project starts way before it graduates to a top-level project. In terms of how projects are governed, it's almost like an infinite amount of approaches," he concludes.
### Minimum requirements
That said, Aniszczyk lists some minimum requirements.
"Our pattern, at least, in many Linux Foundation and Cloud Native Computing Foundation (CNCF) projects, is a _governance.md_ file, which describes how decisions are made, how things are governed, how maintainers are added, removed, how are sub-projects added, removed, etc., how releases are done. That would be step one," he says.
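As a rough illustration only (the headings below are made up for this article, not an official Linux Foundation or CNCF template), the skeleton of such a file might be created like this:

```
# Illustrative only: a minimal governance.md outline
cat > governance.md <<'EOF'
# Governance

## How decisions are made
## How maintainers are added and removed
## How sub-projects are added and removed
## How releases are done
EOF
```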
#### Ownership
Secondly, he doesn't "think you could do open governance without assets being neutrally owned. At the end of the day, someone owns the domain, the rights to the trademark, some of the copyright, potentially. There are many great organizations out there that are super lightweight. There are things like the Apache Foundation, Software in the Public Interest, and the Software Freedom Conservancy."
Aniszczyk also sees some common approaches as at least potential anti-patterns. A key example is contributor license agreements (CLA), which define the terms under which intellectual property, like code, is contributed to a project. He says that if a company wants "to build a product or use a dual license type model, that's a very valid reason for a CLA. Otherwise, I view CLA as a high friction tool for developers."
#### Developer Certificate of Origin
Instead, he generally encourages people to "use what we call the 'Developer Certificate of Origin.' It's how the Linux kernel works, where basically it takes all the basic things that most CLAs do, which would be like, Did I write this code? Did I not copy it elsewhere? Do I have the rights to give this to you, and you sign off on? It's been a very successful model played out in the kernel and many other ecosystems. I'm generally not really supportive of having CLAs unless there's a real strict business need."
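In practice, the sign-off that the DCO relies on is just a trailer line on each commit, which `git` can add for you; the commit message and identity below are made up for illustration:

```
# Adds a Signed-off-by trailer using the name/email from your git config
git commit -s -m "Fix race condition in the scheduler"

# The resulting commit message ends with a line such as:
#   Signed-off-by: Jane Developer <jane@example.com>
```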
#### Naming a project
He also sees a lot of what he considers mistakes in naming. "Project branding is super important. There's a common pattern where people will start a project, it could be within a company or yourself, or you have a startup, and you'll call it, let's say, 'Docker.' Then you have Docker the project, and you have Docker, the company. Then you also have Docker the product or Docker the enterprise product. All those things serve different audiences. It leads to confusion because I have an inherent belief that the name of something has a value proposition attached to it. Please name your company separate from your project, from your product," he argues.
#### Trust
Finally, Aniszczyk points to the role of open governance in building trust and confidence that a company can't just take a project unilaterally for its own ends. "Trust is table stakes in order to build strong communities because, without openly governed institutions in projects, trust is very hard to come by," he concludes.
_The Innovate @Open podcast episode from which Chris Aniszczyk's remarks were drawn can be heard [here][3]._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/open-source-projects-governance
作者:[Gordon Haff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ghaff
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_lawdotgov2.png?itok=n36__lZj (Two government buildings)
[2]: https://opensource.org/licenses
[3]: https://grhpodcasts.s3.amazonaws.com/cra1911.mp3


@ -0,0 +1,127 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bulletin Board Systems: The VICE Exposé)
[#]: via: (https://twobithistory.org/2020/02/02/bbs.html)
[#]: author: (Two-Bit History https://twobithistory.org)
Bulletin Board Systems: The VICE Exposé
======
By now, you have almost certainly heard of the dark web. On sites unlisted by any search engine, in forums that cannot be accessed without special passwords or protocols, criminals and terrorists meet to discuss conspiracy theories and trade child pornography.
We have reported before on the dark webs [“hurtcore” communities][1], its [human trafficking markets][2], its [rent-a-hitman websites][3]. We have explored [the challenges the dark web presents to regulators][4], the rise of [dark web revenge porn][5], and the frightening size of [the dark web gun trade][6]. We have kept you informed about that one dark web forum where you can make like Walter White and [learn how to manufacture your own drugs][7], and also about—thanks to our foreign correspondent—[the Chinese dark web][8]. We have even attempted to [catalog every single location on the dark web][9]. Our coverage of the dark web has been nothing if not comprehensive.
But I wanted to go deeper.
We know that below the surface web is the deep web, and below the deep web is the dark web. It stands to reason that below the dark web there should be a deeper, darker web.
A month ago, I set out to find it. Unsure where to start, I made a post on _Reddit_, a website frequented primarily by cosplayers and computer enthusiasts. I asked for a guide, a Styx ferryman to bear me across to the mythical underworld I sought to visit.
Only minutes after I made my post, I received a private message. "If you want to see it, I'll take you there," wrote _Reddit_ user FingerMyKumquat. "But I'll warn you just once—it's not pretty to see."
### Getting Access
This would not be like visiting Amazon to shop for toilet paper. I could not just enter an address into the address bar of my browser and hit go. In fact, as my Charon informed me, where we were going, there are no addresses. At least, no web addresses.
But where exactly were we going? The answer: Back in time. The deepest layer of the internet is also the oldest. Down at this deepest layer exists a secret society of “bulletin board systems,” a network of underground meetinghouses that in some cases have been in continuous operation since the 1980s—since before Facebook, before Google, before even stupidvideos.com.
To begin, I needed to download software that could handle the ancient protocols used to connect to the meetinghouses. I was told that bulletin board systems today use an obsolete military protocol called Telnet. Once upon a time, though, they operated over the phone lines. To connect to a system back then you had to dial its _phone number_.
The software I needed was called [SyncTerm][10]. It was not available on the App Store. In order to install it, I had to compile it. This is a major barrier to entry, I am told, even to veteran computer programmers.
When I had finally installed SyncTerm, my guide said he needed to populate my directory. I asked what that was a euphemism for, but was told it was not a euphemism. Down this far, there are no search engines, so you can only visit the bulletin board systems you know how to contact. My directory was the list of bulletin board systems I would be able to contact. My guide set me up with just seven, which he said would be more than enough.
_More than enough for what,_ I wondered. Was I really prepared to go deeper than the dark web? Was I ready to look through this window into the black abyss of the human soul?
![][11] _The vivid blue interface of SyncTerm. My directory of BBSes on the left._
### Heatwave
I decided first to visit the bulletin board system called "Heatwave," which I imagined must be a hangout for global warming survivalists. I "dialed" in. The next thing I knew, I was being asked if I wanted to create a user account. I had to be careful to pick an alias that would be inconspicuous in this sub-basement of the internet. I considered "DonPablo," and "z3r0day," but finally chose "ripper"—a name I could remember because it is also the name of my great-aunt Meredith's Shih Tzu. I was then asked where I was dialing from; I decided "xxx" was the right amount of enigmatic.
And then—I was in. Curtains of fire rolled down my screen and dispersed, revealing the main menu of the Heatwave bulletin board system.
![][12] _The main menu of the Heatwave BBS._
I had been told that even in the glory days of bulletin board systems, before the rise of the world wide web, a large system would only have several hundred users or so. Many systems were more exclusive, and most served only users in a single telephone area code. But how many users dialed the “Heatwave” today? There was a main menu option that read “(L)ast Few Callers,” so I hit “L” on my keyboard.
My screen slowly filled with a large table, listing all of the system's "callers" over the last few days. Who were these shadowy outcasts, these expert hackers, these denizens of the digital demimonde? My eyes scanned down the list, and what I saw at first confused me: There was a "Dan," calling from St. Louis, MO. There was also a "Greg Miller," calling from Portland, OR. Another caller claimed he was "George" calling from Campellsburg, KY. Most of the entries were like that.
It was a joke, of course. A meme, a troll. It was normcore fashion in noms de guerre. These were thrill-seeking Palo Alto adolescents on Adderall making fun of the surface web. They weren't fooling me.
I wanted to know what they talked about with each other. What cryptic colloquies took place here, so far from public scrutiny? My index finger, with ever so slight a tremble, hit “M” for “(M)essage Areas.”
Here, I was presented with a choice. I could enter the area reserved for discussions about “T-99 and Geneve,” which I did not dare do, not knowing what that could possibly mean. I could also enter the area for discussions about “Other,” which seemed like a safe place to start.
The system showed me message after message. There was advice about how to correctly operate a leaf-blower, as well as a protracted debate about the depth of the Strait of Hormuz relative to the draft of an aircraft carrier. I assumed the real messages were further on, and indeed I soon spotted what I was looking for. The user “Kevin” was complaining to other users about the side effects of a drug called Remicade. This was not a drug I had heard of before. Was it some powerful new synthetic stimulant? A cocktail of other recreational drugs? Was it something I could bring with me to impress people at the next VICE holiday party?
I googled it. Remicade is used to treat rheumatoid arthritis and Crohn's disease.
In reply to the original message, there was some further discussion about high resting heart rates and mechanical heart valves. I decided that I had gotten lost and needed to contact FingerMyKumquat. “Finger,” I messaged him, “What is this shit Im looking at here? I want the real stuff. I want blackmail and beheadings. Show me the scum of the earth!”
“Perhaps youre ready for the SpookNet,” he wrote back.
### SpookNet
Each bulletin board system is an island in the television-static ocean of the digital world. Each systems callers are lonely sailors come into port after many a month plying the seas.
But the bulletin board systems are not entirely disconnected. Faint phosphorescent filaments stretch between the islands, links in the special-purpose networks that were constructed—before the widespread availability of the internet—to propagate messages from one system to another.
One such network is the SpookNet. Not every bulletin board system is connected to the SpookNet. To get on, I first had to dial “Reality Check.”
![][13] _The Reality Check BBS._
Once I was in, I navigated my way past the main menu and through the SpookNet gateway. What I saw then was like a catalog index for everything stored in that secret Pentagon warehouse from the end of the _X-Files_ pilot. There were message boards dedicated to UFOs, to cryptography, to paranormal studies, and to “End Times and the Last Days.” There was a board for discussing “Truth, Polygraphs, and Serums,” and another for discussing “Silencers of Information.” Here, surely, I would find something worth writing about in an article for VICE.
I browsed and I browsed. I learned about which UFO documentaries are worth watching on Netflix. I learned that “paper mill” is a derogatory term used in the intelligence community (IC) to describe individuals known for constantly trying to sell “explosive” or “sensitive” documents—as in the sentence, offered as an example by one SpookNet user, “Damn, here comes that paper mill Juan again.” I learned that there was an effort afoot to get two-factor authentication working for bulletin board systems.
“These are just a bunch of normal losers,” I finally messaged my guide. “Mostly they complain about anti-vaxxers and verses from the Quran. This is just _Reddit_!”
“Huh,” he replied. “When you said scum of the earth, did you mean something else?”
I had one last idea. In their heyday, bulletin board systems were infamous for being where everyone went to download illegal, cracked computer software. An entire subculture evolved, with gangs of software pirates competing to be the first to crack a new release. The first gang to crack the new software would post their “warez” for download along with a custom piece of artwork made using lo-fi ANSI graphics, which served to identify the crack as their own.
I wondered if there were any old warez to be found on the Reality Check BBS. I backed out of the SpookNet gateway and keyed my way to the downloads area. There were many files on offer there, but one in particular caught my attention: a 5.3 megabyte file just called “GREY.”
I downloaded it. It was a complete PDF copy of E. L. James _50 Shades of Grey_.
_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][14] on Twitter or subscribe to the [RSS feed][15] to make sure you know when a new post is out._
_Previously on TwoBitHistory…_
> I first heard about the FOAF (Friend of a Friend) standard back when I wrote my post about the Semantic Web. I thought it was a really interesting take on social networking and I've wanted to write about it since. Finally got around to it!<https://t.co/VNwT8wgH8j>
>
> — TwoBitHistory (@TwoBitHistory) [January 5, 2020][16]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2020/02/02/bbs.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://www.vice.com/en_us/article/mbxqqy/a-journey-into-the-worst-corners-of-the-dark-web
[2]: https://www.vice.com/en_us/article/vvbazy/my-brief-encounter-with-a-dark-web-human-trafficking-site
[3]: https://www.vice.com/en_us/article/3d434v/a-fake-dark-web-hitman-site-is-linked-to-a-real-murder
[4]: https://www.vice.com/en_us/article/ezv85m/problem-the-government-still-doesnt-understand-the-dark-web
[5]: https://www.vice.com/en_us/article/53988z/revenge-porn-returns-to-the-dark-web
[6]: https://www.vice.com/en_us/article/j5qnbg/dark-web-gun-trade-study-rand
[7]: https://www.vice.com/en_ca/article/wj374q/inside-the-dark-web-forum-that-tells-you-how-to-make-drugs
[8]: https://www.vice.com/en_us/article/4x38ed/the-chinese-deep-web-takes-a-darker-turn
[9]: https://www.vice.com/en_us/article/vv57n8/here-is-a-list-of-every-single-possible-dark-web-site
[10]: http://syncterm.bbsdev.net/
[11]: https://twobithistory.org/images/sync.png
[12]: https://twobithistory.org/images/heatwave-main-menu.png
[13]: https://twobithistory.org/images/reality.png
[14]: https://twitter.com/TwoBitHistory
[15]: https://twobithistory.org/feed.xml
[16]: https://twitter.com/TwoBitHistory/status/1213920921251131394?ref_src=twsrc%5Etfw


@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Private equity firms are gobbling up data centers)
[#]: via: (https://www.networkworld.com/article/3518817/private-equity-firms-are-gobbling-up-the-data-center-market.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Private equity firms are gobbling up data centers
======
Private equity firms accounted for 80% of all data-center acquisitions in 2019. Is that a good thing?
scanrail / Getty Images
Merger and acquisition activity surrounding [data-center][1] facilities is starting to resemble the Oklahoma Land Rush, and private-equity firms are taking most of the action.
New research from Synergy Research Group saw more than 100 deals in 2019, a 50% growth over 2018, and private-equity companies accounted for 80% of them.
M&A activity broke the 100 transaction mark for the first time in 2019, and that comes despite a 45% decline in public company activity, such as the massive Digital Realty Trust [purchase][3] of Interxion. At the same time, the size of the deals dropped in 2019, with fewer worth $1 billion or more vs. 2018, and the average deal value fell 24% vs. 2018.
Since 2015, there have been approximately 350 data-center deals, both public and private, with a total value of $75 billion, according to Synergy. Over this period, private equity buyers have accounted for 57% of the deal volume. Deals were roughly a 50-50 split until 2018 when public company purchases began to trail off.
Anecdotally, I've heard one reason for the decline in big deals is that there are no more big purchases to be had, at least in the US. DRT/Interxion is an exception, and Interxion is a foreign company. Other big deals include Equinix purchasing Verizon's data centers for $3.6 billion in 2017 and AT&T selling its data centers to private equity company Brookfield in 2019. There just isn't much left to sell.
The question becomes: is this necessarily a good thing? Private equity firms have something of a well-earned bad reputation for buying up companies, sucking all the profit out of them and discarding the empty husk.
But John Dinsdale, chief analyst for Synergy, said not to worry, that the private equity firms grabbing data centers are looking to grow them. “This is a heavily infrastructure-oriented business where what you can take out is pretty directly related to what you put in. A lot of these equity investors are looking to build something rather than quickly flipping the assets,” he said via e-mail.
He added, "In these types of business there isn't that much manpower, HQ or overhead there to be stripped out." Which is true. Data centers are pretty low-staffed. It was a national news item several years ago that Apple's $1 billion data center in rural North Carolina would only [create 50 jobs][5]. That's true for most data centers.
At least one big player, Digital Realty Trust, was formed in 2004 after private-equity firm GI Partners bought out 21 data centers from a bankruptcy. DRT has grown to 214 centers in the U.S. and Europe.
So in this case, a private equity firm buying out your data center provider might prove to be a good thing.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518817/private-equity-firms-are-gobbling-up-the-data-center-market.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3451437/digital-realty-acquisition-of-interxion-reshapes-data-center-landscape.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.cultofmac.com/132012/despite-huge-unemployment-rate-apples-1-billion-data-super-center-only-created-50-new-jobs/
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world


@ -1,227 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Go About Linux Boot Time Optimisation)
[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)
How to Go About Linux Boot Time Optimisation
======
[![][1]][2]
_Booting an embedded device or a piece of telecommunication equipment quickly is crucial for time-critical applications and also plays a very major role in improving the user experience. This article gives some important tips on how to enhance the boot-up time of any device._
Fast booting or fast rebooting plays a crucial role in various situations. It is critical for an embedded system to boot up fast in order to maintain the high availability and better performance of all the services. Imagine a telecommunications device running a Linux operating system that does not have fast booting enabled. All the systems, services and the users dependent on that particular embedded device might be affected. It is really important that devices maintain high availability in their services, for which fast booting and rebooting play a crucial role.
A small failure or shutdown of a telecom device, even for a few seconds, can play havoc with countless users working on the Internet. Thus, it is really important for a lot of time-dependent devices and telecommunication devices to incorporate fast booting in their devices to help them get back to work quicker. Let us understand the Linux boot-up procedure from Figure 1.
![Figure 1: Boot-up procedure][3]
![Figure 2: Boot chart][4]
**Monitoring tools and the boot-up procedure**
A user should take note of a number of factors before making changes to a machine. These include the current booting speed of the machine and also the services, processes or applications that are taking up resources and increasing the boot-up time.
**Boot chart:** To monitor the boot-up speed and the various services that start while booting up, the user can install the boot chart using the following command:
```
sudo apt-get install pybootchartgui
```
Each time you boot up, the boot chart saves a _.png_ (portable network graphics) file in the log, which enables the user to view the _png_ files to get an understanding about the systems boot-up process and services. Use the following command for this purpose:
```
cd /var/log/bootchart
```
The user might need an application to view the _.png_ files. Feh is an X11 image viewer that targets console users. It doesnt have a fancy GUI, unlike most other image viewers, but it simply displays pictures. Feh can be used to view the _.png_ files. You can install it using the following command:
```
sudo apt-get install feh
```
You can view the _png_ files using _feh xxxx.png_.
Figure 2 shows the boot chart when a boot chart _png_ file is viewed.
However, a boot chart is not necessary for Ubuntu versions later than 15.10. To get very brief information regarding boot up speed, use the following command:
```
systemd-analyze
```
![Figure 3: Output of systemd-analyze][5]
Figure 3 shows the output of the command _systemd-analyze_.
The command _systemd-analyze blame_ is used to print a list of all running units based on the time they took to initialise. This information is very helpful and can be used to optimise boot-up times. _systemd-analyze blame_ doesn't display results for services with _Type=simple_, because systemd considers such services to be started immediately; hence, no measurement of the initialisation delays can be done.
![Figure 4: Output of systemd-analyze blame][6]
Figure 4 shows the output of _systemd-analyze_ blame.
The following command prints a tree of the time-critical chain of units:
```
systemd-analyze critical-chain
```
Figure 5 shows the output of the command _systemd-analyze critical-chain_.
![Figure 5: Output of systemd-analyze critical-chain][7]
**Steps to reduce the boot-up time**
Shown below are the various steps that can be taken to reduce boot-up time.
**BUM (Boot-Up-Manager):** BUM is a run level configuration editor that allows the configuration of _init_ services when the system boots up or reboots. It displays a list of every service that can be started at boot. The user can toggle individual services on and off. BUM has a very clean GUI and is very easy to use.
BUM can be installed in Ubuntu 14.04 using the following command:
```
sudo apt-get install bum
```
To install it in versions later than 15.10, download the packages from the link _<http://apt.ubuntu.com/p/bum>_.
Start with basic things and disable services related to the scanner and printer. You can also disable Bluetooth and all other unwanted devices and services if you are not using any of them. I strongly recommend that you study the basics about the services before disabling them, as it might affect the machine or operating system. Figure 6 shows the GUI of BUM.
![Figure 6: BUM][8]
**Editing the rc file:** To edit the rc file, you need to go to the rc directory. This can be done using the following command:
```
cd /etc/init.d
```
However, root privileges are needed to access _init.d_, which basically contains start/stop scripts that are used to control (start, stop, reload, restart) the daemon while the system is running or during boot.
The _rc_ file in _init.d_ is called a run control script. During booting, init executes the _rc_ script and plays its role. To improve the booting speed, we make changes to the _rc_ file. Open the _rc_ file (once you are in the _init.d_ directory) using any file editor.
For example, by entering _vim rc_, you can change the value of _CONCURRENCY=none_ to _CONCURRENCY=shell_. The latter allows certain startup scripts to be executed simultaneously, rather than serially.
In the latest versions of the kernel, the value should be changed to _CONCURRENCY=makefile_.
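As a rough illustration (not from the original article; back up the file first and use whichever value applies to your release), the same edit can also be made non-interactively:

```
sudo cp /etc/init.d/rc /etc/init.d/rc.bak
sudo sed -i 's/^CONCURRENCY=none/CONCURRENCY=makefile/' /etc/init.d/rc
```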
Figures 7 and 8 show the comparison of boot-up times before and after editing the rc file. The improvement in the boot-up speed can be noticed. The time to boot before editing the rc file was 50.98 seconds, whereas the time to boot after making the changes to the rc file is 23.85 seconds.
However, the above-mentioned changes dont work on operating systems later than the Ubuntu version 15.10, since the operating systems with the latest kernel use the systemd file and not the _init.d_ file any more.
![Figure 7: Boot speed before making changes to the rc file][9]
![Figure 8: Boot speed after making changes to the rc file][10]
**E4rat:** E4rat stands for e4 reduced access time (ext4 file system only). It is a project developed by Andreas Rid and Gundolf Kiefer. E4rat is an application that helps in achieving a fast boot with the help of defragmentation. It also accelerates application startups. E4rat eliminates both seek times and rotational delays using physical file reallocation. This leads to a high disk transfer rate.
E4rat is available as a .deb package and you can download it from its official website _<http://e4rat.sourceforge.net/>_.
Ubuntu's default ureadahead package conflicts with e4rat. So a few packages have to be removed using the following command:
```
sudo dpkg --purge ureadahead ubuntu-minimal
```
Now install the dependencies for e4rat using the following command:
```
sudo apt-get install libblkid1 e2fslibs
```
Open the downloaded _.deb_ file and install it. Boot data is now needed to be gathered properly to work with e4rat.
Follow the steps given below to get e4rat running properly and to increase the boot-up speed.
* Access the Grub menu while booting. This can be done by holding the shift button when the system is booting.
* Choose the option (kernel version) that is normally used to boot and press e.
* Look for the line starting with _linux /boot/vmlinuz_ and add the following code at the end of the line (hit space after the last letter of the sentence):
```
init=/sbin/e4rat-collect    (or try: quiet splash vt.handsoff=7 init=/sbin/e4rat-collect)
```
* Now press _Ctrl+x_ to continue booting. This lets e4rat collect data after booting. Work on the machine, open and close applications for the next two minutes.
* Access the log file by going to the e4rat folder and using the following command:
```
cd /var/log/e4rat
```
  * If you do not find any log file, repeat the above-mentioned process. Once the log file is there, access the Grub menu again and press e on your usual boot option.
  * Enter _single_ at the end of the same line that you edited before. This will take you to the command line. If a different menu appears asking for anything, choose Resume normal boot. If you don't get to the command prompt for some reason, hit Ctrl+Alt+F1.
* Enter your details once you see the login prompt.
* Now enter the following command:
```
sudo e4rat-realloc /var/lib/e4rat/startup.log
```
This process takes a while, depending on the machine's disk speed.
* Now restart your machine using the following command:
```
sudo shutdown -r now
```
* Now, we need to configure Grub to run e4rat at every boot.
  * Access the Grub configuration file using any editor, for example: _gksu gedit /etc/default/grub_.
  * Look for the line starting with **GRUB_CMDLINE_LINUX_DEFAULT=**, and add the following in between the quotes, before whatever options are already there:
```
init=/sbin/e4rat-preload
```
* It should look like this:
```
GRUB_CMDLINE_LINUX_DEFAULT="init=/sbin/e4rat-preload quiet splash"
```
  * Save and close the file, then update Grub using _sudo update-grub_.
* Reboot the system and you will find noticeable changes in boot speed.
Figures 9 and 10 show the differences in boot-up time before and after installing e4rat. The improvement in the boot-up speed can be noticed. The time taken to boot before using e4rat was 22.32 seconds, whereas the time taken to boot after using e4rat is 9.065 seconds.
![Figure 9: Boot speed before using e4rat][11]
![Figure 10: Boot speed after using e4rat][12]
**A few simple tweaks**
A good boot-up speed can also be achieved using very small tweaks, two of which are listed below.
**SSD:** Using solid-state drives rather than conventional hard disks or other storage devices will surely improve your booting speed. SSDs also help in achieving great speeds in transferring files and running applications.
**Disabling GUI:** The graphical user interface, desktop graphics and window animations take up a lot of resources. Disabling the GUI is another good way to achieve great boot-up speed.
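On systemd-based Ubuntu releases, one hedged way to try the GUI-free approach is to boot to the text (multi-user) target by default; the change is easy to revert:
```
# boot to a text console by default
sudo systemctl set-default multi-user.target

# switch back to the graphical desktop later
sudo systemctl set-default graphical.target
```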
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/
作者:[B Thangaraju][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/b-thangaraju/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?resize=696%2C496&ssl=1 (Screenshot from 2019-10-07 13-16-32)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?resize=350%2C302&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?resize=350%2C412&ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?resize=350%2C69&ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?resize=350%2C535&ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?resize=350%2C206&ssl=1
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?resize=350%2C449&ssl=1
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?resize=350%2C85&ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?resize=350%2C72&ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?resize=350%2C61&ssl=1
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?resize=350%2C61&ssl=1

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (mengxinayan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -396,7 +396,7 @@ via: https://www.2daygeek.com/linux-commands-check-memory-usage/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,63 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Read Reddit from the Linux terminal)
[#]: via: (https://opensource.com/article/20/1/open-source-reddit-client)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
Read Reddit from the Linux terminal
======
Take brief mental breaks from your work with Reddit client Tuir in the
eleventh in our series on 20 ways to be more productive with open source
in 2020.
![Digital creative of a browser on the internet][1]
Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.
### Read Reddit with Tuir
Taking short breaks is essential in staying productive. One of the places I like to go when taking a break is [Reddit][2], which can be a great resource if you want it to be. I find all kinds of articles there about DevOps, productivity, Emacs, chickens, and some ChromeOS projects I play with. These discussions can be valuable. I also follow a couple of subreddits that are just pictures of animals because I like pictures of animals (and not just chickens), and sometimes after a long work session, what I really need are kitten pictures.
![/r/emacs in Tuir][3]
When I'm reading Reddit (and not just looking at pictures of baby animals), I use [Tuir][4], which stands for Terminal UI for Reddit. Tuir is a feature-complete Reddit client and can be run on any system that runs Python. Installation is done through the pip Python installer and is exceptionally painless.
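For reference, installation typically amounts to a single command, assuming the package is published on PyPI under the name tuir:
```
# install Tuir for the current user, then launch it
pip3 install --user tuir
tuir
```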
On its first run, Tuir will take you to the default article list on Reddit. The top and bottom of the screen have bars that list different commands. The top bar shows your location on Reddit, and the second line shows the commands filtered by the Reddit "Hot/New/Controversial/etc." categories. Filtering is invoked by pressing the number next to the filter you want to use.
![Filtering by Reddit's "top" category][5]
You can navigate through the list with the arrow keys, or with the **j**, **k**, **h**, and **l** keys, the same ones you use for Vi/Vim. The bottom bar has commands for navigating the app. If you want to jump to another subreddit, simply hit the **/** key to open a prompt and type the name of the subreddit you want to interact with.
![Logging in][6]
Some things aren't accessible unless you are logged in. Tuir will prompt you if you try to do something that requires logging in, like posting a new article (**c**) or up/down voting (**a** and **z**, respectively). To log in, press the **u** key. This will launch a browser to log in via OAuth2, and Tuir will save the token. Afterward, your username should appear in the top-right of the screen.
Tuir can also launch your browser to view images, load links, and so on. With a little tuning, it can even show images on the terminal (although I didn't manage to get that to work properly).
Overall, I'm pretty happy with Tuir for quickly catching up on Reddit when I need a break.
Tuir is one of two forks of the now-defunct [RTV][7]. The other is [TTRV][8], which isn't available via pip (yet) but has the same features. I'm looking forward to seeing how they differentiate themselves over time.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-reddit-client
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://www.reddit.com/
[3]: https://opensource.com/sites/default/files/uploads/productivity_11-1.png (/r/emacs in Tuir)
[4]: https://gitlab.com/ajak/tuir
[5]: https://opensource.com/sites/default/files/uploads/productivity_11-2.png (Filtering by Reddit's "top" category)
[6]: https://opensource.com/sites/default/files/uploads/productivity_11-3.png (Logging in)
[7]: https://github.com/michael-lazar/rtv
[8]: https://github.com/tildeclub/ttrv

View File

@ -1,72 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get your RSS feeds and podcasts in one place with this open source tool)
[#]: via: (https://opensource.com/article/20/1/open-source-rss-feed-reader)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
Get your RSS feeds and podcasts in one place with this open source tool
======
Keep up with your news feed and podcasts with Newsboat in the twelfth in
our series on 20 ways to be more productive with open source in 2020.
![Ship captain sailing the Kubernetes seas][1]
Last year, I brought you 19 days of new (to you) productivity tools for 2019. This year, I'm taking a different approach: building an environment that will allow you to be more productive in the new year, using tools you may or may not already be using.
### Access your RSS feeds and podcasts with Newsboat
RSS news feeds are an exceptionally handy way to keep up to date on various websites. In addition to Opensource.com, I follow the annual [SysAdvent][2] sysadmin tools feed, some of my favorite authors, and several webcomics. RSS readers allow me to "batch up" my reading, so I'm not spending every day on a bunch of different websites.
![Newsboat][3]
[Newsboat][4] is a terminal-based RSS feed reader that looks and feels a lot like the email program [Mutt][5]. It makes news reading easy and has a lot of nice features.
Installing Newsboat is pretty easy since it is included with most distributions (and Homebrew on MacOS). Once it is installed, adding the first feed is as easy as adding the URL to the **~/.newsboat/urls** file. If you are migrating from another feed reader and have an OPML file export of your feeds, you can import that file with:
```
newsboat -i </path/to/my/feeds.opml>
```
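If you are adding feeds by hand, the **~/.newsboat/urls** file is plain text with one feed URL per line and optional quoted tags; a minimal sketch (the URLs are only examples) is:
```
# append a couple of feeds to Newsboat's url list
cat >> ~/.newsboat/urls << 'EOF'
https://opensource.com/feed "tech"
https://example.com/podcast.rss "podcasts"
EOF
```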
After you've added your feeds, the Newsboat interface is _very_ familiar, especially if you've used Mutt. You can scroll up and down with the arrow keys, check for new items in a feed with **r**, check for new items in all feeds with **R**, press **Enter** to open a feed and select an article to read.
![Newsboat article list][6]
You are not limited to just the local URL list, though. Newsboat is also a client for news reading services like [Tiny Tiny RSS][7], ownCloud and Nextcloud News, and a few Google Reader successors. Details on that and a whole host of other configuration options are covered in [Newsboat's documentation][8].
![Reading an article in Newsboat][9]
#### Podcasts
Newsboat also provides [podcast support][10] through Podboat, an included application that facilitates downloading and queuing podcast episodes. While viewing a podcast feed in Newsboat, press **e** to add the episode to your download queue. All the information will be stored in a queue file in the **~/.newsboat** directory. Podboat reads this queue and downloads the episode(s) to your local drive. You can do this from the Podboat user interface (which looks and acts like Newsboat), or you can tell Podboat to download them all with **podboat -a**. As a podcaster and podcast listener, I think this is _really_ handy.
![Podboat][11]
Overall, Newsboat has some really great features and is a nice, lightweight alternative to web-based or desktop apps.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-rss-feed-reader
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://sysadvent.blogspot.com/
[3]: https://opensource.com/sites/default/files/uploads/productivity_12-1.png (Newsboat)
[4]: https://newsboat.org
[5]: http://mutt.org/
[6]: https://opensource.com/sites/default/files/uploads/productivity_12-2.png (Newsboat article list)
[7]: https://tt-rss.org/
[8]: https://newsboat.org/releases/2.18/docs/newsboat.html
[9]: https://opensource.com/sites/default/files/uploads/productivity_12-3.png (Reading an article in Newsboat)
[10]: https://newsboat.org/releases/2.18/docs/newsboat.html#_podcast_support
[11]: https://opensource.com/sites/default/files/uploads/productivity_12-4.png (Podboat)

View File

@ -1,56 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's your favorite Linux distribution?)
[#]: via: (https://opensource.com/article/20/1/favorite-linux-distribution)
[#]: author: (Opensource.com https://opensource.com/users/admin)
What's your favorite Linux distribution?
======
Take our 7th annual poll to let us know your preference in Linux
distribution.
![Hand putting a Linux file folder into a drawer][1]
What's your favorite Linux distribution? Take our 7th annual poll. Some have come and gone, but there are hundreds of [Linux distributions][2] alive and well today. The combination of distribution, package manager, and desktop creates an endless amount of customized environments for Linux users.
We asked the community of writers what their favorite is and why. While there were some commonalities (Fedora and Ubuntu were popular choices for a variety of reasons), we heard a few surprises as well. Here are a few of their responses:
"I use the Fedora distro! I love the community of people who work together to make an awesome operating system that showcases the greatest innovations in the free and open source software world." — Matthew Miller
"I use Arch at home. As a gamer, I want easy access to the latest Wine versions and GFX drivers, as well as large amounts of control over my OS. Give me a rolling-release distro with every package at bleeding-edge." —Aimi Hobson
"NixOS, with nothing coming close in the hobbyist niche." —Alexander Sosedkin
"I have used every Fedora version as my primary work OS. Meaning, I started with the first one. Early on, I asked myself if there would ever come a time when I couldn't remember which number I was on. That time has arrived. What year is it, anyway?" —Hugh Brock
"I usually have Ubuntu, CentOS, and Fedora boxes running around the house and the office. We depend on all of these distributions for various things. Fedora for speed and getting the latest versions of applications and libraries. Ubuntu for those that need ease of use with a large community of support. CentOS when we need a rock-solid server platform that just runs." —Steve Morris
"My favorite? For the community, and how packages are built for the distribution (from source, not binaries), I choose Fedora. For pure breadth of packages available and elegance in how packages are defined and developed, I choose Debian. For documentation, I choose Arch. For newbies that ask, I used to recommend Ubuntu but now recommend Fedora." —Al Stone
* * *
We've been asking the community this question since 2014. With the exception of PCLinuxOS taking the lead in 2015, Ubuntu tends to be the fan-favorite from year to year. Other popular contenders have been Fedora, Debian, Mint, and Arch. Which distribution stands out to you in the new decade? If we didn't include your favorite in the list of choices, tell us about it in the comments. 
Here's a look at your favorite Linux distributions throughout the last seven years. You can find this in our latest yearbook, [Best of a decade on Opensource.com][3]. To download the whole eBook, [click here][3]!
![Poll results for favorite Linux distribution through the years][4]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/favorite-linux-distribution
作者:[Opensource.com][a]
选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/admin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://distrowatch.com/
[3]: https://opensource.com/downloads/2019-yearbook-special-edition
[4]: https://opensource.com/sites/default/files/pictures/linux-distributions-through-the-years.jpg (favorite Linux distribution through the years)

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,101 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for January 2020)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
4 cool new projects to try in COPR for January 2020
======
![][1]
COPR is a [collection][2] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
This article presents a few new and interesting projects in COPR. If you're new to using COPR, see the [COPR User Documentation][3] for how to get started.
### Contrast
[Contrast][4] is a small app used for checking contrast between two colors and to determine if it meets the requirements specified in [WCAG][5]. The colors can be selected either using their RGB hex codes or with a color picker tool. In addition to showing the contrast ratio, Contrast displays a short text on a background in selected colors to demonstrate comparison.
![][6]
#### Installation instructions
The [repo][7] currently provides contrast for Fedora 31 and Rawhide. To install Contrast, use these commands:
```
sudo dnf copr enable atim/contrast
sudo dnf install contrast
```
### Pamixer
[Pamixer][8] is a command-line tool for adjusting and monitoring volume levels of sound devices using PulseAudio. You can display the current volume of a device and either set it directly or increase/decrease it, or (un)mute it. Pamixer can list all sources and sinks.
#### Installation instructions
The [repo][9] currently provides Pamixer for Fedora 31 and Rawhide. To install Pamixer, use these commands:
```
sudo dnf copr enable opuk/pamixer
sudo dnf install pamixer
```
### PhotoFlare
[PhotoFlare][10] is an image editor. It has a simple and well-arranged user interface, where most of the features are available in the toolbars. PhotoFlare provides features such as various color adjustments, image transformations, filters, brushes and automatic cropping, although it doesn't support working with layers. Also, PhotoFlare can edit pictures in batches, applying the same filters and transformations on all pictures and storing the results in a specified directory.
![][11]
#### Installation instructions
The [repo][12] currently provides PhotoFlare for Fedora 31. To install Photoflare, use these commands:
```
sudo dnf copr enable adriend/photoflare
sudo dnf install photoflare
```
### Tdiff
[Tdiff][13] is a command-line tool for comparing two file trees. In addition to showing that some files or directories exist in one tree only, tdiff shows differences in file sizes, types and contents, owner user and group ids, permissions, modification time and more.
#### Installation instructions
The [repo][14] currently provides tdiff for Fedora 29-31 and Rawhide, EPEL 6-8 and other distributions. To install tdiff, use these commands:
```
sudo dnf copr enable fif/tdiff
sudo dnf install tdiff
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://gitlab.gnome.org/World/design/contrast
[5]: https://www.w3.org/WAI/standards-guidelines/wcag/
[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/contrast-screenshot.png
[7]: https://copr.fedorainfracloud.org/coprs/atim/contrast/
[8]: https://github.com/cdemoulins/pamixer
[9]: https://copr.fedorainfracloud.org/coprs/opuk/pamixer/
[10]: https://photoflare.io/
[11]: https://fedoramagazine.org/wp-content/uploads/2020/01/photoflare-screenshot.png
[12]: https://copr.fedorainfracloud.org/coprs/adriend/photoflare/
[13]: https://github.com/F-i-f/tdiff
[14]: https://copr.fedorainfracloud.org/coprs/fif/tdiff/

View File

@ -1,105 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intro to the Linux command line)
[#]: via: (https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Intro to the Linux command line
======
Here are some warm-up exercises for anyone just starting to use the Linux command line. Warning: It can be addictive.
[Sandra Henry-Stocker / Linux][1] [(CC0)][2]
If you're new to Linux or have simply never bothered to explore the command line, you may not understand why so many Linux enthusiasts get excited typing commands when they're sitting at a comfortable desktop with plenty of tools and apps available to them. In this post, we'll take a quick dive to explore the wonders of the command line and see if maybe we can get you hooked.
First, to use the command line, you have to open up a command tool (also referred to as a "command prompt"). How to do this will depend on which version of Linux you're running. On Red Hat, for example, you might see an Activities tab at the top of your screen, which will open a list of options and a small window for entering a command (like "cmd", which will open the window for you). On Ubuntu and some others, you might see a small terminal icon along the left-hand side of your screen. On many systems, you can open a command window by pressing the **Ctrl+Alt+t** keys at the same time.
You will also find yourself on the command line if you log into a Linux system using a tool like PuTTY.
Once you get your command line window, you'll find yourself sitting at a prompt. It could be just a **$** or something as elaborate as "**user@system:~$**", but it means that the system is ready to run commands for you.
Once you get this far, it will be time to start entering commands. Below are some of the commands to try first, and [here is a PDF][4] of some particularly useful commands and a two-sided command cheatsheet suitable for printing out and laminating.
```
Command     What it does
pwd         show me where I am in the file system (initially, this will be your home directory)
ls          list my files
ls -a       list even more of my files (including those that start with a period)
ls -al      list my files with lots of details (including dates, file sizes and permissions)
who         show me who is logged in (don't be disappointed if it's only you)
date        remind me what day today is (shows the time too)
ps          list my running processes (might just be your shell and the "ps" command)
```
Once you've gotten used to your Linux home from the command line point of view, you can begin to explore. Maybe you'll feel ready to wander around the file system with commands like these:
```
Command      What it does
cd /tmp      move to another directory (in this case, /tmp)
ls           list files in that location
cd           go back home (with no arguments, cd always takes you back to your home directory)
cat .bashrc  display the contents of a file (in this case, .bashrc)
history      show your recent commands
echo hello   say "hello" to yourself
cal          show a calendar for the current month
```
To get a feeling for why more advanced Linux users like the command line so much, you will want to try some other features like redirection and pipes. Redirection is when you take the output of a command and drop it into a file instead of displaying it on your screen. Pipes are when you take the output of one command and send it to another command that will manipulate it in some way. Here are commands to try:
```
Command                     What it does
echo "echo hello" > tryme   create a new file and put the words "echo hello" into it
chmod 700 tryme             make the new file executable
tryme                       run the new file (it should run the command it contains and display "hello")
ps aux                      show all running processes
ps aux | grep $USER         show all running processes, but limit the output to lines containing your username
echo $USER                  display your username using an environment variable
whoami                      display your username with a command
who | wc -l                 count how many users are currently logged in
```
### Wrap-Up
Once you get used to the basic commands, you can explore other commands and try your hand at writing scripts. You might find that Linux is a lot more powerful and nice to use than you ever imagined.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://commons.wikimedia.org/wiki/File:Tux.svg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (ProtonVPN adopts GPLv3, Mozilla Thunderbird gets new home, and more news)
[#]: via: (https://opensource.com/article/20/2/news-february-1)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
ProtonVPN adopts GPLv3, Mozilla Thunderbird gets new home, and more news
======
Catch up on the biggest open source headlines from the past two weeks.
![][1]
In this edition of our open source news roundup, we take a look at ProtonVPN's apps going open source, Microsoft's code analysis tool, Mozilla Thunderbird's new home, and more!
### ProtonVPN apps go open source
People who want to use the internet securely and privately often do so through a Virtual Private Network (VPN). But which VPNs can you really trust? The company behind the popular ProtonVPN service made a big move to gain that trust by [releasing the source code][2] for all its apps.
By making its apps open source, ProtonVPN is giving security experts the chance to "inspect its encryption implementations and how the company handles user data, giving users confidence that the company is adhering to its strict privacy policy." According to an [article at TechRadar][3], ProtonVPN also engaged "security firm SEC Consult on a full security audit that was able to verify the security of the company's software."
You can find the source code for the apps [on GitHub][4] and links to the audit reports [in this blog post][5].
### Mozilla Thunderbird gets a new home
What a difference a few years makes. When the Mozilla Foundation announced in 2015 that it was considering spinning off the Thunderbird email client, the software's adherents feared the worst. Since then, Thunderbird has persisted but its fate has also been up in the air. [That's changed][6] with the formation of MZLA Technologies Corporation.
MZLA Technologies is "a new wholly owned subsidiary of the Mozilla Foundation" that's the new home of the Thunderbird project. The move to the new corporation means that development will continue on the software and that the move "won't have an impact on Thunderbird's day-to-day running." According to Thunderbird's Phillip Kewisch, shifting the project to MZLA Technologies enables it to "explore offering our users products and services that were not possible under the Mozilla Foundation."
### Encrypting the Internet of Things
The so-called Internet of Things (IoT) has promised so much. That promise has been lost under the weight of the often paper-thin security of IoT devices. Teserakt, a Swiss security firm, is trying to turn that around with the [release of E4][7], a "cryptographic implant that IoT manufacturers can integrate into their servers."
Teserakt's CEO Jean-Philippe Aumasson said that E4 came about because there are "so many machines and entities that do not have the need to view or modify (data from IoT devices), so they shouldnt have access to it." E4 provides end-to-end encryption of data which bolsters "defenses for information in transit and offers protection against data interception and manipulation."
If you're interested in taking a peek at E4, you can do so in [Teserakt's GitHub repository][8].
#### In other news
* [Scientists working with Google just published the most detailed brain scans ever created][9]
* [AWS JPL open source rover challenge open to competitors][10]
* [Intel joins CHIPS Alliance, contributes advanced interface bus][11]
* [Why UK leaders need open technology for the disrupted future][12]
* [Microsoft open sources code analysis tool][13]
_Thanks, as always, to Opensource.com staff members and moderators for their help this week. Make sure to check out [our event calendar][14], to see what's happening next week in open source._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/news-february-1
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/weekly_news_roundup_tv.png?itok=tibLvjBd
[2]: https://betanews.com/2020/01/21/protonvpn-open-source/
[3]: https://www.techradar.com/au/news/protonvpn-releases-source-code-and-undergoes-security-audit
[4]: https://github.com/ProtonVPN
[5]: https://protonvpn.com/blog/open-source/
[6]: https://blog.thunderbird.net/2020/01/thunderbirds-new-home/
[7]: https://www.wired.com/story/e4-iot-encryption/
[8]: https://github.com/teserakt-io/
[9]: https://thenextweb.com/artificial-intelligence/2020/01/22/scientists-working-with-google-just-published-the-most-detailed-brain-scans-ever-created/
[10]: https://www.therobotreport.com/aws-jpl-open-source-rover-challenge-open-to-competitors/
[11]: https://chipsalliance.org/announcement/2020/01/22/intel-joins-chips-alliance-to-promote-advanced-interface-bus-aib-as-an-open-standard/
[12]: https://www.information-age.com/leaders-open-technology-disrupted-future-123487150/
[13]: https://www.infoworld.com/article/3516147/microsoft-releases-open-source-source-code-analyzer.html
[14]: https://opensource.com/resources/conferences-and-events-monthly

View File

@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Give an old MacBook new life with Linux)
[#]: via: (https://opensource.com/article/20/2/macbook-linux-elementary)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
Give an old MacBook new life with Linux
======
Elementary OS's latest release, Hera, is an impressive platform for
resurrecting an outdated MacBook.
![Coffee and laptop][1]
When I installed Apple's [MacOS Mojave][2], it slowed my formerly reliable MacBook Air to a crawl. My computer, released in 2015, has 4GB RAM, an i5 processor, and a Broadcom 4360 wireless card, but Mojave proved too much for my daily driver—it made working with [GnuCash][3] impossible, and it whetted my appetite to return to Linux. I am glad I did, but I felt bad that I had this perfectly good MacBook lying around unused.
I tried several Linux distributions on my MacBook Air, but there was always a gotcha. Sometimes it was the wireless card; another time, it was a lack of support for the touchpad. After reading some good reviews, I decided to try [Elementary OS][4] 5.0 (Juno). I [made a boot drive][5] with my USB creator and inserted it into the MacBook Air. I got to a live desktop, and the operating system recognized my Broadcom wireless chipset—I thought this just might work!
I liked what I saw in Elementary OS; its [Pantheon][6] desktop is really great, and its look and feel are familiar to Apple users—it has a dock at the bottom of the display and icons that lead to useful applications. I liked the preview of what I could expect, so I decided to install it—and then my wireless disappeared. That was disappointing. I really liked Elementary OS, but no wireless is a non-starter.
Fast-forward to December 2019, when I heard a review on the [Linux4Everyone][7] podcast about Elementary's latest release, v.5.1 (Hera) bringing a MacBook back to life. So, I decided to try again with Hera. I downloaded the ISO, created the bootable drive, plugged it in, and this time the operating system recognized my wireless card. I was in business!
![MacBook Air with Hera][8]
I was overjoyed that my very light, yet powerful MacBook Air was getting a new life with Linux. I have been exploring Elementary OS in greater detail, and I can tell you that I am impressed.
### Elementary OS's features
According to [Elementary's blog][9], "The newly redesigned login and lock screen greeter looks sharper, works better, and fixes many reported issues with the previous greeter including focus issues, HiDPI issues, and better localization. The new design in Hera was in response to user feedback from Juno, and enables some nice new features."
"Nice new features" in an understatement—Elementary OS easily has one of the best-designed Linux user interfaces I have ever seen. A System Settings icon is on the dock by default; it is easy to change the settings, and soon I had the system configured to my liking. I need larger text sizes than the defaults, and the Universal Access controls are easy to use and allow me to set large text and high contrast. I can also adjust the dock with larger icons and other options.
![Elementary OS's Settings screen][10]
Pressing the Mac's Command key brings up a list of keyboard shortcuts, which is very helpful to new users.
![Elementary OS's Keyboard shortcuts][11]
Elementary OS ships with the [Epiphany][12] web browser, which I find quite easy to use. It's a bit different than Chrome, Chromium, or Firefox, but it is more than adequate.
For security-conscious users (as we should all be), Elementary OS's Security and Privacy settings provide multiple options, including a firewall, history, locking, automatic deletion of temporary and trash files, and an on/off switch for location services.
![Elementary OS's Privacy and Security screen][13]
### More on Elementary OS
Elementary OS was originally released in 2011, and its latest version, Hera, was released on December 3, 2019. [Cassidy James Blaede][14], Elementary's co-founder and CXO, is the operating system's UX architect. Cassidy loves to design and build useful, usable, and delightful digital products using open technologies.
Elementary OS has excellent user [documentation][15], and its code (licensed under GPL 3.0) is available on [GitHub][16]. Elementary OS encourages involvement in the project, so be sure to reach out and [join the community][17].
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/macbook-linux-elementary
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: https://en.wikipedia.org/wiki/MacOS_Mojave
[3]: https://www.gnucash.org/
[4]: https://elementary.io/
[5]: https://opensource.com/life/14/10/test-drive-linux-nothing-flash-drive
[6]: https://opensource.com/article/19/12/pantheon-linux-desktop
[7]: https://www.linux4everyone.com/20-macbook-pro-elementary-os
[8]: https://opensource.com/sites/default/files/uploads/macbookair_hera.png (MacBook Air with Hera)
[9]: https://blog.elementary.io/introducing-elementary-os-5-1-hera/
[10]: https://opensource.com/sites/default/files/uploads/elementaryos_settings.png (Elementary OS's Settings screen)
[11]: https://opensource.com/sites/default/files/uploads/elementaryos_keyboardshortcuts.png (Elementary OS's Keyboard shortcuts)
[12]: https://en.wikipedia.org/wiki/GNOME_Web
[13]: https://opensource.com/sites/default/files/uploads/elementaryos_privacy-security.png (Elementary OS's Privacy and Security screen)
[14]: https://github.com/cassidyjames
[15]: https://elementary.io/docs/learning-the-basics#learning-the-basics
[16]: https://github.com/elementary
[17]: https://elementary.io/get-involved

View File

@ -0,0 +1,169 @@
[#]: collector: (lujun9972)
[#]: translator: ( guevaraya)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Troubleshoot Kubernetes with the power of tmux and kubectl)
[#]: via: (https://opensource.com/article/20/2/kubernetes-tmux-kubectl)
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar)
Troubleshoot Kubernetes with the power of tmux and kubectl
======
A kubectl plugin that uses tmux to make troubleshooting Kubernetes much
simpler.
![Woman sitting in front of her laptop][1]
[Kubernetes][2] is a thriving open source container orchestration platform that offers scalability, high availability, robustness, and resiliency for applications. One of its many features is support for running custom scripts or binaries through its primary client binary, [kubectl][3]. Kubectl is very powerful and allows users to do anything with it that they could do directly on a Kubernetes cluster.
### Troubleshooting Kubernetes with aliases
Anyone who uses Kubernetes for container orchestration is aware of its features—as well as the complexity it brings because of its design. For example, there is an urgent need to simplify troubleshooting in Kubernetes with something that is quicker and has little need for manual intervention (except in critical situations).
There are many scenarios to consider when it comes to troubleshooting functionality. In one scenario, you know what you need to run, but the command's syntax—even when it can run as a single command—is excessively complex, or it may need one or two inputs to work.
For example, if you frequently need to jump into a running container in the System namespace, you may find yourself repeatedly writing:
```
kubectl --namespace=kube-system exec -i -t <your-pod-name>
```
To simplify troubleshooting, you could use command-line aliases of these commands. For example, you could add the following to your dotfiles (.bashrc or .zshrc):
```
alias ksysex='kubectl --namespace=kube-system exec -i -t'
```
This is one of many examples from a [repository of common Kubernetes aliases][4] that shows one way to simplify functions in kubectl. For something simple like this scenario, an alias is sufficient.
### Switching to a kubectl plugin
A more complex troubleshooting scenario involves the need to run many commands, one after the other, to investigate an environment and come to a conclusion. Aliases alone are not sufficient for this use case; you need repeatable logic and correlations between the many parts of your Kubernetes deployment. What you really need is automation to deliver the desired output in less time.
Consider 10 to 20—or even 50 to 100—namespaces holding different microservices on your cluster. What would be helpful for you to start troubleshooting this scenario?
* You would need something that can quickly tell which pod in which namespace is throwing errors.
* You would need something that can watch logs of all the pods in a namespace.
* You might also need to watch logs of certain pods in a specific namespace that have shown errors.
Any solution that covers these points would be very useful in investigating production issues as well as during development and testing cycles.
To create something more powerful than a simple alias, you can use [kubectl plugins][5]. Plugins are like standalone scripts written in any scripting language but are designed to extend the functionality of your main command when serving as a Kubernetes admin.
To create a plugin, you must name the script using the **kubectl-<your-plugin-name>** convention, copy it to one of the directories in your **$PATH**, and give it executable permissions (**chmod +x**).
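As a rough sketch (the plugin name kubectl-hello, its install path, and its behavior are invented here purely for illustration), the whole workflow can be as small as this:
```
# create a tiny plugin that lists pods in a given namespace
cat << 'EOF' | sudo tee /usr/local/bin/kubectl-hello > /dev/null
#!/usr/bin/env bash
# first argument is the namespace; default to "default"
kubectl get pods --namespace "${1:-default}"
EOF
sudo chmod +x /usr/local/bin/kubectl-hello

# kubectl discovers it from $PATH and exposes it as a subcommand
kubectl hello kube-system
```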
After creating a plugin and moving it into your path, you can run it immediately. For example, I have kubectl-krawl and kubectl-kmux in my path:
```
$ kubectl plugin list
The following compatible plugins are available:
/usr/local/bin/kubectl-krawl
/usr/local/bin/kubectl-kmux
$ kubectl kmux
```
Now let's explore what this looks like when you power Kubernetes with tmux.
### Harnessing the power of tmux
[Tmux][6] is a very powerful tool that many sysadmins and ops teams rely on to troubleshoot issues related to ease of operability—from splitting windows into panes for running parallel debugging on multiple machines to monitoring logs. One of its major advantages is that it can be used on the command line or in automation scripts.
I created [a kubectl plugin][7] that uses tmux to make troubleshooting much simpler. I will use annotations to walk through the logic behind the plugin (and leave it for you to go through the plugin's full code):
```
# NAMESPACE is the namespace to monitor.
# POD is the pod name.
# CONTAINERS is the list of container names.

# initialize a counter n to count loop iterations, later used by tmux to split panes
n=0;
# start a loop over the list of pods and their containers
while IFS=' ' read -r POD CONTAINERS
do
    # tmux creates a new window for each pod
    tmux neww $COMMAND -n $POD 2>/dev/null
    # start a loop over all containers inside a running pod
    for CONTAINER in ${CONTAINERS//,/ }
    do
        if [ x$POD = x -o x$CONTAINER = x ]; then
            # if any of the values is null, exit
            warn "Looks like there is a problem getting pods data."
            break
        fi

        # set the command to execute
        COMMAND="kubectl logs -f $POD -c $CONTAINER -n $NAMESPACE"
        # check for an existing tmux session
        if tmux has-session -t <session name> 2>/dev/null;
        then
            <set session exists>
        else
            <create session>
        fi
        # split panes in the current window, one per container
        tmux selectp -t $n \; \
            splitw $COMMAND \; \
            select-layout tiled \;
    # end loop for containers
    done
    # rename the window so it can be identified by pod name
    tmux renamew $POD 2>/dev/null

    # increment the counter
    ((n+=1))
# end loop for pods
done< <(<fetch list of pod and containers from kubernetes cluster>)
# finally select the window and attach the session
tmux selectw -t <session name>:1 \; \
    attach-session -t <session name>\;
```
After the plugin script runs, it will produce output similar to the image below. Each pod has its own window, and each container (if there is more than one) gets its own pane in that pod's window, streaming logs as they arrive. The beauty of tmux can be seen below; with the proper configuration, you can even see which window has activity going on (see the white tabs).
![Output of kmux plugin][8]
### Conclusion
Aliases are always helpful for simple troubleshooting in Kubernetes environments. When the environment gets more complex, a kubectl plugin is a powerful option for using more advanced scripting. There are no limits on which programming language you can use to write kubectl plugins. The only requirements are that the file follows the naming convention, is executable and on your path, and that it doesn't have the same name as an existing kubectl command.
To read the complete code or try the plugins I created, check my [kube-plugins-github][7] repository. Issues and pull requests are welcome.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/kubernetes-tmux-kubectl
作者:[Abhishek Tamrakar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tamrakar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_4.png?itok=VGZO8CxT (Woman sitting in front of her laptop)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://kubernetes.io/docs/reference/kubectl/overview/
[4]: https://github.com/ahmetb/kubectl-aliases/blob/master/.kubectl_aliases
[5]: https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/
[6]: https://opensource.com/article/19/6/tmux-terminal-joy
[7]: https://github.com/abhiTamrakar/kube-plugins
[8]: https://opensource.com/sites/default/files/uploads/kmux-output.png (Output of kmux plugin)

View File

@ -0,0 +1,430 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ansible Roles Quick Start Guide with Examples)
[#]: via: (https://www.2daygeek.com/ansible-roles-quick-start-guide-with-examples/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Ansible Roles Quick Start Guide with Examples
======
Ansible is an excellent configuration management and orchestration tool.
It is designed to easily automate the entire infrastructure.
We have written three articles in the past about Ansible.
If you are new to Ansible, I advise you to read the articles below, which will help you understand the basics of Ansible.
* **Part-1: [Ansible Automation Tool Installation, Configuration and Quick Start Guide][1]**
* **Part-2: [Ansible Ad-hoc Command Quick Start Guide with Examples][2]**
* **Part-3: [Ansible Playbooks Quick Start Guide with Examples][3]**
### What are Ansible Roles?
Ansible Roles provides the framework for automatically loading certain tasks, files, vars, templates, and handlers from a known file structure into the playbook.
The primary purpose of a role is to break a playbook into multiple pieces (files).
This makes it easier for you to write complex playbooks and makes them easier to reuse.
It also reduces syntax errors, since each piece lives in its own smaller file.
Ansible Playbook is a set of roles, and each role essentially performs a specific function.
Ansible roles are reusable (you can import them into other playbooks as well) because roles are independent of each other and do not depend on one another during execution.
Ansible offers two example directory structures that help you organize your Ansible playbook content and its use.
You are not limited to these structures; you can create your own directory layout based on your needs.
Each directory has a **"main.yml"** file, which contains its basic content.
### Ansible Roles Default Directory Structure
Ansible Best Practices provides the following two directory structures. The first is very simple and well suited for a small environment with simple production and inventory files.
```
production                # inventory file for production servers
staging                   # inventory file for staging environment

group_vars/
   group1.yml             # here we assign variables to particular groups
   group2.yml
host_vars/
   hostname1.yml          # here we assign variables to particular systems
   hostname2.yml

library/                  # if any custom modules, put them here (optional)
module_utils/             # if any custom module_utils to support modules, put them here (optional)
filter_plugins/           # if any custom filter plugins, put them here (optional)

site.yml                  # master playbook
webservers.yml            # playbook for webserver tier
dbservers.yml             # playbook for dbserver tier

roles/
    common/               # this hierarchy represents a "role"
        tasks/            #
            main.yml      # <-- tasks file can include smaller files if warranted
        handlers/         #
            main.yml      # <-- handlers file
        templates/        # <-- files for use with the template resource
            ntp.conf.j2   # <------- templates end in .j2
        files/            #
            bar.txt       # <-- files for use with the copy resource
            foo.sh        # <-- script files for use with the script resource
        vars/             #
            main.yml      # <-- variables associated with this role
        defaults/         #
            main.yml      # <-- default lower priority variables for this role
        meta/             #
            main.yml      # <-- role dependencies
        library/          # roles can also include custom modules
        module_utils/     # roles can also include custom module_utils
        lookup_plugins/   # or other types of plugins, like lookup in this case

    webtier/              # same kind of structure as "common" was above, done for the webtier role
    monitoring/           # ""
    fooapp/               # ""
```
If you want to use this directory structure run the command below.
```
$ sudo mkdir -p group_vars host_vars library module_utils filter_plugins
$ sudo mkdir -p roles/common/{tasks,handlers,templates,files,vars,defaults,meta,library,module_utils,lookup_plugins}
$ sudo touch production staging site.yml roles/common/{tasks,handlers,templates,files,vars,defaults,meta}/main.yml
```
The second one is appropriate when you have a very complex inventory environment.
```
inventories/
   production/
      hosts               # inventory file for production servers
      group_vars/
         group1.yml       # here we assign variables to particular groups
         group2.yml
      host_vars/
         hostname1.yml    # here we assign variables to particular systems
         hostname2.yml

   staging/
      hosts               # inventory file for staging environment
      group_vars/
         group1.yml       # here we assign variables to particular groups
         group2.yml
      host_vars/
         stagehost1.yml   # here we assign variables to particular systems
         stagehost2.yml

library/
module_utils/
filter_plugins/

site.yml
webservers.yml
dbservers.yml

roles/
    common/
    webtier/
    monitoring/
    fooapp/
```
If you want to use this directory structure run the command below.
```
$ sudo mkdir -p inventories/{production,staging}/{group_vars,host_vars}
$ sudo touch inventories/{production,staging}/hosts
$ sudo mkdir -p group_vars host_vars library module_utils filter_plugins
$ sudo mkdir -p roles/common/{tasks,handlers,templates,files,vars,defaults,meta,library,module_utils,lookup_plugins}
$ sudo touch site.yml roles/common/{tasks,handlers,templates,files,vars,defaults,meta}/main.yml
```
### How to Create a Simple Ansible Roles Directory Structure
By default there is no “Roles” directory in your Ansible directory, so you have to create it first.
```
$ sudo mkdir /etc/ansible/roles
```
Use the following Ansible Galaxy command to create a simple directory structure for a role.
```
$ sudo ansible-galaxy init [/Path/to/Role_Name]
```
### What's Ansible Galaxy?
Ansible Galaxy refers to the Galaxy website, a free platform for finding, downloading and sharing community-developed roles.
The Galaxy website offers pre-packaged units of work such as roles and collections. Whether you are provisioning infrastructure or deploying applications, you'll find plenty of roles for the tasks you do on a daily basis.
While writing this article, I saw **23478** roles listed there, and the number is growing daily.
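Installing one of those community roles locally is a single command; the role name below (geerlingguy.apache) is only an illustrative, widely used example, and the install location depends on your roles_path configuration:
```
$ sudo ansible-galaxy install geerlingguy.apache
$ ansible-galaxy list
```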
To demonstrate this, let's create a **“webserver”** role of our own. To do so, run the following command.
```
$ sudo ansible-galaxy init /etc/ansible/roles/webserver
- Role /etc/ansible/roles/webserver was created successfully
```
Once you have created a new role, use the tree command to view its detailed directory structure.
```
$ tree /etc/ansible/roles/webserver
/etc/ansible/roles/webserver
├── defaults
│ └── main.yml
├── files
├── handlers
│ └── main.yml
├── meta
│ └── main.yml
├── README.md
├── tasks
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
8 directories, 8 files
```
It comes with eight directories and eight files. The directories serve the following purposes:
  * **defaults:** Default variables for the role (lowest priority).
  * **files:** Static files that can be deployed by this role.
  * **handlers:** Handlers, which may be used by this role or even anywhere outside this role.
  * **meta:** Metadata (such as role dependencies) for this role.
  * **tasks:** The main list of tasks to be executed by the role.
  * **templates:** Templates which can be deployed via this role.
  * **tests:** A sample inventory and a test playbook for exercising the role.
  * **vars:** Other variables for the role.
This is a sample playbook that sets up the Apache Web server on Debian and Red Hat-based systems.
```
$ sudo nano /etc/ansible/playbooks/webserver.yml
---
- hosts: web
  become: yes
  name: "Install and Configure Apache Web Server on Linux"
  tasks:
    - name: "Install Apache Web Server on RHEL Based Systems"
      yum: name=httpd update_cache=yes state=latest
      when: ansible_facts['os_family']|lower == "redhat"

    - name: "Install Apache Web Server on Debian Based Systems"
      apt: name=apache2 update_cache=yes state=latest
      when: ansible_facts['os_family']|lower == "debian"

    - name: "Start the Apache Web Server"
      service:
        name: httpd
        state: started
        enabled: yes

    - name: "Enable mod_rewrite module"
      apache2_module:
        name: rewrite
        state: present
      notify:
        - "Restart Apache2 Web Server"

  handlers:
    - name: "Restart Apache2 Web Server"
      service:
        name: apache2
        state: restarted

    - name: "Restart httpd Web Server"
      service:
        name: httpd
        state: restarted
```
Let's break the playbook above into Ansible roles. If you only have simple content, add it to the **“main.yml”** file; otherwise, create a separate **“xyz.yml”** file for each task.
**Make a note:** **“notify”** should be included in the last task, which is why we have added it to the **“modules.yml”** file.
Create a separate task to install the Apache Web Server on Red Hat-based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/redhat.yml
---
- name: "Install Apache Web Server on RHEL Based Systems"
  yum:
    name: httpd
    update_cache: yes
    state: latest
```
Create a separate task to install the Apache Web Server on Debian-based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/debian.yml
---
- name: "Install Apache Web Server on Debian Based Systems"
  apt:
    name: apache2
    update_cache: yes
    state: latest
```
Create a separate task to start the Apache web server on Red Hat based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/service-httpd.yml
---
- name: "Start the Apache Web Server"
  service:
    name: httpd
    state: started
    enabled: yes
```
Create a separate task to start the Apache web server on Debian based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/service-apache2.yml
---
- name: "Start the Apache Web Server"
  service:
    name: apache2
    state: started
    enabled: yes
```
Create a separate task to copy the index file into the Apache web root directory.
```
$ sudo nano /etc/ansible/roles/webserver/tasks/configure.yml
---
- name: Copy index.html file
  copy: src=files/index.html dest=/var/www/html
```
Create a separate task to enable the mod_rewrite module on Debian-based systems.
```
$ sudo nano /etc/ansible/roles/webserver/tasks/modules.yml
---
- name: "Enable mod_rewrite module"
apache2_module:
name: rewrite
state: present
notify:
- restart apache
```
Finally, import all of these task files into the **“main.yml”** file of the tasks directory.
```
$ sudo nano /etc/ansible/roles/webserver/tasks/main.yml
---
# tasks file for /etc/ansible/roles/webserver
- import_tasks: redhat.yml
  when: ansible_facts['os_family']|lower == 'redhat'
- import_tasks: debian.yml
  when: ansible_facts['os_family']|lower == 'debian'
- import_tasks: service-httpd.yml
  when: ansible_facts['os_family']|lower == 'redhat'
- import_tasks: service-apache2.yml
  when: ansible_facts['os_family']|lower == 'debian'
- import_tasks: configure.yml
- import_tasks: modules.yml
  when: ansible_facts['os_family']|lower == 'debian'
```
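As a side note (this is my own sketch, not part of the original article), the two service task files could also be collapsed into a single task by deriving the service name from the OS family:
```
# Hypothetical alternative to service-httpd.yml + service-apache2.yml
---
- name: "Start the Apache Web Server"
  service:
    # httpd on RHEL-based systems, apache2 on Debian-based systems
    name: "{{ 'httpd' if ansible_facts['os_family']|lower == 'redhat' else 'apache2' }}"
    state: started
    enabled: yes
```
Separate files keep each platform's logic obvious, while the inline expression keeps the task list shorter; either approach works.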
Add the handler information to the **“main.yml”** file of the handlers directory.
```
$ sudo nano /etc/ansible/roles/webserver/handlers/main.yml
---
# handlers file for /etc/ansible/roles/webserver
- name: "Restart httpd Web Server"
  service:
    name: httpd
    state: restarted
- name: "Restart Apache2 Web Server"
  service:
    name: apache2
    state: restarted
```
Add an index.html file to the files directory. This is the file you want to copy to the target server.
```
$ sudo nano /etc/ansible/roles/webserver/files/index.html
This is the test page of 2DayGeek.com for Ansible Tutorials
```
You have successfully broken the playbook into an Ansible role using the steps above. Your new role now contains all of the task, handler and file content described in the previous steps.
If you have done everything for your Ansible role, then finally import this role into your playbook.
```
$ sudo nano /etc/ansible/playbooks/webserver-role.yml
---
- hosts: all
  become: yes
  name: "Install and Configure Apache Web Server on Linux"
  roles:
    - webserver
```
Once you have done everything, I advise you to check the Playbook syntax before executing it.
```
$ ansible-playbook /etc/ansible/playbooks/webserver-role.yml --syntax-check
playbook: /etc/ansible/playbooks/webserver-role.yml
```
Finally, execute the Ansible playbook to see the role in action.
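A typical way to run it (assuming your inventory already defines the target hosts) is shown below; `--check` performs a dry run without changing anything:
```
# Dry run first to see what would change
$ ansible-playbook /etc/ansible/playbooks/webserver-role.yml --check

# Apply the role for real
$ ansible-playbook /etc/ansible/playbooks/webserver-role.yml
```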
Hope this tutorial helped you learn about Ansible roles. If you are satisfied, please share the article on social media. If you would like to improve this article, add your comments in the comment section.
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/ansible-roles-quick-start-guide-with-examples/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/install-configure-ansible-automation-tool-linux-quick-start-guide/
[2]: https://www.2daygeek.com/ansible-ad-hoc-command-quick-start-guide-with-examples/
[3]: https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/

View File

@ -0,0 +1,158 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SimpleLogin: Open Source Solution to Protect Your Email Inbox From Spammers)
[#]: via: (https://itsfoss.com/simplelogin/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
SimpleLogin: Open Source Solution to Protect Your Email Inbox From Spammers
======
_**Brief: SimpleLogin is an open-source service to help you protect your email address by giving you a permanent alias email address.**_
Normally, you have to use your real email address to sign up for services that you want to use personally or for your business.
In the process, you're sharing your email address, right? And that potentially exposes your email address to spammers (depending on where you shared the information).
What if you could protect your real email address by providing an alias for it instead? No, I'm not talking about disposable email addresses like 10minutemail, which can be useful for temporary sign-ups even though they've been blocked by certain services.
I'm talking about something similar to “_[Hide My Email for Sign in with Apple ID][1]_”, but as a free and open-source solution, i.e. [SimpleLogin][2].
### SimpleLogin: An open source service to protect your email inbox
![][3]
_It is worth noting that you still have to use your existing email client (or email service) to receive and send emails but with this service, you get to hide your real email ID._
SimpleLogin is an open-source project (you can find it on [GitHub][4]) available for free (with premium upgrade options) that aims to keep your email private.
Unlike temporary email services, it generates a permanent random alias for your email address that you can use to sign up for services without revealing your real email.
The alias works as a point of contact to forward the emails intended to your real email ID.
**You'll receive the emails sent to the alias address in your real email inbox, and if you believe that an alias is receiving too much spam, you can block it. This way, you completely stop getting spam sent to that particular aliased email address.**
You are not limited to receiving emails; you can also send emails through the alias address. Interesting, right? And using this coupled with [secure email services][5] should be a good combination to protect your privacy.
### Features of SimpleLogin
![][8]
Before taking a look at how it works, let me highlight what it offers, both to Internet users and to web developers:
* Protects your real email address by generating an alias address
* Send/receive emails through your alias
* Block the alias if emails get too spammy
* Custom domain supported with premium plans
* You can choose to self-host it
* If youre a web developer, you can follow the [documentation][9] to integrate a “**Sign in with SimpleLogin**” button to your login page.
You can either utilize the web browser or use the extension for Firefox, Chrome and Safari.
[SimpleLogin][2]
### How Does SimpleLogin Work?
![][10]
To start with, you'll have to sign up for the service with the primary email ID that you want to keep private.
Once done, you have to use your alias email addresses to sign up for any other services you want.
![][11]
The number of aliases you can generate is limited in the free plan; however, you can upgrade to the premium plan if you want to generate a different alias email address for every site.
You don't necessarily need to use the web portal; you can use the browser extension to generate aliases and use them when needed, as shown in the image below:
![][12]
Even if you want to send an email without revealing your real email ID, just generate an alias by typing in the receiver's email ID, and then paste that alias into your email client to send it.
### Brief conversation with SimpleLogins founder
I was quite impressed to see an open-source service like this, so I reached out to [**Son Nguyen Kim**][13] (_SimpleLogin's founder_). Here are a few things I asked, along with the responses I got:
**How can you assure users that they can rely on your service for their personal/business use?**
**Son Nguyen Kim:** SimpleLogin follows all the best practices in terms of [email deliverability][14] to reduce the emails ending up in the Spam folder. To mention a few:
* SPF, DKIM and strict DMARC
* TLS everywhere
* “Clean” IP: we made sure that our IP addresses are not blacklisted anywhere
* Constant monitoring to avoid abuses.
* Participate in email providers postmaster programs
**How sustainable is your business currently?**
**Son Nguyen Kim:** Though in Beta, we already have paying customers. They use SimpleLogin both personally (to protect privacy) and for their business (create emails with their domains).
**What features have you planned for the future?**
**Son Nguyen Kim**: An iOS app is already in progress, the Android app will follow just after.
* [PGP][15] to encrypt emails
* Able to strip images from emails. Email tracking is usually done [using a 1-pixel image][16] so tracking will also be removed with this feature enabled.
* [U2F][17] support (Yubikey)
* Better integration with existing email infrastructure for people who want to self-host SimpleLogin
You can also find a public roadmap to their plans on [Trello][18].
**Wrapping Up**
Personally, I would really love to see this succeed as a privacy-friendly alternative to social network sign-up options implemented on various web services.
In addition to that, as it stands now, it is a service that generates email aliases, which should suffice for a lot of users who do not want to share their real email addresses. My initial impressions of SimpleLogin's beta phase are quite positive. I'd recommend you give it a try!
They also have a [Patreon][19] page if you wish to donate, instead of becoming a paying customer, to help the development of SimpleLogin.
Have you tried something like this before? How exciting do you think SimpleLogin is? Feel free to share your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/simplelogin/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://support.apple.com/en-us/HT210425
[2]: https://simplelogin.io/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-website.jpg?ssl=1
[4]: https://github.com/simple-login/app
[5]: https://itsfoss.com/secure-private-email-services/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/best-vpn-linux.png?fit=800%2C450&ssl=1
[7]: https://itsfoss.com/best-vpn-linux/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-settings.jpg?ssl=1
[9]: https://docs.simplelogin.io/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-details.png?ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-dashboard.jpg?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-extensions.jpg?ssl=1
[13]: https://twitter.com/nguyenkims
[14]: https://blog.hubspot.com/marketing/email-delivery-deliverability
[15]: https://www.openpgp.org/
[16]: https://www.theverge.com/2019/7/3/20681508/tracking-pixel-email-spying-superhuman-web-beacon-open-tracking-read-receipts-location
[17]: https://en.wikipedia.org/wiki/Universal_2nd_Factor
[18]: https://trello.com/b/4d6A69I4/open-roadmap
[19]: https://www.patreon.com/simplelogin

View File

@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10)
[#]: via: (https://itsfoss.com/ubuntu-19-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10
======
_**Brief: Ubuntu 19.04 has reached the end of life on 23rd January 2020. This means that systems running Ubuntu 19.04 wont receive security and maintenance updates anymore and thus leaving them vulnerable.**_
![][1]
[Ubuntu 19.04][2] was released on 18th April, 2019. Since it was not a long term support (LTS) release, it was supported only for nine months.
Completing its release cycle, Ubuntu 19.04 reached end of life on 23rd January, 2020.
Ubuntu 19.04 brought a few visual and performance improvements and paved the way for a sleek and aesthetically pleasant Ubuntu look.
Like any other regular Ubuntu release, it had a life span of nine months. And that has ended now.
### End of life for Ubuntu 19.04? What does it mean?
End of life means a certain date after which an operating system release won't get updates anymore.
You might already know that Ubuntu (or any other operating system for that matter) provides security and maintenance upgrades in order to keep your systems safe from cyber attacks.
Once a release reaches the end of life, the operating system stops receiving these important updates.
If you continue using a system after the end of life of your operating system release, your system will be vulnerable to cyber attacks and malware.
That's not all. In Ubuntu, the applications that you downloaded using APT from the Software Center won't be updated either. In fact, you won't be able to [install new software using the apt-get command][3] anymore (gradually, if not immediately).
### All Ubuntu 19.04 users must upgrade to Ubuntu 19.10
Starting 23rd January 2020, Ubuntu 19.04 will stop receiving updates. You must upgrade to Ubuntu 19.10 which will be supported till July 2020.
This is also applicable to other [official Ubuntu flavors][4] such as Lubuntu, Xubuntu, Kubuntu etc.
#### How to upgrade to Ubuntu 19.10?
Thankfully, Ubuntu provides easy ways to upgrade the existing system to a newer version.
In fact, Ubuntu also prompts you that a new Ubuntu version is available and that you should upgrade to it.
![Existing Ubuntu 19.04 should see a message to upgrade to Ubuntu 19.10][5]
If you have a good internet connection, you can use the same [Software Updater tool that you use to update Ubuntu][6]. In the above image, you just need to click the Upgrade button and follow the instructions. I have written a detailed guide about [upgrading to Ubuntu 18.04][7] using this method.
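If you prefer the terminal over the graphical Software Updater, a common approach (my suggestion, not part of the original article) is to bring the current system up to date and then run Ubuntu's release upgrader:
```
# Update the existing 19.04 packages first
sudo apt update && sudo apt full-upgrade

# Start the upgrade to the next release (19.10)
sudo do-release-upgrade
```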
If you don't have a good internet connection, there is a workaround for you. Make a backup of your home directory or your important data on an external disk.
Then, make a live USB of Ubuntu 19.10. Download Ubuntu 19.10 ISO and use the Startup Disk Creator tool already installed on your Ubuntu system to create a live USB out of this ISO.
Boot from this live USB and go on installing Ubuntu 19.10. In the installation procedure, you should see an option to remove Ubuntu 19.04 and replace it with Ubuntu 19.10. Choose this option and proceed as if you are [installing Ubuntu][8] afresh.
#### Are you still using Ubuntu 19.04, 18.10, 17.10 or some other unsupported version?
You should note that at present only Ubuntu 16.04, 18.04 and 19.10 (or higher) versions are supported. If you are running an Ubuntu version other than these, you must upgrade to a newer version.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-19-04-end-of-life/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/End-of-Life-Ubuntu-19.04.png?ssl=1
[2]: https://itsfoss.com/ubuntu-19-04-release/
[3]: https://itsfoss.com/apt-get-linux-guide/
[4]: https://itsfoss.com/which-ubuntu-install/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/ubuntu_19_04_end_of_life.jpg?ssl=1
[6]: https://itsfoss.com/update-ubuntu/
[7]: https://itsfoss.com/upgrade-ubuntu-version/
[8]: https://itsfoss.com/install-ubuntu/

View File

@ -0,0 +1,175 @@
MidnightBSD 可能是你通往 FreeBSD 的大门
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/midnight_4_0.jpg?itok=T2gpLVui)
[FreeBSD][1] 是一个开源操作系统,衍生自著名的 [Berkeley Software Distribution伯克利软件套件][2]。FreeBSD 的第一个版本发布于 1993 年至今依然表现强劲。2007 年左右Lucas Holt 想创建一个 FreeBSD 的分支,以便利用 [GnuStep][3] 对 OpenStep现在是 Cocoa的 Objective-C 框架、部件工具包和应用程序开发工具的实现。为此,他开始开发 MidnightBSD 桌面发行版。
MidnightBSD以 Lucas 的猫 Midnight 命名)仍然在积极地(尽管缓慢地)开发中。最新的稳定版本0.8.6)自 2017 年 8 月起可供下载。尽管 BSD 发行版算不上人们所说的用户友好型发行版,但熟悉它们的安装过程,是让自己学会如何应对文本式ncurses安装、并在最后从命令行完成安装的好方法。
这样做的结果是,你最终会得到一个非常可靠的、基于 FreeBSD 分支的桌面发行版。如果你是一名想要扩展自己技能的 Linux 用户,这确实需要花一点功夫,但它是一个很好的起点。
我想带你走一遍安装 MidnightBSD 的流程,看看如何添加一个图形桌面环境,以及如何安装应用程序。
### 安装
正如我所提到的,这是一个文本( ncurses ) 安装过程,因此在这里没有找到可用的鼠标。相反,你将使用你键盘的 Tab 和箭头按键。在你下载 [最新的发布版本][4] 后,将它刻录到一个 CD/DVD 或 USB 驱动器,并启动你的机器(或者在 [VirtualBox][5] 中创建一个虚拟机)。安装器将打开并给你三个选项(图 1)。选择安装(使用你的键盘的箭头按键),并敲击 Enter 键。
![MidnightBSD installer][6]
图 1: 启动 MidnightBSD 安装器。
在这里,你将经历相当多的屏幕。其中很多屏幕的内容是一目了然的:
1. 设置非默认键盘映射(是/否)
2. 设置主机名称
3. 添加可选系统组件(文档游戏32位兼容性系统源码代码)
4. 分区硬盘
5. 管理员密码
6. 配置网络接口
7. 选择地区(时区)
8. 启用服务(例如 ssh
9. 添加用户(图 2)
![Adding a user][7]
图 2: 向系统添加一个用户。
在你向系统添加用户后,将进入一个窗口(图 3在这里你可以处理任何可能遗漏的或者想要重新配置的设置。如果不需要作出任何更改选择 Exit然后你的配置将被应用。
![Applying your configurations][8]
图 3: 应用你的配置。
在接下来的窗口中,当出现提示时,选择 No ,接下来系统将重启。在 MidnightBSD 重启后,你已经为下一阶段的安装做好了准备。
### 安装后的工作
当你新安装的 MidnightBSD 启动后你会发现自己处在一个命令提示符下。此时还没有任何图形界面。要安装应用程序MidnightBSD 依赖于 mport 工具。比如说你想安装 Xfce 桌面环境,为此,登录到 MidnightBSD 中,并执行下面的命令:
```
sudo mport index
sudo mport install xorg
```
你现在有已经安装的 Xorg 窗口服务器,它将允许你来安装桌面环境。使用命令来安装 Xfce
```
sudo mport install xfce
```
现在 Xfce 已经安装好了。不过,我们需要让它能够通过 startx 命令启动。为此,让我们先安装 nano 编辑器。执行命令:
```
sudo mport install nano
```
nano 安装好后,执行命令:
```
nano ~/.xinitrc
```
这个文件仅包含一行:
```
exec startxfce4
```
保存并关闭这个文件。如果你现在执行 startx 命令Xfce 桌面环境就会启动,你应该会感到一丝熟悉的感觉(图 4。
![ Xfce][9]
图 4: Xfce桌面界面已准备好服务。
因为你不会想每次都必须执行 startx 命令,所以你会希望启用登录守护进程。然而它并没有被默认安装。要安装这个子系统,执行命令:
```
sudo mport install mlogind
```
当完成安装后,通过在 /etc/rc.conf 文件中添加一个项目来在启动时启用 mlogind 。在 rc.conf 文件的底部,添加以下内容:
```
mlogind_enable="YES"
```
保存并关闭该文件。现在,当你启动(或重启)机器时,应该就会看到图形登录屏幕了。不过在撰写本文时,登录后我得到的是一个空白屏幕和一个多余的 X 光标,而且目前似乎还没有解决这个问题的方法。所以,要进入你的桌面环境,你必须使用 startx 命令。
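顺带一提(这是我自己的补充,原文并未提及):如果你的 MidnightBSD 版本带有源自 FreeBSD 的 sysrc 工具,上面对 /etc/rc.conf 的修改也可以用一条命令完成:
```
# 如果 sysrc 可用,可以用它向 /etc/rc.conf 追加该配置
sudo sysrc mlogind_enable="YES"
```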
### 安装
开箱即用,你将不能找到很多能使用的应用程序。如果你尝试安装应用程序(使用 mport ),你将很快发现你自己的沮丧,因为只能找到很少的应用程序。为解决这个问题,我们需要使用 svnlite 命令来查看检查出可用的 mport 软件列表。回到终端窗口,并发出命令:
```
svnlite co http://svn.midnightbsd.org/svn/mports/trunk mports
```
在你完成这些后,你应该会看到一个名为 ~/mports 的新目录。切换到这个目录(使用命令 cd ~/mports然后执行 ls 命令,你应该会看到许多分类目录(图 5。
![applications][10]
图 5: 对于 mport 现在可用的应用程序类别。
你想安装 Firefox ?如果你查看 www 目录,你将看到一个 linux-firefox 列表。发出命令:
```
sudo mport install linux-firefox
```
现在你应该会在 Xfce 桌面菜单中看到一个 Firefox 项目。翻找所有的类别,并使用 mport 命令来安装你需要的所有软件。
### 一个悲哀的警告
一个悲哀的小警告是mport通过 svnlite能找到的唯一一个办公套件版本是 OpenOffice 3那已经非常过时了。尽管在 ~/mports/editors 目录中能找到 Abiword但它看起来无法安装。甚至在安装 OpenOffice 3 后,它也会报出一个 Exec 格式错误。换句话说,你没法用 MidnightBSD 在办公生产力方面做很多事情。但是,嘿,如果你手头还有一台旧的 Palm 掌上设备,你也可以安装 pilot-link。换句话说可用的软件并不能构成一个极其有用的桌面发行版…… 至少对普通用户来说不是。但是,如果你想在 MidnightBSD 上进行开发,你会找到很多可供安装的工具(查看 ~/mports/devel 目录)。你甚至可以使用下面的命令安装 Drupal
```
sudo mport install drupal7
```
当然,在此之后,你将需要创建一个数据库( MySQL 已经安装),安装 Apache (sudo mport install apache24) ,并配置必需的 Apache 指令。
显然,已经安装的以及可以安装的软件,是应用程序、系统工具和服务的大杂烩。但是只要花足够多的功夫,你最终可以得到一个能够满足特定用途的发行版。
### 享受 *BSD 优良
这就是让 MidnightBSD 启动并运行成一个还算有用的桌面发行版的全部过程。它不像很多其它 Linux 发行版那样又快又容易,但是如果你想要一个能让你动脑筋的发行版,这可能正是你在寻找的。尽管大多数竞争对手都有大量随时可装的软件MidnightBSD 无疑是一个 Linux 爱好者或系统管理员值得尝试的有趣挑战。
通过来自 Linux 基金会和 edX 的免费的[" Linux 简介" ][11]课程学习更多关于 Linux 的信息。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/5/midnightbsd-could-be-your-gateway-freebsd
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.freebsd.org/
[2]:https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
[3]:https://en.wikipedia.org/wiki/GNUstep
[4]:http://www.midnightbsd.org/download/
[5]:https://www.virtualbox.org/
[6]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_1.jpg (MidnightBSD installer)
[7]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_2.jpg (Adding a user)
[8]:https://lcom.static.linuxfound.org/sites/lcom/files/mightnight_3.jpg (Applying your configurations)
[9]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_4.jpg (Xfce)
[10]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_5.jpg (applications)
[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -7,15 +7,18 @@
[#]: via: (https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-the-many-variations-and-how-to-use-them.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Zipping files on Linux: the many variations and how to use them
在 Linux 上压缩文件zip 命令的各种变体及用法
======
> 除了压缩和解压缩文件外,你还可以使用 zip 命令执行许多有趣的操作。这是一些其他的 zip 选项以及它们如何提供帮助。
![](https://images.idgesg.net/images/article/2019/01/zipper-100785364-large.jpg)
Some of us have been zipping files on Unix and Linux systems for many decades — to save some disk space and package files together for archiving. Even so, there are some interesting variations on zipping that not all of us have tried. So, in this post, were going to look at standard zipping and unzipping as well as some other interesting zipping options.
为了节省一些磁盘空间并将文件打包在一起进行归档,我们中的一些人已经在 Unix 和 Linux 系统上压缩文件数十年了。即使这样,并不是所有人都尝试过一些有趣的压缩工具的变体。因此,在本文中,我们将介绍标准的压缩和解压缩以及其他一些有趣的压缩选项。
### The basic zip command
### 基本的 zip 命令
First, lets look at the basic **zip** command. It uses what is essentially the same compression algorithm as **gzip** , but there are a couple important differences. For one thing, the gzip command is used only for compressing a single file where zip can both compress files and join them together into an archive. For another, the gzip command zips “in place”. In other words, it leaves a compressed file — not the original file alongside the compressed copy. Here's an example of gzip at work:
首先,让我们看一下基本的 `zip` 命令。它使用了与 `gzip` 基本上相同的压缩算法,但是有一些重要的区别。一方面,`gzip` 命令仅用于压缩单个文件,而 `zip` 既可以压缩文件,也可以将多个文件结合在一起成为归档文件。另外,`gzip` 命令是“就地”压缩。换句话说,它会留下一个压缩文件,而不是原始文件。 这是工作中的 `gzip` 示例:
```
$ gzip onefile
@ -23,7 +26,7 @@ $ ls -l
-rw-rw-r-- 1 shs shs 10514 Jan 15 13:13 onefile.gz
```
And here's zip. Notice how this command requires that a name be provided for the zipped archive where gzip simply uses the original file name and adds the .gz extension.
而这是 `zip`。请注意,此命令要求为压缩存档提供名称,其中 `gzip`(执行压缩操作后)仅使用原始文件名并添加 `.gz` 扩展名。
```
$ zip twofiles.zip file*
@ -35,9 +38,9 @@ $ ls -l
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip
```
Notice also that the original files are still sitting there.
请注意,原始文件仍位于原处。
The amount of disk space that is saved (i.e., the degree of compression obtained) will depend on the content of each file. The variation in the example below is considerable.
所节省的磁盘空间量(即获得的压缩程度)将取决于每个文件的内容。以下示例中的变化很大。
```
$ zip mybin.zip ~/bin/*
@ -56,9 +59,9 @@ $ zip mybin.zip ~/bin/*
adding: bin/tt (deflated 6%)
```
### The unzip command
### unzip 命令
The **unzip** command will recover the contents from a zip file and, as you'd likely suspect, leave the zip file intact, whereas a similar gunzip command would leave only the uncompressed file.
`unzip` 命令将从一个 zip 文件中恢复内容,并且,如你所料,原来的 zip 文件还保留在那里,而类似的`gunzip` 命令将仅保留未压缩的文件。
```
$ unzip twofiles.zip
@ -71,9 +74,9 @@ $ ls -l
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip
```
### The zipcloak command
### zipcloak 命令
The **zipcloak** command encrypts a zip file, prompting you to enter a password twice (to help ensure you don't "fat finger" it) and leaves the file in place. You can expect the file size to vary a little from the original.
`zipcloak` 命令对一个 zip 文件进行加密,提示你输入两次密码(以确保你不会“胖手指”),然后将该文件原位存储。你可以想到,文件大小与原始文件会有所不同。
```
$ zipcloak twofiles.zip
@ -89,11 +92,11 @@ total 204
unencrypted version
```
Keep in mind that the original files are still sitting there unencrypted.
请记住,压缩包之外的原始文件仍处于未加密状态。
### The zipdetails command
### zipdetails 命令
The **zipdetails** command is going to show you details — a _lot_ of details about a zipped file, likely a lot more than you care to absorb. Even though we're looking at an encrypted file, zipdetails does display the file names along with file modification dates, user and group information, file length data, etc. Keep in mind that this is all "metadata." We don't see the contents of the files.
`zipdetails` 命令将向你显示详细信息:有关压缩文件的详细信息,可能比你想象的要多得多。即使我们正在查看一个加密的文件,`zipdetails` 也会显示文件名以及文件修改日期、用户和组信息、文件长度数据等。请记住,这都是“元数据”。我们看不到文件的内容。
```
$ zipdetails twofiles.zip
@ -233,9 +236,9 @@ $ zipdetails twofiles.zip
Done
```
### The zipgrep command
### zipgrep 命令
The **zipgrep** command is going to use a grep-type feature to locate particular content in your zipped files. If the file is encrypted, you will need to enter the password provided for the encryption for each file you want to examine. If you only want to check the contents of a single file from the archive, add its name to the end of the zipgrep command as shown below.
`zipgrep` 命令将使用 `grep` 类的功能来找到压缩文件中的特定内容。如果文件已加密,则需要为要检查的每个文件输入为加密所提供的密码。如果只想检查归档文件中单个文件的内容,请将其名称添加到 `zipgrep` 命令的末尾,如下所示。
```
$ zipgrep hazard twofiles.zip file1
@ -243,9 +246,9 @@ $ zipgrep hazard twofiles.zip file1
Certain pesticides should be banned since they are hazardous to the environment.
```
### The zipinfo command
### zipinfo 命令
The **zipinfo** command provides information on the contents of a zipped file whether encrypted or not. This includes the file names, sizes, dates and permissions.
`zipinfo` 命令提供有关压缩文件内容的信息,无论是否加密。这包括文件名、大小、日期和权限。
```
$ zipinfo twofiles.zip
@ -256,9 +259,9 @@ Zip file size: 21313 bytes, number of entries: 2
2 files, 116954 bytes uncompressed, 20991 bytes compressed: 82.1%
```
### The zipnote command
### zipnote 命令
The **zipnote** command can be used to extract comments from zip archives or add them. To display comments, just preface the name of the archive with the command. If no comments have been added previously, you will see something like this:
`zipnote` 命令可用于从 zip 归档中提取注释或添加注释。要显示注释,只需在命令前面加上归档名称即可。如果之前未添加任何注释,你将看到类似以下内容:
```
$ zipnote twofiles.zip
@ -269,21 +272,21 @@ $ zipnote twofiles.zip
@ (zip file comment below this line)
```
If you want to add comments, write the output from the zipnote command to a file:
如果要添加注释,请先将 `zipnote` 命令的输出写入文件:
```
$ zipnote twofiles.zip > comments
```
Next, edit the file you've just created, inserting your comments above the **(comment above this line)** lines. Then add the comments using a zipnote command like this one:
接下来,编辑你刚刚创建的文件,将注释插入到 `(comment above this line)` 行上方。然后使用像这样的`zipnote` 命令添加注释:
```
$ zipnote -w twofiles.zip < comments
```
### The zipsplit command
### zipsplit 命令
The **zipsplit** command can be used to break a zip archive into multiple zip archives when the original file is too large — maybe because you're trying to add one of the files to a small thumb drive. The easiest way to do this seems to be to specify the max size for each of the zipped file portions. This size must be large enough to accomodate the largest included file.
当归档文件太大时,可以使用 `zipsplit` 命令将一个 zip 归档文件分解为多个 zip 归档文件,这样你就可以将其中某一个文件放到小型 U 盘中。最简单的方法似乎是为每个部分的压缩文件指定最大大小,此大小必须足够大以容纳最大的包含文件。
```
$ zipsplit -n 12000 twofiles.zip
@ -296,15 +299,11 @@ $ ls twofile*.zip
-rw-rw-r-- 1 shs shs 21377 Jan 15 14:27 twofiles.zip
```
Notice how the extracted files are sequentially named "twofile1" and "twofile2".
请注意,提取的文件是如何依次命名为 `twofile1``twofile2` 的。
### Wrap-up
### 总结
The **zip** command, along with some of its zipping compatriots, provide a lot of control over how you generate and work with compressed file archives.
**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][1] ]**
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
`zip` 命令及其一些压缩工具变体,对如何生成和使用压缩文件归档提供了很多控制。
--------------------------------------------------------------------------------
@ -312,7 +311,7 @@ via: https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-t
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,232 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Go About Linux Boot Time Optimisation)
[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)
如何进行 Linux 启动时间优化
======
[![][1]][2]
_快速启动一台嵌入式设备或一台电信设备对于时间要求严格的应用程序是至关重要的并且在改善用户体验方面也起着非常重要的作用。本文给出了一些关于如何缩短任意设备启动时间的重要技巧。_
快速启动或快速重启在各种情况下都起着至关重要的作用。对于嵌入式设备来说,快速启动有助于保持所有服务的高可用性和更好的性能。设想一台电信设备,运行着一套没有启用快速启动的 Linux 操作系统,所有依赖于这个特定嵌入式设备的系统、服务和用户都可能会受到影响。这些设备在其服务中保持高可用性是非常重要的,为此,快速启动和重启起着至关重要的作用。
一台电信设备的一次小故障或关机,甚至几秒钟,都可能会对无数在因特网上工作的用户造成破坏。因此,对于很多对时间要求严格的设备和电信设备来说,在它们的服务中包含快速启动以帮助它们快速重新开始工作是非常重要的。让我们从图表 1 中理解 Linux 启动过程。
![Figure 1: Boot-up procedure][3]
![Figure 2: Boot chart][4]
**监视工具和启动过程**
在对机器进行更改之前,用户应该注意许多因素。这些因素包括机器的当前启动速度,以及那些正在占用资源并增加启动时间的服务、进程或应用程序。
**Boot chart:** 为监视启动速度和在启动期间启动的各种服务,用户可以使用下面的命令来安装 boot chart
```
sudo apt-get install pybootchartgui
```
你每次启动时boot chart 在日志中保存一个 _.png_ (便携式网络图片)文件,使用户能够查看 _png_ 文件来理解系统的启动过程和服务。为此,使用下面的命令:
```
cd /var/log/bootchart
```
用户可能需要一个应用程序来查看 _.png_ 文件。Feh 是一个面向控制台用户的 X11 图像查看器。不像大多数其它的图像查看器它没有一个精致的图形用户界面但是它仅仅显示图片。Feh 可以用于查看 _.png_ 文件。你可以使用下面的命令来安装它:
```
sudo apt-get install feh
```
你可以使用 _feh xxxx.png_ 来查看 _png_ 文件。
图表 2 显示查看一个 boot chart 的 _png_ 文件时的启动图表。
但是,对于 Ubuntu 15.10 以后的版本不再需要 boot chart 。 为获取关于启动速度的简短信息,使用下面的命令:
```
systemd-analyze
```
![Figure 3: Output of systemd-analyze][5]
图表 3 显示命令 _systemd-analyze_ 的输出。
命令 _systemd-analyze blame_ 会按照初始化所用时间从多到少列出所有正在运行的单元。这个信息非常有用,可用于优化启动时间。systemd-analyze blame 不会显示 _Type=simple_ 类型服务的结果,因为 systemd 认为这类服务是立即启动的,因此无法测量其初始化的延迟。
![Figure 4: Output of systemd-analyze blame][6]
图表 4 显示 _systemd-analyze_ blame 的输出.
下面的命令打印一个单元的时间关键的链的树:
```
systemd-analyze critical-chain
```
图表 5 显示命令_systemd-analyze critical-chain_ 的输出。
![Figure 5: Output of systemd-analyze critical-chain][7]
**减少启动时间的步骤**
下面显示的是一些可采取的用于减少启动时间的步骤。
**BUM启动管理器** BUM 是一个运行级配置编辑器,允许在系统启动或重启时配置 _init_ 服务。它会显示一个可以在启动时启动的所有服务的列表用户可以逐项打开或关闭各个服务。BUM 有一个非常简洁的图形用户界面,并且非常容易使用。
在 Ubuntu 14.04 中, BUM 可以使用下面的命令安装:
```
sudo apt-get install bum
```
要在 15.10 以后的版本中安装它,请从链接 _<http://apt.ubuntu.com/p/bum>_ 下载软件包。
从最基础的开始,禁用扫描仪和打印机相关的服务。如果你没有使用蓝牙和其它不需要的设备与服务,也可以禁用它们中的一些。我强烈建议你在禁用相关服务前先了解这些服务的基础知识,因为这可能会影响机器或操作系统。图表 6 显示了 BUM 的图形用户界面。
![Figure 6: BUM][8]
**编辑 rc 文件:** 为编辑 rc 文件,你需要转到 rc 目录。这可以使用下面的命令来做到:
```
cd /etc/init.d
```
然而,访问 _init.d_ 需要 root 用户权限,该目录主要包含启动/停止脚本,用于在系统运行期间或启动期间控制(启动、停止、重新加载、重启)守护进程。
_rc_ 文件在 _init.d_ 中被称为一个运行控制脚本。在启动期间init 执行 _rc_ 脚本并发挥它的作用。为改善启动速度,我们更改 _rc_ 文件。使用任意的文件编辑器打开 _rc_ 文件(当你在 _init.d_ 目录中时)。
例如,通过输入 _vim rc_ 打开它,你可以将 _CONCURRENCY=none_ 的值更改为 _CONCURRENCY=shell_。后者允许某些启动脚本同时执行而不是依次串行执行。
在最新版本的内核中,该值应该被更改为 _CONCURRENCY=makefile_
图表 7 和 8 显示了编辑 rc 文件前后启动时间的比较,可以注意到启动速度的改善。编辑 rc 文件前的启动时间是 50.98 秒,而对 rc 文件进行更改后的启动时间是 23.85 秒。
但是,上面提及的更改方法在 Ubuntu 15.10 以后的操作系统上不工作,因为使用最新内核的操作系统使用 systemd 文件,而不再是 _init.d_ 文件。
![Figure 7: Boot speed before making changes to the rc file][9]
![Figure 8: Boot speed after making changes to the rc file][10]
**E4rat** E4rat 代表 e4 “减少访问时间”(仅适用于 ext4 文件系统),它是由 Andreas Rid 和 Gundolf Kiefer 开发的一个项目。E4rat 是一个借助碎片整理来实现快速启动的应用程序它还可以加速应用程序的启动。E4rat 通过将文件重新进行物理分配来消除寻道时间和旋转延迟,从而获得较高的磁盘传输速度。
E4rat 以 .deb 软件包的形式提供,你可以从它的官方网站 _<http://e4rat.sourceforge.net/>_ 下载。
Ubuntu 默认的 ureadahead 软件包与 e4rat 冲突,因此必须使用下面的命令清除一些软件包:
```
sudo dpkg --purge ureadahead ubuntu-minimal
```
现在使用下面的命令来安装 e4rat 的依赖关系:
```
sudo apt-get install libblkid1 e2fslibs
```
打开下载的 _.deb_ 文件,并安装它。现在需要恰当地收集启动数据来使 e4rat 工作。
遵循下面所给的步骤来使 e4rat 正确地运行,并提高启动速度。
* 在启动期间访问 Grub 菜单。这可以在系统启动时通过按住 shift 按键来完成。
* 选择通常用于启动的选项(内核版本),并按 e
* 查找以 _linux /boot/vmlinuz_ 开头的行,并在该行的末尾添加下面的代码(在句子的最后一个字母后按空格键)
```
init=/sbin/e4rat-collect 或者试试 quiet splash vt.handoff=7 init=/sbin/e4rat-collect
```
* 现在,按 _Ctrl+x_ 来继续启动。这让 e4rat 在启动后收集数据。在机器上工作,打开应用程序,并在接下来的两分钟时间内关闭应用程序。
* 通过转到 e4rat 文件夹,并使用下面的命令来访问日志文件:
```
cd /var/log/e4rat
```
* 如果你没有找到任何日志文件,重复上面的过程。一旦日志文件在这里,再次访问 Grub 菜单,并按 e 作为你的选项。
* 在你之前编辑过的同一行的末尾输入 single。这将帮助你进入命令行。如果出现一个询问其它事项的菜单选择“恢复正常启动”Resume normal boot。如果你不知为何无法进入命令提示符按 Ctrl+Alt+F1 组合键。
* 在你看到登录提示后,输入你的详细信息。
* 现在输入下面的命令:
```
sudo e4rat-realloc /var/lib/e4rat/startup.log
```
这个进程需要一段时间,依赖于机器的磁盘速度。
* 现在使用下面的命令来重启你的机器:
```
sudo shutdown -r now
```
* 现在,我们需要配置 Grub 来在每次启动时运行 e4rat 。
* 使用任意编辑器打开 grub 文件,例如 _gksu gedit /etc/default/grub_。
* 查找以 `GRUB_CMDLINE_LINUX_DEFAULT=` 开头的一行,并在引号之间、任何现有选项之前添加下面的内容:
```
init=/sbin/e4rat-preload
```
* 它应该看起来像这样:
```
GRUB_CMDLINE_LINUX_DEFAULT="init=/sbin/e4rat-preload quiet splash"
```
* 保存并关闭 Grub 菜单,并使用 _sudo update-grub_ 更新 Grub 。
* 重启系统,你将在启动速度方面发现显著的变化。
图表 9 和 10 显示在安装 e4rat 前后的启动时间的不同。启动速度的改善可以被注意到。在使用 e4rat 前启动所用时间是 22.32 秒,然而在使用 e4rat 后启动所用时间是 9.065 秒。
![Figure 9: Boot speed before using e4rat][11]
![Figure 10: Boot speed after using e4rat][12]
**一些易做的调整**
一个极好的启动速度也可以使用非常小的调整来实现,其中两个在下面列出。
**SSD:** 使用固态设备而不是普通的硬盘或者其它的存储设备将肯定会改善启动速度。SSD 也帮助获得在传输文件和运行应用程序方面的极好速度。
**禁用图形用户界面:** 图形用户界面,桌面图形和窗口动画占用大量的资源。禁用图形用户界面是另一个实现极好的启动速度的好方法。
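作为上面第二条的一个具体做法(这是我自己的补充,并非原文内容;适用于使用 systemd 的发行版),可以把默认启动目标改为多用户(纯文本)模式,从而在启动时跳过图形界面:
```
# 默认进入文本模式(下次启动生效)
sudo systemctl set-default multi-user.target

# 需要图形界面时再改回来
sudo systemctl set-default graphical.target
```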
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/
作者:[B Thangaraju][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/b-thangaraju/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?resize=696%2C496&ssl=1 (Screenshot from 2019-10-07 13-16-32)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?resize=350%2C302&ssl=1
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?resize=350%2C412&ssl=1
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?resize=350%2C69&ssl=1
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?resize=350%2C535&ssl=1
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?resize=350%2C206&ssl=1
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?resize=350%2C449&ssl=1
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?resize=350%2C85&ssl=1
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?resize=350%2C72&ssl=1
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?resize=350%2C61&ssl=1
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?resize=350%2C61&ssl=1

View File

@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Read Reddit from the Linux terminal)
[#]: via: (https://opensource.com/article/20/1/open-source-reddit-client)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
在 Linux 终端中阅读 Reddit
======
在我们的 20 个使用开源提升生产力的系列的第十一篇文章中使用 Reddit 客户端 Tuir 在工作中短暂休息一下。
![Digital creative of a browser on the internet][1]
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 Tuir 阅读 Reddit
短暂休息对于保持生产力很重要。我休息时喜欢去的地方之一是 [Reddit][2],如果你愿意,这可能是一个很好的资源。我在那里发现了各种有关 DevOps、生产力、Emacs、鸡和 ChromeOS 项目的文章。这些讨论可能很有价值。我还关注了一些只有动物图片的子板,因为我喜欢动物(而不只是鸡)照片,有时经过长时间的工作后,我真正需要的是小猫照片。
![/r/emacs in Tuir][3]
当我阅读 Reddit不仅仅是看动物宝宝的图片我使用 [Tuir][4],它代表 Terminal UI for RedditReddit 终端界面。Tuir 是一个功能齐全的 Reddit 客户端,可以在任何安装了 Python 的系统上运行。它通过 pip 安装,非常轻松。
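下面是一个最简单的安装示例(假设系统中已经有 Python 3 和 pip具体命令是我补充的原文并未列出
```
# 通过 pip 安装并启动 Tuir
pip3 install tuir
tuir
```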
首次运行时Tuir 会进入 Reddit 默认文章列表。屏幕的顶部和底部有列出不同命令的栏。顶部栏显示你在 Reddit 上的位置,第二行显示根据 Reddit “Hot/New/Controversial” 等类别筛选的命令。按下筛选器前面的数字触发筛选。
![Filtering by Reddit's "top" category][5]
你可以使用箭头键或 **j**、**k**、**h** 和 **l** 键浏览列表,这与 Vi/Vim 使用的键相同。底部栏有用于应用导航的命令。如果要跳转到另一个子板,只需按 **/** 键打开提示,然后输入你要进入的子板名称。
![Logging in][6]
某些东西除非你登录,否则无法访问。如果你尝试执行需要登录的操作,那么 Tuir 就会提示你,例如发布新文章 **c** 或赞成/反对 **a** 和 **z**)。要登录,请按 **u** 键。这将打开浏览器以通过 OAuth2 登录Tuir 将保存令牌。之后,你的用户名应出现在屏幕的右上方。
Tuir 还可以打开浏览器来查看图像、加载链接等。稍作调整,它甚至可以在终端中显示图像(尽管我没有设法使其正常工作)。
总的来说,我对 Tuir 在我需要休息时能快速跟上 Reddit 感到很满意。
Tuir 是现已淘汰的 [RTV][7] 的两个分叉之一。另一个是 [TTRV][8],它还无法通过 pip 安装,但功能相同。我期待看到它们随着时间的推移脱颖而出。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-reddit-client
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://www.reddit.com/
[3]: https://opensource.com/sites/default/files/uploads/productivity_11-1.png (/r/emacs in Tuir)
[4]: https://gitlab.com/ajak/tuir
[5]: https://opensource.com/sites/default/files/uploads/productivity_11-2.png (Filtering by Reddit's "top" category)
[6]: https://opensource.com/sites/default/files/uploads/productivity_11-3.png (Logging in)
[7]: https://github.com/michael-lazar/rtv
[8]: https://github.com/tildeclub/ttrv

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get your RSS feeds and podcasts in one place with this open source tool)
[#]: via: (https://opensource.com/article/20/1/open-source-rss-feed-reader)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney)
使用此开源工具将你的 RSS 订阅源和播客放在一起
======
在我们的 20 个使用开源提升生产力的系列的第十二篇文章中使用 Newsboat 跟上你的新闻 RSS 源和播客。
![Ship captain sailing the Kubernetes seas][1]
去年,我在 19 天里给你介绍了 19 个新(对你而言)的生产力工具。今年,我换了一种方式:使用你在使用或者还没使用的工具,构建一个使你可以在新一年更加高效的环境。
### 使用 Newsboat 访问你的 RSS 源和播客
RSS 新闻源是了解各个网站最新消息的非常方便的方法。除了 Opensource.com我还会关注 [SysAdvent][2] 年度 sysadmin 工具还有一些我最喜欢的作者以及一些网络漫画。RSS 阅读器可以让我“批处理”阅读内容,因此,我每天不会在不同的网站上花费很多时间。
![Newsboat][3]
[Newsboat][4] 是一个基于终端的 RSS 订阅源阅读器,外观感觉很像电子邮件程序 [Mutt][5]。它使阅读新闻变得容易,并有许多不错的功能。
安装 Newsboat 非常容易,因为它包含在大多数发行版(以及 MacOS 上的 Homebrew中。安装后只需在 **~/.newsboat/urls** 中添加订阅源。如果你是从其他阅读器迁移而来,并有导出的 OPML 文件,那么可以使用以下方式导入:
```
newsboat -i </path/to/my/feeds.opml>
```
添加订阅源后Newsboat 的界面非常熟悉,特别是如果你使用过 Mutt。你可以使用箭头键上下滚动使用 **r** 检查某个源中是否有新项目,使用 **R** 检查所有源中是否有新项目,按**回车**打开订阅源,并选择要阅读的文章。
![Newsboat article list][6]
但是,你不仅限于本地 URL 列表。Newsboat 还是 [Tiny Tiny RSS][7]、ownCloud 和 Nextcloud News 等新闻阅读服务以及一些 Google Reader 后续产品的客户端。[Newsboat 的文档][8]中涵盖了有关此的详细信息以及其他许多配置选项。
![Reading an article in Newsboat][9]
#### 播客
Newsboat 还通过 Podboat 提供了[播客支持][10]Podboat 是一个附带的应用,可以帮助下载播客节目并将其加入队列。在 Newsboat 中查看播客源时,按下 **e** 可将节目添加到你的下载队列中,所有信息会保存在 **~/.newsboat** 目录中的一个队列文件里。Podboat 读取此队列并将节目下载到本地磁盘。你可以在 Podboat 的用户界面(外观和行为类似于 Newsboat中执行此操作也可以使用 **podboat -a** 让 Podboat 下载所有内容。作为播客主播和播客听众我认为这_真的_很方便。
![Podboat][11]
总体而言Newsboat 有一些非常好的功能,并且是一些基于 Web 或桌面应用的不错的轻量级替代方案。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/open-source-rss-feed-reader
作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ksonney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://sysadvent.blogspot.com/
[3]: https://opensource.com/sites/default/files/uploads/productivity_12-1.png (Newsboat)
[4]: https://newsboat.org
[5]: http://mutt.org/
[6]: https://opensource.com/sites/default/files/uploads/productivity_12-2.png (Newsboat article list)
[7]: https://tt-rss.org/
[8]: https://newsboat.org/releases/2.18/docs/newsboat.html
[9]: https://opensource.com/sites/default/files/uploads/productivity_12-3.png (Reading an article in Newsboat)
[10]: https://newsboat.org/releases/2.18/docs/newsboat.html#_podcast_support
[11]: https://opensource.com/sites/default/files/uploads/productivity_12-4.png (Podboat)

View File

@ -0,0 +1,57 @@
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's your favorite Linux distribution?)
[#]: via: (https://opensource.com/article/20/1/favorite-linux-distribution)
[#]: author: (Opensource.com https://opensource.com/users/admin)
你最喜欢哪个 Linux 发行版?
======
参与我们的第七届[年度调查][5],让我们了解你对 Linux 发行版的偏好。
![Hand putting a Linux file folder into a drawer][1]
你最喜欢哪个 Linux 发行版?参与我们的第七届[年度调查][5]。虽然有所变化,但现在仍有数百种 [Linux 发行版][2] 保持活跃且运作良好。发行版、包管理器和桌面的组合为 Linux 用户创建了无数客制化系统环境。
我们询问了社区的作者们哪个是他们的最爱以及原因。尽管回答中存在一些共性由于各种原因Fedora 和 Ubuntu 是最受欢迎的选择),但我们也听到一些惊奇的回答。以下是他们的一些回答:
“我使用 Fedora 发行版我喜欢这样的社区成员们共同创建一个令人敬畏的操作系统展现了开源软件世界最伟大的造物。”——Matthew Miller
“我在家中使用 Arch。作为一名游戏玩家我希望可以轻松使用最新版本的 Wine 和 GFX 驱动同时最大限度地掌控我的系统。所以我选择一个滚动升级并且每个包都保持领先的发行版。”——Aimi Hobson
“NixOS在业余爱好者市场中没有比这更合适的。”——Alexander Sosedkin
“我用过每个 Fedora 版本作为我的工作系统。这意味着我从第一个版本开始使用。从前我问自己是否将有一天我会忘记我用的系统版本号。而这一天已经到来了。所以现在是什么时候呢”——Hugh Brock
“通常,在我的房屋和办公室里都有运行 Ubuntu、CentOS 和 Fedora 的机器。我依赖这些发行版来完成各种工作。Fedora 用于高性能和获取最新版本的应用和库。Ubuntu 用于那些需要大型支持社区。CentOS 则当我们需要稳如磐石的服务器平台时。”——Steve Morris
“我最喜欢?对于社区以及如何为发行版构建软件包(从源码构建而非二进制文件),我选择 Fedora。对于可用包的范围和包的定义和开发我选择 Debian。对于文档我选择 Arch。对于新手的提问我以前会推荐 Ubuntu而现在会推荐 Fedora。”——Al Stone
* * *
自从 2014 以来,我们一直向社区提出这一问题。除了 2015 年 PCLinuxOS 出乎意料的领先Ubuntu 往往每年都获得粉丝们的青睐。其他受欢迎的竞争者还包括 Fedora、Debian、Mint 和 Arch。在新的十年里哪个发行版更吸引你如果我们的投票列表中没有你最喜欢的选择请在评论中告诉我们。
下面是过去七年来你最喜欢的 Linux 发行版投票的总览。你可以在我们去年的年刊《[Opensource.com 上的十年最佳][3]》中看到它。[点击这里][3]下载完整版电子书!
![Poll results for favorite Linux distribution through the years][4]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/1/favorite-linux-distribution
作者:[Opensource.com][a]
选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/admin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://distrowatch.com/
[3]: https://opensource.com/downloads/2019-yearbook-special-edition
[4]: https://opensource.com/sites/default/files/pictures/linux-distributions-through-the-years.jpg (favorite Linux distribution through the years)
[5]: https://opensource.com/article/20/1/favorite-linux-distribution

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for January 2020)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
COPR 仓库中 4 个很酷的新项目2020.01
======
![][1]
COPR 是个人软件仓库的[集合][2],这些软件不在 Fedora 官方仓库中。这是因为某些软件不符合轻松打包的标准或者它可能不符合其他 Fedora 标准尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也没有经过该项目的签名。不过,它是一种尝试新的或实验性软件的巧妙方式。
本文介绍了 COPR 中一些有趣的新项目。如果你第一次使用 COPR请参阅 [COPR 用户文档][3]。
### Contrast
[Contrast][4] 是一款小应用,用于检查两种颜色之间的对比度,并确定其是否满足 [WCAG][5] 中指定的要求。颜色可以使用十六进制 RGB 代码指定也可以使用颜色选择器选取。除了显示对比度之外Contrast 还会在以选定颜色为背景的区域上显示一段短文本,以便直观地进行比较。
![][6]
#### 安装说明
[仓库][7]当前为 Fedora 31 和 Rawhide 提供 Contrast。要安装 Contrast请使用以下命令
```
sudo dnf copr enable atim/contrast
sudo dnf install contrast
```
### Pamixer
[Pamixer][8]是一个使用 PulseAudio 调整和监控声音设备音量的命令行工具。你可以显示设备的当前音量并直接增加/减小它,或静音/取消静音。Pamixer 可以列出所有源和接收器。
#### 安装说明
[仓库][9]当前为 Fedora 31 和 Rawhide 提供 Pamixer。要安装 Pamixer请使用以下命令
```
sudo dnf copr enable opuk/pamixer
sudo dnf install pamixer
```
### PhotoFlare
[PhotoFlare][10] 是一款图像编辑器。它有简单且布局合理的用户界面,其中的大多数功能都可在工具栏中使用。尽管它不支持使用图层,但 PhotoFlare 提供了诸如各种颜色调整、图像变换、滤镜、画笔和自动裁剪等功能。此外PhotoFlare 可以批量编辑图片,来对所有图片应用相同的滤镜和转换,并将结果保存在指定目录中。
![][11]
#### 安装说明
[仓库][12]当前为 Fedora 31 提供 PhotoFlare。要安装 PhotoFlare请使用以下命令
```
sudo dnf copr enable adriend/photoflare
sudo dnf install photoflare
```
### Tdiff
[Tdiff][13] 是用于比较两个文件树的命令行工具。除了显示某些文件或目录仅存在于一棵树中之外tdiff 还显示文件大小、类型和内容,所有者用户和组 ID、权限、修改时间等方面的差异。
#### 安装说明
[仓库][14]当前为 Fedora 29-31、Rawhide、EPEL 6-8 和其他发行版提供 tdiff。要安装 tdiff请使用以下命令
```
sudo dnf copr enable fif/tdiff
sudo dnf install tdiff
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://gitlab.gnome.org/World/design/contrast
[5]: https://www.w3.org/WAI/standards-guidelines/wcag/
[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/contrast-screenshot.png
[7]: https://copr.fedorainfracloud.org/coprs/atim/contrast/
[8]: https://github.com/cdemoulins/pamixer
[9]: https://copr.fedorainfracloud.org/coprs/opuk/pamixer/
[10]: https://photoflare.io/
[11]: https://fedoramagazine.org/wp-content/uploads/2020/01/photoflare-screenshot.png
[12]: https://copr.fedorainfracloud.org/coprs/adriend/photoflare/
[13]: https://github.com/F-i-f/tdiff
[14]: https://copr.fedorainfracloud.org/coprs/fif/tdiff/

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (qianmingtian)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intro to the Linux command line)
[#]: via: (https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux 命令行简介
======
下面是一些针对刚开始使用 Linux 命令行的人的热身练习。警告:它可能会上瘾。[Sandra Henry-Stocker / Linux][1] [(CC0)][2]
如果你是 Linux 新手,或者从来没有花时间研究过命令行,你可能不会理解为什么这么多 Linux 爱好者坐在舒适的桌面使用大量工具与应用时键入命令会产生兴奋。在这篇文章中,我们将快速浏览一下命令行的奇妙之处,看看能否让你着迷。
首先,要使用命令行,你必须打开一个命令工具(也称为“命令提示符”)。如何做到这一点将取决于你运行的 Linux 版本。例如,在 RedHat 上,你可能会在屏幕顶部看到一个 Activities 选项卡,它将打开一个选项列表和一个用于输入命令的小窗口(如 “cmd” ,它将为你打开窗口)。在 Ubuntu 和其他一些版本中,你可能会在屏幕左侧看到一个小的终端图标。在许多系统上,你可以同时按 **Ctrl+Alt+t** 键打开命令窗口。
如果你使用 PuTTY 之类的工具登录 Linux 系统,你会发现自己已经处于命令行界面。
一旦你得到你的命令行窗口,你会发现自己坐在一个提示符面前。它可能只是一个 **$** 或者像 “**user@system:~$**” 这样的东西,但它意味着系统已经准备好为你运行命令了。
一旦你走到这一步,就应该开始输入命令了。下面是一些要首先尝试的命令,以及这里是一些特别有用的命令的 [PDF][4] 和适合打印和做成卡片的双面命令手册。
```
命令 用途
pwd 显示我在文件系统中的位置(在最初进入系统时运行将显示主目录)
ls 列出我的文件
ls -a 列出我更多的文件(包括隐藏文件)
ls -al 列出我的文件,并且包含很多详细信息(包括日期、文件大小和权限)
who 告诉我谁登录了(如果只有你,不要失望)
date 提醒我今天是星期几(也显示时间)
ps 列出我正在运行的进程(可能只有你的 shell 和 “ps” 命令本身)
```
一旦你从命令行角度习惯了 Linux 主目录之后,就可以开始探索了。也许你会准备好使用以下命令在文件系统中闲逛:
```
命令 用途
cd /tmp 移动到其他文件夹(本例中,打开 /tmp 文件夹)
ls 列出当前位置的文件
cd 回到主目录(不带参数的 cd 总是能将你带回到主目录)
cat .bashrc 显示文件的内容(本例中显示 .bashrc 文件的内容)
history 显示最近执行的命令
echo hello 跟自己说 “hello”
cal 显示当前月份的日历
```
要了解为什么高级 Linux 用户如此喜欢命令行,你将需要尝试其他一些功能,例如重定向和管道。 重定向是当你获取命令的输出并将其放到文件中而不是在屏幕上显示时。管道是指你将一个命令的输出发送给另一条将以某种方式对其进行操作的命令。这是可以尝试的命令:
```
命令 用途
echo “echo hello” > tryme 创建一个新的文件并将 “echo hello” 写入该文件
chmod 700 tryme 使新建的文件可执行
tryme 运行新文件(它应当运行文件中包含的命令并且显示 “hello”
ps aux 显示所有运行中的程序
ps aux | grep $USER 显示所有运行中的程序,但是限制输出的内容包含你的用户名
echo $USER 使用环境变量显示你的用户名
whoami 使用命令显示你的用户名
who | wc -l 计数所有当前登录的用户数目
```
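作为重定向和管道的另一个连贯小例子(命令是标准用法,输出内容会因系统而异):
```
# 把当前目录的详细列表重定向到文件
ls -al > files.txt

# 用管道把文件内容交给 wc 统计行数
cat files.txt | wc -l
```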
### 总结
一旦你习惯了基本命令,就可以探索其他命令并尝试编写脚本。 你可能会发现 Linux 比你想象的要强大并且好用得多。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[qianmingtian][c]
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[c]: https://github.com/qianmingtian
[1]: https://commons.wikimedia.org/wiki/File:Tux.svg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 Key Changes to Look Out for in Linux Kernel 5.6)
[#]: via: (https://itsfoss.com/linux-kernel-5-6/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
四大亮点带你看 Linux Kernel 5.6
======
当我们体验 Linux 5.5 稳定发行版带来更好的硬件支持时Linux 5.6 已经来了。
说实话Linux 5.6 比 5.5 更令人兴奋。即使即将发布的 Ubuntu 20.04 LTS 发行版将开箱集成 Linux 5.5,你也需要真正了解 Linux 5.6 kernel 为我们提供了什么。
我将在本文中重点介绍 Linux 5.6 发行版中值得期待的关键更改和功能:
### Linux 5.6 功能亮点
![][1]
当 Linux 5.6 有新消息时,我会努力更新这份功能列表。但现在让我们先看一下当前已知的内容:
#### 1\. 支持 WireGuard
WireGuard 将被添加到 Linux 5.6,出于各种原因的考虑它可能将取代 [OpenVPN][2]。
你可以在官网上进一步了解 [WireGuard][3] 的优点。当然,如果你使用过它,那你可能已经知道它比 OpenVPN 更好的原因。
同样,[Ubuntu 20.04 LTS 将支持 WireGuard][4]。
#### 2\. 支持 USB4
Linux 5.6 也将支持 **USB4**
如果你不了解 USB 4.0 (USB4),你可以阅读这份[文档][5].
根据文档“_USB4 将使 USB 的最大带宽增大一倍并支持同时多数据和显示接口并行。_”
另外,虽然我们都知道 USB4 基于 Thunderbolt 接口协议,但它将向后兼容 USB 2.0、USB 3.0 以及 Thunderbolt 3这将是一个好消息。
#### 3\. 使用 LZO/LZ4 压缩 F2FS 数据
Linux 5.6 也将支持使用LZO/LZ4 算法压缩 F2FS 数据。
换句话说,这只是 Linux 文件系统的一种新压缩技术,你可以针对特定的文件扩展名选择使用它。
#### 4\. 解决 32 位系统的 2038 年问题
Unix 和 Linux 将时间值以 32 位有符号整数格式存储,其最大值为 2147483647。时间值如果超过这个数值则将由于整数溢出而存储为负数。
这意味着对于32位系统时间值不能超过 1970 年 1 月 1 日后的 2147483647 秒。也就是说,在 UTC 时间 2038 年 1 月 19 日 03:14:07 时,由于整数溢出,时间将显示为 1901 年 12 月 13 日而不是 2038 年 1 月 19 日。
Linux kernel 5.6 解决了这个问题,因此 32 位系统可以运行到 2038 年以后。
#### 5\. 改进硬件支持
很显然,在下一次发布版中,硬件支持也将继续提升。而支持新式无线外设的计划也同样紧迫。
新内核中将增加对 MX Master 3 鼠标以及罗技其他无线产品的支持。
除了罗技的产品外,你还可以期待获得许多不同硬件的支持(包括对 AMD GPUs、NVIDIA GPUs 和 Intel Tiger Lake 芯片组的支持)。
#### 6\. 其他更新
此外Linux 5.6 中除了上述主要的新增功能或支持外,下一个内核版本也将进行其他一些改进:
* 改进 AMD Zen 的温度/功率报告
* 修复华硕飞行堡垒系列笔记本中 AMD CPUs 过热
* 开源支持 NVIDIA RTX 2000 图灵系列显卡
* 内建 FSCRYPT 加密
[Phoronix][6] 跟踪了 Linux 5.6 带来的许多技术性更改。因此,如果你好奇 Linux 5.6 所涉及的全部更改,则可以亲自了解一下。
现在你已经了解了 Linux 5.6 发布版带来的新功能,对此有什么看法呢?在下方评论中留下你的看法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-kernel-5-6/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/linux-kernel-5.6.jpg?ssl=1
[2]: https://openvpn.net/
[3]: https://www.wireguard.com/
[4]: https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-20.04-Adds-WireGuard
[5]: https://www.usb.org/sites/default/files/2019-09/USB-IF_USB4%20spec%20announcement_FINAL.pdf
[6]: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.6-Spectacular